\begin{document}
\title[Imaging by speckle intensity correlations]{Imaging through a scattering medium by speckle intensity correlations}
\author{Josselin Garnier$\hbox{}^{(1)}$ and Knut S\o lna$\hbox{}^{(2)}$}
\address{$\hbox{}^{(1)}$Centre de Math\'ematiques Appliqu\'ees, Ecole Polytechnique, 91128 Palaiseau Cedex, France}
\ead{[email protected]}
\address{$\hbox{}^{(2)}$Department of Mathematics, University of California Irvine, Irvine CA 92617}
\ead{[email protected]}
\begin{abstract}
In this paper we analyze an imaging technique based on speckle intensity correlations over incident field position, proposed in [J. A. Newman and K. J. Webb, Phys. Rev. Lett. 113, 263903 (2014)]. Its purpose is to reconstruct a field incident on a strongly scattering random medium. The thickness of the complex medium is much larger than the scattering mean free path, so that the wave emerging from the random section forms an incoherent speckle pattern. Our analysis clarifies the conditions under which the method can give a good reconstruction and characterizes its performance. The analysis is carried out in the white-noise paraxial regime, which is relevant for the applications in optics that motivated the original paper.
\end{abstract}
\noindent{\it Keywords}: Waves in random media, speckle imaging, multiscale analysis.
\maketitle
\section{Introduction}
Imaging and communication through a randomly scattering medium are challenging because the coherent incident waves are transformed into incoherent wave fluctuations. This degrades wireless communication \cite{alamouti,book1}, medical imaging \cite{huang}, and astronomical imaging \cite{tokovinin}. When scattering is weak, several methods have been proposed that consist in extracting the small coherent wave from the recorded field \cite{aubry09,aubry11,borcea11,borcea05,sha14}. These methods fail when scattering becomes strong and the coherent field completely vanishes.
However, recent developments have shown that it is possible to achieve wave focusing through a strongly scattering medium by controlling the incident wavefront \cite{vellekoop10,vellekoop07,vellekoop08}. These results have opened the way to new methods for wave imaging through a strongly scattering medium \cite{katz12,mosk12,popoff10}. In \cite{webb14} an original imaging method is presented that makes it possible to reconstruct fields incident on a randomly scattering medium from intensity-only measurements. From the experimental point of view, the speckle intensity images are recorded as a function of incident field position and then used to compute the speckle intensity correlation over incident position. From the theoretical point of view, the speckle intensity correlation function is then expressed, using a moment theorem, as the magnitude squared of the incident field autocorrelation function. The modulus of the spatial Fourier transform of the incident field can then be extracted, and the incident field itself can be reconstructed using a phase retrieval algorithm. The key argument is the moment theorem, which is based on a zero-mean circular Gaussian assumption for the transmitted field. In \cite{webb14} the authors claim that heavy clutter is necessary and sufficient for this assumption to hold. One of the main applications is a new method to view binary stars from Earth (using the Earth's rotation and atmospheric scattering). Other biomedical applications have been proposed, and extensions of the technique to imaging hidden objects with speckle intensity correlations over object position have been developed~\cite{webb16}. In this paper we present a detailed analysis of the technique in the white-noise paraxial regime, which is the regime relevant for the applications \cite{strohbehn,tappert}. We clarify the conditions under which the imaging approach proposed in \cite{webb14} can be efficient.
In particular, we will see that the zero-mean circular Gaussian assumption is not strictly necessary; however, strongly scattering media may not always create the right conditions for the imaging approach to work well. We can distinguish two strongly scattering regimes, the scintillation regime (in which the correlation radius of the medium fluctuations is smaller than the field radius) and the spot-dancing regime (in which the correlation radius of the medium fluctuations is larger than the field radius), and these regimes give completely different results. We will explain that the method proposed in \cite{webb14} can give a correct image in the scintillation regime, but not in the spot-dancing regime. In particular, the spot-dancing regime may be relevant for Earth-based astronomy \cite{andrews}, which leaves little hope that the method can be used there, but the method could be efficient in other configurations that fall in the scintillation regime. The paper is organized as follows. In Section \ref{sec:intcor} we describe the experiment and introduce the empirical speckle intensity covariance. In Section \ref{sec:ito} we present the white-noise paraxial wave equation. We analyze the properties of the statistical speckle intensity covariance in the scintillation regime in Section \ref{sec:scin} and in the spot-dancing regime in Section \ref{sec:spot}. Section \ref{sec:con} summarizes the main findings.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=6.2cm]{./figure.eps}
\end{tabular}
\end{center}
\caption{The experimental imaging set-up. The source transmits a time-harmonic plane wave. The object to be imaged is a mask. For each position of the mask the intensity of the transmitted field can be recorded by the camera.
}
\label{fig:1}
\end{figure}
\section{The intensity covariance function}
\label{sec:intcor}
The spatial variable is denoted by $({\bf x},z)\in \mathbb{R}^d \times \mathbb{R}$. The source transmits a time-harmonic plane wave propagating in the $z$-direction with frequency $\omega$ and wavenumber $k_o=\omega/c_o$, with $c_o$ the background velocity. The object to be imaged is a mask (a double slit in the experiment \cite{webb14}) that can be shifted transversally by a shift vector denoted by ${\bf r}$, so that the field just after the mask is of the form
\begin{equation}
\label{eq:inc}
U_{\bf r} ({\bf x}) = U({\bf x}-{\bf r}) ,
\end{equation}
for some function $U$ (see Figure \ref{fig:1}). Note that we here assume that the statistically homogeneous scattering medium fills the space between the mask and the camera; see also Remark~\ref{remark:49}. The time-harmonic field in the plane of the camera is denoted by $E_{\bf r} ( {\bf x} )$. It results from the propagation of the incident field $U_{\bf r}$ through the scattering medium. The measured or empirical intensity covariance is
\begin{eqnarray}
\nonumber
C_{{\bf r},{\bf r}'} &=& \frac{1}{|A_o|} \int_{A_o} |E_{\bf r} ( {\bf x}_0 )|^2 |E_{{\bf r}'}( {\bf x}_0 )|^2 {\rm d} {\bf x}_0 \\
&& - \Big( \frac{1}{|A_o|} \int_{A_o} |E_{\bf r} ( {\bf x}_0 )|^2 {\rm d} {\bf x}_0 \Big) \Big( \frac{1}{|A_o|} \int_{A_o} |E_{{\bf r}'} ( {\bf x}_0 )|^2 {\rm d} {\bf x}_0 \Big) ,
\label{def:intcor}
\end{eqnarray}
where $A_o$ is the spatial support of the camera. The conjecture put forward in \cite{webb14} is the following.
\begin{conjecture}
\begin{eqnarray}
\label{eq:pred}
C_{{\bf r},{\bf r}'} \approx \Big| \int_{\mathbb{R}^d} |\hat{U}({\bf k})|^2 \exp \big( i{\bf k} \cdot ( {\bf r}'-{\bf r}) \big) {\rm d}{\bf k} \Big|^2 ,
\end{eqnarray}
up to a multiplicative constant, where
\begin{equation}
\hat{U}({\bf k}) = \int_{\mathbb{R}^d} U({\bf x}) \exp \big( -i {\bf k} \cdot {\bf x} \big) {\rm d} {\bf x} .
\end{equation}
\end{conjecture}
When this formula holds, it is possible to reconstruct the incident field $U$ by a phase retrieval algorithm, as shown in \cite{webb14}. Indeed (\ref{eq:pred}) gives the modulus of the inverse Fourier transform of $|\hat{U}({\bf k})|^2$, and we know the phase of $|\hat{U}({\bf k})|^2$, which is zero, so that a Gerchberg-Saxton-type iterative algorithm can be applied to reconstruct $|\hat{U}({\bf k})|^2$ \cite{fienup,fienup87}. Using the estimated value of the modulus of the Fourier transform of $U({\bf x})$ and applying the same algorithm again (assuming that the phase of $U({\bf x})$ is known, for instance, equal to zero), it is possible to recover the incident field $U({\bf x})$. The main question we want to address is under which circumstances and to what extent the formula (\ref{eq:pred}) holds true. In the expression (\ref{def:intcor}) it is assumed that the pixel size of the camera is so small that the camera can be considered to measure the spatially resolved intensity pattern. It is of interest to address the role of the pixel size and to assume that the measured intensity is rather
\begin{equation}
I_{\bf r}^{\rho_o}({\bf x}_0) = \frac{1}{(2\pi)^{d/2} \rho_o^d} \int_{\mathbb{R}^d} |E_{\bf r} ( {\bf x}_0 +{\bf y}_0 )|^2 \exp\Big(- \frac{|{\bf y}_0|^2}{2\rho_o^2}\Big) {\rm d} {\bf y}_0 ,
\end{equation}
where $\rho_o$ is the pixel size of the camera.
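The phase retrieval step invoked above is not detailed in the paper; the following is a minimal numerical sketch (in Python, added for illustration) of a Gerchberg-Saxton/error-reduction iteration of the type cited in \cite{fienup,fienup87}: it recovers a real, nonnegative object from the modulus of its Fourier transform, assuming a known support constraint. The grid size, slit geometry, and iteration count are hypothetical.

```python
import numpy as np

def error_reduction(fourier_mag, support, n_iter=200, seed=0):
    """Gerchberg-Saxton / error-reduction iteration: recover a real,
    nonnegative object from the magnitude of its Fourier transform,
    given a known support mask (a strong simplifying assumption)."""
    rng = np.random.default_rng(seed)
    u = rng.random(fourier_mag.shape) * support
    for _ in range(n_iter):
        U = np.fft.fftn(u)
        # Fourier-domain constraint: keep the phase, impose the magnitude.
        U = fourier_mag * np.exp(1j * np.angle(U))
        u = np.fft.ifftn(U).real
        # Object-domain constraints: known support and nonnegativity.
        u = np.clip(u, 0.0, None) * support
    return u

# Toy example: a "double slit" mask, as in the experiment.
n = 64
x = np.arange(n)
truth = (((x - 20) ** 2 < 4) | ((x - 44) ** 2 < 4)).astype(float)
support = np.zeros(n)
support[10:54] = 1.0
rec = error_reduction(np.abs(np.fft.fft(truth)), support)
```

The support constraint reduces the translation ambiguity inherent to magnitude-only data; the iteration may still converge to the mirror image, which is the usual residual ambiguity of this class of algorithms.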
Then the measured or empirical intensity covariance is
\begin{eqnarray}
\label{def:Crrrho0}
\hspace*{-2.3cm}
C_{{\bf r},{\bf r}'}^{\rho_o} = \frac{1}{|A_o|} \int_{A_o} I_{\bf r}^{\rho_o}({\bf x}_0) I_{{\bf r}'}^{\rho_o}({\bf x}_0){\rm d} {\bf x}_0 - \Big( \frac{1}{|A_o|} \int_{A_o} I_{\bf r}^{\rho_o}({\bf x}_0) {\rm d} {\bf x}_0 \Big) \Big( \frac{1}{|A_o|} \int_{A_o} I_{{\bf r}'}^{\rho_o}({\bf x}_0) {\rm d} {\bf x}_0 \Big) .
\end{eqnarray}
Note that in order to characterize $C_{{\bf r},{\bf r}'}^{\rho_o}$ we need to be able to evaluate fourth-order moments of the field $E_{\bf r} ( {\bf x} )$. We describe in the next section the It\^o-Schr\"odinger model that makes it possible to compute such fourth-order moments, and in particular how we can use it to characterize the intensity covariance. In Sections \ref{sec:scin} and \ref{sec:spot} we delineate two important sub-regimes of the It\^o-Schr\"odinger model, corresponding respectively to a large or a small radius of the mask (relative to the correlation radius of the medium), and show how the measured intensity covariance function can be characterized in these cases based on our general theory for the fourth moment.
\section{The white-noise paraxial model}
\label{sec:ito}
The model for the time-harmonic field in the plane of the camera is
\begin{equation}
\label{eq:model1}
E_{\bf r} ( {\bf x} ) = \int_{\mathbb{R}^d} \hat{g}\big(({\bf x},{\ell}), ({\bf x}',0) \big) U_{\bf r}({\bf x}') {\rm d} {\bf x}' ,
\end{equation}
where $U_{\bf r}$ is the incident field (\ref{eq:inc}) in the plane $z=0$, ${\ell}$ is the propagation distance to the camera located in the plane $z=\ell$, and $\hat{g}$ is the fundamental solution of the white-noise paraxial wave equation, which we describe in the next subsections.
There should be an additional factor $\exp(i k_o {\ell})$ in (\ref{eq:model1}), but it does not play any role since we only record intensities.
\subsection{The random paraxial wave equation}
We consider the time-harmonic form of the scalar wave equation with a source of the form $2ik_o f({\bf x})\delta(z)$ localized in the plane $z=0$ (which corresponds to an initial condition for the field of the form $f({\bf x})$ in the plane $z=0$, as we will see below):
\begin{equation}
\label{eq:wave0}
(\partial_z^2+\Delta) E+ k_o^2 \big(1 + \mu({\bf x},z)\big) E = 2ik_o \delta(z) f({\bf x}) ,
\end{equation}
where $\Delta$ is the transverse Laplacian (i.e., the Laplacian in ${\bf x}$) and $f$ is a source in the plane $z=0$. Here $\mu$ is a zero-mean, stationary, $(d+1)$-dimensional random process with mixing properties in the $z$-direction (this means that we assume that the medium is statistically homogeneous from the plane $z=0$ to the plane $z=\ell$). The function $\hat\phi$ (the slowly varying envelope of a plane wave propagating along the $z$-axis) defined by
\begin{equation}
E ( {\bf x},z) = e^{i k_o z } \hat\phi \big( {\bf x} ,z \big)
\end{equation}
satisfies
\begin{equation}
\label{eq:bitos}
\partial_{z}^2 \hat\phi+ \left( 2 i k_o \partial_z \hat\phi + \Delta \hat\phi + k_o^2 \mu\big( {\bf x} , z \big) \hat\phi \right)= 2i k_o \delta(z) f({\bf x}) .
\end{equation}
\begin{definition}
\label{def:par}
In the white-noise paraxial regime, the wavelength is much smaller than the initial field radius and the correlation radius of the medium, which are themselves much smaller than the propagation distance, in such a way that the product of the wavelength and the propagation distance is of the same order as the squared radii.
\end{definition}
In the white-noise paraxial regime, the forward-scattering approximation in the $z$-direction is valid (i.e., the second derivative in $z$ in (\ref{eq:bitos}) can be neglected) and the white-noise approximation is valid (i.e., $\mu$ can be replaced by a white noise in $z$), so that $\hat\phi$ satisfies the It\^o-Schr\"odinger equation \cite{garniers1}
\begin{equation}
\label{eq:IS}
2 i k_o {\rm d}_z \hat{\phi} ( {\bf x},z) +\Delta \hat{\phi} ( {\bf x},z) {\rm d} z + k_o^2 \hat{\phi}( {\bf x},z)\circ {\rm d} B({\bf x},z) =0 ,
\end{equation}
starting from $\hat{\phi} ( {\bf x},0) = f({\bf x})$, where $B({\bf x},z)$ is a Brownian field, that is, a Gaussian process with mean zero and covariance function
\begin{equation}
\label{def:covgaus}
\mathbb{E}\big[ B({\bf x},z)B({\bf x}',z')\big] = \gamma_0({\bf x}-{\bf x}') \big( z \wedge z' \big),
\end{equation}
with
\begin{equation}
\label{def:gamma0}
\gamma_0({\bf x})= \int_{-\infty}^\infty \mathbb{E}[ \mu({\bf 0},0)\mu({\bf x},z) ] {\rm d} z .
\end{equation}
Here $\circ$ stands for the Stratonovich stochastic integral. The rigorous statement has the form of a convergence theorem for Hilbert-space valued processes \cite{garniers1}.
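To make the It\^o-Schr\"odinger model (\ref{eq:IS}) concrete, here is a minimal split-step Fourier sketch (in Python, added for illustration, with $d=1$): each step applies the exact free-space diffraction factor in the Fourier domain and then a random phase screen $\exp(i k_o\, \delta B/2)$, where the increment $\delta B$ is a Gaussian-correlated field with variance proportional to the step size. The correlation length, noise strength, and grid parameters are hypothetical, and this phase-screen construction is only one simple way to approximate the Brownian field $B$.

```python
import numpy as np

def split_step(phi0, k0, L, nz, dx, corr_len=1.0, sigma=0.1, seed=1):
    """Split-step Fourier solver for the Ito-Schrodinger equation
    2 i k0 d_z phi + Lap phi dz + k0^2 phi o dB = 0   (d = 1).
    The Brownian increments dB are modeled by Gaussian-correlated
    phase screens; corr_len and sigma are illustrative parameters."""
    rng = np.random.default_rng(seed)
    n = phi0.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    dz = L / nz
    # Exact free-space (paraxial) propagator over one step.
    diffraction = np.exp(-1j * xi**2 * dz / (2 * k0))
    # Fourier filter producing transverse correlation ~ corr_len.
    kernel = np.exp(-xi**2 * corr_len**2 / 4)
    phi = phi0.astype(complex)
    for _ in range(nz):
        phi = np.fft.ifft(diffraction * np.fft.fft(phi))
        white = rng.standard_normal(n)
        dB = sigma * np.sqrt(dz) * np.fft.ifft(kernel * np.fft.fft(white)).real
        phi *= np.exp(1j * k0 * dB / 2)  # random phase screen
    return phi

x = np.linspace(-20, 20, 256, endpoint=False)
phi0 = np.exp(-x**2)
phiL = split_step(phi0, k0=1.0, L=10.0, nz=100, dx=x[1] - x[0])
```

Since both factors have unit modulus, each step is unitary and the total power $\int |\hat\phi|^2 \,{\rm d}x$ is conserved, consistent with energy conservation for (\ref{eq:IS}).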
\subsection{The fundamental solution}
The fundamental solution $\hat{g}$ is defined as the solution of the It\^o-Schr\"odinger equation in $({\bf x},z)$:
\begin{equation}
\label{def:greens}
2i k_o {\rm d}_z \hat{g} + \Delta \hat{g} {\rm d} z+ k_o^2 \hat{g} \circ {\rm d} B({\bf x},z)= 0,
\end{equation}
starting from $\hat{g}\big( ({\bf x},z=z'),({\bf x}',z' ) \big) = \delta({\bf x}-{\bf x}')$. In a homogeneous medium ($B \equiv 0$) the fundamental solution is (for $z> z'$)
\begin{equation}
\label{eq:green0}
\hat{g}_0 \big( ({\bf x},z), ({\bf x}',z') \big) = \Big( \frac{ k_o}{2 i \pi (z-z')} \Big)^{d/2} \exp \Big( i \frac{k_o |{\bf x}-{\bf x}'|^2}{2 (z-z')} \Big) .
\end{equation}
In a random medium, the first two moments of the random fundamental solution have the following expressions.
\begin{prop}
\label{prop:parax2}
The first-order moment of the random fundamental solution exhibits damping (for $z > z'$):
\begin{eqnarray}
\mathbb{E} \big[ \hat{g}\big( ({\bf x},z),({\bf x}',z') \big) \big] &=& \hat{g}_0\big( ({\bf x},z),({\bf x}',z') \big) \exp \Big( -\frac{\gamma_0({\bf 0}) k_o^2 (z-z')}{8} \Big) ,
\label{eq:mom1parax1}
\end{eqnarray}
where $\gamma_0$ is given by (\ref{def:gamma0}). The second-order moment of the random fundamental solution exhibits spatial decorrelation:
\begin{eqnarray}
\nonumber
&& \hspace*{-2cm} \mathbb{E} \big[ \hat{g}\big( ({\bf x}_1,z),({\bf x}',z') \big) \overline{\hat{g}\big( ({\bf x}_2,z),({\bf x}',z') \big)} \big] = \hat{g}_0\big( ({\bf x}_1,z),({\bf x}',z') \big) \overline{\hat{g}_0\big( ({\bf x}_2,z),({\bf x}',z') \big)} \\
&& \times \exp \Big( - \frac{ \gamma_2({\bf x}_1-{\bf x}_2) k_o^2 (z-z')}{4} \Big) ,
\label{eq:mom2parax1}
\end{eqnarray}
where
\begin{equation}
\gamma_2({\bf x})= \int_0^1 \big( \gamma_0({\bf 0}) -\gamma_0({\bf x} s) \big) {\rm d} s .
\end{equation}
\end{prop}
These are classical results (see \cite[Chapter 20]{ishimaru} and \cite{garniers2}), once the paraxial and white-noise approximations have been proved to be correct, as is the case here. The result on the first-order moment shows that any coherent wave imaging method based on the mean field cannot give good images if the propagation distance is larger than the scattering mean free path
\begin{equation}
\label{def:lsca:parax}
\ell_{\rm sca} = \frac{8 }{\gamma_0({\bf 0}) k_o^2},
\end{equation}
because the coherent wave components are then exponentially damped. This is the situation we have in mind in this paper. However, here the key quantity of interest is the intensity covariance function, which means that we need to understand the behavior of the fourth-order moment of the field. We explain this next.
\subsection{The statistical intensity covariance function}
In our paper the quantities of interest are the mean intensity
\begin{equation}
{\cal I}_{\bf r}({\bf x}_0) = \mathbb{E} \big[ |E_{{\bf r}}({\bf x}_0)|^2\big]
\end{equation}
and the statistical intensity covariance function
\begin{equation}
{\cal C}_{{\bf r},{\bf r}'} ({\bf x}_0,{\bf x}_0') = \mathbb{E} \big[ |E_{{\bf r}} ({\bf x}_0)|^2 |E_{{\bf r}'}({\bf x}_0')|^2 \big] - \mathbb{E} \big[ |E_{{\bf r}}({\bf x}_0)|^2\big]\mathbb{E}\big[ |E_{{\bf r}'}({\bf x}_0')|^2 \big] .
\end{equation}
We remark that the statistical intensity covariance function is general in that it involves two, in general different, observation points ${\bf x}_0$ and ${\bf x}_0'$, while in the kernel in \eqref{def:intcor} the quadratic intensity term is evaluated at a common observation point ${\bf x}_0$.
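As a back-of-the-envelope illustration of the scattering mean free path (\ref{def:lsca:parax}) and of the exponential damping of the coherent field in (\ref{eq:mom1parax1}), consider the following short computation (in Python, added for illustration); the numerical values of the wavelength and of $\gamma_0({\bf 0})$ are purely hypothetical.

```python
import numpy as np

# Hypothetical values: optical wavelength 0.5 um, integrated covariance in um.
lam = 0.5             # wavelength [um]
k0 = 2 * np.pi / lam  # wavenumber k_o = 2 pi / lambda [1/um]
gamma0_0 = 1e-4       # gamma_0(0), integrated covariance of mu [um]

# Scattering mean free path, Eq. (def:lsca:parax): l_sca = 8 / (gamma_0(0) k_o^2).
l_sca = 8.0 / (gamma0_0 * k0**2)

# Coherent-field damping factor over z = 5 l_sca, from Eq. (eq:mom1parax1).
z = 5 * l_sca
damping = np.exp(-z / l_sca)
```

After five scattering mean free paths the coherent field amplitude is reduced by $e^{-5}\approx 7\times 10^{-3}$, which is why the information must be carried by intensity correlations rather than by the mean field.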
We will discuss below, in Section \ref{sec:Crho}, the measured intensity covariance function introduced in \eqref{def:Crrrho0} and how it relates to the mean intensity and the statistical intensity covariance function that we discuss here.
\begin{prop}
The second moment of the intensity can be expressed as
\begin{eqnarray}
\nonumber
\mathbb{E} \big[ |E_{{\bf 0}}({\bf x}_0)|^2 |E_{{\bf r}}({\bf x}_0')|^2 \big] &=&\frac{1}{(2\pi)^{4d}} \int\int_{\mathbb{R}^{4d}} e^{i {\boldsymbol \zeta}_1 \cdot({\bf x}_0+{\bf x}_0')+ i {\boldsymbol \zeta}_2 \cdot({\bf x}_0-{\bf x}_0')}\\
&\times& \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1,{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2,{\ell}) {\rm d} {\boldsymbol \xi}_1{\rm d} {\boldsymbol \xi}_2 {\rm d} {\boldsymbol \zeta}_1 {\rm d} {\boldsymbol \zeta}_2 ,
\label{eq:2M}
\end{eqnarray}
where $\hat{\mu}_{\bf r}$ satisfies
\begin{eqnarray}
\nonumber
&& \frac{\partial \hat{\mu}_{\bf r}}{\partial z} + \frac{i}{k_o} \big( {\boldsymbol \xi}_1\cdot {\boldsymbol \zeta}_1+ {\boldsymbol \xi}_2\cdot {\boldsymbol \zeta}_2\big) \hat{\mu}_{\bf r} = \frac{k_o^2}{4 (2\pi)^d} \int_{\mathbb{R}^d} \hat{\gamma}_0({\bf k}) \Big[ \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1-{\bf k}, {\boldsymbol \xi}_2-{\bf k}, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2) \\
\nonumber
&& \quad + \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1-{\bf k},{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2-{\bf k}) + \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1+{\bf k}, {\boldsymbol \xi}_2-{\bf k}, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2) \\
\nonumber
&& \quad + \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1+{\bf k},{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2-{\bf k}) - 2 \hat{\mu}_{\bf r}({\boldsymbol \xi}_1,{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2) \\
&& \quad - \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1,{\boldsymbol \xi}_2-{\bf k}, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2-{\bf k}) - \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1,{\boldsymbol \xi}_2+{\bf k}, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2-{\bf k}) \Big] {\rm d} {\bf k} ,
\label{eq:fouriermom0}
\end{eqnarray}
starting from
\begin{eqnarray}
\nonumber
&& \hat{\mu}_{\bf r} ( {\boldsymbol \xi}_1,{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1, {\boldsymbol \zeta}_2,z=0) = \hat{U}\Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2}\Big) \overline{\hat{U}}\Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1-{\boldsymbol \zeta}_2}{2}\Big) \\
&& \hspace*{-0.15in} \times \hat{U}\Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1-{\boldsymbol \zeta}_2}{2}\Big) \overline{\hat{U}}\Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2}\Big) \exp \big( i {\bf r} \cdot ({\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1)\big).
\end{eqnarray}
\end{prop}
No closed-form expression for the fourth moment of the field or for the second moment of the intensity is available, but it is possible to obtain explicit expressions in two asymptotic regimes, the scintillation regime and the spot-dancing regime, which correspond to the cases where the correlation radius of the medium is smaller (resp. larger) than the incident field radius; we discuss these in the next two sections. As we show in Section \ref{sec:Crho}, the speckle imaging scheme considered here works well in the scintillation regime but, as follows from the discussion in Section \ref{sec:spot}, not in the spot-dancing regime.
\noindent {\it Proof.}
Note first that, by statistical transverse stationarity of the random medium, we have
\begin{equation}
{\cal C}_{{\bf r},{\bf r}'}({\bf x}_0,{\bf x}_0') = {\cal C}_{{\bf 0},{\bf r}'-{\bf r}} \big({\bf x}_0-{\bf r},{\bf x}_0'-{\bf r} \big) .
\end{equation}
It is therefore sufficient to study ${\cal C}_{{\bf 0},{\bf r}} \big({\bf x}_0 ,{\bf x}_0' \big)$.
We can write
\begin{eqnarray}
\mathbb{E} \big[ |E_{{\bf 0}}({\bf x}_0)|^2 |E_{{\bf r}}({\bf x}_0')|^2 \big] = {\cal M}_{{\bf r}}( {\bf x}_0,{\bf x}_0',{\bf x}_0,{\bf x}_0',{\ell}) ,
\end{eqnarray}
where we find, using \eqref{eq:IS} and the It\^o theory for Hilbert-space valued random processes \cite{kunita}, that the fourth-order moment ${\cal M}_{{\bf r}}( {\bf x}_1,{\bf x}_2,{\bf y}_1,{\bf y}_2,z)$ is the solution of
\begin{eqnarray}
\label{eq:M}
&&\frac{\partial {\cal M}_{\bf r}}{\partial z} = \frac{i}{2k_o} \Big( \Delta_{{\bf x}_1}+\Delta_{{\bf x}_2} - \Delta_{{\bf y}_1} -\Delta_{{\bf y}_2}\Big) {\cal M}_{\bf r}+ \frac{k_o^2}{4} {\cal U} \big( {\bf x}_1,{\bf x}_2, {\bf y}_1,{\bf y}_2\big) {\cal M}_{\bf r} , \\
&& {\cal M}_{{\bf r}}( {\bf x}_1,{\bf x}_2,{\bf y}_1,{\bf y}_2,z=0) = U ({\bf x}_1) \overline{U ({\bf y}_1)} U_{{\bf r}}({\bf x}_2) \overline{U_{{\bf r}}({\bf y}_2)},
\end{eqnarray}
with the generalized potential
\begin{eqnarray}
&& \hspace*{-.9cm} {\cal U}\big( {\bf x}_1,{\bf x}_2, {\bf y}_1,{\bf y}_2 \big) = \sum_{j,l=1}^2 \gamma_0({\bf x}_j-{\bf y}_l) - \gamma_0( {\bf x}_1-{\bf x}_2) - \gamma_0( {\bf y}_1-{\bf y}_2) - 2\gamma_0({\bf 0}) ,
\end{eqnarray}
and where $U$ is the shape of the mask as in Eq. (\ref{eq:inc}). We parameterize the four points ${\bf x}_1,{\bf x}_2,{\bf y}_1,{\bf y}_2$ in the following special way:
\begin{eqnarray}
\label{eq:reliexr1}
{\bf x}_1 = \frac{{\bf r}_1+{\bf r}_2+{\bf q}_1+{\bf q}_2}{2}, \quad \quad {\bf y}_1 = \frac{{\bf r}_1+{\bf r}_2-{\bf q}_1-{\bf q}_2}{2}, \\
{\bf x}_2 = \frac{{\bf r}_1-{\bf r}_2+{\bf q}_1-{\bf q}_2}{2}, \quad \quad {\bf y}_2 = \frac{{\bf r}_1-{\bf r}_2-{\bf q}_1+{\bf q}_2}{2}.
\label{eq:reliexr2}
\end{eqnarray}
We denote by $\mu_{\bf r}$ the fourth-order moment in these new variables:
\begin{equation}
\mu_{\bf r} ({\bf q}_1,{\bf q}_2,{\bf r}_1,{\bf r}_2,z) := {\cal M}_{\bf r} ( {\bf x}_1 , {\bf x}_2 , {\bf y}_1 , {\bf y}_2 ,z ) ,
\end{equation}
with ${\bf x}_1,{\bf x}_2,{\bf y}_1,{\bf y}_2$ given by (\ref{eq:reliexr1}-\ref{eq:reliexr2}) in terms of ${\bf q}_1,{\bf q}_2,{\bf r}_1,{\bf r}_2$. The Fourier transform (in ${\bf q}_1$, ${\bf q}_2$, ${\bf r}_1$, and ${\bf r}_2$) of the fourth-order moment is defined by:
\begin{eqnarray}
\nonumber
\hat{\mu}_{\bf r}({\boldsymbol \xi}_1,{\boldsymbol \xi}_2,{\boldsymbol \zeta}_1,{\boldsymbol \zeta}_2,z) &=& \int\int_{\mathbb{R}^{4d}} {\mu}_{\bf r}({\bf q}_1,{\bf q}_2,{\bf r}_1,{\bf r}_2,z) \\
&& \hspace*{-1.3in} \times \exp \big(- i{\bf q}_1 \cdot {\boldsymbol \xi}_1- i{\bf q}_2 \cdot {\boldsymbol \xi}_2- i{\bf r}_1\cdot {\boldsymbol \zeta}_1- i{\bf r}_2\cdot {\boldsymbol \zeta}_2\big) {\rm d} {\bf q}_1{\rm d} {\bf q}_2 {\rm d} {\bf r}_1 {\rm d} {\bf r}_2
\label{eq:fourier} .
\end{eqnarray}
We then arrive at Eq. (\ref{eq:2M}) using Eq. (\ref{eq:M}) and the Fourier transform. {\small $\Box$} \\
\section{The scintillation regime}
\label{sec:scin}
The scintillation regime is a physically important regime corresponding to relative intensity fluctuations of order one. The scintillation regime is valid if the white-noise paraxial regime (Definition \ref{def:par}) is valid and, additionally, the correlation radius of the medium fluctuations (which determines the transverse correlation radius of the Brownian field in the It\^o-Schr\"odinger equation) is smaller than the incident field radius.
The standard deviation of the Brownian field then needs to be relatively small and the propagation distance needs to be relatively large to observe an effect of order one. More precisely, we define the scintillation regime as follows.
\begin{definition}
\label{def:scint}
Consider the paraxial regime of Definition \ref{def:par}, so that the evolution of the field amplitude is governed by the It\^o-Schr\"odinger equation (\ref{eq:IS}). In the scintillation regime,
\begin{enumerate}
\item the covariance function $\gamma_0^\varepsilon$ has an amplitude of order $\varepsilon$:
\begin{equation}
\label{sca:sci}
\gamma_0^\varepsilon({\bf x})= \varepsilon \gamma_0 ({\bf x}) ,
\end{equation}
\item the radius of the incident field and the vector shift are of order $1/\varepsilon$:
\begin{equation}
\label{def:feps}
U^\varepsilon_{\bf r}({\bf x}) = U\big( \varepsilon( {\bf x} -{\bf r})\big) ,
\end{equation}
\item the propagation distance is of order $1/\varepsilon$:
\begin{equation}
\label{def:Leps}
\ell^\varepsilon =\frac{L}{\varepsilon} ,
\end{equation}
\end{enumerate}
for a small dimensionless parameter $\varepsilon$.
\end{definition}
Note that this problem was analyzed in \cite{garniers4} in the case where ${\bf r}={\bf 0}$ and $U$ has a Gaussian profile. The following Proposition \ref{prop:sci1} is an extension of this original result.
\subsection{The fourth-order moment of the transmitted field}
Let us define the rescaled function
\begin{equation}
\label{eq:renormhatM2}
\tilde{\mu}_{\bf r}^\varepsilon ({\boldsymbol \xi}_1,{\boldsymbol \xi}_2,{\boldsymbol \zeta}_1,{\boldsymbol \zeta}_2,z) := \hat{\mu}_{\bf r} \Big({\boldsymbol \xi}_1,{\boldsymbol \xi}_2,{\boldsymbol \zeta}_1,{\boldsymbol \zeta}_2 , \frac{z}{\varepsilon} \Big) \exp \Big( \frac{i z}{k_o \varepsilon} ({\boldsymbol \xi}_2 \cdot {\boldsymbol \zeta}_2 + {\boldsymbol \xi}_1 \cdot {\boldsymbol \zeta}_1) \Big) .
\end{equation}
Our goal is to study the asymptotic behavior of $\tilde{\mu}_{\bf r}^\varepsilon$ as $\varepsilon \to 0$. We have the following result, which shows that $\tilde{\mu}_{\bf r}^\varepsilon$ exhibits a multi-scale behavior as $\varepsilon \to 0$, with some components evolving at the scale $\varepsilon$ and some components evolving at the order-one scale. The proof is similar to that of Proposition 1 in \cite{garniers4}. In \cite{garniers4} we used a Gaussian source profile, while here we need to extend the result to the case of a general incident field, so that the explicit Gaussian calculations for the source shape do not apply directly as before. However, the main steps of the proof remain unchanged and we obtain the following proposition.
\begin{prop}
\label{prop:sci1}
In the scintillation regime of Definition \ref{def:scint}, if $\gamma_0\in L^1(\mathbb{R}^d)$ and $\gamma_0({\bf 0})<\infty$, then the function $\tilde{\mu}^\varepsilon_{\bf r}({\boldsymbol \xi}_1,{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1,{\boldsymbol \zeta}_2,z )$ can be expanded as
\begin{eqnarray}
\nonumber
&& \tilde{\mu}_{\bf r}^\varepsilon({\boldsymbol \xi}_1,{\boldsymbol \xi}_2, {\boldsymbol \zeta}_1,{\boldsymbol \zeta}_2,z ) = \frac{K(z)}{\varepsilon^{4d}} \hat{U} \Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2 \varepsilon}\Big) \overline{\hat{U}} \Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1-{\boldsymbol \zeta}_2}{2 \varepsilon}\Big)\\
\nonumber
&& \hspace*{0.4in}\times \hat{U} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1-{\boldsymbol \zeta}_2}{2 \varepsilon}\Big) \overline{\hat{U}} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2 \varepsilon}\Big) \exp \Big( i {\bf r} \cdot \frac{{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) \\
\nonumber
&& \quad + \frac{K(z)}{\varepsilon^{3d}} \hat{V}_{\bf 0} \Big(\frac{{\boldsymbol \zeta}_2+{\boldsymbol \zeta}_1}{\varepsilon}\Big) \hat{U} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1-{\boldsymbol \zeta}_2}{2 \varepsilon}\Big) \overline{\hat{U}} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2 \varepsilon}\Big)\\
\nonumber
&& \hspace*{0.4in}\times \exp \Big( i {\bf r} \cdot \frac{{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) A\Big(\frac{{\boldsymbol \xi}_2+{\boldsymbol \xi}_1}{2} ,\frac{{\boldsymbol \zeta}_2 + {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) \\
\nonumber
&& \quad + \frac{K(z)}{\varepsilon^{3d}} \overline{\hat{V}_{\bf 0}} \Big(\frac{{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) \hat{U} \Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2 \varepsilon}\Big) \overline{\hat{U}} \Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1-{\boldsymbol \zeta}_2}{2 \varepsilon}\Big)\\
\nonumber
&& \hspace*{0.4in}\times \exp \Big( i {\bf r} \cdot \frac{{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) A \Big(\frac{{\boldsymbol \xi}_2-{\boldsymbol \xi}_1}{2} ,\frac{{\boldsymbol \zeta}_2- {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) \\
\nonumber
&& \quad + \frac{K(z)}{\varepsilon^{3d}} \hat{V}_{{\bf r}} \Big(\frac{{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1}{\varepsilon}\Big) \hat{U} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \zeta}_2+{\boldsymbol \zeta}_1-{\boldsymbol \xi}_2}{2 \varepsilon}\Big) \overline{\hat{U}} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1+{\boldsymbol \xi}_2}{2 \varepsilon}\Big)\\
\nonumber
&& \hspace*{0.4in}\times \exp\Big( i \frac{{\bf r}}{2} \cdot \frac{{\boldsymbol \zeta}_2-{\boldsymbol \xi}_1-2{\boldsymbol \zeta}_1}{\varepsilon}\Big) A\Big( \frac{{\boldsymbol \zeta}_2+{\boldsymbol \xi}_1}{2} ,\frac{{\boldsymbol \xi}_2+ {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) \\
\nonumber
&& \quad + \frac{K(z)}{\varepsilon^{3d}} \overline{\hat{V}_{{\bf r}}} \Big(\frac{{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) \hat{U} \Big( \frac{{\boldsymbol \xi}_1+{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2 \varepsilon}\Big) \overline{\hat{U}} \Big( \frac{{\boldsymbol \xi}_1-{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1+{\boldsymbol \zeta}_2}{2 \varepsilon}\Big)\\
\nonumber
&& \hspace*{0.4in}\times \exp\Big( i \frac{{\bf r}}{2} \cdot \frac{{\boldsymbol \zeta}_2+{\boldsymbol \xi}_1-2{\boldsymbol \zeta}_1}{\varepsilon}\Big) A\Big( \frac{{\boldsymbol \zeta}_2-{\boldsymbol \xi}_1}{2} ,\frac{{\boldsymbol \xi}_2- {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) \\
\nonumber
&& \quad +\frac{K(z)}{\varepsilon^{2d}} \hat{V}_{\bf 0} \Big( \frac{{\boldsymbol \zeta}_2+{\boldsymbol \zeta}_1}{\varepsilon}\Big) \overline{\hat{V}_{\bf 0} }\Big( \frac{{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) \\
\nonumber
&& \hspace*{0.4in}\times \exp\Big( i {\bf r} \cdot \frac{{\boldsymbol \zeta}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) A \Big( \frac{{\boldsymbol \xi}_2+{\boldsymbol \xi}_1}{2}, \frac{{\boldsymbol \zeta}_2+ {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) A \Big( \frac{{\boldsymbol \xi}_2-{\boldsymbol \xi}_1}{2}, \frac{{\boldsymbol \zeta}_2- {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) \\
\nonumber
&& \quad + \frac{K(z)}{\varepsilon^{2d}} \hat{V}_{{\bf r}} \Big( \frac{{\boldsymbol \xi}_2+{\boldsymbol \zeta}_1}{\varepsilon}\Big) \overline{ \hat{V}_{\bf r} } \Big( \frac{{\boldsymbol \xi}_2-{\boldsymbol \zeta}_1}{\varepsilon}\Big) \\
\nonumber
&& \hspace*{0.4in}\times \exp\Big(- i {\bf r} \cdot \frac{{\boldsymbol \zeta}_1}{\varepsilon}\Big) A \Big( \frac{{\boldsymbol \zeta}_2+{\boldsymbol \xi}_1}{2}, \frac{{\boldsymbol \xi}_2+ {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) A \Big( \frac{{\boldsymbol \zeta}_2-{\boldsymbol \xi}_1}{2}, \frac{{\boldsymbol \xi}_2- {\boldsymbol \zeta}_1}{\varepsilon} ,z \Big) \\
&& \quad + R^\varepsilon ( {\boldsymbol \xi}_1,{\boldsymbol \xi}_2 , {\boldsymbol \zeta}_1 ,{\boldsymbol \zeta}_2 ,z ) ,
\label{eq:propsci11}
\end{eqnarray}
where the functions $K$ and $A$ are defined by
\begin{eqnarray}
\label{def:K}
K(z) &:=& \exp\Big(- \frac{k_o^2}{2} \gamma_0({\bf 0}) z\Big) , \\
\nonumber
A({\boldsymbol \xi},{\boldsymbol \zeta},z) &:=& \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big[ \exp \Big( \frac{k_o^2}{4} \int_0^z \gamma_0 \big( {\bf x} + \frac{ {\boldsymbol \zeta}}{k_o} z' \big) {\rm d} z' \Big) -1\Big]\\
&& \times \exp \big( -i {\boldsymbol \xi}\cdot {\bf x} \big) {\rm d} {\bf x} ,
\label{def:A}
\end{eqnarray}
the function $\hat{V}_{\bf r}$ is
\begin{equation}
\hat{V}_{\bf r}({\boldsymbol \zeta}) = \int_{\mathbb{R}^d} \hat{U} \big({\bf k} + \frac{{\boldsymbol \zeta}}{2}\big) \overline{ \hat{U} } \big({\bf k} - \frac{{\boldsymbol \zeta}}{2}\big) \exp\big( i {\bf k} \cdot {\bf r} \big) {\rm d} {\bf k},
\end{equation}
and the function $R^\varepsilon$ satisfies
\begin{eqnarray*}
\sup_{z \in [0,{\ell}]} \| R^\varepsilon (\cdot,\cdot,\cdot,\cdot, z ) \|_{L^1(\mathbb{R}^d\times \mathbb{R}^d\times \mathbb{R}^d\times \mathbb{R}^d)} \stackrel{\varepsilon \to 0}{\longrightarrow} 0 .
\end{eqnarray*}
\end{prop}
It is shown in \cite{garniers4} that the function ${\boldsymbol \xi} \to A({\boldsymbol \xi},{\boldsymbol \zeta},z)$ belongs to $L^1(\mathbb{R}^d)$ and that its $L^1$-norm $\| A(\cdot,{\boldsymbol \zeta},z)\|_{L^1(\mathbb{R}^d)}$ is bounded uniformly in ${\boldsymbol \zeta} \in \mathbb{R}^d$ and $z\in [0,{\ell}]$. It follows that all terms in the expansion (except the remainder $R^\varepsilon$) have $L^1$-norms of order one as $\varepsilon \to 0$.
\subsection{The statistical intensity covariance function}
We here characterize the mean intensity and the statistical intensity covariance function when
\begin{equation}
\label{eq:paramx0}
{\bf x}_0= \frac{{\bf X}_0}{\varepsilon}+\frac{{\bf Y}_0}{2},\quad \quad {\bf x}_0'= \frac{{\bf X}_0}{\varepsilon}-\frac{{\bf Y}_0}{2} , \quad \quad {\bf r}= \frac{{\bf R}}{\varepsilon}, \quad \quad {\bf r}'= \frac{{\bf R}'}{\varepsilon}.
\end{equation}
Here coordinates in capital letters are of order one with respect to $\varepsilon$, so that ${\bf X}_0, {\bf R}, {\bf R}'$ are rescaled lateral coordinates and ${\bf Y}_0$ is the observation offset in the original coordinates.
{This means that we consider the intensity covariance function for mid-points located within the beam, whose radius is large (of order $\varepsilons^{-1}$), while we look at offsets that are small (of the order of the correlation length of the medium, that is, of order one). The motivation for this parameterization is that the intensity distribution decorrelates with the observation offset on this scale, while we will see that the intensity covariance function as a function of ${{\itbf r}}$ and ${{\itbf r}}'$ varies naturally at the scale $\varepsilons^{-1}$.} Recall that, by ( \textcolor{red}f{def:Leps}), $L$ is the propagation distance from the mask to the camera in rescaled longitudinal coordinates. We have the following result. {{\itbf e}}gin{prop}\label{prop:2} In the scintillation regime, we have in the limit $\varepsilons \to 0$ {{\itbf e}}gin{eqnarray} \nonumber {\cal I}_{{\itbf r}} ({{\itbf x}}_0) &=& \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big( \int_{\mathbb{R}^d} |U({{\itbf X}}-{{\itbf R}})|^2 \exp \big( - i{{\itbf z}}eta\cdot {{\itbf X}} \big) {\rm d} {{\itbf X}}\Big) \\ & \times & \exp \big( i{{\itbf z}}eta\cdot {{\itbf X}}_0 \big) \exp \Big( \frac{k_o^2}{4} \int_0^L \big[ \gamma_0 \big( \frac{ {{\itbf z}}eta}{k_o} z\big) - \gamma_0({\bf 0}) \big] {\rm d} z \Big) {\rm d} {{\itbf z}}eta , \label{eq:m} \end{eqnarray} and {{\itbf e}}gin{eqnarray}\nonumber && \hspace*{-2cm} {\cal C}_{{{\itbf r}},{{\itbf r}}'}({{\itbf x}}_0,{{\itbf x}}_0') = \Big| \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big( \int_{\mathbb{R}^d} U\big({{\itbf X}}+\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big)\overline{U}\big({{\itbf X}}-\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) \exp \big( - i{{\itbf z}}eta\cdot {{\itbf X}} \big){\rm d} {{\itbf X}}\Big) \\ \nonumber & & \times \exp \Big( i{{\itbf z}}eta\cdot \big({{\itbf X}}_0- \frac{{{\itbf R}}+{{\itbf R}}'}{2}\big)\Big) \exp \Big( \frac{k_o^2}{4} \int_0^L \big[ \gamma_0 \big( \frac{ {{\itbf z}}eta}{k_o} z-{{\itbf Y}}_0 \big) -\gamma_0({\bf 0}) \big] {\rm d} z \Big) {\rm d} {{\itbf z}}eta \Big|^2 \\ \nonumber & & \hbox{} -\Big| \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big( \int_{\mathbb{R}^d} U\big({{\itbf X}}+\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) \overline{U}\big({{\itbf X}}-\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) \exp \big( - i{{\itbf z}}eta\cdot {{\itbf X}} \big){\rm d} {{\itbf X}}\Big) \\ \label{eq:c} & & \times \exp \Big( i{{\itbf z}}eta\cdot \big({{\itbf X}}_0- \frac{{{\itbf R}}+{{\itbf R}}'}{2} \big)\Big) \exp\Big(- \frac{k_o^2}{4} \gamma_0({\bf 0}) L \Big) {\rm d} {{\itbf z}}eta \Big|^2 . \end{eqnarray} \end{prop} {\rm d}ebproof The result follows from Proposition \textcolor{red}f{prop:sci1}. {\small $\Box$} \\ In order to get explicit expressions for the quantity of interest, it is convenient to introduce the strongly scattering regime, defined as follows. Recall that the scattering mean free path $\ell_{\rm sca}$ is defined by ( \textcolor{red}f{def:lsca:parax}). {{\itbf e}}gin{definition} \label{def:sto} In the strongly scattering regime, we have $L/\ell_{\rm sca} \gg 1$ and {the fluctuations of the random medium are smooth so that} the function $\gamma_0$ can be expanded as {{\itbf e}}gin{equation} \label{eq:expandgamma0} \gamma_0({{\itbf x}}) = \gamma_0({\bf 0}) - \frac{1}{2} {{\itbf e}}gin{eqnarray}r{\gamma}_2 |{{\itbf x}}|^2 + o(|{{\itbf x}}|^2), \end{equation} for ${{\itbf x}}$ {smaller than the correlation length of the medium (i.e., the width of $\gamma_0$)}. \end{definition} This corresponds to large, but smooth, medium fluctuations. We can now identify simplified expressions for the mean intensity and the intensity covariance function.
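For a concrete instance of the expansion ( \textcolor{red}f{eq:expandgamma0}), one can take a Gaussian-shaped covariance (an illustrative assumption, not imposed by the paper), $\gamma_0({{\itbf x}}) = \gamma_0({\bf 0}) \exp(-|{{\itbf x}}|^2/\ell_c^2)$, for which the quadratic coefficient is ${{\itbf e}}gin{eqnarray}r{\gamma}_2 = 2\gamma_0({\bf 0})/\ell_c^2$ and the remainder is of order $|{{\itbf x}}|^4$. The sketch below checks this numerically:

```python
import numpy as np

# Check of gamma_0(x) = gamma_0(0) - (1/2)*bar_gamma2*|x|^2 + o(|x|^2)
# for an assumed Gaussian-shaped covariance gamma_0(x) = g0*exp(-|x|^2/lc^2).
# (The Gaussian shape and the values g0, lc are illustrative assumptions.)
g0, lc = 1.0, 1.0
gamma0 = lambda x: g0 * np.exp(-np.dot(x, x) / lc**2)
bar_gamma2 = 2.0 * g0 / lc**2           # second-order Taylor coefficient

x = np.array([0.05, 0.0])               # |x| small compared to lc
quadratic = g0 - 0.5 * bar_gamma2 * np.dot(x, x)
remainder = abs(gamma0(x) - quadratic)  # o(|x|^2), here of order |x|^4
```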
{{\itbf e}}gin{lemma}\label{lem:1} Assume the scintillation and strongly scattering regimes; then we have {{\itbf e}}gin{eqnarray} {\cal I}_{{\itbf r}} ({{\itbf x}}_0) = \frac{6^{d/2}}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^{d/2}} \int_{\mathbb{R}^d} |U({{\itbf X}})|^2 \exp \Big( - \frac{6 | {{\itbf X}}- {{\itbf X}}_0+{{\itbf R}} |^2}{{{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3} \Big) {\rm d} {{\itbf X}} \label{eq:Irx0} \end{eqnarray} and {{\itbf e}}gin{eqnarray} \nonumber &&{\cal C}_{{{\itbf r}},{{\itbf r}}'}({{\itbf x}}_0,{{\itbf x}}_0') = \frac{6^d}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^d} \Big| \int_{\mathbb{R}^d} U\big({{\itbf X}}+\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big)\overline{U}\big({{\itbf X}}-\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) \\ \nonumber && ~~ \times \exp \Big( - \frac{6 | {{\itbf X}}- {{\itbf X}}_0+ \frac{{{\itbf R}}+{{\itbf R}}'}{2} |^2}{{{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3} -i \frac{3 k_o}{2L} {{\itbf Y}}_0 \cdot \big( {{\itbf X}}- {{\itbf X}}_0 + \frac{{{\itbf R}}+{{\itbf R}}'}{2} \big) \Big) {\rm d} {{\itbf X}} \Big|^2 \\ && ~~ \times\exp\Big( -\frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 k_o^2 L }{16} |{{\itbf Y}}_0|^2 \Big). \label{eq:covtot0} \end{eqnarray} \end{lemma} {\rm d}ebproof {In the strongly scattering regime, Eq.~( \textcolor{red}f{eq:expandgamma0}) gives: $$ \exp\Big( - \big(1-\frac{\gamma_0({{\itbf x}})}{\gamma_0({\bf 0})}\big) \frac{L}{\ell_{\rm sca}} \Big) \simeq \exp \Big( - \frac{ {{\itbf e}}gin{eqnarray}r{\gamma}_2 L}{2\gamma_0({\bf 0})\ell_{\rm sca}} |{{\itbf x}}|^2\Big) . $$ This approximation holds for $|{{\itbf x}}|$ smaller than the correlation length; it also holds for $|{{\itbf x}}|$ of the order of or larger than the correlation length, in the sense that both sides are then exponentially small in $L/\ell_{\rm sca}$. 
} It then follows from Proposition \textcolor{red}f{prop:2} that the mean intensity is {{\itbf e}}gin{eqnarray} \nonumber {\cal I}_{{\itbf r}} ({{\itbf x}}_0) &=& \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big( \int_{\mathbb{R}^d} |U({{\itbf X}}-{{\itbf R}})|^2 \exp \big( - i{{\itbf z}}eta\cdot {{\itbf X}} \big) {\rm d} {{\itbf X}}\Big) \\ & & \times \exp \big( i{{\itbf z}}eta\cdot {{\itbf X}}_0 \big) \exp \Big( - \frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3}{24} |{{\itbf z}}eta|^2 \Big) {\rm d} {{\itbf z}}eta \end{eqnarray} and the intensity covariance function is {{\itbf e}}gin{eqnarray*} \nonumber && {\cal C}_{{{\itbf r}},{{\itbf r}}'}({{\itbf x}}_0,{{\itbf x}}_0') = \\ && \hspace{-1.7cm} \Big| \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big( \int_{\mathbb{R}^d} U\big({{\itbf X}}+\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big)\overline{U}\big({{\itbf X}}-\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) \exp \big( - i{{\itbf z}}eta\cdot {{\itbf X}} \big){\rm d} {{\itbf X}}\Big) \\ \nonumber && \hspace{-1.7cm} \times \exp \Big( i{{\itbf z}}eta\cdot \big({{\itbf X}}_0- \frac{{{\itbf R}}+{{\itbf R}}'}{2} \big)\Big) \exp \Big( -\frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3}{24} |{{\itbf z}}eta|^2 + \frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 k_oL^2}{8} {{\itbf z}}eta \cdot {{\itbf Y}}_0 -\frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 k_o^2 L }{8} |{{\itbf Y}}_0|^2 \Big) {\rm d} {{\itbf z}}eta \Big|^2 . \nonumber \end{eqnarray*} The lemma then follows after integrating in ${{\itbf z}}eta$. {\small $\Box$} \\ The beam radius enhancement due to scattering in a random medium with thickness $L$ is given by \cite[Eq.~(74)]{garniers4}: {{\itbf e}}gin{eqnarray}\label{eq:cA} \mathcal {A}_L := {{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2} /(\varepsilons \sqrt{6}) . \end{eqnarray} Let us assume a regime of large enhanced aperture defined as follows. 
{{\itbf e}}gin{definition}\label{def:enh} In the large enhanced aperture regime, the radius of the incident field $U$, the radius and center point of the camera, and the shifts $|{{\itbf r}}|,|{{\itbf r}}'|$ are small relative to the beam radius enhancement $\mathcal {A}_L$. \end{definition} {As we show below this is the configuration in which the intensity covariance function has a simple form and the profile of the incident field can be explicitly extracted, because one can extract a large range of values in ${{\itbf r}}-{{\itbf r}}'$ of the intensity covariance function}. We can also address the general situation, albeit with less explicit expressions, and we do so in Remark \textcolor{red}f{remark2}. It follows from Lemma \textcolor{red}f{lem:1} that in the large enhanced aperture regime we have the following result. {{\itbf e}}gin{lemma} In the scintillation, strongly scattering, and large enhanced aperture regime, the mean intensity is constant over the camera: {{\itbf e}}gin{equation} {\cal I}_{{\itbf r}} ({{\itbf x}}_0) = \frac{6^{d/2}}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^{d/2}} \int_{\mathbb{R}^d} |U({{\itbf X}})|^2 {\rm d} {{\itbf X}} , \label{eq:Irx0b} \end{equation} and the intensity covariance function is {{\itbf e}}gin{eqnarray} \nonumber {\cal C}_{{{\itbf r}},{{\itbf r}}'}({{\itbf x}}_0,{{\itbf x}}_0') &=& \frac{6^d}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^d} \Big| \int_{\mathbb{R}^d} U\big({{\itbf X}}+\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big)\overline{U}\big({{\itbf X}}-\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) {\rm d} {{\itbf X}} \Big|^2 \\ & & \times \exp \Big( -\frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 k_o^2 L }{16} |{{\itbf Y}}_0|^2 \Big). 
\label{eq:Crrp2} \end{eqnarray} \end{lemma} Thus, the intensity covariance function depends neither on the mid observation point ${{\itbf X}}_0$ nor on the shift mid-point $({{\itbf R}}+{{\itbf R}}')/2$, but it decays as a function of the shift offset ${{\itbf R}}'-{{\itbf R}}$ on a scale length of the order of the incident field radius, in a way that makes it possible to reconstruct the incident field. It was shown in \cite[Proposition 6.3]{garniers3} and also in \cite[Eq.~(75)]{garniers4} that {{\itbf e}}gin{equation} \label{def:rhoL} \rho_L := \frac{2}{ \sqrt{{{\itbf e}}gin{eqnarray}r{\gamma}_2 k_o^2 L}} \end{equation} is the typical correlation radius of the speckle pattern generated by a {plane wave} going through a random medium with thickness $L$. We can see from ( \textcolor{red}f{eq:Crrp2}) that the intensity covariance function decays as a whole with the observation offset ${{\itbf Y}}_0$ on a scale length equal to $\rho_L$; this is because the speckle pattern, that is, the intensity fluctuations, decorrelates on this scale. \subsection{Extraction of the incident field profile} \label{sec:Crho} The empirical intensity covariance function is given by ( \textcolor{red}f{def:Crrrho0}). If the radius $r_A=R_A/\varepsilons$ of the camera is large enough (more precisely, if condition ( \textcolor{red}f{cond:sa}) below holds true, {which means that the camera covers many speckle spots}), then the empirical intensity covariance function is self-averaging and equal to {{\itbf e}}gin{eqnarray} {\cal C}^{\rho_o}_{{{\itbf r}},{{\itbf r}}'} = \frac{1}{(4\pi)^{d/2}\rho_o^d} \int {\cal C}_{{{\itbf r}},{{\itbf r}}'}({{\itbf x}}_0,{{\itbf x}}_0') \exp \Big( - \frac{|{{\itbf Y}}_0|^2}{4 \rho_o^2} \Big) {\rm d} {{\itbf Y}}_0 . 
\label{eq:Crrpa} \end{eqnarray} {This is because $ \frac{1}{|A_o|} \int_{A_o} \cdots {\rm d} {{\itbf x}}_0$ in ( \textcolor{red}f{def:Crrrho0}) becomes equal to $\mathbb{E}[\cdots]$ by the law of large numbers.} Here we used the parameterization \eqref{eq:paramx0} and the expressions \eqref{eq:m} and \eqref{eq:c} which show in particular that the mean intensity varies on the slow scale $\varepsilons^{-1}$ relative to the characteristic speckle size, the scale of decorrelation of the intensities. We also remark that the condition ( \textcolor{red}f{cond:sa}) means that the camera has many pixels and also observes many speckle spots. The result \eqref{eq:Crrpa} is valid in the general scintillation case. We next present the main result of the paper. {{\itbf e}}gin{prop}\label{prop:M1} Assume the scintillation, strongly scattering, and large enhanced aperture regimes of Definitions \textcolor{red}f{def:scint}, \textcolor{red}f{def:sto}, and \textcolor{red}f{def:enh} respectively. Moreover, assume that the radius of the camera satisfies {{\itbf e}}gin{equation} \label{cond:sa} R_A \gg \sqrt{\rho_o^2 +\rho_L^2} . \end{equation} Then the intensity covariance function is self-averaging and given by {{\itbf e}}gin{eqnarray} \nonumber {\cal C}^{\rho_o}_{{{\itbf r}},{{\itbf r}}'} &=&{\cal Z}^{\rho_o} \Big| \int_{\mathbb{R}^d} U \big({{\itbf X}}+\frac{{{\itbf R}}-{{\itbf R}}'}{2} \big)\overline{U} \big({{\itbf X}}-\frac{{{\itbf R}}-{{\itbf R}}'}{2}\big) {\rm d}{{\itbf X}} \Big|^2 \\ &=&{\cal Z}^{\rho_o} \Big| \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} |\hat{U}({{\itbf z}}eta)|^2 \exp \big( i{{\itbf z}}eta \cdot \frac{{{\itbf R}}'-{{\itbf R}}}{2} \big) {\rm d}{{\itbf z}}eta \Big|^2 , \label{eq:Crf} \end{eqnarray} with {{\itbf e}}gin{equation} {\cal Z}^{\rho_o} = \frac{6^d}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^d} \frac{1}{\big( 1+ {\rho_o^2}/{\rho_L^2} \big)^{d/2}} . 
\end{equation} \end{prop} The multiplicative factor ${\cal Z}^{\rho_o}$ is the one associated with the mean square intensity in \eqref{eq:I2} below when the pixel size $\rho_o$ is smaller than the speckle size $\rho_L$. This is exactly the formula ( \textcolor{red}f{eq:pred}) predicted in \cite{webb14}. {\rm d}ebproof We have assumed:\\ 1) scattering is strong $L/\ell_{\rm sca} \gg 1$, leading to \eqref{eq:expandgamma0};\\ 2) the radius $R_A$ of the camera $A_o$ is smaller than ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$;\\ 3) the shifts $|{{\itbf R}}|,|{{\itbf R}}'|$ and camera center point magnitude are smaller than ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$.\\ Then the intensity covariance function has the form \eqref{eq:Crrp2}. Substituting ( \textcolor{red}f{eq:Crrp2}) into \eqref{eq:Crrpa} gives {{\itbf e}}gin{equation} \hspace*{-1.5cm} {\cal C}^{\rho_o}_{{{\itbf r}},{{\itbf r}}'} = \frac{6^d}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^d} \frac{1}{\big( 1+ {\rho_o^2}/{\rho_L^2} \big)^{d/2}} \Big| \int_{\mathbb{R}^d} U\big({{\itbf X}}+\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big)\overline{U}\big({{\itbf X}}-\frac{{{\itbf R}}'-{{\itbf R}}}{2}\big) {\rm d} {{\itbf X}} \Big|^2 . \label{eq:Crrp3} \end{equation} {The self-averaging can be considered as efficient because the amplitude of the main peak of the intensity covariance function is larger than the fluctuations of the background. Indeed the background is the square of the mean intensity (see ( \textcolor{red}f{eq:Irx0b})): } {{\itbf e}}gin{eqnarray}\label{eq:I2} {\cal I}^2 = \frac{6^{d}}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^{d}} \Big( \int_{\mathbb{R}^d} |U({{\itbf X}})|^2 {\rm d} {{\itbf X}} \Big)^2 , \end{eqnarray} and its fluctuations are of the order of ${\cal I}^2/ \sqrt{M}$ where $M$ is the number of speckle spots over which the averaging has been carried out, that is, $M=(R_A / \rho_L)^d$. 
Note that in the strongly scattering scintillation regime the field $E_{{\itbf r}}({{\itbf x}})$ will, from the point of view of the fourth moment, behave as a complex-valued circularly symmetric Gaussian random variable \cite{garniers4}, which means in particular that $\mathbb{E}[ | E_{{\itbf r}}({{\itbf x}}) |^2 ]^2 = {\rm Var} [ | E_{{\itbf r}}({{\itbf x}}) |^2 ]$. The amplitude of the main peak of the intensity covariance function is (by ( \textcolor{red}f{eq:Crrp3})) {{\itbf e}}gin{eqnarray}n {\cal C}_{{\bf 0},{\bf 0}}^{\rho_o} = \frac{1}{\big( 1+ {\rho_o^2}/{\rho_L^2} \big)^{d/2}} {\cal I}^2 . \end{eqnarray}n The main peak can be clearly estimated if $ {1}/( 1+ {\rho_o^2}/{\rho_L^2} )^{d/2} \gg {\rho_L^d}/{R_A^d} $, which leads to the condition ( \textcolor{red}f{cond:sa}). {\small $\Box$} \\ {{\itbf e}}gin{remark} The results presented in this section show that the intensity covariance function over incident field position makes it possible to reconstruct the incident field. One may ask whether the intensity covariance function over transmitted field position may also possess this property. To answer this question we inspect ${\cal C}_{{\bf 0},{\bf 0}}({{\itbf x}}_0,{{\itbf x}}_0')$. As shown by ( \textcolor{red}f{eq:Crrp2}), this intensity covariance over transmitted field position in the strongly scattering regime is {{\itbf e}}gin{equation} {\cal C}_{{\bf 0},{\bf 0}}({{\itbf x}}_0,{{\itbf x}}_0') = \frac{6^d}{(\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3)^d} \Big| \int_{\mathbb{R}^d} |U({{\itbf X}})|^2 {\rm d} {{\itbf X}} \Big|^2 \exp \Big( -\frac{ |{{\itbf Y}}_0|^2}{4\rho_L^2} \Big) , \end{equation} when ${{\itbf x}}_0$ and ${{\itbf x}}_0'$ are as in ( \textcolor{red}f{eq:paramx0}). There is, therefore, no way to reconstruct the incident field given this function only. 
\end{remark} {{\itbf e}}gin{remark} \label{remark2} To be complete, let us now address the case when the radius of the incident field is of the order of or even larger than the enhanced aperture ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$. Then we also consider a camera $A_o$ with a radius larger than ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$ and shifts $|{{\itbf R}}|,|{{\itbf R}}'|$ larger than ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$. Under these circumstances Eq.~( \textcolor{red}f{eq:Irx0}) shows that the mean intensity gives a blurred version of the incident field profile, in the form of a convolution of $|U|^2$ with a Gaussian kernel of width of order ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$. The intensity covariance function ( \textcolor{red}f{eq:covtot0}) depends on the mid point ${{\itbf X}}_0$. Let us first consider the situation when we integrate the actual intensity covariance function with respect to mid point ${{\itbf X}}_0$, which gives {{\itbf e}}gin{eqnarray} \nonumber &&\int {\cal C}_{{{\itbf r}},{{\itbf r}}'}({{\itbf x}}_0,{{\itbf x}}_0') {\rm d}{{\itbf X}}_0 \\ \nonumber &&= \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \Big| \int_{\mathbb{R}^d} U \big({{\itbf X}} +\frac{{{\itbf R}}'-{{\itbf R}}}{2} \big) \overline{U} \big({{\itbf X}} - \frac{{{\itbf R}}'-{{\itbf R}}}{2} \big) e^{-i {{\itbf z}}eta \cdot {{\itbf X}}} {\rm d}{{\itbf X}} \Big|^2 \\ && \quad \times \exp \Big(-\frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3}{12}\big| {{\itbf z}}eta - \frac{3k_o}{2L} {{\itbf Y}}_0\big|^2 \Big) {\rm d}{{\itbf z}}eta \exp \Big( - \frac{|{{\itbf Y}}_0|^2}{4\rho_L^2} \Big). \end{eqnarray} Thus, with an offset ${{\itbf Y}}_0$ in the observation points there is a damping of the information on the scale $\rho_L$ due to decorrelation of the speckle pattern. We also have a damping of the information at spatial scales of $U$ that are larger than ${{\itbf e}}gin{eqnarray}r{\gamma}_2^{1/2} L^{3/2}$. 
The empirical intensity covariance function is self-averaging and equal to ( \textcolor{red}f{eq:Crrpa}), so that the integrated (in ${{\itbf X}}_0$) version is equal to {{\itbf e}}gin{eqnarray} \nonumber &&\int {\cal C}_{{{\itbf r}},{{\itbf r}}'}^{\rho_o}({{\itbf x}}_0,{{\itbf x}}_0') {\rm d}{{\itbf X}}_0 \\ \nonumber && = \frac{3^{d/2}}{[\pi {{\itbf e}}gin{eqnarray}r{\gamma}_2L^3 (1+ \rho_o^2 /\rho_L^2 )]^{d/2}} \int\int U \big({{\itbf X}} +\frac{{{\itbf R}}'-{{\itbf R}}}{2} \big)\overline{U} \big({{\itbf X}} - \frac{{{\itbf R}}'-{{\itbf R}}}{2} \big) \\ && \times \overline{U} \big({{\itbf X}}' +\frac{{{\itbf R}}'-{{\itbf R}}}{2} \big) {U} \big({{\itbf X}}' - \frac{{{\itbf R}}'-{{\itbf R}}}{2} \big) \exp \Big( - \frac{|{{\itbf X}}-{{\itbf X}}'|^2}{2R_L^2} \Big) {\rm d} {{\itbf X}} {\rm d}{{\itbf X}}', \end{eqnarray} where {{\itbf e}}gin{equation} R_L^2 = \frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3}{6} \frac{1+ \rho_o^2 /\rho_L^2}{1+4 \rho_o^2 /\rho_L^2} , \end{equation} which is between ${{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3/{24}$ and ${{\itbf e}}gin{eqnarray}r{\gamma}_2 L^3/{6}$. This shows that the intensity covariance function is proportional to ( \textcolor{red}f{eq:Crrp3}) when the radius of the incident field is smaller than $R_L$, but it becomes blurred by the Gaussian convolution with radius $R_L$ when the radius of the incident field is larger. \end{remark} { {{\itbf e}}gin{remark} In this paper we assume that the phase of $U$ is known, so that a phase-retrieval algorithm can be used to extract $U$ from $|\hat{U}|$. This is the case in the experimental setting described at the beginning of Section \textcolor{red}f{sec:intcor}, as we assume that 1) the illumination is a plane wave and 2) the object is a mask. If the plane wave is normally incident, then the phase is zero (or constant). If the plane wave is obliquely incident with a known angle, then the phase is also known. 
If the illumination phase is unknown, then it should still be possible, in principle, to reconstruct the complex profile $U$ provided one has a sufficiently strong support constraint, as demonstrated in \cite{fienup87}, although this is not obvious. \end{remark} } { {{\itbf e}}gin{remark}\label{remark:49} In this paper we assume that the medium is statistically homogeneous between the plane of the mask $z=0$ and the plane of the camera $z=\ell$. One could consider a more general situation in which there are three regions, namely a random medium sandwiched in between two homogeneous media. This situation will be addressed in future work, but we may anticipate contributions from interesting phenomena such as the shower curtain effect \cite{ishimaru}. \end{remark} } \section{The spot-dancing regime} \label{sec:spot} The spot-dancing regime is valid if {the white-noise paraxial regime (Definition \textcolor{red}f{def:par}) is valid, and, additionally, the correlation radius of the medium fluctuations (which determines the transverse correlation radius of the Brownian field in the It\^o-Schr\"odinger equation) is larger than the incident field radius. The standard deviation of the Brownian field then needs to be relatively large so that one can see an effect of order one.} More precisely, we define the spot-dancing regime as follows. {{\itbf e}}gin{definition} \label{def:spo} {Consider the paraxial regime of Definition \textcolor{red}f{def:par} so that the evolution of the field amplitude is governed by the It\^o-Schr\"odinger equation ( \textcolor{red}f{eq:IS}).} In the spot-dancing regime, the covariance function $\gamma_0^\varepsilonsilon$ is of the form: {{\itbf e}}gin{equation} \label{eq:spregime} \gamma_0^\varepsilonsilon ({{\itbf x}}) = \varepsilonsilon^{-2} \gamma_0 (\varepsilonsilon {{\itbf x}}) , \end{equation} for a small dimensionless parameter $\varepsilonsilon$, and the function $\gamma_0$ is smooth and can be expanded as ( \textcolor{red}f{eq:expandgamma0}). 
\end{definition} We want to study the asymptotic behavior of the moments of the field in this regime, which is called the spot-dancing regime for reasons that will become clear from the following discussion. {{\itbf e}}gin{prop}\label{pro:spo} In the spot-dancing regime we have the following asymptotic description for the transmitted field in distribution {{\itbf e}}gin{eqnarray} \nonumber {E}_{{\itbf r}}({{\itbf x}}) &=&{E}^0_{{\itbf r}}({{\itbf x}} -{{\itbf X}}_{\ell}) \exp \Big( - i \frac{ k_o\sqrt{{{\itbf e}}gin{eqnarray}r{\gamma}_2} {{\itbf W}}_{\ell} }{2} \cdot ({{\itbf x}}-{{\itbf X}}_{\ell}) \Big) \\ && \times \exp \Big( i \frac{k_o {{\itbf e}}gin{eqnarray}r{\gamma}_2 }{8} \big( {\ell} |{{\itbf W}}_{\ell}|^2 - \int_0^{\ell} |{{\itbf W}}_{z}|^2 {\rm d} z \big) \Big) , \end{eqnarray} {where ${{\itbf W}}_z$ is a standard $d$-dimensional Brownian motion,} {{\itbf e}}gin{equation} {E}^0_{{\itbf r}}({{\itbf x}}) = \Big( \frac{k_o}{2\pi {\ell}} \Big)^{d/2} \int_{\mathbb{R}^d} U({{\itbf y}}-{{\itbf r}}) \exp \Big( i \frac{k_o}{2{\ell}} |{{\itbf x}}-{{\itbf y}}|^2 \Big){\rm d} {{\itbf y}} \label{eq:fieldfar1} \end{equation} is the field that is observed when the medium is homogeneous and {{\itbf e}}gin{equation} {{\itbf X}}_z = \frac{\sqrt{{{\itbf e}}gin{eqnarray}r{\gamma}_2}}{2} \Big(\int_0^z {{\itbf W}}_{z'} {\rm d} z'- z {{\itbf W}}_z \Big) = -\frac{\sqrt{{{\itbf e}}gin{eqnarray}r{\gamma}_2}}{2} \int_0^z z' {\rm d} {{\itbf W}}_{z'} \end{equation} is the random center of the field, that is a $\mathbb{R}^d$-valued Gaussian process with mean zero and covariance {{\itbf e}}gin{equation} \mathbb{E} \big[ {{\itbf X}}_z {{\itbf X}}_{z'}^T \big] = \frac{{{\itbf e}}gin{eqnarray}r{\gamma}_2 (z\wedge z')^3}{12} {\bf I} . 
\end{equation} \end{prop} In particular, the intensity of the transmitted field is {{\itbf e}}gin{equation} | {E}_{{\itbf r}} ({{\itbf x}})|^2 = | {E}^0_{{\itbf r}} ({{\itbf x}} - {{\itbf X}}_{\ell})|^2 = | {E}^0_{\bf 0} ({{\itbf x}} - {{\itbf r}}-{{\itbf X}}_{\ell})|^2 . \label{eq:meanintspot1} \end{equation} This representation justifies the name ``spot-dancing regime'': the transmitted intensity has the same transverse profile as in a homogeneous medium, but its center is randomly shifted by the Gaussian process ${{\itbf X}}_z$. Note that in this case, there is no statistical averaging when one considers the empirical intensity covariance function ( \textcolor{red}f{def:Crrrho0}), which is the random quantity equal to {{\itbf e}}gin{eqnarray} \nonumber && \hspace*{-1cm} C^{\rho_o}_{{{\itbf r}},{{\itbf r}}'} = \frac{1}{|A_o|} \int_{A_o} | {E}^0_{\bf 0} ({{\itbf x}}_0 - {{\itbf r}}-{{\itbf X}}_{\ell})|^2| {E}^0_{\bf 0} ({{\itbf x}}_0 - {{\itbf r}}'-{{\itbf X}}_{\ell})|^2 {\rm d} {{\itbf x}}_0 \\ && \hspace*{-1cm} - \Big( \frac{1}{|A_o|} \int_{A_o} | {E}^0_{\bf 0} ({{\itbf x}}_0 - {{\itbf r}}-{{\itbf X}}_{\ell})|^2 {\rm d} {{\itbf x}}_0 \Big) \Big( \frac{1}{|A_o|} \int_{A_o} | {E}^0_{\bf 0} ({{\itbf x}}_0 - {{\itbf r}}'-{{\itbf X}}_{\ell})|^2 {\rm d} {{\itbf x}}_0 \Big) . \end{eqnarray} If the radius of the camera is larger than the radius of the incident field, and also large relative to the typical spot-dancing shift $\sqrt{{{\itbf e}}gin{eqnarray}r{\gamma}_2 {\ell}^3}$ and to the shift ${{\itbf r}}$, then the intensity covariance function gives the autocovariance of the unperturbed intensity profile: {{\itbf e}}gin{eqnarray} && \hspace*{-0.6in} C^{\rho_o}_{{{\itbf r}},{{\itbf r}}'} =\frac{1}{|A_o|} \int_{\mathbb{R}^d} | {E}^0_{\bf 0} ({{\itbf x}}_0 )|^2| {E}^0_{\bf 0} ({{\itbf x}}_0 - {{\itbf r}}'+{{\itbf r}})|^2 {\rm d} {{\itbf x}}_0 - \Big( \frac{1}{|A_o|} \int_{\mathbb{R}^d} | {E}^0_{\bf 0} ({{\itbf x}}_0 )|^2 {\rm d} {{\itbf x}}_0 \Big)^2 . 
\label{eq:expresCrhospot2} \end{eqnarray} Therefore, in the spot-dancing regime, the random medium does not modify the intensity covariance function compared to the case of a homogeneous medium. {\rm d}ebproof We review the results that can be found in \cite{andrews,dawson84,furutsu72,furutsu73,garniers3} and put them in a convenient form for the derivation. If the covariance function $\gamma_0$ can be expanded as ( \textcolor{red}f{eq:expandgamma0}), then the equation for the Fourier transform of the fourth-order moment can be simplified in the spot-dancing regime $\varepsilonsilon \to 0$ as: {{\itbf e}}gin{equation} \frac{\partial \hat{\mu}_{{\itbf r}}}{\partial z} + \frac{i}{k_o} \big( {{\itbf x}}i_1\cdot {{\itbf z}}eta_1+ {{\itbf x}}i_2\cdot {{\itbf z}}eta_2\big)\hat{\mu}_{{\itbf r}} = \frac{k_o^2 {{\itbf e}}gin{eqnarray}r{\gamma}_2}{2} \Delta_{{{\itbf x}}i_1} \hat{\mu}_{{\itbf r}}. \end{equation} This equation can be solved (by a Fourier transform in ${{\itbf x}}i_1$): {{\itbf e}}gin{eqnarray} \nonumber \hspace*{-2cm} \hat{\mu}_{{\itbf r}}({{\itbf x}}i_1,{{\itbf x}}i_2,{{\itbf z}}eta_1,{{\itbf z}}eta_2,z) &=& \int \hat{\mu}_{{\itbf r}}({{\itbf x}}i_1',{{\itbf x}}i_2,{{\itbf z}}eta_1,{{\itbf z}}eta_2,0)\\ &&\times \exp \Big( -i \frac{z}{k_o} ({{\itbf x}}i_1'\cdot {{\itbf z}}eta_1 +{{\itbf x}}i_2\cdot {{\itbf z}}eta_2) \Big) \psi( {{\itbf x}}i_1-{{\itbf x}}i_1',{{\itbf z}}eta_1,z) {\rm d} {{\itbf x}}i_1' , \end{eqnarray} with {{\itbf e}}gin{equation} \psi( {{\itbf x}}i ,{{\itbf z}}eta_1,z) =\frac{1}{(2 \pi k_o^2 {{\itbf e}}gin{eqnarray}r{\gamma}_2 z)^{d/2}} \exp \Big( - \frac{ {{\itbf e}}gin{eqnarray}r{\gamma}_2 z^3}{24} |{{\itbf z}}eta_1|^2 - i \frac{ z}{2k_o} {{\itbf x}}i \cdot {{\itbf z}}eta_1 -\frac{1}{2 k_o^2 {{\itbf e}}gin{eqnarray}r{\gamma}_2 z} |{{\itbf x}}i|^2 \Big) . \end{equation} This gives an explicit expression for the fourth-order moment which is what we need to analyze the speckle imaging approach considered here. 
As shown in \cite{garniers3}, it is in fact possible to compute all the moments in the spot-dancing regime and to identify the statistical distribution of the transmitted field $E_{{\itbf r}}({{\itbf x}})$. We have in distribution {{\itbf e}}gin{equation} \hat{E}_{{\itbf r}} ({{\itbf k}}) = \hat{U}_{{\itbf r}} \Big( {{\itbf k}} + \frac{k_o\sqrt{ {{\itbf e}}gin{eqnarray}r{\gamma}_2}}{2} {{\itbf W}}_{\ell} \Big) \exp \Big( - \frac{i}{2k_o} \int_0^{\ell} \big| {{\itbf k}}+ \frac{k_o \sqrt{ {{\itbf e}}gin{eqnarray}r{\gamma}_2}}{2} {{\itbf W}}_{z} \big|^2 {\rm d} z \Big) , \end{equation} from which Proposition \textcolor{red}f{pro:spo} follows. {\small $\Box$} \\ {{\itbf e}}gin{remark}\label{remark3} To be complete, we can add that it is quite easy to reconstruct the incident field profile $U$ under the natural assumption that the camera is in the far field (i.e., ${\ell}$ is larger than the Rayleigh length $k_o r_U^2$, where $r_U$ is the radius of the mask). Indeed, ( \textcolor{red}f{eq:fieldfar1}) and ( \textcolor{red}f{eq:meanintspot1}) show that the transmitted intensity $| {E}_{{\itbf r}} ({{\itbf x}})|^2$ is equal to $|\hat{U}_{{{\itbf r}}+{{\itbf X}}_{\ell}}(k_o {{\itbf x}} / {\ell})|^2$ (up to a multiplicative constant). From the modulus of the Fourier transform of $U_{{{\itbf r}}+{{\itbf X}}_{\ell}}({{\itbf x}})$ and from its phase (assumed to be known, for instance, zero) it is possible to reconstruct the incident field profile by a phase-retrieval algorithm \cite{fienup}. Note, however, that for a large window the displacement ${{\itbf X}}_{\ell}$ may vary over the image. \end{remark} \section{Summary and Concluding Remarks} \label{sec:con} We have considered an algorithm for imaging of a moving object based on speckle statistics. 
The scheme is as introduced in \cite{webb14} and the basic quantity computed is the measured or empirical intensity covariance over incident position {{\itbf e}}gin{eqnarray} \nonumber C_{{{\itbf r}},{{\itbf r}}'} &=& \frac{1}{|A_o|} \int_{A_o} |E_{{\itbf r}} ( {{\itbf x}}_0 )|^2 |E_{{{\itbf r}}'}( {{\itbf x}}_0 )|^2 {\rm d} {{\itbf x}}_0 \\ & & \hbox{} - \Big( \frac{1}{|A_o|} \int_{A_o} |E_{{\itbf r}} ( {{\itbf x}}_0 )|^2 {\rm d} {{\itbf x}}_0 \Big) \Big( \frac{1}{|A_o|} \int_{A_o} |E_{{{\itbf r}}'} ( {{\itbf x}}_0 )|^2 {\rm d} {{\itbf x}}_0 \Big) , \label{def:intcor2} \end{eqnarray} where $A_o$ is the spatial support of the camera and ${{\itbf r}},{{\itbf r}}'$ are incident positions, see Figure~ \textcolor{red}f{fig:1}. The conjecture of \cite{webb14} is that {{\itbf e}}gin{equation} \label{eq:pred2} C_{{{\itbf r}},{{\itbf r}}'} \approx \Big| \int_{\mathbb{R}^d} |\hat{U}({{\itbf k}})|^2 \exp \big( i{{\itbf k}} \cdot ( {{\itbf r}}'-{{\itbf r}}) \big) {\rm d}{{\itbf k}} \Big|^2 \propto \left| (U \star \overline{U})( {{\itbf r}} - {{\itbf r}}') \right|^2 , \end{equation} where $\star$ stands for convolution, so that the mask $U$ can be recovered via a phase retrieval step. The interesting consequence of such a result is that precise information about the shape of the mask is hidden in the complex speckle pattern, moreover, that the expression for the empirical intensity covariance does not depend on the properties of the complex section and the associated character of the scattering process. The argument in \cite{webb14} is based on a strong scattering assumption and an associated zero-mean circular Gaussian assumption for the transmitted wave field. 
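The identity underlying ( \textcolor{red}f{eq:pred2}), namely that the Fourier-domain expression $\frac{1}{2\pi}\int |\hat U(k)|^2 e^{ik({{\itbf r}}'-{{\itbf r}})} {\rm d}k$ coincides with the spatial autocorrelation of the mask, can be checked numerically. The sketch below does this in dimension $d=1$ for an assumed Gaussian profile $U$; the profile and all numerical values are illustrative choices, not data from the paper.

```python
import numpy as np

# Numerical check, in d = 1 and for an assumed Gaussian mask profile U, of the
# identity behind the conjecture: the autocorrelation of U at lag D equals the
# inverse Fourier transform of |U_hat|^2 evaluated at D.
a = 1.0                                   # mask radius (illustrative)
x = np.linspace(-10.0, 10.0, 1001)
U = np.exp(-x**2 / (2 * a**2))
D = 0.7                                   # shift offset r' - r (illustrative)

# Spatial form: int U(X + D/2) * conj(U)(X - D/2) dX
lhs = np.trapz(np.exp(-(x + D/2)**2 / (2*a**2))
               * np.exp(-(x - D/2)**2 / (2*a**2)), x)

# Fourier form: (1/2pi) int |U_hat(k)|^2 exp(i*k*D) dk, with U_hat obtained
# by direct quadrature of int U(x) exp(-i*k*x) dx.
k = np.linspace(-10.0, 10.0, 1001)
U_hat = np.trapz(U[None, :] * np.exp(-1j * np.outer(k, x)), x, axis=1)
rhs = np.real(np.trapz(np.abs(U_hat)**2 * np.exp(1j * k * D), k)) / (2 * np.pi)

# For this Gaussian both sides equal a*sqrt(pi)*exp(-D^2/(4*a^2)).
exact = a * np.sqrt(np.pi) * np.exp(-D**2 / (4 * a**2))
```

Both quadratures agree with the closed-form value, which is the $d=1$ Gaussian instance of the autocorrelation $|(U \star \overline{U})({{\itbf r}}-{{\itbf r}}')|^2$ appearing in the conjecture.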
Here we have presented an analysis of this problem with a view toward identifying the precise scaling regime in which the beautiful relation \eqref{eq:pred2} set forth in \cite{webb14} can be mathematically justified, modeling the complex section shown in Figure \ref{fig:1} as a random medium and considering scalar time-harmonic wave propagation as a model for narrow-band optics. To set the stage for our discussion let us consider that the random medium fluctuations in (\ref{eq:wave0}) have mean zero and covariance of the form
\begin{eqnarray*}
\mathbb{E} \big[ \mu ({{\itbf x}},z) \mu ({{\itbf x}}',z') \big] = \sigma^2 {\cal C}_{\mu}\Big( \frac{{{\itbf x}}-{{\itbf x}}'}{\ell_c} , \frac{z-z'}{\ell_c}\Big),
\end{eqnarray*}
with ${\cal C}_{\mu}$ a normalized function (such that ${\cal C}_{\mu}({\bf 0})=1$ and the radius of ${\cal C}_{\mu}$ is of order one). In this model $\sigma^2$ is the variance of the relative random fluctuations of the medium and $\ell_c$ is the coherence length. We also let
\begin{eqnarray*}
\gamma_0({{\itbf x}}-{{\itbf x}}') = \int_{-\infty}^{\infty} \mathbb{E} \big[ \mu ({{\itbf x}},z) \mu ({{\itbf x}}',z+z') \big] \, {\rm d} z' ,
\end{eqnarray*}
which is the lateral spectrum of the driving Brownian motion in the It\^o-Schr\"odinger equation in \eqref{eq:IS}. The central parameters associated with this formulation are then (i) the central wavelength $\lambda_o=2\pi c_0/k_o$, (ii) the medium coherence length $\ell_c$, (iii) the relative magnitude of the medium fluctuations $\sigma$, (iv) the radius of the camera ${r}_A$, (v) the size ${r}_U$ of the mask $U$, and (vi) the distance ${\ell}$ from the mask to the camera, corresponding to the thickness of the random section.
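For a specific choice of the normalized covariance ${\cal C}_{\mu}$, the lateral spectrum $\gamma_0$ can be evaluated in closed form, which is convenient for checking numerical implementations. The sketch below is a hedged illustration: the Gaussian shape ${\cal C}_{\mu}({{\itbf x}},z)=\exp(-(|{{\itbf x}}|^2+z^2)/2)$ and the values of $\sigma$ and $\ell_c$ are our assumptions, not a model used in the paper.

```python
import numpy as np

def gamma0(r, sigma=0.05, ell_c=1e-4):
    """gamma_0(r) = int_R E[mu(x,z) mu(x',z+z')] dz' for |x - x'| = r,
    with the (assumed) Gaussian model C_mu(x, z) = exp(-(|x|^2 + z^2)/2).
    Substituting s = z'/ell_c gives sigma^2 * ell_c * int C_mu(r/ell_c, s) ds.
    """
    s = np.linspace(-10.0, 10.0, 4001)                 # quadrature grid in s
    ds = s[1] - s[0]
    integrand = np.exp(-((r / ell_c) ** 2 + s ** 2) / 2.0)
    return sigma ** 2 * ell_c * integrand.sum() * ds   # Riemann sum quadrature
```

For this Gaussian model, $\gamma_0({\bf 0}) = \sigma^2 \ell_c \sqrt{2\pi}$, which the quadrature reproduces to high accuracy.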
The main scaling regime we have considered is the one leading to the It\^o-Schr\"odinger equation in \eqref{eq:IS}, or the white-noise paraxial model, corresponding to
\begin{eqnarray*}
\lambda_o=2\pi/k_o \ll \ell_c \ll {\ell} .
\end{eqnarray*}
We have then considered two subregimes of propagation, which are essentially the two canonical scaling regimes of the white-noise paraxial model: (a) the {\it scintillation regime}, corresponding to ${r}_U \gg \ell_c$; (b) the {\it spot-dancing regime}, corresponding to ${r}_U \ll \ell_c$. In the spot-dancing regime, the wave intensity pattern is as in the homogeneous case, modified, however, by a random lateral shift of the profile. In this case the formula \eqref{eq:pred2} is not valid, but the mask can still be recovered, albeit with a different approach, namely the one that would be used in a homogeneous medium. In the scintillation regime, the transmitted wave forms a speckle pattern with rapid fluctuations of the intensity. In order to discuss the scintillation regime let us introduce two parameters. First, the characteristic size of the speckle fluctuations, or speckle radius, at range ${\ell}$ is
\begin{eqnarray*}
\rho_{\ell} = \frac{\ell_c^{1/2}}{ \sigma k_o {\ell}^{1/2} } .
\end{eqnarray*}
The other fundamental parameter associated with the scintillation regime is the beam spreading width at range ${\ell}$, which is
\begin{eqnarray*}
{\mathcal A}_{\ell} = \frac{\sigma {\ell}^{3/2}}{\ell_c^{1/2}} = \frac{{\ell}}{k_o \rho_{\ell}} .
\end{eqnarray*}
In order to have a high signal-to-noise ratio, so that the empirical intensity covariance function is close to its expectation, we assume
\begin{eqnarray*}
\rho_{\ell} \ll {r}_A .
\end{eqnarray*}
We remark that if the camera is made of finite-sized elements, of size $\rho_o$, then we assume that $\rho_o = O(\rho_{\ell})$ to retain a high signal-to-noise ratio (the effects of finite-sized elements are analyzed in detail above). \\
We then arrive at the asymptotic description in \eqref{eq:c} for the empirical intensity covariance. This expression involves the (second-order) statistics of the medium and the mask function $U$. It can form the basis for an estimation procedure for the mask, and we remark that it holds true whatever the magnitude of the scattering mean free path $\ell_{\rm sca}$ relative to the range ${\ell}$, with $\ell_{\rm sca}$ given in \eqref{def:lsca:parax}, which corresponds to
\begin{eqnarray*}
\ell_{\rm sca} = {\ell} \Big( \frac{\rho_{\ell}}{\ell_c} \Big)^2 ,
\end{eqnarray*}
so that the regime of long-range propagation corresponds to $\rho_{\ell} \ll \ell_c$. Under some final scaling assumptions we arrive exactly at the description in \eqref{eq:pred2}. Specifically, assume (i) relatively large spreading, so that $|{{\itbf r}}|, {r}_A \ll {\mathcal A}_{\ell}$; (ii) long-range propagation, so that $\ell_{\rm sca} \ll {\ell}$; and (iii) smooth medium fluctuations, so that \eqref{eq:expandgamma0} is valid. These are the last stepping stones toward the formula \eqref{eq:pred2}. Let us next comment on an informal interpretation of the above result. Let $G_{\ell}({{\itbf x}},{{\itbf r}})$ be the Green's function over the section $z\in (0,{\ell})$ for a source point at $({{\itbf r}},0)$ and an observation point at $({{\itbf x}},{\ell})$. Then we have for the transmitted field:
\begin{eqnarray*}
E_{{{\itbf r}}}({{\itbf x}}) = \int_{\mathbb{R}^d} U({{\itbf y}}-{{\itbf r}}) G_{\ell}({{\itbf x}}, {{\itbf y}}) {\rm d} {{\itbf y}} .
\end{eqnarray*}
Let us first consider the field covariance function with respect to the shift vector ${{\itbf r}}$:
\begin{eqnarray*}
{\cal D}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) &=& \mathbb{E} \big[ E_{{\itbf r}} ( {{\itbf x}}_0 ) \overline{E_{{{\itbf r}}'}( {{\itbf x}}_0 )}\big] ,
\end{eqnarray*}
which, making use of reciprocity, can be expressed as
\begin{eqnarray*}
{\cal D}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) &=& \iint_{\mathbb{R}^{2d}} U({{\itbf y}}-{{\itbf r}}) \overline{U(\tilde{{{\itbf y}}}-{{\itbf r}}')} \mathbb{E}\left[ G_{\ell}({{\itbf x}}_0, {{\itbf y}}) \overline{G_{\ell}({{\itbf x}}_0, \tilde{{{\itbf y}}}) } \right] {\rm d} {{\itbf y}} {\rm d} \tilde{{{\itbf y}}} .
\end{eqnarray*}
In the strongly scattering regime, and under the assumption that the speckle radius $\rho_{\ell}$ is much smaller than ${r}_U$, the covariance $\mathbb{E} [ G_{\ell}({{\itbf x}}_0, {{\itbf y}}) \overline{G_{\ell}({{\itbf x}}_0, \tilde{{{\itbf y}}}) } ]$ is approximately delta-correlated in ${{\itbf y}}-\tilde{{{\itbf y}}}$ and is proportional to an envelope with beam width ${\cal A}_{\ell}$, so that we get
\begin{eqnarray*}
&& {\cal D}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) \propto \int_{\mathbb{R}^{d}} U({{\itbf y}}- {{\itbf r}}) \overline{U( {{{\itbf y}}} -{{\itbf r}}')} s\Big( \frac{{{\itbf y}}-{{\itbf x}}_0 }{{\mathcal A}_{\ell}} \Big) {\rm d} {{\itbf y}} ,
\end{eqnarray*}
for $s$ a normalized envelope function with unit width and unit amplitude. Under the assumption that ${r}_U,{r}_A$ and the camera center point have small magnitude relative to ${\mathcal A}_{\ell}$ we get
\begin{eqnarray*}
&& {\cal D}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) \propto (U \star \overline{U})( {{\itbf r}}' - {{\itbf r}}) .
\end{eqnarray*}
We next have for the speckle covariance function with respect to the shift vector:
\begin{eqnarray*}
&& \hspace*{-2.7cm} {\cal C}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) =\mathbb{E}\big[ |E_{{\itbf r}} ( {{\itbf x}}_0 )|^2 |E_{{{\itbf r}}'}( {{\itbf x}}_0 )|^2 \big] - \mathbb{E}\big[ |E_{{\itbf r}} ( {{\itbf x}}_0 )|^2 \big]\mathbb{E}\big[|E_{{{\itbf r}}'}( {{\itbf x}}_0 )|^2 \big] \\
&=& \mathbb{E}\bigg[ \int_{\mathbb{R}^d} U({{\itbf y}}-{{\itbf r}}) G_{\ell}({{\itbf x}}_0, {{\itbf y}}) {\rm d} {{\itbf y}} \int_{\mathbb{R}^d} U({{\itbf y}}-{{\itbf r}}') G_{\ell}({{\itbf x}}_0, {{\itbf y}}) {\rm d} {{\itbf y}} \\
& & \hbox{} \times \int_{\mathbb{R}^d} \overline{U({{\itbf y}}-{{\itbf r}}) G_{\ell}({{\itbf x}}_0, {{\itbf y}}) }{\rm d} {{\itbf y}} \int_{\mathbb{R}^d} \overline{U({{\itbf y}}-{{\itbf r}}') G_{\ell}({{\itbf x}}_0, {{\itbf y}}) }{\rm d} {{\itbf y}} \bigg] \\
& & \hbox{} - {\cal D}_{{{{\itbf r}}},{{{\itbf r}}}}({{\itbf x}}_0,{{\itbf x}}_0) \overline{ {\cal D}_{{{{\itbf r}}'},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) } \\
&=& \int\int_{\mathbb{R}^{4d}} \left( U({{\itbf y}}_1-{{\itbf r}}) \overline{U({{\itbf y}}_2-{{\itbf r}}')} \right) \left( \overline{U({{\itbf y}}_3-{{\itbf r}})} {U({{\itbf y}}_4-{{\itbf r}}')} \right) \\
& & \hbox{} \times \mathbb{E}\left[ G_{\ell}({{\itbf x}}_0, {{\itbf y}}_1) \overline{G_{\ell}({{\itbf x}}_0, {{\itbf y}}_2)} \overline{G_{\ell}({{\itbf x}}_0, {{\itbf y}}_3)} {G_{\ell}({{\itbf x}}_0, {{\itbf y}}_4)} \right] {\rm d} {{\itbf y}}_1 {\rm d} {{\itbf y}}_2 {\rm d} {{\itbf y}}_3 {\rm d} {{\itbf y}}_4 \\
& & \hbox{} - {\cal D}_{{{{\itbf r}}},{{{\itbf r}}}}({{\itbf x}}_0,{{\itbf x}}_0) \overline{ {\cal D}_{{{{\itbf r}}'},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) } \\
&=& {\cal D}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) \overline{ {\cal D}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0)} ,
\end{eqnarray*}
where we have used a Gaussian summation rule (Isserlis
formula), which states that for four jointly circularly symmetric complex Gaussian random variables $Z_j$, $j=1,\ldots,4$, we have
\begin{eqnarray}
\label{eq:gaussrule}
\mathbb{E} \big[ Z_1 \overline{Z_2 Z_3} Z_4\big] = \mathbb{E}\big[ Z_1 \overline{Z_2} \big] \mathbb{E} \big[\overline{ Z_3 } Z_4\big] + \mathbb{E}\big[ {Z_1 } \overline{Z_3} \big] \mathbb{E} \big[ \overline{ Z_2 } Z_4\big] .
\end{eqnarray}
We then arrive at
\begin{eqnarray*}
&& {\cal C}_{{{{\itbf r}}},{{{\itbf r}}'}}({{\itbf x}}_0,{{\itbf x}}_0) \propto \left| ( U \star \overline{U})( {{\itbf r}} - {{\itbf r}}') \right|^2 ,
\end{eqnarray*}
which is \eqref{eq:pred2}. It is clear from the above argument that the so-called memory effect for the speckle pattern, which is important in other modalities of speckle imaging \cite{freund88,vellekoop07,vellekoop08}, plays no role in this version of speckle imaging. What matters here is a small speckle radius and a large spreading of the field. Moreover, in this formal argument we made use of a Gaussian assumption, which made it possible to factor a fourth moment in terms of second moments. That this is valid in the considered regime is a deep result on waves in random media which was recently established in \cite{garniers4}. Note also that the above argument shows how a similar mask imaging procedure can be constructed when one has access to the wave field itself: it is then possible to estimate the field covariance function with respect to the shift vector, and the Gaussian property is not needed. Finally, in Remarks \ref{remark2} and \ref{remark3} we discuss how, under various circumstances regarding the random medium, the image may be subject to blurring and geometric distortion operators. In practice some amount of both of these effects will be present. For instance, in the context of turbulence mitigation for propagation through the atmosphere, they need to be corrected for.
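The moment factorization \eqref{eq:gaussrule} can be checked directly by Monte Carlo simulation. The sketch below is our illustration (the mixing matrix and sample size are arbitrary choices): it draws correlated circularly symmetric complex Gaussian variables and compares the empirical fourth moment with the sum of products of empirical second moments.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# circularly symmetric complex Gaussian white noise with identity covariance
w = (rng.normal(size=(4, n)) + 1j * rng.normal(size=(4, n))) / np.sqrt(2.0)

# correlate the variables through a fixed (arbitrary) real mixing matrix;
# a real matrix preserves circular symmetry of the resulting Z_j
L = np.array([[1.0, 0.2, 0.0, 0.0],
              [0.3, 1.0, 0.1, 0.0],
              [0.0, 0.4, 1.0, 0.2],
              [0.1, 0.0, 0.3, 1.0]])
Z1, Z2, Z3, Z4 = L @ w

m = lambda a, b: np.mean(a * np.conj(b))   # empirical E[a conj(b)]
lhs = np.mean(Z1 * np.conj(Z2) * np.conj(Z3) * Z4)
rhs = m(Z1, Z2) * m(Z4, Z3) + m(Z1, Z3) * m(Z4, Z2)
```

Both sides agree up to Monte Carlo error of order $n^{-1/2}$; for zero-mean non-circular variables an additional term $\mathbb{E}[Z_1 Z_4]\mathbb{E}[\overline{Z_2 Z_3}]$ would appear.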
We refer to \cite{gilles0,gilles1,bert} for frameworks that aim at mitigating such effects, in which a physical model, the so-called ``Fried kernel'', has in particular been used with success. Here, we have developed the theory for how such distortion operators can be modeled in the context of speckle imaging. {Indeed, in this paper we were able to address separately the two canonical scaling regimes of the white-noise paraxial model: the scintillation regime, corresponding to ${r}_U \gg \ell_c$, and the spot-dancing regime, corresponding to ${r}_U \ll \ell_c$. The intermediate regime, where ${r}_U \sim \ell_c$, cannot be addressed via the asymptotic techniques used in our paper. We may expect it to produce a mixture of the two canonical scaling regimes, which would result in a more challenging situation from the inverse problems point of view. In particular, we anticipate that the intensity covariance function would then not be statistically stable.}
\section*{Acknowledgments}
This research is supported in part by AFOSR grant FA9550-18-1-0217, NSF grant 1616954, Centre Cournot, Fondation Cournot, and Universit\'e Paris Saclay (chaire D'Alembert).
\begin{thebibliography}{99}
\bibitem{alamouti} S. M. Alamouti, A simple transmit diversity technique for wireless communications, IEEE J. Sel. Areas Commun. {\bf 16} (1998), 1451-1458.
\bibitem{andrews} L. C. Andrews and R. L. Phillips, {\it Laser Beam Propagation Through Random Media}, SPIE Press, Bellingham, 2005.
\bibitem{aubry09} A. Aubry and A. Derode, Random matrix theory applied to acoustic backscattering and imaging in complex media, Phys. Rev. Lett. {\bf 102} (2009), 084301.
\bibitem{aubry11} A. Aubry and A. Derode, Multiple scattering of ultrasound in weakly inhomogeneous media: application to human soft tissues, J. Acoust. Soc. Am. {\bf 129} (2011), 225-233.
\bibitem{borcea11} L. Borcea, J. Garnier, G. Papanicolaou, and C.
Tsogka, Enhanced statistical stability in coherent interferometric imaging, Inverse Problems {\bf 27} (2011), 085004.
\bibitem{borcea05} L.~Borcea, G.~Papanicolaou, and C.~Tsogka, Interferometric array imaging in clutter, Inverse Problems {\bf 21} (2005), 1419-1460.
\bibitem{dawson84} D. Dawson and G. Papanicolaou, A random wave process, Appl. Math. Optim. {\bf 12} (1984), 97-114.
\bibitem{fienup} J. R. Fienup, Phase retrieval algorithms: a comparison, Appl. Opt. {\bf 21} (1982), 2758-2769.
\bibitem{fienup87} J. R. Fienup, Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint, J. Opt. Soc. Am. A {\bf 4} (1987), 118-123.
\bibitem{book1} J.-P. Fouque, J. Garnier, G. Papanicolaou, and K. S\o lna, {\em Wave Propagation and Time Reversal in Randomly Layered Media}, Springer, New York, 2007.
\bibitem{freund88} I. Freund, M. Rosenbluh, and S. Feng, Memory effects in propagation of optical waves through disordered media, Phys. Rev. Lett. {\bf 61} (1988), 2328-2331.
\bibitem{furutsu72} K. Furutsu, Statistical theory of wave propagation in a random medium and the irradiance distribution function, J. Opt. Soc. Am. {\bf 62} (1972), 240-254.
\bibitem{furutsu73} K. Furutsu and Y. Furuhama, Spot dancing and relative saturation phenomena of irradiance scintillation of optical beams in a random medium, Optica Acta {\bf 20} (1973), 707-719.
\bibitem{garniers1} J. Garnier and K. S\o lna, Coupled paraxial wave equations in random media in the white-noise regime, Ann. Appl. Probab. {\bf 19} (2009), 318-346.
\bibitem{garniers2} J. Garnier and K. S\o lna, Scaling limits for wave pulse transmission and reflection operators, Wave Motion {\bf 46} (2009), 122-143.
\bibitem{garniers3} J. Garnier and K. S\o lna, Scintillation in the white-noise paraxial regime, Comm. Partial Differential Equations {\bf 39} (2014), 626-650.
\bibitem{garniers4} J. Garnier and K.
S\o lna, Fourth-moment analysis for beam propagation in the white-noise paraxial regime, Archive for Rational Mechanics and Analysis {\bf 220} (2016), 37-81.
\bibitem{gilles0} J. Gilles and S. Osher, Fried deconvolution, Proceedings SPIE Defense, Security and Sensing Conference, Baltimore, 2012.
\bibitem{huang} D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, Optical coherence tomography, Science {\bf 254} (1991), 1178-1181.
\bibitem{ishimaru} A. Ishimaru, {\em Wave Propagation and Scattering in Random Media}, Academic Press, San Diego, 1978.
\bibitem{katz12} O. Katz, E. Small, and Y. Silberberg, Looking around corners and through thin turbid layers in real time with scattered incoherent light, Nature Photon. {\bf 6} (2012), 549-553.
\bibitem{kunita} H. Kunita, {\it Stochastic Flows and Stochastic Differential Equations}, Cambridge University Press, Studies in Advanced Mathematics {\bf 24}, 1990.
\bibitem{gilles1} Y. Mao and J. Gilles, Non rigid geometric distortions correction - application to atmospheric turbulence stabilization, Journal of Inverse Problems and Imaging {\bf 6} (2012), 531-546.
\bibitem{bert} M. Micheli, Y. Lou, S. Soatto, and A. L. Bertozzi, A linear systems approach to imaging through turbulence, Journal of Mathematical Imaging and Vision {\bf 48} (2013), 185-201.
\bibitem{mosk12} A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, Controlling waves in space and time for imaging and focusing in complex media, Nature Photon. {\bf 6} (2012), 283-292.
\bibitem{webb14} J. A. Newman and K. J. Webb, Imaging optical fields through heavily scattering media, Phys. Rev. Lett. {\bf 113} (2014), 263903; see also J. A. Newman and K. J. Webb, Fourier magnitude of the field incident on a random scattering medium from spatial speckle intensity correlations, Opt. Lett. {\bf 37} (2012), 1136-1138.
\bibitem{webb16} J. A. Newman, Q. Luo, and K. J.
Webb, Imaging hidden objects with spatial speckle intensity correlations over object position, Phys. Rev. Lett. {\bf 116} (2016), 073902.
\bibitem{popoff10} S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, Image transmission through an opaque material, Nature Commun. {\bf 1} (2010), 1-5.
\bibitem{sha14} S. Shahjahan, A. Aubry, F. Rupin, B. Chassignole, and A. Derode, A random matrix approach to detect defects in a strongly scattering polycrystal: How the memory effect can help overcome multiple scattering, Applied Physics Letters {\bf 104} (2014), 234105.
\bibitem{strohbehn} J. W. Strohbehn, ed., {\it Laser Beam Propagation in the Atmosphere}, Springer, Berlin, 1978.
\bibitem{tappert} F. Tappert, The parabolic approximation method, in {\it Wave Propagation and Underwater Acoustics}, J. B. Keller and J. S. Papadakis, eds., 224-287, Springer, Berlin (1977).
\bibitem{tokovinin} A. Tokovinin, Measurement of seeing and the atmospheric time constant by differential scintillations, Appl. Opt. {\bf 41} (2002), 957-964.
\bibitem{vellekoop10} I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, Exploiting disorder for perfect focusing, Nature Photon. {\bf 4} (2010), 320-322.
\bibitem{vellekoop07} I. M. Vellekoop and A. P. Mosk, Focusing coherent light through opaque strongly scattering media, Opt. Lett. {\bf 32} (2007), 2309-2311.
\bibitem{vellekoop08} I. M. Vellekoop and A. P. Mosk, Universal optimal transmission of light through disordered materials, Phys. Rev. Lett. {\bf 101} (2008), 120601.
\end{thebibliography}
Received xxxx 20xx; revised xxxx 20xx.
\end{document}
\begin{document} \title{The Hodge Conjecture for general Prym varieties} \author[I. Biswas]{Indranil Biswas} \address{School of Maths, TIFR, Homi Bhabha Road, Mumbai 400 005, India.} \email{[email protected]} \author[K. H. Paranjape]{Kapil H.~Paranjape} \address{IMSc, CIT Campus, Tharamani, Chennai 600 113, India.} \email{[email protected]} \maketitle \section*{Introduction} We work over $\mathbb{C}$, the field of complex numbers. The Prym variety of a double cover $C\to D$ of a smooth connected projective curve $D$ by a smooth connected curve $C$ is defined (see \cite{Mumford}) as the identity component of the kernel of the norm homomorphism $N:J(C)\to J(D)$ between the Jacobians of the curves. This is an abelian variety polarised by the restriction of the canonical polarisation on $J(C)$; we denote this variety by $P(C\to D)$ or simply $P$ when there is no possibility of ambiguity. A Hodge class on a variety $X$ is an integral singular cohomology class on the complex manifold $X(\mathbb{C})$ which is represented by a closed differential form of type $(p,p)$. The Hodge conjecture (see \cite{Hodge}) asserts that some multiple of such a class is the cohomology class of an algebraic cycle on $X$. Let $A$ be an abelian variety. The K\"unneth decomposition implies that the rational singular cohomology of $A\times\dots\times A$ is a direct sum of subquotients of tensor products of $\HH^1(A(\mathbb{C}),\mathbb{Q})$. Hence any linear automorphism of this vector space acts on these cohomology groups. The Mumford-Tate group $H(A)$ of $A$ can thus be defined (see \cite{DMOS}) as the group of all linear automorphisms of $\HH^1(A(\mathbb{C}),\mathbb{Q})$ which stabilise all Hodge cycles on the varieties $A\times\dots\times A$.
The aim of this note is to show that the Mumford-Tate group $H(P)$ of a {\em general} Prym variety $P(C\to D)$ is isomorphic to the full symplectic group $\Symp(2g)$; here the class in $\ext{2}\HH^1(P(\mathbb{C}),\mathbb{Q})=\HH^2(P(\mathbb{C}),\mathbb{Q})$ which is stabilised by this group is the first Chern class of the natural polarisation on the Prym variety. Invariant theory (see \cite{Weyl} or \cite{Howe1} and \cite{Howe2}) then implies that the only Hodge cycles on $P$ are powers (under cup-product) of this polarisation class. In particular, we obtain the Hodge conjecture for $P$ as a consequence of this result. As a particular case the N\'eron-Severi group of a general Prym variety is $\mathbb{Z}$. This was proved earlier by Pirola (see \cite{Pirola}). We do not give a new proof of that result; rather, we use it in an essential way to prove our result. The outline of the paper is as follows. In section~1 we set out some standard arguments about Mumford-Tate groups in families. In section~2 we use an extension (due to Beauville \cite{Beauville}) of the definition of Prym varieties to the case where $C$ and $D$ are singular curves. The results on Mumford-Tate groups are applied to this larger family of Prym varieties in section~3. In addition we use the semi-simplicity of the Mumford-Tate group (see \cite{DMOS}) and the result of Pirola (see \cite{Pirola}) to reduce the problem to an elementary lemma on subgroups of the symplectic group. \section{Mumford-Tate groups in families} Let $f:X\to S$ be a family of smooth projective varieties parametrised by a smooth connected variety $S$. For some positive integer $k$ let $V=R^kf_*\mathbb{Q}_X$ denote the variation of pure Hodge structures of weight $k$ on $S$. More generally we can consider any variation $V$ of Hodge structures of weight $k$ on $S$. Let $V^{a,b}= V^{\otimes a}\otimes V^{*\otimes b}$ be the associated tensor variations of pure Hodge structures of weight $(a-b)k$.
For every $(a,b)$ such that $(a-b)k=2p$ is even, we have the nested sequence of analytic subvarieties \[ H^{a,b}:=V^{a,b}_{\mathbb{Z}}\cap F^p V^{a,b} \subset V^{a,b}_{\mathbb{Z}} \subset V^{a,b}_{\mathbb{C}} \] of the complex vector bundle $V^{a,b}_{\mathbb{C}}$ over $S$ associated with $V^{a,b}$. The analytic variety $H^{a,b}$ parametrises pairs $(s,c)$, where $s$ is a point of $S$ and $c$ an integral class of type $(p,p)$ in $V^{a,b}_s$; i.~e.\ $c$ is a {\em Hodge cycle}. If $W$ is an irreducible component of $H^{a,b}$ such that the natural map $W\to S$ is open at some point, then $W$ contains an open subset of $V^{a,b}_{\mathbb{Z}}$; hence $W$ is a connected component of $V^{a,b}_{\mathbb{Z}}$. Let $A^{a,b}$ be the union of all such components. The map $A^{a,b}\to S$ makes each component of the former a covering space of $S$. Now, if $W$ is an irreducible component of $H^{a,b}$ for which the map $W\to S$ is {\em not} open at any point then its image in $S$ is a set of measure zero by Sard's theorem. Let $B$ be the (countable) union of these images as we vary over all the components of $H^{a,b}$ and as we vary $a$ and $b$. If $s$ is any point of $S$ which is not in $B$, then by the above reasoning, the only points of $H^{a,b}$ that lie over it are in $A^{a,b}$. Let $t$ be any other point of $S$ and $\gamma$ be a path in $S$ connecting $s$ and $t$. We can use $\gamma$ to identify $V^{a,b}_{\mathbb{Z},s}$ with $V^{a,b}_{\mathbb{Z},t}$; this then gives an identification of $A^{a,b}_s$ with $A^{a,b}_t$. Hence, under this identification, the collection of Hodge cycles in $V^{a,b}_{\mathbb{Z},s}$ is contained in the collection of Hodge cycles in $V^{a,b}_{\mathbb{Z},t}$. Thus the Mumford-Tate group $G_t$ of $V_t$ is identified by $\gamma$ with a subgroup of the Mumford-Tate group $G_s$ of $V_s$.
In other words, we have \begin{quote} The Mumford-Tate group at a general point contains (a conjugate of) the Mumford-Tate group at a special point in a variation of Hodge structures over a smooth connected variety. \end{quote} \section{Degenerate covers} A connected projective curve which has at worst ordinary double points as its singularities is called a {\em semi-stable} curve. The {\em dual graph} of such a curve has as its vertices the irreducible components; each singular point gives an edge incident on the two vertices corresponding to the components that contain it. We will be interested in semi-stable curves whose dual graph is contractible and hence a tree; such curves are called {\em tree-like}. In this case, the first cohomology of the curve is a direct sum of the first cohomology of its components with the induced (pure) Hodge structure. In particular, the Jacobian of a tree-like semi-stable curve is the product of the Jacobians of its components. A finite morphism $C\to D$ of semi-stable curves is called a semi-stable cover (or an admissible cover) if \begin{enumerate} \item It is a topological cover of $D$ of constant degree outside a finite set of points which includes the singular locus of $D$. \item The inverse image of a singular point of $D$ consists of singular points of $C$. \item The order of ramification on the two branches at a singular point of $C$ must be equal. \end{enumerate} This notion was first defined by Beauville \cite{Beauville} for the case of degree two covers (which are the case of interest) and later generalised (see \cite{Mumford-Harris}). In these papers, it is shown that the deformations of such a semi-stable cover of tree-like curves are unobstructed. In other words, there is a smooth (open) curve $S$, a flat morphism $p:\mathcal{D}\to S$ and a finite flat morphism $f:\mathcal{C}\to\mathcal{D}$. There is a point $o$ of $S$ over which $f$ restricts to the given semi-stable cover $C\to D$.
Moreover, the general fibre is a double cover $C'\to D'$ of a smooth curve $D'$ by a smooth curve $C'$. We are interested in the case of degree two covers $C\to D$; here the singular points of $C$ are either unramified on each branch or ramified of order two on each branch. Let us further assume that $C$ and $D$ are tree-like. For each component $D_i$ of $D$ there are two possibilities: \begin{enumerate} \item There is exactly one component $C_i$ of $C$ that lies over it. The map $C_i\to D_i$ is a double cover in the usual sense. \item There are two components $C'_i$ and $C''_i$ of $C$ that lie over $D_i$ and the given map is an isomorphism between these components and $D_i$. \end{enumerate} The Prym variety can be defined as before as the identity component of the kernel of the natural norm homomorphism between the Jacobians $J(C)\to J(D)$. It follows that the Prym variety is the product of the Prym varieties of the covers $C_i\to D_i$ corresponding to the first case and the Jacobians of the curves $D_i$ corresponding to the second case. In particular, the product of these components gives an abelian variety. Hence we have \begin{quote} The family of Prym varieties can be extended to include the Prym varieties of degenerate tree-like covers. In particular, the Mumford-Tate group of a general Prym variety contains (a conjugate of) the Mumford-Tate group of the Prym variety of any degenerate tree-like cover. \end{quote} In the special case when $D$ has exactly two components (call them $D_1$ and $D_2$), such a cover can be constructed in one of two ways: \begin{enumerate} \item[I] Let $C_1\to D_1$ be a double cover that is not branched at the common point $p=D_1\cap D_2$. Then, $C$ is obtained by attaching to $C_1$ two copies of $D_2$, one at each point lying over $p$. \item[II] Let $C_1\to D_1$ and $C_2\to D_2$ be double covers that are both branched at the common point $p$.
We obtain $C$ by attaching the curves $C_1$ and $C_2$ along their respective ramification points lying over $p$. \end{enumerate} The specific covers that we are interested in are the following. \begin{enumerate} \item A covering of type (II) which is the degeneration of a double cover $C\to D$ where $D$ is rational and $C$ is of genus $g$. The curves $D_i$ are smooth rational curves. The curve $C_1$ is an elliptic curve and the curve $C_2$ is any hyperelliptic curve of genus $g-1$. \item A covering of type (I) which is the degeneration of a double cover $C\to D$ where $D$ has genus at least 2 and the cover is \'etale. The curve $D_1$ is any elliptic curve, $C_1\to D_1$ is an \'etale double cover and $D_2$ is any curve of genus one less than that of $D$. \item A covering of type (I) which is the degeneration of a double cover $C\to D$ where $D$ has genus at least 1 and the cover is ramified at some point. The curve $D_1$ is any curve of genus one less than that of $D$ and $C_1\to D_1$ is a double cover ramified at the same number of points as the cover $C\to D$; $D_2$ is any elliptic curve. \end{enumerate} As a result we have \begin{lemma}\label{main} We have containments of Mumford-Tate groups as enumerated below. \begin{enumerate} \item The Mumford-Tate group of a general hyperelliptic curve of genus $g$ contains a conjugate of the product of the Mumford-Tate group of any elliptic curve with the Mumford-Tate group of any hyperelliptic curve of genus $g-1$. \item The Mumford-Tate group of the Prym variety of a general \'etale cover of a curve of genus $g\geq 2$ contains a conjugate of the Mumford-Tate group of any curve of genus $g-1$. \item The Mumford-Tate group of the Prym variety of a general cover of a curve of genus $\geq 1$ ramified at $r\geq 1$ points contains a conjugate of the product of the Mumford-Tate group of any elliptic curve with the Mumford-Tate group of the Prym variety of any cover of a curve of genus $g-1$ which is ramified at $r$ points. 
\end{enumerate} \end{lemma} \begin{proof} The first cohomology group of the product of two abelian varieties is the direct sum of the first cohomology groups of the individual abelian varieties. Moreover, the Hodge cycles on the individual varieties pull back to give Hodge cycles on the product. Thus it follows that the Mumford-Tate group of the product contains the product of the Mumford-Tate groups. The result now follows from the above constructions. \end{proof} \section{The Main result} To prove the main result we need the following three lemmas. \begin{lemma}[Pirola] The N\'eron-Severi group of a general Prym variety is the free group on 1 generator. \end{lemma} This lemma is proved in \cite{Pirola}. We note that this case includes the case of a general hyperelliptic curve. \begin{lemma} Let $G$ be a connected semi-simple subgroup of the symplectic group $Sp(2n)$ which contains (a conjugate of) the product $Sp(2a)\times Sp(2n-2a)$. Then $G$ is either this product or the full symplectic group. \end{lemma} \begin{proof} Let $V$ be the standard representation of $Sp(2n)$. Let $\oplus_{i\in I} W_i$ be its decomposition into isotypical components as a representation of $G$. Let $V=V_1\oplus V_2$ be the decomposition of $V$ as a representation of $Sp(2a)\times Sp(2n-2a)$. Then each $W_i$ is either $V_1$ or $V_2$ or $V=V_1\oplus V_2$. The result follows by dimension counting. The lemma also follows from the fact that the quotient \[ \frac{sp(2n)}{sp(2a)\times sp(2n-2a)} \] of the Lie algebras is an irreducible module over $Sp(2a)\times Sp(2n-2a)$. \end{proof} \begin{lemma} Let $A$ be an abelian variety of dimension $n$ whose Mumford-Tate group is the product $Sp(2a)\times Sp(2n-2a)$ in $Sp(2n)$. Then the N\'eron-Severi group of $A$ is of rank at least 2. \end{lemma} \begin{proof} The first cohomology group of $A$ decomposes as a direct sum of two (polarised) sub-Hodge structures. It follows that $A$ is the product of two abelian subvarieties.
Hence we have the result. \end{proof} \begin{thm}\label{Main Theorem} The Mumford-Tate group of a general Prym variety is the full symplectic group. \end{thm} \begin{proof} We begin with the case where the base curve has genus zero. In this case the Prym varieties are the Jacobians of the corresponding hyperelliptic double covers. The result is classical for elliptic curves, which can be considered as the Prym varieties associated with double covers of smooth rational curves branched at 4 points. By induction, let us assume that the result is known for hyperelliptic Jacobians of genus less than $g\geq 2$. Lemma~\ref{main} then shows that the Mumford-Tate group of a general hyperelliptic curve of genus $g$ contains $Sp(2)\times Sp(2g-2)$. By the above results we see that this Mumford-Tate group must be either $Sp(2g)$ or $Sp(2)\times Sp(2g-2)$. In the latter case, the N\'eron-Severi group of the curve would have rank at least two, but by Pirola's result we know that this is not true for the general hyperelliptic curve. Hence we see that the Mumford-Tate group of a general hyperelliptic curve must be $Sp(2g)$, where $g$ is the genus of the curve. Now let us consider the case where the cover is unramified. Then we may assume that the base curve of the double cover has genus $g$ at least 2 (else the Prym variety is just a point). In this case the Prym variety has dimension $n=g-1$. By Lemma~\ref{main} we know that the Mumford-Tate group contains the Mumford-Tate group of any curve of genus $g-1$. In particular, it contains the Mumford-Tate group of any hyperelliptic curve of genus $g-1$ and hence, by the previous paragraph, it contains (and is thus equal to) $Sp(2n)$. Now assume that the base curve of the double cover has genus $g$ at least 1 and the cover is ramified. We argue by induction on the genus of the base curve. We can begin the induction since we already know the result for the hyperelliptic curves.
Let us assume that the result is known for base curves of genus less than $g$. By Lemma~\ref{main} we know that the Mumford-Tate group contains the product of the Mumford-Tate group of an elliptic curve with the Mumford-Tate group of the Prym variety of a double cover of a curve of genus $g-1$; in other words, it contains $Sp(2)\times Sp(2n-2)$ by induction. Now, as argued above, the three lemmas above imply that the Mumford-Tate group must be $Sp(2n)$. \end{proof} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \end{document}
\begin{document} \title[Generalized Lambert Series Identities] {Generalized Lambert Series Identities and Applications in Rank Differences} \author[Bin Wei]{Bin Wei} \address{Center for Applied Mathematics, Tianjin University, Tianjin 300072, P.R. China} \email{[email protected]} \author[Helen W.J. Zhang]{Helen W.J. Zhang} \address{Center for Applied Mathematics, Tianjin University, Tianjin 300072, P.R. China} \email{[email protected]} \subjclass[2010]{33D15, 05A17, 11P81, 11F37} \keywords{Generalized Lambert series, Overpartition, Dyson's rank, Rank differences, Mock theta function.} \thanks{The authors are supported by NSFC (Grant No. 11701412).} \maketitle \noindent {\bf Abstract.} In this article, we prove two identities of generalized Lambert series. By introducing what we call $\mathcal{S}$-series, we establish relationships between multiple generalized Lambert series and multiple infinite products. Compared with Chan's work, these new identities are useful in generating various formulas for generalized Lambert series with the same poles. Using these formulas, we study the 3-dissection properties of ranks for overpartitions modulo 6. In this case, $-1$ appears as a root of unity, so that double poles occur. We also relate these ranks to the third order mock theta functions $\omega(q)$ and $\rho(q)$. \setcounter{section}{-1} \section{Notations} Throughout this article, we use the common $q$-series notations associated with infinite products: \begin{align*} &(a)_\infty:=(a;q)_\infty:=\prod_{n=0}^\infty(1-aq^n), &&(a_1,a_2,\ldots,a_k)_\infty:=(a_1)_\infty\cdots(a_k)_\infty, \\[5pt] &[a]_\infty:=(a,q/a)_\infty, &&[a_1,a_2,\ldots,a_k]_\infty:=[a_1]_\infty\cdots[a_k]_\infty, \\[5pt] &j(z;q):=(z;q)_\infty (q/z;q)_\infty (q;q)_\infty, &&J_{a,m}:=j(q^a;q^m),\ \ J_m:=(q^m;q^m)_\infty. \end{align*} For the sake of convergence, we always assume that $|q|<1$. Also, we adopt a notation due to D. B.
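The function $j(z;q)$ defined above satisfies the Jacobi triple product identity, $j(z;q)=\sum_{n=-\infty}^{\infty}(-1)^n q^{\binom{n}{2}}z^n$. As a quick numerical sanity check of the notation, the truncated product and the truncated bilateral sum can be compared at arbitrary sample values of $z$ and $q$ (this is a sketch by the editor; the helper names and truncation depths are not from the paper):

```python
from math import prod

def pochhammer(x, q, terms=200):
    """Truncated (x; q)_infinity = prod_{k>=0} (1 - x q^k)."""
    return prod(1 - x * q**k for k in range(terms))

def j_product(z, q):
    """j(z; q) = (z; q)_inf (q/z; q)_inf (q; q)_inf, as in the notation above."""
    return pochhammer(z, q) * pochhammer(q / z, q) * pochhammer(q, q)

def j_theta_sum(z, q, N=40):
    """Jacobi triple product expansion: sum_n (-1)^n q^{n(n-1)/2} z^n."""
    return sum((-1)**n * q**(n * (n - 1) // 2) * z**n for n in range(-N, N + 1))
```

For $|q|<1$ both truncations converge rapidly, so the two evaluations agree to near machine precision.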
Sears \cite{Sears-1951}: \begin{align*} &F(b_1,b_2,\ldots,b_m)+\mathrm{idem}(b_1;b_2,\ldots,b_m) \\ &:=F(b_1,b_2,\ldots,b_m)+F(b_2,b_1,b_3,\ldots,b_m) +\cdots+F(b_m,b_2,\ldots,b_{m-1},b_1). \end{align*} \section{Introduction} A Lambert series, named for Johann Heinrich Lambert, takes the form $$ \sum_{n=1}^\infty a_n\frac{q^n}{1-q^n}, $$ where $\{a_n\}$ is any sequence of real or complex numbers. A generalized Lambert series allows more general exponents in both numerators and denominators. Such series are often useful in obtaining formulas for various generating functions, since the denominators can be expanded as geometric series. Expanded generalized Lambert series are naturally linked with infinite products. For example, Chan \cite{Chan-2005} proved three generalized Lambert series expansions for infinite products. One of his theorems, concerning $r+1$ poles in generalized Lambert series, is stated as follows. \begin{lem}\label{Chan-Thm-2.2} For non-negative integers $r<s$, we have \begin{align*} &\frac{(a_1q,q/a_1,\ldots,a_rq,q/a_r,q,q)_\infty} {[b_1,b_2,\cdots,b_s]_\infty} \\ &\indent= \frac{[a_1/b_1,\cdots,a_r/b_1]_\infty} {[b_2/b_1,\cdots,b_s/b_1]_\infty} \sum_{n=-\infty}^\infty\frac{(-1)^{(s-r)n+r}q^{(s-r)n(n+1)/2}b_1^ra_1^{-1}\cdots a_r^{-1}} {(1-b_1q^n)(1-b_1q^n/a_1)\cdots(1-b_1q^n/a_r)} \\ &\indent\quad \times\left(\frac{a_1\cdots a_rb_1^{s-r-1}q^r} {b_2\cdots b_s}\right)^n +\mathrm{idem}(b_1;b_2,\ldots,b_s). \end{align*} For $r=s$, this is true provided that $|q|<|\frac{a_1\cdots a_r}{b_1\cdots b_s}|<|q^{-r}|$. \end{lem} Using these theorems, Chan gave brief proofs of a number of beautiful and useful identities. In particular, taking $s=3$ and $r=0$, Lemma \ref{Chan-Thm-2.2} delivers the key identity used by Atkin and Swinnerton-Dyer \cite{Atkin-Swinnerton-Dyer-1954} in proving Ramanujan's famous partition congruences.
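The remark that the denominators can be expanded as geometric series is easy to illustrate: with $a_n\equiv 1$, expanding $q^n/(1-q^n)=\sum_{j\geq 1}q^{jn}$ shows that the classical Lambert series generates the divisor-counting function $d(m)$. A minimal sketch of this expansion (the function name is the editor's):

```python
def lambert_coefficients(N):
    """Coefficients of sum_{n>=1} q^n/(1 - q^n) up to q^N, obtained by
    expanding each denominator as a geometric series: q^n/(1-q^n) = sum_{j>=1} q^{jn}."""
    c = [0] * (N + 1)
    for n in range(1, N + 1):
        for multiple in range(n, N + 1, n):
            c[multiple] += 1   # q^{jn} contributes 1 to the coefficient of q^{jn}
    return c  # c[m] counts the divisors of m
```

Each exponent $m$ receives one contribution for every divisor $n$ of $m$, which is exactly the classical fact $\sum_{n\geq 1} q^n/(1-q^n)=\sum_{m\geq 1} d(m)q^m$.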
One limitation of applying Chan's theorems is that the exponents in the numerators are partially or totally determined by the poles. In this article, we prove two other generalized Lambert series identities. First, for a sequence ${\boldsymbol a}=(a_1,\ldots,a_r)$, we define the series $\mathcal{S}(a_1,\ldots,a_r)$ as \begin{equation}\label{F(a,q)} \mathcal{S}(a_1,\ldots,a_r):=\mathcal{S}(a_1,\ldots,a_r;q)= \sum_{u=1}^r\sum_{n=0}^\infty\left(\frac{1}{1-a_uq^n} -\frac{1}{1-a_u^{-1}q^{n+1}}\right). \end{equation} We also write $\mathcal{S}({\boldsymbol a})=\mathcal{S}(a_1,\ldots,a_r)$ for brevity. The following theorem concerns an identity of generalized Lambert series with double poles. \begin{thm}\label{main-result} Let ${\boldsymbol a}=(a_1,\ldots,a_r)$ and ${\boldsymbol b}=(b_1,\ldots,b_s)$. Then for non-negative integers $r<s$, we have \begin{align}\label{R} &\frac{(q)_\infty^2[a_1,\cdots,a_r]_\infty} {[b_1,\cdots,b_s]_\infty}\left(1-\mathcal{S}({\boldsymbol a})+\mathcal{S}({\boldsymbol b})\right) \nonumber\\[5pt] &\quad=\frac{[a_1/b_1,\cdots,a_r/b_1]_\infty} {[b_2/b_1,\cdots,b_s/b_1]_\infty}\sum_{n=-\infty}^\infty \frac{(-1)^{(s-r)n}q^{(s-r)n(n+1)/2}}{(1-b_1q^n)^2} \left(\frac{a_1\cdots a_rb_1^{s-r-1}}{b_2\cdots b_s}\right)^n \nonumber\\[5pt] &\indent\indent\quad+\mathrm{idem}(b_1;b_2,\cdots,b_s). \end{align} For $r=s$, this is true provided that $|q|<|\frac{a_1\cdots a_r}{b_1\cdots b_s}|<1$. \end{thm} A similar identity concerning generalized Lambert series with single poles is also given in \S2. Theorem \ref{main-result} is aimed at decoupling the parameters in ${\boldsymbol a}$ from the denominators. Therefore, it is helpful in generating various identities concerning generalized Lambert series with the same poles. The generalized Lambert series $\mathcal{S}$ defined in (\ref{F(a,q)}) appears as an encumbrance in our identities for infinite products.
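The double-pole identity in the theorem above can be spot-checked numerically. The sketch below evaluates both sides for the case $r=0$, $s=2$ (so the ${\boldsymbol a}$-product is empty and $\mathcal{S}({\boldsymbol a})=0$) at arbitrarily chosen real parameters inside the region of convergence; the helper names and truncation depths are the editor's, not the paper's:

```python
from math import prod

def poch(x, q, K=300):
    """Truncated (x; q)_infinity."""
    return prod(1 - x * q**k for k in range(K))

def bracket(x, q):
    """[x]_infinity = (x; q)_inf (q/x; q)_inf, as in the Notations section."""
    return poch(x, q) * poch(q / x, q)

def S(a, q, K=300):
    """The S-series for a single parameter."""
    return sum(1/(1 - a*q**n) - 1/(1 - q**(n+1)/a) for n in range(K))

def lhs(b1, b2, q):
    """Left side of the double-pole identity with r = 0, s = 2."""
    return poch(q, q)**2 / (bracket(b1, q) * bracket(b2, q)) * (1 + S(b1, q) + S(b2, q))

def rhs(b1, b2, q, N=40):
    """Right side with r = 0, s = 2, including the 'idem' term (swap b1, b2)."""
    def term(bi, bj):
        total = sum(q**(n * (n + 1)) * (bi / bj)**n / (1 - bi * q**n)**2
                    for n in range(-N, N + 1))
        return total / bracket(bj / bi, q)
    return term(b1, b2) + term(b2, b1)
```

Here $(-1)^{(s-r)n}=1$ and $q^{(s-r)n(n+1)/2}=q^{n(n+1)}$ since $s-r=2$, and the bilateral sum converges for any fixed $|q|<1$.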
Nevertheless, we provide an algorithm showing that $\mathcal{S}(\pm q^m;q^n)$ with $m$, $n$ integers can be expanded as sums of infinite products. Therefore, our main result, Theorem \ref{main-result}, establishes a bridge between multiple infinite products and multiple generalized Lambert series. For example, we show in \S4 that the following identity holds: $$ \nonumber\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{(1+q^{3n+1})^2} +\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{(1-q^{3n+1})^2} =\frac{4J^3_{6}}{3J_{2}}+\frac{J_{3,6}^2J_{6}^6}{2J_{1,6}^2J_2^2}+\frac{J_{1,6}^6J_{2}^2J_{3,6}^2}{6J_{6}^6}. $$ The motivation for establishing new identities for generalized Lambert series arises in the study of the series $$ \overline{R}(-1;q)=\frac{4(-q)_\infty}{(q)_\infty}\sum_{n=-\infty}^\infty \frac{(-1)^nq^{n^2+n}}{(1+q^{n})^2}. $$ Bringmann and Lovejoy \cite{Bringmann-Lovejoy-2007} proved that $\overline{R}(-1;q)$ is the holomorphic part of a harmonic weak Maass form of weight $3/2$. They also pointed out that this is the most complicated case among the $\overline{R}(z;q)$, since double poles occur. In this article, we use Theorem \ref{main-result} to give the 3-dissection properties of $\overline{R}(-1;q)$. Recall that an overpartition of a positive integer $n$ is a partition of $n$ in which the first occurrence of each distinct part may be overlined; the number of overpartitions of $n$ is denoted by $\overline{p}(n)$. In particular, we set $\overline{p}(0)= 1$. The rank of an overpartition was introduced by Lovejoy \cite{Lovejoy-2005} as the largest part minus the number of parts. Let $\overline{N}(m,n)$ denote the number of overpartitions of $n$ with rank $m$, and let $\overline{N}(s,\ell,n)$ denote the number of overpartitions of $n$ with rank congruent to $s$ modulo $\ell$.
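The combinatorial definitions above are easy to check by brute force: each ordinary partition of $n$ gives $2^{\#\{\text{distinct parts}\}}$ overpartitions (overline the first occurrence of a part or not), and the rank does not depend on the overlining. The sketch below (editor's code, not from the paper) counts overpartitions two ways, by enumeration and via the product $(-q)_\infty/(q)_\infty$, and tabulates $\overline{N}(m,n)$:

```python
from collections import Counter

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def pbar_by_enumeration(N):
    """p-bar(n): each partition contributes 2^(number of distinct parts)."""
    return [sum(2**len(set(lam)) for lam in partitions(n)) for n in range(N + 1)]

def pbar_by_product(N):
    """Coefficients of (-q)_inf/(q)_inf = prod_n (1+q^n)/(1-q^n) up to q^N."""
    c = [1] + [0] * N
    for n in range(1, N + 1):
        for j in range(N, n - 1, -1):   # multiply by (1 + q^n)
            c[j] += c[j - n]
        for j in range(n, N + 1):       # divide by (1 - q^n): geometric series
            c[j] += c[j - n]
    return c

def rank_counts(n):
    """N-bar(m, n) for all m: rank = largest part minus number of parts."""
    counts = Counter()
    for lam in partitions(n):
        counts[lam[0] - len(lam)] += 2**len(set(lam))
    return dict(counts)
```

The two ways of computing $\overline{p}(n)$ agree, and the counts $\overline{N}(m,n)$ are symmetric in $m\leftrightarrow -m$, as the generating function below makes visible.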
Lovejoy gave the generating function of $\overline{N}(m,n)$: \begin{align}\label{GF} \overline{R}(z;q) &:=\sum_{n=0}^\infty\sum_{m=-\infty}^{\infty}\overline{N}(m,n)z^mq^n \nonumber\\[5pt] &=\frac{(-q)_\infty}{(q)_\infty}\left\{1+2\sum_{n=1}^\infty \frac{(1-z)(1-z^{-1})(-1)^nq^{n^2+n}}{(1-zq^n)(1-z^{-1}q^n)}\right\}. \end{align} Rank differences between residue classes have been widely studied, and identities of generalized Lambert series usually play a key role there. In \cite{Lovejoy-Osburn-2008}, Lovejoy and Osburn gave formulas for the full rank differences $\overline{N}(s, \ell, \ell n + d)-\overline{N}(t, \ell, \ell n + d)$ for $\ell=3,5$, in terms of infinite products and generalized Lambert series. The modulus $7$ case was determined by Jennings-Shaffer \cite{Jennings-Shaffer-2016}. Moreover, for even moduli only special linear combinations of rank differences had been obtained previously. In \cite{Ji-Zhang-Zhao-2017}, Ji, Zhang and Zhao studied 3-dissection properties of the form \begin{equation}\label{Ji-Zhang-Zhao} \sum_{n=0}^{\infty}(\overline{N}(0,6,n)+\overline{N}(1,6,n)-\overline{N}(2,6,n)-\overline{N}(3,6,n))q^n. \end{equation} The difficulty in providing full rank differences lies in the fact that $-1$ is a root of unity for even moduli, so that $\overline{R}(-1;q)$ arises naturally. A similar situation occurs in related problems associated with various types of ranks (such as the crank, the $M_2$-rank, etc.) for different types of partitions (see \cite{Andrews-Lewis-2000,Garvan-1988,Garvan-1990,Mao-2013,Mao-2015} for example). As a consequence of successfully handling double poles, we are now able to give the full rank differences associated with even moduli. In fact, we give formulas for each residue, i.e., for each term in \eqref{Ji-Zhang-Zhao}. Here we take the 3-dissection properties of ranks of overpartitions modulo $6$ as an example.
Let \begin{equation}\label{r_s(d)} \overline{r}_s(d)=\sum_{n=0}^\infty\overline{N}(s,6,3n+d)q^n. \end{equation} When $d=2$, we have the following theorem. \begin{thm}\label{rank-diff-2} We have \begin{align*} \overline{r}_0(2) &=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} -\frac{4J^3_{6}}{3J_{2}J_{3,6}} +\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6} \nonumber\\[5pt] &\quad\quad\quad -\frac{4}{J_{3,6}} \sum_{n=-\infty}^\infty \frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2} +\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}},\\ \overline{r}_1(2) &=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} +\frac{2J_{6}^3}{J_{2}J_{3,6}} -\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6} \nonumber\\[5pt] &\quad\quad\quad+\frac{4}{J_{3,6}} \sum_{n=-\infty}^\infty \frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2} -\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}},\\ \overline{r}_2(2) &=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} +\frac{2J^3_{6}}{3J_{2}J_{3,6}} +\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6} \nonumber\\[5pt] &\quad\quad\quad -\frac{4}{J_{3,6}} \sum_{n=-\infty}^\infty \frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2} +\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}},\\ \overline{r}_3(2) &=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} -\frac{4J_{6}^3}{J_2J_{3,6}} -\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6} \nonumber\\[5pt] &\quad\quad\quad +\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty \frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2}. \end{align*} \end{thm} The formulas for the residues $d=0$ and $1$ are listed in \S5. One should not be surprised by the simultaneous occurrence of terms with denominators $(1+q^{3n+1})$ and $(1+q^{3n+1})^2$, since double poles exist. These explicit formulas suggest inequalities between ranks of different residues, such as $\overline{N}(1,6,3n+2)\ge\overline{N}(3,6,3n+2)$. A conjecture on a total ordering will also be discussed in \S5.
In \cite{Hickerson-Mortenson-2014}, Hickerson and Mortenson showed that a mock theta function can be expressed in terms of Appell-Lerch sums. Inspired by their work, we establish a relation between the third order mock theta functions $\omega(q)$ and $\rho(q)$ and the ranks of overpartitions modulo 6, where $\omega(q)$ and $\rho(q)$ are defined by {\rm \cite{Watson-1936}:} $$\omega(q)=\sum_{n=0}^\infty\frac{q^{2n(n+1)}}{(q;q^2)_{n+1}^2} \quad \text{and} \quad \rho(q)=\sum_{n=0}^\infty\frac{q^{2n(n+1)}(q;q^2)_{n+1}}{(q^3;q^6)_{n+1}}.$$ \begin{thm}\label{mock} We have \begin{align} \overline{r}_0(2)+\overline{r}_3(2) &=\frac{4}{9}\rho(q)-\frac{16}{9}\omega(q)+M(q), \\[3pt] \overline{r}_1(2)-\overline{r}_3(2) &=2\omega(q), \\[3pt] \overline{r}_2(2)+\overline{r}_3(2) &=-\frac{2}{9}\rho(q)-\frac{10}{9}\omega(q)+M(q), \end{align} where $M(q)$ is an explicit weakly holomorphic modular form given by \[M(q)=\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}.\] \end{thm} This paper is organized as follows. In \S 2, we derive the main theorems by discussing poles in Chan's identities. In \S 3, we introduce an algorithm for $\mathcal{S}$-series, which helps transform $\mathcal{S}$-series into sums of infinite products. In \S 4, we use our new identities to generate some formulas concerning the 3-dissections of generalized Lambert series. These formulas help establish the 3-dissection properties of ranks for overpartitions modulo 6 in \S 5. Finally, we prove the relations between the ranks of overpartitions and mock theta functions in \S 6.
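The $q$-expansion of $\omega(q)$ can be generated directly from its definition with truncated integer power series; only the terms with $2n(n+1)\le N$ contribute up to order $q^N$. A small sketch (editor's helper functions, not the paper's):

```python
def series_mul(a, b, N):
    """Product of two power series, truncated at q^N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a[:N + 1]):
        if ai:
            for j, bj in enumerate(b[:N + 1 - i]):
                c[i + j] += ai * bj
    return c

def series_inv(a, N):
    """Inverse of a power series with constant term a[0] = 1."""
    inv = [0] * (N + 1)
    inv[0] = 1
    for k in range(1, N + 1):
        inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1))
    return inv

def omega_coeffs(N):
    """q-expansion of omega(q) = sum_{n>=0} q^{2n(n+1)} / (q; q^2)_{n+1}^2."""
    total = [0] * (N + 1)
    n = 0
    while 2 * n * (n + 1) <= N:
        # (q; q^2)_{n+1} = (1-q)(1-q^3)...(1-q^{2n+1}), truncated at q^N
        p = [1] + [0] * N
        for i in range(n + 1):
            factor = [1] + [0] * N
            if 2 * i + 1 <= N:
                factor[2 * i + 1] = -1
            p = series_mul(p, factor, N)
        inv_sq = series_inv(series_mul(p, p, N), N)
        for k in range(N + 1 - 2 * n * (n + 1)):
            total[2 * n * (n + 1) + k] += inv_sq[k]
        n += 1
    return total
```

The first coefficients $1,2,3,4,6,8,\ldots$ agree with the classical expansion of the third order mock theta function $\omega(q)$.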
\section{Proofs of Main Theorems} We start with the following lemma, with slight changes made to the subscripts of the parameters. \begin{lem}[Chan \cite{Chan-2005}]\label{Chan-Thm-3.2} For non-negative integers $r<s$, we have \begin{align*} &\frac{(a_0q,q/a_0,q,q)_\infty[a_1,a_2,\cdots,a_r]_\infty} {[b_0,b_1,\cdots,b_s]_\infty} \\ &\indent\indent= \frac{[a_0/b_0,a_1/b_0,\cdots,a_r/b_0]_\infty} {[b_1/b_0,b_2/b_0,\cdots,b_s/b_0]_\infty} \sum_{n=-\infty}^\infty\frac{(-1)^{(s-r)n+1}q^{(s-r)n(n+1)/2}b_0a_0^{-1}} {(1-b_0q^n)(1-b_0q^n/a_0)} \\ &\indent\indent\quad \times\left(\frac{a_0a_1\cdots a_rb_0^{s-r-1}q} {b_1\cdots b_s}\right)^n +\mathrm{idem}(b_0;b_1,b_2,\ldots,b_s). \end{align*} For $r=s$, this is true provided that $|q|<|\frac{a_1\cdots a_r}{b_1\cdots b_s}|<1$. \end{lem} Compared with Lemma \ref{Chan-Thm-3.2}, Theorem \ref{main-result} is aimed at decoupling the parameters in ${\boldsymbol a}$ from the poles of the generalized Lambert series, making it convenient to control the orders of $q$ in the numerators. When generating identities in special forms, this also permits us to save a parameter in ${\boldsymbol b}$, and hence a generalized Lambert series term.\\ \noindent\emph{Proof of Theorem \ref{main-result}.} Briefly speaking, Theorem \ref{main-result} follows from setting $a_0=1$ and $b_0=q$ in Lemma \ref{Chan-Thm-3.2}. Obviously this results in double poles on both sides, so we need to compute the limits at $b_0=q$. First, replacing $a_0$ by $1$ in Lemma \ref{Chan-Thm-3.2}, we have \begin{align}\label{Chan-gene} &\frac{(q)_\infty^4[a_1,\cdots,a_r]_\infty}{[b_0,b_1,\cdots,b_s]_\infty} =\frac{[b_0^{-1},a_1/b_0,\cdots,a_r/b_0]_\infty} {[b_1/b_0,\cdots,b_s/b_0]_\infty} \nonumber\\ &\quad\quad\quad\times\sum_{n=-\infty}^\infty\frac{(-1)^{(s-r)n+1} q^{(s-r)n(n+1)/2}b_0}{(1-b_0q^n)^2} \left(\frac{a_1\cdots a_rb_0^{s-r-1}q}{b_1\cdots b_s}\right)^n \nonumber\\ &\quad\quad\quad+\mathrm{idem}(b_0;b_1,\cdots,b_s).
\end{align} Denote the term on the left-hand side of (\ref{Chan-gene}) by $L$ and those on the right-hand side by $R_0,\ldots,R_s$ respectively, that is, \[L=R_0+R_1+\cdots+R_s.\] On the right-hand side, the pole $b_0=q$ occurs only in the term $R_0$, so we may set $b_0\rightarrow q$ directly in the other terms. As for $R_1$, we have \begin{align}\label{R_1} \nonumber\lim_{b_0\rightarrow q}R_1 &=\frac{[a_1/b_1,\cdots,a_r/b_1]_\infty} {[b_2/b_1,\cdots,b_s/b_1]_\infty}\\ &\quad\times\sum_{n=-\infty}^\infty\frac{(-1)^{(s-r)n}q^{(s-r)n(n+1)/2}}{(1-b_1q^n)^2} \left(\frac{a_1\cdots a_rb_1^{s-r-1}}{b_2\cdots b_s}\right)^n, \end{align} which is the first term on the right-hand side of (\ref{R}). Thus it remains to show that \begin{align}\label{L-R_0} \lim_{b_0\rightarrow q}(L-R_0)=\frac{(q)_\infty^2[a_1,\cdots,a_r]_\infty} {[b_1,\cdots,b_s]_\infty}\left(1-\mathcal{S}({\boldsymbol a})+\mathcal{S}({\boldsymbol b})\right). \end{align} We separate the terms containing poles from $L$ and $R_0$ successively. We begin by rewriting $L$ and $R_0$ as \begin{align*} L=\frac{(q)_\infty^4[a_1,\cdots,a_r]_\infty} {(b_0,b_0^{-1}q^2)_\infty[b_1,\cdots,b_s]_\infty} \cdot\frac{b_0}{b_0-q}, \end{align*} and \begin{align} R_0\label{R_0} \nonumber&=\frac{(1-b_0^{-1}q)(b_0,b_0^{-1}q^2)_\infty [a_1q/b_0,\cdots,a_rq/b_0]_\infty} {[b_1q/b_0,\cdots,b_sq/b_0]_\infty} \\[6pt] &\quad\times\sum_{n=-\infty}^\infty \frac{(-1)^{(s-r)(n+1)}q^{(s-r)n(n+1)/2}b_0}{(1-b_0q^n)^2} \left(\frac{a_1\cdots a_rb_0^{s-r}}{b_1\cdots b_s}\right)^{n+1}\left(\frac{q}{b_0}\right)^n. \end{align} It is easy to see that, in the generalized Lambert series in $R_0$, poles occur only when $n=-1$. Owing to the factor $(1-b_0^{-1}q)$, the other terms vanish as $b_0\rightarrow q$.
Thus, in (\ref{L-R_0}), we have \begin{align}\label{L^*-R_0^*} \nonumber\lim_{b_0\rightarrow q}(L-R_0)&=\lim_{b_0\rightarrow q}\frac{1}{b_0-q} \left(\frac{(q)_\infty^4[a_1,\cdots,a_r]_\infty} {(b_0,b_0^{-1}q^2)_\infty[b_1,\cdots,b_s]_\infty} \cdot b_0\right. \\[6pt] &\nonumber\quad\quad\quad\left.-\frac{(b_0,b_0^{-1}q^2)_\infty [a_1q/b_0,\cdots,a_rq/b_0]_\infty} {[b_1q/b_0,\cdots,b_sq/b_0]_\infty}\cdot q\right) \\[6pt] &\nonumber=\lim_{b_0\rightarrow q}\frac{{\rm d}}{{{\rm d}} b_0} \left(\frac{(q)_\infty^4[a_1,\cdots,a_r]_\infty} {(b_0,b_0^{-1}q^2)_\infty[b_1,\cdots,b_s]_\infty} b_0\right. \\[6pt] &\nonumber\quad\quad\quad\left.-\frac{(b_0,b_0^{-1}q^2)_\infty [a_1q/b_0,\cdots,a_rq/b_0]_\infty} {[b_1q/b_0,\cdots,b_sq/b_0]_\infty}q\right) \\[6pt] &:=\lim_{b_0\rightarrow q}\frac{{\rm d}}{{{\rm d}} b_0}(L^*-R_0^*), \end{align} where the penultimate equality follows by L'H\^{o}pital's rule. For $L^{*}$, it is easy to obtain \begin{align}\label{L^{*}} \lim_{b_0\rightarrow q}\frac{{\rm d}L^{*}}{{\rm d}b_0}&=\frac{(q)_\infty^2[a_1,\cdots,a_r]_\infty}{[b_1,\cdots,b_s]_\infty}. \end{align} For $R_0^*$, we have \begin{align*} \lim_{b_0\rightarrow q}R_0^*=\frac{q(q)_\infty^2[a_1,\cdots,a_r]_\infty} {[b_1,\cdots,b_s]_\infty}. \end{align*} It follows by taking the logarithmic derivative that \begin{align*} \lim_{b_0\rightarrow q}\frac{{\rm d}\log R_0^*}{{\rm d}b_0}=\frac{\mathcal{S}({\boldsymbol a})-\mathcal{S}({\boldsymbol b})}{q}, \end{align*} where $\mathcal{S}$ is defined in \eqref{F(a,q)}. Therefore, \begin{align}\label{R_0^*} \lim_{b_0\rightarrow q}\frac{{\rm d}R_0^*}{{\rm d}b_0} &=\lim_{b_0\rightarrow q}\left(R_0^*\frac{{\rm d}\log R_0^*}{{\rm d}b_0}\right) \nonumber\\[5pt] &=\frac{(q)_\infty^2[a_1,\cdots,a_r]_\infty} {[b_1,\cdots,b_s]_\infty}\left(\mathcal{S}({\boldsymbol a})-\mathcal{S}({\boldsymbol b})\right). \end{align} Thus we complete the proof by substituting (\ref{L^{*}}) and (\ref{R_0^*}) into (\ref{L^*-R_0^*}).
\qed Chan \cite{Chan-2005} also proved the following identity concerning generalized Lambert series with single poles. \begin{lem}\label{Chan-Thm-2.1} For non-negative integers $r<s$, we have \begin{align}\label{Chan-2.1} \frac{[a_1,\cdots,a_r]_\infty(q)_\infty^2} {[b_0,b_1,\cdots,b_s]_\infty} &=\frac{[a_1/b_0,\cdots,a_r/b_0]_\infty} {[b_1/b_0,\cdots,b_s/b_0]_\infty}\nonumber\\ &\times\sum_{n=-\infty}^\infty\frac{(-1)^{(s-r+1)n}q^{(s-r+1)n(n+1)/2}}{1-b_0q^n} \left(\frac{a_1\cdots a_rb_0^{s-r}} {b_1\cdots b_s}\right)^n \nonumber\\ &\indent\indent\quad +\mathrm{idem}(b_0;b_1,\ldots,b_s). \end{align} For $r=s$, this is true provided that $|q|<|\frac{a_1\cdots a_r}{b_1\cdots b_s}|<1$. \end{lem} Similarly, by taking $b_0\rightarrow q$, we obtain the following theorem. \begin{thm}\label{single-pole} Let ${\boldsymbol a}=(a_1,\ldots,a_r)$ and ${\boldsymbol b}=(b_1,\ldots,b_s)$. Then for non-negative integers $r\le s$, we have \begin{align}\label{R'} &\frac{[a_1,\cdots,a_r]_\infty} {[b_1,\cdots,b_s]_\infty}\left(1-\mathcal{S}({\boldsymbol a})+\mathcal{S}({\boldsymbol b})\right) \nonumber\\[5pt] &\indent+\frac{[a_1,\cdots,a_r]_\infty}{[b_1,\cdots,b_s]_\infty} \sum_{n=-\infty\atop n\neq0}^\infty \frac{(-1)^{(s-r+1)n}q^{(s-r+1)n(n+1)/2-n}}{1-q^n} \left(\frac{a_1\cdots a_r}{b_1\cdots b_s}\right)^n \nonumber\\[5pt] &=\frac{[a_1/b_1,\cdots,a_r/b_1]_\infty}{[b_1,b_2/b_1,\cdots,b_s/b_1]_\infty}\\ &\indent\indent\indent\indent\times\sum_{n=-\infty}^\infty\frac{(-1)^{(s-r+1)n}q^{(s-r+1)n(n+1)/2-n}}{1-b_1q^n} \left(\frac{a_1\cdots a_rb_1^{s-r}}{b_2\cdots b_s}\right)^n \nonumber\\[5pt] &\indent\indent\indent\indent\indent\indent+\mathrm{idem}(b_1;b_2,\ldots,b_s). \end{align} For $r=s+1$, this is true provided that $|q^2|<|\frac{a_1\cdots a_r}{b_1\cdots b_s}|<|q|$. \end{thm} \noindent{\it Proof.} The proof is similar to that of Theorem \ref{main-result}.
Denote the term on the left-hand side of \eqref{Chan-2.1} by $L'$, and those on the right-hand side by $R'_0,\ldots,R'_s$ respectively, that is, \[L'=R'_0+R'_1+\cdots+R'_s.\] Likewise, by taking $b_0\rightarrow q$ directly in the terms other than $R'_0$, we get the right-hand side of \eqref{R'}. The difference arises in $R'_0$. The terms with $n\neq-1$ no longer vanish as $b_0\rightarrow q$, which results in an extra generalized Lambert series. In this case we have \begin{align*} &\lim_{b_0\rightarrow q}R'_0 =\lim_{b_0\rightarrow q}\frac{1}{b_0-q}\frac{[a_1/b_0,\cdots,a_r/b_0]_\infty} {[b_1/b_0,\cdots,b_s/b_0]_\infty}\cdot\frac{(-b_0)^{r-s}b_1\cdots b_s}{a_1\cdots a_r}\cdot q \nonumber\\[5pt] &+\frac{[a_1/q,\cdots,a_r/q]_\infty} {[b_1/q,\cdots,b_s/q]_\infty} \sum_{n=-\infty\atop n\neq-1}^\infty\frac{(-1)^{(s-r+1)n}q^{(s-r+1)n(n+1)/2+(s-r)n}}{1-q^{n+1}} \left(\frac{a_1\cdots a_r}{b_1\cdots b_s}\right)^n. \end{align*} Thus, denoting the first term by $R''_0$, it suffices to show \begin{align} \lim_{b_0\rightarrow q}(L'-R''_0)&=\frac{[a_1,\cdots,a_r]_\infty} {[b_1,\cdots,b_s]_\infty}\left(1-\mathcal{S}({\boldsymbol a})+\mathcal{S}({\boldsymbol b})\right). \end{align} This can be proved by following procedures similar to those used in proving \eqref{L-R_0}. \qed \section{An Algorithm for $\mathcal{S}$-series} The generalized Lambert series $\mathcal{S}$ defined in (\ref{F(a,q)}) appears as an encumbrance in our expansions for infinite products. In this section, we show that $\mathcal{S}(\pm q^m;q^n)$ with $m,n$ integers can be expanded as sums of infinite products. Therefore, our main results, Theorems \ref{main-result} and \ref{single-pole}, establish a bridge between infinite products and generalized Lambert series. We first record some elementary properties of $\mathcal{S}$. The following lemma shows that, for special ${\boldsymbol a}$, the function $\mathcal{S}({\boldsymbol a})$ reduces to concise forms.
\begin{lem}\label{Lem-F} The function $\mathcal{S}$ has the following properties: \begin{enumerate} \item $\mathcal{S}(-1)=-\frac{1}{2}$, $\mathcal{S}(-q)=\frac{1}{2}${\rm;} \item $\mathcal{S}(aq)=\mathcal{S}(a)+1${\rm;} \item $\mathcal{S}(q/a)=-\mathcal{S}(a)${\rm;} \item Let ${\boldsymbol a}=(a_1,\ldots,a_r)$. If $(q/a_1,\ldots,q/a_r)$ is a permutation of ${\boldsymbol a}$, we have $\mathcal{S}({\boldsymbol a})=0${\rm;} \item $\mathcal{S}(q^s;q^{-t})=\mathcal{S}(q^{s+t};q^t)$. \end{enumerate} \end{lem} In view of (2) and (5), it suffices to consider $\mathcal{S}(\pm q^m;q^n)$ with $m,n$ positive integers. The proof is elementary, though one should be careful about the order of summation in \eqref{F(a,q)}. \noindent{\it Proof.} (1) According to the definition of $\mathcal{S}({\boldsymbol a})$, we obtain \begin{align*} \mathcal{S}(-1)&=\lim_{m\rightarrow\infty}\sum_{n=0}^m\left(\frac{1}{1+q^n}-\frac{1}{1+q^{n+1}}\right)\\ &=\lim_{m\rightarrow\infty}\left(\frac{1}{2}-\frac{1}{1+q^{m+1}}\right) =-\frac{1}{2}. \end{align*} Consequently, by (2) we have $$\mathcal{S}(-q)=\mathcal{S}(-1)+1=\frac{1}{2}.$$ (2) Similarly, we have \begin{align*} &\mathcal{S}(aq)-\mathcal{S}(a) \\[3pt] &=\lim_{m\rightarrow\infty}\sum_{n=0}^m\left(\frac{1}{1-aq^{n+1}}-\frac{1}{1-q^n/a}\right) -\lim_{m\rightarrow\infty}\sum_{n=0}^m\left(\frac{1}{1-aq^{n}}-\frac{1}{1-q^{n+1}/a}\right) \\[3pt] &=\lim_{m\rightarrow\infty}\left(\frac{1}{1-q^{m+1}/a}-\frac{1}{1-a}+\frac{1}{1-aq^{m+1}}-\frac{1}{1-1/a}\right) =1. \end{align*} (3) By definition, we have \begin{align*} \mathcal{S}(q/a)&=\sum_{n=0}^\infty\left(\frac{1}{1-q^{n+1}/a}-\frac{1}{1-aq^n}\right) \\[3pt] &=-\sum_{n=0}^\infty\left(\frac{1}{1-aq^n}-\frac{1}{1-q^{n+1}/a}\right) =-\mathcal{S}(a). \end{align*} (4) This follows directly from (3).
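Since the partial sums of the $\mathcal{S}$-series converge geometrically for $|q|<1$, the properties above can be confirmed numerically as well as by the telescoping arguments. A brief sketch (the function name and sample values are the editor's):

```python
def S(a, q, terms=400):
    """Partial sums of the S-series for a single parameter:
    S(a) = sum_{n>=0} [ 1/(1 - a q^n) - 1/(1 - q^{n+1}/a) ]."""
    return sum(1 / (1 - a * q**n) - 1 / (1 - q**(n + 1) / a) for n in range(terms))
```

With, say, $q=0.3$ and $a=0.45$, the values $\mathcal{S}(-1)=-\tfrac12$, $\mathcal{S}(-q)=\tfrac12$, $\mathcal{S}(aq)=\mathcal{S}(a)+1$ and $\mathcal{S}(q/a)=-\mathcal{S}(a)$ are reproduced to high accuracy.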
(5) We have \begin{align*} \mathcal{S}(q^s;q^{-t})&=\sum_{n=0}^\infty\left(\frac{1}{1-q^sq^{-tn}}-\frac{1}{1-q^{-s}q^{-tn-t}}\right) \\[3pt] &=\sum_{n=0}^\infty\frac{q^{s-tn}-q^{-s-tn-t}}{(1-q^{s-tn})(1-q^{-s-tn-t})}. \end{align*} Multiplying both the numerator and the denominator of each term by $q^{s+tn+t}q^{-s+tn}$, we derive \begin{align*} \mathcal{S}(q^s;q^{-t})&=\sum_{n=0}^\infty\left(\frac{1}{1-q^{s+tn+t}}-\frac{1}{1-q^{-s+tn}}\right) =\mathcal{S}(q^{s+t};q^{t}). \end{align*} \qed The following lemma is due to Andrews, Lewis and Liu \cite{Andrews-Lewis-Liu-2001}. Chan \cite{Chan-2005} provided another proof using Lemma \ref{Chan-Thm-2.2}. \begin{lem}\label{Chan-Coro-3.2} For $|q|<1$, we have \begin{align} \frac{[ab,bc,ca]_\infty(q)_\infty^2}{[a,b,c,abc]_\infty} &=1+\sum_{n=0}^\infty\frac{aq^n}{1-aq^n}-\sum_{n=1}^\infty\frac{q^n/a}{1-q^n/a} +\sum_{n=0}^\infty\frac{bq^n}{1-bq^n} \nonumber\\[5pt] &\quad-\sum_{n=1}^\infty\frac{q^n/b}{1-q^n/b} +\sum_{n=0}^\infty\frac{cq^n}{1-cq^n}-\sum_{n=1}^\infty\frac{q^n/c}{1-q^n/c} \nonumber\\[5pt] &\quad-\sum_{n=0}^\infty\frac{abcq^n}{1-abcq^n} +\sum_{n=1}^\infty\frac{q^n/abc}{1-q^n/abc}. \end{align} \end{lem} Lemma \ref{Chan-Coro-3.2} associates the function $\mathcal{S}$ with theta functions. We denote the infinite product expression on the left-hand side of Lemma \ref{Chan-Coro-3.2} by $\mathcal{P}(a,b,c)$, that is, \begin{equation*} \mathcal{P}(a,b,c)=\mathcal{P}(a,b,c;q)=\frac{[ab,bc,ca]_\infty(q)_\infty^2}{[a,b,c,abc]_\infty}. \end{equation*} For the sake of brevity, we denote $\mathcal{P}(a,a,a)$ by $\mathcal{P}(a)$. Then Lemma \ref{Chan-Coro-3.2} shows that \begin{align}\label{Chan-Coro-3.2-huajian} \mathcal{P}(a,b,c)=1+\mathcal{S}(a) +\mathcal{S}(b)+\mathcal{S}(c)-\mathcal{S}(abc). \end{align} We are now equipped to propose an algorithm for $\mathcal{S}(\pm q^m;q^n)$ with arbitrary positive integers $m$ and $n$.
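The restatement of the Andrews-Lewis-Liu identity as $\mathcal{P}(a,b,c)=1+\mathcal{S}(a)+\mathcal{S}(b)+\mathcal{S}(c)-\mathcal{S}(abc)$ can be spot-checked with floating-point truncations of both sides; the parameter values below are arbitrary sample points chosen by the editor, and the helper names are not from the paper:

```python
from math import prod

def poch(x, q, K=300):
    """Truncated (x; q)_infinity."""
    return prod(1 - x * q**k for k in range(K))

def bracket(x, q):
    """[x]_inf = (x, q/x; q)_inf."""
    return poch(x, q) * poch(q / x, q)

def S(a, q, K=300):
    """S-series for a single parameter, as in (1.1)."""
    return sum(1 / (1 - a * q**n) - 1 / (1 - q**(n + 1) / a) for n in range(K))

def P(a, b, c, q):
    """P(a,b,c) = [ab, bc, ca]_inf (q)_inf^2 / [a, b, c, abc]_inf."""
    num = bracket(a * b, q) * bracket(b * c, q) * bracket(c * a, q) * poch(q, q)**2
    den = bracket(a, q) * bracket(b, q) * bracket(c, q) * bracket(a * b * c, q)
    return num / den
```

The check uses generic parameters away from the poles $a,b,c,abc\in\{q^k\}$, where both sides are well defined.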
First, in \eqref{Chan-Coro-3.2-huajian}, replacing $q$ by $q^n$ and setting $a=\pm q^m$, $b=\pm q^m$ and $c=-q^{n-2m}$, we have \begin{align}\label{G(q^m,q^m,-q^{n-2m}} \nonumber&\mathcal{P}(\pm q^m,\pm q^m,-q^{n-2m};q^n)\\ \nonumber&\indent\indent=1+2\mathcal{S}(\pm q^m;q^n) +\mathcal{S}(-q^{n-2m};q^n) -\mathcal{S}(-q^n;q^n) \\ &\indent\indent=\frac{1}{2}+2\mathcal{S}(\pm q^m;q^n)-\mathcal{S}(-q^{2m};q^n). \end{align} Therefore, in order to obtain expansions for $\mathcal{S}(\pm q^m;q^n)$ in terms of $\mathcal{P}$-functions, we need to calculate $\mathcal{S}(-q^{2m};q^n)$. Our strategy is to implement a recursive procedure using (\ref{Chan-Coro-3.2-huajian}). Suppose that $n=3^s\cdot n'$ with $(3,n')=1$. We denote by $k$ the order of $3$ in the cyclic group $\mathbb{Z}_{n'}$, that is, \begin{equation}\label{k} k=k(n')=\mathrm{ord}_{\mathbb{Z}_{n'}}(3). \end{equation} Thus, we have \begin{equation*} 3^k\equiv 1 \pmod{n'} \end{equation*} and accordingly \begin{equation}\label{3^{s+k}} 3^{s+k}\equiv 3^{s} \pmod{n}. \end{equation} Then, setting each of $a,b,c$ equal to $-q^{3^{j-1}\cdot 2m}$ for $j=1,\ldots,s+k$ successively, we obtain the following chain of identities: \begin{align*} j=1:&&\mathcal{P}(-q^{2m};q^n)&=1+3\mathcal{S}(-q^{2m};q^n)-\mathcal{S}(-q^{3\cdot 2m};q^n);\\ \vdots\indent~~&&\vdots\indent~~&\indent\indent\indent\indent\vdots\\ j=s:&&\mathcal{P}(-q^{3^{s-1}\cdot 2m};q^n)&=1+3\mathcal{S}(-q^{3^{s-1}\cdot 2m};q^n)-\mathcal{S}(-q^{3^{s}\cdot 2m};q^n);\\ j=s+1:&&\mathcal{P}(-q^{3^{s}\cdot 2m};q^n)&=1+3\mathcal{S}(-q^{3^{s}\cdot 2m};q^n)-\mathcal{S}(-q^{3^{s+1}\cdot 2m};q^n);\\ \vdots\indent~~&&\vdots\indent~~&\indent\indent\indent\indent\vdots\\ j=s+k:&&\mathcal{P}(-q^{3^{s+k-1}\cdot 2m};q^n)&=1+3\mathcal{S}(-q^{3^{s+k-1}\cdot 2m};q^n)-\mathcal{S}(-q^{3^{s+k}\cdot 2m};q^n). \end{align*} In view of \eqref{3^{s+k}} and Lemma \ref{Lem-F}(2), we are now able to solve for $\mathcal{S}(-q^{2m};q^n)$.
Concretely, for $j=1,\ldots,s$, we multiply the identities by $3^{s-j}$, respectively. Their weighted sum then becomes \begin{equation}\label{s-identities} \sum_{j=1}^{s}3^{s-j}\mathcal{P}(-q^{3^{j-1}\cdot 2m};q^n) =\frac{3^s-1}{2}+3^s\mathcal{S}(-q^{2m};q^n)-\mathcal{S}(-q^{3^{s}\cdot 2m};q^n). \end{equation} Again, for $j=s+1,\ldots,s+k$, we multiply the identities by $3^{s+k-j}$, respectively. Their weighted sum then becomes \begin{equation}\label{k-identities} \sum_{j=s+1}^{s+k}3^{s+k-j}\mathcal{P}(-q^{3^{j-1}\cdot 2m};q^n) =\frac{3^k-1}{2}+3^k\mathcal{S}(-q^{3^{s}\cdot 2m};q^n)-\mathcal{S}(-q^{3^{s+k}\cdot 2m};q^n). \end{equation} Considering \eqref{3^{s+k}}, we have \begin{equation}\label{3^(s+k)*2m} \mathcal{S}(-q^{3^{s+k}\cdot 2m};q^n)=\mathcal{S}(-q^{3^{s}\cdot 2m};q^n)+\frac{3^s(3^k-1)\cdot 2m}{n}. \end{equation} Combining \eqref{s-identities}, \eqref{k-identities} and \eqref{3^(s+k)*2m}, we are able to obtain $\mathcal{S}(-q^{2m};q^n)$, and consequently $\mathcal{S}(\pm q^m;q^n)$ by \eqref{G(q^m,q^m,-q^{n-2m}}. We summarize the algorithm in the following theorem. \begin{thm}\label{F-main} Suppose that $m$ and $n$ are positive integers with $n=3^s\cdot n'$ and $(3,n')=1$. Denote by $k$ the order of $3$ in the cyclic group $\mathbb{Z}_{n'}$. Then, we have \begin{align*} 2\mathcal{S}(\pm q^m;q^n) &+\frac{n-2m}{n}\\[3pt] &=\sum_{j=1}^{s+k}\frac{3^{k-j}}{3^k-1}\mathcal{P}(-q^{3^{j-1}\cdot 2m};q^n)-\sum_{j=1}^{s}\frac{3^{-j}}{3^k-1}\mathcal{P}(-q^{3^{j-1}\cdot 2m};q^n)\\[5pt] &\indent\indent\indent\indent\indent+\mathcal{P}(\pm q^m,\pm q^m,-q^{n-2m};q^n). \end{align*} \end{thm} The length of the chain may be reduced for special $m$ and $n$. Consider the first $l$ identities in the chain. Their summation with weights $3^{l-j}$ gives \begin{equation}\label{l-identities} \sum_{j=1}^{l}3^{l-j}\mathcal{P}(-q^{3^{j-1}\cdot 2m};q^n) =\frac{3^l-1}{2}+3^l\mathcal{S}(-q^{2m};q^n)-\mathcal{S}(-q^{3^{l}\cdot 2m};q^n).
\end{equation}
Lemma \ref{Lem-F} provides values of $\mathcal{S}$ at special points, which help to shorten the chain of identities. Suppose that
\begin{align*}
&n=3^{s_1}\cdot 2^{t_1}\cdot n' \indent\mathrm{with} \indent (3,n')=1 ~\mathrm{and}~ (2,n')=1,\\
&m=3^{s_2}\cdot 2^{t_2}\cdot m' \indent\mathrm{with} \indent (3,m')=1 ~\mathrm{and}~ (2,m')=1.
\end{align*}
We consider two special cases.

\emph{Case I:} $n'\mid m'$ and $t_1\elle t_2+1$. We choose $l$ by setting
\begin{equation}\ellabel{l}
l=\begin{cases} 0,&\text{when $s_2\ge s_1$,}\\ s_1-s_2,&\text{when $s_2< s_1$.}\end{cases}
\end{equation}
In this case, $l$ is the least nonnegative integer such that
$$3^l\cdot 2m\equiv 0\mmod n.$$
By Lemma \ref{Lem-F}, we have
\begin{align}\ellabel{3^l*2m-1}
\nonumber\mathcal{S}(-q^{3^{l}\cdot 2m};q^n)
&=\frac{3^l\cdot 2m}{n}+\mathcal{S}(-1;q^n)\\
&=\frac{3^l\cdot 2m}{n}-\frac{1}{2}.
\end{align}
Combining \eqref{l-identities} and \eqref{3^l*2m-1}, we obtain $\mathcal{S}(-q^{2m};q^n)$, and consequently $\mathcal{S}(q^m;q^n)$ by \eqref{G(q^m,q^m,-q^{n-2m}}.

\emph{Case II:} $n'\mid m'$ and $t_1=t_2+2$. We take $l$ as in \eqref{l}. Now $l$ is the least nonnegative integer such that
$$3^l\cdot 2m\equiv n/2\mmod n.$$
The discussion is similar to that of Case I. A slight difference lies in \eqref{3^l*2m-1}, where we now have
\begin{align}\ellabel{3^l*2m-2}
\nonumber\mathcal{S}(-q^{3^{l}\cdot 2m};q^n)
&=\frac{3^l\cdot 2m-n/2}{n}+\mathcal{S}(q^\frac{n}{2};q^n)\\
&=\frac{3^l\cdot 2m}{n}-\frac{1}{2}.
\end{align}
We summarize these two cases in the following corollary.
\begin{coro}\ellabel{cor-F_1}
Let $m$ and $n$ be positive integers. Suppose that there exists a least nonnegative integer $l$ such that $3^l\cdot 4m\equiv 0\mmod n$. Then, we have
\begin{align*}
2\mathcal{S}(\pm q^m;q^n)&+\frac{n-2m}{n}\\
&=\sum_{j=1}^{l}3^{-j}\mathcal{P}(-q^{3^{j-1}\cdot 2m};q^n)
+\mathcal{P}(\pm q^m,\pm q^m,-q^{n-2m};q^n).
\end{align*}
\end{coro}
For example, when $n=3$, we have $l=1$ in Corollary \ref{cor-F_1}. We give the explicit expansion of $\mathcal{S}(\pm q;q^3)$ in terms of infinite products.
\begin{coro}
We have
\begin{align}\ellabel{Chan-Coro-gene}
\nonumber1+6\mathcal{S}(q;q^3)
&=3\mathcal{P}(q,q,-q;q^3)
-\mathcal{P}(-q,-q,-q;q^3) \\[5pt]
&=\frac{3J_{3,6}^2J_{6}^3}{2J_{1,6}^2J_2}
-\frac{J_{1,6}^6J_{2}^3J_{3,6}^2}{2J_{6}^9}
\end{align}
and
\begin{align}\ellabel{Coro-gene}
1+6\mathcal{S}(-q;q^3)
=2\mathcal{P}(-q,-q,-q;q^3)
=\frac{J_{1,6}^6J_{2}^3J_{3,6}^2}{J_{6}^9}.
\end{align}
\end{coro}

\section{Examples for 3-dissections}

Combined with the algorithm for the $\mathcal{S}$-series, Theorems \ref{main-result} and \ref{single-pole} build a bridge between sums of generalized Lambert series and sums of theta functions. In this section, we present some examples concerning 3-dissections of generalized Lambert series. These formulas allow a comparison between Chan's identities and ours. In \S5, they will be used to discuss properties of ranks of overpartitions modulo 6.

First, replacing $q$ by $q^3$ and taking $b_0=q$, $b_1=-q$ in Lemma \ref{Chan-Thm-2.1} and Theorem \ref{main-result} respectively, we obtain the following corollary. It shows that double poles complicate the correspondence between generalized Lambert series and infinite products.
\begin{coro}\ellabel{cor4.1}
We have
\begin{align}
&\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}}
+\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1-q^{3n+1}}
=\frac{2J^3_{6}}{J_{2}},\ellabel{simpli1}\\
&\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{(1+q^{3n+1})^2}
+\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{(1-q^{3n+1})^2}
=\frac{4J^3_{6}}{3J_{2}}+\frac{J_{3,6}^2J_{6}^6}{2J_{1,6}^2J_2^2}+\frac{J_{1,6}^6J_{2}^2J_{3,6}^2}{6J_{6}^6}.\ellabel{simpli2}
\end{align}
\end{coro}
The following corollary compares Lemma \ref{Chan-Thm-2.1} with Theorem \ref{single-pole}.
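Identities such as \eqref{simpli1} admit a quick numerical sanity check by comparing truncated $q$-expansions. A minimal Python sketch (illustrative only; it assumes the standard convention $J_m=(q^m;q^m)_\infty$ and uses exact integer arithmetic):

```python
N = 60  # compare coefficients of q^0, ..., q^N

def mul(a, b):
    """Multiply two truncated power series given as coefficient lists."""
    out = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[: N + 1 - i]):
                out[i + j] += ai * bj
    return out

def euler_product(step, power):
    """(q^step; q^step)_infinity ** power, truncated; power may be negative."""
    out = [0] * (N + 1)
    out[0] = 1
    for k in range(step, N + 1, step):
        factor = [0] * (N + 1)
        if power > 0:
            factor[0], factor[k] = 1, -1          # (1 - q^k)
        else:
            for j in range(0, N + 1, k):          # 1/(1 - q^k) = sum_j q^{jk}
                factor[j] = 1
        for _ in range(abs(power)):
            out = mul(out, factor)
    return out

def lambert_pm(c):
    """sum_{n in Z} (-1)^n q^{3n^2+3n} / (1 + c*q^{3n+1}), c = 1 or -1."""
    coeffs = [0] * (N + 1)
    for n in range(-40, 41):
        s, E, D = (-1) ** n, 3 * n * n + 3 * n, 3 * n + 1
        if D > 0:                                  # geometric expansion
            j = 0
            while E + j * D <= N:
                coeffs[E + j * D] += s * (-c) ** j
                j += 1
        else:                                      # 1/(1+c q^D) = c q^{-D}/(1+c q^{-D})
            d, j = -D, 0
            while E + (j + 1) * d <= N:
                coeffs[E + (j + 1) * d] += s * c * (-c) ** j
                j += 1
    return coeffs

lhs = [a + b for a, b in zip(lambert_pm(1), lambert_pm(-1))]
rhs = [2 * x for x in mul(euler_product(6, 3), euler_product(2, -1))]
assert lhs == rhs  # both sides agree up to q^60
```

Raising $N$ only lengthens the check; the same routine adapts to the squared-denominator companion \eqref{simpli2} after adjusting the expansions.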
\begin{coro}\ellabel{cor3.6} We have \begin{align} \nonumber\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+3n}}{1+q^{9n}} &+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+6}}{1+q^{9n+6}}\\ &=\frac{2J_{3,18}}{J_{9,18}} \sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+3}}{1+q^{9n+3}} +\frac{ J_{3,18}^6J_{6}^3J_{9,18}^2}{2J_{18}^9}, \ellabel{Chan-Thm-2.1-coro}\\ \nonumber\sum_{n=-\infty\atop n\neq0}^{\infty}\frac{(-1)^nq^{9n^2+3n}}{1-q^{9n}} &+\sum_{n=-\infty}^{\infty}\frac{(-1)^nq^{9n^2+15n+6}}{1-q^{9n+6}}\\ &=\frac{2J_{3,18}}{J_{9,18}}\sum_{n=-\infty}^\infty \frac{(-1)^nq^{9n^2+9n+3}}{1-q^{9n+3}} +\frac{J_{3,18}^6J_{6}^3J_{9,18}^2}{6J_{18}^9} -\frac{1}{6}.\ellabel{main-result-coro} \end{align} \end{coro} \noindent{\it Proof.} For \eqref{Chan-Thm-2.1-coro}, we set $r=1$ and $s=3$ in Lemma \ref{Chan-Thm-2.1}. By replacing $q$ by $q^9$ and taking $a_1=q^3$, $b_1=-1$, $b_2=-q^3$, and $b_3=-q^6$, we obtain \begin{align*} \sum_{n=-\infty}^{\infty}\frac{(-1)^nq^{9n^2+3n}}{1+q^{9n}} &+\sum_{n=-\infty}^{\infty}\frac{(-1)^nq^{9n^2+15n+6}}{1+q^{9n+6}} \nonumber\\[5pt] &=\frac{[-1;q^9]_\infty}{[-q^3;q^9]_\infty}\sum_{n=-\infty}^\infty \frac{(-1)^nq^{9n^2+9n+3}}{1+q^{9n+3}} +\mathcal{P}(-q^3;q^9). \end{align*} For \eqref{main-result-coro}, we set $r=1$ and $s=2$ in Theorem \ref{single-pole}. By replacing $q$ by $q^9$ and taking $a_1=-q^{12}$, $b_1=q^3$ and $b_2=q^6$, we obtain \begin{align*} \sum_{n=-\infty\atop n\neq0}^{\infty}\frac{(-1)^nq^{9n^2+3n}}{1-q^{9n}} &+\sum_{n=-\infty}^{\infty}\frac{(-1)^nq^{9n^2+15n+6}}{1-q^{9n+6}} \nonumber\\[5pt] &=\frac{[-1;q^9]_\infty}{[-q^3;q^9]_\infty}\sum_{n=-\infty}^\infty \frac{(-1)^nq^{9n^2+9n+3}}{1-q^{9n+3}} +\mathcal{S}(-q^3;q^9). \end{align*} Then \eqref{main-result-coro} follows by \eqref{Coro-gene}. 
\qed

Consider the 3-dissections of generalized Lambert series according to the summation index $n$ modulo 3:
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{1+q^{3n}}
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+3n}}{1+q^{9n}}\\
&\indent\indent
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{1+q^{9n+3}}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+6}}{1+q^{9n+6}},\\
\sum_{n=-\infty\atop n\neq0}^\infty\frac{(-1)^nq^{n^2+n}}{1-q^{3n}}
&=\sum_{n=-\infty\atop n\neq0}^\infty\frac{(-1)^nq^{9n^2+3n}}{1-q^{9n}}\\
&\indent\indent
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{1-q^{9n+3}}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+6}}{1-q^{9n+6}}.
\end{align*}
Using Corollary \ref{cor3.6}, we transform these 3-dissections into forms containing a single generalized Lambert series.
\begin{coro}\ellabel{cor3.7}
We have
\begin{align*}
&\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{1+q^{3n}}\\
&\indent\indent=\elleft(2q\frac{J_{3,18}}{J_{9,18}}-1\right)
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{1+q^{9n+3}}
+\frac{J_{3,18}^6J_{6}^3J_{9,18}^2}{2J_{18}^9},\\
&\sum_{n=-\infty\atop n\neq0}^\infty\frac{(-1)^nq^{n^2+n}}{1-q^{3n}}\\
&\indent\indent=\elleft(2q\frac{J_{3,18}}{J_{9,18}}-1\right)
\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{9n^2+9n+2}}{1-q^{9n+3}}
+\frac{J_{3,18}^6J_{6}^3J_{9,18}^2}{6J_{18}^9}
-\frac{1}{6}.
\end{align*}
\end{coro}
As mentioned above, Theorem \ref{main-result} decouples the parameters in ${\boldsymbol a}$ from the poles of the generalized Lambert series. This helps in constructing identities with varying orders of $q$ in the numerators.
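The splittings above are pure reindexings ($n\mapsto 3n$, $3n+1$, $3n+2$) and can be confirmed by comparing truncated expansions. A short Python sketch for the first one (illustrative only; exact rational arithmetic handles the $n=0$ term $\tfrac12$):

```python
from fractions import Fraction

N = 60  # track coefficients of q^0, ..., q^N

def lambert(terms):
    """Truncated series of a signed combination of bilateral sums
    sign * sum_{n in Z} (-1)^n q^{E(n)} / (1 + q^{D(n)})."""
    coeffs = [Fraction(0)] * (N + 1)
    for sign, E_of, D_of in terms:
        for n in range(-40, 41):
            s, E, D = sign * (-1) ** n, E_of(n), D_of(n)
            if D == 0:                    # the term with D = 0 contributes 1/(1+1)
                if E <= N:
                    coeffs[E] += Fraction(s, 2)
                continue
            if D < 0:                     # 1/(1+q^D) = q^{-D}/(1+q^{-D})
                E, D = E - D, -D
            j = 0
            while E + j * D <= N:
                coeffs[E + j * D] += s * (-1) ** j
                j += 1
    return coeffs

lhs = lambert([(1, lambda n: n * n + n, lambda n: 3 * n)])
rhs = lambert([
    (1,  lambda n: 9 * n * n + 3 * n,      lambda n: 9 * n),
    (-1, lambda n: 9 * n * n + 9 * n + 2,  lambda n: 9 * n + 3),
    (1,  lambda n: 9 * n * n + 15 * n + 6, lambda n: 9 * n + 6),
])
assert lhs == rhs
```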
Consider the following 3-dissections of generalized Lambert series:
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^{3n})^2}
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+3n}}{(1+q^{9n})^2}\\
&\indent\indent
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{(1+q^{9n+3})^2}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+6}}{(1+q^{9n+6})^2},\\
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+2n}}{(1+q^{3n})^2}
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+6n}}{(1+q^{9n})^2}\\
&\indent\indent
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+12n+3}}{(1+q^{9n+3})^2}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+18n+8}}{(1+q^{9n+6})^2},\\
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+3n}}{(1+q^{3n})^2}
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n})^2}\\
&\indent\indent
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+4}}{(1+q^{9n+3})^2}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+21n+10}}{(1+q^{9n+6})^2}.
\end{align*}
As in Corollary \ref{cor3.7}, we aim to transform these 3-dissections into forms containing a single generalized Lambert series.
\begin{coro}\ellabel{cor3.8}
We have
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^{3n})^2}
&=\frac{J_{3,18}^6J_6^3J_{9,18}^2}{2J_{18}^9}
\elleft(\frac{2}{3}-\frac{J_{9,18}^2J_{18}^3}{4J_{3,18}^2J_6}
+\frac{J_{3,18}^6J_{6}^3J_{9,18}^2}{12J_{18}^9}\right)\\
&\indent\indent\indent\indent-\elleft(1-\frac{2qJ_{3,18}}{J_{9,18}}\right)
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{(1+q^{9n+3})^2},\\
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+2n}}{(1+q^{3n})^2}
&=\frac{J_{3,18}^6J_6^3J_{9,18}^2}{2J_{18}^9}
\elleft(\frac{1}{3}+\frac{J_{9,18}^2J_{18}^3}{4J_{3,18}^2J_6}
-\frac{J_{3,18}^6J_{6}^3J_{9,18}^2}{12J_{18}^9}\right)\\
&\indent\indent\indent\indent-\elleft(1-\frac{2qJ_{3,18}}{J_{9,18}}\right)
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+18n+5}}{(1+q^{9n+3})^2},\\
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+3n}}{(1+q^{3n})^2}
&=\frac{qJ_{3,18}^5J_{6}^2J_{9,18}^3}{2J_{18}^6}
+\elleft(1-\frac{2qJ_{3,18}}{J_{9,18}}\right)
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n})^2}.
\end{align*}
\end{coro}
\noindent{\it Proof.}
For the third identity, we replace $q$ by $q^9$ and set $r=1$, $s=3$ in Lemma \ref{Chan-Thm-3.2}. Then, taking $a_0=1$, $b_1=-1$, $b_2=-q^3$, $b_3=-q^6$, we obtain
\begin{align}
\nonumber-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+3}}{(1+q^{9n+3})^2}
&+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+21n+9}}{(1+q^{9n+6})^2}\\
&=\frac{J_{6}^2J_{3,18}^5J_{9,18}^3}{2J_{18}^6}
-\frac{2J_{3,18}}{J_{9,18}}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n})^2}.\ellabel{q^{n^2+3n}}
\end{align}
Multiplying both sides by $q$ and combining with the third 3-dissection above proves the third identity in the corollary.

For the first and second identities, Lemma \ref{Chan-Thm-3.2} fails to give a relationship similar to \eqref{q^{n^2+3n}}: the poles are entangled with the parameter $a_0$, which limits the admissible orders of $q$ in the numerators. Instead, we replace $q$ by $q^9$ and set $r=1$, $s=3$ in Theorem \ref{main-result}.
Then, taking $a_1=q^3$, $b_1=-1$, $b_2=-q^3$, $b_3=-q^6$, we obtain
\begin{align*}
&\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+3n}}{(1+q^{9n})^2}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+6}}{(1+q^{9n+6})^2}\\
&=\frac{J_6^3J_{3,18}^6J_{9,18}^2}{2J_{18}^9}
\elleft(\frac{1}{2}-\mathcal{S}(q^3;q^9)\right)
+\frac{2J_{3,18}}{J_{9,18}}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+3}}{(1+q^{9n+3})^2}.
\end{align*}
Then the first identity follows by \eqref{Chan-Coro-gene}.

Similarly, taking $a_1=q^6$, $b_1=-1$, $b_2=-q^3$, $b_3=-q^6$, we obtain
\begin{align*}
&\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+6n}}{(1+q^{9n})^2}
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+12n+3}}{(1+q^{9n+3})^2}\\
&=\frac{J_6^3J_{3,18}^6J_{9,18}^2}{2J_{18}^9}
\elleft(\frac{1}{2}-\mathcal{S}(q^6;q^9)\right)
-\frac{2J_{3,18}}{J_{9,18}}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+18n+9}}{(1+q^{9n+6})^2}.
\end{align*}
Noting that
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+18n+8}}{(1+q^{9n+6})^2}
=-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+18n+5}}{(1+q^{9n+3})^2}
\end{align*}
and that
\begin{equation*}
\mathcal{S}(q^6;q^9)=-\mathcal{S}(q^3;q^9),
\end{equation*}
the second identity follows by \eqref{Chan-Coro-gene}.
\qed

\section{Ranks of Overpartitions modulo $6$}

In this section, we study 3-dissection properties of ranks of overpartitions modulo 6. Noting that
\begin{equation*}
\overline{N}(s,\ell,n)=\overline{N}(\ell-s,\ell,n),
\end{equation*}
it suffices to consider four residue classes when $\ell=6$. Setting $z=\xi_6=e^{\frac{\pi i}{3}}$, a primitive sixth root of unity, the left-hand side of \eqref{GF} reduces to
\begin{align*}
\overline{R}(\xi_6;q)
&=\sum_{n=0}^\infty\sum_{m=-\infty}^{\infty}\overline{N}(m,n)\xi_6^mq^n
\nonumber\\[5pt]
&=\sum_{n=0}^\infty\sum_{t=0}^5\sum_{m=-\infty}^{\infty}\overline{N}(6m+t,n)\xi_6^tq^n
\nonumber\\[5pt]
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)+\overline{N}(1,6,n)
-\overline{N}(2,6,n)-\overline{N}(3,6,n))q^n.
\end{align*}
On the other hand, in light of the fact that $\xi_6+\xi_6^{-1}=1$, we get
\begin{align*}
\overline{R}(\xi_6;q)
&=\frac{(-q)_\infty}{(q)_\infty}\elleft\{1+2\sum_{n=1}^\infty
\frac{(2-\xi_6-\xi_6^{-1})(-1)^nq^{n^2+n}}{1-\xi_6q^n-\xi_6^{-1}q^n+q^{2n}}\right\}\\
&=\frac{(-q)_\infty}{(q)_\infty}\elleft\{1+2\sum_{n=1}^\infty
\frac{(-1)^nq^{n^2+n}}{1-q^n+q^{2n}}\right\}\\
&=\frac{2(-q)_\infty}{(q)_\infty}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{n^2+n}}{1+q^{3n}}.
\end{align*}
Thus, we get
\begin{align}\ellabel{xi_6}
\nonumber\overline{R}(\xi_6;q)
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)+\overline{N}(1,6,n)
-\overline{N}(2,6,n)-\overline{N}(3,6,n))q^n\\
&=\frac{2(-q)_\infty}{(q)_\infty}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{n^2+n}}{1+q^{3n}}.
\end{align}
Similarly, replacing $z$ by $\xi_6^2$, $\xi_6^3$ and $1$ in \eqref{GF} respectively, we obtain
\begin{align}
\nonumber\overline{R}(\xi_6^2;q)
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)-\overline{N}(1,6,n)
-\overline{N}(2,6,n)+\overline{N}(3,6,n))q^n\\
&=\frac{6(-q)_\infty}{(q)_\infty}
\sum_{n=-\infty\atop n\neq0}^\infty\frac{(-1)^nq^{n^2+n}}{1-q^{3n}}+\frac{(-q)_\infty}{(q)_\infty},\ellabel{xi_6^2}\\
\nonumber\overline{R}(\xi_6^3;q)
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)-2\overline{N}(1,6,n)
+2\overline{N}(2,6,n)-\overline{N}(3,6,n))q^n\\
&=\frac{4(-q)_\infty}{(q)_\infty}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{n^2+n}}{(1+q^{n})^2},\ellabel{xi_6^3}\\
\nonumber\overline{R}(1;q)
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)+2\overline{N}(1,6,n)
+2\overline{N}(2,6,n)+\overline{N}(3,6,n))q^n\\
&=\frac{(-q)_\infty}{(q)_\infty}.\ellabel{1}
\end{align}
Now we have a system of linear equations involving all residue classes of ranks of overpartitions modulo 6. Its coefficient matrix has full rank, so we can solve for $\overline{N}(i,6,n)$, $i=0,1,2,3$, in terms of the $\overline{R}$-functions.
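The inversion of this system is elementary; as an illustrative sketch (not part of the proof), one can check with exact arithmetic that the coefficient matrix $M$ satisfies $M^2=6I$, so $M^{-1}=\frac{1}{6}M$, which is exactly the combination appearing in the lemma below:

```python
from fractions import Fraction

# Coefficient matrix: rows correspond to R(1), R(xi6), R(xi6^2), R(xi6^3);
# columns to N(0,6,n), N(1,6,n), N(2,6,n), N(3,6,n).
M = [
    [1,  2,  2,  1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
    [1, -2,  2, -1],
]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# M is its own inverse up to the factor 6: M^2 = 6*I.
assert matmul(M, M) == [[6, 0, 0, 0], [0, 6, 0, 0], [0, 0, 6, 0], [0, 0, 0, 6]]

# Hence N_i is recovered by applying (1/6)*M to the vector of R's.
inv = [[Fraction(x, 6) for x in row] for row in M]
assert matmul(inv, M) == [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
```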
\begin{lem}\ellabel{lem4.1}
We have
\begin{align*}
\sum_{n=0}^\infty\overline{N}(0,6,n)q^n
&=\frac{1}{6}\elleft(\overline{R}(1;q)
+2\overline{R}(\xi_6;q)
+2\overline{R}(\xi_6^2;q)
+\overline{R}(\xi_6^3;q)\right),\\[5pt]
\sum_{n=0}^\infty\overline{N}(1,6,n)q^n
&=\frac{1}{6}\elleft(\overline{R}(1;q)
+~~\overline{R}(\xi_6;q)
-~~\overline{R}(\xi_6^2;q)
-\overline{R}(\xi_6^3;q)\right),\\[5pt]
\sum_{n=0}^\infty\overline{N}(2,6,n)q^n
&=\frac{1}{6}\elleft(\overline{R}(1;q)
-~~\overline{R}(\xi_6;q)
-~~\overline{R}(\xi_6^2;q)
+\overline{R}(\xi_6^3;q)\right),\\[5pt]
\sum_{n=0}^\infty\overline{N}(3,6,n)q^n
&=\frac{1}{6}\elleft(\overline{R}(1;q)
-2\overline{R}(\xi_6;q)
+2\overline{R}(\xi_6^2;q)
-\overline{R}(\xi_6^3;q)\right).
\end{align*}
\end{lem}
Therefore, once we establish the 3-dissection of each $\overline{R}$-function, we can deduce the 3-dissections of the ranks of overpartitions. In view of \eqref{xi_6}-\eqref{1}, it is not surprising that the identities obtained in the previous section will play a key role. We first give some lemmas.
\begin{lem}\ellabel{Lovejoy-Osburn-2008-mod6}
We have
\begin{align}
\nonumber\frac{(q;q)_\infty}{(-q;q)_\infty}
&=\frac{(q^9;q^9)_\infty}{(-q^9;q^9)_\infty}
-2q(q^3,q^{15},q^{18};q^{18})_\infty\\[5pt]
&=J_{9,18}-2qJ_{3,18}.
\end{align}
\end{lem}
\noindent{\it Proof.} This is \cite[Theorem 1.2]{Andrews-Hickerson-1991}. \qed
\begin{lem}\ellabel{V-0}
We have
\begin{equation}\ellabel{V-0-rr}
\frac{J_{18}^3}{J_{3,18}^3}-8q^3\frac{J_{18}^3}{J_{9,18}^3}
=\frac{J_{3,18}^5J_{6}^4}{J_{18}^9}.
\end{equation}
\end{lem}
\noindent{\it Proof.} This identity is equivalent to
\begin{equation}\ellabel{4.6}
\mathcal{P}(q^3,q^3,-q^3;q^9)+\mathcal{P}(-q^3,-q^3,q^6;q^9)=\mathcal{P}(-q^3,-q^3,-q^3;q^9),
\end{equation}
which can be easily verified by \eqref{Chan-Coro-3.2-huajian}.
In fact, \eqref{4.6} is a special case of the following identity in \cite{Atkin-Swinnerton-Dyer-1954}:
\begin{align*}
j(x;q)^2j(yz;q)j(yz^{-1};q)
&=j(y;q)^2j(xz;q)j(xz^{-1};q)\\
&\indent\indent-yz^{-1}j(z;q)^2j(xy;q)j(x y^{-1};q),
\end{align*}
obtained by replacing $q$ by $q^9$ and then setting $x=-q^3$, $y=q^3$ and $z=-1$.
\qed

Now we give the 3-dissections of $\overline{R}(z;q)$ for $z=1,\xi_6,\xi_6^2,\xi_6^3$ successively.
\begin{lem}\ellabel{R(1;q)}
We have
\begin{align*}
&\overline{R}(1;q)
=\frac{J_{18}^{12}}{J_{3,18}^8J_{6}^4J_{9,18}}
+q\frac{2J_{18}^{12}}{J_{3,18}^7J_{6}^4J_{9,18}^2}
+q^2\frac{4J_{18}^{12}}{J_{3,18}^6J_{6}^4J_{9,18}^3}.
\end{align*}
\end{lem}
\noindent{\it Proof.} Hirschhorn and Sellers \cite{Hirschhorn-Sellers-2005} proved that
\begin{align}\ellabel{Hirschhorn-Sellers-2005}
\frac{(-q)_\infty}{(q)_\infty}
=\frac{J_{18}^{12}}{J_{3,18}^8J_{6}^4J_{9,18}}
+q\frac{2J_{18}^{12}}{J_{3,18}^7J_{6}^4J_{9,18}^2}
+q^2\frac{4J_{18}^{12}}{J_{3,18}^6J_{6}^4J_{9,18}^3}.
\end{align}
Then the lemma follows by \eqref{1}. One can also verify \eqref{Hirschhorn-Sellers-2005} easily by Lemmas \ref{Lovejoy-Osburn-2008-mod6} and \ref{V-0}.
\qed
\begin{lem}\ellabel{R(xi_6;q)}
We have
\begin{align*}
&\overline{R}(\xi_6;q)
=\frac{J_{18}^3J_{9,18}}{J^2_{3,18}J_{6}}
+q\frac{2J_{18}^3}{J_{3,18}J_{6}}
+q^2\elleft(\frac{4J^3_{18}}{J_{6}J_{9,18}}
-\frac{2}{J_{9,18}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{9n^2+9n}}{1+q^{9n+3}}\right),\\
&\overline{R}(\xi_6^2;q)
=\frac{J_{18}^3J_{9,18}}{J_{3,18}^2J_{6}}
+q\frac{2J_{18}^3}{J_{3,18}J_{6}}
+q^2\elleft(\frac{4J_{18}^3}{J_{6}J_{9,18}}-\frac{6}{J_{9,18}}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{1-q^{9n+3}}\right).
\end{align*}
\end{lem}
\noindent{\it Proof.} Ji, Zhang and Zhao \cite[(2.3)]{Ji-Zhang-Zhao-2017} proved the expansion of $\overline{R}(\xi_6;q)$ by using Corollary \ref{cor3.7}.
For $\overline{R}(\xi_6^2;q)$, we substitute the second identity of Corollary \ref{cor3.7} into \eqref{xi_6^2} and obtain
\begin{align*}
\overline{R}(\xi_6^2;q)
&=\frac{6(-q)_\infty}{(q)_\infty}
\sum_{n=-\infty\atop n\neq0}^\infty\frac{(-1)^nq^{n^2+n}}{1-q^{3n}}+\frac{(-q)_\infty}{(q)_\infty}
\nonumber\\[5pt]
&=-\frac{6}{J_{9,18}}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{1-q^{9n+3}}
+\frac{(-q)_\infty}{(q)_\infty}
\frac{J_{6}^3J_{3,18}^6J_{9,18}^2}{J_{18}^9}.
\end{align*}
Then the lemma follows by \eqref{Hirschhorn-Sellers-2005}.
\qed

It is worth noting that the 3-dissection of $\overline{R}(\xi_6^2;q)$ plays a key role in \cite{Lovejoy-Osburn-2008}, where Lovejoy and Osburn determined the rank differences of overpartitions modulo $3$. In fact, we have
\begin{align*}
\nonumber\overline{R}(\xi_6^2;q)
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)-\overline{N}(1,6,n)
-\overline{N}(2,6,n)+\overline{N}(3,6,n))q^n\\
&=\sum_{n=0}^\infty(\overline{N}(0,6,n)-\overline{N}(1,6,n)
-\overline{N}(4,6,n)+\overline{N}(3,6,n))q^n\\
&=\sum_{n=0}^\infty(\overline{N}(0,3,n)-\overline{N}(1,3,n))q^n.
\end{align*}
Thus, we have provided a new proof of \cite[Theorem 1]{Lovejoy-Osburn-2008}\footnote{Lemma \ref{R(xi_6;q)} differs from \cite[Theorem 1]{Lovejoy-Osburn-2008} by $-1$, since we adopt the convention $\overline{p}(0)=1$.}.

$\overline{R}(\xi_6^3;q)$ is the most complicated part, since double poles occur. Bringmann and Lovejoy \cite{Bringmann-Lovejoy-2007} pointed out that $\overline{R}(\xi_6^3;q)$ is the holomorphic part of a harmonic weak Maass form of half-integral weight. Here we work out its 3-dissection.
In view of \eqref{xi_6^3}, a straightforward idea is to split the sum into three sums according to the summation index $n$ modulo 3:
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^{n})^2}
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+3n}}{(1+q^{3n})^2}\\
&\indent\indent
-\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n+2}}{(1+q^{3n+1})^2}
+\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+15n+6}}{(1+q^{3n+2})^2}.
\end{align*}
However, to match the orders of $q$ in both numerators and denominators, Lemma \ref{Chan-Thm-3.2} and Theorem \ref{main-result} would generate identities containing seven or six generalized Lambert series, respectively, some of which are redundant. Therefore, we first make some adjustments in $\overline{R}(\xi_6^3;q)$:
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^n)^2}
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}(1-q^n+q^{2n})^2}{(1+q^{3n})^2}
\nonumber\\
&=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}(1-2q^n+3q^{2n}-2q^{3n}+q^{4n})}
{(1+q^{3n})^2}.
\end{align*}
Noting that
\begin{align*}
\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}q^{mn}}{(1+q^{3n})^2}
=\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}q^{(4-m)n}}{(1+q^{3n})^2},
\end{align*}
we have
\begin{align}\ellabel{4.10}
\nonumber\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^n)^2}
&=2\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^{3n})^2}\\
&\quad-4\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+2n}}{(1+q^{3n})^2}
+3\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+3n}}{(1+q^{3n})^2}.
\end{align}
Then Corollary \ref{cor3.8} applies. We now give the 3-dissection of $\overline{R}(\xi_6^3;q)$.
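The reduction \eqref{4.10} can also be confirmed numerically by truncated power series, now with squared denominators; a short Python sketch (illustrative only, exact rational arithmetic):

```python
from fractions import Fraction

N = 60  # track coefficients of q^0, ..., q^N

def lambert_sq(terms):
    """sign * sum_{n in Z} (-1)^n q^{E(n)} / (1 + q^{D(n)})^2, truncated."""
    coeffs = [Fraction(0)] * (N + 1)
    for sign, E_of, D_of in terms:
        for n in range(-40, 41):
            s, E, D = sign * (-1) ** n, E_of(n), D_of(n)
            if D == 0:                    # 1/(1+1)^2 = 1/4
                if E <= N:
                    coeffs[E] += Fraction(s, 4)
                continue
            if D < 0:                     # q^E/(1+q^D)^2 = q^{E-2D}/(1+q^{-D})^2
                E, D = E - 2 * D, -D
            j = 0                         # 1/(1+x)^2 = sum_j (-1)^j (j+1) x^j
            while E + j * D <= N:
                coeffs[E + j * D] += s * (-1) ** j * (j + 1)
                j += 1
    return coeffs

lhs = lambert_sq([(1, lambda n: n * n + n, lambda n: n)])
rhs = lambert_sq([
    (2,  lambda n: n * n + n,     lambda n: 3 * n),
    (-4, lambda n: n * n + 2 * n, lambda n: 3 * n),
    (3,  lambda n: n * n + 3 * n, lambda n: 3 * n),
])
assert lhs == rhs
```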
\begin{lem}\ellabel{R(xi_6^3;q)}
We have
\begin{align*}
&\overline{R}(\xi_6^3;q)\nonumber\\[5pt]
&=\elleft(-\frac{2J_{3,18}^4J_6^2J_{9,18}^3}{J_{18}^6}
+\frac{12}{J_{9,18}}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n})^2}\right)
+q\frac{2J_{3,18}^5J_6^2J_{9,18}^2}{J_{18}^6}\nonumber\\[5pt]
& +q^2\elleft(\frac{4J_{3,18}^6J_6^2J_{9,18}}{J_{18}^6}
-\frac{24}{J_{9,18}}\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n+3})^2}
+\frac{16}{J_{9,18}}\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{1+q^{9n+3}}\right).
\end{align*}
\end{lem}
\noindent{\it Proof.} By Corollary \ref{cor3.8} and \eqref{4.10}, we find that
\begin{align*}
&\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n^2+n}}{(1+q^n)^2}
\nonumber\\[5pt]
&\quad =\frac{J_{3,18}^{12}J_6^6J_{9,18}^4}{4J_{18}^{18}}
+\elleft(1-\frac{2qJ_{3,18}}{J_{9,18}}\right)
\elleft(3\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n})^2}
-\frac{3J_{3,18}^4J_6^2J_{9,18}^4}{4J_{18}^6}\right)
\nonumber\\[5pt]
&\indent\indent -q^2\elleft(1-\frac{2qJ_{3,18}}{J_{9,18}}\right)
\elleft(6\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n+3})^2}
-4\sum_{n=-\infty}^\infty\frac{(-1)^n q^{9n^2+9n}}{1+q^{9n+3}}\right).
\end{align*}
Multiplying both sides by $\frac{4(-q)_\infty}{(q)_\infty}$, we have
\begin{align*}
&\overline{R}(\xi_6^3;q)\\
&=\frac{(-q)_\infty}{(q)_\infty}\frac{J_{3,18}^{12}J_6^6J_{9,18}^4}{J_{18}^{18}}
+\frac{1}{J_{9,18}}\elleft(12\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n})^2}
-\frac{3J_{3,18}^4J_6^2J_{9,18}^4}{J_{18}^6}\right)
\nonumber\\[5pt]
&\indent\indent -\frac{q^2}{J_{9,18}}
\elleft(24\sum_{n=-\infty}^\infty\frac{(-1)^nq^{9n^2+9n}}{(1+q^{9n+3})^2}
-16\sum_{n=-\infty}^\infty\frac{(-1)^n q^{9n^2+9n}}{1+q^{9n+3}}\right).
\end{align*}
Comparing with Lemma \ref{R(xi_6^3;q)}, it suffices to prove that
\begin{align*}
\frac{(-q)_\infty}{(q)_\infty}\frac{J_{3,18}^{12}J_6^6J_{9,18}^4}{J_{18}^{18}}
=\frac{J_{3,18}^4J_6^2J_{9,18}^3}{J_{18}^6}
+q\frac{2J_{3,18}^5J_6^2J_{9,18}^2}{J_{18}^6}
+q^2\frac{4J_{3,18}^6J_6^2J_{9,18}}{J_{18}^6},
\end{align*}
which is equivalent to \eqref{Hirschhorn-Sellers-2005}.
\qed

Combining Lemmas \ref{R(1;q)}-\ref{R(xi_6^3;q)} with Lemma \ref{lem4.1}, we are now equipped to give the 3-dissections of the ranks of overpartitions. Recall that $\overline{r}_s(d)$ is defined in \eqref{r_s(d)}.
\begin{thm}\ellabel{rank-diff-0}
For $d=0$, we have
\begin{align*}
\overline{r}_0(0)
&=\frac{J_{6}^{12}}{6J_{1,6}^8J_{2}^4J_{3,6}}
+\frac{2J_{6}^3J_{3,6}}{3J^2_{1,6}J_{2}}
-\frac{J_{1,6}^4J_2^2J_{3,6}^3}{3J_{6}^6}
+\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n})^2},\\[5pt]
\overline{r}_1(0)
&=\frac{J_{6}^{12}}{6J_{1,6}^8J_{2}^4J_{3,6}}
\indent\indent\indent\indent~
+\frac{J_{1,6}^4J_2^2J_{3,6}^3}{3J_{6}^6}
-\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n})^2},\\[5pt]
\overline{r}_2(0)
&=\frac{J_{6}^{12}}{6J_{1,6}^8J_{2}^4J_{3,6}}
-\frac{J_{6}^3J_{3,6}}{3J^2_{1,6}J_{2}}
-\frac{J_{1,6}^4J_2^2J_{3,6}^3}{3J_{6}^6}
+\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n})^2},\\[5pt]
\overline{r}_3(0)
&=\frac{J_{6}^{12}}{6J_{1,6}^8J_{2}^4J_{3,6}}
\indent\indent\indent\indent~
+\frac{J_{1,6}^4J_2^2J_{3,6}^3}{3J_{6}^6}
-\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n})^2}.
\end{align*}
\end{thm}
\begin{thm}\ellabel{rank-diff-1}
For $d=1$, we have
\begin{align*}
\overline{r}_0(1)
&=\frac{J_{6}^{12}}{3J_{1,6}^7J_{2}^4J_{3,6}^2}
+\frac{4J_{6}^3}{3J_{1,6}J_{2}}
+\frac{J_{1,6}^5J_2^2J_{3,6}^2}{3J_{6}^6},\\[5pt]
\overline{r}_1(1)
&=\frac{J_{6}^{12}}{3J_{1,6}^7J_{2}^4J_{3,6}^2}
\indent\indent\indent\indent~
-\frac{J_{1,6}^5J_2^2J_{3,6}^2}{3J_{6}^6},\\[5pt]
\overline{r}_2(1)
&=\frac{J_{6}^{12}}{3J_{1,6}^7J_{2}^4J_{3,6}^2}
-\frac{2J_{6}^3}{3J_{1,6}J_{2}}
+\frac{J_{1,6}^5J_2^2J_{3,6}^2}{3J_{6}^6},\\[5pt]
\overline{r}_3(1)
&=\frac{J_{6}^{12}}{3J_{1,6}^7J_{2}^4J_{3,6}^2}
\indent\indent\indent\indent~
-\frac{J_{1,6}^5J_2^2J_{3,6}^2}{3J_{6}^6}.
\end{align*}
\end{thm}
The case $d=2$ is more complicated. Here we use Corollary \ref{cor4.1} to remove the generalized Lambert series with denominator $(1-q^{3n+1})$.
\begin{thm}\ellabel{rank-diff-2}
For $d=2$, we have
\begin{align*}
\overline{r}_0(2)
&=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}
-\frac{4J^3_{6}}{3J_{2}J_{3,6}}
+\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6}
\nonumber\\[5pt]
&\quad\quad\quad -\frac{4}{J_{3,6}}
\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2}
+\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}},\\
\overline{r}_1(2)
&=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}
+\frac{2J_{6}^3}{J_{2}J_{3,6}}
-\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6}
\nonumber\\[5pt]
&\quad\quad\quad+\frac{4}{J_{3,6}}
\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2}
-\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}},\\
\overline{r}_2(2)
&=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}
+\frac{2J^3_{6}}{3J_{2}J_{3,6}}
+\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6}
\nonumber\\[5pt]
&\quad\quad\quad -\frac{4}{J_{3,6}}
\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2}
+\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}},\\
\overline{r}_3(2)
&=\frac{2J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}
-\frac{4J_{6}^3}{J_2J_{3,6}}
-\frac{2J_{1,6}^6J_2^2J_{3,6}}{3J_{6}^6}
\nonumber\\[5pt]
&\quad\quad\quad +\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty
\frac{(-1)^nq^{3n^2+3n}}{(1+q^{3n+1})^2}.
\end{align*}
\end{thm}
Theorems \ref{rank-diff-0}-\ref{rank-diff-2} provide information on the sizes of ranks in the different residue classes. Some of the comparisons are immediate. For example, by Theorem \ref{rank-diff-0}, it is easy to derive that the following relations hold for $n\ge 1$:
\begin{align*}
\overline{N}(1,6,3n)=\overline{N}(3,6,3n),\\
\overline{N}(0,6,3n)\ge\overline{N}(2,6,3n).
\end{align*}
However, other comparisons require more effort. We find that, for fixed $d$, the generating functions of the ranks for the different residues share a common main term, namely the first term. After taking differences, the growth rates of the second terms (some of which have coefficient $0$) dominate all the remaining terms. This results in a total ordering. For large $n$, this should be verifiable by computing effective asymptotic formulas for all terms using standard analytic methods, while for small $n$ it can be checked directly by computer. However, this is far from the theme of this article and would take up a dozen pages. Therefore, we leave it as a conjecture.
\begin{conj}
For $n\ge 11$, we have
\begin{align*}
\overline{N}(0,6,3n)\ge\overline{N}(1,6,3n)&=\overline{N}(3,6,3n)\ge\overline{N}(2,6,3n),
\\[3pt]
\overline{N}(0,6,3n+1)\ge\overline{N}(1,6,3n+1)&=\overline{N}(3,6,3n+1)\ge\overline{N}(2,6,3n+1),
\\[3pt]
\overline{N}(1,6,3n+2)\ge\overline{N}(2,6,3n+2)
&\ge\overline{N}(0,6,3n+2)\ge\overline{N}(3,6,3n+2).
\end{align*}
\end{conj}

\section{Mock Theta Functions}

Recall that the Appell-Lerch sum is defined as
\begin{equation}
m(x,q,z):=\frac{1}{j(z;q)}\sum_{r=-\infty }^{\infty}
\frac{(-1)^r q^{r\choose 2}z^r}{1-q^{r-1}xz},
\end{equation}
where $x,z \in \mathbb{C}^*$ with neither $z$ nor $xz$ an integral power of $q$.
In \cite{Hickerson-Mortenson-2014}, it is pointed out that the third order mock theta functions $\omega(q)$ and $\rho(q)$ can be expressed in terms of $m(x,q,z)$ as follows:
\begin{align}\ellabel{a2}
\omega(q)&=-2q^{-1}m(q,q^{6},q^2)+\frac{J^3_{6}}{J_{2}J_{3,6}},\\[3pt]
\ellabel{a1}
\rho(q)&=q^{-1}m(q,q^{6},-q).
\end{align}
A generalized Lambert series with single poles is essentially an Appell-Lerch sum, so it plays the key role in connecting rank differences and mock theta functions. This section is devoted to proving the relations between the rank differences of overpartitions and mock theta functions, as stated in Theorem \ref{mock}.

First we recall the universal mock theta function $g_2(x,q)$ defined by Gordon and McIntosh \cite{Gordon-2012}:
\[g_2(x,q):=\frac{1}{J_{1,2}}\sum_{n=-\infty}^\infty\frac{(-1)^nq^{n(n+1)}}{1-xq^n}.\]
Hickerson and Mortenson \cite{Hickerson-Mortenson-2014} showed that $g_2(x,q)$ and $m(x,q,z)$ satisfy the relation
\begin{equation}\ellabel{g2-m}
g_2(x,q)=-x^{-1}m(x^{-2}q,q^2,x).
\end{equation}
They also introduced the following identities on $m(x,q,z)$:
\begin{align}
m(x,q,z)&=x^{-1}m(x^{-1},q,z^{-1}),\ellabel{mm2} \\[3pt]
m(x,q,z_1)-m(x,q,z_0)&=\frac{z_0J_1^3j(z_1/z_0;q)j(xz_0z_1;q)}
{j(z_0;q)j(z_1;q)j(xz_0;q)j(xz_1;q)}.\ellabel{mm}
\end{align}
We are now in a position to give a proof of Theorem \ref{mock}.
\vskip 0.2cm
\noindent{\it Proof of Theorem \ref{mock}.} From Theorem \ref{rank-diff-2}, we have
\begin{align}\ellabel{mock-1}
\overline{r}_0(2)+\overline{r}_3(2)
&=\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}}
+\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}
-\frac{16J^3_{6}}{3J_{2}J_{3,6}}.
\end{align}
Replacing $q$ by $q^3$ in \eqref{g2-m} and setting $x=-q$, we have
\begin{equation}
g_2(-q,q^3)=q^{-1}m(q,q^6,-q),
\end{equation}
and by \eqref{a1}, we deduce that
\begin{equation}\ellabel{m-j-1}
\rho(q)=g_2(-q,q^3).
\end{equation}
Together with the identity in \cite[p.63]{Watson-1936}
\begin{equation}\ellabel{m-j}
\omega(q)+2\rho(q)=\frac{3J^3_{6}}{J_{2}J_{3,6}},
\end{equation}
we find that \eqref{mock-1} can be transformed as follows:
\begin{align*}
\overline{r}_0(2)+\overline{r}_3(2)
&=4\rho(q)-\frac{16}{9}(\omega(q)+2\rho(q))
+\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} \\[3pt]
&=\frac{4}{9}\rho(q)-\frac{16}{9}\omega(q)+\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}.
\end{align*}
Similarly, we have
\begin{align*}
\overline{r}_1(2)-\overline{r}_3(2)
&=-\frac{4}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}}
+\frac{6J^3_{6}}{J_{2}J_{3,6}} \\[3pt]
&=2\omega(q),
\end{align*}
and
\begin{align*}
\overline{r}_2(2)+\overline{r}_3(2)
&=\frac{2}{J_{3,6}}\sum_{n=-\infty}^\infty\frac{(-1)^n q^{3n^2+3n}}{1+q^{3n+1}}
-\frac{10J^3_{6}}{3J_{2}J_{3,6}}+\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} \\[3pt]
&=2\rho(q)-\frac{10}{9}(\omega(q)+2\rho(q))+\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3} \\[3pt]
&=-\frac{2}{9}\rho(q)-\frac{10}{9}\omega(q)+\frac{4J_{6}^{12}}{3J_{1,6}^6J_{2}^4J_{3,6}^3}.
\end{align*}
Thus we complete the proof of Theorem \ref{mock}.\qed
\end{document}
\begin{document}
\title{Implementation of optimal phase-covariant cloning machines}
\author{Fabio Sciarrino$^{1,2}$\ and Francesco De Martini$^{2}$}
\address{$^{1}$Centro di\ Studi e Ricerche ``Enrico Fermi'', Via Panisperna 89/A,\\
Compendio del Viminale, Roma 00184, Italy\\
$^{2}$Dipartimento di Fisica and Consorzio Nazionale Interuniversitario per\\
le Scienze Fisiche della \ Materia, Universit\'{a} ``La Sapienza'', Roma\\
00185, Italy}
\begin{abstract}
The optimal phase-covariant cloning machine (PQCM) broadcasts the information associated with an input qubit into a multi-qubit system, exploiting partial a priori knowledge of the input state. This additional a priori information leads to a higher fidelity than for universal cloning. The present article first analyzes different experimental schemes to implement the $1\longrightarrow 3$ PQCM. The method is then generalized to any $1\longrightarrow M$ machine for odd values of $M$ by a theoretical approach based on the general angular momentum formalism. Finally, different experimental schemes based on either linear or non-linear methods and valid for single-photon polarization encoded qubits are discussed.
\pacs{23.23.+x, 56.65.Dy}
\end{abstract}
\maketitle

The problem of manipulating and controlling the flux of quantum information between many quantum systems has in general been tackled and solved by the theory of quantum cloning and broadcasting \cite{Scar05,Cerf05,DeMa05}. From a practical point of view, this feature renders the theory of cloning a fundamental tool for the analysis of the security of quantum cryptographic protocols, for the distribution of quantum information to many partners and for the transmission of the information contained in a system into correlations between many systems.
In spite of the fact that, for fundamental reasons, the quantum cloning and flipping operations over an unknown qubit $\left| \phi \right\rangle $ are unrealizable in their exact forms \cite{Woot82,Bech99}, they can be optimally approximated by the corresponding universal quantum machines, i.e., the universal optimal quantum cloning machine (UQCM) and the universal-NOT (U-NOT) gate \cite{Buze96}. The optimal quantum cloning machine has been experimentally realized following different approaches: by exploiting the process of stimulated emission \cite{DeMa02,Lama02,Fase02}, by means of a quantum network \cite{Cumm02}, and by adopting projective operators onto the symmetric subspaces of many qubits \cite{Ricc04,Scia04,Irvi04}. The $N\rightarrow M$ UQCM transforms $N$ input qubits in the state $\left| \phi \right\rangle $ into $M$ output qubits, each one in the same mixed state $\rho _{out}.$ The quality of the copies is quantified by the fidelity parameter ${\cal F}_{univ}^{N\rightarrow M}=\left\langle \phi \right| \rho _{out}\left| \phi \right\rangle =\frac{N+1+\beta }{N+2}$ with $\beta =\frac{N}{M}\leq 1.$ Not only is the ``universal'' cloning of any unknown qubit forbidden, but so is the cloning of subsets containing non-orthogonal states. This no-go theorem ensures the security of cryptographic protocols such as BB84 \cite{Gisi02}. Recently, {\it state-dependent}, non-universal, optimal cloning machines have been investigated, where the cloner is optimal with respect to a given ensemble \cite{Brub00}. This partial a priori knowledge of the state allows one to reach a higher fidelity than for universal cloning. The simplest and most relevant case is represented by the cloning covariant under the Abelian group $U(1)$ of phase rotations, the so-called {\it phase-covariant} cloning. There the information is encoded in the phase $\phi _{i}$ of the input qubit belonging to any equatorial plane $i$ of the corresponding Bloch sphere.
In this context the general state may be expressed as $\left| \phi _{i}\right\rangle =2^{-1/2}(\left| \psi _{i}\right\rangle +\exp (i\phi _{i})\left| \psi _{i}^{\perp }\right\rangle )$, where $\left\{ \left| \psi _{i}\right\rangle ,\left| \psi _{i}^{\perp }\right\rangle \right\} $ is a convenient normalized basis with $\left\langle \psi _{i}\mid \psi _{i}^{\perp }\right\rangle =0$ \cite{Brub00}. Precisely, in the general case the $N\rightarrow M$ phase-covariant cloning map $C_{NM}$ satisfies the covariance relation $C_{NM}\left( T_{\phi i}^{\otimes N}\rho _{N}T_{\phi i}^{\dagger \otimes N}\right) =T_{\phi i}^{\otimes M}C_{NM}\left( \rho _{N}\right) T_{\phi i}^{\dagger \otimes M}$, where $T_{\phi i}=\exp [-\frac{i}{2}\phi _{i}\sigma _{i}].$ Here the Pauli operator $\sigma _{i}$ identifies the set of input states which are cloned; e.g., $\sigma _{Y}$ corresponds to states belonging to the $x$-$z$ plane of the Bloch sphere. The values of the optimal fidelities ${\cal F}_{cov}^{N\rightarrow M}$ for this machine have been found \cite{DAri03}. Restricting the analysis, as we do in the present paper, to a single input qubit ($N=1$) cloned into $M>1$ copies, the cloning fidelity is ${\cal F}_{cov}^{1\rightarrow M}=\frac{1}{2}\left( 1+\frac{M+1}{2M}\right) $ for $M$ assuming odd values, or ${\cal F}_{cov}^{1\rightarrow M}=\frac{1}{2}\left( 1+\frac{\sqrt{M\left( M+2\right) }}{2M}\right) $ for $M$ even. In particular we have ${\cal F}_{cov}^{1\rightarrow 2}=0.854$ and ${\cal F}_{cov}^{1\rightarrow 3}=0.833$, to be compared with the corresponding figures valid for universal cloning: ${\cal F}_{univ}^{1\rightarrow 2}=0.833$ and ${\cal F}_{univ}^{1\rightarrow 3}=0.778.$ In the above perspective it is worthwhile to highlight the deep connection between the cloning processes and the theory of quantum measurement \cite{Brus98}.
Indeed the concept of universal quantum cloning is related to the problem of optimal quantum state estimation since, for $M\rightarrow \infty $ and $\beta \longrightarrow 0$, the cloning fidelity converges toward the fidelity of state estimation of an arbitrary unknown qubit: ${\cal F}_{univ}^{N\rightarrow M}\rightarrow {\cal F}_{estim}^{N}=\frac{N+1}{N+2}$ \cite{Mass95}. In a similar way, phase-covariant cloning is connected with the estimation of an equatorial qubit, that is, with the problem of finding the optimal strategy to estimate the value of the phase $\phi $ \cite{Hole82,Derk98}. The optimal strategy has been found in \cite{Derk98}: it consists of a POVM corresponding to a von Neumann measurement on the $N$ input qubits, characterized by a set of $N+1$ orthogonal projectors, and achieves a fidelity ${\cal F}_{phase}^{N}$. In general, for $M\rightarrow \infty ,$ ${\cal F}_{cov}^{N\rightarrow M}\rightarrow {\cal F}_{phase}^{N}.$ In particular, for odd values of $M$ we have ${\cal F}_{cov}^{1\rightarrow M}={\cal F}_{phase}^{1}+\frac{1}{4M}$ with ${\cal F}_{phase}^{1}=3/4.$ Recently the experimental realization of the $1\rightarrow 3$ PQCM has been reported by adopting the methods of quantum optics \cite{Scia05}. The present article introduces in Section I different alternative approaches to implement the $1\rightarrow 3$ device within any quantum information technique. In Section II such methods are generalized to any $1\rightarrow M$ PQCM for odd values of $M$. There the corresponding theoretical analysis, based on the well-established $\left| J,J_{z}\right\rangle $ angular momentum formalism of a general $J$-spin system, will be given. Finally, in Section III different experimental schemes that can be adopted for single-photon polarization-encoded qubits, based on either linear or non-linear methods, will be presented.
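As a quick numerical sanity check of the fidelity values quoted above, the following sketch (our own illustration, not part of the original analysis) evaluates the optimal phase-covariant and universal cloning fidelities for $N=1$ and verifies the odd-$M$ identity ${\cal F}_{cov}^{1\rightarrow M}=3/4+1/(4M)$:

```python
from math import sqrt

def F_cov(M):
    """Optimal 1 -> M phase-covariant cloning fidelity (odd vs. even M)."""
    if M % 2 == 1:
        return 0.5 * (1 + (M + 1) / (2 * M))
    return 0.5 * (1 + sqrt(M * (M + 2)) / (2 * M))

def F_univ(M, N=1):
    """Optimal universal N -> M cloning fidelity (N + 1 + N/M) / (N + 2)."""
    return (N + 1 + N / M) / (N + 2)

# values quoted in the text
assert abs(F_cov(2) - 0.854) < 1e-3 and abs(F_cov(3) - 0.833) < 1e-3
assert abs(F_univ(2) - 0.833) < 1e-3 and abs(F_univ(3) - 0.778) < 1e-3

# for odd M the phase-covariant fidelity equals F_phase^1 + 1/(4M) = 3/4 + 1/(4M)
for M in range(3, 2001, 2):
    assert abs(F_cov(M) - (0.75 + 1 / (4 * M))) < 1e-12
```

The even-$M$ expression only approaches $3/4+1/(4M)$ asymptotically, which is why the identity above is restricted to odd $M$.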
\section{Realization of the 1$\rightarrow $3 phase-covariant cloning machine} In the present Section we describe two different techniques to implement the $1\rightarrow 3$ PQCM. ({\bf a}) The first method combines a $1\rightarrow 2$ UQCM with a spin flipper $\sigma _{i}$ and a projection of the output qubits onto the symmetric subspace: Fig. 1-({\bf a}). ({\bf b}) The second one exploits the symmetrization of the input qubit with an ancillary entangled pair: Fig. 1-({\bf b}). \begin{figure} \caption{Scheme for the realization of the $1\rightarrow 3$ PQCM: ({\bf a}) first method; ({\bf b}) second method.} \label{fig1} \end{figure} We first describe approach ({\bf a}), introduced in \cite{Scia05}. The input qubit is expressed as $\left| \phi \right\rangle _{S}=2^{-{\frac12}}(\left| R\right\rangle _{S}+\exp (i\phi _{Y})\left| L\right\rangle _{S})=\alpha \left| 0\right\rangle _{S}+\beta \left| 1\right\rangle _{S}$, with $\left\langle R\mid L\right\rangle =0$, $\left| \alpha \right| ^{2}+\left| \beta \right| ^{2}=1$, and $\alpha $, $\beta $ real parameters. Here we consider, in particular, the $\phi _{Y}$-covariant cloning, and $\sigma _{i}=\sigma _{Y}$ realizes the NOT gate for the qubits belonging to the $x$-$z$ plane. The output state of the $1\rightarrow 2$ UQCM device reads: \begin{equation} \begin{aligned} &\left| \Sigma \right\rangle _{SAB}=\sqrt{\frac{2}{3}}\left| \phi \right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}\\&-\frac{1}{\sqrt{6}}\left( \left| \phi \right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}+\left| \phi ^{\perp }\right\rangle _{S}\left| \phi \right\rangle _{A}\right) \left| \phi \right\rangle _{B} \end{aligned} \end{equation} where the qubits $S$ and $A$ are the optimal cloned qubits while the qubit $B$ is the optimally flipped one. According to the scheme represented in Fig. 1-({\bf a}), the idea is now to exactly flip the qubit $B$ for a given subset of the Bloch sphere.
This local flipping transformation of $\left| \phi \right\rangle _{B}$ leads to: $\left| \Upsilon \right\rangle _{SAB}=( I_{S}\otimes I_{A}\otimes \sigma _{Y})\left| \Sigma \right\rangle _{SAB}=\sqrt{\frac{2}{3}} \left| \phi \right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi \right\rangle _{B}-\frac{1}{\sqrt{6}}\left( \left| \phi \right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}+\left| \phi ^{\perp }\right\rangle _{S}\left| \phi \right\rangle _{A}\right) \left| \phi ^{\perp }\right\rangle _{B}$. By this non-universal cloning process three {\it asymmetric} copies have been obtained: two clones (qubits $S$ and $A$) with fidelity $5/6$, and a third one (qubit $B$) with fidelity $2/3$. We may now project $S$, $A$ and $B$ onto the symmetric subspace and obtain three symmetric clones with a higher average fidelity. The symmetrization operator $\Pi _{SAB}^{3}$ reads $\Pi _{SAB}^{3}$= $\left| \Pi _{1}\right\rangle \left\langle \Pi _{1}\right| +\left| \Pi _{2}\right\rangle \left\langle \Pi _{2}\right| +\left| \Pi _{3}\right\rangle \left\langle \Pi _{3}\right| +\left| \Pi _{4}\right\rangle \left\langle \Pi _{4}\right| $ where $\left| \Pi _{1}\right\rangle =\left| \phi \right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi \right\rangle _{B}$, $\left| \Pi _{2}\right\rangle =\left| \phi ^{\perp }\right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}$, $\left| \Pi _{3}\right\rangle =\frac{1}{\sqrt{3}}\left( \left| \phi \right\rangle \left| \phi ^{\perp }\right\rangle \left| \phi ^{\perp }\right\rangle +\left| \phi ^{\perp }\right\rangle \left| \phi \right\rangle \left| \phi ^{\perp }\right\rangle +\left| \phi ^{\perp }\right\rangle \left| \phi ^{\perp }\right\rangle\left| \phi \right\rangle \right) $ and $\left| \Pi _{4}\right\rangle $= $\frac{ 1}{\sqrt{3}}\left( \left| \phi \right\rangle \left| \phi \right\rangle \left| \phi ^{\perp }\right\rangle +\left| \phi ^{\perp }\right\rangle \left| \phi \right\rangle
\left| \phi \right\rangle +\left| \phi \right\rangle \left| \phi ^{\perp }\right\rangle \left| \phi \right\rangle \right) $. The symmetric subspace has dimension 4 since three qubits are involved. The probability of success of the projection is equal to $\frac{8}{9}$. The normalized output state $\left| \xi \right\rangle _{SAB}\propto \Pi _{SAB}^{3}\left| \Upsilon \right\rangle _{SAB}$ is \begin{widetext} \begin{equation} \left| \xi \right\rangle _{SAB}= \frac{1}{2} \sqrt{3}[\left| \phi \right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi \right\rangle _{B}-3^{-1}\left( \left| \phi \right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}+\left| \phi ^{\perp }\right\rangle _{S}\left| \phi \right\rangle _{A}\left| \phi ^{\perp }\right\rangle _{B}+\left| \phi ^{\perp }\right\rangle _{S}\left| \phi ^{\perp }\right\rangle _{A}\left| \phi \right\rangle _{B}\right) ] \label{outputPQCM} \end{equation} \end{widetext} Let us now evaluate the output reduced density matrices of the qubits $S$, $A$ and $B$: $\rho _{S}=\rho _{A}=\rho _{B}=\frac{5}{6}\left| \phi \right\rangle \left\langle \phi \right| +\frac{1}{6}\left| \phi ^{\perp }\right\rangle \left\langle \phi ^{\perp }\right| $. This leads to the fidelity ${\cal F}_{cov}^{1\rightarrow 3}=5/6$, equal to the optimal value obtained in the general case \cite{Brub00,DAri03}. By applying a different unitary operator $\sigma _{i}$ to the qubit $B$ we can implement the phase-covariant cloning for the corresponding equatorial plane of the Bloch sphere, orthogonal to the $i$-axis. Let us now consider the second approach ({\bf b}), which represents an innovative simplification of the previous scheme.
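Before turning to approach ({\bf b}), the success probability $8/9$ and the fidelity $5/6$ claimed for approach ({\bf a}) can be checked numerically. The sketch below is our own illustration (not the original derivation); by covariance it suffices to test the representative choice $|\phi\rangle=|0\rangle$, $|\phi^{\perp}\rangle=|1\rangle$. The symmetric projector is built as the average of all qubit-permutation operators:

```python
import numpy as np
from itertools import permutations
from math import factorial

def sym_projector(n=3):
    """Projector onto the symmetric subspace of n qubits:
    average of all n! qubit-permutation operators."""
    dim = 2 ** n
    P = np.zeros((dim, dim))
    for perm in permutations(range(n)):
        for i in range(dim):
            bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
            j = sum(bits[perm[q]] << (n - 1 - q) for q in range(n))
            P[j, i] += 1
    return P / factorial(n)

# UQCM output |Sigma> for |phi>=|0>, |phi_perp>=|1> (qubit order S, A, B)
sigma_state = np.zeros(8, dtype=complex)
sigma_state[0b001] = np.sqrt(2 / 3)
sigma_state[0b010] = -1 / np.sqrt(6)
sigma_state[0b100] = -1 / np.sqrt(6)

# flip qubit B with sigma_Y
sy = np.array([[0, -1j], [1j, 0]])
ups = np.kron(np.eye(4), sy) @ sigma_state

# project onto the symmetric subspace and normalize
proj = sym_projector(3) @ ups
p_succ = np.vdot(proj, proj).real            # success probability of the projection
xi = proj / np.sqrt(p_succ)

# reduced density matrix of qubit S; A and B are identical by symmetry
t = xi.reshape(2, 2, 2)
rho_S = np.einsum('iab,jab->ij', t, t.conj())

assert abs(p_succ - 8 / 9) < 1e-12
assert abs(rho_S[0, 0].real - 5 / 6) < 1e-12  # fidelity 5/6
```

The partial trace over $A$ and $B$ is done by the index contraction in `einsum`; the same check passes for any real $\alpha,\beta$ in place of the basis choice above.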
The PQCM device can be realized by applying the symmetrization projection $\Pi _{SAB}^{3}$ to the input qubit and to an ancillary entangled pair $\left| \Phi ^{+}\right\rangle _{AB}=\frac{1}{\sqrt{2}}\left( \left| 0\right\rangle _{A}\left| 0\right\rangle _{B}+\left| 1\right\rangle _{A}\left| 1\right\rangle _{B}\right) .$ The output state reads: \begin{equation} \Pi _{SAB}^{3}\left( \left| \phi \right\rangle _{S}\otimes \left| \Phi ^{+}\right\rangle _{AB}\right) \propto \left| \xi \right\rangle _{SAB} \end{equation} Again the qubits $S$, $A$ and $B$ are found to be the optimal phase-covariant clones of the input one. By modifying the ancillary entangled state, the set of cloned states is changed. The state $\left| \Psi ^{+}\right\rangle _{AB}$ leads to the PQCM for the $y$-$z$ plane, while $\left| \Phi ^{-}\right\rangle _{AB}$ leads to that for the $x$-$y$ plane. This result is at variance with the one found for the universal cloning process \cite{Scia04}. Indeed the $1\rightarrow 3$ UQCM transformation can be achieved by applying the projector $\Pi _{SAB}^{3}$ to the qubit $\left| \phi \right\rangle _{S}$ and to two ancilla qubits, each in the fully mixed state $\frac{I}{2}$. \section{General approach: 1$\rightarrow $M device} In the present Section the two previous approaches are generalized to the realization of the $1\rightarrow M=2P-1$ PQCM: the first one ({\bf a}) exploits the universal cloning machine, covariant flipping and final symmetrization, while the second one ({\bf b}) is based on an appropriate symmetrization of the input qubit with entangled qubit pairs. \begin{figure} \caption{General scheme for the realization of the $1\rightarrow (2P-1)$ PQCM: ({\bf a}) first method; ({\bf b}) second method.} \label{fig2} \end{figure} Let us consider the scheme of Fig.2-({\bf a}).
The UQCM broadcasts the information of the input qubit over $2P-1$ qubits. The overall output state after the UQCM map reads \begin{equation} \left| \Omega ^{\prime }\right\rangle =\sum_{k=0}^{P-1}b_{k}\left| \left\{ (P-k)\phi ;k\phi ^{\perp }\right\} \right\rangle _{C}\otimes \left| \left\{ k\phi ;\left( P-1-k\right) \phi ^{\perp }\right\} \right\rangle _{AC} \end{equation} where $b_{k}=\left( -1\right) ^{k}\sqrt{\frac{2}{P+1}}\sqrt{\frac{(P-1)!(P-k)!}{P!(P-1-k)!}}$ and the notation $\left| \left\{ p\phi ;q\phi ^{\perp }\right\} \right\rangle $ stands for the totally symmetric combination of $p$ qubits in the state $\left| \phi \right\rangle $ and $q$ qubits in the state $\left| \phi ^{\perp }\right\rangle $ \cite{Buze96}. The labels $C$ and $AC$ identify, respectively, the cloning and anticloning subsystems. Hereafter, we assume the input qubit to be in the state $\left| \phi \right\rangle =\left| 0\right\rangle $ without loss of generality. The $P$ qubits of the set $C$ exhibit a fidelity of the cloning process equal to ${\cal F}_{1\rightarrow P}=\frac{2+\beta }{3}$ with $\beta =1/P,$ while the $P-1$ qubits of the set $AC$ exhibit a fidelity of the flipping process equal to ${\cal F}_{1\rightarrow (P-1)}^{\ast }=\frac{2}{3}$. We associate a spin-$\frac{1}{2}$ system with each qubit. The previous expression can then be rewritten by exploiting the angular momentum formalism $\left| J,J_{z}\right\rangle $ of a general $J$-spin system.
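The normalization of the coefficients $b_k$ and the two fidelities just quoted follow directly from the combinatorial expression; the short sketch below (our own illustrative check, not part of the original text) verifies them numerically:

```python
from math import factorial

def b2(P, k):
    """|b_k|^2 from the UQCM output expansion."""
    return (2.0 / (P + 1)) * factorial(P - 1) * factorial(P - k) \
        / (factorial(P) * factorial(P - 1 - k))

for P in range(1, 30):
    w = [b2(P, k) for k in range(P)]
    assert abs(sum(w) - 1.0) < 1e-9          # the |b_k|^2 sum to one
    # clone fidelity: fraction of C qubits in |phi> is (P-k)/P
    Fc = sum(wk * (P - k) / P for k, wk in enumerate(w))
    assert abs(Fc - (2 + 1.0 / P) / 3) < 1e-9
    if P > 1:
        # flip fidelity: fraction of AC qubits in |phi_perp> is (P-1-k)/(P-1)
        Ff = sum(wk * (P - 1 - k) / (P - 1) for k, wk in enumerate(w))
        assert abs(Ff - 2.0 / 3) < 1e-9
```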
The overall state in the basis $\left| j;m_{j}\right\rangle _{C}\otimes \left| j;m_{j}\right\rangle _{AC}$ reads \begin{equation} \left| \Omega ^{\prime }\right\rangle =\sum_{k=0}^{P-1}b_{k}\left| \frac{P}{2 };\frac{P}{2}-k\right\rangle _{C}\otimes \left| \frac{P-1}{2};\frac{-(P-1)}{2 }+k\right\rangle _{AC} \label{OutputCloning} \end{equation} In the above representation, the overall output state of the cloner is written as the composition of two angular momenta ${\bf J}_{C}$ and ${\bf J}_{AC}$, defined respectively over the ``cloning'' and ``anticloning'' output channels. We note that the qubits $AC$ assume the maximum allowed value of $J=\frac{P-1}{2}$; thus they lie in the symmetric subspace, in analogy with the clones. In the next step, a covariant flipping process is applied to the subspace $AC$, transforming $\left| \Omega ^{\prime }\right\rangle $ into \begin{equation} \begin{aligned} &\left| \Omega ^{\prime \prime }\right\rangle =I_{C}\otimes \left( \sigma _{Y}^{\otimes (P-1)}\right) _{AC}\left| \Omega ^{\prime }\right\rangle \\&=\sum_{k=0}^{P-1}b_{k}\left| \frac{P}{2};\frac{P}{2} -k\right\rangle _{C}\otimes \left| \frac{P-1}{2};\frac{(P-1)}{2} -k\right\rangle _{AC} \end{aligned} \end{equation} This expression holds for any qubit belonging to the equatorial plane under consideration. Let us now express $\left| \Omega ^{\prime \prime }\right\rangle $ adopting the overall angular momentum ${\bf J}_{T}={\bf J}_{C}+{\bf J}_{AC}$ in the basis $\left| j_{C};j_{AC};j_{T};m_{T}\right\rangle $ \begin{equation} \left| \Omega ^{\prime \prime }\right\rangle =\sum_{j_{T}=1/2}^{(2P-1)/2} \sum_{m_{T}=-j_{T}}^{j_{T}}c(j_{T},m_{T})\left| \frac{P}{2};\frac{ P-1}{2};j_{T};m_{T}\right\rangle \end{equation} where $c(j_{T},m_{T})$ can be derived exploiting the Clebsch-Gordan coefficients $\left\langle j_{1};j_{2};m_{1};m_{2}\right| \left.
j_{1};j_{2};j_{T};m_{T}\right\rangle $ with $j_{1}=\frac{P}{2}$, $j_{2}=\frac{P-1}{2}$, $m_{1k}=\frac{P}{2}-k$, $m_{2k}=\frac{(P-1)}{2}-k$ \cite{Edmonds}. To complete the protocol, the overall output state is symmetrized by applying the projector $\Pi ^{M}$ with $M=2P-1$, defined as: $\Pi ^{M}=\sum_{j=0}^{M}\left| \frac{P}{2};\frac{P-1}{2};\frac{M}{2};\frac{M}{2} -j\right\rangle \left\langle \frac{P}{2};\frac{P-1}{2};\frac{M}{2};\frac{M}{2 }-j\right| .$ The non-vanishing contributions to the projected state come from the terms with $j_{T}=\frac{2P-1}{2}.$ After the action of $\Pi ^{M}$ we obtain the output state \begin{widetext} \begin{equation} \left| \Omega ^{\prime \prime \prime }\right\rangle =\Pi ^{M}\left| \Omega ^{\prime \prime }\right\rangle =\sum_{k=0}^{P-1}d_{k}\left| \frac{P}{2}; \frac{P-1}{2};\frac{2P-1}{2};\frac{2P-1}{2}-2k\right\rangle \label{outputNM} \end{equation} with \begin{eqnarray} d_{k} &=&b_{k}\left\langle \frac{P}{2};\frac{P-1}{2};\frac{P}{2}-k;\frac{ (P-1)}{2}-k\right| \left.
\frac{P}{2};\frac{P-1}{2};\frac{2P-1}{2};\frac{2P-1 }{2}-2k\right\rangle = \\ &=&\left( -1\right) ^{k}\sqrt{\frac{2}{P+1}} {P-1 \choose k} {2P-1 \choose 2k} ^{-1/2} \nonumber \end{eqnarray} \end{widetext} The normalization factor reads \begin{equation} \left| \Pi ^{M}\left| \Omega ^{\prime \prime }\right\rangle \right| ^{2}= \frac{2}{P+1}\sum_{k=0}^{P-1}\frac{ {P-1 \choose k} ^{2}}{ {2P-1 \choose 2k} } \end{equation} The fidelities of the phase-covariant cloning process can be inferred re-arranging the output state (\ref{outputNM}) as follows \begin{equation} \left| \Omega ^{M}\right\rangle =\sum_{k=0}^{2P-1}d_{k}\left| \left\{ (2P-1-2k)\phi ;2k\phi ^{\perp }\right\} \right\rangle \end{equation} All the $2P-1$ qubits belonging to such state have an identical reduced density matrix equal to \begin{equation} \rho _{cov}=\gamma (P)\left| \phi \right\rangle \left\langle \phi \right| +(1-\gamma (P))\left| \phi ^{\perp }\right\rangle \left\langle \phi ^{\perp }\right| \label{reducedqubits} \end{equation} with \[ \gamma (P)=\frac{\sum_{k=0}^{P-1}\frac{(2P-1-2k)}{(2P-1)}\frac{ {P-1 \choose k} ^{2}}{ {2P-1 \choose 2k} }}{\sum_{k=0}^{P-1}\frac{ {P-1 \choose k} ^{2}}{ {2P-1 \choose 2k} }}=\frac{1}{2}\left( 1+\frac{M+1}{2M}\right) \] The previous expression has been demonstrated numerically, for value of $M$ up to 2000. The fidelity of the cloning process is thus \begin{equation} {\cal F}_{1\rightarrow M}=\left\langle \phi \right| \rho _{cov}\left| \phi \right\rangle =\frac{1}{2}\left( 1+\frac{M+1}{2M}\right) \end{equation} and is found equal to the optimal one. As alternative approach, the $1\rightarrow M$ PQCM\ device can be obtained by applying the symmetrization projector $\Pi ^{M}$ over the input qubit and $(P-1)$ ancilla entangled pairs $\left| \Phi ^{+}\right\rangle _{AB}$: Fig.2-({\bf b}). Such a result can easily be obtained by manipulating the scheme of Fig.2-({\bf a}) as follows. The UQCM of Fig. 
2-({\bf a}) can be realized starting from the input qubit $\left| \phi \right\rangle $ and $(P-1)$ entangled pairs $\left| \Psi ^{-}\right\rangle _{AB}$, as shown in Ref. \cite{Scia04}. The cloning map is achieved by symmetrization of the input qubit and the $(P-1)$ ancilla qubits $A$, each one belonging to an entangled pair $\left| \Psi ^{-}\right\rangle _{AB}$ \begin{equation} \Pi _{SA}^{P}\otimes I_{B}^{P-1}(\left| \phi \right\rangle _{S}\left| \Psi ^{-}\right\rangle _{AB}^{\otimes (P-1)}) \end{equation} The output state is equal to $\left| \Omega ^{\prime }\right\rangle $ of Eq.~(\ref{OutputCloning}) up to a normalization factor. To implement the PQCM device, the covariant flipping $\sigma _{Y}$ is then applied to the $(P-1)$ qubits belonging to the subset $B$. The same result can be obtained starting from the input state $\left| \Phi ^{+}\right\rangle _{AB}^{\otimes (P-1)}$; indeed \begin{equation} \begin{aligned} &\left( I_{SA}^{P}\otimes \sigma _{Y-B}^{\otimes (P-1)}\right) \left( \Pi _{SA}^{P}\otimes I_{B}^{P-1}(\left| \phi \right\rangle _{S}\left| \Psi ^{-}\right\rangle _{AB}^{\otimes (P-1)})\right) \\&=\left( \Pi _{SA}^{P}\otimes I_{B}^{P-1}(\left| \phi \right\rangle _{S}\left| \Phi ^{+}\right\rangle _{AB}^{\otimes (P-1)})\right) \end{aligned} \end{equation} As the final step, the overall state is projected onto the symmetric subspace through the projector $\Pi _{SAB}^{2P-1}$: \begin{eqnarray} \left| \Omega ^{\prime \prime \prime }\right\rangle &=&\Pi _{SAB}^{2P-1}\left( \left( \Pi _{SA}^{P}\otimes I_{B}^{P-1}(\left| \phi \right\rangle \left| \Phi ^{+}\right\rangle _{AB}^{\otimes (P-1)})\right) \right) \\ &=&\Pi _{SAB}^{2P-1}\left( \left| \phi \right\rangle _{S}\left| \Phi ^{+}\right\rangle _{AB}^{\otimes (P-1)}\right) \end{eqnarray} In the previous expression we have exploited the concatenation property of the symmetrization projectors, $\Pi _{SAB}^{2P-1}\left( \Pi _{SA}^{P}\otimes I_{B}^{P-1}\right) =\Pi _{SAB}^{2P-1}$, which has been demonstrated
experimentally in Ref. \cite{Masu05}. This concludes our simple proof of the scheme of Fig.2-({\bf b}). \section{Realization by quantum optics} In quantum optics the qubit can be implemented by exploiting the isomorphism between the qubit state $\left| \phi \right\rangle =\alpha \left| 0\right\rangle +\beta \left| 1\right\rangle $ and the polarization state $\alpha \left| H\right\rangle +\beta \left| V\right\rangle $ of a single photon. In this context it has been proposed to realize the unitary transformation $U_{N\rightarrow M}$ leading to the deterministic UQCM by means of the ``quantum-injected'' optical parametric amplification (QIOPA) in the entangled configuration. The experimental demonstrations of both the optimal cloning and flipping processes by exploiting this technique have been reported in \cite{Lama02,Scia04,Irvi04}. At the same time, a different scenario has been disclosed by the discovery that it is possible to implement simultaneously the $1\rightarrow 2$ universal quantum cloning machine (UQCM) and the $1\rightarrow 1$ universal NOT gate by modifying the quantum state teleportation protocol \cite{Ricc04,Scia04}. The latter procedure is based on a symmetric projective operation realized by combining single-photon interferometry and post-selection techniques, and it can be extended to the generic $N\rightarrow M$ cloning device. \begin{figure} \caption{Linear methods: ({\bf A}) implementation of the scheme of Fig. 1-({\bf a}); ({\bf B}) implementation of the scheme of Fig. 1-({\bf b}).} \label{fig3} \end{figure} The symmetrization of two polarization-encoded qubits can be achieved by letting two independent qubits impinge onto the input arms of a beam splitter (BS) in a Hong-Ou-Mandel interferometer \cite{Hong87}, and then by probabilistically post-selecting the events in which the two photons emerge in the same spatial output mode.
The basic principle at the heart of these realizations is the following: the two photons are initially superimposed at the BS interface in order to make them indistinguishable; then, a spatially symmetric wavefunction of the two photons is post-selected by the measurement apparatus. Such a scheme can be extended in a controlled way to a higher number of photons, as shown in Ref. \cite{Masu05}. There a linear-optics multi-qubit symmetrization apparatus has been realized by a chain of interconnected Hong-Ou-Mandel interferometers. Here we introduce a variety of schemes which can be realized through the methods of quantum optics outlined above. While we restrict our attention to the $1\rightarrow 3$ PQCM, the schemes below can be easily extended to the general case $1\rightarrow M$ for odd values of $M$, following the guidelines of the previous Section. Let us first consider the linear-optics approach. Fig. 3 shows the experimental schemes implementing the method of Fig. 1-({\bf a}), panel ({\bf A}), and that of Fig. 1-({\bf b}), panel ({\bf B}). The flipping operation $\sigma _{Y}$ is realized by means of two $\lambda /2$ waveplates acting on the polarization state, while the symmetrization is implemented by overlapping the incoming photons on a beam splitter and post-selecting the events in which they emerge on the same mode, as described above. Such a scheme is similar to the one proposed by Zou {\it et al.} \cite{Zou05} to implement the $1\rightarrow 3$ PQCM for photonic qubits. Finally, the same results can be obtained by adopting non-linear methods. Let us consider the $1\rightarrow 3$ PQCM, in particular the optimal quantum cloning for $x$-$z$ equatorial qubits, by taking linear polarization states as input. The UQCM has been realized by adopting a quantum-injected optical parametric amplifier (QIOPA), while the $\sigma _{Y}$ operation and the $\Pi ^{3}$ projection have been implemented with linear optics and post-selection techniques, Fig. ({\bf a}).
The flipping operation on the output mode ${\bf k}_{AC}$ was realized by means of two $\lambda /2$ waveplates, while the physical implementation of the projector $\Pi ^{3}$ on the three-photon states was carried out by linearly superimposing the modes ${\bf k}_{C}$ and ${\bf k}_{AC}$ on the 50:50 beamsplitter $BS$ and then by selecting the cases in which the three photons emerged from $BS$ on the same output mode ${\bf k}_{PC}$ (or, alternatively, on ${\bf k}_{PC}^{\prime }$). Interestingly, the same overall state evolution can also be obtained, with no need for the final $BS$ symmetrization, at the output of a QIOPA with a type-II crystal working in a {\it collinear} configuration, ({\bf b}) \cite{DeMa98}. In this case the interaction Hamiltonian $\widehat{H}_{coll}=i\chi \hbar \left( \widehat{a}_{H}^{\dagger }\widehat{a}_{V}^{\dagger }\right) +h.c.$ acts on a single spatial mode $k$. A fundamental physical property of $\widehat{H}_{coll}$ is its rotational invariance under $U(1)$ transformations, that is, under any arbitrary rotation around the $z$-axis. Indeed $\widehat{H}_{coll}$ can be re-expressed as $\frac{1}{2}i\chi \hbar e^{-i\phi }\left( \widehat{a}_{\phi }^{\dagger 2}-e^{i2\phi }\widehat{a} _{\phi \perp }^{\dagger 2}\right) +h.c.$ for $\phi \in (0,2\pi )$, where $\widehat{a}_{\phi }^{\dagger }=2^{-1/2}(\widehat{a}_{H}^{\dagger }+e^{i\phi } \widehat{a}_{V}^{\dagger })$ and $\widehat{a}_{\phi \perp }^{\dagger }=2^{-1/2}(-e^{-i\phi }\widehat{a}_{H}^{\dagger }+\widehat{a}_{V}^{\dagger })$. Let us consider an injected single photon with polarization state $\left| \phi \right\rangle _{in}=2^{-1/2}(\left| H\right\rangle +e^{i\phi }\left| V\right\rangle )=\left| 1,0\right\rangle _{k}$, where $\left| m,n\right\rangle _{k}$ represents a product state with $m$ photons of the mode $k$ with polarization $\phi $ and $n$ photons with polarization $\phi ^{\perp }$.
The first contribution to the amplified state, $\sqrt{6}\left| 3,0\right\rangle _{k}-\sqrt{2}e^{i2\phi }\left| 1,2\right\rangle _{k}$, is identical to the output state obtained with the device introduced above, up to a phase factor which does not affect the fidelity value. \section{Conclusions} We have introduced different schemes to implement the optimal $1\rightarrow M$ ($M>1$) phase-covariant cloning machine, by exploiting either the QIOPA method or the projection onto the symmetric subspace. The introduced approaches are probabilistic; however, this feature does not spoil the main physical result of the present procedure, since the optimal fidelity value cannot be improved by any probabilistic implementation \cite{Fiur04}. The present schemes do not hold for even values of $M.$ Indeed it has been noticed that different features affect the $1\rightarrow 2P$ and $1\rightarrow \left( 2P-1\right) $ PQCM maps \cite{Brub00}. Recently an optical scheme to realize the $1\rightarrow 2$ PQCM has been proposed \cite{Fiur03} and realized experimentally \cite{Cern06}. The experimental realization of the different protocols with standard quantum optics techniques has been discussed. There we found an answer to the question recently raised by Scarani et al. \cite{Scar05} of whether it is possible to implement any cloning transformation different from the universal one using amplification through stimulated emission. We have just seen that this can be done directly either with linear-optics elements or by a nonlinear, quantum-injected optical parametric amplification process. The generalization of such schemes to a higher number of input qubits $N>1$ has been found to be non-optimal and hence deserves further investigation. Finally, we emphasize that the present cloning maps are economical, that is, they do not require any physical resources beyond the clone qubits \cite{Busc05}.
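A one-line bookkeeping check (our own illustration) confirms that this first-order contribution already carries the optimal fidelity $5/6$: the fidelity of each output clone is the mean fraction of photons with polarization $\phi$, weighted by the squared amplitudes $6$ and $2$ of the two components:

```python
# squared amplitudes of the (unnormalized) components |3,0> and |1,2>
weights = {(3, 0): 6.0, (1, 2): 2.0}
norm = sum(weights.values())

# fidelity of each clone = mean photon fraction with polarization phi
F = sum(w * m / (m + n) for (m, n), w in weights.items()) / norm
assert abs(F - 5 / 6) < 1e-12   # matches the optimal F_cov^{1->3}
```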
We acknowledge financial support from the Ministero dell'Istruzione, dell'Universit\`{a} e della Ricerca (PRIN 2005). \begin{references} \bibitem{Scar05} V. Scarani, S. Iblisdir, N. Gisin, and A. Ac\'{i}n, Rev. Mod. Phys. {\bf 77}, 1225 (2005). \bibitem{Cerf05} N. Cerf, and J. Fiurasek, quant-ph/0512172. \bibitem{DeMa05} F. De Martini, and F. Sciarrino, Progress in Quantum Electronics {\bf 29}, 165 (2005). \bibitem{Woot82} W.K. Wootters, and W.H. Zurek, Nature (London) {\bf 299}, 802 (1982). \bibitem{Bech99} H. Bechmann-Pasquinucci and N. Gisin, Phys. Rev. A {\bf 59}, 4238 (1999). \bibitem{Buze96} V. Bu\v{z}ek, and M. Hillery, Phys. Rev. A {\bf 54}, 1844 (1996); N. Gisin, and S. Massar, Phys. Rev. Lett. {\bf 79}, 2153 (1997); R. Derka, V. Buzek and A. Ekert, Phys. Rev. Lett. {\bf 80}, 1571 (1998). \bibitem{DeMa02} F. De Martini, V. Bu\v{z}ek, F. Sciarrino, and C. Sias, Nature (London) {\bf 419}, 815 (2002); D. Pelliccia, et al., Phys. Rev. A {\bf 68}, 042306 (2003); F. De Martini, D. Pelliccia, and F. Sciarrino, Phys. Rev. Lett. {\bf 92}, 067901 (2004). \bibitem{Lama02} A. Lamas-Linares, C. Simon, J.C. Howell, and D. Bouwmeester, Science {\bf 296}, 712 (2002). \bibitem{Fase02} S. Fasel, {\it et al.}, Phys. Rev. Lett. {\bf 89}, 107901 (2002). \bibitem{Cumm02} H.K. Cummins {\it et al.,} Phys. Rev. Lett. {\bf 88}, 187901 (2002). \bibitem{Ricc04} M. Ricci, F. Sciarrino, C. Sias, and F. De Martini, Phys. Rev. Lett. {\bf 92}, 047901 (2004); F. Sciarrino, C. Sias, M. Ricci, and F. De Martini, Phys. Lett. A {\bf 323}, 34 (2004). \bibitem{Scia04} F. Sciarrino, C. Sias, M. Ricci, and F. De Martini, Phys. Rev. A {\bf 70}, 052305 (2004). \bibitem{Irvi04} W.T.M. Irvine, A. Lamas Linares, M.J.A. de Dood, and D. Bouwmeester, Phys. Rev. Lett. {\bf 92}, 047902 (2004). \bibitem{Gisi02} N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Rev. Mod. Phys. {\bf 74}, 145 (2002). \bibitem{Brub00} D. Bru{\ss}, M. Cinchetti, G.M. D'Ariano, and C. Macchiavello, Phys. Rev.
A {\bf 62}, 012302 (2000); G.M. D'Ariano, and P. Lo Presti, Phys. Rev. A {\bf 64}, 042308 (2001); H. Fan, K. Matsumoto, X. Wang, and M. Wadati, Phys. Rev. A {\bf 65}, 012304 (2001). \bibitem{DAri03} D. Bru{\ss} and C. Macchiavello, J. Phys. A {\bf 34}, 6815 (2001); G.M. D'Ariano, and C. Macchiavello, Phys. Rev. A {\bf 67}, 042306 (2003). \bibitem{Brus98} D. Bru{\ss}, A. Ekert, and C. Macchiavello, Phys. Rev. Lett. {\bf 81}, 2598 (1998). \bibitem{Mass95} S. Massar and S. Popescu, Phys. Rev. Lett. {\bf 74}, 1259 (1995). \bibitem{Hole82} A.S. Holevo, {\it Probabilistic and Statistical Aspects of Quantum Theory} (North-Holland, Amsterdam, 1982), p. 163. \bibitem{Derk98} R. Derka, V. Buzek, and A. Ekert, Phys. Rev. Lett. {\bf 80}, 1571 (1998). \bibitem{Scia05} F. Sciarrino, and F. De Martini, Phys. Rev. A {\bf 72}, 062313 (2005). \bibitem{Edmonds} A.R. Edmonds, {\it Angular Momentum in Quantum Mechanics}, 2nd ed. (Princeton University Press, Princeton, 1960). \bibitem{Masu05} L. Masullo, M. Ricci, and F. De Martini, Phys. Rev. A {\bf 72}, 060304 (2005). \bibitem{Hong87} C.K. Hong, Z.Y. Ou, and L. Mandel, Phys. Rev. Lett. {\bf 59}, 2044 (1987). \bibitem{Zou05} X. Zou and W. Mathis, Phys. Rev. A {\bf 72}, 022306 (2005). \bibitem{DeMa98} F. De Martini, Phys. Lett. A {\bf 250}, 15 (1998). \bibitem{Fiur04} J. Fiurasek, Phys. Rev. A {\bf 70}, 032308 (2004). \bibitem{Fiur03} J. Fiurasek, Phys. Rev. A {\bf 67}, 052314 (2003). \bibitem{Cern06} A. Cernoch, et al., quant-ph/0607149. \bibitem{Busc05} F. Buscemi, G.M. D'Ariano, C. Macchiavello, Phys. Rev. A {\bf 71}, 042327 (2005); T. Durt, J. Fiurasek, N.J. Cerf, Phys. Rev. A {\bf 72}, 052322 (2005). \end{references} \end{document}
\begin{document} \title{Iterative quantum state transfer along a chain of nuclear spin qubits \footnote{Corresponding authors: Jingfu Zhang, [email protected], [email protected];\\ Dieter Suter, [email protected] }} \author{ Jingfu Zhang, Nageswaran Rajendran, Xinhua Peng, and Dieter Suter } \address{Fachbereich Physik, Universit\"{a}t Dortmund, 44221 Dortmund, Germany\\ } \date{\today} \begin{abstract} Transferring quantum information between two qubits is a basic requirement for many applications in quantum communication and quantum information processing. In the iterative quantum state transfer (IQST) proposed by D. Burgarth et al. [Phys. Rev. A 75, 062327 (2007)], this is achieved by a static spin chain and a sequence of gate operations applied only to the receiving end of the chain. The only requirement on the spin chain is that it transfers a finite part of the input amplitude to the end of the chain, where the gate operations accumulate the information. For an appropriate sequence of evolutions and gate operations, the fidelity of the transfer can asymptotically approach unity. We demonstrate the principle of operation of this transfer scheme by implementing it in a nuclear magnetic resonance quantum information processor. \end{abstract} \pacs{03.67.Lx} \maketitle \section{Introduction} Quantum state transfer (QST), i.e., the transfer of an arbitrary quantum state $\alpha|0\rangle+\beta|1\rangle$ from one qubit to another, is an important element in quantum computation and quantum communication \cite{books,Bose03,PST,Bose,thesis}. The most direct method to implement QST is based on SWAP operations \cite{swap}. This approach consists of a series of SWAP operations between neighboring qubits until the quantum state arrives at the target qubit. In a general-purpose quantum register, these quantum gates require the application of single- as well as two-qubit operations.
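As an illustration (not the scheme demonstrated in this paper), SWAP-based transfer on a three-qubit register can be sketched numerically; the input amplitudes below are arbitrary placeholders:

```python
import numpy as np

# Illustrative sketch of SWAP-based state transfer on a 3-qubit register.
# Qubit ordering: |q1 q2 q3>, q1 is the sender, q3 the receiver.
I2 = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

SWAP12 = np.kron(SWAP, I2)   # swap qubits 1 and 2
SWAP23 = np.kron(I2, SWAP)   # swap qubits 2 and 3

# Arbitrary input state alpha|0> + beta|1> on qubit 1, |00> on the rest.
alpha, beta = 0.6, 0.8j
psi = np.kron(np.array([alpha, beta]), np.kron([1, 0], [1, 0]))

# Nearest-neighbour SWAPs move the state to qubit 3.
psi = SWAP23 @ SWAP12 @ psi

# Qubit 3 now carries alpha|0> + beta|1>: the only nonzero amplitudes
# sit on |000> (index 0) and |001> (index 1).
assert np.allclose(psi[[0, 1]], [alpha, beta])
```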
For longer distances, the number of such operations can become quite large; it may then be advantageous to rely on quantum teleportation instead \cite{telep}, which requires fewer gate operations, but shared entanglement between sender and receiver. For specific systems, it is possible to transfer quantum information without applying gate operations, but instead relying on a static coupling network \cite{Bose03,PST}. The main difficulty with this approach is the required precision with which the couplings have to be realized in order to generate a transfer with high fidelity. This requirement can be relaxed significantly, without compromising the fidelity of the transfer, by applying gate operations to the receiving end of the spin chain that effects the transfer \cite{Bose}. The capability for applying such gate operations is not an additional requirement, since such operations are required anyway if the spin chain is to be used for communication between quantum registers. This gate accumulates any amplitude of the initial state that is transferred along the chain. The protocol allows one, in principle, to obtain unit fidelity for the transfer, even if the couplings along the chain have arbitrary fluctuations, as long as a finite amplitude reaches the end of the chain. Obtaining a large transfer amplitude requires multiple iterations, each of which includes the evolution of the spin chain and the two-qubit gate operation. The fidelity of the transfer increases with the number of iterations and can approach $1$ asymptotically. Hence we refer to this protocol as the iterative quantum state transfer (IQST). In this paper we implement the protocol in an NMR quantum information processor and demonstrate its basic feasibility. \section{Iterative transfer algorithm} \subsection{System} We illustrate the IQST proposed in Ref. \cite{Bose} using a system of three spins coupled by Heisenberg XY interactions, as shown in Figure \ref{cha}.
The spin chain consists of spins $1$ and $2$, which are coupled by a constant (time-independent) interaction. Spin 3 is the target spin used to receive the transferred quantum state. The interaction between spins $2$ and $3$ can be switched on and off. Our purpose is to transfer an arbitrary quantum state $\alpha|0\rangle+\beta|1\rangle$ from spin $1$ to spin $3$, where $\alpha$ and $\beta$ are two complex numbers normalized to $|\alpha|^2+|\beta|^2=1$. The Hamiltonian of the spin chain without the end qubit is \begin{equation}\label{Ham12} H_{12}=\frac{1}{2}\pi J_{12}(\sigma_{x}^{1}\sigma_{x}^{2}+\sigma_{y}^{1}\sigma_{y}^{2}) , \end{equation} where $J_{12}$ denotes the coupling strength. The Hamiltonian of spins $2$ and $3$ is \begin{equation}\label{Ham23} H_{23}(t)=\frac{1}{2}\pi J_{23}(t)(\sigma_{x}^{2}\sigma_{x}^{3}+\sigma_{y}^{2}\sigma_{y}^{3}) , \end{equation} where $J_{23}(t)$ is $J_{23}$ when the interaction is switched on and $0$ otherwise. \subsection{IQST algorithm} \label{IQSTtheory} The purpose of the IQST algorithm is the transfer of an arbitrary state $\alpha|0\rangle+\beta|1\rangle$ from the start of the chain (qubit 1) to the end (qubit 3). We start the discussion by choosing as the initial state of the complete 3-qubit system the state $\alpha|000\rangle+\beta|100\rangle$, i.e. a product state with spin $1$ in state $\alpha|0\rangle+\beta|1\rangle$, and spins $2$ and $3$ in $|0\rangle$. Transferring the $\alpha|0\rangle$ part of the input state is trivial, since spins 1 and 3 are in the same state and this state is invariant under the $XY$ interaction. We therefore only have to consider the $\beta|1\rangle$ part. The chosen initial state of the spin chain is not unique. We could, e.g., choose to start with the total system in $\alpha|011\rangle+\beta|111\rangle$. In this case, the $|111\rangle$ component is invariant and only the transfer of the $\alpha|0\rangle$ part needs to be considered. At the end of this section, we discuss additional possibilities.
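The two XY Hamiltonians above, and the invariance of the fully polarized states used in this argument, can be verified directly; a minimal sketch with placeholder coupling values (not the experimental ones):

```python
import numpy as np

# Sketch: build the XY Hamiltonians H12 and H23 and check that the fully
# polarized states |000> and |111> are stationary under both couplings.
# J12, J23 are placeholder values, not the experimental couplings.
J12, J23 = 1.0, 1.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

def three(op1, op2, op3):
    # Tensor product over the three qubits |q1 q2 q3>.
    return np.kron(op1, np.kron(op2, op3))

H12 = 0.5 * np.pi * J12 * (three(sx, sx, I2) + three(sy, sy, I2))
H23 = 0.5 * np.pi * J23 * (three(I2, sx, sx) + three(I2, sy, sy))
H = H12 + H23   # both couplings switched on

ket000 = np.zeros(8); ket000[0] = 1.0
ket111 = np.zeros(8); ket111[7] = 1.0

# sx.sx + sy.sy annihilates |00> and |11>, so both states are invariant.
assert np.allclose(H @ ket000, 0)
assert np.allclose(H @ ket111, 0)
```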
The iterative transfer scheme of Burgarth et al. consists of a continuous evolution under the spin-chain Hamiltonian, interrupted by successive applications of the end-gate operation. We write the transfer operator as \begin{equation}\label{iter} T_{k}= \prod_{n=1}^{k}W^{23}(c_{n}, d_{n})U^{12}(\tau) \label{e.Tk} \end{equation} where \begin{equation}\label{U12t} U^{12}(\tau) = e^{-i\tau H_{12}}\otimes I^{3} = \left (\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & C_{12} & -i S_{12} & 0 \\ 0 & -i S_{12} & C_{12} & 0 \\ 0 & 0 & 0 & 1 \end{array} \right ) \otimes \left ( \begin{array}{cc} 1 & 0\\ 0 & 1 \end{array} \right ) \end{equation} represents the evolution of the spin chain and \begin{equation}\label{psgate} W^{23}(c_{n},d_{n})= \left ( \begin{array}{cc} 1 & 0\\ 0 & 1 \end{array} \right ) \otimes \left (\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & d_{n}^{*} & c_{n}^{*} & 0 \\ 0 & -c_{n} & d_{n} & 0 \\ 0 & 0 & 0 & 1 \end{array} \right ) \end{equation} the end gate operation. Here, $C_{12} = \cos(\pi J_{12} \tau )$ and $S_{12} = \sin(\pi J_{12} \tau )$ and $n$ represents the iteration step. The parameters $c_n, d_n$ are related by the unitarity condition $|c_{n}|^{2}+|d_{n}|^{2}=1$. For each step of the iteration, they are equal to the coefficients of the relevant states $|010\rangle$ and $|001\rangle$ just before the gate is applied. Under this condition, $$ W^{23}(c_{n},d_{n})(c_{n}|010\rangle+d_{n}|001\rangle)=|001\rangle , $$ i.e. the transfer to the final state $|001\rangle$ is maximized. During the $n^{th}$ step, the two coefficients are \begin{equation}\label{cn} c_{n}=-i\frac{S_{12}C^{n-1}_{12}}{\sqrt{1-C_{12}^{2n}}}, \label{Uini} \end{equation} \begin{equation} \label{dn} d_{n}=\sqrt{\frac{1-C_{12}^{2(n-1)}} {1-C_{12}^{2n}}}. \end{equation} \subsection{Quantification of transfer} After $k$ iterations, $|100\rangle$ is transferred to \begin{equation} \label{phik} |\Psi_{k}\rangle=T_{k}|100\rangle=C_{12}^{k}|100\rangle+ \sqrt{1-C_{12}^{2k}}|001\rangle. 
\end{equation} Evidently, the transfer increases monotonically with the number of iterations and can asymptotically approach unity provided $|C_{12}| < 1$. Writing $F_{k}=\langle001|\Psi_{k}\rangle$ for the overlap of the system with the target state, we find \begin{equation}\label{Fidelty} F_{k}=\sqrt{1-C_{12}^{2k}}. \end{equation} Eq. (\ref{e.Tk}) implies that only the spin chain or the end gate is active at a given time. If the spin chain interactions are static (not switchable), this can only be realized approximately if the coupling between the two end-gate qubits is much stronger than the couplings in the spin chain, $J_{23} \gg J_{12}$. In the NMR system, we instead refocus the spin-chain interaction during the application of the end-gate operation to better approximate the ideal operation \begin{equation}\label{impsg} W^{23}(c_n,d_n)=e^{-i\pi J_{23}t_{n}(\sigma_{x}^{2}\sigma_{x}^{3}+\sigma_{y}^{2}\sigma_{y}^{3})/2} \end{equation} where \begin{equation}\label{endgateimp} \tan(\pi J_{23}t_{n})=-ic_n/d_n \, . \end{equation} \subsection{Generalization to mixed states} The IQST algorithm also works when the spin chain is in a suitable mixed state. As an example, we choose $\alpha = \beta = \frac{1}{\sqrt{2}}$. The second and third qubits can be chosen in any combination of $|0\rangle$ and $|1\rangle$. Here, we implement all four possibilities in parallel \cite{parallel} by putting qubits 2 and 3 into the maximally mixed state $I^{2}\otimes I^{3}$, where $I$ denotes the unit operator and the upper index labels the qubit. The sample thus contains an equal number of molecules with qubits in the states $\alpha |0l\rangle + \beta |1l\rangle$ with $l =\{ 00, 01, 10, 11 \}$. The traceless part of the corresponding density operator is \cite{Chuang} \begin{equation}\label{ini} \rho_{ini} = \sum_{l=00}^{11}\sigma_{x}^{1}\otimes(|l\rangle\langle l|).
\end{equation} If the system is initially in one of the states $|l\rangle = |01 \rangle, |10 \rangle $, it acquires an overall phase factor of $-1$ during the transfer. Combining this with the results of Sec. \ref{IQSTtheory}, we find that after $k$ iterations, the system is in the state \begin{equation}\label{rhok} \rho_{k} = T_{k} \, \rho_{ini} \, T_{k}^{\dag} = \sqrt{1-F_k^2} \, \sigma^{1}_{x} \, I^2I^3+ F_k \, \sigma^{1}_{z}\sigma^{2}_{z}\sigma^{3}_{x}. \end{equation} Similarly, when the initial state is chosen as \begin{equation}\label{inimy} \rho_{ini} = \sum_{l=00}^{11}\sigma_{y}^{1}\otimes(|l\rangle\langle l|), \end{equation} the algorithm generates the state \begin{equation}\label{rhok2} \rho_{k} = T_{k} \rho_{ini} T_{k}^{\dag} = \sqrt{1-F_k^2} \, \sigma^{1}_{y}I^2I^3+ F_k \, \sigma^{1}_{z}\sigma^{2}_{z}\sigma^{3}_{y} \end{equation} after $k$ iterations. \section{Implementation} For the experimental implementation, we chose the $^{1}$H, $^{19}$F, and $^{13}$C spins of Ethyl 2-fluoroacetoacetate as qubits. The chemical structure of Ethyl 2-fluoroacetoacetate is shown in Figure \ref{2F}, where the three qubits are denoted as H1, F2, and C3, respectively. The strengths of the $J$-couplings are $J_{12}=48.5$ Hz, $J_{23}=-195.1$ Hz and $J_{13}=160.8$ Hz. $T_1$ and $T_2$ values for these three nuclei are listed in the right table in Figure \ref{2F}. In the rotating frame, the Hamiltonian of the three-qubit system is \cite{Chuang,Ernst,CoryPRL99} \begin{equation}\label{HamCHF} H_{NMR}=\frac{\pi}{2}(J_{12}\sigma^1_z\sigma^2_z +J_{23}\sigma^2_z\sigma^3_z+J_{13}\sigma^1_z\sigma^3_z). \end{equation} The sample consisted of a 3:1 mixture of unlabeled Ethyl 2-fluoroacetoacetate and d6-acetone. Molecules with a $^{13}$C nucleus at position 2, which we used as the quantum register, were therefore present at a concentration of about $1 \%$. They were selected against the background of molecules with $^{12}$C nuclei by measuring the $^{13}$C signal.
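As a numerical cross-check of the ideal dynamics, the iteration $T_{k}$ can be simulated in the single-excitation subspace spanned by $\{|100\rangle,|010\rangle,|001\rangle\}$; a sketch for the choice $\tau=1/5J_{12}$ used later in the experiment:

```python
import numpy as np

# Sketch of the ideal IQST iteration T_k restricted to the
# single-excitation subspace, basis order [|100>, |010>, |001>].
C = np.cos(np.pi / 5)           # C12 = cos(pi J12 tau) for tau = 1/(5 J12)
S = np.sin(np.pi / 5)           # S12 = sin(pi J12 tau)

# Spin-chain evolution U^{12}(tau) restricted to the subspace.
U12 = np.array([[C, -1j * S, 0],
                [-1j * S, C, 0],
                [0, 0, 1]])

def W23(c, d):
    # End-gate operation: maps c|010> + d|001> onto |001>.
    return np.array([[1, 0, 0],
                     [0, d, -c],
                     [0, np.conj(c), np.conj(d)]])

k = 3
psi = np.array([1.0, 0, 0], dtype=complex)   # start in |100>
for n in range(1, k + 1):
    psi = U12 @ psi
    c = -1j * S * C**(n - 1) / np.sqrt(1 - C**(2 * n))       # c_n
    d = np.sqrt((1 - C**(2 * (n - 1))) / (1 - C**(2 * n)))   # d_n
    psi = W23(c, d) @ psi

# After k steps the amplitude on |001> is F_k = sqrt(1 - C^{2k}).
F_k = np.sqrt(1 - C**(2 * k))
assert np.allclose(abs(psi[2]), F_k)
assert np.allclose(abs(psi[0]), C**k)
```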
We chose H1 as the input qubit and C3 as the target qubit. Figure \ref{inistate} (a) shows the $^{13}$C NMR spectrum obtained by applying a readout pulse to the system in its thermal equilibrium state. Each of the resonance lines is associated with a specific spin state of qubits 1 and 2. \subsection{Initial state preparation} The initial pseudo-pure state $|000\rangle$ is prepared by spatial averaging \cite{spatial}. The following radio-frequency (rf) and magnetic field gradient pulse sequence transforms the system from the equilibrium state \begin{equation}\label{equ} \rho_{eq}=\gamma_{1}\sigma_{z}^{1}+ \gamma_{2}\sigma_{z}^{2}+\gamma_{3}\sigma_{z}^{3} \end{equation} to $|000\rangle$: $[\phi_{1}]_{y}^{1}-[\phi_{2}]_{y}^{2}-[grad]_{z}-[\pi/2]^{1}_{x}-[1/2J_{13}] -[-\pi/2]^{1}_{y}-[\pi/4]^{3}_{x}-[-1/2J_{23}]-[-\pi/4]^{3}_{y}-[grad]_{z} -[\pi/4]^{1}_{x}-[1/2J_{13}]-[-\pi/4]^{1}_{y}-[grad]_{z}$. Here $\gamma_{1}$, $\gamma_{2}$ and $\gamma_{3}$ denote the gyromagnetic ratios of H1, F2, and C3, respectively, with $\cos \phi_{1}=2\gamma_{3}/\gamma_{1}$ and $\cos \phi_{2}=\gamma_{3}/2\gamma_{2}$. $[grad]_{z}$ denotes a gradient pulse along the $z$-axis. $[\pi/2]_{x}^{1}$ denotes a $\pi/2$ pulse along the $x$-axis acting on the H1 qubit. Overall phase factors have been ignored. The coupled-spin evolution between two spins, for instance, $[1/2J_{13}]$, can be realized by the pulse sequence $1/4J_{13}-[\pi]^{2}_{y} -1/4J_{13}-[-\pi]^{2}_{y}$, where $1/4J_{13}$ denotes the evolution caused by $H_{NMR}$ for a time $1/4J_{13}$ \cite{ZZcouple}. The target state can be prepared directly from the state $|000\rangle$ by applying a $[\pi/2]^{3}_{y}$ pulse. It corresponds to $|00\rangle(|0\rangle-|1\rangle)/\sqrt{2}$, i.e. to transverse magnetization of the target spin, with the first two qubits in state $\vert 00 \rangle$.
If we measure the free induction decay (FID) of this state and calculate the Fourier transform of the signal, we obtain the spectrum shown in Figure \ref{inistate} (b). This spectrum serves as the reference to which we scale the data from the IQST experiment. The input state for the IQST is $|\Psi_{in}\rangle=|\psi(\theta)\rangle|00\rangle$. We generate this state by rotating H1 by an angle $\theta$ around the $y$-axis: $|\Psi_{in}\rangle = e^{i\theta\sigma^{1}_{y}/2} |000\rangle$. After $k$ iterations of the IQST algorithm, $|\Psi_{in}\rangle$ is transferred to \begin{equation} \label{phikout} T_{k}|\Psi_{in}\rangle= [(1-F_k)\cos(\theta/2)|0\rangle-\sqrt{1-F_{k}^{2}}\sin(\theta/2)|1\rangle]|00\rangle+|00\rangle F_{k}|\psi(\theta)\rangle . \end{equation} Here, we have used Eqs. (\ref{phik}-\ref{Fidelty}) and assumed $C_{12}\geq 0$, without loss of generality. Hence the state transfer can be observed by measuring carbon spectra. For the mixed input state, $\rho_{ini}$ [Eq. (\ref{inimy})] can be generated from $\rho_{eq}$ through the pulse sequence \cite{Tseng} \begin{eqnarray}\label{inip} [\frac{\pi}{2}]_{x}^{3}-[\frac{\pi}{2}]_{x}^{2}-[grad]_{z}-[\frac{\pi}{2}]_{x}^{1}. \end{eqnarray} \subsection{Effective XY-interactions} The IQST algorithm requires XY interactions, while the natural Hamiltonian contains ZZ couplings. To convert the ZZ interactions into XY type, we decompose the evolution $e^{-i\varphi(\sigma_{x}^{k}\sigma_{x}^{l}+\sigma_{y}^{k}\sigma_{y}^{l})}$ into $e^{-i\varphi\sigma_{x}^{k}\sigma_{x}^{l}}e^{-i\varphi\sigma_{y}^{k}\sigma_{y}^{l}}$ \cite{cory07} using $[\sigma_{x}^{k}\sigma_{x}^{l},\sigma_{y}^{k}\sigma_{y}^{l}]=0$, where $\varphi$ denotes an arbitrary real number. These transformations can be implemented by a combination of radio-frequency pulses and free evolutions under the $J$-couplings \cite{DuPRA03}:
\begin{equation}\label{XX} e^{-i\varphi\sigma_{x}^{k}\sigma_{x}^{l}}= e^{\pm i\pi\sigma_{y}^{k}/4}e^{\pm i\pi\sigma_{y}^{l}/4} e^{-i\varphi\sigma_{z}^{k}\sigma_{z}^{l}} e^{\mp i\pi\sigma_{y}^{k}/4}e^{\mp i\pi\sigma_{y}^{l}/4} \end{equation} \begin{equation}\label{YY} e^{-i\varphi\sigma_{y}^{k}\sigma_{y}^{l}}= e^{\pm i\pi\sigma_{x}^{k}/4}e^{\pm i\pi\sigma_{x}^{l}/4} e^{-i\varphi\sigma_{z}^{k}\sigma_{z}^{l}} e^{\mp i\pi\sigma_{x}^{k}/4}e^{\mp i\pi\sigma_{x}^{l}/4} \, . \end{equation} Figure \ref{pulseq} shows the complete pulse sequence for implementing the IQST, starting from $|\Psi_{in}\rangle$. The subscript $n$ indicates that the pulses in the square brackets have to be repeated for every iteration. The duration of each $W^{23}$ segment varies, since $t_{n}=-\arctan(ic_{n}/d_{n})/\pi J_{23}$. For the initial state $\rho_{ini}$ in Eq. (\ref{ini}), the propagator for iteration $n$ can be simplified: since the density operator commutes with $\sigma^{1}_{x}\sigma^{2}_{x}$ and $\sigma^{2}_{y}\sigma^{3}_{y}$ at all times, it is sufficient to generate the propagator $$ e^{-i\pi J_{23}t_n\sigma^{2}_{x}\sigma^{3}_{x}/2}e^{-i\pi J_{12}\tau\sigma^{1}_{y}\sigma^{2}_{y}/2} . $$ Similarly, for the initial state in Eq. (\ref{inimy}), iteration $n$ can be replaced by $e^{-i\pi J_{23}t_n\sigma^{2}_{y}\sigma^{3}_{y}/2}e^{-i\pi J_{12}\tau\sigma^{1}_{x}\sigma^{2}_{x}/2}$. We use these simplified versions to shorten the duration of the experiment and thereby increase the fidelity. \subsection{Results for state transfer} When $\tau=1/2J_{12}$, the transfer can be implemented in a single step with a theoretical fidelity of $100\%$. The state transfer from H1 to C3 can be observed by measuring $^{13}$C spectra. The experimental result for $|\Psi_{in}\rangle=|\psi(\pi/4)\rangle|00\rangle$ is shown in Figure \ref{onepure} (a).
Comparing with Figure \ref{inistate} (b) one finds that the output state is $|00\rangle(|0\rangle-|1\rangle)/\sqrt{2}$, i.e., the state $|\psi(\pi/4)\rangle$ is transferred from H1 to C3. Figure \ref{onepure} (b) shows the corresponding result for the transfer of $\sigma_{y}^{1}$ from H1 to C3 in a single step, with qubits 2 and 3 initially in the completely mixed state. For this experiment, the receiver phase was shifted by $\pi/2$ with respect to the upper spectrum. Since this experiment implements the transfer for all possible states of the other qubits in parallel, we observe four resonance lines corresponding to the states $\{00, 01, 10, 11\}$ of qubits 1 and 2. For the states with odd parity, the transfer adds an overall phase factor of $-1$, which is directly visible as a negative amplitude in the spectrum. To demonstrate that iterative transfer works for a range of coupling strengths or (equivalently) evolution periods, we chose $\tau=1/5J_{12}$ and $\tau=1/6J_{12}$. For pseudo-pure input states, three iterations are implemented in either case. As $\theta$ varies from $0$ to $2\pi$, the results of these transfer experiments are summarized in Figure \ref{tau56}, where the vertical axis denotes the amplitude of the NMR spectrum. For each input state the amplitude increases with the number of iterations. The increase of the amplitude reflects the increasing fidelity of the state transfer, and the dependence on the input-state parameter $\theta$ follows the expected $\sin (\theta)$ behavior. The experimental data obtained for the mixed input states are summarized in Figures \ref{Res} (a) and (b), for $\tau=1/5J_{12}$ and $\tau=1/6J_{12}$, respectively. The positive lines indicate that the transfer occurs with positive sign if qubits 1 and 2 are in state $|00\rangle$ or $|11\rangle$, and with negative sign for the states $|01\rangle$ or $|10\rangle$, in agreement with Eq. (\ref{rhok2}).
The amplitude of the signals clearly increases with the number of iterations. According to Eq. (\ref{rhok2}) the increase of the amplitudes is a direct measure of the progress of the quantum state transfer. \section{Discussion and Conclusion} Our results clearly demonstrate the validity of the iterative state transfer algorithm of Burgarth et al. In principle, it is possible to iterate the procedure indefinitely, always improving the fidelity of the transfer. In practice, every iteration also increases the amount of signal loss, either through decoherence or through experimental imperfections. According to Eq. (\ref{rhok2}), the fidelity of the transfer is \begin{equation}\label{expzzx} F_{k}=|Tr[(\sigma^{1}_{z}\sigma^{2}_{z}\sigma^{3}_{y})\rho_{k}]| . \end{equation} The experimental measurement corresponds to a summation of the amplitudes of the resonance lines. We normalized the experimental values to the amplitudes obtained by direct preparation of the target states [see Figure \ref{inistate} (a)]. In Figure \ref{tau56x}, we show the experimentally measured fidelities of the transfer of the state $\sigma_{y}$ for 1-5 iterations. As expected, the experimental data points are below the theoretical curves (solid lines). The experimental points can be fitted quite well if we include a decay parameter for each iteration. The dashed curves in Figure \ref{tau56x} represent the function $F_{k}e^{-kr}$ with $r=0.087$ and $r=0.079$ for $\tau=1/5J_{12}$ and $\tau=1/6J_{12}$, respectively. Each iteration thus adds imperfections (experimental plus decoherence) of about 8 \%. Larger numbers of iterations are meaningful only if this error rate can be reduced. In conclusion, we have implemented the iterative quantum state transfer in a three-qubit NMR quantum information processor. The result shows that it is indeed possible to accumulate the quantum state at the end of a Heisenberg spin chain, whose couplings are always active. \section{Acknowledgment} We thank Prof.
Jiangfeng Du for helpful discussions. This work is supported by the Alexander von Humboldt Foundation, the DFG through Su 192/19-1, and the Graduiertenkolleg No. 726. \begin{figure} \caption{The spin chain including the target spin ($3$) used for implementing the IQST. The XY interaction in the spin chain, denoted by the solid line, is always active, while the XY interaction between spins $2$ and $3$, denoted by the dashed line, can be switched on and off. $W^{23}$ denotes the end-gate operation.} \label{cha} \end{figure} \begin{figure} \caption{(Color online) The chemical structure of Ethyl 2-fluoroacetoacetate. The three spins in the dashed oval are the three qubits for implementing IQST. The strengths (in Hz) of the $J$-couplings between the relevant nuclear spins and the relaxation times are listed in the left and right tables, respectively.} \label{2F} \end{figure} \begin{figure} \caption{(a) $^{13}$C NMR spectrum of the thermal equilibrium state. (b) Reference spectrum of the directly prepared target state.} \label{inistate} \end{figure} \begin{figure} \caption{(Color online) Pulse sequence for implementing the IQST. The two blocks that implement $U^{12}$ and $W^{23}$ are repeated for every iteration.} \label{pulseq} \end{figure} \begin{figure} \caption{Experimental results for quantum state transfer with $\tau=1/2J_{12}$.} \label{onepure} \end{figure} \begin{figure} \caption{(Color online) Experimental results for demonstrating the IQST when the initial state is $[\cos(\theta/2)|0\rangle -\sin(\theta/2)|1\rangle]|00\rangle$. The two cases $\tau=1/5J_{12}$ and $\tau=1/6J_{12}$ are shown.} \label{tau56} \end{figure} \begin{figure} \caption{(Color online) $^{13}$C spectra for the mixed input states.} \label{Res} \end{figure} \begin{figure} \caption{(Color online) Experimentally measured fidelity of the iterative state transfer as a function of the number of iteration steps for $\tau=1/5J_{12}$ and $\tau=1/6J_{12}$.} \label{tau56x} \end{figure} \end{document}
\begin{document} \title{Qubit gates using hyperbolic secant pulses} \author{H. S. Ku}\email{[email protected]} \affiliation{National Institute of Standards and Technology, Boulder, Colorado 80305, USA} \author{J. L. Long} \affiliation{National Institute of Standards and Technology, Boulder, Colorado 80305, USA} \affiliation{Department of Physics, University of Colorado, Boulder, Colorado 80309, USA} \author{X. Wu} \affiliation{National Institute of Standards and Technology, Boulder, Colorado 80305, USA} \author{M. Bal} \affiliation{National Institute of Standards and Technology, Boulder, Colorado 80305, USA} \author{R. E. Lake} \affiliation{National Institute of Standards and Technology, Boulder, Colorado 80305, USA} \author{Edwin Barnes} \affiliation{Department of Physics, Virginia Tech, Blacksburg, Virginia 24061, USA} \author{Sophia E. Economou} \affiliation{Department of Physics, Virginia Tech, Blacksburg, Virginia 24061, USA} \author{D. P. Pappas} \affiliation{National Institute of Standards and Technology, Boulder, Colorado 80305, USA} \date{\today} \begin{abstract} It has been known since the early days of quantum mechanics that hyperbolic secant pulses possess the unique property that they can perform cyclic evolution on two-level quantum systems independently of the pulse detuning. More recently, it was realized that they induce detuning-controlled phases without changing state populations. Here, we experimentally demonstrate the properties of hyperbolic secant pulses on superconducting transmon qubits and contrast them with the more commonly used Gaussian and square waves. We further show that these properties can be exploited to implement phase gates, nominally without exiting the computational subspace. This enables us to demonstrate the first microwave-driven $Z$-gates with a single control parameter, the detuning. 
\end{abstract} \pacs{03.67.Lx} \maketitle Controlled rotations of two-level systems were among the first examples of time-dependent quantum phenomena ever studied and continue to be a very active area of research owing to the central role they play in numerous quantum-based technologies currently being pursued \cite{Ladd_Nature10,Gisin_RMP02,Degen_arxiv16,Devoret_Science13}. Early investigations of two-level quantum dynamics were conducted using a double Stern-Gerlach experiment in which spins traverse a region of rotating magnetic field \cite{Phipps1932}. In the rotating frame, this problem can be mapped onto the familiar Rabi oscillations of atoms in a field \cite{Gerry}, where the drive strength, $\Omega(t)$, and detuning, $\Delta$, of the drive frequency from the two-level energy splitting correspond to the magnetic field strength and precession rate of the spins in the field. It was recognized early on by Rosen and Zener \cite{Rosen1932} that there is an exact solution to this problem, with $\Omega(t)= \Omega_0 \sech(\rho t)$. More importantly, for specific values of $\Omega_0$, a spin in an arbitrary superposition of $\ket{0}$ and $\ket{1}$ will always return to that same initial state {\it independently} of the value of the detuning. This surprising result has been leveraged extensively in fields such as spatial solitons, quantum optics and self-induced transparency \cite{LambSolitons1971,SegevSpatialSolitons1992,McCall_PR69,LehtoSech2016}. The cyclic evolution is accompanied by the acquisition of opposite phases by states $\ket{0}$ and $\ket{1}$, which has been suggested \cite{Economou2006,Economou_PRL07,Economou2012,Economou2015,Barnes_arxiv16} as a means to implement phase gates or multi-qubit entangling gates in various qubit systems through the use of states outside the computational subspace. Single-qubit gates are required for quantum computing and simulations.
In the case of rotations about an axis in the $XY$~plane, the control design is rather straightforward: a resonant pulse of any shape will implement such a rotation, with the pulse area (the time integral of the pulse envelope) determining the angle of rotation. The theoretical simplicity of this concept has made qubit rotations about $XY$ axes routine in many labs \cite{Barends_Nature14,Johnson_NJP15,Blumoff_PRX16,Rol_arxiv16}. On the other hand, $Z$-rotations have been implemented to date via tuning \cite{SteffenTomographyScience2006}, by combining rotations about $X$ and $Y$ using multiple pulses, or by keeping account of all phases on the system \cite{McKayZ-GatesArxiv2017}. In general, these methods can result in increased decoherence in systems of multiple qubits, as tuning takes the parameters away from the high-coherence regime. The other alternatives can result in increased overhead in pulse time or accounting. There is thus a need for a gate that achieves a rotation around the $Z$-axis using a single pulse to simplify and reduce overheads. One of the difficulties in implementing $Z$-rotations (i.e. phase gates) is that, unlike in the case of rotations about axes in the $XY$ plane, there is no generic analytical solution for the evolution operator corresponding to a $Z$-rotation. One general requirement for phase gates is that the qubit undergoes a full Rabi flop, with all populations restored to their initial values. This can be achieved with resonant $2\pi$ pulses; however, such pulses impart the same phase factor (equal to $-1$) to both the $|0\rangle$ and the $|1\rangle$ states, resulting in a trivial qubit operation that does not change the relative phase. In the current work, we overcome this challenge and develop the first implementation of a microwave-mediated $Z$-gate, $Z_\mathrm{sech}(\Delta)$.
By using a sech-function microwave pulse, we are able to achieve a phase gate in qubits using only a single parameter, the detuning $\Delta$, and nominally driving the lower two levels of a transmon qubit. After fixing the peak strength of the pulse to satisfy the full Rabi flop condition, the detuning is used to tune the angle of rotation. Here we demonstrate the general result that this property is unique to the sech pulse, in contrast with other pulse shapes such as square and Gaussian, and show that we can use sech pulses to generate microwave-based phase gates that are intrinsically high-fidelity, $F>99~\%$. To demonstrate a $Z$-gate with superconducting qubits, we use a microwave pulse with a $\sech$-envelope to rotate between the lowest two energy levels of a transmon. The drive pulse is defined as \begin{equation} \Omega(t)=\Omega_{0}\sech\left(\rho t\right)\cos{\left(\omega_\mathrm{D} t\right)}, \end{equation} where $\Omega_{0}$ is the drive strength, $\rho$ is the pulse bandwidth, and $\omega_\mathrm{D}$ is the drive frequency. The full Rabi flop condition is satisfied by choosing \begin{equation} \Omega_0/\rho=n, \end{equation} where $n$ is an integer \cite{Rosen1932,McCall_PR69}. The salient feature is that this cyclic transition condition is independent of $\omega_\mathrm{D}$. This enables us to devise a single-control microwave $Z$-gate. The $Z$-rotation is achieved as follows. Suppose a qubit is initially in the superposition state \nolinebreak{$\Psi_0=a|0\rangle+b|1\rangle$}. After the qubit undergoes a cyclic evolution, the state $|0\rangle$ ($|1\rangle$) acquires a phase $\xi$ ($-\xi$), i.e., the state is driven to \nolinebreak{$\Psi=a|0\rangle+b\mathrm{e}^{i\phi}|1\rangle$} with $\phi=2\xi$.
For a $2\pi$ pulse ($n=1$), the induced phase $\phi$ is given by \begin{equation} \phi=4\arctan\left(\rho/\Delta\right), \label{Eq_phi} \end{equation} where the detuning is $\Delta=\omega_\mathrm{D}-\omega_{10}$ and $\omega_{10}$ is the transition frequency for the lowest two transmon levels. By fixing the drive strength $\Omega_0$ and the bandwidth $\rho$, we can construct a single-control $Z$-gate, $Z_\mathrm{sech}(\Delta)$, by adjusting only $\Delta$. The specific device used for the experimental test of this gate is a concentric transmon \cite{Sandberg2013,Braumuller2016}. A transmon is essentially a nonlinear electrical LC oscillator that is read out using capacitive coupling, in the dispersive regime, to a linear resonator. The particular qubit used for this work had a transition frequency of $\omega_{10}=2\pi\times5.18$~GHz and an anharmonicity of $\omega_{10}-\omega_{21}=2\pi\times200$~MHz, where $\omega_{21}$ is the transition frequency between the first and second excited states of the transmon. Further details of the qubit and the heterodyne readout method are given in the supplemental material. \begin{figure} \caption{(color online) Three different excitation profiles for sech-(red), Gaussian-(green), and square-(blue) pulse shapes.} \label{fig_Pulse} \end{figure} In order to demonstrate the uniqueness of the sech-drive, Rabi oscillations were driven with sech-, Gaussian- and square-pulse envelopes as shown in Fig.~\ref{fig_Pulse}. The synthesized microwave drive is generated by using modulation signals from an arbitrary pulse sequencer to IQ modulate a local oscillator. The full length of the pulse extended over $\pm4\sigma$ in order to minimize sharp cut-offs of sech- and Gaussian-functions. The standard deviation $\sigma$ is related to the bandwidth $\rho$ by $\sigma=\pi/(2\rho)$. This range was chosen because it utilizes the full 8-bit digitization range of the arbitrary pulse sequencer.
As seen in the inset of Fig.~\ref{fig_Pulse}, the sech-pulse is slightly broader with a longer tail than the Gaussian. While improvements to the pulse shape could be made by either creating a hard on/off transition \cite{MartinisZgatePRA2014} or reducing leakage with DRAG \cite{Motzoi_PRL09,Kaufmann, McKayZ-GatesArxiv2017}, in this work we have chosen to use a simple sech-pulse shape for direct comparison to theory and general-purpose applications. A comparison of experimental and theoretical Rabi oscillations versus the detuning $\Delta$ and the pulse amplitude is shown in Fig.~\ref{fig_RabiAmp2D} for $\sigma=25$~ns pulses. The theory is simulated using the empirical result that our qubit is typically initialized in an incoherently mixed state of 90~$\%$ $\left|0\right\rangle$ and 10~$\%$ $\left|1\right\rangle$ due to heating in the dilution refrigerator. The excited-state ellipses obtained from the sech pulses are qualitatively and quantitatively different from the chevron-shaped response exhibited by the Gaussian and square pulses. The first feature to note in the comparison is that the widths of the maxima, in the detuning axis, for the sech pulse are approximately the same for subsequent oscillation maxima [Fig.~\ref{fig_RabiAmp2D}(a)]. If we take cuts of the images along constant detunings, this leads to uniform periodic oscillations in the excited state probability as a function of pulse amplitude in the case of the sech, as shown in the 1-D plots of Fig.~\ref{fig_RabiAmp1D}(a). On the other hand, the Gaussian maxima in Fig.~\ref{fig_RabiAmp2D}(b) progressively widen and curve further downward with each oscillation period. The 1-D slices shown in Fig.~\ref{fig_RabiAmp1D}(b) for the Gaussian illustrate that the points where the population returns to the ground state shift toward lower pulse amplitude, while the Rabi oscillation contrast grows with increasing drive strength.
This behavior is further exaggerated for the square pulses, as is evident in Fig.~\ref{fig_RabiAmp2D}(c) and Fig.~\ref{fig_RabiAmp1D}(c). The uniformity of the oscillations in the sech-pulse case is a direct reflection of the fact that, even if the drive amplitude is fixed, one can still achieve the cyclic condition. To see this effect quantitatively, the pulse amplitude for a full Rabi flop versus the drive detuning is plotted in Fig.~\ref{fig_RabiCyclic} for all three pulse shapes. The population return amplitudes are found by quadratic fits near the minimum region, which corresponds to $2\pi$ pulses in Fig.~\ref{fig_RabiAmp2D}. The sech-pulse has only an $8.2~\%$ variation in pulse amplitude over the $\lvert\Delta\rvert\leq10$~MHz range, while the Gaussian-pulse has a $43.5~\%$ change. This low detuning-dependence allows us to vary only one parameter, the detuning $\Delta$, to obtain an arbitrary $Z$-gate. \begin{figure} \caption{(color online) The excited state probability as a function of pulse amplitude (vertical axis) and detuning (horizontal) for (a) sech, (b) Gaussian, and (c) square pulses. The left and right panels show experimental data and theoretical simulations, respectively. The simulations impose the cutoff at $\pm4\sigma$ for the 8-bit digitization and include the four lowest energy levels of the transmon.} \label{fig_RabiAmp2D} \end{figure} \begin{figure} \caption{(color online) Line cuts at various detunings versus pulse amplitude from Fig.~\ref{fig_RabiAmp2D}.} \label{fig_RabiAmp1D} \end{figure} \begin{figure} \caption{(color online) The pulse amplitude for the first full Rabi flop plotted versus the drive detuning for sech (red), Gaussian (green), and square (blue) pulse shapes.} \label{fig_RabiCyclic} \end{figure} Although the experimental and theoretical results in Fig.~\ref{fig_RabiAmp2D}(a) are in very good agreement, we do see evidence of nonideality in the sech data from two sources.
First, the finite bit resolution cutoff at $\pm{4}\sigma$ results in a flattening of the third oscillation. The second source is the existence of higher levels of the transmon system, resulting in a slight tilt of the Rabi ellipses. This behavior is illustrated in the theoretical simulation for multiple Rabi oscillations, and is observed to some extent in the experiment. However, note that only the first oscillation (about 1/3 of the $Y$-axis range in Fig.~\ref{fig_RabiAmp2D}(a)) is used to implement $Z$-gates, and for this region there is negligible discrepancy between theory and experiment. This high degree of frequency independence explains the very high fidelities we obtained, as we discuss below. The preparation and tomography pulse sequence for the $Z_\mathrm{sech}(\Delta$) phase gate is shown in Fig.~\ref{fig_Zgate}(a). In this example, the state is initially prepared by a $\pi/2$ rotation around the $Y$-axis, i.e., a Hadamard-like gate. A subsequent 2$\pi$-sech-pulse is then applied with the drive frequency given by $\omega_\mathrm{D}=\Delta+\omega_{10}$, and in each experimental run, either the $X$-, $Y$-, or $Z$-projection of the final state is measured to complete the single-qubit tomography. From this, the resulting $\phi(\Delta)$ and $\theta(\Delta)$ are determined, where $\theta$ is the angle of the Bloch vector from the $Z$-axis [solid points in Fig.~\ref{fig_Zgate}(b)]. For $-10~\mathrm{MHz}\leq\Delta\leq10~\mathrm{MHz}$, these data show that $\theta$ is constant at $\pi/2$, while $\phi$ is the angle of rotation of the Bloch vector around the $Z$-axis. Both are in excellent agreement with the prediction, Eq.~\eqref{Eq_phi} [solid lines in Fig.~\ref{fig_Zgate}(b)]. To assess the performance of our $Z_\mathrm{sech}(\Delta)$ gate, we consider the fidelity of the rotations for the six input states obtained from $\pi/2$ rotations about the $\pm X$, $\pm Y$ directions, the identity operator, and a $\pi$ rotation. 
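The fidelity measure used for this assessment (defined in the next paragraph) is the standard Uhlmann fidelity between density matrices. A minimal numerical sketch, assuming NumPy and a Hermitian matrix square root via eigendecomposition; this is an illustration, not the authors' analysis code:

```python
import numpy as np

def psd_sqrt(rho):
    """Square root of a positive semidefinite Hermitian matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(rho)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.conj().T

def fidelity(rho1, rho2):
    """Uhlmann fidelity F = Tr sqrt( sqrt(rho1) rho2 sqrt(rho1) )."""
    s = psd_sqrt(rho1)
    return np.real(np.trace(psd_sqrt(s @ rho2 @ s)))

# Pure |0> and |1> states as density matrices
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho1 = np.array([[0, 0], [0, 1]], dtype=complex)

assert abs(fidelity(rho0, rho0) - 1.0) < 1e-12   # identical states
assert fidelity(rho0, rho1) < 1e-6               # orthogonal states

# A 90/10 incoherent mixture, as used for the simulations in the text;
# since rho0 is pure, fidelity(rho0, rho_mix) = sqrt(0.9)
rho_mix = 0.9 * rho0 + 0.1 * rho1
assert abs(fidelity(rho0, rho_mix) - np.sqrt(0.9)) < 1e-9
```

For a pure reference state this reduces to $\sqrt{\langle\psi|\rho_2|\psi\rangle}$, which is why the mixed initial state lowers the maximum attainable fidelity.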
To account for the existence of mixed states in the initial state preparation, we calculate the fidelity of the gate operation $Z_\mathrm{sech}(\Delta)$ as \begin{equation} F= \mathrm{Tr} \Bigg[ \sqrt{\sqrt{\rho_1}\rho_2\sqrt{\rho_1}} \Bigg], \label{Fidelity} \end{equation} for each of the initial states. The density matrix $\rho_1$ is reconstructed from tomography measurements and $\rho_2$ is calculated from the theory, Eq.~\eqref{Eq_phi}, with a 9~\% excited-state population before the state preparation pulse. As shown in Fig.~\ref{fig_AvgFid}, the fidelity averaged over the six different initial states is $F_\mathrm{avg}(\Delta)>99~\%$ for $\lvert\Delta\rvert\leq10$~MHz. This range corresponds to a $\mp\pi$ rotation around the $Z$-axis. For $\lvert\Delta\rvert\geq10$~MHz, there is a slight drop-off of $F_\mathrm{avg}$ for negative detuning, presumably due to the existence of higher energy level transitions at lower frequencies due to the anharmonicity of the transmon. \begin{figure} \caption{(color online) Tomography of a single-qubit phase gate, $Z_\mathrm{sech}(\Delta)$.} \label{fig_Zgate} \end{figure} \begin{figure} \caption{(color online) The average fidelity for the sequence in Fig.~\ref{fig_Zgate}.} \label{fig_AvgFid} \end{figure} In conclusion, we have shown that ideal sech-shaped pulses can be used to implement a fast, high-fidelity phase gate with a single control knob, the detuning. The unique properties of the sech allow us to achieve this gate while staying in the computational subspace throughout the duration of the gate. Our work paves the way toward high-fidelity two- and three-qubit entangling phase gates, which have been theoretically proposed based on the sech pulse \cite{Economou2015,Barnes_arxiv16} and can be advantageous when the lowest energy levels have some spread, as is intrinsic to the superconducting devices manufactured with lithographically defined and thermally oxidized components.
Our results also lay the groundwork for a superconducting circuit-based experimental demonstration of self-induced transparency \cite{McCall_PR69}, which occurs when, in addition to the temporal profile, the spatial distribution of the pulse is also a sech function. Such an experiment would be relevant for microwave-based logic with these circuits, slow-light demonstrations, and the development of larger scale circuits. \begin{acknowledgments} The authors acknowledge support of IARPA, the Army Research Office, and the NIST Quantum Initiative. \end{acknowledgments} \end{document}
\begin{document} \title{The Prime Number Theorem and Pair Correlation of Zeros of the Riemann Zeta-Function} \author[Goldston]{D. A. Goldston} \address{Department of Mathematics and Statistics, San Jose State University} \email{[email protected]} \author[Suriajaya]{Ade Irma Suriajaya} \address{Faculty of Mathematics, Kyushu University} \email{[email protected]} \keywords{prime numbers, Riemann zeta-function, Prime Number Theorem} \subjclass[2010]{11M06, 11M26, 11N05} \date{\today} \begin{abstract} We prove that the error in the prime number theorem can be quantitatively improved beyond the Riemann Hypothesis bound by using versions of Montgomery's conjecture for the pair correlation of zeros of the Riemann zeta-function which are uniform in long ranges and with suitable error terms. \end{abstract} \maketitle \noindent {\bf Added Remarks.} It has been brought to our attention that the main results in this paper have already been obtained in \cite{LPZ16}. We use here the function $F_\beta(x,T)$ defined in \eqref{Fbeta}, while in \cite{LPZ16} the authors use the same function with a change of variable $\beta = 1/\tau$. Their paper also has applications to other problems concerning primes. For further applications of extended pair correlation conjectures, we direct interested readers to the papers \cite{LPZ12}, \cite{LPZ16}, and \cite{LPZ17} of Languasco, Perelli, and Zaccagnini. \section{Introduction} Let $\pi(x)$ denote the number of primes less than or equal to $x$, and define $\mathcal{P}(x)$ by \begin{equation} \label{piPNT} \pi(x) = \mathrm{li}(x) + \mathcal{P}(x), \qquad \text{where} \quad \mathrm{li}(x) := \int_2^x \frac{dt}{\log t}. \end{equation} The prime number theorem is the assertion that $\mathcal{P}(x) = o(\mathrm{li}(x)) =o(\frac{x}{\log x})$ as $x\to \infty$, and we refer to $\mathcal{P}(x)$ as the error in the prime number theorem.
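As a concrete illustration of \eqref{piPNT} (a toy numerical check, not part of the argument), $\pi(x)$ and $\mathrm{li}(x)$ can be compared for small $x$ with a sieve and a simple quadrature; note that the lower limit $2$ in \eqref{piPNT} is used here:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning the primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return [i for i in range(n + 1) if sieve[i]]

def li(x, steps=200000):
    """li(x) = int_2^x dt/log t by the composite trapezoid rule."""
    h = (x - 2.0) / steps
    total = 0.5 * (1.0 / math.log(2.0) + 1.0 / math.log(x))
    for k in range(1, steps):
        total += 1.0 / math.log(2.0 + k * h)
    return total * h

x = 10**5
pi_x = len(primes_up_to(x))     # pi(10^5) = 9592
P_x = pi_x - li(x)              # li(10^5) is about 9628.8, so P(x) is about -37
assert pi_x == 9592
assert -45 < P_x < -25          # li(x) overshoots pi(x) at this scale
```

That $\mathcal{P}(x)$ is negative here reflects the familiar dominance $\mathrm{li}(x)>\pi(x)$ at small scales, which the Littlewood result quoted below shows cannot persist for all $x$.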
The connection of $\mathcal{P}(x)$ with the zeros of the Riemann zeta-function is made most easily by using the von Mangoldt function $\Lambda(n)$. Writing \begin{equation} \label{psiPNT} \psi(x) := \sum_{n\le x} \Lambda(n) = x +\mathcal{R}(x), \end{equation} then the prime number theorem is equivalent to $\mathcal{R}(x) =o(x)$ as $x\to \infty$. This formulation of the prime number theorem is so widely used that it is also called the prime number theorem and $\mathcal{R}(x)$ is also referred to as the error in the prime number theorem. The transition from $\mathcal{R}(x)$ to $\mathcal{P}(x)$ is easily made; here we will assume the Riemann Hypothesis (RH) and use \cite[Theorem 13.2]{MontgomeryVaughan2007} \begin{equation}\label{RtoP} \mathcal{P}(x) = \frac{\mathcal{R}(x)}{\log x} - \frac{x^{1/2}}{\log x} + O\left(\frac{x^{1/2}}{\log^2 x}\right). \end{equation} The best unconditional estimate for $\mathcal{R}(x)$ was obtained by Korobov and also Vinogradov in 1958, who showed $\mathcal{R}(x) \ll x\exp\left(-c(\log x)^{3/5}(\log\log x)^{-1/5}\right)$ for some constant $c>0$. The Riemann Hypothesis is equivalent to the estimate \begin{equation} \label{PNTequivRH} \mathcal{R}(x) \ll x^{1/2+\varepsilon}, \qquad \text{for any $\varepsilon >0$.}\end{equation} The best known estimate for the error in the prime number theorem assuming the RH is due to von Koch \cite{vonKoch1901}, who in 1901 proved \begin{equation} \label{Koch} \mathcal{R}(x) = O\left( x^{1/2}\log^2 x\right), \end{equation} and thus $\mathcal{P}(x) = O\left( x^{1/2}\log x\right)$. On the other hand, Schmidt \cite{Schmidt1903} in 1903 proved \begin{equation} \label{Schmidt} \mathcal{R}(x) = \Omega_{\pm}( x^{1/2}). \end{equation} This last result is unconditional since if RH is false an even stronger result is true. At this point a surprising difficulty arises: when substituting \eqref{Schmidt} into \eqref{RtoP}, the constant obtained by Schmidt's method is too small to imply that there exists any value of $x$ for which $\mathcal{P}(x)>0$, see \cite[Footnote p.
93]{Ingham1932}, and this leaves open the question of whether $\mathrm{li}(x) > \pi(x) $ is always true.\footnote{ The definition we use here for $\mathrm{li}(x)$ in \eqref{piPNT} has $\mathrm{li}(x) < \pi(x)$ for $x=2,3,5,7$ but these exceptions do not occur for the usual definition of $\mathrm{li}(x)= \int_0^x dt/\log t$ used elsewhere in mathematics.} This question was answered by Littlewood in 1914, who proved \begin{equation} \label{Little} \mathcal{R}(x) = \Omega_{\pm}( x^{1/2}\log \log\log x) \qquad \text{and}\quad \mathcal{P}(x) = \Omega_{\pm}(\frac{ x^{1/2}}{\log x}\log \log\log x).\end{equation} During the last hundred years there has been no improvement in von Koch's upper bound \eqref{Koch} and Littlewood's lower bound \eqref{Little}. As for the actual size of the error in the prime number theorem, on probability grounds it has been conjectured \cite[p. 484]{MontgomeryVaughan2007} that \begin{equation} \label{guess}\limsup_{x\to \infty}\frac{\mathcal{R}(x)}{x^{1/2} (\log\log\log x)^2}= \frac{1}{2\pi}, \qquad \liminf_{x\to \infty}\frac{\mathcal{R}(x)}{x^{1/2} (\log\log\log x)^2}= -\frac{1}{2\pi}.\end{equation} In this paper we will improve on the RH bound \eqref{Koch} by assuming in addition conjectures related to the pair correlation for zeros of the Riemann zeta-function. The first result of this type is due to Gallagher and Mueller \cite{GaMu78} in 1978, who proved that \begin{equation} \label{RlittleO} \mathcal{R}(x) = o(x^{1/2}\log^2 x), \end{equation} subject to the RH and the conjecture that there exists a pair correlation density function. Using an extension of this pair correlation density function conjecture and RH, Mueller \cite{Mueller76} proved that \begin{equation} \int_0^X(\pi(x+\lambda\log X) -\pi(x) )^2 \, dx \sim (\lambda +\lambda^2)X,\end{equation} a result later obtained in a different way in \cite{GoldMont}.
In unpublished work Gallagher and Mueller were able to use this extended pair correlation density conjecture with stronger error terms to prove \begin{equation} \label{Rstronger} \mathcal{R}(x) = O(x^{1/2}(\log\log x)^2); \end{equation} however, Mueller showed the conjecture is sometimes false with such strong error terms. Assuming only RH, Gallagher \cite{Gallagher80} proved that \eqref{Rstronger} holds except possibly on a set of finite logarithmic measure. We use here a related method introduced by Heath-Brown \cite{Heath-Brown} in 1981. For $x>0$, $T\ge 3$, and $\beta \ge 1$, define \begin{equation} \label{Fbeta} F_\beta(x,T) := \sum_{0<\gamma,\gamma'\le T} x^{i(\gamma -\gamma')} w_\beta(\gamma -\gamma') , \qquad w_\beta(u) = \frac{4\beta^2}{4\beta^2+u^2},\end{equation} where the sum is over the imaginary parts $\gamma$ and $\gamma'$ of zeta-function zeros. This is a generalization of Montgomery's function $F(x,T)$ \cite{Montgomery72}, which is the case $\beta=1$, and we have $F(x,T) =F_1(x,T)$. Heath-Brown proved \eqref{RlittleO} by assuming RH together with the conjecture that $F(x,T) = o(T\log^2T)$ holds uniformly for $T\le x \le T^M$ for any fixed number $M$. The function $F_\beta(x,T)$ was introduced by Goldston and Heath-Brown \cite{GoldH-B84} in 1984, and we will use it as the main tool in this paper. Recently we obtained a quantitative improvement in the bound for $\mathcal{R}(x)$ as an application of a method related to a formula of Fujii \cite{GS21}. The main result we obtained is that, assuming RH and \begin{equation} \label{Fbound} F(x,T)\ll T\log x \qquad \text{ holds uniformly for } T\le x\le T^{\log T}, \end{equation} we have, for $1\le h\le x$, \begin{equation} \label{J(x,h)} J(x,h) := \int_0^x (\psi(t+h)-\psi(t) -h)^2\, dt \ll hx\log x, \end{equation} and this in turn implies \begin{equation}\label{Fujiiresult} \mathcal{R}(x) \ll x^{1/2}(\log x)^{3/2}.
\end{equation} Our approach here is to work directly with $\mathcal{R}(x)$ via its explicit formula. As we will show, conjectured bounds for $F_\beta(x,T)$ provide good upper bounds for $\mathcal{R}(x)$, and in turn conjectured asymptotic formulas for $F(x,T)$ with suitable error terms imply these bounds on $F_\beta(x,T)$. There is probably no hope of proving any of these conjectures at present, but they demonstrate that we can obtain significant improvements over \eqref{Koch} from upper bounds for exponential sums over zeros which are only powers of logarithms smaller than trivial, rather than the square root of the main term bounds needed in other problems concerning primes. \section{Statement of Results} We note that $F_\beta(x,T)\ge 0$ by \eqref{FbetaIntegral} below, and also the trivial bound \begin{equation} \label{trivial} F_\beta(x,T) \ll \beta T \log^2T. \end{equation} Anything stronger than this bound over an appropriate range will improve on \eqref{Koch}. \begin{Conj} \label{conj1} For $T\ge 3$, let $\mathcal{L}(T)$ and $\beta = \beta(T)$ be two non-decreasing continuous functions satisfying $1\le \beta(T) \ll \log^3T$ and $\log T \ll \mathcal{L}(T) \ll \beta(T) \log^2 T$. If $W=W(x)$ is an increasing function and $W(x)\gg (\log x)^A$ for a large constant $A$, then \begin{equation} \label{Conjecture1} F_\beta(x,T) \ll T \mathcal{L}(T) \qquad \text{uniformly for} \quad W(x)\ll T \ll x^{1/2}\log^2x. \end{equation} \end{Conj} Define \begin{equation} \label{M(x)} \mathcal{M}(x) := \sum_{1\le k\ll \log x} \sqrt{ \frac{\mathcal{L}(2^{k})}{\beta(2^k)}}. \end{equation} \begin{theorem} \label{thm1} Assuming the Riemann Hypothesis and Conjecture 1, we have \begin{equation} \label{thm1eq} \mathcal{R}(x) \ll x^{1/2} \left(\mathcal{M}(x)+ (\log W(x))^2\right). \end{equation} \end{theorem} If in Theorem \ref{thm1} we use $\beta=1$ and the trivial bound \eqref{trivial}, we get the RH bound \eqref{Koch}.
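To see how the choice of $\beta(T)$ drives Theorem~\ref{thm1}, the sum $\mathcal{M}(x)$ in \eqref{M(x)} is easy to evaluate numerically. A small sketch (illustration only) with $\mathcal{L}(T)=\log T$ and $\beta(T)=(\log T)^{3-2a}$, for which each term simplifies to $(k\log 2)^{a-1}$ and one expects $\mathcal{M}(x)\asymp \log^a x$:

```python
import math

def M(log2_x, a):
    """M(x) = sum_{1 <= k <= log2(x)} sqrt(L(2^k)/beta(2^k)) with L(T) = log T
    and beta(T) = (log T)^(3-2a); each term is (k*ln2)^(a-1)."""
    return sum((k * math.log(2)) ** (a - 1) for k in range(1, log2_x + 1))

a = 1.5
K = 100000                      # x = 2^K, so log x = K*ln2
log_x = K * math.log(2)
ratio = M(K, a) / log_x**a
# Comparing with the integral approximation M(x) ~ (log x)^a / (a*ln2):
assert abs(ratio - 1.0 / (a * math.log(2))) < 0.01
```

The case $a=1$ (so $\beta(T)=\log T$) makes every term equal to $1$ and $\mathcal{M}(x)$ grows like $\log x / \log 2$, while larger $\beta$ suppresses the sum, matching the corollaries below.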
The strongest results are obtained by taking $\mathcal{L}(T) = \log T$, from which we obtain the following corollaries. \begin{corollary} \label{cor1} Assume the Riemann Hypothesis. For $\beta=(\log T)^{3-2a}$ with fixed $0<a \le 3/2$, if \begin{equation} \label{cor1assume} F_{\beta}(x,T) \ll T\log T \qquad \text{ uniformly for} \quad \frac{T^2}{\log^4T}\ll x \ll T^{(\log T)^{(2-a)/a}}, \end{equation} then \begin{equation} \label{cor1eq}\mathcal{R}(x) \ll x^{1/2}\log^a x. \end{equation} \end{corollary} For the case $a=3/2$ and thus $\beta=1$ in \Cref{cor1}, we can weaken \eqref{cor1assume} slightly. \begin{corollary} \label{cor2} Assume the Riemann Hypothesis. If \begin{equation} \label{cor2assume} F(x,T) \ll T\log x \qquad \text{ uniformly for} \quad \frac{T^2}{\log^4T}\ll x \ll T^{(\log T)^{1/3}}, \end{equation} then we have \begin{equation} \mathcal{R}(x) \ll x^{1/2}(\log x)^{3/2}. \end{equation} \end{corollary} This is the result \eqref{Fujiiresult} obtained in \cite{GS21} with a smaller range of validity for the conjecture. \begin{corollary} \label{cor3} Assume the Riemann Hypothesis. Let $A> 1$ and \begin{equation} \label{betalem3} \beta = \frac{\log^3T}{A^4(\log\log 2T)^2}. \end{equation} If \begin{equation} \label{lem3assume} F_{\beta}(x,T) \ll T\log T, \qquad \text{ uniformly for} \quad \frac{ T^2}{ \log^4T}\ll x \ll T^{T^{1/A}}, \end{equation} then we have \begin{equation} \label{lem3eq} \mathcal{R}(x) \ll A^2x^{1/2}(\log\log x)^2. \end{equation} \end{corollary} This is a version of Gallagher and Mueller's \eqref{Rstronger}. We have no evidence to support the conjecture \eqref{lem3assume} in such a long range of validity for $x$. The parameter $\beta$ is very useful in obtaining these results, but following \cite{FSZ09}, we show that it is possible to eliminate this parameter and formulate our results entirely in terms of conjectures on $F(x,T)$.
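For intuition, $F_\beta(x,T)$ in \eqref{Fbeta} can be evaluated directly from the low-lying zeros. A toy sketch using approximate values of the first ten imaginary parts $\gamma$ (these decimals are standard but are hardcoded here only for illustration); the pair sum is real by symmetry and nonnegative by \eqref{FbetaIntegral} below:

```python
import math
import cmath

# Imaginary parts of the first ten nontrivial zeta zeros (approximate values)
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def F_beta(x, beta, gammas=GAMMAS):
    """F_beta(x,T) = sum over pairs of x^{i(g-g')} * w_beta(g-g'),
    truncated to the zeros 0 < gamma <= T supplied in `gammas`."""
    total = 0.0 + 0.0j
    logx = math.log(x)
    for g in gammas:
        for gp in gammas:
            d = g - gp
            w = 4 * beta**2 / (4 * beta**2 + d**2)
            total += cmath.exp(1j * d * logx) * w
    return total

val = F_beta(x=100.0, beta=1.0)
assert abs(val.imag) < 1e-9        # real by the g <-> g' symmetry
assert val.real > -1e-9            # nonnegative (integral representation)
assert F_beta(1.0, 1.0).real > 10  # at x = 1 all terms are positive; the diagonal alone gives N(T) = 10
```

Larger $\beta$ widens the weight $w_\beta$ and so counts more off-diagonal pairs, which is why the trivial bound \eqref{trivial} grows linearly in $\beta$.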
\begin{Conj} \label{conj2} For $T\ge 3$, let $\mathcal{L}(T)$ and $\beta = \beta(T)$ be two non-decreasing continuous functions satisfying $1\le \beta(T) \ll \log^3T$ and $\log T \ll \mathcal{L}(T) \ll \beta(T) \log^2 T$. If $W=W(x)$ is an increasing function and $W(x)\gg (\log x)^A$ for a large constant $A$, then \begin{equation} \label{Conjecture2b} \max_{ \frac{1}{2\log T} \le v \le 2\log T} \left|F(xv,T)-F(x,T)\right| \ll \frac{T\mathcal{L}(T)}{\beta(T)^2} \qquad \text{uniformly for} \quad W(x)\ll T \ll x^{1/2}\log^2x. \end{equation} \end{Conj} \begin{theorem}\label{thm2} \Cref{conj2} implies \Cref{conj1} and \eqref{Conjecture1} with the same choices of $\mathcal{L}(T)$, $\beta(T)$, and $W(x)$. \end{theorem} In \cite{FSZ09} it is conjectured that $F(x,T)$ satisfies an asymptotic formula with a power of logarithm savings error term. Using this we obtain the following result. \begin{corollary} \label{cor4} Let $N(T)$ denote the number of complex zeros of the Riemann zeta-function in the upper half-plane up to a given height $T>0$ (explicitly given in \eqref{N(T)}). If for some constant $B>0$ \begin{equation} \label{StrongFConj} F(x,T) = N(T)\left( 1 + O\left(\frac{1}{(\log T)^B}\right)\right) \qquad \text{uniformly for} \quad W\left(\frac{x}{\log T}\right) \ll T \ll x^{1/2}\log^3x, \end{equation} then Conjectures 1 and 2 hold with $\mathcal{L}(T)= \log T$, $1\le \beta(T)\ll (\log T)^{B/2} $, and the same choice of $W$. \end{corollary} Thus, for example, we obtain the result \eqref{lem3eq} from \eqref{StrongFConj} with $W(x)=(\log x)^A$ and $B=6$. \section{Two Lemmas} Our first lemma is a slight generalization of a lemma from \cite{GoldH-B84} and \cite{Heath-Brown}.
\begin{lem}\label{Lemma1} For $x\ge 1$, $3\le s<t$, and $0<\beta =\beta(t) \le t$, let \begin{equation} \label{calF} \mathcal{F}_\beta(x,t,s) := \max_{s\le v,v'\le t}F_{\beta(v')}(x,v).\end{equation} Then we have \begin{equation} \left|\sum_{s< \gamma\le t} x^{i\gamma} \right| \ll \sqrt{\frac{t}{\beta(t)}\mathcal{F}_\beta(x,t,s)}. \end{equation} \end{lem} \begin{proof}[Proof of Lemma \ref{Lemma1}] We first note that \begin{equation} \label{FbetaIntegral} F_\beta(x,T) = \beta\int_{-\infty}^\infty e^{-2\beta |u|}\left|\sum_{0<\gamma \le T}x^{i\gamma}e^{i\gamma u}\right|^2\, du, \end{equation} which immediately follows from the formula \begin{equation} \label{F_bformula} \beta \int_{-\infty}^\infty e^{-2\beta |u|} e^{ivu}\, du = w_\beta(v). \end{equation} One form of the Gallagher-Sobolev inequality \cite[Lemma 1.1]{Montgomery71} states that for any $C^1$ function $f$ on the interval $[-a,a]$ \[ |f(0)| \le \frac{1}{2a}\int_{-a}^a |f(u)|\, du + \frac{1}{2}\int_{-a}^a |f'(u)|\, du.
\] Applying this with $a= 1/\beta(t)$, $3\le s<t $, and \[ f(u) = \left(\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right)^2,\] we have, on using the Cauchy-Schwarz inequality \[ \begin{split} \left|\sum_{s<\gamma\le t} x^{i\gamma} \right|^2 &\le \frac{\beta(t)}{2} \int_{-1/\beta(t)}^{1/\beta(t)}\left|\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right|^2\, du + \int_{-1/\beta(t)}^{1/\beta(t)}\left|\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right| \left|\sum_{s<\gamma \le t } \gamma x^{i\gamma}e^{i \gamma u}\right|\, du\\ & \ll \beta(t) \int_{-\infty}^{\infty}e^{-2\beta(t)|u|}\left|\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right|^2\, du \\& \qquad +\left( \int_{-\infty}^{\infty}e^{-2\beta(t)|u|} \left|\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right|^2 \, du\right)^{1/2} \left( \int_{-\infty}^{\infty}e^{-2\beta(t)|u|} \left|\sum_{s<\gamma \le t } \gamma x^{i\gamma}e^{i \gamma u}\right|^2 \, du\right)^{1/2}. \end{split}\] Using the inequality $|a\pm b|^2\le 2a^2 +2b^2$, we see by \eqref{FbetaIntegral} and \eqref{calF} that \[ \beta(t) \int_{-\infty}^{\infty}e^{-2\beta(t)|u|}\left|\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right|^2\, du \le 2F_{\beta(t)}(x, s) + 2F_{\beta(t)}(x, t) \le 4 \mathcal{F}_\beta(x,t,s), \] and therefore \begin{equation} \label{halfway} \left|\sum_{s<\gamma\le t} x^{i\gamma} \right|^2 \ll \mathcal{F}_\beta(x,t,s) + \left(\frac1{\beta(t)} \mathcal{F}_\beta(x,t,s)\right)^{1/2} \left( \int_{-\infty}^{\infty}e^{-2\beta(t)|u|} \left|\sum_{s<\gamma \le t } \gamma x^{i\gamma}e^{i \gamma u}\right|^2 \, du\right)^{1/2}.
\end{equation} By partial summation with \begin{equation} \label{S(r)} S(r) := \sum_{s<\gamma\le r} (xe^u)^{i\gamma}, \end{equation} we have \[\begin{split} \sum_{s<\gamma \le t } \gamma x^{i\gamma}e^{i \gamma u} &= \int_{s}^t r\, dS(r) = tS(t) - \int_{s}^t S(r)\, dr \\ & = t\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u} - \int_s^{t} \sum_{s<\gamma \le r } x^{i\gamma}e^{i \gamma u}\, dr. \end{split}\] By the same argument used to obtain \eqref{halfway}, as well as \eqref{FbetaIntegral} and the Cauchy-Schwarz inequality, we have \[ \begin{split} \int_{-\infty}^{\infty}&e^{-2\beta(t)|u|} \left|\sum_{s<\gamma \le t} \gamma x^{i\gamma}e^{i \gamma u}\right|^2 \, du \\ &\ll t^2 \int_{-\infty}^{\infty}e^{-2\beta(t)|u|} \left|\sum_{s<\gamma \le t } x^{i\gamma}e^{i \gamma u}\right|^2 \, du + \int_{-\infty}^{\infty}e^{-2\beta(t)|u|} \left|\int_s^{t} \sum_{s<\gamma \le r } x^{i\gamma}e^{i \gamma u}\, dr\right|^2 \, du \\ & \ll \frac{t^2}{\beta(t)} \mathcal{F}_\beta(x,t,s) + (t-s) \int_s^{t} \left(\int_{-\infty}^{\infty}e^{-2\beta(t)|u|} \left|\sum_{s<\gamma \le r} x^{i\gamma}e^{i \gamma u}\right|^2 \,du \right)\,dr \\& \ll \frac{t^2}{\beta(t)} \mathcal{F}_\beta(x,t,s). \end{split}\] Substituting this into \eqref{halfway} gives \[ \left|\sum_{s<\gamma\le t} x^{i\gamma} \right|^2 \ll \left( 1 + \frac{t}{\beta(t)} \right)\mathcal{F}_\beta(x,t,s)\ll \frac{t}{\beta(t)} \mathcal{F}_\beta(x,t,s)\] if $0< \beta(t) \le t$. Lemma \ref{Lemma1} now follows. \end{proof} Our next lemma is a formula from \cite{FSZ09} for evaluating $F_\beta(x,T)$ using $F(x,T)$. \begin{lem}\label{Lemma2} We have, for $x\ge 1$, $T\ge 3$, and $\beta >0$, \begin{equation} \label{lemma1eq} F_\beta(x,T) = F(x,T) + \beta(1-\beta^2) \int_{-\infty}^\infty (F(xe^u,T)- F(x,T)) e^{-2\beta |u|}\, du.
\end{equation} \end{lem} \begin{proof}[Proof of Lemma \ref{Lemma2}] This can be easily verified directly using \eqref{FbetaIntegral} and \eqref{F_bformula}. It can also be obtained from these equations using the algebraic identity \[ w_\beta(a) - \beta^2w(a) = (1-\beta^2)w(a)w_\beta(a), \] which implies \[ w_\beta(a) = w(a) + (1-\beta^2)\left(w(a)w_\beta(a) - w(a)\right). \] \end{proof} \section{Proof of the theorems and corollaries} The Riemann-von Mangoldt formula states \begin{equation} \label{N(T)} N(T) := \sum_{0 < \gamma \le T} 1= \frac{T}{2\pi} \log \frac{T}{2 \pi} - \frac{T}{2\pi} +O(\log T) ,\end{equation} see for example \cite[Theorem 25]{Ingham1932} or \cite[Theorem 9.4]{Titchmarsh}. Thus $N(T) \sim \frac{T}{2\pi}\log T$, and we also obtain \begin{equation} \label{N(T+1)-N(T)} N(T+1) -N(T) = \sum_{T<\gamma \le T+1} 1 \ll \log T, \end{equation} (see \cite[Theorem 25a]{Ingham1932} or \cite[Theorem 9.2]{Titchmarsh}). \begin{proof}[Proof of Theorem 1] The truncated explicit formula for $\psi(x)$ \cite[Theorem 12.5]{MontgomeryVaughan2007} implies, for $x\ge 2$ and $Y\ge 3$, \[ \psi(x) = x - \sum_{\substack{\rho\\ |\gamma|\le Y}} \frac{x^\rho}{\rho} + O\left(\frac{x}{Y}(\log xY)^2\right) + O(\log x).\] We take $3\le W < Y$, where \[ Y= 3 x^{1/2}\log^2(2x).\] Assuming RH, the complex zeros of the zeta-function come in complex conjugate pairs $\rho = 1/2+i\gamma$ and $\overline{\rho} = 1/2-i \gamma$, and we have \[ \frac{\mathcal{R}(x)}{x^{1/2}} = -2\,{\rm Im}\sum_{W<\gamma\le Y }\frac{x^{i\gamma}}{\gamma} +O\left(\sum_{0<\gamma \le W} \frac1{\gamma}\right)+O\left(\sum_{0<\gamma \le Y} \frac1{\gamma^2}\right) + O(1).\] Since by \eqref{N(T+1)-N(T)} we have $\sum_{0<\gamma \le W} \frac1{\gamma} \ll \log^2W$ and $\sum_{0<\gamma \le Y} \frac1{\gamma^2}\ll 1$, see \cite[Theorem 25b]{Ingham1932}, we conclude
\begin{equation}\label{Restimate1}\left|\frac{\mathcal{R}(x)}{x^{1/2}}\right| \le 2 \left|\sum_{W<\gamma\le Y }\frac{x^{i\gamma}}{\gamma}\right| +O(\log^2W).\end{equation} By partial summation using $S(r)= \sum_{s<\gamma\le r} x^{i\gamma}$ from \eqref{S(r)} with $u=0$, we have \[ \sum_{s<\gamma\le t }\frac{x^{i\gamma}}{\gamma} = \int_{s}^{t}\frac1{r}dS(r) = \frac{S(t)}{t} + \int_s^{t} \frac{S(r)}{r^2}\, dr, \] and hence \[\left|\sum_{s<\gamma\le t }\frac{x^{i\gamma}}{\gamma}\right| \ll \frac1{s}\max_{s\le v\le t}\left|\sum_{s<\gamma\le v} x^{i\gamma}\right|.\] Taking $s=y$ and $t=2y$, and applying Lemma 1 with Conjecture 1, we conclude \begin{equation}\label{neardone}\left|\sum_{y<\gamma\le 2y }\frac{x^{i\gamma}}{\gamma}\right| \ll \frac1{y}\sqrt{\frac{y}{\beta(2y)}\mathcal{F}_\beta(x,2y,y)}\ll \sqrt{ \frac{\mathcal{L}(2y)}{\beta(2y)}}.\end{equation} Taking $y= 2^{k-1}$ we have \[\left|\sum_{W<\gamma\le Y }\frac{x^{i\gamma}}{\gamma}\right| \le \sum_{k_1\le k\le k_2} \left|\sum_{2^{k-1}<\gamma\le 2^{k}} \frac{x^{i\gamma}}{\gamma}\right|,\] where $k_1$ and $k_2$ are chosen so that $2^{k_1 -1} < W(x)\le 2^{k_1}$ and $2^{k_2-1} <Y\le 2^{k_2}$. Using \eqref{neardone} we have \[ \left|\sum_{W<\gamma\le Y }\frac{x^{i\gamma}}{\gamma}\right| \ll \sum_{1\le k\ll \log x} \sqrt{ \frac{\mathcal{L}(2^{k})}{\beta(2^{k})}} := \mathcal{M}(x), \] where we have applied Conjecture 1 for $F_\beta(x, u)$ with $W\ll u \ll x^{1/2}\log^2x$ as required, and we have then insignificantly increased the upper bound by extending the range of $k$. Theorem 1 now follows from \eqref{Restimate1}. \end{proof} \begin{proof}[Proof of \Cref{cor1}] For fixed $0< a\le 3/2$, take in Conjecture \ref{conj1}, $\beta=\beta(T)=(\log{T})^{3-2a}$, $\mathcal{L}(T)=\log{T}$, and $W(x) = e^{(\log x)^{a/2}}$.
Thus \[ F_\beta(x,T) \ll T\log T \qquad \text{uniformly for} \qquad \frac{T^2}{\log^4 T}\ll x \ll T^{(\log T)^{(2-a)/a}},\] and we have \[ \mathcal{M}(x) \ll \sum_{1\le k \ll \log x}\sqrt{\frac{k}{k^{3-2a}}}\ll \log^ax. \] Applying Theorem \ref{thm1} we obtain \Cref{cor1}. \end{proof} \begin{proof}[Proof of \Cref{cor2}] In the case when $\beta=1$ we replace Conjecture 1 by \eqref{cor2assume} in Theorem 1, where the range where this conjecture holds is equivalent to taking $W(x)= e^{(\log x)^{3/4}}$ in \eqref{Conjecture1}. Then \eqref{neardone} becomes \[ \left|\sum_{y<\gamma\le 2y }\frac{x^{i\gamma}}{\gamma}\right| \ll \sqrt{\log x}. \] Taking $y=2^{k-1}$ as we did below \eqref{neardone}, we apply this bound $\ll \log x$ times and obtain from \eqref{Restimate1} \[\mathcal{R}(x) \ll x^{1/2}\Big( (\log x)^{3/2} + \log^2W \Big)\] and the result follows. \end{proof} \begin{proof}[Proof of \Cref{cor3}] Take $W(x) = \log^Ax$. Then by Theorem \ref{thm1} we will obtain \eqref{lem3eq} if $$ \mathcal{M}(x)\ll A^2(\log\log x)^2. $$ Thus we need $\sqrt{\mathcal{L}(2^k)/\beta(2^k)} \ll A^2(\log 2k)/k$, which we obtain with $\mathcal{L}(T) = \log T$ by choosing $\beta(T)$ as in \eqref{betalem3}. \end{proof} \begin{proof}[Proof of \Cref{thm2}] By \Cref{Lemma2}, for $\beta\ge 1$, \begin{equation} \label{thm2Step1} |F_\beta(x,T)-F(x,T)| \ll \beta^3\left( \int_{0}^V + \int_V^\infty\right) |F(xe^{\pm u},T)- F(x,T)| e^{-2\beta u}\, du = I_1 +I_2, \end{equation} where we take \[ V= \frac{\log (\beta\log T)}{\beta}. \] Using the trivial bound $F(x,T) \ll T\log^2T$ from \eqref{trivial}, we have \[ I_2 \ll \beta^3 T\log^2 T \int_V^\infty e^{-2\beta u}\, du = \frac{\beta^2}2 T\log^2Te^{-2\beta V} = \frac12 T,\] which is acceptable and holds for all $x$.
Next, for $I_1$ we note that for $0\le u\le V$ we have $e^{-V} \le e^{\pm u} \le e^{V}$, and hence \[ I_1 \ll \beta^3 \max_{ e^{-V}\le v \le e^V}|F(xv,T)-F(x,T)| \int_0^\infty e^{-2\beta u}\, du \ll \beta^2\max_{ e^{-V}\le v \le e^V}|F(xv,T)-F(x,T)|.\] Letting $f(x) = \frac{\log x}{x}$, by calculus we note for $x >0$ that $f(x) \le f(e) = \frac{1}{e}$. Thus for $\beta\ge 1$ \[ e^V = (\beta \log T)^{\frac1{\beta}}\le e^{f(\beta)}\log T\le e^{\frac{1}{e}}\log T < 2 \log T,\] and similarly $e^{-V}\ge e^{-\frac{1}{e}}\frac{1}{\log T} > \frac{1}{2\log T}.$ Thus by \Cref{conj2} we have $I_1\ll T\mathcal{L}(T)$ and \Cref{thm2} follows. \end{proof} \begin{proof}[Proof of \Cref{cor4}] We apply \eqref{StrongFConj} with $x$ replaced with $xv'$, where $v'$ is a value with $1/\log T\ll v' \ll \log T$. Then $|F(xv',T)-F(x,T)| \ll T\log T/(\log T)^B$, and the bound in \eqref{Conjecture2b} holds if $1\le \beta(T) \ll (\log T)^{B/2}$. We have obtained this bound using \eqref{StrongFConj} in the range $W(xv'/\log T)\ll T\ll (xv')^{1/2}\log^3(xv')$, and since $W(xv'/\log T)\ll W(x)$ and $x^{1/2}\log^2x\ll (xv')^{1/2}\log^3(xv')$, we see that \eqref{Conjecture2b} holds in the stated range $W(x) \ll T\ll x^{1/2}\log^2x$. \end{proof} \section*{Conflict of Interest} We have no conflicts of interest to disclose. \section*{Data Availability Statements} Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. \begin{thebibliography}{ABC99} \bibitem[FSZ09]{FSZ09} K. Ford, K. Soundararajan, and A. Zaharescu, {\it On the distribution of imaginary parts of zeros of the Riemann zeta function, II}, Math. Ann. {\bf 343} (2009), 487--505. \bibitem[Gal80]{Gallagher80} P. X. Gallagher, {\it Some consequences of the Riemann hypothesis}, Acta Arith. {\bf 37} (1980), 339--343. \bibitem[GM78]{GaMu78} P. X. Gallagher and J. H. Mueller, {\it Primes and zeros in short intervals}, J. reine angew. Math. {\bf 303/304} (1978), 205--220.
\bibitem[GH84]{GoldH-B84} D. A. Goldston and D. R. Heath-Brown, {\it A note on the differences between consecutive primes}, Math. Ann. {\bf 266} (1984), 317--320. \bibitem[GM87]{GoldMont} D. A. Goldston and H. L. Montgomery, {\it Pair correlation of zeros and primes in short intervals}, Analytic Number Theory and Diophantine Problems (A. C. Adolphson et al., eds.), Proc. of a Conference at Oklahoma State University (1984), Birkh\"auser Verlag, 1987, 183--203. \bibitem[GS23]{GS21} D. A. Goldston and A. I. Suriajaya, {\it On an average Goldbach representation formula of Fujii}, to appear in Nagoya Math. J.; preprint, arXiv:2110.14250 [math.NT]. \bibitem[Hea82]{Heath-Brown} D. R. Heath-Brown, {\it Gaps between primes, and the pair correlation of zeros of the zeta-function}, Acta Arith. {\bf 41} (1982), 85--99. \bibitem[Ing32]{Ingham1932} A. E. Ingham, {\it The Distribution of Prime Numbers}, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 1990. Reprint of the 1932 original; with a foreword by R. C. Vaughan. \bibitem[Koc01]{vonKoch1901} H. von Koch, {\it Sur la distribution des nombres premiers}, Acta Math. {\bf 24} (1901), 159--182. \bibitem[LPZ12]{LPZ12} A. Languasco, A. Perelli, and A. Zaccagnini, {\it Explicit relations between pair correlation of zeros and primes in short intervals}, J. Math. Anal. Appl. {\bf 394} (2012), no. 2, 761--771. \bibitem[LPZ16]{LPZ16} A. Languasco, A. Perelli, and A. Zaccagnini, {\it An extension of the pair-correlation conjecture and applications}, Math. Res. Lett. {\bf 23} (2016), no. 1, 201--220. \bibitem[LPZ17]{LPZ17} A. Languasco, A. Perelli, and A. Zaccagnini, {\it An extended pair-correlation conjecture and primes in short intervals}, Trans. Amer. Math. Soc. {\bf 369} (2017), no. 6, 4235--4250. \bibitem[Mon71]{Montgomery71} H. L. Montgomery, {\it Topics in Multiplicative Number Theory}, Lecture Notes in Mathematics, Vol. 227, Springer-Verlag, Berlin--New York, 1971.
\bibitem[Mon72]{Montgomery72} H. L. Montgomery, {\it The pair correlation of zeros of the zeta function}, in: Analytic Number Theory, Proc. Sympos. Pure Math., Vol. XXIV, St. Louis Univ., St. Louis, Mo., 1972, 181--193. \bibitem[MV07]{MontgomeryVaughan2007} H. L. Montgomery and R. C. Vaughan, {\it Multiplicative Number Theory}, Cambridge Studies in Advanced Mathematics {\bf 97}, Cambridge University Press, Cambridge, 2007. \bibitem[Mue76]{Mueller76} J. H. Mueller, {\it Primes and zeros in short intervals}, Thesis, Columbia University, 1976. \bibitem[Sch03]{Schmidt1903} E. Schmidt, {\it \"Uber die Anzahl der Primzahlen unter gegebener Grenze}, Math. Ann. {\bf 57} (1903), no. 2, 195--204. \bibitem[Tit86]{Titchmarsh} E. C. Titchmarsh, {\it The Theory of the Riemann Zeta-Function}, 2nd ed., revised by D. R. Heath-Brown, Clarendon Press, Oxford, 1986. \end{thebibliography} \end{document}
\begin{document} \sloppy \righthyphenmin = 2 \newcommand{\sref}[1]{(\ref{#1})} \title{Periodic shadowing and $\Omega$-stability} \author{A. V.\ Osipov\footnotemark[1],\; S. Yu.\ Pilyugin\footnotemark[1],\; and S. B.\ Tikhomirov\footnotemark[2] \footnotemark[3]} \date{} \footnotetext[1] {Faculty of Mathematics and Mechanics, St.\ Petersburg State University, University av.\ 28, 198504, St.\ Petersburg, Russia} \footnotetext[2] {Department of Mathematics, National Taiwan University, No.
1, Section 4, Roosevelt Road, Taipei 106, Taiwan} \footnotetext[3] {The research of the third author is supported by NSC (Taiwan) 98-2811-M-002-061} \maketitle \begin{abstract} We show that the following three properties of a diffeomorphism $f$ of a smooth closed manifold are equivalent: (i) $f$ belongs to the $C^1$-interior of the set of diffeomorphisms having periodic shadowing property; (ii) $f$ has Lipschitz periodic shadowing property; (iii) $f$ is $\Omega$-stable. Bibliography: 20 titles. \end{abstract} Mathematics Subject Classification: 37C50, 37D20 Keywords: periodic shadowing, hyperbolicity, $\Omega$-stability \section{Introduction} The theory of shadowing of approximate trajectories (pseudotrajectories) of dynamical systems is now a well developed part of the global theory of dynamical systems (see, for example, the monographs [1, 2]). This theory is closely related to the classical theory of structural stability. It is well known that a diffeomorphism has shadowing property in a neighborhood of a hyperbolic set [3, 4] and a structurally stable diffeomorphism has shadowing property on the whole manifold [5 -- 7]. Analyzing the proofs of the first shadowing results by Anosov [3] and Bowen [4], it is easy to see that, in a neighborhood of a hyperbolic set, the shadowing property is Lipschitz (and the same holds in the case of a structurally stable diffeomorphism, see [1]). The shadowing property means that, near a sufficiently precise approximate trajectory of a dynamical system, there is an exact trajectory. One can pose a similar question replacing arbitrary approximate and exact trajectories by periodic ones (the corresponding property is called periodic shadowing property, see [8]). In this paper, we study relations between periodic shadowing and structural stability (to be more precise, $\Omega$-stability). It is easy to give an example of a diffeomorphism that is not structurally stable but has shadowing property (see [9], for example).
Similarly, there exist diffeomorphisms that are not $\Omega$-stable but have periodic shadowing property. Thus, structural stability is not equivalent to shadowing (and $\Omega$-stability is not equivalent to periodic shadowing). One of the possible approaches in the study of relations between shadowing and structural stability is the passage to $C^1$-interiors. At present, it is known that the $C^1$-interior of the set of diffeomorphisms having shadowing property coincides with the set of structurally stable diffeomorphisms [10]. Later, a similar result was obtained for orbital shadowing property (see [11] for details). In this paper, we show that the $C^1$-interior of the set of diffeomorphisms having periodic shadowing property coincides with the set of $\Omega$-stable diffeomorphisms. We are also interested in the study of the above-mentioned relations without the passage to $C^1$-interiors. Let us mention in this context that Abdenur and Diaz conjectured that a $C^1$-generic diffeomorphism with shadowing property is structurally stable; they have proved this conjecture for so-called tame diffeomorphisms [12]. Recently, it was proved that Lipschitz shadowing and the so-called variational shadowing are equivalent to structural stability [13, 9]. The second main result of this paper states that Lipschitz periodic shadowing property is equivalent to $\Omega$-stability. \section{Main results} Let us pass to exact definitions and statements. Let $f$ be a diffeomorphism of a smooth closed manifold $M$ with Riemannian metric $\mbox{dist}$. We denote by $Df(x)$ the differential of $f$ at a point $x\in M$. Denote by $T_xM$ the tangent space of $M$ at a point $x$; let $|v|,\;v\in T_xM$, be the norm generated by the metric $\mbox{dist}$. As usual, we say that a sequence $\xi=\{x_i\in M,\;i\in\mbox{$\mathds{Z}$}\}$ is a $d$-pseudotrajectory of $f$ if \begin{equation} \label{0} \mbox{dist}(f(x_i),x_{i+1})<d,\quad i\in\mbox{$\mathds{Z}$}. \end{equation} {\bf Definition 1.
} We say that $f$ has {\em periodic shadowing} property if for any positive $\varepsilon$ there exists a positive $d$ such that if $\xi=\{x_i\}$ is a periodic $d$-pseudotrajectory, then there exists a periodic point $p$ such that \begin{equation} \label{00} \mbox{dist}(f^i(p),x_i)<\varepsilon,\quad i\in\mbox{$\mathds{Z}$}. \end{equation} Denote by $\mbox{PerSh}$ the set of diffeomorphisms having periodic shadowing property. {\bf Definition 2. } We say that $f$ has {\em Lipschitz periodic shadowing} property if there exist positive constants $\mbox{${\cal L}$},d_0$ such that if $\xi=\{x_i\}$ is a periodic $d$-pseudotrajectory with $d\leq d_0$, then there exists a periodic point $p$ such that \begin{equation} \label{00L} \mbox{dist}(f^i(p),x_i)\leq \mbox{${\cal L}$} d,\quad i\in\mbox{$\mathds{Z}$}. \end{equation} Denote by $\mbox{LipPerSh}$ the set of diffeomorphisms having Lipschitz periodic shadowing property. Denote by $\Omega S$ the set of $\Omega$-stable diffeomorphisms (it is well known that $f\in\Omega S$ if and only if $f$ satisfies Axiom A and the no cycle condition, see, for example, [14]). Denote by $\Diff ^1(M)$ the space of diffeomorphisms of $M$ with the $C^1$ topology. For a set $P\subset \Diff ^1(M)$ we denote by $\mbox{Int}^1(P)$ its $C^1$-interior. Let us state our main result. {\bf Theorem. } $\mbox{Int}^1(\mbox{PerSh})=\mbox{LipPerSh}=\Omega S$. The structure of the paper is as follows. In Sec. 3, we prove the inclusion $\Omega S\subset\mbox{LipPerSh}$. Of course, this inclusion implies that $\Omega S\subset\mbox{PerSh}$. Since the set $\Omega S$ is $C^1$-open, we conclude that $\Omega S\subset \mbox{Int}^1(\mbox{PerSh})$. In Sec. 4, we prove the inclusion $\mbox{Int}^1(\mbox{PerSh})\subset\Omega S$. In Sec. 5, we prove the inclusion $\mbox{LipPerSh}\subset\Omega S$. \section{$\Omega S\subset\mbox{LipPerSh}$} First we introduce some basic notation. 
Denote by $\Per(f)$ the set of periodic points of $f$ and by $\Omega(f)$ the nonwandering set of $f$. Let $N=\sup_{x\in M}\|Df(x)\|$. Let us formulate several auxiliary definitions and statements. It is well known that if a diffeomorphism $f$ satisfies Axiom A, then its nonwandering set can be represented as a disjoint union of a finite number of compact sets: \begin{equation} \label{spe} \Omega(f)=\Omega_1\cup\dots\cup\Omega_m, \end{equation} where the sets $\Omega_i$ are so-called basic sets (hyperbolic sets each of which contains a dense positive semi-trajectory). We say that a diffeomorphism $f$ has Lipschitz shadowing property on a set $U$ if there exist positive constants $\mbox{${\cal L}$},d_0$ such that if $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}\subset U$ is a $d$-pseudotrajectory with $d\leq d_0$, then there exists a point $p\in U$ such that inequalities (\ref{00L}) hold. We say that a diffeomorphism $f$ is expansive on a set $U$ if there exists a positive number $a$ (expansivity constant) such that if two trajectories $\{f^i(p):\;i\in\mbox{$\mathds{Z}$}\}$ and $\{f^i(q):\;i\in\mbox{$\mathds{Z}$}\}$ belong to $U$ and the inequalities $$ \mbox{dist}(f^i(p),f^i(q))\leq a,\quad i\in\mbox{$\mathds{Z}$}, $$ hold, then $p=q$. The following statement is well known (see [1, 14], for example). {\bf Proposition. } {\em If $\Lambda$ is a hyperbolic set, then there exists a neighborhood $U$ of $\Lambda$ such that $f$ has Lipschitz shadowing property on $U$ and is expansive on $U$.} We also need the following two lemmas (see [15]). {\bf Lemma 1. }{\em Let $f$ be a homeomorphism of a compact metric space $(X,\dist)$. For any neighborhood $U$ of the nonwandering set $\Omega(f)$ there exist positive numbers $B,d_1$ such that if $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}$ is a $d$-pseudotrajectory of $f$ with $d\leq d_1$ and $$ x_k,x_{k+1},\dots,x_{k+l}\notin U $$ for some $l>0$ and $k\in\mbox{$\mathds{Z}$}$, then $l\leq B$}.
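As an aside, the expansivity property defined above can be observed numerically on the standard hyperbolic example of the cat map of the 2-torus. The sketch below (all constants and initial points are chosen purely for illustration) shows two distinct nearby orbits separating beyond a candidate expansivity constant within a few iterates.

```python
import numpy as np

# Two distinct orbits of the (hyperbolic) cat map cannot stay a-close
# forever; here they separate after a handful of iterates.
A = np.array([[2.0, 1.0], [1.0, 1.0]])

def cat(x):
    return (A @ x) % 1.0           # linear toral automorphism

def torus_dist(x, y):
    d = np.abs(x - y)
    return np.linalg.norm(np.minimum(d, 1.0 - d))

a = 0.1                            # candidate expansivity constant (illustrative)
p, q = np.array([0.2, 0.3]), np.array([0.2 + 1e-6, 0.3])
separated = False
for _ in range(60):
    if torus_dist(p, q) > a:
        separated = True
        break
    p, q = cat(p), cat(q)
assert separated                   # the orbits of p != q do not stay a-close
```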
Let $\Omega_1,\dots,\Omega_m$ be the basic sets in decomposition (\ref{spe}) of the nonwandering set of an $\Omega$-stable diffeomorphism $f$. {\bf Lemma 2. }{\em Let $U_1,\dots,U_m$ be disjoint neighborhoods of the basic sets $\Omega_1,\dots,\Omega_m$. There exist neighborhoods $V_j\subset U_j$ of the sets $\Omega_j$ and a number $d_2>0$ such that if $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}$ is a $d$-pseudotrajectory of $f$ with $d\leq d_2$ such that $x_0\in V_j$ and $x_t\notin U_j$ for some $j\in\{1,\dots,m\}$ and some $t>0$, then $x_i\notin V_j$ for $i\geq t$.} {\bf Lemma 3. }{$\Omega S\subset\mbox{LipPerSh}$.} {\em Proof. } Apply the above proposition and find disjoint neighborhoods $W_1,\dots,W_m$ of the basic sets $\Omega_1,\dots,\Omega_m$ in decomposition (\ref{spe}) such that (i) $f$ has Lipschitz shadowing property on any of $W_j$ with the same constants $\mbox{${\cal L}$},d^*_0$; (ii) $f$ is expansive on any of $W_j$ with the same expansivity constant $a$. Find neighborhoods $V_j,U_j$ of $\Omega_j$ (and reduce $d^*_0$, if necessary) so that the following properties are fulfilled: $\bullet$ $V_j\subset U_j\subset W_j,\quad j=1,\dots,m$; $\bullet$ the statement of Lemma 2 holds for $V_j$ and $U_j$ with some $d_2>0$; $\bullet$ the $\mbox{${\cal L}$} d^*_0$-neighborhoods of $U_j$ belong to $W_j$. Apply Lemma 1 to find the corresponding constants $B,d_1$ for the neighborhood $V_1\cup\dots\cup V_m$ of $\Omega(f)$. We claim that $f$ has the Lipschitz periodic shadowing property with constants $\mbox{${\cal L}$},d_0$, where $$ d_0=\min\left(d^*_0,d_1,d_2,\frac{a}{2\mbox{${\cal L}$}}\right). $$ Take a $\mu$-periodic $d$-pseudotrajectory $\xi=\{x_i,\;i\in\mbox{$\mathds{Z}$}\}$ of $f$ with $d\leq d_0$. Lemma 1 implies that there exists a neighborhood $V_j$ such that $\xi\cap V_j\neq\emptyset$; shifting indices, we may assume that $x_0\in V_j$. In this case, $\xi\subset U_j$. Indeed, if $x_{i_0}\notin U_j$ for some $i_0$, then $x_{i_0+k\mu}\notin U_j$ for all $k$. 
It follows from Lemma 2 that if $i_0+k\mu>0$, then $x_i\notin V_j$ for $i\geq i_0+k\mu$, and we get a contradiction with the periodicity of $\xi$ and the inclusion $x_0\in V_j$. Thus, there exists a point $p$ such that inequalities (\ref{00L}) hold. Let us show that $p\in\Per(f)$. By the choice of $U_j$ and $W_j$, $f^i(p)\in W_j$ for all $i\in\mbox{$\mathds{Z}$}$. Let $q=f^\mu(p)$. Inequalities (\ref{00L}) and the periodicity of $\xi$ imply that $$ \mbox{dist}(f^i(q),x_{i})= \mbox{dist}(f^i(q),x_{i+\mu})\leq \mbox{${\cal L}$} d,\quad i\in\mbox{$\mathds{Z}$}. $$ Thus, $$ \mbox{dist}(f^i(q),f^i(p))\leq 2\mbox{${\cal L}$} d\leq a,\quad i\in\mbox{$\mathds{Z}$}, $$ which implies that $f^\mu(p)=q=p$. This completes the proof. {\bf Remark. } Thus, we have shown that an $\Omega$-stable diffeomorphism has periodic shadowing property (and its Lipschitz variant). It must be noted that it was shown in [16] that there exist $\Omega$-stable diffeomorphisms that do not have weak shadowing property (hence, they do not have orbital and usual shadowing properties, see [11] for details). \section{ $\mbox{Int}^1(\mbox{PerSh})\subset\Omega S$} In the proof, we refer to the following well-known statement. Denote by $\mbox{HP}$ the set of diffeomorphisms $f$ such that every periodic point of $f$ is hyperbolic; let ${\cal F}=\mbox{Int}^1(\mbox{HP})$. It is known (see [17, 18]) that the set ${\cal F}$ coincides with the set $\Omega S$ of $\Omega$-stable diffeomorphisms. Thus, it suffices for us to prove the following statement. {\bf Lemma 4. } $\mbox{Int}^1(\mbox{PerSh})\subset{\cal F}$. {\em Proof. } In the proof of this lemma, as well as in some proofs below, we apply the usual linearization technique based on exponential mapping. Let $\exp$ be the standard exponential mapping on the tangent bundle of $M$ and let $\exp_x$ be the corresponding mapping $$ T_xM\to M. $$ Let $p$ be a periodic point of $f$; denote $p_i=f^i(p)$ and $A_i=Df(p_i)$. 
We introduce the mappings \begin{equation} \label{1} F_i=\exp^{-1}_{p_{i+1}}\circ f\circ\exp_{p_i}: T_{p_i}M\to T_{p_{i+1}}M. \end{equation} It follows from the standard properties of the exponential mapping that $D\exp_x(0)=\mbox{Id}$; hence, $$ DF_i(0)=A_i. $$ We can represent $$ F_i(v)=A_iv+\phi_i(v), $$ where $$ \frac{|\phi_i(v)|}{|v|}\to 0\mbox{ as } |v|\to 0. $$ Denote by $B(r,x)$ the ball in $M$ of radius $r$ centered at a point $x$ and by $B_T(r,x)$ the ball in $T_xM$ of radius $r$ centered at the origin. There exists $r>0$ such that, for any $x\in M$, $\exp_x$ is a diffeomorphism of $B_T(r,x)$ onto its image, and $\exp^{-1}_x$ is a diffeomorphism of $B(r,x)$ onto its image. In addition, we may assume that $r$ has the following property. If $v,w\in B_T(r,x)$, then $$ \frac{\mbox{dist}(\exp_x(v),\exp_x(w))}{|v-w|}\leq 2; $$ if $y,z\in B(r,x)$, then $$ \frac{|\exp^{-1}_x(y)-\exp^{-1}_x(z)|}{\mbox{dist}(y,z)}\leq 2. $$ Every time we construct periodic $d$-pseudotrajectories of $f$, we take $d$ so small that the points of the pseudotrajectories under consideration, the points of the shadowing trajectories, their ``lifts'' to tangent spaces, etc.\ belong to the corresponding balls $B(r,p_i)$ and $B_T(r,p_i)$ (and we do not repeat this condition on the smallness of $d$). To prove Lemma 4, it is enough for us to show that $\mbox{Int}^1(\mbox{PerSh})\subset\mbox{HP}$ and to note that the left-hand side of this inclusion is $C^1$-open. To get a contradiction, let us assume that a diffeomorphism $f\in\mbox{Int}^1(\mbox{PerSh})$ has a nonhyperbolic periodic point $p$. Fix a $C^1$-neighborhood ${\cal N}\subset\mbox{PerSh}$ of $f$. For simplicity, let us assume that $p$ is a fixed point and that the matrix $A_0=Df(p)$ has an eigenvalue $\mbox{$\lambda$}=1$ (the remaining cases are considered using similar reasoning, see, for example, [19]). In our case, an analog of mapping (\ref{1}), $$ F=\exp_p^{-1}\circ f\circ\exp_p: T_{p}M\to T_{p}M, $$ has the form $$ F(v)=A_0v+\phi(v).
$$ Clearly, we can find a number $a\in(0,r)$ (recall that the number $r$ was fixed above when properties of the exponential mapping were described), coordinates $v=(u,w)$ in $T_pM$ with one-dimensional $u$, and a diffeomorphism $h\in{\cal N}$ such that if $$ H=\exp_p^{-1}\circ h\circ\exp_p $$ and $|v|\leq a$, then $$ H(v)=Av=(u,Bw), $$ where $B$ is a matrix of size $(n-1)\times(n-1)$ (and $n$ is the dimension of $M$). For this purpose, we take a matrix $A$, close to $A_0$ and having an eigenvalue $\mbox{$\lambda$}=1$ of multiplicity one, and ``annihilate" the $C^1$-small term $(A_0-A)v+\phi(v)$ in the small ball $B_T(a,p)$. Take a positive $\varepsilon$ such that $8\varepsilon<a$. Since $h\in{\cal N}$, there exists a corresponding $d\in(0,\varepsilon)$ from the definition of periodic shadowing (for the diffeomorphism $h$). Take a natural number $K$ such that $Kd>8\varepsilon$. Reducing $d$, if necessary, we may assume that \begin{equation} \label{2.01} 8\varepsilon<Kd<2a. \end{equation} Let us construct a sequence $y_k\in T_pM,\;k\in\mbox{$\mathds{Z}$},$ as follows: $$ y_0=0,\quad y_{k+1}=Ay_k+\left(\frac{d}{2},0\right),\quad 0\leq k\leq K-1, $$ $$ y_{k+1}=Ay_k-\left(\frac{d}{2},0\right),\quad K\leq k\leq 2K-1, $$ and $y_{k+2K}=y_k,\;k\in\mbox{$\mathds{Z}$}$. Clearly, \begin{equation} \label{2.2} y_K=\left(\frac{Kd}{2},0\right). \end{equation} Let $$ x_k=\exp_p(y_k). $$ Since $$ \exp_p^{-1}(h(x_k))=H(y_k)=Ay_k $$ and $$ |y_{k+1}-Ay_k|=\frac{d}{2}, $$ the sequence $\xi=\{x_k\}$ is a $2K$-periodic $d$-pseudotrajectory of $h$. By our assumption, there exists a periodic point $p_0$ of $h$ such that $$ \mbox{dist}(p_k,x_k)<\varepsilon,\quad k\in\mbox{$\mathds{Z}$}, $$ where $p_k=h^k(p_0)$. Let $$ p_k=\exp_p(q_k),\quad k\in\mbox{$\mathds{Z}$}, $$ where $q_k=(U_k,W_k)$, and let $y_k=(u_k,w_k)$; then $$ |U_k-u_k|\leq|q_k-y_k|<2\varepsilon,\quad k\in\mbox{$\mathds{Z}$}, $$ which implies that $$ |U_0|\leq|q_0|<2\varepsilon. 
$$ Since $q_{k+1}=H(q_k)$, $U_k=U_0$ for all $k$ due to the structure of $H$. We conclude that $|U_K|<2\varepsilon$ and get a contradiction with the inequalities $|U_K-u_K|<2\varepsilon$, (\ref{2.01}), and (\ref{2.2}). The lemma is proved. \section{ $\mbox{LipPerSh}\subset\Omega S$} In this section, we assume that $f\in\mbox{LipPerSh}$ (with constants $\mbox{${\cal L}$}\geq 1,d_0>0$). Clearly, in this case $f^{-1}\in\mbox{LipPerSh}$ as well (and we assume that the constants $\mbox{${\cal L}$},d_0$ are the same for $f$ and $f^{-1}$). In the construction of pseudotrajectories, we apply the same linearization technique as in the previous section. {\bf Lemma 5. } {\em Every point $p\in\Per(f)$ is hyperbolic.} {\em Proof. } To get a contradiction, let us assume that $f$ has a nonhyperbolic periodic point $p$ (to simplify notation, we assume that $p$ is a fixed point; literally the same reasoning can be applied to a periodic point of period $m>1$). In this case, mapping (\ref{1}) takes the form $$ F(v)=\exp^{-1}_p\circ f\circ\exp_p(v)=Av+\phi(v), $$ where $A$ is a nonhyperbolic matrix. The following two cases are possible: (Case 1): $A$ has a real eigenvalue $\mbox{$\lambda$}$ with $|\mbox{$\lambda$}|=1$; (Case 2): $A$ has a complex eigenvalue $\mbox{$\lambda$}$ with $|\mbox{$\lambda$}|=1$. We treat in detail only Case 1; we give a comment concerning Case 2. To simplify presentation, we assume that 1 is an eigenvalue of $A$; the case of eigenvalue $-1$ is treated similarly. We can find coordinates $v$ in $T_pM$ such that, with respect to this coordinate, the matrix $A$ has block-diagonal form, \begin{equation} \label{bform} A=\mbox{diag}(B,P), \end{equation} where $B$ is a Jordan block of size $l\times l$: $$ B=\left( \begin{array}{ccccc} 1&1&0&\ldots&0\\ 0&1&1&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&1 \end{array} \right). 
$$ Of course, introducing new coordinates, we have to change the constants $\mbox{${\cal L}$},d_0,N$; we denote the new constants by the same symbols. In addition, we assume that $\mbox{${\cal L}$}$ is an integer. We start by considering the case $l=2$; in this case, $$ B=\left( \begin{array}{cc} 1&1\\ 0&1 \end{array} \right). $$ Let $$ e_1=(1,0,0,\dots,0) \mbox{ and } e_2=(0,1,0,\dots,0) $$ be the first two vectors of the standard orthonormal basis. Let $K=25\mbox{${\cal L}$}$. Take a small $d>0$ and construct a finite sequence $y_0,\dots,y_Q$ in $T_pM$ (where $Q$ is determined later) as follows: $y_0=0$ and \begin{equation} \label{pst} y_{k+1}=Ay_k+de_2,\quad k=0,\dots, K-1. \end{equation} Then $$ y_K=(Z_1(K)d,Kd,0,\dots,0), $$ where the natural number $Z_1(K)$ is determined by $K$ (we do not write $Z_1(K)$ explicitly). Now we set $$ y_{k+1}=Ay_k-de_2,\quad k=K,\dots, 2K-1. $$ Then $$ y_{2K}=(Z_2(K)d,0,0,\dots,0), $$ where the natural number $Z_2(K)$ is determined by $K$ as well. Take $Q=2K+Z_2(K)$; if we set $$ y_{k+1}=Ay_k-de_1,\quad k=2K,\dots, Q-1, $$ then $y_Q=0$. Let us note that both numbers $Q$ and $$ Y:=\frac{\max_{0\leq k\leq Q-1}|y_k|}{d} $$ are determined by $K$ (and hence, by $\mbox{${\cal L}$}$). Now we construct a $Q$-periodic sequence $y_k,k\in\mbox{$\mathds{Z}$},$ that coincides with the above sequence for $k=0,\dots,Q$. We set $x_k=\exp_p(y_k)$ and claim that if $d$ is small enough, then $\xi=\{x_k\}$ is a $4d$-pseudotrajectory of $f$ (and this pseudotrajectory is $Q$-periodic by construction). Indeed, we know that $|y_k|\leq Yd$ for $k\in\mbox{$\mathds{Z}$}$. Since $\phi(v)=o(|v|)$ as $|v|\to 0$, \begin{equation} \label{5} |\phi(y_k)|<d,\quad k\in\mbox{$\mathds{Z}$}, \end{equation} if $d$ is small enough. The definition of $\{y_k\}$ implies that \begin{equation} \label{6} |y_{k+1}-Ay_{k}|=d,\quad k\in\mbox{$\mathds{Z}$}.
\end{equation} Note that $$ \exp^{-1}_p(f(x_k))=F(y_k)=Ay_k+\phi(y_k); $$ thus, it follows from (\ref{5}) and (\ref{6}) that $$ |y_{k+1}-\exp^{-1}_p(f(x_k))|\leq |y_{k+1}-Ay_{k}|+|\phi(y_k)|<2d, $$ which implies that $\xi=\{x_k\}$ is a $4d$-pseudotrajectory of $f$ if $d$ is small enough. Now we estimate the distances between points of trajectories of the mapping $F$ and its linearization. Let us take a vector $q_0\in T_pM$ and assume that the sequence $q_k=F^k(q_0)$ belongs to the ball $|v|\leq (Y+8\mbox{${\cal L}$})d$ for $0\leq k\leq K$. Let $r_k=A^kq_0$ (we impose no conditions on $r_k$ since below we estimate $\phi$ at points $q_k$ only). Take a small number $\mu\in(0,1)$ (to be chosen later) and assume that $d$ is small enough, so that the inequality $$ |\phi(v)|\leq\mu|v| $$ holds for $|v|\leq (Y+8\mbox{${\cal L}$})d$. Then $$ |q_1|\leq|Aq_0|+|\phi(q_0)|\leq (N+1)|q_0|,\dots, |q_{k}|\leq|Aq_{k-1}|+|\phi(q_{k-1})|\leq (N+1)^k|q_0| $$ for $1\leq k\leq K$, and $$ |q_1-r_1|=|Aq_0+\phi(q_0)-Aq_0|\leq\mu|q_0|, $$ $$ |q_2-r_2|=|Aq_1+\phi(q_1)-Ar_1|\leq N|q_1-r_1|+\mu|q_1| \leq \mu(2N+1)|q_0|, $$ $$ |q_3-r_3|\leq N|q_2-r_2|+\mu|q_2| \leq \mu(N(2N+1)+(N+1)^2)|q_0|, $$ and so on. Thus, there exists a number $\nu=\nu(K,N)$ such that $$ |q_k-r_k|\leq \mu\nu|q_0|,\quad 0\leq k\leq K. $$ We take $\mu=1/\nu$, note that $\mu=\mu(K,N)$, and get the inequalities \begin{equation} \label{7} |q_k-r_k|\leq |q_0|,\quad 0\leq k\leq K, \end{equation} for $d$ small enough. Since $f\in\mbox{LipPerSh}$, for $d$ small enough, the $Q$-periodic $4d$-pseudotrajectory $\xi$ is $4\mbox{${\cal L}$} d$-shadowed by a periodic trajectory. Let $p_0$ be a point of this trajectory such that \begin{equation} \label{8} \mbox{dist}(p_k,x_k)\leq 4\mbox{${\cal L}$} d,\quad k\in\mbox{$\mathds{Z}$}, \end{equation} where $p_k=f^k(p_0)$. Let $q_k=\exp^{-1}_p(p_k)$. 
The inequalities $|y_k|\leq Yd$ and (\ref{8}) imply that \begin{equation} \label{9} |q_k|\leq |y_k|+2\mbox{dist}(p_k,x_k)\leq (Y+8\mbox{${\cal L}$})d,\quad k\in\mbox{$\mathds{Z}$}. \end{equation} Note that $|q_0|\leq 8\mbox{${\cal L}$} d$. Set $r_k=A^kq_0$; we deduce from estimate (\ref{7}) that if $d$ is small enough, then \begin{equation} \label{10} |q_K-r_K|\leq |q_0|\leq 8\mbox{${\cal L}$} d. \end{equation} Denote by $v^{(2)}$ the second coordinate of a vector $v\in T_pM$. It follows from the structure of the matrix $A$ that \begin{equation} \label{11} |r_K^{(2)}|=|q_0^{(2)}|\leq 8\mbox{${\cal L}$} d. \end{equation} The relations $$ |y_K^{(2)}|=Kd\mbox{ and } |q_K-y_K|\leq 8\mbox{${\cal L}$} d $$ imply that \begin{equation} \label{12} |q_K^{(2)}|\geq Kd-8\mbox{${\cal L}$} d=17\mbox{${\cal L}$} d \end{equation} (recall that $K=25\mbox{${\cal L}$}$). Estimates (\ref{10})--(\ref{12}) are contradictory. Our lemma is proved in Case 1 for $l=2$. If $l=1$, then the proof is simpler; the first coordinate of $A^kv$ equals the first coordinate of $v$, and we construct the periodic pseudotrajectory perturbing the first coordinate only. If $l>2$, the reasoning is parallel to that above; we first perturb the $l$th coordinate to make it $Kd$, and then produce a periodic sequence successively making the $l$th coordinate, the $(l-1)$st coordinate, and so on, equal to zero. If $\mbox{$\lambda$}$ is a complex eigenvalue, $\mbox{$\lambda$}=a+bi$, we take a real $2\times 2$ matrix $$ R=\left( \begin{array}{cc} a&-b\\ b&a\\ \end{array} \right) $$ and assume that in representation (\ref{bform}), $B$ is a real $2l\times 2l$ Jordan block: $$ B=\left( \begin{array}{ccccc} R&E_2&0&\ldots&0\\ 0&R&E_2&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&R \end{array} \right), $$ where $E_2$ is the $2\times 2$ unit matrix.
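Before passing to the complex case, the real construction above (for $l=2$) is easy to verify numerically. The following sketch is illustrative only: it works in the plane rather than in $T_pM$, takes $\mbox{${\cal L}$}=1$ (so $K=25$) and $d=10^{-3}$, and finds $Z_1(K)=K(K-1)/2$, $Z_2(K)=K^2$, and $Q=2K+Z_2(K)$ for this recursion.

```python
import numpy as np

# Numerical check (illustrative) of the periodic pseudotrajectory built in
# the proof of Lemma 5 for the 2x2 Jordan block with eigenvalue 1 (l = 2).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d, K = 1e-3, 25                  # K = 25*L with L = 1 (illustrative values)

y = [np.zeros(2)]
for k in range(K):               # steps (pst): push the 2nd coordinate up to K*d
    y.append(A @ y[-1] + d * e2)
for k in range(K):               # bring the 2nd coordinate back to 0
    y.append(A @ y[-1] - d * e2)
Z2 = int(round(y[-1][0] / d))    # the 1st coordinate now equals Z2(K)*d
for k in range(Z2):              # remove the accumulated 1st coordinate
    y.append(A @ y[-1] - d * e1)
Q = 2 * K + Z2

assert np.allclose(y[K], [K * (K - 1) / 2 * d, K * d])  # so Z1(K) = K(K-1)/2
assert Z2 == K * K                                      # and Z2(K) = K^2
assert np.allclose(y[Q], 0.0)                           # the sequence closes up
# every step deviates from the linear map by exactly d, as in (6):
assert all(np.isclose(np.linalg.norm(y[k + 1] - A @ y[k]), d) for k in range(Q))
```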
After that, almost the same reasoning works; we note that $|Rv|=|v|$ for any 2-dimensional vector $v$ and construct periodic pseudotrajectories replacing, for example, formulas (\ref{pst}) by the formulas $$ y_{k+1}=Ay_k+dw_k,\quad k=0,\dots,K-1, $$ where the $j$th coordinates of the vector $w_k$ are zero for $j=1,\dots,2l-2,2l+1,\dots,n$, while the 2-dimensional vector corresponding to the $(2l-1)$st and $2l$th coordinates has the form $R^kw$ with $|w|=1$, and so on. We leave the details to the reader. The lemma is proved. {\bf Lemma 6. }{\em There exist constants $C>0$ and $\mbox{$\lambda$}\in(0,1)$ depending only on $N$ and $\mbox{${\cal L}$}$ and such that, for any point $p\in\Per(f)$, there exist complementary subspaces $S(p)$ and $U(p)$ of the tangent space $T_pM$ that are $Df$-invariant, i.e., (H1) $Df(p)S(p)=S(f(p))$ and $Df(p)U(p)=U(f(p))$, \noindent and the inequalities (H2.1) $|Df^j(p)v|\leq C\mbox{$\lambda$}^j|v|, \quad v\in S(p), j\geq 0$, \noindent and (H2.2) $|Df^{-j}(p)v|\leq C\mbox{$\lambda$}^j|v|, \quad v\in U(p), j\geq 0$, \noindent hold}. {\bf Remark. } Lemma 6 means that the set $\Per(f)$ has all the standard properties of a hyperbolic set, with the exception of compactness. {\em Proof. } Take a periodic point $p\in\Per(f)$; let $m$ be the minimal period of $p$. Denote $p_i=f^i(p)$, $A_i = D f(p_i)$, and $B = D f^m(p)$. It follows from Lemma 5 that the matrix $B$ is hyperbolic. Denote by $S(p)$ and $U(p)$ the invariant subspaces of $B$ corresponding to parts of its spectrum inside and outside the unit disk, respectively. Clearly, $S(p)$ and $U(p)$ are invariant with respect to $Df$, $T_{p}M = S(p) \oplus U(p)$, and the following relations hold: \begin{equation}\label{1.1} \lim_{n \to +\infty} B^n v_s = \lim_{n \to +\infty} B^{-n} v_u = 0, \quad v_s \in S(p), v_u \in U(p).
\end{equation} We prove that inequalities (H2.2) hold with $C=16\mbox{${\cal L}$}$ and $\mbox{$\lambda$}=(1+1/(8\mbox{${\cal L}$}))^{-1}$ (inequalities (H2.1) are established by similar reasoning applied to $f^{-1}$ instead of $f$). Consider an arbitrary nonzero vector $v_u \in U(p)$ and an integer $j\geq 0$. Define sequences $v_i, e_i \in T_{p_i}M$ and $\mbox{$\lambda$}_i > 0$ for $i\geq 0$ as follows: $$ v_0 = v_u, \quad v_{i+1} = A_i v_i, \quad e_i = \frac{v_i}{|v_i|}, \quad \mbox{$\lambda$}_i = \frac{|v_{i+1}|}{|v_i|} = |A_i e_i|. $$ Let $$ \tau= \frac{\mbox{$\lambda$}_{m-1}\cdot \ldots \cdot \mbox{$\lambda$}_1 + \mbox{$\lambda$}_{m-1}\cdot \ldots \cdot \mbox{$\lambda$}_2 + \ldots + \mbox{$\lambda$}_{m-1} + 1}{\mbox{$\lambda$}_{m-1}\cdot \ldots \cdot \mbox{$\lambda$}_0}. $$ Consider the sequence $\{a_i \in \mbox{$\mathds{R}$},\;i\geq 0\}$ defined by the following formulas: \begin{equation} \label{1.2} a_0 = \tau, \quad a_{i+1} = \mbox{$\lambda$}_i a_i -1. \end{equation} Note that \begin{equation} \label{1.3} a_{m} = 0 \quad \mbox{and} \quad a_i >0, \quad i \in [0, m-1]. \end{equation} Indeed, the equality $a_m=0$ follows by unwinding recursion (\ref{1.2}) and using the definition of $\tau$; and if $a_i\leq 0$ for some $i \in [0, m-1]$, then $a_k<0$ for $k \in [i+1, m]$, which contradicts $a_m=0$. It follows from (\ref{1.1}) that there exists $n > 0$ such that \begin{equation} \label{2.1} |B^{-n}\tau e_0| < 1. \end{equation} Consider the finite sequence $\{w_i \in T_{p_i}M,\;i\in[0,m(n+1)]\}$ defined as follows: $$ \begin{cases} w_i=a_i e_i, & \quad i \in [0, m-1], \\ w_{m} = B^{-n}\tau e_0, & \;\\ w_{m+1+i} = A_i w_{m+i}, & \quad i \in [0, mn - 1]. \end{cases} $$ Clearly, $$ w_{km}=B^{k-1-n}\tau e_0,\quad k\in[1,n+1], $$ which means that we can consider $\{w_i\}$ as an $m(n+1)$-periodic sequence defined for $i\in\mbox{$\mathds{Z}$}$.
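The claim $a_m=0$ in (\ref{1.3}) amounts to a telescoping identity for recursion (\ref{1.2}), and it is easy to confirm numerically; the sketch below uses arbitrary positive values of $m$ and of the $\mbox{$\lambda$}_i$ (illustrative only).

```python
import random

# Check (illustrative) that the choice of tau makes the recursion
# a_0 = tau, a_{i+1} = lambda_i * a_i - 1 satisfy a_m = 0, and hence
# lambda_{m-1} * a_{m-1} = 1.
random.seed(0)
m = 7
lam = [random.uniform(0.5, 2.0) for _ in range(m)]

# tau = (lam_{m-1}...lam_1 + lam_{m-1}...lam_2 + ... + lam_{m-1} + 1)
#       / (lam_{m-1}...lam_0)
num = 0.0
for j in range(1, m + 1):
    prod = 1.0
    for i in range(j, m):
        prod *= lam[i]
    num += prod
den = 1.0
for lam_i in lam:
    den *= lam_i

a = [num / den]                      # a_0 = tau
for i in range(m):
    a.append(lam[i] * a[-1] - 1.0)

assert abs(a[m]) < 1e-9                          # a_m = 0
assert abs(lam[m - 1] * a[m - 1] - 1.0) < 1e-9   # used for A_{m-1} w_{m-1} = e_m
```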
Let us note that $$ A_iw_i=a_iA_ie_i= a_i\frac{v_{i+1}}{|v_{i}|},\quad i\in[0,m-2], $$ $$ w_{i+1}=(\mbox{$\lambda$}_ia_i-1)\frac{v_{i+1}}{|v_{i+1}|} =a_i\frac{v_{i+1}}{|v_{i}|}-e_{i+1},\quad i\in[0,m-2], $$ and $$ A_{m-1}w_{m-1}=a_{m-1}\frac{v_{m}}{|v_{m-1}|}= \frac{v_{m}}{\mbox{$\lambda$}_{m-1}|v_{m-1}|}=e_m $$ (in the last relation we take into account that $a_{m-1}\mbox{$\lambda$}_{m-1}=1$ since $a_m=0$). The above relations and condition (\ref{2.1}) imply that \begin{equation} \label{15} |w_{i+1} - A_i w_i| < 2, \quad i \in \mbox{$\mathds{Z}$}. \end{equation} Now we take a small $d>0$ and consider the $m(n+1)$-periodic sequence $\xi=\{x_i=\mbox{exp}_{p_i}(dw_i),\;i\in \mbox{$\mathds{Z}$}\}$. We claim that if $d$ is small enough, then $\xi$ is a $4d$-pseudotrajectory of $f$. Denote $$ \zeta_{i+1}=\exp^{-1}_{p_{i+1}}(f(x_i))\;\mbox{ and }\;\zeta'_{i+1}=\exp^{-1}_{p_{i+1}}(x_{i+1}). $$ Then $$ \zeta_{i+1}=\exp^{-1}_{p_{i+1}} f(\exp_{p_i}(dw_i))=F_i(dw_i)=A_idw_i+\phi_i(dw_i), $$ where the mapping $F_i$ is defined in (\ref{1}) and $\phi_i(v)=o(|v|)$, and $$ \zeta'_{i+1}=\exp^{-1}_{p_{i+1}}(x_{i+1})=dw_{i+1}. $$ It follows from estimates (\ref{15}) that $$ |\zeta'_{i+1}-\zeta_{i+1}|\leq 2d $$ for small $d$, and hence $$ \mbox{dist}(f(x_i),x_{i+1})\leq 4d. $$ By Lemma 5, the $m$-periodic trajectory $\{p_i\}$ is hyperbolic; hence, $\{p_i\}$ has a neighborhood in which it is the unique periodic trajectory. It follows that if $d$ is small enough, then the pseudotrajectory $\{x_i\}$ is $4\mbox{${\cal L}$} d$-shadowed by $\{p_i\}$. The inequalities $\mbox{dist}(x_i,p_i)\leq 4\mbox{${\cal L}$} d$ imply that $|a_i|=|w_i|\leq 8\mbox{${\cal L}$}$ for $0\leq i\leq m-1$.
Now the equalities $\mbox{$\lambda$}_i = (a_{i+1}+1)/a_i$ imply that if $0\leq i\leq m-1$, then $$ \mbox{$\lambda$}_0\cdot\ldots\cdot\mbox{$\lambda$}_{i-1} =\frac{a_{1}+1}{a_0}\frac{a_{2}+1}{a_{1}}\dots \frac{a_{i}+1}{a_{i-1}}= $$ $$ =\frac{a_{i}+1}{a_0}\left(1+\frac{1}{a_{1}}\right)\dots\left(1+\frac{1}{a_{i-1}}\right)\geq $$ $$ \geq \frac{1}{8\mbox{${\cal L}$}}\left(1+\frac{1}{8\mbox{${\cal L}$}}\right)^{i-1}> \frac{1}{16\mbox{${\cal L}$}}\left(1+\frac{1}{8\mbox{${\cal L}$}}\right)^{i} $$ (we take into account that $1+1/(8\mbox{${\cal L}$})<2$ since $\mbox{${\cal L}$}\geq 1$). It remains to note that $$ |Df^i(p)v_u|=\mbox{$\lambda$}_{i-1}\cdots\mbox{$\lambda$}_0|v_u|,\quad 0\leq i\leq m-1, $$ and that we started with an arbitrary vector $v_u\in U(p)$. This proves our statement for $j\leq m-1$. If $j\geq m$, we take an integer $k>0$ such that $km>j$ and repeat the above reasoning for the periodic trajectory $p_0,\dots,p_{km-1}$ (note that we have not used the condition that $m$ is the minimal period). Lemma 6 is proved. {\bf Lemma 7. } {\em If} $f\in\mbox{LipPerSh}$, {\em then $f$ satisfies Axiom A.} {\em Proof. } Denote by $P_l$ the set of points $p\in\Per(f)$ of index $l$ (as usual, the index of a hyperbolic periodic point is the dimension of its unstable manifold). Let $R_l$ be the closure of $P_l$. Clearly, $R_l$ is a compact $f$-invariant set. We claim that any $R_l$ is a hyperbolic set. Let $n=\mbox{dim}M$. Consider a point $q\in R_l$ and fix a sequence of points $p_m\in P_l$ such that $p_m\to q$ as $m\to\infty$. By Lemma 6, there exist complementary subspaces $S(p_m)$ and $U(p_m)$ of $T_{p_{m}}M$ (of dimensions $n-l$ and $l$, respectively) for which estimates (H2.1) and (H2.2) hold. 
Standard reasoning shows that, introducing local coordinates in a neighborhood of $(q,T_qM)$ in the tangent bundle of $M$, we can select a subsequence $p_{m_k}$ for which the sequences $S(p_{m_k})$ and $U(p_{m_k})$ converge (in the Grassmann topology) to subspaces of $T_qM$ (let $S_0$ and $U_0$ be the corresponding limit subspaces). The limit subspaces $S_0$ and $U_0$ are complementary in $T_qM$. Indeed, consider the ``angle'' $\beta_{m_k}$ between the subspaces $S(p_{m_k})$ and $U(p_{m_k})$ which is defined (with respect to the introduced local coordinates in a neighborhood of $(q,T_qM)$) as follows: $$ \beta_{m_k}=\min |v^s-v^u|, $$ where the minimum is taken over all possible pairs of unit vectors $v^s\in S(p_{m_k})$ and $v^u\in U(p_{m_k})$. It is shown in [16, Lemma 12.1] that the values $\beta_{m_k}$ are estimated from below by a positive constant $\alpha=\alpha(C,\mbox{$\lambda$},N)$. Clearly, this implies that the subspaces $S_0$ and $U_0$ are complementary. It is easy to show that the limit subspaces $S_0$ and $U_0$ are unique (which means, of course, that the sequences $S(p_m)$ and $U(p_m)$ converge). For the convenience of the reader, we prove this statement (our reasoning is close to that of [16]). To get a contradiction, assume that there is a subsequence $p_{m_i}$ for which the sequences $S(p_{m_i})$ and $U(p_{m_i})$ converge to complementary subspaces $S_1$ and $U_1$ different from $S_0$ and $U_0$ (for definiteness, we assume that $S_0\setminus S_1\neq\emptyset$). Due to the continuity of $Df$, the inequalities $$ |Df^j(q)v|\leq C\mbox{$\lambda$}^j|v|,\quad v\in S_0\cup S_1, $$ and $$ |Df^j(q)v|\geq C^{-1}\mbox{$\lambda$}^{-j}|v|,\quad v\in U_0\cup U_1, $$ hold for $j\geq 0$. Since $$ T_qM=S_0\oplus U_0=S_1\oplus U_1, $$ our assumption implies that there is a vector $v\in S_0$ such that $$ v=v^s+v^u,\quad v^s\in S_1, v^u\in U_1, v^u\neq 0.
$$ Then $$ |Df^j(q)v|\leq C\mbox{$\lambda$}^j|v|\to 0,\quad j\to\infty, $$ and $$ |Df^j(q)v|\geq C^{-1}\mbox{$\lambda$}^{-j}|v^u|-C\mbox{$\lambda$}^j|v^s|\to \infty,\quad j\to\infty, $$ and we get the desired contradiction. It follows that there are uniquely defined complementary subspaces $S(q)$ and $U(q)$ for $q\in R_l$ satisfying the proper hyperbolicity estimates; the $Df$-invariance of these subspaces is obvious. We have shown that each $R_l$ is a hyperbolic set with $\mbox{dim}S(q)=n-l$ and $\mbox{dim}U(q)=l$ for $q\in R_l$. If $r\in\Omega(f)$, then there exists a sequence of points $r_m\to r$ as $m\to\infty$ and a sequence of indices $k_m\to\infty$ as $m\to\infty$ such that $f^{k_m}(r_m)\to r$. Clearly, if we continue the sequence $$ r_m,f(r_m),\dots,f^{k_m-1}(r_m) $$ periodically with period $k_m$, we get a periodic $d_m$-pseudotrajectory of $f$ with $d_m\to 0$ as $m\to\infty$. Since $f\in \mbox{LipPerSh}$, for large $m$ there exist periodic points $p_m$ such that $\mbox{dist}(p_m,r_m)\to 0$ as $m\to\infty$. Thus, periodic points are dense in $\Omega(f)$. Since hyperbolic sets with different dimensions of the subspaces $U(q)$ are disjoint, we get the equality $$ \Omega(f)=R_0\cup\dots\cup R_{n}, $$ which implies that $\Omega(f)$ is hyperbolic. The lemma is proved. It was mentioned above that if a diffeomorphism $f$ satisfies Axiom A, then its nonwandering set can be represented as a disjoint union of a finite number of basic sets (see representation (\ref{spe})). The basic sets $\Omega_i$ have stable and unstable ``manifolds'': $$ W^s(\Omega_i)=\{x\in M:\;\mbox{dist}(f^k(x),\Omega_i)\to 0,\quad k\to\infty\} $$ and $$ W^u(\Omega_i)=\{x\in M:\;\mbox{dist}(f^k(x),\Omega_i)\to 0,\quad k\to-\infty\}. $$ If $\Omega_i$ and $\Omega_j$ are basic sets, we write $\Omega_i\to\Omega_j$ if the intersection $$ W^u(\Omega_i)\cap W^s(\Omega_j) $$ contains a wandering point. We say that $f$ has a 1-cycle if there is a basic set $\Omega_i$ such that $\Omega_i\to\Omega_i$.
We say that $f$ has a $t$-cycle if there are $t>1$ basic sets $$ \Omega_{i_1},\dots,\Omega_{i_t} $$ such that $$ \Omega_{i_1}\to\dots\to\Omega_{i_t}\to\Omega_{i_1}. $$ {\bf Lemma 8. } {\em If } $f\in\mbox{LipPerSh}$, {\em then $f$ has no cycles.} {\em Proof. } To simplify the presentation, we prove that $f$ has no 1-cycles (in the general case, the idea is literally the same, but the notation is heavier). To get a contradiction, assume that $$ p\in(W^u(\Omega_i)\cap W^s(\Omega_i))\setminus\Omega(f). $$ In this case, there are sequences of indices $j_m,k_m\to\infty$ as $m\to\infty$ such that $$ f^{-j_m}(p),f^{k_m}(p)\to\Omega_i,\quad m\to\infty. $$ Since the set $\Omega_i$ is compact, we may assume that $$ f^{-j_m}(p)\to q\in\Omega_i\;\mbox{ and}\;f^{k_m}(p)\to r\in\Omega_i. $$ Since $\Omega_i$ contains a dense positive semi-trajectory, there exist points $s_m\to r$ and indices $l_m>0$ such that $f^{l_m}(s_m)\to q$ as $m\to\infty$. Clearly, if we continue the sequence $$ p,f(p),\dots,f^{k_m-1}(p),s_m,\dots,f^{l_m-1}(s_m),f^{-j_m}(p),\dots,f^{-1}(p) $$ periodically with period $k_m+l_m+j_m$, we get a periodic $d_m$-pseudotrajectory of $f$ with $d_m\to 0$ as $m\to\infty$. Since $f\in\mbox{LipPerSh}$, there exist periodic points $p_m$ (for $m$ large enough) such that $p_m\to p$ as $m\to\infty$, and we get the desired contradiction with the assumption that $p\notin\Omega(f)$. The lemma is proved. Lemmas 5 -- 8 show that $\mbox{LipPerSh}\subset\Omega S$. 1. S. Yu. Pilyugin, {\em Shadowing in Dynamical Systems}, Lecture Notes Math., vol. 1706, Springer, Berlin, 1999. 2. K. Palmer, {\em Shadowing in Dynamical Systems. Theory and Applications}, Kluwer, Dordrecht, 2000. 3. D. V. Anosov, {\em On a class of invariant sets of smooth dynamical systems}, Proc. 5th Int. Conf. on Nonlin. Oscill., {\bf 2}, Kiev, 1970, 39-45. 4. R. Bowen, {\em Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms}, Lecture Notes Math., vol. 470, Springer, Berlin, 1975. 5. C.
Robinson, {\em Stability theorems and hyperbolicity in dynamical systems}, Rocky Mount. J. Math., {\bf 7}, 1977, 425-437. 6. A. Morimoto, {\em The method of pseudo-orbit tracing and stability of dynamical systems}, Sem. Note {\bf 39}, Tokyo Univ., 1979. 7. K. Sawada, {\em Extended $f$-orbits are approximated by orbits}, Nagoya Math. J., {\bf 79}, 1980, 33-45. 8. P. Ko\'scielniak, {\em On genericity of shadowing and periodic shadowing property}, J. Math. Anal. Appl., {\bf 310}, 2005, 188-196. 9. S. Yu. Pilyugin, {\em Variational shadowing}, Discrete Contin. Dyn. Syst. (accepted). 10. K. Sakai, {\em Pseudo orbit tracing property and strong transversality of diffeomorphisms of closed manifolds}, Osaka J. Math., {\bf 31}, 1994, 373-386. 11. S. Yu. Pilyugin, A. A. Rodionova, and K. Sakai, {\em Orbital and weak shadowing properties}, Discrete Contin. Dyn. Syst., {\bf 9}, 2003, 287-308. 12. F. Abdenur and L. J. Diaz, {\em Pseudo-orbit shadowing in the $C^1$ topology}, Discrete Contin. Dyn. Syst., {\bf 7}, 2003, 223-245. 13. S. Yu. Pilyugin and S. B. Tikhomirov, {\em Lipschitz shadowing implies structural stability} (to appear). 14. S. Yu. Pilyugin, {\em Spaces of Dynamical Systems} [in Russian], Reg. Chaotic Dynamics, Moscow-Izhevsk, 2008. 15. S. Yu. Pilyugin, K. Sakai, and O. A. Tarakanov, {\em Transversality properties and $C^1$-open sets of diffeomorphisms with weak shadowing}, Discrete Contin. Dyn. Syst., {\bf 9}, 2003, 287-308. 16. O. B. Plamenevskaya, {\em Weak shadowing for two-dimensional diffeomorphisms}, Mat. Zametki, {\bf 65}, 1999, 477-480. 17. N. Aoki, {\em The set of Axiom A diffeomorphisms with no cycle}, Bol. Soc. Brasil. Mat. (N.S.), {\bf 23}, 1992, 21-65. 18. S. Hayashi, {\em Diffeomorphisms in $\mathcal{F}^1(M)$ satisfy Axiom A}, Ergod. Theory Dyn. Syst., {\bf 12}, 1992, 233-253. 19. S. Yu. Pilyugin, {\em Sets of diffeomorphisms with various limit shadowing properties}, J. Dynamics Differ. Equat., {\bf 19}, 2007, 747-775. 20. S. Yu. 
Pilyugin, {\em Introduction to Structurally Stable Systems of Differential Equations}, Birkh\"auser-Verlag, 1994. \end{document}
\begin{document} \title{Optimizing Capacitated Vehicle Scheduling with Time Windows: A Case Study of RMC Delivery } \author{Mohamed~E.~Masoud, ~and~ Saeid~Belkasim \thanks{ Manuscript received... } \thanks{..} \thanks{M. Masoud, and S. Belkasim are with the Department of Computer Science, Georgia State University, Atlanta, GA, 30302 USA E-mail: [email protected]; [email protected].} } \maketitle \begin{abstract} The Ready Mixed Concrete Delivery Problem (RMCDP) is a multi-objective, multi-constraint, dynamic combinatorial optimization problem. From the operational research perspective, it is a real-life logistics problem that is hard to solve for large instances. In the RMCDP, the Ready Mixed Concrete (RMC) delivery must be optimized by predetermining an optimal schedule of site-trip assignments that adheres to strict time, distance, and capacity constraints. This optimization is subject to a range of objectives, from achieving maximum revenue to minimizing the operational cost. In this paper, we analyze the problem under realistic assumptions and introduce its theoretical foundation. We derive a complete projection of the problem in graph theory and prove its NP-completeness, which together constitute the basis of the proposed approaches. The first approach is a graph-based greedy algorithm that deploys dynamic graph weights and has polynomial time complexity. The second approach is a heuristic algorithm coupled with dynamic programming, referred to as the Priority Algorithm. This algorithm is carefully designed to address the dynamic character of the RMCDP and to satisfy its multi-objectivity. In comparison with state-of-the-art approaches, our algorithm achieves a high feasibility rate, lower design complexity, and significantly lower computational time in finding optimal or very slightly suboptimal solutions.
\end{abstract} \IEEEpeerreviewmaketitle \begin{IEEEkeywords} Vehicle scheduling, logistics, graph theory, NP-Complete, optimization, concrete delivery \end{IEEEkeywords} \section{Introduction} The importance of the Ready Mixed Concrete Delivery Problem (RMCDP) stems from the huge construction industry that is based on RMC production and dispatching. In the US, the RMC revenue in 2014 was estimated at $\$30$ billion, from dispatching around 301 million cubic yards produced by around 5,500 ready mixed Concrete Batch Plants (CBP) and delivered by approximately 65,292 trucks, as reported by the National Ready Mixed Concrete Association (NRMCA) in its annual fleet benchmarking survey [1]. The NRMCA report covers the operational data of 90 RMC companies from eight US geographical regions. This realistic RMC statistical data is the basis for our assumptions. In the RMCDP with one depot, which is our case study, there is exactly one CBP that mixes the RMC ingredients (e.g., cement, water, and aggregates) before loading the RMC product onto a number of trucks that dispatch and haul it to a number of construction sites. Each site must be accessible from the depot within a specific time because of the perishable nature of the RMC product. This perishability stipulates that consecutive deliveries to the same site must not exceed a predefined time constraint, in order to guarantee proper bonding between these consecutive deliveries and to avoid planes of weakness in the concrete, the so-called cold joints. This time constraint is therefore important to prevent the concrete from reaching its initial setting and generating such joints. The initial setting time is defined by the standard specification for RMC [2] and is taken as the upper bound of the RMCDP time constraint. RMC delivery is thus, in practice, a hard scheduling problem because of the multiple constraints it carries.
Among these constraints are the truck capacity constraint due to the limited drum size; the CBP capacity constraint due to the limited mixer size; the travelling distance constraint from the depot to the sites, again due to the perishable nature of the product; the limited number of truck loading time slots at the depot; and, most importantly, the time constraint between consecutive deliveries to the same site. Solving the RMCDP by an effective dispatching algorithm is therefore important for optimizing the dispatching process: it means finding a feasible solution that handles the mentioned constraints and meets the firm's objective in a competitive way. The RMCDP can be classified from several points of view, in particular with respect to whether it is a special case of the Vehicle Routing and Scheduling Problem (VRSP). The main routing characteristic that differentiates the RMCDP from other VRSP subclasses is the site-trip relation, which is one-to-many in the RMCDP. This difference is due to the production capacity and truck capacity limitations in the RMCDP. Another important difference in the RMCDP is the limited number of sites to be serviced; this decrease in the total number of sites is usually coupled with an increase in the number of trips required per site to satisfy its demand, which makes the problem hard to solve. A feasible solution must satisfy the mentioned capacity and time constraints. In this paper, we go beyond the scope of [3] and address the RMC delivery problem from different aspects in order to mine its main characteristics and to propose suitable solution approaches based on our analysis and theoretical foundation. Our main contributions are summarized as follows: \begin{itemize} \itemsep0em \item[$\bullet$ ] We provide a complete projection of the problem in graph theory.
\item[$\bullet$ ] We design a heuristic algorithm based on a priority-driven design principle. \item[$\bullet$ ] The test results show that our approach is competitive and able to find optimal or slightly suboptimal solutions with better processing speed and lower design complexity. \end{itemize} The rest of the paper is organized as follows. In Section II, a review of related work is given. The problem analysis, definitions, modelling, graph representation, complexity-theoretic reduction, and algorithm design are discussed in Section III. The implementation and results are shown in Section IV, and the conclusion is presented in Section V. \section{RELATED WORK} The VRSP is the main category under which the RMCDP can be subclassified. The VRSP has received intensive research attention for decades because of its importance in advancing logistics and fleet management. Beyond these areas, supply chain management systems and just-in-time production strategies have also been enhanced by improvements in combinatorial routing and scheduling problems. Such improvements have shown a direct economic impact in all these fields and their related systems [4]. We consider the Vehicle Routing Problem (VRP) category and its subclasses (e.g., the VRP with time windows (VRPTW)) because we show in this paper that the single-depot RMCDP, which is our case study, can be reduced, in the sense of complexity theory, to that category of routing problems. This reduction can therefore be exchanged between the two domains, and advances in either can possibly be propagated to the other. In the VRP, many approaches have been used to find optimal or near-optimal solutions for vehicle routing and scheduling problems, and some exact methods have been proposed [5]. Despite the preference for exact solutions, they often perform poorly on average and large solution spaces, as shown by Kritikos et al. [6].
Therefore, near-optimal approaches have been widely used as successful alternatives, owing to their reasonable feasibility rate and low time complexity [7]. Among near-optimal methods, heuristic and meta-heuristic techniques constitute the main algorithm design options. Both have been used for solving the VRP in general and the RMCDP in particular. For the RMCDP, evolutionary algorithms have been widely adopted [8-13]. For example, Feng et al. [10] used a Genetic Algorithm (GA) to solve the problem, while Cao and Lu [11] combined the genetic algorithm with simulation for further optimization. Particle Swarm Optimization (PSO) was used by Pan et al. [12], and Bee Colony Optimization (BCO) by Srichandum [13]. Despite the different sources of inspiration, these techniques share highly similar basic ideas and almost the same level of design complexity. The drawback of evolutionary algorithms in the RMCDP is that they select a number of random permutations from the RMCDP permutation-based solution space, while this solution space is in fact randomly distributed for all the objective functions. Moreover, the correlation between permutations in such a solution space is weak, which means that a near neighbour of the worst permutation according to the objective function can be the optimal solution. Therefore, techniques such as mutation or crossover may not be well suited to such a space. The other approach that is widely used in this domain is linear programming [14-15]. Yan and Lai [15] deployed a mixed integer network flow model to solve the RMC delivery problem. Despite the feasibility of this approach, it adds complexity to the problem in terms of the large number of parameters used and the high computational times needed even for below-average problem instances. \section{RMC DELIVERY PROBLEM} The main processes in the RMC delivery operation are given in Fig. 1.
The flow of the RMC delivery consists of cyclic trips that start and end at the depot during the RMC deliveries. In this paper, RMCDP refers to the problem that has exactly one depot for product loading, and the object $depot$ refers to the RMC factory that has exactly one $CBP$, or simply one $plant$, at which the mixing of the RMC ingredients and the loading of the RMC product onto the trucks take place. Besides the plant, the depot also has the fleet for the RMC delivery and the other supporting systems (e.g., workshops, auxiliary equipment). Therefore, the object $depot$ is wider than the object $plant$ in this context. \begin{figure} \caption{Depot-site delivery flow diagram, starting with the product loading at the depot and representing the main components of the object $trip$. } \label{fig_RMC} \end{figure} At this point we also define the term $trip$ as the total time that includes: the truck loading time at the depot, the truck hauling time to the job site, the unloading time at the placement point, and finally the returning time to the depot. The $trip$ as a concept is therefore the same for all sites, but as a parameter it depends on the site's distance from the depot and on some other factors discussed later. To avoid possible confusion and to maintain consistency with the RMCDP domain vocabulary, the word $truck$ replaces vehicle and $site$ replaces job-site from now on; in this context, the object $site$ is a site that has exactly one RMC placement point [1-2]. \begin{table}[t] \caption{List of variables and parameters.
} \centering \begin{tabular}{c c l c } \hline\hline \\ Symbol && Description \\[0.5ex] \hline\\ $n$ && Number of sites to be serviced\\ $k_{i}$&& Sequence of trips for site $i$\\ $k^{t}_{i}$&& Trip duration time for site $i$\\ $k^{s}_{ij}$&& Starting time at site $i$ of trip $j$\\ $k^{s}_{i}$ && Site $i$ starting time proposed by the customer for the first trip \\ $k^{s}_{i1}$ && Site $i$ first-trip starting time at the site \\ $k^{d}_{ij}$&& Starting time at the depot of trip $j$ for site $i$\\ $k^{e}_{ij}$&& Ending time at the depot of trip $j$ for site $i$\\ $q_{i}$&& Demand of site $i$\\ $Q$&& Actual truck capacity for the homogeneous fleet\\ $K$&& Sequence of trips for all sites\\ $C$&& Depot concrete batch plant actual capacity\\ $D^{s}$&& Depot starting time\\ $P_{r}$&& Plant productivity\\ $L_{t}$&& Truck loading time at the depot\\ $d_{i}$&& Site $i$ distance from the depot\\ $v_{i}$&& Truck average speed to site $i$\\ $U_{i}$&& Truck unloading time at site $i$\\ $\gamma$&& Time of the initial setting of RMC = 90 min\\ $m$&& Total number of trucks per depot\\ $m_{u}$&& Upper bound on the number of trucks needed for a delivery\\ $m_{l}$&& Lower bound on the number of trucks needed for a delivery\\ $K_{s}$&& Total solution space of problem instance $I$\\ $K_{0}$&& Initial sequence of trips in the RMCDP graph\\ $G_{|V|}$&& Complete RMCDP graph\\ $L\left(v \right) $&& Mapping of each vertex $v$ to an element of $K_{0}$\\ $c_{v}$&& Cost of vertex $v$ in the RMCDP graph\\ $c^{e}_{uv}$&& Cost of the edge between $\lbrace u,v\rbrace$ in the RMCDP graph\\ $v^{s}$&& Service starting time at vertex $v$\\ $h_{L\left(v \right) }$ && Hauling time of the site labelled to vertex $v$\\ $U_{L\left(v \right) }$ && Unloading time of the site labelled to vertex $v$\\ $s^{s}_{L\left(v \right) }$ && Proposed starting time of the site labelled to $v$\\ $c_{r}$&& Total cost of Hamiltonian circuit $r$ in the RMCDP graph\\\\ \hline \end{tabular}
\label{table:nonlin} \end{table} Based on the previous basic description, the main $characteristics$ that distinguish the RMCDP from other vehicle scheduling and routing problems with time-window constraints can be defined as follows: \\\\ \textbf{Definition 1}. In the single-depot RMCDP with homogeneous trucks, we are given a single depot that has exactly one plant of capacity $C$ and a set of $n$ sites, $n\geq2$, where each site $i \in\lbrace 1,..,n\rbrace $ is at an accessible distance $d_{i}$ from the depot and has a positive demand $q_{i}$. Satisfying this demand requires a sequence of trips $k_{i}=\left( k_{i_{1}},..,k_{i|k_{i}|}\right) $, $|k_{i}|\geq1$, from the depot, using a set of $m$ homogeneous trucks, each of capacity $Q$ and average speed $\overline{v}_{i}$, such that $Q < q_{i}$. Each trip is assigned to exactly one site using exactly one truck, and the time lag between any consecutive trips $k_{i{j}},k_{i{j+1}}$ for site $i$ must not exceed the product initial setting time $\gamma$, i.e., the trip starting times at the site satisfy $k^{s}_{i{j+1}}-k^{s}_{i{j}}\leq \gamma$. The task is to find the best legal sequence of trips for all sites, $K$, that optimizes the problem for the objective of minimizing the sites' idle time awaiting their deliveries while avoiding queues of trucks at the sites. From Definition 1 we can say that, in the general RMCDP case, the truck capacity satisfies $Q <q_{i}$ $\forall i \in\lbrace 1,..,n\rbrace $. Therefore, for each site $i$, the site demand $q_{i}$ is partitioned into a set of $|k_{i}|$ elements $q^{s}_{i}=\lbrace q^{s}_{i_{1}},..,q^{s}_{i|k_{i}|}\rbrace $, where $q^{s}_{ij}\leq Q$ $ \forall j \in\lbrace 1,..,|k_{i}|\rbrace $.
$|k_{i}|$ is the total number of trips for site $i$, which can be calculated as follows: \\ \begin{equation} |k_{i}|=\lceil\frac{q_{i}}{Q}\rceil \quad\quad\forall i \in\lbrace 1,..,n\rbrace \end{equation} Based on (1), the total number of trips $|K|$ over all sites is \\ \begin{equation} |K|=\sum\limits_{i=1}^n |k_{i}| \end{equation} \\ In our analysis, we consider only the case of a $homogeneous$ fleet in which all trucks have the same capacity $Q=\overline{Q} $. By the truck capacity we mean the actual maximum capacity percentage of the truck's gross drum volume according to the RMC standard [2]. Truck homogeneity is assumed mainly to avoid introducing an additional optimization problem, namely the subset-sum problem that arises in the heterogeneous case; moreover, truck homogeneity with maximum capacity is a desirable goal for economic reasons related to truck maintenance and operation. The ceiling operator in (1) is used because in some cases the site demand $q_{i}$, or the RMC quantity of the site's last trip $k_{i|k_{i}|}$, can be less than the truck capacity $Q$. However, in order to maintain the QoS of the RMC delivery operation, a loading time slot at the depot can be assigned to exactly one trip with one truck for one site delivery, even for a site whose total or remaining demand is less than $Q$. In Definition 1, the $plant$ capacity $C$ represents the volume of the plant mixer used for mixing the product ingredients. In general, for plant capacity $C < Q$, loading a truck with a trip quantity $q^{s}_{ij}>C$ requires a set of mixer batches $B=\lbrace b_{1},..,b_{|B|}\rbrace$ with $b_{i} \leq C$. As a realistic assumption, we take the plant capacity $C$ to be the actual capacity used for the concrete batches during the truck loading process. Under this assumption, and using the actual capacity parameter $C$, the plant productivity per hour is bounded by $P_{r}$ $ \leq C*60$ $m^{3}h^{-1}$ .
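As an illustration of Eqs. (1) and (2), the following Python sketch computes the number of trips per site and the total number of trips; the function names and the demand and capacity values are our own illustrative assumptions, not data from the paper.

```python
import math

def trips_per_site(q_i, Q):
    """Eq. (1): number of trips |k_i| = ceil(q_i / Q) for one site."""
    return math.ceil(q_i / Q)

def total_trips(demands, Q):
    """Eq. (2): total number of trips |K| = sum_i |k_i|."""
    return sum(trips_per_site(q, Q) for q in demands)

# Hypothetical demands (cubic metres) and a truck capacity of Q = 10.
demands = [25, 40, 12]
print(trips_per_site(25, 10))    # 3 (ceil of 25/10)
print(total_trips(demands, 10))  # 9 (3 + 4 + 2)
```

The integer ceiling in `trips_per_site` mirrors the fact that a partially filled truck still consumes a whole loading slot.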
In some cases, other factors may impact the plant productivity, such as an increased batch mixing time at the plant mixer. Increasing the mixing time is important in some cases for improving the product quality [16]. However, above the $default$ mixing-time value predefined by the mixer manufacturer, an increase in mixing time results in a decrease in the plant productivity according to its $nomogram$ [17]. One possible way to maintain the same product quality without affecting the production rate is to distribute the mixing time between the plant and truck mixers, which is well known as shrink-mixing [2]. Based on these facts, we can neglect this factor and consider only the actual capacity parameter $C$ for the $plant$ productivity. Another variable in our model that depends on the plant productivity $P_{r}$ is the truck $loading$ time $L_{t}$, which is proportional to the truck capacity $Q$ and inversely proportional to the plant productivity $P_{r}$: \\ \begin{equation} L_{t}= \frac{Q}{P_{r}}*60 \end{equation} The truck loading time $L_{t}$ is needed for two objects: the $trip$ object and the $truck$ object. For the $trip$ object, each trip $k_{ij}$ has a $starting$ time $k^{d}_{ij}$, an $end$ time $k^{e}_{ij}$, and a trip $duration$ time $k^{t}_{i}$ for each site $i$, such that $k^{t}_{i}=\lbrace k^{e}_{ij}-k^{d}_{ij}|i\in\lbrace 1,..,n\rbrace,j\in\lbrace 1,..,|k_{i}|\rbrace\rbrace$. Based on our initial definition of the object $trip$, the trip duration $k^{t}_{i}$ can be formulated as follows: \begin{equation} k^{t}_{i}= L_{t}+ 2\left( \frac{d_{i}}{\overline{v}_{i}}\right)+ U_{i} \end{equation} \begin{equation} k^{e}_{ij}= k^{d}_{ij}+k^{t}_{i} \end{equation} \\ For simplicity, we assume in (4) that the hauling time and the returning time are close to each other, so that their difference can be neglected.
The other minor tasks associated with the RMC delivery are also implicitly embedded in (4). These minor tasks may include lab tests (e.g., the slump test) and the time needed for rinsing the truck drum after unloading the product at the site. The variable $U_{i}$ in (4) is the unloading time at site $i$. Equation (5) represents the sequence of trips for each site $i$, where the first-trip starting time $k^{s}_{i_{1}}$ at site $i$ should match the site's proposed starting time $k^{s}_{i}$, with $k^{s}_{i_{1}}=k^{d}_{i{1}}+L_{t}+\frac{d_{i}}{\overline{v}_{i}}$. For the $truck$ object, the parameter $L_{t}$ is needed to determine the $upper$ bound $m_{u}$ on the number of trucks that should be $available$ for the RMC deliveries: \\ \begin{equation} m_{u}= \frac{2\gamma}{L_{t}} \end{equation} \\ The time window $\gamma$ in Definition 1 and (6) represents the maximum time allowed between any two consecutive trips for the same site in order to maintain the product workability before the RMC reaches its initial setting. This time is estimated as 90 minutes under normal working conditions according to the RMC standard specifications [2]. We also assume that the truck speed $\overline{v}_{i}$ depends on the location of site $i$, since some sites may be in high-density areas while others are not. Usually there are pre-delivery arrangements between the depot and a new site, such as determining the best route to the site and the average truck speed, in order to estimate the product hauling time to the site. \subsection{Problem Classification} Based on the previous analysis and modelling of the RMC delivery problem, we can state the following: \\\\ \textbf{Definition 2.} The RMCDP is a multi-constraint and multi-objective optimization problem. From Definition 1 we can identify different types of constraints in the problem.
Starting with time, the RMC time window $\gamma$ enforces a time constraint between the consecutive trips of the same site $i$ such that $\left( k^{s}_{ij+1}-k^{s}_{ij}\right)\leq \gamma $. A lower bound on the time lag between consecutive trips can also be added as an additional constraint if, for example, the objective of the optimization problem is to minimize the trucks' waiting time at site $i$, so that $U_{i}\leq\left( k^{s}_{ij+1}- k^{s}_{ij}\right)\leq \gamma $. Another constraint generated by $\gamma$ is on the radius of the depot service area, namely $L_{t}+\left( \frac{d_{i}}{\overline{v}_{i}}\right)+U_{i} \leq \gamma$, where $ \left( \frac{d_{i}}{\overline{v}_{i}}\right) $ represents the RMC hauling time to site $i$ for each $i\in\lbrace1,..n\rbrace$. Other time constraints can also be generated based on the problem objective. Besides the time constraints, there is also a capacity constraint that gives the RMCDP its unique characteristics and also its complexity. The limited truck capacity $Q$, compared to the site demand $q_{i}$, constrains the number of sites visited per trip and limits it to exactly one site visit by one truck per trip. Other capacity constraints can also exist depending on the objective function; for example, if the objective is to maximize the number of sites serviced in a specific time unit, this creates a constraint on the maximum number of trips per site in that time unit. Many other examples can be given here as evidence of the multi-constraint, multi-objective nature of the problem. It is easy to see that solving a problem instance for different objectives yields the same solution space but not the same solution. Also, running the same optimization problem under a different number of constraints affects only the number of feasible solutions, not the size of the solution space.
An example of this is the RMCDP itself, whose solution space size was originally estimated by Feng et al. [10] and is modified here to suit our assumptions and notation. In RMCDP, the solution space of all possible permutations of a problem instance $I$ can be represented by the set $K_{s} =\lbrace K_{s_{1}},..,K_{s_{|k_{s}|}} \rbrace$, which contains the best solution sequence $K\in K_{s}$. Therefore, for any instance $I$, the size of its solution space $|K_{s}|$ can be stated as follows: \\ \begin{equation} |K_{s}|=\frac{(\sum\limits_{i=1}^n |k_i|)!}{ \prod\limits_{i=1}^n (|k_i|!)} \end{equation} \\ From (7), the total number of permutations $ |K_{s}|$ is a function only of the number of sites $n$ and the numbers of trips per site $|k_{i}|$. For illustration, consider a simple example, which we refer to as $example$-1: suppose we are given two sites with two trips per site; then there are $|K_{s}|=6$ possible sequences of trips, $K_{s} =\lbrace \left( 1,1,2,2\right) ,..,\left( 2,2,1,1\right)\rbrace$. One of these sequences is the winning sequence $K$ that best meets the problem objective. Therefore, the preference for the best feasible solution in the RMCDP depends on the performance of the trip dispatching sequence with respect to the problem objective. The best sequence is nothing but a permutation from the solution space, so designing an efficient algorithm to find this optimal permutation is the main challenge to be tackled. This challenge stems from the huge solution space that arises when many sites with many trips per site are proposed. For a realistic example, let us reconsider the RMC statistical data reported by the NRMCA [1], according to which around 301 million $yd^{3}$ of RMC were produced by 5,500 plants in one year. From this data we can calculate an average of 210 $yd^{3}$ of RMC as the daily production rate of each plant.
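Formula (7) can be checked numerically; the following Python sketch reproduces both the example-1 count and the five-sites, five-trips instance discussed next:

```python
from math import factorial

def solution_space_size(trips_per_site):
    """|K_s| per (7): a multinomial coefficient over the trips-per-site counts."""
    size = factorial(sum(trips_per_site))
    for k in trips_per_site:
        size //= factorial(k)
    return size

print(solution_space_size([2, 2]))    # example-1: 6 possible sequences
print(solution_space_size([5] * 5))   # 5 sites x 5 trips: 623,360,743,125,120
```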
Considering also the average truck load capacity in the survey, which is 8.0 $yd^{3}$, we obtain around 26 trips from each plant per day. Such a number of trips can result, for instance, from five sites with five trips per site. These numbers of sites and trips produce a total solution space of more than 600 trillion possible trip sequences. Neglecting memory limitations, a 5 $GHz$ processor (if it existed) evaluating one sequence per cycle would need around 32 hours to find the optimal solution of this average-size problem, which is not acceptable as a practical solution for a daily scheduling plan. Such a huge solution space, and its high computational cost, is expected with exact methods. Therefore, there is a need to identify the problem class precisely and to decide which solution strategy should be adopted and why. \subsection{RMCDP in Graph theory} In this section, we address a common issue in the related literature, namely the absence of a projection of the problem into graph theory, a step that is imperative for a proper reduction of the problem. In complexity theory, in order to identify the complexity class of the RMCDP, we need to find a polynomial-time reduction algorithm such that $any$ instance of RMCDP can be transformed into an instance of a well-known classified problem. Therefore, defining the problem in graph theory comes first in order to accomplish that reduction. Based on our notation and prior definitions, we can state the following: \\\\ \textbf{Definition 3}: In RMCDP, for a problem instance $I$ and objective $f$, the solution space is given by the set $K_{s}$ of all possible sequences of trips, such that any solution sequence $K\in K_{s}$. The set $K_{s}$ can be represented by a $complete$ graph $G_{|V|} =\left( V,E\right) $ with a weight function $w:E\rightarrow \mathbb{R}$ and a set of vertices $V=\lbrace s,v_{1},..,v_{|K|} \rbrace$, where vertex $s$ is the depot and $|K|$ is the total number of trips for all sites.
Each $v\in V$ has a cost $c_{v}\geq 0$ that depends on the objective $f$ and has a starting time $v^{s}$ at which it is serviced. Each $v\in V\setminus \lbrace s\rbrace$ is visited exactly once, is assigned to exactly one trip element $a_{\kappa}$ from an $initial$ sequence of trips $K_{0}=\left( a_{1},..,a_{|K_{0}|}\right) $, and is labelled with that element, such that $L\left(v \right)=\lbrace a| a_{\kappa}\in K_{0}, \kappa\in\lbrace1,..,|K_{0}|\rbrace\rbrace$. $E$ is the set of edges, where each edge $\lbrace i,j\rbrace $ has a cost $c^{e}_{ij}$ and is associated with a time that can be defined as a function of the loading time $ L_{t} $. In Definition 3, we define RMCDP as a multi-objective problem with objective function $f\in \textbf{\textit{F}}$, where $\textbf{\textit{F}}$ is the domain of objectives applicable to the RMCDP. Before discussing the problem objective considered in this paper, and how to represent it in graph theory, we first give some insights into the main characteristics of the RMCDP graph. \begin{thm} In the RMCDP complete graph $G$ of a problem instance $I$, any solution $K \in K_{s}$ is a $simple$ cycle in $G$. \end{thm} \textbf{Proof:} From Definition 3, the RMCDP graph $G=\left(V,E \right) $ is a $complete$ graph in which each vertex $v\in V\setminus\lbrace s\rbrace$ represents a trip element in $K$ and $|V|=|K|+1$, where $|K|$, the total number of trips over all sites, can be found from (1) and (2). Each sequence of vertices starts and ends at vertex $s$ and each $v\in V\setminus\lbrace s\rbrace$ is visited exactly once, which generates a $simple$ cycle in $G$. Let $S_{n}$ be the total number of possible $simple$ cycles in $G$ that start and end at vertex $s$; then $S_{n}=\left( |K|\right) \left(|K|-1\right)\left(|K|-2\right)\cdots\left( 2\right) \left( 1\right) = |K|!$, which is greater than the size of the solution space $|K_{s}|$ in (7).
$\quad\blacksquare$ To illustrate these concepts, let us reconsider the simple $example$-1 of the previous section, in which we are given two sites with two trips per site. By (2), the total number of trips for all sites is $|K|=4$, and the solution space is given by a set $K_{s}$ of all possible sequences of trips with $|K_{s}|=6$. According to Definition 3, we can formulate this simple problem instance in graph theory using a complete graph $G_{|V|}$ with $|V|=|K|+1=5$. The initial sequence $K_{0}$ can be any sequence of trips for all sites, e.g., $K_{0}=\left(1,1,2,2 \right) $. $K_{0}$ can be represented by $G_{5}$ such that each vertex $v\in V\setminus\lbrace s\rbrace$ is labelled with a trip element of $K_{0}$, as shown in Fig. 2. The figure shows the six $simple$ cycles in $G_{5}$ that represent the solution space $K_{s}$ of the problem instance. \begin{figure} \caption{Example-1 solution space where all sequences of trips are represented by simple cycles in RMCDP graph $G_{5}$.} \label{fig_RMC} \end{figure} \\ \begin{cor} A solution is $feasible$ in the RMCDP graph $G_{|V|}$ if it is a $simple$ $cycle$ that satisfies the RMCDP constraints.
\end{cor} \textbf{Proof:} Let $n$ be the total number of sites and $|k_{i}|$ the total number of trips for site $i$, $ i\in \lbrace 1,..,n\rbrace $. Let the $initial$ sequence of trips $K_{0}$ be any sequence of all trips for all sites, with each trip $k_{ij}\in K_{0}$, and for each $v\in V\setminus \lbrace s\rbrace$ in the RMCDP complete graph $G$, let the label of the vertex be as follows: \\ \begin{equation} L\left(v \right)=\lbrace i| k_{ij}\in K_{0}, i\in\lbrace1,..,n\rbrace,j\in\lbrace1,..,|k_{i}|\rbrace\rbrace \end{equation} \\ Now, for any $simple$ cycle $r =\left(s,v_{1},v_{2},..,v_{|K_{0}|} \right) $ in the graph $G$, if two vertices $ v \in r\setminus\lbrace s\rbrace$ and $u \in r\setminus\lbrace s,v\rbrace$ have the same label, $L(v)=L(u)$, and any intermediate vertex $\nu_{m}$ between them (if it exists) has a different label, $L(\nu_{m})\neq L(v)$, then $r$ is a $feasible$ solution in $G$ if and only if the starting times $v^{s}$ and $u^{s}$ of the same-label vertices satisfy the following: \begin{equation} \left( v^{s}-u^{s}\right) \leq \gamma \end{equation} where $u^{s} <v^{s}$ and $\gamma$ is the maximum time window allowed between $v^{s}$ and $u^{s}$. The capacity constraint is accounted for in the RMCDP graph $G$ by the bijective mapping and labelling of each vertex $ v \in r\setminus\lbrace s\rbrace$ to each trip element in the initial sequence of trips $K_{0}$: the number of trips $|k_{i}|$ per site $i$ and the total number of trips for all sites, $|K|=|K_{0}|$, are both determined based on the truck capacity $Q$. $\quad\blacksquare$ \\\\ \begin{strip} \centering \includegraphics[scale=0.5]{Fig5_4} \captionof{figure}{Example-1 of three sequences of trips represented by simple cycles in RMCDP graph $G_{5}$. \textbf{(a)} The first solution sequence of trips $r_{1}$=(1,1,2,2) has a total site waiting cost of $c_{r_{1}}$=70 $min$ and a total truck idling time at the sites of 20 $min$.
\textbf{(b)} The feasible solution sequence of trips $r_{2}$=(1,2,1,2) has a total cost of $c_{r_{2}}$=60 $min$ and zero truck idling time at the sites (\textit{best sequence}). \textbf{(c)} An \textit{infeasible} solution sequence of trips (1,2,2,1), which results in a delay between the two consecutive trips of the same site (\textit{site 1}) such that $v^{s}_{4}-v^{s}_{1}>T$. } \end{strip} Under the assumption that a sufficient number of trucks is available for all loading time slots at the depot, which we refer to as $assumption$-1, the service starting time $v^{s}$ of any vertex $v\in V\setminus\lbrace s\rbrace$ in a feasible solution $r$ depends on the depot starting time $s^{s}$ as follows: \begin{equation} v^{s}_{i}= s^{s}+\left( i-1\right)L_{t} \end{equation} \begin{equation} v^{s}_{i}=s^{s}+c^{e}_{1i} \end{equation} where, by Definition 3, $ c^{e}_{1i}$ is the cost of all edges between the vertices $v_{1}$ and $v_{i}$ in the simple cycle $r$, such that: \begin{equation} c^{e}_{1i}=\sum_{j=1}^{i-1} c^{e}_{jj+1} \end{equation} where $i\in \lbrace 2,..,|K_{0}|\rbrace$ is the vertex order in the feasible solution sequence $r$. Therefore, under assumption-1, the edge weight or cost $c^{e}_{jj+1}$ in the RMCDP graph $G_{|V|}$ represents the truck loading time $L_{t}$, that is, $c^{e}_{jj+1}= L_{t}$. Also, if we consider the objective of minimizing the total site waiting time with no truck queues at the sites, then we can formulate the cost $c_{v}\in\mathbb{R^{+}}$ of each vertex in the RMCDP graph $G_{|V|}$ as follows: \begin{equation} c_{v}= \begin{cases} v^{s}-\left(u^{s}+U_{L\left(v \right) }\right):&L\left( v\right) = L\left( u\right) \\ \left(v^{s}+L_{t}+h_{L\left(v \right) } \right)-s^{s}_{L\left(v \right) }: & \left( u^{s}=s^{s}\right) \\ 0 & \left( c_{v}<0\right) \end{cases} \end{equation} If $ U_{L\left(v \right) }\leq\left(v^{s}-u^{s}\right)$, then there will be no truck queues at the sites.
$s^{s}_{L\left(v \right) }$ is the proposed starting time of the site labelled to vertex $v$, and $h_{L\left(v \right) }$ is the hauling time of that site. Any intermediate vertex $\nu_{m}$ between $v$ and $u$ (if it exists) must have a different label, $L(\nu_{m})\neq L(v)$; in other words, $v$ is the next vertex after $u$ with the same label in the feasible solution $r$, and the two represent two consecutive trips for the same site. In (13), if $c_{v}<0$, there is no site waiting time for the trip mapped to $v$, but there is a truck idling time at the site for that trip, whose duration equals $|c_{v}|$. From (10) and (13) we can state the total cost $c_{r}$ of a feasible solution $r$ as follows: \begin{equation} c_{r}= \sum_{i=1}^{|r|-1} c_{v_{i}} \end{equation} The best solution of the problem is the one with minimum $c_{r}$ that satisfies the constraints. To illustrate these findings, suppose for example-1 that the unloading times $U_{1}$ and $U_{2}$ of sites 1 and 2 are both 20 $min$, and that the truck loading time at the depot is $L_{t}=10$ $min$ under assumption-1. Let the proposed starting times $s^{s}_{1}$ and $s^{s}_{2}$ of both sites be 8:00 AM, and let the hauling times be $h_{1}=10$ and $h_{2}=20$ $min$. Suppose the maximum time allowed between consecutive trips is $T=20$ $min$, a small value chosen for illustration purposes. With these parameters, the graph $G_{5}$ of example-1 is a weighted complete graph in which each edge incident to the depot $s$ has cost $0$ and every other edge has cost $c^{e}=L_{t}$, and each vertex $v\in V\setminus\lbrace s\rbrace$ has the cost $c_{v}$ determined by (13). The resulting graphs of three different solutions are shown in Fig. 3.
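As a numerical check of (10)-(14), the following Python sketch evaluates the three sequences of Fig. 3 with the parameter values above (times in minutes after midnight, depot start 8:00 AM = 480); "feasible" here refers only to the time-window constraint, while the no-queue requirement is reported through the idling total:

```python
def evaluate(seq, depot_start=480, L_t=10, U={1: 20, 2: 20},
             h={1: 10, 2: 20}, proposed={1: 480, 2: 480}, window=20):
    """Return (total site waiting, total truck idling, time-window feasible)
    for a sequence of site labels loaded in consecutive depot slots."""
    wait = idle = 0
    last = {}                                   # latest arrival per site
    for slot, site in enumerate(seq):
        # per (10)-(12): slot start + loading L_t + hauling h, assumption-1
        arrival = depot_start + slot * L_t + L_t + h[site]
        if site not in last:                    # first trip of the site
            wait += arrival - proposed[site]
        else:
            if arrival - last[site] > window:   # violates the time window
                return wait, idle, False
            c_v = arrival - (last[site] + U[site])   # vertex cost (13)
            if c_v >= 0:
                wait += c_v                     # site waits for the delivery
            else:
                idle += -c_v                    # truck queues at the site
        last[site] = arrival
    return wait, idle, True

for seq in [(1, 1, 2, 2), (1, 2, 1, 2), (1, 2, 2, 1)]:
    print(seq, evaluate(seq))
```

With these values the sketch returns a waiting total of 70 $min$ and 20 $min$ of idling for (1,1,2,2), 60 $min$ and zero idling for (1,2,1,2), and flags (1,2,2,1) as infeasible, in agreement with Fig. 3.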
The best feasible solution among those represented is $\left(b \right)$, which has zero truck idling time at the sites and a total site waiting time $c_{r_{b}}=60$. Because we address the objective of minimizing the site waiting time with no truck queues, i.e., zero truck idling time at the sites, solution $\left(a \right)$ is infeasible: it results in a total truck idling time of 20 $min$ from the sum of $c_{v2}$ and $c_{v4}$, both of which are negative. When the cost of a trip is negative, $c_{v}<0$, as in (c), the truck of this trip waits at the site until the previous trip finishes its unloading phase; hence, if a trip has truck idling time at a site, the site waiting cost of this trip is zero, as in (13). The solution sequence of graph $\left(c \right)$ is infeasible because it does not satisfy the time constraint $T$. With the previous example, a proper projection of the RMCDP into graph theory has been achieved, which is an imperative step for classifying the problem in complexity theory. \begin{thm} The RMCDP complete graph $G_{|V|}$ is a $Hamiltonian$ graph; moreover, RMCDP as a decision problem is NP-complete. \end{thm} \textbf{Proof:} The RMCDP complete graph $G_{|V|}$ contains $\left(|V|-1 \right)!$ $Hamiltonian$ $cycles$ (HCs), which we referred to above as simple cycles. Any HC in the RMCDP graph therefore represents one possible solution sequence for the problem. To prove the $NP$-completeness of the RMCDP, we first put it in $decision$ form: given an RMCDP instance $\langle G_{|V|},c_{r},W\rangle$ and a positive integer $W$, is there a feasible solution sequence whose cost satisfies $c_{r}\leq W$?
\\\\\\ \begin{strip} \centering \includegraphics[scale=0.5]{Fig41} \captionof{figure}{Applying the RMCDP Greedy Algorithm to Example-1, initializing the edge costs to $0$ as in (15). \textbf{(a)} Starting from $s$ and a vertex with $L(v)=1$ by moving them to $V_{H}$; $c^{e}_{vu}=U_{u}=20$ when $L(u)=L(v)=1$, and (16) duplicates this cost to all edges of vertices with $L(u)=1$. \textbf{(b)} A vertex with $L(u)=2$ is selected and moved to $V_{H}$ because of the minimum edge cost shown in the previous step. According to (15), $c^{e}_{vu}=U_{2}=20$ when $L(u)=L(v)=2$, and $c^{e}_{vu}= c^{e}_{vu}-L_{t}=10$ when $L(u)=1$ because $L(u)\in V_{H} $. \textbf{(c)} A vertex with $L(u)=1$ is selected and moved to $V_{H}$; $c^{e}_{vu}= c^{e}_{vu}-L_{t}=10$ when $L(u)=2$ because $L(u)\in V_{H} $. \textbf{(d)} A vertex with $L(u)=2$ is selected and moved to $V_{H}$; the solution sequence is $\left(1,2,1,2 \right)$.} \end{strip} The answer to the decision problem is yes/no. If the answer is yes, then we have a sequence of vertices that can be $verified$ in polynomial time: the verification algorithm checks that 1) each vertex occurs exactly once in the solution sequence, 2) the total vertex cost is at most $W$, and 3) the lag between any two consecutive vertices with the same label is at most $\gamma$. These checks can be performed in $O\left(|V| \right) $ time; hence RMCDP $\in$ NP. It is well known that deciding whether a graph contains an HC is an NP-complete problem, and the HC problem (HCP) is as hard as finding a minimum-cost HC in a complete undirected graph, since in both cases the solution space of the graph is of size $O\left(|V|! \right)$. Moreover, solving the RMCDP amounts to solving the HCP, because the best solution in the RMCDP graph is precisely an HC that best meets the objective function.
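The three verification checks above can be sketched as follows; the certificate representation (a list of trip labels together with their service times $v^s$ and vertex costs $c_v$) is an assumption made for illustration:

```python
from collections import Counter

def verify(seq, starts, costs, K0, W, gamma):
    """Polynomial-time verifier for the RMCDP decision problem."""
    # 1) each trip element of K0 occurs exactly once in the certificate
    if Counter(seq) != Counter(K0):
        return False
    # 2) the total vertex cost is at most W
    if sum(costs) > W:
        return False
    # 3) lag between consecutive same-label trips is at most gamma
    last = {}
    for label, t in zip(seq, starts):
        if label in last and t - last[label] > gamma:
            return False
        last[label] = t
    return True

# Example-1 best sequence with cost bound W = 60 and window 20:
print(verify([1, 2, 1, 2], [500, 520, 520, 540], [20, 40, 0, 0],
             [1, 1, 2, 2], W=60, gamma=20))
```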
Thus, HCP $\leq_{p}$ RMCDP, since the reduction satisfies the following. First, for any number of sites $n$, finding the initial sequence of trips $K_{0}$ as the input instance of the transformation function takes $O \left( n\right) $ time based on (1), which determines the number of trips for each site. Second, the mapping function in (8), which transforms the RMCDP instance into the complete Hamiltonian graph by assigning each trip element of the initial sequence $K_{0}$ to a vertex of the graph, is a polynomial-time transformation with running time $O \left( |K_{0}|\right) $. In summary, because of the polynomial reduction HCP $\leq_{p}$ RMCDP just shown, RMCDP is NP-hard, and because RMCDP $\in$ NP, as shown above, RMCDP is NP-complete.$\quad\blacksquare$ The importance of Theorem 3 stems from the fact that most probably no exact algorithm is capable of solving the RMCDP in polynomial time. Another important property of the problem, which can be noticed in Fig. 3, is the $dynamic$ character of the RMCDP due to its time dependency. Our solution strategies are designed with these facts in mind. In graph theory, a $greedy$ approach can be adopted to solve the RMCDP complete graph $G_{|V|}$, provided the dynamic property of the problem is taken into account. Hence, the graph edge cost $c^{e}_{vu}$ should not be treated as a static value but as a value that changes in discrete time, and determining it is a critical part of the proposed solution. To apply this solution, let the objective function be minimizing the site waiting time with minimum truck idling time at the sites.
The greedy approach for the RMCDP can be stated as follows: let the RMCDP graph $G_{|V|}=\left(V,E,w,t \right)$ be a weighted complete graph and let $H_{G}=\left(V_{H},A_{H},w_{H} \right)$ be a $simple$ $directed$ graph representing the minimum-cost Hamiltonian circuit of $G_{|V|}$, with $|V_{H}|=|A_{H}|$. The greedy algorithm $RMCDP\left( G_{|V|},H_{G}\right) $ is given as Algorithm 1. \begin{algorithm} \DontPrintSemicolon \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \SetKwFunction{KwFn}{} \Input{$G_{|V|}=(V,E,w,t)$, $G_{|V|}$ is RMCDP graph.} \Output{$H_{G}=(V_{H},A_{H},w_{H})$, $H_{G}$ is $min\left( HC\right) $ of $G_{|V|}$.} \BlankLine \nonl\textbf{RMCDP ($G_{|V|},H_{G}$)}\; $N \longleftarrow |V|$\; $V_{H} \longleftarrow \emptyset$\; $A_{H} \longleftarrow \emptyset$\; $V_{H} \longleftarrow s$ :$V_{H}=V_{H}\cup\lbrace s\rbrace, V=V-\lbrace s\rbrace $\; Move any $v\in V$ to $V_{H}$ : $V_{H}=V_{H}\cup\lbrace v\rbrace, V=V-\lbrace v\rbrace $\; $A_{H}=A_{H}\cup \lbrace s,v\rbrace$, $E=E- \lbrace s,v\rbrace$.\; Let $V_{H}$ be $sequence$ : $ V_{H}=\left( s,v_{1}\right) $\; \For{$i = 1$ \KwTo $N-2$}{ Let $\lbrace v_{i},u\rbrace$ be an edge such that $v_{i}\in V_{H}$, $u\in V$\; \For {$each$ $ Distinct$ $u\in V$}{ $Find$ $min\left( c^{e}_{v_{i}u}\right) $ } $V_{H}= V_{H}\cup \lbrace u\rbrace$.\; $A_{H}=A_{H}\cup \lbrace v_{i},u\rbrace$.\; $V=V-\lbrace u\rbrace$.\; } \If{ $IsFeasible$ $\left( H_{G}\right) $}{ \Return $H_{G}$ } \Else {$Print\left( No Feasible Solution\right). $} \caption{RMCDP Greedy Algorithm\label{IR}} \end{algorithm} \\ where $N$ is the total number of vertices in the RMCDP graph $G_{|V|}$, in which each vertex $u\in V\setminus\lbrace s\rbrace$ represents a trip element, as shown before.
In Algorithm 1, the RMCDP main graph $G_{|V|}$ is converted into a simple directed graph $H_{G}$ that starts from the vertex $s$ and then collects the other trip-element vertices one by one in a greedy fashion, by considering the next $distinct$ vertex $u\in V$ of $G_{|V|}$ that has minimum edge cost with the last vertex $v_{i}$ moved into $H_{G}$, where $c^{e}_{vu}$ is the edge cost between vertices $v$ and $u$ such that: \begin{equation} c^{e}_{vu}= \begin{cases} 0 & |V_{H}|\leq 1\\ U_{u} & L(v)=L(u) \\ c^{e}_{vu}- L_{t} & L(u)\in V_{H} \end{cases} \end{equation} \begin{equation} c^{e}_{uw}=\lbrace c^{e}_{vu}| \forall w\in V\rbrace \end{equation} \\ where $U_{u}$ is the unloading time of the next trip vertex $u\in V$, and $ V_{H}\setminus\lbrace s\rbrace$ is the set of visited vertices, which represents the trips already started when $ |V_{H}|>1$. Initially, all the edges in $G_{|V|}$ have the same priority, and visiting a vertex updates $c^{e}_{vu}$ for each $u\in V$ according to (15). The last line of (15) stipulates that a previous $u$ with the same label as $v$ must have been visited before $c^{e}_{vu}$ is decremented. The iterations of the \textbf{for} loop in line 8 have a computational cost of $\left( N-2 \right)+\left( N-3 \right)+..+ 2 + 1 = \frac{\left( N-1 \right)\left( N-2 \right)}{2} $, which is $ O\left ( N^{2} \right)$. This cost can be reduced further by considering, in each iteration, only the $distinct$ vertices that have different labels; in that case the cost becomes $ O\left ( n^{2} \right)$, where $n$ is the total number of sites. Also, the feasibility verification function in line 15 has an average cost of $ O\left ( N^{2} \right)$, which results in a total time complexity of the RMCDP greedy algorithm of $ O\left ( N^{2} +n^{2} \right)\simeq O\left ( |K|^{2}\right)$, including the edge cost updates after each iteration according to (15). The performance of the RMCDP Greedy Algorithm is illustrated in Fig. 4.
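Since (16) duplicates each updated cost to all edges of the same label, the greedy loop can be sketched by tracking one cost value per site label, an implementation shortcut rather than the literal per-edge bookkeeping of Algorithm 1; with the example-1 parameters this reproduces the sequence of Fig. 4 (ties are broken by label, matching the "move any $v$" step):

```python
def greedy_sequence(K0, U, L_t):
    """Greedy core of Algorithm 1: repeatedly move the distinct label with
    minimum current edge cost, updating costs per (15)-(16)."""
    remaining = list(K0)
    cost = {label: 0 for label in set(K0)}   # all edges start with equal priority
    visited, seq = set(), []
    while remaining:
        best = min(set(remaining), key=lambda l: (cost[l], l))
        seq.append(best)
        remaining.remove(best)
        visited.add(best)
        for l in set(remaining):
            if l == best:
                cost[l] = U[l]               # same label: wait the unloading time
            elif l in visited:
                cost[l] -= L_t               # started site: one loading slot closer
    return seq

print(greedy_sequence([1, 1, 2, 2], U={1: 20, 2: 20}, L_t=10))  # [1, 2, 1, 2]
```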
By applying the algorithm to Example-1, a feasible solution, the minimum-cost Hamiltonian cycle, is reached with time complexity $O\left ( |K|^{2}\right)$, where $|K|$ is the total number of trips as in (2). \subsection{RMCDP Priority Algorithm} The previous analysis and the mathematical model show that the RMCDP belongs to the NP-complete (NPC) class, which means that the RMCDP is as hard as any problem in NP. The problems in the NPC class are believed to be intractable, and a polynomial-time solution to any one of them would yield solutions to all the others. The advisable approach is therefore to design an approximation algorithm for an NPC problem rather than to search for a polynomial-time exact algorithm. This option has two challenges: the first is the possible inefficiency of the designed algorithm, and the second is the need to handle the dynamic property of the RMCDP; we classify it as a dynamic problem because of its time dependency. Therefore, our second strategy for designing a solution algorithm, after the graph-based one, is based on understanding the main characteristic of the problem, which we use as a design principle for the algorithm. \subsubsection{\textbf{Priority Algorithm - Site Waiting}} The problem objective used in the last section, and studied broadly in the literature, is to minimize the site waiting times while maintaining minimum truck idling time at the sites, a situation that may occur when two or more consecutive trips for a site cause a truck queue during the product unloading at that site. A feasible sequence of trips for a given number of sites $n$ is one that satisfies the problem constraints as follows: \\ \begin{equation} U_{i}\leq\left( k^{s}_{ij+1}- k^{s}_{ij}\right)\leq \gamma \end{equation} \\ where the site $id$ is $ i \in\lbrace 1,..,n\rbrace$, $ j \in\lbrace 1,..,|k_{i}|\rbrace $, and $|k_{i}|$ is the total number of trips for site $i$.
Therefore, under the assumption of maximum depot productivity, an effective algorithm for finding a feasible sequence of trips $K$ can be designed based on the following principle: given a number of trucks $m$ sufficient for the maximum depot productivity, with upper bound $m_{u}\leq 2\gamma L^{-1}_{t}$, the objective function for minimizing the site waiting time with no truck queues at the sites can be stated as follows: \\ \begin{equation} min \sum_{i=1}^{n}\sum_{j=1}^{|k_{i}|-1} k^{s}_{ij+1} - (k^{s}_{ij}+U_{i}) \end{equation} \\ A feasible sequence of trips $K$ that minimizes the objective function in (18) and guarantees that no truck queues occur at the sites can be generated as follows: \\ \begin{equation} k^{s}_{ij+1}= k^{s}_{ij}+ \beta U_{i} \end{equation} \\ where $\beta\geq 1$ is an optimization variable used to maintain the solution optimality. Therefore, the best sequence of trips is the one produced by a $\beta$ that satisfies (17) and (18). \paragraph*{ \textbf{Principle of Design}} The design principle of the heuristic algorithm for the objective in (18) can be stated as follows: if two sites $i\in\lbrace a,b\rbrace$ with $a\neq b$ have two trips $k_{ax}$ and $k_{by}$, $j\in\lbrace x,y\rbrace$, with the same loading time at the depot, $ k^{d}_{ax}=k^{d}_{by}$, then priority is given to site $b$ if the site unloading times satisfy $U_{b}<U_{a}$. Priority can also be given according to other considerations; for example, if a site has a specific requirement on the maximum time lag $\gamma_{i}$ between any of its consecutive trips, $\left( k^{s}_{ij+1}- k^{s}_{ij}\right)\leq \gamma_{i}$, then priority is given to the site with the minimum $\gamma_{i}$.
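Rule (19) and the feasibility check (17) are straightforward to sketch; the values below (a site with five trips, $U_i=20$ min, first arrival at 8:00 AM = 480 minutes, $\gamma=90$) are assumptions chosen for illustration:

```python
def trip_starts(first_start, n_trips, U_i, beta=1.0):
    """Consecutive trip starting times per (19): each trip starts beta*U_i
    after the previous one."""
    starts = [first_start]
    for _ in range(n_trips - 1):
        starts.append(starts[-1] + beta * U_i)
    return starts

def feasible(starts, U_i, gamma):
    """Constraint (17): gaps at least U_i (no queue) and at most gamma."""
    return all(U_i <= b - a <= gamma for a, b in zip(starts, starts[1:]))

s = trip_starts(first_start=480, n_trips=5, U_i=20)
print(s, feasible(s, U_i=20, gamma=90))   # beta = 1 is feasible and tight
```

With $\beta=1$ every gap equals $U_i$, so the site waiting terms in (18) vanish while (17) still holds, matching the role of the default value $\beta=1$ described below.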
Although such a per-site time-lag requirement is adopted in the literature, in practice it is hard to justify its importance, because usually all sites prefer a minimum time lag in their deliveries. This $principle$ constitutes a logical approach to locating feasible regions in the problem solution space. Moreover, when we try to solve a dynamic problem such as the RMCDP, one feasible approach is to give the solution algorithm the capability to take the proper decision during processing time; this design principle is the criterion of such a decision. Algorithm 2 implements the $principle$ used for minimizing the sites' waiting for their deliveries. \begin{algorithm} \DontPrintSemicolon \SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output} \SetKwFunction{KwFn}{} \Input{$n,i\in\lbrace 1,..,n\rbrace,k_{i},k^{t}_{i},k^{s}_{i},D^{s},\gamma,L_{t},U_{i},m_{u} $} \Output{$K$} \BlankLine $K\leftarrow\phi$\; $minWait \leftarrow\infty$\; $Objective$ $variable $ $obj\in\left\lbrace U_{i} \right\rbrace $\; $ m_{u}\leftarrow 2.\gamma.L^{-1}_{t}$\; $Sort \left(U_{i} \right)$ $in$ $ascending$ $order:L_{t}+\frac{d_{i}}{v_{i}}+U_{i}\leq\gamma $\; $Generate$ $a$ $set$ $of$ $permutations$ $P_{n}$ for $n$: $|P_{n}|$=$\left( n\right) !$\; \For {$each $ $p \in P_{n}$}{ {$isFeasible\leftarrow True$}\; $W_{p} \leftarrow 0$\; \For{$i = 1$ \KwTo $n$}{ $w_{i} \leftarrow 0$\; $k^{d}_{i1}=D^{s}+[index\left( U_{i}\right)-1].L_{t}$\; $k^{s}_{i1}=k^{d}_{i1}+ L_{t} + \frac{d_{i}}{v_{i}}$\; $w^{s}_{i} \leftarrow(k^{s}_{i}<k^{s}_{i1}?k^{s}_{i1}-k^{s}_{i}:0)$\; $ obj\leftarrow \beta U_{i} $ \; \For {$j = 1$ \KwTo $|k_{i}|-1$}{ $k^{d}_{ij+1}= k^{d}_{ij}+ obj $\; \If{ $IsNotAvailable$ $\left( k^{d}_{ij+1}\right) $} { $k^{d}_{ij+1}\leftarrow NextEmptyLoadingTime\left( \right) $\; \If{ $k^{s}_{ij+1}-k^{s}_{ij}\leq\gamma $} { $w_{i}\leftarrow w_{i}+k^{s}_{ij+1}-\left( k^{s}_{ij}+ obj\right) $\; } \Else{$isFeasible\leftarrow False$\; $break$} } } \If{$isFeasible$}{
$W_{p}\leftarrow W_{p}+w_{i}+w^{s}_{i}$\; } \Else{$W_{p}\leftarrow \infty$\;$break$} } \If{$minWait> W_{p}$}{$minWait\leftarrow W_{p}$\; $K\leftarrow allTripSeq(p)$ } } \Return $K$ \caption{RMCDP Priority Algorithm\label{IR}} \end{algorithm} In Algorithm 2, $w^{s}_{i}$ is the waiting time of site $i$ for its first delivery, $w_{i}$ is the total waiting time of site $i$ for its subsequent deliveries, and $W_{p}$ is the total site waiting time of the candidate solution $p$. The other parameters are defined in the prior sections. The algorithm returns the best sequence of trips $K$, namely the one with minimum $W_{p}$ based on the objective variable $obj$ in use. The objective function of the problem is to minimize the site waiting time while maintaining zero truck idling time at the sites; therefore, the truck unloading parameter $U_{i}$ is the objective variable, $obj=\beta U_{i}$, where, according to (19), $\beta$ is an optimization factor with default value $\beta=1$. This default value guarantees no idling time of the trucks at the sites, and $W_{p}$ selects the sequence of trips for which the sites wait least for their next trips. The constraint in line 5 guarantees the accessibility of each site $i$ from the depot within the time constraint $\gamma$. The advantage of the Priority Algorithm is best evidenced by the huge reduction of the problem solution space $K_{s}$ to a computational cost of $O\left(n! \right) $, where $n$ is the total number of sites to be serviced. To illustrate this significant reduction, let us reconsider the average problem size of 5 sites with 5 trips per site: $K_{s}$ for this realistic example contains around 600 trillion possible solutions, while the Priority Algorithm designs only $ \left( 5!\right) = 120$ possible solutions, 100$\%$ of which are feasible and competitive, as shown in the implementation section, thanks to the principles of design.
\\ \begin{table}[t] \caption{List of variables and parameters. } \centering \begin{tabular}{c c l c } \hline\hline \\ Symbol && Description \\[0.5ex] \hline \\ $t$ &&Depot loading time slot \\ $D^{s}$ &&Depot starting time\\ $T_{k}$ &&All available time slots for product loading \\ $h_{i}$ &&Site $i$ hauling time \\ $k^{s}_{ij}$ &&Site $i$ trip $j$ starting time at the site \\ $k^{s}_{i}$ &&Site $i$ time proposed by the customer for the first trip \\ $k^{s}_{i1}$ &&Site $i$ first trip time at the site \\ $k^{d}_{ij}$ &&Site $i$ trip $j$ starting time at the depot \\ $W_{ijj+1}$ &&Site $i$ waiting time between consecutive trips $j$ and $j+1$ \\ $T_{ijj+1}$ &&Site $i$ time between consecutive trips $j$ and $j+1$ \\ $W_{i}$ &&Site $i$ waiting time for its first trip \\ $|k_{i}|$ &&Site $i$ total number of trips\\ $X_{tij}$ &&Binary variable for time slot $t$, site $i$, trip $j$\\ \\ \hline \end{tabular} \label{table:nonlin} \end{table} \subsubsection{\textbf{Integer Programming Approach - Min Site Waiting}} The objective of minimizing the site waiting time for concrete delivery can be given as follows: \\\\ \begin{equation} min \sum_{i=1}^{n}\sum_{j=1}^{|k_{i}|-1}k^{s}_{ij+1} - (k^{s}_{ij}+U_{i}) + \sum_{i=1}^{n}( k^{s}_{i1}-k^{s}_{i} ) \end{equation} \\ where $U_{i}$ is the product unloading time at site $i$, $|k_{i}|$ is the number of trips needed to satisfy the demand of site $i$, $k^{s}_{ij}$ is the starting time of trip $j$ at site $i$, and $k^{s}_{i}$ is the site's proposed starting time. The objective function in (20) enables a more efficient operational management with less overhead cost by minimizing the sites' total idling time. In this section we introduce our model to solve this objective function.
Our model is formulated as a Mixed Integer Programming (MIP) problem as follows: \\\\ \begin{equation} \min \sum_{i=1}^{n}\sum_{j=1}^{|k_{i}|-1}W_{ijj+1} + \sum_{i=1}^{n}W_{i} \end{equation} \textbf{s.t.} \\ \begin{equation} k^{s}_{ij+1}-k^{s}_{ij}-T_{ijj+1}=0 \quad\quad \forall i\in \left\lbrace 1,..,n\right\rbrace,\ \forall j\in \left\lbrace 1,..,|k_{i}|-1\right\rbrace \end{equation} \begin{equation} T_{ijj+1}-U_{i}- W_{ijj+1}=0 \quad\quad \forall i,\ \forall j\in \left\lbrace 1,..,|k_{i}|-1\right\rbrace \end{equation} \begin{equation} T_{ijj+1}-U_{i}\geq0 \end{equation} \begin{equation} T_{ijj+1}-\gamma\leq0 \end{equation} \begin{equation} k^{s}_{i1}-k^{s}_{i}-W_{i}=0 \quad\quad \forall i\in \left\lbrace 1,..,n\right\rbrace \end{equation} \begin{equation} k^{s}_{ij}-k^{d}_{ij}-h_{i}-L_{t}=0 \quad\quad \forall i,\ \forall j \end{equation} \begin{equation} \sum_{t=1}^{T_{k}}( D^{s} + (t-1)L_{t})X_{tij} - k^{d}_{ij}=0 \quad\quad \forall i,\ \forall j \end{equation} \begin{equation} \sum_{i=1}^{n}\sum_{j=1}^{|k_{i}|}X_{tij}\leq 1 \quad\quad \forall t\in \left\lbrace 1,..,T_{k}\right\rbrace \end{equation} \begin{equation} \sum_{t=1}^{T_{k}} X_{tij}= 1 \quad\quad \forall i,\ \forall j \end{equation} \begin{equation} k^{s}_{ij},U_{i},k^{s}_{i},h_{i},L_{t}\geq 0 \end{equation} \begin{equation} X_{tij}\in \lbrace 0,1\rbrace \end{equation} Definitions of the above notations are given in Table-II. The above equations describe how the model minimizes the total site waiting time with no truck queues at the sites, as follows:\\ \begin{itemize} \itemsep0em \item[$\bullet$ ] In (21) the objective function minimizes the sites' waiting time for their first deliveries as well as the time between consecutive trips per site. \item[$\bullet$ ] In (22) the variable $T_{ijj+1}$ is defined as the time difference between consecutive trips per site. \item[$\bullet$ ] In (23) the objective variable $W_{ijj+1}$ is obtained by excluding the site unloading time from the duration between consecutive trips at that site.
\item[$\bullet$ ] In (24) the time between consecutive trips per site is constrained to be greater than or equal to the unloading time of that site. \emph{This constraint avoids truck idling time at the sites.} \item[$\bullet$ ] In (25) the time between consecutive trips per site is constrained to be less than or equal to the concrete setting time. This constraint avoids the cold-joint problem of the concrete. \item[$\bullet$ ] In (26) the variable $W_{i}$ is defined as the waiting time between a site's first trip and the site's proposed service starting time. \item[$\bullet$ ]In (27) the variable $k^d_{ij}$ is defined as the service starting time at the depot for loading trip $j$ of site $i$. \item[$\bullet$ ] In (28) the binary decision variable $X_{tij}$ assigns the loading time slot that best fits the problem constraints. \item[$\bullet$ ] Constraints (29) and (30) ensure that each site trip is serviced exactly once, by exactly one loading time slot from the depot. \end{itemize} \section{IMPLEMENTATION AND RESULTS} We implemented our priority algorithm in C/C++ on a machine with an Intel 2.3 GHz CPU, 6 GB RAM, and Windows 7. The MIP model is formulated and solved using the CPLEX solver as an optimization tool. Because of the absence of standard testing datasets in the RMCDP domain, we propose different problem instances for evaluation purposes. The instances are used to evaluate our approaches for the objective of minimizing the sites' waiting time for delivery. The scenarios behind the proposed instances are inspired by real operational datasets; for example, the values of the average instance of five sites are based on operational data collected by the relevant professional association [1]. We quantized the values of the instances' main parameters to certain levels in order to exclude noisy data.
In order to simplify our assumptions, we assume that the difference between the hauling and return distances can practically be neglected. In real life, the problem is open to a high diversity of values for its main parameters; therefore, the average case scenario is the focus of this study. \subsection{\textbf{Minimizing Sites Waiting}} Our objective is to minimize the sites' waiting time for their deliveries while maintaining zero truck idling time at the sites. Both the MIP approach and the priority algorithm are evaluated against this objective. The instances are shown in Table-III and represent an average and a large instance, respectively. In real life, there is a range of acceptable and applicable values that can be used for the main parameters in the table for evaluation purposes. \begin{table}[b] \caption{The table represents the experiment parameters for two problem instances. } \centering \begin{tabular}{c c c c } \\ \hline\hline\\ Parameter & Instance-1 & Instance-2 & Unit \\[0.5ex] \hline \\ No. of Sites & 5 & 9& -\\ Site Demand & $50^{*}$ & 50 & $m^{3}$ \\ RMC Setting Time & 90 &90 &$min$ \\ Truck Capacity & 10 & 10& $m^{3}$ \\ Depot Productivity & 120 &120 &$m^{3}$/h \\ Truck Avg. Speed & 60 &60 &$Km/h$ \\ Site Starting time & 8:00 &8:00 &$ AM$ \\ Site Unload time & 25,25,25 & 20 &$min$ \\ - &30,30 & - &-\\ Site Distance & 30,20,20 &30,30,30,20, &$Km$ \\ - &10,10 &20,20,10,10,10 &-\\ [1ex] \hline \end{tabular} \begin{flushleft} * The quantity represents each site demand.\\ \end{flushleft} \label{table:nonlin} \end{table} \begin{figure*} \caption{ CPLEX solution convergence for problem instances I and II. } \label{fig_first_case0} \label{fig_second_case2} \label{fig_sim} \end{figure*} In Table-III, two instances are proposed, for the average case and the large case scenarios.
The parameter values are practically acceptable and match real-life operational data inspired by [1], which was also reported by firm x in Alexandria city. For both instances, we assume that the depot starting time is 8:00 AM and that the number of trucks in use is the upper bound according to Eq.(6). We give special consideration to these two instances because both represent important scenarios that show the advantage of the priority algorithm over the MIP approach. Instance-1 of 5 sites, besides being the average case scenario in real life, also represents the case in which the unloading time satisfies $U_{i}\geq n\cdot L_{t}$ $ \forall i \in \left\lbrace 1,..,n \right\rbrace $. In this case the probability of a fair distribution of the loading time slots among the sites increases, which gives the algorithm the advantage of trapping the optimal solution, as shown in Table-IV. \begin{table}[ht] \caption{The table represents the optimal solution found by the optimizers for problem instance-1.
} \centering \begin{tabular}{c c c c c c c c c c } \\ \hline\hline \\ Site-Trip & &Start at & &Start at& & End at & & Delivery \\ & & Depot && Site && Site & & \\[0.5ex] \hline \\ 1-1 && 8:00 && 8:35 && 9:00 &&10 \\ 1-2 && 8:25 && 9:00 && 9:25 && 20\\ 1-3 && 8:50 && 9:25 && 9:50 && 30 \\ 1-4 && 9:15 && 9:50 && 10:15 && 40 \\ 1-5 && 9:40 && 10:15&& 10:40 && 50 \\\\ 2-1 && 8:05 &&8:30 && 8:55 && 10\\ 2-2 && 8:30 &&8:55 && 9:20 && 20\\ 2-3 && 8:55 &&9:20 && 9:45&& 30\\ 2-4 && 9:20 &&9:45 && 10:10 && 40\\ 2-5 && 9:45 &&10:10 && 10:35 && 50\\\\ 3-1 && 8:10 &&8:35 && 9:00 && 10\\ 3-2 && 8:35 &&9:00 && 9:25 && 20\\ 3-3 && 9:00 &&9:25 && 9:50 && 30\\ 3-4 && 9:25 &&9:50 && 10:15 && 40\\ 3-5 && 9:50 &&10:15 && 10:40 && 50\\\\ 4-1 && 8:20 &&8:35 && 9:05 && 10\\ \textbf{4-2} && \textbf{9:05} &&\textbf{9:20} && \textbf{9:50} && \textbf{20}\\ 4-3 && 9:35 &&9:50 && 10:20&& 30\\ 4-4 && 10:05 &&10:20 && 10:50&& 40\\ 4-5 && 10:35 &&10:50 && 11:20&& 50\\\\ 5-1 && 8:15 &&8:30 && 9:00 && 10\\ 5-2 && 8:45 &&9:00 && 9:30 && 20\\ \textbf{5-3} && \textbf{9:30} &&\textbf{9:45} && \textbf{10:15}&& \textbf{30}\\ 5-4 && 10:00 &&10:15 && 10:45&& 40\\ 5-5 && 10:30 &&10:45 && 11:15&& 50\\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table} Because of the large amount of data obtained by solving problem instance-2, we present the full results only for problem instance-1. Instance-1 has a total solution space of more than 600 trillion possible permutations according to (7). Each permutation represents a unique sequence of trips that can be feasible or infeasible. Finding the optimal solution by trying each permutation could take at least 72 hours on our 2.3 GHz CPU. Therefore, our MIP model and priority algorithm are designed and used as low-computational-cost approaches to solve the problem. The optimal sequence of trips for instance-1, found by both the CPLEX optimizer and the priority algorithm, is ( 1 2 3 5 4 1 2 3 5 1 2 3 4 1 2 3 5 4 1 2 3 5 4 5 4 ).
The detailed scheduling of the solution is given in Table-IV. The solution achieves the minimum site waiting time during product delivery together with minimum truck idling time at the sites. The optimal solution found is 195 $min$ of total site waiting time for deliveries, with zero truck waiting time at the sites. It can be seen from the above scheduling plan that the next trip for each site starts upon completion of the unloading phase of the previous trip, which is a result of (19) in the algorithm and of constraint (26) in our MIP model. Also, because the depot service starting time is 8:00 AM, which is the same time at which each site expects its first delivery according to the problem parameters in Table-III, a delay occurs for the first deliveries: sites 1, 2, 3, 4 and 5 wait 35, 30, 35, 35 and 30 $min$, respectively, before receiving their first deliveries. The remaining site waiting time occurs at sites 4 and 5. At site-4 the delay occurs in trip-2, which starts at the site 15 $min$ after its previous trip ended there. It is important to mention again that the optimal solution shown in Table-IV is the same solution found by CPLEX and by the priority algorithm separately. In the previous section we showed that the principle of design for the case of site waiting gives priority to the site with the shorter unloading time when two site trips try to start product loading at the same available time slot. Based on this principle, the priority algorithm found that site-4, with unloading time $U_{4}$ of 30 $min$, has its trip-2 trying to start at the depot at 8:50 AM, which is 30 $min$ after the starting time of its previous trip. However, site-1 trip-3 also tries to start at 8:50 AM, and because priority is given to the site with the shorter unloading time, which is site-1 ($U_{1}=25$), site-4 trip-2 is shifted to the next available loading time slot, which is 8:55 AM.
The waiting occurs again at 8:55 because site-2 trip-3 tries to load at the same time and has a higher priority, since $U_{2} < U_{4}$, which causes site-4 trip-2 to be shifted again to the next available slot. The process repeats with site-3 trip-3 at 9:00, which leaves site-4 trip-2 waiting 15 $min$, until 9:05, before it starts loading at the depot. \begin{table*}[ht] \caption{The table represents the optimal and feasible solutions found by CPLEX for problem instances 1 and 2. } \centering \begin{tabular}{c c c c c c c c c c c c c c c c c } \\ \hline\hline \\ \# Sites& &Total Trips & & \# Integer Var. & & \# Constraints& &Comp. Time & & & Bound &&& Status & & Opt Gap \\ & & & & & & & & (Sec)& &Lower & & Upper&& & & \% \\[0.5ex] \hline \\ 5 && 25 && 7295 && 448 &&3.06 &&195& & 195&& Optimal && 0.00\\ 9 && 45 && 13131 && 576 && 3600$^{*}$ &&869& & 885&&Feasible && 1.81 \\ \hline \end{tabular} \begin{flushleft} * Solution convergence terminated after 6 hours.\\ \end{flushleft} \label{table:nonlin} \end{table*} Another 15 $min$ delay also happens to site-5 trip-3, after its starting time is shifted from 9:15 to 9:30. Therefore, the total site waiting time is 35+30+35+35+30+15+15=195 $min$. This is the minimum objective value found by both CPLEX and the priority algorithm. We may therefore claim that the priority algorithm was able to trap the optimal solution in the huge solution space of instance-1 thanks to its principle of design. This optimal solution is found by CPLEX after 21 iterations and a CPU time of 3.06 sec, as shown in Fig.5-(a). The same optimal solution is found by the priority algorithm after 0.104 sec, which is a very promising and competitive result. The statistics of both approaches are given in Tables V and VI.
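The 195 $min$ total can be checked mechanically from Table-IV. The sketch below (a verification aid, not part of the authors' implementation) transcribes the start-at-site times as minutes after 8:00 and evaluates the site-waiting objective: the wait for the first delivery plus the gaps between consecutive trips beyond each site's unloading time.

```python
# Start-at-site times per site (minutes after 8:00), read off Table-IV,
# with each site's unloading time U_i; every site requested an 8:00 start.
starts = {
    1: [35, 60, 85, 110, 135],
    2: [30, 55, 80, 105, 130],
    3: [35, 60, 85, 110, 135],
    4: [35, 80, 110, 140, 170],
    5: [30, 60, 105, 135, 165],
}
unload = {1: 25, 2: 25, 3: 25, 4: 30, 5: 30}

def site_waiting(starts, unload, proposed=0):
    """First-delivery wait plus inter-trip waits beyond the unloading time."""
    total = 0
    for i, ts in starts.items():
        total += ts[0] - proposed                     # wait for first delivery
        total += sum(ts[j + 1] - (ts[j] + unload[i])  # wait between trips j, j+1
                     for j in range(len(ts) - 1))
    return total
```

Only sites 4 and 5 contribute inter-trip waits (15 min each), on top of first-delivery waits of 35+30+35+35+30, giving 195 in total.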
\begin{table}[b] \caption{The table shows the optimality and the low computational cost of the Priority Algorithm.} \centering \begin{tabular}{c c c c c } \\ \hline\hline \\ No of & Total Solutions & Feasibility &Best Solution &RunTime \\ Sites & Created by Alg. &\% & (Min)&(Sec.) \\[0.5ex] \hline \\ 5& 120& 100& 195 & 0.104 \\ 9& 362880 & 16.57 & 885&16.35 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table} Instance-2 of 9 sites represents another important scenario, in which all sites can be considered to have the same unloading time duration. In contrast to the instance-1 scenario, where $U_{i}\geq n\cdot L_{t}$, in instance-2 $U_{i}\leq n\cdot L_{t}$. The priority algorithm was again able to trap the optimal solution, after around 16 sec, while CPLEX had a very slow convergence and could not reach a zero optimality gap after 6 hours of running time, as shown in Fig. 5(b). The gap is the difference between the upper and lower bounds; a gap of zero means that an optimal solution has been found. For the priority algorithm, the optimal sequence of trips is ( 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 5 6 7 8 5 6 7 8 5 6 7 8 5 6 7 8 5 6 7 8 9 9 9 9 9 ). This optimal solution is found after evaluating the 362880 possible permutations generated by the algorithm. 60160 of the 362880 permutations are feasible, i.e., 16.57\% of the created permutations. The best feasible permutation achieves 885 min of site waiting time, which is the optimal solution. The superior performance of the priority algorithm stems from its ability to locate feasible regions of the solution space effectively. The algorithm does not search for these regions; rather, it creates a number of permutations based on its principle of design and selects the best among them. For example, the total solution space of problem instance-1 is $6.2336074 \times 10^{14}$ possible solutions, while the priority algorithm creates exactly 120 solutions based on the priority principle.
All of the created sequences of trips are feasible, and one of them is the optimal solution, as shown in Table-IV. In instance-2 of 9 sites, the total solution space of the problem is $2.3183588 \times 10^{37}$ possible solutions, a huge space that could take around $2.8 \times 10^{24}$ hours to evaluate exhaustively. The priority algorithm creates 362880 permutations, which constitutes around $1.5652452 \times 10^{-30}$\% of the solution space; 16.57\% of the created permutations are feasible, and one of them is the optimal solution. \paragraph*{\textbf{Algorithm Performance Analysis}} \begin{figure} \caption{The impact of decreasing the number of trucks for instance-1. } \label{fig_inst1} \end{figure} In order to analyze the performance of the priority algorithm, we evaluate its performance and optimality against a range of values of the instance-1 parameters. Instance-1 is in focus because it represents the average case in real life. The upper bound on the number of trucks that can be used with instance-1 is 18, according to (6). Fig. 6 shows the impact of decreasing the number of trucks on the objective value: decreasing the number of trucks increases the objective value, and the optimal value of 195 requires 17 or more trucks. \begin{figure*} \caption{ Priority algorithm performance analysis for instance-I. } \label{fig_first_case0} \label{fig_second_case2} \label{fig_third_case2} \label{fig_time_case1} \label{fig_first_case12} \label{fig_second_case22} \label{fig_third_case32} \label{fig_time_case2} \label{fig_analysis} \end{figure*} Because we address a minimization problem, the performance ratio $ R_{p}$, or the optimality of the priority algorithm, is defined as the ratio of the minimum objective value found by the algorithm to the optimal value, so that $ R_{p}\geq 1$. Therefore, the minimum and best achievable performance ratio is 1. Fig.
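The solution-space figures quoted above follow from a multiset-permutation count: with $|k_{i}|$ trips for site $i$, the number of distinct trip sequences is $(\sum_i |k_{i}|)!/\prod_i |k_{i}|!$. A short sketch reproducing the two counts:

```python
from math import factorial

def solution_space(trips_per_site):
    """Multiset permutations of the trip sequence:
    (sum_i |k_i|)! / prod_i |k_i|!"""
    out = factorial(sum(trips_per_site))
    for k in trips_per_site:
        out //= factorial(k)
    return out

ks1 = solution_space([5] * 5)   # instance-1: 5 sites, 5 trips each
ks2 = solution_space([5] * 9)   # instance-2: 9 sites, 5 trips each
# ks1 = 623360743125120, i.e. about 6.2336 x 10^14
# ks2 is about 2.31836 x 10^37
```

Dividing ks2 by 362880 created permutations gives the quoted $\sim 10^{-30}$\% fraction of the space that the priority algorithm actually examines.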
7 shows the performance analysis of the priority algorithm for instance-I. The focus is on instance-I not only because it represents the average case scenario in real life, but also because it can be solved optimally by CPLEX in reasonable time. In this analysis we compare, for each case, the solution found by the priority algorithm against the optimal solution found by our MIP model using CPLEX. In Fig.7(a), we show the impact of increasing and decreasing the site unloading times of Table-III. We refer to these changes as deviations from the original values in the table. For example, a deviation of 0 in the figure means no change in the parameter values, while $-5$ means decreasing the unloading time of each site by 5, so that the new parameter values become $\lbrace 20,20,20,25,25\rbrace$. The deviation of 0 represents the original values $\lbrace 25,25,25,30,30\rbrace$, which is the case in which the unloading time satisfies $U_{i}\geq n\cdot L_{t}$ $ \forall i \in \left\lbrace 1,..,n \right\rbrace $. The deviations of $-5$ and $-10$ represent the cases in which at least one site has $U_{i} < n\cdot L_{t}$; for example, with a deviation of $-5$, sites 1, 2 and 3 each have an unloading time of 20, while $n\cdot L_{t}=5\times 5=25$. For the deviations 0, 5 and 10, in which $U_{i}\geq n\cdot L_{t}$, the priority algorithm finds the same optimal solution as CPLEX. The results for deviation 0 are shown in Table-IV, and a detailed explanation of the algorithm mechanism was given above. The algorithm performance ratio for these deviations is given in Fig.7(e): a performance ratio of 1 is achieved for the cases $U_{i} \geq n\cdot L_{t}$, while for the cases $U_{i} < n\cdot L_{t}$, the $-10$ and $-5$ deviations, performance ratios of 1.056 and 1.02, respectively, are achieved. Another scenario that is also common in real life is when the sites have unloading times so close to each other that the differences are slight and can be neglected.
We also analyze this case over a range of unloading times, as shown in Fig.7(b) and (f). The x-axis of both graphs represents the unloading time per site; for example, a value of 25 min means that the unloading time of each of the five sites is 25 min. In this scenario, the priority algorithm finds optimal solutions for all unloading time values and achieves a performance ratio of 1, as shown in (f). This advantage of the algorithm is best exploited for large instances such as instance-II of nine sites: there the algorithm finds the optimal solution in 16.35 sec, as shown in Table-VI, while CPLEX ran for 3600 sec to find a feasible solution and might need days to converge to the optimal solution. The computational cost of the priority algorithm is also evaluated for instance-I by changing the demand quantities of the sites. Increasing these demands increases the total number of trips, up to 45 trips, as shown in Fig.7(c). By changing only the total number of trips and keeping the other parameters the same, the priority algorithm finds optimal solutions for any number of trips, as long as every site $i$ satisfies $U_{i}\geq n\cdot L_{t}$. Under this condition, the priority algorithm achieves a performance ratio of 1, as shown in Fig.7(g). Even though the priority algorithm and the MIP model find the same solutions, as in Fig.7(c), they have completely different computational times, as shown in Fig.7(h). The computational time of the priority algorithm increases almost linearly with the total number of trips, as shown in (d), and its runtime is very low, whereas the computational time of CPLEX increases exponentially, as shown in (h). For the case of 45 trips, the priority algorithm needs around 0.16 sec to find the optimal solution, while CPLEX needs 550.9 sec. Both the priority approach and the MIP approach attempt to locate the feasible regions of the solution space.
The main advantages of the priority algorithm are its optimality and its low runtime. The main difficulty is that a deep analysis of the problem must be performed before the algorithm can be designed. The LP-based solutions, on the other hand, are highly time- and resource-consuming approaches with no guarantee of optimality as the instances grow in size. The tendency in the RMC industry [18] indicates that its instances will grow larger with time, because the increasing rate of CBP productivity is not matched by a corresponding increase in truck capacity; therefore, heuristic approaches such as the priority algorithm are expected to dominate in the near future, and their parallel versions [19-20] may play key roles. \section{Conclusion} In this paper, the vehicle scheduling problem under capacity and time window constraints has been proposed and analyzed in depth. We adopt the RMCDP as a case study in this category and show a proper projection of the RMCDP into graph theory, an important step that constitutes our first contribution in this domain. By this projection, and by proving the NP-Completeness of the RMCDP, we open the door to importing any improvement in graph-based optimization techniques into this domain, and vice versa. Given the complexity and high computational time of the linear programming based approaches, and the low optimality and feasibility of the evolutionary algorithms, we adopted a heuristic approach for its high optimality and low time complexity, which have been demonstrated by our results. In order to design an effective heuristic approach, we proposed a set of problem definitions and their associated analysis. Our approach is based on mining the main characteristics of the problem in order to design the feasible solutions in a systematic way rather than searching for them randomly. The algorithm shows high optimality and low runtime cost for both average and large instances.
However, we believe our approach has a full range of potential that needs to be explored. Our contributions in this paper can be applied to other fields and are not restricted to the RMCDP domain. Extending our work to other open optimization problems is under consideration as future work. \end{document}
\begin{document} \title{ Learning Kernel-Based Halfspaces with the Zero-One Loss } \begin{abstract} We describe and analyze a new algorithm for agnostically learning kernel-based halfspaces with respect to the \emph{zero-one} loss function. Unlike most previous formulations, which rely on surrogate convex loss functions (e.g. hinge-loss in SVM and log-loss in logistic regression), we provide finite time/sample guarantees with respect to the more natural zero-one loss function. The proposed algorithm can learn kernel-based halfspaces in worst-case time $\mathrm{poly}(\exp(L\log(L/\epsilon)))$, for \emph{any} distribution, where $L$ is a Lipschitz constant (which can be thought of as the reciprocal of the margin), and the learned classifier is worse than the optimal halfspace by at most $\epsilon$. We also prove a hardness result, showing that under a certain cryptographic assumption, no algorithm can learn kernel-based halfspaces in time polynomial in $L$. \end{abstract} \section{Introduction} A highly important hypothesis class in machine learning theory and applications is that of halfspaces in a Reproducing Kernel Hilbert Space (RKHS). Choosing a halfspace based on empirical data is often performed using Support Vector Machines (SVMs) \cite{Vapnik98}. SVMs replace the more natural 0-1 loss function with a convex surrogate -- the hinge-loss. By doing so, we can rely on convex optimization tools. However, there are no guarantees on how well the hinge-loss approximates the 0-1 loss function. There do exist some recent results on the \emph{asymptotic} relationship between surrogate convex loss functions and the 0-1 loss function \citep{Zhang04a,BartlettJoMc06}, but these do not come with finite-sample or finite-time guarantees. In this paper, we tackle the task of learning kernel-based halfspaces with respect to the non-convex 0-1 loss function.
Our goal is to derive learning algorithms and to analyze them in the finite-sample finite-time setting. Following the standard statistical learning framework, we assume that there is an unknown distribution, $\mathcal{D}$, over the set of labeled examples, $\mathcal{X} \times \{0,1\}$, and our primary goal is to find a classifier, $h : \mathcal{X} \to \{0,1\}$, with low generalization error, \begin{equation} \label{eqn:def_err} \err_\mathcal{D}(h) ~\eqdef~ \E_{({\mathbf x},y)\sim \mathcal{D}}[ |h({\mathbf x})-y|] ~. \end{equation} The learning algorithm is allowed to sample a training set of labeled examples, $ ({\mathbf x}_1,y_1),\ldots,({\mathbf x}_m,y_m)$, where each example is sampled i.i.d. from $\mathcal{D}$, and it returns a classifier. Following the agnostic PAC learning framework \cite{KearnsScSe92}, we say that an algorithm $(\epsilon,\delta)$-learns a concept class $H$ of classifiers using $m$ examples if, with probability at least $1-\delta$ over the random choice of $m$ examples, the algorithm returns a classifier $\hat{h}$ that satisfies \begin{equation} \label{eqn:aPAC} \err_\mathcal{D}(\hat{h}) ~\le~ \inf_{h \in H} \err_\mathcal{D}(h) + \epsilon ~. \end{equation} We note that $\hat{h}$ does not necessarily belong to $H$. Namely, we are concerned with \emph{improper} learning, which is as useful as proper learning for the purpose of deriving good classifiers. A common learning paradigm is the Empirical Risk Minimization (ERM) rule, which returns a classifier that minimizes the average error over the training set, \[ \hat{h} \in \argmin_{h \in H} \frac{1}{m} \sum_{i=1}^m |h({\mathbf x}_i)-y_i| ~. \] The class of (origin centered) halfspaces is defined as follows. Let $\mathcal{X}$ be a compact subset of a RKHS, which w.l.o.g. will be taken to be the unit ball around the origin.
Let $\phi_{0-1} : \mathbb{R} \to \mathbb{R}$ be the function $\phi_{0-1}(a) = \indct{a \ge 0} = \thalf ({\mathrm{sgn}}(a)+1)$. The class of halfspaces is the set of classifiers \[ H_{\phi_{0-1}} ~\eqdef~ \{ {\mathbf x} \mapsto \phi_{0-1}(\inner{{\mathbf w},{\mathbf x}}) \,:\, {\mathbf w} \in \mathcal{X} \} ~. \] Although we represent the halfspace using ${\mathbf w} \in \mathcal{X}$, which is a vector in the RKHS whose dimensionality can be infinite, in practice we only need a function that implements inner products in the RKHS (a.k.a. a kernel function), and one can define ${\mathbf w}$ as the coefficients of a linear combination of examples in our training set. To simplify the notation throughout the paper, we represent ${\mathbf w}$ simply as a vector in the RKHS. It is well known that if the dimensionality of $\mathcal{X}$ is $n$, then the VC dimension of $H_{\phi_{0-1}}$ equals $n$. This implies that the number of training examples required to obtain a guarantee of the form given in \eqref{eqn:aPAC} for the class of halfspaces scales at least linearly with the dimension $n$ \citep{Vapnik98}. Since kernel-based learning algorithms allow $\mathcal{X}$ to be an infinite dimensional inner product space, we must use a different class in order to obtain a guarantee of the form given in \eqref{eqn:aPAC}. One way to define a slightly different concept class is to approximate the non-continuous function, $\phi_{0-1}$, with a Lipschitz continuous function, $\phi : \mathbb{R} \to [0,1]$, which is often called a transfer function. For example, we can use a sigmoidal transfer function \begin{equation} \label{eqn:sigdef} \phi_{\mathrm{sig}}(a) ~\eqdef~ \frac{1}{1+ \exp(-4L\,a)} ~, \end{equation} which is an $L$-Lipschitz function.
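As a quick numerical sanity check (an illustration, not part of the paper's analysis): the derivative of $\phi_{\mathrm{sig}}$ is $4L\,s(1-s)$ with $s=\phi_{\mathrm{sig}}(a)$, maximized at $a=0$ where $s=\tfrac{1}{2}$, giving slope exactly $L$. A finite-difference scan confirms the Lipschitz constant:

```python
import math

def phi_sig(a, L):
    """Sigmoid transfer function: phi_sig(a) = 1 / (1 + exp(-4 L a))."""
    return 1.0 / (1.0 + math.exp(-4.0 * L * a))

def max_slope(L, lo=-1.0, hi=1.0, grid=10001):
    """Largest finite-difference quotient of phi_sig over a uniform grid;
    by the mean value theorem this never exceeds the Lipschitz constant."""
    h = (hi - lo) / (grid - 1)
    pts = [lo + i * h for i in range(grid)]
    return max(abs(phi_sig(pts[i + 1], L) - phi_sig(pts[i], L)) / h
               for i in range(grid - 1))
```

For $L=10$ the scan returns a value just below 10, attained on the grid interval containing the origin.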
Other $L$-Lipschitz transfer functions are the erf function and the piece-wise linear function: \begin{equation}\label{eq:erfpw} \phi_{\mathrm{erf}}(a) ~\eqdef~ \thalf\left(1+\text{erf}\left(\sqrt{\pi} \,L\,a\right)\right) ~~~~,~~~~ \phi_{\mathrm{pw}}(a) ~\eqdef~ \max\left\{\min\left\{\tfrac{1}{2} + L\,a \,,\, 1\right\} ,\, 0\right\} \end{equation} An illustration of these transfer functions is given in \figref{fig:erf}. \begin{figure}[t] \begin{center} \begin{tikzpicture}[scale=2.7]
\draw[->,gray] (-1.2,0) -- (1.2,0); \draw[->,gray] (0,0) -- (0,1.2);
\draw (-1,0.05) -- (-1,-0.05) node[below] {\scriptsize -1};
\draw (1,0.05) -- (1,-0.05) node[below] {\scriptsize 1};
\draw (0.05,1) -- (-0.05,1) node[left] {\scriptsize 1};
\draw[-, thick,green] plot[smooth] coordinates{(-1.00,0.00) (-0.95,0.00) (-0.90,0.00) (-0.85,0.00) (-0.80,0.00) (-0.75,0.00) (-0.70,0.00) (-0.65,0.00) (-0.60,0.00) (-0.55,0.00) (-0.50,0.00) (-0.45,0.00) (-0.40,0.00) (-0.35,0.00) (-0.30,0.00) (-0.25,0.00) (-0.20,0.00) (-0.15,0.00) (-0.10,0.01) (-0.05,0.11) (0.00,0.50) (0.05,0.89) (0.10,0.99) (0.15,1.00) (0.20,1.00) (0.25,1.00) (0.30,1.00) (0.35,1.00) (0.40,1.00) (0.45,1.00) (0.50,1.00) (0.55,1.00) (0.60,1.00) (0.65,1.00) (0.70,1.00) (0.75,1.00) (0.80,1.00) (0.85,1.00) (0.90,1.00) (0.95,1.00) (1.00,1.00) };
\draw[dotted,very thick,black] plot[smooth] coordinates{(-1.00,0.00) (-0.95,0.00) (-0.90,0.00) (-0.85,0.00) (-0.80,0.00) (-0.75,0.00) (-0.70,0.00) (-0.65,0.00) (-0.60,0.00) (-0.55,0.00) (-0.50,0.00) (-0.45,0.00) (-0.40,0.00) (-0.35,0.00) (-0.30,0.00) (-0.25,0.00) (-0.20,0.00) (-0.15,0.00) (-0.10,0.02) (-0.05,0.12) (0.00,0.50) (0.05,0.88) (0.10,0.98) (0.15,1.00) (0.20,1.00) (0.25,1.00) (0.30,1.00) (0.35,1.00) (0.40,1.00) (0.45,1.00) (0.50,1.00) (0.55,1.00) (0.60,1.00) (0.65,1.00) (0.70,1.00) (0.75,1.00) (0.80,1.00) (0.85,1.00) (0.90,1.00) (0.95,1.00) (1.00,1.00) };
\draw[dashed,blue,very thick] (-1,0) -- (0,0) -- (0,1) -- (1,1);
\draw[dashed,red,thick] (-1,0) -- (-0.05,0) -- (0.05,1) -- (1,1);
\end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=2.7]
\draw[->,gray] (-1.2,0) -- (1.2,0); \draw[->,gray] (0,0) -- (0,1.2);
\draw (-1,0.05) -- (-1,-0.05) node[below] {\scriptsize -1};
\draw (1,0.05) -- (1,-0.05) node[below] {\scriptsize 1};
\draw (0.05,1) -- (-0.05,1) node[left] {\scriptsize 1};
\draw[-, thick,green] plot[smooth] coordinates{(-1.00,0.00) (-0.95,0.00) (-0.90,0.00) (-0.85,0.00) (-0.80,0.00) (-0.75,0.00) (-0.70,0.00) (-0.65,0.00) (-0.60,0.00) (-0.55,0.00) (-0.50,0.00) (-0.45,0.00) (-0.40,0.00) (-0.35,0.00) (-0.30,0.01) (-0.25,0.03) (-0.20,0.07) (-0.15,0.13) (-0.10,0.23) (-0.05,0.35) (0.00,0.50) (0.05,0.65) (0.10,0.77) (0.15,0.87) (0.20,0.93) (0.25,0.97) (0.30,0.99) (0.35,1.00) (0.40,1.00) (0.45,1.00) (0.50,1.00) (0.55,1.00) (0.60,1.00) (0.65,1.00) (0.70,1.00) (0.75,1.00) (0.80,1.00) (0.85,1.00) (0.90,1.00) (0.95,1.00) (1.00,1.00) };
\draw[dotted,very thick,black] plot[smooth] coordinates{(-1.00,0.00) (-0.95,0.00) (-0.90,0.00) (-0.85,0.00) (-0.80,0.00) (-0.75,0.00) (-0.70,0.00) (-0.65,0.00) (-0.60,0.00) (-0.55,0.00) (-0.50,0.00) (-0.45,0.00) (-0.40,0.01) (-0.35,0.01) (-0.30,0.03) (-0.25,0.05) (-0.20,0.08) (-0.15,0.14) (-0.10,0.23) (-0.05,0.35) (0.00,0.50) (0.05,0.65) (0.10,0.77) (0.15,0.86) (0.20,0.92) (0.25,0.95) (0.30,0.97) (0.35,0.99) (0.40,0.99) (0.45,1.00) (0.50,1.00) (0.55,1.00) (0.60,1.00) (0.65,1.00) (0.70,1.00) (0.75,1.00) (0.80,1.00) (0.85,1.00) (0.90,1.00) (0.95,1.00) (1.00,1.00) };
\draw[dashed,blue,very thick] (-1,0) -- (0,0) -- (0,1) -- (1,1);
\draw[dashed,red,thick] (-1,0) -- (-0.1667,0) -- (0.1667,1) -- (1,1);
\end{tikzpicture} \end{center} \caption{\footnotesize Illustrations of transfer functions for $L=10$ (left) and $L=3$ (right): the 0-1 transfer function (dashed blue line); the sigmoid transfer function (dotted black line); the erf transfer function (green line); the piece-wise linear transfer function (dashed red line).}
\label{fig:erf} \end{figure} Analogously to the definition of $H_{\phi_{0-1}}$, for a general transfer function $\phi$ we define $H_\phi$ to be the set of predictors ${\mathbf x} \mapsto \phi(\inner{{\mathbf w},{\mathbf x}})$. Since the range of $\phi$ is now not $\{0,1\}$ but rather the entire interval $[0,1]$, we interpret $\phi(\inner{{\mathbf w},{\mathbf x}})$ as the probability of outputting the label $1$. The definition of $\err_\mathcal{D}(h)$ remains\footnote{Note that in this case $\err_\mathcal{D}(h)$ can be interpreted as $\Pr_{({\mathbf x},y) \sim \mathcal{D},\, b \sim \phi(\inner{{\mathbf w},{\mathbf x}})}[y \neq b]$.} as in \eqref{eqn:def_err}. The advantage of using a Lipschitz transfer function can be seen via Rademacher generalization bounds \cite{BartlettMe02}. In fact, a simple corollary of the contraction lemma implies the following: \begin{theorem} \label{thm:Rad} Let $\epsilon,\delta \in (0,1)$ and let $\phi$ be an $L$-Lipschitz transfer function. Let $m$ be an integer satisfying \[ m ~\ge~ \left(\frac{2L + 3\sqrt{2\ln(8/\delta)}}{\epsilon} \right)^2 ~. \] Then, for any distribution $\mathcal{D}$ over $\mathcal{X} \times \{0,1\}$, the ERM algorithm $(\epsilon,\delta)$-learns the concept class $H_\phi$ using $m$ examples. \end{theorem} The above theorem tells us that the sample complexity of learning $H_\phi$ is of order $L^2/\epsilon^2$ examples. Crucially, the sample complexity does not depend on the dimensionality of $\mathcal{X}$, but only on the Lipschitz constant of the transfer function. This allows us to learn with kernels, where the dimensionality of $\mathcal{X}$ can even be infinite. A related analysis compares the error rate of a halfspace ${\mathbf w}$ to the number of margin mistakes ${\mathbf w}$ makes on the training set; see \secref{sec:margin} for a comparison.
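For illustration, the transfer functions discussed above can be sketched in a few lines of code. The following Python sketch assumes the sigmoid form $\phi_\mathrm{sig}(a) = 1/(1+\exp(-4La))$ from \eqref{eqn:sigdef}; the grid-based slope probe is a crude numerical check of the Lipschitz constant, not a proof.

```python
import numpy as np
from math import erf

# Sketch of the three L-Lipschitz transfer functions.
# Assumption: the sigmoid transfer has the form 1/(1 + exp(-4 L a)),
# so that all three functions have slope L at the origin and range [0, 1].

def phi_sig(a, L):
    return 1.0 / (1.0 + np.exp(-4.0 * L * np.asarray(a, dtype=float)))

def phi_erf(a, L):
    return 0.5 * (1.0 + np.vectorize(erf)(np.sqrt(np.pi) * L * np.asarray(a, dtype=float)))

def phi_pw(a, L):
    return np.clip(0.5 + L * np.asarray(a, dtype=float), 0.0, 1.0)

def max_slope(phi, L, n=20001):
    # Crude numerical probe of the Lipschitz constant on a fine grid.
    a = np.linspace(-1.0, 1.0, n)
    v = phi(a, L)
    return float(np.max(np.abs(np.diff(v) / np.diff(a))))
```

All three functions pass the probe with constant $L$, consistent with the hypothesis of the theorem above.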
From the computational complexity point of view, the result given in \thmref{thm:Rad} is problematic, since the ERM algorithm must solve the non-convex optimization problem \begin{equation} \label{eqn:optERM} \argmin_{{\mathbf w}: \|{\mathbf w}\| \le 1} ~\frac{1}{m} \sum_{i=1}^m |\phi(\inner{{\mathbf w},{\mathbf x}_i})-y_i| ~. \end{equation} Solving this problem in polynomial time is hard under reasonable assumptions (see \secref{sec:hardness}, where we present a formal hardness result). Adapting a technique due to \cite{Ben-DavidSi00}, we show in \fullpaper{\appref{sec:ERMsolve}}\conferencepaper{the full version of this paper \cite{SSSS10}} that it is possible to find an $\epsilon$-accurate solution to \eqref{eqn:optERM} (where the transfer function is $\phi_\mathrm{pw}$) in time $\mathrm{poly}\left(\exp\left(\tfrac{L^2}{\epsilon^2} \log(\tfrac{L}{\epsilon})\right)\right)$. The main contribution of this paper is the derivation and analysis of a simpler learning algorithm that $(\epsilon,\delta)$-learns the class $H_\mathrm{sig}$ using time and sample complexity of at most $\mathrm{poly}\left(\exp\left(L\, \log(\tfrac{L}{\epsilon} )\right)\right)$. That is, the runtime of our algorithm is exponentially smaller than the runtime required to solve the ERM problem using the technique described in \cite{Ben-DavidSi00}. Moreover, the algorithm of \cite{Ben-DavidSi00} performs an exhaustive search over all subsets of $(L/\epsilon)^2$ examples from the training set, and therefore its runtime is always of order $m^{L^2/\epsilon^2}$. In contrast, our algorithm's runtime depends on a parameter $B$, which is bounded by $\exp(L)$ only under a worst-case assumption. Depending on the underlying distribution, $B$ can be much smaller than the worst-case bound.
In practice, we will cross-validate for $B$, and therefore the worst-case bound will often be pessimistic. The rest of the paper is organized as follows. In \secref{sec:main} we describe our main results. Next, in \secref{sec:hardness} we provide a hardness result, showing that it is unlikely that there exists an algorithm that learns $H_\mathrm{sig}$ or $H_\mathrm{pw}$ in time polynomial in $L$. We outline additional related work in \secref{sec:related}. In particular, the relation between our approach and margin-based analysis is described in \secref{sec:margin}, and the relation to approaches utilizing a distributional assumption is discussed in \secref{sec:kalai}. We wrap up with a discussion in \secref{sec:discussion}. \section{Main Results} \label{sec:main} In this section we present our main result. Recall that we would like to derive an algorithm that learns the class $H_\mathrm{sig}$. However, the ERM optimization problem associated with $H_\mathrm{sig}$ is non-convex. The main idea behind our construction is to learn a larger hypothesis class, denoted $H_B$, which approximately contains $H_\mathrm{sig}$, and for which the ERM optimization problem becomes convex. The price we pay is that, from the statistical point of view, the class $H_B$ is more difficult to learn than $H_\mathrm{sig}$, and the sample complexity therefore increases. The class $H_B$ we use is a class of \emph{linear} predictors in some other RKHS. The kernel function that implements the inner product in the newly constructed RKHS is \begin{equation} \label{eqn:kerneldef} K({\mathbf x},{\mathbf x}') ~\eqdef~ \frac{1}{1 - \nu \inner{{\mathbf x},{\mathbf x}'}} ~, \end{equation} where $\nu \in (0,1)$ is a parameter and $\inner{{\mathbf x},{\mathbf x}'}$ is the inner product in the original RKHS.
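To make the construction concrete, the following sketch builds the Gram matrix of the kernel \eqref{eqn:kerneldef} (with $\nu = 1/2$) for random points in the unit ball, numerically checks positive semi-definiteness, and verifies the power-series identity $\sum_{j} 2^{-j}\inner{{\mathbf x},{\mathbf x}'}^j = K({\mathbf x},{\mathbf x}')$ that reappears later in the analysis. This is only a sanity check, not part of the formal argument; the data and truncation level are illustrative choices.

```python
import numpy as np

def K(x, xp, nu=0.5):
    # Composed kernel K(x, x') = 1 / (1 - nu <x, x'>).
    return 1.0 / (1.0 - nu * np.dot(x, xp))

def gram(X, nu=0.5):
    # X: (m, n) array of points in the unit ball.
    return 1.0 / (1.0 - nu * (X @ X.T))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
# Project the points into the unit ball, as required by the setting.
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))

G = gram(X)
eigs = np.linalg.eigvalsh(G)  # all eigenvalues should be (numerically) nonnegative

# Truncated power series sum_{j<200} 2^{-j} <x, x'>^j; converges since |<x, x'>| <= 1.
s = float(X[0] @ X[1])
series = sum(0.5 ** j * s ** j for j in range(200))
```

The eigenvalue check confirms that the Gram matrix is positive semi-definite on this sample, and the truncated series matches $K$ to machine precision.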
As mentioned previously, $\inner{{\mathbf x},{\mathbf x}'}$ is usually implemented by some kernel function $K'({\mathbf z},{\mathbf z}')$, where ${\mathbf z}$ and ${\mathbf z}'$ are the pre-images of ${\mathbf x}$ and ${\mathbf x}'$ with respect to the feature mapping induced by $K'$. Therefore, the kernel in \eqref{eqn:kerneldef} is simply a composition with $K'$, i.e.\ $K({\mathbf z},{\mathbf z}')=1/(1-\nu K'({\mathbf z},{\mathbf z}'))$. To simplify the presentation we set $\nu = 1/2$, although in practice other choices might be more effective. It is easy to verify that $K$ is a valid positive definite kernel function (see for example \cite{ScholkopfSm02,CristianiniSh04}). Therefore, there exists some mapping $\psi : \mathcal{X} \to \mathbb{V}$, where $\mathbb{V}$ is an RKHS with $\inner{\psi({\mathbf x}),\psi({\mathbf x}')} = K({\mathbf x},{\mathbf x}')$. The class $H_B$ is defined to be: \begin{equation} \label{eqn:HBdef} H_B ~\eqdef~ \{ {\mathbf x} \mapsto \inner{{\mathbf v},\psi({\mathbf x})} \, : \, {\mathbf v} \in \mathbb{V},~\|{\mathbf v}\|^2 \le B \} ~. \end{equation} The main result we prove in this section is the following: \begin{theorem}\label{thm:mainres} Let $\epsilon,\delta \in (0,1)$ and let $L \ge 3$. Let $B = 2L^4+\exp\left(7L \log\left(\tfrac{2L}{\epsilon}\right)+3\right)$ and let $m$ be a sample size that satisfies $m \ge \frac{8B}{\epsilon^2} \, \left(2+9\sqrt{\ln(8/\delta)}\right)^2 $. Then, for any distribution $\mathcal{D}$, with probability of at least $1-\delta$, any ERM predictor $\hat{h} \in H_B$ with respect to $H_B$ satisfies \[ \err_\mathcal{D}(\hat{h}) \le \min_{h \in H_\mathrm{sig}} \err_\mathcal{D}(h) + \epsilon ~. \] \end{theorem} We note that the bound on $B$ is far from the tightest possible in terms of constants and second-order terms.
Also, the assumption $L\geq 3$ is rather arbitrary, and is meant to simplify the presentation of the bound. To prove this theorem, we start by analyzing the time and sample complexity of learning $H_B$. The sample complexity analysis follows directly from a Rademacher generalization bound \cite{BartlettMe02}. In particular, the following theorem tells us that the sample complexity of learning $H_B$ with the ERM rule is of order $B/\epsilon^2$ examples. \begin{theorem} \label{thm:HBsample} Let $\epsilon,\delta \in (0,1)$, let $B \ge 1$, and let $m$ be a sample size that satisfies \[ m ~\ge~ \frac{2B}{\epsilon^2} \, \left(2+9\sqrt{\ln(8/\delta)}\right)^2 ~. \] Then, for any distribution $\mathcal{D}$, the ERM algorithm $(\epsilon,\delta)$-learns $H_B$. \end{theorem} \begin{proof} Since $K({\mathbf x},{\mathbf x}) \le 2$, the Rademacher complexity of $H_B$ is bounded by $\sqrt{2 B/m}$ (see also \cite{KakadeSrTe08}). Additionally, using the Cauchy-Schwarz inequality, the loss is bounded: $|\inner{{\mathbf v},\psi({\mathbf x})}-y| \le \sqrt{2 B} + 1$. The result now follows directly from \cite{BartlettMe02,KakadeSrTe08}. \end{proof} Next, we show that the ERM problem with respect to $H_B$ can be solved in time $\mathrm{poly}(m)$. The ERM problem associated with $H_B$ is \[ \min_{{\mathbf v} : \|{\mathbf v}\|^2 \le B} \frac{1}{m} \sum_{i=1}^m | \inner{{\mathbf v},\psi({\mathbf x}_i)} - y_i| ~. \] Since the objective function is defined only via inner products with $\psi({\mathbf x}_i)$, and the constraint on ${\mathbf v}$ is defined by the $\ell_2$-norm, it follows from the Representer theorem \cite{Wahba90} that there is an optimal solution ${\mathbf v}^\star$ that can be written as ${\mathbf v}^\star = \sum_{i=1}^m \alpha_i \psi({\mathbf x}_i)$.
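Before turning to the equivalent optimization over the coefficients $\alpha_i$, here is a minimal numerical sketch of how the resulting finite-dimensional ERM problem can be solved. The projected-subgradient scheme, the step sizes, and the iteration budget below are illustrative assumptions on our part; the text only requires that the problem is convex and solvable by standard tools.

```python
import numpy as np

def kernel_erm(G, y, B, steps=2000, eta=0.1):
    """Sketch: minimize (1/m) sum_i |<v, psi(x_i)> - y_i| over ||v||^2 <= B,
    parameterizing v = sum_j alpha_j psi(x_j) (Representer theorem), so that
    <v, psi(x_i)> = (G alpha)_i and ||v||^2 = alpha' G alpha."""
    m = G.shape[0]
    alpha = np.zeros(m)
    best, best_obj = alpha.copy(), np.inf
    for t in range(steps):
        pred = G @ alpha
        obj = np.mean(np.abs(pred - y))
        if obj < best_obj:
            best, best_obj = alpha.copy(), obj
        # Subgradient of the objective in alpha: (1/m) G sign(pred - y).
        g = G @ np.sign(pred - y) / m
        alpha -= eta / np.sqrt(t + 1.0) * g
        # Project back onto {alpha : alpha' G alpha <= B} by rescaling.
        norm2 = float(alpha @ G @ alpha)
        if norm2 > B:
            alpha *= np.sqrt(B / norm2)
    return best, best_obj

# Illustrative usage on synthetic unit-ball data with the composed kernel.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))
G = 1.0 / (1.0 - 0.5 * (X @ X.T))
y = (X[:, 0] > 0).astype(float)
alpha, obj = kernel_erm(G, y, B=10.0)
```

Any off-the-shelf convex solver would do equally well here; the point is only that the problem is tractable once expressed through the Gram matrix.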
Therefore, instead of optimizing over ${\mathbf v}$, we can optimize over the set of weights $\alpha_1,\ldots,\alpha_m$ by solving the equivalent optimization problem \[ \min_{\alpha_1,\ldots,\alpha_m} \frac{1}{m} \sum_{i=1}^m \left| \sum_{j=1}^m \alpha_j K({\mathbf x}_j,{\mathbf x}_i) - y_i \right| ~~~\textrm{s.t.}~~~ \sum_{i,j = 1}^m \alpha_i \alpha_j K({\mathbf x}_i,{\mathbf x}_j) \le B~. \] This is a convex optimization problem in $\mathbb{R}^m$ and therefore can be solved in time $\mathrm{poly}(m)$ using standard optimization tools.\footnote{In fact, using stochastic gradient descent, we can $(\epsilon,\delta)$-learn $H_B$ in time $O(m^2)$, where $m$ is as defined in \thmref{thm:HBsample}; see for example \cite{BottouBo08,ShalevSr08}.} We therefore obtain: \begin{corollary} Let $\epsilon,\delta \in (0,1)$ and let $B \ge 1$. Then, for any distribution $\mathcal{D}$, it is possible to $(\epsilon,\delta)$-learn $H_B$ with sample and time complexity of $\mathrm{poly}\left(\tfrac{B}{\epsilon} \,\log(1/\delta)\right)$. \end{corollary} It remains to understand why the class $H_B$ approximately contains the class $H_\mathrm{sig}$. Recall that for any transfer function $\phi$, we define the class $H_\phi$ to be all the predictors of the form ${\mathbf x} \mapsto \phi(\inner{{\mathbf w},{\mathbf x}})$. The first step is to show that $H_B$ contains the union of $H_\phi$ over all polynomial transfer functions that satisfy a certain boundedness condition on their coefficients. \begin{lemma} \label{lem:PB} Let $P_B$ be the following set of polynomials (possibly of infinite degree): \begin{equation} \label{eqn:PB} P_{B} \eqdef \left\{ p(a) = \sum_{j=0}^\infty \beta_j \,a^j \,:\, \sum_{j=0}^\infty \beta_j^2 \, 2^{j} \le B \right\} ~. \end{equation} Then, \[ \bigcup_{p \in P_B} H_p ~\subset~ H_B ~.
\] \end{lemma} \begin{proof} To simplify the proof, we first assume that $\mathcal{X}$ is simply the unit ball in $\mathbb{R}^n$, for an arbitrarily large but finite $n$. Consider the mapping $\psi:\mathcal{X} \rightarrow \mathbb{R}^{\mathbb{N}}$ defined as follows: for any ${\mathbf x} \in \mathcal{X}$, we let $\psi({\mathbf x})$ be an infinite vector, indexed by the tuples $(k_{1},\ldots,k_{j})\in \{1,\ldots,n\}^j$ for $j=0,1,\ldots$, where the entry at index $(k_{1},\ldots,k_{j})$ equals $ 2^{-j/2} x_{k_{1}}\cdot x_{k_{2}}\cdots x_{k_{j}} $. The inner product between $\psi({\mathbf x})$ and $\psi({\mathbf x}')$ for any ${\mathbf x},{\mathbf x}' \in \mathcal{X}$ can be calculated as follows: \begin{align*} \inner{\psi({\mathbf x}),\psi({\mathbf x}')}& ~=~ \sum_{j=0}^{\infty}\sum_{(k_{1},\ldots,k_{j})\in \{1,\ldots,n\}^j}2^{-j}x_{k_1}x'_{k_1}\cdots x_{k_j}x'_{k_j} ~=~ \sum_{j=0}^{\infty}2^{-j}(\inner{{\mathbf x},{\mathbf x}'})^j ~=~ \frac{1}{1- \thalf \inner{{\mathbf x},{\mathbf x}'}}. \end{align*} This is exactly the kernel function defined in \eqref{eqn:kerneldef} (recall that we set $\nu = 1/2$), and therefore $\psi$ maps to the RKHS defined by $K$. Consider any polynomial $p(a)=\sum_{j=0}^{\infty} \beta_j a^j$ in $P_{B}$, and any ${\mathbf w} \in \mathcal{X}$. Let ${\mathbf v}_{{\mathbf w}}$ be the element of $\mathbb{R}^{\mathbb{N}}$ whose entry at index $(k_1,\ldots,k_j)$ equals $\beta_j 2^{j/2} w_{k_1}\cdots w_{k_j}$ (for all $(k_1,\ldots,k_j) \in \{1,\ldots,n\}^j$, $j=0,1,\ldots$). By the definitions of $\psi$ and ${\mathbf v}_{{\mathbf w}}$, we have that \begin{align*} \inner{{\mathbf v}_{{\mathbf w}},\psi({\mathbf x})} & = \sum_{j=0}^{\infty}\sum_{k_1,\ldots,k_j}2^{-j/2}\beta_j 2^{j/2}w_{k_1}\cdots w_{k_j}\,x_{k_{1}}\cdots x_{k_{j}} = \sum_{j=0}^{\infty}\beta_j(\inner{{\mathbf w},{\mathbf x}})^j = p(\inner{{\mathbf w},{\mathbf x}})~.
\end{align*} In addition, \begin{align*} \|{\mathbf v}_{{\mathbf w}}\|^2 & = \sum_{j=0}^\infty \sum_{k_1,\ldots,k_j}\beta_j^2 2^{j} w_{k_1}^2\cdots w_{k_j}^2 = \sum_{j=0}^\infty \beta_j^2 2^{j} \sum_{k_1}w_{k_1}^2\sum_{k_2}w_{k_2}^2\cdots \sum_{k_j}w_{k_j}^2 = \sum_{j=0}^\infty \beta_j^2 2^{j} \left(\|{\mathbf w}\|^2\right)^j \leq B, \end{align*} where the last inequality uses $\|{\mathbf w}\| \le 1$ together with the definition of $P_B$. Thus, the predictor ${\mathbf x} \mapsto \inner{{\mathbf v}_{\mathbf w},\psi({\mathbf x})}$ belongs to $H_B$ and coincides with the predictor ${\mathbf x} \mapsto p(\inner{{\mathbf w},{\mathbf x}})$. This proves that $H_p \subset H_B$ for all $p \in P_B$, as required. Finally, if $\mathcal{X}$ is an infinite dimensional RKHS, the only technicality is that in order to represent ${\mathbf x}$ as a (possibly infinite) vector, we need to show that our RKHS has a countable basis. This holds since the inner product $\inner{{\mathbf x},{\mathbf x}'}$ over $\mathcal{X}$ is continuous and bounded (see \citep{Berlinet03}). \end{proof} Finally, the following lemma states that for a sufficiently large $B$, there exists a polynomial in $P_B$ which approximately equals $\phi_\mathrm{sig}$. This implies that $H_B$ approximately contains $H_\mathrm{sig}$. \begin{lemma}\label{lem:sig} Let $\phi_{\mathrm{sig}}$ be as defined in \eqref{eqn:sigdef}, where for simplicity we assume $L \geq 3$. For any $\epsilon>0$, let \[ B = 2L^4+\exp\left(7L \log\left(\tfrac{2L}{\epsilon}\right)+3\right). \] Then there exists $p\in P_{B}$ such that \[ \forall {\mathbf x},{\mathbf w} \in \mathcal{X},~~|p(\inner{{\mathbf w},{\mathbf x}}) - \phi_\mathrm{sig}(\inner{{\mathbf w},{\mathbf x}})| \le \epsilon ~. \] \end{lemma} The proof of the lemma is based on a Chebyshev approximation technique and is given in \fullpaper{\appref{sec:Sigmoid}}\conferencepaper{the full version of our paper \cite{SSSS10}}.
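To give a feel for what such an approximating polynomial looks like, the following sketch fits a Chebyshev interpolant to $\phi_\mathrm{sig}$ on $[-1,1]$ and inspects both the uniform error and the weighted coefficient norm $\sum_j \beta_j^2 2^j$ from the definition of $P_B$. The sigmoid form $1/(1+\exp(-4La))$, the value $L=3$, the degree, and the error tolerance are illustrative choices of ours, not the ones used in the actual proof.

```python
import numpy as np

# Illustrative companion to the lemma above (not the proof itself):
# fit a Chebyshev interpolant to phi_sig on [-1, 1], then measure its
# uniform error and the weighted coefficient norm sum_j beta_j^2 2^j,
# i.e. the quantity that must be at most B for membership in P_B.
L = 3.0
phi_sig = lambda a: 1.0 / (1.0 + np.exp(-4.0 * L * a))

deg = 40  # arbitrary illustrative degree
cheb = np.polynomial.chebyshev.Chebyshev.interpolate(phi_sig, deg, domain=[-1, 1])
# Convert to the monomial basis to read off the coefficients beta_j.
beta = cheb.convert(kind=np.polynomial.polynomial.Polynomial).coef

grid = np.linspace(-1.0, 1.0, 4001)
uniform_err = float(np.max(np.abs(cheb(grid) - phi_sig(grid))))
weight = float(np.sum(beta ** 2 * 2.0 ** np.arange(len(beta))))
```

As the lemma suggests, the uniform error is small at a moderate degree, while the weighted coefficient norm, though finite, is very large, in line with the exponential dependence of $B$ on $L$.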
Since the proof is rather involved, we also present a similar lemma, whose proof is simpler, for the $\phi_{\mathrm{erf}}$ transfer function (see \fullpaper{\appref{sec:erf}}\conferencepaper{\cite{SSSS10}}). It is interesting to note that $\phi_{\mathrm{erf}}$ actually \emph{belongs} to $P_B$ for a sufficiently large $B$, since it can be defined via its infinite-degree Taylor expansion. However, the bound for $\phi_{\mathrm{erf}}$ depends on $\exp(L^2)$, rather than $\exp(L)$ as for the sigmoid transfer function $\phi_{\mathrm{sig}}$. Finally, \thmref{thm:mainres} is obtained as follows: combining \thmref{thm:HBsample} and \lemref{lem:PB}, we get that with probability of at least $1-\delta$, \begin{equation} \label{eqn:combeq} \err_\mathcal{D}(\hat{h}) ~\le~ \min_{h \in H_B} \err_\mathcal{D}(h) + \epsilon/2 \le \min_{p \in P_B} \min_{h \in H_p} \err_\mathcal{D}(h) + \epsilon/2~. \end{equation} From \lemref{lem:sig} we obtain that for any ${\mathbf w} \in \mathcal{X}$, if $h({\mathbf x}) = \phi_\mathrm{sig}(\inner{{\mathbf w},{\mathbf x}})$ then there exists a polynomial $p_0 \in P_B$ such that if $h'({\mathbf x}) = p_0(\inner{{\mathbf w},{\mathbf x}})$ then $ \err_\mathcal{D}(h') \le \err_\mathcal{D}(h) + \epsilon/2 $. Since this holds for all ${\mathbf w}$, we get that \[ \min_{p \in P_B} \min_{h \in H_p} \err_\mathcal{D}(h) \le \min_{h \in H_\mathrm{sig}} \err_\mathcal{D}(h) + \epsilon/2 ~. \] Combining this with \eqref{eqn:combeq}, \thmref{thm:mainres} follows. \section{Hardness} \label{sec:hardness} In this section we derive a hardness result for agnostic learning of $H_\mathrm{sig}$ or $H_\mathrm{pw}$ with respect to the zero-one loss.
The hardness result relies on the hardness of standard (non-agnostic)\footnote{In the \emph{standard} PAC model, we assume that some hypothesis in the class has $\err_\mathcal{D}(h)=0$, while in the \emph{agnostic} PAC model, which we study in this paper, $\err_\mathcal{D}(h)$ might be strictly greater than zero for all $h \in H$. Note that our definition of $(\epsilon,\delta)$-learning in this paper is in the agnostic model.} PAC learning of intersections of halfspaces given in Klivans and Sherstov~\cite{KlivansSh06} (see also similar arguments in \cite{FeldmanGoKhPo06}). The hardness result is representation-independent: it makes no restrictions on the learning algorithm, and in particular it also holds for improper learning algorithms. The hardness result is based on the following cryptographic assumption: \begin{assumption} \label{crypto} There is no polynomial time solution to the $\tilde{O}(n^{1.5})$-unique-Shortest-Vector-Problem. \end{assumption} In a nutshell, given a basis ${\mathbf v}_1,\ldots,{\mathbf v}_n\in \mathbb{R}^n$, the $\tilde{O}(n^{1.5})$-unique-Shortest-Vector-Problem consists of finding the shortest nonzero vector in $\{a_1{\mathbf v}_1+\ldots+a_n{\mathbf v}_n : a_1,\ldots,a_n\in \mathbb{Z}\}$, even given the information that it is shorter by a factor of at least $\tilde{O}(n^{1.5})$ than any other non-parallel vector. This problem is believed to be hard: no sub-exponential algorithms are known, and the problem is known to be NP-hard if $\tilde{O}(n^{1.5})$ is replaced by a small constant (see \cite{KlivansSh06} for more details).
With this assumption, Klivans and Sherstov proved the following: \begin{theorem}[Theorem 1.2 in Klivans and Sherstov~\cite{KlivansSh06}] \label{thm:Klivans} Let $\mathcal{X} = \{\pm 1\}^n$, let \[H = \{{\mathbf x} \mapsto \phi_{0-1}(\inner{{\mathbf w},{\mathbf x}} - \theta - 1/2) : \theta \in \mathbb{N}, {\mathbf w} \in \mathbb{N}^n, |\theta|+\|{\mathbf w}\|_1 \le \mathrm{poly}(n)\} ~,\] and let $H_k = \{{\mathbf x} \mapsto (h_1({\mathbf x}) \land \ldots \land h_k({\mathbf x})) : \forall i, h_i \in H\}$. Then, based on Assumption~\ref{crypto}, $H_k$ is not efficiently learnable in the standard PAC model for any $k = n^{\rho}$, where $\rho > 0$ is a constant. \end{theorem} The above theorem implies the following. \begin{lemma} \label{lem:hard1} Based on Assumption~\ref{crypto}, there is no algorithm that runs in time $\mathrm{poly}(n,1/\epsilon,1/\delta)$ and $(\epsilon,\delta)$-learns the class $H$ defined in \thmref{thm:Klivans}. \end{lemma} \begin{proof} To prove the lemma, we show that if there is a polynomial time algorithm that learns $H$ in the \emph{agnostic} model, then there exists a weak learning algorithm (with a polynomial edge) that learns $H_k$ in the standard (non-agnostic) PAC model. In the standard PAC model, weak learning implies strong learning \cite{Schapire90}, hence the existence of a weak learning algorithm that learns $H_k$ would contradict \thmref{thm:Klivans}. Indeed, let $\mathcal{D}$ be any distribution such that there exists $h^\star \in H_k$ with $\err_\mathcal{D}(h^\star) = 0$. Let us rewrite $h^\star = h_1^\star \land \ldots \land h_k^\star$, where $h^\star_i \in H$ for all $i$. To show that there exists a weak learner, we first show that there exists some $h \in H$ with $\err_\mathcal{D}(h) \le 1/2 - 1/(2k^2)$. Since for each ${\mathbf x}$, if $h^\star({\mathbf x})=0$ then there exists $j$ s.t.
$h^\star_j({\mathbf x})=0$, we can use the union bound to get that \[ 1 = \Pr[\exists j : h^\star_j({\mathbf x}) = 0 \,|\, h^\star({\mathbf x})=0] \le \sum_j \Pr[h^\star_j({\mathbf x})=0 \,|\, h^\star({\mathbf x})=0] \le k \max_j \Pr[h^\star_j({\mathbf x})=0 \,|\, h^\star({\mathbf x})=0] ~. \] So, for the $j$ that maximizes $\Pr[h^\star_j({\mathbf x})=0 \,|\, h^\star({\mathbf x})=0]$, we get that $\Pr[h^\star_j({\mathbf x})=0 \,|\, h^\star({\mathbf x})=0] \ge 1/k$. Therefore, \begin{align*} \err_\mathcal{D}(h^\star_j) &= \Pr[h^\star_j({\mathbf x})=1 \land h^\star({\mathbf x})=0] = \Pr[h^\star({\mathbf x})=0] \, \Pr[h^\star_j({\mathbf x}) = 1 \,|\, h^\star({\mathbf x})=0] \\ &= \Pr[h^\star({\mathbf x})=0] \, (1 - \Pr[h^\star_j({\mathbf x}) = 0 \,|\, h^\star({\mathbf x})=0]) \le \Pr[h^\star({\mathbf x})=0] \, (1 - 1/k) ~. \end{align*} Now, if $\Pr[h^\star({\mathbf x})=0] \le 1/2+1/k^2$, then the above gives \[\err_\mathcal{D}(h^\star_j) \le (1/2 + 1/k^2)(1-1/k) \leq 1/2-1/(2k^2) ~, \] where the second inequality holds for any positive integer $k$. Otherwise, if $\Pr[h^\star({\mathbf x})=0] > 1/2+1/k^2$, then the constant predictor $h({\mathbf x}) = 0$ has $\err_\mathcal{D}(h) < 1/2 - 1/k^2$. In both cases we have shown that there exists a predictor in $H$ with error of at most $1/2 - 1/(2k^2)$. Finally, if we can agnostically learn $H$ in time $\mathrm{poly}(n,1/\epsilon,1/\delta)$, then we can find $h'$ with $\err_\mathcal{D}(h') \le \min_{h \in H} \err_\mathcal{D}(h) + \epsilon \le 1/2 - 1/(2k^2) + \epsilon$ in time $\mathrm{poly}(n,1/\epsilon,1/\delta)$ (recall that $k=n^\rho$ for some $\rho>0$). This gives a weak learner that runs in polynomial time, which concludes the proof.
\end{proof} Let $h$ be a hypothesis in the class $H$ defined in \thmref{thm:Klivans} and take any ${\mathbf x} \in \{\pm 1\}^n$. Then, there exist an integer $\theta$ and a vector of integers ${\mathbf w}$ such that $h({\mathbf x}) = \phi_{0-1}(\inner{{\mathbf w},{\mathbf x}}-\theta-1/2)$. But since $\inner{{\mathbf w},{\mathbf x}}-\theta$ is also an integer, if we let $L = 1$ this means that $h({\mathbf x}) = \phi_{\mathrm{pw}}(\inner{{\mathbf w},{\mathbf x}}-\theta-1/2)$ as well. Furthermore, letting ${\mathbf x}' \in \mathbb{R}^{n+1}$ denote the concatenation of ${\mathbf x}$ with the constant $1$, and letting ${\mathbf w}' \in \mathbb{R}^{n+1}$ denote the concatenation of ${\mathbf w}$ with the scalar $(-\theta-1/2)$, we obtain that $h({\mathbf x}) = \phi_{\mathrm{pw}}(\inner{{\mathbf w}',{\mathbf x}'})$. Last, normalizing $\tilde{{\mathbf w}} = {\mathbf w}'/\|{\mathbf w}'\|$ and $\tilde{{\mathbf x}}={\mathbf x}'/\|{\mathbf x}'\|$, and redefining $L$ to be $\|{\mathbf w}'\|\,\|{\mathbf x}'\|$, we get that $h({\mathbf x}) = \phi_{\mathrm{pw}}(\inner{\tilde{{\mathbf w}},\tilde{{\mathbf x}}})$. That is, we have shown that $H$ is contained in a class of the form $H_\mathrm{pw}$ with a Lipschitz constant bounded by $\mathrm{poly}(n)$. Combining the above with \lemref{lem:hard1}, we obtain the following: \begin{corollary} \label{cor:hardpw} Let $L$ be a Lipschitz constant and let $H_\mathrm{pw}$ be the class defined by the $L$-Lipschitz transfer function $\phi_\mathrm{pw}$. Then, based on Assumption~\ref{crypto}, there is no algorithm that runs in time $\mathrm{poly}(L,1/\epsilon,1/\delta)$ and $(\epsilon,\delta)$-learns the class $H_\mathrm{pw}$. \end{corollary} A similar argument leads to the hardness of learning $H_\mathrm{sig}$. \begin{theorem} Let $L$ be a Lipschitz constant and let $H_\mathrm{sig}$ be the class defined by the $L$-Lipschitz transfer function $\phi_\mathrm{sig}$.
Then, based on Assumption~\ref{crypto}, there is no algorithm that runs in time $\mathrm{poly}(L,1/\epsilon,1/\delta)$ and $(\epsilon,\delta)$-learns the class $H_\mathrm{sig}$. \end{theorem} \begin{proof} Let $h$ be a hypothesis in the class $H$ defined in \thmref{thm:Klivans} and take any ${\mathbf x} \in \{\pm 1\}^n$. Then, there exist an integer $\theta$ and a vector of integers ${\mathbf w}$ such that $h({\mathbf x}) = \phi_{0-1}(\inner{{\mathbf w},{\mathbf x}}-\theta-1/2)$. However, since $\inner{{\mathbf w},{\mathbf x}}-\theta$ is also an integer, we see that $$ |\phi_{0-1}(\inner{{\mathbf w},{\mathbf x}}-\theta-1/2) - \phi_{\mathrm{sig}}(\inner{{\mathbf w},{\mathbf x}}-\theta-1/2)| \le \frac{1}{1 + \exp(2 L )} ~. $$ This means that for any $\epsilon > 0$, if we pick $ L = \frac{\log(2/\epsilon - 1)}{2} $ and define $h_\mathrm{sig}({\mathbf x}) = \phi_{\mathrm{sig}}(\inner{{\mathbf w},{\mathbf x}}-\theta-1/2)$, then $|h({\mathbf x}) - h_\mathrm{sig}({\mathbf x})| \le \epsilon/2$. Furthermore, letting ${\mathbf x}' \in \mathbb{R}^{n+1}$ denote the concatenation of ${\mathbf x}$ with the constant $1$, and letting ${\mathbf w}' \in \mathbb{R}^{n+1}$ denote the concatenation of ${\mathbf w}$ with the scalar $(-\theta-1/2)$, we obtain that $h_\mathrm{sig}({\mathbf x}) = \phi_{\mathrm{sig}}(\inner{{\mathbf w}',{\mathbf x}'})$. Last, we normalize $\tilde{{\mathbf w}} = {\mathbf w}'/\|{\mathbf w}'\|$ and $\tilde{{\mathbf x}}={\mathbf x}'/\|{\mathbf x}'\|$, and redefine $L$ to be \begin{align}\label{eq:ell} L = \frac{\|{\mathbf w}'\| \|{\mathbf x}'\| \log(2/\epsilon - 1)}{2} \end{align} so that $h_\mathrm{sig}({\mathbf x}) = \phi_{\mathrm{sig}}(\inner{\tilde{{\mathbf w}},\tilde{{\mathbf x}}})$.
Thus we see that if there exists an algorithm that runs in time $\mathrm{poly}(L,1/\epsilon,1/\delta)$ and $(\epsilon/2,\delta)$-learns the class $H_\mathrm{sig}$, then, since for every $h \in H$ there exists $h_\mathrm{sig} \in H_\mathrm{sig}$ such that $ |h_\mathrm{sig}({\mathbf x}) - h({\mathbf x})| \le \epsilon/2 $, there also exists an algorithm that $(\epsilon,\delta)$-learns the concept class $H$ defined in \thmref{thm:Klivans} in time polynomial in $(L,1/\epsilon,1/\delta)$ (for $L$ as defined in \eqref{eq:ell}). But by the definition of $L$ in \eqref{eq:ell} and the fact that $\|{\mathbf w}'\|$ and $\|{\mathbf x}'\|$ are of size $\mathrm{poly}(n)$, this means that there is an algorithm that runs in time polynomial in $(n,1/\epsilon,1/\delta)$ and $(\epsilon,\delta)$-learns the class $H$, which contradicts \lemref{lem:hard1}. \end{proof} \section{Related work} \label{sec:related} The problem of learning kernel-based halfspaces has been extensively studied, mainly in the framework of SVM \cite{Vapnik98,CristianiniSh04,ScholkopfSm02}. When the data is separable with a margin $\mu$, it is possible to learn a halfspace in polynomial time. The learning problem becomes much more difficult when the data is not separable with a margin. In terms of hardness results, \cite{Ben-DavidSi00} derives hardness results for proper learning with sufficiently small margins. There are also strong hardness-of-approximation results for \emph{proper} learning \emph{without} margin (see for example \citep{GuruswamiRa06} and the references therein). We emphasize that we allow improper learning, which is just as useful for the purpose of learning good classifiers, and thus these hardness results do not apply. Instead, the hardness result we derived in \secref{sec:hardness} holds for improper learning as well.
As mentioned before, the main tool we rely on for deriving the hardness result is the representation-independent hardness result for learning intersections of halfspaces given in \cite{KlivansSh06}. Practical algorithms such as SVM often replace the 0-1 error function with a convex surrogate, and then apply convex optimization tools. However, there are no guarantees on how well the surrogate function approximates the 0-1 error function. Recently, \cite{Zhang04a,BartlettJoMc06} studied the \emph{asymptotic} relationship between surrogate convex loss functions and the 0-1 error function. In contrast, in this paper we show that even with a finite sample, surrogate convex loss functions can be competitive with the 0-1 error function, as long as we replace inner products with the kernel $K({\mathbf x},{\mathbf x}') = 1/(1-0.5\inner{{\mathbf x},{\mathbf x}'})$. \subsection{Margin analysis} \label{sec:margin} Recall that we circumvented the dependence of the VC dimension of $H_{\phi_{0-1}}$ on the dimensionality of $\mathcal{X}$ by replacing $\phi_{0-1}$ with a Lipschitz transfer function. Another common approach is to require that the learned classifier be competitive with the \emph{margin} error rate of the optimal halfspace. Formally, the $\mu$-margin error rate of a halfspace of the form $h_{\mathbf w}({\mathbf x}) = \indct{\inner{{\mathbf w},{\mathbf x}} > 0}$ is defined as \begin{equation} \label{eqn:errmu} \err_{\mathcal{D},\mu}({\mathbf w}) ~=~ \Pr[h_{\mathbf w}({\mathbf x}) \neq y \lor |\inner{{\mathbf w},{\mathbf x}}| \leq \mu] ~. \end{equation} Intuitively, $\err_{\mathcal{D},\mu}({\mathbf w})$ is the error rate of $h_{\mathbf w}$ had we $\mu$-shifted each point in the worst possible way.
Margin-based analysis restates the goal of the learner (as given in \eqref{eqn:aPAC}) and requires that the learner find a classifier $h$ that satisfies \begin{equation} \label{eqn:mb} \err_\mathcal{D}(h) ~\le~ \min_{{\mathbf w} : \|{\mathbf w}\| = 1} \err_{\mathcal{D},\mu}({\mathbf w}) + \epsilon ~. \end{equation} Bounds of the above form are called margin-based bounds and are widely used in the statistical analysis of Support Vector Machines and AdaBoost. It was shown \citep{BartlettMe02,McAllester03} that $m = \Theta(\log(1/\delta)/(\mu\,\epsilon)^2)$ examples are sufficient (and necessary) to learn a classifier for which \eqref{eqn:mb} holds with probability of at least $1-\delta$. Note that, as in the sample complexity bound we gave in \thmref{thm:Rad}, the margin-based sample complexity bound also does not depend on the dimension. In fact, the Lipschitz approach used in this paper and the margin-based approach are closely related. First, it is easy to verify that if we set $L = 1/(2\mu)$, then for any ${\mathbf w}$ the hypothesis $h({\mathbf x}) =\phi_{\mathrm{pw}}(\inner{{\mathbf w},{\mathbf x}})$ satisfies $\err_\mathcal{D}(h) \le \err_{\mathcal{D},\mu}({\mathbf w})$. Therefore, an algorithm that $(\epsilon,\delta)$-learns $H_\mathrm{pw}$ also guarantees that \eqref{eqn:mb} holds. Second, it is also easy to verify that if we set $L = \tfrac{1}{4\mu}\log\left(\tfrac{2-\epsilon}{\epsilon}\right)$, then for any ${\mathbf w}$ the hypothesis $h({\mathbf x}) =\phi_{\mathrm{sig}}(\inner{{\mathbf w},{\mathbf x}})$ satisfies $\err_\mathcal{D}(h) \le \err_{\mathcal{D},\mu}({\mathbf w}) + \epsilon/2$. Therefore, an algorithm that $(\epsilon/2,\delta)$-learns $H_\mathrm{sig}$ also guarantees that \eqref{eqn:mb} holds.
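The first of these two claims can be checked numerically. The sketch below draws arbitrary values of $\inner{{\mathbf w},{\mathbf x}}$ and labels (an artificial setup of our own, purely for verification) and confirms the pointwise domination $|\phi_\mathrm{pw}(\inner{{\mathbf w},{\mathbf x}})-y| \le \indct{h_{\mathbf w}({\mathbf x}) \neq y \lor |\inner{{\mathbf w},{\mathbf x}}| \le \mu}$ when $L = 1/(2\mu)$, which yields $\err_\mathcal{D}(h) \le \err_{\mathcal{D},\mu}({\mathbf w})$ after taking expectations.

```python
import numpy as np

# Numerical check: with L = 1/(2 mu), the absolute loss of phi_pw is
# dominated pointwise by the mu-margin mistake indicator.
mu = 0.1
L = 1.0 / (2.0 * mu)

def phi_pw(a):
    return np.clip(0.5 + L * a, 0.0, 1.0)

rng = np.random.default_rng(2)
a = rng.uniform(-1.0, 1.0, size=10000)             # values of <w, x>
y = rng.integers(0, 2, size=10000).astype(float)   # labels in {0, 1}

loss = np.abs(phi_pw(a) - y)
margin_mistake = ((a > 0).astype(float) != y) | (np.abs(a) <= mu)
```

Outside the margin and with the correct sign, $\phi_\mathrm{pw}$ saturates at exactly $y$, so the loss vanishes there; inside the margin or on a sign mistake, the loss is at most $1$.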
As a direct corollary of the above discussion we obtain that it is possible to learn a vector ${\mathbf w}$ that guarantees \eqref{eqn:mb} in time $\mathrm{poly}(\exp(\tilde{O}(1/\mu)))$. A computational complexity analysis under margin assumptions was first carried out in \citep{Ben-DavidSi00} (see also the hierarchical worst-case analysis recently proposed in \citep{Ben-David06}). The technique used in \citep{Ben-DavidSi00} is based on the observation that in the noise-free case, an optimal halfspace can be expressed as a linear sum of at most $1/\mu^2$ examples. Therefore, one can perform an exhaustive search over all subsequences of $1/\mu^2$ examples, and choose the optimal halfspace. Note that this algorithm will always run in time $m^{1/\mu^2}$. Since the sample complexity bound requires that $m$ be of order $1/(\mu\epsilon)^2$, the runtime of the method described in \citep{Ben-DavidSi00} becomes $\mathrm{poly}(\exp(\tilde{O}(1/\mu^2)))$. In comparison, our algorithm achieves a better runtime of $\mathrm{poly}(\exp(\tilde{O}(1/\mu)))$. Moreover, while the algorithm of \cite{Ben-DavidSi00} performs an exhaustive search, our algorithm's runtime depends on the parameter $B$, which is $\mathrm{poly}(\exp(\tilde{O}(1/\mu)))$ only under a worst-case assumption. Since in practice we will cross-validate for $B$, it is plausible that in many real-world scenarios the runtime of our algorithm will be much smaller. \subsection{Distributional Assumptions} \label{sec:kalai} The idea of approximating the zero-one transfer function with a polynomial was first proposed by \citep{KalaiKlMaSe05}, who studied the problem of agnostically learning halfspaces without kernels in $\mathbb{R}^n$ under distributional assumptions.
In particular, they showed that if the distribution over $\mathcal{X}$ is uniform over the unit ball, then it is possible to agnostically learn $H_{\phi_{0-1}}$ in time $\mathrm{poly}(n^{1/\epsilon^4})$. This was further generalized by \citep{BlaisOdWi08}, who showed that similar bounds hold for product distributions. Besides the distributional assumptions, these works are characterized by an explicit dependence on the dimension of $\mathcal{X}$, and therefore are not adequate for the kernel-based setting we consider in this paper, in which the dimensionality of $\mathcal{X}$ can even be infinite. More precisely, while \citep{KalaiKlMaSe05} approximate the zero-one transfer function with a low-degree polynomial, we require instead that the coefficients of the polynomial are bounded. The principle that when learning in high dimensions ``the size of the parameters is more important than their number'' has been one of the main themes in the analysis of the statistical properties of several learning algorithms (e.g. \citep{Bartlett96}). Interestingly, in \cite{ShalevShSr09tech} we show that the very same algorithm we use in this paper recovers the same complexity bound as \cite{KalaiKlMaSe05}. \section{Discussion} \label{sec:discussion} In this paper we described and analyzed a new technique for agnostically learning kernel-based halfspaces with the zero-one loss function. The bound we derive has an exponential dependence on $L$, the Lipschitz coefficient of the transfer function. While we prove that (under a certain cryptographic assumption) no algorithm can have a polynomial dependence on $L$, the immediate open question is whether the dependence on $L$ can be further improved. A perhaps surprising property of our analysis is that we propose a single algorithm, returning a single classifier, which is simultaneously competitive against \emph{all} transfer functions $p\in P_{B}$.
In particular, it learns with respect to the ``optimal'' transfer function, where by optimal we mean the one which attains the smallest error rate, $\E[|p(\inner{{\mathbf w},{\mathbf x}})-y|]$, over the distribution $\mathcal{D}$. Our algorithm boils down to linear regression with the absolute loss function while composing a particular kernel function over our original RKHS. It is possible to show that solving the vanilla SVM with the hinge loss, again composing our particular kernel over the desired kernel, also gives similar guarantees. It is therefore interesting to study whether there is something special about the kernel we propose, or whether other kernel functions (e.g. the Gaussian kernel) can give similar guarantees. Another possible direction is to consider other types of margin-based analysis or transfer functions. For example, in the statistical learning literature there are several definitions of ``noise'' conditions, some of which are related to the margin, that lead to a faster decrease of the error rate as a function of the number of examples (see for example \cite{Bousquet02,Tsybakov04,Steinwart07}). Studying the computational complexity of learning under these conditions is left to future work. \section*{Acknowledgments} We would like to thank Adam Klivans for helping with the hardness results. This work was partially supported by a Google Faculty Research Grant. \conferencepaper{\end{document}} \appendix \section{Solving the ERM problem given in \eqref{eqn:optERM}} \label{sec:ERMsolve} In this section we show how to approximately solve \eqref{eqn:optERM} when the transfer function is $\phi_\mathrm{pw}$. The technique we use is similar to the covering technique described in \cite{Ben-DavidSi00}. For each $i$, let $b_i = 2(y_i - 1/2)$.
It is easy to verify that the objective of \eqref{eqn:optERM} can be rewritten as \begin{equation} \label{eqn:optERM2} \frac{1}{m} \sum_{i=1}^m f(b_i \inner{{\mathbf w},{\mathbf x}_i}) ~~~~\textrm{where}~~ f(a) = \min\{1,\max\{0,1/2-L\,a\}\} ~. \end{equation} Let $g(a) =\max\{0,1/2-L\,a\}$. Note that $g$ is a convex function, $g(a) \ge f(a)$ for every $a$, and equality holds whenever $a \ge -1/(2L)$. Let ${\mathbf w}^\star$ be a minimizer of \eqref{eqn:optERM2} over the unit ball. We partition the set $[m]$ into \[ I_1 = \{i \in [m] : g(b_i\inner{{\mathbf w}^\star,{\mathbf x}_i}) = f(b_i \inner{{\mathbf w}^\star,{\mathbf x}_i})\} ~~~,~~~ I_2 = [m] \setminus I_1 ~. \] Now, let $\hat{{\mathbf w}}$ be a vector that satisfies \begin{equation} \label{eqn:regsuff} \sum_{i \in I_1} g(b_i\inner{\hat{{\mathbf w}},{\mathbf x}_i}) ~\le~ \min_{{\mathbf w} : \|{\mathbf w}\| \le 1} \sum_{i \in I_1} g(b_i \inner{{\mathbf w},{\mathbf x}_i}) + \epsilon\,m ~. \end{equation} Clearly, we have \begin{equation*} \begin{split} \sum_{i=1}^m f(b_i\inner{\hat{{\mathbf w}},{\mathbf x}_i}) &\le \sum_{i \in I_1} g(b_i\inner{\hat{{\mathbf w}},{\mathbf x}_i}) + \sum_{i \in I_2} f(b_i\inner{\hat{{\mathbf w}},{\mathbf x}_i}) \\ &\le \sum_{i \in I_1} g(b_i\inner{\hat{{\mathbf w}},{\mathbf x}_i}) + |I_2| \\ &\le \sum_{i \in I_1} g(b_i\inner{{\mathbf w}^\star,{\mathbf x}_i}) + \epsilon\,m + |I_2| \\ &=\sum_{i=1}^m f(b_i\inner{{\mathbf w}^\star,{\mathbf x}_i}) + \epsilon\,m ~. \end{split} \end{equation*} Dividing both sides of the above by $m$, we obtain that $\hat{{\mathbf w}}$ is an $\epsilon$-accurate solution to \eqref{eqn:optERM2}. Therefore, it suffices to show a method that finds a vector $\hat{{\mathbf w}}$ that satisfies \eqref{eqn:regsuff}. To do so, we use a standard generalization bound (based on Rademacher complexity) as follows: \begin{lemma} Let us sample $i_1,\ldots,i_k$ i.i.d.
according to the uniform distribution over $I_1$. Let $\hat{{\mathbf w}}$ be a minimizer of $\sum_{j=1}^k g(b_{i_j} \inner{{\mathbf w},{\mathbf x}_{i_j}})$ over ${\mathbf w}$ in the unit ball. Then, \[ \E\left[ \tfrac{1}{|I_1|}\sum_{i \in I_1} g(b_i\inner{\hat{{\mathbf w}},{\mathbf x}_i}) - \min_{{\mathbf w} : \|{\mathbf w}\| \le 1} \tfrac{1}{|I_1|}\sum_{i \in I_1} g(b_i \inner{{\mathbf w},{\mathbf x}_i}) \right] ~\le~ 2 L / \sqrt{k} ~, \] where the expectation is over the choice of $i_1,\ldots,i_k$. \end{lemma} \begin{proof} Simply note that $g$ is $L$-Lipschitz and then apply a Rademacher generalization bound with the contraction lemma. \end{proof} The above lemma immediately implies that if $k \ge 4L^2/\epsilon^2$, then there exist $i_1,\ldots,i_k$ in $I_1$ such that if $\hat{{\mathbf w}} \in \argmin_{{\mathbf w} : \|{\mathbf w}\| \le 1} \sum_{j=1}^k g(b_{i_j} \inner{{\mathbf w},{\mathbf x}_{i_j}})$ then $\hat{{\mathbf w}}$ satisfies \eqref{eqn:regsuff}, and therefore it is an $\epsilon$-accurate solution of \eqref{eqn:optERM2}. The procedure therefore performs an exhaustive search over all $i_1,\ldots,i_k$ in $[m]$; for each such sequence it finds $\hat{{\mathbf w}} \in \argmin_{{\mathbf w} : \|{\mathbf w}\| \le 1} \sum_{j=1}^k g(b_{i_j} \inner{{\mathbf w},{\mathbf x}_{i_j}})$ (in polynomial time), and finally it outputs the $\hat{{\mathbf w}}$ that minimizes the objective of \eqref{eqn:optERM2}. The total runtime of the procedure is therefore $\mathrm{poly}(m^k)$.
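The search just described can be sketched as follows. This is a toy illustration, not the paper's implementation: the four data points, the value $k=2$, and the projected-subgradient inner solver are our own choices, and any convex solver over the unit ball could replace the latter.

```python
import itertools
import numpy as np

L = 1.0
# toy data in R^2 with a comfortable margin; b_i = 2(y_i - 1/2)
X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.8, -0.2]])
b = np.array([1.0, 1.0, -1.0, -1.0])
m = len(b)

def f_obj(w):
    # the non-convex objective of eqn (optERM2)
    a = b * (X @ w)
    return np.mean(np.minimum(1.0, np.maximum(0.0, 0.5 - L * a)))

def g_minimize(idx, steps=500, eta=0.05):
    # projected subgradient descent for the convex surrogate
    # g(a) = max(0, 1/2 - L a), restricted to the examples in idx
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        a = b[idx] * (X[idx] @ w)
        active = (0.5 - L * a) > 0
        grad = -(L * b[idx] * active) @ X[idx] / len(idx)
        w = w - eta * grad
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w /= norm  # project back onto the unit ball
    return w

k = 2  # k = O(L^2 / eps^2) in the analysis
candidates = [g_minimize(list(idx))
              for idx in itertools.product(range(m), repeat=k)]
w_hat = min(candidates, key=f_obj)  # exhaustive search: poly(m^k) time
```

On this margin-separable instance the search returns a vector whose empirical objective is close to zero, as the analysis predicts.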
Plugging in the value of $k = \lceil 4L^2/\epsilon^2 \rceil$ and the value of $m$ according to the sample complexity bound given in \thmref{thm:Rad} we obtain the total runtime of \[ \mathrm{poly}\left( (L/\epsilon)^{L^2/\epsilon^2} \right) = \mathrm{poly}\left( \exp\left( \tfrac{L^2}{\epsilon^2} \log(L/\epsilon)\right) \right) ~. \] \section{Proof of \lemref{lem:sig}} \label{sec:Sigmoid} In order to approximate $\phi_{\mathrm{sig}}$ with a polynomial, we will use the technique of \emph{Chebyshev approximation} (cf. \citep{Mason03}). One can write any continuous function on $[-1,+1]$ as a Chebyshev expansion $\sum_{n=0}^{\infty}\alpha_n T_{n}(\cdot)$, where each $T_{n}(\cdot)$ is a particular $n$-th degree polynomial denoted as the $n$-th Chebyshev polynomial (of the first kind). These polynomials are defined as $T_{0}(x)=1,T_{1}(x)=x$, and then recursively via $T_{n+1}(x)=2xT_{n}(x)-T_{n-1}(x)$. For any $n$, $|T_{n}(x)|\leq 1$ on $[-1,+1]$. The coefficients in the Chebyshev expansion of $\phi_{\mathrm{sig}}$ are equal to \begin{equation}\label{eq:def_alphan} \alpha_n = \frac{1+\mathbf{1}(n>0)}{\pi}\int_{x=-1}^{1}\frac{\phi_{\mathrm{sig}}(x)T_{n}(x)}{\sqrt{1-x^2}}dx. \end{equation} Truncating the series after some threshold $n=N$ provides an $N$-th degree polynomial which approximates the original function. In order to obtain a bound on $B$, we need to understand the behavior of the coefficients in the Chebyshev approximation. These are determined in turn by the behavior of $\alpha_n$ as well as the coefficients of each Chebyshev polynomial $T_{n}(\cdot)$. The following two lemmas provide the necessary bounds. \begin{lemma}\label{lem:a_bound} For any $n> 1$, $|\alpha_n|$ in the Chebyshev expansion of $\phi_{\mathrm{sig}}$ on $[-1,+1]$ is upper bounded as follows: \[ |\alpha_n|\leq \frac{1/2L+1/\pi}{(1+\pi/4L)^n}.
\] Also, we have $|\alpha_0|\leq 1$, $|\alpha_1|\leq 2$. \end{lemma} \begin{proof} The coefficients $\alpha_n$, $n=1,2,\ldots$ in the Chebyshev series are given explicitly by \begin{equation}\label{eq:alphadef} \alpha_n = \frac{2}{\pi}\int_{x=-1}^{1}\frac{\phi_{\mathrm{sig}}(x)T_{n}(x)}{\sqrt{1-x^2}}dx. \end{equation} For $\alpha_0$, the same equality holds with $2/\pi$ replaced by $1/\pi$, so $\alpha_0$ equals \[ \frac{1}{\pi}\int_{x=-1}^{1}\frac{\phi_{\mathrm{sig}}(x)}{\sqrt{1-x^2}}dx, \] which by definition of $\phi_{\mathrm{sig}}(x)$, is at most $(1/\pi)\int_{x=-1}^{1}\left(\sqrt{1-x^2}\right)^{-1}dx = 1$. As for $\alpha_1$, it equals \[ \frac{2}{\pi}\int_{x=-1}^{1}\frac{\phi_{\mathrm{sig}}(x)x}{\sqrt{1-x^2}}dx, \] whose absolute value is at most $(2/\pi)\int_{x=-1}^{1}\left(\sqrt{1-x^2}\right)^{-1}dx = 2$. To evaluate the integral in \eqref{eq:alphadef} for general $n$ and $L$, we will need some tools from complex analysis. The calculation follows \citep{Elliot64}, to which we refer the reader for justification of the steps and further details\footnote{We note that such calculations also appear in standard textbooks on the subject, but they are usually carried out under asymptotic assumptions and disregarding coefficients which are important for our purposes.}. On the complex plane, the integral in \eqref{eq:alphadef} can be viewed as a line integral over $[-1,+1]$. Using properties of Chebyshev polynomials, this integral can be converted into a more general complex-valued integral over an arbitrary closed curve $C$ on the complex plane which satisfies certain regularity conditions: \begin{equation}\label{eq:alphan_complex} \alpha_n = \frac{1}{\pi i}\int_{C}\frac{\phi_{\mathrm{sig}}(z)}{\sqrt{z^2-1}(z\pm \sqrt{z^2-1})^n}dz, \end{equation} where the sign in $\pm$ is chosen so that $|z\pm \sqrt{z^2-1}|>1$.
In particular, for any parameter $\rho>1$, the set of points $z$ satisfying $|z\pm \sqrt{z^2-1}|=\rho$ forms an ellipse with foci at $z=\pm 1$, which grows larger with $\rho$. Since we are free to choose $C$, we choose it as this ellipse while letting $\rho \rightarrow \infty$. To understand what happens when $\rho \rightarrow \infty$, we need to characterize the singularities of $\phi_{\mathrm{sig}}(z)$, namely the points $z$ where $\phi_{\mathrm{sig}}(z)$ is not well defined. Recalling that $\phi_{\mathrm{sig}}(z) = (1+e^{-4Lz})^{-1}$, we see that the problematic points are $i(\pi + 2\pi k)/4L$ for any $k=0,\pm 1, \pm 2,\ldots$, where the denominator in $\phi_{\mathrm{sig}}(z)$ equals zero. Note that this forms a discrete set of isolated points; in other words, $\phi_{\mathrm{sig}}$ is a \emph{meromorphic function}. The fact that $\phi_{\mathrm{sig}}$ is `well behaved' in this sense allows us to perform the analysis below. The behavior of the function at its singularities is captured by the \emph{residue} of the function at each singularity $c$, which equals $\lim_{z\rightarrow c} (z-c)\phi_{\mathrm{sig}}(z)$ assuming the limit exists (in that case, the singularity is called a \emph{simple pole}; otherwise a higher order limit might be needed). In our case, the residue for the singularity at $i\pi/4L$ equals \[ \lim_{z\rightarrow 0} \frac{z}{1+e^{-i \pi -4Lz}} = \lim_{z\rightarrow 0} \frac{z}{1-e^{-4Lz}} = \lim_{z\rightarrow 0} \frac{1/4L}{e^{-4Lz}} = 1/4L, \] where we used l'H\^{o}pital's rule to calculate the limit. The same residue is obtained at all the other singularities.
For points in the complex plane uniformly bounded away from these singularities, $|\phi_{\mathrm{sig}}(z)|$ is bounded, and therefore it can be shown that the integral in \eqref{eq:alphan_complex} tends to zero as we let $C$ become an arbitrarily large ellipse (not passing too close to any of the singularities) by taking $\rho\rightarrow \infty$. However, as $\rho$ varies smoothly, the ellipse does cross over singularity points, and these contribute to the integral. For meromorphic functions, with a discrete set of isolated singularities, we can simply sum over all the contributions, and it can be shown (see equation $10$ in \citep{Elliot64} and the subsequent discussion) that \[ \alpha_n = -2\sum_{k=-\infty}^{\infty}\frac{r_k}{\sqrt{z_k^2-1}\left(z_k\pm \sqrt{z_k^2-1}\right)^n}, \] where $z_k$ is the singularity point $i(\pi + 2\pi k)/4L$ with corresponding residue $r_k$. Substituting the results for our chosen function, we have \[ \alpha_n = \sum_{k=-\infty}^{\infty}\frac{1/4L} {\sqrt{\left(i(\pi+2\pi k)/4L\right)^2-1}\left(i(\pi+2\pi k)/4L\pm \sqrt{\left(i(\pi+2\pi k)/4L\right)^2-1}\right)^n}. \] A routine simplification leads to the following\footnote{On first look, it might appear that $\alpha_n$ takes imaginary values for even $n$, due to the $i^{n+1}$ factor, despite $\alpha_n$ being equal to a real-valued integral. However, it can be shown that $\alpha_n=0$ for even $n$. This additional analysis can also be used to slightly tighten our final results in terms of constants in the exponent, but it was not included for simplicity.}: \[ \alpha_n = \sum_{k=-\infty}^{\infty}\frac{1/4L} {i^{n+1}\sqrt{\left((\pi+2\pi k)/4L\right)^2+1}\left((\pi+2\pi k)/4L\pm \sqrt{\left((\pi+2\pi k)/4L\right)^2+1}\right)^n}. \] It can be verified that $\pm$ should be chosen according to $\indct{k\geq 0}$.
Therefore, \begin{align*} |\alpha_n| &= \sum_{k=-\infty}^{\infty}\frac{1/4L} {\sqrt{\left((\pi+2\pi k)/4L\right)^2+1}\left(|\pi+2\pi k|/4L+ \sqrt{\left((\pi+2\pi k)/4L\right)^2+1}\right)^n}\\ & \leq \sum_{k=-\infty}^{\infty}\frac{1/4L} {\left(|\pi+2\pi k|/4L+1\right)^n} \leq \frac{1/4L}{(1+\pi/4L)^n}+ 2\sum_{k=1}^{\infty}\frac{1/4L} {\left(1+\pi(1+2k)/4L\right)^n}\\ &\leq \frac{1/4L}{(1+\pi/4L)^n}+\int_{k=0}^{\infty}\frac{1/2L} {\left(1+\pi(1+2k)/4L\right)^n}dk. \end{align*} Solving the integral and simplifying gives us \[ |\alpha_n|\leq \frac{1}{(1+\pi/4L)^n}\left(1/4L+\frac{1+\pi/4L}{\pi(n-1)}\right). \] Since $n\geq 2$, the result in the lemma follows. \end{proof} \begin{lemma}\label{lem:t_bound} For any non-negative integer $n$ and $j=0,1,\ldots,n$, let $t_{n,j}$ be the coefficient of $x^j$ in $T_{n}(x)$. Then $t_{n,j}=0$ for any $j$ with a different parity than $n$, and for any $j>0$, \[ |t_{n,j}| \leq \frac{e^{n+j}}{\sqrt{2\pi}}. \] \end{lemma} \begin{proof} The fact that $t_{n,j}=0$ for $j,n$ with different parities, and that $|t_{n,0}|\leq 1$, is standard. Using an explicit formula from the literature (see \citep{Mason03}, pg. 24), as well as Stirling's approximation, we have that \begin{align*} &|t_{n,j}| ~=~ 2^{n-(n-j)-1}\frac{n}{n-\frac{n-j}{2}}\binom{n-\frac{n-j}{2}}{\frac{n-j}{2}} ~=~ \frac{2^j n}{n+j}\frac{\left(\frac{n+j}{2}\right)!}{\left(\frac{n-j}{2}\right)!j!}\\ &\leq \frac{2^j n}{j!(n+j)}\left(\frac{n+j}{2}\right)^{j} ~=~ \frac{n(n+j)^{j}}{(n+j)j!} ~\leq~ \frac{n(n+j)^{j}}{(n+j)\sqrt{2\pi j}(j/e)^j} ~=~ \frac{ne^j}{(n+j)\sqrt{2\pi j}}\left(1+\frac{n}{j}\right)^{j}\\ &\leq \frac{ne^j}{(n+j)\sqrt{2\pi j}}e^n, \end{align*} from which the lemma follows. \end{proof} We are now in a position to prove a bound on $B$.
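Before doing so, here is a quick numerical sanity check of \lemref{lem:a_bound} (ours, for illustration only; it is not part of the argument). The coefficients of a high-degree Chebyshev interpolant of $\phi_{\mathrm{sig}}$ agree with the true $\alpha_n$ up to an error that decays like the coefficients themselves, so for moderate $n$ they can be compared directly against the claimed bound, here with $L=3$:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

L = 3.0  # the paper's results assume L >= 3

def phi_sig(x):
    return 1.0 / (1.0 + np.exp(-4.0 * L * x))

# degree-120 interpolant at Chebyshev points; its coefficients
# approximate the expansion coefficients alpha_n very accurately
coefs = cheb.chebinterpolate(phi_sig, 120)

def bound(n):
    # the bound of the lemma: (1/2L + 1/pi) / (1 + pi/4L)^n
    return (1.0 / (2.0 * L) + 1.0 / np.pi) / (1.0 + np.pi / (4.0 * L)) ** n

assert abs(coefs[0]) <= 1.0 and abs(coefs[1]) <= 2.0
for n in range(2, 60):
    assert abs(coefs[n]) <= bound(n) + 1e-9
# alpha_n vanishes for even n > 0, as noted in the footnote above
assert np.all(np.abs(coefs[2:60:2]) < 1e-9)
```

The observed decay is in fact slightly faster than the guaranteed rate, consistent with the remark in the footnote that the constants could be tightened.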
As discussed earlier, $\phi_{\mathrm{sig}}(x)$ in the domain $[-1,+1]$ equals the expansion $\sum_{n=0}^{\infty}\alpha_{n}T_{n}(x)$. The error resulting from truncating the Chebyshev expansion at index $N$, for any $x\in [-1,+1]$, equals \[ \left|\phi_{\mathrm{sig}}(x)-\sum_{n=0}^{N}\alpha_{n}T_{n}(x)\right| = \left|\sum_{n=N+1}^{\infty}\alpha_{n}T_{n}(x)\right| \leq \sum_{n=N+1}^{\infty}|\alpha_{n}|, \] where in the last transition we used the fact that $|T_{n}(x)|\leq 1$. Using \lemref{lem:a_bound} and assuming $N>0$, this is at most \[ \sum_{n=N+1}^{\infty}\frac{1/2L+1/\pi}{(1+\pi/4L)^n} = \frac{2+4L/\pi}{\pi(1+\pi/4L)^N}. \] In order to achieve an accuracy of less than $\epsilon$ in the approximation, it therefore suffices to take \begin{equation}\label{eq:N_bound} N= \left\lceil \log_{1+\pi/4L}\left(\frac{2+4L/\pi}{\pi\epsilon}\right) \right\rceil. \end{equation} The series left after truncation is $\sum_{n=0}^{N}\alpha_n T_{n}(x)$, which we can write as $\sum_{j=0}^{N}\beta_j x^j$. Using \lemref{lem:a_bound} and \lemref{lem:t_bound}, the absolute value of the coefficient $\beta_j$ for $j>1$ can be upper bounded by \begin{align*} &\sum_{\substack{j\le n\le N\\ n\equiv j \pmod 2}}|\alpha_n||t_{n,j}| \leq \sum_{\substack{j\le n\le N\\ n\equiv j \pmod 2}} \frac{1/2L+1/\pi}{(1+\pi/4L)^n}\frac{e^{n+j}}{\sqrt{2\pi}}\\ &=\frac{(1/2L+1/\pi) e^j}{\sqrt{2\pi}}\sum_{\substack{j\le n\le N\\ n\equiv j \pmod 2}}\left(\frac{e}{1+\pi/4L}\right)^n\\ &=\frac{(1/2L+1/\pi) e^j}{\sqrt{2\pi}}\left(\frac{e}{1+\pi/4L}\right)^j \sum_{n=0}^{\lfloor \frac{N-j}{2}\rfloor}\left(\frac{e}{1+\pi/4L}\right)^{2n}\\ &\leq \frac{(1/2L+1/\pi) e^j}{\sqrt{2\pi}}\left(\frac{e}{1+\pi/4L}\right)^j \frac{(e/(1+\pi/4L))^{N-j+2}-1}{(e/(1+\pi/4L))^2-1}.
\end{align*} Since we assume $L\geq 3$, we have in particular $e/(1+\pi/4L)>1$, so we can upper bound the expression above by dropping the $1$ in the numerator, to get \begin{align*} \frac{1/2L+1/\pi}{\sqrt{2\pi}((e/(1+\pi/4L))^2-1)}\left(\frac{e}{1+\pi/4L}\right)^{N+2}e^{j}. \end{align*} The cases $\beta_0,\beta_1$ need to be treated separately, due to the different form of the bounds on $\alpha_0,\alpha_1$. Repeating a similar analysis (using the actual values of $t_{n,1},t_{n,0}$ for any $n$), we get \begin{align*} &\beta_0 \leq 1+\frac{1}{\pi}+\frac{2L}{\pi^2}\\ &\beta_1 \leq 2+\frac{3(1+2L/\pi)(4L+\pi)}{2\pi^2}. \end{align*} Now that we have a bound on the $\beta_j$, we can plug it into the bound on $B$, and get \begin{align*} &B = \sum_{j=0}^{N}2^j \beta_j^2 \leq \beta_0^2+2\beta_1^2+\sum_{j=2}^{N}\left(\frac{1/2L+1/\pi}{\sqrt{2\pi}((e/(1+\pi/4L))^2-1)}\right)^2\left(\frac{e}{1+\pi/4L}\right)^{2N+4}(2e^{2})^j\\ &\leq\beta_0^2+2\beta_1^2+\left(\frac{1/2L+1/\pi}{\sqrt{2\pi}((e/(1+\pi/4L))^2-1)}\right)^2\left(\frac{e}{1+\pi/4L}\right)^{2N+4}\frac{(2e^2)^{N+1}}{e^2-1}\\ &=\beta_0^2+2\beta_1^2+\frac{2(1/2L+1/\pi)^2e^6}{(e^2-1)2\pi((e/(1+\pi/4L))^2-1)^2(1+\pi/4L)^4}\left(\frac{\sqrt{2}e^2}{1+\pi/4L}\right)^{2N}. \end{align*} To make the expression more readable, we use the (rather arbitrary) assumption that $L\geq 3$. In that case, by some numerical calculations, it is not difficult to show that we can upper bound the above by \[ 2L^4+0.15\left(\frac{\sqrt{2}e^2}{1+\pi/4L}\right)^{2N} \leq 2L^4+0.15(2e^4)^N. \] Combining this with \eqref{eq:N_bound}, we get that this is upper bounded by \[ 2L^4+0.15(2e^4)^{\log_{1+\pi/4L}\left(\frac{2+4L/\pi}{\pi\epsilon}\right)+1}, \] or at most \[ 2L^4+\exp\left(\frac{\log(2e^4)\log\left(\frac{2+4L/\pi}{\pi\epsilon}\right)}{\log(1+\pi/4L)}+3\right).
\] Using the fact that $\log(1+x)\geq x-x^2/2$ for $x\geq 0$, and the assumption that $L\geq 3$, we can bound the exponent by \[ \frac{\log(2e^4)\log\left(\frac{2+4L/\pi}{\pi\epsilon}\right)}{\frac{\pi}{4L}\left(1-\frac{\pi}{8L}\right)}+3 \leq 7\log(2L/\epsilon)L+3. \] Substituting back, we get the result stated in \lemref{lem:sig}. \section{The $\phi_{\mathrm{erf}}(\cdot)$ Function} \label{sec:erf} In this section, we prove a result analogous to \lemref{lem:sig}, using the $\phi_{\mathrm{erf}}(\cdot)$ transfer function. In a certain sense, it is stronger, because we can show that $\phi_{\mathrm{erf}}(\cdot)$ actually belongs to $P_{B}$ for sufficiently large $B$. However, the resulting bound is worse than that of \lemref{lem:sig}, as it depends on $\exp(L^2)$ rather than $\exp(L)$. On the other hand, the proof is much simpler, which helps to illustrate the technique. The relevant lemma is the following: \begin{lemma}\label{lem:erf} Let $\phi_{\mathrm{erf}}(\cdot)$ be as defined in \eqref{eq:erfpw}, where for simplicity we assume $L\geq 3$. Let \[ B = \frac{1}{4}+2L^2\left(1+3\pi e L^2 e^{4\pi L^2}\right). \] Then $\phi_{\mathrm{erf}}(\cdot)\in P_{B}$. \end{lemma} \begin{proof} By a standard fact, $\phi_{\mathrm{erf}}(\cdot)$ is equal to its infinite Taylor series expansion at any point, and this series equals \[ \phi_{\mathrm{erf}}(a) = \frac{1}{2}+\frac{1}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n (\sqrt{\pi}La)^{2n+1}}{n!(2n+1)}. \] This is an infinite-degree polynomial, and it only remains to determine for which values of $B$ it belongs to $P_{B}$.
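As a quick numerical sanity check (ours, not part of the proof): the series above is the Maclaurin expansion of $\frac{1}{2}+\frac{1}{2}\mathrm{erf}(\sqrt{\pi}La)$, so a truncation of it can be compared against the standard error function:

```python
import math

L = 3.0

def phi_erf(a):
    # closed form matching the series: 1/2 + erf(sqrt(pi) * L * a) / 2
    return 0.5 + 0.5 * math.erf(math.sqrt(math.pi) * L * a)

def phi_series(a, terms=60):
    # truncated Taylor series from the proof
    t = math.sqrt(math.pi) * L * a
    s = sum((-1) ** n * t ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
            for n in range(terms))
    return 0.5 + s / math.sqrt(math.pi)

for a in (-0.3, -0.1, 0.0, 0.05, 0.2):
    assert abs(phi_erf(a) - phi_series(a)) < 1e-9
```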
Plugging the coefficients into the bound on $B$, we get that \begin{align*} &B\leq \frac{1}{4}+\frac{1}{\pi}\sum_{n=0}^{\infty} \frac{(2 \pi L^2)^{2n+1}}{(n!)^2(2n+1)^2 } \leq \frac{1}{4}+\frac{1}{\pi}\sum_{n=0}^{\infty}\frac{(2\pi L^2)^{2n+1}}{(n!)^2}\\ &= \frac{1}{4}+2L^2\left(1+\sum_{n=1}^{\infty}\frac{(2\pi L^2)^{2n}}{(n!)^2}\right) \leq \frac{1}{4}+2L^2\left(1+\sum_{n=1}^{\infty}\frac{(2\pi L^2)^{2n}}{(n/e)^{2n}}\right)\\ &= \frac{1}{4}+2L^2\left(1+\sum_{n=1}^{\infty}\left(\frac{2\pi eL^2}{n}\right)^{2n}\right). \end{align*} Viewing $(2\pi eL^2/n)^{2n}$ as a continuous function of $n$, a simple derivative exercise shows that it is maximized at $n=2\pi L^2$, with value $e^{4\pi L^2}$. Therefore, we can upper bound the series in the expression above as follows: \begin{align*} &\sum_{n=1}^{\infty}\left(\frac{2\pi eL^2}{n}\right)^{2n} = \sum_{n=1}^{\lfloor 2\sqrt{2}\pi e L^2 \rfloor}\left(\frac{2\pi eL^2}{ n}\right)^{2n} + \sum_{n=\lceil 2\sqrt{2}\pi eL^2 \rceil}^{\infty} \left(\frac{2\pi eL^2}{ n}\right)^{2n}\\ &\leq 2\sqrt{2}\pi eL^2e^{4\pi L^2}+\sum_{n=\lceil 2\sqrt{2}\pi eL^2 \rceil}^{\infty}\left(\frac{1}{2}\right)^{n}\leq 3\pi e L^2 e^{4\pi L^2}, \end{align*} where the last transition is by the assumption that $L\geq 3$. Substituting into the bound on $B$, we get the result stated in the lemma. \end{proof} \end{document}
\begin{document} \title[Solutions of the Yamabe equation]{Solutions of the Yamabe equation by Lyapunov-Schmidt reduction} \author{Jorge D\'avila} \address{} \email[Jorge D\'avila]{[email protected]} \address{ CIMAT A.C., A.P. 402, 36000, Guanajuato. Gto., M\'exico.} \author{Isidro H. Munive} \address{Department of Mathematics, University Center of Exact Sciences and Engineering, University of Guadalajara, 44430 Guadalajara, Mexico} \email[Isidro H. Munive]{[email protected]} \begin{abstract} Given any closed Riemannian manifold $(M,g)$ we use the Lyapunov-Schmidt finite-dimensional reduction method and the classical Morse and Lusternik-Schnirelmann theories to prove multiplicity results for positive solutions of a subcritical Yamabe type equation on $(M,g)$. If $(N,h)$ is a closed Riemannian manifold of constant positive scalar curvature we obtain multiplicity results for the Yamabe equation on the Riemannian product $(M\times N , g + \varepsilon^2 h )$, for $\varepsilon >0$ small. For example, if $M$ is a closed Riemann surface of genus ${\bf g}$ and $(N,h) = (S^2 , g_0)$ is the round 2-sphere, we prove that for $\varepsilon >0$ small enough and a generic metric $g$ on $M$, the Yamabe equation on $(M\times S^2 , g + \varepsilon^2 g_0 )$ has at least $2 + 2 {\bf g}$ solutions. \end{abstract} \maketitle \section{\textbf{Introduction}} In \cite{Yamabeart} H. Yamabe considered the following question: Let $(M, g)$ be a closed Riemannian manifold of dimension $n\geq 3$. Is there a metric $h$ which is conformal to $g$ and has constant scalar curvature?
If we express the conformal metric $h$ as $h=u^{\frac{4}{n-2}}g$ for a positive function $u$, the scalar curvature $s_h$ of $h$ is related to the scalar curvature of $g$ by $$ -a_{n}\Delta_{g}u + s_gu = s_hu^{p_n -1},$$ \noindent where $\Delta_{g}$ is the Laplacian operator associated with the metric $g$, $a_{n}=\dfrac{4(n-1)}{(n-2)}$ and $p_n =\dfrac{2n}{n-2}.$ It follows that the metric $h$ has constant scalar curvature $\lambda \in \mathbb{R}$ if and only if $u$ is a positive solution of the {\it Yamabe equation}: \begin{equation} \label{yamabeEquation} -a_{n}\Delta_{g}u + s_gu = \lambda u^{p_n -1}. \end{equation} It is easy to check that Eq. (\ref{yamabeEquation}) is the Euler-Lagrange equation of the {\it Yamabe functional}, $Y_g$, defined by: \begin{equation} \label{FunctionalYamabe} Y_{g}(u)=\dfrac{\int\limits_{M} \Big(a_{n}\vert \nabla u\vert^2 + s_gu^2 \Big)d\mu_{g}}{\Big(\int\limits_{M} u^{p_n} \ d\mu_g \Big )^{\frac{n-2}{n}}}= \dfrac{\int\limits_{M} \Big(a_{n}\vert \nabla u\vert^2 + s_gu^2 \Big)d\mu_{g}}{\|u\|_{ p_n}^{2}}. \end{equation} If $\mathcal{E}$ denotes the normalized Hilbert-Einstein functional \begin{equation*} \mathcal{E}(g)=\dfrac{\int\limits_{M} s_gd\mu_{g}}{Vol(M,g)^{\frac{n-2}{n}}}, \end{equation*} it follows that $Y_{g}(u)=\mathcal{E}(u^{\frac{4}{n-2}}g)$. The Yamabe constant of $g$ is defined as the infimum of the Yamabe functional $Y_{g}$: \begin{equation} Y(M,g)=\inf\limits_{u \in H^{1}(M) - \{ 0 \}} Y_{g}(u) . \end{equation} A minimizer for the Yamabe constant is therefore a solution of (\ref{yamabeEquation}) and, moreover, from elliptic theory it must be strictly positive and smooth. Yamabe presented a proof that a minimizer always exists, but his argument contained an error which was pointed out (and fixed under certain conditions) by N. Trudinger in \cite{t}. Later T. Aubin \cite{a1} and R.
Schoen \cite{sch} completed the proof that for any metric $g$ the infimum of the Yamabe functional is achieved. Therefore there is always at least one (positive) solution to the Yamabe equation (\ref{yamabeEquation}). If $Y(M,g)\leq 0$ the solution is unique (up to homothecies). In the case $Y(M, g) > 0$ uniqueness in general fails. The sphere $S^{n}$ with its round metric $g_{o}$ of curvature one is a first example of multiplicity of solutions. The case of the sphere is very special because it has a non-compact family of conformal transformations, which induces a non-compact family of solutions to the Yamabe equation. By a result of M. Obata \cite{Obata}, each metric of constant scalar curvature which is conformal to the round metric on $S^{n}$ is obtained as the pull-back of the round metric under a conformal diffeomorphism. Therefore, if $g_{o}$ is the round metric on $S^{n}$, every solution to (\ref{yamabeEquation}) is minimizing. But in general, in the positive case there will be non-minimizing solutions. For instance, D. Pollack proved in \cite{Pollack} that every conformal class with positive Yamabe constant can be $C^{0}$-approximated by a conformal class with an arbitrary number of (non-isometric) metrics of constant scalar curvature which are not minimizers. Also, S. Brendle in \cite{Brendle} constructed smooth examples of Riemannian metrics with a non-compact family of non-minimizing solutions of the Yamabe equation. Another important example was considered by R. Schoen in \cite{Schoen} (and also by O. Kobayashi in \cite{Kobayashi}). In \cite{Schoen} Schoen worked with the product metric on $S^{n-1}\times S^{1}({L})$ (the circle of radius $L$). He showed that all solutions to (\ref{yamabeEquation}) are constant along the $(n-1)$-spheres and, therefore, the Yamabe equation reduces to an ordinary differential equation. By a careful analysis of this equation, Schoen proved that there are many non-minimizing solutions if $L$ is large.
Similar to the case of $S^{n-1}\times S^{1}({L})$, particular interest arises in the study of products of the form $(M\times N , g + \delta h)$, where the constant $\delta >0$ goes to 0 (or $\infty$). The Yamabe constants of such Riemannian products were studied in \cite{Akutagawa}. Multiplicity results for the Yamabe equation were obtained in \cite{Bettiol-Piccione, Lima-Piccione-Zedda, Lima-Piccione-Zedda2, Henry, Qinian_YanYan, Petean0} using bifurcation theory, assuming that the scalar curvatures of $g$ and $h$ are constant. In this paper we consider the case of Riemannian products where one of the scalar curvatures is not constant. Let $(M^n,g)$ be any closed Riemannian manifold and $(N^m,h)$ be a Riemannian manifold of constant positive scalar curvature. The function $u:M\rightarrow \mathbb{R}_{>0}$ is a solution of the Yamabe equation in $(W,g_{\varepsilon})=(M\times N,g+\varepsilon^2h)$ if it satisfies \[ -a_{n+m}\Delta_gu+\left(s_g+\varepsilon^{-2}s_h\right)u=u^{p_{m+n}-1}. \] This is of course equivalent to finding solutions of the equation \begin{equation} \label{Yamabe2} -a_{n+m}\Delta_gu+\left(s_g+\varepsilon^{-2}s_h\right)u=\varepsilon^{-2}s_hu^{p_{m+n}-1}. \end{equation} Moreover, we can normalize $h$ and assume that $s_h = a_{m+n}$. Then Eq. (\ref{Yamabe2}) is equivalent to: \begin{equation} \label{yam-nor} -\varepsilon^2 \Delta_gu+\left(\frac{s_g}{a_{m+n}}\varepsilon^{2}+1\right)u=u^{p_{m+n}-1}. \end{equation} We will find solutions of (\ref{yam-nor}) using the Lyapunov-Schmidt reduction technique, which was introduced, for instance, in \cite{Bahari-Coron,Floer, Li}. The same technique was also used by A. M. Micheletti and A. Pistoia in \cite{Micheletti-Pistoia} to study the subcritical equation $-\varepsilon^2 \Delta_{g} u + u = u^{p-1}$ on a Riemannian manifold. Here we will use a similar approach. We now give a brief description of this method and state the results we have obtained.
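For completeness, the normalization step above is the following elementary computation: with $s_h=a_{m+n}$, multiplying Eq. (\ref{Yamabe2}) by $\varepsilon^{2}/a_{m+n}$ gives
\[
-\varepsilon^{2}\Delta_g u+\left(\frac{s_g}{a_{m+n}}\varepsilon^{2}+1\right)u
=\frac{\varepsilon^{2}}{a_{m+n}}\,\varepsilon^{-2}a_{m+n}\,u^{p_{m+n}-1}
=u^{p_{m+n}-1},
\]
which is exactly Eq. (\ref{yam-nor}).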
Let $H_{\varepsilon} (M)$ be the Hilbert space $H^1_g(M)$ equipped with the inner product \[ \langle u,v\rangle_{\varepsilon}\doteq \frac{1}{\varepsilon^n}\left(\varepsilon^2\int_M \langle \nabla_gu , \nabla_gv \rangle \ d\mu_g+\int_Muv \ d\mu_g\right), \] and the induced norm \[ \|u\|^2_{\varepsilon}\doteq \frac{1}{\varepsilon^n}\left(\varepsilon^2\int_M|\nabla_gu|^2d\mu_g+\int_Mu^2d\mu_g\right). \] Consider the functional $J_{\varepsilon} : H_{\varepsilon} (M) \rightarrow \mathbb{R}$ given by $$J_{\varepsilon} (u) = \varepsilon^{-n} \int_M \left( \frac{1}{2} \varepsilon^2 | \nabla u |^2 + \frac{s_g \varepsilon^2 + a_{m+n} }{2a_{m+n}} u^2 -\frac{1}{p_{m+n}} (u^+ )^{p_{m+n}} \right) d\mu_g ,$$ \noindent where $u^+ =\max \{ u , 0 \}$. The critical points of the functional $J_{\varepsilon}$ are the positive solutions of Eq. (\ref{yam-nor}). Let us consider the map $$S_{\varepsilon} \doteq \nabla J_{\varepsilon} : H_{\varepsilon}\rightarrow H_{\varepsilon}.$$ The Yamabe equation (\ref{yam-nor}) is then equivalent to $S_{\varepsilon}(u) =0.$ Note that $p_{m+n}<p_n$. From now on we let $q \in (2 , p_n )$. There exists a unique (up to translation) positive finite-energy solution $U$ of the equation on $\mathbb{R}^n$ \begin{equation} \label{limeq} -\Delta U + U = U^{q-1}. \end{equation} The function $U$ is radial (around some fixed point). We also consider the linear equation \[ -\Delta\psi+\psi=(q-1)U^{q-2}\psi\quad \text{in $\mathbb{R}^n$}. \] It is well known that all solutions of the above equation are the directional derivatives of $U$, i.e., the solutions are of the form \[ \psi^v(z)\doteq \frac{\partial U}{\partial v}(z), \quad \text{$v \in \mathbb{R}^n$}.
\] The function $U_{\varepsilon} (x) = U((1/\varepsilon ) x)$ is a solution of $$-\varepsilon^2 \Delta U_{\varepsilon} + U_{\varepsilon} = U_{\varepsilon}^{q-1}.$$ Similarly, we have that $\psi_{\varepsilon}^v (x) \doteq \psi^v ( (1/\varepsilon ) x)$ solves $$-\varepsilon^2 \Delta \psi_{\varepsilon} + \psi_{\varepsilon} = (q-1)U_{\varepsilon}^{q-2} \psi_{\varepsilon} .$$ Using the exponential map $\exp_x :B(0,r) \rightarrow B_g (x,r)$ and a fixed smooth radial cut-off function $\chi_r$, equal to $1$ on $B(0,r/2)$ and supported in $B(0,r)$, we define \[ U_{\varepsilon,x}(y)\doteq \begin{cases} U_{\varepsilon}(\exp^{-1}_x(y))\chi_r(\exp^{-1}_x(y))& \text{if $y\in B_g(x,r)$},\\ 0&\text{otherwise}. \end{cases} \] We regard $U_{\varepsilon,x}$ as an approximate solution of Eq. (\ref{yam-nor}), and we will try to find an exact solution of the form $u\doteq U_{\varepsilon,x}+\phi$, where $\phi$ is a small perturbation. For that we consider the following subspace of $H_{\varepsilon} (M)$: \[ K_{\varepsilon,x}= \Big\{W^v_{\varepsilon,x} : v\in \mathbb{R}^n \Big\}, \] where \[ W^v_{\varepsilon,x}(y)\doteq \begin{cases} \psi^v_{\varepsilon}(\exp^{-1}_x(y))\chi_r(\exp^{-1}_x(y))& \text{if $y\in B_g(x,r)$},\\ 0&\text{otherwise}. \end{cases} \] \noindent $W^v_{\varepsilon,x}$ is an approximate solution of the linearized equation $S_{\varepsilon}' (U_{\varepsilon,x} ) (w)=0$, and $K_{\varepsilon,x}$ an approximation to the kernel of $S_{\varepsilon}' (U_{\varepsilon,x} )$. We are going to solve our equation modulo $K_{\varepsilon,x}$ for $\phi$ in the orthogonal complement $K^{\perp}_{\varepsilon,x}$ of $K_{\varepsilon,x}$ in $H_{\varepsilon}$. In other words, for $\varepsilon >0$ small and $x\in M$, we will find $\phi_{\varepsilon,x}\in K^{\perp}_{\varepsilon,x}$ such that \[ \Pi^{\perp}_{\varepsilon,x}\Big\{S_{\varepsilon}\left(U_{\ve,x}+\phi_{\varepsilon,x}\right)\Big\} = 0, \] where $\Pi^{\perp}_{\varepsilon,x}:H_{\varepsilon}\rightarrow K^{\perp}_{\varepsilon,x}$ is the orthogonal projection.
Hence, if for some $x_o\in M$ we have \[ \Pi_{\varepsilon,x_o}\Big\{S_{\varepsilon}\left(U_{\varepsilon,x_o}+\phi_{\varepsilon,x_o}\right)\Big\} = 0, \] with $\Pi_{\varepsilon,x}:H_{\varepsilon}\rightarrow K_{\varepsilon,x}$ the orthogonal projection, then $U_{\varepsilon,x_o}+\phi_{\varepsilon,x_o}$ is a solution of Eq. (\ref{yam-nor}). In this way, the problem is reduced to a problem in finite dimensions. This is called the Lyapunov-Schmidt finite-dimensional reduction. The following theorem is the key result of this paper: \begin{theorem} There exists $\varepsilon_o >0$ such that for $\varepsilon \in (0, \varepsilon_o )$ and for any $x\in M$ there exists a unique $\phi_{\varepsilon,x}\in K^{\perp}_{\varepsilon,x}$ such that \[ \Pi^{\perp}_{\varepsilon,x}\Big\{S_{\varepsilon}\left(U_{\ve,x}+\phi_{\varepsilon,x}\right)\Big\} = 0, \] and $\| \phi_{\varepsilon,x} \|_{\varepsilon} = O(\varepsilon^2 )$. The map $x \in M \mapsto J_{\varepsilon} (U_{\varepsilon ,x} + \phi_{\varepsilon,x} )$ is $C^2$, and if $x_o$ is a critical point of this map then $U_{\varepsilon ,x_o } + \phi_{\varepsilon,x_o }$ is a positive solution of equation (\ref{yam-nor}). \end{theorem} Let $F_{\varepsilon} (x) =J_{\varepsilon} (U_{\varepsilon ,x} + \phi_{\varepsilon,x} )$. The critical points of this $C^2$ function on $M$ give positive solutions of Eq. (\ref{yam-nor}). This allows us to apply the classical results about the number of critical points of functions on closed manifolds. The most direct application comes from Lusternik-Schnirelmann theory. Recall that the Lusternik-Schnirelmann category of $M$, $Cat(M)$, is the minimal integer $k$ such that $M$ can be covered by $k$ subsets, $M\subset M_{1}\cup M_{2}\cup \dots \cup M_{k} $, with $M_{i}$ closed and contractible in $M$. The classical result of Lusternik-Schnirelmann theory says that any $C^1$ function on a closed manifold $M$ has at least $Cat(M)$ critical points. For instance, $Cat(S^{n})=2$, while $Cat(T^{n})=n+1$ for the $n$-dimensional torus.
Therefore, from Theorem 1.1 (and the discussion above) we can deduce the following result, which was proved by J. Petean in \cite{Petean1} with a different approach: \begin{theorem} Let $(M,g)$ be any closed Riemannian manifold and $(N, h)$ be a Riemannian manifold of constant positive scalar curvature. There exists $\varepsilon_{o}>0$ such that for $0<\varepsilon < \varepsilon_{o}$ the Yamabe equation on the Riemannian product $(M\times N , g + \varepsilon^2 h )$ has at least $Cat(M)$ solutions which depend only on $M$. \end{theorem} In \cite{Petean1} J. Petean proves the existence of $Cat(M)$ low energy solutions and one higher energy solution. The solutions provided in our theorem have low energy and they are close to the explicit approximate solutions. We also mention that C. Rey and M. Ruiz \cite{Rey_Ruiz} applied the Lyapunov-Schmidt reduction technique to construct {\it multipeak} high-energy solutions under certain conditions. These seem to be the only known results when the scalar curvature of $g$ is not constant. Further applications can be obtained using Morse theory. For that we have to consider the asymptotic expansion of $F_{\varepsilon}$ in terms of $\varepsilon$. Similar expansions were considered when studying solutions of the equation $-\varepsilon^2 \Delta_{g} u + u = u^{p-1}$ on a Riemannian manifold, for instance by A. M. Micheletti and A. Pistoia in \cite{Micheletti-Pistoia}. Positive solutions of this equation are the critical points of the functional $$J^0_{\varepsilon} (u) = \varepsilon^{-n} \int_M \left( \frac{\varepsilon^2}{2} | \nabla u |^2 + \frac{1 }{2} u^2 -\frac{1}{p_{m+n}} (u^+ )^{p_{m+n}} \right) d\mu_g .$$ Then A. M. Micheletti and A.
Pistoia perform the Lyapunov-Schmidt reduction, define the map $F^0_{\varepsilon} (x) =J^0_{\varepsilon} (U_{\varepsilon ,x} + \phi_{\varepsilon,x} )$, and prove in \cite[Lemma 5.1]{Micheletti-Pistoia} the following expansion, which holds $C^1$-uniformly in $x$: $$F^0_{\varepsilon} (x)= \alpha - \dfrac{\varepsilon^{2}}{6}s_{g}(x)\int_{\mathbb{R}^{n}}\Big(\dfrac{U'(|z|)}{|z|}\Big)^{2}z_{1}^{4}dz + o(\varepsilon^{2}) ,$$ \noindent where $U=U_{p_{m+n}}$ is the solution of equation (\ref{limeq}) with $q=p_{m+n}$, $U'$ denotes the derivative of $U$ in the radial direction, and $\alpha = \dfrac{1}{2}\|U\|_{H^{1}(\mathbb{R}^{n})}^{2}-\dfrac{1}{p}\|U\|^{p}_{L^{p}(\mathbb{R}^{n})}$. There is an extra term in the functional $J_{\varepsilon}$ involving $s_g \varepsilon^2$, which affects the expansion of the function $F_{\varepsilon}$. This was considered by C. Rey and M. Ruiz in \cite[Lemma 3.3]{Rey_Ruiz}. They obtain: \begin{eqnarray*} F_{\varepsilon}(x)&=& \alpha - \dfrac{\varepsilon^{2}}{6}s_{g}(x)\int_{\mathbb{R}^{n}}\Big(\dfrac{U'(|z|)}{|z|}\Big)^{2}z_{1}^{4}dz + \dfrac{\varepsilon^{2}}{2a_{m+n}}s_{g}(x)\int_{\mathbb{R}^{n}} U(z)^{2}dz + o(\varepsilon^{2})\\ &=& \alpha +\dfrac{\beta_{m,n}}{2}\varepsilon^{2}s_{g}(x) + o(\varepsilon^{2}), \end{eqnarray*} which holds $C^{1}$-uniformly with respect to $x$ as $\varepsilon$ tends to zero, where $$\beta_{m,n}=\dfrac{1}{a_{m+n}}\int_{\mathbb{R}^{n}} U^{2}(z)dz -\dfrac{1}{3}\int_{\mathbb{R}^{n}}\Big(\dfrac{U'(|z|)}{|z|}\Big)^{2}z_{1}^{4}dz.$$ In \cite{Rey_Ruiz} it is also proved that $$\beta_{m,n}=\dfrac{1}{a_{m+n}}\int_{\mathbb{R}^{n}} U^{2}(z)dz -\dfrac{1}{n(n+2)}\int_{\mathbb{R}^{n}}|\nabla U(z)|^{2}|z|^{2}dz,$$ \noindent and that numerical computations show that $\beta_{m,n} \neq 0$ if $m+n \leq 8$. It is difficult to prove analytically that $\beta_{m,n} \neq 0$ in general, but in Section 6 we will prove it in the case $m=n=2$.
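As a complement, the value of $\beta_{2,2}$ can be estimated numerically from the last formula. The following Python sketch (ours, not from \cite{Rey_Ruiz}; all tolerances, grid sizes and variable names are our own choices) computes the radial ground state of $-\Delta U+U=U^{3}$ in $\mathbb{R}^{2}$, i.e. the case $m=n=2$, $q=p_{4}=4$, $a_{4}=6$, by a standard shooting method, and then evaluates the two integrals appearing in $\beta_{2,2}$.

```python
# Minimal numerical sketch (not from the paper): approximate the radial
# ground state U of -Delta U + U = U^{q-1} on R^n for n = 2, q = 4 by a
# shooting method, then evaluate beta_{2,2} from the formula in the text.
import numpy as np
from scipy.integrate import solve_ivp

n, q = 2, 4                  # dim M and exponent q = p_{m+n} for m = n = 2
a4 = 6.0                     # a_{m+n} = 4(m+n-1)/(m+n-2) = 6 when m+n = 4
trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz


def shoot(a, r_max=25.0):
    """Integrate U'' + (n-1)/r U' - U + U^{q-1} = 0 with U(0) = a, U'(0) = 0."""
    def rhs(r, y):
        U, V = y
        return [V, U - U ** (q - 1) - (n - 1) * V / r]
    # start slightly off r = 0 using the Taylor expansion of the regular solution
    r0 = 1e-3
    y0 = [a + (a - a ** (q - 1)) * r0 ** 2 / (2 * n),
          (a - a ** (q - 1)) * r0 / n]
    crossed = lambda r, y: y[0]   # U hits 0: initial height too large
    crossed.terminal, crossed.direction = True, -1
    turned = lambda r, y: y[1]    # U' returns to 0: initial height too small
    turned.terminal, turned.direction = True, 1
    return solve_ivp(rhs, (r0, r_max), y0, events=[crossed, turned],
                     rtol=1e-10, atol=1e-12, dense_output=True, max_step=0.05)


# bisection for the peak value U(0) of the decaying ground-state solution
lo, hi = 1.0, 4.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid).t_events[0].size:   # overshoot
        hi = mid
    else:                             # undershoot (or decayed up to r_max)
        lo = mid
U0 = 0.5 * (lo + hi)

sol = shoot(U0)
r = np.linspace(sol.t[0], sol.t[-1], 20000)
U, V = sol.sol(r)

# radial integrals over R^2: dz = 2*pi*r*dr
I2 = 2 * np.pi * trapz(U ** 2 * r, r)        # int U^2
I4 = 2 * np.pi * trapz(V ** 2 * r ** 3, r)   # int |grad U|^2 |z|^2
beta22 = I2 / a4 - I4 / (n * (n + 2))
print(U0, I2, beta22)
```

For $m=n=2$ the identity $\int|\nabla U|^{2}=\int U^{2}$ (a consequence of the Pohozaev identity for this exponent) provides a built-in consistency check on the quadrature; the sign of the computed $\beta_{2,2}$ should of course be compared with the proof in Section 6.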
Assuming that $\beta_{m,n} \neq 0$ and that $x_0$ is a nondegenerate critical point of $s_g$, it is easy to prove, using the previous expansion, that for any $\delta >0$, if $\varepsilon$ is small enough, then $F_{\varepsilon}$ has a critical point in $B(x_0 , \delta )$. It was proved by A. M. Micheletti and A. Pistoia in \cite{Micheletti-Pistoia2} that for a generic metric (on any closed manifold) all the critical points of its scalar curvature are nondegenerate, i.e., the scalar curvature is a Morse function on the manifold. We can then apply Morse theory. Let $b_i (M) \doteq \dim (H_i (M, \mathbb{R} ))$ and $b(M)\doteq b_1 (M) + \dots +b_n (M)$. If $f$ is a Morse function on $M$ then $f$ has at least $b(M)$ critical points. Therefore we obtain: \begin{theorem} Let $(N, h)$ be a closed Riemannian manifold of dimension $m$ of constant positive scalar curvature. Let $M$ be a closed manifold of dimension $n$. Assume that $\beta_{m,n} \neq 0$. For a generic Riemannian metric $g$ on $M$ there exists $\varepsilon_{o}>0$ such that if $0<\varepsilon < \varepsilon_{o}$ the Yamabe equation on the Riemannian product $(M\times N , g + \varepsilon^2 h )$ has at least $b(M)$ positive solutions. \end{theorem} Using that $\beta_{2,2} \neq 0$ we have: \begin{theorem} Let $g_0$ be the round metric on the sphere $S^2$. Let $M$ be a closed manifold of dimension $2$. For a generic Riemannian metric $g$ on $M$ there exists $\varepsilon_{o}>0$ such that if $0<\varepsilon < \varepsilon_{o}$ the Yamabe equation on the Riemannian product $(M\times S^2 , g + \varepsilon^2 g_0 )$ has at least $b(M)$ positive solutions. \end{theorem} When the scalar curvature of $g$ is constant, the expansion of $F_{\varepsilon}$ up to order $\varepsilon^2$ is constant, and to obtain critical points one would need to consider higher order expansions. For the equation $-\varepsilon^2 \Delta_{g} u + u = u^{p-1}$ such an expansion was considered for instance by S. Deng, Z. Khemiri and F. Mahmoudi in \cite{DKM}.
In Sections 2 and 3 we will discuss some preliminary results about the Lyapunov-Schmidt reduction technique and prove some delicate estimates involving the approximate solutions. In Section 4 we prove the existence of the appropriate perturbation functions $\phi_{x,\varepsilon}$, see Proposition 4.2. In Section 5 we complete the proof of Theorem 1.1. Finally in Section 6 we will prove that $\beta_{2,2} \neq 0$. \section{\textbf{Preliminaries}} \subsection{The limiting equation and its solution on $\mathbb{R}^n$} Let $2<q<p_n$ (where if $n=2$ then $p_n = \infty $). It is well known that there exists a unique (up to translation) positive finite-energy solution $U$ of the equation $$-\Delta U + U = U^{q-1},\quad \text{ in $\mathbb{R}^n$}. $$ The function $U$ is radial (around some chosen point) and it is exponentially decreasing at infinity (see for instance \cite{Gidas}): $$|U (x) | \leq C e^{-c| x | } \quad\text{and}\quad | \nabla U (x) | \leq C e^{-c| x | }.$$ Consider the functional $E: H^1 (\mathbb{R}^n ) \rightarrow \mathbb{R}$, $$E(f)= \int_{\mathbb{R}^n} (1/2) | \nabla f |^2 + (1/2) f^2 -(1/q) (f^+)^q \ dx ,$$ \noindent where $f^+ (x):=\max \{ f(x), 0\}$. Note that $U$ is a critical point of $E$. For any $\varepsilon >0$ let $$E_{\varepsilon} (f)=\varepsilon^{-n} \int_{\mathbb{R}^n} (\varepsilon^2 /2) |\nabla f |^2 + (1/2) f^2 -(1/q) (f^+)^q \ dx .$$ The function $U_{\varepsilon} (x) \doteq U((1/\varepsilon ) x)$ is a critical point of $E_{\varepsilon}$, i.e. a solution of \begin{equation} \label{limequatione} -\varepsilon^2 \Delta U_{\varepsilon} + U_{\varepsilon} = U_{\varepsilon}^{q-1}. \end{equation} \noindent Now, let us consider the linear equation \begin{equation} \label{lineareq} -\Delta\psi+\psi=(q-1)U^{q-2}\psi\quad \text{in $\mathbb{R}^n$}. \end{equation} It is well known that all solutions of Eq. (\ref{lineareq}) are the directional derivatives of $U$, i.e. 
the solutions are of the form \[ \psi^v(z)\doteq \frac{\partial U}{\partial v}(z), \text{ $v \in$ $\mathbb{R}^n$}. \] In particular, set $\psi^i \doteq \psi^{e_i}$. Since $U$ is radial, we have that the set $\{\psi^1,\ldots,\psi^n\}$ is orthogonal in $H^1(\mathbb{R}^n)$, i.e. \begin{equation} \label{ortoline} \int_{\mathbb{R}^n}\Big\{ \langle \nabla\psi^i , \nabla\psi^j \rangle +\psi^i(z)\psi^j(z)\Big\}dz=0, \quad \text{for $i\neq j$}. \end{equation} For more details see for instance \cite{Gidas, Kwong, Wei-Winter}. \subsection{The setting on a Riemannian manifold} Let $H_{\varepsilon}$ be the Hilbert space $H^1_g(M)$ equipped with the inner product \[ \langle u,v\rangle_{\varepsilon}\doteq \frac{1}{\varepsilon^n}\left(\varepsilon^2\int_M \langle \nabla_gu , \nabla_gv \rangle d\mu_g+\int_Muvd\mu_g\right), \] and the induced norm \[ \|u\|^2_{\varepsilon}\doteq \frac{1}{\varepsilon^n}\left(\varepsilon^2\int_M|\nabla_gu|^2d\mu_g+\int_Mu^2d\mu_g\right). \] Let $L^q_{\varepsilon}$ be the Banach space $L^q_g(M)$ with the norm \[ |u|_{q,\varepsilon}\doteq \left(\frac{1}{\varepsilon^n}\int_M|u|^qd\mu_g\right)^{1/q}, \quad u\in L^q_g(M). \] The standard norm in $L^q_g(M)$ will be denoted from now on by \[ |u|_{q}\doteq \left(\int_M|u|^qd\mu_g\right)^{1/q},\quad u\in L^q_g(M). \] \begin{remark} For $u\in H^1 (\mathbb{R}^n )$ we let $u_{\varepsilon} (x) =u(\varepsilon^{-1} x)$. For any $\varepsilon >0$ we have \begin{equation} \| u_{\varepsilon} \|_{\varepsilon} =\| u \|_{H^1} \end{equation} and \begin{equation} | u_{\varepsilon} |_{q,\varepsilon } = | u |_{q} . \end{equation} \end{remark} \begin{remark} \label{emb} For $q\in (2,p_n )$ if $n\geq 3$ or $q > 2$ if $n=2$, the embedding $i_{\varepsilon} : H_{\varepsilon} \hookrightarrow L^q_{\varepsilon}$ is a continuous map.
Moreover, one can easily check that there exists a constant $c$ independent of $\varepsilon$ such that \[ |i_{\varepsilon} (u) |_{q,\varepsilon}\leq c\|u\|_{\varepsilon},\quad \text{for any $u\in H_{\varepsilon}$}. \] Let $q'=\frac{q}{q-1}$, so that $\frac{1}{q} + \frac{1}{q'} =1$. Then, there exists a continuous operator $i^*_{\varepsilon}:L^{q'}_{\varepsilon}\rightarrow H_{\varepsilon}$, called the adjoint of $i_{\varepsilon}$, such that \[ \langle i^*_{\varepsilon}(v),\varphi\rangle_{\varepsilon}=\langle v,i_{\varepsilon}\left(\varphi\right)\rangle\doteq \frac{1}{\varepsilon^n}\int_Mv\cdot i_{\varepsilon}\left(\varphi\right) d\mu_g,\quad \forall\ v\in L^{q'}_{\varepsilon}\ \text{and}\ \forall\ \varphi \in H_{\varepsilon}. \] In order to see this, we notice that for $v\in L^{q'}_{\varepsilon}$, the map $\mathfrak{F}_v:H_{\varepsilon}\rightarrow\mathbb{R}$, given by \[ \mathfrak{F}_v\left(\varphi\right)=\langle v,i_{\varepsilon}\left(\varphi\right)\rangle,\quad \varphi \in H_{\varepsilon}, \] is a continuous functional, by the continuity of the embedding $i_{\varepsilon}:H_{\varepsilon}\hookrightarrow L^q_{\varepsilon}$. By the Riesz representation theorem, there exists $u_{v}\in H_{\varepsilon}$ such that \begin{equation} \label{adj} \mathfrak{F}_v(\varphi)=\langle u_v,\varphi\rangle_{\varepsilon}, \quad \forall\,\varphi\in H_{\varepsilon}. \end{equation} Therefore, $i^{\ast}_{\varepsilon}(v)=u_v$. Finally, observe that \begin{equation} \label{bas1} \|i^*_{\varepsilon}(v)\|_{\varepsilon}\leq c|v|_{q',\varepsilon}, \quad \text{for any $v\in L^{q'}_{\varepsilon}$}, \end{equation} where the constant $c>0$ does not depend on $\varepsilon>0$.
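Indeed, (\ref{bas1}) follows by a standard duality argument: for any $v\in L^{q'}_{\varepsilon}$,
\[
\|i^*_{\varepsilon}(v)\|_{\varepsilon}
=\sup_{\|\varphi\|_{\varepsilon}=1}\langle i^*_{\varepsilon}(v),\varphi\rangle_{\varepsilon}
=\sup_{\|\varphi\|_{\varepsilon}=1}\frac{1}{\varepsilon^n}\int_M v\, i_{\varepsilon}(\varphi)\, d\mu_g
\leq \sup_{\|\varphi\|_{\varepsilon}=1}|v|_{q',\varepsilon}\,|i_{\varepsilon}(\varphi)|_{q,\varepsilon}
\leq c\,|v|_{q',\varepsilon},
\]
where the first inequality is H\"older's inequality with respect to the measure $\varepsilon^{-n}d\mu_g$ and the second one is the embedding bound above.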
\end{remark} Recall that if $v\in L^{q'}_{\varepsilon}$, then a function $u$ is a solution of \begin{equation} \label{eqv} \begin{cases} -\varepsilon^2\Delta_gu+u=\ v \quad\text{in } M, \\ u\ \in\ H^1_g(M), \end{cases} \end{equation} if and only if $u\in H^1_g(M)$, and it satisfies \[ \frac{1}{\varepsilon^n}\left(\varepsilon^2\int_M \langle \nabla_gu , \nabla_g \varphi \rangle d\mu_g+\int_Mu\varphi \ d\mu_g\right)= \frac{1}{\varepsilon^n}\int_Mv \cdot i_{\varepsilon}\left(\varphi\right) d\mu_g ,\quad\forall\ \varphi \in H_{\varepsilon}. \] If we define $u\doteq i^*_{\varepsilon}(v)$, with $v\in L^{q'}_{\varepsilon}$, then $u$ is a solution of (\ref{eqv}). This implies that if $v \in C^k (M)$ then $u\in C^{k+2} (M)$. Now, let $u\in H_{\varepsilon}$, then \[ \frac{1}{\varepsilon^n}\int_M |(u^+)^{q-1}|^{q'}d\mu_g\leq \frac{1}{\varepsilon^n}\int_M|u|^{q}d\mu_g= |u|^q_{q,\varepsilon}. \] Moreover, by Jensen's inequality \begin{equation} \label{tro} \Big|\frac{s_g(x)}{a_{m+n}}\varepsilon^{2}u\Big|_{q',\varepsilon}\leq c_o \varepsilon^{2+\frac{n}{q}-\frac{n}{q'}}|u|_{q,\varepsilon}, \end{equation} where $c_o >0$ depends only on $M$. It is easy to see that \[ 2+\frac{n}{q}-\frac{n}{q'}>0, \quad \text{since}\quad 2<q<\frac{2n}{n-2}. \] Now, we set $q\doteq p_{m+n}$. It follows that if $u \in H_{\varepsilon}$, then \[ F(u)\doteq (u^+(x))^{p_{m+n} -1} -\frac{s_g(x)}{a_{m+n}}\varepsilon^{2}u(x) \in L^{p_{m+n}'}_{\varepsilon}. \] We define the operator $S_{\varepsilon} : H_{\varepsilon} \rightarrow H_{\varepsilon}$ by \begin{equation} \label{yameq} S_{\varepsilon} (u)= u- i^*_{\varepsilon}\left(F(u)\right). 
\end{equation} By Remark \ref{emb}, $S_{\varepsilon} (u) = \nabla J_{\varepsilon} (u)$, where, as in the Introduction, $$J_{\varepsilon} (u) = \varepsilon^{-n} \int_M \left( \frac{1}{2} \varepsilon^2 | \nabla u |^2 + \frac{s_g \varepsilon^2 + a_{m+n} }{2a_{m+n}} u^2 -\frac{1}{p_{m+n}} (u^+ )^{p_{m+n}} \right) d\mu_g .$$ In particular, $S_{\varepsilon} (u) =0$ if and only if $u$ is a critical point of the functional $J_{\varepsilon}$. Note also that \begin{equation} \label{lineary} S'_{\varepsilon}(u)\varphi=\varphi-\ i^*_{\varepsilon}\left( (p_{m+n} -1) (u^+)^{p_{m+n} -2}\varphi-\frac{s_g(x)}{a_{m+n}}\varepsilon^{2}\varphi\right), \quad\varphi\ \in\ H_{\varepsilon}(M). \end{equation} \section{\textbf{Approximate solutions}} Let $U$ be the solution of Eq. (\ref{limeq}) where $q= p_{m+n}$. For simplicity we will use $p$ to denote $p_{m+n}$. Let \begin{equation} \label{Ux} U_{\varepsilon,x}(y)\doteq \begin{cases} U_{\varepsilon}(\exp^{-1}_x(y))\chi_r(\exp^{-1}_x(y)),& \text{if $y\in B_g(x,r)$},\\ 0,&\text{otherwise}. \end{cases} \end{equation} Since $U_{\varepsilon}$ solves (\ref{limequatione}), we consider $U_{\varepsilon,x}$ as an approximate solution of Eq. (\ref{yam-nor}). In this section we will prove some estimates related to $U_{\varepsilon,x}$. Similar estimates have been obtained before, see for instance \cite{Micheletti-Pistoia}. We sketch the proofs of the estimates for completeness and to point out the necessary adjustments to handle the extra term $\frac{s_g\varepsilon^{2}}{a_{m+n}}u$ in Eq. (\ref{yam-nor}). The function $U_{\varepsilon,x}$ is an approximate solution in the following sense. \begin{lemma} \label{bRs} There exist $\varepsilon_o>0$ and $C>0$ such that for every $x\in M$ and every $\varepsilon \in (0,\varepsilon_o)$ we have \[ \|S_{\varepsilon} (U_{\varepsilon,x} ) \|_{\varepsilon}\leq C\varepsilon^2.
\] \end{lemma} \begin{proof} Observe that \[\|S_{\varepsilon} (U_{\varepsilon,x} ) \|_{\varepsilon} =\sup_{\|v\|_{\varepsilon}=1} \langle S_{\varepsilon} (U_{\varepsilon,x} ) , v \rangle _{\varepsilon} .\] Now $$ \langle S_{\varepsilon} (U_{\varepsilon,x} ) , v \rangle _{\varepsilon} =\dfrac{1}{\varepsilon^{n}}\int_{M}\Big[\varepsilon^{2} \langle \nabla U_{\varepsilon,x} , \nabla v \rangle + \left( 1 + \dfrac{s_g\varepsilon^{2}}{a_{m+n}}\right) U_{\varepsilon,x} v - U_{\varepsilon,x}^{p-1}v\Big]d\mu_g$$ $$=\dfrac{1}{\varepsilon^{n}}\int_{M} \Big(-\varepsilon^{2}\Delta U_{\varepsilon,x} +U_{\varepsilon,x} - U_{\varepsilon,x}^{p-1}\Big)v \ d\mu_{g} +\dfrac{1}{\varepsilon^{n}}\int_{M} \dfrac{s_g\varepsilon^{2}}{a_{m+n} } U_{\varepsilon,x} v \ d\mu_g.$$ \\ On one hand \begin{eqnarray} \label{AA} \dfrac{\varepsilon^{2}}{\varepsilon^{n}}\bigg\lvert \int_{M} \dfrac{s_g}{a_{m+n}} U_{\varepsilon,x} v d\mu_g\bigg\rvert &\leq& C_1 \dfrac{\varepsilon^{2}}{\varepsilon^{n}}\int_{M} \lvert U_{\varepsilon,x} v \rvert d\mu_{g} \\ &\leq& C_1 \varepsilon^{2}|U_{\varepsilon,x}|_{p', \varepsilon}|v|_{p,\varepsilon}\leq C_2 \ \varepsilon^{2}|U_{\varepsilon,x}|_{p', \varepsilon},\nonumber \end{eqnarray} \noindent using H\"older's inequality and Remark 2.2. It follows from the exponential decay of $U$ and a change of variables, as in Remark 2.1, that $\lim\limits_{\varepsilon\rightarrow 0}|U_{\varepsilon,x}|_{p', \varepsilon}^{p'} =|U|_{p'}^{p'} < \infty .$ Therefore there exists $C>0$ such that \[ \bigg\lvert\frac{1}{\varepsilon^{n}} \int_{M}\dfrac{s_g\varepsilon^{2}}{a_{m+n}} U_{\varepsilon,x} v d\mu_g\bigg\rvert \leq C\varepsilon^{2} .
\] On the other hand, we have by the embedding that \begin{eqnarray*} \bigg\lvert \dfrac{1}{\varepsilon^{n}}\int_{M} \Big(-\varepsilon^{2}\Delta U_{\varepsilon,x} +U_{\varepsilon,x} - U_{\varepsilon,x}^{p-1}\Big)vd\mu_{g} \bigg\rvert &\leq& | -\varepsilon^{2}\Delta U_{\varepsilon,x} +U_{\varepsilon,x} - U_{\varepsilon,x}^{p-1}|_{p',\varepsilon}|v|_{p,\varepsilon}\\ &\leq & c | -\varepsilon^{2}\Delta U_{\varepsilon,x} +U_{\varepsilon,x} - U_{\varepsilon,x}^{p-1}|_{p', \varepsilon}. \end{eqnarray*} From the proof of Lemma 3.3 in \cite{Micheletti-Pistoia}, we have that there are a positive constant $C$ and $\varepsilon_o>0$ such that for all $x\in M$ and $\varepsilon\in (0,\varepsilon_o)$ it holds \begin{equation} \Big| -\varepsilon^{2}\Delta U_{\varepsilon,x} +U_{\varepsilon,x} - U_{\varepsilon,x}^{p-1}\Big|_{p', \varepsilon}\leq C \varepsilon^{2} . \end{equation} This completes the proof of the lemma. \end{proof} We consider now the kernel of the linearized equation at the approximate solution, $\{ v \in H^1 (M) : S'_{\varepsilon}(U_{\ve,x}) (v) =0 \} $. In order to have information about the kernel we consider $\varepsilon>0$, $x\in M$, and pick an orthonormal basis of $T_x M$ to identify it with $\mathbb{R}^n$. Using normal coordinates we define the following subspace of $H^1 (M)$: \[ K_{\varepsilon,x}= \Big\{W^v_{\varepsilon,x} : v\in \mathbb{R}^n \Big\}, \] where \begin{equation} \label{Wi} W^v_{\varepsilon,x}(y)\doteq \begin{cases} \psi^v_{\varepsilon}(\exp^{-1}_x(y))\chi_r(\exp^{-1}_x(y))& \text{if $y\in B_g(x,r)$},\\ 0&\text{otherwise}, \end{cases} \end{equation} with $\psi^v_{\varepsilon}(z)=\psi^v(\frac{z}{\varepsilon})$ (as in the Introduction). Note that $W^v_{\varepsilon,x}$ depends on the choice of the orthonormal basis but the space $K_{\varepsilon,x}$ itself does not. We will also set $W^i_{\varepsilon,x} \doteq W^{e_i}_{\varepsilon,x}$.
It is easy to see from (\ref{ortoline}) and Remark 2.1 that \begin{equation}\label{A} \langle W^i_{\varepsilon,x},W^i_{\varepsilon,x}\rangle_{\varepsilon}\rightarrow C, \quad\langle W^i_{\varepsilon,x},W^j_{\varepsilon,x}\rangle_{\varepsilon}\rightarrow 0\quad\text{if $i\neq j$}, \quad\text{as $\varepsilon\rightarrow 0$}, \end{equation} where the constant $C=\int_{\mathbb{R}^{n}}( \langle \nabla \psi^{i} , \nabla \psi^{i} \rangle + \psi^{i}\psi^{i})dx >0$ is independent of $i\in \{1,\ldots,n\}$ and $x\in M$. One can also show the following (details can be found in Lemma 6.1 and Lemma 6.2 in \cite{Micheletti-Pistoia}). \begin{proposition} \label{wii} We have that \begin{equation}\label{end1} \lim_{\varepsilon \rightarrow 0} \ \varepsilon^2 \ \Big\| \frac{\partial}{\partial v} W_{\varepsilon ,x }^v \Big\|_{\varepsilon} \ = 0, \end{equation} and \begin{equation}\label{end2} \lim_{\varepsilon \rightarrow 0} \varepsilon \Big\langle \frac{\partial}{\partial v} (U_{\varepsilon ,x} ) , W_{\varepsilon ,x }^v \Big\rangle_{\varepsilon} = \langle \psi^v , \psi^v \rangle_{H^1} >0. \end{equation} \end{proposition} The function $W^v_{\varepsilon,x}$ is an approximate solution of the linearized equation in the following sense. \begin{lemma} \label{bR} For any $v\in \mathbb{R}^n$ there exist $\varepsilon_o>0$ and $C>0$ such that for every $x\in M$ and all $\varepsilon \in (0,\varepsilon_o)$ we have \[ \|S'_{\varepsilon} (U_{\varepsilon,x} ) (W^v_{\varepsilon ,x} ) \|_{\varepsilon}\leq C\varepsilon^2 \| v \| . \] \end{lemma} \begin{proof} It is enough to consider the case $v =e_i $. We have \[\|S'_{\varepsilon} (U_{\varepsilon,x} ) (W^i_{\varepsilon ,x} ) \|_{\varepsilon} =\sup_{\|w\|_{\varepsilon}=1} \langle S'_{\varepsilon} (U_{\varepsilon,x})(W^i_{\varepsilon ,x}),w \rangle_{\varepsilon}.
\] Now, we have that \begin{eqnarray*} \langle S'_{\varepsilon} (U_{\varepsilon,x})(W^i_{\varepsilon ,x}),w \rangle_{\varepsilon}&=&\dfrac{1}{\varepsilon^{n}}\int_{M}\Big[\varepsilon^{2} \langle \nabla W^i_{\varepsilon ,x} , \nabla w \rangle + \left( 1 + \dfrac{s_g\varepsilon^{2}}{a_{m+n} }\right) W^i_{\varepsilon ,x} \,w - (p-1 )(U_{\varepsilon,x})^{p-2} W^i_{\varepsilon ,x}\, w\Big]d\mu_g\\ &=&\dfrac{1}{\varepsilon^{n}}\int_{M} \Big(-\varepsilon^{2}\Delta W^i_{\varepsilon ,x} +W^i_{\varepsilon ,x} - (p-1)(U_{\varepsilon,x})^{p-2} W^i_{\varepsilon ,x}\Big)w\, d\mu_{g}\\ && +\dfrac{1}{\varepsilon^{n}}\int_{M} \dfrac{s_g\varepsilon^{2}}{a_{m+n}} W^i_{\varepsilon ,x}\, w\, d\mu_g.\\ \end{eqnarray*} Observe that \begin{eqnarray*} \dfrac{\varepsilon^{2}}{\varepsilon^{n}}\bigg\lvert \int_{M} \dfrac{s_g}{a_{m+n}} W^i_{\varepsilon ,x} w d\mu_g\bigg\rvert& \leq & C \dfrac{\varepsilon^{2}}{\varepsilon^{n}}\int_{M} \lvert W^i_{\varepsilon ,x} w \rvert d\mu_{g} \\ & \leq & C\varepsilon^{2}|W^i_{\varepsilon ,x}|_{p',\varepsilon} |w|_{p,\varepsilon} \leq C\varepsilon^{2}|W^i_{\varepsilon ,x}|_{p', \varepsilon}, \end{eqnarray*} by a similar argument as in (\ref{AA}). It follows from the exponential decay of $\psi^i$ and a change of variables that $\lim_{\varepsilon \rightarrow 0} |W^i_{\varepsilon ,x}|_{p',\varepsilon } = | \psi^i |_{p'}$. We conclude that \begin{equation}\label{AAA} \dfrac{\varepsilon^{2}}{\varepsilon^{n}}\bigg\lvert \int_{M} \dfrac{s_g}{a_{m+n}} W^i_{\varepsilon ,x} w \ d\mu_g\bigg\rvert \leq \overline{C} \varepsilon^2 .
\end{equation} Moreover, by Remark \ref{emb} we have \begin{eqnarray*} &&\bigg\lvert \dfrac{1}{\varepsilon^{n}}\int_{M} \Big(-\varepsilon^{2}\Delta W^i_{\varepsilon ,x} +W^i_{\varepsilon ,x} - (p-1)(U_{\varepsilon,x})^{p-2} W^i_{\varepsilon ,x}\Big)wd\mu_{g} \bigg\rvert \\ &\leq& \Big| -\varepsilon^{2}\Delta W^i_{\varepsilon ,x} +W^i_{\varepsilon ,x} - (p-1)(U_{\varepsilon,x})^{p-2} W^i_{\varepsilon ,x}\Big|_{p',\varepsilon}|w|_{p,\varepsilon}\\ &\leq & c\,\Big| -\varepsilon^{2}\Delta W^i_{\varepsilon ,x} +W^i_{\varepsilon ,x} - (p-1)(U_{\varepsilon,x})^{p-2} W^i_{\varepsilon ,x}\Big|_{p',\varepsilon}\|w\|_{\varepsilon} \\ &=& c\,| -\varepsilon^{2}\Delta W^i_{\varepsilon ,x} +W^i_{\varepsilon ,x} - (p-1)(U_{\varepsilon,x})^{p-2} W^i_{\varepsilon ,x}|_{p', \varepsilon}. \end{eqnarray*} It is shown in Lemma 5.2 of \cite{Micheletti-Pistoia} that \begin{equation} \label{wterm} \Big| -\varepsilon^{2}\Delta W^i_{\varepsilon ,x} +W^i_{\varepsilon ,x} - (p-1)(U_{\varepsilon,x})^{p -2}W^i_{\varepsilon ,x}\Big|_{p', \varepsilon} \leq C\varepsilon^2 . \end{equation} \noindent Estimate (\ref{wterm}) together with (\ref{AAA}) finishes the proof of the lemma. \end{proof} We now solve $S_{\varepsilon} (u)=0$ modulo $K_{\varepsilon,x}$. We consider the orthogonal complement $K^{\perp}_{\varepsilon,x}$ of $K_{\varepsilon,x}$ in $H_{\varepsilon}$ and we find $\phi_{\varepsilon,x}\in K^{\perp}_{\varepsilon,x}$ such that \begin{equation} \label{perpeq} \Pi^{\perp}_{\varepsilon,x}\Big\{S_{\varepsilon}\left(U_{\ve,x}+\phi_{\varepsilon,x}\right)\Big\} = 0, \end{equation} \noindent where $\Pi^{\perp}_{\varepsilon,x} : H_{\varepsilon} \rightarrow K^{\perp}_{\varepsilon,x}$ is the orthogonal projection. In the next section we will show that there exists $\varepsilon_o=\varepsilon_o(M)>0$, such that for every $x\in M$ and $\varepsilon\in (0,\varepsilon_o)$, there is a unique $\phi_{\varepsilon,x}\in K^{\perp}_{\varepsilon,x}$ that solves Eq. (\ref{perpeq}).
It will remain then to find points $x\in M$ for which \begin{equation} \label{finiteeq} \Pi_{\varepsilon,x}\Big\{S_{\varepsilon}\left(U_{\ve,x}+\phi_{\varepsilon,x}\right)\Big\} = 0 , \end{equation} where $\Pi_{\varepsilon,x} : H_{\varepsilon} \rightarrow K_{\varepsilon,x}$ is the orthogonal projection. \section{\textbf{The finite-dimensional reduction}} This section is devoted to solving Eq. (\ref{perpeq}). For $x\in M$ and $\varepsilon >0$ we consider the linear operator $L_{\varepsilon,x}:K^{\perp}_{\varepsilon,x}\rightarrow K^{\perp}_{\varepsilon,x}$ defined by \[ L_{\varepsilon,x}(\phi)\doteq \Pi^{\perp}_{\varepsilon,x}\Big\{S'(U_{\varepsilon,x})\phi\Big\}, \] where by (\ref{lineary}) \[ S'(U_{\varepsilon,x})\phi=\phi-i^*_{\varepsilon}\Big[(p-1)(U_{\ve,x})^{p-2} \phi-\varepsilon^{2}\frac{s_g}{a_{m+n}}\phi\Big]. \] In the following proposition we show that the bounded operator $L_{\varepsilon,x}$ satisfies a coercivity estimate for $\varepsilon>0$ small enough, uniformly on $M$. The invertibility of $L_{\varepsilon,x}$ for $\varepsilon>0$ small follows from this result. \begin{proposition} \label{invl} There exist $\varepsilon_o>0$ and $c>0$ such that for any point $x\in M$ and for any $\varepsilon\in (0,\varepsilon_o)$ \[ \|L_{\varepsilon,x}(\phi)\|_{\varepsilon}\geq c\|\phi\|_{\varepsilon} \quad \text{for all $\phi\in K^{\perp}_{\varepsilon,x}$}. \] \end{proposition} \begin{proof} Assume the proposition is not true. Then there exist a sequence of positive numbers $\varepsilon_i $, with $\lim_{i\rightarrow \infty} \varepsilon_i =0$, and sequences $\{x_i\} \subset M$, $\{\phi_i\} \subset K^{\perp}_{\varepsilon_i ,x_i} $ with $\| \phi_i \|_{\varepsilon_i} =1$, such that $\| L_{\varepsilon_i , x_i} (\phi_i ) \|_{\varepsilon_i} \rightarrow 0.$ Moreover, since $M$ is compact we can assume that there exists $x\in M$ such that $x_i \rightarrow x$.
\begin{claim} \label{xi0} Let $\omega_i\doteq L_{\varepsilon_i , x_i} (\phi_i ) $ and set \begin{equation} \label{xieq} \xi_{i}\doteq S'_{\varepsilon_i}(U_{\varepsilon_i,x_i})\phi_i-\omega_i\in K_{\varepsilon_i,x_i}. \end{equation} Then, \[ \|\xi_{i}\|_{\varepsilon_i}\rightarrow 0,\quad \text{as $i\rightarrow \infty$}. \] \end{claim} \begin{proof}[Proof of Claim \ref{xi0}] To prove the claim note that for any $v \in \mathbb{R}^n$, \begin{eqnarray*} \langle \xi_i, W^v_{\varepsilon_i,x_i}\rangle_{\varepsilon_i} =\langle S'_{\varepsilon_i}(U_{\varepsilon_i,x_i})\phi_i, W^v_{\varepsilon_i,x_i}\rangle_{\varepsilon_i}=\langle \phi_i, S'_{\varepsilon_i}(U_{\varepsilon_i,x_i}) (W^v_{\varepsilon_i,x_i} ) \rangle_{\varepsilon_i}. \end{eqnarray*} The claim then follows from Lemma \ref{bR}. \end{proof} Now, we have \begin{equation} \label{uif} u_i\doteq\phi_i-\omega_i-\xi_i = \phi_i- S'_{\varepsilon_i}(U_{\varepsilon_i,x_i})\phi_i = i^*_{\varepsilon_i}\left((p-1)(U_{\varepsilon_i,x_i} )^{p-2} \phi_i -\frac{s_g(x)}{a_{m+n}}\varepsilon_i^{2}\phi_i \right), \end{equation} by (\ref{lineary}). It follows from Claim \ref{xi0} that \begin{equation} \| u_i \|_{\varepsilon_i} \rightarrow 1. \end{equation} From Remark 2.2 and Eq. (\ref{uif}), $u_i$ solves \begin{equation} \label{xieq1} -\varepsilon_i^2\Delta_gu_i +u_i =\ (p-1)(U_{\varepsilon_i,x_i} )^{p-2} \phi_i -\frac{s_g(x)}{a_{m+n}}\varepsilon_i^{2}\phi_i . \end{equation} Let \[ v_i\doteq i^*_{\varepsilon_i}\left((p-1)(U_{\varepsilon_i,x_i} )^{p-2} \phi_i \right) = u_i + i^*_{\varepsilon_i} \left( \frac{s_g(x)}{a_{m+n}}\varepsilon_i^{2}\phi_i \right) . \] Then $v_i$ is supported in $B(x_i ,r)$ and \begin{equation} \label{ui01} \|v_i\|_{\varepsilon_i}\rightarrow 1 , \qquad \| v_i -\phi_i \|_{\varepsilon_i} \rightarrow 0 . \end{equation} Moreover, it solves \begin{equation} \label{xieq2} -\varepsilon_i^2\Delta_gv_i +v_i =\ (p-1)(U_{\varepsilon_i,x_i} )^{p-2} \phi_i .
\end{equation} \begin{claim} \label{uiw0c} Let \[ \widetilde{v}_i(y)\doteq v_i\left(\exp_{x_i}\left(\varepsilon_i y\right)\right), \quad y\in B\left(0,r/\varepsilon_i\right) \subset \mathbb{R}^n . \] Then, \begin{equation} \label{uiw0} \widetilde{v}_i\rightarrow 0\quad \text{weakly in $H^1(\mathbb{R}^n)$ and strongly in $L^q_{loc}(\mathbb{R}^n)$}, \end{equation} for any $q\in(2, p_n )$ if $n\geq 3$ or $q > 2$ if $n=2$. \end{claim} \begin{proof}[Proof of Claim \ref{uiw0c}] Let $\widetilde{v}_{i_{\varepsilon_i}} (y) = \widetilde{v}_i (\varepsilon_i^{-1} y)= v_i \left( \exp_{x_i} ( y ) \right) $. Observe that \begin{equation} \label{uibd} \|\widetilde{v}_i\|_{H^1( \mathbb{R}^n )} = \|\widetilde{v}_{i_{\varepsilon_i}} \|_{H_{\varepsilon_i}( \mathbb{R}^n )} \leq C\|v_i\|_{\varepsilon_i}\leq C, \quad \text{for all $i\in\mathbb{N}$}. \end{equation} Therefore, by taking a subsequence we can assume that there exists $ \widetilde{v} \in H^1 (\mathbb{R}^n )$ such that $\widetilde{v}_i\rightarrow \widetilde{v}$ weakly in $H^1(\mathbb{R}^n )$, and strongly in $L^q_{loc}(\mathbb{R}^n)$ for any $q\in (2, p_n )$ if $n\geq 3$ or $q > 2$ if $n=2$. Now, observe that by Claim \ref{xi0} for $j=1,\ldots,n$, \begin{equation} \label{Wui1} \langle W^j_{\varepsilon_i,x_i}, v_i\rangle_{\varepsilon_i}=\langle W^j_{\varepsilon_i,x_i}, u_i\rangle_{\varepsilon_i} +o(\varepsilon_i )= -\langle W^j_{\varepsilon_i,x_i}, \xi_i\rangle_{\varepsilon_i} +o(\varepsilon_i )\rightarrow0,\quad \text{as $i\rightarrow \infty$}, \end{equation} and (by a change of variables and the exponential decay of $\psi^j$) \begin{equation} \label{Wui2} \langle W^j_{\varepsilon_i,x_i}, v_i\rangle_{\varepsilon_i}\rightarrow \int_{\mathbb{R}^n}\left(\nabla\psi^j\nabla \widetilde{v}+\psi^j\widetilde{v}\right)dy, \quad\text{as $i\rightarrow \infty$}.
\end{equation} We have from (\ref{ui01}) and (\ref{xieq2}) that $\widetilde{v}$ solves \begin{equation} \label{solwu} -\Delta\widetilde{v}+\widetilde{v}=(p-1)(U)^{p-2} \widetilde{v}\quad \text{in $\mathbb{R}^n$}. \end{equation} Therefore, $\widetilde{v}\in \mathrm{span}\{\psi^1,\ldots,\psi^n\}$. From Eqs. (\ref{Wui1}) and (\ref{Wui2}), we have that $\widetilde{v}$ is orthogonal to $\{\psi^1,\ldots,\psi^n\}$, hence $\widetilde{v}\equiv 0$, and the claim follows. \end{proof} Multiplying Eq. (\ref{xieq2}) by $v_i\in H_{\varepsilon_i}$, we obtain from (\ref{ui01}) \begin{eqnarray} \label{ui} \|v_i\|^2_{\varepsilon_i}&=&\frac{1}{\varepsilon^n_i}\int_M\Big\{(p-1)(U_{\varepsilon_i,x_i})^{p-2} \Big\} v_i \, \phi_i \, d\mu_g\rightarrow 1. \end{eqnarray} But, by Claim \ref{uiw0c} we have \begin{eqnarray} \frac{1}{\varepsilon^n_i}\int_M\Big\{(p-1)(U_{\varepsilon_i,x_i})^{p-2} \Big\} v_i \, \phi_i \, d\mu_g\rightarrow \int_{\mathbb{R}^n} (p-1)(U)^{p-2} \widetilde{v}^2 =0. \label{wxu} \end{eqnarray} This is a contradiction, thus proving the proposition. \end{proof} Now, we write for $ \phi \in K^{\perp}_{\varepsilon ,x} $, \begin{equation} \label{seq} S_{\varepsilon}(U_{\ve,x}+\phi)= S_{\varepsilon}(U_{\ve,x})+S'_{\varepsilon}(U_{\ve,x})\phi+\widetilde{N}_{\varepsilon,x}(\phi), \end{equation} where \begin{eqnarray*} \widetilde{N}_{\varepsilon,x}(\phi)&=&S_{\varepsilon}(U_{\ve,x}+\phi)- S_{\varepsilon}(U_{\ve,x})-S'_{\varepsilon}(U_{\ve,x})\phi \\ &=& -i^*_{\varepsilon}\left( ((U_{\ve,x}+\phi)^+)^{p-1}-(U_{\ve,x})^{p-1}-(p-1)(U_{\ve,x})^{p-2}\phi\right).
\end{eqnarray*} Applying $\Pi^{\perp}_{\varepsilon,x}$ to (\ref{seq}) we see that (\ref{perpeq}) is equivalent to \begin{equation} \label{eql} L_{\varepsilon,x}(\phi)=N_{\varepsilon,x}(\phi) - \Pi^{\perp}_{\varepsilon,x} (S_{\varepsilon} (U_{\varepsilon,x} )), \end{equation} where \[ N_{\varepsilon,x}(\phi)\doteq - \Pi^{\perp}_{\varepsilon,x} (\widetilde{N}_{\varepsilon,x}(\phi) ) = \Pi^{\perp}_{\varepsilon,x}\Big\{i^*_{\varepsilon}\Big[((U_{\ve,x}+\phi)^+ )^{p-1}-(U_{\ve,x})^{p-1}-(p-1)(U_{\ve,x})^{p-2}\phi\Big]\Big\}. \] We are now ready to prove the main result of this section. \begin{proposition}\label{SecPropoPrinc} There exist $\varepsilon_o>0$ and $A>0$ such that for any $x\in M$ and for any $\varepsilon\in (0,\varepsilon_o)$ there exists a unique $\phi_{\varepsilon,x}=\phi(\varepsilon,x) \in K^{\perp}_{\varepsilon ,x} $ that solves Eq. (\ref{perpeq}) with $\|\phi_{\varepsilon,x}\|_{\varepsilon} \leq A$. Moreover, there exists a constant $c_o >0$ independent of $\varepsilon$ such that \[ \|\phi_{\varepsilon,x}\|_{\varepsilon} \leq c_o \varepsilon^2, \] and $x\mapsto \phi_{\varepsilon,x}$ is a $C^2$ map. \end{proposition} \begin{proof} In order to solve Eq. (\ref{perpeq}), or equivalently Eq. (\ref{eql}), we have to find a fixed point of the operator $T_{\varepsilon,x}:K^{\perp}_{\varepsilon,x}\rightarrow K^{\perp}_{\varepsilon,x}$ given by \[ T_{\varepsilon,x}(\phi)\doteq L^{-1}_{\varepsilon,x}\left(N_{\varepsilon,x}(\phi)- \Pi^{\perp}_{\varepsilon,x} (S_{\varepsilon} (U_{\varepsilon,x} )) \right). \] Now, from Proposition \ref{invl} we have that there is a constant $C>0$ such that \begin{equation} \label{T} \|T_{\varepsilon,x}(\phi)\|_{\varepsilon}\leq C\left(\|N_{\varepsilon,x}(\phi)\|_{\varepsilon}+\| \Pi^{\perp}_{\varepsilon,x} (S_{\varepsilon} (U_{\varepsilon,x} )) \|_{\varepsilon} \right), \quad \forall \phi\in K^{\perp}_{\varepsilon,x}.
\end{equation} \begin{claim} \label{1} For any $b\in (0,1)$ there exist constants $a,\varepsilon_o >0$ such that for any $\varepsilon \in (0, \varepsilon_o )$, if $\phi_1 , \phi_2 \in K^{\perp}_{\varepsilon,x}$ with $\| \phi_1 \|_{\varepsilon}, \| \phi_2 \|_{\varepsilon} <a$, then $\| N_{\varepsilon,x} (\phi_1 ) - N_{\varepsilon,x} (\phi_2 ) \|_{\varepsilon} \leq b \| \phi_1 - \phi_2 \|_{\varepsilon}$. \end{claim} \begin{proof}[Proof of Claim \ref{1}] We have \[ N_{\varepsilon, x}(\phi_1)-N_{\varepsilon, x}(\phi_2)=\Pi^{\perp}\{S_{\varepsilon}(U_{\varepsilon,x}+ \phi_2)-S_{\varepsilon}(U_{\varepsilon,x}+ \phi_1)-S^{'}_{\varepsilon}(U_{\varepsilon,x})(\phi_2 - \phi_1)\}. \] Therefore, \[ \| N_{\varepsilon, x}(\phi_1)-N_{\varepsilon, x}(\phi_2)\|_{\varepsilon}\leq \|S_{\varepsilon}(U_{\varepsilon,x}+ \phi_2)-S_{\varepsilon}(U_{\varepsilon,x}+ \phi_1)-S^{'}_{\varepsilon}(U_{\varepsilon,x})(\phi_2 - \phi_1) \|_{\varepsilon} \] \[ =\| i^*_{\varepsilon} \left( ((U_{\varepsilon,x}+ \phi_1)^+ )^{p-1} -((U_{\varepsilon,x} + \phi_2 )^+ )^{p-1} + (p -1) U_{\varepsilon,x}^{p -2} (\phi_2 - \phi_1) \right) \|_{\varepsilon} \] \[ \leq c\Big|((U_{\varepsilon,x}+ \phi_1)^+ )^{p-1} -((U_{\varepsilon,x}+ \phi_2)^+ )^{p-1} - (p-1 )(U_{\varepsilon,x})^{p-2} (\phi_1-\phi_2) \Big|_{p',\varepsilon}. \] By the Mean Value Theorem, there is a $\lambda \in [0,1]$ such that \begin{eqnarray} \label{mvtl} \big|((U_{\varepsilon,x}+ \phi_1)^+ )^{p-1} -((U_{\varepsilon,x}+ \phi_2)^+ )^{p-1}\big|_{p',\varepsilon} =\\ \nonumber \big| (p-1 )(U_{\varepsilon,x} + \phi_1 + \lambda (\phi_2 -\phi_1 ))^{p-2} (\phi_2-\phi_1)\big|_{p',\varepsilon}. \end{eqnarray} Then, we have from Eq.
\eqref{mvtl} that \begin{eqnarray*} &&\vert ((U_{\varepsilon,x}+ \phi_1)^+ )^{p-1} -((U_{\varepsilon,x}+ \phi_2)^+ )^{p-1} - (p-1 )(U_{\varepsilon,x})^{p-2} (\phi_1-\phi_2) \vert_{p',\varepsilon} \\ &=& \vert [ (p-1 )(U_{\varepsilon,x} +\phi_1 + \lambda (\phi_2 -\phi_1 ))^{p-2} - (p-1 )(U_{\varepsilon,x})^{p-2} ] (\phi_1-\phi_2) \vert_{p',\varepsilon} \\ &\leq & c \vert (U_{\varepsilon,x} +\phi_1 + \lambda (\phi_2 -\phi_1 ))^{p-2} - (U_{\varepsilon,x})^{p-2} \vert_{\frac{p}{p-2},\varepsilon} \vert (\phi_2 - \phi_1) \vert_{p,\varepsilon} \\ &\leq& c \vert (U_{\varepsilon,x} + \phi_1 + \lambda (\phi_2 -\phi_1 ))^{p-2} - (U_{\varepsilon,x})^{p-2} \vert_{\frac{p}{p-2},\varepsilon} \| (\phi_2 - \phi_1) \|_{\varepsilon }, \end{eqnarray*} by H\"{o}lder's inequality and Remark 2.2. In order to complete the estimate we need the following elementary observation, which appeared in \cite[Lemma 2.1]{Li}. Let $a >0$ and $b\in \mathbb{R}$; then \begin{equation} \label{yy} \big\lvert |a+b|^\beta -a^\beta\big\rvert \leq\begin{cases} C(\beta)\min\{|b|^\beta , a^{\beta-1}|b|\}& \text{ if } 0<\beta<1,\\ C(\beta)(|a|^{\beta-1}|b| +|b|^\beta) & \text{ if } \beta\geq 1. \end{cases} \end{equation} Applying (\ref{yy}), we see that for all $v\in H_\varepsilon$ \begin{equation} \label{LYanYan} | (U_{\varepsilon,x}+ v)^{p -2} -(U_{\varepsilon,x})^{p-2} | \leq\begin{cases} C(p)|v|^{p-2}& \text{ if } 2<p<3 ,\\ C(p)\Big ( |U_{\varepsilon,x}|^{p-3}|v|+|v|^{p-2}\Big ) &\text{ if } p \geq 3. \end{cases} \end{equation} Then, it follows that \begin{equation} \label{LYanYan2} | (U_{\varepsilon,x}+ v)^{p-2} -(U_{\varepsilon,x})^{p-2} |_{\frac{p}{p-2},\varepsilon} \leq\begin{cases} C(p)|v|_{p,\varepsilon}^{p-2}& \text{ if } 2<p<3,\\ C(p)\Big ( |U_{\varepsilon,x}|^{p-3}_{p,\varepsilon}|v|_{p,\varepsilon}+|v|^{p-2}_{p,\varepsilon}\Big ) &\text{ if } p \geq 3.
\end{cases} \end{equation} Using (\ref{LYanYan2}) and Remark 2.2 we can see that if $a$ is small enough then \[ c \Big| (U_{\varepsilon,x} + \phi_1 + \lambda (\phi_2 -\phi_1 ))^{p-2} - (U_{\varepsilon,x})^{p-2} \Big|_{\frac{p}{p-2},\varepsilon} <b, \] proving the claim. \end{proof} In a similar fashion we can prove the following claim. \begin{claim} \label{2} For any $b \in (0,1)$ there exist constants $a>0$ and $\varepsilon_o >0$ such that for any $\varepsilon \in (0,\varepsilon_o )$, if $\| \phi \|_{\varepsilon} <a$ then $\| N_{\varepsilon,x} (\phi ) \|_{\varepsilon} \leq b \| \phi \|_{\varepsilon}$. \end{claim} \begin{proof}[Proof of Claim \ref{2}] We have \[ \| N_{\varepsilon,x}(\phi)\|_{\varepsilon}= \|\Pi^{\perp}\{S_{\varepsilon}(U_{\varepsilon,x}+ \phi)-S_{\varepsilon}(U_{\varepsilon,x})-S^{'}_{\varepsilon}(U_{\varepsilon,x})(\phi)\}\|_{\varepsilon}\] \[=\| i_{\varepsilon}^{*}((U_{\varepsilon,x})^{p-1}-((U_{\varepsilon,x} + \phi)^+ )^{p-1} + (p-1)(U_{\varepsilon,x})^{p-2} \phi)\|_{\varepsilon}, \] and we can apply the Mean Value Theorem and Remark 4.3, as in the proof of Claim \ref{1}, to prove the claim. \end{proof} Now we prove the first statements of the proposition using the claims. Let $C$ be the constant in (\ref{T}) and take $b=\frac{1}{2C}$. Let $a$ be the constant given by Claim \ref{1} and Claim \ref{2} (the minimum of the two, to be precise). From Lemma \ref{bRs} and Claim \ref{2} there exists $\varepsilon_o >0$ such that if $\varepsilon \in (0, \varepsilon_o )$ then $T_{\varepsilon,x}$ sends the ball of radius $a$ in $K^{\perp}_{\varepsilon,x}$ to itself. If $\| \phi_1 \|_{\varepsilon}, \| \phi_2 \|_{\varepsilon} <a$, we have that \[ \|T_{\varepsilon,x}(\phi_1)-T_{\varepsilon,x}(\phi_2)\|_{\varepsilon}\leq C\|N_{\varepsilon,x}(\phi_1)-N_{\varepsilon,x}(\phi_2)\|_{\varepsilon} \leq \frac{1}{2} \| \phi_1 - \phi_2 \|_{\varepsilon}. \] We see then that $T_{\varepsilon,x}$ is a contraction in the ball of radius $a$.
It follows that it has a unique fixed point there. The fixed point is obtained, for instance, as the limit of the sequence $a_k = T_{\varepsilon,x}^k (0)$. Note that $\| a_1 \|_{\varepsilon} \leq C\varepsilon^2 $ by Lemma \ref{bRs}, and then from Claim \ref{1} we have that for all $k$, $\| a_k \|_{\varepsilon} \leq 2C\varepsilon^2$. It remains to prove that the map $x\mapsto \phi_{\varepsilon,x}$ is $C^2$. In order to show this, we apply the Implicit Function Theorem to the $C^2$ function $G:M\times H_{\varepsilon}\rightarrow H_{\varepsilon}$ defined by \[ G(x,u)=\Pi^{\perp}_{\varepsilon,x}\Big\{S_\varepsilon (U_{\varepsilon,x}+\Pi^{\perp}_{\varepsilon,x}u)\Big\}+\Pi_{\varepsilon,x}u. \] Observe that $G(x,\phi_{\varepsilon,x})=0$, and that the derivative $\frac{\partial G}{\partial u}(x,\phi_{\varepsilon,x}):H_{\varepsilon}\rightarrow H_{\varepsilon}$ is given by \[ \frac{\partial G}{\partial u}(x,\phi_{\varepsilon,x})(u)=\Pi^{\perp}_{\varepsilon,x}\Big\{S_{\varepsilon}^{'} (U_{\varepsilon,x}+\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}u\Big\}+\Pi_{\varepsilon,x}u. \] The proof will be complete once we establish the next claim. \begin{claim} \label{3} For $\varepsilon>0$ small enough, there is $C>0$ such that \[ \Big\| \frac{\partial G}{\partial u}(x,\phi_{\varepsilon,x})(u)\Big\|_{\varepsilon}\geq C\|u\|_{\varepsilon}, \] for every $x\in M$.
\end{claim} \begin{proof} [Proof of Claim \ref{3}] Since the two components below are orthogonal, we have, with $c=\frac{1}{\sqrt{2}}$, that $$\Big\|\frac{\partial G}{\partial u}(x,\phi_{\varepsilon,x})(u)\Big\|_{\varepsilon} \geq c\Big\|\Pi^{\perp}_{\varepsilon,x}\Big\{S'_\varepsilon(U_{\varepsilon,x} +\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u)\Big\}\Big\|_\varepsilon+ c\Big\|\Pi_{\varepsilon,x}(u)\Big\|_\varepsilon$$ $$= c\Big\|\Pi^{\perp}_{\varepsilon,x}\Big\{S'_\varepsilon(U_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) + S'_\varepsilon(U_{\varepsilon,x} +\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) - S'_\varepsilon(U_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u)\Big\}\Big\|_\varepsilon+ c\Big\|\Pi_{\varepsilon,x}(u)\Big\|_\varepsilon$$ $$\geq c\Big\|\Pi_{\varepsilon,x}(u)\Big\|_\varepsilon + c\Big\|L_{\varepsilon,x}(\Pi^{\perp}_{\varepsilon,x}(u))\Big\|_\varepsilon - c\Big\|\Pi^{\perp}_{\varepsilon,x}\Big\{S'_\varepsilon(U_{\varepsilon,x} +\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) - S'_\varepsilon(U_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u)\Big\}\Big\|_\varepsilon .$$ It follows from Proposition \ref{invl} that, for another constant $c>0$, $\Big\|L_{\varepsilon,x}(\Pi^{\perp}_{\varepsilon,x}(u))\Big\|_\varepsilon \geq c \Big\| \Pi^{\perp}_{\varepsilon,x}(u) \Big\|_{\varepsilon} $.
Then we have that for some constant $C>0$, $$ c\Big\|\Pi_{\varepsilon,x}(u)\Big\|_\varepsilon + c\Big\|L_{\varepsilon,x}(\Pi^{\perp}_{\varepsilon,x}(u))\Big\|_\varepsilon \geq C \|u\|_{\varepsilon}.$$ Therefore, it only remains to prove that $$\lim_{\varepsilon \rightarrow 0} \Big\|\Pi^{\perp}_{\varepsilon,x}\Big\{S'_\varepsilon(U_{\varepsilon,x} +\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) - S'_\varepsilon(U_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u)\Big\}\Big\|_\varepsilon =0 .$$ But, $$S'_\varepsilon(U_{\varepsilon,x} +\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) - S'_\varepsilon(U_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) = (p-1) i_{\varepsilon}^* \left( \left( (U_{\varepsilon,x} +\phi_{\varepsilon,x})^{p-2} - (U_{\varepsilon,x})^{p-2} \right) \Pi^{\perp}_{\varepsilon,x}(u) \right) .$$ Hence, as in the proof of Claim \ref{1}, $$\Big\| S'_\varepsilon(U_{\varepsilon,x} +\phi_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) - S'_\varepsilon(U_{\varepsilon,x})\Pi^{\perp}_{\varepsilon,x}(u) \Big\|_\varepsilon \leq c \, | ( (U_{\varepsilon,x} +\phi_{\varepsilon,x})^{p-2} - (U_{\varepsilon,x})^{p-2}) \Pi^{\perp}_{\varepsilon,x}(u) |_{p',\varepsilon}$$ $$\leq c \, | (U_{\varepsilon,x} +\phi_{\varepsilon,x})^{p-2} - (U_{\varepsilon,x})^{p-2} |_{\frac{p}{p-2},\varepsilon} \, | \Pi^{\perp}_{\varepsilon,x}(u) |_{p,\varepsilon} $$ $$\leq c \, | (U_{\varepsilon,x} +\phi_{\varepsilon,x})^{p-2} - (U_{\varepsilon,x})^{p-2} |_{\frac{p}{p-2},\varepsilon} \, \| u \|_{\varepsilon} .$$ Arguing as at the end of the proof of Claim \ref{1}, we can see that $$\lim_{\varepsilon \rightarrow 0} | (U_{\varepsilon,x} +\phi_{\varepsilon,x})^{p-2} - (U_{\varepsilon,x})^{p-2} |_{\frac{p}{p-2},\varepsilon} =0,$$ thus completing the proof of the claim. \end{proof} This finishes the proof of the proposition.
\end{proof} \section{\textbf{Proof of Theorem 1.1}} Recall that the critical points of the functional $J_{\varepsilon} : H^1 (M) \rightarrow \mathbb{R}$ given by $$J_{\varepsilon} (u) = \varepsilon^{-n} \int_M \left( \frac{1}{2} \varepsilon^2 \| \nabla u \|^2 + \frac{s_g \varepsilon^2 + a_{m+n} }{2a_{m+n}} u^2 -\frac{1}{p} (u^+ )^{p} \right) d\mu_g $$ \noindent are the positive solutions of Eq. (\ref{yam-nor}). Proposition \ref{SecPropoPrinc} tells us that there exists $\varepsilon_o>0$ such that for $\varepsilon \in (0,\varepsilon_o )$ and $x\in M$ there exists a uniquely defined $ \phi_{\varepsilon , x} \in K^{\perp}_{\varepsilon ,x} $ such that $U_{\varepsilon ,x} + \phi_{\varepsilon , x}$ solves Eq. (\ref{perpeq}). In order to finish the proof of Theorem 1.1 we have to establish the following result. \begin{proposition}\label{PrincipalProp} There exists $\varepsilon_o >0$ such that if $\varepsilon \in (0, \varepsilon_o )$ and $x_o\in M$ is a critical point of $F_{\varepsilon} : M \rightarrow \mathbb{R} $, where \begin{equation} F_{\varepsilon} (x) \doteq J_{\varepsilon} (U_{\varepsilon ,x} + \phi_{\varepsilon , x} ) , \end{equation} then $U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o } $ is a positive solution of Eq. (\ref{yam-nor}). \end{proposition} \begin{proof} Let $x_o\in M$ be a critical point of $F_{\varepsilon}$, with $\varepsilon>0$ small. We need to show that for each $\varphi \in H_{\varepsilon} (M)$ one has that $$\langle S_{\varepsilon} ( U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o }) , \varphi \rangle_{\varepsilon} =0 .$$ If $\varphi \in K^{\perp}_{\varepsilon ,x_o }$ then $$\langle S_{\varepsilon} ( U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o }) , \varphi \rangle_{\varepsilon} = \langle \Pi^{\perp}_{\varepsilon,x} (S_{\varepsilon} ( U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o }) ),\varphi \rangle_{\varepsilon} =0 ,$$ since $U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o}$ solves Eq. (\ref{perpeq}).
Then it is enough to show that $\langle S_{\varepsilon} ( U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o }) , \varphi \rangle_{\varepsilon} =0$ if $\varphi \in K_{\varepsilon, x_o }$. On the other hand, we know that $\langle S_{\varepsilon} ( U_{\varepsilon ,x_o } + \phi_{\varepsilon , x_o }) , \varphi \rangle_{\varepsilon} =0$ if $\varphi$ is tangent to the map $x \mapsto V(x)= U_{\varepsilon ,x} + \phi_{\varepsilon , x } $ at $x_o$. Since $M$ and $K_{\varepsilon,x_o}$ have the same dimension, it is enough to see that the projection $\Pi_{{\varepsilon, x_o}} \circ D_{x_o} V : T_{x_o} M \rightarrow K_{\varepsilon , x_o}$ is injective. Then to finish the proof it is enough to show that, fixing geodesic coordinates centered at $x_o$, for any $v \in \mathbb{R}^n $ \begin{equation}\label{AAAA} \Big\langle \frac{\partial}{\partial v} (U_{\varepsilon ,x} + \phi_{\varepsilon , x} ) (x_o ), W_{\varepsilon ,x_o }^v \Big\rangle_{\varepsilon} \neq 0 . \end{equation} Note that $\langle \phi_{\varepsilon , x} , W_{\varepsilon ,x }^v \rangle_{\varepsilon} =0$. Then, differentiating, \[ \Big\langle \frac{\partial}{\partial v} ( \phi_{\varepsilon , x} ) , W_{\varepsilon ,x_o }^v \Big\rangle_{\varepsilon} = - \Big\langle \phi_{\varepsilon , x} , \frac{\partial}{\partial v} W_{\varepsilon ,x_o }^v \Big\rangle_{\varepsilon} .
\] As we pointed out in (\ref{end1}), we have $$\lim_{\varepsilon \rightarrow 0} \ \varepsilon^2 \ \Big\| \frac{\partial}{\partial v} W_{\varepsilon ,x_o }^v \Big\|_{\varepsilon} =0 .$$ Then, it follows from the Cauchy--Schwarz inequality and Proposition \ref{SecPropoPrinc} that $$\lim_{\varepsilon \rightarrow 0} \Big\langle \frac{\partial}{\partial v} ( \phi_{\varepsilon , x} ) , W_{\varepsilon ,x_o }^v \Big\rangle_{\varepsilon} =0.$$ From (\ref{end2}), \[ \lim_{\varepsilon \rightarrow 0} \varepsilon \Big\langle \frac{\partial}{\partial v} (U_{\varepsilon ,x} ) , W_{\varepsilon ,x_o }^v \Big\rangle_{\varepsilon} = \langle \psi^v , \psi^v \rangle >0. \] Then, for $\varepsilon >0$ small enough (\ref{AAAA}) holds, and the proposition is proved. \end{proof} \section{Analytic proof that $\beta_{2,2} \neq 0$} In \cite{Rey_Ruiz} C. Rey and M. Ruiz numerically checked that $\beta_{m,n} <0$ if $n+m \leq 9.$ In this section we prove that $\beta_{m,n}$ is not equal to zero for values of $m$ and $n$ such that $m+n=4$. Fix $m ,n$ and let \[ \beta = a_{m+n} \, \beta_{m,n} = \int_{\mathbb{R}^n}U^2\,dz-\frac{a_{m+n}}{n(n+2)}\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz. \] Recall that $p=p_{m+n}$. \begin{theorem} If $m$ and $n$ are such that $m+n=4$, then $\beta < 0.$ If $n\neq 4$ and $m+n >4$, we have \[\beta=\frac{6-a_{m+n}}{n(n-4)}\int_{\mathbb{R}^n}\left(\frac{2}{p}\cdot\frac{m}{(m+n-4)}U^p-U^2\right)|z|^2dz.\] \end{theorem} \begin{proof} We know that $U$ satisfies \begin{equation} \label{equ} \Delta U=U-U^{p-1}.
\end{equation} Let us multiply \eqref{equ} by $U|z|^2$ and integrate: \begin{eqnarray*} \int_{\mathbb{R}^n}\left(U^2-U^p\right)|z|^2dz &=& \int_{\mathbb{R}^n}\Delta U\cdot U|z|^2dz\\ &=&-\int_{\mathbb{R}^n}\langle\nabla U, \nabla \left(U\cdot |z|^2\right)\rangle dz\quad \text{(by the Divergence Theorem)}\\ &=&-\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz-2\int_{\mathbb{R}^n}U\langle \nabla U,z\rangle dz \\ &=&-\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz-\int_{\mathbb{R}^n}\langle \nabla U^2,z\rangle dz \\ &=&-\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz+n\int_{\mathbb{R}^n}U^2dz. \end{eqnarray*} Hence, \begin{equation} \label{equ2} n\int_{\mathbb{R}^n}U^2dz=\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz+\int_{\mathbb{R}^n}\left(U^2-U^p\right)|z|^2dz. \end{equation} It is proved in Lemma 5.5 in \cite{Micheletti-Pistoia} that \[ \int_{\mathbb{R}^n} \left(\frac{\partial U}{\partial z_i}\right)^2z^2_idz= \frac{1}{2} \int_{\mathbb{R}^n}|\nabla U|^2 z_i^2 dz+\int_{\mathbb{R}^n}\left( (1/2) U^2- (1/p) U^p\right) z_i^2dz \] \[ =\frac{1}{2n} \int_{\mathbb{R}^n}|\nabla U|^2 |z|^2 dz+\int_{\mathbb{R}^n}\left( (1/2n) U^2- (1/np) U^p\right) |z|^2dz . \] Using also that (see for instance the proof of Lemma 3.3 in \cite{Rey_Ruiz}) \[ \int_{\mathbb{R}^n} \left(\frac{\partial U}{\partial z_i}\right)^2z^2_idz=\int_{\mathbb{R}^n}\left(\frac{U'(|z|)}{|z|}\right)^2z^4_idz=\frac{3}{n(n+2)}\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz,\quad i=1,\ldots,n, \] \noindent and noting that $\frac{3}{n(n+2)}-\frac{1}{2n}=\frac{4-n}{2n(n+2)}$, we obtain, after multiplying through by $-2n$, \begin{eqnarray} \label{equp} \left(\frac{n-4}{n+2}\right)\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz &=&\frac{2}{p}\int_{\mathbb{R}^n}U^p|z|^2dz-\int_{\mathbb{R}^n}U^2|z|^2dz. \end{eqnarray} Now, observe that by (\ref{equ2}) \begin{equation} \label{beta} n\beta= \left(\frac{n+2-a_{m+n}}{n+2}\right)\int_{\mathbb{R}^n}|\nabla U|^2|z|^2dz+\int_{\mathbb{R}^n}\left(U^2-U^p\right)|z|^2dz.
\end{equation} Hence, by \eqref{equp} and \eqref{beta}, \begin{eqnarray*} n(n-4)\beta&=&\frac{1}{p}\cdot\left(2n+4-2a_{m+n}+(4-n)p\right)\int_{\mathbb{R}^n}U^p|z|^2dz-(6-a_{m+n})\int_{\mathbb{R}^n}U^2|z|^2dz\\ &=&\frac{4}{p}\cdot\frac{m}{m+n-2}\int_{\mathbb{R}^n}U^p|z|^2dz-(6-a_{m+n})\int_{\mathbb{R}^n}U^2|z|^2dz. \end{eqnarray*} Notice that if $n=3,m=1$ or $n=m=2$, we have $a_{m+n}=6$, so that $n(n-4)\beta=\frac{4}{p}\cdot\frac{m}{m+n-2}\int_{\mathbb{R}^n}U^p|z|^2dz>0$; since $n(n-4)<0$ when $n\in\{2,3\}$, in these two cases we obtain $\beta < 0$. Finally, if $n\neq 4$ and $m+n>4$, \[ \beta= \frac{6-a_{m+n}}{n(n-4)}\int_{\mathbb{R}^n}\left(\frac{2}{p}\cdot\frac{m}{(m+n-4)}U^p-U^2\right)|z|^2dz. \] \end{proof} \textbf{Acknowledgments.} The authors wish to thank Prof. Jimmy Petean for his constant interest and the many helpful conversations on the Yamabe equation. \end{document}
\begin{document} \title{ Estimating functional time series by moving average model fitting\footnote{This research was partially supported by NSF grants DMS 1305858 and DMS 1407530} } \author{ Alexander Aue\footnote{Department of Statistics, University of California, Davis, CA 95616, USA, email: \tt{[email protected]}} \and Johannes Klepsch\footnote{Center for Mathematical Sciences, Technische Universit\"at M\"unchen, 85748 Garching, Boltzmannstra{\ss}e 3, Germany, email: \tt{[email protected]}}\;\footnote{Corresponding author} } \date{\today} \maketitle \begin{abstract} \setlength{\baselineskip}{1.8em} Functional time series have become an integral part of both functional data and time series analysis. Important contributions to methodology, theory and application for the prediction of future trajectories and the estimation of functional time series parameters have been made in the recent past. This paper continues this line of research by proposing a first principled approach to estimate invertible functional time series by fitting functional moving average processes. The idea is to estimate the coefficient operators in a functional linear filter. To do this, a functional Innovations Algorithm is utilized as a starting point to estimate the corresponding moving average operators via suitable projections into principal directions. In order to establish consistency of the proposed estimators, asymptotic theory is developed for increasing subspaces of these principal directions. For practical purposes, several strategies to select the number of principal directions to include in the estimation procedure, as well as the choice of order of the functional moving average process, are discussed. Their empirical performance is evaluated through simulations and an application to vehicle traffic data.
\\ \noindent {\bf Keywords:} Dimension reduction; Estimation; Functional data analysis; Functional linear process; Functional time series; Hilbert spaces; Innovations Algorithm; Moving average process \noindent {\bf MSC 2010:} Primary: 62M10, 62M15, 62M20; Secondary: 62H25, 60G25 \end{abstract} \setlength{\baselineskip}{1.8em} \section{Introduction} \label{sec:intro} With the advent of complex data came the need for methods to address novel statistical challenges. Among the new methodologies, functional data analysis provides a particular set of tools for tackling questions related to observations conveniently viewed as entire curves rather than individual data points. The current state of the field may be reviewed in one of the comprehensive monographs written by Bosq \cite{bosq}, Ramsay and Silverman \cite{ramsay1}, Horv\'ath and Kokoszka \cite{horvath}, and Hsing and Eubank \cite{hsing}. Many of the applications discussed there point to an intrinsic time series nature of the underlying curves. This has led to an upsurge in contributions to the functional time series literature. The many recent works in this area include papers on time-domain methods such as H\"ormann and Kokoszka \cite{weaklydep}, who introduced a framework to describe weakly stationary functional time series, and Aue et al.\ \cite{aue} and Klepsch and Kl\"uppelberg \cite{kk}, who developed functional prediction methodology; as well as frequency-domain methods such as Panaretos and Tavakoli \cite{panaretos}, who utilized functional cumulants to justify their functional Fourier analysis, H\"ormann et al.\ \cite{hoermann}, who defined the concept of dynamic functional principal components, and Aue and van Delft \cite{avd}, who designed stationarity tests based on functional periodogram properties. This paper is concerned with functional moving average (FMA) processes as a building block to estimate potentially more complicated functional time series.
Together with the functional autoregressive (FAR) processes, the FMA processes comprise one of the basic functional time series model classes. They are used, for example, as a building block in the $L^p$-$m$-approximability concept of H\"ormann and Kokoszka \cite{weaklydep}, which is based on the idea that a sufficiently close approximation with truncated linear processes, starting from a causal infinite MA representation, may adequately capture more complex dynamics. It should be noted that, while there is a significant number of papers on the use of both FMA and FAR processes, the same is not the case for the more flexible functional autoregressive moving average (FARMA) processes. This is due to the technical difficulties that arise from transitioning from the multivariate to the functional level. One advantage that FMA processes enjoy over other members of the FARMA class is that their projections remain multivariate MA processes (of potentially lower order). This is one of the reasons that make them attractive for further study. Here interest is in estimating the dynamics of an invertible functional linear process through fitting FMA models. The operators in the FMA representation, a functional linear filter, are estimated using a functional Innovations Algorithm. This counterpart of the well-known univariate and multivariate Innovations Algorithms was recently introduced by Klepsch and Kl\"uppelberg \cite{kk}, where its properties were analyzed on a population level. These results are extended to the sample case and used as a first step in the estimation. The proposed procedure uses projections onto a number of principal directions, estimated through functional principal components analysis (see, for example, Ramsay and Silverman \cite{ramsay1}). To ensure appropriate large-sample properties of the proposed estimators, the dimensionality of the principal directions space is allowed to grow slowly with the sample size.
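To make the projection idea concrete, recall the classical univariate Innovations Algorithm, to which the functional version reduces after projecting onto a single principal direction. The following sketch is illustrative only: the function name and the MA$(1)$ example are assumptions made here, not material from the paper.

```python
# Sketch of the univariate Innovations Algorithm (Brockwell & Davis,
# Proposition 5.2.2).  Names and the MA(1) example are illustrative.
def innovations(gamma, n):
    """Given an autocovariance function gamma(h), return the coefficients
    theta[(m, j)] and one-step prediction error variances v[m], m <= n."""
    v = [gamma(0)]
    theta = {}
    for m in range(1, n + 1):
        for k in range(m):
            s = sum(theta.get((k, k - j), 0.0) * theta[(m, m - j)] * v[j]
                    for j in range(k))
            theta[(m, m - k)] = (gamma(m - k) - s) / v[k]
        v.append(gamma(0) - sum(theta[(m, m - j)] ** 2 * v[j]
                                for j in range(m)))
    return theta, v

# MA(1) example: X_j = eps_j + 0.5 eps_{j-1} with unit noise variance,
# so gamma(0) = 1.25, gamma(1) = 0.5, gamma(h) = 0 for |h| >= 2.
gamma = lambda h: {0: 1.25, 1: 0.5}.get(abs(h), 0.0)
theta, v = innovations(gamma, 50)
```

For this MA$(1)$ input, the coefficients $\theta_{n,1}$ converge to the moving average coefficient $0.5$ and the prediction error variances $v_n$ to the innovation variance $1$, mirroring the kind of consistency targeted in the functional setting.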
In this framework, the consistency of the estimators of the functional linear filter is the main theoretical contribution. It is presented in Section \ref{sec:methodology}. The theoretical results are accompanied by selection procedures to guide the choice of the order of the approximating FMA process and the dimension of the subspace of principal directions. To choose the dimension of the subspace, a sequential test procedure is proposed. Order selection criteria of AICC, Box--Ljung and FPE type are suggested. Details of the proposed model selection procedures are given in Section \ref{sec:selection}. Their practical performance is highlighted in Section~\ref{sec:sim}, where results of a simulation study are reported, and Section~\ref{sec:app}, where an application to real-world vehicle traffic data is discussed. To summarize, this paper is organized as follows. Section \ref{sec:setting} briefly reviews basic notions of Hilbert-space valued random variables before introducing the setting and the main assumptions. The proposed estimation methodology for functional time series is detailed in Section \ref{sec:methodology}. Section \ref{sec:selection} discusses in some depth the practical selection of the dimension of the projection space and the order of the approximating FMA process. These suggestions are tested in a Monte Carlo simulation study and an application to traffic data in Sections \ref{sec:sim} and \ref{sec:app}, respectively. Section \ref{sec:conclusion} concludes, and proofs of the main results can be found in Section \ref{sec:proof}. \section{Setting} \label{sec:setting} Functional data analysis is often conducted in $H=L^2[0,1]$, the Hilbert space of square-integrable functions, with canonical norm $\|x\|=\langle x,x\rangle^{1/2}$ induced by the inner product $\left\langle x , y \right\rangle=\int_0^1 x(s)y(s)ds$ for $x,y\in H$.
For an introduction to Hilbert spaces from a functional analytic perspective, the reader is referred to Chapters~3.2 and 3.6 in Simon~\cite{simon}. All random functions considered in this paper are defined on a probability space $(\Omega,\mathcal{A},\mathcal{P})$ and are assumed to be ${\mathcal{A}}$-${\mathcal{B}}_H$-measurable, where ${\mathcal{B}}_H$ denotes the Borel $\sigma$-algebra of subsets of $H$. Note that the space of square integrable random functions $L^2_{H}=L^2(\Omega, \mathcal{A},\mathcal{P})$ is a Hilbert space with inner product ${\rm E}[\left\langle X,Y\right\rangle]={\rm E}[\int_0^1X(s)Y(s)ds]$ for $X,Y \in L^2_{H}$. Similarly, denote by $L^p_H=L^p(\Omega,{\mathcal{A}},\mathcal{P})$ the space of $H$-valued random functions such that $\nu_p(X)=({\rm E}[\|X\|^p])^{1/p}<\infty$. Let $\mathbb{Z}$, ${\mathbb N}$ and ${\mathbb N}_0$ denote the set of integers, positive integers and non-negative integers, respectively. Interest in this paper is in fitting techniques for functional time series $(X_j\colon j\in\mathbb{Z})$ taking values in $L_H^2$. To describe a wide variety of temporal dynamics, the framework is established for functional linear processes $(X_j\colon j\in\mathbb{Z})$ defined through the series expansion \begin{equation} \label{eq:flp} X_j=\sum_{\ell=0}^\infty \psi_\ell\varepsilon_{j-\ell}, \qquad j\in\mathbb{Z}, \end{equation} where $(\psi_\ell\colon\ell\in\mathbb{N}_0)$ is a sequence in ${\mathcal{L}}$, the space of bounded linear operators acting on $H$, equipped with the standard norm $\|A\|_{{\mathcal{L}}}=\sup_{ \|x\|\leq 1} \|Ax\|$, and $(\varepsilon_j\colon j\in\mathbb{Z})$ is assumed to be an independent and identically distributed sequence in $L^2_H$. Additional summability conditions are imposed on the sequence of coefficient operators $(\psi_\ell\colon\ell\in\mathbb{N}_0)$ if it is necessary to control the rate of decay of the temporal dependence.
Whenever the terminology ``functional linear process'' is used in this paper it is understood to be in the sense of \eqref{eq:flp}. Note that, as for univariate and multivariate time series models, every stationary causal functional autoregressive moving average (FARMA) process is a functional linear process (see Spangenberg \cite{spangenberg}, Theorem~2.3). Special cases include functional autoregressive processes of order $p$, FAR$(p)$, which have been thoroughly investigated in the literature, and the {\em functional moving average process of order $q$}, FMA$(q)$, which is given by the equation \begin{align} X_j = \sum_{\ell=1}^q \theta_\ell \varepsilon_{j-\ell}+\varepsilon_j, \qquad j \in \mathbb{Z}, \label{FMA} \end{align} with $\theta_1,\ldots,\theta_q \in {\mathcal{L}}$. While the functional linear process in \eqref{eq:flp} is the prototypical causal time series, in the context of prediction the concept of invertibility naturally enters; see Chapter 5.5 of Brockwell and Davis~\cite{brockwell}, and Nsiri and Roy~\cite{nsiri}. For a functional time series $(X_j\colon j\in\mathbb{Z})$ to be {\em invertible}, it is required that \begin{equation}\label{eq:invertible} X_j=\sum_{\ell=1}^\infty\pi_\ell X_{j-\ell}+\varepsilon_j, \qquad j\in\mathbb{Z}, \end{equation} for $(\pi_\ell\colon\ell\in\mathbb{N})$ in ${\mathcal{L}}$ such that $\sum_{\ell=1}^\infty\|\pi_\ell\|_{{\mathcal{L}}}<\infty$; see Merlev\`ede \cite{merlevede}. A sufficient condition for invertibility of a functional linear process, which is assumed throughout, is given in Theorem~7.2 of Bosq \cite{bosq}. The definition of a functional linear process in \eqref{eq:flp} provides a convenient framework for the formulation of large-sample results and their verification. In order to analyze time series characteristics in practice, however, most statistical methods require a more in-depth understanding of the underlying dependence structure.
This is typically achieved through the use of autocovariances, which determine the second-order structure. Observe first that any random variable in $L^p_H$ with $p\geq 1$ possesses a unique {\em mean function} $\mu$ in $H$, which allows for a pointwise definition; see Bosq \cite{bosq}. For what follows, it is assumed without loss of generality that $\mu=0$, the zero function. If $X\in L_H^p$ with $p\geq 2$ such that ${\rm E}[X]=0$, then the {\em covariance operator} of $X$ exists and is given by \begin{align*} C_X(y) = {\rm E} [\langle X, y \rangle X], \qquad y \in H. \end{align*} If $X,Y \in L^p_{H}$ with $p\geq 2$ such that ${\rm E} [X] = {\rm E} [Y]=0$, then the {\em cross covariance operator} of $X$ and $Y$ exists and is given by \begin{align*} C_{X,Y}(y) = C_{Y,X}^*(y)={\rm E} [\langle X, y \rangle Y], \qquad y \in H, \end{align*} where $C_{Y,X}^*$ denotes the adjoint of $C_{Y,X}$, noting that the adjoint $A^*$ of an operator $A$ is defined by the equality $\langle Ax,y \rangle = \langle x , A^* y\rangle$ for $x,y\in H$. The operators $C_X$ and $C_{Y,X}$ belong to ${\mathcal{N}}$, the class of {\em nuclear operators}, whose elements $A$ have a representation $A=\sum_{j=1}^{\infty} \lambda_j\langle e_j , \cdot\rangle f_j $ with $\sum_{j=1}^{\infty} \vert \lambda_j \vert < \infty$ for two orthonormal bases (ONB) $(e_j)_{j\in{\mathbb N}}$ and $(f_j)_{j\in{\mathbb N}}$ of $H$. In that case $\Vert A\Vert_{{\mathcal{N}}}=\sum_{j=1}^\infty \vert \lambda_j\vert <\infty$; see Section~1.5 of Bosq \cite{bosq}. Furthermore, $C_X$ is self-adjoint ($C_X=C_X^*$) and non-negative definite with spectral representation \begin{align*} C_X(y)=\sum_{i=1}^{\infty} \lambda_i \langle y, \nu_i \rangle \nu_i, \qquad y\in H, \end{align*} where $(\nu_i\colon i\in{\mathbb N})$ is an ONB of $H$ and $(\lambda_i\colon i\in{\mathbb N})$ is a sequence of positive real numbers such that $\sum_{i=1}^{\infty} \lambda_i < \infty$.
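On a discretization grid, the covariance operator reduces to a symmetric, non-negative definite matrix, and the spectral representation above reduces to a matrix eigendecomposition. A minimal sketch (the three-component toy sample is an illustrative assumption, and the quadrature weights of the grid are omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 40
t = np.linspace(0.0, 1.0, m)

# toy zero-mean sample: random combinations of three sine functions
scores = rng.standard_normal((n, 3)) * np.array([1.0, 0.5, 0.25])
basis = np.stack([np.sin(np.pi * k * t) for k in (1, 2, 3)])
X = scores @ basis                       # n x m data matrix, one curve per row

# discretized covariance operator C_X = E[X (x) X] as an m x m matrix
C = X.T @ X / n

# spectral representation: eigenvalues lambda_i, eigenfunctions nu_i
lam, nu = np.linalg.eigh(C)              # eigh returns ascending order
lam, nu = lam[::-1], nu[:, ::-1]         # reorder decreasingly, as is standard
```

The columns of `nu` play the role of the eigenfunctions $\nu_i$, and the decreasing ordering of `lam` matches the convention adopted next.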
When considering spectral representations, it is standard to assume that the $(\lambda_i\colon i\in{\mathbb N})$ are ordered decreasingly and that there are no ties between consecutive $\lambda_i$. For ease of notation, introduce the operator $(x\otimes y)(\cdot)= \langle x ,\cdot \rangle y$ for $x,y\in H$. Then, $C_X = {\rm E} [ X\otimes X ]$ and $C_{X,Y} = {\rm E}[X\otimes Y]$. Moreover, for a stationary process $(X_j\colon j\in{\mathbb Z})$, the {\em lag-$h$ covariance operator} can be written as \begin{align} C_{X;h}={\rm E}[X_0\otimes X_h],\qquad h\in {\mathbb Z}. \label{cxh} \end{align} The quantities in \eqref{cxh} are the basic building blocks in the functional Innovations Algorithm and the associated estimation strategy to be discussed in the next section. \section{Estimation methodology} \label{sec:methodology} \subsection{Linear prediction in function spaces} Briefly recall the concept of linear prediction in Hilbert spaces as defined in Section~1.6 of Bosq \cite{bosq}. Let $(X_j\colon j\in {\mathbb Z})$ be an invertible, functional linear process. Let $\bar{L}_{n,k}$ be the ${\mathcal{L}}$-closed subspace (LCS) generated by the stretch of functions $X_{n-k},\ldots,X_n$. LCS here is to be understood in the sense of Fortet \cite{fortet}; that is, $\bar{L}_{n,k}$ is the smallest subspace of $H$ containing $X_{n-k},\ldots,X_n$ that is closed with respect to operators in ${\mathcal{L}}$. Then, the best linear predictor of $X_{n+1}$ given $\{X_{n},X_{n-1},\dots,X_{n-k}\}$ at the population level is given by \begin{align} \label{blp} \tilde X_{n+1,k}^f = P_{\bar{L}_{n,k}}(X_{n+1}), \end{align} where the superscript $f$ in the predictor notation indicates the fully functional nature of the predictor and $P_{\bar{L}_{n,k}}$ denotes the projection onto $\bar{L}_{n,k}$. Note that there are major differences to the multivariate prediction case.
Due to the infinite dimensionality of function spaces, $\tilde X_{n+1,k}^f$ in \eqref{blp} is not guaranteed to have a representation in terms of its past values and operators in ${\mathcal{L}}$; see, for instance, Proposition~2.2 in Bosq \cite{bosq2014} and the discussion in Section~3 of Klepsch and Kl\"uppelberg \cite{kk}. A typical remedy in FDA is to resort to projections into principal directions and then to let the dimension $d$ of the projection subspace grow to infinity. At the subspace level, multivariate methods may be applied to compute the predictors, for example, the multivariate Innovations Algorithm; see Lewis and Reinsel \cite{lewis} and Mitchell and Brockwell \cite{mitchell}. This, however, has to be done with care, especially if sample versions of the predictors in \eqref{blp} are considered. Even at the population level, the rate at which $d$ tends to infinity has to be calibrated scrupulously to ensure that the inversions of matrices occurring, for example, in the multivariate Innovations Algorithm are meaningful and well defined (see Theorem 5.3 of Klepsch and Kl\"uppelberg \cite{kk}). Therefore, the following alternative to the functional best linear predictor defined in \eqref{blp} is proposed. Recall that $(\nu_j\colon j\in\mathbb{N})$ are the eigenfunctions of the covariance operator $C_X$. Let $\mathcal{V}_d={\rm \overline{sp}}\{\nu_1,\dots,\nu_d\}$ be the subspace generated by the first $d$ principal directions and let $P_{\mathcal{V}_d}$ be the projection operator projecting from $H$ onto $\mathcal{V}_d$. Let furthermore $(d_i\colon i\in{\mathbb N})$ be an increasing sequence of positive integers and define \begin{align} X_{d_i,j}=P_{\mathcal{V}_{d_i}}X_j,\qquad j\in{\mathbb Z},\; i\in{\mathbb N}. \label{xdi} \end{align} Note that \eqref{xdi} allows for the added flexibility of projecting different $X_j$ into different subspaces $\mathcal{V}_{d_i}$.
Then, $X_{n+1}$ can be projected into the LCS generated by $X_{d_k,n},X_{d_{k-1},n-1},\ldots,X_{d_1,n-k}$, which is denoted by $\bar{{F}}_{n,k}$. Consequently, write \begin{align}\label{blpd} \tilde X_{n+1,k} = P_{\bar{{F}}_{n,k}}(X_{n+1}) \end{align} for the best linear predictor of $X_{n+1}$ given $\bar{{F}}_{n,k}$. This predictor could be computed by regressing $X_{n+1}$ onto $X_{d_k,n},X_{d_{k-1},n-1},\ldots,X_{d_1,n-k}$, but interest is here in the equivalent representation of $\tilde{X}_{n+1,k}$ in terms of one-step ahead prediction residuals given by \begin{align}\label{innov} \tilde{X}_{n+1,k} = \sum_{i=1}^k\theta_{k,i}(X_{d_{k+1-i},n+1-i}-\tilde{X}_{n+1-i,k-i}), \end{align} where $\tilde{X}_{n-k,0}=0$. On the population level, it was shown in Klepsch and Kl\"uppelberg \cite{kk} that the coefficients $\theta_{k,i}$ with $k,i\in{\mathbb N}$ can be computed with the following algorithm. \begin{algo}[\bf{Functional Innovations Algorithm}] \label{fia} Let $(X_j\colon j\in{\mathbb Z})$ be a stationary functional linear process with covariance operator $C_X$ possessing eigenpairs $(\lambda_i,\nu_i\colon i\in{\mathbb N})$ with $\lambda_i>0$ for all $i\in{\mathbb N}$. The best linear predictor $\tilde{ X}_{n+1,k}$ of ${ X}_{n+1}$ based on $\bar{{F}}_{n,k}$ defined in \eqref{innov} can be computed by the recursions \begin{align} \tilde{X}_{n-k,0}&=0\qquad\mbox{and}\qquad V_{0}=P_{\mathcal{V}_{d_1}}C_{X}P_{\mathcal{V}_{d_1}},\notag\\ \tilde{X}_{n+1,k}&= \sum_{i=1}^{k} \theta_{k,i} (X_{d_{k+1-i},n+1-i}-\tilde{X}_{n+1-i,{k-i}}), \notag \\ \theta_{k,k-i}&=\bigg(P_{\mathcal{V}_{d_{k+1}}}\,C_{X;k-i}\,P_{\mathcal{V}_{d_{i+1}}} - \sum_{j=0}^{i-1} \theta_{k,k-j} \ V_{j} \ \theta_{i,i-j}^*\bigg)V_{i}^{-1}, \qquad i=0,\dots,k-1, \label{theta1} \\ V_{k}&=C_{X_{d_{k+1}}-\tilde{X}_{n+1,k}}= C_{X_{d_{k+1}}} - \sum_{i=0}^{k-1} \theta_{k,k-i}V_{i}\theta^*_{k,k-i}.
\label{vd1} \end{align} Note that $\theta_{k,i}$ and $V_i$ are operators in ${\mathcal{L}}$ for all $i=1,\dots,k$. \end{algo} The first main goal is now to show how a finite sample version of this algorithm can be used to estimate the operators in \eqref{FMA}, as these FMA processes will be used to approximate the more complex processes appearing in Definition \ref{def:lpm}. Note that H\"ormann and Kokoszka \cite{weaklydep} give assumptions under which $\sqrt{n}$-consistent estimators can be obtained for the lag-$h$ autocovariance operator $C_{X;h}$, for $h\in{\mathbb Z}$. However, in \eqref{theta1}, estimators are required for the more complicated quantities $P_{\mathcal{V}_{d_{k+1}}}\,C_{X;k-i}\,P_{\mathcal{V}_{d_{i+1}}}$, for $k,\, i\in{\mathbb N}$. If, for $i\in{\mathbb N}$, the projection subspace $\mathcal{V}_{d_i}$ is known, consistent estimators of $P_{\mathcal{V}_{d_{k+1}}}\,C_{X;k-i}\,P_{\mathcal{V}_{d_{i+1}}}$ can be obtained by estimating $C_{X;k-i}$ and projecting the operator onto the desired subspace. This case will be dealt with in Section~\ref{known}. In practice, however, the subspaces $\mathcal{V}_{d_i}$, $i\in{\mathbb N}$, need to be estimated. This is a further difficulty that will be addressed separately in an additional step as part of Section~\ref{unknown}. Now, introduce additional notation. For $k\in{\mathbb N}$, denote by $(X_j(k)\colon j\in{\mathbb Z})$ the functional process taking values in $H^k$ such that \[ X_j(k)=(X_j,X_{j-1},\dots,X_{j-k+1})^\top, \] where $^\top$ signifies transposition. Let \begin{align*} \Gamma_k= C_{X(k)} \qquad \text{and} \qquad \Gamma_{1,k}=C_{X_{n+1},X_n(k)}={\rm E} \big[X_{n+1} \otimes X_n(k)\big].
\end{align*} Based on a realization $X_1,\dots,X_n$ of $(X_j\colon j\in{\mathbb Z})$, estimators of the above operators are given by \begin{align} \hat\Gamma_k = \frac{1}{n-k}\sum_{j=k}^{n-1} X_j(k)\otimes X_j(k) \qquad\mbox{and}\qquad \hat\Gamma_{1,k} = \frac{1}{n-k} \sum_{j=k}^{n-1} X_{j+1}\otimes X_j(k). \label{gammak} \end{align} The following theorem establishes the $\sqrt{n}$-consistency of the estimator $\hat\Gamma_k$ of $\Gamma_k$ defined in \eqref{gammak}. \begin{theorem} \label{l4mapp} If $(X_j\colon j\in\mathbb{Z})$ is a functional linear process defined in \eqref{eq:flp} such that the coefficient operators $(\psi_\ell\colon\ell\in\mathbb{N}_0)$ satisfy the summability condition $\sum_{m=1}^\infty\sum_{\ell=m}^\infty\|\psi_\ell\|_{{\mathcal{L}}}<\infty$ and with independent, identically distributed innovations $(\varepsilon_j\colon j\in\mathbb{Z})$ such that ${\rm E}[\|\varepsilon_0\|^4]<\infty$, then \begin{align*} (n-k)\, {\rm E} \big[\Vert \hat \Gamma_k - \Gamma_k \Vert_{\mathcal{N}}^2\big] \leq k \, U_X, \end{align*} where $U_X$ is a constant that does not depend on $n$. \end{theorem} The proof of Theorem \ref{l4mapp} is given in Section \ref{sec:proof}. There, an explicit expression for the constant $U_X$ is derived that depends on moments of the underlying functional linear process and on the rate of decay of the temporal dependence implied by the summability condition on the coefficient operators $(\psi_\ell\colon\ell\in\mathbb{N}_0)$. \subsection{Known projection subspaces} \label{known} In this section, conditions are established that ensure consistency of estimators of a functional linear process under the assumption that the projection subspaces $\mathcal{V}_{d_i}$ are known in advance. In this case as well as in the unknown subspace case, the following general strategy is pursued; see Mitchell and Brockwell \cite{mitchell}.
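After projection onto a fixed $d$-dimensional subspace, the recursions of Algorithm \ref{fia} reduce to the classical multivariate Innovations Algorithm acting on lag covariance matrices. The following sketch implements that multivariate recursion for a bivariate MA(1) with known population lag covariances; it illustrates the recursion only, not the paper's estimator, and all numerical choices are assumptions:

```python
import numpy as np

def innovations(Gamma, K):
    """Multivariate Innovations Algorithm for a stationary d-dim process.

    Gamma: function h -> lag-h autocovariance matrix E[X_{t+h} X_t^T].
    Returns the coefficients theta[(k, i)], i = 1..k, and the innovation
    covariance matrices V[0..K].
    """
    V = [Gamma(0).copy()]
    theta = {}
    for k in range(1, K + 1):
        for i in range(k):              # computes theta_{k, k-i}
            S = Gamma(k - i).copy()
            for j in range(i):
                S -= theta[(k, k - j)] @ V[j] @ theta[(i, i - j)].T
            theta[(k, k - i)] = S @ np.linalg.inv(V[i])
        Vk = Gamma(0).copy()
        for j in range(k):
            Vk -= theta[(k, k - j)] @ V[j] @ theta[(k, k - j)].T
        V.append(Vk)
    return theta, V

# MA(1) example: X_t = eps_t + Theta eps_{t-1}, innovation covariance Sigma
Theta = np.array([[0.4, 0.1], [0.0, 0.3]])
Sigma = np.eye(2)
G0 = Sigma + Theta @ Sigma @ Theta.T
def Gamma(h):
    return {0: G0, 1: Theta @ Sigma}.get(abs(h), np.zeros((2, 2)))

theta, V = innovations(Gamma, K=30)
```

For this invertible MA(1), `theta[(30, 1)]` is numerically close to `Theta` and `V[-1]` close to `Sigma`, mirroring the role of the $\theta_{k,i}$ and $V_k$ in the functional algorithm.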
Start by providing consistency results for the regression estimators of $\beta_{k,1},\dots,\beta_{k,k}$ in the linear model formulation \[ \tilde X_{n+1,k}=\beta_{k,1}X_{d_k,n} + \beta_{k,2}X_{d_{k-1},n-1}+\dots+\beta_{k,k}X_{d_1,n-k+1} \] of \eqref{blpd}. To obtain the consistency of the estimators of $\theta_{k,1},\ldots,\theta_{k,k}$, exploit then that regression operators and Innovations Algorithm coefficient operators are, for $k\in{\mathbb N}$, linked through the recursions \begin{align} \theta_{k,i}=\sum_{j=1}^i \beta_{k,j} \theta_{k-j,i-j}, \qquad i=1,\dots,k.\label{link} \end{align} Define furthermore $P_{(k)}= \mathrm{diag}(P_{\mathcal{V}_{d_k}}, \dots,P_{\mathcal{V}_{d_1}})$, the operator from $H^k$ to $H^k$ whose $i$th diagonal entry is given by the projection operator onto $\mathcal{V}_{d_i}$. One verifies that $P_{(k)}X_n(k)= (X_{d_k,n},X_{d_{k-1},n-1},\dots,X_{d_1,n-k+1})^\top$, $C_{P_{(k)}X(k)}=P_{(k)}\Gamma_k P_{(k)}=\Gamma_{k,d}$ and $C_{X,P_{(k)}X(k)}= P_{(k)}\Gamma_{1,k}=\Gamma_{1,k,d}$. With this notation, it can be shown that $B(k)=(\beta_{k,1},\dots,\beta_{k,k})$ satisfies the population Yule--Walker equations \begin{align*} B(k)=\Gamma_{1,k,d}\,\Gamma_{k,d}^{-1}, \end{align*} of which sample versions are needed. In the known subspace case, estimators of $\Gamma_{1,k,d}$ and $\Gamma_{k,d}$ are given by \begin{align} \hat\Gamma_{k,d}=P_{(k)} \hat\Gamma_k P_{(k)} \qquad\mbox{and}\qquad\hat\Gamma_{1,k,d}=\hat\Gamma_{1,k}P_{(k)}, \label{hatgammakd} \end{align} where $\hat\Gamma_{k}$ and $\hat\Gamma_{1,k}$ are as in \eqref{gammak}. With this notation, $B(k)$ is estimated by the sample Yule--Walker equations \begin{align} \hat{B}(k) = \hat\Gamma_{1,k,d}\hat\Gamma_{k,d} ^{-1}.
\label{yulewalkerhat} \end{align} Furthermore, the operators $\theta_{k,i}$ in \eqref{innov} are estimated by $\hat\theta_{k,i}$, resulting from Algorithm~\ref{fia} applied to the estimated covariance operators with $\mathcal{V}_{d_i}$ known. In order to derive asymptotic properties of $\hat \beta_{k,i}$ and $\hat \theta_{k,i}$ as both $k$ and $n$ tend to infinity, the following assumptions are imposed. Let $\alpha_{d_k}$ denote the infimum of the eigenvalues of all spectral density operators of $(X_{d_k,j}\colon j\in\mathbb{Z})$. \begin{assumption}\label{assumptions} As $n\rightarrow\infty$, let $k=k_n\rightarrow\infty$ and $d_k\rightarrow\infty$ such that \begin{enumerate}\itemsep-.2ex \item[(i)] $(X_j\colon j\in{\mathbb Z})$ is as in Theorem \ref{l4mapp} and invertible. \item[(ii)] $k^{1/2}(n-k)^{-1/2}\alpha_{d_k}^{-2}\rightarrow 0$ as $n\rightarrow \infty$. \item[(iii)] $k^{1/2}\alpha_{d_k}^{-1} \, \big(\sum_{\ell>k} \Vert \pi_\ell\Vert_{{\mathcal{L}}} + \sum_{\ell=1}^k \Vert \pi_\ell \Vert_{{\mathcal{L}}} \sum_{i>d_{k+1-\ell}} \lambda_i \big) \rightarrow 0$ as $n\rightarrow\infty$. \end{enumerate} \end{assumption} Invertibility imposed in part {\em (i)}\/ of Assumption \ref{assumptions} is a standard requirement in the context of prediction and is also necessary for the univariate Innovations Algorithm to be consistent. Assumption~{\em (ii)}\/ describes the restrictions on the relationship between $k$, $d_k$ and $n$. The corresponding multivariate assumption in Mitchell and Brockwell \cite{mitchell} is $k^3/n\rightarrow 0 $ as $n\rightarrow\infty$. Assumption~{\em (iii)}\/ is already required in the population version of the functional Innovations Algorithm in Klepsch and Kl\"uppelberg \cite{kk}. It ensures that the best linear predictor based on the last $k$ observations converges to the conditional expectation as $k\rightarrow\infty$.
The corresponding multivariate condition in Mitchell and Brockwell \cite{mitchell} is $k^{1/2} \sum_{\ell>k}\Vert \pi_\ell\Vert \rightarrow 0$ as $n\rightarrow\infty$, where $(\pi_\ell\colon\ell\in\mathbb{N})$ here denote the matrices in the invertible representation of a multivariate linear process. The main result concerning the asymptotic behavior of the estimators $\hat\beta_{k,i}$ and $\hat\theta_{k,i}$ is given next. \begin{theorem}\label{autoregressive} Let $\mathcal{V}_{d_i}$ be known for all $i\in{\mathbb N}$ and let Assumption~\ref{assumptions} be satisfied. Then, for all $x\in H$ and all $i\in{\mathbb N}$ as $n\rightarrow\infty$, \begin{enumerate}\itemsep-.2ex \item[(i)] $\Vert (\hat \beta_{k,i} - \pi_i )(x) \Vert \overset{p}{\rightarrow} 0$, \item[(ii)]$\Vert( \hat \theta_{k,i}-\psi_i)(x)\Vert \overset{p}{\rightarrow}0.$ \end{enumerate} If the operators $(\psi_\ell\colon\ell\in\mathbb{N})$ and $(\pi_\ell\colon\ell\in\mathbb{N})$ in the respective causal and invertible representations are assumed Hilbert--Schmidt, then the convergence in (i) and (ii) is uniform. \end{theorem} The proof of Theorem \ref{autoregressive} is given in Section \ref{sec:proof}. The theorem establishes the pointwise convergence of the estimators needed in order to obtain a sample proxy for the functional linear filter $(\pi_\ell\colon\ell\in{\mathbb N})$. This filter encodes the second-order dependence in the functional linear process and can therefore be used for estimating the underlying dynamics in the case of known projection subspaces. \subsection{Unknown projection subspaces} \label{unknown} The goal of this section is to remove the assumption of known $\mathcal{V}_{d_i}$. Consequently, the standard estimators for the eigenfunctions $(\nu_i\colon i\in{\mathbb N})$ of the covariance operator $C_X$ are used, obtained as the sample eigenfunctions $\hat\nu_j$ of $\hat C_X$.
Therefore, for $i\in{\mathbb N}$, the estimators of $\mathcal{V}_{d_i}$ and $P_{\mathcal{V}_{d_i}}$ are \begin{align} \hat{\mathcal{V}}_{d_i}= {\rm \overline{sp}}\{\hat \nu_1, \hat \nu_2,\dots,\hat \nu_{d_i}\} \qquad\mbox{and}\qquad \hat P_{\mathcal{V}_{d_i}} = P_{\hat{\mathcal{V}}_{d_i}}. \end{align} For $i\in\mathbb{N}$, let $\hat \nu_i'=c_i\hat \nu_i$, where $c_i = \text{sign}(\langle \hat \nu_i,\nu_i\rangle)$. Then, Theorem 3.1 in H\"ormann and Kokoszka \cite{weaklydep} implies the consistency of $\hat\nu_i'$ for $\nu_i$, with the quality of approximation depending on the spectral gaps of the eigenvalues $(\lambda_i\colon i\in\mathbb{N})$ of $C_X$. With this result in mind, define \begin{align} \hat {\hat\Gamma}_{k,d}=\hat P_{(k)} \hat\Gamma_k \hat P_{(k)} \qquad\mbox{and}\qquad \hat{\hat\Gamma}_{1,k,d}=\hat\Gamma_{1,k}\hat P_{(k)}. \label{hathatgammakd} \end{align} Now, if the projection subspace $\mathcal{V}_{d_i}$ is not known, the operators appearing in \eqref{link} can be estimated by solving the estimated Yule--Walker equations \begin{align} \hat{\hat{B}}(k) = \hat{\hat\Gamma}_{1,k,d}\hat{\hat\Gamma}_{k,d} ^{-1}. \label{yulewalkerhathat} \end{align} The coefficient operators in Algorithm \ref{fia} obtained from estimated covariance operators and estimated projection spaces $\hat P_{\mathcal{V}_{d_i}}$ are denoted by $\hat{\hat\theta}_{k,i}$. In order to derive results concerning their asymptotic behavior, an additional assumption concerning the decay of the spectral gaps of $C_X$ is needed. Let $\delta_1=\lambda_1-\lambda_2$ and $\delta_j=\min\{\lambda_{j-1}-\lambda_j,\lambda_j-\lambda_{j+1}\}$ for $j\geq 2$.
\begin{assumption} \label{ass2} As $n\rightarrow\infty$, $k=k_n\rightarrow\infty$ and $d_k\rightarrow\infty$ such that \begin{enumerate} \item[(iv)] $k^{3/2}{\alpha_{d_k}^{-2} \, n^{-1}}(\sum_{\ell=1}^{d_k} \delta_\ell^{-2})^{1/2} \rightarrow 0$. \end{enumerate} \end{assumption} This type of assumption dealing with the spectral gaps is typically encountered when dealing with the estimation of eigenelements of functional linear processes (see, for example, Bosq \cite{bosq}, Theorem~8.7). We are now ready to state the asymptotic result for the estimators in the general case that $\mathcal{V}_{d_i}$ is not known. \begin{theorem}\label{autoregressive2} Let Assumptions~\ref{assumptions} and \ref{ass2} be satisfied. Then, for all $x\in H$ and $i \in{\mathbb N}$ as $n\rightarrow\infty$, \begin{enumerate}\itemsep-.2ex \item[(i)] $\Vert (\hat{\hat \beta}_{k,i} - \pi_i )(x) \Vert \overset{p}{\rightarrow} 0,$ \item[(ii)] $ \Vert( \hat{\hat\theta}_{k,i}-\psi_i)(x)\Vert \overset{p}{\rightarrow}0.$ \end{enumerate} If the operators $(\psi_\ell\colon\ell\in\mathbb{N})$ and $(\pi_\ell\colon\ell\in\mathbb{N})$ are Hilbert--Schmidt, then the convergence is uniform. \end{theorem} The proof of Theorem \ref{autoregressive2} is given in Section \ref{sec:proof}. The theoretical results quantify the large-sample behavior of the estimates of the linear filter operators in the causal and invertible representations of the strictly stationary functional time series $(X_j\colon j\in\mathbb{Z})$. How to guide the application of the proposed method in finite samples is addressed in the next section. \section{Selection of principal directions and FMA order} \label{sec:selection} Model selection is a difficult problem when working with functional time series.
Contributions to the literature have been made in the context of functional autoregressive models by Kokoszka and Reimherr \cite{kokoreim}, who devised a sequential test to decide on the FAR order, and Aue et al.\ \cite{aue}, who introduced an FPE-type criterion. To the best of our knowledge, there are no contributions in the context of model selection for functional moving average models. This section introduces several procedures. A method for the selection of the subspace dimension is introduced in Section~\ref{sec:cvpind}, followed by a method for FMA order selection in Section~\ref{sec:AICC}. A criterion for the simultaneous selection of both is given in Section~\ref{sec:fFPE}. \subsection{Selection of principal directions} \label{sec:cvpind} The best-known method for the selection of $d$ in functional data analysis is based on total variance explained (TVE), where $d$ is chosen such that the first $d$ eigenfunctions of the covariance operator explain a predetermined amount $P$ of the variability; see, for example, Horv\'ath and Kokoszka \cite{horvath}. In order to apply the TVE criterion in the functional time series context, one has to ensure that no essential parts of the dependence structure in the data are omitted after the projection into principal directions. This is achieved as follows. First choose an initial $d^*$ with the TVE criterion such that a fraction $P$ of the variation in the data is explained. This should be done conservatively. Then apply the portmanteau test of Gabrys and Kokoszka \cite{gabrys} to check whether the non-projected part $(I_H-P_{\mathcal{V}_{d^*}})X_1,\ldots, (I_H-P_{\mathcal{V}_{d^*}})X_n$ of the observed functions $X_1,\ldots,X_n$ can be considered independent.
Modifying their test to the current situation yields the statistic \begin{align}\label{testind} Q_n^{d^*}= n\sum_{h=1}^{\bar{h}} \sum_{\ell,\ell^\prime=d^*+1}^{d^*+p} f_{h}(\ell,\ell^\prime) b_h(\ell,\ell^\prime), \end{align} where $f_h(\ell,\ell^\prime)$ and $b_h(\ell,\ell^\prime)$ denote the $(\ell,\ell^\prime)$th entries of $C_{\mathbf{X}^*;0}^{-1}C_{\mathbf{X}^*;h}$ and $C_{\mathbf{X}^*;h}C_{\mathbf{X}^*;0}^{-1}$, respectively, and $(\textbf{X}_j^*\colon j\in{\mathbb Z})$ is the $p$-dimensional vector process consisting of the $(d^*+1)$st to $(d^*+p)$th eigendirections of the covariance operator $C_X$. Following Gabrys and Kokoszka \cite{gabrys}, it follows under the assumption of independence of the non-projected series that $Q_n^{d^*}\rightarrow\chi^2_{{p}^2\bar{h}}$ in distribution. If the assumption of independence is rejected, set $d^*=d^*+1$. Repeat the test until the independence hypothesis cannot be rejected and choose $d=d^*$ to estimate the functional linear filters. This leads to the following algorithm. \begin{algo}[\bf Test for independence] \label{IND} Perform the following steps. \begin{enumerate}\itemsep-.2ex \item[(1)] For given observed functional time series data $X_1,\ldots,X_n$, estimate the eigenpairs $(\hat\lambda_1,\hat\nu_1),\dots,(\hat\lambda_n,\hat\nu_n)$ of the covariance operator $C_X$. Select $d^*$ such that \begin{align*} \mathrm{TVE}(d^*)=\frac{\sum_{i=1}^{d^*} \hat\lambda_i}{\sum_{i=1}^{n} \hat\lambda_i}\geq P \end{align*} for some prespecified $P\in(0,1)$. \item[(2)] While $Q_n^{d^*}>q_{\chi^2_{{p}^2\bar{h}},\alpha}$, set $d^*=d^*+1$. \item[(3)] If $Q_n^{d^*}\leq q_{\chi^2_{{p}^2\bar{h}},\alpha}$, stop and apply Algorithm \ref{fia} with $d_i=d^*$, for all $i\leq k$. \end{enumerate} \end{algo} Note that Algorithm \ref{IND} does not specify the choices of $P$, $p$, $\bar{h}$ and $\alpha$.
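Step (1) of Algorithm \ref{IND} is elementary to implement once sample eigenvalues are available; a sketch in which the helper name `tve_dimension` and the eigenvalue sequence are purely illustrative:

```python
import numpy as np

def tve_dimension(lam, P=0.9):
    """Smallest d* whose leading eigenvalues explain a fraction P of the
    total variance, i.e. the TVE initialization of the algorithm."""
    lam = np.sort(np.asarray(lam, dtype=float))[::-1]   # decreasing order
    ratio = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(ratio, P) + 1)

lam = [5.0, 2.5, 1.0, 0.3, 0.2]          # illustrative sample eigenvalues
print(tve_dimension(lam, P=0.8))          # -> 2, since (5 + 2.5)/9 ≈ 0.83
```

Steps (2) and (3) then increase this initial $d^*$ until the portmanteau test no longer rejects independence of the non-projected part.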
Recommendations on their selection are given in Section~\ref{sec:sim}. Multiple testing could potentially be an issue, but intensive simulation studies have shown that, since $d^*$ is initialized with the TVE criterion, usually no more than one or two iterations and tests are required for practical purposes. Therefore, the significance level is not adjusted, even though it would be feasible to incorporate this additional step into the algorithm. \subsection{Selection of FMA order} \label{sec:AICC} For a fixed $d$, multivariate model selection procedures can be applied to choose $q$. In fact, it is shown in Theorem~4.7 of Klepsch and Kl\"uppelberg \cite{kk} that the projection of an FMA$(q)$ process on a finite-dimensional space is a VMA$(q^*)$ with $q^*\leq q$. Assuming that the finite-dimensional space is chosen such that no information on the dependence structure of the process is lost, $q=q^*$. Then, the FMA order $q$ may be chosen by performing model selection on the $d$-dimensional vector model given by the first $d$ principal directions of $(X_j\colon j\in{\mathbb Z})$. Methods for selecting the order of VMA models are described, for example, in Chapter 11.5 of Brockwell and Davis \cite{brockwell}, and Chapter~3.2 of Tsay \cite{tsai}. The latter book provides arguments for the identification of the VMA order via cross correlation matrices. This Ljung--Box (LB) method for testing the null hypothesis $H_0\colon C_{\textbf{X};\underline{h}} = C_{\textbf{X};\underline{h}+1 } = \dots = C_{\textbf{X};\overline{h}} = 0$ versus the alternative that $C_{\textbf{X};h}\neq 0$ for a lag $h$ between $\underline{h}$ and $\overline{h}$ is based on the statistic \begin{align} \label{ljungbox} Q_{\underline{h},\overline{h}} = n^2 \sum_{h=\underline{h}}^{\overline{h}} \frac{1}{n-h} \mathrm{tr} ( \hat C_{\textbf{X};h}^\top \hat C_{\textbf{X};0}^{-1} \hat C_{\textbf{X};h}^{\phantom{-1}}\hat C_{\textbf{X};0 } ^{-1}).
\end{align} Under regularity conditions, $Q_{\underline{h},\overline{h}}$ is asymptotically distributed as a $\chi^2 _{d ^2(\overline{h}-\underline{h}+1)}$ random variable if the multivariate process $(\textbf{X}_j\colon j\in{\mathbb Z})$ on the first $d$ principal directions follows a VMA$(q)$ model and $\underline{h}>q$. For practical implementation, one computes iteratively $Q_{1,\overline{h}}, Q_{2,\overline{h}},\ldots$ and selects the order $q$ as the largest $\underline{h}$ such that $Q_{\underline{h},\overline{h}}$ is significant, but $Q_{\underline{h}+h,\overline{h}}$ is insignificant for all $h>0$. Alternatively, the well-known AICC criterion can be utilized. Algorithm \ref{fia} allows for the computationally efficient maximization of the likelihood function through the use of its innovation form; see Chapter 11.5 of Brockwell and Davis~\cite{brockwell}. The AICC criterion is then given by \begin{align}\label{AICC} \mathrm{AICC}(q)=-2 \ln L (\Theta_1,\dots,\Theta_q,\Sigma) + \frac{2 n d ( q d^2 +1 )}{n d - qd^2 -2 }, \end{align} where $\Theta_1,\ldots,\Theta_q$ are the fitted VMA coefficient matrices and $\Sigma$ the fitted innovation covariance matrix. The minimizer of \eqref{AICC} is selected as the order of the FMA process. Both methods are compared in Section~\ref{sec:sim}. \subsection{Functional FPE criterion} \label{sec:fFPE} In this section, a criterion that allows choosing $d$ and $q$ simultaneously is introduced. A similar criterion was established in Aue et al.\ \cite{aue}, based on a decomposition of the functional mean squared prediction error.
Note that, due to the orthogonality of the eigenfunctions $(\nu_i\colon i\in{\mathbb N})$ and the fact that $\hat X_{n+1,k}$ lives in $\mathcal{V}_d$, \begin{align} \label{decomposition} {\rm E}\big[ \Vert X_{n+1} - \hat X_{n+1,k} \Vert ^ 2\big] &= {\rm E} \big[\Vert P_{\mathcal{V}_d} (X_{n+1} - \hat X_{n+1,k}) \Vert ^2 \big]+ {\rm E}\big[ \Vert (I_H-P_{\mathcal{V}_d}) X_{n+1} \Vert ^2\big]. \end{align} The second summand in \eqref{decomposition} satisfies ${\rm E}[ \Vert (I_H-P_{\mathcal{V}_d}) X_{n+1} \Vert ^2] = {\rm E}[ \Vert \sum_{i>d} \langle X_{n+1} , \nu_i \rangle \nu_i \Vert ^2] = \sum_{i>d} \lambda_i$. The first summand in \eqref{decomposition} is, due to the isometric isomorphism between $\mathcal{V}_d$ and ${\mathbb R}^d$, equal to the mean squared prediction error of the vector model fit on the $d$-dimensional principal subspace. It can be shown using the results of Lai and Lee \cite{lai} that it is of order $\mathrm{tr} (C_\mathbf{Z}) + qd\,\mathrm{tr}(C_\mathbf{Z})/n$, where $C_\textbf{Z}$ denotes the covariance matrix of the innovations of the vector process. Using the matrix version $\mathbf{V}_n$ of the operator $V_n$ given through Algorithm~\ref{fia} as a consistent estimator for $C_{\textbf{Z}}$, the functional FPE criterion \begin{align} \text{fFPE}(d,q) = \frac{n + q \,d}{n}\, \mathrm{tr}(\mathbf{V}_n)+\sum_{i>d}\hat\lambda_i \label{fFPE} \end{align} is obtained. It can be minimized over both $d$ and $q$ to select the dimension of the principal subspace and the order of the FMA process jointly. As is noted in Aue et al.\ \cite{aue}, where a similar criterion is proposed for the selection of the order of an FAR$(p)$ model, the fFPE method is fully data driven: no further selection of tuning parameters is required. \section{Simulation evidence} \label{sec:sim} \subsection{Simulation setting} \label{subsec:sim:setting} In this section, results from Monte Carlo simulations are reported.
The simulation setting was as follows. Using the first $D$ Fourier basis functions $f_1,\ldots,f_D$, the $D$-dimensional subspace $G^D={\rm \overline{sp}}\{f_1,\ldots,f_D\}$ of $H$ was generated following the setup in Aue et al.\ \cite{aue}; the isometric isomorphism between ${\mathbb R}^D$ and $G^D$ was then utilized to represent elements in $G^D$ by $D$-dimensional vectors and operators acting on $G^D$ by $D\times D$ matrices. Accordingly, $n+q$ $D$-dimensional random vectors serving as innovations for an FMA$(q)$ model and $q$ $D\times D$ matrices serving as operators were generated. Two different settings were of interest: processes possessing covariance operators with slowly and quickly decaying eigenvalues. These cases were represented by selecting two sets of standard deviations for the innovation process, namely \begin{align} \sigma_{\text{slow}} = (i^{-1}\colon i=1,\dots,D) \qquad\mbox{and}\qquad \sigma_{\text{fast}} = (2^{-i} \colon i=1,\dots,D). \end{align} With this, innovations \begin{align*} \varepsilon_j= \sum_{i=1}^{D} c_{j,i} f_i, \qquad j=1-q,\ldots,n, \end{align*} were simulated, where the $c_{j,i}$ are independent normal random variables with mean $0$ and standard deviation $\sigma_{\cdot,i}$, the $\cdot$ being replaced by either slow or fast, depending on the setting. The parameter operators $\tilde\theta_\ell$, for $\ell=1,\dots,q$, were chosen at random by generating $D\times D$ matrices, whose entries $\langle \tilde\theta_\ell f_i , f_{i'} \rangle $ were independent zero mean normal random variables with variance $\sigma_{\cdot,i}\sigma_{\cdot,i'}$. The matrices were then rescaled to have spectral norm $1$. Combining the foregoing, the FMA($q$) process \begin{align} \label{simfma} X_j = \sum_{\ell=1}^q\theta_\ell \varepsilon_{j-\ell} + \varepsilon_j, \qquad j=1,\ldots,n, \end{align} was simulated, where $\theta_\ell=\kappa_\ell \tilde\theta_\ell$ with $\kappa_\ell$ chosen to ensure invertibility of the FMA process.
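The simulation design above can be sketched in a few lines by working directly with the Fourier coordinate vectors; the specific values of $D$, $q$, $n$ and $\kappa$, and the choice of the ``fast'' decay, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
D, q, n, kappa = 15, 2, 300, 0.8
sigma = 2.0 ** -np.arange(1, D + 1)        # the "fast" decay setting

# innovations: coordinate vectors of eps_j in the Fourier basis
eps = rng.standard_normal((n + q, D)) * sigma

# random operators with entries of variance sigma_i * sigma_i',
# rescaled to spectral norm 1 and damped by kappa for invertibility
theta = []
for _ in range(q):
    M = rng.standard_normal((D, D)) * np.sqrt(np.outer(sigma, sigma))
    theta.append(kappa * M / np.linalg.norm(M, 2))

# FMA(q): X_j = eps_j + sum_l theta_l eps_{j-l}, in coordinates
X = np.empty((n, D))
for j in range(n):
    X[j] = eps[q + j] + sum(th @ eps[q + j - l - 1]
                            for l, th in enumerate(theta))
```

Each row of `X` collects the Fourier coefficients of one simulated curve; a common damping factor $\kappa$ for all $\theta_\ell$ is used here for simplicity.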
In the following section, the performance of the proposed estimator is evaluated, and compared and contrasted with other methods available in the literature for the special case of FMA(1) processes, in a variety of situations. \subsection{Estimation of FMA(1) processes}\label{sec:fma1} In this section, the performance of the proposed method is compared to two approaches introduced in Turbillon et al.\ \cite{turbillon2} for the special case of FMA(1) processes. These methods are based on the following idea. Denote by $C_\varepsilon$ the covariance operator of $(\varepsilon_n\colon n\in{\mathbb Z})$. Observe that since $C_{X;1}= \theta_1 C_\varepsilon$ and $C_{X} = C_\varepsilon + \theta_1 C_\varepsilon \theta_1^*$, it follows that $\theta_1 C_X = \theta_1 C_\varepsilon+\theta_1^2C_\varepsilon \theta_1^*=C_{X;1}+\theta_1^2C_{X;1}^*$, and in particular \begin{align} \theta_1 ^2 C_{X;1}^*-\theta_1 C_X+ C_{X;1}=0. \label{lquad} \end{align} The estimators in Turbillon et al.\ \cite{turbillon2} are based on solving the quadratic equation \eqref{lquad} for $\theta_1$. The first of these only works under the restrictive assumption that $\theta_1$ and $C_\varepsilon$ commute. In that case, solving \eqref{lquad} is equivalent to solving the univariate equations generated by individually projecting \eqref{lquad} onto the eigenfunctions of $C_X$. The second approach is inspired by the Riesz--Nagy method. It relies on regarding \eqref{lquad} as a fixed-point equation and setting up a corresponding fixed-point iteration. Since solutions may not exist in $H$, suitable projections have to be applied. Consistency of both estimators is established in Turbillon et al.\ \cite{turbillon2}. To compare the performance of the methods, FMA$(1)$ time series were simulated as described in Section~\ref{subsec:sim:setting}.
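In finite dimensions, the fixed-point reading of \eqref{lquad}, namely $\theta_1 = (C_{X;1} + \theta_1^2 C_{X;1}^*)C_X^{-1}$, can be sketched as below. This is only a schematic matrix version of the Riesz--Nagy-inspired iteration (no projection step; function and variable names are ours), applied to population covariances built from a known invertible $\theta_1$:

```python
import numpy as np

def iterate_theta1(C_lag1, C0, n_iter=200):
    """Fixed-point iteration theta <- (C_{X;1} + theta^2 C_{X;1}^T) C_X^{-1},
    a finite-dimensional sketch of the quadratic equation for theta_1."""
    C0_inv = np.linalg.inv(C0)
    theta = np.zeros_like(C0)
    for _ in range(n_iter):
        theta = (C_lag1 + theta @ theta @ C_lag1.T) @ C0_inv
    return theta

# Worked example: population covariances of an invertible MA(1) with known theta_1.
theta_true = np.array([[0.4, 0.1],
                       [0.0, 0.3]])
C_eps = np.diag([1.0, 0.5])                      # innovation covariance
C0 = C_eps + theta_true @ C_eps @ theta_true.T   # C_X
C_lag1 = theta_true @ C_eps                      # C_{X;1}

theta_hat = iterate_theta1(C_lag1, C0)
```

As in the scalar MA(1) case, the quadratic has two solutions; starting the iteration at zero makes it converge to the invertible one, here `theta_true`.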
As a measure of comparison, the estimation error $\Vert \theta_1 - \hat\theta_1\Vert_{\mathcal{L}}$ was used after computing $\hat\theta_1$ with the three competing procedures. Rather than selecting the dimension of the subspace via Algorithm \ref{IND}, the estimation error is computed for $d=1,\ldots,5$. The results are summarized in Table~\ref{esterror0.8}, where estimation errors were averaged over $1{,}000$ repetitions for each specification, using sample sizes $n=100$, $500$ and $1{,}000$. \begin{table}[ht] \centering \begin{tabular}{rrrrrrrrrrr} \hline & & \multicolumn{3}{c}{$n=100$} & \multicolumn{3}{c}{$n=500$} & \multicolumn{3}{c}{$n=1000$} \\ &$d$ & Proj & Iter & Inn &Proj & Iter & Inn & Proj & Iter & Inn \\ \hline \multirow{5}{*}{$\sigma_\text{fast}$} & 1 & 0.539 & 0.530 & 0.514 & 0.527 & 0.521 & 0.513 & 0.518 & 0.513 & 0.508 \\ &2 & 0.528 & 0.433 & \bf 0.355 & 0.508 & 0.391 & 0.287 & 0.500 & 0.386 & 0.277 \\ &3 & 0.533 & 0.534 & 0.448 & 0.512 & 0.467 & \bf 0.235 & 0.503 & 0.460 & \bf 0.197 \\ &4 & 0.534 & 0.650 & 0.582 & 0.513 & 0.573 & 0.276 & 0.504 & 0.567 & 0.216 \\ &5 & 0.534 & 0.736 & 0.646 & 0.513 & 0.673 & 0.311 & 0.504 & 0.662 & 0.239 \\ \hline \multirow{5}{*}{$\sigma_\text{slow}$} & 1 & 0.610 & 0.602 & 0.588 & 0.579 & 0.574 & 0.566 & 0.575 & 0.573 & 0.569 \\ &2 & 0.614 & 0.527 & \bf 0.513 & 0.581 & 0.487 & 0.434 & 0.577 & 0.483 & 0.422 \\ &3 & 0.618 & 0.552 & 0.610 & 0.583 & 0.504 & \bf 0.389 & 0.578 & 0.500 & 0.362 \\ &4 & 0.620 & 0.591 & 0.861 & 0.584 & 0.531 & 0.402 & 0.579 & 0.522 & \bf 0.344 \\ &5 & 0.620 & 0.630 & 1.277 & 0.584 & 0.556 & 0.448 & 0.579 & 0.548 & 0.358 \\ \hline \end{tabular} \label{esterror0.8} \caption{Estimation error $\Vert \theta_1 -{\hat{\theta}}_1 \Vert_{\mathcal{L}}$, with $\theta_1=\kappa_1\tilde\theta_1$ and $\kappa_1 = 0.8$, with $\hat{\theta}_1$ computed with the projection method (Proj) and the iterative method (Iter) of \cite{turbillon2}, and the proposed method based on the functional
Innovations Algorithm (Inn). The smallest estimation error is highlighted in bold for each case.} \end{table} For all three sample sizes, the operator kernel estimated with the proposed algorithm is closest to the true kernel. As expected, the optimal dimension increases with the sample size, especially in the case where the eigenvalues decay slowly. The projection method does not perform well, which is also to be expected, because the condition that $\theta_1$ and $C_\varepsilon$ commute is violated. One can see that the choice of $d$ is crucial: especially for small sample sizes, the estimation error of the proposed method explodes for large $d$. To get an intuition for the shape of the estimators, the kernels resulting from the different estimation methods, using $n=500$ and $\kappa_1=0.8$, are plotted in Figure~\ref{kernelslow0.8}. It can again be seen that the projection method yields results that differ significantly from both the truth and the other two methods, which produce estimated operator kernels whose shapes look roughly similar to the truth. \begin{figure} \caption{Estimated operator kernel of the simulated FMA$(1)$ process with $\kappa_1=0.8$, $d=3$ and $\sigma_\text{fast}$.}\label{kernelslow0.8} \end{figure} \subsection{Model selection} In this section, the performance of the different model selection methods introduced in Section~\ref{sec:selection} is demonstrated. To do so, FMA(1) processes with weights $\kappa_1=0.4$ and $0.8$ were simulated as in the previous section. In addition, two different FMA$(3)$ processes were simulated according to the setting described in Section \ref{subsec:sim:setting}, namely \begin{itemize}\itemsep-.2ex \item Model 1: $\kappa_1=0.8$, $\kappa_2=0.6$, and $\kappa_3=0.4$. \item Model 2: $\kappa_1=0$, $\kappa_2=0$, and $\kappa_3=0.8$.
\end{itemize} For sample sizes $n=100$, $500$ and $1{,}000$, $1{,}000$ realizations of both Model 1 and Model 2 were simulated using $\sigma_\text{slow}$ and $\sigma_\text{fast}$. The estimation process was as follows. First, the dimension $d$ of the principal projection subspace was chosen using Algorithm \ref{IND} with TVE such that $P=0.8$. With this selection of $d$, the LB and AICC criteria described in Section~\ref{sec:AICC} were applied to choose $q$. Second, the fFPE criterion was used for a simultaneous selection of $d$ and $q$. The results are summarized in Figures~\ref{boxplotma1} and \ref{boxplotma3}. \begin{figure} \caption{Model selection for different MA(1) processes. The left three plots in each small figure give the $d$ chosen by total variation explained with $P=0.8$ (TVE) and by Algorithm \ref{IND}.}\label{boxplotma1} \end{figure} \begin{figure} \caption{Model selection for different MA(3) processes. Labeling of procedures is as in Figure~\ref{boxplotma1}.}\label{boxplotma3} \end{figure} Figures~\ref{boxplotma1} and \ref{boxplotma3} allow for a number of interesting observations. For both the FMA$(1)$ and the FMA$(3)$ examples, the model order is estimated well. In all cases, especially for sample sizes larger than 100, all three selection methods (AICC, LB, FPEq) for the choice of $q$ yield the correct model order (1 or 3). The Ljung--Box (LB) method seems to produce the most stable results. The methods for the choice of $d$ are more heterogeneous. The TVE method yields the most stable results across different sample sizes. For $\sigma_\text{fast}$, it almost always selects $d=2$, and for $\sigma_\text{slow}$ the choice varies between $d=2$ and $d=3$. However, the TVE method seems to underestimate $d$: often there appears to be dependence left in the data, as can be seen from the selection of $d$ by Algorithm \ref{IND}. Especially in the FMA$(3)$ case and Model~1, this algorithm yields some large choices for $d$ of about $7$ or $8$.
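The two fully data-driven selection rules compared above can be sketched as follows. This Python fragment is an illustration under assumed inputs (the eigenvalue sequence and the grid of innovation-covariance traces are made up), not the authors' implementation:

```python
import numpy as np

def tve_dimension(eigvals, P=0.8):
    """Smallest d whose leading eigenvalues explain at least a fraction P of the variance."""
    ratios = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(ratios, P) + 1)

def ffpe(trace_Vn, eigvals, n, d, q):
    """fFPE(d, q) = (n + q d)/n * tr(V_n) + sum_{i > d} lambda_i."""
    return (n + q * d) / n * trace_Vn + eigvals[d:].sum()

n = 500
eigvals = np.array([2.0 ** -(i + 1) for i in range(10)])   # quickly decaying spectrum

# TVE with P = 0.8 keeps the smallest d explaining 80% of the variance.
d_tve = tve_dimension(eigvals, P=0.8)

# fFPE scans a (d, q) grid; tr(V_n) per candidate model would come from Algorithm fia,
# and is replaced here by a hypothetical surrogate that shrinks with d and grows with q.
trace_grid = {(d, q): 0.3 + 0.5 / d + 0.05 * q for d in (1, 2, 3) for q in (1, 2)}
scores = {dq: ffpe(tr, eigvals, n, *dq) for dq, tr in trace_grid.items()}
d_star, q_star = min(scores, key=scores.get)
```

The fFPE scan trades the penalty $(n+qd)/n$ against the left-out eigenvalue mass, which is why its chosen $d$ tends to grow with the sample size.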
The $d$ chosen by FPEd seems to increase with the sample size: this is to be expected, as the variance of the estimators decreases with increasing sample size, so that the resulting predictors become more precise even for high-dimensional models. This holds especially for $\sigma_\text{slow}$, where a larger $d$ is needed to explain the dynamics of the functional process. A similar trade-off is occasionally observed for Algorithm \ref{IND}. \section{Application to traffic data} \label{sec:app} In this section, the proposed estimation method is applied to vehicle traffic data provided by the Autobahndirektion S\"udbayern. The dataset consists of measurements at a fixed point on a highway (A92) in Southern Bavaria, Germany. Recorded is the average velocity per minute, from 1/1/2014 00:00 to 30/06/2014 23:59, on three lanes. After taking care of missing values and outliers, the velocity per minute was averaged over the three lanes, weighted by the number of vehicles per lane. This leads to $1{,}440$ preprocessed and cleaned data points per day, which were transformed into functional data using the first $30$ Fourier basis functions with the \texttt{R} package \texttt{fda}. The result is a functional time series $(X_j\colon j=1,\ldots,n=119)$, which is deemed stationary and exhibits temporal dependence, as evidenced in Klepsch et al.\ \cite{KKW}. The goal then is to approximate the temporal dynamics of this stationary functional time series with an FMA fit. Observe that the plots of the spectral norms $\Vert \hat C_{\mathbf{X};h}\hat C_{\mathbf{X};0}^{-1}\Vert_{\mathcal{L}}$ for $h=1,\dots,5$ in Figure \ref{acf} display a pattern typical for MA models of low order. Here $\mathbf{X}$ stands for the multivariate auxiliary model of dimension $d$ obtained from projection onto the corresponding principal subspace.
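The diagnostic shown in Figure \ref{acf} can be mimicked numerically. The sketch below (an assumed setup with simulated scores, not the original \texttt{R} code) computes the spectral norms $\Vert \hat C_h \hat C_0^{-1}\Vert$ for the score vectors of an MA(1) series, for which the lag-one norm should stand out:

```python
import numpy as np

def cross_corr_norms(X, max_lag=5):
    """Spectral norms of hat{C}_h hat{C}_0^{-1} for an (n x d) array of score vectors."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    C0_inv = np.linalg.inv(Xc.T @ Xc / n)
    return [np.linalg.norm((Xc[h:].T @ Xc[:-h] / n) @ C0_inv, 2)
            for h in range(1, max_lag + 1)]

# Scores of a simulated d = 3 MA(1) series: the lag-1 norm dominates, higher lags are noise.
rng = np.random.default_rng(0)
eps = rng.normal(size=(501, 3))
scores = eps[1:] + 0.6 * eps[:-1]
norms = cross_corr_norms(scores)
```

A sharp drop after lag $q$ in such a plot is the pattern described in the text as typical for MA models of low order.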
\begin{figure} \caption{Spectral norm of estimated cross-correlation matrices for lags $h=1,\dots,5$ of the vector model based on principal subspaces of dimension $d=1$ to $d=5$ (from left to right).}\label{acf} \end{figure} Consequently, the methodology introduced in Sections~\ref{sec:methodology} and \ref{sec:selection} was applied to the data. First, the covariance operator $C_{X;0}$ and its first $15$ eigenelements $(\lambda_1,\nu_1), \dots,(\lambda_{15},\nu_{15})$ were estimated to construct the vector process $(\hat{\mathbf{X}}_j\colon j=1,\ldots,n)$, where $\hat{\mathbf{X}}_j=( \langle X_j , \hat\nu_1\rangle,\dots,\langle X_j , \hat\nu_{15}\rangle)^\top$. Then, the methods described in Section~\ref{sec:selection} were applied to choose the appropriate dimension $d$ and model order $q$. The first four sample eigenfunctions explained 81\% of the variability, hence the TVE criterion with $P=0.8$ gave $d^*=4$ to initialize Algorithm~\ref{IND}. The hypothesis of independence of the left-out score vector process $(\hat{\mathbf{X}}_j[4\!\!:\!\!15]\colon j=1,\ldots,n)$ was rejected with $p$-value $0.03$. Here $\hat{\mathbf{X}}_j[i\!\!:\!\!i']$ is used as notation for the vector comprised of coordinates $i,\ldots,i'$, with $i\leq i'$, of the original 15-dimensional vector $\hat{\mathbf{X}}_j$. In the next step of Algorithm \ref{IND}, $d^*$ is increased to $5$. A second independence test was run on $(\hat{\mathbf{X}}_j[5\!:\!15]\colon j=1,\ldots,n)$ and did not result in a rejection; the corresponding $p$-value was $0.25$. This analysis led to using $d=5$ as dimension of the principal subspace to conduct model selection with the methods of Section~\ref{sec:AICC}. Since TVE indicated $d=4$, the selection procedures were applied also with this choice. In both cases, the AICC criterion in \eqref{AICC} and the LB criterion in \eqref{ljungbox} opted for $q=1$, in accordance with the spectral norms observed in Figure~\ref{acf}.
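The portmanteau-type independence tests applied to the left-out score vectors can be sketched with a generic multivariate Ljung--Box statistic; the exact test statistic used in Algorithm \ref{IND} may differ, so the following is only an illustration (names and data are ours):

```python
import numpy as np

def ljung_box_multivariate(X, H=10):
    """Hosking-type multivariate portmanteau statistic for an (n x d) score array.

    Under independence it is approximately chi-square with d*d*H degrees of freedom;
    a large value is evidence of remaining serial dependence in the left-out scores."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    C0_inv = np.linalg.inv(Xc.T @ Xc / n)
    Q = 0.0
    for h in range(1, H + 1):
        Ch = Xc[h:].T @ Xc[:-h] / n
        Q += np.trace(Ch.T @ C0_inv @ Ch @ C0_inv) / (n - h)
    return n * n * Q

# For iid scores the statistic should be close to its chi-square mean d*d*H = 20.
rng = np.random.default_rng(3)
iid_scores = rng.normal(size=(300, 2))
Q = ljung_box_multivariate(iid_scores, H=5)
```

Comparing `Q` to a chi-square quantile yields the $p$-values reported above; a rejection triggers the increase of $d^*$ in the algorithm.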
Simultaneously choosing $d$ and $q$ with the fFPE criterion of Section~\ref{sec:fFPE} yields $d=3$ and $q=1$. After the model selection step, the operator of the chosen FMA$(1)$ process was estimated using Algorithm \ref{fia}. Similarly, the methods introduced in Section~\ref{sec:fma1} were applied. Figure \ref{realtheta} displays the kernels of the estimated integral operators for all methods, for $d=3$ and $d=4$. The plots indicate that, on this particular data set, all three methods produce estimated operators whose kernels have roughly similar shapes. The similarity is also reflected in the covariances of the estimated innovations. For $d=3$, the trace of the covariance matrix is $43.14$, $45.40$ and $44.41$ for the Innovations Algorithm, the iterative method and the projection method, respectively. For $d=4$, the trace of the covariance of the estimated innovations is $48.19$, $46.00$ and $45.74$ for the same methods in the same order. \begin{figure} \caption{Estimated FMA$(1)$ kernel with the three methods for $d=3$ (first row) and $d=4$ (second row).}\label{realtheta} \end{figure} \section{Conclusions} \label{sec:conclusion} This paper is the first to introduce a complete methodology to estimate the dynamics of any stationary, causal and invertible functional time series. This is achieved by approximating the functional linear filters in the causal representation with functional moving average processes obtained from an application of the functional Innovations Algorithm. The consistency of the estimators is verified as the main theoretical contribution. The proof relies on the fact that $d$-dimensional projections of FMA($q$) processes are isomorphic to $d$-dimensional VMA($q^*$) models, with $q^*\leq q$. Introducing appropriate sequences of increasing subspaces of $H$, consistency can be established in the two cases of known and unknown principal projection subspaces.
This line of reasoning follows multivariate techniques given in Lewis and Reinsel \cite{lewis} and Mitchell and Brockwell \cite{mitchell}. The theoretical underpinnings are accompanied by model selection procedures facilitating the practical implementation of the proposed method. An independence test is introduced to select the dimension of the principal projection subspace, which can serve as a starting point for the suggested order selection procedures based on AICC and Ljung--Box criteria. Additionally, an fFPE criterion is established that jointly selects the dimension $d$ and the order $q$. Illustrative results from a simulation study and the analysis of traffic velocity data show that the practical performance of the proposed method is satisfactory, and at least competitive with other methods available in the literature for the case of FMA(1) processes. Future research could focus on an extension of the methodology to FARMA processes in order to increase parsimony in the estimation. It should be noted, however, that this is not a straightforward task, as identifying the dynamics of the projection of an FARMA$(p,q)$ process onto a finite-dimensional space is an unresolved problem. In addition, the proposed methodology could be applied to offer an alternative route to estimating the spectral density operator, a principal object in the study of functional time series in the frequency domain; see Aue and van Delft \cite{avd}, H\"ormann et al.\ \cite{hoermann} and Panaretos and Tavakoli \cite{panaretos}. \section{Proofs} \label{sec:proof} The notion of $L^p$-$m$-approximability is utilized in the proofs. A version of this notion was used for multivariate time series in Aue et al.\ \cite{AHHR} and then translated to the functional domain by H\"ormann and Kokoszka~\cite{weaklydep}. The definition is as follows. \begin{definition} \label{def:lpm} {\rm Let $p\geq 1$.
A sequence $(X_j\colon j\in\mathbb{Z})$ with values in $L^p_H$ is called {\it $L^p$-$m$-approximable}\/ if it can be represented as a functional Bernoulli shift \[ X_j=f(\varepsilon_j,\varepsilon_{j-1},\ldots), \qquad j\in\mathbb{Z}, \] with a sequence of independent, identically distributed random elements $(\varepsilon_j\colon j\in\mathbb{Z})$ taking values in a measurable space $S$, potentially different from $H$, and a measurable function $f\colon S^\infty\to H$ such that \[ \sum_{m=0}^\infty\big({\rm E}[\|X_j-X_{j}^{(m)}\|^p]\big)^{1/p}<\infty, \] where $X_{j}^{(m)}=f(\varepsilon_j,\ldots,\varepsilon_{j-m+1},\varepsilon_{j-m}^{(j)},\varepsilon_{j-m-1}^{(j)},\ldots)$ with $(\varepsilon_{j}^{(i)}\colon j\in{\mathbb Z})$, $i\in{\mathbb N}_0$, being independent copies of $(\varepsilon_j\colon j\in\mathbb{Z})$. } \end{definition} Conditions can be established for most of the common linear and nonlinear functional time series models to be $L^p$-$m$-approximable. In particular, the functional linear processes $(X_j\colon j\in\mathbb{Z})$ defined in \eqref{eq:flp} are naturally included if the summability condition $\sum_{m=1}^\infty\sum_{\ell=m}^\infty\|\psi_\ell\|_{{\mathcal{L}}}<\infty$ is met (see Proposition 2.1 in H\"ormann and Kokoszka \cite{weaklydep}). \begin{proof}[\bf Proof of Theorem \ref{l4mapp}] Using that $(X_j\colon j\in{\mathbb Z})$ is $L^4$-$m$-approximable, write \begin{align*} X_j(k) &=(f(\varepsilon_j,\varepsilon_{j-1},\dots),\ldots,f(\varepsilon_{j-k+1},\varepsilon_{j-k},\dots))^\top \\ & = g(\varepsilon_j,\varepsilon_{j-1},\dots), \end{align*} where $g\colon S^\infty\to H^k$ is defined accordingly.
For $k,m\in{\mathbb N}$ and $j\in{\mathbb Z}$, define \begin{align*} X_j^{(m)}(k) &= \big( f(\varepsilon_{j},\dots,\varepsilon_{j-m+1},\varepsilon_{j-m}^{(j)},\varepsilon_{j-m-1}^{(j)},\ldots),\ldots, f(\varepsilon_{j-k+1},\dots,\varepsilon_{j-m+1},\varepsilon_{j-m}^{(j)},\varepsilon_{j-m-1}^{(j)},\dots)\big)^\top \\ &=g(\varepsilon_j,\varepsilon_{j-1},\dots,\varepsilon_{j-m+1},\varepsilon_{j-m}^{(j)},\varepsilon_{j-m-1}^{(j)},\ldots). \end{align*} Now, by definition of the norm in $H^k$, \begin{align} \sum_{m=k}^{\infty} \big({\rm E}\big[\Vert X_m(k) - X_m^{(m)}(k)\Vert ^4\big]\big)^{1/4} &= \sum_{m=k}^{\infty}\bigg(\sum_{i=0}^{k-1} {\rm E}\big[\Vert X_{m-i}-X_{m-i}^{(m-i)}\Vert^4\big]\bigg)^{1/4} \notag \\ &\leq\sum_{m=k}^{\infty}\bigg(\sum_{i=0}^{k-1}{\rm E}\big[\Vert X_{m-i}-X_{m-i}^{(m-k)}\Vert^4\big]\bigg)^{1/4}\notag \\ &=\sum_{m=k}^{\infty}\big(k\, {\rm E}\big[\Vert X_{m-k}-X_{m-k}^{(m-k)}\Vert^4\big]\big)^{1/4} \notag\\ &= k ^{1/4}\sum_{m=0}^{\infty} \big({\rm E}\big[\Vert X_{m}-X_{m}^{(m)}\Vert^4\big]\big)^{1/4}, \label{xrk} \end{align} where the inequality follows from Assumption \ref{assumptions}, since ${\rm E}[\Vert X_j - X_j^{(m-i)}\Vert ^4] \leq {\rm E}[\Vert X_j - X_j^{(m-k)}\Vert^4]$ for all $0\leq i\leq k$, and the subsequent equality holds since ${\rm E}[\Vert X_{m-i} - X_{m-i}^{(m-k)}\Vert ^4] = {\rm E}[ \Vert X_{m-k} - X_{m-k}^{(m-k)}\Vert ^4 ]$ by stationarity. But the right-hand side of \eqref{xrk} is finite because $(X_j\colon j\in{\mathbb Z})$ is $L^4$-$m$-approximable by assumption. This shows that $(X_j(k)\colon j\in{\mathbb Z})$ is also $L^4$-$m$-approximable.
To prove the consistency of the estimator $\hat C_{X(k)}$, note that the foregoing implies, by Theorem~3.1 in H\"ormann and Kokoszka~\cite{weaklydep}, that the bound \begin{align*} n\,{\rm E}\big[\Vert \hat C_{X(k)} - C_{X(k)}\Vert_{\mathcal{N}}^2\big] \leq U_{X(k)} \end{align*} holds, where $U_{X(k)}= {\rm E}[\Vert X_1(k) \Vert^4]+ 4 \sqrt{2}({\rm E}[\Vert X_1(k) \Vert^4])^{3/4} \sum_{m=0}^{\infty} ({\rm E}[\Vert X_m(k)- X_m^{(m)}(k) \Vert^4])^{1/4}$ is a constant that does not depend on $n$. Since ${\rm E}[\Vert X_1(k) \Vert ^4] =k\,{\rm E}[\Vert X_1\Vert ^4]$, \eqref{xrk} yields that $U_{X(k)}=kU_X$, which is the assertion. \end{proof} \begin{cor}\label{corhelp} The operators $\hat\beta_{k,i}$ from \eqref{yulewalkerhat} and $\hat \theta_{k,i}$ from \eqref{innov} are related through \begin{align} \hat\theta_{k,i}=\sum_{j=1}^i \hat\beta_{k,j} \hat\theta_{k-j,i-j}, \qquad i=1,\dots,k,\; k\in{\mathbb N}. \label{linkhat} \end{align} \end{cor} \begin{proof}[\bf Proof] The proof is based on the finite-sample versions of the regression formulation of \eqref{blp} and the innovations formulation given in \eqref{innov}. Details are omitted to conserve space. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{autoregressive}] {\em (i)} It is first shown that, for all $x\in H^k$, \begin{align*} \Vert (\hat B (k) - \Pi(k) )(x) \Vert \overset{p}{\rightarrow} 0 \qquad (n\rightarrow \infty), \end{align*} where $\Pi(k)=(\pi_1,\ldots,\pi_k)^\top$ is the vector of the first $k$ operators in the invertibility representation of the functional time series $(X_j\colon j\in{\mathbb Z})$. Define the process $(e_{j,k}\colon j\in{\mathbb Z})$ by letting \begin{align} \label{elk} e_{j,k}=X_j-\sum_{\ell=1}^k\pi_\ell X_{j-\ell} \end{align} and let $I_{H^k}$ be the identity operator on $H^k$.
Note that \begin{align*} \hat B(k) - \Pi(k) &= \hat \Gamma_{1,k,d} \hat\Gamma_{k,d}^{-1} - \Pi(k) \hat\Gamma_{k,d}\hat\Gamma_{k,d}^{-1} + \Pi(k)(I_{H^k}-P_{(k)})\\ &= \big(\hat\Gamma_{1,k,d}-\Pi(k) \hat\Gamma_{k,d}\big)\hat\Gamma_{k,d}^{-1}+\Pi(k)(I_{H^k}-P_{(k)}). \end{align*} Plugging in the estimators defined in \eqref{hatgammakd} and subsequently using \eqref{elk}, it follows that \begin{align*} \hat B(k) - \Pi(k) &= \bigg(\frac{1}{n-k}\sum_{j=k}^{n-1} \big((P_{(k)}X_{j,k}\otimes X_{j+1}) - (P_{(k)} X_{j,k}\otimes\Pi(k) X_{j,k})\big)\bigg)\hat\Gamma_{k,d}^{-1}+\Pi(k)(I_{H^k}-P_{(k)})\\ &=\bigg(\frac{1}{n-k}\sum_{j=k}^{n-1} \big(P_{(k)}X_{j,k}\otimes (X_{j+1} - \Pi(k) X_{j,k})\big)\bigg)\hat\Gamma_{k,d}^{-1}+\Pi(k)(I_{H^k}-P_{(k)})\\ &=\bigg(\frac{1}{n-k}\sum_{j=k}^{n-1} \big(P_{(k)}X_{j,k}\otimes e_{j+1,k}\big)\bigg)\hat\Gamma_{k,d}^{-1}+\Pi(k)(I_{H^k}-P_{(k)}).
\end{align*} Two applications of the triangle inequality imply that, for all $x\in H^k$, \begin{align} \Vert (\hat B(k) - \Pi(k)) (x) \Vert &\leq\bigg\Vert\bigg(\frac{1}{n-k}\sum_{j=k}^{n-1} \big(P_{(k)}X_{j}(k)\otimes e_{j+1,k}\big)\bigg)\hat\Gamma_{k,d}^{-1} (x)\bigg\Vert + \Vert\Pi(k)(I_{H^k}-P_{(k)})(x)\Vert \notag \\ &\leq \bigg\Vert\bigg(\frac{1}{n-k}\sum_{j=k}^{n-1} \big(P_{(k)}X_{j}(k)\otimes (e_{j+1,k}-\varepsilon_{j+1})\big)\bigg)\hat\Gamma_{k,d}^{-1}\bigg\Vert_{{\mathcal{L}}}\notag\\ &\qquad + \bigg\Vert\bigg(\frac{1}{n-k}\sum_{j=k}^{n-1} \big(P_{(k)}X_{j}(k)\otimes \varepsilon_{j+1}\big)\bigg)\hat\Gamma_{k,d}^{-1}\bigg\Vert_{{\mathcal{L}}}+\Vert\Pi(k)(I_{H^k}-P_{(k)})(x)\Vert\notag\\ &\leq \big(\Vert U_{1n}\Vert_{{\mathcal{L}}} + \Vert U_{2n}\Vert_{{\mathcal{L}}}\big)\Vert \hat\Gamma_{k,d} ^{-1}\Vert_{{\mathcal{L}}}+\Vert\Pi(k)(I_{H^k}-P_{(k)})(x)\Vert, \label{help4} \end{align} where $U_{1n}$ and $U_{2n}$ have the obvious definitions. Arguments similar to those used in Proposition~6.4 of Klepsch and Kl\"uppelberg~\cite{kk} yield that the second term on the right-hand side of \eqref{help4} can be made arbitrarily small by increasing $k$. To be more precise, for $\delta>0$, there is $k_\delta\in{\mathbb N}$ such that \begin{align}\label{res6} \Vert\Pi(k)(I_{H^k}-P_{(k)})(x)\Vert<\delta \end{align} for all $k\ge k_\delta$ and all $x\in H^k$. To estimate the first term on the right-hand side of \eqref{help4}, focus first on $\Vert \hat\Gamma_{k,d} ^{-1}\Vert_{{\mathcal{L}}}$.
Using the triangle inequality, $\Vert \hat\Gamma_{k,d} ^{-1}\Vert_{{\mathcal{L}}} \leq \Vert \hat \Gamma_{k,d}^{-1} - \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}}+\Vert \Gamma_{k,d}^{-1}\Vert_{{\mathcal{L}}}$. Theorem 1.2 in Mitchell \cite{mitchell2} and Lemma~6.1 in Klepsch and Kl\"uppelberg \cite{kk} give the bound \begin{align} \Vert \Gamma_{k,d}^{-1}\Vert_{{\mathcal{L}}} \leq \alpha_{d_k}^{-1}, \label{res1} \end{align} where $\alpha_{d_k}$ is the infimum of the eigenvalues of all spectral density operators of $(X_{d_k,j}\colon j\in{\mathbb Z})$. Furthermore, using the triangle inequality and then again Lemma~6.1 of Klepsch and Kl\"uppelberg \cite{kk}, \begin{align} \Vert \hat \Gamma_{k,d}^{-1} - \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}} &= \Vert \hat \Gamma_{k,d}^{-1} (\hat \Gamma_{d,k} - \Gamma_{d,k}) \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}} \notag \\ &\leq \big( \Vert \hat \Gamma_{k,d}^{-1} - \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}} + \Vert \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}} \big) \Vert\hat \Gamma_{d,k} - \Gamma_{d,k}\Vert_{{\mathcal{L}}} \,\alpha_{d_k}^{-1}. \label{help1} \end{align} Hence, following arguments in the proof of Theorem~1 in Lewis and Reinsel \cite{lewis}, \begin{align*} 0 \leq \frac{\Vert \hat \Gamma_{k,d}^{-1} - \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}} }{\alpha_{d_k}^{-1} (\Vert \hat \Gamma_{k,d}^{-1} - \Gamma_{k,d}^{-1} \Vert_{{\mathcal{L}}} +\alpha_{d_k}^{-1})} \leq \Vert\hat \Gamma_{d,k} - \Gamma_{d,k}\Vert_{{\mathcal{L}}}, \end{align*} by \eqref{help1}.
This yields \begin{align}\label{help2} \Vert\hat \Gamma_{d,k}^{-1} - \Gamma_{d,k}^{-1}\Vert_{{\mathcal{L}}} \leq \frac{\Vert\hat \Gamma_{d,k} - \Gamma_{d,k} \Vert_{{\mathcal{L}}} \,\alpha_{d_k}^{-2}}{1-\Vert\hat \Gamma_{d,k} - \Gamma_{d,k}\Vert_{{\mathcal{L}}}\,\alpha_{d_k}^{-1}}. \end{align} Note that, since $P_{(k)}P_k=P_{(k)}$, $\Vert \Gamma_{k,d}\Vert_{{\mathcal{L}}} = \Vert P_{(k)}P_k \Gamma_{k} P_k P_{(k)} \Vert_{{\mathcal{L}}} \leq \Vert P_{k} \Gamma_{k} P_{k}\Vert_{{\mathcal{L}}}$. Also, by Theorem~\ref{l4mapp}, for some positive finite constant $M_1$, ${\rm E}[ \Vert P_k \hat\Gamma_k P_k - P_k \Gamma_k P_k \Vert^2] \leq M_1 k/{(n-k)}$. Therefore, \begin{align} \Vert \hat\Gamma_{d,k}- \Gamma_{d,k} \Vert = O_p\bigg(\sqrt{\frac{k}{n-k}}\bigg). \end{align} Hence, the second part of Assumption~\ref{assumptions} and \eqref{help2} lead first to $\Vert\hat \Gamma_{d,k}^{-1} - \Gamma_{d,k}^{-1}\Vert_{{\mathcal{L}}} \overset{p}{\rightarrow} 0$ and, consequently, combining the above arguments, \begin{align} \Vert \hat\Gamma_{k,d} ^{-1}\Vert_{{\mathcal{L}}} = O_p(\alpha_{d_k}^{-1}). \label{res5} \end{align} Next consider $U_{1n}$ in \eqref{help4}. With the triangle and Cauchy--Schwarz inequalities, calculate \begin{align*} {\rm E}[\Vert U_{1n}\Vert] &={\rm E}\bigg[\bigg\Vert \frac{1}{n-k} \sum_{j=k}^{n-1} P_{(k)}X_{j}(k) \otimes (e_{j+1,k}-\varepsilon_{j+1}) \bigg\Vert_{{\mathcal{L}}}\bigg] \\ &\leq \frac{1}{n-k} \sum_{j=k}^{n-1}{\rm E}\big[\big\Vert P_{(k)}X_{j}(k) \otimes (e_{j+1,k}-\varepsilon_{j+1})\big\Vert_{{\mathcal{L}}}\big] \\ &\leq \frac{1}{n-k} \sum_{j=k}^{n-1} \big({\rm E}[\Vert P_{(k)}X_{j}(k)\Vert^2] \big)^{1/2} \big({\rm E}[\Vert e_{j+1,k}-\varepsilon_{j+1}\Vert^2] \big)^{1/2}.
\end{align*} The stationarity of $(X_j\colon j\in{\mathbb Z})$ and the fact that $X_j\in L^2_H$ imply that, for a positive finite constant $M_2$, \begin{align} {\rm E}[\Vert U_{1n}\Vert_{\mathcal{L}}] & \leq\big({\rm E}[\Vert P_{(k)} X_{j}(k) \Vert^2] \big)^{1/2} \big({\rm E}[\Vert e_{j+1,k} -\varepsilon_{j+1} \Vert^2] \big)^{1/2} \notag \\ &\leq \sqrt{k} \big( {\rm E}[\Vert P_{\mathcal{V}_{d_k}} X_0 \Vert ^2] \big)^{1/2} \bigg({\rm E}\bigg[ \bigg\Vert \sum_{\ell>k} \pi_\ell X_{1-\ell} + \sum_{\ell=1}^{k} \pi_{\ell}(I_H-P_{\mathcal{V}_{d_{k+1-\ell}}})X_{1-\ell}\bigg\Vert^2\bigg]\bigg)^{1/2} \notag\\ &\leq \sqrt{k} \bigg(2 {\rm E}\bigg[ \bigg\Vert \sum_{\ell>k} \pi_\ell X_{1-\ell} \bigg\Vert^2\bigg] + 2{\rm E}\bigg[\bigg\Vert \sum_{\ell=1}^{k} \pi_{\ell} (I_H-P_{\mathcal{V}_{d_{k+1-\ell}}}) X_{1-\ell} \bigg\Vert^2\bigg]\bigg)^{1/2} \notag \\ &= M_2\sqrt{k(J_1 + J_2)} \notag \\ & \leq M_2\sqrt{k}(\sqrt{J_1} + \sqrt{J_2}), \label{help5} \end{align} where $J_1$ and $J_2$ have the obvious definitions. Since, for $X\in L^2_H$, ${\rm E}[ \Vert X\Vert^2] = \Vert C_X\Vert_{\mathcal{N}}$, the term $J_1$ can be bounded as follows. Observe that \begin{align*} J_1 &=\bigg\Vert {\rm E}\bigg[ \sum_{\ell>k} \pi_\ell X_{1-\ell} \otimes \sum_{\ell'>k} \pi_{\ell'} X_{1-\ell'} \bigg]\bigg\Vert_{{\mathcal{N}}} \\ &=\bigg\Vert\sum_{\ell,\ell'>k} \pi_\ell C_{X;\ell-\ell'} \pi_{\ell'}^* \bigg\Vert_{{\mathcal{N}}} \\ &\leq \sum_{\ell,\ell'>k} \Vert \pi_\ell\Vert_{{\mathcal{L}}} \Vert \pi_{\ell'}\Vert_{{\mathcal{L}}} \Vert C_{X;\ell-\ell'} \Vert_{{\mathcal{N}}}. \end{align*} Now $C_{X;\ell-\ell'}\in{\mathcal{N}}$ for all $\ell,\ell'\in{\mathbb Z}$, hence $\Vert C_{X;\ell-\ell'} \Vert_{{\mathcal{N}}}\leq M_3$ and $J_1\leq M_3 \big(\sum_{\ell>k} \Vert \pi_\ell\Vert_{{\mathcal{L}}}\big)^2$.
Concerning $J_2$, note first that, since ${\rm E}[ \Vert X\Vert^2] = \Vert C_X\Vert_{\mathcal{N}}$, \begin{align} J_2 &= \bigg\Vert {\rm E} \bigg [\sum_{\ell=1}^k \pi_\ell( I_H-P_{\mathcal{V}_{d_{k+1-\ell}}})X_{1-\ell} \otimes \sum_{\ell'=1}^k \pi_{\ell'} ( I_H-P_{\mathcal{V}_{d_{k+1-\ell'}}})X_{1-\ell'} \bigg] \bigg\Vert_{{\mathcal{N}}}. \notag \end{align} Using the triangle inequality together with properties of the nuclear operator norm and the definition of $C_{X;h}$ in display \eqref{cxh} leads to \begin{align} J_2 &\leq \sum_{\ell,\ell'=1}^k \Vert \pi_\ell\Vert_{{\mathcal{L}}} \Vert \pi_{\ell'}\Vert_{{\mathcal{L}}} \big\Vert {\rm E} \big[ ( I_H-P_{\mathcal{V}_{d_{k+1-\ell}}})X_{1-\ell} \otimes ( I_H-P_{\mathcal{V}_{d_{k+1-\ell'}}})X_{1-\ell'}\big] \big\Vert_{{\mathcal{N}}} \notag\\ &= \sum_{\ell,\ell'=1}^k \Vert \pi_\ell\Vert_{{\mathcal{L}}} \Vert \pi_{\ell'}\Vert_{{\mathcal{L}}} \big\Vert ( I_H-P_{\mathcal{V}_{d_{k+1-\ell}}})C_{X;\ell-\ell'} ( I_H-P_{\mathcal{V}_{d_{k+1-\ell'}}}) \big\Vert_{{\mathcal{N}}} \notag \\ &= \sum_{\ell,\ell'=1}^k \Vert \pi_\ell\Vert_{{\mathcal{L}}} \Vert \pi_{\ell'}\Vert_{{\mathcal{L}}} K(\ell,\ell').\label{proof1} \end{align} By the definition of $\mathcal{V}_d$ in \eqref{xdi} and since $( I_H-P_{\mathcal{V}_{d_{i}}})= \sum_{r>d_{i}} \nu_r\otimes\nu_r$, it follows that \begin{align} K(\ell,\ell') &=\bigg\Vert\sum_{s>d_{k+1-\ell'}} \sum_{r>d_{k+1-\ell}} \langle C_{X;\ell-\ell'} (\nu_r), \nu_{s} \rangle \nu_r\otimes \nu_{s}\bigg\Vert_{{\mathcal{N}}} \notag \\ &\leq \bigg\Vert\sum_{s>d_{k+1-\ell'}} \sum_{r>d_{k+1-\ell}} \sqrt{\lambda_r\lambda_{s}} \,\nu_r\otimes \nu_{s}\bigg\Vert_{{\mathcal{N}}} \notag\\ &= \sum_{i=1}^{\infty} \bigg\langle \sum_{s>d_{k+1-\ell'}} \sum_{r>d_{k+1-\ell}} \sqrt{\lambda_r\lambda_{s}} \,\nu_r\otimes \nu_{s} (\nu_i), \nu_i \bigg\rangle \notag \\ & \leq \sum_{i>d_{k+1-\ell}} \lambda_i, \label{proof2} \end{align} where Lemma~6.2
in Klepsch and Kl\"uppelberg \cite{kk} was applied to give $\langlengle C_{X;\ell-\ell^\partialrime} \nu_r,\nu_s \ranglengle \leq \sqrt{\langlembda_r \langlembda_s}$. Plugging \eqref{proof2} into \eqref{proof1}, and recalling that $\sum_{\ell=1}^{\infty} \Vert\partiali_\ell\Vert_{{\mathcal{L}}}=M_4<\infty$, gives that \begin{align} J_2 &\leq M_4 \sum_{\ell=1}^k \Vert \partiali_\ell \Vert_{{\mathcal{L}}} \sum_{i>d_{k+1-\ell}} \langlembda_i. \langlebel{proof6} \end{align} Inserting the bounds for $J_1$ and $J_2$ into \eqref{help5}, for some $M<\infty$, \begin{align} {\rm E}[\Vert U_{1n} \Vert] & \leq \sqrt{k} M_2 (M_3 \sqrt{J_1}+ \sqrt{J_2}) \notag \\ &\leq \sqrt{k} M_2 \bigg(M_3 \sum_{\ell>k} \Vert \partiali_\ell\Vert_{{\mathcal{L}}} + M_4 \sum_{\ell=1}^k \Vert \partiali_\ell \Vert_{{\mathcal{L}}} \sum_{i>d_{k+1-\ell}} \langlembda_i\bigg) \notag\\ &\leq \sqrt{k}M\bigg(\sum_{\ell>k} \Vert \partiali_\ell\Vert_{{\mathcal{L}}} + \bigg(\sum_{\ell=1}^k \Vert \partiali_\ell \Vert_{{\mathcal{L}}} \sum_{i>d_{k+1-\ell}} \langlembda_i ) \bigg) \langlebel{help7}. \end{align} Concerning $U_{2n}$ in \eqref{help4}, use the linearity of the scalar product, the independence of the innovations $(\varepsilonpsilon_j\colon j\in\mathbb{Z})$ and the stationarity of the functional time series $(X_j\colon j\in\mathbb{Z})$ to calculate \begin{align*} E[\Vert U_{2n} \Vert^2] &\leq \bigg(\frac{1}{n-k}\bigg)^2 \sum_{j=k}^{n-1} {\rm E}\big[ \Vert P_{(k)}X_{j}(k)\Vert ^2\big] {\rm E}\big[\Vert\varepsilonpsilon_{j+1}\Vert^2\big] \\ &\leq \frac{1}{n-k} {\rm E}\big[ \Vert P_{(k)}X_{0}(k)\Vert ^2\big] {\rm E}\big[\Vert\varepsilonpsilon_{0}\Vert^2\big] \\ &\leq \frac{k}{n-k} {\rm E}\big[ \Vert X_{0}\Vert ^2\big] {\rm E}\big[\Vert\varepsilonpsilon_{0}\Vert^2\big]. 
\end{align*} Since both $(X_j\colon j\in{\mathbb Z})$ and $(\varepsilon_j\colon j\in{\mathbb Z})$ are in $L^2_H$, \eqref{res5} implies that \begin{align*} \Vert U_{2n} \Vert \, \Vert \hat\Gamma_{k,d} ^{-1}\Vert_{{\mathcal{L}}} = O_p\bigg(\frac{1}{\alpha_{d_k}} \sqrt{\frac{k}{n-k}}\bigg). \end{align*} Furthermore, \eqref{res5} and \eqref{help7} show that \begin{align*} \Vert U_{1n} \Vert \, \Vert \hat\Gamma_{k,d} ^{-1}\Vert_{{\mathcal{L}}} = O_p\bigg(\frac{\sqrt{k}}{\alpha_{d_k}} \bigg(\sum_{\ell>k} \Vert \pi_\ell\Vert_{{\mathcal{L}}} + \sum_{\ell=1}^k \Vert \pi_\ell \Vert_{{\mathcal{L}}} \sum_{i>d_{k+1-\ell}} \lambda_i \bigg)\bigg). \end{align*} Thus Assumption~\ref{assumptions}, \eqref{help4} and \eqref{res6} assert that, for all $x\in H^k$, $\Vert (\hat B(k) - \Pi(k)) (x) \Vert \overset{p}{\rightarrow} 0$, which proves the first statement of the theorem. {\em (ii)} First note that, for all $x\in H^k$, $\Vert (\hat\beta_{k,i}-\beta_{k,i})(x)\Vert \leq \Vert (\hat\beta_{k,i} - \pi_i)(x)\Vert + \Vert (\pi_i-\beta_{k,i})(x)\Vert \overset{p}{\rightarrow } 0$ as $n\to\infty$. Now $\theta_{k,1} = \beta_{k,1}$ and, by Corollary~\ref{corhelp}, $\hat\theta_{k,1}=\hat\beta_{k,1}$. Since furthermore $\sum_{j=1}^{k} \pi_j \psi_{k-j}=\psi_k$ (see, for instance, the proof of Theorem~5.3 in Klepsch and Kl\"uppelberg \cite{kk}), $\psi_1=\pi_1$. Therefore, \begin{align*} \Vert (\hat\theta_{k,1}-\psi_1)(x)\Vert = \Vert(\hat\beta_{k,1}-\pi_1)(x)\Vert \overset{p}{\rightarrow} 0 \end{align*} as $n\to\infty$. This proves the statement for $i=1$. Proceed by assuming the statement of the theorem is true for $i=1,\dots,N\in{\mathbb N}$, and then use induction on $N$.
Indeed, for $i=N+1$, the triangle inequality yields, for all $x\in H$, \begin{align*} \Vert (\hat\theta_{k,N+1}-\psi_{N+1})(x)\Vert & = \bigg\Vert \bigg(\sum_{j=1}^{N+1} \hat\beta_{k,j} \hat\theta_{k-j,N+1-j} - \pi_j \psi_{N+1-j}\bigg)(x)\bigg\Vert \\ &\leq \sum_{j=1}^{N+1} \Vert (\hat\beta_{k,j}-\pi_j) \hat\theta_{k-j,N+1-j}(x)\Vert + \Vert \pi_j(\hat\theta_{k-j,N+1-j}- \psi_{N+1-j})(x)\Vert. \end{align*} Now, as $n\rightarrow\infty$, the first summand converges to $0$ in probability by part {\em (i)}, while the second summand converges to $0$ in probability by the induction hypothesis. This proves the statement. \end{proof} \begin{proof}[\bf Proof of Theorem~\ref{autoregressive2}] {\em (i)} The proof is again based on showing that, for all $x\in H^k$, $\Vert (\hat{\hat B}(k) - \Pi (k)) (x) \Vert \overset{p}{\rightarrow} 0$ as $n\rightarrow\infty$, where $\hat {\hat B}(k)= (\hat{\hat \beta}_{k,1}, \dots, \hat {\hat \beta}_{k,k})$. To this end, first note that \begin{align} \Vert (\hat{\hat B}(k) - \Pi(k))(x) \Vert \leq \Vert (\hat{\hat B}(k) - \hat B (k) )(x) \Vert + \Vert (\hat B(k) - \Pi(k)) (x) \Vert. \label{ausgang2} \end{align} Under Assumption~\ref{assumptions}, the second term on the right-hand side converges to $0$ in probability for all $x\in H^k$ by part {\em (i)} of Theorem~\ref{autoregressive}. The first term on the right-hand side of \eqref{ausgang2} can be investigated uniformly over $H^k$.
Using the plug-in estimators defined as in \eqref{yulewalkerhathat}, we get for $k\in{\mathbb N}$ \begin{align} \Vert \hat{\hat B}(k) - \hat B (k) \Vert_{\mathcal{L}} &= \Vert \hat{\hat\Gamma}_{1,k,d}\hat{\hat \Gamma}_{k,d}^{-1}-\hat \Gamma_{1,k,d} \hat { \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}}\notag\\ &\leq \Vert \big( \hat{\hat\Gamma}_{1,k,d} -\hat \Gamma_{1,k,d} \big) \hat{\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}} + \Vert \hat \Gamma_{1,k,d} \big( \hat{ \Gamma}_{k,d}^{-1}-\hat {\hat \Gamma}_{k,d}^{-1}\big)\Vert_{\mathcal{L}}. \label{ausgang1} \end{align} Following the same strategy as in the proof of Theorem~\ref{autoregressive}, start by investigating the term $ \Vert \hat{ \Gamma}_{k,d}-\hat {\hat \Gamma}_{k,d}\Vert_{\mathcal{L}} $. Applying the triangle inequality, the linearity of the inner product and the inequalities $\Vert P_{(k)}X_{j}(k)\Vert \leq \Vert X_j(k)\Vert$ and $\Vert \hat P_{(k)}X_{j}(k)\Vert \leq \Vert X_j(k)\Vert$, it follows that \begin{align} \Vert \hat{ \Gamma}_{k,d}-\hat {\hat \Gamma}_{k,d}\Vert_{\mathcal{L}} &= \bigg\Vert \frac{1}{n-k} \sum_{j=k}^{n-1} \big(P_{(k)}X_j(k)\otimes P_{(k)}X_j(k) - \hat P_{(k)}X_j(k)\otimes \hat P_{(k)}X_j(k) \big)\bigg\Vert_{\mathcal{L}}\notag\\ &\leq \frac{2}{n-k} \sum_{j=k}^{n-1} \big\Vert X_j(k) \big \Vert \big\Vert P_{(k)}X_j(k)-\hat P_{(k)}X_j(k) \big\Vert.\label{help6} \end{align} Note that, from the definitions of $X_j(k)$, $P_{(k)}$ and $\hat{P}_{(k)}$, \begin{align*} P_{(k)}X_j(k) &=\bigg(\sum_{i=1}^{d_k}\langle X_j,\nu_i\rangle \nu_i,\ldots,\sum_{i=1}^{d_1}\langle X_{j-k+1},\nu_i\rangle \nu_i\bigg)^\top, \\ \hat P_{(k)}X_j(k) &=\bigg(\sum_{i=1}^{d_k}\langle X_j,\hat \nu_i\rangle \hat\nu_i,\dots,\sum_{i=1}^{d_1}\langle X_{j-k+1},\hat\nu_i\rangle \hat\nu_i\bigg)^\top.
\end{align*} These relations show that \begin{align*} \big\Vert P_{(k)}X_j(k)- \hat P_{(k)}X_j(k)\big\Vert &= \bigg\Vert\bigg(\sum_{i=1}^{d_k}\langle X_j,\hat \nu_i\rangle \hat\nu_i - \langle X_j,\nu_i\rangle \nu_i,\dots,\sum_{i=1}^{d_1}\langle X_{j-k+1},\hat\nu_i\rangle \hat\nu_i-\langle X_{j-k+1},\nu_i\rangle \nu_i\bigg)^\top\bigg\Vert \\ &\leq \bigg\Vert\bigg(\sum_{i=1}^{d_k}\langle X_j, \hat\nu_i-\nu_i\rangle \hat\nu_i ,\dots,\sum_{i=1}^{d_1}\langle X_{j-k+1},\hat\nu_i-\nu_i\rangle \hat\nu_i\bigg)^\top\bigg\Vert\\ &\qquad+\bigg\Vert\bigg(\sum_{i=1}^{d_k}\langle X_j,\nu_i\rangle (\nu_i-\hat\nu_i),\dots,\sum_{i=1}^{d_1}\langle X_{j-k+1},\nu_i\rangle (\nu_i-\hat\nu_i)\bigg)^\top\bigg\Vert. \end{align*} Observe that, for $x=(x_1,\dots,x_k)\in H^k$, $\Vert x\Vert = (\sum_{i=1}^{k} \Vert x_i\Vert ^2 )^{1/2}$. Then, applications of the Cauchy--Schwarz inequality and the orthonormality of $(\nu_i\colon i\in{\mathbb N})$ and $(\hat\nu_i\colon i\in{\mathbb N})$ lead to \begin{align} \big\Vert P_{(k)}X_j(k)- \hat P_{(k)}X_j(k)\big\Vert &\leq \bigg(\sum_{i=0}^{k-1} \bigg\Vert\sum_{l=1}^{d_{k-i}} \langle X_{j-i}, \hat\nu_l-\nu_l\rangle \hat\nu_l\bigg\Vert^2 \bigg)^{1/2} + \bigg(\sum_{i=0}^{k-1} \bigg\Vert\sum_{l=1}^{d_{k-i}} \langle X_{j-i},\nu_l\rangle (\nu_l-\hat\nu_l)\bigg\Vert^2 \bigg) ^{1/2} \notag\\ &\leq \bigg(\sum_{i=0}^{k-1} \sum_{l=1}^{d_{k-i}} \Vert X_{j-i}\Vert^2 \Vert \hat\nu_l-\nu_l\Vert^2 \bigg)^{1/2} + \bigg(\sum_{i=0}^{k-1} \sum_{l=1}^{d_{k-i}} \Vert X_{j-i}\Vert^2 \Vert \nu_l-\hat\nu_l \Vert ^2 \bigg) ^{1/2}\notag\\ &\leq 2 \bigg(\sum_{i=0}^{k-1} \sum_{l=1}^{d_k} \Vert X_{j-i}\Vert^2 \Vert \hat\nu_l-\nu_l\Vert^2 \bigg)^{1/2} \notag\\ &\leq 2\Vert X_j(k)\Vert \bigg( \sum_{l=1}^{d_k} \Vert \hat\nu_l-\nu_l\Vert^2 \bigg)^{1/2}.
\notag \end{align} Plugging this relation back into \eqref{help6}, it follows that \begin{align} \Vert \hat{ \Gamma}_{k,d}-\hat {\hat \Gamma}_{k,d}\Vert_{\mathcal{L}} &\leq \frac{4}{n-k} \bigg( \sum_{l=1}^{d_k} \Vert \hat\nu_l-\nu_l\Vert^2 \bigg)^{1/2} \sum_{j=k}^{n-1}\Vert X_j(k)\Vert^2. \notag \end{align} Since $(X_j\colon j\in{\mathbb Z})$ is $L^4$-$m$-approximable, Theorems~3.1 and 3.2 in H\"ormann and Kokoszka~\cite{weaklydep} imply that, for some finite positive constant $C_1$, $n{\rm E}[\Vert \hat\nu_l-\nu_l\Vert^2] \leq C_1/\alpha_l^2$, where $\alpha_l$ denotes the $l$-th spectral gap. Hence, \begin{align*} \sum_{l=1}^{d_k} \Vert \hat\nu_l-\nu_l\Vert^2 = O_p\bigg(\frac{1}{n} \sum_{l=1}^{d_k} \frac{1}{\alpha_l^{2}}\bigg). \end{align*} Furthermore, note that \begin{align*} \frac{2}{n-k}\sum_{j=k}^{n-1} {\rm E}\big[\Vert X_j(k)\Vert^2\big] &\leq 2 \sum_{i=0}^{k-1} {\rm E}\big[\Vert X_{k-i} \Vert ^2\big] = 2 k \Vert C_X\Vert_{\mathcal{N}}. \end{align*} Therefore, collecting the previous results yields the rate \begin{align} \Vert \hat{ \Gamma}_{k,d}-\hat {\hat \Gamma}_{k,d}\Vert_{\mathcal{L}} = O_p \bigg( \frac{k}{n}\bigg(\sum_{l=1}^{d_k} \frac{1}{\alpha_l^{2}}\bigg)^{1/2}\bigg). \label{res4} \end{align} Next, investigate $\Vert \hat {\hat \Gamma}_{k,d}^{-1} \Vert_{\mathcal{L}}$. As in the corresponding part of the proof of Theorem~\ref{autoregressive}, it follows that $\Vert \hat {\hat \Gamma}_{k,d}^{-1} \Vert_{\mathcal{L}} \leq \Vert \hat {\hat \Gamma}_{k,d}^{-1}- {\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}} + \Vert \hat { \Gamma}_{k,d}^{-1} \Vert_{\mathcal{L}}$. By \eqref{res5}, $ \Vert \hat { \Gamma}_{k,d}^{-1} \Vert_{\mathcal{L}} = O_p(\alpha_{d_k}^{-1})$.
Furthermore, the same arguments as in \eqref{help1} and \eqref{help2} imply that \begin{align} \Vert \hat {\hat \Gamma}_{k,d}^{-1}- {\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}} \leq \frac{\Vert\hat {\hat\Gamma}_{k,d} - \hat\Gamma_{k,d} \Vert_{{\mathcal{L}}} \Vert \hat\Gamma_{k,d}^{-1}\Vert_{{\mathcal{L}}}^{2}}{1-\Vert\hat {\hat\Gamma}_{k,d} - \hat\Gamma_{k,d}\Vert_{{\mathcal{L}}}\Vert\hat\Gamma_{k,d}^{-1}\Vert_{{\mathcal{L}}}}. \label{help10} \end{align} Hence, by \eqref{res5} and \eqref{res4}, \begin{align*} \Vert\hat {\hat\Gamma}_{k,d} - \hat\Gamma_{k,d} \Vert_{{\mathcal{L}}} \Vert \hat\Gamma_{k,d}^{-1}\Vert_{{\mathcal{L}}}^{2} = O_p\bigg(\frac{k}{n\alpha_{d_k}^{2}}\bigg(\sum_{l=1}^{d_k} \frac{1}{\alpha_l^{2}}\bigg)^{1/2}\bigg). \end{align*} Therefore, by Assumption~\ref{ass2}, as $n\rightarrow \infty$, $\Vert \hat {\hat \Gamma}_{k,d}^{-1}- {\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}} \overset{p}{\rightarrow} 0$. Taking the previous calculations together gives the rate \begin{align} \Vert \hat {\hat \Gamma}_{k,d}^{-1} \Vert_{\mathcal{L}} = O_p\bigg(\frac{1}{\alpha_{d_k}}\bigg) \label{res8}.
\end{align} Going back to \eqref{ausgang1} and noticing that $ \Vert\hat{\hat\Gamma}_{1,k,d} -\hat \Gamma_{1,k,d}\Vert_{\mathcal{L}} \leq \Vert(I_H,0,\dots,0)(\hat{\hat\Gamma}_{k,d} -\hat \Gamma_{k,d})\Vert_{\mathcal{L}}$, the first summand in this display can be bounded by \begin{align} \Vert \big( \hat{\hat\Gamma}_{1,k,d} -\hat \Gamma_{1,k,d} \big) \hat{\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}} &\leq \Vert \hat{\hat\Gamma}_{1,k,d} -\hat \Gamma_{1,k,d} \Vert_{\mathcal{L}} \Vert \hat{\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}}\notag \\ &\leq \Vert (I_H,0,\dots,0)(\hat{\hat\Gamma}_{k,d} -\hat \Gamma_{k,d}) \Vert_{\mathcal{L}} \Vert \hat{\hat \Gamma}_{k,d}^{-1}\Vert_{\mathcal{L}}\notag\\ &=O_p\bigg(\frac{k}{n\alpha_{d_k}}\bigg(\sum_{l=1}^{d_k} \frac{1}{\alpha_l^{2}}\bigg)^{1/2}\bigg), \label{res9} \end{align} where the rate in \eqref{res4} was used in the last step. For the second summand in \eqref{ausgang1}, use the plug-in estimator for $\hat\Gamma_{1,k,d}$ to obtain, for all $k<n$, \begin{align*} \Vert \hat \Gamma_{1,k,d} \big( \hat{ \Gamma}_{k,d}^{-1}-\hat {\hat \Gamma}_{k,d}^{-1}\big)\Vert_{\mathcal{L}} &\leq \bigg\Vert\frac{1}{n-k} \sum_{j=k}^{n-1} P_{(k)} X_{j}(k) \otimes X_{j+1}\bigg \Vert_{\mathcal{L}} \big\Vert\hat{ \Gamma}_{k,d}^{-1}-\hat {\hat \Gamma}_{k,d}^{-1}\big\Vert_{\mathcal{L}}.
\end{align*} Since \begin{align*} {\rm E}\bigg[\bigg\Vert \frac{1}{n-k} \sum_{j=k}^{n-1} P_{(k)} X_{j}(k) \otimes X_{j+1}\bigg\Vert_{\mathcal{L}}\bigg] &\leq \frac{1}{n-k} \sum_{j=k}^{n-1} {\rm E}\big[\Vert P_{(k)} X_{j}(k) \otimes X_{j+1} \Vert_{\mathcal{L}}\big] \\ &\leq \frac{1}{n-k} \sum_{j=k}^{n-1} \big({\rm E}[\Vert P_{(k)} X_{j}(k) \Vert ^2]\big)^{1/2}\big({\rm E}[ \Vert X_{j+1} \Vert^2]\big)^{1/2} \\ &\leq \bigg( \sum_{l=0}^{k-1} {\rm E}[ \Vert X_{j-l} \Vert^2] \bigg)^{1/2} \Vert C_X \Vert_{\mathcal{N}} ^{1/2}\\ &= \sqrt{k} \Vert C_X\Vert_{{\mathcal{N}}}, \end{align*} the result in \eqref{help10} implies that \begin{align} \big\Vert \hat \Gamma_{1,k,d} \big( \hat{ \Gamma}_{k,d}^{-1}-\hat {\hat \Gamma}_{k,d}^{-1}\big) \big\Vert_{\mathcal{L}} = O_p\bigg(\frac{k^{3/2}}{n\alpha_{d_k}^{2}}\bigg(\sum_{l=1}^{d_k} \frac{1}{\alpha_l^{2}}\bigg)^{1/2}\bigg). \label{res10} \end{align} Applying Assumption~\ref{ass2} to this rate and collecting the results in \eqref{ausgang2}, \eqref{ausgang1}, \eqref{res9} and \eqref{res10} shows that, for all $x\in H^k$, as $n\rightarrow\infty$, $\Vert (\hat{\hat B}(k) - \Pi (k)) (x) \Vert \overset{p}{\rightarrow} 0$. This is the claim. {\em (ii)} Similar to the proof of part {\em (ii)} of Theorem~\ref{autoregressive}. \end{proof} \end{document}
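The truncation arguments above repeatedly use that the error of projecting onto the span of the first $d$ (empirical) eigenfunctions is governed by the tail eigenvalue sum $\sum_{i>d}\lambda_i$. The following is a minimal numerical sketch of this identity for a simulated functional AR(1) process with $H$ discretized as $\mathbb{R}^m$; it is illustrative only (not the paper's code), and all names in it are ours.

```python
import numpy as np

# Hedged sketch: project functional AR(1) samples onto the eigenfunctions of
# the empirical covariance operator, the truncation underlying P_{(k)}.
rng = np.random.default_rng(0)
m, n = 20, 2000                       # grid size for H ~ R^m, sample size
# stationary AR(1): operator A with singular values 0.8^1, ..., 0.8^m
U = np.linalg.qr(rng.standard_normal((m, m)))[0]
A = U @ np.diag(0.8 ** np.arange(1, m + 1)) @ U.T

X = np.zeros((n, m))
for j in range(1, n):                 # X_j = A X_{j-1} + eps_j
    X[j] = A @ X[j - 1] + rng.standard_normal(m)

C_hat = X.T @ X / n                   # empirical covariance operator
lam, nu = np.linalg.eigh(C_hat)       # eigh returns ascending order
order = np.argsort(lam)[::-1]
lam, nu = lam[order], nu[:, order]    # (hat{lambda}_i, hat{nu}_i), descending

def proj_error(d):
    """Mean squared norm of (I - P_{V_d}) X over the sample."""
    P = nu[:, :d] @ nu[:, :d].T       # projection onto span(nu_1..nu_d)
    R = X - X @ P
    return np.mean(np.sum(R ** 2, axis=1))

errs = [proj_error(d) for d in (2, 5, 10, m)]
assert errs[0] > errs[1] > errs[2] > errs[3]      # error shrinks as d grows
# trace identity: mean ||(I-P)X||^2 equals the tail eigenvalue sum
assert np.isclose(errs[1], lam[5:].sum(), rtol=1e-6, atol=1e-10)
```

The final assertion checks numerically that the truncation error equals $\sum_{i>d}\hat\lambda_i$, the quantity appearing in the bound \eqref{proof2} above.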
\begin{document} \sloppy \title[Affine IETs with a singular conjugacy to an IET]{Affine interval exchange maps with a singular conjugacy to an IET} \author{Frank Trujillo and Corinna Ulcigrai} \maketitle \begin{abstract} We produce affine interval exchange transformations (AIETs) which are \emph{topologically} conjugated to (standard) interval exchange maps (IETs) via a \emph{singular conjugacy}, i.e.~a homeomorphism $h$ of $[0,1]$ which is $\mathcal{C}^{0}$ but not $\mathcal{C}^{1}$ and such that the pull-back of the Lebesgue measure is a \emph{singular} invariant measure for the AIET. In particular, we show that for almost every IET $T_0$ of $d\geq 2$ intervals and any vector $\omega$ belonging to the central-stable space $E_{cs}(T_0)$ for the Rauzy--Veech renormalization, any AIET $T$ with log-slopes given by $\omega$ and semi-conjugated to $T_0$ is topologically conjugated to $T_0$. In addition, if $\omega \notin E_s(T_0)$, the conjugacy between $T$ and $T_0$ is singular. \end{abstract} \section{Introduction and main results} The study of circle diffeomorphisms is a classical topic in dynamical systems, initiated by H. Poincaré (we refer, for example, to \cite{katok_introduction_1995}, \cite{sinai_topics_1994} or \cite{de_melo_one-dimensional_1993} for a basic overview). Two fundamental questions addressed by the theory of circle diffeomorphisms are the \emph{existence} and the \emph{regularity} of a topological conjugacy between a minimal circle diffeomorphism $f$ and its linear, isometric model (which is the rigid rotation $R_\alpha$, where $\alpha$ is the rotation number of $f$), namely of a homeomorphism $h$ such that $h\circ f= R_\alpha \circ h$ (see for example M. Herman's work \cite{herman_sur_1979}). The initial motivation for Poincaré to study circle diffeomorphisms is that they appear naturally as first-return maps of flows on surfaces of genus one.
\emph{Interval exchange transformations}, or IETs for short (more precisely, \emph{generalized} IETs, with affine and standard IETs as special cases), appear as first-return maps of flows on surfaces and are thus seen as natural generalizations of circle diffeomorphisms to higher genus (with IETs and affine IETs, in turn, generalizing rigid rotations and affine circle diffeomorphisms, respectively). Thus, it is natural to ask to what extent the theory of circle diffeomorphisms extends to generalized interval exchange maps. Efforts in this direction have been ongoing since the early eighties, and this is currently an active area of research; see for example \cite{levitt_decomposition_1987, forni_solutions_1997, marmi_cohomological_2005, marmi_affine_2010, marmi_linearization_2012, ghazouani_local_2020, ghazouani_priori_2021}. We refer the reader to the articles \cite{marmi_linearization_2012, ghazouani_priori_2021} for further references and a more in-depth discussion of linearization and rigidity questions for GIETs. In this paper, we contribute to the study of the regularity of conjugacies between an affine interval exchange transformation (AIET) and its linear (piecewise) isometric model, namely, a (standard) interval exchange transformation (IET). In particular, we produce AIETs that are conjugated to a standard minimal interval exchange transformation via a conjugacy $h$ which is $\mathcal{C}^0$ but fails to be $\mathcal{C}^1$. These AIETs are uniquely ergodic, and the unique invariant measure is \emph{singular} with respect to the Lebesgue measure; in this case, we say that they have a \emph{singular conjugacy} to a (minimal) IET. A one-parameter family of examples of AIETs with a singular conjugacy to a minimal IET was constructed by Isabelle Liousse in \cite{liousse_echanges_2002}.
We provide a criterion that allows constructing AIETs having a singular conjugacy with their (piecewise) isometric model for full measure classes of IET rotation numbers (see the Theorem in \S~\ref{sec:regularity} below for an informal statement, as well as Theorems~\ref{thm:topconjugacy} and \ref{thm:regularity} in \S~\ref{sec:main} for precise results). AIETs (whose formal definition we postpone to \S~\ref{sec:AIETs}) can be seen as a generalization to higher genus of \emph{affine}, also known as \emph{piecewise linear} (or PL for short), circle diffeomorphisms (defined in \S~\ref{sec:singular} below). In the setting of PL-circle diffeomorphisms, singularity of the conjugacy to the corresponding linear model is a well-known phenomenon, as the results summarized in \S~\ref{sec:singular} show. Contrary to (PL-)circle diffeomorphisms, though, for which the existence of a topological conjugacy follows from the classical work of A. Denjoy as long as there is sufficient regularity (see \S~\ref{sec:wandering}), for AIETs (and GIETs in general) semi-conjugated to a minimal IET, the existence of a topological conjugacy is not granted, i.e.~regularity assumptions are not sufficient to exclude the presence of wandering intervals: several results (see \S~\ref{sec:wandering}) show not only the existence but also the ubiquity of wandering intervals in AIETs. Therefore, a crucial part of the present paper is to prove a criterion for the absence of wandering intervals which can be applied to full measure sets of IET rotation numbers. We now summarize some of the results in the literature concerning the singularity of conjugacies of affine circle diffeomorphisms (\S~\ref{sec:singular}) and the existence of wandering intervals in circle homeomorphisms and AIETs (\S~\ref{sec:wandering}). An informal statement of the main result of this paper is given at the end of this introduction, in \S~\ref{sec:regularity}.
\subsection{Singular conjugacies in the PL setting}\label{sec:singular} It is well known that sufficiently regular circle diffeomorphisms are smoothly conjugated to the corresponding linear rotation for a full measure set of rotation numbers, a celebrated result proved by M.~Herman \cite{herman_sur_1979} and later extended by J.-C.~Yoccoz \cite{yoccoz_conjugaison_1984} to all Diophantine rotation numbers (thus providing the optimal arithmetic condition). M.~Herman \cite{herman_sur_1979} was the first to show that a circle homeomorphism with irrational rotation number that is piecewise linear and has exactly two points (called \emph{break points}) where the first derivative is discontinuous has an invariant measure absolutely continuous with respect to Lebesgue if and only if its break points lie on the same orbit. More generally, a homeomorphism of the circle $f: \mathbb{T} \to \mathbb{T}$ is called a \emph{piecewise smooth circle homeomorphism} or a \emph{P-homeomorphism} if it is a smooth orientation-preserving homeomorphism, differentiable away from countably many points, the so-called \emph{break points}, at which left and right derivatives, denoted by $Df_-$ and $Df_+$ respectively, exist but do not coincide, and such that $\log Df$ has bounded variation. A P-homeomorphism which is \emph{linear} (i.e.~\emph{affine}) in each domain of differentiability is called a \emph{PL-homeomorphism}. In \cite{liousse_nombre_2005}, Isabelle Liousse showed that the invariant measure of a generic PL-homeomorphism with a finite number of break points and irrational rotation number of bounded type is singular with respect to Lebesgue. The generic condition in \cite{liousse_nombre_2005} is explicit and appears as an arithmetic condition on the logarithms of the slopes of the PL-homeomorphism.
For general P-homeomorphisms with exactly one break point and irrational rotation number, A.~Dzhalilov and K.~Khanin \cite{dzhalilov_invariant_1998} showed that the associated invariant probability measure is singular with respect to Lebesgue. The case of two break points has been studied by A. Dzhalilov and I. Liousse \cite{dzhalilov_circle_2006} for rotation numbers of bounded type, and by A. Dzhalilov, I. Liousse and D. Mayer \cite{dzhalilov_singular_2009} for arbitrary irrational rotation numbers. In both works, the authors establish the singularity of the associated invariant probability measure. More recently, for P-homeomorphisms of class $C^{2 + \epsilon}$ with a finite number of break points and nonzero mean nonlinearity, K. Khanin and S. Kocić \cite{khanin_hausdorff_2017} showed that the Hausdorff dimension of their unique invariant measure is equal to $0$, provided that their rotation number belongs to a certain (explicit) full measure set of irrational numbers. In the same work, the authors show that this result cannot be extended to all irrational rotation numbers. \subsection{Existence and absence of wandering intervals}\label{sec:wandering} Given a piecewise continuous map $f: I \to I$ defined on a compact interval, a subinterval $J \subset I$ is said to be a \emph{wandering interval} of $f$ if the forward iterates of $J$ by $f$ are pairwise disjoint. It is also common in the literature to include in the definition of a wandering interval the requirement that the $\omega$-limit set of $J$ is not finite. The existence or absence of wandering intervals plays an important role in one-dimensional dynamics and has been widely studied in different settings. A celebrated theorem of A. Denjoy \cite{denjoy_sur_1932} shows that sufficiently smooth circle diffeomorphisms with irrational rotation number (more precisely, as soon as the logarithm of the derivative has bounded variation) do not admit wandering intervals.
J.-C.~Yoccoz \cite{yoccoz_il_1984} proved that Denjoy's result remains valid for sufficiently smooth circle homeomorphisms with non-flat critical points, in particular, for analytic circle homeomorphisms. In the context of smooth interval transformations, M.~Martens, W.~de Melo, and S.~van Strien \cite{martens_julia-fatou-sullivan_1992} obtained a more general version of the previous result by showing that any $C^2$ map on a compact interval with non-flat critical points has no wandering intervals. On the other hand, several examples of transformations with wandering intervals exist in the literature. In \cite{denjoy_sur_1932}, A.~Denjoy constructed examples of $C^1$ circle diffeomorphisms with irrational rotation number having wandering intervals. This result was later improved to $C^{2 - \epsilon}$ regularity by M.~Herman \cite{herman_sur_1979}, and similar examples in the context of multimodal maps of the interval are well known (see, e.g., \cite[\S 3]{de_melo_one-dimensional_1993}). G.~Hall \cite{hall_cinfty_1981} constructed an example of a $C^\infty$ circle map with at most two flat critical points admitting wandering intervals. Hall's construction was recently generalized by Liviana Palmisano \cite{palmisano_denjoy_2015}, who obtained similar examples for circle maps with a half-critical point. \noindent {\it Wandering intervals in AIETs.} G. Levitt \cite{levitt_decomposition_1987} showed the existence of non-uniquely ergodic affine interval exchange transformations having wandering intervals and raised the question of whether unique ergodicity for this class of transformations would be enough to rule out the existence of wandering intervals. R. Camelier and C. Gutierrez gave a negative answer to this question in \cite{camelier_affine_1997}. The example by Camelier and Gutierrez was studied in detail by M. Cobo \cite{cobo_piece-wise_2002}. The same example was later generalized by X. Bressaud, P. Hubert, and A.
Maass \cite{bressaud_persistence_2010}, and their techniques have recently been used by M. Cobo, R. Gutierrez-Romo and A. Maass to show the existence of wandering intervals in a well-known transformation, the cubic Arnoux--Yoccoz map. A condition for the absence of wandering intervals in the setting of substitutions (which include, in particular, AIETs with periodic rotation number) with unit eigenvalues is proved by X. Bressaud, A. Bufetov, and P. Hubert in \cite{bressaud_deviation_2014}. All the previous results in the setting of AIETs concern only an exceptional class among these transformations (namely, those whose combinatorial rotation number is periodic), but in \cite{marmi_affine_2010}, S.~Marmi, P.~Moussa, and J.-C.~Yoccoz approached the general case and showed that AIETs which are semi-conjugated to a minimal IET of $d\geq 4$ intervals, under a full measure condition on the IET, possess wandering intervals. \subsection{Regularity and Oseledets flags}\label{sec:regularity} Let $T$ be an AIET with $d\geq 2$ continuity intervals which we assume is semi-conjugated to a minimal IET $T_0$. The IET $T_0$ can be thought of as a \emph{combinatorial} (IET) \emph{rotation number} for $T$, namely, it encodes combinatorial information on the structure of orbits of $T$ (see \S~\ref{sec:RV} for a precise definition) and, assuming that $T_0$ is minimal, it plays the role of \emph{irrationality} of the (IET) rotation number. In addition to the IET rotation number, the AIET is determined by the vector of \emph{slopes} $s=(s_i)_{i=1}^d \in \mathbb{R}^d_{+}$, recording the slope $s_i$ of each affine branch of $T$ (see \S~\ref{sec:AIETs}). Let $\omega =(\omega_i)_{i=1}^d\in \mathbb{R}^d$ denote the \emph{log-slope vector} of $T$, whose entries are given by $\omega_i:=\log s_i$ for $1\leq i\leq d$. A key realization by M.
Cobo in \cite{cobo_piece-wise_2002} is that, to study wandering intervals as well as the regularity of conjugacies for AIETs (under a full measure condition on the combinatorial rotation number), it is essential to know the position of the log-slope vector $\omega$ in the Oseledets filtration of the \emph{Zorich cocycle} (a celebrated tool in the study of IETs which provides a multi-dimensional generalization of the continued fraction entries, see \S~\ref{sec:RV}). The action of the Zorich cocycle on $\omega$ indeed describes how log-slopes change under renormalization (see \S~\ref{sec:RV}). One can show that for a.e.~IET $T_0$ on $d \geq 2$ intervals, there exist subspaces $\{0\}\subsetneq E_{s}(T_0) \subset E_{cs} (T_0) \subsetneq \mathbb{R}^d$ (where $s$ and ${cs}$ stand for \emph{stable} and \emph{central stable}, respectively), such that if $\omega$ belongs to $E_s (T_0)$ (resp.~$E_{cs}(T_0)$) then the norm of the log-slopes decreases exponentially (resp.~grows subexponentially) under renormalization, while if $ \omega\in \mathbb{R}^d \backslash E_{cs}(T_0) $, the norm of the log-slopes grows exponentially.
More precisely, the combination of several classical works \cite{veech_gauss_1982, zorich_finite_1996, forni_deviation_2002, avila_simplicity_2007} shows that the Zorich cocycle has $2g$ non-zero Lyapunov exponents (where $1 \leq g \leq \frac{d}{2}$ is determined by the combinatorics of the IET, see \eqref{eq:dimension_oseledets} in \S~\ref{sec:filtration}) of the form $$\theta_{1}> \theta_2 > \dots > \theta_g>0 > -\theta_g > \dots > -\theta_2>-\theta_1,$$ and the Oseledets filtration of a generic $T_0$ has the form \[\mathbb{R}^d = E_{g} \supsetneq E_{g-1} \supsetneq \dots \supsetneq E_{1} \supsetneq E_{0} \supseteq E_{-1} \supsetneq \dots \supsetneq E_{-g+1} \supsetneq E_{-g} \supsetneq \{0\},\] where $E_i:= E_i(T_0)$ (resp.~$E_{-i}:= E_{-i}(T_0)$) is associated to the Lyapunov exponent $\theta_{g-i+1}$ (resp.~$-\theta_{g-i+1}$), for $1\leq i\leq g$, and vectors in $E_0\backslash E_{-1}$ are associated to a zero Lyapunov exponent. We then set $E_{s}(T_0):=E_{-1}(T_0)$ and $E_{cs}(T_0):= E_0(T_0)$. The space $E_{-g}$ is called the \emph{strong-stable} space and we denote it by $E_{ss}= E_{ss}(T_0)$. We remark that $E_{cs}=E_s= E_{-1}$ (i.e.~there are no non-zero vectors associated with a zero exponent) if and only if $g = \frac{d}{2}$. Under full measure conditions on $T_0$, and recalling that the log-slope vector $\omega$ of $T$ necessarily belongs to $E_{2}(T_0)$ by \cite[Lemma 3.3]{camelier_affine_1997} (see also \cite{marmi_affine_2010} and \S~\ref{sec:prop:non_empty_affine} below), the following holds: \begin{itemize} \item If $\omega \in E_{ss}(T_0)$ then $T$ is $C^\infty$ conjugated to $T_0$, by \cite[Theorem 1]{cobo_piece-wise_2002}.
\item If $\omega \in E_{s}(T_0) \setminus E_{ss}(T_0)$ then $T$ is $C^1$, but not $C^2$, conjugated to $T_0$, by \cite[Theorem 1]{cobo_piece-wise_2002} and \cite[Theorem A]{liousse_echanges_2002}. \item If $g \geq 2$ and $\omega \in E_{g - 1}(T_0) \setminus E_{g - 2}(T_0)$, the AIET $T$ possesses a wandering interval, by \cite[Theorem 3.2]{marmi_affine_2010}. \end{itemize} It is clear that in the first two cases, the AIET $T$ has no wandering intervals. The main results of this article (Theorem~\ref{thm:topconjugacy} and Theorem~\ref{thm:regularity}, stated in \S~\ref{sec:main}) imply the following: \begin{theorem*} Under full measure conditions on $T_0$, if the log-slope vector $\omega$ belongs to $E_{cs}(T_0) \setminus E_{s}(T_0)$, then $T$ is $C^0$- but not $C^1$-conjugate to $T_0$. \end{theorem*} \noindent In particular, $T$ as above does not admit wandering intervals. Moreover, it will follow from Theorem~\ref{thm:regularity} that, in this case, the unique invariant measure of $T$ is singular with respect to the Lebesgue measure. To prove the theorem above (in the form of Theorems~\ref{thm:topconjugacy} and \ref{thm:regularity}), we introduce a full measure condition on the space of IETs (see Definitions~\ref{def:BC_condition} and \ref{def:HS_Condition} and Proposition~\ref{prop:fullmeasure}) which allows us to control the behaviour of the Zorich cocycle when restricted to $E_{cs}(T_0)$ and then to study Birkhoff sums of the piecewise constant function associated to the log-slope vector. The existence of a topological conjugacy (namely Theorem~\ref{thm:topconjugacy}, proved in \S~\ref{sec:wandering}) generalizes to full measure a result for periodic-type IETs proved in the setting of substitutions by X. Bressaud, P. Hubert and A. Maass in \cite{bressaud_deviation_2014}. Let us point out that the absence of wandering intervals might also be inferred from the deep dynamical dichotomy proved in the recent work \cite{ghazouani_priori_2021} by S.
Ghazouani and the second author, but this would require assuming a much more subtle and technical Diophantine-like condition (see Definition 3.3.4 in \cite{ghazouani_priori_2021}), while the proof we provide here is simpler and self-contained. Furthermore, we prove a result about Birkhoff sums of piecewise constant functions over IETs which is of independent interest (see Proposition~\ref{prop:boundedseq} and, in particular, Corollary~\ref{cor:boundedseq}). The singularity of the conjugacy (Theorem~\ref{thm:regularity}) can also be deduced from the work of M. Cobo \cite{cobo_piece-wise_2002}, which is in turn based on work by W. Veech \cite{veech_metric_1984} (see Appendix~\ref{app:Cobo}). We provide an independent proof in \S~\ref{sc:singularity}. We conclude by commenting on the interest of this result from the point of view of the study of generalized interval exchange transformations (GIETs), in the light of recent and ongoing work that evidences the crucial role played by AIETs in the study of GIETs. In \cite{ghazouani_priori_2021}, S. Ghazouani and the second author proved that, under a full measure condition on the IET rotation number, given by an IET $T_0$, one can associate to a given GIET an AIET called the (unstable) \emph{shadow}. When the log-slope $\omega$ of this shadow is non-zero, one expects wandering intervals and the lack of a topological conjugacy, a result which has so far been proved in genus two, see \cite{ghazouani_priori_2021}. On the other hand, when $\omega$ is zero, and the \emph{boundary} of the GIET (an invariant defined by S. Marmi, P. Moussa, and J.-C. Yoccoz in \cite{marmi_linearization_2012}) is zero, it is shown in \cite{ghazouani_priori_2021} that one can prove, in the spirit of M.
Herman's work \cite{herman_sur_1979} on circle diffeomorphisms, the existence of a differentiable conjugacy between the GIET and its IET model (see also \cite{marmi_linearization_2012}, and \cite{ghazouani_local_2021} for local results describing $\mathcal{C}^r$-conjugacy classes of IETs for $r\geq 2$ and $r=1$ respectively). The result of this paper indicates that the assumption that the boundary is zero is necessary to have a non-singular conjugacy. The study of GIETs which have non-zero boundary (but total non-linearity zero) is undertaken in \cite{berk_rigidity_2022} by P. Berk and the first author. For those, when the log-slope $\omega$ of the (unstable) shadow of \cite{ghazouani_local_2021} vanishes, one can define a finer notion of (central) shadow, which allows recovering rigidity results that naturally generalize the known rigidity results for PL-circle diffeomorphisms. The absence of wandering intervals for AIETs that we prove in this paper (namely Theorem~\ref{thm:topconjugacy}) then provides the leverage to show the absence of wandering intervals also for the GIETs in the considered class.
\section{Background material and notations}\label{sec:background}
Let us start by recalling some of the basic notions and properties related to IETs and introduce some notations. The objects we will consider are now classical; we refer the interested reader to \cite{viana_ergodic_2006}, \cite{yoccoz_interval_2010} for a complete introduction to the subject as well as for proofs and additional details.
\subsection{Standard and affine interval exchange transformations}\label{sec:AIETs}
A \emph{standard interval exchange transformation}, or simply an \emph{interval exchange transformation} (IET), is a bijective right-continuous piecewise translation of an interval with a finite number of discontinuities.
More precisely, given a compact interval $I \subset \R$, we say that a bijection $T: I \to I$ is an IET on $d \geq 2$ intervals if there exists a partition of $I$ into $d$ disjoint left-closed and right-open subintervals of $I$ such that $T$ is a translation when restricted to each of the intervals of the partition. An IET with $d \geq 2$ intervals can be described by the way the intervals are exchanged and their lengths. For this, we fix a finite alphabet $\A$ with $d$ elements and consider pairs $(\pi_0,\pi_1)$ of bijections $\pi_0,\pi_1:\mathcal A\to\{1,\dots,d\}$ to denote the order of the intervals before and after the exchange. We always assume that the datum $(\pi_0,\pi_1)$ is \emph{irreducible}, i.e.
$$\pi_1\circ\pi_0^{-1}(\{1,\ldots,k\})\!=\!\{1,\ldots,k\}\Rightarrow k=d. $$
The class of \emph{irreducible} IETs, i.e.~IETs with irreducible data $(\pi_0,\pi_1)$ and $d \geq 2$ intervals, can then be parametrized by the set $\mathscr{I}_\A^+ = \PermSpace \times \R_+^\A,$ where $\mathcal{G}_\A$ denotes the set of irreducible pairs $(\pi_0,\pi_1)$ of bijections of $d$ symbols, and the set of \emph{normalized IETs} on $d$ intervals, that is, IETs defined on the unit interval $I = [0, 1)$, by $\mathscr{I}_\A = \PermSpace \times \R_+^\ANorm$, where
\[\Delta_\A = \left\{\lambda\in\R_+^{\A} \left| \ |\lambda|_1 =1 \right.\right\}.\]
We endow $\PermSpace \times \R_+^\A$ and $\PermSpace \times \R_+^\ANorm$ with the product measure $d\pi \times \textup{Leb}$, where $d\pi$ denotes the counting measure in $\mathcal{G}_\A$.
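A standard concrete example (ours, not from the text) to keep in mind is a circle rotation, viewed as an IET on $d = 2$ intervals:

```latex
% Example (ours, not from the text): a circle rotation as a 2-IET.
\[
\A = \{A, B\}, \qquad \pi_0(A) = 1,\ \pi_0(B) = 2, \qquad
\pi_1(A) = 2,\ \pi_1(B) = 1, \qquad \lambda = (t,\, 1 - t),
\]
\[
T(x) =
\begin{cases}
x + (1 - t) & \text{if } x \in [0, t),\\
x - t & \text{if } x \in [t, 1),
\end{cases}
\]
% i.e. T is the rotation x -> x + (1 - t) mod 1. The pair (pi_0, pi_1)
% is irreducible, since pi_1 \circ pi_0^{-1}(\{1\}) = \{2\} \neq \{1\}.
```

For irrational $t$ this IET satisfies Keane's condition, and it is the basic example underlying the constructions below.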
We say that a property holds for \emph{almost every} IET on $d$ intervals if it holds for almost every point of $\PermSpace \times \R_+^\A$ with respect to this product measure.
\subsubsection*{Affine interval exchange transformations.}
An \emph{affine interval exchange transformation} (AIET) is a bijective right-continuous piecewise affine map of an interval with a finite number of discontinuities and having positive slope on each continuity interval. Similarly to IETs, we encode AIETs using the order in which intervals are exchanged, their lengths, and the logarithm of the slope on each continuity interval (the use of log-slopes instead of slopes will be justified later on, see \eqref{eq:log_slopes_height_cocycle}). Thus, $e^{\omega_\alpha}$ is, by definition, the slope of the restriction of $T$ to the interval indexed by $\alpha\in \A$. Notice that if this interval has length $\eta_\alpha>0$, its image by $T$ has length $\eta_\alpha e^{\omega_\alpha}$. We can parametrize the set of AIETs on $d$ intervals by
\[ \mathscr{A}_\A^+ = \left\{ (\pi, \eta, \omega) \in \PermSpace \times \R_+^\A \times \R^\A \,\left|\, \sum_{\alpha \in \A} \eta_\alpha e^{\omega_\alpha} = \sum_{\alpha \in \A} \eta_\alpha \right.\right\}.\]
The last condition guarantees that the sum of the lengths of the images under $T$ of the subintervals equals the length of the domain. We parametrize the set of \emph{normalized AIETs} analogously by a set $\mathscr{A}_\A \subset \PermSpace \times \R_+^\ANorm \times \R^\A$.
\subsection{Rauzy-Veech and Zorich induction}\label{sec:RV}
A classical induction procedure for IETs, known as the \emph{Rauzy-Veech induction}, as well as its subsequent normalizations and accelerations, are well known to be extremely useful in studying IETs (as well as AIETs and GIETs). We recall some basic definitions and notations in this section and refer the reader to \cite{viana_ergodic_2006} or \cite{yoccoz_interval_2010} for a detailed introduction.
\subsubsection*{Rauzy-Veech induction algorithm.}
The Rauzy-Veech induction associates to almost every interval exchange transformation (IET) another IET, with the same number of intervals, by inducing the initial transformation on an appropriate subinterval. The subinterval is chosen according to the \emph{type} of the IET, which encodes whether the `last' interval in the partition, i.e., $I_{\pi_0^{-1}(d)}$, is longer or shorter than the interval going to the last position after applying the transformation, i.e., $I_{\pi_1^{-1}(d)}$. This procedure can be iterated infinitely many times if and only if the IET satisfies \emph{Keane's condition}. By \cite{keane_interval_1975}, any IET satisfying Keane's condition is minimal. The Rauzy-Veech induction defines an oriented graph structure on $\mathcal{G}_\A$, called the \emph{Rauzy graph}. Each connected component of this graph is called a \emph{Rauzy class}. The infinite path in the Rauzy graph defined by an IET satisfying Keane's condition is called its \emph{combinatorial rotation number}. We denote the set of \emph{IETs verifying Keane's condition} by $X_\A \subset \PermSpace \times \R_+^\A$, and the \emph{Rauzy-Veech induction} and the \emph{Zorich acceleration} by
$$\RV: X_\A \to X_\A,\quad\quad \ZorichMap: X_\A \to X_\A,$$
respectively. The map $\ZorichMap$ is defined as $\ZorichMap(\pi, \lambda) = \RV^{z(\pi, \lambda)}(\pi, \lambda)$, where the measurable map $z: X_\A \to \mathbb{N}$ is defined so that $z(\pi, \lambda)$ is the largest integer such that $(\pi, \lambda), \RV (\pi, \lambda), \dots, \RV^{z(\pi, \lambda)-1}(\pi, \lambda)$ all have the same type.
\subsubsection*{Notations}
Given an IET $(\pi, \lambda) \in X_\A$, we denote its \emph{type} by $\epsilon(\pi, \lambda) \in \{0, 1\}$, and its \emph{winner} (resp. \emph{loser}) \emph{symbol} by $\alpha_{\epsilon(\pi, \lambda)}$ (resp. $\alpha_{1 - \epsilon(\pi, \lambda)}$).
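The step described above can be made concrete in code. The following is a minimal sketch (ours, not the authors'; the dictionary representation and the name `rauzy_veech_step` are our own choices): the loser keeps its length, the loser's length is subtracted from the winner's, and on the loser's side of the combinatorial datum the loser letter is reinserted immediately after the winner.

```python
# One step of Rauzy-Veech induction for an IET with combinatorial data
# (pi0, pi1) and length vector lam, all indexed by a finite alphabet.
# Illustration only; it follows the standard description recalled above.

def rauzy_veech_step(pi0, pi1, lam):
    """pi0, pi1: dicts letter -> position in {1, ..., d}; lam: dict of lengths.
    Returns (type, winner, loser, new_pi0, new_pi1, new_lam)."""
    d = len(pi0)
    inv0 = {v: k for k, v in pi0.items()}
    inv1 = {v: k for k, v in pi1.items()}
    a0, a1 = inv0[d], inv1[d]            # 'last' letters in domain and range
    if lam[a0] == lam[a1]:
        raise ValueError("Keane's condition fails")
    eps = 0 if lam[a0] > lam[a1] else 1  # the type of the IET
    winner, loser = (a0, a1) if eps == 0 else (a1, a0)
    new_lam = dict(lam)
    new_lam[winner] -= lam[loser]        # the loser keeps its length
    pis = [dict(pi0), dict(pi1)]
    tau = pis[1 - eps]                   # the side of the datum that changes
    w = tau[winner]
    for letter in tau:                   # shift the letters after the winner...
        if tau[letter] > w and letter != loser:
            tau[letter] += 1
    tau[loser] = w + 1                   # ...and reinsert the loser after it
    return eps, winner, loser, pis[0], pis[1], new_lam

# demo: one step on rotation data (the permutation is unchanged for d = 2)
print(rauzy_veech_step({'A': 1, 'B': 2}, {'A': 2, 'B': 1}, {'A': 0.3, 'B': 0.7}))
```

On a 2-IET this step performs one subtraction of the slow continued fraction (Euclidean) algorithm on the two lengths; Keane's condition is precisely what guarantees that the step is defined at every iterate.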
Assume that $T_0 = (\pi, \lambda)$ verifies Keane's condition, so that $\RV^n(T_0)$ is defined for any $n\in \mathbb{N}$. We denote the \emph{combinatorial rotation number} of $(\pi, \lambda)$ by $\gamma(\pi, \lambda)$. For any $n \geq 0$, we denote
\begin{equation*}
\begin{aligned}
&T^{(n)} = \big(\pi^{(n)}, \lambda^{(n)}\big) = \RV^n(T_0), & \text{ orbit of } (\pi, \lambda) \text{ by } \RV, \\
& I^{(n)}(T_0), & \text{ domain of definition of } \RV^n(T_0),\\
& I^{(n)}_\alpha(T_0), & \text{ intervals exchanged by } \RV^n(T_0),\\
& q^{(n)}(T_0) = (q^{(n)}_\alpha(T_0))_{\alpha \in \A}, & \text{ the return times of the intervals } I^{(n)}_\alpha \text{ to } I^{(n)} \text{ under } T_0.
\end{aligned}
\end{equation*}
If there is no risk of confusion, we will omit the explicit dependence on $T_0$ in all of the above notations.
\subsubsection*{Dynamical partitions}
Given an IET $T_0 = (\pi, \lambda)$ verifying Keane's condition, we can associate to it a sequence of \textit{dynamical partitions} and \emph{Rohlin towers} as follows. We define the \emph{dynamical partition} $\mathcal{P}^{(n)}$ of $I$ \emph{at level} $n$ as
$$ \mathcal{P}^{(n)} := \bigcup_{\alpha \in \A} {\mathcal{P}^{(n)}_\alpha}, \qquad \text{where}\quad \mathcal{P}^{(n)}_\alpha = \big\{ I_\alpha^{(n)}, T\big(I_\alpha^{(n)}\big), \cdots, T^{q_\alpha^{(n)} - 1}\big(I_\alpha^{(n)}\big)\big\}. $$
One can verify that $\mathcal{P}^{(n)}$ is a partition of $[0,1)$ into subintervals and that, for each $\alpha \in \A$, the collection $\mathcal{P}^{(n)}_\alpha$ is a Rohlin tower of height $q_\alpha^{(n)}$. Notice that if $n>m$, then $\mathcal{P}^{(n)}$ is a refinement of $\mathcal{P}^{(m)}$.
\subsubsection*{Zorich cocycle}
In the following, for any $F: X \to X$, $\phi: X \to GL(d, \mathbb{Z})$ and $n > m \geq 0$, we denote
\[ \phi_{m, n}(x) = \phi(F^{n - 1}(x)) \cdot \dots \cdot \phi(F^m(x)).\]
The length vector and the return times of the iterates of an IET by the Zorich map can be described via a cocycle
\[B: X_\A \to SL(\A, \mathbb{Z}),\]
that we obtain as a proper acceleration of the cocycle
\[\Function{A}{X_\A}{SL(\A, \mathbb{Z})}{(\pi, \lambda)}{\textup{Id} + E_{\alpha_{\epsilon(\pi, \lambda)}, \alpha_{1 - \epsilon(\pi, \lambda)}}},\]
which encodes the change of the length vector after one step of Rauzy-Veech induction. More precisely, for any $n > m \geq 0$, the cocycles $A^{-1}$ and $A^T$ verify
\begin{align}
\label{eq:length_prop_cocycle}
& \lambda^{(n)} = \LRC{m}{n}(\pi, \lambda)\lambda^{(m)}, \\
\label{eq:height_prop_cocycle}
& q^{(n)} = \HRC{m}{n}(\pi, \lambda) q^{(m)},
\end{align}
where $q^{(0)} = \overline{1} \in \R_+^\A$ is the vector whose coordinates are all equal to $1$. Defining $B(\pi, \lambda) = A(\pi, \lambda)\dots A(\pi^{(z(\pi, \lambda) - 1)}, \lambda^{(z(\pi, \lambda) - 1)}),$ the accelerated cocycles $B^{-1}$ and $B^T$ verify analogous properties with respect to the Zorich map. The cocycle $B^T$ is called the \emph{Zorich cocycle} or \emph{Kontsevich-Zorich cocycle}. Since $B^{-1}$ is also sometimes referred to as the Zorich cocycle, and in view of \eqref{eq:length_prop_cocycle}, \eqref{eq:height_prop_cocycle}, to avoid any possible confusion, we will refer to $B^{-1}$ as the \emph{length cocycle} and to $B^T$ as the \emph{height cocycle}.
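For instance (a worked example of ours, not from the text), take $d = 2$ with the rotation data $\pi_0 = (A, B)$, $\pi_1 = (B, A)$, and a step of type $0$, so that the winner is $B$ and the loser is $A$. In the basis indexed by $(A, B)$:

```latex
% Worked example (ours): one elementary Rauzy-Veech matrix for d = 2.
\[
A(\pi, \lambda) = \textup{Id} + E_{BA} =
\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},
\qquad
A^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix},
\qquad
A^{T} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\]
% consistently with \eqref{eq:length_prop_cocycle} and
% \eqref{eq:height_prop_cocycle} after one step of induction:
\[
\lambda^{(1)} = A^{-1}\lambda^{(0)} = (\lambda_A,\ \lambda_B - \lambda_A),
\qquad
q^{(1)} = A^{T}q^{(0)} = (q_A + q_B,\ q_B) = (2,\ 1).
\]
```

Accelerating $z$ consecutive steps of the same type multiplies such elementary matrices, so for $d = 2$ the Zorich cocycle records the partial quotients of the continued fraction expansion of the ratio of the two lengths.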
\subsubsection*{Dynamical interpretation of entries}\label{sc:incidence_matrices}
The matrices $\ZC{m}{n}$ (and consequently their accelerations) have the following dynamical interpretation. The $\alpha\beta$-th entry of the incidence matrix $\ZC{m}{n}$ is the number of times the orbit by $T^{(m)}$ of any $x \in I^{(n)}_{\alpha}$ visits $I^{(m)}_{\beta}$ up to its first return to $I^{(n)}$. The incidence matrix entries also have an interpretation in terms of Rohlin towers. In fact, they describe how the Rohlin towers in the dynamical partition $\mathcal{P}^{(n)}$ can be obtained by \emph{cutting and stacking} Rohlin towers of $\mathcal{P}^{(m)}$. More precisely, for any $\alpha$, the Rohlin tower $\mathcal{P}^{(n)}_\alpha$ is obtained by stacking \emph{subtowers} of the Rohlin towers $\mathcal{P}^{(m)}_\beta$, $\beta \in \A$ (namely, sets of the form $\{ T^k J \mid 0 \leq k <q^{(m)}_\beta\}$ for some subinterval $J\subset I_\beta^{(m)}$). Indeed, the $\alpha\beta$-th entry of the incidence matrix $\ZC{m}{n}$ is the number of subtowers of $\mathcal{P}^{(m)}_\beta$ inside $\mathcal{P}^{(n)}_\alpha$. It follows that $\mathcal{P}^{(n)}_\alpha$ is made by stacking exactly $\sum_{\beta \in \A} (\ZC{m}{n})_{\alpha\beta}$ subtowers of Rohlin towers of $\mathcal{P}^{(m)}$.
\subsection{Rauzy-Veech induction for AIETs}
The Rauzy-Veech induction and the Zorich acceleration extend naturally to the space of AIETs, as do all the notions introduced above in the IET setting, such as combinatorial rotation number, dynamical partitions, incidence matrices, etc.
Given an AIET $T = (\pi, \eta, \omega)$ satisfying Keane's condition, we denote its orbit under $\RV$ by
$$T^{(n)} = (\pi^{(n)}, \eta^{(n)}, \omega^{(n)}) = \RV^n (\pi, \eta, \omega), \qquad n \in \mathbb{N}.$$
For the sake of simplicity, we will use the notations introduced in the IET setting to denote the intervals of definition and the return times of iterates by $\RV$ of an AIET. Let us point out that the incidence matrices $\ZC{m}{n}$ depend only on the combinatorial rotation number. In particular, given an AIET $T$ and an IET $T_0$, both satisfying Keane's condition and such that $\gamma(T) = \gamma(T_0)$, the incidence matrices of $T$ and $T_0$ coincide. In the context of AIETs, the height cocycle verifies an additional property of fundamental importance to us: the change in the log-slope vector of $\RV$ iterates of an AIET is described by the height cocycle. More precisely, given an AIET $T = (\pi, \eta, \omega)$ satisfying Keane's condition, for any $n \geq m \geq 0$,
\begin{equation}
\label{eq:log_slopes_height_cocycle}
\omega^{(n)} = \ZC{m}{n}(\pi, \eta, \omega)\,\omega^{(m)}.
\end{equation}
\subsection{Oseledets filtration}\label{sec:filtration}
As mentioned before, the \emph{normalized version of $\ZorichMap$}, which is defined on the subset $X_\ANorm \subset \PermSpace \times \R_+^\ANorm$ of normalized IETs satisfying Keane's condition and which we denote by
$$\ZorichNorm: X_\ANorm \to X_\ANorm,$$
admits a unique invariant probability measure $\mu_{\ZorichNorm}$ equivalent to the Lebesgue measure on $X_\ANorm$. Moreover, the height and length cocycles $B^T$ and $B^{-1}$ are integrable with respect to this invariant measure, and thus they admit invariant Oseledets filtrations
\begin{gather*}
E_s(\pi, \lambda) \subset E_{cs} (\pi, \lambda) \subset \R^{\A},\\
F_s(\pi, \lambda) \subset F_{cs} (\pi, \lambda) \subset \R^{\A},
\end{gather*}
for a.e. $(\pi, \lambda) \in X_\ANorm$, respectively.
With these notations, the sets $E_s(\pi, \lambda)$, $E_{cs}(\pi, \lambda) \,\setminus\, E_s(\pi, \lambda)$ and $\R^\A \,\setminus\,E_{cs}(\pi, \lambda)$ correspond to the sets of vectors with negative, zero and positive Lyapunov exponents for the cocycle $B^T$, respectively. That is, for a.e. $(\pi, \lambda) \in X_\ANorm$ and for every $v \in \R^\A$, the limit
\[ \theta(\pi, \lambda, v)= \lim_{n \to +\infty} \frac{\log |\HC{0}{n}(\pi, \lambda) v|_1}{n}\]
exists and verifies
\[ \left\{ \begin{array}{lcl} \theta(\pi, \lambda, v) < 0 & \text{if} & v \in E_s(\pi, \lambda),\\ \theta(\pi, \lambda, v) = 0 & \text{if} & v \in E_{cs}(\pi, \lambda) \,\setminus\, E_s(\pi, \lambda), \\ \theta(\pi, \lambda, v) > 0 & \text{if} & v \in \R^\A \,\setminus\,E_{cs}(\pi, \lambda). \end{array}\right.\]
Analogous properties hold for the cocycle $B^{-1}$ and its associated filtration. We point out that the dimensions of these vector spaces depend only on the permutation $\pi$. Indeed, denoting by $\Omega_\pi: \R^\A \to \R^\A$ the matrix given by
\begin{equation}
\label{eq:exchange_matrix}
(\Omega_\pi)_{\alpha, \beta} = \left\{ \begin{array}{cl} +1 & \text{if } \pi_1(\alpha) > \pi_1(\beta) \text{ and } \pi_0(\alpha) < \pi_0(\beta), \\ -1 & \text{if } \pi_1(\alpha) < \pi_1(\beta) \text{ and } \pi_0(\alpha) > \pi_0(\beta), \\ 0 & \text{in other cases,} \end{array}\right.
\end{equation}
we have that
\begin{equation}
\label{eq:dimension_oseledets}
\begin{aligned}
& \dim(E_s(\pi, \lambda)) = g, \quad \dim(E_{cs}(\pi, \lambda)) = d - g, \quad \text{where} \ g := \frac{d - \dim(\textup{Ker}(\Omega_\pi))}{2},
\end{aligned}
\end{equation}
for a.e. $(\pi, \lambda) \in X_\ANorm$.
\section{Statements of the results}
In this section, we state our main results. Let us start by recalling some of the existing results concerning semi-conjugacies of AIETs to IETs.
Recall that the \emph{combinatorial rotation number} of an IET that satisfies Keane's condition is, by definition, the infinite path in the Rauzy graph produced by iterating the Rauzy-Veech induction procedure.
\subsection{Semi-conjugacies of AIETs to IETs} \label{sec:prop:non_empty_affine}
An infinite Rauzy path $\gamma$ is said to be \emph{$\infty$-complete} if every symbol in $\A$ appears infinitely many times as a winner symbol in $\gamma$. It is well-known that any IET satisfying Keane's condition defines an $\infty$-complete Rauzy path in the Rauzy graph and, conversely, any $\infty$-complete Rauzy path determines a unique normalized IET (for a proof, see e.g. \cite[Section 7]{yoccoz_echanges_2005}). Given an infinite path $\gamma$ in the Rauzy graph and $\omega \in \R^\A$, we denote by $\textup{Aff}(\gamma, \omega)$ the set of normalized AIETs with log-slope $\omega$ and combinatorial rotation number $\gamma$. If a path $\gamma$ is $\infty$-complete, maps in $\textup{Aff}(\gamma, \omega)$ are semi-conjugated to the unique IET whose rotation number is equal to $\gamma$. More precisely, we have the following.
\begin{proposition}[Proposition 7 in \cite{yoccoz_echanges_2005}] \label{prop:semi-conjugacy}
Let $T_0$ be an IET such that $\gamma(T_0)$ is $\infty$-complete and let $\omega \in \R^\A$. Then, any $T \in \textup{Aff}(\gamma(T_0), \omega)$ is semi-conjugated to $T_0$ via an increasing surjective map $h: [0, 1) \to [0, 1)$ satisfying $T_0 \circ h = h \circ T$. Moreover, if $T$ has no wandering intervals, then $h$ defines a conjugacy between $T_0$ and $T$.
\end{proposition}
However, not all choices of $\gamma$ and $\omega$ are compatible.
\begin{proposition}[Proposition 2.3 in \cite{marmi_affine_2010}] \label{prop:non_empty_affine}
Let $T_0 = (\pi, \lambda)$ be an IET such that $\gamma(T_0)$ is $\infty$-complete and let $\omega \in \R^\A$. Then, $\textup{Aff}(\gamma(T_0), \omega) \neq \emptyset$ if and only if $\langle \omega, \lambda \rangle = 0$.
\end{proposition}
\subsection{Main results}\label{sec:main}
We now state the main results of this article.
\begin{theorem} \label{thm:topconjugacy}
For almost every IET $T_0$ and for any $\omega \in E_{cs}(T_0) \,\setminus\, E_s(T_0)$, any AIET $T \in \textup{Aff}(\gamma(T_0), \omega)$ is topologically conjugated to $T_0$.
\end{theorem}
\noindent A special case of this theorem, namely, the same result for the (measure zero) set of IETs whose combinatorial rotation number is periodic (also known as \emph{periodic-type} IETs), was proved, in the setting and language of substitutions, in \cite{bressaud_deviation_2014}. The proof of Theorem~\ref{thm:topconjugacy} is presented in \S\ref{sec:topconj}. We also prove the following.
\begin{theorem} \label{thm:regularity}
For almost every IET $T_0$ and for any $\omega \in E_{cs}(T_0) \,\setminus\, E_s(T_0)$, any AIET $T \in \textup{Aff}(\gamma(T_0), \omega)$ is uniquely ergodic and its unique invariant probability measure is singular with respect to the Lebesgue measure.
\end{theorem}
\noindent Theorem~\ref{thm:regularity} can be deduced by combining Theorem~\ref{thm:topconjugacy} with a result proved by M. Cobo \cite[Theorem 1]{cobo_piece-wise_2002}, which in turn relies on results by W. Veech~\cite{veech_gauss_1982} (see Appendix~\ref{app:Cobo}, where the result and the deduction are presented). The proof we give in this paper, which is presented in \S~\ref{sc:singularity}, has the advantage of being self-contained and perhaps more transparent. Let us mention that, by the \emph{duality} of the height and length cocycles, it follows that
\begin{equation}\label{rk:centralinclusion}
E_{cs}(T_0) \subset \lambda^\bot,
\end{equation}
for a.e. IET $T_0$ (we refer the interested reader to \cite{zorich_deviation_1997} for a precise definition of dual cocycle and to \cite[pages 384-385]{cobo_piece-wise_2002} for a proof of this fact).
Thus, if $\pi \in \mathcal{G}_\A$ is such that $\textup{Ker}(\Omega_\pi) \neq \{0\}$, where $\Omega_\pi$ is the matrix given by \eqref{eq:exchange_matrix}, it follows from \eqref{eq:dimension_oseledets} and Proposition \ref{prop:non_empty_affine} that the set $\textup{Aff}(\gamma(T_0), \omega)$ in Theorems \ref{thm:topconjugacy} and \ref{thm:regularity} is non-empty.
\subsection{The full measure Diophantine-type conditions}\label{sec:fullmeasure}
Let us now state explicitly the generic condition satisfied by an IET $T_0$ for Theorem \ref{thm:topconjugacy} to hold. This condition is an example of a \emph{Diophantine-type condition} on an IET rotation number.
\begin{definition}[The BC condition] \label{def:BC_condition}
We say that an IET $T_0 = (\pi, \lambda)$ satisfies the \emph{Bounded Central} Condition (or, for short, the BC Condition) if it verifies Keane's condition, it is Oseledets generic, $\gamma(T_0)$ is $\infty$-complete, and there exists a sequence $(n_k)_{k\in \mathbb{N}} \subset \mathbb{N}$ verifying the following:
\begin{enumerate}[(i)]
\item \label{cond:positive_matrix} There exist $N \in \mathbb{N}$ and a constant $K > 0$ such that
\[ 1 \leq \ZC{n_k}{n_k + N}(\pi, \lambda)_{\alpha\beta} \leq K, \qquad \forall \alpha, \beta \in \A, \,\forall k\in \mathbb{N}. \]
\item \label{cond:bounded_central_stable} There exists a constant $V > 0$ such that
\[ \left \|\ZC{0}{n_k}(\pi, \lambda)\mid_{E_{cs}(\pi, \lambda)} \right \| \leq V, \qquad \forall k\in \mathbb{N}. \]
\end{enumerate}
\end{definition}
\noindent As we shall see, this is a full measure condition in the space of IETs (see Proposition~\ref{prop:fullmeasure} below). For the proof of Theorem~\ref{thm:regularity}, it is also useful to introduce the following condition.
\begin{definition}[HS Condition] \label{def:HS_Condition}
We say that an IET $T_0 = (\pi, \lambda)$ satisfies the \emph{High Singularities} Condition (or, for short, the HS Condition) if it verifies Keane's condition and there exist $C > 0$ and a sequence $(n_k)_{k\in \mathbb{N}} \subset \mathbb{N}$ verifying the following:
\begin{enumerate}[(i)]
\item\label{cond:balanced_heights} $\max_{\alpha, \beta \in \A} \frac{q^{(n_k)}_\alpha}{q^{(n_k)}_\beta} < C.$
\item \label{cond:continuous_iterates} $T^i\mid_{I^{(n_k)}}$ is continuous for $0 \leq i \leq \tfrac{1}{4} \max_{\alpha \in \A} q^{(n_k)}_\alpha$.
\end{enumerate}
\end{definition}
\begin{proposition} \label{prop:fullmeasure}
Almost every irreducible IET satisfies the BC and HS Conditions.
\end{proposition}
\noindent For the sake of clarity of exposition, we postpone the proof of the above proposition to \S\ref{sc:fullmeasureproof}, as it requires the introduction of the natural extension of the Zorich renormalization as well as several definitions and notations that will not appear anywhere else in the article. We now prove Theorems \ref{thm:topconjugacy} and \ref{thm:regularity}, respectively in \S\ref{sc:proof_topconjugacy} and \S~\ref{sc:singularity}.
\section{Existence of a topological conjugacy}\label{sec:topconj}
In this section, we prove Theorem \ref{thm:topconjugacy} by showing the existence of a topological conjugacy between an AIET and an IET having the same combinatorial rotation number, under the BC condition (see Definition \ref{def:BC_condition}) on the IET.
\subsection{Wandering intervals and Birkhoff sums}
Let us recall that for $T$ and $T_0$ as in Theorem \ref{thm:topconjugacy}, and under a full measure condition on $T_0$, the assumption $\gamma(T)=\gamma(T_0)$ automatically yields a semi-conjugacy $h$ between $T$ and $T_0$. Moreover, this semi-conjugacy is indeed a conjugacy if the map $T$ has no wandering intervals (see Proposition \ref{prop:semi-conjugacy}).
The main criterion we will use to exclude the presence of wandering intervals for a given AIET is stated in Lemma~\ref{lemma:wanderingintervalscriterium}, and it is due to M. Cobo \cite{cobo_piece-wise_2002} (see also \cite{bressaud_persistence_2010}). This criterion reduces the question of the existence of wandering intervals to a study of Birkhoff sums of the log-slope vector of the AIET. Let us first introduce some notation.
\noindent Let $f:[0,1]\to \mathbb{R}$ be a real-valued function and let $T$ be an AIET. For each $n\in \mathbb{Z}$, we define the $n$-th \emph{Birkhoff sum} of $f$ over $T$ by
\begin{equation}\label{def:BS}
S_n^Tf:= \begin{cases} \sum_{j=0}^{n-1}f \circ T^j , & \text{if}\ n>0,\\ 0 , & \text{if}\ n=0,\\ -\sum_{j=n}^{-1}f \circ T^{j} , & \text{if}\ n<0.\\ \end{cases}
\end{equation}
If there is no risk of confusion concerning the transformation being considered, we will denote the Birkhoff sums simply by $S_n f$. The definition of the Birkhoff sums $S_nf$, for $n\leq 0$, is given so that $(S_nf)_{n\in\mathbb{Z}}$ is a $\mathbb{Z}$-additive cocycle, i.e. satisfies
\begin{equation}\label{eq:cocyclerel}
S_{n+m}\, f = S_n\, f + S_m f\circ T^n,\qquad \text{ for\ all \ } n,m \in \mathbb{Z}.
\end{equation}
The space of piecewise constant real-valued functions which are constant on each of the continuity intervals of $T$ can be identified with $\R^\A$. Indeed, given a vector $\omega \in \R^\A$, we associate to it the piecewise constant function $f_{T,\omega}: I \to \R$ given by
\begin{equation}\label{def:fv}
f_{T,\omega} (x):= \omega_\alpha, \qquad \text{if}\quad x\in I_\alpha \text{ for some }\alpha \in \A.
\end{equation}
We will also write simply $f_\omega$ instead of $f_{T,\omega}$ when the dependence on $T$ is clear. In view of the following criterion, the existence of wandering intervals for an AIET with log-slope vector $\omega$ can be reduced to the study of Birkhoff sums of the function $f_\omega$.
\begin{lemma}[Wandering intervals via Birkhoff sums, \cite{cobo_piece-wise_2002}]\label{lemma:wanderingintervalscriterium}
An AIET $T$ with log-slope vector $\omega$ has a wandering interval if and only if there exists a point $x_0\in [0,1]$ such that
$$ \sum_{n\geq 1} e^{S_{n} f_\omega(x_0)}<+\infty, \qquad \sum_{n\leq 0} e^{S_{n} f_\omega(x_0)}<+\infty. $$
\end{lemma}
The proof of this lemma can be found in \cite[pp. 392-393]{cobo_piece-wise_2002}. This criterion has also been used in \cite{bressaud_persistence_2010} and \cite{marmi_affine_2010}. The result we prove in order to exploit this lemma is the following.
\begin{proposition}\label{prop:boundedseq}
Let $T_0$ be an IET satisfying the BC condition and let $\omega \in E_{cs}(T_0)$. Then, for any AIET $T \in \textup{Aff}(\gamma(T_0), \omega)$ and for any $x\in [0,1]$, there exists a sequence $(m_k)_{k\in \mathbb{Z}}$ with $|m_k|\to \infty$ as $|k|\to \infty$ such that
\begin{equation}\label{eq:boundedBS}
\sup_{k\in \mathbb{Z}}|S_{m_k}^T f_{T,\omega}(x)| < +\infty.
\end{equation}
\end{proposition}
In particular, this proposition gives a result for Birkhoff sums over an IET $T_0$ of a piecewise constant function $f_{T_0,\omega}$ associated to a vector $\omega\in E_{cs}(T_0)$, which we believe is of independent interest, namely:
\begin{corollary}\label{cor:boundedseq}
For $T_0$ and $\omega$ as in Proposition~\ref{prop:boundedseq} and for any $x\in [0,1]$, there exists a sequence $(m_k)_{k\in \mathbb{Z}}$ with $|m_k|\to \infty$ as $|k|\to \infty$ such that
$$ \sup_{k\in \mathbb{Z}}|S_{m_k}^{T_0} f_{T_0,\omega}(x)| < +\infty. $$
\end{corollary}
The proof of Proposition \ref{prop:boundedseq} exploits the decomposition of Birkhoff sums into special Birkhoff sums, which are building blocks controlled by the cocycle matrices produced by Rauzy-Veech induction. In \S~\ref{sec:sums}, we recall their definition and some basic properties.
Before proving Proposition \ref{prop:boundedseq}, whose proof we postpone to \S\ref{sec:proofboundedseq}, let us show how to use it to prove Theorem \ref{thm:topconjugacy}.
\subsection{Proof of absence of wandering intervals} \label{sc:proof_topconjugacy}
In this section, we prove Theorem~\ref{thm:topconjugacy} by showing the absence of wandering intervals under the theorem's assumptions.
\begin{proof}[Proof of Theorem~\ref{thm:topconjugacy}]
Since $\gamma(T)=\gamma(T_0)$, by Proposition \ref{prop:semi-conjugacy}, there exists a semi-conjugacy, i.e.~a surjective continuous increasing function $h:[0, 1] \to [0,1]$ such that $h\circ T=T_0\circ h$. To show that $h$ is a homeomorphism and, therefore, a topological conjugacy, it is enough to show that $T$ has no wandering intervals. This follows from Lemma~\ref{lemma:wanderingintervalscriterium}, since, by Proposition~\ref{prop:boundedseq}, for any $x_0\in [0,1]$, there exist $C>0$ and $(m_k)_{k\in \mathbb{Z}}$ such that
\[\sup_{k\in \mathbb{Z}}|S_{m_k}^T f_\omega(x_0)| < C.\]
Hence
$$ \sum_{n\geq 1}e^{S_{n}^T f_\omega(x_0)} \geq \sum_{k\geq 1} e^{S_{m_k}^T f_\omega(x_0)} \geq \sum_{k\geq 1} e^{-C} =+\infty. $$
Similarly, $\sum_{n\leq 0} e^{S_n^T f_\omega(x_0)} =+\infty$. Therefore, Lemma \ref{lemma:wanderingintervalscriterium} implies that $T$ has no wandering intervals.
\end{proof}
\subsection{Birkhoff sums and special Birkhoff sums.}\label{sec:sums}
Given an AIET $T$ and a function $f: I \to \R$, the Birkhoff sums $S_n f$, for $n\geq 0$, can be studied via renormalization, exploiting the notion of \emph{special Birkhoff sums} that we now recall. For any $n \in \mathbb{N}$, the \emph{special Birkhoff sum of level $n$} is the function $\SBS{n}{f}: I^{(n)} \to \mathbb{R}$ obtained by \emph{inducing} $f$ over the first return map $T^{(n)}$.
More precisely,
\[ \SBS{n}{f}(x):= S_{q^{(n)}_\alpha}f(x) =\sum_{\ell=0}^{q^{(n)}_\alpha - 1} f \left(T^\ell( x)\right), \quad \text{if}\ x\in I^{(n)}_\alpha, \quad \text{for any } \alpha \in \A, \ n\in\mathbb{N}.\]
Thus one can think of $\SBS{n}{f}(x)$ as the Birkhoff sum of $f$ at $x$ \emph{along the Rohlin tower} of height $q^{(n)}_\alpha$ over $I^{(n)}_\alpha$. Given the $n$-th special Birkhoff sum $\SBS{n}{f}$, we can build $\SBS{n + 1}{f}$ from $\SBS{n}{f}$ and $T^{(n)}$ as
\begin{equation}\label{SBSdecomp}
\SBS{n + 1}{f} (x) = \sum_{k=0}^{a_\alpha^{(n)} - 1} \SBS{n}{f} \big((T^{(n)})^k(x)\big), \qquad \text{for\ any}\ x\in I^{(n + 1)}_\alpha,
\end{equation}
where $a_\alpha^{(n)} = \sum_{\beta \in \A}(\ZC{n}{n + 1})_{\alpha\beta}$. Equation \eqref{SBSdecomp} follows from the relation between Rohlin towers of the partitions $\mathcal{P}^{(n + 1)}$ and $\mathcal{P}^{(n)}$ described in \S\ref{sc:incidence_matrices}. Hence, if $f = f_\omega$ for some $\omega \in \R^\A$, it follows from the definition of special Birkhoff sums that
\begin{equation} \label{SBS_cocycle_relation}
\SBS{n}{f_\omega}(x) = \big(\ZC{0}{n}\,\omega\big)_\alpha, \qquad \text{if}\ x \in I^{(n)}_\alpha.
\end{equation}
\subsection{Proof of the Birkhoff sums upper bound (Proposition~\ref{prop:boundedseq})} \label{sec:proofboundedseq}
We are now ready to prove Proposition~\ref{prop:boundedseq}. The argument behind the proof provides a generalization of the key idea exploited in \cite{bressaud_deviation_2014} to prove the analogous result in the special case of periodic combinatorial rotation numbers. While the proof in \cite{bressaud_deviation_2014} is written in the language of substitutions and prefix-suffix decompositions, our proof below simply uses the decomposition of Birkhoff sums into special Birkhoff sums and the BC condition.
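The relation between special Birkhoff sums and the incidence matrices can be checked numerically. The following self-contained sketch (ours, not the paper's; the rotation data and the test vector are arbitrary choices) verifies, for a circle rotation seen as a 2-IET, that the special Birkhoff sums of a piecewise constant function computed directly along orbits agree with the rows of the matrix product of the transposed elementary Rauzy-Veech matrices applied to the given vector:

```python
from fractions import Fraction as F

# Sanity check (ours): for a circle rotation viewed as a 2-IET, the special
# Birkhoff sums of f_omega computed along orbits agree with the entries of
# Q omega, where Q is the product of the transposed elementary Rauzy-Veech
# matrices (the height cocycle). Exact arithmetic via Fraction.

lamA, lamB = F(10, 27), F(17, 27)       # lengths of I_A and I_B
omega = [F(1), F(-2)]                   # a test vector indexed by (A, B)

def T(x):                               # the 2-IET exchanging I_A and I_B
    return x + lamB if x < lamA else x - lamA

def f(x):                               # the piecewise constant function f_omega
    return omega[0] if x < lamA else omega[1]

# five steps of Rauzy-Veech induction, tracking the lengths and Q
la, lb = lamA, lamB
Q = [[F(1), F(0)], [F(0), F(1)]]
for _ in range(5):
    if lb > la:                         # type 0: winner B, loser A; row A += row B
        lb -= la
        Q[0] = [Q[0][0] + Q[1][0], Q[0][1] + Q[1][1]]
    else:                               # type 1: winner A, loser B; row B += row A
        la -= lb
        Q[1] = [Q[0][0] + Q[1][0], Q[0][1] + Q[1][1]]

heights = [int(Q[0][0] + Q[0][1]), int(Q[1][0] + Q[1][1])]  # return times

# special Birkhoff sums along the orbit vs the rows of Q applied to omega
for x0, q, row in ((la / 2, heights[0], Q[0]),
                   (la + lb / 2, heights[1], Q[1])):
    s, x = F(0), x0
    for _ in range(q):
        s += f(x)
        x = T(x)
    assert x < la + lb                  # the orbit has returned to I^(5)
    assert s == row[0] * omega[0] + row[1] * omega[1]

print([[int(a) for a in r] for r in Q], heights)
```

The row sums of $Q$ recover the heights of the Rohlin towers, in agreement with the dynamical interpretation of the incidence matrix entries recalled in \S\ref{sc:incidence_matrices}.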
The key idea is that for \emph{any} point $x\in [0,1]$ it is possible to find times $(m_k)_{k\in \mathbb{Z}}$ such that $S_{m_k} f_\omega (x)$ can be decomposed into a bounded number of special Birkhoff sums.
\begin{proof}[Proof of Proposition~\ref{prop:boundedseq}]
Let $(n_k)_{k\in \mathbb{N}}$ be the sequence of induction times given by the BC condition (see Definition \ref{def:BC_condition}). For the sake of simplicity and clarity of exposition, let us denote
\[T_k = T^{(n_k)}, \quad {I}^k:=I^{(n_k)}, \quad {Z}^k_\alpha:={Z}^{(n_k)}_\alpha, \quad {q}^k_\alpha:={q}^{(n_k)}_\alpha, \quad \mathcal{P}^k = \mathcal{P}^{(n_k)},\]
\[{I}^{k_+}:=I^{(n_k + N)}, \quad {Z}^{k_+}_\alpha:={Z}^{(n_k + N)}_\alpha, \quad {q}^{k_+}_\alpha:={q}^{(n_k + N)}_\alpha, \quad \mathcal{P}^{k_+} = \mathcal{P}^{(n_k + N)},\]
for any $k \in \mathbb{N}$ and $\alpha \in \mathcal{A}$, where $N$ is given by the BC condition. Recall that, for any $n \in \mathbb{N}$, $I^{(n)}$ stands for the inducing subinterval of the $n$-th step of Rauzy-Veech induction, while ${Z}^{(n)}_\alpha$, $q^{(n)}_\alpha$ and $\mathcal{P}^{(n)}$ denote the corresponding dynamical Rohlin towers, their heights, and the associated dynamical partition, respectively. Let $x \in [0, 1]$ be fixed. We will define a sequence $(m_{k})_{k \in \mathbb{Z}}$ such that \eqref{eq:boundedBS} holds. Fix $k \in \mathbb{N}$. Let $\alpha, \beta\in \mathcal{A}$ be such that $x$ belongs to the towers $Z^k_\alpha$ and $Z^{k_+}_\beta$, respectively, and let $0\leq i< q^{k}_\alpha$ and $0\leq j< q^{k_+}_\beta$ be the indexes of the floors of $Z^k_\alpha$ and $Z^{k_+}_\beta$, respectively, which contain $x$, i.e.~such that
$$ x\in T^{i} (I^k_{\alpha}), \qquad x\in T^{j} (I^{k_+}_{\beta}).
$$ Let also $\beta^-$ and $\beta^+$ be the indexes of the dynamical towers of the partition $\mathcal{P}^{k_+}$ \emph{before} and \emph{after} $Z^{k_+}_\beta$ which contain the orbit of $x$, namely such that \begin{equation}\label{eq:towerspm} T^{-j-1} (x) \in Z^{k_+}_{\beta^-}, \qquad T^{q^{k_+}_\beta-j} (x) \in Z^{k_+}_{\beta^+}. \end{equation} Since by the BC condition $\ZC{n_k}{n_{k+1}}$ is a positive matrix, each tower $Z^{k_+}_{\beta}$ of level $k_+$ is obtained by stacking at least one copy of each tower $Z^{k}_{\alpha}$, $\alpha\in \mathcal{A}$, of the previous level. In particular, there exist a floor $F_-$ of $Z^{k_+}_{\beta^-}$ and a floor $F_+$ of $Z^{k_+}_{\beta^+}$ which are contained in $I^k_{\alpha}$. Since the (full) orbit of $x$ under $T$ visits both $Z^{k_+}_{\beta^-}$ and $Z^{k_+}_{\beta^+}$ by definition of $\beta^\pm$, see \eqref{eq:towerspm}, it visits every floor of both, so there exist $j_-,j_+\geq 0$ such that \begin{equation}\label{eq:jchoice} T^{-j_-}(x) \in F_-\subseteq I^{k}_{\alpha}\cap Z^{k_+}_{\beta^-}, \qquad T^{j_+}(x) \in F_+ \subseteq I^{k}_{\alpha} \cap Z^{k_+}_{\beta^+}. \end{equation} We can now define $m_{-k}$ and $m_k$ as \begin{equation}\label{eq:nkdef} m_{-k}:=-j_-+i, \qquad m_{k}:=j_+ + i. \end{equation} Notice that, by construction, in view of \eqref{eq:jchoice} and \eqref{eq:nkdef}, $T^{m_{-k}}(x)$ and $T^{m_k}(x)$ both belong, as $x$ does, to the $i$-th floor of the tower $Z^{k}_{\alpha}$. Let us illustrate the notations in the following picture. \begin{figure}[h!]
\centering \begin{tikzpicture}[scale=0.85, transform shape]
\draw [yellow, fill=yellow, opacity=0.4] (5.5 * 1.03,0) rectangle (5.5*1.03 + 4*0.1 - 3*0.1,1.8);
\draw [gray, fill=gray, opacity=0.4] (5.5 * 1.08,0) rectangle (5.5*1.08 + 10*0.1 - 8*0.1,1.8);
\draw [green, fill=green, opacity=0.4] (5.5 * 1.25,0) rectangle (5.5*1.25 + 8*0.1 - 5.5*0.1,1.8);
\node[below] at (-0.2,-0.5) {\textcolor{black}{\tiny$I^{k_+}_{\beta^+}$}};
\draw[<-,thin] (3*0.05 + 4 * 0.05, -0.15) to [out=270,in=60] (-0.2,-0.6) {};
\draw[-, gray] (3*0.1,-0.2) -- (3*0.1,0) {};
\draw[-, gray] (4*0.1,-0.2) -- (4*0.1,0) {};
\draw[{Latex[scale=0.3]}-{Latex[scale=0.3]},line width=0.05mm, gray] (3*0.1,-0.15)--(4*0.1,-0.15);
\node[ below] at (0.7,-0.5) {\textcolor{black}{\tiny$I^{k_+}_{\beta^-}$}};
\draw[<-,thin] (5.5*0.05 + 8 * 0.05, -0.15) to [out=270,in=90] (5.5*0.05 + 8 * 0.05, -0.6) {};
\draw[-, gray] (5.5*0.1,-0.2) -- (5.5*0.1,0) {};
\draw[{Latex[scale=0.3]}-{Latex[scale=0.3]},line width=0.05mm, gray] (5.5*0.1,-0.15)--(8*0.1,-0.15);
\node[ below] at (1.6,-0.5) {\textcolor{black}{\tiny$I^{k_+}_\beta$}};
\draw[<-,thin] (8*0.05 + 10 * 0.05, -0.15) to [out=270,in=90] (1.6,-0.6) {};
\draw[-, gray] (8*0.1,-0.2) -- (8*0.1,0) {};
\draw[-, gray] (10*0.1,-0.2) -- (10*0.1,0) {};
\draw[{Latex[scale=0.3]}-{Latex[scale=0.3]},line width=0.05mm, gray] (8*0.1,-0.15)--(10*0.1,-0.15);
\node[] at (5.5*0.5 + 8*0.5,-0.8) {\textcolor{black}{$I^k_\alpha$}};
\draw[-,gray] (5.5, -1) -- (5.5, 0);
\draw[-,gray] (8, -1) -- (8, 0);
\draw[<-,gray] (5.5, -0.8) -- (5.5 + 8*0.35 - 5.5*0.35, -0.8);
\draw[<-,gray] (8, -0.8) -- (8 + 5.5*0.35 - 8*0.35, -0.8);
\draw[-, gray] (5.5*1.03,-0.2) -- (5.5*1.03,0) {};
\draw[-, gray] (5.5*1.03 + 4*0.1 - 3*0.1,-0.2) -- (5.5*1.03 + 4*0.1 - 3*0.1,0) {};
\draw[{Latex[scale=0.3]}-{Latex[scale=0.3]},line width=0.05mm, gray] (5.5*1.03,-0.15)--(5.5*1.03 + 4*0.1 - 3*0.1,-0.15);
\node[below] at (5.5*1.03 + 4*0.05 - 3*0.05 +0.05,-0.15) {\textcolor{black}{\scriptsize$F_+$}};
\draw[-, gray] (5.5 * 1.25,-0.2) -- (5.5 * 1.25,0) {};
\draw[-, gray] (5.5*1.25 + 8*0.1 - 5.5*0.1,-0.2) -- (5.5*1.25 + 8*0.1 - 5.5*0.1,0) {};
\draw[{Latex[scale=0.3]}-{Latex[scale=0.3]},line width=0.05mm, gray] (5.5*1.25,-0.15)--(5.5*1.25 + 8*0.1 - 5.5*0.1,-0.15);
\node[below] at (5.5*1.25 + 8*0.05 - 5.5*0.05,-0.15) {\textcolor{black}{\scriptsize$F_-$}};
\node[circle, fill, scale=0.2, label=below:] at (3 * 0.1 + 5.5 * 0.006, 0) {};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 0.115 , 1.5*5) {};
\node[circle, fill, scale=0.2, label=below:] at (8 * 0.11, 1.8*4) {};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 1.095, 1.8*0.6) {};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 1.037, 0) {};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 1.265, 0) {};
\draw[<-,thin] (3 * 0.1 + 5.5 * 0.006, 0) to [out=40,in=190] (3,9) {};
\node[ right] at (3,9) {\textcolor{black}{\scriptsize$T^{q^{k_+}_\beta-j}(x) \in I_{\beta^+}^{k_+} \subseteq Z_{\beta^+}^{k_+}$}};
\draw[<-,thin] (5.5 * 0.115 , 1.5*5) to [out=60,in=200] (3,10) {};
\node[ right] at (3,10) {\textcolor{black}{\scriptsize$T^{-j -1}(x) \in T^{q^{k_+}_{\beta^-}-1}\big(I_{\beta^-}^{k_+}\big) \subseteq Z_{\beta^-}^{k_+}$}};
\draw[<-,thin] (8 * 0.11, 1.8*4) to [out=30,in=180] (6,7) {};
\draw[<-,thin] (5.5 * 1.037, 0) to [out=110,in=240] (6,6) {};
\node[ right] at (6,6) {\textcolor{black}{\scriptsize$T^{j_+}(x) \in F_+ \subseteq I^k_\alpha \cap Z_{\beta^+}^{k_+}$}};
\draw[<-,thin] (5.5 * 1.095, 1.8*0.6) to [out=100,in=220] (6,7) {};
\node[ right] at (6,7) {\textcolor{black}{\scriptsize$x \in T^i(I_\alpha^k) \cap T^j\big(I_\beta^{k_+}\big) \subseteq Z^k_\alpha \cap Z_{\beta}^{k_+}$}};
\draw[<-,thin] (5.5 * 1.265, 0) to [out=110,in=245] (6,5) {};
\node[ right] at (6,5) {\textcolor{black}{\scriptsize$T^{-j_-}(x) \in F_- \subseteq I^k_\alpha \cap Z_{\beta^-}^{k_+}$}};
\draw[<-,thin] (3 * 0.1 + 5.5 * 0.006, 1.8*3) to [out=60,in=200] (6,6) {};
\node[circle, fill, scale=0.2, label=below:] at (3 * 0.1 + 5.5 * 0.006, 1.8*3) {};
\draw[-,thin] (3*0.1, 1.8*3) -- (4*0.1, 1.8*3) {};
\draw[<-,thin] (5.5 * 0.115, 1.5*2) to [out=40,in=190] (6,5) {};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 0.115, 1.5*2) {};
\draw[-,thin] (5.5*0.1, 1.5*2) -- (8*0.1, 1.5*2) {};
\draw[-,thin] (8*0.1, 1.8*4) -- (10*0.1, 1.8*4) {};
\draw[-,thin] (5.5*0.1, 1.5*5) -- (8*0.1, 1.5*5) {};
\draw[-,thin] (5.5, 1.8*0.6) -- (8, 1.8*0.6) {};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 1.037, 1.8*0.6) {};
\draw[<-,thin] (5.5 * 1.037, 1.8*0.6) to [out=40,in=190] (8,3.5) {};
\node[ right] at (8,3.5) {\textcolor{black}{\scriptsize$T^{m_k}(x) \in T^i(I_\alpha^k)$}};
\node[circle, fill, scale=0.2, label=below:] at (5.5 * 1.265, 1.8*0.6) {};
\draw[<-,thin] (5.5 * 1.265, 1.8*0.6) to [out=40,in=190] (8,2.5) {};
\node[ right] at (8,2.5) {\textcolor{black}{\scriptsize$T^{m_{-k}}(x) \in T^i(I_\alpha^k)$}};
\draw[-] (0,0) -- (10,0) node[right] {};
\draw[-] (0,0) -- (0,2) node[right] {};
\draw[-] (3,0) -- (3,2) node[right] {};
\draw[-] (4,0) -- (4,2.3) node[right] {};
\draw[-] (5.5, 0) -- (5.5,2.3) node[right] {};
\draw[-] (8, 0) -- (8,1.8) node[right] {};
\draw[-] (10, 0) -- (10,1.3) node[right] {};
\foreach \height/\base/\baseend in {2/0/3,1.5/3/4,2.3/4/5.5,1.8/5.5/8,1.3/8/10}{ \draw[-] (\base, \height) -- (\baseend,\height) node[right] {}; }
\foreach \col/\height/\base/\baseend in {blue/1.3/0/3,yellow/2.3/3/4,red/1.8/4/5.5,green/1.5/5.5/8,gray/2/8/10}{\draw [\col, fill=\col, opacity=0.4] (\base*0.1,0) rectangle (\baseend*0.1,\height*5); }
\end{tikzpicture} \caption{\small The Rohlin towers at the $n_k$-th and $(n_k + N)$-th steps of induction are represented by white and colored rectangles, respectively.} \end{figure} Let us show that with this definition, the sequence $(m_k)_{k \in \mathbb{Z}}$ satisfies
\eqref{eq:boundedBS}. We will only prove that \[ \sup_{k\in \mathbb{N}} |S_{m_k} f_\omega(x)| < +\infty,\] since the proof for the sequence $(m_{-k})_{k\in\mathbb{N}}$ is completely analogous: it suffices to consider the backward orbit of $x$ instead of the forward one. We start by showing that we can decompose the Birkhoff sum $S_{m_k} f_\omega (x)$ into a sum of special Birkhoff sums of level $n_k$, plus an initial and a final segment. More precisely, we will show that we can express $S_{m_k} f_\omega (x)$ as \begin{equation}\label{eq:BSdecompnk} \begin{split} S_{m_k} f_\omega (x ) = S_{q^k_\alpha -i}\, f_\omega (x) + & \sum_{0\leq \ell \leq \ell_0} \SBS{n_k}{f_\omega} (T_k^\ell (y)) + S_{i} f_\omega (z), \end{split} \end{equation} for some $\ell_0 \in \mathbb{N}$, where $y:= T^{q^k_\alpha -i}(x)$ and $ z:= T^{m_k - i} (x)$. Let us prove \eqref{eq:BSdecompnk}. By definition of $y$ and $z$, it follows that \begin{equation} \label{eq:decomp} \begin{aligned} S_{m_k} f_\omega (x ) & = S_{q^k_\alpha - i}\, f_\omega (x) + S_{m_k-q^k_\alpha} f_\omega (y) + S_i f_\omega (z). \end{aligned} \end{equation} Notice that, since $x$ belongs to the $i$-th floor of the tower $Z^{k}_{\alpha}$, we have $y \in I^k$. Consider now the successive iterates $T_k^\ell (y)$, $\ell\in \mathbb{N}$, which by definition of the induced map $T_k$ all belong to $I^k$, and let $T_k^{\ell_0}(y)$ be the last one (in the natural order of the orbit of $x$ under $T$) before $T^{m_k}(x)$. Notice that $T_k^{\ell_0}(y) = z = T^{m_k - q_\alpha^k}(y)$. For $0\leq \ell \leq \ell_0$, let $\alpha_\ell \in \mathcal{A}$ be such that $T_k^\ell(y) \in I^k_{\alpha_\ell}$.
Since, for any $0 \leq \ell \leq \ell_0$, we have \[ T_k(T_k^\ell(y)) = T^{q_{\alpha_{\ell}}^k}(T_k^\ell(y)), \qquad S_{q_{\alpha_{\ell}}^k}{f_\omega}(T_k^\ell(y)) = \SBS{n_k}{f_\omega}(T_k^\ell(y)),\] it follows from the cocycle relation \eqref{eq:cocyclerel} that \begin{equation}\label{eq:ellbound} S_{m_k-q^k_\alpha} f_\omega (y) = \sum_{0\leq \ell \leq \ell_0} S_{q_{\alpha_{\ell}}^k}{f_\omega}(T_k^\ell(y)) = \sum_{0\leq \ell \leq \ell_0} \SBS{n_k}{f_\omega}(T_k^\ell(y)), \end{equation} which together with \eqref{eq:decomp} concludes the proof of \eqref{eq:BSdecompnk}. Notice now that, since $\ell_0$ is bounded by the number of towers of level $n_k$ contained in the union of $Z^{k_+}_{\beta}$ and $Z^{k_+}_{\beta^+}$, by the interpretation of the entries of the incidence matrices (see \S~\ref{sc:incidence_matrices}) and the BC condition, it follows that \begin{equation}\label{eq:l0bound} \ell_0+1 \leq 2 \Vert \ZC{n_k}{n_k + N}\Vert \leq 2 K. \end{equation} Moreover, observe that the initial and final sums in \eqref{eq:BSdecompnk} combine into a full special Birkhoff sum: since $z \in I^k_\alpha$ and $x$ belongs to the $i$-th floor of $Z^k_\alpha$, the two sums together evaluate $f_\omega$ at exactly one point of each floor of the tower $Z_\alpha^k$, and $f_\omega$ is constant on each floor of $Z_\alpha^k$, so that \begin{align*} S_{q^k_\alpha -i}\, f_\omega (x) + S_i f_\omega (z) & = \sum_{m=0}^{q^k_\alpha-i-1} f_\omega (T^m x) + \sum_{m=0}^{i-1} f_\omega (T^m z) \\ & = \sum_{m=0}^{q^k_\alpha-1} f_\omega (T^m z) = \SBS{n_k}{f_\omega}(z)= \omega^{(n_k)}_{\alpha}.
\end{align*} Using these last two observations, we can estimate \eqref{eq:BSdecompnk} by a bounded number of special Birkhoff sums, which in turn, in view of \eqref{eq:log_slopes_height_cocycle}, \eqref{SBS_cocycle_relation}, \eqref{eq:ellbound}, \eqref{eq:l0bound} and the BC condition, yields \begin{align*} | S_{m_k} f_\omega (x ) | & \leq \left| \omega^{(n_k)}_{\alpha}\right|+ \left| \sum_{0\leq \ell \leq \ell_0} \SBS{n_k}{f_\omega} (T_k^\ell (y)) \right| \\ & = \left| \omega^{(n_k)}_{\alpha}\right| + \left| \sum_{0\leq \ell \leq \ell_0} \omega^{(n_k)}_{\alpha_\ell} \right| \\ & \leq (\ell_0+2) \Vert \omega^{(n_k)} \Vert\\ & \leq (2K+1)V\|\omega\|. \end{align*} Therefore $ \sup_{k\in \mathbb{N}} |S_{m_k} f_\omega(x)| < +\infty,$ which concludes the proof. \end{proof} Let us also show how Corollary~\ref{cor:boundedseq} follows from Proposition~\ref{prop:boundedseq}. \begin{proof}[Proof of Corollary~\ref{cor:boundedseq}] Notice that, since $T_0$ satisfies the BC condition, its rotation number $\gamma(T_0)$ is in particular $\infty$-complete. Furthermore, since $\omega\in E_{cs}(T_0)$ and $E_{cs}(T_0) \subset \lambda^\bot$ (see \eqref{rk:centralinclusion}), the assumptions of Proposition~\ref{prop:non_empty_affine} hold and therefore there exists an AIET $T$ in $ \textup{Aff}(\gamma(T_0),\omega)$. By Proposition \ref{prop:semi-conjugacy}, there also exists a semiconjugacy between $T$ and $T_0$, i.e.~an increasing surjective map $h: [0, 1) \to [0, 1)$ satisfying $T_0 \circ h = h \circ T$. Notice that $h$ maps continuity intervals of $T$ onto continuity intervals of $T_0$. Therefore, by definition of the functions $f_{T,\omega}$ and $f_{T_0,\omega}$ (see Definition~\ref{def:fv}), $f_{T_0, \omega}\circ h= f_{T,\omega}$. Thus, for every $n\in \mathbb{N}$ and any $x\in [0,1]$, $$ S_n^{T_0} f_{T_0,\omega} (h(x)) = \sum_{k=0}^{n-1} f_{T_0,\omega} ((T_0)^k\circ h (x))= \sum_{k=0}^{n-1} f_{T_0,\omega}( h \circ T^k(x)) = S_n^{T} f_{T,\omega} (x) .
$$ Similarly (recalling Definition~\ref{def:BS}), one sees that $S_n^{T_0} f_{T_0,\omega} \circ h(x) = S_n^{T} f_{T,\omega} (x)$ for any $n\in \mathbb{Z}$. Thus, the conclusion of the corollary follows immediately from Proposition~\ref{prop:boundedseq}. \end{proof} \section{Singularity of the invariant measure} \label{sc:singularity} In this section, we present a direct proof of Theorem \ref{thm:regularity}. We first state a standard lemma from real analysis (Lemma~\ref{lem:locally_constant} below), which will be helpful in the proof and which roughly says that, at sufficiently small scales, an integrable function is almost locally constant. Given an integrable function $\psi: [0, 1] \to \mathbb{R}$, for any $x \in [0,1]$ and any $r, \delta > 0$, we denote \[ E^\psi_r(x, \delta) = \{ y \in [0, 1] \mid |x - y| < r;\, |\psi(x) - \psi(y)| > \delta\}.\] The set $E^\psi_r(x, \delta)$ is the set of `exceptional points' in the ball of radius $r$ around $x$ at which the value of $\psi$ differs from $\psi(x)$ by more than $\delta$. \begin{lemma} \label{lem:locally_constant} Let $\psi: [0, 1] \to \mathbb{R}$ be an integrable function and let $\delta > 0$. Given $\epsilon > 0$, there exists $r_0 > 0$ such that \begin{equation} \label{eq:measure_good_points} \textup{Leb}\left(\left\{ x \in [0, 1] \,\left|\, \big| E^\psi_r(x, \delta) \big| < \tfrac{2r\epsilon}{\delta}\, \text{ for all } 0 < r < r_0\right. \right\}\right) > 1 - \epsilon. \end{equation} \end{lemma} This lemma is a simple consequence of the Lebesgue differentiation theorem and Egorov's theorem. \begin{proof}[Proof of Theorem \ref{thm:regularity}] Let $T_0$ and $T$ be as in Theorem \ref{thm:topconjugacy}, and assume WLOG that $T_0$ is uniquely ergodic. Furthermore, assume that $T_0$ verifies the HS Condition (see Definition \ref{def:HS_Condition}). Denote by $h$ the associated conjugating map, verifying $T \circ h = h \circ T_0$.
Suppose by contradiction that the unique invariant probability measure $\mu$ of $T$ is absolutely continuous with respect to Lebesgue; recall that, by the discussion at the beginning of \S\ref{sc:singularity}, this measure is either singular or absolutely continuous with respect to the Lebesgue measure. Denote by $\varphi$ the associated Radon-Nikodym derivative, namely, $\mu = \varphi\, \textup{Leb}$. Since $\mu$ is $T$-invariant, \begin{equation} \label{eq:invariant_density} (\varphi \circ T)\, T' = \varphi \end{equation} almost everywhere. Recall that by Corollary \ref{cor:central_lower_bound}, \[ \delta := \tfrac{1}{4}\inf_{k \in \mathbb{N}} |\omega^{(m_k)}| > 0,\] where $(m_k)_{k \in \mathbb{N}}$ denotes the sequence of Zorich times associated with $T_0$. We will derive a contradiction by showing that \eqref{eq:invariant_density} implies that $|\omega^{(m_k)}| < 2\delta$ for some $k$. \noindent By iterating \eqref{eq:invariant_density} and taking logarithms, it follows that, for any $k \in \mathbb{N}$, $$(\log \varphi \circ T^k) - \log \varphi = S_k^T f_{T,\omega} $$ almost everywhere, where $\omega$ denotes the log-slope vector of $T$. Composing the previous equality with $h$ and denoting $\psi = \log \varphi \circ h$, we obtain \[ \psi \circ T_0^k - \psi = S_k^{T_0}f_{T_0,\omega} \] almost everywhere.
Considering the return times given by Rauzy-Veech induction yields \[ \psi \circ T_0^{q^{(n)}_\alpha}(x) - \psi(x) = \omega^{(n)}_\alpha, \] for $x \in I^{(n)}_\alpha$ and $\alpha \in \mathcal{A}.$ Let $(n_k)_{k \in \mathbb{N}}$ be the subsequence of Zorich times given by Proposition \ref{prop:fullmeasure}, and let us denote \[ q^k := q^{(n_k)}, \quad \omega^k := \omega^{(n_k)}, \quad I^k_\alpha := I^{(n_k)}_\alpha.\] Notice that, by the HS Condition, \begin{equation} \label{eq:extended_SBS} \psi \circ T_0^{q^k_\alpha}(x) - \psi(x) = \omega^k_\alpha, \end{equation} for $x \in \bigcup_{i = 0}^{h_k} T_0^i(I^{k}_\alpha)$ and $\alpha \in \mathcal{A}$, where $h_k = \tfrac{1}{4} \min_{\alpha \in \mathcal{A}} q^k_\alpha.$ Since $T_0$ verifies the BC Condition, there exists $0 < c_0 < 1$ such that \begin{equation} \label{eq:balanced_acc_lengths} \min_{\alpha \in \mathcal{A}} \frac{|I^k_\alpha|}{|I^k|} > c_0, \end{equation} for any $k \in \mathbb{N}$. Denote $\epsilon = \tfrac{c_0}{8C}\min\{1, \delta\}$, where $C$ is given by the HS Condition, and let $r_0$ be given by Lemma \ref{lem:locally_constant}. Let $k$ be sufficiently large so that $|I^k| < r_0$. Since \[ \textup{Leb}\left( \bigcup_{\alpha \in \mathcal{A}} \bigcup_{i = 0}^{h_k} T_0^i(I^k_\alpha) \right) > \tfrac{1}{4C},\] it follows from \eqref{eq:measure_good_points} that there exists $x_k = T_0^{i_k}(r_k)$, with $r_k \in \big[\tfrac{|I^k|}{2}, |I^k|\big)$ and $0 \leq i_k \leq h_k$, such that $$|E^\psi_{r_k}(x_k, \delta)| < \tfrac{2r_k\epsilon}{\delta}.$$ Fix $\alpha \in \mathcal{A}$. Notice that \[T_0^{i_k}(I_\alpha^k),\, T_0^{q^k_\alpha + i_k}(I_\alpha^k) \subset B_{r_k}(x_k),\] and by \eqref{eq:balanced_acc_lengths}, \[|T_0^{i_k}(I_\alpha^k)|,\, |T_0^{q^k_\alpha + i_k}(I_\alpha^k)| \geq c_0 |I^k|.\] Hence \begin{align*} \textup{Leb}& \left(\left\{ y \in T_0^{i_k}(I_\alpha^k) \,\left|\, y \notin E^\psi_{r_k}(x_k, \delta); \, T_0^{q_\alpha^k}(y) \notin E^\psi_{r_k}(x_k, \delta) \right.
\right\} \right) \\ & \geq |I^k_\alpha| - 2 |E^\psi_{r_k}(x_k, \delta)| \geq |I^k_\alpha| - \frac{4r_k\epsilon}{\delta} \\ & \geq |I^k_\alpha| \left( 1 - \frac{4\epsilon}{\delta c_0}\right) \geq \frac{1}{2} |I^k_\alpha|. \end{align*} Thus, there exists $y^k_\alpha \in T_0^{i_k}(I_\alpha^k)$, verifying \eqref{eq:extended_SBS}, such that \[y^k_\alpha,\, T_0^{q^k_\alpha} (y^k_\alpha) \notin E^\psi_{r_k}(x_k, \delta).\] Hence $$|\psi(y^k_\alpha) - \psi(T_0^{q^k_\alpha} (y^k_\alpha))| < 2\delta,$$ and by \eqref{eq:extended_SBS}, $|\omega^k_\alpha| < 2\delta$. Since $\alpha \in \mathcal{A}$ was arbitrary, we obtain $|\omega^k| < 2\delta$, which contradicts the definition of $\delta$. This concludes the proof. \end{proof} \section{Full measure of the IETs conditions}\label{sc:fullmeasureproof} This section is devoted to the proof of Proposition \ref{prop:fullmeasure}, namely to showing simultaneously that the BC and the HS Conditions introduced in \S~\ref{sec:fullmeasure} (see Definitions~\ref{def:BC_condition} and~\ref{def:HS_Condition}) are satisfied by a full measure set of (irreducible) IETs. We start by introducing a few objects and notations needed in the proof. \subsection{Oseledets splittings} We denote the \emph{natural extensions} of the Zorich map $\ZorichMap$ and of the Zorich renormalization $\ZorichNorm$ by \[\ZorichMapExt: X_\AExt \to X_\AExt, \quad\quad \ZorichNormExt: X_\AExtNorm \to X_\AExtNorm.\] The domains of these transformations admit a geometric interpretation in terms of \emph{zippered rectangles}, which were introduced by W. Veech \cite{veech_gauss_1982} when considering suspensions over IETs, and can be seen as subsets of $X_\A \times \mathbb{R}^{\mathcal{A}}$.
We denote points in the domains $X_\AExt$ and $X_\AExtNorm$ by $(\pi, \lambda, \tau).$ Recall that $\ZorichNorm$ admits a unique invariant probability measure $\mu_{\ZorichNorm}$ equivalent to the Lebesgue measure, and that its natural extension $\ZorichNormExt$ admits a unique invariant probability measure $\mu_{\ZorichNormExt}$ equivalent to Lebesgue and such that $p_*(\mu_{\ZorichNormExt}) = \mu_{\ZorichNorm}$, where $p: X_\AExtNorm \to X_\AExt$ denotes the canonical projection $p(\pi, \lambda, \tau) = (\pi, \lambda).$ Considering the natural extension of the Zorich renormalization and extending the cocycle $B$ trivially to $X_\AExtNorm$ using the canonical projection $p$, the height and length cocycles admit invariant Oseledets splittings \begin{gather*} E_s(\pi, \lambda, \tau) \oplus E_c(\pi, \lambda, \tau)\oplus E_u(\pi, \lambda, \tau) =\mathbb{R}^{\mathcal{A}},\\ F_s(\pi, \lambda, \tau) \oplus F_c(\pi, \lambda, \tau)\oplus F_u(\pi, \lambda, \tau) =\mathbb{R}^{\mathcal{A}}, \end{gather*} respectively, corresponding to the sets of vectors with negative, zero, and positive Lyapunov exponents. These spaces verify \begin{equation} \label{eq:relation_flag_splitting} \begin{aligned} & E_s(\pi, \lambda, \tau) = E_s (\pi, \lambda), \quad\quad E_c(\pi, \lambda, \tau) \oplus E_s(\pi, \lambda, \tau) = E_{cs}(\pi, \lambda), \\ & F_s(\pi, \lambda, \tau) = F_s (\pi, \lambda), \quad\quad F_c(\pi, \lambda, \tau) \oplus F_s(\pi, \lambda, \tau) = F_{cs}(\pi, \lambda), \end{aligned} \end{equation} for a.e. $(\pi, \lambda, \tau) \in X_\AExtNorm$. Moreover, since the height and length cocycles are \emph{dual} to each other, we have \begin{equation} \label{eq: orthogonal_flags} E_s(\pi, \lambda, \tau) = F_{cs}(\pi, \lambda, \tau)^\bot, \hskip1cm F_s(\pi, \lambda, \tau) = E_{cs}(\pi, \lambda, \tau)^\bot, \end{equation} for a.e. $(\pi, \lambda, \tau) \in X_\AExt.$ We refer the interested reader to \cite{zorich_deviation_1997} for a precise definition of dual cocycles.
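Concretely, the duality can be recorded as the identity
\[ \big\langle \HC{0}{n} v,\, \LC{0}{n} w \big\rangle = \langle v, w \rangle, \qquad \text{for all } v, w \in \mathbb{R}^{\mathcal{A}}, \quad \text{equivalently} \quad \big(\HC{0}{n}\big)^{-1} = \big(\LC{0}{n}\big)^{T}, \]
which is the form in which it is exploited in the proof of Lemma~\ref{lem:bounds_central} below.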
For the sake of simplicity, for a.e. $(\pi, \lambda, \tau) \in X_\AExt$ and for any $n \in \mathbb{Z}$, we denote the iterates under $\ZorichMapExt$ by $(\pi, \lambda, \tau)^{(n)}$ and the associated Oseledets subspaces by $E^n_\epsilon(\pi, \lambda, \tau) = E_\epsilon\big((\pi, \lambda, \tau)^{(n)}\big)$, where $\epsilon \in \{s, c, u\}$. \subsection{Angle control between the splittings} Recall that the angle between two subspaces $\{0\} \subsetneq E, F \subsetneq \mathbb{R}^{\mathcal{A}}$ is given by \[\angle(E, F) = \min \left\{ \arccos \left( \left | \langle v, w \rangle \right| \right) \mid v \in E, w \in F, |v| = 1 = |w|\right\}.\] Denoting by $\pi_{E, F}: E \to F$ the projection of $E$ to $F$, we have \[\| \pi_{E, F} \| \leq \cos\angle(E, F).\] This implies the following. \begin{lemma} \label{lem: bound_projection} Let $\{0\} \subsetneq E, F \subset \mathbb{R}^{\mathcal{A}}$ and $\delta = \cos\angle(E, F)$. Then \[ \sqrt{1 - \delta}\, |v| \leq |\pi_{F^\bot}(v)|,\] for any $v \in E$, where $\pi_{F^\bot}$ denotes the orthogonal projection to $F^\bot$. \end{lemma} The following observation will be of fundamental importance. For a proof see \cite[Proposition 7.6]{yoccoz_interval_2010}. \begin{proposition} \label{prop: trivial_action} For a.e. $(\pi, \lambda, \tau) \in X_\AExt$ and for any $n \in \mathbb{Z}$, \[ \LC{0}{n}(\pi, \lambda, \tau) (\textup{Ker}(\Omega_\pi)) = \textup{Ker}(\Omega_{\pi^{(n)}}). \] Moreover, it is possible to pick a basis of $\textup{Ker}(\Omega_\pi)$, for each $\pi \in \mathcal{G}_\A$, such that, for a.e. $(\pi, \lambda, \tau) \in X_\AExt$ and for any $n \in \mathbb{Z}$, the matrix associated to the transformation \[ \LC{0}{n}(\pi, \lambda, \tau) \mid_{\textup{Ker}(\Omega_\pi)}: \textup{Ker}(\Omega_\pi) \to \textup{Ker}(\Omega_{\pi^{(n)}}),\] with respect to the selected bases, is the identity.
\end{proposition} The previous proposition shows that the central space for the length cocycle is given by \begin{equation} \label{eq: kernel_central} F_c(\pi, \lambda, \tau) = \textup{Ker}(\Omega_\pi), \end{equation} for a.e. $(\pi, \lambda, \tau) \in X_\AExtNorm$. Applying Proposition \ref{prop: trivial_action}, we can show the following. \begin{lemma} \label{lem:bounds_central} There exists $C_0 > 1$ such that for a.e. $(\pi, \lambda, \tau)$, \[ \left \| \pi_{\textup{Ker}(\Omega_{\pi^{(n)}})} \circ \HC{0}{n} \right\| \leq C_0,\] for any $n \in \mathbb{N}$. Moreover, if $\pi^{(n)} = \pi$ for some $n \in \mathbb{N}$, then \[ \pi_{\textup{Ker}(\Omega_{\pi^{(n)}})} \circ \HC{0}{n} = \pi_{\textup{Ker}(\Omega_{\pi^{(n)}})}. \] \end{lemma} \begin{proof} Let $n \in \mathbb{N}$ be fixed. Notice that $\big(\HC{0}{n}\big)^{-1} = \big(\LC{0}{n}\big)^T$. Hence, for any $v \in \mathbb{R}^{\mathcal{A}}$ and any $w \in \textup{Ker}(\Omega_{\pi^{(n)}})$, \begin{align*} \big \langle \HC{0}{n} v, w \big\rangle & = \big\langle v, \big(\HC{0}{n}\big)^T w \big\rangle = \big\langle v, \big(\LC{0}{n}\big)^{-1} w \big\rangle . \end{align*} By Proposition \ref{prop: trivial_action}, it follows that \[ \big| \big \langle \HC{0}{n} v, w \big\rangle \big| \leq C_0|v||w| \] for some constant $C_0$ depending only on $d$, and, if $\pi^{(n)} = \pi$, \[ \big \langle \HC{0}{n} v, w \big\rangle = \langle v, w\rangle,\] which proves the lemma.
\end{proof} \begin{figure} \centering \begin{tikzpicture} \coordinate (O) at (0,0); \coordinate (J1) at (170:2.8); \coordinate (-J1) at (170:-2.8); \coordinate (J2) at (-10:2.8); \draw[->] (0,0,0) -- (5,0,0) node[right] {}; \draw[->] (0,0,0) -- (0,4,0) node[right] {}; \draw[->] (0,0,0) -- (0,0,8) node[above] {}; \draw[thick,blue] (170:4) -- (170:-4.8); \draw[thick,black] (190:4) -- (190:-4); \draw[thick,black] (135:4) -- (135:-4); \draw[thin,dashed,black] (190:-3) -- (135:-3.4); \node[circle, fill, scale=0.3, label=above:$\omega$] at (190:-3) {}; \node[circle, fill=blue, scale=0.3,label=above:] at (170:-2.8) {}; \node[circle, fill=blue, scale=0.0,label={[text=blue]:{\scriptsize$\quad\pi_{\textup{Ker}(\Omega_\pi)}(\omega)$}}] at (170:-3.62) {}; \node[vector,right] at (170:-4.8) {$\textup{Ker}(\Omega_\pi)$}; \node[circle, fill, scale=0.3, label=below:$\omega^{(n)}\hspace{0.3cm}$] at (135:-3.4) {}; \node[circle, fill=blue, scale=0.0,label={[below,text=blue]:{\scriptsize$\quad\pi_{\textup{Ker}(\Omega_\pi)}\big(\omega^{(n)}\big)$}}] at (168:-3.62) {}; \node[vector,black,right] at (190:-4) {$E^c(\pi, \lambda, \tau)$}; \node[vector,black,right] at (135:-4) {$E^c\big((\pi, \lambda, \tau)^{(n)}\big)$}; \jetcone{O}{J1}{2}{0.3} \jetconee{O}{J2}{2}{0.3} \end{tikzpicture} \caption{\small Denoting $\omega^{(n)} = \HC{0}{n}(\pi, \lambda, \tau)\,\omega$, for every $n$ such that $\pi^{(n)} = \pi$ the projection of $\omega^{(n)}$ to $\textup{Ker}(\Omega_\pi)$ remains constant (see Lemma \ref{lem:bounds_central}). Moreover, whenever the angle between $E^c\big((\pi, \lambda, \tau)^{(n)}\big)$ and $\textup{Ker}(\Omega_\pi)$ is small, the norm of $\omega^{(n)}$ is bounded, from above and below, by positive constants depending only on $\omega$ (see Corollary \ref{cor:central_lower_bound} and Lemma \ref{lem: bound_cocycle_central}).} \end{figure} \begin{corollary} \label{cor:central_lower_bound} For a.e.
$(\pi, \lambda, \tau)$ and for any $v \in E_c(\pi, \lambda, \tau) \setminus \{0\}$, \[ \inf_{n \in \mathbb{N}} |\HC{0}{n}v| > 0. \] \end{corollary} \begin{proof} By \eqref{eq: orthogonal_flags} and \eqref{eq: kernel_central}, \begin{align*} E_c(\pi, \lambda, \tau) \cap {\textup{Ker}(\Omega_\pi)}^\bot & \subseteq E_c(\pi, \lambda, \tau) \cap {\textup{Ker}(\Omega_\pi)}^\bot \cap E_{cs}(\pi, \lambda, \tau) \\ & = E_c(\pi, \lambda, \tau) \cap {\textup{Ker}(\Omega_\pi)}^\bot \cap F_s(\pi, \lambda, \tau)^\bot \\ & = E_c(\pi, \lambda, \tau) \cap ({\textup{Ker}(\Omega_\pi)} \oplus F_s(\pi, \lambda, \tau))^\bot \\ & = E_c(\pi, \lambda, \tau) \cap F_{cs}(\pi, \lambda, \tau)^\bot \\ & = E_c(\pi, \lambda, \tau) \cap E_s(\pi, \lambda, \tau), \end{align*} for a.e. $(\pi, \lambda, \tau).$ Hence, \begin{equation} \label{eq:iso_central_kernel} E_c(\pi, \lambda, \tau) \cap \orth{\textup{Ker}(\Omega_\pi)} = \{0\}, \end{equation} for a.e. $(\pi, \lambda, \tau),$ since, otherwise, the stable and central spaces associated with the height cocycle would have a non-trivial intersection. By Lemma \ref{lem:bounds_central}, the projection of $\HC{0}{n}v$ to the space $\textup{Ker}(\Omega_{\pi^{(n)}})$ is constant. Therefore, its norm is uniformly bounded from below. \end{proof} \begin{lemma} \label{lem: bound_cocycle_central} For a.e. $(\pi, \lambda, \tau) \in X_\AExtNorm$ and for any $n \geq m \geq 0$, there exists a constant $C\big((\pi, \lambda, \tau)^{(n)}\big) > 0$, depending only on $\angle\big(E_c^n(\pi, \lambda, \tau), \orth{\textup{Ker}(\Omega_{\pi^{(n)}})}\big)$, such that the linear operator \[\HC{m}{n}(\pi, \lambda, \tau) \mid_{E^m_c(\pi, \lambda, \tau)}: E^m_c(\pi, \lambda, \tau) \to E^n_c(\pi, \lambda, \tau)\] verifies \[ \left \|\HC{m}{n}(\pi, \lambda, \tau) \mid_{E^m_c(\pi, \lambda, \tau) } \right \| \leq C\big((\pi, \lambda, \tau)^{(n)}\big).\] \end{lemma} \begin{proof} Let $(\pi, \lambda, \tau) \in X_\AExtNorm$ be Oseledets generic and fix $n > m \geq 0$.
By \eqref{eq:iso_central_kernel}, we may assume without loss of generality that \[E_c^n(\pi, \lambda, \tau) \cap \textup{Ker}(\Omega_{\pi^{(n)}})^\bot = \{0\}, \] for any $n \in \mathbb{N}$. Thus, by Lemma \ref{lem: bound_projection}, applied to $E:=E^n_c(\pi, \lambda, \tau)$ and $F:=\orth{\textup{Ker}(\Omega_{\pi^{(n)}})}$, \begin{equation} \label{eq:bound_projection} \left \|\HC{m}{n} \mid_{E^m_c(\pi, \lambda, \tau) } \right \| \leq C\big((\pi, \lambda, \tau)^{(n)}\big) \left \| \pi_{\textup{Ker}(\Omega_{\pi^{(n)}})} \circ \HC{m}{n} \mid_{E^m_c(\pi, \lambda, \tau) } \right \|, \end{equation} for some constant $C\big((\pi, \lambda, \tau)^{(n)}\big)$ depending only on $\angle\big(E_c^n(\pi, \lambda, \tau), \orth{\textup{Ker}(\Omega_{\pi^{(n)}})}\big)$, where $\pi_{\textup{Ker}(\Omega_{\pi^{(n)}})}$ denotes the projection from $\mathbb{R}^{\mathcal{A}}$ onto $\textup{Ker}(\Omega_{\pi^{(n)}}).$ The result now follows from Lemma \ref{lem:bounds_central} and \eqref{eq:bound_projection}. \end{proof} \subsection{Full measure of the BC and HS conditions.}\label{sec:fullmeasureproof} We now have all the ingredients to present the proof of full measure of the BC and HS conditions, i.e.~Proposition~\ref{prop:fullmeasure}. \begin{proof}[Proof of Proposition~\ref{prop:fullmeasure}] By Lemma \ref{lem: bound_cocycle_central}, there exists a measurable function \[\mathcal{C}: X_\AExtNorm \to (0, +\infty),\] such that for a.e. $(\pi, \lambda, \tau) \in X_\AExtNorm$ and for any $n \geq 0$, \[ \left \|\HC{0}{n}(\pi, \lambda, \tau) \mid_{E_c(\pi, \lambda, \tau) } \right \| \leq \mathcal{C}\big((\pi, \lambda, \tau)^{(n)}\big).\] Clearly, it is enough to show that almost every IET in every fixed Rauzy class verifies the conditions, so let us fix a Rauzy class $\mathfrak{R}$ and a permutation $\pi^*$ in $\mathfrak{R}$.
Let $\gamma$ be a finite path in the Rauzy graph, starting and ending at $\pi^*$, such that $A_\gamma := \ZC{0}{|\gamma|}(\pi^*, \lambda)$ is a positive matrix, where $N:=|\gamma|$ denotes the length of $\gamma$, and $\lambda$ belongs to the set $\Delta^*$ of length vectors $\lambda \in \Delta_{\mathcal{A}}$ such that $(\pi^*, \lambda)$ satisfies Keane's condition and its combinatorial rotation number starts with $\gamma \star \gamma$, where $\star$ denotes the juxtaposition of two Rauzy paths. For any $\pi \in \mathcal{G}_\A$, define \[\Xi_\pi := \left\{ \tau \in \mathbb{R}^{\mathcal{A}} \,\left|\, \, \frac{h_\alpha}{2} < \sum_{\pi_0(\beta) < \pi_0(\alpha)} \tau_\beta, \text{ for all } \alpha \in \mathcal{A} \text{ s.t. } \pi_0(\alpha) \neq 1, d \right. \right\}, \] where $h_\alpha:=-\frac{1}{2}(\Omega_\pi \tau)_\alpha$ is the height of the zippered rectangle labelled by $\alpha\in \mathcal{A}$. Then there exists a positive measure set \[Y \subset \big\{ (\pi, \lambda, \tau) \in X_\AExtNorm \mid \pi = \pi^*;\, \lambda \in \Delta^* \text{ and } \tau^{(N)} \in \Xi_{\pi^*} \big\}.\] Moreover, since $\mathcal{C}$ is measurable and $Y$ has positive measure, by Luzin's theorem, after replacing $Y$ with a smaller positive measure subset, we can assume WLOG that $\mathcal{C}$ restricted to $\ZorichMapExt^N(Y)$ is uniformly bounded by a constant $V > 0$. By ergodicity of the extended Zorich renormalization, for a.e. $(\pi, \lambda, \tau) \in X_\AExtNorm$ with $\pi \in \mathfrak{R}$ there exists an increasing sequence $(m_k)_{k \in \mathbb{N}} \subset \mathbb{N}$ such that $(\pi, \lambda, \tau)^{(m_k)} \in Y$, for all $k \in \mathbb{N}$.
In particular, for any $k \in \mathbb{N}$, since $|\gamma \star \gamma|=2N$, $$ \ZC{0}{2N}\big((\pi, \lambda, \tau)^{(m_k)}\big) = A_{\gamma\star\gamma} = A_\gamma A_\gamma, $$ so that \begin{align} \label{eq:double_matrix} & \ZC{0}{N}\big((\pi, \lambda, \tau)^{(m_k)}\big) = A_\gamma = \ZC{0}{N}\big((\pi, \lambda, \tau)^{(m_k + N)}\big),\\ \label{eq:bounded_central_space} & \left \|\HC{0}{m_k + N}(\pi, \lambda, \tau) \mid_{E_{c}(\pi, \lambda, \tau) } \right \| \leq V, \\ \label{eq:good_tau} & \tau^{(m_k + N)} \in \Xi_{\pi^{*}}. \end{align} Since for a.e. $(\pi, \lambda, \tau) \in X_\AExtNorm,$ \[ \sup_{n \geq 1} \left \|\HC{0}{n}(\pi, \lambda, \tau) \mid_{E_{s}(\pi, \lambda, \tau)} \right \| < +\infty,\] and since, by a standard Fubini argument, a full measure set in $X_\AExtNorm$ gives a full measure set of IETs in $X_\ANorm$ with the same forward cocycle matrices (see e.g.~\cite{ghazouani_priori_2021}), it follows from the second equality in \eqref{eq:double_matrix}, \eqref{eq:bounded_central_space} and \eqref{eq:relation_flag_splitting} that a.e. $(\pi, \lambda)$ with $\pi \in \mathfrak{R}$ verifies the BC Condition along the subsequence $(n_k)_{k\in \mathbb{N}}$ given by $n_k:= m_k+N$. We claim that the HS Condition also holds along the same subsequence $(n_k)_{k\in \mathbb{N}}$. It is indeed standard to check that the first equality in \eqref{eq:double_matrix} implies assertion \eqref{cond:balanced_heights} for some $C > 0$ depending only on $A_\gamma.$ Furthermore, assertion \eqref{cond:continuous_iterates} holds by \eqref{eq:good_tau}, see e.g.~Chapter 15 in \cite{viana_ergodic_2006}. Thus, a.e. $(\pi, \lambda)$ with $\pi \in \mathfrak{R}$ verifies the HS Condition. \end{proof} \section{Appendix}\label{app:Cobo} In this section, we provide an alternative proof of Theorem \ref{thm:regularity}, by showing that it follows, as an application of Theorem \ref{thm:topconjugacy}, from a result by M. Cobo (which in turn exploits the work of W.
Veech \cite{veech_gauss_1982}), which says the following. \begin{theorem}[Theorem 1 in \cite{cobo_piece-wise_2002}] \label{thm:Cobo} For almost every IET $T_0$, for any $\omega \in E_{cs}(T_0) \,\setminus\, E_s(T_0)$ and for any AIET $T \in \textup{Aff}(\gamma(T_0), \omega)$, any conjugating map between $T_0$ and $T$ is not an absolutely continuous function. \end{theorem} \begin{proof}[Alternative proof of Theorem \ref{thm:regularity} using M. Cobo's work in \cite{cobo_piece-wise_2002}] Notice that for a uniquely ergodic AIET $T$, its unique invariant measure $\mu$ is either singular or absolutely continuous with respect to the Lebesgue measure. Indeed, writing $\mu = \mu_0 + \mu_1$, where $\mu_0 \ll \textup{Leb}$ and $\mu_1 \perp \textup{Leb}$, and since $T$ preserves the sets of zero Lebesgue measure, it follows that \[ T_* \mu = T_* \mu_0 + T_*\mu_1 = \mu_0 + \mu_1, \quad T_* \mu_0 \ll \textup{Leb}, \quad T_* \mu_1 \perp \textup{Leb}.\] Hence $T_* \mu_0 = \mu_0$ and $T_* \mu_1 = \mu_1.$ By unique ergodicity, either $\mu_0$ or $\mu_1$ is zero. Let $T_0$ and $T$ be as in Theorem \ref{thm:topconjugacy}. By Theorem \ref{thm:Cobo}, we may assume WLOG that the map conjugating $T_0$ and $T$ is not absolutely continuous. Moreover, since almost every IET is uniquely ergodic (see \cite{masur_interval_1982}, \cite{veech_gauss_1982}), we may assume WLOG that $T_0$ (and hence $T$) is uniquely ergodic. Since the unique invariant probability measure $\mu$ of $T$ is the push-forward of the Lebesgue measure by the conjugating map, and this map is not absolutely continuous, it follows that $\mu$ is not absolutely continuous with respect to Lebesgue. By the argument at the beginning of the proof, $\mu$ is singular with respect to the Lebesgue measure. \end{proof} \end{document}
\begin{document} \title{Scalable solid-state quantum computation in decoherence-free subspaces with trapped ions} \author{Li-Xiang Cen} \affiliation{ Department of Physics, Sichuan University, Chengdu 610065, China} \affiliation{Department of Physics \& Center of Theoretical and Computational Physics, University of Hong Kong, Pokfulam Road, Hong Kong, China} \author{Z. D. Wang} \email{[email protected]} \affiliation{Department of Physics \& Center of Theoretical and Computational Physics, University of Hong Kong, Pokfulam Road, Hong Kong, China} \affiliation{National Laboratory of Solid State Microstructures, Nanjing University, Nanjing, China} \author{S. J. Wang} \affiliation{ Department of Physics, Sichuan University, Chengdu 610065, China} \begin{abstract} We propose a decoherence-free subspace (DFS) scheme to realize scalable quantum computation with trapped ions. The spin-dependent Coulomb interaction is exploited, and a universal set of unconventional geometric quantum gates is achieved in encoded subspaces that are immune to decoherence by collective dephasing. The scalability of the scheme for the ion array system is demonstrated, either by switching the interactions on and off adiabatically, or by a fast gate scheme that combines the DFS encoding with noise decoupling techniques. \end{abstract} \pacs{03.67.Pp, 03.67.Lx, 03.65.Vf} \maketitle The practical accomplishment of quantum computation requires accurate control of quantum coherent evolution to store information in quantum bits (qubits), to process information with quantum gates, and to read out the final result \cite{QC}. The quantum computer model based on ion trap systems, which encodes and manipulates information via long-lived internal states of ions, was identified as a promising candidate and has witnessed rapid development in the past decade \cite {cirac,monroe,molmer,milburn,knight,push,kiel,liebfried,garcia,duan,zhu}. 
In the recent literature \cite{push,garcia,duan,zhu}, it was suggested that quantum gates could be realized via certain spin-dependent Coulomb interactions. These theoretical scenarios significantly relax the physical constraints on executing gate operations and hence offer a robust way to implement quantum information processing. The scalability of the model, i.e., the extension of quantum processors from two qubits to large numbers of ion units, is however quite challenging due to the growing complexity of the ion vibrational mode spectrum. An alternative route to scalability, pursued by many current efforts, is to devise sophisticated microtrap architectures and design reliable ion-shuttling protocols \cite{push,kiel}. Scaling scenarios for a large array of ion crystals without ion shuttling have also been suggested in recent proposals \cite{duan,zhu}, with the prerequisite that the gate interactions be comparable with the local ion oscillation frequency. In this paper we propose a scheme to realize scalable ion trap quantum computation in decoherence-free subspaces (DFS)~\cite{EAC} with an extended unconventional geometric scenario~\cite{zhuw}, which combines the advantages of the DFS and geometric strategies: the former is immune to decoherence induced by collective dephasing, while the latter is thought to be insensitive to certain random errors in the operation process. We exploit the spin-dependent Coulomb interactions and construct a universal set of unconventional geometric quantum gates in encoded subspaces. The potential to scale up the ion array system for quantum computation without ion shuttling is further investigated. Two different interaction configurations are presented: an adiabatic way of switching the interactions on and off, and a scenario in which the interaction pulses are executed rapidly and combined with a noise cancelation technique. 
The system we employ consists of $N$ trapped ions arranged in a convenient structure. Two stationary internal states of each ion, denoted as $|0\rangle $ and $|1\rangle $, are selected to represent physical qubits. We assume that the ions are separated by a suitable distance so that, on the one hand, the ions can be addressed individually and, on the other hand, the mutual Coulomb interactions of the ions can be taken into account. In the absence of external forces, the potential $V$ of the system is normally approximated by its second-order expansion (the harmonic approximation) for small vibrations around the equilibrium configuration $(q_1^{(0)},\cdots ,q_N^{(0)})$. The motional degrees of freedom of the ions are therefore treated collectively, and the Hamiltonian in normal coordinates reads \begin{equation} H_{vib}=\sum_k\frac 1{2m}P_k^2+\frac m2\sum_k\omega _k^2Q_k^2. \label{vibh} \end{equation} Note that the normal coordinates relate to the local ones by $Q_k=\sum_jD_{jk}q_j$, where $D$ is an orthogonal matrix that diagonalizes the Hessian $v_{ij}=(\frac{\partial ^2V}{\partial q_i\partial q_j} )(q_1^{(0)},\cdots ,q_N^{(0)})$, and $\omega _k$ $(k=1,\cdots ,N)$ are the characteristic frequencies of the normal modes. Suppose, in order to realize the qubit-qubit coupling, that two of the ions, located at $q_i$ and $q_j$, are subjected to acceleration forces that depend on the ion internal states as $F_{\mu \alpha }(t)=f_\mu (t)\sigma _\alpha ^{(\mu )}$ with $\mu =i,j$. The corresponding interaction term then takes the form $ H_F(t)=-\sum_{\mu =i,j}f_\mu (t)q_\mu \sigma _\alpha ^{(\mu )}$. We have assumed a general dependence of the forces on the qubit states, where $ \sigma _\alpha ^{(\mu )}$ with $\alpha =x,y,z$ represent the three Pauli operators acting on the states $|0\rangle $ and $|1\rangle $ of the qubit at site $\mu $. 
It is readily seen that, by introducing the Fock operators $a_k=(m\omega _kQ_k+iP_k)/\sqrt{2m\omega _k}$ with $[a_k,a_k^{\dagger }]=1$, the Hamiltonian in the rotating frame with respect to $H_{vib}$ takes the following form (setting $\hbar =1$) \begin{equation} H(t)=-\sum_{k=1}^N[g_i^k(t)\sigma _\alpha ^{(i)}+g_j^k(t)\sigma _\alpha ^{(j)}]a_ke^{-i\omega _kt}+h.c., \label{hamil1} \end{equation} where $g_\mu ^k(t)=\tilde{D}_{\mu k}f_\mu (t)$ for $\mu =i,j$ and $\tilde{D} _{\mu k}=D_{\mu k}/\sqrt{2m\omega _k}$. To investigate the internal state evolution of the ions, we employ a gauged representation with respect to the unitary transformation $G(t)=\exp [-i\int_0^tH(\tau )d\tau ]$. The evolution operator of the system is given by $U(t)=G(t)U_g(t)$, where $U_g(t)$ satisfies the covariant equation $i\partial _tU_g(t)=H^g(t)U_g(t)$ and the gauged Hamiltonian $H^g(t)$ is obtained as \cite{wangsj} \begin{eqnarray} H^g(t) &=&G^{-1}HG-iG^{-1}\partial G/\partial t \nonumber \\ &=&\sum_{k=1}^NJ_{ij}^k(t)\sigma _\alpha ^{(i)}\sigma _\alpha ^{(j)} + \epsilon _0(t), \label{gaugeh} \end{eqnarray} where \begin{equation} J_{ij}^k(t)=\int_0^t[g_i^k(t)g_j^k(t^{\prime })+g_i^k(t^{\prime })g_j^k(t)]\sin \omega _k(t^{\prime }-t)dt^{\prime }, \label{paramj} \end{equation} and $\epsilon _0(t)$ is merely a c-number. Equation (\ref{gaugeh}) explicitly manifests the ``spin-spin'' coupling between the ion qubits $i$ and $j$. In particular, when the external force is controlled with a configuration such that $\int_0^TH(t)dt=0$, the transformation $G(T)$ becomes the identity operator and the evolution operator of the system at time $T$ is exactly $U_g(T)=e^{-i\Phi (T)\sigma _\alpha ^{(i)}\sigma _\alpha ^{(j)}}=e^{-i(\Phi (T)/2)[(\sigma _\alpha ^{(i)}+\sigma _\alpha ^{(j)})^2-2]}$ with $\Phi (T)=\sum_{k=1}^N\int_0^TJ_{ij}^k(t)dt$. 
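As an illustration (with hypothetical pulse parameters, not taken from the text), the coupling $J_{ij}^k(t)$ of Eq.~(\ref{paramj}) and the phase $\Phi(T)$ can be evaluated numerically for a single mode with identical real pulses $g_i(t)=g_j(t)=g(t)$:

```python
import numpy as np

# Illustrative single-mode evaluation of J(t) and Phi(T); the pulse
# g0*sin^2(pi t/T) and all parameter values are our own choices.
omega, g0, T = 2.0 * np.pi, 0.3, 5.0
t = np.linspace(0.0, T, 4001)
g = g0 * np.sin(np.pi * t / T) ** 2          # smooth pulse, g(0) = g(T) = 0

trap = lambda y, x: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Using sin(omega(t'-t)) = Im[e^{i omega t'} e^{-i omega t}] and g real,
# J(t) = 2 Im[ conj(g(t) e^{i omega t}) * int_0^t g(t') e^{i omega t'} dt' ].
h = g * np.exp(1j * omega * t)
H = np.concatenate([[0.0], np.cumsum(0.5 * (h[1:] + h[:-1]) * np.diff(t))])
J = 2.0 * np.imag(np.conj(h) * H)

# Cross-check one value of J(t) against the literal double integral.
i = 2000                                     # t = T/2
s = np.linspace(0.0, t[i], 801)
gs = g0 * np.sin(np.pi * s / T) ** 2
J_direct = 2.0 * trap(g[i] * gs * np.sin(omega * (s - t[i])), s)
assert abs(J[i] - J_direct) < 1e-4

Phi = trap(J, t)                             # Phi(T) = int_0^T J(t) dt
print(Phi)
```

The reduction of the double integral to a cumulative single integral is just a convenience for the numerics; the assertion confirms the two forms agree.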
A key observation is that the generated transformation $U_g(T)$ contains only Pauli operators acting on the ion qubits; the induced qubit operation is therefore independent of the ion motional degrees of freedom and hence insensitive to the vibrational temperature of the ions. More intriguingly, similar to the case addressed in Ref.~\cite{zhuw}, the present $U_g(T)$ gate may be viewed as an extended version of the unconventional geometric operation \cite{zhuw}, whose advantages have been demonstrated in the literature \cite{molmer,liebfried,garcia,sackett,cenp}. This sort of gate will be utilized as the basic element to construct a universal set of gate operations for quantum computation in DFS. \begin{figure}[tbp] \begin{center} \epsfig{figure=fig.eps,width=0.4\textwidth} \end{center} \caption{Schematic of encoded logical qubits for scalable ion trap quantum computation in decoherence-free subspaces.} \end{figure} We employ the pair-bit code by which the logical qubit is encoded in a subspace $C_i^2$ as \begin{equation} |0_L\rangle _i=|0\rangle _{i_1}\otimes |1\rangle _{i_2},~~|1_L\rangle _i=|1\rangle _{i_1}\otimes |0\rangle _{i_2}, \label{enqubit} \end{equation} where $i=1,\cdots ,N/2$ indexes the logical qubits and the schematic array of ions is shown in Fig.~1. Such an encoding constitutes the well-known DFS \cite{EAC} against the collective dephasing of the system-bath interaction $ \sum_{i=1}^{N/2}Z_i\otimes B$, where $Z_i=\sigma _z^{(i_1)}+\sigma _z^{(i_2)} $ and $B$ is an arbitrary bath operator. Let us denote $\pi _x= $(\negthinspace {\tiny $ \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array} \!\!$}), $\pi _y=$ ({\tiny $\! \begin{array}{ll} 0 & -i \\ i & 0 \end{array} \!\!$}) and $\pi _z=$({\tiny $\! \begin{array}{ll} 1 & 0 \\ 0 & -1 \end{array} \!\!$}) as the three Pauli operators of the encoded logical qubit $ \{|0_L\rangle ,|1_L\rangle \}$. 
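Two algebraic facts behind this encoding can be checked directly with a small numerical sketch: the collective dephasing generator $Z_i$ annihilates the logical states, and the cross-pair products $\sigma_z^{(i_1)}\sigma_z^{(j_1)}$ and $\sigma_z^{(i_2)}\sigma_z^{(j_2)}$ act as $\pi_z^{(i)}\pi_z^{(j)}$ on the code space. The check itself is ours; it uses the standard matrix conventions above.

```python
import numpy as np

# (i) DFS property and (ii) gate equivalence, on explicit tensor products.
I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*ops):
    out = np.array([1.0])
    for op in ops:
        out = np.kron(out, op)
    return out

L = {0: kron(ket0, ket1), 1: kron(ket1, ket0)}   # |0_L> = |01>, |1_L> = |10>

# (i) Z_i = sigma_z^{(i1)} + sigma_z^{(i2)} vanishes on the logical states,
# so collective dephasing acts trivially on the code space.
Z = kron(sz, I2) + kron(I2, sz)
assert np.allclose(Z @ L[0], 0) and np.allclose(Z @ L[1], 0)

# (ii) On two pairs (physical order i1, i2, j1, j2) both quadratic
# operators reproduce the pi_z pi_z eigenvalues (-1)^(a+b).
Z11 = kron(sz, I2, sz, I2)      # sigma_z^{(i1)} sigma_z^{(j1)}
Z22 = kron(I2, sz, I2, sz)      # sigma_z^{(i2)} sigma_z^{(j2)}
for a in (0, 1):
    for b in (0, 1):
        psi = kron(L[a], L[b])                 # |a_L b_L>
        expect = (-1) ** (a + b) * psi         # pi_z pi_z eigenvalue
        assert np.allclose(Z11 @ psi, expect)
        assert np.allclose(Z22 @ psi, expect)
print("DFS and pi_z pi_z equivalence verified")
```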
A logical controlled-phase flip on two encoded qubits $i$ and $j$, $e^{i\phi \pi _z^{(i)}\pi _z^{(j)}}$, can be generated via the aforementioned spin-dependent Coulomb interactions as follows. We exploit acceleration forces of the form $F_\mu (t)=f_\mu (t)\sigma _z^{(\mu )}$, where the addressed objects can be the two ions $\{\mu =i_1,j_1\}$ or, alternatively, the ions $\{\mu ^{\prime }=i_2,j_2\}$ with a similar force configuration. According to the previous analysis, if the force configuration is designed so that at time $T$ \begin{equation} \eta _\mu ^k(T)\equiv \int_0^Tg_\mu ^k(t)e^{-i\omega _kt}dt=0,~~k=1,\cdots ,N, \label{relation} \end{equation} then the specified interactions of Eq. (\ref{hamil1}) generate the transformation $e^{-i\Phi (T)\sigma _z^{(i_1)}\sigma _z^{(j_1)}}$ or $ e^{-i\Phi (T)\sigma _z^{(i_2)}\sigma _z^{(j_2)}}$, corresponding to the two different addressings of the forces on the ions $\mu $ or $\mu ^{\prime }$, respectively. Note that the evolution generated by these interactions remains entirely within the encoded subspace $C_i^2\otimes C_j^2$ at all times. Moreover, owing to the simple fact that $Z_i=Z_j=0$ on this subspace, the actions of both quadratic operators $\sigma _z^{(i_1)}\sigma _z^{(j_1)}$ and $\sigma _z^{(i_2)}\sigma _z^{(j_2)}$ in the restricted DFS are equivalent to that of $\pi _z^{(i)}\pi _z^{(j)}$. Therefore the gate operation $e^{i\phi \pi _z^{(i)}\pi _z^{(j)}}$ can be achieved exactly by either of the two interaction processes described above. The commensurability relation (\ref{relation}), which involves all of the ion oscillation modes, may be satisfied for large-$N$ systems only by applying the pushing forces on the ions in an adiabatic manner. In detail, let us assume that $f_\mu (t)$, characterizing the configuration of the acceleration forces, is a smooth function of time satisfying $|\dot{f}_\mu (t)|\ll \omega _k$. 
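The adiabatic suppression of the residual displacements $\eta_\mu^k(T)$ can be illustrated numerically. The pulse shapes and parameter values below are arbitrary choices satisfying the stated conditions, not quantities from the text:

```python
import numpy as np

# Compare eta(T) = int_0^T f(t) e^{-i omega t} dt for a smooth pulse with
# f(0) = f(T) = 0 and slow variation against an abruptly switched pulse
# of the same strength.  The smooth pulse leaves almost no residual
# displacement of the mode.
omega, T = 2.0 * np.pi, 50.5
t = np.linspace(0.0, T, 200001)
f_smooth = np.sin(np.pi * t / T) ** 2     # smooth, |df/dt| ~ 1/T << omega
f_abrupt = np.ones_like(t)                # suddenly switched on and off

trap = lambda y, x: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
eta_smooth = trap(f_smooth * np.exp(-1j * omega * t), t)
eta_abrupt = trap(f_abrupt * np.exp(-1j * omega * t), t)

# Suppression by more than two orders of magnitude in this example.
assert abs(eta_smooth) < 1e-2 * abs(eta_abrupt)
print(abs(eta_smooth), abs(eta_abrupt))
```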
Moreover, we take $f_\mu (t)$ to start from $f_\mu (0)=0$ and, after attaining some finite value, return to the end point $f_\mu (T)=0$. Integrating by parts, one sees that $\int_0^Tf_\mu (t)e^{-i\omega _kt}dt$ is then suppressed by powers of $|\dot{f}_\mu |/\omega _k$, so the relations $\eta _\mu ^k(T)\simeq 0$ hold for all of the oscillation modes $k$. The phase of the gate operation generated in this case has the simple form $\Phi (T)=-\sum_k(2/\omega _k)\int_0^Tg_i(t)g_j(t)dt$. Note that the proposed DFS scenario for implementing qubit operations already partially tackles the intrinsic drawback of adiabatic operation, namely that slow gates give decoherence more time to exert its detrimental effects. We would also like to comment on combining our DFS implementation of gate operations with another scalable approach, i.e., the fast gate scenario based on noise cancelation techniques. In Ref. \cite{duan} it was proposed that if the operation speed is comparable with the local ion oscillation frequency, the noise due to the complexity of the phonon modes can be significantly reduced by designing a multi-cycle configuration of kicking forces, thereby effectively demonstrating the indicated scalability. In our formalism, when the commensurability relation (\ref{relation}) is spoiled, the transformation $G_\mu (T)$ (or $G_{\mu ^{\prime }}(T)$, corresponding to the different ion addressing) contained in the evolution operator represents a noise contribution, namely \begin{equation} G_\mu (T)=\exp \{i\sum_k\sum_{\mu =\{i_1,j_1\}}[\eta _\mu ^k(T)a_k+h.c.]\sigma _z^{(\mu )}\}. \label{noise} \end{equation} It is seen that such an undesirable influence can be suppressed by using two cycles of force pulses with reversed configurations. 
The reason is that the induced noise effects of the two opposite evolutions counteract each other, since the coefficients $\eta _\mu ^k(T)$ and $\eta _{\mu ^{\prime }}^k(T)$ of the noise operators $G_\mu (T)$ and $G_{\mu ^{\prime }}(T)$ associated with the two cycles have reversed signs. Besides this direct extension using noise cancelation via reversed-loop evolution, a slightly different way to remove the noise effects is available in our DFS gate scenario: combining the two interactions specified above, with similar force configurations but different ion addressings on $\mu $ and $\mu ^{\prime }$. Note that in the encoded subspace $C_i^2\otimes C_j^2$ one has $\sigma _z^{(i_1)}=-\sigma _z^{(i_2)}$ and $\sigma _z^{(j_1)}=-\sigma _z^{(j_2)}$. The validity of this approach also relies on the assumption that the ion array possesses a periodic structure, so that the relations $g_{i_1,j_1}^k(t)=g_{i_2,j_2}^k(t)$ hold for the specified interactions by translation invariance of the system. Since the requirement on the force configuration is relaxed, this method provides an alternative way to implement the gate operation for large-scale ion arrays. To obtain the full ability to perform quantum computation, one also needs to construct general rotation operations for single qubit units. Note that in DFS schemes, nontrivial couplings between physical qubits are necessary even to build the gates for single logical qubits. We propose below a similar scheme to realize the two universal non-commuting single-qubit gates $\{e^{i\phi \pi _x},e^{i\phi ^\prime \pi _y}\}$ in the specified DFS by utilizing spin-dependent interactions. 
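The operator identities underlying this single-logical-qubit construction can be checked numerically. One caveat we flag here: with the standard sign conventions for $\sigma_y$ used below (which may differ from the text's), $\sigma_x^{(1)}\sigma_y^{(2)}$ restricts to $\pi_y$ only up to an overall sign, which can be absorbed into the gate angle $\phi'$. The check itself is an illustration of ours:

```python
import numpy as np

# Restrict two-body operators to the code space span{|0_L>, |1_L>}.
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

L0 = np.kron(ket0, ket1)        # |0_L> = |0>|1>
L1 = np.kron(ket1, ket0)        # |1_L> = |1>|0>
B = np.column_stack([L0, L1])   # isometry: logical -> physical space

XX = np.kron(sx, sx)
XY = np.kron(sx, sy)

pi_x = B.conj().T @ XX @ B      # restriction to the encoded subspace
pi_y = B.conj().T @ XY @ B
assert np.allclose(pi_x, sx)                            # exactly pi_x
assert np.allclose(pi_y, -sy) or np.allclose(pi_y, sy)  # pi_y up to sign

# Both operators map the code space into itself: no leakage at the
# level of the generators themselves.
P = B @ B.conj().T
assert np.allclose(XX @ P, P @ XX @ P)
assert np.allclose(XY @ P, P @ XY @ P)
print("encoded single-qubit generators verified")
```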
In detail, we make use of two different forces $F_{\mu \alpha }(t)$ and $F_{\mu \alpha }^{\prime }(t)$ $(\mu =1,2)$ to derive the two gates $e^{i\phi \pi _x}$ and $e^{i\phi ^\prime \pi _y}$: $F_{\mu \alpha }(t)=f_\mu (t)\sigma _x^{(\mu )}$ for the former and $F_{\mu \alpha }^{\prime }(t)=f_\mu ^{\prime }(t)\sigma _\alpha ^{(\mu )}$ with $\{\sigma _\alpha ^{(1)}=\sigma _x^{(1)},\sigma _\alpha ^{(2)}=\sigma _y^{(2)}\}$ for the latter. The corresponding interactions of the form (\ref{hamil1}) generate the transformations \begin{equation} U_F(T)=e^{-i\Phi (T)\sigma _x^{(1)}\sigma _x^{(2)}},~~U_{F^{\prime }}(T)=e^{-i\Phi ^{\prime }(T)\sigma _x^{(1)}\sigma _y^{(2)}} \label{gate1} \end{equation} at time $T$, provided that the relations (\ref{relation}) with $\mu =1,2$ are satisfied. Note that in the restricted subspace $\{|0_L\rangle ,|1_L\rangle \}$, the actions of the quadratic operators $\sigma _x^{(1)}\sigma _x^{(2)}$ and $\sigma _x^{(1)}\sigma _y^{(2)}$ are exactly those of $\pi _x$ and $\pi _y$. Therefore the transformations (\ref{gate1}) indeed realize the two gate operations $e^{i\phi \pi _x}$ and $e^{i\phi ^\prime \pi _y}$, respectively. Protection against state leakage throughout the gating period, however, needs to be scrutinized more carefully. Since the evolutions $U_F(t)$ and $U_{F^{\prime }}(t)$ temporarily employ ion levels outside the DFS, the predicted protection against collective dephasing might be spoiled during the gate operations. For convenience, let us consider a simplified model in which only one mode with frequency $\omega $ is involved, corresponding physically to sideband addressing by laser beams to select out a particular phonon mode. We further assume a homogeneous dependence of the interactions on the ion internal states, that is, the parameters in (\ref{hamil1}) satisfy $g_1(t)=g_2(t)=g(t)$. 
Note that the evolution of the system is now governed by the overall Hamiltonian \begin{equation} H_{tot}(t)=H(t)+Z_i\otimes B, \label{totalh} \end{equation} where $H(t)$ denotes the interactions associated with $F_\mu (t)$ or $F_\mu ^{\prime }(t)$ with the corresponding parameters. It is readily seen that in the previously described gauged representation with respect to $G(t)$, one obtains \begin{equation} H_{tot}^g(t)=H^g(t)+\tilde{Z}_i(t)\otimes B, \label{totalhg} \end{equation} where $\tilde{Z}_i(t)=G^{-1}(t)Z_iG(t)$ and $H^g(t)$ has the form of Eq. (\ref {gaugeh}) with the degenerate parameter \begin{equation} J(t)=2\int_0^tg(t)g(t^{\prime })\sin \omega (t^{\prime }-t )dt^{\prime } . \label{paramjd} \end{equation} In detail, corresponding to the two interactions associated with $F_\mu (t)$ and $F_\mu ^{\prime }(t)$ respectively, the form of $\tilde{Z}_i(t)$ is obtained as \begin{eqnarray} \tilde{Z}_i(t) &=&\cos \hat{\eta}^a(t)Z_i+\sin \hat{\eta}^a(t)(\sigma _{1y}+\sigma _{2y}), \nonumber \\ \tilde{Z}_i^{\prime }(t) &=&\cos \hat{\eta}^a(t)Z_i+\sin \hat{\eta} ^a(t)(\sigma _{1y}+\sigma _{2x}), \label{dephas} \end{eqnarray} where the operator $\hat{\eta}^a(t)=\eta (t)a+\eta ^{*}(t)a^{\dagger }$ and $\eta (t)=\int_0^tg(\tau )e^{-i\omega \tau }d\tau$. It is evident that, even for an evolution with perfect parameter control, $\eta (T)=0$, the final terms in the expressions (\ref{dephas}) for $\tilde{Z}_i(t)$ and $\tilde{Z}_i^{\prime }(t)$ inevitably mix the system and bath degrees of freedom and therefore spoil the desired gate operations. Notably, the decoherence effects induced above can be pinned down effectively via a decoupling process that devises a symmetrized multi-circuit evolution. 
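The refocusing relation $\eta(t+T_2/2)=-\eta(t)$, and the vanishing odd moments it implies, can be verified with a toy pulse sequence (a constant force reversed halfway through, with $\omega T_2/2$ a multiple of $2\pi$; all values are illustrative choices of ours):

```python
import numpy as np

# Toy refocusing sequence: constant force +1 on [0, T/2], reversed to -1
# on [T/2, T], with omega*T/2 = 4*pi so that eta(T/2) = 0 and
# eta(t + T/2) = -eta(t) along the whole evolution.
omega, T, n_pts = 2.0 * np.pi, 4.0, 40001
t = np.linspace(0.0, T, n_pts)
dt = t[1] - t[0]
g = np.where(t < T / 2, 1.0, -1.0)

# eta(t) = int_0^t g(tau) e^{-i omega tau} dtau via cumulative trapezoids.
integrand = g * np.exp(-1j * omega * t)
eta = np.concatenate(
    [[0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)])

half = n_pts // 2
assert np.allclose(eta[half:], -eta[: half + 1], atol=1e-3)  # antisymmetry
for n in (1, 3):                  # odd moments entering sin(eta^a) vanish
    assert abs(np.sum(eta[:-1] ** n) * dt) < 1e-3
print("odd moments vanish")
```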
For instance, first-order decoupling can be achieved via a two-cycle refocused performance of the interactions characterized by $\eta (t+T_2/2)=-\eta (t)$, where $T_2$ denotes the whole time period of the two cycles. In view of the relations $ \int_0^{T_2}\eta ^n(t)dt=0$ for any odd $n$, and the resulting identity $ \int_0^{T_2}\sin \hat{\eta}^a(t)dt=0$, one readily obtains, by using the Magnus expansion \begin{equation} U_g^{tot}(T_2)=\hat{T}\exp \{-i\int_0^{T_2}H_{tot}^g(t)dt\}=e^{-i(h_1+h_2+\cdots )T_2}, \label{totug} \end{equation} the first-order term of the evolution \begin{eqnarray} h_1 &=&\frac 1{T_2}\int_0^{T_2}H_{tot}^g(t)dt \nonumber \\ &=&\frac 1{T_2}\int_0^{T_2}H^g(t)dt+\frac 1{T_2}\int_0^{T_2}\cos \hat{\eta} ^a(t)dt\,Z_i\otimes B. \label{firstor} \end{eqnarray} The last term of Eq. (\ref{firstor}) contributes nothing in our DFS systems; therefore the decoherence effects are effectively removed by the above first-order decoupling process. Physically, for the two cycles with reversed interactions the ions are pushed in opposite directions, and the unwanted coupling of the qubits to the vibrational degree of freedom induced by the dissipation has a reversed sign. Due to this sign reversal, the decoherence effects from the two cycles counteract each other. The above decoupling process can be extended to arbitrarily high orders by iterative application of multiple refocusing cycles \cite{cenp}. The expected extension of the above operation scheme, with the comprehensive DFS encoding and noise decoupling technique, to the scalable system is argued as follows. For the scenario with fast execution of interaction pulses, the refocusing concept of noise cancelation applies consistently both to the dissipative effects and to the decoherence induced by phonon complexity. 
That is, we have shown that the noise cancelation by the decoupling process is able to remove both kinds of noise, provided that the interaction pulses are fast on the time scale set by the noise frequencies. On the other hand, for the adiabatic pushing scheme, the validity of the extension requires that $\dot{f}(t)\ll \omega _l$ and $\dot{f}(t)\gtrsim \tau _{rel}^{-1}$, where $\omega _l$ stands for the frequency of the longest-wavelength phonon mode and $\tau _{rel}^{-1}$ denotes the relaxation rate of the internal states of the physical ions. Before concluding, we remark on some features of the present scheme relevant for a potential experimental implementation. Firstly, we point out that the initialization of the logical registers in $|0_L\rangle _i$ can readily be accomplished by laser light addressing and manipulating the ions $i_2$ individually, i.e., initializing their states to $|1\rangle _{i_2}$. Moreover, our scheme possesses the following advantages: (i) the noise effects of collective dephasing, which is reported as a major source of decoherence in ion trap systems \cite{kiel2}, are tackled by the scheme; (ii) since ion shuttling is not necessary, one can design the ion array in any convenient geometry with a periodic structure; (iii) there is no cooling requirement, since the scheme is insensitive to the vibrational temperature of the ions. As has been shown, none of the oscillation modes spoils the global operation generated in the adiabatic manner of evolution. For the fast gate scenario, by making use of the noise cancelation technique, the influence of the vibrational temperature on the gate fidelity is actually very weak \cite{duan}. In summary, we have proposed a DFS scheme to implement scalable ion trap quantum computation with an extended unconventional geometric approach. 
Using the spin-dependent Coulomb interactions, we have shown that a universal set of quantum gates can be achieved in DFS, either by switching the interactions on and off adiabatically, or by executing the interaction pulses rapidly in combination with noise cancelation techniques. This work was supported by the RGC grant of Hong Kong (HKU7045/05P), the URC fund of HKU, and the NSFC grants (10375039, 10429401, and 90503008). \begin{references} \bibitem{QC} A. Steane, Rep. Prog. Phys. {\bf 61}, 117 (1998); C.H. Bennett and D.P. DiVincenzo, Nature {\bf 404}, 247 (2000). \bibitem{cirac} J.I. Cirac and P. Zoller, Phys. Rev. Lett. {\bf 74}, 4091 (1995). \bibitem{monroe} C. Monroe {\it et al.}, Phys. Rev. Lett. {\bf 75}, 4714 (1995). \bibitem{molmer} A. Sorensen and K. Molmer, Phys. Rev. Lett. {\bf 82}, 1971 (1999); A. Sorensen and K. Molmer, Phys. Rev. A {\bf 62}, 022311 (2000). \bibitem{milburn} G.J. Milburn, S. Schneider, and D.F.V. James, Fortschr. Phys. {\bf 48}, 801 (2000). \bibitem{knight} D. Jonathan, M.B. Plenio, and P.L. Knight, Phys. Rev. A {\bf 62}, 042307 (2000). \bibitem{push} J.I. Cirac and P. Zoller, Nature {\bf 404}, 579 (2000). \bibitem{kiel} D. Kielpinski, C. Monroe, and D.J. Wineland, Nature {\bf 417}, 709 (2002). \bibitem{liebfried} D. Leibfried {\it et al.}, Nature {\bf 422}, 412 (2003). \bibitem{garcia} J.J. Garcia-Ripoll, P. Zoller, and J.I. Cirac, Phys. Rev. Lett. {\bf 91}, 157901 (2003). \bibitem{duan} L.-M. Duan, Phys. Rev. Lett. {\bf 93}, 100502 (2004). \bibitem{zhu} S.L. Zhu, C. Monroe, and L.M. Duan, Europhys. Lett. {\bf 73}, 485 (2006). \bibitem{EAC} L.M. Duan and G.C. Guo, Phys. Rev. Lett. {\bf 79}, 1953 (1997); P. Zanardi and M. Rasetti, Phys. Rev. Lett. {\bf 79}, 3306 (1997); D.A. Lidar, I.L. Chuang, and K.B. Whaley, Phys. Rev. Lett. {\bf 81}, 2594 (1998). \bibitem{zhuw} S.L. Zhu and Z.D. Wang, Phys. Rev. Lett. {\bf 91}, 187902 (2003); S.L. Zhu, Z.D. Wang, and P. Zanardi, {\it ibid.} {\bf 94}, 100502 (2005). 
\bibitem{wangsj} S.J. Wang, F.L. Li, and A. Weiguny, Phys. Lett. A {\bf 180}, 189 (1993); L.-X. Cen, X.Q. Li, Y.J. Yan, H.Z. Zheng, and S.J. Wang, Phys. Rev. Lett. {\bf 90}, 147902 (2003). \bibitem{sackett} C.A. Sackett {\it et al.}, Nature {\bf 404}, 256 (2000). \bibitem{cenp} L.-X. Cen and P. Zanardi, Phys. Rev. A {\bf 71}, 060307(R) (2005). \bibitem{kiel2} D. Kielpinski {\it et al.}, Science {\bf 291}, 1013 (2001). \end{references} \end{document}
\begin{document} \title{The frequency of elliptic curve groups over prime finite fields} \begin{abstract} Letting $p$ vary over all primes and $E$ vary over all elliptic curves over the finite field $\mathbb F_p$, we study the frequency with which a given group $G$ arises as a group of points $E(\mathbb F_p)$. It is well-known that the only permissible groups are of the form $G_{m,k}:=\mathbb Z/m\mathbb Z\times \mathbb Z/mk\mathbb Z$. Given such a candidate group, we let $M(G_{m, k})$ be the frequency with which the group $G_{m, k}$ arises in this way. Previously, the second and fourth named authors determined an asymptotic formula for $M(G_{m, k})$ assuming a conjecture about primes in short arithmetic progressions. In this paper, we prove several unconditional bounds for $M(G_{m, k})$, pointwise and on average. In particular, we show that $M(G_{m, k})$ is bounded above by a constant multiple of the expected quantity when $m\le k^A$ and that the conjectured asymptotic for $M(G_{m, k})$ holds for almost all groups $G_{m, k}$ when $m\le k^{1/4-\epsilon}$. We also apply our methods to study the frequency with which a given integer $N$ arises as the group order $\#E(\mathbb F_p)$. \end{abstract} \section{Introduction}\label{intro} Given an elliptic curve $E$ over the prime finite field $\mathbb F_p$, we let $E(\mathbb F_p)$ denote its set of $\mathbb F_p$-points. It is well-known that $E(\mathbb F_p)$ admits the structure of an abelian group, and in fact, \[ E(\mathbb F_p)\cong G_{m,k}:= \mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z \] for some positive integers $m$ and $k$. It is natural to wonder which groups of the form $G_{m,k}$ arise in this way and how often they occur as $p$ varies over all primes and $E$ varies over all elliptic curves over $\mathbb F_p$. 
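The structure theorem and the Hasse bound invoked throughout can be checked by brute force for a tiny prime. The following sketch (with the arbitrary choice $p=13$) enumerates all Weierstrass curves, determines the pair $(m,k)$ from the group order and exponent, and verifies $m\mid p-1$; it is an illustration of ours, not part of the paper's arguments.

```python
from math import gcd

# Verify for every curve over F_13 that E(F_p) ~ Z/m x Z/mk, m | p - 1,
# and the Hasse bound (N - p - 1)^2 <= 4p, where N = #E(F_p).
p = 13

def ec_add(P, Q, a):
    """Affine addition on y^2 = x^3 + a x + b over F_p (None = infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                       # P + (-P) = O (covers y1 = 0 too)
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

groups = set()
for a in range(p):
    for b in range(p):
        if (4 * a ** 3 + 27 * b ** 2) % p == 0:
            continue                      # singular curve, skip
        pts = [(x, y) for x in range(p) for y in range(p)
               if (y * y - x ** 3 - a * x - b) % p == 0]
        N = len(pts) + 1                  # +1 for the point at infinity
        exponent = 1                      # lcm of point orders, equals mk
        for P in pts:
            Q, order = P, 1
            while Q is not None:
                Q, order = ec_add(Q, P, a), order + 1
            exponent = exponent * order // gcd(exponent, order)
        m = N // exponent
        assert m * exponent == N and exponent % m == 0
        assert (p - 1) % m == 0           # m divides p - 1 (Weil pairing)
        assert (N - p - 1) ** 2 <= 4 * p  # Hasse bound
        groups.add((m, exponent // m))    # the pair (m, k)
print(sorted(groups))
```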
The former problem, of characterizing which groups are realized in this way, was studied in~\cite{BPS:2012, CDKS1}, while the frequency of occurrence was studied by the second and fourth named authors in~\cite{DS-MEG}. In the present work, we explore the frequency of occurrence further. Given a group $G$ of the form $G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$, we set $N=|G|=m^2k$ and let $M_p(G)$ denote the weighted number of isomorphism classes of elliptic curves over $\mathbb F_p$ with group isomorphic to $G$, that is to say \[ M_p(G)=\sum_{\substack{E/\mathbb F_p\\ E(\mathbb F_p)\cong G}}\frac{1}{|\Aut_p(E)|}, \] where the sum is taken over all isomorphism classes of elliptic curves over $\mathbb F_p$ and $|\Aut_p(E)|$ is the number of $\mathbb F_p$-automorphisms of $E$. It is worth noting here that $|\Aut_p(E)|=2$ for all but a bounded number of isomorphism classes $E$ over $\mathbb F_p$, and hence \[ M_p(G) = \frac{1}{2} \#\{ E/\mathbb F_p : E(\mathbb F_p)\cong G \} +O(1). \] In \cite{DS-MEG}, the authors studied the weighted number of isomorphism classes of elliptic curves over any prime finite field with group of points isomorphic to $G$, i.e., they studied \[ M(G):=\sum_pM_p(G) . \] The primes counted by $M(G)$ must lie in a very short interval near $N=|G|$. This is because the Hasse bound implies that $p+1-2\sqrt p<N<p+1+2\sqrt p$, which is equivalent to saying that \[ N^{-} := N+1-2\sqrt{N} < p < N+1+2 \sqrt{N}=:N^+. \] Even the Riemann hypothesis does not guarantee the existence of a prime in such a short interval. Hence the main theorem of \cite{DS-MEG} can only be proven under an appropriate conjecture concerning the distribution of primes in short intervals. In the statement below, we refer to the conjecture assumed in~\cite{DS-MEG} as the Barban-Davenport-Halberstam (BDH) estimate for short intervals. Before stating the main theorem of~\cite{DS-MEG}, we fix some more notation. 
Given a group $G=G_{m,k}$, we let $\Aut(G)$ denote its automorphism group (as a group). This should not be confused with $\Aut_p(E)$ as defined above, which refers to the set of $\mathbb F_p$-automorphisms of the elliptic curve $E$. We also define the function \eq{define K(G)}{ K(G)= \prod_{\ell\nmid N}\left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right) \prod_{\ell\mid m}\left(1-\frac{1}{\ell^2}\right) \prod_{\substack{\ell\mid k\\ \ell\nmid m}}\left(1-\frac{1}{\ell(\ell-1)}\right), } where the products are taken over all primes $\ell$ satisfying the stated conditions and $\leg{\cdot}{\ell}$ denotes the usual Kronecker symbol. In~\cite{DS-MEG}, the function $K(G)$ was only computed for odd order groups, and its definition contained a mistake. It was corrected to the form that we give here in~\cite{DS-MEG-corr}. Note that the function $K(G)$ is bounded between two constants independently of the parameters $m$ and $k$. In paraphrased form, the main theorem of~\cite{DS-MEG} is as follows. \begin{thm}[David-Smith] \label{meg-rephrased} Assume that the BDH estimate for short intervals holds. Fix $A,B>0$. Then for every nontrivial, odd order group $G=G_{m,k}$, we have that \[ M(G)=\left(K(G) + O_{A,B}\left( \frac{1}{(\log|G|)^{B}} \right)\right)\frac{|G|^2}{|\Aut (G)|\log |G|} \asymp \frac{mk^2}{\phi(m)\phi(k)\log k}, \] provided that $m\le (\log k)^A$. \end{thm} For precise details concerning the conjecture assumed to prove Theorem~\ref{meg-rephrased}, we refer the reader to \cite{DS-MEG}. We note that the result of Theorem~\ref{meg-rephrased} is restricted to the range $m\le (\log k)^A$. However, we believe that it should hold in the range $m\le k^A$. Proving such a result at the present time would however require an even stronger hypothesis than the one assumed in \cite{DS-MEG}. Unconditionally, it is possible to obtain upper bounds of the correct order of magnitude in this larger range. This is the content of our first theorem. 
\begin{thm} \label{CDKS} Fix $A>0$ and consider integers $m$ and $k$ with $1\le m\le k^A$. Let $G=G_{m,k}$, $N=|G|=m^2k$, and \[ \delta = \frac{1}{N/(\phi(m)\log(2N))} \sum_{\substack{ N^-<p\le N^+ \\ p\equiv 1\pmod m }} \sqrt{(p-N^-)(N^+-p)}, \] and note that $\delta\ll1$ by the Brun-Titchmarsh inequality. For any fixed $\lambda>1$, \[ \delta^\lambda \cdot \frac{|G|^2}{ |\Aut (G)|\log(2|G|)} \ll M(G) \ll \delta^{1/\lambda} \cdot \frac{|G|^2}{ |\Aut (G)|\log(2|G|)}, \] the implied constants depending at most on $A$ and $\lambda$. \end{thm} Employing the above result together with the Bombieri-Vinogradov theorem, we also show that the lower bound implicit in Theorem~\ref{meg-rephrased} holds for a positive proportion of groups $G$. \begin{thm} \label{CDKS2} Consider numbers $x$ and $y$ with $1\le x\le \sqrt{y}$. Then there are absolute positive constants $c_1$ and $c_2$ such that \[ M(G_{m, k}) \ge c_1\cdot \frac{|G_{m,k}|^2}{|\Aut (G_{m,k})|\log(2|G_{m,k}|)} \] for at least $c_2xy$ pairs $(m,k)$ with $m\le x$ and $k\le y$. \end{thm} \begin{rmk} It is not possible for such a lower bound to hold for all groups $G=G_{m, k}$. As was noted in~\cite{BPS:2012}, several groups of this form do not arise in this way at all. For example, the group $G_{11,1}$ never occurs as the group of points on any elliptic curve over any finite field. \end{rmk} Our final result for $M(G_{m, k})$ is that on average the full asymptotic of Theorem~\ref{meg-rephrased} holds unconditionally. \begin{thm} \label{CDKS3} Fix $\epsilon>0$ and $A\ge1$. For $2\le x\le y^{1/4-\epsilon}$ we have that \als{ \frac{1}{xy}\sum_{\substack{m\le x,\, k\le y \\ mk>1}} \left| M(G_{m,k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \ll \frac{y}{(\log y)^A}, } the implied constant depending at most on $A$ and $\epsilon$. Moreover, if the generalized Riemann hypothesis is true, then the same result is true for $x\le y^{1/2-\epsilon}$.
\end{thm} In \cite{DS-MEN, DS-MEN-corr}, the second and fourth named authors studied the related question of how many elliptic curves over $\mathbb F_p$ have a given number of points, that is to say the asymptotic behaviour of \[ M(N) := \sum_p \sum_{\substack{ E/\mathbb F_p\\ \#E(\mathbb F_p)=N}}\frac{1}{|\Aut_p(E)|} . \] It was shown in \cite{DS-MEN, DS-MEN-corr} that \[ M(N)\sim K(N)\cdot \frac{N^2}{\phi(N)\log N} \quad(N\to\infty) \] under suitable assumptions on the distribution of primes in short arithmetic progressions, where \eq{define K(N)}{ K(N) = \prod_{\ell\nmid N} \left(1- \frac{ \leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)} \right) \prod_{\ell|N} \left(1-\frac{1}{\ell^{\nu_\ell(N)}(\ell-1)}\right). } Here $\nu_\ell(N)$ denotes the usual $\ell$-adic valuation of $N$. As one might expect, the methods of this paper apply to the study of $M(N)$ as well. We start by recording the obvious identity \[ M(N) = \sum_{m^2k=N} M(G_{m, k}) . \] Then it is possible to show that, as expected, most of the contribution to $M(N)$ comes from groups $G_{m, k}$ with $m$ small, that is to say groups that are nearly cyclic. \begin{thm}\label{MEN-1} For $N\ge1$ and $x\ge1$, we have that \[ M(N) = \sum_{\substack{m^2k=N \\ m\le x}} M(G_{m, k}) + O\left( \frac{N^2}{x\phi(N)\log(2N)} \right) . \] \end{thm} Finally, we conclude with two more results on $M(N)$. \begin{thm} \label{MEN-2} Let $N\ge1$ and set \[ \eta = \frac{1}{N/(\log(2N))} \sum_{N^-<p\le N^+} \sqrt{(p-N^-)(N^+-p)} , \] and note that $\eta\ll1$ by the Brun-Titchmarsh inequality. For any fixed $\lambda>1$, \[ \eta^\lambda \cdot \frac{N^2}{\phi(N) \log(2N) } \ll M(N) \ll \eta^{1/\lambda} \cdot \frac{N^2}{\phi(N)\log(2N)}, \] the implied constants depending at most on $\lambda$. \end{thm} \begin{thm} \label{MEN-3} Fix $A>0$. For $x\ge1$, we have that \als{ \frac{1}{x}\sum_{1<N\le x} \left| M(N) - \frac{K(N)N^2}{\phi(N)\log N} \right| \ll_A \frac{x}{(\log x)^A} .
} \end{thm} The present paper also includes an appendix (by Greg Martin and the second and fourth named authors) giving a probabilistic interpretation to the Euler factors arising in the constants $K(N)$ and $K(G)$ defined by~\eqref{define K(G)} and~\eqref{define K(N)}, respectively. This interpretation is similar to the heuristic leading to the conjectural constants in related conjectures on properties of the reductions of a fixed global elliptic curve $E$ over the rationals (e.g., the Lang-Trotter conjectures~\cite{LT:1976} and the Koblitz~\cite{Kob:1988} conjecture) with the additional feature that the Euler factors at the primes $\ell$ dividing $N$ or $|G|$ are related to certain matrix counts over $\mathbb Z / \ell^e \mathbb Z$ for $e$ large enough. \subsection*{Notation} Given a natural number $n$, we denote with $P^+(n)$ and $P^-(n)$ its largest and smallest prime factor, respectively, with the convention that $P^+(1)=1$ and $P^-(1)=\infty$. Moreover, we let $\tau_r(n)$ denote the coefficient of $1/n^s$ in the Dirichlet series $\zeta(s)^r$. In particular, $\tau_r(n)=r^{\omega(n)}$ for square-free integers $n$, where $\omega(n)$ denotes the number of distinct prime factors of $n$. In the special case when $r=2$, we simply write $\tau(n)$ in place of $\tau_2(n)$, which counts the number of divisors of $n$. We write $f*g$ to denote the Dirichlet convolution of the arithmetic functions $f$ and $g$, defined by $(f*g)(n)=\sum_{ab=n}f(a)g(b)$. As usual, given a Dirichlet character $\chi$, we write $L(s,\chi)$ for its Dirichlet series. In addition, we make use of the notation \[ E(x,h;q) := \max_{(a,q)=1} \left| \sum_{\substack{ x <p \le x + h \\ p\equiv a\pmod q}} \log p - \frac{h}{\phi(q)} \right| . 
\] Finally, for $d\in\mathbb Z$ that is not a square and for $z\ge1$, we let \[ \mathcal L(d) = L\left(1,\left(\frac{d}{\cdot}\right)\right) = \prod_{\ell} \left( 1- \frac{\leg{d}{\ell}}{\ell}\right)^{-1} \quad\text{and}\quad \mathcal L(d;z) = \prod_{\ell\le z} \left( 1- \frac{\leg{d}{\ell}}{\ell}\right)^{-1}. \] \section{Outline of the proofs}\label{outline} In this section, we outline the chief ideas that go into the proofs of our main results. However, most of our remarks concern the proofs of Theorems~\ref{CDKS} and~\ref{CDKS3}. This is primarily because the remaining results are essentially corollaries of these theorems. In particular, the main ingredient in the proof of Theorem~\ref{MEN-1} is Theorem~\ref{CDKS}, and the main ingredients in the proof of Theorem~\ref{MEN-3} are Theorems~\ref{CDKS3} and~\ref{MEN-1} together with a short computation. Theorem~\ref{MEN-2} is not truly a corollary, but its proof is essentially the same as that of Theorem~\ref{CDKS}. The proof of Theorem~\ref{CDKS2} is somewhat different. The ideas involved in its proof are essentially the same as those used to show Theorem 1.6 of~\cite{CDKS1} together with an application of Theorem~\ref{CDKS}. All of this will be expounded further in Section~\ref{proofs}, where we complete the proofs of all six results. For the remainder of this section, we focus our attention on outlining the main ingredients in the proofs of Theorems~\ref{CDKS} and~\ref{CDKS3}. Throughout, we fix a group $G=G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$, and we set $N=|G|=m^2k$. Moreover, given a prime $p\equiv 1\pmod {m}$, we set \begin{eqnarray} \label{def-dp} d_{m,k}(p)= \frac{(p-1-N)^2-4N}{m^2}=\left(\frac{p-1}{m}-mk\right)^2-4k. \end{eqnarray} Often, when the dependence on $m$ and $k$ is clear from the context, we will simply write $d(p)$ in place of $d_{m,k}(p)$.
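The two expressions for $d_{m,k}(p)$ in \eqref{def-dp} agree as an algebraic identity in exact integer arithmetic whenever $p\equiv 1\pmod m$; a quick sanity check (ours, purely illustrative):

```python
def d_first(p, m, k):
    """d_{m,k}(p) via the first expression in (def-dp), with N = m^2 k."""
    N = m * m * k
    num = (p - 1 - N)**2 - 4 * N
    assert num % (m * m) == 0  # p = 1 (mod m) forces m^2 to divide the numerator
    return num // (m * m)

def d_second(p, m, k):
    """d_{m,k}(p) via the second expression in (def-dp)."""
    assert (p - 1) % m == 0
    return ((p - 1) // m - m * k)**2 - 4 * k
```

Note also that $d(p)<0$ exactly when $(p-1-N)^2<4N$, i.e., when $p$ lies in the Hasse interval $(N^-,N^+)$, which is the situation relevant below.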
Our starting point is the following lemma, whose proof is based on Deuring's work \cite{Deu} and its generalization due to Schoof \cite{Schoof}. We shall give the details of its proof in Section \ref{deuring lemma}. \begin{lma}\label{formula for M(G)} For any $m,k\in\mathbb N$, we have that \[ M(G_{m, k}) = \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }}\sum_{\substack{f^2\mid d(p) ,\, (f,k)=1 \\ d(p)/f^2\equiv 1,0\pmod 4 }} \frac{\sqrt{|d(p)|} \mathcal L(d(p)/f^2)}{2\pi f} . \] \end{lma} For the proof of Theorem \ref{CDKS}, we shall use the following simplified but weaker version of Lemma \ref{formula for M(G)}. \begin{cor}\label{bounds for M(G)} For any $m,k\in\mathbb N$, we have that \[ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \sqrt{|d(p)|} \mathcal L(d(p)) \ll M(G_{m, k}) \ll \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \frac{|d(p)|^{3/2}}{\phi(|d(p)|)} \mathcal L(d(p)). \] \end{cor} \begin{proof} For the lower bound, note that the term $f=1$ in Lemma \ref{formula for M(G)} always contributes to $M(G_{m, k})$, since $d(p)\equiv 0,1\mod 4$ for all $m,k$ and $p\equiv 1\mod m$. For the upper bound, notice that \[ \mathcal L(d(p)/f^2) \le \frac{f}{\phi(f)} \mathcal L(d(p)) . \] Since \[ \sum_{f|n} \frac{1}{\phi(f)}\ll \frac{n}{\phi(n)} , \] the claimed upper bound follows. \end{proof} Evidently, Lemma \ref{formula for M(G)} and Corollary \ref{bounds for M(G)} reduce the estimation of $M(G_{m, k})$ to estimating an average of Dirichlet series evaluated at 1. In order to do so, we expand the Dirichlet series as an infinite sum and invert the order of summation by putting the sum over primes $p$ inside. For each fixed $n$ in the Dirichlet sum, understanding this sum over primes involves understanding the distribution of the set \eq{our set}{ \left\{\frac{p-1}{m}: N^-<p<N^+,\ p\equiv 1\mod m\right\} } in arithmetic progressions $a\mod b$, where the modulus $b=b(n)$ depends on $n$ and other parameters which are less essential. 
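The elementary bound $\sum_{f\mid n}1/\phi(f)\ll n/\phi(n)$ used in the proof of Corollary~\ref{bounds for M(G)} is easy to test numerically; in the sketch below (ours; the range and the constant $3$ are illustration choices) the ratio stays comfortably bounded.

```python
from math import gcd

def phi(n):
    """Euler's totient by direct counting (adequate for this small range)."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def ratio(n):
    """( sum over f | n of 1/phi(f) ) divided by ( n/phi(n) ); the proof
    above needs this quantity to be bounded by an absolute constant."""
    s = sum(1.0 / phi(f) for f in range(1, n + 1) if n % f == 0)
    return s / (n / phi(n))
```

Both sides are multiplicative in $n$, which is why the extremal $n$ are products of many small primes.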
Already when $b=m=1$, this problem is very hard and unsolved, even if we assume the validity of the Riemann Hypothesis. In order to limit the size of the moduli $b$ that are involved, we need to truncate the Dirichlet series that appear before inverting the order of summation. We could do this for each individual Dirichlet series, using character sum estimates such as the P\'olya-Vinogradov inequality or Burgess's bounds as in \cite{DS-MEN,DS-MEG}, but this would still leave us to deal with rather large moduli $b$. Instead, we use the following result, which implies that for {\it most} characters $\chi$, $L(1,\chi)$ can be approximated by a very short Euler product, and then by a sum over integers $n$ supported only on small primes. \begin{lma} \label{lemmashortproduct} Let $\alpha \geq 1$ and $Q\ge3$. There is a set $\mathcal{E}_\alpha(Q)\subset[1,Q]\cap\mathbb Z$ of at most $Q^{2/\alpha}$ integers such that if $\chi$ is a Dirichlet character modulo $q\le\exp\{(\log Q)^2\}$ whose conductor does not belong to $\mathcal{E}_\alpha(Q)$, then \[ L(1,\chi) = \prod_{\ell\le (\log Q)^{8\alpha^2}} \left(1-\frac{\chi(\ell)}{\ell}\right)^{-1} \left(1 + O_\alpha\left(\frac1{(\log Q)^\alpha}\right)\right). \] \end{lma} \begin{proof} By a classical result, essentially due to Elliott (see \cite[Proposition 2.2]{GS}), we know that there is a set $\mathcal{E}_\alpha(Q)$ of at most $Q^{2/\alpha}$ integers from $[1,Q]$ such that \[ L(1,\psi) = \prod_{\ell \leq (\log{Q})^{8\alpha^2}} \left( 1 - \frac{\psi(\ell)}{\ell} \right)^{-1} \left( 1 + O\left(\frac{\alpha}{(\log Q)^\alpha}\right) \right) \] for all primitive characters $\psi$ of conductor in $[1,Q]\setminus\mathcal{E}_\alpha(Q)$. 
So if $\chi$ is a Dirichlet character modulo $q\le \exp\{(\log Q)^2\}$ induced by $\psi$ and the conductor of $\psi$ is in $[1,Q]\setminus \mathcal{E}_\alpha(Q)$, then \als{ L(1, \chi) &= \prod_{\ell \mid q} \left( 1 - \frac{\psi(\ell)}{\ell} \right) \prod_{\ell \leq (\log{Q})^{8\alpha^2}} \left( 1 - \frac{\psi(\ell)}{\ell} \right)^{-1} \left( 1 + O\left(\frac{\alpha}{(\log Q)^\alpha}\right) \right)\\ &= \prod_{\ell \mid q,\,\ell > (\log Q)^{8\alpha^2} } \left( 1 - \frac{\psi(\ell)}{\ell} \right) \prod_{\ell \le (\log Q)^{8\alpha^2} } \left( 1 - \frac{\chi(\ell)}{\ell} \right)^{-1} \left( 1 + O\left(\frac{\alpha}{(\log Q)^\alpha}\right) \right) . } Finally, note that \[ \log\left(\prod_{\ell \mid q,\,\ell > (\log Q)^{8\alpha^2} }\left( 1 - \frac{\psi(\ell)}{\ell} \right) \right) \ll \sum_{\ell \mid q,\,\ell > (\log Q)^{8\alpha^2} } \frac{1}{\ell} \le \frac{\omega(q) }{(\log Q)^{8\alpha^2}} \ll \frac{1}{(\log Q)^{8\alpha^2-2}}, \] since $\omega(q)\le \log q/\log 2\ll (\log Q)^2$, which completes the proof of the lemma. \end{proof} Expanding the short product in the above lemma leads to an approximation of $L(1,\chi)$ by a sum over $(\log Q)^A$-smooth integers, and very few of these integers exceed $Q^\epsilon$: \begin{lma}\label{smooth} Let $f:\mathbb{N}\to\{z\in\mathbb C:|z|\le1\}$ be a completely multiplicative function. For $u\ge1$ and $x\ge10$ we have that \[ \prod_{p\le x}\left(1-\frac{f(p)}p\right)^{-1} = \sum_{\substack{P^+(n)\le x\\n\le x^u}} \frac{f(n)}n+ O\left(\frac{\log x}{e^u}\right) . \] \end{lma} \begin{proof} We have that \als{ \left| \prod_{p\le x}\left(1-\frac{f(p)}p\right)^{-1} - \sum_{\substack{P^+(n)\le x\\n\le x^u}} \frac{f(n)}n \right| = \left| \sum_{\substack{P^+(n)\le x\\ n > x^u }} \frac{f(n)}n \right| &\le \frac{1}{e^u} \sum_{P^+(n)\le x} \frac{1}{n^{1-1/\log x}} \\ &\ll \frac{1}{e^u} \exp\left\{\sum_{p\le x} \frac{1}{p^{1-1/\log x}} \right\} .
} So using the formula $p^{1/\log x}=1+O(\log p/\log x)$ and the prime number theorem, we obtain the claimed result. \end{proof} Combining Lemmas \ref{lemmashortproduct} and \ref{smooth}, we may replace $L(1,\chi)$ by a very short sum for most characters $\chi$, which means that we only need information on the distribution of the set \eqref{our set} for very small moduli. This leads to the following fundamental result, which is an improvement of Theorem \ref{meg-rephrased}. It will be proven in Section \ref{approx}. \begin{thm}\label{approximation of M(G)} Fix $\alpha\ge1$ and $\epsilon\le 1/3$, and consider integers $m$ and $k$ with $1\le m\le k^{\alpha}$ and $k$ large enough so that $k^{\frac{1}{2}-\epsilon} \ge(\log k)^{\alpha+2}$. Set $G=G_{m,k}$, and consider $h\in[mk^{\epsilon},m \sqrt{k}/(\log k)^{\alpha+2}]$. Then \als{ M(G) &= \frac{K(G) |G|^2}{|\Aut (G)|\log|G|} + O_{\alpha,\epsilon} \left(\frac{k}{(\log k)^{\alpha}} + \frac{\sqrt{k}}{h} \sum_{q\le k^{\epsilon} } \tau_3(q) \int_{N^-}^{N^+} E(y,h; qm ) \mathrm{d} y\right), } where $K(G)$ is defined by \eqref{define K(G)}. \end{thm} Even though we cannot estimate the error term for any given values of $m$ and $k$, we can do so if we average over $m$ and $k$ using the following result, which is a consequence of Theorem 1.1 in \cite{Kou}. \begin{lma}\label{bv-short}Fix $\epsilon>0$ and $A\ge1$. For $x\ge h\ge2$ and $1\le Q^2\le h/x^{1/6+\epsilon}$, we have that \[ \int_x^{2x}\sum_{q\le Q} E(y,h;q) \mathrm{d} y \ll\frac{xh}{(\log x)^A}. \] If, in addition, the Riemann hypothesis for Dirichlet $L$-functions is true, then the above estimate holds when $1\le Q^2\le h/x^\epsilon$. \end{lma} Theorem \ref{approximation of M(G)} and Lemma \ref{bv-short} lead to a proof of Theorem \ref{CDKS3} in a fairly straightforward way as we will see in Section \ref{proofs}. Next, we turn to the proof of Theorem \ref{CDKS}.
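Lemma~\ref{smooth} is also easy to test numerically. Taking $f\equiv 1$, the following sketch (ours; the parameters $x=10$ and $u=3$ are arbitrary small choices) compares the Euler product over $p\le x$ with the truncated sum over $x$-smooth $n\le x^u$.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, flag in enumerate(sieve) if flag]

def euler_product(x):
    """prod over p <= x of (1 - 1/p)^{-1}, i.e., the case f = 1."""
    prod = 1.0
    for p in primes_up_to(x):
        prod *= 1 / (1 - 1 / p)
    return prod

def smooth_reciprocal_sum(x, u):
    """Sum of 1/n over x-smooth integers n <= x**u, built by repeatedly
    multiplying previously generated smooth numbers by prime powers."""
    limit = x ** u
    vals = [1]
    for p in primes_up_to(x):
        extra = []
        for v in vals:
            q = v * p
            while q <= limit:
                extra.append(q)
                q *= p
        vals.extend(extra)
    return sum(1.0 / v for v in vals)
```

For $x=10$, $u=3$ the product equals $4.375$ and the truncated sum falls short of it only by the tail over $10$-smooth $n>1000$, consistent with the error term $O(\log x/e^u)$.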
Using Corollary~\ref{bounds for M(G)} and H\"older's inequality, we reduce the proof of this result to that of controlling sums of the form \eq{CDKS - key sum}{ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \left(\frac{|d(p)|}{\phi(|d(p)|) }\right)^s \mathcal L(d(p))^r , } where we take $r>0$ to prove the implicit upper bound and $r<0$ for the lower bound. Nevertheless, we only seek an upper bound for the sum in \eqref{CDKS - key sum}, even for the lower bound in Theorem \ref{CDKS}. Therefore we can replace the sum over primes with a sum over almost primes and use sieve methods to detect the latter kind of integers. More precisely, we will majorize the characteristic function of primes $\le 2N$ by a convolution $\lambda*1$, where $\lambda$ is a certain truncation of the M\"obius function. This will be done using the {\it fundamental lemma of sieve methods}, which we state below in the form found in \cite[Lemma 5]{FI}. We could have also used Selberg's sieve, but the calculations are actually simpler when using Lemma \ref{lemma-FI}. \begin{lma}\label{lemma-FI} Let $y\ge2$ and $D=y^u$ with $u\ge2$. There exist two arithmetic functions $\lambda^\pm:\mathbb{N}\to[-1,1]$, supported on $\{d\in\mathbb{N}:P^+(d)\le y,\,d\le D\}$, for which \[ \begin{cases} (\lambda^-*1)(n)=(\lambda^+*1)(n)=1 &\text{if}\ P^-(n)>y,\\ (\lambda^-*1)(n)\le0\le(\lambda^+*1)(n) &\text{otherwise}. \end{cases} \] Moreover, if $g:\mathbb{N}\to\mathbb{R}$ is a multiplicative function with $0\le g(p)\le\min\{2,p-1\}$ for all primes $p\le y$, and $\lambda\in\{\lambda^+,\lambda^-\}$, then \[ \sum_d \frac{ \lambda(d)g(d) }{d} = (1+O(e^{-u})) \prod_{p\le y} \left(1-\frac{g(p)}p\right). \] \end{lma} Combining Lemmas \ref{lemmashortproduct} and \ref{lemma-FI}, we are led to the following key result, which will be proven in Section \ref{proof of Prop startwiththat}. As we will see in the same section, Theorem \ref{CDKS} is an easy consequence of this intermediate result.
\begin{prop}\label{startwiththat} Let $m,k\in\mathbb N$ and set $N=m^2k$. For any $r\in\mathbb{R}$ and $s\ge0$, we have that \[ \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \left(\frac{|d(p)|}{\phi(|d(p)|) }\right)^s \mathcal L(d(p))^r \ll_{r,s} \left(\frac{k}{\phi (k)}\right)^r \frac{\sqrt{N}}{\phi(m)\log(2k)}. \] \end{prop} \section{Completion of the proof of the main results}\label{proofs} In this section we prove Theorems \ref{CDKS}-\ref{MEN-3}. We start by stating a preliminary result, which is Lemma 15 of \cite{DS-MEG} in slightly altered form. \begin{lma}\label{Aut(G)} For $m,k\in\mathbb N$, we have that \[ \frac{|{\Aut}(G_{m,k})|}{|G_{m,k}|} = m\phi(m) \frac{\phi(k)}{k} \prod_{\substack{\ell | m \\ \ell\nmid k}} \left(1-\frac{1}{\ell^2} \right) . \] \end{lma} \begin{proof}[Proof of Theorem \ref{CDKS}] The claimed inequalities are a consequence of Corollary \ref{bounds for M(G)}, Proposition \ref{startwiththat}, and H\"older's inequality. Indeed, let $\mu=\lambda/(\lambda-1)$, so that $1/\lambda+1/\mu=1$. Then we have that \als{ M(G_{m, k}) &\ll \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|} \frac{|d(p)|}{\phi(|d(p)|)} \mathcal L(d(p)) \\ &\le \left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|}\right)^{\frac{1}{\lambda}} \left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|} \left(\frac{|d(p)|}{\phi(|d(p)|)}\right)^\mu \mathcal L(d(p))^{\mu}\right)^{\frac{1}{\mu}} \\ &\ll \left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \frac{\sqrt{(N^+-p)(p-N^-)}}{m} \right)^{\frac{1}{\lambda}} \left( \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{k} \left(\frac{|d(p)|}{\phi(|d(p)|)}\right)^\mu \mathcal L(d(p))^{\mu}\right)^{\frac{1}{\mu}} , } since $|d(p)|=(N^+-p)(p-N^-)/m^2\ll N/m^2=k$. So the definition of $\delta$ and Proposition \ref{startwiththat} imply that \[ M(G_{m, k}) \ll_{\lambda,A} \delta^{1/\lambda} \frac{km}{\phi(m)\log(2N)} \frac{k}{\phi(k)}. 
\] Hence the upper bound in Theorem \ref{CDKS} follows by Lemma \ref{Aut(G)}. The proof of the lower bound is similar, having as a starting point the inequality \[ \sum_{\substack{ N^-<p<N^+ \\ p\equiv 1\mod m}} \sqrt{|d(p)|} \le \left( \sum_{\substack{ N^- <p \le N^+ \\ p\equiv 1\pmod{m} }} \sqrt{|d(p)|} \mathcal L(d(p)) \right)^{\frac{1}{\lambda}} \left( \sum_{\substack{ N^- <p \le N^+ \\ p\equiv 1\pmod{m} }} \frac{\sqrt{|d(p)|}}{\mathcal L(d(p))^{\mu/\lambda}} \right)^{\frac{1}{\mu}} . \] \end{proof} \begin{proof}[Proof of Theorem \ref{MEN-2}] The proof of Theorem \ref{MEN-2} is completely analogous to the proof of Theorem \ref{CDKS}. The only difference is that instead of starting with Corollary \ref{bounds for M(G)}, we observe that \[ \sum_{N^-<p<N^+} \sqrt{|D_N(p)|} \mathcal L(D_N(p)) \ll M(N) \ll \sum_{N^-<p<N^+} \frac{|D_N(p)|^{3/2}}{\phi(|D_N(p)|)} \mathcal L(D_N(p)), \] a consequence of relation~\eqref{reduction to class number avg} below with $n=1$. \end{proof} \begin{proof}[Proof of Theorem~\ref{CDKS2}] Note that when $m=k=1$ and $N=1$, then $N^+=4$ and $N^-=0$ and thus the primes 2 and 3 belong to the set $\{N^-<p\le N^+: p\equiv 1\pmod m\}$. So, by Theorem \ref{CDKS}, it suffices to show Theorem \ref{CDKS2} when $y$ is large enough. We further assume that $x\in\mathbb N$, which we may certainly do. Observe that $(N^+-p)(p-N^-)\asymp N$ for $p\in((\sqrt{N}-1/2)^2,(\sqrt{N}+1/2)^2)$, and thus \[ \frac{1}{N/(\phi(m)\log(2N))} \sum_{\substack{N^-<p<N^+ \\ p\equiv 1\mod{m} }} \sqrt{ (N^+-p)(p-N^-)} \gg \frac{\phi(m)}{\sqrt{N}} \sum_{\substack{(\sqrt{N}-1/2)^2<p<(\sqrt{N}+1/2)^2 \\ p\equiv 1\mod{m} }} \log p .
\] So, if we set \[ C(m,k) = \frac{|G_{m, k}|^2}{|\Aut(G_{m, k})| \log(2|G_{m, k}|)} \asymp \frac{mk^2}{\phi(m)\phi(k)\log(mk)}, \] then Theorem \ref{CDKS} with $\lambda=2$ implies that \als{ \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \sqrt{ \frac{M(G_{m, k})}{C(m,k)} } &\gg \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \frac{\phi(m)}{x\sqrt{y}} \sum_{\substack{ (m\sqrt{k}-1/2)^2<p< (m\sqrt{k}+1/2)^2 \\ p\equiv 1\pmod m}} \log p \\ &\ge\sum_{3x/4<m\le x} \sum_{\substack{ x^2y/3 < p \le 4x^2y/9 \\ p\equiv 1\pmod m }} \frac{\phi(m)\log p}{x\sqrt{y}}\sum_{ \substack{y/100<k \le y \\ (\sqrt p -1/2)^2/m^2<k < (\sqrt p+1/2)^2/m^2 }} 1 , } provided that $y$ is large enough. Note that \[ \frac{(\sqrt p+1/2)^2 - (\sqrt p-1/2)^2}{m^2} = \frac{2\sqrt{p}}{m^2} \ge \frac{2x\sqrt{y/3}}{x^2} > 1, \] by our assumption that $x\le\sqrt{y}$. Since we also have that $(\sqrt p-1/2)^2/m^2>y/100$ and that $(\sqrt p+1/2)^2/m^2\le y$ for $y$ large enough and $m$ and $p$ as above, we conclude that \[ \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \sqrt{ \frac{M(G_{m, k})}{C(m,k)} } \gg \frac{1}{x^2} \sum_{3x/4<m\le x} \phi(m) \sum_{\substack{ x^2y/3 < p \le 4x^2y/9 \\ p\equiv 1\pmod m }} \log p. \] This last double sum equals \[ \sum_{3x/4<m\le x} \phi(m) \cdot \frac{x^2y}{9\phi(m)} + O_A\left( \frac{x^3y}{(\log y)^A}\right) \gg x^3y, \] by the Bombieri-Vinogradov theorem. Therefore we conclude that \[ \sum_{\substack{3x/4<m\le x \\ y/100<k\le y}} \sqrt{ \frac{M(G_{m,k})}{C(m,k)} } \gg xy. \] Since the summands are all $\ll1$ in this range by Theorem \ref{CDKS} (recall that $\delta\ll1$ there), we obtain Theorem \ref{CDKS2}. \end{proof} \begin{proof}[Proof of Theorem \ref{CDKS3}] Let $\theta$ be a parameter, which we take to be $1/2$ or $1/4$, according to whether we assume the generalized Riemann hypothesis or not. We then suppose that $1\le x\le y^{\theta-\epsilon}$.
Note that Theorem \ref{CDKS} and Lemma \ref{Aut(G)} imply that \[ \sum_{\substack{m\le x,\, k\le y/(\log y)^A \\ mk>1}} \left| M(G_{m,k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \ll \frac{xy^2}{(\log y)^A} . \] We break the remaining range of $m$ and $k$ into dyadic intervals, hence reducing Theorem \ref{CDKS3} to showing that \[ E:= \sum_{\substack{ x/2< m\le x \\ y/2< k\le y}} \left| M(G_{m,k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \ll_{\epsilon,A} \frac{xy^2}{(\log y)^A} \] for $x\le y^{\theta-\epsilon}$. (Note that these might be different values of $x,y$ and $\epsilon$ than the ones we started with.) We apply Theorem \ref{approximation of M(G)} with $h= (x^2y)^{1/2}/(\log y)^{A+2}$ for all $m\in[x/2,x]$ and $k\in[y/2,y]$, to deduce that \als{ E &\ll \frac{\sqrt{y}}{h} \sum_{\substack{ x/2< m\le x \\ y/2< k\le y}} \sum_{q \le k^{\epsilon} } \tau_3(q) \int_{(m^2k)^-}^{(m^2k)^+} E(t,h; qm ) \mathrm{d} t + \frac{xy^2}{(\log y)^A} \\ &=: E'+ \frac{xy^2}{(\log y)^A} , } say. Putting the sum over $k$ inside, we find that \als{ E'&\ll \frac{\sqrt{y}}{h} \sum_{x/2<m\le x} \sum_{q \le y^{\epsilon} } \tau_3(q) \int_{x^2y/10}^{2x^2y} E(t,h; qm) \left(\sum_{ \substack{ y/2<k\le y \\ t^-/m^2<k < t^+/m^2}} 1 \right) \mathrm{d} t \\ &\ll \frac{y}{hx} \sum_{m\le x} \sum_{q \le y^{\epsilon} } \tau_3(q) \int_{x^2y/10}^{2x^2y} E(t,h; qm) \mathrm{d} t \le \frac{y}{hx} \sum_{m\le x} \sum_{q \le y^{\epsilon} } \tau_4(q) \int_{x^2y/10}^{2x^2y} E(t,h; q) \mathrm{d} t . } We note that $E(u,h; b) \ll \sqrt{h/\phi(b)} \sqrt{E(u,h;b)}$, by the Brun-Titchmarsh inequality.
So the Cauchy-Schwarz inequality and Lemma \ref{bv-short} imply that \als{ E'&\ll \frac{y}{x h} \left(\sum_{b\le xy^{3\epsilon}} \tau_4(b)^2 \int_{x^2y/10}^{2x^2y} \frac{h}{\phi(b)} \mathrm{d} t\right)^{\frac{1}{2}} \left( \sum_{b\le xy^{3\epsilon}} \int_{x^2y/10}^{2x^2y} E(t,h; b) \mathrm{d} t\right)^{\frac{1}{2}} \\ &\ll \frac{y}{x h} \left( x^2yh(\log y)^{16} \cdot \frac{x^2yh}{(\log y)^{2A+16}} \right)^{\frac{1}{2}} = \frac{xy^2}{(\log y)^A} , } which completes the proof of Theorem \ref{CDKS3}. \end{proof} \begin{proof}[Proof of Theorem \ref{MEN-1}] Theorem~\ref{CDKS} implies that \[ M(G_{m, k}) \ll \frac{k^{3/2}}{\phi (k)} \frac{\sqrt{N}}{\phi(m)\log(2k)} = \frac{mk^2}{\phi(k)\phi(m)\log(2k)}\le \frac{Nmk}{\phi(N)\phi(m)\log(2k)}. \] Therefore, \[ \sum_{\substack{m^2k=N \\ m>x}} M(G_{m, k}) \ll \sum_{\substack{m^2|N \\ x<m\le\sqrt{N} }} \frac{N^2}{m\phi(m)\phi(N)\log(2N/m^2)} \ll \frac{N^2}{x\phi(N)\log(2N)}, \] which completes the proof of Theorem \ref{MEN-1}. \end{proof} \begin{proof}[Proof of Theorem \ref{MEN-3}] In view of Theorem \ref{MEN-1}, it suffices to show that \[ \sum_{1<N\le x} \left| \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} M(G_{m, k}) - \frac{K(N)N^2}{\phi(N)\log N} \right| \ll_A \frac{x^2}{(\log x)^A}, \] where $K(N)$ is defined by~\eqref{define K(N)}. Note that \als{ &\sum_{1<N\le x} \left| \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} M(G_{m, k}) - \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \\ &\quad\le \sum_{\substack{1<m^2k\le x \\ m\le (\log x)^A}} \left| M(G_{m, k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \\ &\quad\le \sum_{1\le 2^j\le (\log x)^{A} } \sum_{\substack{k\le x/4^j \\ 2^j\le m<2^{j+1} \\ m^2k>1 }} \left| M(G_{m, k}) - \frac{K(G_{m, k}) |G_{m, k}|^2}{|\Aut (G_{m, k})|\log|G_{m, k}|} \right| \\ &\quad\ll_A \sum_{1\le 2^j\le (\log x)^{A}} \frac{x^2}{8^j(\log x)^A} \ll \frac{x^2}{(\log x)^A} } by Theorem~\ref{CDKS3}. 
So it suffices to show that \eq{MEN-3 goal}{ \sum_{1<N\le x} \frac{N}{\log N} \left| \sum_{\substack{ m^2k=N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|}{|\Aut (G_{m, k})|} - \frac{K(N)N}{\phi(N)} \right| \ll_A \frac{x^2}{(\log x)^A} . } In fact, Lemma \ref{Aut(G)} implies that \als{ \frac{ K(G_{m, k}) |G_{m, k}|}{|\Aut(G_{m, k})|} &= \frac{k}{m\phi(m)\phi(k)} \prod_{\substack{\ell | m \\ \ell\nmid k}} \left(1-\frac{1}{\ell^2} \right)^{-1} K(G_{m, k}) \\ &= \frac{N}{m^2\phi(N)}\prod_{\ell|(m,k)} \left(1-\frac{1}{\ell}\right)^{-1} \prod_{\substack{\ell | m \\ \ell\nmid k}} \left(1-\frac{1}{\ell^2} \right)^{-1} K(G_{m, k}) \\ &= \frac{N}{m^2\phi(N)} \prod_{\ell\nmid N}\left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right) \prod_{\ell\mid (m,k)}\left(1+\frac{1}{\ell}\right) \prod_{\substack{\ell\mid k\\ \ell\nmid m}}\left(1-\frac{1}{\ell(\ell-1)}\right) . } Therefore, \als{ \sum_{\substack{m^2k = N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|}{|\Aut(G_{m, k})|} &= \sum_{m^2k = N} K(G_{m, k}) \frac{|G_{m, k}|}{|\Aut(G_{m, k})|} + O\left(\frac{N}{(\log x)^A\phi(N)}\right) \\ &= \frac{N}{\phi(N)} \prod_{\ell\nmid N}\left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right) \cdot S(N) + O\left(\frac{N}{(\log x)^A\phi(N)}\right), } where \[ S(N) = \sum_{m^2k=N} \frac{1}{m^2} \prod_{\ell\mid (m,k)}\left(1+\frac{1}{\ell}\right) \prod_{\substack{\ell\mid k\\ \ell\nmid m}}\left(1-\frac{1}{\ell(\ell-1)}\right) . \] Note that \als{ S(\ell^v) &= 1-\frac{1}{\ell(\ell-1)} + \sum_{1\le j\le v/2} \frac{1}{\ell^{2j}} \left(1+\frac{{\bf 1}_{j<v/2}}{\ell}\right) \\ &=1-\frac{1}{\ell(\ell-1)} + \sum_{1\le j\le v/2} \frac{1}{\ell^{2j}} + \sum_{1\le j\le v/2} \frac{{\bf 1}_{j<v/2}}{\ell^{2j+1}} \\ &=1-\frac{1}{\ell(\ell-1)} + \sum_{i=2}^v \frac{1}{\ell^i} = 1-\frac{1}{\ell^v(\ell-1)} . 
} So we conclude that \[ \sum_{\substack{m^2k = N \\ m\le (\log x)^A}} \frac{K(G_{m, k}) |G_{m, k}|}{|\Aut(G_{m, k})|} = \frac{K(N) N}{\phi(N)} + O\left(\frac{N}{(\log x)^A\phi(N)}\right) , \] which yields relation \eqref{MEN-3 goal}, thus completing the proof of Theorem \ref{MEN-3}. \end{proof} \section{Reduction to an average of Dirichlet series}\label{deuring lemma} In this section, we prove Lemma \ref{formula for M(G)} using the theory developed by Deuring~\cite{Deu} and somewhat generalized by Schoof~\cite{Schoof}. As before, we fix a group $G=G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$, and we set $N=|G|=m^2k$. Given a prime $p$ and an integer $n$ such that $n^2|N$, we define \[ M_p(N;n)=\sum_{\substack{E/\mathbb F_p\\ \#E(\mathbb F_p)=N\\ E(\mathbb F_p)[n]\cong G_{n,1}}}\frac{1}{|\Aut_p(E)|}, \] the weighted number of isomorphism classes of elliptic curves over $\mathbb F_p$ which have exactly $N$ rational points and whose rational $n$-torsion subgroup is isomorphic to $G_{n,1}=\mathbb Z/n\mathbb Z\times\mathbb Z/n\mathbb Z$. It is not hard to relate $M_p(G)$ to a sum involving $M_p(N;n)$. This is accomplished via an inclusion-exclusion argument, which gives the relation \begin{equation}\label{inclusion exclusion} M_p(G)=\sum_{r^2 | k} \mu(r)M_p(N; r m). \end{equation} In~\cite{Schoof}, Schoof essentially gave a formula for $M_p(N;n)$ in terms of class numbers. However, one needs to exercise care here as Schoof counts each $\mathbb F_p$-isomorphism class $E$ with weight $1$ instead of with weight $1/|\Aut_p(E)|$ as we do here. Given a negative discriminant $D$, we let $H(D)$ denote the \textit{Kronecker class number}, which is defined as \[ H(D)=\sum_{ \substack{f^2\mid D\\ D/f^2 \equiv 0,1\pmod{4} }} \frac{h(D/f^2)}{w(D/f^2)}. \] Here, as usual, $h(d)$ denotes the (ordinary) class number of the unique imaginary quadratic order of discriminant $d$, and $w(d)$ denotes the cardinality of its unit group.
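The Kronecker class number is straightforward to compute by enumerating reduced primitive binary quadratic forms; the following sketch (ours, a naive implementation rather than the optimized algorithm of~\cite{BV:2007}) uses the standard reduction conditions $-a<b\le a\le c$, with $b\ge0$ when $a=c$, and the conventions $w(-3)=6$, $w(-4)=4$, and $w(d)=2$ otherwise.

```python
from fractions import Fraction
from math import gcd, isqrt

def h_w(d):
    """Class number h(d) and unit count w(d) of the imaginary quadratic
    order of discriminant d < 0, by counting reduced primitive forms
    a x^2 + b x y + c y^2 with b^2 - 4 a c = d."""
    assert d < 0 and d % 4 in (0, 1)
    h = 0
    for a in range(1, isqrt(-d // 3) + 1):  # reduced forms have 3 a^2 <= |d|
        for b in range(-a + 1, a + 1):
            if (b * b - d) % (4 * a):
                continue
            c = (b * b - d) // (4 * a)
            if c < a or (a == c and b < 0):
                continue  # not reduced
            if gcd(gcd(a, abs(b)), c) == 1:  # primitive forms only
                h += 1
    w = 6 if d == -3 else 4 if d == -4 else 2
    return h, w

def kronecker_class_number(D):
    """H(D) = sum of h(D/f^2)/w(D/f^2) over f with f^2 | D and
    D/f^2 congruent to 0 or 1 mod 4, as in the display above."""
    total = Fraction(0)
    for f in range(1, isqrt(-D) + 1):
        if D % (f * f) == 0 and (D // (f * f)) % 4 in (0, 1):
            h, w = h_w(D // (f * f))
            total += Fraction(h, w)
    return total
```

For instance, $H(-16)=h(-16)/2+h(-4)/4=3/4$, since the summand $f=4$ is excluded by the congruence condition.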
Then letting \[ D_N(p)=(p+1-N)^2-4p=(p-1-N)^2-4N \] and reworking the proofs of~\cite[Lemma 4.8 and Theorem 4.9]{Schoof} to count each class $E$ with weight $1/|\Aut_p(E)|$, we arrive at the formula \eq{reduction to class number avg}{ M_p(N;n)= \begin{cases} H\left(\frac{D_N(p)}{n^2}\right)&\text{if }p\in(N^-,N^+)\text{ and }p\equiv 1\pmod n,\\ 0&\text{otherwise}. \end{cases} } Note here that $D_N(p)/n^2$ is a negative discriminant whenever $p\in(N^-,N^+)$, $p\equiv 1\pmod n$, and $n^2\mid N$. \begin{lma}\label{Deuring for groups} Let $m,k\in\mathbb N$ and recall that $d(p) = d_{m,k}(p)$ is defined by \eqref{def-dp}. If $p\in (N^-,N^+)$ and $p\equiv 1\pmod m$, then \[ M_p(G_{m, k})= \sum_{\substack{f^2\mid d(p),\, (f,k)=1\\ \frac{d(p)}{f^2}\equiv 0,1\pmod 4}}\frac{h(d(p)/f^2)}{w(d(p)/f^2)}. \] Otherwise, $M_p(G_{m, k})=0$. \end{lma} \begin{rmk} The above formula is amenable to computation. Indeed, given a prime $p$ and any $m$ and $k$, very simple modifications to the usual quadratic forms algorithm for computing class numbers (see~\cite[pp.~99--100]{BV:2007} for example) make it possible to compute $M_p(G_{m, k})$ using at most $O(k)$ arithmetic operations, which is reasonable for small $k$. If we put \begin{equation*} H_k(D)=\sum_{\substack{f^2\mid D,\, (f,k)=1\\ \frac{D}{f^2}\equiv 0,1\pmod 4}}\frac{h(D/f^2)}{w(D/f^2)} \end{equation*} for each negative discriminant $D$ and each positive integer $k$, then the only modifications needed are as follows. When the algorithm produces the (not necessarily primitive) form $ax^2+bxy+cy^2$, say with $(a,b,c)=f\ge1$, it is counted subject to the following rules, provided that $(f,k)=1$. \begin{enumerate} \item Forms proportional to $x^2+y^2$ are counted with weight $1/4$. \item Forms proportional to $x^2+xy+y^2$ are counted with weight $1/6$. \item All other forms are counted with weight $1/2$. 
\end{enumerate} Similarly, tables of $M(G_{m, k})$ or $M_p(G_{m, k})$ values can be computed for $m$ and $k$ of modest size by simultaneously computing a table of values of $H_k(D)$. \end{rmk} \begin{proof} It follows from~\eqref{reduction to class number avg} that $M_p(G)=0$ unless $p\in(N^-,N^+)$ and $p\equiv 1\pmod m$. Therefore, assume that $p\in (N^-,N^+)$ and $p\equiv 1\pmod m$, and write $k=s^2t$ with $t$ square-free. Combining relations~\eqref{inclusion exclusion} and~\eqref{reduction to class number avg} with the definition of the Kronecker class number, we find that \begin{equation*} \begin{split} M_p(G) =\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r)H\left(\frac{D_N(p)}{(rm)^2}\right) &=\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r)H\left(\frac{d(p)}{r^2}\right)\\ &=\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r) \sum_{\substack{f^2\mid\frac{d(p)}{r^2}\\ \frac{d(p)}{(rf)^2}\equiv 0,1\pmod 4}} \frac{h(d(p)/(rf)^2)}{w(d(p)/(rf)^2)}\\ &=\sum_{\substack{r\mid s\\ p\equiv 1\pmod{rm}}}\mu(r) \sum_{\substack{f^2\mid d(p),\, r\mid f\\ \frac{d(p)}{f^2}\equiv 0,1\pmod 4}}\frac{h(d(p)/f^2)}{w(d(p)/f^2)}. \end{split} \end{equation*} Now interchanging the sum over $r$ with the sum over $f$ and recalling the identity \begin{equation*} \sum_{r\mid n}\mu(r)=\begin{cases}1&\text{if }n=1,\\ 0&\text{otherwise},\end{cases} \end{equation*} we arrive at the formula \begin{equation*} \begin{split} M_p(G) &= \sum_{\substack{f^2\mid d(p)\\ (f,s,(p-1)/m)=1\\ \frac{d(p)}{f^2}\equiv 0,1\pmod 4}}\frac{h(d(p)/f^2)}{w(d(p)/f^2)}. \end{split} \end{equation*} In order to complete the proof, it is sufficient to show that, in the above sum, the condition $(f,s,(p-1)/m)=1$ implies the simpler condition $(f,k)=1$, the converse implication being immediate. To this end, we write $p=1+jm$ and assume that $(f,s,(p-1)/m)=(f,s,j)=1$.
Then $d(p)=(j-mk)^2-4k$, and the condition $d(p)/f^2\equiv 0, 1\pmod 4$ may be rewritten as \begin{equation}\label{disc condition} (j-mk)^2-4k\equiv 0, f^2\pmod{4f^2}. \end{equation} Now let $\ell$ be any prime dividing $(f,k)$. Then the above congruence implies that $\ell\mid j$, and hence that $\ell^2\mid (j-mk)^2$, whence $\ell^2\mid 4k$. If $\ell$ is odd, then we have that $\ell^2\mid k$, and hence $\ell\mid (f,s,j)=1$, which is a contradiction. If $\ell=2$, then we divide~\eqref{disc condition} through by $4$ to obtain \begin{equation*} \left(\frac{j}{2}-m\frac{k}{2}\right)^2-k\equiv 0, \frac{f^2}{4}\pmod{f^2}. \end{equation*} Since $\ell=2\mid (f,k)$, we have that $k$ is even and congruent to a difference of two squares modulo $4$. This in turn implies that $k\equiv 0\pmod 4$, i.e., $2\mid s$. Thus, in this case we also have the contradiction $\ell=2\mid (f,s,j)=1$. Therefore, we conclude that $(f,k)=1$, and this completes the proof of the lemma. \end{proof} Lemma~\ref{Deuring for groups} together with the class number formula immediately yields Lemma \ref{formula for M(G)}. \section{Local computations}\label{local computations} In this section we gather some local computations which we will need in the proofs of Theorem \ref{approximation of M(G)} and Proposition \ref{startwiththat}. As before, we continue to assume that $m, k,$ and $N$ are positive integers with $N=|G_{m, k}|=m^2k$. \begin{lma}\label{generic quad lemma} Let $\ell$ be an odd prime. For $e\ge1$, $(d,\ell)=1$ and $(a,b)=1$, we have that \[ \#\{j\in\mathbb Z/\ell^e\mathbb Z : j^2\equiv d\pmod{\ell^e}\} = 1+ \leg{d}{\ell} \] and \[ \#\{j\in\mathbb Z/\ell^e\mathbb Z : j^2\equiv d\pmod{\ell^e},\,(a+bj,\ell)=1\} = 1+ \leg{a^2-db^2}{\ell}^2 \leg{d}{\ell} . \] \end{lma} \begin{proof} The first formula is classical. For the second, we first note that if $\leg{d}{\ell}=-1$, then $\leg{a^2-db^2}{\ell}^2=1$, and the formula holds.
Now assume that $\leg{d}{\ell}=1$, so that there are exactly two solutions to the congruence $j^2\equiv d\mod{\ell^e}$, say $\pm j_0$. If $\ell\mid b$, then the condition $(a+bj,\ell)=1$ is satisfied trivially for all $j\in\mathbb Z$, and the claimed result follows. Finally, if $\ell\nmid b$, then we need to exclude exactly one of the solutions when $a\equiv \pm bj_0\mod{\ell}$, that is to say when $a^2\equiv b^2d\mod{\ell}$. So the claimed formula holds in this last case too. \end{proof} We set \eq{T(n) def}{ T(n) = \sum_{d\mod n} \leg{d-4k}{n} \#\{j\mod n: j^2\equiv d\mod n,\ (N+1+jm,n)=1\}. } \begin{prop}\label{odd primes prop} Let $\ell$ be a prime not dividing $2k$ and $w\ge1$. Then \[ \frac{T(\ell^w)}{\ell^{w-1}} = - \leg{m(N-1)}{\ell}^2 + \begin{cases} \ell - 1 - \leg{k}{\ell} &\mbox{if $w$ is even},\cr - 1 &\mbox{if $w$ is odd}. \end{cases} \] \end{prop} \begin{proof} We write $T(\ell^w)=T_1(\ell^w)+T_2(\ell^w)$, where $T_1(\ell^w)$ is the same sum as $T(\ell^w)$ with the additional restriction that $\ell|d$ and $T_2(\ell^w)$ is the remaining sum. First, we calculate $T_1(\ell^w)$. We have that \als{ T_1(\ell^w) &= \sum_{\substack{d\mod{\ell^w}\\\ell|d}} \leg{d-4k}{\ell^w} \sum_{\substack{j\mod{\ell^w} \\ j^2\equiv d\mod {\ell^w}}}\leg{N+1+jm}{\ell}^2 \\ &= \sum_{\substack{d\mod{\ell^w}\\\ell|d}} \leg{-4k}{\ell}^w \leg{N+1}{\ell}^2 \sum_{\substack{j\mod{\ell^w},\, \ell|j \\ j^2\equiv d\mod {\ell^w}}} 1 \\ &= \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 \sum_{\substack{j\mod{\ell^w} \\ \ell|j }} 1 = \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 \ell^{w-1} . } Finally, we compute $T_2(\ell^w)$. Applying Lemma \ref{generic quad lemma}, we find that \als{ T_2(\ell^w) &= \sum_{\substack{d\mod{\ell^w}\\ (d,\ell)=1}} \leg{d-4k}{\ell}^w \left( 1+\leg{(N+1)^2-dm^2}{\ell}^2 \leg{d}{\ell} \right) \\ &= \ell^{w-1} \sum_{d\mod{\ell}} \leg{d-4k}{\ell}^w \left( 1+\leg{(N+1)^2-dm^2}{\ell}^2 \leg{d}{\ell} \right) - \ell^{w-1}\leg{-k}{\ell}^w. 
} If $\ell\mid m$, then $\leg{(N+1)^2-dm^2}{\ell}=1$ for all $d\mod\ell$. On the other hand, if $\ell \nmid m$, then there is precisely one $d\mod\ell$ such that $(N+1)^2-dm^2\equiv 0\mod{\ell}$, for which we have that \[ \leg{d-4k}{\ell}^{w}=\leg{m^2d-4m^2k}{\ell}^{w} = \leg{(N-1)^2}{\ell}^{w} = \leg{N-1}{\ell}^2 \quad\text{and}\quad \leg{d}{\ell} = \leg{N+1}{\ell}^2. \] Thus, whether $\ell$ divides $m$ or not, we have \[ \frac{T_2(\ell^w)}{\ell^{w-1}} = - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 + \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right), \] which implies that \als{ \frac{T(\ell^w)}{\ell^{w-1}} & = \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 \\ &\quad + \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) . } Note that if $\ell|N+1$, then $\leg{-k}{\ell}=1$ and thus \[ \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 = -1 = - \leg{m(N-1)}{\ell}^2, \] whereas if $\ell\nmid N+1$, then \[ \leg{-k}{\ell}^w \leg{N+1}{\ell}^2 - \leg{-k}{\ell}^w - \leg{m(N-1)(N+1)}{\ell}^2 = - \leg{m(N-1)}{\ell}^2 . \] So \als{ \frac{T(\ell^w)}{\ell^{w-1}} &= - \leg{m(N-1)}{\ell}^2 + \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) . } If now $w$ is odd, then \[ \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) = \sum_{d\mod\ell} \leg{d-4k}{\ell}\leg{d}{\ell} = -1 , \] using for example \cite[Exercise 1.1.9]{Stepanov} since $(2k, \ell)=1$. Finally, if $w$ is even, then \[ \sum_{d\mod\ell} \leg{d-4k}{\ell}^{w} \left( 1+ \leg{d}{\ell} \right) = \ell-1 + \sum_{\substack{d\mod\ell \\ d\not\equiv 4k\mod{\ell}}} \leg{d}{\ell} = \ell-1 - \leg{k}{\ell} , \] which completes the proof of the proposition. 
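As a sanity check, and independently of the argument above, the closed form can be compared with the defining sum \eqref{T(n) def} by brute force; the following Python sketch (with our own helper names, not part of the proof) does so for a few admissible parameters:

```python
from math import gcd

def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def T_bruteforce(ell, w, m, k):
    """T(ell^w) computed straight from the definition of T(n)."""
    n, N = ell ** w, m * m * k
    total = 0
    for d in range(n):
        chi = legendre(d - 4 * k, ell) ** w   # Jacobi symbol ((d-4k) / ell^w)
        if chi:
            total += chi * sum(1 for j in range(n)
                               if (j * j - d) % n == 0
                               and gcd(N + 1 + j * m, n) == 1)
    return total

def T_closed(ell, w, m, k):
    """The closed form of the proposition; requires ell not dividing 2k."""
    N = m * m * k
    sq = 0 if (m * (N - 1)) % ell == 0 else 1    # the symbol (m(N-1)/ell)^2
    tail = ell - 1 - legendre(k, ell) if w % 2 == 0 else -1
    return ell ** (w - 1) * (-sq + tail)

# compare on a few parameter choices with ell not dividing 2k
for ell, w, m, k in [(3, 1, 2, 5), (3, 2, 2, 5), (5, 1, 3, 7), (5, 2, 1, 3), (7, 1, 4, 5)]:
    assert (2 * k) % ell != 0
    assert T_bruteforce(ell, w, m, k) == T_closed(ell, w, m, k)
```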
\end{proof} \begin{cor}\label{formula for P(ell)} For a prime $\ell$ not dividing $2k$, we have that \[ P(\ell) := 1 + \sum_{w\ge1} \frac{T(\ell^w)}{\ell^{2w-1}(\ell- \leg{m}{\ell}^2) } = \frac{\ell^3-\leg{m}{\ell}^2\ell^2 - (1+\leg{m}{\ell}^2\leg{N-1}{\ell}^2)\ell - 1- \leg{N-1}{\ell}^2\leg{k}{\ell} } {(\ell^2-1)(\ell-\leg{m}{\ell}^2)} . \] \end{cor} \begin{proof} Proposition \ref{odd primes prop} and a straightforward computation imply that \[ P(\ell) = \frac{\ell^3-\leg{m}{\ell}^2\ell^2 - (1+\leg{m}{\ell}^2\leg{N-1}{\ell}^2)\ell + \leg{m}{\ell}^2-\leg{m(N-1)}{\ell}^2-1-\leg{k}{\ell} } {(\ell^2-1)(\ell-\leg{m}{\ell}^2)} . \] Finally, note that \[ \leg{m(N-1)}{\ell}^2+\leg{k}{\ell} - \leg{m}{\ell}^2 = \leg{N-1}{\ell}^2\leg{k}{\ell} , \] since $\leg{k}{\ell}=\leg{m}{\ell}^2=1$ if $\ell|N-1$. \end{proof} \section{Proof of Proposition \ref{startwiththat}}\label{proof of Prop startwiththat} This section is dedicated to the proof of Proposition \ref{startwiththat}, which gives an upper bound of the conjectured order of magnitude for the average of special values \[ \mathcal L(d(p)) = L \left(1, \displaystyle{\left( \frac{d(p)}{\cdot}\right)} \right) \] summed over integers with no small prime factors. A key role will be played by the fundamental lemma of sieve methods, i.e. Lemma \ref{lemma-FI}. \begin{proof}[Proof of Proposition \ref{startwiththat}] We shall employ the notation \[ \rho(n) := \frac{|n|}{\phi(|n|)} = \prod_{\ell | n} \left(1-\frac{1}{\ell}\right)^{-1} . \] We will simplify the sum we are estimating with an application of the Cauchy-Schwarz inequality but, first, we massage the $L$-functions that appear in it. Note that if $p=1+jm$, then $d(p)=(j-mk)^2 - 4k \equiv j^2\pmod k$.
So \als{ \mathcal L(d(p))^r = \prod_{ \substack{ \ell | k \\ \ell\nmid j}} \left(1-\frac{1}{\ell} \right)^{-r} \prod_{ \ell \nmid k} \left(1-\frac{ \leg{d(p)}{\ell} }{\ell}\right)^{-r} \ll_r \rho(k)^r \rho((j,k))^{|r|} \mathcal L(k^2d(p))^r, } and consequently, \[ S := \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \rho(d(p))^{s} \mathcal L(d(p))^r \ll_r \rho(k)^r \sum_{\substack{N^-<p<N^+ \\ p =1+jm,\, j\in\mathbb N }} \rho((j,k))^{|r|} \rho(d(p))^s \mathcal L(k^2d(p))^r. \] Hence the Cauchy-Schwarz inequality yields that \eq{C-S}{ \frac{S}{\rho(k)^r } \ll_r \left(\sum_{\substack{N^-<p<N^+ \\ p =1+jm }} \rho((j,k))^{2|r|} \rho(d(p))^{2s} \right)^{\frac{1}{2}} \left(\sum_{\substack{N^-<p<N^+ \\ p\equiv 1\pmod m }} \mathcal L(k^2d(p))^{2r} \right)^{\frac{1}{2}} =: \sqrt{S_1 S_2} , } say. First, we estimate $S_1$. Note that \[ \rho(n)^v \asymp_v \prod_{\ell|n} \left(1+\frac{v}{\ell}\right) = \sum_{a|n} \frac{\mu^2(a) \tau_{v}(a) }{a} , \] for any $v\ge0$. Since \[ \sum_{\substack{a|n \\ a>x}} \frac{\mu^2(a) \tau_v(a)}{a} \le \frac{1}{x}\sum_{a|n} \mu^2(a)v^{\omega(a)} = \frac{(v+1)^{\omega(n)}}{x} \ll_{v,\epsilon} \frac{n^\epsilon}{x} , \] we find that \eq{C-S-e1}{ S_1 &\ll_r \sum_{\substack{ N^-<p <N^+ \\ p=1+jm}} \left( \sum_{\substack{a|(k,j) \\ a\le k^{1/5} }}\frac{\mu^2(a)\tau_{2|r|}(a)}{a} + O_r(k^{-1/6}) \right) \left( \sum_{\substack{b|d(p) \\ b\le k^{1/5} }}\frac{\mu^2(b)\tau_{2s}(b)}{b} + O_s(k^{-1/6})\right) \\ &= \sum_{ \substack{ a,b\le k^{1/5} \\ a|k }} \frac{\mu^2(a)\mu^2(b)\tau_{2|r|}(a)\tau_{2s}(b) }{ab} \sum_{\substack{N^-<p<N^+ \\ p=1+jm \\ a|j,\ b| d(p) }} 1 +O_{r,s}( k^{11/30} ), } using the trivial estimate $\#\{N^-<p<N^+:p\equiv1\mod m\}\ll \sqrt{N}/m=\sqrt{k}$. 
The innermost sum in the second line of \eqref{C-S-e1} equals \[ \sum_{\substack{ h \in \mathbb Z/[a,b]\mathbb Z \\ h \equiv 0 \pmod a \\ (h-mk)^2 \equiv 4k \pmod b}} \sum_{\substack{ N^-<p <N^+ \\ p=1+jm \\ j\equiv h \pmod{[a,b]} }} 1 \ll \frac{\sqrt{N}}{\phi(m[a,b])\log(2k)} \sum_{\substack{ h \in \mathbb Z/[a,b]\mathbb Z \\ h\equiv 0 \pmod a \\ (h-mk)^2 \equiv 4k \pmod b}} 1 \le \frac{\sqrt{N} \tau(b) }{\phi(m[a,b])\log(2k)} , \] where the first inequality follows from the Brun-Titchmarsh inequality and the second from the fact that $b$ is square-free. Since $\phi(m[a,b])\ge\phi(m)\phi([a,b])$, relation \eqref{C-S-e1} becomes \eq{estimation of S_1}{ S_1 &\ll_{r,s} \frac{\sqrt{N}}{\phi(m)\log(2k)} \sum_{ \substack{ a,b\le k^{1/5} \\ a|k }} \frac{\mu^2(a)\mu^2(b)\tau_{2|r|}(a)\tau_{2s}(b)^2 }{a\cdot b\cdot \phi([a,b])} + k^{11/30} \ll_{r,s} \frac{\sqrt{N}}{\phi(m)\log(2k)} . } Next, we turn to the estimation of \[ S_2 = \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \mathcal L(k^2d(p))^{2r} . \] Our first task is to replace the $L$-values that appear in the above sum with truncated Euler products. We set \[ S_3 = \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \mathcal L(k^2d(p);z^{80000})^{2r} \] with $z=\log(4k)$ and estimate the error \[ R:=S_2-S_3 \] using Lemma~\ref{lemmashortproduct}. First note that since $d(p)$ is a discriminant and $|d(p)|\le 4k$ for $p\in (N^-,N^+)$, it follows that \eq{leg-k^2d(p)}{ \leg{k^2d(p)}{\cdot} } is periodic modulo $k|d(p)|\le 4k^2$ and its conductor cannot exceed $|d(p)|\le 4k$. Thus, we may apply Lemma~\ref{lemmashortproduct} with $\alpha=100$ and $Q=4k$. Now let $d_1=d_1(p)$ be the discriminant of the quadratic number field $\mathbb Q(\sqrt{d(p)})$, so that the character in \eqref{leg-k^2d(p)} is induced by the primitive character $\leg{d_1}{\cdot}$. If $|d_1| \notin \mathcal{E}_{100}(4k)$, then we can approximate $\mathcal L(k^2d(p))^{2r}$ very well by $\mathcal L(k^2d(p);z^{80000})^{2r}$.
Otherwise, we write $d(p) = d_1 b^2$ and note that \[ \mathcal L(k^2d(p))^{2r} \le \rho(kb)^{2|r|} \mathcal L(d_1)^{2r} \ll_r \rho(kb)^{2|r|}\cdot \begin{cases} (\log|d_1|)^{2r} &\text{if}\ r\ge0,\cr |d_1|^{1/8} &\text{if}\ r<0, \end{cases} \] the second estimate being a consequence of Siegel's theorem. In any case, we find that \[ \mathcal L(k^2d(p))^{2r} \ll_r (\rho(kb))^{2|r|} |d_1|^{1/8} \ll_r (kb |d_1|)^{1/8} \le (k |d(p)|)^{1/8} \le (2k)^{1/4}. \] Combining the above, we arrive at the estimate \als{ R &\ll_{r} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \frac{(\log\log k)^{2|r|} }{\log^{100}(2k)} + \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m \\ |d_1| \in\mathcal{E}_{100}(4k)}} k^{1/4} . } Note that if $p=1+jm$ is such that $|d_1| \in\mathcal{E}_{100}(4k)$, then $d(p)=d_1 b^2$ for some $b\in\mathbb{N}$, or equivalently, $(j-mk)^2-d_1 b^2=4k$. So for each fixed $d_1$ with $|d_1|\in \mathcal{E}_{100}(4k)$, there are at most $4\tau(4k)\ll k^{1/100}$ admissible values of $j$ (and hence of $p$). Consequently, \eq{estimation of R}{ R \ll_{r} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \frac{(\log\log k)^{2|r|} }{\log^{100}(2k)} + k^{1/4} \cdot k^{1/100} \cdot |\mathcal{E}_{100}(4k)| \ll_{r} \frac{\sqrt{N} }{ \log(2k)\phi(m)}, } by Lemma\ \ref{lemmashortproduct} and the Brun-Titchmarsh inequality. Finally, we turn to the estimation of $S_3$. First, note that \[ \mathcal L(k^2d(p);z^{80000})^{2r} \ll_r \mathcal L(k^2d(p);\sqrt{z})^{2r} \ll_r \prod_{ \substack{ \ell\nmid 2pk \\ 2|r|+1< \ell \le \sqrt{z} }} \left(1+2r\cdot \frac{\leg{d(p)}{\ell} }{\ell}\right) , \] by Mertens' estimate, which immediately implies that \[ S_3 \ll_r \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod{m} }} \prod_{ \substack{ \ell\nmid 2pk \\ 2|r|+1 < \ell \le \sqrt{z} }} \left(1+2r\cdot \frac{\leg{d(p)}{\ell} }{\ell}\right) . \] We cannot estimate this sum as it stands, because that would require information about primes in arithmetic progressions that is currently not available.
We refer the reader to~\cite{DS-MEG} for a more detailed discussion about this issue. Instead, we extend the summation from primes $p$ to integers $n$ with no prime factors $\le k^{1/8}$ and we apply Lemma~\ref{lemma-FI} with $D=k^{1/4}$ and $y=k^{1/8}$. Hence \eq{S3-S4}{ S_3 \ll_r \sum_{\substack{N^-< n <N^+\\ n \equiv 1\pmod {m} }}(\lambda^+*1)(n) \prod_{ \substack{2|r|+1 < \ell\le\sqrt{z} \\ \ell\nmid 2nk }} \left(1+2r\cdot \frac{\leg{d(n)}{\ell}}{\ell}\right)=:S_4, } by the positivity of the above Euler product. Expanding this product to a sum, opening the convolution $(\lambda^+*1)(n)$, and interchanging the order of summation yields \als{ S_4 &= \sum_{\substack{\ell|a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}\frac{\mu^2(a) \tau_{2r}(a) }{a} \sum_{\substack{N^-<n<N^+\\ (n,a)=1 \\ n\equiv 1\pmod{m} }} (\lambda^+*1)(n) \leg{d(n)}{a} \nonumber \\ &= \sum_{\substack{\ell|a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}\frac{\mu^2(a)\tau_{2r}(a) }{a} \sum_{\substack{b \leq k^{1/4}\\ (b,am)=1}}\lambda^+(b) \sum_{\substack{N^-<n<N^+\\ (n,a)=1,\, b|n \\ n\equiv 1\pmod{m} }}\leg{d(n)}{a} . } Splitting the integers $n\in(N^-,N^+)$ according to the congruence class of $d(n)\pmod a$, we deduce that \eq{S_3}{ S_4 = \sum_{\substack{\ell |a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}} \frac{\mu^2(a)\tau_{2r}(a)}{a} \sum_{\substack{b \leq k^{1/4}\\ (b,am)=1}}\lambda^+(b) \sum_{c\in\mathbb Z/a\mathbb Z} \leg{c}{a} S(a,b,c), } where \[ S(a,b,c) := \#\left\{ N^-<n<N^+ : \begin{array}{ll} n\equiv 1\pmod {m} & (n,a)=1 \\ n\equiv 0\pmod b & d(n)\equiv c\pmod{a} \end{array} \right\}. \] We fix $a$, $b$ and $c$ as above and calculate $S(a,b,c)$. Set $n=1+j m$, and define $\Delta(j)=(j-mk)^2-4k$, so that $d(n)=\Delta(j)$. Note that $n$ is counted by $S(a,b,c)$ if and only if $mk - 2\sqrt{k} < j < mk+2\sqrt{k}$, $\Delta(j)\equiv c\pmod a$, $1+jm\equiv0\pmod b$ and $(1 + jm, a) = 1$. 
Thus we have that \begin{equation}\label{T} S(a,b,c)=\left(\frac{4\sqrt{k}}{ab}+O(1)\right)J(a,b,c), \end{equation} where \[ J(a,b,c) :=\#\{j\in \mathbb Z/ab\mathbb Z : \Delta(j)\equiv c\pmod a,\ 1 + jm \equiv 0 \pmod b,\ (1 + jm, a) = 1\}. \] By the Chinese remainder theorem, we find that \[ J(a,b,c)=U(a,c) := \#\{j\in \mathbb Z/a\mathbb Z: \Delta(j)\equiv c\pmod a,\ (1+ jm,a) = 1\}, \] since $(b,m)=1$, and thus there is exactly one solution modulo $b$ to the equation $1+jm\equiv 0\pmod b$. Note that $U(a,c)\le \tau(a)$ by Lemma \ref{generic quad lemma} and that \[ \sum_{c\in\mathbb Z/a\mathbb Z}\leg{c}{a} U(a,c) = T(a), \] where $T(a)$ is defined by relation \eqref{T(n) def}. Together with relations~\eqref{S_3} and~\eqref{T}, this implies that \als{ S_4 &= 4\sqrt{k} \sum_{\substack{\ell|a\ \Rightarrow\ 2|r|+1<\ell\le \sqrt{z} \\ (a,2k)=1}}\frac{\mu^2(a)\tau_{2r}(a)T(a)}{a^2} \sum_{\substack{b \leq k^{1/4}\\ (b,am)=1}}\frac{\lambda^+(b)}b \\ &\quad + O \left( k^{1/4} \sum_{ P^+(a)\le\sqrt{z} } \mu^2(a)\tau_{2|r|}(a)\tau(a) \right) . } The error term in the above estimate is \[ \ll k^{1/4} \sum_{ P^+(a)\le\sqrt{z} } \mu^2(a)\tau_{2|r|}(a)\tau(a) = k^{1/4} \prod_{\ell\le \sqrt{z}} (1+4|r|) \ll_r k^{1/3} . \] Finally, note that $|T(a)|\le \tau(a)$ for square-free values of $a$, by Proposition \ref{odd primes prop}. So applying Lemma~\ref{lemma-FI} we conclude that \als{ S_4 &\ll_r \sqrt{k} \sum_{\substack{P^+(a)\le\sqrt{z} \\ (a,2k )=1}}\frac{\mu^2(a) \tau(a) \tau_{2|r|}(a)}{a^2} \prod_{\substack{\ell\le k^{1/8}\\ \ell\nmid am}}\left(1-\frac1{\ell}\right)+k^{1/3} \\ &\ll \sqrt{k}\sum_{\substack{P^+(a)\le\sqrt{z} \\ (a,2k )=1}}\frac{\mu^2(a) \tau(a) \tau_{2|r|}(a)}{a^2} \frac1{\log(2k)} \frac{m}{\phi(m)} \frac{a}{\phi(a)} +k^{1/3} . 
} Inserting this estimate in \eqref{S3-S4}, we obtain the upper bound \eq{S_3 e2}{ S_3 \ll_r \frac{\sqrt{k}}{\log(2k)}\frac{m}{\phi(m)}\sum_{(a,2k)=1}\frac{\mu^2(a)\tau(a)\tau_{2|r|}(a)}{a\phi(a)} \ll_r \frac{\sqrt{k}}{\log(2k)}\frac{m}{\phi(m)}. } Combining the above inequality with relations \eqref{C-S}, \eqref{estimation of S_1}, and \eqref{estimation of R} completes the proof of the proposition. \end{proof} \section{Approximating $M(G)$}\label{approx} In this section, we prove Theorem \ref{approximation of M(G)}. We start with a preliminary lemma. \begin{lma}\label{prime sum} Let $N=m^2k>1$ and $d(p)=d_{m,k}(p)$. If $1\le q\le h \le \sqrt{N}$ and $(a,q)=1$, then \[ \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|} = \frac{2\pi mk}{\phi(q)\log N} + O\left( \frac{h}{\sqrt{N}} \cdot \frac{mk}{q} + \frac{\sqrt{k}}{h\log N} \int_{N^-}^{N^+} E(y,h;q) dy \right) . \] \end{lma} \begin{proof} We note the trivial bound $\#\{t<p\le t+h:p\equiv a\mod q\}\ll h/q$, which we will use several times throughout the proof. We have that \eq{prime sum log insertion}{ \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|} = \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \frac{\sqrt{|d(p)|}\log p}{\log N} + O\left( \frac{\sqrt{k}}{q} \right) . } Note that if $t=N+1+2\sqrt{N}u_0$ and $u_0\in[-1+2\eta,1-\eta]$ with $\eta:=h/\sqrt{4N}$, then \als{ \sqrt{|d(t)|} = 2\sqrt{k}\cdot \sqrt{1-u_0^2} &= \frac{2\sqrt{k}}{\eta}\int_{u_0-\eta}^{u_0} \sqrt{1-u^2} \,\mathrm{d} u + O\left(\frac{\eta\sqrt{k}}{\sqrt{1-u_0^2}}\right) \\ &=\frac{4mk}{h}\int_{u_0-\eta}^{u_0} \sqrt{1-u^2} \, \mathrm{d} u + O\left(\frac{h\sqrt{k}}{\sqrt{4N-(N+1-t)^2}}\right) . 
} Therefore \als{ \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \frac{\sqrt{|d(p)|}\log p}{\log N} &= \sum_{\substack{10h+ N^-<p\le -10h+N^+\\ p\equiv a\mod q }} \frac{\sqrt{|d(p)|}\log p}{\log N} + O\left( \frac{h^{1/2}N^{1/4}}{m} \cdot \frac{h}{q}\right) \\ &= \frac{4mk}{h\log N}\sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} (\log p) \int_{\frac{p-N-1-h}{2\sqrt{N}}}^{\frac{p-N-1}{2\sqrt{N}}} \sqrt{1-u^2} \,\mathrm{d} u \\ &\quad + O\left( \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} \frac{h\sqrt{k}}{\sqrt{(N^+-p)(p-N^-)}} + \frac{h^{3/2}N^{1/4}}{mq} \right) \\ &= \frac{4mk}{h\log N} \int_{-1+9\eta}^{1-10\eta} \sqrt{1-u^2} \sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} (\log p) \,\mathrm{d} u \\ &\quad + O\left( \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} \frac{h\sqrt{k}}{\sqrt{(N^+-p)(p-N^-)}} + \frac{h^{3/2}N^{1/4}}{mq} \right) . } First, we simplify the main term. If $u\in[-1+10\eta,1-11\eta]$, then the condition that $N^- +10h<p\le N^+ - 10h$ can be discarded. On the other hand, if $u\in[-1,1]\setminus[-1+10\eta,1-11\eta]$, then \als{ \sqrt{1-u^2} \sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} (\log p) &\le \sqrt{1-u^2} \sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ p\equiv a\mod q }} (\log p) \\ &\ll \sqrt{\eta}\cdot \frac{h\log N}{q} . 
} Therefore \als{ &\int_{-1+9\eta}^{1-10\eta} \sqrt{1-u^2} \sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} (\log p) \,\mathrm{d} u \\ &\qquad= \int_{-1}^1 \sqrt{1-u^2} \sum_{\substack{N+1+2u\sqrt{N}<p\le N+1+2u\sqrt{N}+h \\ p\equiv a\mod q }} (\log p) \, \mathrm{d} u + O\left( \frac{\eta^{3/2} h\log N}{q} \right) \\ &\qquad= \int_{-1}^{1} \sqrt{1-u^2} \frac{h}{\phi(q)} \, \mathrm{d} u + O\left( \int_{-1}^1 E(N+1+2u\sqrt{N},h;q) \mathrm{d} u + \frac{h^{5/2}\log N}{N^{3/4} q} \right) \\ &\qquad=\frac{\pi}{2}\cdot \frac{h}{\phi(q)} + O\left( \frac{1}{\sqrt{N}} \int_{N^-}^{N^+} E(y,h;q) \mathrm{d} y + \frac{h^{5/2} \log N}{N^{3/4} q} \right) . } Consequently, \als{ \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|} &= \frac{2\pi mk}{\phi(q)\log N} + O\left( \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} \frac{h\sqrt{k}}{\sqrt{(N^+-p)(p-N^-)}}\right) \\ &+O\left( \frac{\sqrt{k}}{h\log N} \int_{N^-}^{N^+} E(y,h;q) dy + \frac{\sqrt{k}}{q} + \frac{h^{3/2}N^{1/4}}{mq}\right) , } where the term $\sqrt{k}/q$ inside the big-Oh comes from \eqref{prime sum log insertion}. It remains to bound \[ \sum_{\substack{N^- +10h<p\le N^+ - 10h \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}} . \] We break this sum into two pieces, according to whether $p\le N+1$ or $p>N+1$. Note that \als{ \sum_{\substack{N^- +10h<p\le N+1 \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}} &\ll N^{-1/4}\sum_{\substack{N^- +10h<n\le N+1 \\ n\equiv a\mod q }} \frac{1}{\sqrt{n-N^-}} . } We cover the range of summation by intervals of length $h$ to find that \als{ \sum_{\substack{N^- +10h<p\le N+1 \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}} &\ll N^{-1/4} \sum_{1\le j\le 2\sqrt{N}/h} \frac{1}{\sqrt{jh}} \cdot \sum_{\substack{N^-+jh<n\le N^-+jh+h \\ n\equiv a\mod q }} 1 \\ &\ll \frac{\sqrt{h}}{N^{1/4}q} \sum_{1\le j\le 2\sqrt{N}/h} \frac{1}{\sqrt{j}} \ll \frac{1}{q} . 
} Similarly, we find that \[ \sum_{\substack{N+1<p\le N^+ -10h \\ p\equiv a\mod q }} \frac{1}{\sqrt{(N^+-p)(p-N^-)}} \ll \frac{1}{q} \] too, which implies that \als{ \sum_{\substack{N^-<p\le N^+ \\ p\equiv a\mod q }} \sqrt{|d(p)|} &= \frac{2\pi mk}{\phi(q)\log N} + O\left( \frac{\sqrt{k}}{h\log N} \int_{N^-}^{N^+} E(y,h;q) dy + \frac{h\sqrt{k}}{q} + \frac{h^{3/2}N^{1/4}}{mq}\right) . } Since $h^{3/2}=N^{3/4} (h/\sqrt{N})^{3/2}\le N^{3/4} (h/\sqrt{N})$, the lemma follows. \end{proof} Using the above result and the results of Section \ref{local computations}, we will prove Theorem \ref{approximation of M(G)}. But first, we need to introduce some additional notation and state another intermediate result. Set \eq{J_r(v) def}{ J_r(v) = \{1\le j\le 2^{2v+3}:(j-mk)^2\equiv 4k+4^vr\pmod{2^{2v+3}}, \, jm\equiv 0\pmod 2\} } and \eq{J(v)}{ \mathcal J(v) = \frac{1}{2^{v_0-1}} \sum_{r\in\{0,1,4,5\}} \frac{|J_r(v)|}{2-\leg{r}{2}}, \quad\text{where}\quad v_0 = \begin{cases} 2 &\text{if}\ 2\nmid m,\cr 3 &\text{if}\ 2|m. \end{cases} } Finally, set \[ \mathcal J = \sum_{\substack{v\ge0 \\ (2^v,k)=1}} \frac{\mathcal J(v)}{8^v} . \] Then we have the following formula. \begin{lma}\label{formula for J} \[ \mathcal J = \begin{cases} \frac{2}{3} &\mbox{if $2\nmid mk$},\cr \frac{3}{2} &\mbox{if $2\mid (m,k)$},\cr 1 &\mbox{if $2\mid mk$, $2\nmid (m,k)$}. \end{cases} \] \end{lma} We postpone the proof of this lemma till the last section. \begin{proof}[Proof of Theorem \ref{approximation of M(G)}] We will show the theorem with $8\epsilon\in(0,1/3]$ in place of $\epsilon$ and when $k$ is large enough in terms of $\epsilon$, which is clearly sufficient. Our starting point is Lemma \ref{formula for M(G)}, which states that \[ M(G) =\sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \sum_{\substack{f^2\mid d(p), \, (f,k)=1 \\ d(p)/f^2\equiv 1,0\pmod 4 }} \frac{\sqrt{|d(p)|} \mathcal L(d(p)/f^2) }{2\pi f} , \] where $N=m^2k$ and $d(p)=d_{m,k}(p)=((p-N-1)^2-4N)/m^2$ as usual. 
If $p=1+jm$, then $d(p)=(j-mk)^2-4k$. Therefore, if $\ell$ is an odd prime dividing $k$, so that $(\ell,f)=1$ for $f$ as in the above sum, then \[ \leg{d(p)/f^2}{\ell} =\leg{d(p)}{\ell} = \leg{j}{\ell}^2 . \] Next, we write $f=2^vg$ with $g$ odd and consider $r\in\{0,1,4,5\}$ such that $d(p)/f^2\equiv r\mod 8$. Then we have that $\leg{d(p)/f^2}{2}=\leg{r}{2}$. Moreover, since $g^2\equiv1\mod{8}$, we have that \[ d(p)/f^2 \equiv d(p)/2^{2v}\mod{8}. \] Therefore, the conditions $f^2|d(p)$ and $d(p)/f^2\equiv r\mod 8$ are equivalent to having $d(p)\equiv 4^v r\mod{2^{2v+3}}$ and $g^2|d(p)$. Setting \[ \rho(g,d) = \prod_{\ell|g} \left(1-\frac{\leg{d}{\ell}}{\ell} \right)^{-1} \] then gives us that \[ \mathcal L(d(p)/f^2) = \mathcal L((2kg)^2d(p)) \frac{\rho(g,d(p)/g^2)}{1-\leg{r}{2}/2} \prod_{\ell|k,\,\ell\nmid 2j}\left(1-\frac{1}{\ell}\right)^{-1}. \] Since \[ \prod_{\ell|k,\,\ell\nmid 2j}\left(1-\frac{1}{\ell}\right)^{-1} = \sum_{\substack{a|k \\ (a,2j)=1}} \frac{\mu^2(a)}{\phi(a)}, \] we deduce that \als{ M(G) &= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \sum_{\substack{a|k \\ (a,2j)=1}} \sum_{\substack{v\ge 0,\, (2^v,k)=1 \\ d(p)\equiv 4^v r\mod{2^{2v+3}} }} \sum_{\substack{g^2\mid d(p)\\ (g,2k)=1 }} \frac{\mu^2(a) \sqrt{|d(p)|}}{\pi 2^v\phi(a)g} \\ &\qquad \times \rho(g,d(p)/g^2) \mathcal L((2kg)^2d(p) ) . } We now use Lemma~\ref{lemmashortproduct} to replace the $L$-value $\mathcal L((2kg)^2d(p))$ by a suitably truncated product. Arguing as in the proof of relation~\eqref{estimation of R}, we note that $\leg{(2kg)^2d(p)}{\cdot}$ is a character modulo $2kg|d(p)|\le 16k^{5/2}$ with conductor not exceeding $|d(p)|\le 4k$. Thus, we may apply Lemma~\ref{lemmashortproduct} with $Q=4k$ and $5\alpha$ in place of $\alpha$ to replace $\mathcal L((2kg)^2d(p))$ by $\mathcal L((2kg)^2d(p); z)$, where we take $z=(\log(4k))^{200\alpha^2}$.
The result is that \als{ M(G) &= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{N^-<p<N^+ \\ p=1+jm,\,j\ge1 }} \sum_{\substack{a|k \\ (a,2j)=1}} \sum_{\substack{ (2^v,k)=1 \\ d(p)\equiv 4^v r\mod{2^{2v+3}} }} \sum_{\substack{g^2\mid d(p) \\ (g,2k)=1 }} \frac{\mu^2(a) \sqrt{|d(p)|}}{\pi 2^v\phi(a) g} \\ &\qquad \times \rho(g,d(p)/g^2) \mathcal L((2kg)^2d(p) ; z ) +O_{\alpha}\left(\frac{k}{(\log k)^{\alpha}} \right) . } Next, we notice that we can truncate the sums over $a,g$ and $v$ at the cost of a small error term. More precisely, using the crude bound \[ \rho(g,d(p)/g^2)\mathcal L((2kg)^2d(p);z) \ll \frac{g}{\phi(g)} \log(2kg|d(p)|)\ll (\log k)^2 , \] we find that the contribution to $M(G)$ by those summands with $\max\{a,g,2^v\}>k^\epsilon$ is \eq{divisor bound}{ \ll \frac{\sqrt{k} (\log k)^3}{k^\epsilon} \sum_{\substack{N^-<p<N^+ \\ p \equiv 1 \pmod m }} \sum_{\substack{a|k \\ (2^vg)^2|d(p)}} 1 \ll_\epsilon k^{(1-\epsilon)/2} \sum_{\substack{ N^-<n<N^+ \\ n\equiv 1\mod m }} 1 \ll k^{1-\epsilon/2} } by the bound $\tau(n)\ll_\delta n^\delta$, with $\delta < \epsilon/4$. Moreover, \[ \mathcal L((2kg)^2d(p) ; z) = \sum_{\substack{P^+(n)\le z \\ (n,2kg)=1}} \frac{\leg{d(p)}{n} }{n} = \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{ \leg{d(p)}{n} }{n} + O_{\epsilon,\alpha}\left( (\log k)^{-\alpha-10}\right) \] by Lemma \ref{smooth}. Therefore, \als{ M(G) &= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{a|k,\, a\le k^\epsilon \\ (a,2)=1}} \sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}} \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }} \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{\mu^2(a)}{\pi 2^v\phi(a) g n} \\ &\qquad \times \sum_{\substack{N^-<p<N^+ \\ p=1+jm,\,j\ge1 \\ (a,j)=1,\, g^2|d(p) \\ d(p)\equiv 4^v r \mod{2^{2v+3}} }} \rho(g,d(p)/g^2) \leg{d(p)}{n}\sqrt{|d(p)|} + O_{\alpha,\epsilon} \left(\frac{k}{(\log k)^{\alpha}}\right) . 
} We note that if $d(p)/g^2\equiv b\mod{g}$, then $\leg{d(p)/g^2}{\ell} = \leg{b}{\ell}$ for all $\ell|g$ and consequently, $\rho(g,d(p)/g^2)=\rho(g,b)$. So summing over possible choices for $d(p)/g^2\mod g$ and $d(p)\mod n$, we deduce that \als{ M(G) &= \sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{a|k,\, a\le k^\epsilon \\ (a,2)=1}} \sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}} \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }} \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{\mu^2(a)}{\pi 2^v \phi(a) g n } \\ &\qquad \times \sum_{b=1}^g \rho(g,b) \sum_{c=1}^n\leg{c}{n} S_r(v,a,g,b,n,c) + O_{\alpha,\epsilon} \left(\frac{k}{(\log k)^{\alpha}}\right) , } where \[ S_r(v,a,g,b,n,c) : = \sum_{\substack{N^-<p\le N^+ \\ p=1+jm,\,j\ge 1,\, (j,a)=1 \\ d(p)\equiv bg^2\mod{g^3} \\ d(p)\equiv 4^vr \mod{2^{2v+3}},\, d(p)\equiv c\mod{n} }} \sqrt{|d(p)|} . \] We write $p=1+jm$ and note that $(1+jm,2agn)=1$ if $k$ is large enough, since $2agn\le 2k^{3\epsilon}\le 2k^{1/8}$ by assumption, and $p>N^-=(m\sqrt{k}-1)^2$. Moreover, with this notation we have that $d(p)=\Delta(j):=(j-mk)^2-4k$. So, if we set \als{ J_r(v,a,g,b,n,c) = \left\{ j\mod{2^{2v+3} a g^3n}: \begin{array}{rl} \Delta(j) \equiv 4^v r \pmod{2^{2v+3} },& \Delta(j) \equiv b g^2 \pmod{g^3}, \\ \Delta(j)\equiv c\mod{n},& (j,a)=1,\\ (1+jm, agn)=1,& jm\equiv 0\pmod2 \end{array}\right\}, } then we find that \[ S_r(v,a,g,b,n,c) = \sum_{j\in J_r(v,a,g,b,n,c)} \sum_{\substack{N^-<p\le N^+ \\ p\equiv 1+jm\mod{2^{2v+3}ag^3nm}}} \sqrt{|d(p)|} . \] Applying Lemma \ref{prime sum} with $h$ as in the statement of the theorem, we deduce that \als{ \frac{S_r(v,a,g,b,n,c)}{ |J_r(v,a,g,b,n,c)|} &= \frac{2\pi mk}{\phi(2^{2v+3}ag^3nm)\log N} \\ &\quad + O\left( \frac{k}{4^vag^3n(\log k)^{\alpha+1}} + \frac{\sqrt{k}}{h\log k} \int_{N^-}^{N^+} E(y,h;2^{2v+3}ag^3nm) \mathrm{d} y \right) , } by our assumption that $h\le m\sqrt{k}/(\log k)^{\alpha+1}$ and that $m\le \sqrt{k}$. 
In order to compute the contribution of the above error term to $M(G)$, we note that \als{ \sum_{b=1}^g \rho(g,b) \sum_{c=1}^n |J_r(v,a,g,b,n,c)| &\le \sum_{b=1}^g \sum_{c=1}^n \sum_{ \substack{ j\mod {2^{2v+3} ag^3n} \\ \Delta(j)\equiv bg^2\pmod{g^3} \\ \Delta(j)\equiv 4^vr \pmod{2^{2v+3}} \\ 2|jm,\ \Delta(j)\equiv c \mod{n} }} \frac{g}{\phi(g)} = \sum_{ \substack{ j\mod {2^{2v+3} a g^3n} \\ g^2 | \Delta(j),\, 2|jm \\ \Delta(j)\equiv 4^vr\pmod{2^{2v+3}} }} \frac{g}{\phi(g)} \\ &= \frac{g}{\phi(g)} agn \sum_{ \substack{ j\mod {2^{2v+3} g^2} \\ 2|jm,\ g^2 | \Delta(j) \\ \Delta(j)\equiv 4^vr\pmod{2^{2v+3}} }} 1 \ll \frac{ag^2n}{\phi(g)} \cdot \tau(g) \cdot |J_r(v)| } by the Chinese remainder theorem and Lemma \ref{generic quad lemma}, where $J_r(v)$ is defined by \eqref{J_r(v) def}. Since we also have that $|J_r(v)|\ll \mathcal J(v) \ll1$ by Lemmas \ref{prime 2 lma1} and \ref{prime 2 lma2} below, we conclude that \als{ M(G) &= \frac{2mk}{\log N}\sum_{r\in\{0,1,4,5\}} \frac{1}{2-\leg{r}{2}} \sum_{\substack{a|k,\, a\le k^\epsilon \\ (a,2)=1}} \sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}} \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }} \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{\mu^2(a)}{2^{3v+v_0}\phi(a)\phi(g^4an^2m)} \\ &\qquad\times \sum_{b=1}^g \rho(g,b) \sum_{c=1}^n \leg{c}{n}|J_r(v,a,g,b,n,c)| + O_{\alpha,\epsilon}\left( \frac{k}{(\log k)^\alpha} + E\right), } where $v_0$ is defined by \eqref{J(v)} and \[ E:= \frac{\sqrt{k}}{h} \sum_{q\le 8k^{7\epsilon}} \tau_3(q) \int_{N^-}^{N^+} E(y,h;mq) \mathrm{d} y , \] since, for any $q\in\mathbb N$, we have that \[ \sum_{\substack{q=2^{2v+3}ag^3n \\ a|k,\ (a,2)=(gn,2k)=1}} \tau(g) \le \sum_{g|q} \tau(g) = \tau_3(q) .
\] If we set \[ I(g,b) = \#\{1\le j\le g^3: \Delta(j)\equiv bg^2\pmod{g^3}, \, (1+jm,g)=1\} \] and \[ F(a) = \#\{ 1\le j\le a: (j,a)=1,\,(1+jm,a)=1\} = \prod_{\ell^w \| a} \ell^{w-1} \left(\ell-1-\leg{m}{\ell}^2\right) , \] then the Chinese remainder theorem implies that \als{ \sum_{c=1}^n \leg{c}{n} |J_r(v,a,g,b,n,c)| &= F(a) \cdot |J_r(v)| \cdot I(g,b) \sum_{c=1}^n \leg{c}{n} \sum_{\substack{ j\mod n \\ \Delta(j)\equiv c\mod{n} \\(1+jm,n)=1 }}1 \\ &= F(a) \cdot |J_r(v)|\cdot I(g,b) \cdot T(n), } where $T(n)$ is defined by \eqref{T(n) def}. Therefore, \[ M(G) = \frac{mk}{\phi(m)\log N} S_1S_2S_3+ O_{\alpha,\epsilon}\left( \frac{k}{(\log k)^\alpha} + E\right), \] where \[ S_1 = \sum_{r\in\{0,1,4,5\}} \frac{2}{2-\leg{r}{2}} \sum_{\substack{2^v\le k^\epsilon \\ (2^v,k)=1}} \frac{|J_r(v)|}{2^{3v+v_0}} =\mathcal J + O(k^{-\epsilon}), \] by the trivial estimate $|J_r(v)|\ll 4^v$, \als{ S_2 = \sum_{\substack{a|k,\,a\le k^\epsilon \\(a,2)=1}} \frac{\mu^2(a) F(a)}{\phi(a)a} \prod_{\ell|a,\,\ell\nmid m} \frac{\ell}{\ell-1} &= \prod_{\substack{\ell|k\\ \ell\ne 2}} \left( 1+ \frac{\ell-1-\leg{m}{\ell}^2}{(\ell-1)(\ell-\leg{m}{\ell}^2)}\right) + O(k^{-\epsilon/2}) \\ &= \prod_{\substack{\ell|k\\ \ell\ne 2}} \frac{\ell^2- \leg{m}{\ell}^2 \ell -1 }{(\ell-1)(\ell-\leg{m}{\ell}^2)} + O(k^{-\epsilon/2}) } by arguing as in relation \eqref{divisor bound}, and \[ S_3 = \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }} \sum_{b=1}^g \frac{\rho(g,b) I(g,b) S_4(g) }{g^4} \prod_{\ell|g,\,\ell\nmid m} \frac{\ell}{\ell-1} \] with \[ S_4(g) = \sum_{\substack{P^+(n)\le z,\, n\le k^\epsilon \\ (n,2kg)=1}} \frac{T(n)}{n^2}\prod_{\ell|n,\,\ell\nmid m}\frac{\ell}{\ell-1} . \] In the above, to factor $\phi(g^4 a n^2 m)$, we have used the identity $$ \phi(g^4 a n^2 m) = \phi(m) g^4 a n^2 \prod_{\ell|g,\,\ell\nmid m} \frac{\ell -1}{\ell} \; \prod_{\ell|a,\,\ell\nmid m} \frac{\ell - 1}{\ell} \; \prod_{\ell|n,\,\ell\nmid m} \frac{\ell - 1}{\ell} $$ which holds since $a,n$ and $g$ are pairwise coprime. 
Note that \als{ I(g,b) &= \prod_{\ell^w\| g} \#\{j\mod {\ell^{3w}}: (j-mk)^2\equiv 4k+bg^2\mod{\ell^{3w}},\, (1+jm,\ell)=1\} \\ & = \prod_{\ell|g} \left(1+ \leg{(N+1)^2-(4k+bg^2)m^2}{\ell}^2 \leg{4k+bg^2}{\ell}\right) \\ & =\prod_{\ell|g}\left(1+ \leg{N-1}{\ell}^2\leg{k}{\ell} \right) } by Lemma \ref{generic quad lemma}, which is applicable here because $4k+bg^2\equiv 4k\not\equiv 0\mod\ell$ for all primes $\ell|g$; in the last step we also used that $(N+1)^2-4km^2=(N-1)^2$, since $N=m^2k$. So we see that $I(g,b)$ is independent of $b$, which implies that \als{ \sum_{b=1}^g \rho(g,b) I(g,b) &= I(g,0) \prod_{\ell^w\|g} \left( \sum_{b=1}^{\ell^w} \frac{1}{1-\leg{b}{\ell}/\ell} \right) \\ &= I(g,0) \prod_{\ell^w\|g} \left(\ell^{w-1}+ \ell^{w-1}\frac{\ell-1}{2}\frac{1}{1-1/\ell} +\ell^{w-1}\frac{\ell-1}{2}\frac{1}{1+1/\ell}\right) \\ &= g I(g,0) \prod_{\ell|g} \frac{\ell^2+\ell+1}{\ell(\ell+1)} . } Thus we conclude that \[ S_3 = \sum_{\substack{g\le k^\epsilon \\ (g,2k)=1 }} \frac{S_4(g)}{g^3} \prod_{\ell|g} \frac{(1+\leg{N-1}{\ell}^2\leg{k}{\ell})(\ell^2+\ell+1)}{(\ell-\leg{m}{\ell}^2)(\ell+1)}. \] Moreover, if $P(\ell)$ is as in Corollary \ref{formula for P(ell)}, then we have that \[ S_4(g) = \frac{P}{\prod_{\ell|g} P(\ell)} \left( 1 + O\left(\frac{1}{(\log k)^{\alpha+1}}\right) \right) , \quad\text{where}\quad P : = \prod_{\ell\nmid 2k} P(\ell) . \] Therefore \als{ S_3\left( 1+ O\left(\frac{1}{(\log k)^{\alpha+1}}\right) \right) &= P \cdot \prod_{\ell\nmid2k} \left( 1 + \sum_{w\ge1} \frac{(1+\leg{N-1}{\ell}^2\leg{k}{\ell})(\ell^2+\ell+1)}{\ell^{3w}(\ell-\leg{m}{\ell}^2)(\ell+1)P(\ell)}\right) \\ &= \prod_{\ell\nmid2k} \left( P(\ell) + \frac{1+\leg{N-1}{\ell}^2\leg{k}{\ell}}{(\ell^{2}-1)(\ell-\leg{m}{\ell}^2)} \right) \\ &= \prod_{\ell\nmid2k} \frac{\ell^3 - \leg{m}{\ell}^2 \ell^2 - (1+\leg{m(N-1)}{\ell}^2)\ell }{(\ell^{2}-1)(\ell-\leg{m}{\ell}^2)}\\ &= \prod_{\ell\nmid2N} \left( 1- \frac{\ell \leg{N-1}{\ell}^2+1}{(\ell^{2}-1)(\ell-1)} \right) .
} Consequently, \als{ M(G) &= \frac{\mathcal J mk}{\phi(m)\log N} \prod_{\ell\nmid2N} \left( 1- \frac{\ell \leg{N-1}{\ell}^2+1}{(\ell^{2}-1)(\ell-1)} \right) \prod_{\substack{\ell|k \\ \ell>2}} \left( 1+ \frac{\ell-1-\leg{m}{\ell}^2}{(\ell-1)(\ell-\leg{m}{\ell}^2)}\right) \\ &\quad + O_{\alpha,\epsilon}\left( \frac{k}{(\log k)^\alpha} + E\right) . } So the theorem follows by the above estimates together with Lemmas \ref{Aut(G)} and \ref{formula for J}. \end{proof} \section{Powers of 2}\label{2} The goal of this section is to show Lemma \ref{formula for J}, which gives the value of \begin{equation*} \mathcal J = \sum_{\substack{v\ge0 \\ (2^v,k)=1}} \frac{\mathcal J(v)}{8^v}, \end{equation*} where \begin{equation*} \mathcal J(v) = \frac{1}{2^{v_0-1}} \sum_{r\in\{0,1,4,5\}} \frac{|J_r(v)|}{2-\leg{r}{2}}, \quad\quad v_0 = \begin{cases} 2 &\text{if}\ 2\nmid m,\cr 3 &\text{if}\ 2|m, \end{cases} \end{equation*} and \begin{equation*} J_r(v) = \{1\le j\le 2^{2v+3}:(j-mk)^2\equiv 4k+4^vr\pmod{2^{2v+3}}, \, jm\equiv 0\pmod 2\}. \end{equation*} We start with the following standard lemma. \begin{lma}\label{generic quad lemma - prime 2} We have that \[ \#\{j\in\mathbb Z/8\mathbb Z : j^2\equiv d\pmod{8}\} = \begin{cases} 2 &\text{if}\ d\equiv 0,4\mod 8,\\ 4 &\text{if}\ d\equiv 1\mod 8,\\ 0 &\text{otherwise}. \end{cases} \] Moreover, if $d$ is odd and $e\ge3$, then \[ \#\{j\in\mathbb Z/2^e\mathbb Z : j^2\equiv d\pmod{2^e}\} = \begin{cases} 4 &\text{if}\ d\equiv 1\mod 8, \\ 0 &\text{otherwise}. \end{cases} \] \end{lma} We shall use the above lemma to calculate $|J_r(v)|$ and $\mathcal J(v)$ when $(2^v,k)=1$. First, we note that if $v\ge1$, then $k$ must be odd and \eq{J_r(v) alt}{ |J_r(v)| = \begin{cases} 2\cdot \#\{ j\mod{ 2^{2v+1} } : j^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \} &\text{if}\ 2|m,\\ 0 &\text{if}\ 2\nmid m. \end{cases} } Indeed, when $v\ge1$, the relation $(j-mk)^2\equiv 4k+4^vr\mod{2^{2v+3}}$ implies that $2|(j-mk)$. 
Since $k$ is odd and we also have that $jm\equiv 0\mod{2}$, we deduce that $2\mid (m,j)$. Hence, $|J_r(v)|=0$ when $2\nmid m$. Assuming that $2\mid m$, we write $j=mk+2j'$ and find that \als{ |J_r(v)| &= \#\{j'\mod{2^{2v+2}} : j'^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \} \\ &= 2\cdot \#\{j\mod{2^{2v+1} } : j^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \} , } as claimed. \begin{lma}\label{prime 2 lma1} Let $v\ge0$ with $(2^v,k)=1$. If $m$ is odd, then \[ \mathcal J(v) = \begin{cases} 1 &\mbox{if $v=0$ and $2|k$},\\ \frac{2}{3} &\mbox{if $v=0$ and $2\nmid k$},\\ 0 &\text{if $v\ge1$ and $2\nmid k$}. \end{cases} \] \end{lma} \begin{proof} The case $v\ge1$ follows by \eqref{J_r(v) alt}. Assume now that $v=0$. Since $m$ is odd, the condition $jm\equiv 0\pmod 2$ implies that every $j\in J_r(0)$ is even. Writing $j=2j'$, we deduce that \[ |J_r(0)| = \#\{j'\mod 4: (2j'-mk)^2\equiv 4k+r\mod{8}\} . \] If $k$ is odd, then we must have that $(2j'-mk)^2-4k\equiv -3\mod8$ and thus $r=5$, in which case $|J_r(0)|=4$; otherwise $|J_r(0)|=0$. So \[ \mathcal J(0) = \frac{1}{2} \cdot \frac{4}{2-(-1)} = \frac{2}{3} . \] Finally, assume that $k$ is even. Writing $z=j'-mk/2$, our task reduces to counting solutions to $4z^2\equiv r\mod 8$ with $1\le z\le 4$. If $r\in\{1,5\}$, then there are no such solutions, whereas if $r\in\{0,4\}$, then there are precisely two such solutions. Consequently, when $m$ is odd and $k$ is even, \[ \mathcal J(0)=\frac{1}{2} \left( \frac{2}{2-0} + \frac{2}{2-0} \right) = 1 , \] and the lemma follows in this case too. \end{proof} \begin{lma}\label{prime 2 lma2} Let $v\ge0$ with $(2^v,k)=1$, and suppose that $2|m$. If $2|k$, then \[ \mathcal J(0) = \frac{3}{2} . \] If $k\equiv 1\pmod 8$, then \[ \mathcal J(v) = \begin{cases} \frac{5}{6} &\mbox{if $v=0$}, \\ 1 &\mbox{if $v=1$}, \\ 2 &\mbox{if $v=2$}, \\ \frac{14}{3} &\mbox{if $v\ge3$}.
\end{cases} \] If $k\equiv 3,7\pmod 8$, then \[ \mathcal J(v) = \begin{cases} \frac{5}{6} &\mbox{if $v=0$}, \\ \frac{4}{3} &\mbox{if $v=1$}, \\ 0 &\mbox{if $v\ge2$}. \end{cases} \] If $k\equiv 5\pmod 8$, then \[ \mathcal J(v) = \begin{cases} \frac{5}{6} &\mbox{if $v=0$}, \\ 1 &\mbox{if $v=1$}, \\ \frac{8}{3} &\mbox{if $v=2$}, \\ 0 &\mbox{if $v\ge3$}. \end{cases} \] \end{lma} \begin{proof} First, we calculate $|J_r(0)|$. Note that the condition $jm\equiv 0\mod2$ is trivially satisfied now since $2|m$. Therefore, a change of variable and Lemma \ref{generic quad lemma - prime 2} imply that \eq{J_r(0)}{ |J_r(0)| = \#\{j\mod 8: j^2\equiv 4k+r\mod{8}\} = \begin{cases} 2 &\text{if}\ 4k+r\equiv 0,4\mod 8,\\ 4 &\text{if}\ 4k+r\equiv 1\mod 8,\\ 0 &\text{if}\ 4k+r\equiv 5\mod 8 . \end{cases} } Thus, \[ \mathcal J(0) = \begin{cases} \frac{1}{4}\left( \frac{2}{2-0}+\frac{4}{2-1} + \frac{2}{2-0} + \frac{0}{2-(-1)} \right) = \frac{3}{2} &\text{if}\ 2|k,\\ \frac{1}{4} \left(\frac{2}{2-0}+\frac{0}{2-1} + \frac{2}{2-0}+ \frac{4}{2-(-1)}\right) = \frac{5}{6} &\text{if}\ 2\nmid k . \end{cases} \] Next assume that $v\ge1$, and note that the condition $(2^v,k)=1$ means that we need only consider this case when $k$ is odd. By relation \eqref{J_r(v) alt}, we have that \[ |J_r(v)| = 2\cdot \#\{ j\mod{2^{2v+1}}: j^2\equiv k+4^{v-1} r\mod{2^{2v+1}} \}. \] Now if $v\ge2$, then Lemma~\ref{generic quad lemma - prime 2} implies that $|J_r(v)|=2\cdot 4=8$ or $|J_r(v)|=0$ according to whether $k+4^{v-1}r\equiv 1\mod{8}$ or not. Therefore, when $v\ge2$, \[ \mathcal J(v) = \begin{cases} \frac{1}{4} \left(\frac{8}{2-0}+\frac{8}{2-0}\right) = 2 &\text{if}\ v=2\ \text{and}\ k\equiv 1\mod 8,\\ \frac{1}{4} \left( \frac{8}{2-1} + \frac{8}{2-(-1)}\right) = \frac{8}{3} &\text{if}\ v=2\ \text{and}\ k\equiv 5\mod 8,\\ \frac{1}{4} \left(\frac{8}{2-0}+\frac{8}{2-1} + \frac{8}{2-0}+ \frac{8}{2-(-1)}\right) = \frac{14}{3} &\text{if}\ v\ge3\ \text{and}\ k\equiv 1\mod 8,\\ 0 &\text{otherwise} .
\end{cases} \] Finally, we consider the case $v=1$. Using Lemma~\ref{generic quad lemma - prime 2} again, we have \[ |J_r(1)| = 2\cdot \#\{ j\mod{8}: j^2\equiv k+r\mod 8 \} = \begin{cases} 4 &\text{if}\ k+r\equiv 0,4\mod 8,\\ 8 &\text{if}\ k+r\equiv 1\mod 8,\\ 0 &\text{otherwise}. \end{cases} \] Therefore, \[ \mathcal J(1) = \begin{cases} \frac{1}{4} \cdot \frac{8}{2-0} = 1 &\text{if}\ k\equiv 1,5\mod 8,\\ \frac{1}{4} \left( \frac{4}{2-1} + \frac{4}{2-(-1)} \right) = \frac{4}{3} &\text{if}\ k\equiv 3,7 \mod 8 , \end{cases} \] which completes the proof of the lemma. \end{proof} Lemma \ref{formula for J} now follows as a direct consequence of Lemmas \ref{prime 2 lma1} and \ref{prime 2 lma2}. \appendix \section{by Chantal David, Greg Martin and Ethan Smith} The purpose of this appendix is to give a probabilistic interpretation to the Euler factors arising in $K(G)\frac{|G|}{|\Aut(G)|}$ and $K(N)\frac{N}{\phi(N)}$, where $K(G)$ and $K(N)$ are defined by~\eqref{define K(G)} and~\eqref{define K(N)}, respectively. Given a prime $\ell$, we let $\nu_\ell(\cdot)$ denote the usual $\ell$-adic valuation. For each integer $e\ge 1$, we also let $\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)$ denote the usual group of invertible $2\times 2$ matrices with entries from $\mathbb Z/\ell^e\mathbb Z$. We denote the $2\times 2$ identity matrix by $I$. The main results of this appendix are as follows. \begin{thm}\label{K(N) interpretation} For each positive integer $N$, \begin{equation*} \frac{K(N)\cdot N}{\phi(N)} =\prod_\ell\left(\lim_{e\rightarrow\infty}\frac{\ell^e\cdot\#\{\sigma\in\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z) : \det(\sigma)+1-\tr(\sigma)\equiv N\pmod{\ell^e}\}}{\#\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)}\right), \end{equation*} where the product is taken over all primes $\ell$. Furthermore, the sequences defining the Euler factors are constant for $e>\nu_\ell(N)$.
\end{thm} \begin{rmk} If $\mu$ denotes the Haar measure on the space of $2\times 2$ matrices over the $\ell$-adic integers $\mathbb Z_\ell$, normalized so that $\mu\left(\mathrm{GL}_2(\mathbb Z_\ell)\right)=1$, then the Euler factor of $K(N)\frac{N}{\phi(N)}$ for the prime $\ell$ may be viewed as the density function for the probability measure on $\mathbb Z_\ell$ defined by the pushforward of $\mu$ via the map $\det+1-\tr:\mathrm{GL}_2(\mathbb Z_\ell)\rightarrow\mathbb Z_\ell$. \end{rmk} \begin{thm}\label{K(G) interpretation} For each pair of positive integers $m$ and $k$, put $G=G_{m,k}=\mathbb Z/m\mathbb Z\times\mathbb Z/mk\mathbb Z$. Then \begin{equation*} \frac{K(G)\cdot |G|}{|\Aut(G)|} =\prod_\ell\left(\lim_{e\rightarrow\infty}\frac{\ell^e\cdot\#\left\{\sigma\in\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z) : \begin{array}{l}\det(\sigma)+1-\tr(\sigma)\equiv |G|\pmod{\ell^e},\\ \sigma\equiv I\pmod{\ell^{\nu_\ell(m)}},\\ \sigma\not\equiv I\pmod{\ell^{\nu_\ell(m)+1}} \end{array}\right\}} {\#\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)}\right), \end{equation*} where the product is taken over all primes $\ell$. Furthermore, the sequences defining the Euler factors are constant for $e>\nu_\ell(|G|)$. \end{thm} For the remainder of this appendix, we assume that $e, n, N,$ and $\ell$ are positive integers with $\ell$ prime and $n^2\mid N$. Later we will also assume that $N=|G|=m^2k$. For convenience, we let \begin{equation*} C_{N,n}(\ell^e)=\left\{\sigma\in\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z) : \det(\sigma)+1-\tr(\sigma)\equiv N\pmod{\ell^e},\ \sigma\equiv I\pmod{\ell^{\nu_\ell(n)}}\right\}. \end{equation*} In the case that $\ell\nmid n$, we note that the condition $\sigma\equiv I\pmod{\ell^{\nu_\ell(n)}}$ is vacuous. As usual, $\leg{\cdot}{\ell}$ denotes the Kronecker symbol modulo $\ell$. \begin{lma}\label{matrix count mod ell} If $\ell\nmid n$, then \begin{equation*} \#C_{N,n}(\ell)= \ell\left(\ell^2-\leg{N}{\ell}^2\ell-1-\leg{N-1}{\ell}^2\right).
\end{equation*} \end{lma} \begin{proof} We first observe that $\#C_{N,n}(\ell)$ is equal to the number of quadruples $(a,b,c,d)$ satisfying $0\le a,b,c,d<\ell$ and \begin{align} ad-bc+1-(a+d)&\equiv N\pmod\ell,\label{det-tr cond}\\ ad-bc&\not\equiv 0\pmod\ell\label{det cond}. \end{align} The lemma follows by first counting the number of quadruples satisfying~\eqref{det-tr cond} and then removing the number of quadruples satisfying~\eqref{det-tr cond} that do not satisfy~\eqref{det cond}. Rearranging, we see that the condition~\eqref{det-tr cond} may be rewritten as \begin{equation*} (a-1)(d-1)-bc\equiv N\pmod\ell. \end{equation*} It is clear that any choice of $a,b,c$ with $a\ne 1$ uniquely determines $d$. On the other hand, if $a=1$, then there are $\ell$ choices for $d$, and the pair $(b,c)$ must satisfy $bc\equiv -N\pmod\ell$. Therefore, there are \begin{equation*} \ell^3+\left(1-\leg{N}{\ell}^2\right)\ell^2-\ell \end{equation*} solutions $(a,b,c,d)$ to~\eqref{det-tr cond} with $0\le a,b,c,d<\ell$. We now count the number of quadruples $(a,b,c,d)$ with $0\le a,b,c,d<\ell$ for which~\eqref{det-tr cond} holds but~\eqref{det cond} does not. These are the quadruples that satisfy the system \begin{align*} a+d&\equiv 1-N\pmod\ell,\\ ad&\equiv bc\pmod\ell. \end{align*} It is clear that any choice of $a$ uniquely determines $d$. If $a=0$ or $a=1-N$, then there are $2\ell-1$ choices for the pair $(b,c)$. On the other hand, if $a\ne 0,1-N$, there are only $\ell-1$ choices for $(b,c)$. Therefore, there are \begin{equation*} \ell^2+\leg{N-1}{\ell}^2\ell \end{equation*} solutions $(a,b,c,d)$ to~\eqref{det-tr cond} with $0\le a,b,c,d<\ell$ for which~\eqref{det cond} does not hold. \end{proof} \begin{prop}\label{matrix proportions for ell not dividing N} If $\ell\nmid N$, then \begin{equation*} \#C_{N,n}(\ell^e)=\ell^{3(e-1)+1}\left(\ell^2-\ell-1-\leg{N-1}{\ell}^2\right) \end{equation*} for every $e\ge 1$. 
\end{prop} \begin{proof} The case $e=1$ is treated in Lemma~\ref{matrix count mod ell}, and so we assume that $e\ge 2$. Since any $\sigma\in C_{N,n}(\ell^e)$ must reduce modulo $\ell$ to a matrix in $C_{N,n}(\ell)$, it suffices to count the number of matrices in $C_{N,n}(\ell^e)$ that reduce to a given matrix in $C_{N,n}(\ell)$. To this end, we assume that $\sigma_0\in C_{N,n}(\ell)$ and $\sigma\in C_{N,n}(\ell^e)$ is such that $\sigma\equiv\sigma_0\pmod\ell$. Thus, we may write \begin{equation*} \sigma_0=\begin{pmatrix}a_0&b_0\\ c_0&d_0\end{pmatrix}\quad\text{and}\quad \sigma=\begin{pmatrix}a_0+a\ell&b_0+b\ell\\ c_0+c\ell&d_0+d\ell\end{pmatrix} \end{equation*} with $0\le a_0,b_0,c_0,d_0<\ell$ and $0\le a,b,c,d<\ell^{e-1}$. Note that the condition $\det\sigma\not\equiv 0\pmod\ell$ is necessarily satisfied since $\det\sigma\equiv\det\sigma_0\pmod\ell$ and $\sigma_0\in C_{N,n}(\ell)$. Therefore, $\sigma\in C_{N,n}(\ell^e)$ if and only if \begin{equation}\label{lift det-tr cond} a_0d_0-b_0c_0+1-a_0-d_0 +(a(d_0-1)+d(a_0-1)-b_0c-bc_0)\ell +(ad-bc)\ell^2 \equiv N\pmod{\ell^e}. \end{equation} Since $\sigma_0\in C_{N,n}(\ell)$, it follows that $a_0d_0-b_0c_0+1-a_0-d_0=N+k_0\ell$ for some $k_0$, and hence condition~\eqref{lift det-tr cond} reduces to \begin{equation*} k_0+ ((d_0-1)a-c_0b-b_0c+(a_0-1)d) +(ad-bc)\ell \equiv 0\pmod{\ell^{e-1}}. \end{equation*} Since $\ell\nmid N$, $\sigma_0$ cannot be the identity matrix modulo $\ell$, and the polynomial $(d_0-1)a - c_0 b - b_0 c + (a_0-1) d$ in the variables $a,b,c,d$ has at least one nonzero coefficient. Say for example that $d_0 - 1$ is not zero. Then for each triple $(b,c,d)$, there is a unique choice of $a$ satisfying the above congruence. Therefore, there are exactly $\ell^{3(e-1)}$ solutions $(a,b,c,d)$ with $0\le a,b,c,d<\ell^{e-1}$. \end{proof} Let $\Mat_2(\mathbb Z/\ell^k\mathbb Z)$ denote the ring of $2\times 2$ matrices with entries from $\mathbb Z/\ell^k\mathbb Z$. 
In order to compute $\#C_{N,n}(\ell^e)$ when $\ell\mid N$ we need to know the number of matrices in $\Mat_2(\mathbb Z/\ell^k\mathbb Z)$ of every individual determinant. \begin{prop}\label{det prop} Let $M$ be a positive integer, and let $r=\nu_\ell(M)$. Then for $r,s\ge 0$, we have \begin{equation*} \#\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv M\pmod{\ell^{r+s}}\right\} =\ell^{2(r-1)}\left(\ell^{3s}(\ell+1)(\ell^{r+1}-1)+\delta(s)\right), \end{equation*} where $\delta(s)$ is defined by \begin{equation*} \delta(s):=\begin{cases} 1&\text{if }s=0,\\ 0&\text{otherwise}. \end{cases} \end{equation*} \end{prop} For the proof of Proposition~\ref{det prop}, we first make a simple reduction and fix some notation. Given any positive integer $M$, we write $M=\ell^rM'$ with $r=\nu_\ell(M)$ and $(M',\ell)=1$. Since the determinant maps $\mathrm{GL}_2(\mathbb Z/\ell^{r+s}\mathbb Z)$ onto $(\mathbb Z/\ell^{r+s}\mathbb Z)^*$, it follows that there is an $\alpha\in\mathrm{GL}_2(\mathbb Z/\ell^{r+s}\mathbb Z)$ such that $\det(\alpha)\equiv M'\pmod{\ell^{r+s}}$. Since the map $\sigma\mapsto\alpha\sigma$ is an automorphism of the additive group of $\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z)$ and since $\det(\sigma)=M=\ell^rM'$ if and only if $\det(\alpha^{-1}\sigma)=\ell^r$, it follows that \begin{equation*} \#\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv M\pmod{\ell^{r+s}}\right\} =\#F(r,s), \end{equation*} where \begin{equation*} F(r,s):=\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv \ell^r\pmod{\ell^{r+s}}\right\}. \end{equation*} Thus, we see that $\#\left\{\sigma\in\Mat_2(\mathbb Z/\ell^{r+s}\mathbb Z) : \det(\sigma)\equiv M\pmod{\ell^{r+s}}\right\}$ depends on the power of $\ell$ dividing $M$ and not on the $\ell$-free part of $M$. With this in mind, we define \begin{equation*} f(r,s):=\#F(r,s), \end{equation*} where we adopt the natural convention that $f(0,0)=1$.
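As a quick sanity check of Proposition~\ref{det prop} in the smallest cases, take $\ell=2$, $r=1$ and $s=0$: the proposition predicts \[ \ell^{2(r-1)}\left(\ell^{3s}(\ell+1)(\ell^{r+1}-1)+\delta(s)\right) = 3\cdot 3+1 = 10 , \] and indeed exactly $10=2^4-\#\mathrm{GL}_2(\mathbb Z/2\mathbb Z)$ of the sixteen matrices in $\Mat_2(\mathbb Z/2\mathbb Z)$ have even determinant. Likewise, with $r=0$ and $s=1$ the proposition gives $2^{-2}\cdot 2^3\cdot 3\cdot 1=6=\#\mathrm{SL}_2(\mathbb Z/2\mathbb Z)$, in agreement with a direct count of the $2\times 2$ matrices of determinant $1$ over $\mathbb Z/2\mathbb Z$.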
Proposition~\ref{det prop} then follows easily by induction on $r$ using the following lemma. \begin{lma} For every $s\ge 0$, we have \begin{align*} f(0,s)&=\ell^{3s-2}(\ell^2-1)+\ell^{-2}\delta(s),\\ f(1,s)&=\ell^{3s}(\ell+1)(\ell^2-1)+\delta(s),\\ f(r,s)&=\ell^{3(r+s-1)}(\ell+1)(\ell^2-1)+\ell^4f(r-2,s),\quad r\ge 2. \end{align*} \end{lma} \begin{proof} By convention we have $f(0,0)=1$. For $s\ge 1$, we have the well-known formula \begin{equation*} f(0,s)=\#\mathrm{SL}_2(\mathbb Z/\ell^s\mathbb Z)=\ell^{3s-2}(\ell^2-1). \end{equation*} This proves the first formula given in the statement of the lemma. Now assume that $r\ge 1$. If $r=1$ and $s=0$, then we have \begin{equation*} f(1,0)=\#\Mat_2(\mathbb Z/\ell\mathbb Z)-\#\mathrm{GL}_2(\mathbb Z/\ell\mathbb Z)=\ell^3+\ell^2-\ell. \end{equation*} We observe that any $\sigma\in F(r,s)$ must reduce modulo $\ell$ to some $\sigma_0\in F(1,0)$. Thus, we assume that $\sigma_0\in F(1,0)$, and we write \begin{equation*} \sigma_0=\begin{pmatrix}a_0&b_0\\ c_0&d_0\end{pmatrix}\quad\text{and}\quad \sigma=\begin{pmatrix}a_0+a\ell&b_0+b\ell\\ c_0+c\ell&d_0+d\ell\end{pmatrix}, \end{equation*} with $0\le a_0, b_0, c_0, d_0<\ell$ and $0\le a,b,c,d<\ell^{r+s-1}$. By definition, we see that $\sigma\in F(r,s)$ if and only if \begin{equation*} a_0d_0-b_0c_0+(d_0a-c_0b-b_0c+a_0d)\ell+(ad-bc)\ell^2\equiv\ell^r\pmod{\ell^{r+s}}. \end{equation*} If $\sigma_0$ is not the zero matrix modulo $\ell$, then there are exactly $\ell^{3(r+s-1)}$ choices of $(a,b,c,d)$ satisfying the above congruence. On the other hand, if $\sigma_0$ is the zero matrix (which is always an element of $F(1,0)$), the above congruence condition reduces to \begin{equation}\label{zero matrix cond} (ad-bc)\ell^2\equiv \ell^r\pmod{\ell^{r+s}}. \end{equation} If $r=1$, then there can be no solutions to~\eqref{zero matrix cond} with $s\ge 1$.
Therefore, \begin{equation*} f(1,s)=\ell^{3s}(f(1,0)-1)=\ell^{3s}(\ell^3+\ell^2-\ell-1) =\ell^{3s}(\ell+1)(\ell^2-1) \end{equation*} when $s\ge 1$, and this completes the proof of the second formula stated in the lemma. On the other hand, if $r\ge 2$, then condition~\eqref{zero matrix cond} reduces to \begin{equation*} (ad-bc)\equiv\ell^{r-2}\pmod{\ell^{r-2+s}}. \end{equation*} There are $\ell^4f(r-2,s)$ solutions to this congruence with $0 \leq a,b,c,d < \ell^{r+s-1}$. Whence \begin{equation*} \begin{split} f(r,s)&=\ell^{3(r+s-1)}(f(1,0)-1)+\ell^4f(r-2,s)\\ &=\ell^{3(r+s-1)}(\ell+1)(\ell^2-1)+\ell^4f(r-2,s) \end{split} \end{equation*} for $r\ge 2$, and this completes the proof of the lemma. \end{proof} \begin{prop}\label{ell divides N but not n} If $v=\nu_\ell(N)\ge 1$ and $\ell\nmid n$, then \begin{equation*} \#C_{N,n}(\ell^e) =\ell^{3e-v-2}(\ell+1)\left(\ell^{v+1}-\ell^v-1\right) \end{equation*} for every $e>v$. \end{prop} \begin{proof} By Lemma~\ref{matrix count mod ell}, we have \begin{equation}\label{base case} \#C_{N,n}(\ell)=\ell(\ell^2-2)=\ell^3-2\ell, \end{equation} and so we may assume that $e\ge 2$. We proceed in a manner similar to the proof of Proposition~\ref{matrix proportions for ell not dividing N}. In particular, we assume that $\sigma_0\in C_{N,n}(\ell)$ and count the number of $\sigma\in C_{N,n}(\ell^e)$ that reduce to $\sigma_0$ modulo $\ell$. Writing \begin{equation*} \sigma_0=\begin{pmatrix}a_0&b_0\\ c_0&d_0\end{pmatrix}\quad\text{and}\quad \sigma=\begin{pmatrix}a_0+a\ell&b_0+b\ell\\ c_0+c\ell&d_0+d\ell\end{pmatrix} \end{equation*} with $0\le a_0,b_0,c_0,d_0<\ell$ and $0\le a,b,c,d<\ell^{e-1}$, we deduce that the quadruple $(a,b,c,d)$ must satisfy~\eqref{lift det-tr cond}. As in the proof of Proposition~\ref{matrix proportions for ell not dividing N}, if $\sigma_0$ is not the identity matrix, there are exactly $\ell^{3(e-1)}$ choices for $(a,b,c,d)$. Now suppose that $\sigma_0$ is the identity matrix.
(Note that the identity matrix is always an element of $C_{N,n}(\ell)$ when $\ell\mid N$.) Then writing $N=\ell^vN'$ with $v=\nu_\ell(N)\ge 1$ and $(N',\ell)=1$, we see that condition~\eqref{lift det-tr cond} reduces to \begin{equation}\label{id case} (ad-bc)\ell^2\equiv N'\ell^{v}\pmod{\ell^{e}}. \end{equation} Clearly, there are no solutions to this congruence unless $v\ge 2$. Therefore, if $v=1$ and $e\ge 2$, we have that \begin{equation*} \#C_{N,n}(\ell^e) =\ell^{3(e-1)}(\ell^3-2\ell-1) =\ell^{3e-3}(\ell+1)(\ell^2-\ell-1). \end{equation*} Now, suppose that $v\ge 2$ and $e\ge 3$. Then~\eqref{id case} reduces to \begin{equation*} (ad-bc)\equiv N'\ell^{v-2}\pmod{\ell^{e-2}}. \end{equation*} The number of solutions to this congruence with $0\le a,b,c,d<\ell^{e-1}$ is equal to \begin{equation*} \ell^4\#\{\alpha\in\Mat_2(\mathbb Z/\ell^{e-2}\mathbb Z): \det(\alpha)\equiv N'\ell^{v-2}\pmod{\ell^{e-2}}\}. \end{equation*} Since we are assuming that $v<e$, Proposition~\ref{det prop} implies that the above count is equal to \begin{equation*} \ell^4\ell^{2(v-3)}\ell^{3(e-v)}(\ell+1)(\ell^{v-1}-1) =\ell^{3e-v-2}(\ell+1)(\ell^{v-1}-1). \end{equation*} Putting everything together, we find that \begin{equation*} \begin{split} \#C_{N,n}(\ell^e) &=\ell^{3(e-1)}(\ell^3-2\ell-1)+\ell^{3e-v-2}(\ell+1)(\ell^{v-1}-1)\\ &=\ell^{3e-v-2}(\ell+1)\left(\ell^{v+1}-\ell^v-1\right) \end{split} \end{equation*} for $v\ge 2$. \end{proof} Recall our standing assumption that $n^2\mid N$. \begin{thm}\label{complete matrix count thm} Let $u=\nu_\ell(n)$ and $v=\nu_\ell(N)$. Then for every $e>v$, we have \begin{equation*} \#C_{N,n}(\ell^e) =\begin{cases} \ell^{3(e-1)+1}\left(\ell^2-\ell-1-\leg{N-1}{\ell}^2\right)&\text{if }u=0\text{ and }v= 0,\\ \ell^{3e-v-2}(\ell+1)\left(\ell^{v+1}-\ell^v-1\right)&\text{if }u=0\text{ and }v\ge 1,\\ \ell^{3e-v-2}(\ell+1)(\ell^{v-2u+1}-1)&\text{if }1\le u\le v/2,\\ 0&\text{if } u>v/2.
\end{cases} \end{equation*} Therefore, for every $e>v$, we have \begin{equation*} \begin{split} \frac{\ell^e\#C_{N,n}(\ell^e)}{\#\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)} &=\begin{cases} \displaystyle \left(1-\frac{\leg{N-1}{\ell}^2\ell+1}{(\ell-1)^2(\ell+1)}\right)&\text{if }u=0\text{ and }v= 0,\\ \displaystyle \frac{\ell}{\ell-1}\left(1-\frac{1}{\ell^v(\ell-1)}\right)&\text{if }u=0\text{ and }v\ge 1,\\ \displaystyle \frac{\ell}{\ell^{2u}(\ell-1)}\left(\frac{\ell^{v+1}-\ell^{2u}}{\ell^{v+1}-\ell^v-1}\right)\left(1-\frac{1}{\ell^v(\ell-1)}\right)&\text{if }1\le u\le v/2,\\ 0&\text{if }u>v/2. \end{cases} \end{split} \end{equation*} \end{thm} \begin{proof} Note that the second assertion of the theorem follows from the first together with the well-known formula \begin{equation*} \#\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)=\ell^{4(e-1)+1}(\ell+1)(\ell-1)^2, \end{equation*} and so it suffices to prove the first assertion of the theorem. The first two cases have already been addressed by Propositions~\ref{matrix proportions for ell not dividing N} and~\ref{ell divides N but not n}. Therefore, we may assume that $u\ge 1$. Supposing that $\sigma\in C_{N,n}(\ell^e)$, we may write \begin{equation*} \sigma=\begin{pmatrix}1+a\ell^u&b\ell^u\\ c\ell^u&1+d\ell^u\end{pmatrix} \end{equation*} with $0\le a,b,c,d<\ell^{e-u}$ chosen such that \begin{equation*} (ad-bc)\ell^{2u}\equiv N'\ell^v\pmod{\ell^e}. \end{equation*} This congruence clearly has no solutions if $e>v$ and $2u>v$. Therefore, we may assume that $2\le 2u\le v<e$. In this case the above congruence is equivalent to the condition \begin{equation*} (ad-bc)\equiv N'\ell^{v-2u}\pmod{\ell^{e-2u}} \end{equation*} for $0 \leq a,b,c,d < \ell^{e-u}$. Applying Proposition~\ref{det prop} with $r=v-2u$ and $s=e-v>0$, we find that \begin{equation*} \begin{split} \#C_{N,n}(\ell^e) &=\ell^{4u}\ell^{2(v-2u-1)}\ell^{3(e-v)}(\ell+1)(\ell^{v-2u+1}-1)\\ &=\ell^{3e-v-2}(\ell+1)(\ell^{v-2u+1}-1).
\end{split} \end{equation*} \end{proof} We are now ready to give the proofs of Theorems~\ref{K(N) interpretation} and~\ref{K(G) interpretation}. \begin{proof}[Proof of Theorems~\ref{K(N) interpretation} and~\ref{K(G) interpretation}] Theorem~\ref{K(N) interpretation} follows easily from~\eqref{define K(N)} and the cases of Theorem~\ref{complete matrix count thm} with $\nu_\ell(n)=u=0$. For the proof of Theorem~\ref{K(G) interpretation}, we let $N=m^2k=|G|$, and for each prime $\ell$, we put \begin{equation*} v_\ell(N,n):=\frac{\ell^e\#C_{N,n}(\ell^e)}{\#\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)} \end{equation*} with $e=e_\ell>\nu_\ell(N)$. We then compute the absolutely convergent infinite product \begin{equation*} \prod_\ell\left(v_\ell(N,m)-v_\ell(N,\ell m)\right) \end{equation*} in two different ways. On the one hand, by definition of the $v_\ell(N,n)$, the above expression is equal to \begin{equation*} \prod_\ell\left(\frac{\ell^e\cdot\#\left\{\sigma\in\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z) : \begin{array}{l}\det(\sigma)+1-\tr(\sigma)\equiv N\pmod{\ell^e},\\ \sigma\equiv I\pmod{\ell^{\nu_\ell(m)}},\\ \sigma\not\equiv I\pmod{\ell^{\nu_\ell(m)+1}} \end{array}\right\}} {\#\mathrm{GL}_2(\mathbb Z/\ell^e\mathbb Z)}\right). \end{equation*} On the other hand, by comparing~\eqref{define K(G)} and Lemma~\ref{Aut(G)} with Theorem~\ref{complete matrix count thm}, we see that it is equal to $K(G)\frac{|G|}{|\Aut(G)|}$. \end{proof} \end{document}
\begin{document} \title[Global Yamabe flow] {Global Yamabe flow on asymptotically flat manifolds} \author{Li Ma} \address{Li MA, School of Mathematics and Physics\\ University of Science and Technology Beijing \\ 30 Xueyuan Road, Haidian District Beijing, 100083\\ P.R. China } \address{ Department of Mathematics \\ Henan Normal University \\ Xinxiang, 453007 \\ China} \thanks{Li Ma's research was partially supported by the National Natural Science Foundation of China (No.11771124)} \begin{abstract} In this paper, we study the existence of global Yamabe flow on asymptotically flat (in short, AF or ALE) manifolds. Note that the ADM mass is preserved in dimensions 3, 4 and 5. We present a new general local existence result for the Yamabe flow on a complete Riemannian manifold with the initial metric quasi-isometric to a background metric of bounded scalar curvature. Asymptotic behaviour of the Yamabe flow on ALE manifolds is also addressed provided the initial scalar curvature is non-negative and there is a bounded subsolution to the corresponding Poisson equation. We also present a maximum principle for very general parabolic equations on complete Riemannian manifolds. { \textbf{Mathematics Subject Classification 2010}: 53E99, 35A01, 35K55, 35R01, 53C21.} { \textbf{Keywords}: Yamabe flow, global existence, scalar curvature, asymptotic behaviour} \end{abstract} \maketitle \section{Introduction}\label{sect1} The goal of this paper is to consider the global Yamabe flow on complete manifolds. This topic has recently been studied in the works \cite{M}, \cite{MC}, \cite{CZ} and \cite{M1,M2}. Since the Yamabe flow is degenerate, the expected global flow is rare; however, the Yamabe flow on asymptotically flat (in short, AF or ALE) manifolds is widely believed to be global. We shall confirm this in this paper.
Hamilton \cite{Hamilton1989} \cite{H} introduced the Yamabe flow, which describes a family of Riemannian metrics $g(t)$ subject to the evolution equation $\frac{\partial}{\partial t}g=-R(g)\,g$, where $R(g)$ denotes the scalar curvature corresponding to the metric $g$. Hamilton proved local in time existence of Yamabe flows on compact manifolds without boundary. Asymptotic behaviour of the Yamabe flow was subsequently analysed by B. Chow \cite{C}, R. Ye \cite{Y94}, H. Schwetlick and M. Struwe \cite{SS} and S. Brendle \cite{B}. The discrete Morse flow method for 2-d Yamabe flow was developed in \cite{MW}. The theory of Yamabe flows on non-compact manifolds was addressed by Ma and An \cite{AM}. Daskalopoulos and Sesum \cite{Daskalopoulos2013} analysed the profiles of self-similar solutions (Yamabe solitons). More recently, Bahuaud and Vertman \cite{Bahuaud2014,Bahuaud2016} constructed Yamabe flows on spaces with incomplete edge singularities such that the singular structure is preserved along the Yamabe flow. {Choi}, {Daskalopoulos}, and {King} \cite{Choi2018} were able to find solutions to the Yamabe flow on the Euclidean space $\mathbb{R}^n$ which develop a type II singularity in finite time. In the interesting work \cite{GT}, Gregor Giesen and Peter M. Topping obtained remarkable results about the Yamabe flow on incomplete surfaces. In \cite{S1, S2}, assuming that the initial metric is conformally hyperbolic with conformal factor and scalar curvature bounded from above, Schulz obtained existence of instantaneously complete Yamabe flows on hyperbolic space of arbitrary dimension $n\geq3$. The study of the Yamabe flow on $\mathbb{R}^n$ may be included in the class of porous-media equations \cite{Ar} \cite{DK} \cite{HP}. Let $(M^n,g_0)$, $n\geq 3$, be an $n$-dimensional complete Riemannian manifold.
The Yamabe flow on $(M^n,g_0)$ is a family of Riemannian metrics $\{g(\cdot, t)\}$ on $M$ defined by the evolution equation \begin{equation}\label{yamabe_flow_curvature} \left\{ \begin{array}{ll} \frac{\partial g}{\partial t}=-Rg \quad &\text{in}\ M^n\times[0,T),\\ g(\cdot,0)=g_0 &\text{in}\ M^n, \end{array} \right. \end{equation} where $R$ is the scalar curvature of the metric $ g:=g(\cdot,t)=u^{\frac{4}{n-2}}g_0 $ and $u:M^n\to \mathbb{R}^+$ is a positive smooth function on $M^n$. Let $p=\frac{n+2}{n-2}$, $L_{g_0}u=\Delta_{g_0}u-aR_{g_0}u$ and $a=\frac{n-2}{4(n-1)}$. By changing time by a constant scale, (\ref{yamabe_flow_curvature}) can be written in the equivalent form \begin{equation}\label{yamabe_flow_u} \left\{ \begin{array}{ll} \frac{\partial u^p}{\partial t}=L_{g_0}u, \quad &\text{in}\ M^n\times[0,T),\\ u(\cdot,0)=1, &\text{in}\ M^n. \end{array} \right. \end{equation} To understand the local existence result for the Yamabe flow on the Riemannian manifold $(M,g_0)$, we may choose a base metric $g_M$ on $M$ and write the Yamabe flow equation \cite{S1} as follows. Let $g(t)=w(x,t)g_M$ with $w=w(x,t)>0$ on $M$. Then the Yamabe flow equation is $$ \frac{1}{n-1}w_t=-\frac{wR}{n-1}=-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}, $$ with $w(0)=w_0$, where $R_0$ denotes the scalar curvature of $g_M$. That is, $w_t=B[w]$, where $$ B[w]:=(n-1)\Bigl(-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}\Bigr). $$ We shall apply the inverse function theorem to this form of the equation to get local existence of solutions. Before presenting the main result of this paper, we need the following two definitions. The first one is the definition of an asymptotically flat (AF or ALE) manifold of order $\tau>0$ (\cite{S} \cite{LP}).
\begin{Def}\label{AE_def} A Riemannian manifold $M^n$, $n\geq 3$, with $C^{\infty}$ metric $g$ is called asymptotically flat of order $\tau$ if there exists a decomposition $M^n= M_0\cup M_{\infty}$ (for simplicity we deal only with the case of one end; the case of multiple ends can be handled similarly) with $M_0$ compact and a diffeomorphism $M_{\infty}\cong \mathbb{R}^n-B(o,R_0)$ for some constant $R_0 > 0$ such that \begin{align}\label{AE} g_{ij} -\delta_{ij}\in C^{2+\alpha}_{-\tau}(M) \end{align} (defined in Definition \ref{elliptic_wss} below) in the coordinates $\{x^i\}$ induced on $M_{\infty}$. The coordinates $\{x^i\}$ are called asymptotic coordinates. \end{Def} The second one is about the fine solution to the Yamabe flow (\cite{CZ} \cite{M3}). \begin{Def}\label{fine} We say that $u(x,t)\in C^1(M\times [0,t_{max}))$ is a fine function if $0<\delta\leq u(x,t)\leq C$ for $0\leq t\leq T$ with any $0<T<t_{max}$ and $\sup\limits_{M^n\times [0,T]}|\nabla_{g_0} u(x,t)|\leq C$. We say that $u(x,t)\in C^1(M\times [0,t_{max}))$ is a fine solution of the Yamabe flow, $0\leq t<t_{max}$, on a complete manifold $(M^n,g_0)$ if it is a fine function solving the Yamabe flow with $\sup\limits_{M^n\times [0,T]}|Rm(g)|(x,t)\leq C$ for any $T<t_{max}$, and either $t_{max}=\infty$ or $t_{max}<\infty$ and $\lim\limits_{t\to t_{max}}\sup\limits_{M}|Rm|(\cdot,t)=\infty$, where $Rm(g)$ is the Riemannian curvature of the metric $g:=g(t)=u^{4/(n-2)}g_0$. \end{Def} We remark that, in the language of (2.5.2) in \cite{SY} (see also \cite{M4}), the fine solution is uniformly quasi-isometric to the initial metric $g_0$ on every interval $[0,T]$ for $0<T<t_{max}$. Our main result is as follows. \begin{Thm}\label{global} Let $(M^n,g_0)$ be an $n$-dimensional asymptotically flat manifold of any order $\tau>\frac{n-2}{2}$.
There exists a unique global Yamabe flow $g(x,t)=u(x,t)^{4/(n-2)}g_0$ with initial metric $g(0)=g_0$; the solution $u(x,t)$, restricted to any finite time interval $0\leq t\leq t_0<\infty$, is a fine solution to the Yamabe flow (\ref{yamabe_flow_u}), and the flow preserves the AF property of the initial metric. In other words, for $v=1-u$, we have $v(x,t)\in C^{2+\alpha}_{-\tau}(M)$ and $g_{ij}(x,t)-\delta_{ij}\in C^{2+\alpha}_{-\tau}(M)$ for $t\in [0,t_{max})$. \end{Thm} The uniqueness part follows from the standard argument and we shall omit the details. We shall present a general local existence result in Theorem \ref{short_existence} in section \ref{sect2}. As a direct application of the computations in \cite{CZ} and \cite{M}, we have \begin{Thm}\label{main_5} Let $u(x,t)$, $0\leq t< t_0<\infty$, be the fine solution to the Yamabe flow (\ref{yamabe_flow_u}) on an $n$-dimensional asymptotically flat manifold $(M^n,g_0)$ of order $\tau>\frac{n-2}{2}$ with $u(0)=1$. Assume that $R_{g_0}\geq 0$ and $R_{g_0}\in L^{1}(M)$, where $R_{g_0}$ is the scalar curvature of $g_0$. Denote $g(t)=u^{\frac{4}{n-2}}g_0$. Then for $n=3,4,$ or $5$, the ADM mass $m(g(t))$ (see \cite{SY} \cite{LP} or below for the definition) is well-defined under the Yamabe flow (\ref{yamabe_flow_u}) for $0\leq t<\infty$ (i.e., it is independent of the choice of coordinates), and $m(g(t))\equiv m(g_0)$. \end{Thm} Recall here that the ADM mass of an $n$-dimensional AF Riemannian manifold \cite{LP} is defined as \begin{align}\label{ADM_Mass} m(g)=\lim\limits_{r\to\infty}\frac{1}{4\omega}\int_{S_r}(\partial_j g_{ij}-\partial_ig_{jj})dS^i, \end{align} where $\omega$ denotes the volume of the unit sphere in $\mathbb{R}^n$, $S_r$ denotes the Euclidean sphere of radius $r$ and $dS^i$ is the normal surface volume element to $S_r$ with respect to the Euclidean metric. Similar results for Ricci flow were established in \cite{BW} and \cite{DM}.
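As a consistency check on the normalization in \eqref{ADM_Mass} (our own remark, not part of the results above), one may compute the mass of a conformally flat end $g_{ij}=u^{4/(n-2)}\delta_{ij}$ with $u=1+Ar^{2-n}+o(r^{2-n})$ for a hypothetical constant $A$, assuming the error terms decay fast enough not to contribute to the limit:

```latex
% Write g_{ij}=(1+h)\delta_{ij} with h = \tfrac{4A}{n-2}\, r^{2-n} + o(r^{2-n}), so that
% \partial_j g_{ij}-\partial_i g_{jj} = (1-n)\,\partial_i h and
% \partial_i h = -4A\,x_i r^{-n} + o(r^{1-n}).  On S_r, with dS^i = (x_i/r)\,dS:
\begin{align*}
m(g) = \lim_{r\to\infty}\frac{1}{4\omega}\int_{S_r}(1-n)\,\partial_i h\,\frac{x_i}{r}\,dS
     = \lim_{r\to\infty}\frac{1}{4\omega}\, 4A(n-1)\, r^{1-n}\cdot \omega r^{n-1}
     = (n-1)A.
\end{align*}
% For n=3 and the Schwarzschild end u = 1 + m_0/(2r), i.e. A = m_0/2, this gives m(g) = m_0.
```

In particular, with $\omega=4\pi$ for $n=3$, this sketch recovers the usual mass of the Schwarzschild slice.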
Applying Theorem 5 in \cite{M3}, we obtain the following convergence result for the global Yamabe flow. \begin{Thm}\label{main_6} Let $u(x,t)$, $0\leq t<\infty$, be the global solution to the Yamabe flow (\ref{yamabe_flow_u}) on an $n$-dimensional asymptotically flat manifold $(M^n,g_0)$ of order $\tau>\frac{n-2}{2}$ with $u(0)=1$. Assume that $R_{g_0}\geq 0$ and there exists a bounded sub-solution $w_0$ of the Poisson equation, i.e., $$ L_{g_0}w_0=\Delta_{g_0}w_0-aR_{g_0}w_0\geq 0 \quad \text{in}\ M. $$ Then the Yamabe flow $g(t)$ converges in $C^\infty_{loc}(M)$ to a Yamabe metric of scalar curvature zero. \end{Thm} We now recall the definition of the weighted spaces (see \cite{LP}) for elliptic operators on asymptotically flat manifolds. \begin{Def}\label{elliptic_wss} Suppose $(M^n,g)$ is an $n$-dimensional asymptotically flat manifold with asymptotic coordinates $\{x^i\}$. Denote $D^j_x v=\sup\limits_{|\alpha|=j}|\frac{\partial^{|\alpha|}}{\partial x_{i_1}\cdots\partial x_{i_j}}v|$. Let $r(x)=|x|$ on $M_{\infty}$ (defined in Definition \ref{AE_def}) and extend $r$ to a smooth positive function on all of $M^n$. For $q\geq 1$ and $\beta\in \mathbb{R}$, the weighted Lebesgue space $L^q_{\beta}(M)$ is defined as the set of locally integrable functions $v$ with the norm given by $$ ||v||_{L^q_\beta(M)}=\left\{ \begin{array}{ll} (\int_{M}|v|^q r^{-\beta q-n}dx)^{\frac{1}{q}}, & \hbox{$q<\infty$;} \\ \mathop{\mathrm{ess\,sup}}\limits_{M} (r^{-\beta}|v|), & \hbox{$q=\infty$.} \end{array} \right. $$ Then the weighted Sobolev space $W^{k,q}_\beta(M)$ is defined as the set of functions $v$ for which $|D^j_xv|\in L^q_{\beta-j}(M)$ with the norm $$ ||v||_{W^{k,q}_\beta(M)}=\sum\limits^k_{j=0}||D^j_x v||_{L^q_{\beta-j}(M)}. $$ For a nonnegative integer $k$, the weighted $C^k$ space $C^k_{\beta}(M)$ is defined as the set of $C^k$ functions $v$ with the norm $$ ||v||_{C^k_\beta(M)}=\sum\limits_{j=0}^k\sup\limits_{M} r^{-\beta+j}|D^j_xv|.
$$ The weighted H\"{o}lder space $C^{k+\alpha}_{\beta}(M)$ is defined as the set of functions $v\in C^{k}_{\beta}(M)$ with the norm $$ ||v||_{C^{k+\alpha}_\beta(M)}=||v||_{C^k_\beta(M)}+\sup\limits_{x\neq y\in M}\min(r(x),r(y))^{-\beta+k+\alpha}\frac{|D^k_xv(x)-D^k_xv(y)|}{|x-y|^{\alpha}}. $$ \end{Def} We end the introduction with a brief outline of the paper. In section \ref{sect2} we discuss the local existence theory of the Yamabe flow on a complete Riemannian manifold with bounded scalar curvature; this part may be well known to experts. In section \ref{sect3}, we obtain the global Yamabe flows on AF manifolds and prove, or outline the proofs of, Theorems \ref{main_5} and \ref{main_6}. In the appendix, section \ref{sect4}, we discuss a general version of the maximum principle, which is used in the spatial decay argument for the Yamabe flows on AF manifolds. \section{Yamabe flow: local existence}\label{sect2} Let $(M,g_M)$ be a complete Riemannian manifold of dimension $n=\dim(M)$. Let $g_0=w_0g_M$ be an initial metric, where $w_0>0$ is a fine function on $M$, and let $R_0=R(g_0)$ be the scalar curvature of the initial Riemannian metric $g_0$. The following local existence result for solutions of the Yamabe flow \eqref{eqn:Yamabe-flow} may be well known to experts \cite{AM} (see also Theorem 2.4 in \cite{CZ}), but it appears to be new in this generality. \begin{Thm}\label{short_existence} Let $(M,g_M)$ be an $n$-dimensional complete manifold with bounded scalar curvature and let $g_0=w_0g_M$, where $w_0>0$ is a fine function on $M$. Then the Yamabe flow (\ref{eqn:Yamabe-flow}) below with initial metric $g_0$ has a smooth solution on a maximal time interval $[0,T_{max})$ with $T_{max}>0$ such that either $T_{max}=+\infty$ or the evolving metric contracts to a point at the finite time $T_{max}$. \end{Thm} Since the assumptions above are weaker than those in previous existence results, one cannot expect uniqueness of the Yamabe flow. We shall use the formulation from the interesting paper \cite{S1}.
The plan of the proof is to obtain the local existence result for the Yamabe flow on the Riemannian manifold $(M,g_M)$ by considering the evolution equation in the following form (\cite{AM}) \begin{align}\label{eqn:Yamabe-flow} \frac{1}{n-1}w_t=-\frac{wR}{n-1}=-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}, \end{align} with $w(0)=w_0$. We denote the right-hand side of equation \eqref{eqn:Yamabe-flow} by \begin{align*} B[w]:=(n-1)\Bigl(-\frac{R_0}{n-1}+\frac{\Delta_{g_{M}} w}{w}+\frac{(n-6)}{4}\frac{|\nabla w|_{g_{M}}^2}{w^2}\Bigr). \end{align*} We first set up the local existence result on an arbitrary bounded domain with a uniform time interval. Given a smooth, bounded domain $\Omega\subset M$ and $T>0$, we may assume that $-R_0\geq n c$ for some constant $c$ and we consider the problem \begin{align}\label{eqn:pde} \left\{ \begin{aligned} \frac{\partial w}{\partial t}&=B[w] &&\text{ in $\Omega\times[0,T]$, } \\ w&=\phi &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] w&=w_0 &&\text{ on $\Omega\times\{0\}$} \end{aligned}\right. \end{align} for given $0<w_0\in C^{2,\alpha}(\overline{\Omega})$ and $\phi\in C^{2,\alpha;1,\frac{\alpha}{2}}(\partial\Omega\times[0,T])$ satisfying $\phi(\cdot,t)=w_0$ on $\partial\Omega\times[0,T]$. Since $w_0$ and $R_{g_0}$ are bounded on the compact set $\overline{\Omega}$ and $w_0>0$, the nonlinear term $B[w]$ is well-defined at the initial time. By the standard parabolic theory \cite{Ladyzenskaja1967} we may solve the linear parabolic problem \begin{align}\label{eqn:pde-linear-u} \left\{\begin{aligned} \frac{1}{n-1}\frac{\partial\tilde{u}}{\partial t}-\frac{\Delta_{g_{M}}\tilde{u}}{w_0} -\frac{(n-6)}{4}\frac{\langle\nabla\tilde{u},\nabla w_0\rangle_{g_{M}}}{w_0^2} &=-\frac{R_0}{n-1} &&\text{ in $\Omega\times[0,T]$, } \\ \tilde{u}&=\phi &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] \tilde{u}&=w_0 &&\text{ on $\Omega\times\{0\}$. } \end{aligned}\right.
\end{align} to get the solution $\tilde{u}$. Since $\Omega$ is bounded and since $w_0>0$ in $M$, there exists some $\delta>0$ depending on $\Omega$ and $w_0$ such that $w_0\geq \delta$ in $\overline{\Omega}$. Therefore, equation \eqref{eqn:pde-linear-u} is uniformly parabolic with regular coefficients and the initial-boundary conditions are satisfied. According to linear parabolic theory \cite[IV.5, Theorem 5.2]{Ladyzenskaja1967}, problem \eqref{eqn:pde-linear-u} has a unique solution $\tilde{u}\in C^{2,\alpha;1,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T])$. Since $w_0>0$ and $\phi(\cdot,t)=w_0$ for all $t\in[0,T]$, the parabolic maximum principle applied to $\tilde{u}(\cdot,t)$ implies $\tilde{u}\geq\varepsilon$ on $\Omega\times[0,T]$ for some $\varepsilon>0$ depending on $\Omega$ and $\tilde{u}$. For short times $t>0$, we seek a solution $w$ to \eqref{eqn:pde} close to the function $\tilde{u}$. We shall use the inverse function theorem to construct the short-time solution to \eqref{eqn:pde} on any bounded domain. \begin{Lem}[Short-time existence on bounded domains] \label{lem:shorttimeexistence} Let $\Omega\subset M$ be a smooth bounded domain in $(M, g_M)$. Then there exists $T>0$ such that problem \eqref{eqn:pde} has a unique solution. \end{Lem} \begin{proof} We shall construct a solution $w$ to \eqref{eqn:pde} of the form $w=\tilde{u}+v$, where $\tilde{u}$ solves \eqref{eqn:pde-linear-u} and $v$ solves \begin{align}\label{eqn:pde-v} \left\{\begin{aligned} \frac{\partial v}{\partial t} &=B[\tilde{u}+v]-\frac{\partial \tilde{u}}{\partial t} &&\text{ in $\Omega\times[0,T]$, } \\ v&=0 &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] v&=0 &&\text{ on $\Omega\times\{0\}$. } \end{aligned}\right.
\end{align} For a H\"older exponent $0<\alpha<1$, we define the working spaces \begin{align*} X&:=\{v\in C^{2,\alpha;1,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T]) \mid{}v=0\text{ on }(\Omega\times\{0\})\cup(\partial\Omega\times[0,T])\}, \\[1ex] Y&:=\{f\in C^{0,\alpha;0,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T]) \mid{}f=0\text{ on }\partial\Omega\times\{0\}\}. \end{align*} Notice that the map $F: X\to Y$, \begin{align*} F: v\mapsto \frac{\partial}{\partial t}(\tilde{u}+v)-B[\tilde{u}+v], \end{align*} is well-defined because the initial-boundary conditions imply that, for every $v\in X$ and every $p\in\partial\Omega$, we have \begin{align*} (F v)(p,0)&=\Bigl(\frac{\partial\tilde{u}}{\partial t}-B[\tilde{u}]\Bigr)(p,0) =\Bigl(\frac{\partial\phi}{\partial t}(\cdot,0)-B[w_0]\Bigr)(p)=0. \end{align*} The linearization of $B[\tilde{u}]$ around $\tilde{u}\in C^{2,\alpha;1,\frac{\alpha}{2}}(\overline{\Omega}\times[0,T])$ gives the linear operator \begin{align*} \breve{L}(\tilde{u})&=(n-1)\Bigl(-\frac{\Delta_{g_{M}}\tilde{u}}{\tilde{u}^2} -\frac{(n-6)}{2}\frac{|\nabla\tilde{u}|_{g_{M}}^2}{\tilde{u}^3}+ \frac{(n-6)}{2\tilde{u}^2}\langle\nabla\tilde{u},\nabla\,\cdot\,\rangle_{g_{M}} +\frac{\Delta_{g_{M}}}{\tilde{u}} \Bigr). \end{align*} We claim that the map $F$ is Fr\'echet differentiable at $0\in X$. Indeed, first, the map $F$ is G\^ateaux differentiable at $0\in X$ with derivative \begin{align*} D F(0)\colon X&\to Y \\ w&\mapsto \frac{\partial}{\partial t}w-\breve{L}(\tilde{u})w. \end{align*} Second, the mapping $u\mapsto \breve{L}(u)$ is continuous near $\tilde{u}$ because $\tilde{u}$ is bounded away from zero. Hence, $DF(0)$ is the Fr\'{e}chet derivative of $F$ at $0\in X$. Note that the linear operator $\frac{\partial}{\partial t}-\breve{L}(\tilde{u})$ is uniformly parabolic. Let $f\in Y$ be an arbitrary element. By definition, $0=f(\cdot,0)$ on $\partial\Omega$.
We consider the linear parabolic problem \begin{align}\label{eqn:pde-linear} \left\{\begin{aligned} \frac{\partial w}{\partial t}-\breve{L}(\tilde{u})w&=f &&\text{ in $\Omega\times[0,T]$, } \\ w&=0 &&\text{ on $\partial\Omega\times[0,T]$, } \\[.5ex] w&=0 &&\text{ on $\Omega\times\{0\}$. } \end{aligned}\right. \end{align} As before, linear parabolic theory guarantees that \eqref{eqn:pde-linear} has a unique solution $w\in X$. Hence, the continuous linear map $DF(0)\colon X\to Y$ is invertible. By the inverse function theorem, $F$ is invertible in some neighborhood $V_0\subset Y$ of $F(0)$. We claim that $V_0$ contains an element $h$ such that $h(\cdot,t)=0$ for $0\leq t\leq\varepsilon$ and sufficiently small $\varepsilon>0$. Fix $f:=F(0)=\frac{\partial}{\partial t}\tilde{u}-B[\tilde{u}]$. Choose $\eta: [0,T]\to[0,1]$, a smooth cutoff function such that \begin{align*} \eta(t)&=\begin{cases} 0, &\text{ for } t\leq\varepsilon, \\ 1, &\text{ for } t>2\varepsilon, \end{cases}& 0&\leq\frac{d\eta}{d t}\leq\frac{3}{\varepsilon}. \end{align*} Note that $\eta f\in V_0$ for sufficiently small $\varepsilon>0$. In fact, since $\tilde{u}$ is smooth in $\overline{\Omega}\times[0,T]$, we have $f\in C^{1}(\overline{\Omega}\times[0,T])$. Noting that at $t=0$ we have \begin{align}\label{est:f-hoelder1} f(\cdot,0)&=\frac{\partial\tilde{u}}{\partial t}(\cdot,0)-B[w_0]=0 \quad\text{ on $\overline{\Omega}$, } \end{align} we may estimate \begin{align}\label{eqn:sC1} |f(\cdot,s)|=|f(\cdot,s)-f(\cdot,0)|\leq s|f|_{C^1(\overline{\Omega}\times[0,T])}. \end{align} Take $t,s\in[0,T]$ with $t>s$. For $s>2\varepsilon$, we have $$(f-\eta f)(\cdot,s)=(f-\eta f)(\cdot,t)=0.$$ Hence we may assume $s\leq2\varepsilon$. In this case we may estimate the time difference of the function $(f-\eta f)$ in the following way.
\begin{align}\nonumber &|(f-\eta f)(\cdot,t)-(f-\eta f)(\cdot,s)| \\ \nonumber &\leq |f(\cdot,t)-f(\cdot,s)| +|\eta f(\cdot,t)-\eta f(\cdot,s)| \\ \nonumber &\leq \bigl(1+|\eta(t)|\bigr)|f(\cdot,t)-f(\cdot,s)| +|f(\cdot,s)||\eta(t)-\eta(s)| \\ \nonumber &\leq2 |f|_{C^1}|t-s|+s|f|_{C^1}|\eta'|_{C^0}|t-s| \\ &\leq\bigl(2+s\tfrac{3}{\varepsilon}\bigr)|f|_{C^1}|t-s| \\ \label{est:f-hoelder3} &\leq 8 |f|_{C^1}|t-s|. \end{align} By \eqref{est:f-hoelder1}, the special case $s=0$ reduces to the bound \begin{align}\label{eqn:20190125-3} |(f-\eta f)(\cdot,t)| &\leq8t |f|_{C^1}. \end{align} Since the left-hand side of \eqref{eqn:20190125-3} vanishes for $t>2\varepsilon$, we have \begin{align*} |f-\eta f|_{C^0} &\leq 16\varepsilon |f|_{C^1}. \end{align*} If $|t-s|<\varepsilon$, the estimate \eqref{est:f-hoelder3} implies that \begin{align*} |(f-\eta f)(\cdot,t)-(f-\eta f)(\cdot,s)| &\leq 8\varepsilon^{1-\frac{\alpha}{2}}|f|_{C^1}|t-s|^{\frac{\alpha}{2}}. \end{align*} If $|t-s|\geq\varepsilon$, we instead use the fact that \begin{align*} |(f-\eta f)(\cdot,t)-(f-\eta f)(\cdot,s)| &\leq 2|f-\eta f|_{C^0} \\ \nonumber &\leq 32\varepsilon |f|_{C^1} \\ \nonumber &\leq 32\varepsilon^{1-\frac{\alpha}{2}}|f|_{C^1}|t-s|^{\frac{\alpha}{2}}. \end{align*} Then, $$[f-\eta f]_{\frac{\alpha}{2},t}\leq32\varepsilon^{1-\frac{\alpha}{2}}|f|_{C^1}.$$ For the estimation of the spatial H\"older seminorm, we may use \eqref{eqn:sC1} and estimate the space difference: \begin{align*} &|(f-\eta f)(x,t)-(f-\eta f)(y,t)| \\ \nonumber &\leq |1-\eta(t)||f(x,t)-f(y,t)|^{\alpha}|f(x,t)-f(y,t)|^{1-\alpha} \\ \nonumber &\leq |f|_{C^1}^{\alpha}\,d(x,y)^{\alpha}\bigl(4\varepsilon |f|_{C^1}\bigr)^{1-\alpha} =(4\varepsilon)^{1-\alpha}|f|_{C^1}\,d(x,y)^{\alpha}, \end{align*} where $d(x,y)$ is the Riemannian distance between $x$ and $y$ in $(M,g_{M})$.
Combining these estimates, $$|f-\eta f|_{Y}\leq C\varepsilon^{1-\alpha}|f|_{C^1}.$$ This implies that $\eta f$ belongs to the neighborhood $V_0$ of $f$ if $\varepsilon>0$ is sufficiently small. By the construction above, $F^{-1}(\eta f)$ is a solution to \eqref{eqn:pde-v} in $\Omega\times[0,\varepsilon]$. Setting $T=\varepsilon>0$, we then obtain the desired result. \end{proof} We are now going to prove Theorem \ref{short_existence}. \begin{proof} We now obtain the local-in-time solution to \eqref{eqn:Yamabe-flow} on the whole Riemannian manifold $(M,g_M)$. Recall that we have assumed that the scalar curvature of $g_0=w_0g_M$ is bounded. Let $\Omega_1\subset\Omega_2\subset\cdots$ be a smooth compact domain exhaustion of $M$ (the existence of such a domain exhaustion was used in \cite{D}). Recall that $p=\frac{n+2}{n-2}$, $L_{g_0}v=\Delta_{g_0}v-aR_{g_0}v$ and $a=\frac{n-2}{4(n-1)}$. We may write $g_0=v_0^{4/(n-2)}g_M$ for $w_0=v_0^{4/(n-2)}$ and look for a solution of the form $$g(x,t)=\check{u}^{4/(n-2)}g_0=(\check{u}{v_0})^{4/(n-2)}g_M=v^{4/(n-2)}g_M=ug_M. $$ Then the Yamabe flow equation may be written as $$ \frac{\partial v^p}{\partial t}=L_{g_M}v, \quad \ x\in M, \ t>0, $$ with the initial data $v(0)=v_0$. Recall that $L_{g_M}v =\Delta_{g_M}v-aR_{g_M}v$ in $M$. To shorten notation, we may assume $v_0=1$ and then $g_0=g_M$. Then, the solution $\check{u}(x,t)$ to the Yamabe flow \eqref{eqn:Yamabe-flow} may be obtained from a sequence of approximate solutions $u_m(x,t)=\check{u}_m(x,t)^{4/(n-2)}$ as constructed above. Note that $\check{u}_m(x,t)$ satisfies \begin{equation}\label{yamabe_flow_u_local1} \left\{ \begin{array}{ll} \frac{\partial \check{u}^p_m}{\partial t}=L_{g_0}\check{u}_m, \quad &\ x\in\Omega_m, \ t>0,\\ \check{u}_m(x,t)>0, \quad &\ x\in\Omega_m, \ t>0,\\ \check{u}_m(x,t)=1, \quad &\ x\in \partial \Omega_m, \ t>0,\\ \check{u}_m(\cdot,0)=1, \quad &\ x\in\Omega_m.\\ \end{array} \right.
\end{equation} Since $\check{u}_m(x,t)=1$ on $\partial \Omega_m$, by the maximum principle we may conclude that \begin{align*} \max\limits_{\Omega_m} \check{u}_m(t)\leq (1+\frac{n-2}{(n-1)(n+2)}\sup\limits_{M^n}|R_{g_0}|t)^{\frac{n-2}{4}} \end{align*} and \begin{align*} \min\limits_{\Omega_m} \check{u}_m(t)\geq (1-\frac{n-2}{(n-1)(n+2)}\sup\limits_{M^n}|R_{g_0}|t)^{\frac{n-2}{4}}. \end{align*} We see that $\check{u}_m(t)$ has a uniform upper bound on $[0,t_0)$ for any $t_0>0$ and a uniform positive lower bound on $[0,\frac{(n-1)(n+2)}{2(n-2)\sup\limits_{M^n}|R_{g_0}|}]$. Let $T=\frac{(n-1)(n+2)}{2(n-2)\sup\limits_{M^n}|R_{g_0}|}$. Then every local solution $u_m$ is well-defined on the time interval $[0,T]$. Applying Trudinger's estimate \cite{Trudinger1968} (or the Krylov-Safonov estimate) and the Schauder estimates for parabolic equations to \eqref{eqn:Yamabe-flow} on any ball $B_{g_0}(p,r_0)\subset (M,g_0)$, we have $||u_m||_{C^{2+\alpha,1+\frac{\alpha}{2}}(B_{g_0}(p,r_0)\times [0,T])}\leq C$, where $C$ is independent of the point $p$. By a diagonal argument, we may extract from $\{u_m\}$ a subsequence converging in $C^{2+\alpha,1+\frac{\alpha}{2}}_{loc}$ to a positive limit $u(x,t)=\check{u}^{\frac{4}{n-2}}$ on all of $M$, which is the desired local-in-time solution to \eqref{eqn:Yamabe-flow}. Since $g(\cdot,t)=\check{u}^{\frac{4}{n-2}}g_0$, we also have $\sup\limits_{B_{g_0}(p,r_0)\times [0,T]}|Rm(x,t)|\leq C$. With this understanding, we may extend the solution to a maximal-time solution, as desired. This completes the proof of the result. \end{proof} \section{Global Yamabe flows on ALE manifolds}\label{sect3} Assume that $(M,g_0)$ is an ALE manifold. We shall show that the Yamabe flow exists globally. Assume, by contradiction, that the maximal existence time of the Yamabe flow is finite, i.e., $T_{max}<\infty$.
Then, according to the AF property of the solution in Theorem 5.1 of \cite{CZ} (whose argument is based on the generalized maximum principle, Theorem \ref{Ecker_Huisken}, as shown in the appendix), we know that there exists a compact set $S\subset M$ such that $$ \frac{1}{2}\leq u(x,t)\leq \frac{3}{4}, \ \ \forall (x,t)\in (M\setminus S)\times [0,T_{max}) $$ and there is a point $x\in S$ such that $u$ is non-trivial in a neighborhood of $x$. As in \cite{Y94}, using DiBenedetto's estimate we may extend the solution $u$ continuously to $T_{max}$. Using Ye's argument, we know that $u$ cannot vanish at any point of $S$ at time $T_{max}$. Therefore, there exist two positive constants $c_1$ and $c_2$ such that $$ c_1\leq u(x,t)\leq c_2, \ \ \forall (x,t)\in M\times [0,T_{max}]. $$ Then we may use the standard parabolic theory \cite{Trudinger1968} to extend the solution beyond $T_{max}$, contradicting $T_{max}<\infty$. The decay property of the Yamabe flow follows from Theorem 5.1 in \cite{CZ}. This then completes the proof of Theorem \ref{global}. The proof of Theorem \ref{main_5} follows from the application of Theorem \ref{short_existence}, Theorem 5.1 in \cite{CZ}, and Theorem 6 in \cite{M}. The proof of Theorem \ref{main_6} is now easy to give; we present it below. \begin{proof} We choose $\delta>0$ small such that $\tilde{u}_0=\delta w_0<1$. Note that $\tilde{u}_0$ is a lower solution of the Yamabe flow. Let $\tilde{g}_0= \tilde{u}_0^{4/(n-2)}g_0$. Then the scalar curvature of the metric $\tilde{g}_0$ is non-negative and we also have $$ g_0\geq \tilde{g}_0. $$ Note that along the Yamabe flow $g(t)\geq \tilde{g}_0$ and by the maximum principle we know that the scalar curvature $R=R(g(t))\geq 0$ on $M$. Since $$ \frac{\partial g}{\partial t}=-Rg \leq 0, $$ we then know that the Yamabe flow $g(t)$ converges in $C^\infty_{loc}(M)$ to a Yamabe metric of scalar curvature zero.
\end{proof} \section{Appendix: the maximum principle}\label{sect4} In this section, we present a generalized version of the maximum principle (see Theorem 4.3 in \cite{EH} and Theorem 2.6 in \cite{CZ}, where the maximum principle is considered for the parabolic equation $\frac{\partial }{\partial t}v - \Delta v\leq b\cdot \nabla v+cv$ or $\frac{\partial }{\partial t}v - div(a \nabla v)\leq b\cdot \nabla v+cv$ on noncompact manifolds, with $\Delta$ and $\nabla$ depending on $g(t)$). Our maximum principle concerns the more general inequality $$m(x) \frac{\partial }{\partial t}v - div(a \nabla v)\leq b\cdot \nabla v+cv \ \ \text{on}\ M\times [0,T),$$ where $m(x)$ is a positive regular function on $M$. \begin{Thm}\label{Ecker_Huisken} Suppose that the complete noncompact manifold $M^n$ with Riemannian metric $g(t)$ satisfies the uniform volume growth condition \begin{align*} \mathrm{vol}_{g(t)}(B_{g(t)}(p,r))\leq \exp(k(1+r^2)) \end{align*} for some point $p\in M$ and a uniform constant $k>0$ for all $t\in [0,T]$. Let $v$ be a differentiable function on $M\times (0,T]$ which is continuous on $M\times [0,T]$. Assume that $v$ and $g(t)$ satisfy (i) The differential inequality \begin{align*} m(x)\frac{\partial }{\partial t}v - div(a \nabla v)\leq b\cdot \nabla v+cv, \end{align*} where $m(x)$ is a positive continuous function on $M$ such that $0<m_0\leq m(x)\leq m_1$ for some constants $m_0>0$ and $m_1>0$, and the vector field $b$ and the functions $a$ and $c$ are uniformly bounded, \begin{align*} 0<\alpha_1' \leq a\leq \alpha_1, \sup\limits_{M\times [0,T]} |b|\leq \alpha_2, \sup\limits_{M\times [0,T]} |c|\leq \alpha_3, \end{align*} for some constants $\alpha_1',\alpha_1,\alpha_2,\alpha_3<\infty$. Here $\Delta$ and $\nabla$ depend on $g(t)$. (ii) The initial condition \begin{align*} v(p,0)\leq 0 \end{align*} for all $p\in M$. (iii) The growth condition \begin{align*} \int^T_0\Bigl(\int_{M}\exp[-\alpha_4 d_{g(t)}(p,y)^2]|\nabla v|^2(y)d \mu_t\Bigr)dt<\infty \end{align*} for some constant $\alpha_4>0$.
(iv) The bounded variation condition on the metrics, in the sense that \begin{align*} \sup\limits_{M\times[0,T]}|\frac{\partial}{\partial t}g(t)|\leq \alpha_5 \end{align*} for some constant $\alpha_5<\infty$. Then we have $ v\leq 0 $ on $M\times [0,T]$. \end{Thm} \begin{Rk}\label{remark_max} Note that the conditions (iii) and (iv) are satisfied if the sectional curvature of $g(t)$ and $\nabla v$ are uniformly bounded on $[0,T]$. There are many versions of the maximum principle; see, e.g., \cite{A} and \cite{BK}. \end{Rk} \textbf{Proof of Theorem \ref{Ecker_Huisken}:} Fix $K_0>0$ large. We choose $\theta>0$ and let $$ h(y,t)=-\frac{\theta d^2_{g(t)}(p,y)}{4(2\eta-t)},\quad 0<t<\eta, $$ where $d_{g(t)}(p,y)$ is the distance between $p$ and $y$ at time $t$ and $0<\eta<\min(T,\frac{1}{64K_0},\frac{1}{32\alpha_4},\frac{1}{4\alpha_5})$. Then \begin{align*} \frac{d}{dt}h=-\frac{\theta d^2_{g(t)}(p,y)}{4(2\eta-t)^2}-\frac{\theta d_{g(t)}(p,y)}{2(2\eta-t)}\frac{d}{dt}d_{g(t)}(p,y). \end{align*} By (iv), we have \begin{align*} |\frac{d}{dt}d_{g(t)}(p,y)|\leq \frac{1}{2}\alpha_5 d_{g(t)}(p,y). \end{align*} Then we have that \begin{align*} \frac{d}{dt}h\leq -\theta^{-1}|\nabla h|^2+\theta^{-1}\alpha_5|\nabla h|^2 (2\eta-t). \end{align*} We choose $\theta=\frac{1}{4\alpha_1}$. Using $\eta\leq \frac{1}{4\alpha_5}$ we have \begin{align}\label{h_estiamte} m_0\frac{d}{dt}h+2a|\nabla h|^2\leq 0. \end{align} Let $K>0$ be a large constant and write $f:=v$. Taking $f_K=\max\{\min(f,K),0\}$ and $0<\epsilon<\eta$, we have \begin{align*} &\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K(div(a\nabla f)-m(x)\frac{\partial f}{\partial t})d\mu_t)dt\\ \geq & -\alpha_2 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K|\nabla f|d\mu_t)dt\\ &-\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K fd\mu_t)dt \end{align*} for some smooth time-independent compactly supported function $\phi$ on $M^n$, where $\beta>0$ will be chosen later.
Then we have \begin{align*} 0\leq &-\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a \langle\nabla f_K,\nabla f_K\rangle d\mu_t)dt\\ &-\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K a\langle\nabla h,\nabla f\rangle d\mu_t)dt\\ &-2\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi e^h f_K a\langle\nabla \phi,\nabla f\rangle d\mu_t)dt\\ &-\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 e^h f_K\frac{\partial f}{\partial t}d\mu_t)dt +\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K fd\mu_t)dt \\ &+\alpha_2 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K|\nabla f|d\mu_t)dt\\ =&\textrm{I}+\textrm{II}+\textrm{III}+\textrm{IV}+\textrm{V}+\textrm{VI}. \end{align*} By the Cauchy--Schwarz inequality, we derive \begin{align*} \textrm{II}\leq \frac{1}{4}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K^2 a|\nabla h|^2d\mu_t)dt, \end{align*} \begin{align*} \textrm{III}\leq \frac{1}{2}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+2\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M} e^h f_K^2 a|\nabla \phi|^2d\mu_t)dt, \end{align*} and \begin{align*} \textrm{VI}&\leq \frac{1}{4}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+\alpha_2^2\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M} \phi^2 e^h f_K^2 \frac{1}{a}d\mu_t)dt\\ &\leq \frac{1}{4}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h a|\nabla f|^2d\mu_t)dt+\frac{\alpha_2^2}{\alpha_1'}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M} \phi^2 e^h f_K^2 d\mu_t)dt.
\end{align*} Since \begin{align*} -e^hf_K\frac{\partial f}{\partial t}\leq -e^h f_K \frac{\partial f_K}{\partial t}+\frac{\partial }{\partial t}(e^hf_K(f_K-f)), \end{align*} and $$f_K(f_K-f)\leq 0,$$ we obtain \begin{align*} &\qquad\textrm{IV}+\textrm{V}\\ &\leq -\frac{1}{2}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 e^h \frac{\partial f_K^2}{\partial t}d\mu_t)dt +\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 \frac{\partial }{\partial t}(e^h f_K(f_K-f))d\mu_t)dt\\ & -\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K(f_K-f)d\mu_t)dt+\alpha_3 \int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K^2d\mu_t)dt. \end{align*} Moreover, we have \begin{align*} |\frac{d}{dt}(d\mu_t)|\leq n \alpha_5 d\mu_t \end{align*} by (iv). Now we choose $\beta>0$ such that $m_0\beta\geq 2n\alpha_5+4\alpha_3+4\frac{\alpha_2^2}{\alpha_1'}$. Then \begin{align*} &\qquad\textrm{IV}+\textrm{V}\\ &\leq -\frac{1}{2}e^{-\beta t}\int_{M}m(x)\phi^2 e^h f_K^2d\mu_t|_{t=\eta} +\frac{1}{2}e^{-\beta t}\int_{M}m(x)\phi^2 e^h f_K^2d\mu_t|_{t=\epsilon}\\ &+\frac{1}{2}\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}m(x)\phi^2 e^h f_K^2 \frac{\partial h}{\partial t}d\mu_t)dt-\frac{1}{4}m_0\beta\int^{\eta}_{\epsilon}e^{-\beta t}(\int_{M}\phi^2 e^h f_K^2 d\mu_t)dt\\ & +e^{-\beta t}\int_{M}\phi^2 e^h f_K(f_K-f)d\mu_t|_{t=\eta}-e^{-\beta t}\int_{M}\phi^2 e^h f_K^2d\mu_t|_{t=\epsilon}. \end{align*} Combining the estimates of $\textrm{I}$--$\textrm{VI}$ and letting $\epsilon\to 0$, we obtain \begin{align*} 0\leq &-\int^{\eta}_{0}e^{-\beta t}(\int_{M}\phi^2 e^h a |\nabla f_K|^2d\mu_t)dt +\int^{\eta}_{0}e^{-\beta t}(\int_{M}\phi^2 e^h a |\nabla f|^2d\mu_t)dt\\ &+2\int^{\eta}_{0}e^{-\beta t}(\int_{M} e^h f_K^2 a|\nabla \phi|^2d\mu_t)dt-\frac{1}{2}e^{-\beta t}\int_{M}m(x)\phi^2 e^h f_K^2d\mu_t|_{t=\eta}, \end{align*} where we used $f_K\equiv 0$ at $t=0$ and (\ref{h_estiamte}).
Now we choose $0\leq \phi\leq 1$ satisfying $\phi \equiv 1$ on $B_{g_0}(p,R)$, $\phi \equiv 0$ outside $B_{g_0}(p,R+1)$ and $|\nabla_{g_0} \phi|_{g_0}\leq 2$. Then we have \begin{align*} &\frac{1}{2}e^{-\beta \eta}\int_{B_{g_0}(p,R)}m(x)\phi^2 e^h f_K^2d\mu_t|_{t=\eta}\leq \int^{\eta}_{0}e^{-\beta t}(\int_{B_{g_0}(p,R+1)}\phi^2 e^h a (|\nabla f|^2-|\nabla f_K|^2)d\mu_t)dt\\ &+C(\alpha_5)\int^{\eta}_{0}e^{-\beta t}(\int_{B_{g_0}(p,R+1)\backslash B_{g_0}(p,R)} e^h f_K^2 a d\mu_t)dt, \end{align*} where $C(\alpha_5)$ is a constant depending only on $\alpha_5$. By $0<\eta<\min(\frac{1}{64K_0},\frac{1}{32\alpha_4})$ and the volume growth assumption on $M^n$, we have \begin{align*} \int^{\eta}_{0}e^{-\beta t}(\int_{B_{g_0}(p,R+1)\backslash B_{g_0}(p,R)} e^h f_K^2 a d\mu_t)dt\to 0, \end{align*} as $R\to \infty$. Then we derive \begin{align*} &\frac{1}{2}e^{-\beta \eta}\int_{M}\phi^2 e^h f_K^2d\mu_t|_{t=\eta}\leq \int^{\eta}_{0}e^{-\beta t}(\int_{M}\phi^2 e^h a (|\nabla f|^2-|\nabla f_K|^2)d\mu_t)dt. \end{align*} Letting $K\to\infty$, we conclude that \begin{align*} \frac{1}{2}e^{-\beta \eta}\int_{M}m(x)\phi^2 e^h (\max(f,0))^2d\mu_t|_{t=\eta}\leq 0, \end{align*} where $0<\eta<\min(T,\frac{1}{64K_0},\frac{1}{32\alpha_4},\frac{1}{4\alpha_5})$. This implies that $f\leq 0$ in $M^n\times[0,\eta]$. Iterating this argument in time, we then have $f=v\leq 0$ in $M^n\times[0,T]$. $\Box$ \end{document}
\begin{document} \begin{frontmatter} \journal{J. Math. Anal. Appl.} \title{Ramanujan-Slater Type Identities \\Related to the Moduli 18 and 24} \date{February 21, 2008} \author{James McLaughlin} \address{Department of Mathematics, West Chester University, West Chester, PA; telephone 610-738-0585; fax 610-738-0578} \ead{[email protected]} \ead[url]{http://math.wcupa.edu/\~{}mclaughlin} \author{Andrew V. Sills} \address{Department of Mathematical Sciences, Georgia Southern University, Statesboro, GA; telephone 912-681-5892; fax 912-681-0654} \ead{[email protected]} \ead[url]{http://math.georgiasouthern.edu/\~{}asills} \begin{abstract} We present several new families of Rogers-Ramanujan type identities related to the moduli 18 and 24. A few of the identities were found by either Ramanujan, Slater, or Dyson, but most are believed to be new. For one of these families, we discuss possible connections with Lie algebras. We also present two families of related false theta function identities. \end{abstract} \begin{keyword} Rogers-Ramanujan identities\sep Bailey pairs \sep $q$-series identities \sep basic hypergeometric series \sep false theta functions \sep affine Lie algebras \sep principal character \MSC 11B65\sep 33D15\sep 05A10 \sep 17B57\sep 17B10 \end{keyword} \end{frontmatter} \section{Introduction} The Rogers-Ramanujan identities are \begin{thm}[The Rogers-Ramanujan Identities] \begin{equation}\label{RRa1} \sum_{n=0}^\infty \frac{q^{n^2}}{(q;q)_n} = \frac{(q^2, q^3, q^5; q^5)_\infty}{(q;q)_\infty}, \end{equation} and \begin{equation}\label{RRa2} \sum_{n=0}^\infty \frac{q^{n(n+1)}}{(q;q)_n} = \frac{(q, q^4, q^5; q^5)_\infty}{(q;q)_\infty}, \end{equation} where \[ (a;q)_m = \prod_{j=0}^{m-1} (1-aq^j), \quad (a;q)_\infty = \prod_{j=0}^\infty (1-aq^j), \] and \[ (a_1, a_2, \dots, a_r; q)_s = (a_1;q)_s (a_2;q)_s \dots (a_r;q)_s.
\] \end{thm} (Although the results in this paper may be considered purely from the point of view of formal power series, they also yield identities of analytic functions provided $|q|<1$.) The Rogers-Ramanujan identities are due to L.~J.~Rogers~\cite{R94}, and were rediscovered by S. Ramanujan~\cite{M18} and I. Schur~\cite{S17}. Rogers and others discovered many series--product identities similar in form to the Rogers-Ramanujan identities, and such identities are called ``identities of the Rogers-Ramanujan type." Two of the largest collections of Rogers-Ramanujan type identities are contained in Slater's paper~\cite{S52} and Ramanujan's Lost Notebook~\cite[Chapters 10--11]{AB05},~\cite[Chapters 1--5]{AB07}. Rogers-Ramanujan type identities occur in closely related ``families." Just as there are two Rogers-Ramanujan identities related to the modulus 5, there is a family of three Rogers-Selberg identities related to the modulus 7~\cite[p. 331, (6)]{R17}, a family of three identities related to the modulus 9 found by Bailey~\cite[p. 422, Eqs. (1.6)--(1.8)]{B47}, a family of four identities related to the modulus 27 found by Dyson~\cite[p. 433, Eqs. (B1)--(B4)]{B47}, etc. While both Ramanujan and Slater usually managed to find all members of a given family, this was not always the case. In this paper, we present several complete families of identities for which Ramanujan or Slater found only one member, as well as two complete new families.
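Identities of this kind can be checked mechanically as truncated formal power series. The following sketch (our own, not part of the paper) verifies \eqref{RRa1} through $q^{30}$, using the equivalent product form $1/((q;q^5)_\infty(q^4;q^5)_\infty)$ for the right-hand side:

```python
# Truncated formal power series check of the first Rogers-Ramanujan identity.
N = 30  # verify coefficients of q^0 ... q^30

def mul(a, b):
    """Multiply two series (coefficient lists), truncating at degree N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N + 1 - i):
                c[i + j] += ai * b[j]
    return c

def geom(k):
    """Series for 1/(1 - q^k), truncated at degree N."""
    c = [0] * (N + 1)
    for i in range(0, N + 1, k):
        c[i] = 1
    return c

# Left side: sum_{n>=0} q^{n^2} / (q;q)_n
lhs = [0] * (N + 1)
term = [0] * (N + 1)
term[0] = 1                        # 1/(q;q)_0 = 1
n = 0
while n * n <= N:
    if n > 0:
        term = mul(term, geom(n))  # extend 1/(q;q)_{n-1} by 1/(1-q^n)
    for i in range(N + 1 - n * n):
        lhs[i + n * n] += term[i]  # add q^{n^2} / (q;q)_n
    n += 1

# Right side, in the equivalent form 1/((q;q^5)_inf (q^4;q^5)_inf):
# the generating function for partitions into parts congruent to 1 or 4 mod 5.
rhs = [0] * (N + 1)
rhs[0] = 1
for k in range(1, N + 1):
    if k % 5 in (1, 4):
        rhs = mul(rhs, geom(k))

assert lhs == rhs
```

Both sides agree coefficient by coefficient, consistent with the combinatorial reading of \eqref{RRa1}: partitions into parts differing by at least two are equinumerous with partitions into parts $\equiv 1,4 \pmod 5$.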
The following family of four identities related to the modulus 18 is believed to be new:
{\allowdisplaybreaks \begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-1;q^3)_n}{ (-1;q)_n (q;q)_{2n} } = \frac{(q,q^8,q^9;q^9)_\infty (q^7,q^{11};q^{18})_\infty} {(q;q)_\infty} \label{m18-1}\\ \sum_{n=0}^\infty \frac{ q^{n^2} (-1;q^3)_n}{ (-1;q)_n (q;q)_{2n} } = \frac{(q^2,q^7,q^9;q^9)_\infty (q^5,q^{13} ; q^{18})_\infty} {(q;q)_\infty} \label{m18-2} \\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^3;q^3)_n}{ (-q;q)_n (q;q)_{2n+1} } = \frac{(q^3,q^6,q^9;q^9)_\infty (q^3,q^{15};q^{18})_\infty}{(q;q)_\infty} \label{m18-3} \\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q^3;q^3)_n } { (q^2;q^2)_n (q^{n+2};q)_{n+1} } = \frac{(q^4,q^5,q^9;q^9)_\infty (q,q^{17};q^{18})_\infty}{(q;q)_\infty} \label{m18-4} \end{gather} }
\begin{rem} We included Identity~\eqref{m18-3} in our joint paper with D. Bowman~\cite[Eq. (6.30)]{BMS07}, as it also occurs as part of a different family of four identities. \end{rem}
A closely related family of mod 18 identities is as follows.
\begin{gather} 1+\sum_{n=1}^\infty \frac{ q^{n^2} (q^3;q^3)_{n-1} (2+q^n)} { (q;q)_{n-1} (q;q)_{2n} } = \frac{(-q,-q^8,q^9;q^9)_\infty (q^7,q^{11};q^{18})_\infty}{(q;q)_\infty} \label{m18-m1}\\ 1+\sum_{n=1}^\infty \frac{ q^{n^2} (q^3;q^3)_{n-1} (1+2q^n)} { (q;q)_{n-1} (q;q)_{2n} } = \frac{(-q^2,-q^7,q^9;q^9)_\infty (q^5,q^{13};q^{18})_\infty} {(q;q)_\infty} \label{m18-m2}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (q^3;q^3)_n}{ (q;q)_n (q;q)_{2n+1} } = \frac{(-q^3,-q^6,q^9;q^9)_\infty (q^3,q^{15};q^{18})_\infty}{(q;q)_\infty} \label{m18-m3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (q^3;q^3)_n } { (q;q)_n^2 (q^{n+2};q)_{n+1} } = \frac{(-q^4,-q^5,q^9;q^9)_\infty (q,q^{17};q^{18})_\infty}{(q;q)_\infty} \label{m18-m4} \end{gather}
Identity~\eqref{m18-m3} is due to Dyson~\cite[p. 434, Eq. (B3)]{B47} and also appears in Slater~\cite[p. 161, Eq. (92)]{S52}.
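Before proving identities such as~\eqref{m18-1}, it is reassuring to compare truncated expansions of both sides numerically. The sketch below (our own illustration, not part of the paper's proofs; helper names are ours) does this for~\eqref{m18-1} through $q^{30}$, using the fact that the leading factors $2$ in $(-1;q^3)_n$ and $(-1;q)_n$ cancel for $n\geqq 1$:

```python
# Check identity (m18-1) as truncated q-series, coefficient lists mod q^(N+1).
N = 30

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i + 1):
                c[i + j] += ai * b[j]
    return c

def poch(k, step, n):
    # (q^k; q^step)_n, factors (1 - q^(k + j*step))
    c = [0] * (N + 1); c[0] = 1
    for j in range(n):
        e = k + j * step
        if e > N:
            break
        f = [0] * (N + 1); f[0] = 1; f[e] = -1
        c = mul(c, f)
    return c

def npoch(k, step, n):
    # (-q^k; q^step)_n, factors (1 + q^(k + j*step))
    c = [0] * (N + 1); c[0] = 1
    for j in range(n):
        e = k + j * step
        if e > N:
            break
        f = [0] * (N + 1); f[0] = 1; f[e] = 1
        c = mul(c, f)
    return c

def inv(a):
    b = [0] * (N + 1); b[0] = 1
    for m in range(1, N + 1):
        b[m] = -sum(a[j] * b[m - j] for j in range(1, m + 1))
    return b

one = [1] + [0] * N
# Left side: sum q^(n(n+1)) (-1;q^3)_n / ((-1;q)_n (q;q)_{2n}); for n >= 1 the
# ratio (-1;q^3)_n / (-1;q)_n equals prod_{j=1}^{n-1} (1+q^{3j}) / (1+q^j).
lhs = [0] * (N + 1)
n = 0
while n * (n + 1) <= N:
    num = npoch(3, 3, n - 1) if n >= 1 else one
    den = npoch(1, 1, n - 1) if n >= 1 else one
    t = mul(mul(num, inv(den)), inv(poch(1, 1, 2 * n)))
    s = n * (n + 1)
    for i in range(N + 1 - s):
        lhs[s + i] += t[i]
    n += 1

# Right side: (q, q^8, q^9; q^9)_inf (q^7, q^11; q^18)_inf / (q;q)_inf
rhs = mul(poch(1, 9, N), mul(poch(8, 9, N), poch(9, 9, N)))
rhs = mul(rhs, mul(poch(7, 18, N), poch(11, 18, N)))
rhs = mul(rhs, inv(poch(1, 1, N)))

assert lhs == rhs
```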
In both~\cite{B47} and~\cite{S52}, the right hand side of~\eqref{m18-m3} appears in a different form and thus is seen to be a member of a different family of four identities related to the modulus 27. Following Ramanujan (cf. \cite[p. 11, Eq (1.1.7)]{AB05}), let us use the notation \begin{equation*} \psi(q) = \frac{(q^2;q^2)_\infty}{(q;q^2)_\infty}. \end{equation*} Ramanujan recorded the identity
\begin{equation} \sum_{n=0}^\infty \frac{ q^{n^2} (-q^3;q^6)_n}{ (q^2;q^2)_{2n} } = \frac{ (q^2,q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty }{\psi(-q)} \label{m24t-2} \end{equation}
in his lost notebook~\cite[Entry 5.3.8]{AB07}. As we see below, it is actually only one of a family of five similar identities.
\begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q;q^2)_n (-1;q^6)_n}{ (q^2;q^2)_{2n} (-1;q^2)_n} = \frac{ (q,q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty }{\psi(-q)} \label{m24t-1}\\ \sum_{n=0}^\infty \frac{ q^{n^2} (-q;q^2)_n (-1;q^6)_n}{ (q^2;q^2)_{2n} (-1;q^2)_n } = \frac{ (q^3,q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty }{\psi(-q)} \label{m24t-3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q^3;q^6)_n}{ (q;q)_{2n+1} (-q;q)_{2n} } = \frac{ (q^4,q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty }{\psi(-q)} \label{m24t-4}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q;q^2)_{n+1} (-q^6;q^6)_n } { (q^4;q^4)_{n} (q^{2n+4};q^2)_{n+1} } = \frac{ (q^5,q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty }{\psi(-q)} \label{m24t-5} \end{gather}
Ramanujan also recorded the identity
\begin{equation} \label{m24t-m2} \sum_{n=0}^\infty \frac{ q^{n^2} (q^3;q^6)_n}{ (q;q^2)_{n}^2 (q^4;q^4)_n } = \frac{ (-q^2,-q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty }{\psi(-q)} \end{equation}
in the lost notebook~\cite[Entry 5.3.9]{AB07}. Again, it is one of a family of five similar identities. This time, however, two of the remaining four identities were found by Slater.
Identity~\eqref{m24t-m4} is a corrected presentation of~\cite[p. 164, Eq. (110)]{S52} and identity~\eqref{m24t-m5} is a corrected presentation of~\cite[p. 163, Eq. (108)]{S52}.
\begin{gather} 1+\sum_{n=1}^\infty \frac{ q^{n^2} (-q;q^2)_n (q^6;q^6)_{n-1} (2+q^{2n})}{ (q^2;q^2)_{2n} (q^2;q^2)_{n-1}} = \frac{ (-q,-q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty } {\psi(-q)} \label{m24t-m1}\\ 1+\sum_{n=1}^\infty \frac{ q^{n^2} (-q;q^2)_n (q^6;q^6)_{n-1} (1+2q^{2n})} { (q^2;q^2)_{2n} (q^2;q^2)_{n-1} } = \frac{ (-q^3,-q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty } {\psi(-q)} \label{m24t-m3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (q^3;q^6)_n (-q;q^2)_{n+1} }{ (q^2;q^2)_{2n+1} (q;q^2)_n } = \frac{ (-q^4,-q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty } {\psi(-q)} \label{m24t-m4}\\ \sum_{n=0}^\infty \frac{ q^{n(n+2)} (-q;q^2)_{n+1} (q^6;q^6)_n } { (q^{2n+4};q^2)_{n+1} (q^2;q^2)_n^2 } = \frac{ (-q^5,-q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty } {\psi(-q)}\label{m24t-m5} \end{gather}
We believe that the following family of five identities has not previously appeared in the literature:
\begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (-q^3;q^6)_n } { (q;q)_{2n} (-q;q)_{2n+1} (-q;q^2)_n} = \frac{ (q,q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty } {\varphi(-q^2)}\label{m24s-1}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-1;q^6)_n (-q^2;q^2)_n }{ (q^2;q^2)_{2n} (-1;q^2)_n } = \frac{ (q^2,q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-2}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (-q^3;q^6)_n}{ (q^2;q^2)_{2n+1} (-q;q^2)_n } = \frac{ (q^3,q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty } {\varphi(-q^2)}\label{m24s-3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^6;q^6)_n}{ (q^2;q^2)_{2n+1} } = \frac{ (q^4,q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-4}\\ \sum_{n=0}^\infty
\frac{ q^{n(n+3)} (-q^2;q^2)_{n} (-q^3;q^6)_n } { (q^2;q^2)_{2n+1} (-q;q^2)_n } = \frac{ (q^5,q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-5} ,\end{gather}
where \begin{equation*} \varphi(q) := \frac{(-q;-q)_\infty}{(q;-q)_\infty} \end{equation*} is another notation used by Ramanujan. In the following counterpart to the preceding family, two of the five identities appear in Slater's list.
\begin{gather} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (q^3;q^6)_{n} }{ (q;q)_{2n+1} (-q;q)_{2n} (q;q^2)_{n}} = \frac{ (-q,-q^{11},q^{12};q^{12})_\infty (q^{10},q^{14};q^{24})_\infty } {\varphi(-q^2)}\label{m24s-m1} \\ 1+\sum_{n=1}^\infty \frac{ q^{n(n+1)} (q^6;q^6)_{n-1} (-q^2;q^2)_n}{ (q^2;q^2)_{n-1} (q^2;q^2)_{2n} } = \frac{ (-q^2,-q^{10},q^{12};q^{12})_\infty (q^{8},q^{16};q^{24})_\infty } {\varphi(-q^2)} \label{m24s-m2} \\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-q^2;q^2)_n (q^3;q^6)_{n}}{ (q^2;q^2)_{2n+1} (q;q^2)_{n} } = \frac{ (-q^3,-q^{9},q^{12};q^{12})_\infty (q^{6},q^{18};q^{24})_\infty }{\varphi(-q^2)}\label{m24s-m3}\\ \sum_{n=0}^\infty \frac{ q^{n(n+1)} (q^6;q^6)_n (-q^2;q^2)_n } { (q^2;q^2)_{2n+1} (q^2;q^2)_n } = \frac{ (-q^4,-q^{8},q^{12};q^{12})_\infty (q^{4},q^{20};q^{24})_\infty }{\varphi(-q^2)}\label{m24s-m4}\\ \sum_{n=0}^\infty \frac{ q^{n(n+3)} (-q^2;q^2)_{n} (q^3;q^6)_n } { (q^2;q^2)_{2n+1} (q;q^2)_n } = \frac{ (-q^5,-q^{7},q^{12};q^{12})_\infty (q^{2},q^{22};q^{24})_\infty} {\varphi(-q^2)}\label{m24s-m5} \end{gather}
Identity~\eqref{m24s-m3} is due to Slater~\cite[p. 163, Eq. (107)]{S52}. Identity~\eqref{m24s-m4} is originally due to Dyson~\cite[p. 434, Eq. (D2)]{B47} and also appears in Slater~\cite[p. 160, Eq. (77)]{S52}. The following false theta series identities, which are closely related to identities~\eqref{m24s-1}--\eqref{m24s-m5}, are believed to be new, except for~\eqref{ft7} and ~\eqref{ft9}. Identity~\eqref{ft7} is due to Dyson~\cite[p. 434, Eq.
(E1)]{B47}, while Identity~\eqref{ft9} appears in Ramanujan's lost notebook~\cite[Entry 5.4.2]{AB07} and was rediscovered by Dyson~\cite[p. 434, Eq. (E2)]{B47}.
\begin{multline} \sum_{n=0}^\infty \frac{(-1)^n q^{n(n+1)} (-q^3;q^6)_n} { (q^{2};q^4)_n (-q;q)_{2n+1} }\\ = \sum_{n=0}^\infty (-1)^n q^{18n^2 + 3n}(1+q^{30n+15}) - q \sum_{n=0}^\infty (-1)^n q^{18n^2 + 9n}(1+q^{18n+9}) \label{ft1} \end{multline}
\begin{equation} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+3)} (-q^6;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_{n} (-q^2;q^2)_{n+1}} =\sum_{n=0}^\infty (-1)^n q^{18n^2+12n} (1+q^{12n+6}) \label{ft2} \end{equation}
\begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (-q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q;q)_{2n}} \\=\sum_{n=0}^\infty (-1)^n q^{18n^2+3n}(1+q^{30n+15}) +q^3\sum_{n=0}^\infty (-1)^n q^{18n^2+15n}(1+q^{6n+3}) \label{ft3} \end{multline}
\begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (-q^6;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_{n}^2}\\ =\sum_{n=0}^\infty (-1)^n q^{18n^2+6n} (1+q^{24n+12}) +2 q^4 \sum_{n=0}^\infty (-1)^n q^{18n^2+18n} \label{ft4} \end{multline}
\begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+3)} (-q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q;q)_{2n}}\\ =\sum_{n=0}^\infty (-1)^n q^{18n^2+9n} (1+q^{18n+9}) + q^2 \sum_{n=0}^\infty (-1)^n q^{18n^2+15n} (1+q^{6n+3}) \label{ft5} \end{multline}
\begin{multline} \sum_{n=0}^\infty \frac{(-1)^n q^{n(n+1)} (q^3;q^6)_n }{ (q^{2};q^4)_n (-q^2;q^2)_n (q;q^2)_{n+1} } \\=\sum_{n=0}^\infty q^{18n^2 + 3n}(1-q^{30n+15}) + q \sum_{n=0}^\infty q^{18n^2 + 9n}(1-q^{18n+9}) \label{ft6} \end{multline}
\begin{equation} \sum_{n=0}^\infty \frac{ (-1)^{n} q^{n(n+3)}(q^6;q^6)_{n}}{ (q;q)_{2n+1} (-q;q)_{2n+2} } =\sum_{n=0}^\infty q^{18n^2+12n} (1-q^{12n+6})\label{ft7} \end{equation}
\begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_n (q;q^2)_{n} } \\
=\sum_{n=0}^\infty q^{18n^2 + 3n}(1-q^{30n+15}) -q^3 \sum_{n=0}^\infty q^{18n^2 + 15n}(1-q^{6n+3}) \label{ft8} \end{multline}
\begin{equation} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+1)} (q^6;q^6)_n}{ (q^{2};q^2)_{2n+1} } =\sum_{n=0}^\infty q^{18n^2+6n} (1-q^{24n+12}) \label{ft9} \end{equation}
\begin{multline} \sum_{n=0}^\infty \frac{ (-1)^n q^{n(n+3)} (q^3;q^6)_n}{ (q^{2};q^4)_{n+1} (-q^2;q^2)_n (q;q^2)_{n} } \\=\sum_{n=0}^\infty q^{18n^2+9n} (1-q^{18n+9}) + q^2 \sum_{n=0}^\infty q^{18n^2+15n} (1-q^{6n+3}) \label{ft10} \end{multline}
In \S\ref{StdResults}, we will review some standard definitions and results to be used in the sequel. In \S\ref{Proofs}, we indicate the Bailey pairs necessary to prove Identities~\eqref{m18-1}--\eqref{ft10} and provide the keys to proving Identities~\eqref{m18-1}--\eqref{m24s-m5}. In \S\ref{FT}, we will discuss and prove the false theta series identities~\eqref{ft1}--\eqref{ft10}. Finally, in \S\ref{Lie} we discuss possible connections between Identities~\eqref{m18-1}--\eqref{m18-4} and the standard level 6 modules associated with the Lie algebra $A_{2}^{(2)}$.
\section{Standard definitions and results}\label{StdResults}
We will require a number of definitions and theorems from the literature. It will be convenient to adopt Ramanujan's notation for theta functions~\cite[p. 11, Eqs. (1.1.5)--(1.1.8)]{AB05}.
{\allowdisplaybreaks \begin{defn} For $|ab|<1$, let \begin{align} f(a,b) &:= \sum_{n=-\infty}^\infty a^{n(n+1)/2} b^{n(n-1)/2}, \label{fdef}\\ \varphi(q) &:= f(q,q), \label{phidef}\\ \psi(q) &:= f(q,q^3), \label{psidef}\\ f(-q) &:= f(-q,-q^2).\label{PNSdef} \end{align} \end{defn} }
Both the Jacobi triple product identity and the quintuple product identity were used extensively by Ramanujan (cf.~\cite{AB05}, \cite{AB07}) and Slater~\cite{S52}. Rogers, on the other hand, appears to have been unaware of the quintuple product identity, since he referred to~\cite[p. 333, Eq.
(16)]{R94} \begin{equation} \label{remarkable} \frac{(q^2;q^2)_\infty }{ (q^{30}; q^{30})_\infty (q;q^5)_\infty (q^4;q^5)_\infty} = (q^{13}; q^{30})_\infty (q^{17};q^{30})_\infty + q (q^7;q^{30})_\infty (q^{23};q^{30})_\infty, \end{equation} which follows immediately from the quintuple product identity, as a ``remarkable identity" after observing that both sides of~\eqref{remarkable} are equal to the same series. Accordingly, we have chosen the name ``Ramanujan-Slater type identities" in our title for the identities in this paper rather than ``Rogers-Ramanujan type identities." Many proofs of the Jacobi triple product identity are known; see, e.g.,~\cite[pp. 496--500]{AAR99} for two proofs. For a history and many proofs of the quintuple product identity, see S. Cooper's excellent survey article~\cite{C06}.
\begin{thm}[Jacobi's triple product identity] For $|ab|<1$, \begin{equation} \label{jtp} f(a,b) = (-a, -b, ab ; ab)_\infty. \end{equation} \end{thm}
\begin{thm}[Quintuple product identity] For $|w|<1$ and $x\neq 0$, \begin{multline} \label{qpi} f(-wx^3, -w^2 x^{-3}) + x f(-wx^{-3}, -w^2 x^3) = \frac{ f(w/x, x) f(-w/x^2, -wx^2) }{ f(-w^2) } \\ = (-wx^{-1}, -x, w; w)_\infty (wx^{-2}, wx^2; w^2)_\infty. \end{multline} \end{thm}
The following is a special case of Bailey's ${}_6 \psi_6$ summation formula~\cite[Eq. (4.7)]{B36} which appears in Slater~\cite[p. 464, Eq. (3.1)]{S51}.
\begin{thm}[Bailey] \begin{multline} \label{6psi6} \sum_{r=-\infty}^\infty \frac{ (1-aq^{6r})(q^{-n};q)_{3r} (e;q^3)_r a^{2r} q^{3nr} } {(1-a)(aq^{n+1}; q)_{3r} (aq^3/e;q^3)_r e^r } \\ = \frac{ (a;q^3)_\infty (q^3/a;q^3)_\infty (aq^2/e;q^3)_\infty (aq/e; q^3)_\infty (q;q)_n (aq;q)_n (a^2/e; q^3)_n} { (q;q^3)_\infty (q^2;q^3)_\infty (q^3/e;q^3)_\infty (a^2/e; q^3)_\infty (a;q)_{2n} (aq/e;q)_n }, \end{multline} where $a$ must be a power of $q$ so that the series terminates below. \end{thm}
The next two $q$-hypergeometric summation formulas are due to Andrews~\cite[p. 526, Eqs.
(1.8) and (1.9) respectively]{A73}.
\begin{thm}[$q$-analog of Gauss's ${}_2 F_{1} (\frac 12)$ sum] \begin{equation} \label{q2ndGauss} \sum_{n=0}^\infty \frac{ q^{n(n+1)} (a;q^2)_n (b;q^2)_n } { (q^2;q^2)_n (abq^2;q^4)_n } = \frac{ (aq^2;q^4)_\infty (bq^2;q^4)_\infty } { (q^2;q^4)_\infty (abq^2;q^4)_\infty }. \end{equation} \end{thm}
{\allowdisplaybreaks \begin{thm}[$q$-analog of Bailey's ${}_2 F_1 (\frac 12)$ sum] \begin{equation} \label{qBailey} \sum_{n=0}^\infty \frac{ (bq;q^2)_n (b^{-1}q;q^2)_n c^n q^{n^2} } {(cq;q^2)_n (q^4;q^4)_n } = \frac{ (b^{-1}cq^2;q^4)_\infty (bcq^2; q^4)_\infty }{ (cq;q^2)_\infty }. \end{equation} \end{thm} }
\begin{defn} A pair of sequences \[ \left( \{\alpha_n (a,q) \}_{n=0}^\infty, \{ \beta_n(a,q)\}_{n=0}^\infty \right)\] is called a \emph{Bailey pair relative to $a$} if \begin{equation} \label{BPdef} \beta_n (a,q) = \sum_{r=0}^n \frac{\alpha_r (a,q)}{(a q;q )_{n+r} (q;q)_{n-r}}. \end{equation} \end{defn}
Bailey~\cite[p. 3, Eq. (3.1)]{B49} proved a key result, now known as ``Bailey's lemma," which led to the discovery of many Rogers-Ramanujan type identities. We will require several special cases of Bailey's lemma.
\begin{thm} If $\left( \{ \alpha_n (a,q) \}, \{ \beta_n (a,q) \} \right)$ form a Bailey pair, then {\allowdisplaybreaks \begin{align} \sum_{n=0}^\infty a^n q^{n^2} \beta_n(a,q) &= \frac{1}{(aq;q)_\infty} \sum_{r=0}^\infty a^r q^{r^2} \alpha_r (a,q) \label{aPBL} \\ \sum_{n=0}^\infty a^n q^{n^2} (-q;q^2)_n \beta_n(a,q^2) &= \frac{(-aq;q^2)_\infty }{(aq^2;q^2)_\infty} \sum_{r=0}^\infty a^r q^{r^2} \alpha_r (a,q^2) \label{aTBL}\\ \frac{1}{1-q^2}\sum_{n=0}^\infty q^{n(n+1)} (-q^2;q^2)_n \beta_n(q^2,q^2) &= \frac{1}{\varphi(-q^2)} \sum_{r=0}^\infty q^{r(r+1)} \alpha_r (q^2,q^2). \label{S2BL} \\ \frac{1}{1-q^2}\sum_{n=0}^\infty (-1)^n q^{n(n+1)} (q^2;q^2)_n \beta_n(q^2,q^2) &= \sum_{r=0}^\infty (-1)^r q^{r(r+1)} \alpha_r (q^2,q^2).
\label{FBL} \end{align} } \end{thm}
Eq.~\eqref{aPBL} is~\cite[p. 3, Eq. (3.1) with $\rho_1, \rho_2\to\infty$]{B49}. Eq.~\eqref{aTBL} is~\cite[p. 3, Eq. (3.1) with $\rho_1=-\sqrt{q};\ \rho_2\to\infty$]{B49}. Eq.~\eqref{S2BL} is~\cite[p. 3, Eq. (3.1) with $\rho_1=-q;\ \rho_2\to\infty$]{B49}. Eq.~\eqref{FBL} is~\cite[p. 3, Eq. (3.1) with $\rho_1=q;\ \rho_2\to\infty$]{B49}.
\section{Proofs of Identities~\eqref{m18-1}--\eqref{m24s-m5}}\label{Proofs}
To facilitate the proofs of many of the identities, we will first need to establish a number of Bailey pairs. For instance,
\begin{lem} \label{BP2} If {\allowdisplaybreaks \begin{equation*} \alpha_n (1,q) = \left\{ \begin{array}{ll} 1 &\mbox{if $n=0$}\\ q^{\frac 92 r^2 - \frac 32 r} (1+q^{3r}) &\mbox{if $n=3r>0$}\\ -q^{\frac 92 r^2 - \frac 92 r + 1} &\mbox{if $n=3r-1$}\\ -q^{\frac 92 r^2 + \frac 92 r + 1} &\mbox{if $n=3r+1$} \end{array} \right. \end{equation*} and \[ \beta_n (1,q) = \frac {(-1; q^3)_n } { (q;q)_{2n} (-1; q)_n },\] then $\left( \alpha_n (1,q) , \beta_n (1,q) \right)$ form a Bailey pair relative to $1$.} \end{lem}
\begin{pf} Set $a=q$ and $e=-q^2$ in~\eqref{6psi6} and simplify to obtain \begin{equation} \label{6psi6spec} \sum_{r\in\mathbb Z} \frac{ (1-q^{6r+1}) q^{\frac 92 r^2 -\frac 32 r} } {(q;q)_{n-3r} (q; q)_{n+3r+1} } = \frac{ (-1; q^3)_n} { (q;q)_{2n} (-1;q)_n }.
\end{equation} \begin{align*} &\qquad\quad\sum_{r=0}^n \frac{ \alpha_r(1,q)}{(q;q)_{n-r} (q;q)_{n+r}} \\ &= \frac{1}{(q;q)_n^2} + \sum_{r\geqq 1} \frac{ \alpha_{3r}(1,q)}{(q;q)_{n-3r} (q;q)_{n+3r}} + \sum_{r\geqq 1} \frac{ \alpha_{3r-1}(1,q)}{(q;q)_{n-3r+1} (q;q)_{n+3r-1}} \\ &\qquad\qquad + \sum_{r\geqq 0} \frac{ \alpha_{3r+1}(1,q)}{(q;q)_{n-3r-1} (q;q)_{n+3r+1}} \\ &= \sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 -\frac 32 r} }{(q;q)_{n-3r} (q;q)_{n+3r}} - \sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 + \frac 92 r + 1}}{(q;q)_{n+3r+1} (q;q)_{n-3r-1}} \\ &= \sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 -\frac 32 r} }{(q;q)_{n-3r} (q;q)_{n+3r+1}} \left( (1-q^{n+3r+1}) - q^{6r+1} (1-q^{n-3r}) \right) \\ &=\sum_{r\in\mathbb Z} \frac{ q^{\frac 92 r^2 -\frac 32 r} (1-q^{6r+1}) }{(q;q)_{n-3r} (q;q)_{n+3r+1}} = \frac{(-1;q^3)_n}{(q;q)_{2n} (-1;q)_n} \mbox{ (by~\eqref{6psi6spec}) }. \qed \end{align*} \end{pf}
The other necessary Bailey pairs can be established similarly, so we omit the details and summarize the results in Table~\ref{BPtable}. With the required Bailey pairs in hand, the identities can be proved.
For example, to prove Identity~\eqref{m18-1}, we proceed as follows:
\begin{pf} Insert the Bailey pair P2 into Eq.~\eqref{aPBL} with $a=1$ to obtain \begin{align*} & \quad\qquad \sum_{n=0}^\infty \frac{ q^{n(n+1)} (-1;q^3)_n}{ (q;q)_{2n} (-1;q)_n}\\ & = \frac{1}{(q;q)_\infty}\left( 1 + \sum_{r=1}^\infty q^{\frac{27}{2} r^2 - \frac 32 r} (1+q^{3r}) - \sum_{r=1}^\infty q^{\frac{27}{2} r^2 - \frac {15}{2} r +1} - \sum_{r=0}^\infty q^{\frac{27}{2} r^2 + \frac{15}{2} r+1 } \right) \\ & = \frac{1}{(q;q)_\infty} \left( \sum_{r=-\infty}^\infty q^{\frac{27}{2} r^2 - \frac 32 r} -q \sum_{r=-\infty}^\infty q^{\frac{27}{2} r^2 - \frac {15}{2}r } \right)\\ &= \frac{ f(q^{12}, q^{15}) - q f(q^6, q^{21}) }{ f(-q)} \qquad \qquad\mbox{ (by~\eqref{jtp})} \\ & = \frac{ (q, q^8 , q^9; q^9)_\infty (q^7, q^{11};q^{18})_\infty }{ (q;q)_\infty} \qquad\mbox{ (by~\eqref{qpi}) }. \qed \end{align*} \end{pf}
The details of the proofs of the other identities are similar and therefore omitted, with the key information summarized in Table~\ref{IdTable}.
\begin{landscape} \begin{table}[hbt] \centering
\begin{tabular}{|c|c|c|c|c|c|c|c| } \hline\hline
& $a$ &$e$ & $\beta_n$ & $\alpha_{3r+1}$ & $\alpha_{3r}$ & $\alpha_{3r-1}$ & rel to \\ \hline
P1& $q$ & $-q^2$ & $\frac{(-1;q^3)_n}{(q;q)_{2n} (-1;q)_n}$ & $-q^{\frac 92 r^2 + \frac 92 r + 1} $ & $ q^{\frac 92 r^2 - \frac 32 r} (1+q^{3r})$ & $ -q^{\frac 92 r^2 - \frac 92 r + 1} $ & $1$ \\ \hline
P2& $q$ & $-q^2$ &$\frac {q^n (-1; q^3)_n } { (q;q)_{2n} (-1; q)_n }$ & $-q^{\frac 92 r^2 + \frac 32 r}$ & $q^{\frac 92 r^2 - \frac 32 r} (1+q^{3r}) $ & $-q^{\frac 92 r^2 - \frac 32 r}$ & $1$ \\ \hline
P3& $q^2$ & $-q$ & $ \frac {(-q^3; q^3)_n } { (q^2;q)_{2n} (-q; q)_n }$ & $-2q^{\frac 92 r^2 + \frac 92 r + 1} $ & $ q^{\frac 92 r^2 + \frac 32 r}$ & $q^{\frac 92 r^2 - \frac 32 r} $ & $q$\\ \hline
P4& $q^2$ & $-q^{5/2} $ & $\frac {(-q^{3/2}; q^3)_n } { (q^2;q)_{2n} (-q^{1/2}; q)_n }$ & $-q^{\frac 92 r^2+3r+\frac 12} (1+q^{3r+\frac 32})$ & $q^{\frac 92 r^2}$ & $q^{\frac 92 r^2}$ & $q$ \\ \hline
P5& $q^2$ & $-q^{5/2}$ & $ \frac {q^n (-q^{3/2}; q^3)_n } { (q^2;q)_{2n} (-q^{1/2}; q)_n }$ & $ -q^{\frac 92 r^2} (q^{6r+\frac 32}+q^{3r})$ & $q^{\frac 92 r^2+3r} $ & $q^{\frac 92 r^2-3r} $ & $q$\\ \hline
P6 & $q$ & $-q^2$ & $ \frac {(1-q)(-1; q^3)_n } { (q;q)_{2n} (-1; q)_n }$ & $0$ & $ q^{\frac 92 r^2 - \frac 32 r} (1-q^{6r+1}) $ & $ -q^{\frac 92 r^2 - \frac 92 r + 1}(1-q^{6r-1})$ & $q$\\ \hline
P7 & $q$ & $q^2$ & $\frac {(q^3; q^3)_{n-1} } { (q^2;q)_{2n-1} (q;q)_{n-1} }$ & $(-1)^{r+1} q^{\frac 92 r^2 + \frac 32 r +1}\frac{1-q^{6r+3}}{1-q}$ & $(-1)^r q^{\frac 92 r^2 - \frac 32 r} \frac{1-q^{6r+1}}{1-q} $ & $(-1)^{r+1} q^{\frac 92 r^2 - \frac 92 r+1} \frac{1-q^{6r-1}}{1-q}$ & $q$\\ \hline
\end{tabular}
\caption{By specializing $a$ and $e$ in~\eqref{6psi6} as indicated, each of the following seven Bailey pairs (relative to $1$ or $q$ as stated) can be established. In all cases $\alpha_0 = \beta_0 = 1$.} \label{BPtable}
\end{table} \end{landscape}
\begin{table} \caption{Proofs of identities~\eqref{m18-1}--\eqref{m24s-m5}} \label{IdTable}
\begin{tabular}{|c|c|c|c|c| } \hline\hline
Eq. & Bailey& Bailey & $a$ & \\ & pair & lemma & & \\ \hline
\eqref{m18-1} & P2 & \eqref{aPBL} & $1$ &\\
\eqref{m18-2} & P1 & \eqref{aPBL} & $1$&\\
\eqref{m18-3} & P3 & \eqref{aPBL} & $q$&\\
\eqref{m18-4} & $-$ & $-$ & $-$ & $q^{-1} \times ( \eqref{m18-2} - \eqref{m18-1} ) $\\
\eqref{m18-m1} & $-$ & $-$ & $-$ & ~\cite[p. 433, (B4) $+q\times$(B2)]{B47}\\
\eqref{m18-m2} & $-$ & $-$ & $-$ & ~\cite[p. 433, (B4)$+q^2\times$(B1)]{B47}\\
\eqref{m18-m3} & $-$ & $-$ & $-$ &~\cite[p. 433, (B3)]{B47}\\
\eqref{m18-m4} & $-$ & $-$ & $-$ &~\cite[p. 433, (B2) $-q\times$(B1)]{B47}\\
\eqref{m24t-2} & $-$ & $-$ & $-$ & Set $b=e^{\pi i/3}$ and $c=1$ in \eqref{qBailey}.\\
\eqref{m24t-1} & P2 & \eqref{aTBL} & $1$ &\\
\eqref{m24t-3} & P1 & \eqref{aTBL} & $1$ &\\
\eqref{m24t-4} & $-$ & $-$ & $-$ &Set $b=e^{\pi i/3}$ and $c=q^2$ in~\eqref{qBailey}.\\
\eqref{m24t-5} & $-$ & $-$ & $-$ & $q^{-1}\times (\eqref{m24t-3}-\eqref{m24t-1})$\\
\eqref{m24t-m2} & $-$ & $-$ & $-$ &Set $b=e^{2\pi i/3}$ and $c=1$ in \eqref{qBailey}.\\
\eqref{m24t-m1} & $-$ & $-$ & $-$ & \cite[p. 434, (C3)$ +q\times$(C2)]{B47}\\
\eqref{m24t-m3} & $-$ & $-$ & $-$ & \cite[p.
434, (C3)$ +q^3\times$(C1)]{B47}\\
\eqref{m24t-m4} & $-$ & $-$ & $-$ & Set $b=e^{2\pi i/3}$ and $c=q^2$ in \eqref{qBailey}\\
\eqref{m24t-m5} & $-$ & $-$ & $-$ & $q^{-1}\times (\eqref{m24t-m1}-\eqref{m24t-m3})$\\
\eqref{m24s-1} & $-$ & $-$ & $-$ & $\eqref{m24s-3}-q\times\eqref{m24s-5} $ \\
\eqref{m24s-2} & $-$ & $-$ & $-$ &Set $a= e^{\pi i /3} $, $b=e^{-\pi i/ 3}$ in~\eqref{q2ndGauss}.\\
\eqref{m24s-3} & P4 & \eqref{S2BL} & $q$ &\\
\eqref{m24s-4} & $-$ & $-$ & $-$ &Set $a= e^{\pi i /3} q^2$, $b=e^{-\pi i/ 3} q^2$ in~\eqref{q2ndGauss}.\\
\eqref{m24s-5} & P5 & \eqref{S2BL} & $q$ & \\
\eqref{m24s-m1} & $-$ & $-$ & $-$ & $\eqref{m24s-m3}+q\times\eqref{m24s-m5} $ \\
\eqref{m24s-m2} & $-$ & $-$ & $-$ & Set $a= e^{2\pi i /3} $, $b=e^{-2\pi i/ 3}$ in~\eqref{q2ndGauss}.\\
\eqref{m24s-m3} & J4~\cite[p. 149]{S52} & \eqref{S2BL} & $q$ &\\
\eqref{m24s-m4} &$-$ & $-$ & $-$ & {\small Set $a= e^{2\pi i /3} q^2$, $b=e^{-2\pi i/ 3} q^2$ in~\eqref{q2ndGauss}.}\\
\eqref{m24s-m5} &J5~\cite[p. 149]{S52} & \eqref{S2BL} & $q$ & \\ \hline
\end{tabular} \end{table}
\section{False theta series identities}\label{FT}
Rogers introduced the term ``false theta series" and included a number of related identities in his 1917 paper~\cite{R17}. Ramanujan presented a number of identities involving false theta series in his lost notebook~\cite[pp. 256--259, \S11.5]{AB05}.
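Like the series--product identities of the preceding sections, false theta series identities can be checked numerically by comparing truncated expansions. The following sketch (our own illustration, not part of the paper's proofs) verifies~\eqref{ft9} through $q^{60}$:

```python
# Check the false theta identity (ft9):
#   sum (-1)^n q^(n(n+1)) (q^6;q^6)_n / (q^2;q^2)_{2n+1}
#     = sum q^(18n^2+6n) (1 - q^(24n+12)),
# comparing coefficient lists mod q^(N+1).
N = 60

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i + 1):
                c[i + j] += ai * b[j]
    return c

def poch(k, step, n):
    # (q^k; q^step)_n truncated at q^N
    c = [0] * (N + 1); c[0] = 1
    for j in range(n):
        e = k + j * step
        if e > N:
            break
        f = [0] * (N + 1); f[0] = 1; f[e] = -1
        c = mul(c, f)
    return c

def inv(a):
    b = [0] * (N + 1); b[0] = 1
    for m in range(1, N + 1):
        b[m] = -sum(a[j] * b[m - j] for j in range(1, m + 1))
    return b

# Left side of (ft9)
lhs = [0] * (N + 1)
n = 0
while n * (n + 1) <= N:
    t = mul(poch(6, 6, n), inv(poch(2, 2, 2 * n + 1)))
    s = n * (n + 1)
    sign = (-1) ** n
    for i in range(N + 1 - s):
        lhs[s + i] += sign * t[i]
    n += 1

# Right side of (ft9): the false theta series
rhs = [0] * (N + 1)
n = 0
while 18 * n * n + 6 * n <= N:
    e = 18 * n * n + 6 * n
    rhs[e] += 1
    if e + 24 * n + 12 <= N:
        rhs[e + 24 * n + 12] -= 1
    n += 1

assert lhs == rhs
```

Note that, unlike a genuine theta product, the right side here has the sparse coefficient pattern $1 - q^{12} + q^{24} - \cdots$, which the truncated left side reproduces exactly.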
Recalling that Ramanujan defines the theta function as \begin{align*} f(a,b)&:= \sum_{n=-\infty}^\infty a^{n(n+1)/2} b^{n(n-1)/2}\\ &= \sum_{n=0}^\infty a^{n(n+1)/2} b^{n(n-1)/2} + \sum_{n=1}^\infty a^{n(n-1)/2} b^{n(n+1)/2}\\ &= 1 +a + b + a^3 b + ab^3 + a^6 b^3 + a^3 b^6 + a^{10} b^6 + a^6 b^{10} + \dots, \end{align*} let us define the corresponding \emph{false theta function} as \begin{align*} \Psi(a,b)&:=\sum_{n=0}^\infty a^{n(n+1)/2} b^{n(n-1)/2} - \sum_{n=1}^\infty a^{n(n-1)/2} b^{n(n+1)/2}\\ &= \sum_{n=0}^\infty a^{n(n+1)/2} b^{n(n-1)/2} (1 - b^{2n+1}) \\ &=1 +a - b + a^3 b - ab^3 + a^6 b^3 - a^3 b^6 + a^{10} b^6 - a^6 b^{10} + \dots. \end{align*} In practice, $a$ and $b$ are always taken to be $\pm q^h$ for some integer or half-integer $h$.
The key to the proof of each false theta series identity is indicated in Table~\ref{FTtable}.
\begin{table} \caption{Proofs of identities~\eqref{ft1}--\eqref{ft10}} \label{FTtable}
\begin{tabular}{|c|c|c|c|c| } \hline\hline
Eq. & Bailey pair & form of Bailey lemma & $a$ & \\ \hline
\eqref{ft1} & $-$ & $-$ & $-$ & \eqref{ft3}$-q\times$\eqref{ft5}\\
\eqref{ft2} & P6 & \eqref{FBL} &$q$ & \\
\eqref{ft3} & P4 & \eqref{FBL} &$q$ &\\
\eqref{ft4} & P3 & \eqref{FBL} &$q$ &\\
\eqref{ft5} & P5 & \eqref{FBL} &$q$ &\\
\eqref{ft6} & $-$ & $-$ & $-$ & \eqref{ft8}+$q\times$\eqref{ft10}\\
\eqref{ft7} & P7 & \eqref{FBL} &$q$ &\\
\eqref{ft8} & J4~\cite[p. 149]{S52} & \eqref{FBL} &$q$ &\\
\eqref{ft9} & $-$ & $-$ & $-$ & See~\cite[Entry 5.4.2]{AB07}\\
\eqref{ft10} & J5~\cite[p. 149]{S52} & \eqref{FBL} &$q$ &\\ \hline
\end{tabular} \end{table}
\section{Connections with Lie algebras}\label{Lie}
Let $\mathfrak{g}$ be the affine Kac-Moody Lie algebra $A_1^{(1)}$ or $A_2^{(2)}$. Let $h_0, h_1$ be the usual basis of a maximal toral subalgebra $T$ of $\mathfrak{g}$. Let $d$ denote the ``degree derivation" of $\mathfrak{g}$ and $\tilde{T}:= T \oplus \mathbb C d$.
For all dominant integral $\lambda\in\tilde{T}^*$, there is an essentially unique irreducible, integrable, highest weight module $L(\lambda)$, assuming without loss of generality that $\lambda(d) = 0$. Now $\lambda= s_0 \Lambda_0 + s_1 \Lambda_1$ where $\Lambda_0$ and $\Lambda_1$ are the fundamental weights, given by $\Lambda_i(h_j) = \delta_{ij}$ and $\Lambda_i(d) = 0$; here $s_0$ and $s_1$ are nonnegative integers. For $A_1^{(1)}$, the canonical central element is $c= h_0 + h_1$, while for $A_2^{(2)}$, the canonical central element is $c = h_0 + 2h_1$. The quantity $\lambda(c)$ (which equals $s_0+s_1$ for $A_1^{(1)}$ and which equals $s_0+2s_1$ for $A_2^{(2)}$) is called the \emph{level} of $L(\lambda)$ (cf.~\cite{K90}, \cite{LM78}). Additionally (see~\cite{LM78}), there is an infinite product $F_{\mathfrak{g}}$ associated with $\mathfrak{g}$, often light-heartedly called the ``fudge factor," which needs to be divided out of the principally specialized character $\chi(L(\lambda)) = \chi(s_0 \Lambda_0 + s_1\Lambda_1)$, in order to obtain the quantities of interest here. For $\mathfrak{g}=A_1^{(1)}$, the fudge factor is given by $F_{\mathfrak{g}} = (q;q^2)_\infty^{-1}$, while for $\mathfrak{g}=A_2^{(2)}$, it is given by $F_{\mathfrak{g}} = \left[ (q;q^6)_\infty (q^5;q^6)_\infty \right]^{-1}$. Now $\mathfrak{g}$ has a certain infinite-dimensional Heisenberg subalgebra known as the ``principal Heisenberg vacuum subalgebra" $\mathfrak{s}$ (see~\cite{LW78} for the construction of $A_1^{(1)}$ and~\cite{KKLW81} for that of $A_2^{(2)}$). As shown in~\cite{LW82}, the principal character $\chi(\Omega(s_0 \Lambda_0 + s_1 \Lambda_1))$, where $\Omega(\lambda)$ is the vacuum space for $\mathfrak{s}$ in $L(\lambda)$, is \begin{equation} \label{char} \chi(\Omega(s_0 \Lambda_0 + s_1\Lambda_1)) = \frac{\chi( L(s_0 \Lambda_0 + s_1 \Lambda_1)) }{ F_{\mathfrak{g}} }, \end{equation} where $\chi(L(\lambda))$ is the principally specialized character of $L(\lambda)$.
By~\cite{LM78} applied to~\eqref{char} in the case of $A_1^{(1)}$, for standard modules of odd level $2k+1$, $$\chi(\Omega( (2k-i+2)\Lambda_0 + (i-1)\Lambda_1 ))$$ is given by Andrews' analytic generalization of the Rogers-Ramanujan identities~\cite{A74}: \begin{equation}\label{AndGor} \sum_{n_1, n_2, \dots, n_{k}\geqq 0} \frac{ q^{N_1^2 + N_2^2 + \cdots + N_{k}^2 + N_i+N_{i+1}+\cdots+N_{k}}} {(q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k}} } = \frac{(q^i,q^{2k+3-i},q^{2k+3};q^{2k+3})_\infty }{(q;q)_\infty }, \end{equation} where $1\leqq i \leqq k+1$ and $N_j: = n_j + n_{j+1} + \cdots + n_{k}$. The combinatorial counterpart to~\eqref{AndGor} is Gordon's partition theoretic generalization of the Rogers-Ramanujan identities~\cite{G61}; this generalization was explained vertex-operator theoretically in~\cite{LW84} and~\cite{LW85}. In addition, for the $A_1^{(1)}$ standard modules of even level $2k$, \[ \chi( \Omega( (2k-i+1)\Lambda_0 + (i-1)\Lambda_1 )) \] is given by Bressoud's analytic identity~\cite[p. 15, Eq. (3.4)]{B80} \begin{equation}\label{BressoudEven} \sum_{n_1, n_2, \dots, n_{k}\geqq 0} \frac{ q^{N_1^2 + N_2^2 + \cdots + N_{k}^2 + N_i+N_{i+1}+\cdots+N_{k}}} {(q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q^2;q^2)_{n_k} } = \frac{(q^i,q^{2k+2-i},q^{2k+2};q^{2k+2})_\infty }{(q;q)_\infty }, \end{equation} where $1\leqq i \leqq k+1$, and its partition theoretic counterpart~\cite[p. 64, Theorem, $j=0$ case]{B79}; likewise, this generalization was explained vertex-operator theoretically in~\cite{LW84} and~\cite{LW85}. Notice that the infinite products associated with level $\ell$ standard modules for $A_1^{(1)}$ in~\eqref{AndGor} and~\eqref{BressoudEven} are instances of the Jacobi triple product identity for modulus $\ell+2$ divided by $(q;q)_\infty$. Probably the most efficient way of deriving~\eqref{AndGor} is via the Bailey lattice~\cite{AAB87}, which is an extension of the Bailey chain concept (\cite{A84}; cf. \cite[\S 3.5, pp.
27ff]{A86}) built upon the ``unit Bailey pair" \[ \beta_n(1,q) = \left\{ \begin{array}{ll} 1 &\mbox{if $n=0$}\\ 0 &\mbox{if $n>0$} \end{array} \right. \] \[ \alpha_n(1,q) = \left\{ \begin{array}{ll} 1 &\mbox{if $n=0$}\\ (-1)^n q^{n(n-1)/2} (1+q^n) &\mbox{if $n>0$.} \end{array} \right. \] Similarly, \eqref{BressoudEven} follows from a Bailey lattice built upon the Bailey pair \[ \beta_n(1,q) = \frac{1}{(q^2;q^2)_n}, \] \[ \alpha_n(1,q) = \left\{ \begin{array}{ll} 1 &\mbox{if $n=0$}\\ (-1)^n 2 q^{n^2} &\mbox{if $n>0$.} \end{array} \right. \] Thus the standard modules of $A_1^{(1)}$ may be compactly ``explained" via two interlaced instances of the Bailey lattice. In contrast, the standard modules of $A_{2}^{(2)}$ are not as well understood, and a uniform $q$-series and partition correspondence analogous to what is known for $A_1^{(1)}$ has thus far remained elusive. As with $A_1^{(1)}$, there are $1+\lfloor \frac{\ell}{2} \rfloor$ inequivalent level $\ell$ standard modules associated with the Lie algebra $A_2^{(2)}$, but the analogous quantity for the level $\ell$ standard modules \[ \chi (\Omega( (\ell-2i+2)\Lambda_0 + (i-1)\Lambda_1 )) \] is given by instances of the quintuple product identity (rather than the triple product identity) divided by $(q;q)_\infty$: \begin{equation} \label{A22prodside} \frac{ (q^i, q^{\ell+3-i}, q^{\ell+3}; q^{\ell+3})_\infty (q^{\ell+3-2i}, q^{\ell+2i+3}; q^{2\ell+6})_\infty}{(q;q)_\infty}, \end{equation} where $1\leqq i \leqq 1 + \lfloor \frac{\ell}{2} \rfloor $; see~\cite{LM78}. It seems quite plausible that in the case of $A_2^{(2)}$, the analog of the Andrews-Gordon-Bressoud identities would involve the interlacing of six Bailey lattices in contrast to the two that were necessary for $A_1^{(1)}$. 
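The multisum identities~\eqref{AndGor} are just as amenable to truncated numerical checking as the single sums above. The following sketch (our own illustration, not from the paper) verifies the $k=2$, $i=3$ case of~\eqref{AndGor}, i.e. $\sum_{n_1,n_2\geqq 0} q^{N_1^2+N_2^2}/((q;q)_{n_1}(q;q)_{n_2}) = (q^3,q^4,q^7;q^7)_\infty/(q;q)_\infty$ with $N_1=n_1+n_2$, $N_2=n_2$, through $q^{30}$:

```python
# Check the k=2, i=3 Andrews-Gordon identity (a Rogers-Selberg mod-7 identity)
# as truncated q-series, coefficient lists mod q^(N+1).
N = 30

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i + 1):
                c[i + j] += ai * b[j]
    return c

def poch(k, step, n):
    # (q^k; q^step)_n truncated at q^N
    c = [0] * (N + 1); c[0] = 1
    for j in range(n):
        e = k + j * step
        if e > N:
            break
        f = [0] * (N + 1); f[0] = 1; f[e] = -1
        c = mul(c, f)
    return c

def inv(a):
    b = [0] * (N + 1); b[0] = 1
    for m in range(1, N + 1):
        b[m] = -sum(a[j] * b[m - j] for j in range(1, m + 1))
    return b

# Left side: double sum over n1, n2 with exponent N1^2 + N2^2
lhs = [0] * (N + 1)
for n2 in range(N + 1):
    if 2 * n2 * n2 > N:   # exponent >= 2*n2^2, so nothing more contributes
        break
    for n1 in range(N + 1):
        e = (n1 + n2) ** 2 + n2 * n2
        if e > N:
            break
        t = mul(inv(poch(1, 1, n1)), inv(poch(1, 1, n2)))
        for i in range(N + 1 - e):
            lhs[e + i] += t[i]

# Right side: (q^3, q^4, q^7; q^7)_inf / (q;q)_inf
rhs = mul(poch(3, 7, N), mul(poch(4, 7, N), poch(7, 7, N)))
rhs = mul(rhs, inv(poch(1, 1, N)))

assert lhs == rhs
```

The same loop structure extends to any fixed $k$ and $i$, and to the conjectural $A_2^{(2)}$ analogs~\eqref{lev6k2}--\eqref{lev6k7} discussed below, which is how such multisum candidates are typically screened before a Bailey-chain proof is attempted.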
To see this, consider the following set of Andrews-Gordon-Bressoud type identities where the product sides involve instances of the quintuple product identity rather than the triple product identity:
{\allowdisplaybreaks \begin{multline} \label{lev6k2} \sum_{n_1, n_2, \dots, n_k \geqq 0} \frac{ q^{ N_1(N_1+1)/2 + N_2(N_2+1) + N_3(N_3+1)+ \cdots +N_k(N_k+1) + N_k^2} } { (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k+1} (-q^{N_1+1};q)_\infty} \\= \frac{ (q^{k}, q^{5k-1}, q^{6k-1}; q^{6k-1})_\infty (q^{4k-1},q^{8k-1}; q^{12k-2})_\infty }{(q;q)_\infty} \end{multline}
\begin{multline} \label{lev6k3} \sum_{n_1, n_2, \dots, n_{k+1} \geqq 0} \frac{ q^{N_1^2 + N_2^2 + \cdots +N_{k}^2} \left( \frac{n_k- n_{k+1}+1}{3} \right) } { (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k+1}} (q;q)_{2n_k- n_{k+1} }} \\= \frac{ (q^{k}, q^{5k}, q^{6k}; q^{6k})_\infty (q^{4k},q^{8k}; q^{12k})_\infty }{(q;q)_\infty} \end{multline}
\begin{multline} \label{lev6k4} \sum_{n_1, n_2, \dots, n_k \geqq 0} \frac{ q^{ N_1(N_1+1)/2 + N_2(N_2+1) + N_3(N_3+1)+ \cdots +N_k(N_k+1)} } { (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k+1} (-q^{N_1+1};q)_\infty} \\= \frac{ (q^{2k}, q^{4k+1}, q^{6k+1}; q^{6k+1})_\infty (q^{2k+1},q^{10k+1}; q^{12k+2})_\infty }{(q;q)_\infty} \end{multline} }
{\allowdisplaybreaks \begin{multline} \label{lev6k5} \sum_{n_1, n_2, \dots, n_k \geqq 0} \frac{ q^{N_1^2 + N_2^2 + \cdots +N_{k-1}^2+2N_k^2}} { (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k}} \\= \frac{ (q^{k}, q^{5k+2}, q^{6k+2}; q^{6k+2})_\infty (q^{4k+2},q^{8k+2}; q^{12k+4})_\infty }{(q;q)_\infty} \end{multline} }
{\allowdisplaybreaks \begin{multline} \label{lev6k6} \sum_{n_1, n_2, \dots, n_k \geqq 0} \frac{ q^{N_1^2 + N_2^2 + \cdots +N_k^2} (-1;q^3)_{n_k}}{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k} (-1;q)_{n_k} } \\= \frac{ (q^{k+1}, q^{5k+2}, q^{6k+3}; q^{6k+3})_\infty (q^{4k+1},q^{8k+5};
q^{12k+6})_\infty }{(q;q)_\infty} \end{multline} } {\allowdisplaybreaks \begin{multline}\label{lev6k7} \sum_{n_1, n_2, \dots, n_k \geqq 0} \frac{ q^{N_1^2 + N_2^2 + \cdots +N_k^2}}{ (q;q)_{n_1} (q;q)_{n_2} \cdots (q;q)_{n_{k-1}} (q;q)_{2n_k}} \\= \frac{ (q^{k+1}, q^{5k+3}, q^{6k+4}; q^{6k+4})_\infty (q^{4k+2},q^{8k+6}; q^{12k+8})_\infty }{(q;q)_\infty}, \end{multline} } where $\left( \frac{n}{p} \right)$ in~\eqref{lev6k3} is the Legendre symbol. We note that~\eqref{lev6k3} first appeared in~\cite[p. 400, Eq. (1.7)]{S04} and that \eqref{lev6k7} is due to Andrews~\cite[p. 269, Eq. (1.8)]{A84}. While~\eqref{lev6k2}, \eqref{lev6k4}, and \eqref{lev6k5} probably have not appeared explicitly in the literature, they each follow from building a Bailey chain on a known Bailey pair and may be regarded as nothing more than a standard exercise in light of Andrews' discovery of the Bailey chain~(\cite{A84}; cf.~\cite[\S3.5]{A86}). Indeed the $k=1$ cases of~\eqref{lev6k2},~\eqref{lev6k4},~\eqref{lev6k5}, and~\eqref{lev6k7} are all due to Rogers and appear in Slater's list~\cite{S52} as Eqs. (62), (80), (83), and (98) respectively. On the other hand,~\eqref{lev6k6} is new since it arises from inserting a new Bailey pair, namely the one from Lemma~\ref{BP2} in this paper, into the Bailey chain mechanism. Notice that as $k$ runs through the positive integers in the numerators of the right hand sides of~\eqref{lev6k2}--\eqref{lev6k7}, we obtain instances of the quintuple product identity for all moduli represented in~\eqref{A22prodside} (except for the trivial level 1 case where the relevant identity reduces to ``$1=1$"). It is because of the preceding observations that we conjecture that $A_2^{(2)}$ may be ``explained" by six interlaced Bailey lattices. We now turn our attention to combinatorial considerations in the context of $A_2^{(2)}$. In his 1988 Ph.D. thesis S.
Capparelli~\cite{C88} conjectured two beautiful partition identities resulting from his analysis of the two inequivalent level 3 standard modules of $A_2^{(2)}$, using the theory in~\cite{LW84} and~\cite{LW85}. Capparelli's conjectures were first proved by Andrews~\cite{A94} using combinatorial methods. Later, Lie algebraic proofs were found by Tamba and Xie~\cite{TX95} and Capparelli himself~\cite{C96}. More recently, Capparelli~\cite{C04} related the principal characters of the vacuum spaces for the standard modules of $A_2^{(2)}$ for levels 5 and 7 to some known $q$-series and partition identities. In the same way, our identities~\eqref{m18-1}--\eqref{m18-4} appear to correspond to the standard modules for level 6. \section*{Acknowledgements} Many thanks are due to Jim Lepowsky and Robert Wilson for their help with the exposition in \S\ref{Lie}. We also thank George Andrews for his encouragement and several useful suggestions. Finally, we thank the referee for helpful comments. \end{document}
\begin{document} \baselineskip=17pt \title[Morrey spaces related to nonnegative potentials]{Morrey spaces related to certain nonnegative potentials and fractional integrals on the Heisenberg groups} \author[H. Wang]{Hua Wang} \address{College of Mathematics and Econometrics, Hunan University, Changsha, 410082, P. R. China\\ \&~Department of Mathematics and Statistics, Memorial University, St. John's, NL A1C 5S7, Canada} \email{[email protected]} \date{} \begin{abstract} Let $\mathcal L=-\Delta_{\mathbb H^n}+V$ be a Schr\"odinger operator on the Heisenberg group $\mathbb H^n$, where $\Delta_{\mathbb H^n}$ is the sub-Laplacian on $\mathbb H^n$ and the nonnegative potential $V$ belongs to the reverse H\"older class $RH_s$ with $s\geq Q/2$. Here $Q=2n+2$ is the homogeneous dimension of $\mathbb H^n$. For given $\alpha\in(0,Q)$, the fractional integral associated to the Schr\"odinger operator $\mathcal L$ is defined by $\mathcal I_{\alpha}={\mathcal L}^{-{\alpha}/2}$. In this article, we first introduce the Morrey space $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ and weak Morrey space $WL^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ related to the nonnegative potential $V$. Then we establish the boundedness of fractional integrals ${\mathcal L}^{-{\alpha}/2}$ on these new spaces. Furthermore, in order to deal with certain extreme cases, we also introduce the spaces $\mathrm{BMO}_{\rho,\infty}(\mathbb H^n)$ and $\mathcal{C}^{\beta}_{\rho,\infty}(\mathbb H^n)$ with exponent $\beta\in(0,1]$. \end{abstract} \subjclass[2010]{Primary 42B20; 35J10; Secondary 22E25; 22E30} \keywords{Schr\"odinger operator; fractional integrals; Heisenberg group; Morrey spaces; reverse H\"older class} \maketitle \section{Introduction} \subsection{Heisenberg group $\mathbb H^n$} The \emph{Heisenberg group} $\mathbb H^n$ is a nilpotent Lie group with underlying manifold $\mathbb C^n\times\mathbb R$.
The group structure (the multiplication law) is given by \begin{equation*} (z,t)\cdot(z',t'):=\Big(z+z',t+t'+2\mathrm{Im}(z\cdot\overline{z'})\Big), \end{equation*} where $z=(z_1,z_2,\dots,z_n)$, $z'=(z_1',z_2',\dots,z_n')\in\mathbb C^n$, and \begin{equation*} z\cdot\overline{z'}:=\sum_{j=1}^nz_j\overline{z_j'}. \end{equation*} It can be easily seen that the inverse element of $u=(z,t)$ is $u^{-1}=(-z,-t)$, and the identity is the origin $(0,0)$. The Lie algebra of left-invariant vector fields on $\mathbb H^n$ is spanned by \begin{equation*} \begin{cases} X_j=\displaystyle\frac{\partial}{\partial x_j}+2y_j\frac{\partial}{\partial t},\quad j=1,2,\dots,n,&\\ Y_j=\displaystyle\frac{\partial}{\partial y_j}-2x_j\frac{\partial}{\partial t},\quad j=1,2,\dots,n,&\\ T=\displaystyle\frac{\partial}{\partial t}.& \end{cases} \end{equation*} All non-trivial commutation relations are given by \begin{equation*} [X_j,Y_j]=-4T,\quad j=1,2,\dots,n. \end{equation*} The sub-Laplacian $\Delta_{\mathbb H^n}$ is defined by \begin{equation*} \Delta_{\mathbb H^n}:=\sum_{j=1}^n\big(X_j^2+Y_j^2\big). \end{equation*} The dilations on $\mathbb H^n$ have the following form \begin{equation*} \delta_a(z,t):=(az,a^2t),\quad a>0. \end{equation*} For given $(z,t)\in\mathbb H^n$, the \emph{homogeneous norm} of $(z,t)$ is given by \begin{equation*} |(z,t)|=\big(|z|^4+t^2\big)^{1/4}. \end{equation*} Observe that $|(z,t)^{-1}|=|(z,t)|$ and \begin{equation*} \big|\delta_a(z,t)\big|=\big(|az|^4+(a^2t)^2\big)^{1/4}=a|(z,t)|. \end{equation*} In addition, this norm $|\cdot|$ satisfies the triangle inequality and leads to a left-invariant distance $d(u,v)=\big|u^{-1}\cdot v\big|$ for $u=(z,t)$, $v=(z',t')\in\mathbb H^n$. The ball of radius $r$ centered at $u$ is denoted by \begin{equation*} B(u,r)=\big\{v\in\mathbb H^n:d(u,v)<r\big\}. \end{equation*} The Haar measure on $\mathbb H^n$ coincides with the Lebesgue measure on $\mathbb R^{2n}\times\mathbb R$.
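The algebraic facts just listed are elementary to verify directly. As an illustrative numerical sketch (not part of the development), the following Python code checks the group law, inverses, the homogeneity $|\delta_a(z,t)|=a|(z,t)|$, and the left invariance of $d$ for $n=1$:

```python
# Illustrative numerical check (a sketch, not part of the paper) of the group
# structure of H^1: the multiplication law, inverses, homogeneity of the
# norm |(z,t)| = (|z|^4 + t^2)^{1/4}, and left invariance of d(u,v) = |u^{-1}.v|.
import random

def mult(u, v):
    # (z,t).(z',t') = (z + z', t + t' + 2 Im(z . conj(z')))
    z, t = u
    zp, tp = v
    return (z + zp, t + tp + 2 * (z * zp.conjugate()).imag)

def inverse(u):
    z, t = u
    return (-z, -t)

def norm(u):
    z, t = u
    return (abs(z) ** 4 + t ** 2) ** 0.25

def dilate(a, u):
    # delta_a(z,t) = (a z, a^2 t)
    z, t = u
    return (a * z, a * a * t)

def dist(u, v):
    return norm(mult(inverse(u), v))

random.seed(0)
def rand_pt():
    return (complex(random.uniform(-2, 2), random.uniform(-2, 2)),
            random.uniform(-2, 2))

for _ in range(100):
    u, v, w = rand_pt(), rand_pt(), rand_pt()
    a = random.uniform(0.1, 3.0)
    # associativity of the group law
    p1, p2 = mult(mult(u, v), w), mult(u, mult(v, w))
    assert abs(p1[0] - p2[0]) < 1e-9 and abs(p1[1] - p2[1]) < 1e-9
    # u^{-1}.u is the identity (0,0), and |u^{-1}| = |u|
    e = mult(inverse(u), u)
    assert abs(e[0]) < 1e-9 and abs(e[1]) < 1e-9
    assert abs(norm(inverse(u)) - norm(u)) < 1e-9
    # homogeneity: |delta_a u| = a |u|
    assert abs(norm(dilate(a, u)) - a * norm(u)) < 1e-9
    # left invariance: d(w.u, w.v) = d(u, v)
    assert abs(dist(mult(w, u), mult(w, v)) - dist(u, v)) < 1e-9
```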
The measure of any measurable set $E\subset\mathbb H^n$ is denoted by $|E|$. For $(u,r)\in\mathbb H^n\times(0,\infty)$, it can be shown that the volume of $B(u,r)$ is \begin{equation*} |B(u,r)|=r^{Q}\cdot|B(0,1)|, \end{equation*} where $Q:=2n+2$ is the \emph{homogeneous dimension} of $\mathbb H^n$ and $|B(0,1)|$ is the volume of the unit ball in $\mathbb H^n$. A direct calculation shows that \begin{equation*} |B(0,1)|=\frac{2\pi^{n+\frac{\,1\,}{2}}\Gamma(\frac{\,n\,}{2})}{(n+1)\Gamma(n)\Gamma(\frac{n+1}{2})}. \end{equation*} Given a ball $B=B(u,r)$ in $\mathbb H^n$ and $\lambda>0$, we shall use the notation $\lambda B$ to denote $B(u,\lambda r)$. Clearly, we have \begin{equation}\label{homonorm} |B(u,\lambda r)|=\lambda^{Q}\cdot|B(u,r)|. \end{equation} For more information about the harmonic analysis on the Heisenberg groups, we refer the reader to \cite[Chapter XII]{stein2} and \cite{thangavelu}. Let $V:\mathbb H^n\rightarrow\mathbb R$ be a nonnegative locally integrable function that belongs to the \emph{reverse H\"older class} $RH_s$ for some exponent $1<s<\infty$; i.e., there exists a positive constant $C>0$ such that the following reverse H\"older inequality \begin{equation*} \left(\frac{1}{|B|}\int_B V(w)^s\,dw\right)^{1/s}\leq C\left(\frac{1}{|B|}\int_B V(w)\,dw\right) \end{equation*} holds for every ball $B$ in $\mathbb H^n$. For given $V\in RH_s$ with $s\geq Q/2$, we introduce the \emph{critical radius function} $\rho(u)=\rho(u;V)$ which is given by \begin{equation}\label{rho} \rho(u):=\sup\bigg\{r>0:\frac{1}{r^{Q-2}}\int_{B(u,r)}V(w)\,dw\leq1\bigg\},\quad u\in\mathbb H^n, \end{equation} where $B(u,r)$ denotes the ball in $\mathbb H^n$ centered at $u$ and with radius $r$. It is well known that this auxiliary function satisfies $0<\rho(u)<\infty$ for any $u\in\mathbb H^n$ under the above assumption on $V$ (see \cite{lu}). We need the following known result concerning the critical radius function \eqref{rho}. 
\begin{lem}[\cite{lu}]\label{N0} If $V\in RH_s$ with $s\geq Q/2$, then there exist constants $C_0\geq 1$ and $N_0>0$ such that for all $u$ and $v$ in $\mathbb H^n$, \begin{equation}\label{com} \frac{\,1\,}{C_0}\left(1+\frac{|v^{-1}u|}{\rho(u)}\right)^{-N_0}\leq\frac{\rho(v)}{\rho(u)}\leq C_0\left(1+\frac{|v^{-1}u|}{\rho(u)}\right)^{\frac{N_0}{N_0+1}}. \end{equation} \end{lem} Lemma \ref{N0} is due to Lu \cite{lu}. In the setting of $\mathbb R^n$, this result was given by Shen in \cite{shen}. As a straightforward consequence of \eqref{com}, we can see that for each integer $k\geq1$, the following estimate \begin{equation}\label{com2} 1+\frac{2^kr}{\rho(v)}\geq \frac{1}{C_0}\left(1+\frac{r}{\rho(u)}\right)^{-\frac{N_0}{N_0+1}}\left(1+\frac{2^kr}{\rho(u)}\right) \end{equation} holds for any $v\in B(u,r)$ with $u\in\mathbb H^n$ and $r>0$, where $C_0$ is the same as in \eqref{com}. \subsection{Fractional integrals} First we recall the fractional power of the Laplacian operator on $\mathbb R^n$. For given $\alpha\in(0,n)$, the classical fractional integral operator $I^{\Delta}_{\alpha}$ (also referred to as the Riesz potential) is defined by \begin{equation*} I^{\Delta}_{\alpha}(f):=(-\Delta)^{-\alpha/2}(f), \end{equation*} where $\Delta$ is the Laplacian operator on $\mathbb R^n$. If $f\in\mathcal S(\mathbb R^n)$, then by virtue of the Fourier transform, we have \begin{equation*} \widehat{I^{\Delta}_{\alpha}f}(\xi)=(2\pi|\xi|)^{-\alpha}\widehat{f}(\xi),\quad \forall\,\xi\in\mathbb R^n.
\end{equation*} Comparing this to the Fourier transform of $|x|^{-\alpha}$, $0<\alpha<n$, we are led to redefine the fractional integral operator $I^{\Delta}_{\alpha}$ by \begin{equation}\label{frac} I^{\Delta}_{\alpha}f(x):=\frac{1}{\gamma(\alpha)}\int_{\mathbb R^n}\frac{f(y)}{|x-y|^{n-\alpha}}\,dy, \end{equation} where \begin{equation*} \gamma(\alpha)=\frac{\pi^{\frac{n}{\,2\,}}2^\alpha\Gamma(\frac{\alpha}{\,2\,})}{\Gamma(\frac{n-\alpha}{2})} \end{equation*} with $\Gamma(\cdot)$ being the usual gamma function. The well-known Hardy-Littlewood-Sobolev theorem states that the fractional integral operator $I^{\Delta}_{\alpha}$ is bounded from $L^p(\mathbb R^n)$ to $L^q(\mathbb R^n)$ for $0<\alpha<n$, $1<p<n/{\alpha}$ and $1/q=1/p-{\alpha}/n$. Moreover, $I^{\Delta}_{\alpha}$ is bounded from $L^1(\mathbb R^n)$ to $WL^q(\mathbb R^n)$ for $0<\alpha<n$ and $q=n/{(n-\alpha)}$ (see \cite{stein}). Next we are going to discuss the fractional integrals on the Heisenberg group. For given $\alpha\in(0,Q)$ with $Q=2n+2$, the fractional integral operator $I_{\alpha}$ (also referred to as the Riesz potential) is defined by (see \cite{xiao}) \begin{equation}\label{frac2} I_{\alpha}(f):=(-\Delta_{\mathbb H^n})^{-\alpha/2}(f), \end{equation} where $\Delta_{\mathbb H^n}$ is the sub-Laplacian on $\mathbb H^n$ defined above. Let $f$ and $g$ be integrable functions defined on $\mathbb H^n$. Define the \emph{convolution} $f*g$ by \begin{equation*} (f*g)(u):=\int_{\mathbb H^n}f(v)g(v^{-1}u)\,dv. \end{equation*} We denote by $H_s(u)$ the convolution kernel of the heat semigroup $\big\{T_s=e^{s\Delta_{\mathbb H^n}}:s>0\big\}$. Namely, \begin{equation*} e^{s\Delta_{\mathbb H^n}}f(u)=\int_{\mathbb H^n}H_s(v^{-1}u)f(v)\,dv.
\end{equation*} For any $u=(z,t)\in\mathbb H^n$, it was proved in \cite[Theorem 4.2]{xiao} that $I_{\alpha}$ can be expressed by the following formula: \begin{equation}\label{frac3} \begin{split} I_{\alpha}f(u)&=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}e^{s\Delta_{\mathbb H^n}}f(u)\,s^{\alpha/2-1}ds\\ &=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\big(H_s*f\big)(u)\,s^{\alpha/2-1}ds. \end{split} \end{equation} Let $V\in RH_s$ for $s\geq Q/2$. For such a potential $V$, we consider the time-independent \emph{Schr\"odinger operator} on $\mathbb H^n$ (see \cite{lin}), \begin{equation*} \mathcal L:=-\Delta_{\mathbb H^n}+V, \end{equation*} and its associated semigroup \begin{equation*} \mathcal T^{\mathcal L}_sf(u):=e^{-s\mathcal L}f(u)=\int_{\mathbb H^n}P_s(u,v)f(v)\,dv,\quad f\in L^2(\mathbb H^n),~s>0, \end{equation*} where $P_s(u,v)$ denotes the kernel of the operator $e^{-s\mathcal L}$, $s>0$. For any $u=(z,t)\in\mathbb H^n$, it is well-known that the heat kernel $H_s(u)$ has the explicit expression: \begin{equation*} H_s(z,t)=(2\pi)^{-1}(4\pi)^{-n}\int_{\mathbb R}\bigg(\frac{|\lambda|}{\sinh|\lambda|s}\bigg)^n\exp\left\{-\frac{|\lambda||z|^2}{4}\coth|\lambda|s-i\lambda t\right\}d\lambda, \end{equation*} and hence it satisfies the following estimate (see \cite{jerison} for instance) \begin{equation}\label{heatkernel} 0\leq H_s(u)\leq C\cdot s^{-Q/2}\exp\bigg(-\frac{|u|^2}{As}\bigg), \end{equation} where the constants $C,A>0$ are independent of $s$ and $u\in\mathbb H^n$. Since $V\geq0$, by the \emph{Trotter product formula} and \eqref{heatkernel}, one has \begin{equation}\label{heat} 0\leq P_s(u,v)\leq H_s(v^{-1}u)\leq C\cdot s^{-Q/2}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg),\quad s>0. \end{equation} Moreover, this estimate \eqref{heat} can be improved when $V$ belongs to the reverse H\"older class $RH_s$ for some $s\geq Q/2$. The auxiliary function $\rho(u)$ arises naturally in this context.
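Gaussian bounds such as \eqref{heat} are converted into power decay via the elementary identity $\int_0^{\infty}e^{-c/s}\,s^{-\gamma-1}\,ds=\Gamma(\gamma)\,c^{-\gamma}$ for $\gamma>0$ (obtained from the substitution $t=c/s$), which is applied below with $\gamma=(Q-\alpha)/2$ and $c=|v^{-1}u|^2/A$ in the derivation of \eqref{claim}. The following Python sketch is only an illustrative numerical check of this identity:

```python
# Numerical sanity check (a sketch, not part of the paper) of the identity
#   int_0^inf exp(-c/s) s^{-gamma-1} ds = Gamma(gamma) c^{-gamma},  gamma > 0,
# obtained from the substitution t = c/s; in the kernel estimates it is
# applied with gamma = (Q - alpha)/2 and c = |v^{-1}u|^2 / A.
import math

def integral(c, gamma, lo=-30.0, hi=40.0, steps=200_000):
    # Trapezoid rule after substituting s = e^x, so that the integrand
    # exp(-c e^{-x}) e^{-gamma x} decays rapidly at both ends.
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        val = math.exp(-c * math.exp(-x)) * math.exp(-gamma * x)
        total += val if 0 < i < steps else 0.5 * val
    return total * h

for c, gamma in [(1.0, 1.5), (2.0, 3.0), (0.5, 2.5)]:
    exact = math.gamma(gamma) * c ** (-gamma)
    assert abs(integral(c, gamma) - exact) / exact < 1e-6
```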
\begin{lem}\label{ker1} Let $V\in RH_s$ with $s\geq Q/2$, and let $\rho(u)$ be the auxiliary function determined by $V$. For every positive integer $N\geq1$, there exists a positive constant $C_N>0$ such that for all $u$ and $v$ in $\mathbb H^n$, \begin{equation*} 0\leq P_s(u,v)\leq C_N\cdot s^{-Q/2}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(v)}\bigg)^{-N},\quad s>0. \end{equation*} \end{lem} This estimate of $P_s(u,v)$, which was given by Lin and Liu in \cite[Lemma 7]{lin}, improves on \eqref{heat}. Inspired by \eqref{frac2} and \eqref{frac3}, for given $\alpha\in(0,Q)$, the \emph{$\mathcal L$-fractional integral operator} or \emph{$\mathcal L$-Riesz potential} on the Heisenberg group is defined by (see \cite{jiang} and \cite{jiang2}) \begin{equation*} \begin{split} \mathcal I_{\alpha}(f)(u)&:={\mathcal L}^{-{\alpha}/2}f(u)\\ &=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}e^{-s\mathcal L}f(u)\,s^{\alpha/2-1}ds. \end{split} \end{equation*} Recall that in the setting of $\mathbb R^n$, this integral operator was first introduced by Dziuba\'{n}ski et al.~\cite{dziu}. In this article we shall be interested in the behavior of the fractional integral operator $\mathcal I_{\alpha}$ associated to the Schr\"odinger operator on $\mathbb H^n$. For $1\leq p<\infty$, the Lebesgue space $L^p(\mathbb H^n)$ is defined to be the set of all measurable functions $f$ on $\mathbb H^n$ such that \begin{equation*} \big\|f\big\|_{L^p(\mathbb H^n)}:=\bigg(\int_{\mathbb H^n}\big|f(u)\big|^p\,du\bigg)^{1/p}<\infty. \end{equation*} The weak Lebesgue space $WL^p(\mathbb H^n)$ consists of all measurable functions $f$ on $\mathbb H^n$ such that \begin{equation*} \big\|f\big\|_{WL^p(\mathbb H^n)}:= \sup_{\lambda>0}\lambda\cdot\big|\big\{u\in\mathbb H^n:|f(u)|>\lambda\big\}\big|^{1/p}<\infty.
\end{equation*} Now we are going to establish strong-type and weak-type estimates of the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ on the Lebesgue spaces. We first claim that the following estimate \begin{equation}\label{claim} |\mathcal I_{\alpha}f(u)|\leq C\int_{\mathbb H^n}|f(v)|\frac{1}{|v^{-1}u|^{Q-\alpha}}\,dv=C\big(|f|*|\cdot|^{\alpha-Q}\big)(u) \end{equation} holds for all $u\in\mathbb H^n$. Let us verify \eqref{claim}. To do so, denote by $\mathcal K_{\alpha}(u,v)$ the kernel of the fractional integral operator $\mathcal I_{\alpha}$. Then we have \begin{equation*} \begin{split} \int_{\mathbb H^n}\mathcal K_{\alpha}(u,v)f(v)\,dv&=\mathcal I_{\alpha}f(u)={\mathcal L}^{-{\alpha}/2}f(u)\\ &=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}e^{-s\mathcal L}f(u)\,s^{\alpha/2-1}ds\\ &=\int_0^{\infty}\bigg[\frac{1}{\Gamma(\alpha/2)}\int_{\mathbb H^n}P_s(u,v)f(v)\,dv\bigg]s^{\alpha/2-1}ds\\ &=\int_{\mathbb H^n}\bigg[\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}P_s(u,v)\,s^{\alpha/2-1}ds\bigg]f(v)\,dv. \end{split} \end{equation*} Hence, \begin{equation*} \mathcal K_{\alpha}(u,v)=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}P_s(u,v)\,s^{\alpha/2-1}ds. \end{equation*} Moreover, by using \eqref{heat}, we can deduce that \begin{equation*} \begin{split} \big|\mathcal K_{\alpha}(u,v)\big|&\leq\frac{C}{\Gamma(\alpha/2)}\int_0^{\infty}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg)s^{\alpha/2-Q/2-1}ds\\ &\leq\frac{C}{\Gamma(\alpha/2)}\cdot\frac{1}{|v^{-1}u|^{Q-\alpha}}\int_0^{\infty}e^{-t}\,t^{(Q/2-\alpha/2)-1}dt\\ &=C\cdot\frac{\Gamma(Q/2-\alpha/2)}{\Gamma(\alpha/2)}\cdot\frac{1}{|v^{-1}u|^{Q-\alpha}}, \end{split} \end{equation*} where in the second step we have used a change of variables. Thus \eqref{claim} holds. According to Theorems 4.4 and 4.5 in \cite{xiao}, we get the Hardy-Littlewood-Sobolev theorem on the Heisenberg group. \begin{thm}\label{strong} Let $0<\alpha<Q$ and $1\leq p<Q/{\alpha}$. Define $1<q<\infty$ by the relation $1/q=1/p-{\alpha}/Q$. 
Then the following statements are valid: \begin{enumerate} \item if $p>1$, then $\mathcal I_{\alpha}$ is bounded from $L^p(\mathbb H^n)$ to $L^q(\mathbb H^n)$; \item if $p=1$, then $\mathcal I_{\alpha}$ is bounded from $L^1(\mathbb H^n)$ to $WL^q(\mathbb H^n)$. \end{enumerate} \end{thm} The organization of this paper is as follows. In Section 2, we will give the definitions of Morrey space and weak Morrey space and state our main results: Theorems \ref{mainthm:1}, \ref{mainthm:2} and \ref{mainthm:3}. Section 3 is devoted to proving the boundedness of the fractional integral operator in the context of Morrey spaces. We will study certain extreme cases in Section 4. Throughout this paper, $C$ represents a positive constant that is independent of the main parameters, but may be different from line to line, and a subscript is added when we wish to make clear its dependence on the parameter in the subscript. We also use $a\approx b$ to denote the equivalence of $a$ and $b$; that is, there exist two positive constants $C_1$, $C_2$ independent of $a,b$ such that $C_1a\leq b\leq C_2a$. \section{Main results} In this section, we introduce some types of Morrey spaces related to the nonnegative potential $V$ on $\mathbb H^n$, and then give our main results. \begin{defin} Let $\rho$ be the auxiliary function determined by $V\in RH_s$ with $s\geq Q/2$. Let $1\leq p<\infty$ and $0\leq\kappa<1$. For given $0<\theta<\infty$, the Morrey space $L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all $p$-locally integrable functions $f$ on $\mathbb H^n$ such that \begin{equation}\label{morrey1} \bigg(\frac{1}{|B|^{\kappa}}\int_B\big|f(u)\big|^p\,du\bigg)^{1/p} \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta} \end{equation} for every ball $B=B(u_0,r)$ in $\mathbb H^n$. 
A norm for $f\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$, denoted by $\|f\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}$, is given by the infimum of the constants in \eqref{morrey1}, or equivalently, \begin{equation*} \big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}:=\sup_{B(u_0,r)}\left(1+\frac{r}{\rho(u_0)}\right)^{-\theta} \bigg(\frac{1}{|B|^{\kappa}}\int_B\big|f(u)\big|^p\,du\bigg)^{1/p} <\infty, \end{equation*} where the supremum is taken over all balls $B=B(u_0,r)$ in $\mathbb H^n$; here $u_0$ and $r$ denote the center and radius of $B$, respectively. Define \begin{equation*} L^{p,\kappa}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}L^{p,\kappa}_{\rho,\theta}(\mathbb H^n). \end{equation*} \end{defin} \begin{defin} Let $\rho$ be the auxiliary function determined by $V\in RH_s$ with $s\geq Q/2$. Let $1\leq p<\infty$ and $0\leq\kappa<1$. For given $0<\theta<\infty$, the weak Morrey space $WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all measurable functions $f$ on $\mathbb H^n$ such that \begin{equation*} \frac{1}{|B|^{\kappa/p}}\sup_{\lambda>0}\lambda\cdot\big|\big\{u\in B:|f(u)|>\lambda\big\}\big|^{1/p} \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta} \end{equation*} for every ball $B=B(u_0,r)$ in $\mathbb H^n$, or equivalently, \begin{equation*} \big\|f\big\|_{WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}:=\sup_{B(u_0,r)}\left(1+\frac{r}{\rho(u_0)}\right)^{-\theta}\frac{1}{|B|^{\kappa/p}} \sup_{\lambda>0}\lambda\cdot\big|\big\{u\in B:|f(u)|>\lambda\big\}\big|^{1/p}<\infty. \end{equation*} Correspondingly, we define \begin{equation*} WL^{p,\kappa}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n). \end{equation*} \end{defin} Obviously, if we take $\theta=0$ or $V\equiv0$, then this Morrey space (or weak Morrey space) is just the Morrey space $L^{p,\kappa}(\mathbb H^n)$ (or $WL^{p,\kappa}(\mathbb H^n)$), which was defined by Guliyev et al.~\cite{guliyev}.
Moreover, according to the above definitions, one has \begin{equation*} \begin{cases} L^{p,\kappa}(\mathbb H^n)\subset L^{p,\kappa}_{\rho,\theta_1}(\mathbb H^n)\subset L^{p,\kappa}_{\rho,\theta_2}(\mathbb H^n);&\\ WL^{p,\kappa}(\mathbb H^n)\subset WL^{p,\kappa}_{\rho,\theta_1}(\mathbb H^n)\subset WL^{p,\kappa}_{\rho,\theta_2}(\mathbb H^n),& \end{cases} \end{equation*} for $0<\theta_1<\theta_2<\infty$. Hence $L^{p,\kappa}(\mathbb H^n)\subset L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ and $WL^{p,\kappa}(\mathbb H^n)\subset WL^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ for $(p,\kappa)\in[1,\infty)\times[0,1)$. The space $L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ (or $WL^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$) could be viewed as an extension of Lebesgue (or weak Lebesgue) space on $\mathbb H^n$ (when $\kappa=\theta=0$). In this article we will extend the Hardy-Littlewood-Sobolev theorem on $\mathbb H^n$ to the Morrey spaces. We now present our main results as follows. \begin{thm}\label{mainthm:1} Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $V\in RH_s$ with $s\geq Q/2$ and $0<\kappa<p/q$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $L^{q,{(\kappa q)}/p}_{\rho,\infty}(\mathbb H^n)$. \end{thm} \begin{thm}\label{mainthm:2} Let $0<\alpha<Q$, $p=1$ and $q=Q/{(Q-\alpha)}$. If $V\in RH_s$ with $s\geq Q/2$ and $0<\kappa<1/q$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{1,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $WL^{q,(\kappa q)}_{\rho,\infty}(\mathbb H^n)$. 
\end{thm} Before stating our next theorem, we need to introduce a new space $\mathrm{BMO}_{\rho,\infty}(\mathbb H^n)$ defined by \begin{equation*} \mathrm{BMO}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}\mathrm{BMO}_{\rho,\theta}(\mathbb H^n), \end{equation*} where for $0<\theta<\infty$ the space $\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all locally integrable functions $f$ satisfying \begin{equation}\label{BM} \frac{1}{|B(u_0,r)|}\int_{B(u_0,r)}\big|f(u)-f_{B(u_0,r)}\big|\,du \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}, \end{equation} for all $u_0\in\mathbb H^n$ and $r>0$, where $f_{B(u_0,r)}$ denotes the mean value of $f$ on $B(u_0,r)$, that is, \begin{equation*} f_{B(u_0,r)}:=\frac{1}{|B(u_0,r)|}\int_{B(u_0,r)}f(v)\,dv. \end{equation*} A norm for $f\in\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$, denoted by $\|f\|_{\mathrm{BMO}_{\rho,\theta}}$, is given by the infimum of the constants satisfying \eqref{BM}, or equivalently, \begin{equation*} \|f\|_{\mathrm{BMO}_{\rho,\theta}} :=\sup_{B(u_0,r)}\left(1+\frac{r}{\rho(u_0)}\right)^{-\theta}\bigg(\frac{1}{|B(u_0,r)|}\int_{B(u_0,r)}\big|f(u)-f_{B(u_0,r)}\big|\,du\bigg), \end{equation*} where the supremum is taken over all balls $B(u_0,r)$ with $u_0\in\mathbb H^n$ and $r>0$. Recall that in the setting of $\mathbb R^n$, the space $\mathrm{BMO}_{\rho,\theta}(\mathbb R^n)$ was first introduced by Bongioanni et al.~\cite{bong2} (see also \cite{bong3}). Moreover, given any $\beta\in[0,1]$, we introduce the space of H\"older continuous functions on $\mathbb H^n$ with exponent $\beta$:
\begin{equation*} \mathcal{C}^{\beta}_{\rho,\infty}(\mathbb H^n):=\bigcup_{\theta>0}\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n), \end{equation*} where for $0<\theta<\infty$ the space $\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n)$ is defined to be the set of all locally integrable functions $f$ satisfying \begin{equation}\label{hconti} \frac{1}{|B(u_0,r)|^{1+\beta/Q}}\int_{B(u_0,r)}\big|f(u)-f_{B(u_0,r)}\big|\,du \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}, \end{equation} for all $u_0\in\mathbb H^n$ and $r\in(0,\infty)$. The smallest bound $C$ for which \eqref{hconti} is satisfied is then taken to be the norm of $f$ in this space and is denoted by $\|f\|_{\mathcal{C}^{\beta}_{\rho,\theta}}$. When $\theta=0$ or $V\equiv0$, $\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$ and $\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n)$ will be simply written as $\mathrm{BMO}(\mathbb H^n)$ and $\mathcal{C}^{\beta}(\mathbb H^n)$, respectively. Note that when $\beta=0$ this space $\mathcal{C}^{\beta}_{\rho,\theta}(\mathbb H^n)$ reduces to the space $\mathrm{BMO}_{\rho,\theta}(\mathbb H^n)$ mentioned above. For the case $\kappa\geq p/q$ of Theorem \ref{mainthm:1}, we will prove the following result. \begin{thm}\label{mainthm:3} Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $V\in RH_s$ with $s\geq Q/2$ and $p/q\leq\kappa<1$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $\mathcal{C}^{\beta}_{\rho,\infty}(\mathbb H^n)$ with $\beta/Q=\kappa/p-1/q$ and $\beta$ sufficiently small. To be more precise, $\beta<\delta\leq1$ and $\delta$ is given as in Lemma $\ref{kernel2}$. \end{thm} In particular, for the limiting case $\kappa=p/q$ (or $\beta=0$), we obtain the following result on BMO-type estimate of $\mathcal I_{\alpha}$. \begin{cor}\label{mainthm:4} Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. 
If $V\in RH_s$ with $s\geq Q/2$ and $\kappa=p/q$, then the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$ is bounded from $L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ into $\mathrm{BMO}_{\rho,\infty}(\mathbb H^n)$. \end{cor} \section{Proofs of Theorems $\ref{mainthm:1}$ and $\ref{mainthm:2}$} In this section, we will prove the conclusions of Theorems \ref{mainthm:1} and \ref{mainthm:2}. Recall that the $\mathcal L$-fractional integral operator of order $\alpha\in(0,Q)$ can be written as \begin{equation*} \mathcal I_{\alpha}f(u)={\mathcal L}^{-{\alpha}/2}f(u)=\int_{\mathbb H^n}\mathcal K_{\alpha}(u,v)f(v)\,dv, \end{equation*} where \begin{equation}\label{kauv} \mathcal K_{\alpha}(u,v)=\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}P_s(u,v)\,s^{\alpha/2-1}ds. \end{equation} The following lemma gives the estimate of the kernel $\mathcal K_{\alpha}(u,v)$ related to the Schr\"odinger operator $\mathcal L$, which plays a key role in the proof of our main theorems. \begin{lem}\label{kernel} Let $V\in RH_s$ with $s\geq Q/2$ and $0<\alpha<Q$. For every positive integer $N\geq1$, there exists a positive constant $C_{N,\alpha}>0$ such that for all $u$ and $v$ in $\mathbb H^n$, \begin{equation}\label{WH1} \big|\mathcal K_{\alpha}(u,v)\big|\leq C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}. \end{equation} \end{lem} \begin{proof} By Lemma \ref{ker1} and \eqref{kauv}, we have \begin{equation*} \begin{split} \big|\mathcal K_{\alpha}(u,v)\big|&\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\big|P_s(u,v)\big|\,s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(v)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds.
\end{split} \end{equation*} We consider the two cases $s>|v^{-1}u|^2$ and $0<s\leq|v^{-1}u|^2$ separately. Thus, $|\mathcal K_{\alpha}(u,v)|\leq I+II$, where \begin{equation*} I=\frac{1}{\Gamma(\alpha/2)}\int_{|v^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds \end{equation*} and \begin{equation*} II=\frac{1}{\Gamma(\alpha/2)}\int_0^{|v^{-1}u|^2}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{equation*} If $s>|v^{-1}u|^2$, then $\sqrt{s\,}>|v^{-1}u|$, and hence \begin{equation*} \begin{split} I&\leq\frac{1}{\Gamma(\alpha/2)}\int_{|v^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg) \bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\int_{|v^{-1}u|^2}^{\infty}s^{\alpha/2-Q/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}, \end{split} \end{equation*} where the last integral converges because $0<\alpha<Q$. On the other hand, \begin{equation*} \begin{split} II&\leq C_{N,\alpha}\int_0^{|v^{-1}u|^2}\frac{1}{s^{Q/2}}\cdot\bigg(\frac{|v^{-1}u|^2}{s}\bigg)^{-(Q/2+N/2)} \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\int_0^{|v^{-1}u|^2}\frac{1}{|v^{-1}u|^Q}\cdot\bigg(\frac{\sqrt{s\,}}{|v^{-1}u|}\bigg)^{N} \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{split} \end{equation*} It is easy to see that when $0<s\leq|v^{-1}u|^2$, \begin{equation*} \frac{\sqrt{s\,}}{|v^{-1}u|}\leq\frac{\sqrt{s\,}+\rho(u)}{|v^{-1}u|+\rho(u)}.
\end{equation*} Hence, \begin{equation*} \begin{split} II&\leq C_{N,\alpha}\int_0^{|v^{-1}u|^2}\frac{1}{|v^{-1}u|^Q}\cdot\bigg(\frac{\sqrt{s\,}+\rho(u)}{|v^{-1}u|+\rho(u)}\bigg)^{N} \bigg(\frac{\sqrt{s\,}+\rho(u)}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=\frac{C_{N,\alpha}}{|v^{-1}u|^Q}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\int_0^{|v^{-1}u|^2}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}. \end{split} \end{equation*} Combining the estimates of $I$ and $II$ yields the desired estimate \eqref{WH1} for $\alpha\in(0,Q)$. This concludes the proof of the lemma. \end{proof} We are now ready to show our main theorems. \begin{proof}[Proof of Theorem $\ref{mainthm:1}$] By definition, we only need to show that for any given ball $B=B(u_0,r)$ of $\mathbb H^n$, there is some $\vartheta>0$ such that \begin{equation}\label{Main1} \bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f(u)\big|^q\,du\bigg)^{1/q}\leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\vartheta} \end{equation} holds for given $f\in L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ with $(p,\kappa)\in(1,Q/{\alpha})\times(0,p/q)$. Suppose that $f\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ for some $\theta>0$. We decompose the function $f$ as \begin{equation*} \begin{cases} f=f_1+f_2\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n);\ &\\ f_1=f\cdot\chi_{2B};\ &\\ f_2=f\cdot\chi_{(2B)^c}, \end{cases} \end{equation*} where $2B$ is the ball centered at $u_0$ of radius $2r>0$, $\chi_{2B}$ is the characteristic function of $2B$ and $(2B)^c=\mathbb H^n\backslash(2B)$. Then by the linearity of $\mathcal I_{\alpha}$, we write \begin{equation*} \begin{split} \bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f(u)\big|^q\,du\bigg)^{1/q} &\leq\bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f_1(u)\big|^q\,du\bigg)^{1/q}\\ &+\bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f_2(u)\big|^q\,du\bigg)^{1/q}\\ &:=I_1+I_2. 
\end{split} \end{equation*} In what follows, we consider each part separately. By Theorem \ref{strong} (1), we have \begin{equation*} \begin{split} I_1&=\bigg(\frac{1}{|B|^{\kappa q/p}}\int_B\big|\mathcal I_{\alpha}f_1(u)\big|^q\,du\bigg)^{1/q}\\ &\leq C\cdot\frac{1}{|B|^{\kappa/p}}\bigg(\int_{\mathbb H^n}\big|f_1(u)\big|^p\,du\bigg)^{1/p}\\ &=C\cdot\frac{1}{|B|^{\kappa/p}}\bigg(\int_{2B}\big|f(u)\big|^p\,du\bigg)^{1/p}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot \frac{|2B|^{\kappa/p}}{|B|^{\kappa/p}}\cdot\left(1+\frac{2r}{\rho(u_0)}\right)^{\theta}. \end{split} \end{equation*} Also observe that for any fixed $\theta>0$, \begin{equation}\label{2rx} 1\leq\left(1+\frac{2r}{\rho(u_0)}\right)^{\theta}\leq 2^{\theta}\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}. \end{equation} This in turn implies that \begin{equation*} \begin{split} I_1&\leq C_{\theta,n}\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}. \end{split} \end{equation*} Next we estimate the other term $I_2$. Notice that for any $u\in B(u_0,r)$ and $v\in (2B)^c$, one has \begin{equation*} \big|v^{-1}u\big|=\big|(v^{-1}u_0)\cdot(u_0^{-1}u)\big|\leq\big|v^{-1}u_0\big|+\big|u_0^{-1}u\big| \end{equation*} and \begin{equation*} \big|v^{-1}u\big|=\big|(v^{-1}u_0)\cdot(u_0^{-1}u)\big|\geq\big|v^{-1}u_0\big|-\big|u_0^{-1}u\big|. \end{equation*} Thus, \begin{equation*} \frac{1}{\,2\,}\big|v^{-1}u_0\big|\leq\big|v^{-1}u\big|\leq\frac{3}{\,2\,}\big|v^{-1}u_0\big|, \end{equation*} i.e., $|v^{-1}u|\approx|v^{-1}u_0|$. 
It then follows from Lemma \ref{kernel} that for any $u\in B(u_0,r)$ and any positive integer $N$, \begin{equation}\label{Talpha} \begin{split} \big|\mathcal I_{\alpha}f_2(u)\big|&\leq\int_{(2B)^c}|\mathcal K_{\alpha}(u,v)|\cdot|f(v)|\,dv\\ &\leq C_{N,\alpha}\int_{(2B)^c}\bigg(1+\frac{|v^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u|^{Q-\alpha}}\cdot|f(v)|\,dv\\ &\leq C_{N,\alpha,n}\int_{(2B)^c}\bigg(1+\frac{|v^{-1}u_0|}{\rho(u)}\bigg)^{-N}\frac{1}{|v^{-1}u_0|^{Q-\alpha}}\cdot|f(v)|\,dv\\ &=C_{N,\alpha,n}\sum_{k=1}^\infty\int_{2^kr\leq|v^{-1}u_0|<2^{k+1}r}\bigg(1+\frac{|v^{-1}u_0|}{\rho(u)}\bigg)^{-N} \frac{1}{|v^{-1}u_0|^{Q-\alpha}}\cdot|f(v)|\,dv\\ &\leq C_{N,\alpha,n}\sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}} \int_{|v^{-1}u_0|<2^{k+1}r}\bigg(1+\frac{2^kr}{\rho(u)}\bigg)^{-N}|f(v)|\,dv. \end{split} \end{equation} In view of \eqref{com2} and \eqref{2rx}, we can further obtain \begin{align}\label{Tf2} \big|\mathcal I_{\alpha}f_2(u)\big| &\leq C\sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\notag\\ &\times\int_{|v^{-1}u_0|<2^{k+1}r}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^kr}{\rho(u_0)}\right)^{-N}|f(v)|\,dv\notag\\ &\leq C\sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\notag\\ &\times\int_{B(u_0,2^{k+1}r)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}|f(v)|\,dv. \end{align} We consider each term in the sum of \eqref{Tf2} separately. 
By using H\"older's inequality, we obtain that for each integer $k\geq1$, \begin{equation*} \begin{split} &\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\int_{B(u_0,2^{k+1}r)}\big|f(v)\big|\,dv\\ &\leq\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\bigg(\int_{B(u_0,2^{k+1}r)}\big|f(v)\big|^p\,dv\bigg)^{1/p} \bigg(\int_{B(u_0,2^{k+1}r)}1\,dv\bigg)^{1/{p'}}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\frac{|B(u_0,2^{k+1}r)|^{{\kappa}/p}}{|B(u_0,2^{k+1}r)|^{1/q}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{\theta}. \end{split} \end{equation*} This allows us to obtain \begin{equation*} \begin{split} I_2&\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\frac{|B(u_0,r)|^{1/q}}{|B(u_0,r)|^{{\kappa}/p}} \sum_{k=1}^\infty\frac{|B(u_0,2^{k+1}r)|^{{\kappa}/p}}{|B(u_0,2^{k+1}r)|^{1/q}} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}\\ &=C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \sum_{k=1}^\infty\frac{|B(u_0,r)|^{1/q-\kappa/p}}{|B(u_0,2^{k+1}r)|^{1/q-\kappa/p}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}. \end{split} \end{equation*} Thus, choosing $N$ large enough that $N>\theta$, we have \begin{equation*} \begin{split} I_2&\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\sum_{k=1}^\infty\left(\frac{|B(u_0,r)|}{|B(u_0,2^{k+1}r)|}\right)^{{(1/q-\kappa/p)}}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}, \end{split} \end{equation*} where the last inequality follows from the fact that $1/q-\kappa/p>0$. Summing up the above estimates for $I_1$ and $I_2$ and letting $\vartheta=\max\big\{\theta,N\cdot\frac{N_0}{N_0+1}\big\}$, we obtain the desired inequality \eqref{Main1}.
This completes the proof of Theorem \ref{mainthm:1}. \end{proof} \begin{proof}[Proof of Theorem $\ref{mainthm:2}$] To prove Theorem \ref{mainthm:2}, by definition, it suffices to prove that for each given ball $B=B(u_0,r)$ of $\mathbb H^n$, there is some $\vartheta>0$ such that \begin{equation}\label{Main2} \frac{1}{|B|^{\kappa}}\sup_{\lambda>0}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f(u)|>\lambda\big\}\big|^{1/q} \leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\vartheta} \end{equation} holds for given $f\in L^{1,\kappa}_{\rho,\infty}(\mathbb H^n)$ with $0<\kappa<1/q$ and $q=Q/{(Q-\alpha)}$. Now suppose that $f\in L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)$ for some $\theta>0$. We decompose the function $f$ as \begin{equation*} \begin{cases} f=f_1+f_2\in L^{1,\kappa}_{\rho,\theta}(\mathbb H^n);\ &\\ f_1=f\cdot\chi_{2B};\ &\\ f_2=f\cdot\chi_{(2B)^c}. \end{cases} \end{equation*} Then for any given $\lambda>0$, by the linearity of $\mathcal I_{\alpha}$, we can write \begin{equation*} \begin{split} &\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f(u)|>\lambda\big\}\big|^{1/q}\\ &\leq\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f_1(u)|>\lambda/2\big\}\big|^{1/q}\\ &+\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f_2(u)|>\lambda/2\big\}\big|^{1/q}\\ &:=J_1+J_2. \end{split} \end{equation*} We first give the estimate for the term $J_1$. By Theorem \ref{strong} (2), we get \begin{equation*} \begin{split} J_1&=\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha} f_1(u)|>\lambda/2\big\}\big|^{1/q}\\ &\leq C\cdot\frac{1}{|B|^{\kappa}}\bigg(\int_{\mathbb H^n}\big|f_1(u)\big|\,du\bigg)\\ &=C\cdot\frac{1}{|B|^{\kappa}}\bigg(\int_{2B}\big|f(u)\big|\,du\bigg)\\ &\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\frac{|2B|^{\kappa}}{|B|^{\kappa}}\left(1+\frac{2r}{\rho(u_0)}\right)^{\theta}. 
\end{split} \end{equation*} Therefore, in view of \eqref{2rx}, \begin{equation*} J_1\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}. \end{equation*} As for the second term $J_2$, by using the pointwise inequality \eqref{Tf2} and Chebyshev's inequality, we can deduce that \begin{equation}\label{Tf2pr} \begin{split} J_2&=\frac{1}{|B|^{\kappa}}\lambda\cdot\big|\big\{u\in B:|\mathcal I_{\alpha}f_2(u)|>\lambda/2\big\}\big|^{1/q}\\ &\leq\frac{2}{|B|^{\kappa}}\bigg(\int_{B}\big|\mathcal I_{\alpha}f_2(u)\big|^q\,du\bigg)^{1/q}\\ &\leq C\cdot\frac{|B|^{1/q}}{|B|^{\kappa}} \sum_{k=1}^\infty\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\\ &\times\int_{B(u_0,2^{k+1}r)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}|f(v)|\,dv. \end{split} \end{equation} We consider each term in the sum of \eqref{Tf2pr} separately. For each integer $k\geq1$, we compute \begin{equation*} \begin{split} &\frac{1}{|B(u_0,2^{k+1}r)|^{1-(\alpha/Q)}}\int_{B(u_0,2^{k+1}r)}\big|f(v)\big|\,dv\\ &\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\cdot \frac{|B(u_0,2^{k+1}r)|^{\kappa}}{|B(u_0,2^{k+1}r)|^{1/q}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{\theta}. \end{split} \end{equation*} Consequently, \begin{equation*} \begin{split} J_2&\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)} \cdot\frac{|B(u_0,r)|^{1/q}}{|B(u_0,r)|^{\kappa}}\sum_{k=1}^\infty\frac{|B(u_0,2^{k+1}r)|^{\kappa}}{|B(u_0,2^{k+1}r)|^{1/q}} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}\\ &=C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}\sum_{k=1}^\infty\frac{|B(u_0,r)|^{{1/q-\kappa}}}{|B(u_0,2^{k+1}r)|^{{1/q-\kappa}}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}. 
\end{split} \end{equation*} Therefore, selecting $N$ large enough that $N>\theta$, we have \begin{equation*} \begin{split} J_2&\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \sum_{k=1}^\infty\left(\frac{|B(u_0,r)|}{|B(u_0,2^{k+1}r)|}\right)^{{(1/q-\kappa)}}\\ &\leq C\big\|f\big\|_{L^{1,\kappa}_{\rho,\theta}(\mathbb H^n)} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}, \end{split} \end{equation*} where the last step is due to the fact that $0<\kappa<1/q$. Let $\vartheta=\max\big\{\theta,N\cdot\frac{N_0}{N_0+1}\big\}$ for this fixed $N$. Summing up the above estimates for $J_1$ and $J_2$, and then taking the supremum over all $\lambda>0$, we obtain the desired inequality \eqref{Main2}. This finishes the proof of Theorem \ref{mainthm:2}. \end{proof} \section{Proof of Theorem \ref{mainthm:3}} We need the following lemma which establishes the Lipschitz regularity of the kernel $P_s(u,v)$. See Lemma 11 and Remark 4 in \cite{lin}. \begin{lem}[\cite{lin}]\label{ker2} Let $V\in RH_s$ with $s\geq Q/2$. For every positive integer $N\geq1$, there exists a positive constant $C_N>0$ such that for all $u$ and $v$ in $\mathbb H^n$, and for some fixed $0<\delta\leq 1$, \begin{equation*} \big|P_s(u\cdot h,v)-P_s(u,v)\big|\leq C_N\bigg(\frac{|h|}{\sqrt{s\,}}\bigg)^{\delta} s^{-Q/2}\exp\bigg(-\frac{|v^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(v)}\bigg)^{-N}, \end{equation*} whenever $|h|\leq|v^{-1}u|/2$. \end{lem} Based on the above lemma, we are able to prove the following result, which plays a key role in the proof of our main theorem. \begin{lem}\label{kernel2} Let $V\in RH_s$ with $s\geq Q/2$ and $0<\alpha<Q$.
For every positive integer $N\geq1$, there exists a positive constant $C_{N,\alpha}>0$ such that for all $u,v$ and $w$ in $\mathbb H^n$, and for some fixed $0<\delta\leq 1$, \begin{equation}\label{WH2} \big|\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\big|\leq C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}, \end{equation} whenever $|v^{-1}u|\leq |w^{-1}u|/2$. \end{lem} \begin{proof} In view of Lemma \ref{ker2} and \eqref{kauv}, we have \begin{equation*} \begin{split} &\big|\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\big|\\ &=\frac{1}{\Gamma(\alpha/2)}\bigg|\int_0^{\infty}P_s(u,w)\,s^{\alpha/2-1}ds-\int_0^{\infty}P_s(v,w)\,s^{\alpha/2-1}ds\bigg|\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}\big|P_s(u\cdot(u^{-1}v),w)-P_s(u,w)\big|\,s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}C_N\cdot\bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta} s^{-Q/2}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}+\frac{\sqrt{s\,}}{\rho(w)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq\frac{1}{\Gamma(\alpha/2)}\int_0^{\infty}C_N\cdot\bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta} s^{-Q/2}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg)\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{split} \end{equation*} Arguing as in the proof of Lemma \ref{kernel}, we consider the two cases $s>|w^{-1}u|^2$ and $0\leq s\leq|w^{-1}u|^2$.
Then the right-hand side of the above expression can be written as $III+IV$, where \begin{equation*} III=\frac{1}{\Gamma(\alpha/2)}\int_{|w^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot \bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds, \end{equation*} and \begin{equation*} IV=\frac{1}{\Gamma(\alpha/2)}\int_0^{|w^{-1}u|^2}\frac{C_N}{s^{Q/2}}\cdot \bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta}\exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg) \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{equation*} When $s>|w^{-1}u|^2$, then $\sqrt{s\,}>|w^{-1}u|$, and hence \begin{equation*} \begin{split} III&\leq\frac{1}{\Gamma(\alpha/2)}\int_{|w^{-1}u|^2}^{\infty}\frac{C_N}{s^{Q/2}}\cdot\bigg(\frac{|u^{-1}v|}{|w^{-1}u|}\bigg)^{\delta} \exp\bigg(-\frac{|w^{-1}u|^2}{As}\bigg)\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\bigg(\frac{|u^{-1}v|}{|w^{-1}u|}\bigg)^{\delta} \int_{|w^{-1}u|^2}^{\infty}s^{\alpha/2-Q/2-1}ds\\ &\leq C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}, \end{split} \end{equation*} where the last inequality holds since $|u^{-1}v|=|v^{-1}u|$ and $0<\alpha<Q$. On the other hand, \begin{equation*} \begin{split} IV&\leq C_{N,\alpha}\int_0^{|w^{-1}u|^2}\frac{1}{s^{Q/2}}\cdot\bigg(\frac{|u^{-1}v|}{\sqrt{s\,}}\bigg)^{\delta} \bigg(\frac{|w^{-1}u|^2}{s}\bigg)^{-(Q/2+N/2+\delta/2)}\bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\int_0^{|w^{-1}u|^2}\frac{|u^{-1}v|^{\delta}}{|w^{-1}u|^{Q+\delta}}\bigg(\frac{\sqrt{s\,}}{|w^{-1}u|}\bigg)^{N} \bigg(1+\frac{\sqrt{s\,}}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds. \end{split} \end{equation*} It is easy to check that when $0\leq s\leq|w^{-1}u|^2$, \begin{equation*} \frac{\sqrt{s\,}}{|w^{-1}u|}\leq\frac{\sqrt{s\,}+\rho(u)}{|w^{-1}u|+\rho(u)}. 
\end{equation*} This in turn implies that \begin{equation*} \begin{split} IV&\leq C_{N,\alpha}\int_0^{|w^{-1}u|^2}\frac{|u^{-1}v|^{\delta}}{|w^{-1}u|^{Q+\delta}}\bigg(\frac{\sqrt{s\,}+\rho(u)}{|w^{-1}u|+\rho(u)}\bigg)^{N} \bigg(\frac{\sqrt{s\,}+\rho(u)}{\rho(u)}\bigg)^{-N}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\cdot\frac{|u^{-1}v|^{\delta}}{|w^{-1}u|^{Q+\delta}}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\int_0^{|w^{-1}u|^2}s^{\alpha/2-1}ds\\ &=C_{N,\alpha}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N}\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}, \end{split} \end{equation*} where the last step holds because $|u^{-1}v|=|v^{-1}u|$. Combining the estimates of $III$ and $IV$ produces the desired inequality \eqref{WH2} for $\alpha\in(0,Q)$. This concludes the proof of the lemma. \end{proof} We are now in a position to give the proof of Theorem $\ref{mainthm:3}$. \begin{proof}[Proof of Theorem $\ref{mainthm:3}$] Fix a ball $B=B(u_0,r)$ with $u_0\in\mathbb H^n$ and $r\in(0,\infty)$. It suffices to prove that the following inequality \begin{equation}\label{end1.1} \frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f(u)-(\mathcal I_{\alpha}f)_B\big|\,du\leq C\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{\vartheta} \end{equation} holds for given $f\in L^{p,\kappa}_{\rho,\infty}(\mathbb H^n)$ with $1<p<q<\infty$ and $p/q\leq\kappa<1$, where $0<\alpha<Q$ and $(\mathcal I_{\alpha}f)_B$ denotes the average of $\mathcal I_{\alpha}f$ over $B$. Suppose that $f\in L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)$ for some $\theta>0$. Decompose the function $f$ as $f=f_1+f_2$, where $f_1=f\cdot\chi_{4B}$, $f_2=f\cdot\chi_{(4B)^c}$, $4B=B(u_0,4r)$ and $(4B)^c=\mathbb H^n\backslash(4B)$.
By the linearity of the $\mathcal L$-fractional integral operator $\mathcal I_{\alpha}$, the left-hand side of \eqref{end1.1} can be written as \begin{equation*} \begin{split} &\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f(u)-(\mathcal I_{\alpha}f)_B\big|\,du\\ &\leq\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f_1(u)-(\mathcal I_{\alpha}f_1)_B\big|\,du +\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\,du\\ &:=K_1+K_2. \end{split} \end{equation*} First let us consider the term $K_1$. Applying the strong-type $(p,q)$ estimate of $\mathcal I_{\alpha}$ (see Theorem \ref{strong}) and H\"older's inequality, we obtain \begin{equation*} \begin{split} K_1&\leq\frac{2}{|B|^{1+\beta/Q}}\int_B|\mathcal I_{\alpha}f_1(u)|\,du\\ &\leq\frac{2}{|B|^{1+\beta/Q}}\bigg(\int_B|\mathcal I_{\alpha}f_1(u)|^q\,du\bigg)^{1/q}\bigg(\int_B1\,du\bigg)^{1/{q'}}\\ &\leq\frac{C}{|B|^{1+\beta/Q}}\bigg(\int_{4B}|f(u)|^p\,du\bigg)^{1/p}|B|^{1/{q'}}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \cdot\frac{|B(u_0,4r)|^{{\kappa}/p}}{|B(u_0,r)|^{1/q+\beta/Q}}\left(1+\frac{4r}{\rho(u_0)}\right)^{\theta}. \end{split} \end{equation*} Using the inequalities \eqref{homonorm} and \eqref{2rx}, and noting the fact that $\beta/Q=\kappa/p-1/q$, we derive \begin{equation*} \begin{split} K_1&\leq C_n\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{4r}{\rho(u_0)}\right)^{\theta}\\ &\leq C_{n,\theta}\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{\theta}. \end{split} \end{equation*} Let us now turn to estimate the term $K_2$. 
For any $u\in B(u_0,r)$, \begin{equation*} \begin{split} \big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big| &=\bigg|\frac{1}{|B|}\int_B\big[\mathcal I_{\alpha}f_2(u)-\mathcal I_{\alpha}f_2(v)\big]\,dv\bigg|\\ &=\bigg|\frac{1}{|B|}\int_B\bigg\{\int_{(4B)^c}\Big[\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\Big]f(w)\,dw\bigg\}dv\bigg|\\ &\leq\frac{1}{|B|}\int_B\bigg\{\int_{(4B)^c}\big|\mathcal K_{\alpha}(u,w)-\mathcal K_{\alpha}(v,w)\big|\cdot|f(w)|\,dw\bigg\}dv. \end{split} \end{equation*} By using the same arguments as in the proof of Theorem \ref{mainthm:1}, we find that \begin{equation*} |v^{-1}u|\leq |w^{-1}u|/2 \quad \mbox{and} \quad |w^{-1}u|\approx |w^{-1}u_0|, \end{equation*} whenever $u,v\in B$ and $w\in(4B)^c$. This fact along with Lemma \ref{kernel2} yields \begin{align}\label{average} &\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\notag\\ &\leq\frac{C_{N,\alpha}}{|B|}\int_B\bigg\{\int_{(4B)^c}\bigg(1+\frac{|w^{-1}u|}{\rho(u)}\bigg)^{-N} \frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}\cdot|f(w)|\,dw\bigg\}dv\notag\\ &\leq C_{N,\alpha,n}\int_{(4B)^c}\bigg(1+\frac{|w^{-1}u_0|}{\rho(u)}\bigg)^{-N}\frac{r^{\delta}}{|w^{-1}u_0|^{Q-\alpha+\delta}}\cdot|f(w)|\,dw\notag\\ &=C_{N,\alpha,n}\sum_{k=2}^\infty\int_{2^kr\leq|w^{-1}u_0|<2^{k+1}r} \bigg(1+\frac{|w^{-1}u_0|}{\rho(u)}\bigg)^{-N}\frac{r^{\delta}}{|w^{-1}u_0|^{Q-\alpha+\delta}}\cdot|f(w)|\,dw\notag\\ &\leq C_{N,\alpha,n}\sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\frac{1}{|B(u_0,2^{k+1}r)|^{1-({\alpha}/Q)}} \int_{B(u_0,2^{k+1}r)}\bigg(1+\frac{2^kr}{\rho(u)}\bigg)^{-N}|f(w)|\,dw.
\end{align} Furthermore, by using H\"older's inequality and \eqref{com2}, we deduce that for any $u\in B(u_0,r)$, \begin{align}\label{end1.3} &\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\notag\\ &\leq C\sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\frac{1}{|B(u_0,2^{k+1}r)|^{1-({\alpha}/Q)}} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}\notag\\ &\times\bigg(\int_{B(u_0,2^{k+1}r)}\big|f(w)\big|^p\,dw\bigg)^{1/p} \left(\int_{B(u_0,2^{k+1}r)}1\,dw\right)^{1/{p'}}\notag\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N}\notag\\ &\times\frac{|B(u_0,2^{k+1}r)|^{{\kappa}/p}}{|B(u_0,2^{k+1}r)|^{1/q}}\left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{\theta}\notag\\ &=C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{|B(u_0,2^{k+1}r)|^{\beta/Q}}{2^{k\delta}}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}, \end{align} where the last equality is due to the assumption $\beta/Q=\kappa/p-1/q$. 
From the pointwise estimate \eqref{end1.3} and \eqref{homonorm}, it readily follows that \begin{equation*} \begin{split} K_2&=\frac{1}{|B|^{1+\beta/Q}}\int_B\big|\mathcal I_{\alpha}f_2(u)-(\mathcal I_{\alpha}f_2)_B\big|\,du\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{1}{2^{k\delta}}\cdot\left(\frac{|B(u_0,2^{k+1}r)|}{|B(u_0,r)|}\right)^{\beta/Q} \left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}} \left(1+\frac{2^{k+1}r}{\rho(u_0)}\right)^{-N+\theta}\\ &\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)} \sum_{k=2}^\infty\frac{1}{2^{k(\delta-\beta)}}\cdot\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}, \end{split} \end{equation*} where $N$ is chosen large enough that $N>\theta$. Also observe that $\beta<\delta\leq1$, and hence the last series is convergent. Therefore, \begin{equation*} K_2\leq C\big\|f\big\|_{L^{p,\kappa}_{\rho,\theta}(\mathbb H^n)}\left(1+\frac{r}{\rho(u_0)}\right)^{N\cdot\frac{N_0}{N_0+1}}. \end{equation*} Fix this $N$ and set $\vartheta=\max\big\{\theta,N\cdot\frac{N_0}{N_0+1}\big\}$. Finally, combining the above estimates for $K_1$ and $K_2$ proves the inequality \eqref{end1.1} and finishes the proof of Theorem \ref{mainthm:3}. \end{proof} At the end of this article, we discuss the corresponding estimates for the fractional integral operator $I_{\alpha}=(-\Delta_{\mathbb H^n})^{-\alpha/2}$ (with $0<\alpha<Q$). We denote by $K^{*}_{\alpha}(u,v)$ the kernel of $I_{\alpha}=(-\Delta_{\mathbb H^n})^{-\alpha/2}$. In \eqref{claim}, we have already shown that \begin{equation}\label{WH3} \big|K^{*}_{\alpha}(u,v)\big|\leq C_{\alpha,n}\cdot\frac{1}{|v^{-1}u|^{Q-\alpha}}.
\end{equation} Using the same method as in the proof of \eqref{WH2} in Lemma \ref{kernel2}, we can also show that for some fixed $0<\delta\leq 1$ and $0<\alpha<Q$, there exists a positive constant $C_{\alpha,n}>0$ such that for all $u,v$ and $w$ in $\mathbb H^n$, \begin{equation}\label{WH4} \big|K^{*}_{\alpha}(u,w)-K^{*}_{\alpha}(v,w)\big|\leq C_{\alpha,n}\cdot\frac{|v^{-1}u|^{\delta}}{|w^{-1}u|^{Q-\alpha+\delta}}, \end{equation} whenever $|v^{-1}u|\leq |w^{-1}u|/2$. Following the lines of the proofs of Theorems \ref{mainthm:1}--\ref{mainthm:3} and using the inequalities \eqref{WH3} and \eqref{WH4}, we can obtain the following estimates of $I_{\alpha}$ with $\alpha\in(0,Q)$. \begin{thm}\label{thm:1} Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $0<\kappa<p/q$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $L^{q,{(\kappa q)}/p}(\mathbb H^n)$. \end{thm} \begin{thm}\label{thm:2} Let $0<\alpha<Q$, $p=1$ and $q=Q/{(Q-\alpha)}$. If $0<\kappa<1/q$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{1,\kappa}(\mathbb H^n)$ into $WL^{q,(\kappa q)}(\mathbb H^n)$. \end{thm} Here, we remark that Theorems \ref{thm:1} and \ref{thm:2} have been proved by Guliyev et al.\ \cite{guliyev}. \begin{thm}\label{thm:3} Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $p/q\leq\kappa<1$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $\mathcal{C}^{\beta}(\mathbb H^n)$ with $\beta/Q=\kappa/p-1/q$ and $\beta<\delta\leq1$, where $\delta$ is given as in \eqref{WH4}. \end{thm} As an immediate consequence we have the following corollary. \begin{cor} Let $0<\alpha<Q$, $1<p<Q/{\alpha}$ and $1/q=1/p-{\alpha}/Q$. If $\kappa=p/q$, then the fractional integral operator $I_{\alpha}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $\mathrm{BMO}(\mathbb H^n)$.
\end{cor} Upon taking $\alpha=1$ in Theorem \ref{thm:3}, we obtain the following version of \textbf{Morrey's lemma} on the Heisenberg group. \begin{cor} Let $\alpha=1$, $1<p<Q$ and $1/q=1/p-1/Q$. If $p/q<\kappa<1$, then the fractional integral operator $I_{1}$ is bounded from $L^{p,\kappa}(\mathbb H^n)$ into $\mathcal{C}^{\beta}(\mathbb H^n)$ with $\beta/Q=\kappa/p-1/q$ and $\beta<\delta\leq1$, where $\delta$ is given as in \eqref{WH4}. Namely, \begin{equation*} \big\|f\big\|_{\mathcal{C}^{\beta}(\mathbb H^n)}\leq C\big\|\nabla_{\mathbb H^n}f\big\|_{L^{p,\kappa}(\mathbb H^n)}, \end{equation*} where $0<\kappa<1$, $p>(1-\kappa)Q$, $\beta=1-{(1-\kappa)Q}/p$ and the gradient $\nabla_{\mathbb H^n}$ is defined by \begin{equation*} \nabla_{\mathbb H^n}=\big(X_1,\dots,X_n,Y_1,\dots,Y_n\big). \end{equation*} \end{cor} \end{document}
\begin{document} \begin{titlepage} \title{Proposed experimental tests of the Bell-Kochen-Specker theorem\footnote{Phys. Rev. Lett. {\bf 80}, 1797 (1998).}} \author{Ad\'{a}n Cabello\thanks{Electronic address: [email protected]}\\ {\em Departamento de F\'{\i}sica Aplicada,}\\ {\em Universidad de Sevilla, 41012 Sevilla, Spain.}\\ \\ Guillermo Garc\'{\i}a-Alcaine\thanks{Electronic address: [email protected]}\\ {\em Departamento de F\'{\i}sica Te\'{o}rica,}\\ {\em Universidad Complutense, 28040 Madrid, Spain.}} \date{\today} \maketitle \begin{abstract} For a two-particle two-state system, sets of compatible propositions exist for which quantum mechanics and noncontextual hidden-variable theories make conflicting predictions for every individual system whatever its quantum state. This permits a simple all-or-nothing state-independent experimental verification of the Bell-Kochen-Specker theorem.\\ \\ PACS numbers: 03.65.Bz \end{abstract} \end{titlepage} \begin{sloppypar} There are two main theorems on the impossibility of hidden variables in quantum mechanics (QM). The most general is the Bell-Kochen-Specker (BKS) theorem \cite{Bell66,KS67} which excludes noncontextual hidden-variable (NCHV) theories (i.\ e., those in which the values of the physical observables are the same whatever the experimental context in which they appear). The other is Bell's theorem \cite{Bell64} which discards local hidden variables of the kind considered by Einstein, Podolsky, and Rosen \cite{EPR35}. Both theorems are mathematical statements which, as such, do not require any real experiment to be proved or disproved. Only if we want to investigate how nature behaves do we require actual experiments. There is a wide range of experiments which show that nature violates Bell's inequalities \cite{Aspectetal}. However, no empirical disproof of NCHV theories has yet been exhibited \cite{Santos88}. This situation can be explained by comparing the proofs of both theorems. 
Bell's inequalities \cite{Bell64} involve statistical magnitudes which can be calculated from measurements carried out in different subensembles of pairs. In contrast, the proofs of the BKS theorem \cite{Bell66,KS67,varios} refer to a single individual system but involve noncompatible observables that cannot be measured in the same individual system. On the other hand, while Bell's inequalities are satisfied by any local hidden-variable theory independently of any QM assumptions, the proofs of the BKS theorem refer to NCHV theories that share some properties with QM. In this sense, the proofs of the BKS theorem are not entirely independent of the formal structure of QM. For these reasons, one could think that ``the whole notion of an experimental test of [B]KS misses the point'' \cite{Merminpc}. \end{sloppypar} In this paper we will show a situation, the first to our knowledge, in which NCHV theories, {\em without} any call to the formal structure of QM, make predictions that conflict with those of QM for every individual system, whatever its quantum state. These predictions can be tested by a joint measurement of {\em one} set of compatible propositions. We propose the following situation. Consider an individual system of two spin-$\frac{1}{2}$ particles (or any other two-particle two-state system) initially prepared in an unspecified state. Suppose that a NCHV theory can describe that system. Noncontextuality here will mean that this hidden-variable theory satisfies the following two assumptions: (i) Any one-particle observable (for a two-state system) can be assumed to have a definite value. This is a reasonable assumption for any NCHV theory since Gleason's theorem \cite{Gleason57} is not valid for systems described by a Hilbert space of dimension two, and since the possibility of NCHV for these systems was explicitly proved by Bell \cite{Bell66} and by Kochen and Specker \cite{KS67}.
In particular, we will assume that the observables $A:=\sigma _z^{(1)}$, $B:=\sigma _z^{(2)}$, $a:=\sigma _x^{(1)}$, and $b:=\sigma _x^{(2)}$ (the spin components in units of ${\hbar}/{2}$ in the $z$ and $x$ directions for particles one and two) have predefined noncontextual values either $+1$ or $-1$. We will denote these values as $v(A)$, $v(B)$, $v(a)$, and $v(b)$. Then, considering the values of these four observables, $2^4$ different ``states'' could exist (for instance, one possible ``state'' is $v(A)=-v(B)=-v(a)=v(b)=+1$). (ii) The value of a two-particle observable which is a product of one-particle observables such as $AB$ (or $Ab$, etc) is \begin{equation} v(AB):=v(A)\,v(B)\,. \label{product} \end{equation} Note that $A$ and $B$ are not only compatible observables but refer to two different particles \cite{FineTeller78}. Definition (\ref{product}) is a consequence of noncontextuality since one particular way of measuring the observable $AB$ is by measuring separately $A$ and $B$ and multiplying their results; but, in a NCHV theory, $v(AB)$ must be the same whatever the experimental context in which it appears. Now we will show some predictions derived from these two assumptions. For that purpose consider the following four propositions: \begin{equation} {P}_1:= ``AB=1\;\;\;\mbox{and}\;\;\;ab=1"\,, \label{proposition1} \end{equation} \begin{equation} {P}_2:=``AB=-1\;\;\;\mbox{and}\;\;\;ab=-1"\,, \label{proposition2} \end{equation} \begin{equation} {P}_3:=``Ab=1\;\;\;\mbox{and}\;\;\;aB=1"\,, \label{proposition3} \end{equation} \begin{equation} {P}_4:=``Ab=-1\;\;\;\mbox{and}\;\;\;aB=-1"\,. \label{proposition4} \end{equation} Proposition ${P}_1$ has the value 1 ({\em true}) if the two-particle observables $AB$ {\em and} $ab$ have values $+1$, and the value 0 ({\em false}) otherwise, etc. In a NCHV theory, the $\{{P}_i\}$ have predefined values related, using assumption (ii), to those of $A$, $B$, $a$, and $b$. 
For instance, $v({P}_1) = 1$ if $v(A) = v(B)$ {\em and} $v(a) = v(b)$, and zero otherwise. As can be easily seen from the study of all the possible states of this NCHV theory, some predictions can be made: NCHV1.---The propositions ${P}_1$, ${P}_2$, ${P}_3$, ${P}_4$ are {\em not} mutually {\em exclusive}: Two of them can be simultaneously true [for instance, $v({P}_1)=v({P}_3)=1$ in the state $v(A)=v(B)=v(a)=v(b)=+1$]. NCHV2.---${P}_1$, ${P}_2$, ${P}_3$, ${P}_4$ are {\em not exhaustive}: All of them can be simultaneously false [for instance, $v({P}_1)=v({P}_2)=v({P}_3)=v({P}_4)=0$ in the state $v(A)=v(B)=v(a)=-v(b)=+1$]. Indeed, checking all the possible states, NCHV1 and NCHV2 can be summarized as follows: NCHV3.---In a NCHV theory, the values of ${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$ in a joint measurement would be either 4 zeros---all the propositions are false---, or 2 ones and 2 zeros---2 propositions are true and 2 are \mbox{false---}. Note that the predictions NCHV1, NCHV2, and NCHV3 are entirely independent of the formal structure of QM. What are the corresponding quantum predictions? First, let us see the quantum representatives of propositions ${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$. 
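Before doing so, we note that the exhaustive check over the $2^4$ value assignments underlying NCHV1--NCHV3 is a small finite enumeration that can be verified mechanically. The following sketch (our own encoding of the propositions, not part of the original argument) confirms all three predictions:

```python
from itertools import product

def proposition_values(A, B, a, b):
    # Assumption (ii): v(XY) = v(X) v(Y), so e.g. v(AB) is the product A * B here.
    P1 = int(A * B == +1 and a * b == +1)
    P2 = int(A * B == -1 and a * b == -1)
    P3 = int(A * b == +1 and a * B == +1)
    P4 = int(A * b == -1 and a * B == -1)
    return (P1, P2, P3, P4)

# All 2^4 noncontextual "states" v(A), v(B), v(a), v(b) in {+1, -1}.
patterns = {proposition_values(*state) for state in product((+1, -1), repeat=4)}

# NCHV1: two propositions can be simultaneously true.
assert any(sum(p) == 2 for p in patterns)
# NCHV2: all four propositions can be simultaneously false.
assert (0, 0, 0, 0) in patterns
# NCHV3: every assignment yields either four zeros, or two ones and two zeros.
assert all(sum(p) in (0, 2) for p in patterns)
```

In this enumeration, a pattern with exactly one true proposition never occurs, which is precisely what the quantum prediction QM3 derived below requires.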
If $\hat {A}$, $\hat {B}$, $\hat {a}$, and $\hat {b}$ denote the self-adjoint operators representing the observables $A$, $B$, $a$, and $b$, the proposition ${P}_i$ is represented by the projector $\hat P_i:= \left| {\psi _i} \right\rangle \left\langle {\psi _i} \right|$, where $\left\{ {\left| {\psi _i} \right\rangle } \right\}$ are the states defined by the following eigenvalue equations \cite{states}: \begin{equation} \hat A\otimes \hat B\;\left| {\psi _1} \right\rangle = \left| {\psi _1} \right\rangle\,,\;\;\; \hat a\otimes \hat b\;\left| {\psi _1} \right\rangle = \left| {\psi _1} \right\rangle\,, \label{projector1} \end{equation} \begin{equation} \hat A\otimes \hat B\;\left| {\psi _2} \right\rangle = -\left| {\psi _2} \right\rangle\,,\;\;\; \hat a\otimes \hat b\;\left| {\psi _2} \right\rangle = -\left| {\psi _2} \right\rangle\,, \label{projector2} \end{equation} \begin{equation} \hat A\otimes \hat b\;\left| {\psi _3} \right\rangle = \left| {\psi _3} \right\rangle\,,\;\;\; \hat a\otimes \hat B\;\left| {\psi _3} \right\rangle = \left| {\psi _3} \right\rangle\,, \label{projector3} \end{equation} \begin{equation} \hat A\otimes \hat b\;\left| {\psi _4} \right\rangle = -\left| {\psi _4} \right\rangle\,,\;\;\; \hat a\otimes \hat B\;\left| {\psi _4} \right\rangle = -\left| {\psi _4} \right\rangle\,. \label{projector4} \end{equation} As can be easily seen, the projectors $\hat P_1$, $\hat P_2$, $\hat P_3$, and $\hat P_4$ are mutually orthogonal, \begin{equation} \hat{P}_i\, \hat{P}_j = 0\;\;\;{\em {\em if}}\;\;\; i\ne j\,. \label{orthogonality} \end{equation} Therefore, according to QM: QM1.---The propositions ${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$ are mutually exclusive: Two of them cannot be simultaneously true. Moreover, it can be checked that the projectors $\hat P_1$, $\hat P_2$, $\hat P_3$, and $\hat P_4$ form a resolution of the identity, i.e., \begin{equation} \hat{P}_1 + \hat{P}_2 + \hat{P}_3 + \hat{P}_4 = \hat{1}\,. 
\label{resolution} \end{equation} Therefore, according to QM: QM2.---${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$ are exhaustive: Not all of them can be simultaneously false. Indeed, from the mathematical properties (\ref{orthogonality}) and (\ref{resolution}) follows a third physical prediction which includes QM1 and QM2: QM3.---According to QM, in any joint measurement of ${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$ in the same individual system, one and only one of the propositions will be true and the other three will be false, whatever the preparation of the state. Clearly, NCHVi and QMi are conflicting physical predictions. The situation at this point is similar to that which appears between Bell's inequalities and QM: We have two theories with contradictory predictions. Now we have to propose an experiment to check how nature behaves. How could a joint measurement of ${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$ be possible? Until now we have assumed that the propositions ${P}_1$, ${P}_2$, ${P}_3$, and ${P}_4$ are {\em compatible}. This remains to be justified. Of course, we have seen that the projectors $\hat P_1$, $\hat P_2$, $\hat P_3$, and $\hat P_4$ commute, and it is a generally accepted assumption of QM that commuting operators correspond to compatible observables. The reason for this assumption is that, if there is a set of pairwise commuting self-adjoint operators, then there exists a nontrivial {\em maximal}---nondegenerate---operator $\hat H$ commuting with all $\hat P_i$, such that $\hat P_i =f_i(\hat H)$ \cite{vonNeumann31}. However, this justification hinges on the existence of a physical observable $H$ which corresponds to the operator $\hat H$. In our case, such an operator can be \begin{equation} \hat H=\sum\limits_{i=1}^4 {c_i}\,\hat P_i\,, \label{opmax} \end{equation} where the $\{c_i\}$ are arbitrary distinct real numbers. Then, it is easily checked that \begin{equation} \hat P_i=\prod\limits_{j\ne i} {{{\hat H-c_j \hat 1} \over {c_i-c_j}}}\,.
\end{equation} Optical observables corresponding to operators of the form (\ref{opmax}) for two-particle systems have been proposed and actual experimental results are expected to be presented soon \cite{ZZH97}. On the other hand, the proposals \cite{Moussa97etal} for experiments designed to measure the {\em Bell operator} \cite{BMR92} used for quantum teleportation \cite{BBCJPW93} can be modified to measure operators of the form (\ref{opmax}) \cite{bellstates}. In summary, we have showed that there are situations in nature in which NCHV theories, without any call to the formal structure of QM, make conflicting predictions with those of QM for every individual system whatever its quantum state. An experimental test of these predictions requires the measurement of a particular set of compatible propositions. Optical versions of experiments related with these propositions have been proposed for other purposes, and actual experimental results based on these proposals are expected to be presented soon. \\ The authors wish to thank Asher Peres for his comments and suggestions, and David Mermin for his observations and criticisms; both have been essential in the writing of this Letter. We also acknowledge comments by Ignacio Cirac and Emilio Santos. One of us (A. C.) thanks Harvey Brown, Gonzalo Garc\'{\i}a de Polavieja, and Erik Sj\"{o}qvist for useful discussions, and for their hospitality at Oxford. \pagebreak \end{document}
\begin{document} \title[Quantum repeater architecture with hierarchically optimized memory buffer times]{Quantum repeater architecture with hierarchically optimized memory buffer times} \author{Siddhartha Santra} \address{US Army Research Laboratory, Adelphi, Maryland 20783, USA} \ead{\mailto{[email protected]}} \author{Liang Jiang} \address{Departments of Applied Physics and Physics, Yale University, New Haven, Connecticut 06520, USA} \address{Yale Quantum Institute, Yale University, New Haven, Connecticut 06511, USA} \author{Vladimir S. Malinovsky} \address{US Army Research Laboratory, Adelphi, Maryland 20783, USA} \begin{abstract} We propose a quantum repeater protocol and architecture that mitigates decoherence of the entangled states by optimizing the quantum memory buffer time. The protocol maximizes the rate of distillable entanglement in the average accessed state at all nesting levels. The achievable rate is higher by orders of magnitude in comparison to a canonical protocol that does not optimize the buffer time. The advantage of the proposed design is observed for all nesting levels of the repeater for technologically feasible memory quality, entanglement generation and swapping success probabilities. \end{abstract} \pacs{03.67.Hk, 03.67.Bg, 03.67.Pp} \noindent{\it Keywords}: Quantum repeaters, memory decoherence, optimized architecture. \section{Introduction} Spatially distributed entanglement is a valuable resource for quantum communication, computing and sensing \cite{vanmeter}. Quantum repeaters (QR) follow a nested divide-and-conquer strategy to distribute entanglement across large distances \cite{briegel,dlcz}. At each nesting level, first, entangled states are generated probabilistically over smaller segments and stored in quantum memories at repeater stations. Second, a swapping operation on the memories doubles the physical range of the entangled state.
As states make their way up the levels, they spend some time, the memory buffer time, in the decohering quantum memories before being discarded or accessed for use by the next level. A larger buffer time at any nesting level increases the probability to obtain an entanglement length-doubled state but decreases the entanglement quality of the average obtained state due to decoherence. These competing factors determine the entanglement generation rate (EGR), which is the product of the rate of obtaining entanglement length-doubled states and the entanglement of the average obtained state. An optimal buffer time maximizes the EGR. However, most protocols for repeater operation \cite{qrep-review-gisin} ignore the optimal buffer time that arises from the interplay of the entanglement generation probability and quantum memory decoherence \cite{imperfectmemory1}.\\ \indent In practice, it is crucial to include quantum memory decoherence for quantum repeaters that rely on two-way communication over long distances, as shown in \cite{memerrors-hartmann}. The same reference suggests using decoherence-free subspaces or local encoding and repeater operation in blind mode to suppress memory errors. Other interesting ideas to address this challenge, e.g., the addition of more physical resources such as multiplexed quantum memories to reduce the memory waiting time \cite{multiplexed-memory}, or more complicated operations such as quantum error correction to actively suppress all errors \cite{qrep-encoding-jiang,qcomm-withoutmems,qcomm-FT}, are promising in the long term but still very challenging with current experimental capability. Besides asking for more physical resources or complicated operations, it is also important to optimize the parameters of QR protocols.
For example, dynamic programming has been introduced to explore the huge parameter space of QR protocols, which can successfully identify efficient protocols with significantly boosted performance in the absence of memory decoherence \cite{Jiang17291}. So far, there is no efficient method that can include quantum memory decoherence and systematically optimize the design parameters of QR protocols.\\ \indent Here, we propose an optimized buffer time protocol (OBP) and architecture that maximizes the entanglement generation rate using hierarchically optimized buffer times for all nesting levels. The optimal buffer time depends on the parameters of quantum memory quality, $\beta$, the entanglement generation probability, $p$, and the swapping success probability, $p_S$. The minimal parametrization chosen in terms of $(p,\beta,p_S)$ subsumes implementation-specific details such as source-station geometry, coupling and conversion efficiencies, or the use of multiplexed memories. (The entanglement generation probability $p$, for example, can include source-fiber coupling, wavelength conversion and memory read-in efficiencies. Memory read-out may be included in $p$ or in the swapping success probability $p_S$.) We compare the OBP to a canonical repeater protocol (CP) that does not optimize the buffer time and show that the OBP improves the entanglement generation rate by several orders of magnitude in the technologically relevant parameter region. Moreover, we show that the relative improvement due to the OBP increases with the nesting level for technologically feasible swapping success probabilities. The protocol works for finite-lifetime quantum memories used to store entangled states in quantum repeaters that utilize two-way classical communication between their nodes to verify entanglement generation before entanglement swapping is performed. The layout of the paper is as follows.
Section \ref{sec:repprotocol} first describes the central idea of the optimized memory buffer time protocol in subsection \ref{subsec:optfirnest}, followed by the definition of the optimal memory buffer time in subsection \ref{subsec:optaccesstime} and a comparison with a canonical protocol in subsection \ref{subsec:egrcomp}. Section \ref{sec:arch} presents a quantum repeater architecture compatible with hierarchical optimization of buffer times in subsection \ref{subsec:hierarch}. Subsection \ref{subsec:hieropt} then describes an algorithm that can be used for the hierarchical optimization. Further, subsection \ref{subsec:compallnest} shows the comparison of the entanglement generation rates of the OBP and the CP for all nesting levels. Section \ref{sec:conc} concludes the paper with a discussion of the typical advantage one may expect from the OBP with state-of-the-art parameters. \section{Quantum repeater protocol with optimized memory buffer time} \label{sec:repprotocol} In this section we describe the optimized memory buffer time protocol by focusing on the first nesting level, in subsection \ref{subsec:optfirnest}, of a potentially multi-level quantum repeater network. While quantum memories may also suffer from decoherence due to depolarization and loss, we consider dephasing as the only mode of memory decoherence. This highlights the central physical idea while keeping the discussion mathematically simple. The optimal buffer time is described in subsection \ref{subsec:optaccesstime}. A comparison of entanglement generation rates of the optimized and canonical protocols is presented in subsection \ref{subsec:egrcomp}. We term a repeater protocol that does not optimize its quantum memory buffer time a canonical protocol, for example, those in \cite{imperfectmemory1} and \cite{memerrors-hartmann}. As with the optimized protocol, in canonical protocols entanglement generation and swapping occur probabilistically.
For comparison with the optimized protocol, the distinctive feature of canonical protocols is that the quantum memories can wait for arbitrarily long times for successful entanglement generation. The rate of entanglement generation in such protocols is inversely proportional to the expected number of entanglement generation attempts needed for success, as shown in \ref{app:avstates}. Subsequent to entanglement generation, for both the canonical and optimized protocols, purification of the generated entangled pairs may be performed if multiple quantum memories at a given nesting level are available at the repeater nodes. In this paper, we compare the protocols without considering purification of entangled states on a finite number of quantum memories. Thus, we compare the two protocols on a single-copy basis and use the distillable entanglement of the average state in the respective protocols as a measure of the entanglement quality. \subsection{Optimized memory buffer time protocol at the first nesting level} \label{subsec:optfirnest} \begin{figure} \caption{(Color online) A quantum repeater with two segments at nesting level 1. Quantum memory pairs $1,2$ and $3,4$ store entangled states produced by sources $S_{12}$ and $S_{34}$, respectively.} \label{fig:qlink} \end{figure} \noindent Operationally, the optimized memory buffer time protocol can be understood by considering the entanglement swapping of states across two elementary segments at the first nesting level in a QR, Figure~\ref{fig:qlink}. Sources $S_{12}$ and $S_{34}$ supply entangled states with probability, $p$, to the decohering quantum memory pairs $(1,2)$ and $(3,4)$. The memory lifetime is denoted by $\tau_M$. Entanglement swapping at the repeater station, C, is performed via informed Bell-state measurements.
C checks the two pairs of memories to verify whether they are charged, which requires waiting for one unit of the one-way classical communication time, $\tau_C=L_0/c$, with $c$ the speed of light in the fiber and $L_0$ the length of the segment. If both pairs are charged, C performs a swapping operation on the memories $2$ and $3$ with success probability, $p_S$, producing an entangled state across the remote memories $1$ and $4$. Information about the success or failure of the swapping operation is then communicated to the remote memories, taking an additional time $\tau_C$. In case C finds the memory pairs uncharged (one or both), it classically communicates the need to continue entanglement generation attempts in the segment(s) to the remote memories, which also requires $\tau_C$ amount of time. The OBP limits the number of such entanglement generation attempts to a number $n_{\textrm{opt}}(p,\beta,p_S)$, determined by the operating parameters, after which the state from the remote memories is accessed. Subsequently all four memories are refreshed and the entanglement generation process starts over. The CP, on the other hand, places no limit on the number of entanglement generation attempts, which continue until a state is obtained in both segments.\\ By limiting the buffer time, the OBP provides an average remote entangled state with a high measure of entanglement since the time for memory decoherence is limited. The entanglement generation in the two segments can succeed at step numbers $1\leq k_1,k_2\leq n$, where $n$ is the maximum number of attempts, and the probability distribution of successful entanglement generation is $\mathbb{P}(k_1,k_2)=(1-p)^{k_1+k_2-2}p^2$. Without loss of generality, we assume that the memory pairs in the two segments of figure~\ref{fig:qlink} are supplied with the state $\rho^-=\ket{\psi^-}\bra{\psi^-}$, where $\ket{\psi^\pm}=(\ket{01}\pm\ket{10})/\sqrt{2}$.
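As a quick consistency check on the distribution $\mathbb{P}(k_1,k_2)$ (a sketch, not part of the paper), its sum over $1\leq k_1,k_2\leq n$ should equal the total success probability $\mathcal{N}=(1-(1-p)^n)^2$ introduced below:

```python
def success_prob(p, k1, k2):
    """P(k1, k2): segments 1 and 2 first succeed at attempts k1 and k2,
    each attempt succeeding independently with probability p."""
    return (1 - p) ** (k1 + k2 - 2) * p ** 2

p, n = 0.1, 25
total = sum(success_prob(p, k1, k2)
            for k1 in range(1, n + 1) for k2 in range(1, n + 1))
# Total success probability within n attempts, N = (1 - (1-p)^n)^2.
assert abs(total - (1 - (1 - p) ** n) ** 2) < 1e-12
```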
Storing the $\rho^-$ state in a pair of quantum memories with lifetime $\tau_M$ for time $t$ results in the state, $\rho(t)=\rho^-(1+e^{-2t/\tau_M})/2+\rho^+(1-e^{-2t/\tau_M})/2$, where $\rho^+=\ket{\psi^+}\bra{\psi^+}$. The remote state obtained after a successful swap and communication to the remote memories $1$ and $4$ is \begin{equation} \rho^S(k_1,k_2)=\frac{1}{2}(1+\beta^{|\Delta k|+2})\rho^-+\frac{1}{2}(1-\beta^{|\Delta k|+2})\rho^+, \label{swapstategreedy} \end{equation} where $\beta=e^{-2\tau_C/\tau_M}$ is the memory quality parameter that quantifies the decoherence in a pair of quantum memories during one round of one-way classical communication, and $\Delta k =(k_2-k_1)$. $\rho^S(k_1,k_2)$ approaches the totally mixed state exponentially fast with $|\Delta k|$. Thus, the states for large $|\Delta k|$ contribute little to the entanglement of the average state. The state in Eq.~(\ref{swapstategreedy}) further decoheres in the remote memories for a time $t=2\tau_C(n-\textrm{max}(k_1,k_2))$ before being accessed, and leads to the state $\rho^S_n(k_1,k_2)=\rho^-(1+\beta^{|\Delta k|+2+n-\textrm{max}(k_1,k_2)})/2+\rho^+(1-\beta^{|\Delta k|+2+n-\textrm{max}(k_1,k_2)})/2$. The average remote entangled state is the probabilistically weighted sum of such states, $\rho^O= \mathcal{N}^{-1} \sum_{k_1=1,k_2=1}^{n,n} \mathbb{P}(k_1,k_2)\rho^S_n(k_1,k_2)$, where $\mathcal{N}=(1-(1-p)^n)^2$ is the total probability of obtaining a remote entangled state across the two segments in $n$ attempts. The average remote entangled state can be expressed in a compact manner as (see \ref{app:avstates}) \begin{equation} \rho^O=\frac{1}{2}(1+\gamma^O(p,\beta,n))\rho^-+\frac{1}{2}(1-\gamma^O(p,\beta,n))\rho^+, \label{avstateopt} \end{equation} which has a fidelity of $F^O(p,\beta,n)=\tr\{\rho^-\rho^O\}=\frac{1}{2}(1+\gamma^O(p,\beta,n))$ and is obtained once in every period of $n(2\tau_C)$.
The function $\gamma^O(p,\beta,n)\in[0,1]$ can be physically interpreted as the degradation in fidelity of the average state due to memory decoherence during the buffer time. For perfect quantum memories, $\beta=1$, and $\gamma^O(p,\beta,n)=1$. \subsection{Optimal memory buffer time} \label{subsec:optaccesstime} As an entanglement measure for the mixed state, $\rho^O$, we use the upper bound on its distillable entanglement, $E[F^O]=H[\frac{1}{2}+\sqrt{F^O(1-F^O)}]$ for $1\geq F^O>0.5$ and $E[F^O]=0$ for $0.5\geq F^O\geq 0$, where $H[x]=-x\log(x)-(1-x)\log(1-x)$ is the binary entropy function. $E[F^O]$ expresses the number of pure Bell states that the best distillation protocol can achieve in the limit of an asymptotically large number of copies. For $\rho^O$ the bound can be achieved using the hashing protocol \cite{entdistill}. The entanglement generation rate is thus given by the rate of distillable entanglement (DE), \begin{equation} R^O_{DE}(p,\beta,n)=\frac{p_S(1-(1-p)^n)^2}{n(2\tau_C)}E[F^O(p,\beta,n)]. \label{rateopt} \end{equation} The optimal buffer time, $n_{\textrm{opt}}$, maximizes this rate for given $p$ and $\beta$ values, i.e., \begin{equation} n_{\textrm{opt}}(p,\beta)=\textrm{ArgMax}_{n}[R^O_{DE}(p,\beta,n)], \label{eq:noptimal} \end{equation} with the obvious condition that $n_{\textrm{opt}}(p,\beta)\geq 1$. Note that the optimal buffer time found using (\ref{eq:noptimal}) is obtained in units of the two-way classical communication time $2\tau_C$. The behavior of the optimal buffer time, $n_{\textrm{opt}}(p,\beta)$, in different regions of the $(p,\beta)$ parameter space is described in \ref{app:nopt}. While the optimal buffer time at the first nesting level depends only on the entanglement generation probability $p$ and the memory quality parameter $\beta$, for higher nesting levels it depends also on the swapping success probability of the previous level, $p_S$.
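The quantities entering Eq.~(\ref{rateopt}) and the optimization (\ref{eq:noptimal}) can be evaluated by direct summation over the success steps $(k_1,k_2)$. The sketch below is illustrative (with $p_S$ and $2\tau_C$ set to placeholder unit values, which do not affect the argmax at the first nesting level) and reproduces the limiting behaviors stated in the text: $\gamma^O=1$ for perfect memories and $n_{\textrm{opt}}=1$ for short-lived ones:

```python
from math import log2, sqrt

def gamma_O(p, beta, n):
    """Fidelity parameter gamma^O of the average state (Eq. avstateopt),
    by direct summation over success steps k1, k2 <= n."""
    N = (1 - (1 - p) ** n) ** 2
    s = 0.0
    for k1 in range(1, n + 1):
        for k2 in range(1, n + 1):
            P = (1 - p) ** (k1 + k2 - 2) * p ** 2
            s += P * beta ** (abs(k2 - k1) + 2 + n - max(k1, k2))
    return s / N

def E(F):
    """Hashing (upper) bound on distillable entanglement as a function of
    fidelity F; zero at or below the F = 0.5 threshold."""
    if F <= 0.5:
        return 0.0
    x = 0.5 + sqrt(F * (1 - F))
    if x >= 1.0:          # numerical guard for F barely above 1/2
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def rate_O(p, beta, n, pS=1.0, two_tauC=1.0):
    """R^O_DE of Eq. (rateopt), in units of the two-way communication time."""
    F = 0.5 * (1 + gamma_O(p, beta, n))
    return pS * (1 - (1 - p) ** n) ** 2 / (n * two_tauC) * E(F)

def n_opt(p, beta, n_max=100):
    """ArgMax over the buffer time, Eq. (eq:noptimal)."""
    return max(range(1, n_max + 1), key=lambda n: rate_O(p, beta, n))

assert abs(gamma_O(0.3, 1.0, 5) - 1.0) < 1e-12   # perfect memories
assert n_opt(0.1, 0.01, n_max=30) == 1           # short-lived memories
```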
In this paper, for simplicity, we assume that the parameters $p,\beta,p_S$ remain constant for all nesting levels. However, our analysis outlined in section \ref{sec:arch} can be used to address various distributions of the parameters. In the case of multiplexed quantum memories \cite{multiplexed-memory}, expression (\ref{eq:noptimal}) can be used to determine the optimal buffer time by using the effective entanglement generation probability between the nodes. Also, note that the asymptotic value of the fidelity, $F^O(p,\beta,n)=\frac{1}{2}(1+\gamma^O(p,\beta,n))$, is at least $0.5$ when dephasing is the only mode of decoherence. Indeed, dephasing is the dominant mode of decoherence for repeater-relevant timescales in quantum memories based on nuclear spins in diamond NV centers and the hyperfine electron levels in ion traps \cite{Simon2010}. However, when loss and depolarization are also considered, the asymptotic fidelity can fall below the distillable entanglement threshold of $F^O>0.5$. The optimized memory buffer time protocol works in this general case as well, but now the maximum memory buffer time, $n_{\textrm{max}}$, is limited by the threshold condition, $F^O(p,\beta,n_{\textrm{max}})>0.5$. \subsection{Entanglement generation rate comparison of the optimized and canonical protocols} \label{subsec:egrcomp} \begin{figure} \caption{(Color online) Logarithm (base 10) of the ratio of the entanglement generation rate in the optimized memory buffer time protocol, $R^O_{DE}$, to that in the canonical protocol, $R^C_{DE}$, at the first nesting level.} \label{fig:rateratio-p-beta1} \end{figure} To compare the entanglement generation rates of the optimized and canonical protocols we next obtain the average remote entangled state of the canonical protocol. The average remote entangled state in the canonical protocol has a low measure of entanglement since it is an average over states that have decohered in the memories for arbitrarily long times.
The average state, $\rho^C= \sum_{k_1=1,k_2=1}^{\infty,\infty} \mathbb{P}(k_1,k_2)\rho^S(k_1,k_2)$, again takes a compact form (see \ref{app:avstates}) \begin{equation} \rho^C=\frac{1}{2}(1+\gamma^C(p,\beta))\rho^-+\frac{1}{2}(1-\gamma^C(p,\beta))\rho^+. \label{avstatecan} \end{equation} Such states of fidelity $F^C(p,\beta):=\tr\{\rho^-\rho^C\}=\frac{1}{2}(1+\gamma^C(p,\beta))$ are obtained at the rate of the inverse of the waiting time $\braket{k}=(3-2p)/[p(2-p)]$ \cite{rate-vanloock}. The entanglement generation rate in the canonical protocol is \begin{equation} R^C_{DE}(p,\beta)=\frac{p_S}{\braket{k}(2\tau_C)}E[F^C(p,\beta)]. \label{ratecan} \end{equation} The optimized buffer time protocol provides a manifold increase in entanglement generation rates in most of the $(p,\beta)$-parameter space, even at the first nesting level, as shown in figure~\ref{fig:rateratio-p-beta1}. In particular, for the low $p,\beta$-region the ratio, $\eta(p,\beta,n_{\textrm{opt}}) = R^O_{DE}(p,\beta,n_{\textrm{opt}})/R^C_{DE}(p,\beta)\sim1/p$ (see \ref{app:rateratio}). Only for $\beta\sim 1$, i.e., for near-perfect quantum memories, does the canonical protocol provide better rates. The optimal buffer time, $n_{\textrm{opt}}$, depends on the operating point in parameter space. For short-lived quantum memories, $\beta\ll 1$, it is numerically found that $n_{\textrm{opt}}=1$. For long-lived quantum memories, $\beta\to 1$, and low entanglement generation probability, $p\to 0$, the optimal buffer time scales as $n_{\textrm{opt}}\sim p^{-1}[\log(1/\beta)]^{-1}=(1/p)(\tau_M/2\tau_C)$. \section{Hierarchical buffer time optimization-compliant repeater architecture} \label{sec:arch} We now present a repeater architecture which can operate all its nesting levels based on the optimized memory buffer time protocol in subsection \ref{subsec:hierarch}. This is followed by a description of the algorithm to hierarchically optimize the buffer time in subsection \ref{subsec:hieropt}.
The section ends by presenting a comparison of the entanglement generation rates of the optimized and canonical protocols for all nesting levels in subsection \ref{subsec:compallnest}. \subsection{Optimization-compliant architecture for all nesting levels} \label{subsec:hierarch} \begin{figure} \caption{(Color online) Quantum repeater architecture based on the optimized buffer time protocol. Shown are three nesting levels ($i=1,2,3$) and 9 repeater nodes ($A-I$). A new set of quantum memories (blue ovals) is required at each nesting level. Green rectangles represent entanglement swapping operations. Entanglement length-doubled states obtained at any level are transferred to the quantum memories of the next level using coherent operations (red arrows).} \label{fig:nestedlevels} \end{figure} \noindent A quantum repeater architecture capable of supporting hierarchical optimization of buffer times requires a new set of quantum memories at each nesting level, as shown in figure~\ref{fig:nestedlevels}. The average remote entangled state output by nesting level $i$ is transferred to the new quantum memories at level $(i+1)$ for $i=1,2,...,(N_m-1)$, where $N_m$ is the maximum nesting level. This transfer can be achieved by using a two-qubit quantum SWAP gate \cite{swap-experiment}. In figure~\ref{fig:nestedlevels} the quantum memories are labeled by the nesting level as superscript and the node label as subscript; for example, $m^{(1)}_{E_1}$ denotes quantum memory number 1 at the first nesting level of node $E$. The memories at any level follow the OBP with an optimal buffer time that is determined by the effective probability with which they receive entangled states and by the memory quality parameter relative to the classical communication time at that level. All levels follow the informed Bell-state measurement procedure, followed by communication to the remote memories, just as at the first nesting level described earlier.
The state output by a nesting level is therefore the probabilistically weighted average of the states received from the previous level. Periodic SWAP operations between two quantum memories at a node, for example, between $m^{(1)}_{E_1} \to m^{(2)}_{E_1} \to m^{(3)}_{E_1}$ in figure~\ref{fig:nestedlevels}, are used to feed forward the average state to the quantum memories of the higher nesting level. The nesting levels in such an architecture can be modeled as a sequence of self-similar input-output systems, $\{S^{(i)}\}_i,i\in\{1,2,...,N_m\}$, as shown in figure~\ref{fig:in-out-system}. Each system is characterized by its classical communication time $\tau_C^{(i)}=2^{i-1}\tau_C$ and memory quality parameter $\beta^{(i)}=\beta^{2^{i-1}}$. Further, for each system $S^{(i)}$ the input-cycle time $n^{(i)}_{\textrm{in}}$ specifies the number of two-way classical communication cycles over which it receives one average state from the previous system with probability $p^{(i)}_{\textrm{in}}$. This takes $n^{(i)}_{\textrm{in}}2\tau^{(i)}_C$ amount of time. The output-cycle time $n^{(i)}_{\textrm{out}}$ is the buffer time for system $S^{(i)}$ in terms of the number of input-cycles for system $S^{(i)}$, i.e., one average state is output by the system in $n^{(i)}_{\textrm{out}}n^{(i)}_{\textrm{in}}2\tau^{(i)}_C$ amount of time with probability $p^{(i)}_{\textrm{out}}$. Any system $S^{(i)}$ is able to receive or output an average state only at the end of a time period that is a multiple of its two-way classical communication cycle time. Two adjacent systems $S^{(i)}$ and $S^{(i+1)}$ are synchronized if the physical times at which the $i$th system outputs its average state correspond to the physical times at which the $(i+1)$th system can receive the state.
Therefore, successive input-cycle times and output-cycle times have to obey the condition for synchronization of the systems, $n^{(i)}_{\textrm{out}}n^{(i)}_{\textrm{in}}2\tau_C^{(i)}=n^{(i+1)}_{\textrm{in}}2\tau^{(i+1)}_C$, which implies \begin{equation} n^{(i+1)}_{\textrm{in}}=n^{(i)}_{\textrm{out}}n^{(i)}_{\textrm{in}}/2, \label{eq:phasematchcond} \end{equation} for $i=1,...,N_m-1$, so that $n^{(i)}_{\textrm{in}, \textrm{out} }$ are positive integers, and $n^{(1)}_{\textrm{in}}=1$. The output probability of system $S^{(i)}$ is related to its input probability as $ p^{(i)}_{\textrm{out}}=p_S[1-(1-p^{(i)}_{\textrm{in}})^{n^{(i)}_\textrm{out}}]^2$ with $p^{(1)}_{\textrm{in}}=p$. The input probability of system $S^{(i)}$ is, in turn, related to the output probability of system $S^{(i-1)}$ as $p^{(i)}_{\textrm{in}}=p_Tp^{(i-1)}_{\textrm{out}}$, where $p_T$ is the probability to successfully transfer states from the memories of one nesting level to the next, and $p^{(0)}_\textrm{out}=p$ to maintain consistency. \begin{figure} \caption{(Color online) Organization of nesting levels in a quantum repeater architecture as a sequence of self-similar systems. Each system $S^{(i)}$ receives average entangled states from system $S^{(i-1)}$ and outputs average states to system $S^{(i+1)}$.} \label{fig:in-out-system} \end{figure} The average remote entangled state $\rho^{O,(i)}$ obtained in the OBP based architecture at any nesting level $i$ depends on the parameter values for the preceding levels, i.e., on $\{\tau^{(j)}_C,\beta^{(j)},n^{(j)}_\textrm{in}\}$ for $1\leq j\leq i$. In addition, it depends on the initial entanglement generation probability, $p$, the swapping success probability, $p_S$, and the transfer success probability, $p_T$. This state, \begin{equation} \rho^{O,(i)}=\frac{1+\prod_{j=1}^i[\gamma^{O,(j)}]^{2^{i-j}}}{2}\rho^- +\frac{1-\prod_{j=1}^i[\gamma^{O,(j)}]^{2^{i-j}}}{2}\rho^+, \label{eq:rhonestleveli} \end{equation} is obtained once every $n^{(i)}_{\textrm{in}}n^{(i)}_{\textrm{out}}2\tau^{(i)}_C$ amount of time.
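The bookkeeping implied by Eq.~(\ref{eq:phasematchcond}) together with the probability recursions can be sketched as follows (illustrative Python; the per-level rates and the $\gamma^{O,(j)}$ factors are deliberately not modeled here):

```python
def propagate(p, pS, pT, n_out_seq):
    """Propagate input-cycle times and input/output probabilities through
    the systems S^(1), S^(2), ... for a chosen sequence of output-cycle
    times n_out^(i)."""
    n_in, p_in = 1, p                  # n_in^(1) = 1, p_in^(1) = p
    levels = []
    for i, n_out in enumerate(n_out_seq, start=1):
        # p_out^(i) = p_S [1 - (1 - p_in^(i))^{n_out^(i)}]^2
        p_out = pS * (1 - (1 - p_in) ** n_out) ** 2
        levels.append({"level": i, "n_in": n_in, "n_out": n_out,
                       "p_in": p_in, "p_out": p_out})
        if i < len(n_out_seq):
            # Synchronization condition: n_in^(i+1) = n_out^(i) n_in^(i) / 2,
            # which must yield a positive integer.
            n_in_next = n_out * n_in / 2
            assert n_in_next == int(n_in_next) and n_in_next >= 1, \
                "output-cycle times violate the synchronization condition"
            n_in = int(n_in_next)
            p_in = pT * p_out          # p_in^(i+1) = p_T p_out^(i)
    return levels

# Example: n_out^(i) = 2 at every level keeps n_in^(i) = 1 throughout.
levels = propagate(p=0.1, pS=0.5, pT=1.0, n_out_seq=[2, 2, 2])
assert [lv["n_in"] for lv in levels] == [1, 1, 1]
```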
An explicit expression for $\gamma^{O,(j)}$ as a function of the relevant parameters can be found in the supplementary material. Physically, $\gamma^{O,(j)}\leq 1$ can be understood as the degradation of the fidelity during the buffer time at nesting level $j$. $\rho^{O,(i)}$ has a fidelity $F^{O,(i)}=\tr\{\rho^-\rho^{O,(i)}\}=(1+\prod_{j=1}^i[\gamma^{O,(j)}]^{2^{i-j}})/2$. This fidelity exceeds the distillation threshold of $1/2$ by $(1/2)\prod_{j=1}^i[\gamma^{O,(j)}]^{2^{i-j}}$, which accounts for the degradation in all nesting levels up to $i$. Therefore, the EGR of system $S^{(i)}$ is \begin{equation} R^{O,(i)}_{DE}=\frac{p^{(i)}_{\textrm{out}}}{n^{(i)}_{\textrm{in}}n^{(i)}_{\textrm{out}}2\tau^{(i)}_C}E\left[\frac{1+\prod_{j=1}^{i}[\gamma^{O,(j)}]^{2^{i-j}}}{2}\right]. \label{DEratei} \end{equation} \subsection{Hierarchical optimization of memory buffer time at every nesting level} \label{subsec:hieropt} In a repeater with $N_m$ levels, the final EGR can be maximized by hierarchically optimizing the buffer times of all nesting levels. We search over the set of positive integers $\{n^{(i)}_{\textrm{in}},n^{(i)}_{\textrm{out}}\}$, subject to the synchronization condition constraints, Eq.~(\ref{eq:phasematchcond}). The optimization proceeds by maximizing the EGR sequentially starting from nesting level 1. If at any nesting level, $i$, the optimal value $n^{(i)}_{\textrm{out,opt}} = \textrm{ArgMax}_{n^{(i)}_{\textrm{out}}}[R^{O,(i)}_{DE}]$ does not satisfy the synchronization condition, one of the neighboring values, $\tilde{n}^{(i)}_{\textrm{out,opt}}\in\{n^{(i)}_{\textrm{out,opt}}-1,n^{(i)}_{\textrm{out,opt}}+1\}$, whichever provides a higher $R^{O,(i)}_{DE}$, is chosen and used for calculating $n^{(i+1)}_{\textrm{in}}$ and $p^{(i+1)}_{\textrm{in}}$. This procedure is followed for all nesting levels up to $i=N_m$ and produces an approximately optimal synchronized sequence of $n^{(i)}_{\textrm{in}}$ and $n^{(i)}_{\textrm{out}}$.
A repeater operating its nesting levels based on the sequence of approximately optimal buffer times still gives a manifold increase in EGR, as shown by the logarithm of the ratio of the rates at nesting level $i$, $\log[\eta^{(i)}(p,\beta,n)]=\log[R^{O,(i)}_{DE}(p,\beta,n)/R^{C,(i)}_{DE}(p,\beta)]$, in figure~\ref{fig:crossover}. \subsection{Comparison of optimized vs canonical protocol at any nesting level} \label{subsec:compallnest} In the canonical protocol all nesting levels operate on the same set of quantum memories, and successive levels do not use a new set of memories. An average remote entangled state at any level is obtained after the waiting time for that level. The next level receives the average state from the previous level as soon as it is obtained, i.e., no synchronization conditions are used. The average remote entangled state in the canonical protocol obtained at any nesting level $i$ is given by an expression identical to Eq.~(\ref{eq:rhonestleveli}) with $\gamma^{C,(j)}$ replacing $\gamma^{O,(j)}$. Physically, $\gamma^{C,(j)}$ can be understood as the degradation in the fidelity due to the waiting time at nesting level $j$. The entanglement generation rate in the canonical protocol is given by (see \ref{app:avstates}) \begin{equation} R^{C,(i)}_{DE}=\frac{p_S}{(\prod_{j=1}^i\braket{k}_j)2\tau_C}E\left[\frac{1+\prod_{j=1}^{i}[\gamma^{C,(j)}]^{2^{i-j}}}{2}\right] , \label{canDEratei} \end{equation} where $\braket{k}_1=(3-2p)/[p(2-p)]$ is the waiting time at the first nesting level due to the initial entanglement generation probability, and $\braket{k}_j=(3-2p_S)/[p_S(2-p_S)]$ for all $j\geq 2$ is the waiting time due to the swapping success probability at the second nesting level and higher.
\begin{figure} \caption{(Color online) Logarithm (base 10) of the ratio of the entanglement generation rate for the optimized buffer time protocol and the canonical protocol at different nesting levels for $p=0.02,\beta=0.2,p_T=1$ and two different values of $p_S=0.75$ (blue dots) and $p_S=0.5$ (red triangles). The buffer times of the nesting levels were approximately optimal in both cases.} \label{fig:crossover} \end{figure} The manifold increase in the entanglement generation rate in the optimized buffer time protocol compared to the canonical protocol is seen at all nesting levels if the swapping success probability $p_S$ is low, figure~\ref{fig:crossover}. On the other hand, if $p_S$ is high then the canonical protocol can yield better rates for higher nesting levels. In the OBP the probability factor on the right-hand side of Eq.~(\ref{DEratei}) scales as $p^{(i)}_{\textrm{out}}\sim (p_S)^i(p)^{2^i}$ for $p\ll 1$, whereas the time to obtain a state goes as $n^{(i)}_{\textrm{in}}n^{(i)}_{\textrm{out}}2\tau^{(i)}_C= 2^{i+1}\tau_C$, for $n^{(i)}_{\textrm{out}}=2$ taken as an example. This implies a low probability of obtaining a remote entangled state per unit time. In the CP, the probability factor $p_S$ on the right-hand side of Eq.~(\ref{canDEratei}) is constant. However, the time to obtain a state, $(\prod_{j=1}^i\braket{k}_j)2\tau_C=\braket{k}_1(\braket{k}_2)^{i-1}2\tau_C$, can diverge much faster than that in the OBP, $2^{i+1}\tau_C$ in our example. This happens if the waiting time due to the swapping success probability satisfies $\braket{k}_j\geq 2$ for all $j\geq 2$, which implies $p_S\leq(3-\sqrt{3})/2\approx 0.63$. In this case, the probability to obtain a remote entangled state per unit time in the CP can be even lower than that in the OBP. Moreover, the degradation in the fidelity, $\prod_{j=1}^{i}[\gamma^{X,(j)}]^{2^{i-j}}$, for $X=O,C$ in the two cases has its maximum contribution from the initial nesting levels.
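The condition $\braket{k}_j\geq 2$ can be solved explicitly: $(3-2p_S)/[p_S(2-p_S)]\geq 2$ is equivalent to $2p_S^2-6p_S+3\geq 0$, i.e., $p_S\leq(3-\sqrt{3})/2\approx 0.634$, consistent with the threshold quoted above. A minimal numerical check:

```python
from math import sqrt

def waiting_time(pS):
    """Mean waiting time <k>_j (j >= 2) set by the swapping success
    probability pS."""
    return (3 - 2 * pS) / (pS * (2 - pS))

# <k>_j >= 2  <=>  2 pS^2 - 6 pS + 3 >= 0  <=>  pS <= (3 - sqrt(3))/2
pS_threshold = (3 - sqrt(3)) / 2
assert abs(waiting_time(pS_threshold) - 2.0) < 1e-9
# Below threshold the CP waiting time diverges faster than the OBP's 2^i;
# above it, slower (the two p_S values of figure fig:crossover straddle it).
assert waiting_time(0.5) > 2.0 > waiting_time(0.75)
```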
As discussed earlier, the OBP yields a factor of ${\sim}1/p$ increase of the EGR already at the first nesting level. Therefore the advantage of the OBP persists for the first few nesting levels, up to a crossover nesting level, even if $p_S$ is high. At the crossover nesting level, the entanglement generation rate of the optimized buffer time protocol becomes equal to or less than that of the canonical protocol. If $p_S$ is low then the logarithm of the ratio of rates in the two protocols diverges with the nesting level. In practice, one can use the explicit expressions for the rates provided in Eq.~(\ref{DEratei}) and Eq.~(\ref{canDEratei}) to numerically evaluate the performance of the protocols at each nesting level. The advantage of the optimized buffer time protocol can be observed by plotting the logarithm of the ratio of entanglement generation rates, $\log[\eta^{(i)}(p,\beta,n_{\textrm{opt}})]$, versus the distance between repeater nodes at the first nesting level as shown in figure (\ref{fig:ratio}). The repeater is placed midway, at a distance of $L_0$ from either end node, figure (\ref{fig:qlink}). We assume that the entanglement generation probability varies with the distance as $p=e^{-L_0/L_a}$ with the attenuation length $L_a=20$~km. The memory quality parameter then varies as $\beta=e^{-2L_0/(c\tau_M)}$, where the speed of light in fiber is taken to be $c=2\times10^8$ ms$^{-1}$. We choose two different memory lifetimes with values of $\tau_M=100~\mu$s (green curve) and $\tau_M=1$~ms (blue curve). The latter shows that a hundredfold improvement in entanglement generation rate is obtained with 1 millisecond quantum memories when the repeater is at a distance of $100$~km, whereas the increase is even higher in the former case.
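The distance scalings just described can be made concrete with a small script (a sketch; the explicit form $\beta=e^{-2\tau_C/\tau_M}$ with $\tau_C=L_0/c$ is taken from the appendix, and the function and variable names are ours):

```python
import math

La = 20.0   # fiber attenuation length, km
c = 2e5     # speed of light in fiber, km/s

def link_params(L0_km, tau_M_s):
    """Entanglement generation probability and memory quality parameter
    for an elementary link of length L0_km and memory lifetime tau_M_s."""
    p = math.exp(-L0_km / La)               # p = exp(-L0/La)
    tau_C = L0_km / c                       # one-way communication time, s
    beta = math.exp(-2 * tau_C / tau_M_s)   # beta = exp(-2 tau_C / tau_M)
    return p, beta

p, beta = link_params(100.0, 1e-3)   # 100 km link, 1 ms memory
print(p, beta)
```

With $L_0=20$ km and $\tau_M=0.1$ ms this reproduces the value $\beta=e^{-2}\approx0.135$ quoted in the conclusion.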
\begin{figure} \caption{(Color online) Logarithm (base 10) of the ratio of entanglement generation rate for the optimized buffer time protocol and the canonical protocol vs the distance between the repeater nodes, $L_0$, at the first nesting level. The green curve is obtained for memory lifetime $\tau_M=100~\mu$s and the blue curve for $\tau_M=1$ ms.} \label{fig:ratio} \end{figure} \section{Discussion and conclusion} \label{sec:conc} We presented a quantum repeater architecture and protocol that mitigates quantum memory decoherence. The protocol optimizes the buffer time of the quantum memories based on the operating point in parameter space. We showed the hierarchical optimization of the buffer time at all nesting levels. The resulting increase of entanglement generation rates by many orders of magnitude was demonstrated. Crucially, the improvement was achieved with state-of-the-art physical resources. For example, with current technology, an entanglement generation probability of $p\simeq10^{-4}-10^{-5}$, a memory lifetime of $\tau_M\simeq10^{-1}$ ms, a swapping success probability $p_S\simeq 0.5$ and a transfer probability $p_T\simeq1$ are feasible \cite{swap-experiment,quantummemories,seqquantumrepeater}. If repeater stations are spaced at intervals of $L_0=20$ km, corresponding to the attenuation length in optical fibers, the one-way classical communication time $\tau_C=L_0/(2\times 10^5~\textrm{km}~s^{-1})=10^{-1}$ ms equals the memory lifetime. The memory quality parameter then is $\beta=0.135$. In this region of parameter space the optimized buffer time protocol yields a $10^4$--$10^5$-fold increase in the entanglement generation rate. The proposed optimized buffer time protocol performs particularly well in the technologically feasible parameter regions and could facilitate broad applications in the future development of quantum networks, such as interferometry \cite{santra_qtel} and secret sharing \cite{Hillery1999}.
\ack This work was supported in part by the Office of the Secretary of Defense, Quantum Science and Engineering Program. L.J. acknowledges support from the ARL-CDQI (W911NF-15-2-0067, W911NF-18-2-0237), NSF (EFMA-1640959), and the Packard Foundation (2013-39273). \appendix \section{Derivation of the average state in the Optimized buffer-time protocol and the Canonical protocol} \label{app:avstates} The decoherence in a pair of identical quantum memories with a lifetime $\tau_M$, due to dephasing, occurs at the rate $2/\tau_M$. We will denote the phase decoherence superoperator by $D_t$. Thus for an initial stored state, $\rho_0=\ket{\psi^-}\bra{\psi^-}$, we have \begin{equation} \rho(t)=D_t(\rho_0)=P^-(t)\rho^+ + P^+(t)\rho^-, \end{equation} with $P^{\pm}(t):=(1\pm e^{-2t/\tau_M})/2$. We assume that our heralded scheme of entanglement swapping succeeds with a probability $p_S$. Under the swapping operation $\hat{S}$, for a pair of 2-qubit states $\rho(t_1)$ and $\rho(t_2)$ stored in the two pairs of memories for times $t_1$ and $t_2$, the output state conditioned on heralding is \begin{eqnarray} \hat{S}[\rho(t_1),\rho(t_2)]=\hat{S}[P^-(t_1)\rho^+ + P^+(t_1)\rho^-,\,P^-(t_2)\rho^+ + P^+(t_2)\rho^-]\nonumber\\ =P^-(t_1)P^-(t_2)\hat{S}[\rho^+,\rho^+]+P^-(t_1)P^+(t_2)\hat{S}[\rho^+,\rho^-]\nonumber\\ ~~+P^+(t_1)P^-(t_2)\hat{S}[\rho^-,\rho^+]+P^+(t_1)P^+(t_2)\hat{S}[\rho^-,\rho^-]\nonumber\\ =(P^-(t_1)P^-(t_2)+P^+(t_1)P^+(t_2))\rho^-+(P^+(t_1)P^-(t_2)+P^-(t_1)P^+(t_2))\rho^+\nonumber\\ =P^+(t_1+t_2)\rho^-+P^-(t_1+t_2)\rho^+, \label{swapstate} \end{eqnarray} where we have used the linearity of the swap operation in the second line and the equalities $\hat{S}[\rho^+,\rho^+]=\hat{S}[\rho^-,\rho^-]=\rho^-$, $\hat{S}[\rho^+,\rho^-]=\hat{S}[\rho^-,\rho^+]=\rho^+$. The probabilistic process of charging a pair of memories with the state $\ket{\psi^-}$ succeeds at some step number $k$.
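Eq.~(\ref{swapstate}) states that dephasing times simply add under swapping. A quick numerical check on the Bell-diagonal weights (a Python sketch; the function name `P` and the chosen times are ours):

```python
import math

tau_M = 1.0   # memory lifetime (arbitrary units)

def P(sign, t):
    # P^{+-}(t) = (1 +- exp(-2 t / tau_M)) / 2
    return (1 + sign * math.exp(-2 * t / tau_M)) / 2

t1, t2 = 0.37, 1.42
# weight of rho^- after the swap (second-to-last line of Eq. (swapstate))
w_minus = P(-1, t1) * P(-1, t2) + P(+1, t1) * P(+1, t2)
# ...equals P^+(t1 + t2): the dephasing times add
assert abs(w_minus - P(+1, t1 + t2)) < 1e-12
print(w_minus)
```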
This can happen at possibly different step numbers $k_1,k_2$ for the two pairs of memories shown in figure (1) of the main text. Assuming $k_1\leq k_2$, the later pair of memories still stores the state for a time $t_2=\tau_C$, i.e., for one classical communication time. The earlier charged pair stores the state for a time $t_1=(k_2-k_1)2\tau_C+\tau_C$. Therefore, the correspondence of storage times to step numbers is \begin{eqnarray} t_1\to(k_2-k_1)2\tau_C+\tau_C\nonumber\\ t_2\to\tau_C. \end{eqnarray} Thus, $(t_1+t_2)\to (|k_2-k_1|+1)2\tau_C$ for all $k_1,k_2$. In the OBP the average accessed state $\rho^O$ is a probabilistically weighted sum of the post-swap states of the states that have been stored in the two pairs of quantum memories, subject to the condition $|k_2-k_1|\leq n$ on the charging step numbers. The probability distribution of successful entanglement generation has the form $\mathbb{P}(k_1,k_2)=(1-p)^{k_1+k_2-2}p^2$. The form of the remote entangled state $\rho^S(k_1,k_2)$ after the swap is given by Eq.~(\ref{swapstate}) with $(t_1+t_2)\to (|k_2-k_1|+1)2\tau_C$ for all $k_1,k_2$. One also needs to account for the decoherence suffered in the two remote memories after a swapped state is obtained until the memories are refreshed after $n$ cycle times. The state obtained after the swap, Eq.~(\ref{swapstate}), thus further decoheres in the remote quantum memories at the two ends for a time $t_n=\{2(n-\textrm{max}(k_1,k_2))+1\}\tau_C$, leading to the state $\rho^S_n(k_1,k_2)=D_{t_n}(\rho^S(k_1,k_2))$.
Thus the average state in the optimized protocol is given by \begin{eqnarray} \rho^O=\frac{\sum_{k_1=1,k_2=1}^{n,n} \mathbb{P}(k_1,k_2)\rho^S_n(k_1,k_2)}{(1-(1-p)^n)^2}\nonumber\\ =\frac{1+\gamma^O(p,\beta,n)}{2}\rho^-+\frac{1-\gamma^O(p,\beta,n)}{2}\rho^+, \label{optstate} \end{eqnarray} where $\gamma^O(p,\beta,n)$ is given by \begin{equation} \gamma^O(p,\beta,n) = \beta^3\frac{p}{(1-(1-p)^n)^2}\frac{f(p,\beta,n)}{(\beta^2-q)(\beta^2-q^2)}, \label{eq:gammaopt} \end{equation} with $f(p,\beta,n) = q^{2n}(\beta^2+q(1-q-\beta^2))+\beta^{2n}(2q^{n+2}-q^2+\beta^2-2q^n\beta^2+q(\beta^2-1))$, $q=(1-p)$, and $\beta=e^{-2\tau_C/\tau_M}$. Note that the function $f(p,\beta,n)$ has $q=\beta,\beta^2$ as roots and thus the factors in the denominator of Eq.~(\ref{eq:gammaopt}) do not cause any singular behavior. For perfect quantum memories $\beta=1$, so that $\gamma^O(p,\beta,n)=1$. For $p=1$, $\gamma^O(p,\beta,n)=\beta^{2n+1}$, which, for $\beta<1$, is maximized at $n=1$. The state in Eq.~(\ref{optstate}) is the output of the first nesting level and is denoted as $\rho^{O,(1)}$. The second nesting level receives the average output state from the first nesting level and outputs a probabilistically weighted average state with the probability defined by the input probability for the second level. The probability distribution at any nesting level, $i$, is given by \begin{equation} \mathbb{P}^{O,(i)}(k_1,k_2)=(p^{(i)}_{\textrm{in}})^2(q^{(i)}_{\textrm{in}})^{k_1+k_2-2} \label{probgreedyj} \end{equation} with $q^{(i)}_{\textrm{in}}=1-p^{(i)}_{\textrm{in}}$.
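The closed form of Eq.~(\ref{eq:gammaopt}) can be cross-checked against the direct probabilistic sum. The exponent bookkeeping below is our reconstruction from the storage times above ($(t_1+t_2)\to(|k_2-k_1|+1)2\tau_C$ from the swap, plus the post-swap decoherence time $t_n$, with $\beta=e^{-2\tau_C/\tau_M}$); a Python sketch:

```python
def gamma_O_closed(p, beta, n):
    # Eq. (gammaopt) with f(p, beta, n) written out
    q = 1 - p
    f = (q**(2 * n) * (beta**2 + q * (1 - q - beta**2))
         + beta**(2 * n) * (2 * q**(n + 2) - q**2 + beta**2
                            - 2 * q**n * beta**2 + q * (beta**2 - 1)))
    return beta**3 * p / (1 - q**n)**2 * f / ((beta**2 - q) * (beta**2 - q**2))

def gamma_O_direct(p, beta, n):
    # direct sum over charging steps k1, k2 <= n: dephasing exponent
    # 2(|k2-k1|+1) from the swap plus 2(n - max(k1,k2)) + 1 from the
    # remaining storage in the remote memories
    q = 1 - p
    total = 0.0
    for k1 in range(1, n + 1):
        for k2 in range(1, n + 1):
            expo = 2 * (abs(k2 - k1) + 1) + 2 * (n - max(k1, k2)) + 1
            total += p * p * q**(k1 + k2 - 2) * beta**expo
    return total / (1 - q**n)**2

print(gamma_O_closed(0.5, 0.6, 2), gamma_O_direct(0.5, 0.6, 2))  # the two agree
```

For $p=1$ both expressions reduce to $\beta^{2n+1}$, and for $\beta=1$ both give 1, matching the limits quoted above.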
The average state obtained in the optimized buffer time protocol at any nesting level depends on the sequence of values of the parameter sets $\bar{\tau}_C=\{\tau^{(1)}_C,\tau^{(2)}_C,...,\tau^{(i)}_C\}$, $\bar{\beta}=\{\beta^{(1)},\beta^{(2)},...,\beta^{(i)}\}$, $\bar{n}_{\textrm{in}}=\{n^{(1)}_{\textrm{in}},n^{(2)}_{\textrm{in}},...,n^{(i)}_{\textrm{in}}\}$, the initial charging success probability $p$, the swapping success probability $p_S$, and the transfer success probability $p_T$. This state is obtained by iterating the process of averaging over the states received from the previous nesting level and normalizing by the appropriate probability normalization factor, resulting in \begin{equation} \rho^{O,(i)}_{\textrm{out}}=\frac{1+\prod_{j=1}^i[\gamma^{O,(j)}]^{2^{i-j}}}{2}\rho^- +\frac{1-\prod_{j=1}^i[\gamma^{O,(j)}]^{2^{i-j}}}{2}\rho^+. \label{eq:rhonestleveli} \end{equation} In the above expression the explicit dependence of $\gamma^{O,(j)}$ on the relevant parameter values is suppressed for brevity; explicitly, \begin{equation} \gamma^{O,(j)}=\mathcal{N}^{-1}\sum_{k_1,k_2=1}^{n^{(j)}_{\textrm{out}}}\mathbb{P}^{O,(j)}(k_1,k_2)[\beta^{(j)}]^{n^{(j)}_{\textrm{in}}\{2(n^{(j)}_{\textrm{out}}-k_1)+2\}+1}, \end{equation} where $\mathcal{N}=\sum_{k_1,k_2=1}^{n^{(j)}_{\textrm{out}}}\mathbb{P}^{O,(j)}(k_1,k_2)$. In the CP, the average state is a probabilistically weighted sum over all possible storage times in the two pairs of memories. The state obtained after the swap operation is given by Eq.~(\ref{swapstate}) with $(t_1+t_2)\to (|k_2-k_1|+1)2\tau_C$ for all $k_1,k_2$. The swapped state then decoheres during a time $\tau_C$ in the two remote memories, resulting in the state $\rho^S_r(k_1,k_2)=D_{\tau_C}(\rho^S(k_1,k_2))$, since the results of the swap operation have to be communicated to the end nodes.
Therefore, the CP average state is \begin{eqnarray} \rho^C=\sum_{k_1=1,k_2=1}^{\infty,\infty} \mathbb{P}(k_1,k_2)\rho^S_r(k_1,k_2)\nonumber\\ =\frac{1+\gamma^C(p,\beta)}{2}\rho^-+\frac{1-\gamma^C(p,\beta)}{2}\rho^+ \, , \label{expectedstategreedy} \end{eqnarray} where the function $\gamma^C(p,\beta)$ is given by \begin{equation} \gamma^C(p,\beta) = \beta^3p^2\frac{(1-\beta^2 q)+2\beta^2 q}{(1-q^2)(1-\beta^2 q)} \label{gammagreedy} \end{equation} with $\beta=e^{-2\tau_C/\tau_M}$ and $q=1-p$. Again, for $\beta=1$, $\gamma^C(p,\beta)=1$ and for $p=1$, $\gamma^C(p,\beta)=\beta^3$. The expected number of steps to obtain an entangled state in both involved segments is given by \begin{eqnarray} \braket{k}=\sum_{k=1}^\infty kp^2q^{2(k-1)}+2\sum_{k_1=1}^\infty \sum_{k_2=k_1+1}^{\infty}k_2\,p^2q^{k_1+k_2-2}\nonumber\\ =\frac{(3-2p)}{p(2-p)} \,. \end{eqnarray} The average state after any nesting level in the CP is obtained by iterating the averaging procedure at every nesting level. The probability distribution at the first nesting level is determined by the entanglement generation probability. For the second and higher nesting levels the probability distribution is determined by the swapping success probability. Thus, \begin{eqnarray} \mathbb{P}^{C,(1)}(k_1,k_2)=p^2q^{k_1+k_2-2} \,, \nonumber\\ \mathbb{P}^{C,(i)}(k_1,k_2)=p_S^2q_S^{k_1+k_2-2} \,.
\end{eqnarray} By iterating the averaging procedure at each nesting level using the above probability distributions we get the form of the average state for any nesting level \begin{equation} \rho^{C,(i)}_{\textrm{out}}=\frac{1+\prod_{j=1}^i[\gamma^{C,(j)}]^{2^{i-j}}}{2}\rho^- +\frac{1-\prod_{j=1}^i[\gamma^{C,(j)}]^{2^{i-j}}}{2}\rho^+, \label{eq:canrhonestleveli} \end{equation} where \begin{equation} \gamma^{C,(j)}=\sum_{k_1,k_2=1}^{\infty,\infty}\mathbb{P}^{C,(j)}(k_1,k_2)[\beta^{(j)}]^{3}[\beta^{(1)}]^{2\prod_{l=0}^{(j-1)}\braket{k}_l|k_2-k_1|}, \end{equation} with $\braket{k}_0=1$, $\braket{k}_1=(3-2p)/[p(2-p)]$, $\braket{k}_l=(3-2p_S)/[p_S(2-p_S)]$ for $l\geq 2$. In both the OBP and CP, if the initial entangled state across the elementary segments has a fidelity of $f$, i.e., $\rho_0=f\rho^-+(1-f)\rho^+$, then the output states Eq.~(\ref{eq:rhonestleveli}) and Eq.~(\ref{eq:canrhonestleveli}) include the fidelity factor $f$ in their coefficients. Thus for the OBP $\gamma^{O,(1)}\to f\gamma^{O,(1)}$, while for the CP $\gamma^{C,(1)}\to f\gamma^{C,(1)}$. Our results remain unchanged for any value of the initial fidelity. For a repeater with $N_m$ nesting levels, the OBP requires $2(2^{N_m+2}-2)$ memories whereas the CP requires $2^{N_m+2}$ quantum memories. Thus, the OBP requires at most twice as many quantum memories as the CP. The EGR per memory used is still higher by orders of magnitude in the OBP in the relevant regions of parameter space. \section{Ratio of rates and optimal wait-window size} \label{app:nopt} The OBP provides higher entanglement generation rates than the CP in most regions of the $(p,\beta)$ parameter space, as shown in figure~2 of the main text. The size of the optimal wait-window in terms of the cycle time depends on the operating point in the $(p,\beta)$ parameter space, figure~(\ref{fig:optimaln1}). \begin{figure} \caption{The optimal wait-window size in the optimized buffer time protocol depends on the location in parameter space.
Shown here is the ratio of entanglement generation rates for $(p,\beta)$ = (0.1,0.9) in blue, (0.1,0.4) in green and (0.05,0.8) in red. The sequences of blue, red and green dots have their maxima at the respective optimal wait-window sizes $n_{\textrm{opt}}$.} \label{fig:optimaln1} \end{figure} We identify several regions: \protect{\begin{itemize} \item Long-lived quantum memories, low entanglement generation probabilities ($p\to 0,~\beta\to 1$). In this region, figure~(\ref{fig:optimistic-optimaln1}) suggests the scaling $n_{\textrm{opt}}\sim1/p$. \item Short-lived quantum memories, high entanglement generation probabilities ($p\to 1,~\beta\to 0$). In this region, the best schedule is of course the rapid reset strategy with $n_{\textrm{opt}}=1$. \item Short-lived memories, low entanglement generation probabilities ($p\to 0,~\beta\to 0$). In this region, figure~(\ref{fig:optimistic-optimaln2}) suggests that rapid resetting with $n_{\textrm{opt}}=1$ constitutes the best schedule. \item Long-lived quantum memories, high entanglement generation probabilities ($p\to 1,~\beta\to 1$). In this region, the best schedule also turns out to be the rapid reset strategy $n_{\textrm{opt}}=1$. For $\beta\simeq 1$ the CP provides better entanglement generation rates than the OBP. \end{itemize}} \begin{figure} \caption{Scaling of the optimal wait-window size $n_{\textrm{opt}}$ in the $p\to 0$, $\beta\to 1$ region.} \label{fig:optimistic-optimaln1} \end{figure} \begin{figure} \caption{Scaling of the optimal wait-window size $n_{\textrm{opt}}$ in the $p\to 0$, $\beta\to 0$ region.} \label{fig:optimistic-optimaln2} \end{figure} \section{Ratio of entanglement generation rates for nesting level N=1} \label{app:rateratio} The OBP and the CP can be compared with respect to the entanglement generation rate, which is the maximum rate of distillable entanglement. Here we compare the distillable entanglement generation rates of the two protocols at the first nesting level, $i=1$, in the $p\to0,\beta\to 0$ region.
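Before comparing rates, the closed forms derived above for $\gamma^C(p,\beta)$ (Eq.~(\ref{gammagreedy})) and $\braket{k}$ can be checked against direct summation (a Python sketch; the function names and truncation cutoffs are ours, and the dephasing exponents follow Eq.~(\ref{swapstate}) plus the extra $\tau_C$ of end-node decoherence):

```python
def gamma_C_closed(p, beta):
    q = 1 - p
    return beta**3 * p * p * ((1 - beta**2 * q) + 2 * beta**2 * q) \
        / ((1 - q**2) * (1 - beta**2 * q))

def gamma_C_direct(p, beta, kmax=500):
    # the swap gives the factor beta^{2(|k2-k1|+1)}; the extra tau_C of
    # end-node decoherence contributes one more power of beta
    q = 1 - p
    return sum(p * p * q**(k1 + k2 - 2) * beta**(2 * (abs(k2 - k1) + 1) + 1)
               for k1 in range(1, kmax + 1) for k2 in range(1, kmax + 1))

def kbar_direct(p, kmax=5000):
    # E[max(k1, k2)] for i.i.d. geometric step numbers, via tail sums
    q = 1 - p
    return sum(1 - (1 - q**k)**2 for k in range(kmax))

print(gamma_C_closed(0.3, 0.6), gamma_C_direct(0.3, 0.6))
print(kbar_direct(0.3), (3 - 2 * 0.3) / (0.3 * (2 - 0.3)))
```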
We find approximations for the rates \begin{eqnarray} R^C_{DE}(p,\beta)=\frac{p_S}{\tau_C}\frac{p(2-p)}{3-2p}H[x_1(p,\beta)],\nonumber\\ R^O_{DE}(p,\beta,n)=\frac{p_S}{\tau_C}\frac{(1-(1-p)^n)^2}{n}H[x_2(p,\beta,n)], \label{ratecomp1} \end{eqnarray} where $x_1=(1+\sqrt{1-(\gamma^C(p,\beta))^2})/2$ and $x_2=(1+\sqrt{1-(\gamma^O(p,\beta,n))^2})/2$. In the $p\to0,\beta\to 0$ region both $\gamma^C,\gamma^O\ll 1$, so that $x_1(p,\beta)\approx 1-(\gamma^C(p,\beta))^2/4$, $x_2(p,\beta,n)\approx 1-(\gamma^O(p,\beta,n))^2/4$, $\gamma^C(p,\beta)\approx \beta^3p/2$, and $\gamma^O(p,\beta,n)\approx \beta^3/n^2$. We now use the symmetry of the binary entropy, $H(1-x)=H(x)$ for $0\leq x\leq 1$, and the small-$x$ approximation $H(x)\approx x\log_2(e/x)$, $x\to 0$. Further, we approximate $p(2-p)/(3-2p)\approx 2p/3$ and $(1-(1-p)^n)^2/n\approx np^2$. We also know from our numerical investigations (and analytical results) that in this region $n_{\textrm{opt}}=1$. Putting all this together we get \begin{eqnarray} \eta (p,\beta,n_{\textrm{opt}}=1) = \frac{R^O_{DE}(p,\beta,n_{\textrm{opt}}=1)}{R^C_{DE}(p,\beta)}\nonumber\\ \approx\frac{6}{p}\frac{\log_2(4e)-6\log_2(\beta)}{\log_2(16e)-6\log_2(\beta)-2\log_2(p)}. \end{eqnarray} For $p,\beta$ values of technological relevance the above ratio scales as $1/p$, as shown in figure~(\ref{fig:ratioscaling1}). \begin{figure} \caption{Scaling of the ratio of entanglement generation rates of the optimized buffer time protocol and the canonical protocol: blue curve, the exact rate expressions from Eq.~(\ref{ratecomp1}).} \label{fig:ratioscaling1} \end{figure} \section*{References} \end{document}
\begin{document} \title{Complete Adiabatic Quantum Search in Unsorted Databases} \author{Nanyang Xu, Xinhua Peng, Mingjun Shi, Jiangfeng Du$^{\ast}$\\ \normalsize{Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics,University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China}\\ \normalsize{$^\ast$To whom correspondence should be addressed; E-mail: [email protected]}} \begin{abstract} We propose a new adiabatic algorithm for the unsorted database search problem. This algorithm saves two thirds of the qubits used by realizations of Grover's algorithm. Meanwhile, we analyze the time complexity of the algorithm by both a perturbative method and numerical simulation. The results show that it provides a better speedup than the previous adiabatic search algorithm. \end{abstract} \pacs{03.67.Lx, 89.70.-a, 03.65.-w} \maketitle Quantum computation is a promising way to solve classically hard problems. Several quantum algorithms have been designed to perform classical tasks with remarkable speedups. The most useful among these is Grover's algorithm\cite{Grover_search}, which concerns the problem of searching for a required item in an unsorted database. One common example of unsorted database search is finding a person's name in a phone book (whose items are sorted by name) knowing only the phone number. Classically, the only way to achieve this is brute-force search\cite{Grover_search,Ju_2007}, which requires an average of $\frac{N}{2}$ queries for $N$ entries in the phone book. However, if the information is stored in a quantum database, finding the right name with Grover's algorithm costs only a time of order $\sqrt{N}$, providing a quadratic speedup. The main process of Grover's algorithm is to rotate the $index$ qubits from an initial uniform superposition toward the solution state. The information of the database is not explicitly accessed during the search.
Instead, an Oracle is supposed to know all the information in the database and to act on an input state according to whether it denotes the solution\cite{QCQI}. Early experiments\cite{Chuang_1998search, Dodd_2003, Vandersypen_3bitsearch, Brickman_2005,Long_2001} of Grover's algorithm constructed this Oracle from a marked state as a functional analog instead of querying the database. For a complete solution of the search problem, Kim and coworkers proposed a new approach to realize the Oracle on a quantum database and implemented it in experiment\cite{Kim_2002}. In this complete approach to Grover's algorithm, extra qubits are used to store the database. A similar method for the construction of Grover's Oracle was also discussed theoretically later\cite{Ju_2007}. While Grover's algorithm is presented in the standard circuit model (\textit{i.e.}, using a sequence of discrete quantum gates), a different model of quantum computation has emerged, in which the state of the quantum computer evolves continuously and adiabatically under a certain time-dependent Hamiltonian. This adiabatic model was soon applied to the database search problem\cite{Farhi_2000}, and the original adiabatic search algorithm was shown to have a time complexity of order $N$, the same performance as classical algorithms. More recently, Roland and Cerf\cite{Roland_2002} recovered the advantage of adiabatic search to order $\sqrt{N}$ (the same as Grover's algorithm) by performing the adiabatic evolution locally. However, this adiabatic search algorithm constructs the Hamiltonian from a marked state instead of referring to the database; thus it is not a complete search algorithm. To distinguish it from our algorithm, we call it the \emph{marked-state adiabatic search} (MSAS) algorithm in what follows. In this letter, we apply quantum adiabatic computation to the unsorted database search problem again and present a new quantum search algorithm.
We put forward a new method to represent the database. With this method the algorithm contains no Oracle and saves $\frac{2}{3}$ of the qubits compared with the complete approach of Grover's algorithm\cite{Kim_2002}. We also analyze the time complexity by both a perturbative method and numerical simulation. The results show that it provides a higher speedup than the MSAS algorithm. As a new model of quantum computation, the adiabatic algorithm was introduced by Farhi \textit{et al.}\cite{Farhi} and soon became a rapidly expanding field. The idea of this computation model is to prepare a system in the ground state of a simple initial Hamiltonian, then slowly switch the simple Hamiltonian to a complex Hamiltonian whose ground state encodes the solution to the problem of interest. According to the adiabatic theorem, the system stays in the ground state of the instantaneous Hamiltonian if we perform the evolution slowly enough, so the final state describes the solution to the problem. The time-dependent system Hamiltonian is \begin{equation} \label{ht} H(t)=[1-s(t)]H_{i}+s(t)H_{p}, \end{equation} where $H_{i}$ is the initial Hamiltonian, $H_{p}$ is the problem Hamiltonian which encodes the solution, and the monotonic function $s(t)$ fulfills $s(0) = 0$ and $s(T)=1$. Let us now focus on the unsorted database search problem. For simplicity, the database is a list of $(i,v_i)$ pairs sorted by $i$, where $i$ denotes the $index$ and $v_i$ is the $value$. Both $i$ and $v_i$ are $n$-bit binary codes, so the database contains $N=2^n$ items. The \emph{unsorted} property of the database refers to the field $value$, not $index$. The unsorted database search problem here is to look for the $index$ $i$ corresponding to a given target $value$ ${t}$. We assume that there is only one solution in the database for each search. Next we describe the process to find the right $i$ that corresponds to the target ${t}$.
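As a concrete (hypothetical) toy instance of the data model just described, assuming nothing beyond the $(i,v_i)$ list itself:

```python
import random

n = 3
N = 2 ** n                     # database with N = 2^n items
values = list(range(N))
random.seed(1)
random.shuffle(values)         # v_i: sorted by index i, unsorted in value

t = 5                          # target value
# classical brute force: scan until v_i == t, ~N/2 queries on average
i = next(k for k, v in enumerate(values) if v == t)
assert values[i] == t
```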
The essential part of an adiabatic search algorithm is how to encode the solution in the ground state of the problem Hamiltonian. For example, the MSAS algorithm constructs the problem Hamiltonian as $H_p=1-|m\rangle\langle m|$ where $|m\rangle$ is exactly the solution state; thus it is not a complete database search. Obviously, for a complete search the information in the database should be represented in quantum form. Taking the complete approach of Grover's algorithm as an example\cite{Kim_2002,Ju_2007}, the database is represented in an operator which satisfies $ U_{f}|i\rangle|0\rangle=|i\rangle|v_{i}\rangle$. $U_f$ generates the entanglement of qubits to encode the relation between $i$ and $v_i$; thus both fields are represented by qubits. In the present algorithm, however, not both fields are represented by qubits. We define a database operator as \begin{equation} \label{dop} \mathcal{D}=\sum_{i=0}^{N-1} v_{i}|i\rangle \langle i|. \end{equation} Clearly, in this approach the $index$ is represented by qubits while the $value$ is stored in the strength of interactions, so no extra qubits are needed for the database. The operator $\mathcal{D}$ contains all the information in the database. Thus, we can construct the problem Hamiltonian from $\mathcal{D}$ as \begin{eqnarray} \label{hp1} H_{p} &=& (\mathcal{D} - {t})^2, \end{eqnarray} where ${t}$ is the target $value$ we are looking for. To test the validity of $H_p$, we examine its ground state. To this end, we write $H_p$ as $ \sum_{i=0}^{N-1} (v_{i}-{t})^2|i\rangle \langle i|$. In this form, each diagonal element of $H_p$ is the squared difference between $v_i$ and ${t}$. Thus the ground state is the solution state $|i\rangle$ for which $v_i$ equals ${t}$, and this construction provides a valid problem Hamiltonian for the search problem.
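A minimal numerical check of this construction (Python with NumPy; the example database is the 3-bit one used later in this letter):

```python
import numpy as np

values = [6, 3, 5, 0, 4, 1, 7, 2]   # the 3-bit example database
t = 5                               # target value
N = len(values)

# D = sum_i v_i |i><i| is diagonal, so H_p = (D - t)^2 has diagonal
# entries (v_i - t)^2 and its unique ground state marks the solution
D = np.diag(values).astype(float)
H_p = (D - t * np.eye(N)) @ (D - t * np.eye(N))

ground = int(np.argmin(np.diag(H_p)))
print(ground)   # -> 2, since v_2 = 5
```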
However, the Hamiltonian in Eq.\eqref{hp1} has a spectral width growing exponentially with the number of qubits, which is hard to realize when the database is large; thus it is only useful for small databases. To solve this problem, we divide the comparison between $v_i$ and ${t}$ into $n$ sub-comparisons, each of which is performed on a single bit. The database operator is then formed separately for each bit. For the $j$th bit of $value$ we define the bit database operator $\mathcal{D}_j$ as \begin{eqnarray} \label{hpd1} \mathcal{D}_{j} &=& \sum_{i=0}^{N-1} v_{ij}|i\rangle \langle i|, \end{eqnarray} where $v_{ij}$ is the $j$th bit of $v_{i}$. Similarly to the operation in Eq.\eqref{hp1}, the problem Hamiltonian for each bit is \begin{eqnarray} H_{p}^{j} &=& (\mathcal{D}_{j} - {t}_{j})^{2}, \end{eqnarray} where ${t}_{j}$ is likewise the $j$th bit of ${t}$. Consequently, the overall problem Hamiltonian is the sum of all the bit problem Hamiltonians \begin{eqnarray} \tilde{H_{p}} &=& \sum_{j=0}^{n-1}{H_{p}^{j}} = \sum_{j=0}^{n-1}{[\mathcal{D}_{j}(1-{t}_{j}) + {t}_{j}(I - \mathcal{D}_{j})]}\nonumber\\ &=& \sum_{j=0}^{n-1}{(\mathcal{D}_{j}\bar{t}_{j}+{t}_{j}\bar{\mathcal{D}}_{j})}, \end{eqnarray} where we used $\mathcal{D}_{j}^{2}=\mathcal{D}_{j}$ and ${t}_{j}^{2}={t}_{j}$ for binary entries, $\bar{t}_{j}$ is the complement of the binary bit ${t}_{j}$, and $\bar{\mathcal{D}}_j = I - \mathcal{D}_j$. As a test of validity, we can simplify $\tilde{H_{p}}$ as \begin{eqnarray} \label{hp_hd} \tilde{H_{p}} &=& \sum_{i=0}^{N-1} {h(v_{i},t)}|i\rangle \langle i|, \end{eqnarray} where the function $h(v_i,{t})$ is the Hamming distance between $v_i$ and $t$. Thus the state $|i\rangle$ with $h(v_i,t)=0$ is the ground state of $\tilde{H_{p}}$ and is also the solution state. Moreover, the spectrum is now bounded in the range from $0$ to $n$. After preparing the problem Hamiltonian, we choose an initial Hamiltonian $H_{i}$.
$H_{i}$ should be chosen to be noncommutative with $H_{p}$ to avoid crossing of energy levels\cite{Farhi_2000}. Normally, $H_{i}$ is \begin{eqnarray} \label{initH} H_{i} & = & g(\sigma_{x}^{0}+\sigma_{x}^{1}+\cdots + \sigma_{x}^{n-1}), \end{eqnarray} which describes the qubits coupled to a magnetic field along the $x$-direction with coupling strength $g$. The ground state of $H_i$ is \begin{eqnarray} \label{initS} |\psi_{0}\rangle & =& \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}(-1)^{b(j)}|j\rangle, \label{Int_state} \end{eqnarray} where $b(j)$ is the Hamming distance between $j$ and $0$, i.e., the Hamming weight of $j$. In the adiabatic evolution, the system Hamiltonian interpolates from $H_{i}$ to $\tilde{H_{p}}$ (see Eq.~\eqref{ht}) and the state of the system evolves according to the Schr\"{o}dinger equation. If this evolution is adiabatic, the system always stays in the instantaneous ground state of $H(t)$, and at the end the solution of our problem shows up. An explicit example clarifies the procedure: we perform a 3-bit unsorted database search. We randomly generate a database as the list $\{6,3,5,0,4,1,7,2\}$. The position of each $value$ in the list gives the $index$, which ranges from 0 to 7. For convenience we rewrite the database in binary codes as \{110,011,101,000,100,001,111,010\}. Because the database operators and the problem Hamiltonian are diagonal, we list only their diagonal elements: \begin{eqnarray} \mathcal{D}_0 &=& diag\{0,1,1,0,0,1,1,0\}\nonumber\\ \mathcal{D}_1 &=& diag\{1,1,0,0,0,0,1,1\}\nonumber\\ \mathcal{D}_2 &=& diag\{1,0,1,0,1,0,1,0\}. \end{eqnarray} After building the quantum database, the problem Hamiltonian can be constructed for each search task.
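The bit database operators and the Hamming-distance form of Eq.~(\ref{hp_hd}) can be verified directly for this example (a NumPy sketch; the function and variable names are ours):

```python
import numpy as np

values = [6, 3, 5, 0, 4, 1, 7, 2]
n, N = 3, 8

# bit database operators D_j: diagonal of the j-th bit of each v_i
D = [np.diag([(v >> j) & 1 for v in values]).astype(float) for j in range(n)]

def H_p_tilde(t):
    # H_p^j = D_j (1 - t_j) + t_j (I - D_j), summed over the bits j
    I = np.eye(N)
    return sum(D[j] * (1 - ((t >> j) & 1)) + ((t >> j) & 1) * (I - D[j])
               for j in range(n))

# diagonal entries equal the Hamming distance h(v_i, t) for every target t
for t in range(N):
    h = [bin(v ^ t).count("1") for v in values]
    assert np.allclose(np.diag(H_p_tilde(t)), h)
```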
For example, if we want to find the position of the $value$ 5, which is 101 in binary, the problem Hamiltonian is \begin{eqnarray} \label{hp_example} \tilde{H_{p}} &=& \bar{\mathcal{D}}_{0} +\mathcal{D}_1 + \bar{\mathcal{D}}_{2}\nonumber\\ &=& diag\{2,2,0,2,1,1,1,3\}. \end{eqnarray} To perform the adiabatic evolution, we initially prepare the system in the state of Eq.\eqref{initS}, and adiabatically switch the system Hamiltonian from the initial Hamiltonian in Eq.\eqref{initH} to the problem Hamiltonian in Eq.\eqref{hp_example}. Finally the system ends in the ground state of $\tilde{H_{p}}$, which is the state $|2\rangle$. After measurement, we learn that $value$ 5 is at position 2. Fig.\ref{evolution} shows the process of the adiabatic evolution for this search. \begin{figure} \caption{Process of the adiabatic evolution to search for $value$ 5 in the mentioned database. (a) The instantaneous eigenvalues of the system Hamiltonian as a function of $s$. The solid line represents the energy level of the ground state. (b) Occupation probabilities of the system on the computational basis during the adiabatic evolution in the numerical simulation. The system starts from a uniform state and evolves to the solution state $|010\rangle$, which shows that $value$ 5 is at $index$ 2. The parameter $g$ of $H_i$ in this example is 0.5.} \label{evolution} \end{figure} For the practical usefulness of an algorithm, the amount of resources occupied is an important aspect. The number of qubits needed for our algorithm equals the bit width of the $index$, because only the $index$ field is represented by qubits; thus the spatial complexity is $n$. As a comparison, since both the fields $value$ and $target$ are represented by qubits, the spatial complexity of the complete approach of Grover's algorithm\cite{Kim_2002} is $3n$. Although the MSAS algorithm also has a spatial complexity of $n$, it is not a complete database search.
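The adiabatic run described above (figure~\ref{evolution}) can be reproduced with a minimal simulation; the linear schedule $s=t/T$, the total time $T$, and the step count below are our own choices, not taken from the letter:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
I2 = np.eye(2)

def sigma_x(j, n=3):
    # sigma_x acting on qubit j out of n
    ops = [I2] * n
    ops[j] = sx
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

g = 0.5
H_i = g * sum(sigma_x(j) for j in range(3))
H_p = np.diag([2., 2., 0., 2., 1., 1., 1., 3.])   # Eq. (hp_example), target 5

# ground state of H_i: uniform amplitudes with sign (-1)^{Hamming weight}
psi = np.array([(-1.0) ** bin(j).count("1") for j in range(8)]) / np.sqrt(8)

T, steps = 400.0, 8000
dt = T / steps
for k in range(steps):
    s = (k + 0.5) / steps                         # linear schedule s = t/T
    w, V = np.linalg.eigh((1 - s) * H_i + s * H_p)
    psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

print(abs(psi[2]) ** 2)   # population of the solution state |010>
```

For a sufficiently slow sweep the final population is concentrated on $|010\rangle$, i.e., $index$ 2, reproducing the outcome described in the text.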
Thus our algorithm has the best spatial complexity among quantum algorithms for complete database search. A conclusive mathematical analysis of the time complexity of our algorithm is not available. Therefore, in this letter we use both the perturbative method\cite{Amin_localminima} and numerical simulation\cite{Farhi} to examine the time complexity. In the perturbative approach\cite{Amin_localminima}, the time cost of an adiabatic algorithm by either \emph{global} or \emph{local} evolution can be written as \begin{eqnarray} \label{Tlg} T_{local}&\propto& \sqrt{T_{global}} \propto \sqrt{|S^{-}|/|S^{+}|}, \\ \label{s2} S^{+} &\approx& \{z: h(z,f) < m_c \}, m_c \propto \frac{\log{1 / \delta}}{\log \zeta^{+}}\nonumber\\ S^{-} &\approx& \{z: E_z < E_c \}, E_c \propto \frac{\log {1/\zeta^{-}}}{\log{1 / \delta}}, \end{eqnarray} where $S^{+}$ is a set containing the eigenstates of the problem Hamiltonian which have a small Hamming distance to the solution state $|f\rangle$, while $S^{-}$ contains the ones with low energy levels. $|S|$ is the cardinality of the set $S$. $\zeta^{\pm}$ are dimensionless parameters defined as $\zeta^{\pm}\equiv\zeta(s^*\pm\epsilon_0)$. Here $\zeta(t)=\frac{s(t)}{1-s(t)}$ and $s^*$ is the position of the minimum gap between the ground and first excited state. $\epsilon_0$ and $\delta$ are small numbers. To apply this result to our algorithm, we assume that the minimum gap is at the central position of $s$, so that $\zeta^{+} =1/\zeta^{-}>1$. Then we define a small number $\Omega\equiv\frac{\log{1 / \delta}}{\log \zeta^{+}}$. Because the degeneracy of the energy levels of the problem Hamiltonian in Eq.\eqref{hp_hd} is $C_{n}^{i}$, where $i$ is the $i$th energy level, Eq.\eqref{s2} becomes \begin{eqnarray} |S^{+}| &\approx& \sum_{i=0}^{i<m_c}{C_n^i}, m_c \propto \Omega\nonumber\\ |S^{-}| &\approx& \sum_{i=0}^{i<E_c}{C_n^i}, E_c \propto 1/\Omega.
\end{eqnarray} Here, $\Omega$ is a small number that is not supposed to increase with $n$, so only some low energy levels will be in $S^{-}$, while $|S^{+}|$ is a comparatively small positive integer. Since $\sum_{i=0}^{n}{C_n^i}= N$, in the worst case we can take Eq.\eqref{Tlg} as $T_{global} \propto|S^{-}|\propto N^\alpha$, where $\alpha < 1$ is a constant. To derive a more accurate range for $\alpha$, we performed a numerical simulation\cite{Farhi} for randomly generated databases with the bit width of $index$ ranging from $5$ to $16$. For each bit width, we randomly generated 50 instances of the database search. We then performed a numerical global evolution using a fourth-order adaptive Runge-Kutta method until the success probability reached the range $[0.12,0.13]$ for each instance. The mean time for each bit width is shown in Fig.\ref{time}. By fitting the mean times, we obtain $\alpha=0.81$. For comparison, we simulated the time complexity of the MSAS algorithm in the same environment. The value of $\alpha$ for the MSAS algorithm obtained from the fit is $1.02$. This simulation result fits well with the theoretical expectation of $\alpha=1$\cite{Farhi_2000}. \begin{figure} \caption{Comparison of the running times of our algorithm and the MSAS algorithm to achieve a success probability of 1/8, as a function of the bit width of $index$. The solid line is the fit of the triangles, each of which represents the mean simulation result of 50 instances ($32$ instances for $n=5$) of our algorithm, while the dashed line is the fit of the circles, each of which represents that of the MSAS algorithm. The error range of the fit is $\pm2\%$ for the triangles and $\pm0.3\%$ for the circles.} \label{time} \end{figure} In Fig.\ref{time}, the running time of our algorithm grows much more slowly than that of the MSAS algorithm. This matches well with the expectation from the perturbative analysis. Both results show that our algorithm has better time complexity than the MSAS algorithm.
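The scaling fit behind Fig.\ref{time} amounts to a linear regression in log-log space: assuming $T \propto N^{\alpha}$ with $N=2^{n}$, the slope of $\log T$ against $\log N$ estimates $\alpha$. The sketch below uses synthetic data with a known exponent (the paper's measured running times are not reproduced here).

```python
import numpy as np

# Sketch of the scaling fit: assume T ~ c * N^alpha with N = 2^n and estimate
# alpha by linear regression in log-log space. The data below are SYNTHETIC;
# the paper's measured running times are not reproduced here.
rng = np.random.default_rng(0)
n_bits = np.arange(5, 17)            # bit widths 5..16, as in the simulation
N = 2.0 ** n_bits
alpha_true = 0.81
T_mean = 3.0 * N**alpha_true * np.exp(rng.normal(0.0, 0.02, size=N.size))

slope, intercept = np.polyfit(np.log(N), np.log(T_mean), 1)
print(round(slope, 2))  # recovers alpha close to 0.81
```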
Because local evolution can provide a quadratic speedup over global evolution, the time complexity of our algorithm under local evolution can theoretically be reduced below order $\sqrt{N}$, even lower than the complexity of Grover's algorithm. In conclusion, we have introduced a new algorithm for the quantum search problem by adiabatic evolution. The algorithm represents the quantum database differently, and thereby saves $\frac{2}{3}$ of the qubits compared with the complete approach of Grover's algorithm\cite{Kim_2002}. We used both the emerging perturbative method for adiabatic algorithms and numerical simulation to analyze the time complexity of the algorithm. The results show that it provides a higher speedup than the MSAS algorithm and potentially performs better than Grover's algorithm. The algorithm can be experimentally verified in NMR or ion-trap systems\cite{Peng_2008,Friedenauer_2008}. The authors thank Zeyang Liao, Dieter Suter and Guilu Long for discussions and comments. This work was supported by the National Natural Science Foundation of China, the CAS, and the Ministry of Education of the PRC. \end{document}
\begin{document} \title{Phase-preserving linear amplifiers not simulable by the parametric amplifier} \author{A. Chia} \affiliation{Centre for Quantum Technologies, National University of Singapore} \author{M. Hajdu\v{s}ek} \affiliation{Keio University Shonan Fujisawa Campus, Kanagawa, Japan} \author{R. Nair} \affiliation{School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore} \affiliation{Complexity Institute, Nanyang Technological University, Singapore} \author{R. Fazio} \affiliation{Abdus Salam ICTP, Strada Costiera, Trieste, Italy.} \author{L. C. Kwek} \affiliation{Centre for Quantum Technologies, National University of Singapore} \affiliation{National Institute of Education, Nanyang Technological University, Singapore} \author{V. Vedral} \affiliation{Centre for Quantum Technologies, National University of Singapore} \affiliation{Department of Physics, University of Oxford, UK} \date{\today} \begin{abstract} It is commonly accepted that a parametric amplifier can simulate a phase-preserving linear amplifier regardless of how the latter is realised [C.~M.~Caves~\emph{et al.}, Phys.~Rev.~A~{\bf 86}, 063802 (2012)]. If true, this reduces all phase-preserving linear amplifiers to a single familiar model. Here we disprove this claim by constructing two counterexamples. A detailed discussion of the physics of our counterexamples is provided. It is shown that a Heisenberg-picture analysis facilitates a microscopic explanation of the physics. This also resolves a question about the nature of amplifier-added noise in degenerate two-photon amplification. \end{abstract} \maketitle \emph{Introduction.---\,}Linear amplification has long been an integral part of quantum measurements whereby a weak signal is amplified to a detectable level \cite{CTD+80,CDG+10}. 
Due to advances in quantum optics and quantum information, linear amplifiers are now also seen as a facilitating component of many useful tasks such as state discrimination \cite{ZFB10}, quantum feedback \cite{VMS+12}, metrology \cite{HKL+14}, and entanglement distillation \cite{RL09,XRLWP10}. New paradigms of amplification such as heralded probabilistic amplification \cite{RL09,ZFB10,CWA+14,HZD+16} and photon number amplification \cite{PvE19} are being actively researched for these and other applications. Much attention has been given to the application and construction of linear amplifiers \cite{CTD+80,CDG+10}, and their fundamental quantum noise limits have been known for a long time \cite{Cav82maintext}. A relatively recent foundational development, however, is the claim that a parametric amplifier can simulate any phase-preserving linear amplifier regardless of how it is realised \cite{CCJP12}. This statement is significant as it replaces the set of all phase-preserving linear amplifiers by a single familiar model. Either proving it or falsifying it is thus of fundamental importance to our understanding of deterministic amplifiers. It would also clarify the status of the parametric amplifier (henceforth abbreviated as paramp). More specifically, is it possible to find phase-preserving linear amplifiers which cannot be simulated by the paramp? If so, what differentiates such amplifiers from those that can be simulated by the paramp? In this work, we provide answers to these questions. We provide as counterexamples two families of physically-realisable linear amplifiers which are phase preserving but cannot be simulated by the paramp. The inner workings of such amplifiers are then studied, revealing that the physical mechanism of multiplicative noise leads to amplifiers that are not simulable by the paramp. This delineates the boundary and status of the paramp in linear-amplifier theory. Our main result is summarised in Fig.~\ref{MainResult}. 
As a corollary, we also gain understanding of the nature of noise in nonlinear amplifiers. \begin{figure} \caption{\label{MainResult} Summary of the main result. Panel (a) depicts the universality claim $\frak{P}=\frak{A}$ of Ref.~\cite{CCJP12}; our counterexamples show instead that the set $\frak{P}$ of paramp-simulable amplifiers is a strict subset of $\frak{A}$.} \end{figure} \emph{Definitions.---\,} We begin by making the above statements precise. We specify an amplifier by a map ${\cal A}$ which transforms the state of an input signal $\rho_{\rm in}$ to a new state at its output $\rho_{\rm out}={\cal A}\,\rho_{\rm in}$. Throughout this paper the signal itself will be represented by the single-mode bosonic annihilation operator $\hat{a}$ acting on Hilbert space $\mathbbm{H}_A$. An amplifier is said to be \begin{itemize} \item[(i)] \emph{Physical} if ${\cal A}$ is completely positive and trace preserving. \item[(ii)] \emph{Linear} if ${\cal A}$ is such that $\an{\hat{a}}_{\rm out}\equiv{\rm Tr}[\hat{a}{\cal A}\rho_{\rm in}]=g\an{\hat{a}}_{\rm in}$ for all $\rho_{\rm in}$. \item[(iii)] \emph{Phase preserving} if the gain $g$ is real valued. \end{itemize} We denote the set of amplifiers satisfying (i)--(iii) by $\frak{A}$ \footnote{An additional requirement -- which we call \emph{phase covariance} \cite{CHN+19} -- has been mentioned in Sec.~III of \cite{CCJP12}, but both the universality claim and the purported proof of it do not impose this requirement. In any case, both the counterexamples to the paramp conjecture that we present satisfy this property as well \cite{CHN+19}.}. A member of $\frak{A}$ is given by ${\cal A}_1(t)=\exp({\cal L}_1 t)$, where \begin{align} \label{Example1} {\cal L}_1 = \kappa_{\uparrow} \, {\cal D}[\hat{a}\dg] + \kappa_{\downarrow} \, {\cal D}[\hat{a}] \; ; \quad \; \kappa_{\uparrow} > \kappa_{\downarrow} \ge 0 \; , \end{align} and ${\cal D}[\hat{A}]\hat{B}\equiv\hat{A}\hat{B}\hat{A}^\dagger-\hat{A}^\dagger\hat{A}\hat{B}/2-\hat{B}\hat{A}^\dagger\hat{A}/2$.
By virtue of its Lindblad form, \eqref{Example1} generates a family of completely-positive trace-preserving maps $\{{\cal A}_1(t)\}_t$ for fixed $\kappa_{\uparrow}$ and $\kappa_{\downarrow}$ \cite{Lin76}. This is the familiar master equation model of a linear amplifier \cite{BP02,Car02,DH14,Aga13,SZ97}. It is not too difficult to show that ${\cal A}_1(t)$ is linear and phase preserving for any $t$ \cite{Aga13}. \emph{Parametric amplifier.---\,}The paramp is a device with an internal degree of freedom represented by the bosonic annihilation operator $\hat{b}$ acting on $\mathbbm{H}_B$. Its initial state is denoted by $\sigma$. The paramp map ${\cal E}$ is defined via the two-mode squeeze operator $\hat{S}=\exp[\,r(\hat{a}\,\hat{b}-\hat{a}\dg\hat{b}^\dagger)]$ as \begin{align} \label{ParampMap} \rho_{\rm out} = {\cal E} \, \rho_{\rm in} = {\rm Tr}_B\big[ \hat{S}\, \rho_{\rm in} \otimes \sigma \,\hat{S}^\dagger \big] \; , \end{align} where ${\rm Tr}_{\rm B}$ denotes a partial trace over $\mathbbm{H}_B$. The gain of the paramp may be shown to be $G=\cosh r$, where $r$ is the squeezing parameter \cite{CCJP12}. This finally brings us to the universality claim of the paramp \cite{CCJP12}: Given any physical linear phase-preserving amplifier ${\cal A}$, one can always find a $\sigma$ and $G$ of the paramp such that its output state is identical to the output state from ${\cal A}$ for any input $\rho_{\rm in}$, i.e., \begin{align} \label{ParampConj} \exists \; \sigma, \, G\!: \, {\cal E}={\cal A} \; , \quad \forall \, {\cal A} \in \frak{A}. \end{align} If we denote the set of amplifiers that are paramp simulable by $\frak{P}$, then \eqref{ParampConj} states that $\frak{P}=\frak{A}$ [shown in Fig.~\ref{MainResult}(a)].
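The paramp map \eqref{ParampMap} is easy to probe numerically. The sketch below applies the two-mode squeeze operator, with the internal mode in vacuum, to a coherent-state input and checks the gain $G=\cosh r$ and the photon-number relation of Eq.~\eqref{<n>Parametric}; the truncation dimension and parameter values are illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Numerical sketch of the paramp map: two-mode squeezing with the internal
# mode in vacuum, checking G = cosh(r) on a coherent input. The Fock-space
# dimension and the parameter values are illustrative choices.
dim, r, alpha = 20, 0.3, 0.5
a1 = np.diag(np.sqrt(np.arange(1, dim)), 1)  # single-mode annihilation operator
I = np.eye(dim)
A = np.kron(a1, I)   # signal mode a
B = np.kron(I, a1)   # internal mode b

S = expm(r * (A @ B - A.conj().T @ B.conj().T))  # two-mode squeeze operator

coh = np.array([alpha**k / np.sqrt(factorial(k)) for k in range(dim)])
coh *= np.exp(-abs(alpha)**2 / 2)                # coherent state |alpha>
vac = np.zeros(dim); vac[0] = 1.0                # vacuum for the internal mode
psi = S @ np.kron(coh, vac)

a_out = (psi.conj() @ (A @ psi)).real
n_out = (psi.conj() @ (A.conj().T @ A @ psi)).real
print(a_out / alpha)  # amplitude gain: cosh(r)
print(n_out)          # cosh(r)^2 |alpha|^2 + (cosh(r)^2 - 1) for vacuum sigma
```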
\emph{Counterexamples.---\,}We consider first the family of maps ${\cal A}_2(t)=\exp({\cal L}_2\, t)$ generated by \begin{align} \label{GeneratorForNIA} {\cal L}_2 = \frac{\gamma}{2} \; \big( \, {\cal D}\big[\hat{a}^2\big] + {\cal D}\big[\hat{a}\dg{}^2\big] \, \big); \; \; \gamma > 0. \end{align} By virtue of its Lindblad form, $\{{\cal A}_2(t)\}_t$ is a physically valid family of maps for a fixed $\gamma$. Consider a particular member of this family ${\cal A}_2=\exp({\cal L}_2\, t_0)$ for some choice of $t_0$. A straightforward calculation shows that this produces a linear amplifier $\an{\hat{a}}_{\rm out}=g\,\an{\hat{a}}_{\rm in}$ where $g=\exp(\gamma t_0)$. This establishes that ${\cal A}_2 \in \frak{A}$. For the paramp ${\cal E}$ to be equivalent to ${\cal A}_2$, it is necessary that the moments of $\hat{a}$ at the output from both amplifiers be identical for an arbitrary input state $\rho_{\rm in}$. Here we show that this cannot be satisfied by considering the output amplitude and photon-number moments corresponding to ${\cal E}$ and ${\cal A}_2$. For ${\cal A}_2$ they are \cite{CHN+19}: \begin{align} \label{<a>NoiseInduced} \an{\hat{a}}_{\rm out} = {}& g\,\an{\hat{a}}_{\rm in} \; , \\ \label{<n>NoiseInduced} \ban{\hat{n}}_{\rm out} = {}& g^4 \, \ban{\hat{n}}_{\rm in} + \frac{g^4-1}{2} \; , \end{align} where $\hat{n}=\hat{a}\dg\hat{a}$. The same quantities for the paramp are \cite{CCJP12}: \begin{align} \an{\hat{a}}_{\rm out} = {}& G\,\an{\hat{a}}_{\rm in} + \rt{G^2-1}\,\an{\hat{b}} \; , \\ \label{<n>Parametric} \ban{\hat{n}}_{\rm out} = {}& G^2 \ban{\hat{n}}_{\rm in} + \big(G^2-1\big)\,\ban{\hat{b}\,\hat{b}^\dagger} \; , \end{align} where all moments involving $\hat{b}$ are taken with respect to its internal state $\sigma$ while those involving $\hat{a}$ are taken with respect to $\rho_{\rm in}$. To ensure that the two amplifiers give identical $\an{\hat{a}}_{\rm out}$ for any $\rho_{\rm in}$ we must choose $\ban{\hat{b}}=0$ and set $G=g$. 
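The output moments \eqref{<a>NoiseInduced}--\eqref{<n>NoiseInduced} can be verified numerically by evolving the master equation generated by ${\cal L}_2$ in a truncated Fock space. In the sketch below, the truncation dimension and the values of $\gamma$ and $t_0$ are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Sketch check of the A_2 output moments: evolve the master equation
# generated by L_2 in a truncated Fock space; g = exp(gamma*t0).
# Truncation dimension and parameters are illustrative choices.
dim, gamma, t0 = 40, 1.0, 0.05
g = np.exp(gamma * t0)
a = np.diag(np.sqrt(np.arange(1, dim)), 1)
ad = a.conj().T
I = np.eye(dim)

def dissipator(L):
    # Lindblad dissipator as a superoperator; row-major vectorization:
    # vec(A rho B) = kron(A, B.T) vec(rho)
    LdL = L.conj().T @ L
    return np.kron(L, L.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)

L2 = 0.5 * gamma * (dissipator(a @ a) + dissipator(ad @ ad))

# input state (|0> + |1>)/sqrt(2): <a>_in = 1/2, <n>_in = 1/2
psi = np.zeros(dim); psi[0] = psi[1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)
rho_out = (expm(L2 * t0) @ rho.flatten()).reshape(dim, dim)

a_out = np.trace(a @ rho_out).real        # g * <a>_in
n_out = np.trace(ad @ a @ rho_out).real   # g^4 <n>_in + (g^4 - 1)/2
print(a_out, n_out)
```

The amplitude gain is $g$ while the photon-number gain is $g^4$, which is the mismatch with the paramp exploited below.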
Now consider an input signal prepared in some state, say $\rho_1$, with average photon number $\an{\hat{n}}_1$. It is necessary that ${\cal A}_2$ and ${\cal E}$ output the same photon number when applied to $\rho_1$, i.e. \begin{align} \label{<n(t)>1} g^4 \ban{\hat{n}}_1 + \frac{g^4-1}{2} = {}& g^2 \ban{\hat{n}}_1 + (g^2-1) \, \ban{\hat{b}\,\hat{b}^\dagger} \;. \end{align} Similarly we may consider another input state $\rho_2$ with a different average photon number $\an{\hat{n}}_2$. The same requirement leads to \begin{align} \label{<n(t)>2} g^4 \ban{\hat{n}}_2 + \frac{g^4-1}{2} = {}& g^2 \ban{\hat{n}}_2 + (g^2-1) \, \ban{\hat{b}\,\hat{b}^\dagger} \;. \end{align} Subtracting \eqref{<n(t)>2} from \eqref{<n(t)>1} we get \begin{align} \label{<n>1-<n>2} g^4 \Big[ \ban{\hat{n}}_1 - \ban{\hat{n}}_2 \Big] = g^2 \Big[ \ban{\hat{n}}_1 - \ban{\hat{n}}_2 \Big] \;. \end{align} Equation \eqref{<n>1-<n>2} clearly cannot be satisfied unless $g=1=G$ (which means no amplification). Thus, the paramp cannot be a universal model for $\frak{A}$. Note that it is the difference in how $\an{\hat{n}}_{\rm out}$ scales with $g$ in the two types of amplifiers that makes ${\cal E} \ne {\cal A}_2$. To the best of our knowledge, this is the first time that a phase-preserving linear amplifier has been shown to fall outside the reach of the paramp. It is natural to wonder whether the family of amplifiers $\{{\cal A}_2(t)\}_t$ is something of a special case. Another family of counterexamples $\{{\cal A}_3(t)\}_t$ with ${\cal A}_3(t) = \exp({\cal L}_3 t)$ is derived from the generator \begin{align} \label{ThreePhotonGenerator} {\cal L}_3 = \frac{\gamma}{9} \; \big( \, {\cal D}\big[\hat{a}^3\big] + {\cal D}\big[\hat{a}\dg{}^3\big] \, \big) + \gamma \; {\cal D}\big[\hat{a}^2\big] \;, \quad \gamma > 0 \; . \end{align} Physical realizability follows immediately from the Lindblad form of \eqref{ThreePhotonGenerator}, while properties (ii) and (iii) are shown in Ref.~\cite{CHN+19}. 
We have chosen the coefficients in \eqref{ThreePhotonGenerator} so that ${\cal A}_3(t)$ has the same gain $g = \exp(\gamma t)$ as ${\cal A}_2(t)$. In this case a simple analytic expression like \eqref{<n>NoiseInduced} cannot be found for its average output photon number. It is nevertheless possible to show that ${\cal A}_3(t)$ leads to an average output photon number which is irreproducible by the paramp \cite{CHN+19}. \emph{Physical properties.---\,}We now turn to the question of what differentiates amplifiers which are paramp simulable from those that are not. A hint is provided by the nonlinear dependence on $\hat{a}$ and $\hat{a}\dg$ seen in ${\cal L}_2$ and ${\cal L}_3$, suggesting that the physics separating paramp-simulable and unsimulable amplifiers might have something to do with multiphoton processes. To tackle this question we focus on the family of counterexamples defined by ${\cal L}_2$, which involves two-photon processes. To start, we note that ${\cal L}_2$ in fact appears as a special case of the so-called (degenerate) two-photon amplifier with the master equation \cite{Lam67,MW74} \begin{align} \label{TwoPhotonME} \ddt \, \rho(t) = \kappa_\Uparrow \, {\cal D}[\hat{a}\dg{}^2] \rho(t) + \kappa_\Downarrow \, {\cal D}[\hat{a}^2] \rho(t) \; . \end{align} This equation was derived from first principles by Lambropoulos, starting from an atom-photon Hamiltonian with two-photon interactions, in which $\kappa_\Uparrow$ and $\kappa_\Downarrow$ are further related to microscopic quantities \footnote{Equation (3.13) of Ref.~\cite{Lam67} is equivalent to \eqref{TwoPhotonME} in the Fock basis.}. Here, it suffices to express them as $\kappa_\Uparrow=\gamma\, n_\Uparrow$ and $\kappa_\Downarrow=\gamma\, n_\Downarrow$, where $\gamma$ is an effective atom-photon coupling strength while $n_\Uparrow$ and $n_\Downarrow$ are the fractional atomic populations in the excited and ground states respectively.
Two-photon amplifiers have been widely studied for some time \cite{Lam67,MW74,NEFE77,NZT81,BRGDH87,AGBZ90,GWMM92,Iro92,Gau03,NHO10,HNGO11,RSSHS16,MRBB18}, and the output photon statistics of the model in \eqref{TwoPhotonME} and special cases of it have been studied intensively \cite{Lam67,MW74}. Already in Ref.~\cite{Lam67}, Lambropoulos noted that \emph{linear} amplification, i.e.~one-photon gain, was somehow possible with ${\cal L}_2$ upon setting $\kappa_\Uparrow=\kappa_\Downarrow=\gamma/2$ in \eqref{TwoPhotonME}, despite the amplifier being described by an inherently two-photon model [see Sec.~V.C of \cite{Lam67}; also compare his Eqs.~(5.9b)--(5.9c) with our Eqs.~\eqref{<a>NoiseInduced}--\eqref{<n>NoiseInduced}]. To explain this he postulated that the amplification had to involve a ``half noise half signal'' process, originating from two-photon emissions whereby ``the emission of one of the photons is induced and the other spontaneous'' \footnote{See the main text on the last page of Ref.~\cite{Lam67}.}. However, to the best of our knowledge, this assertion has remained unsubstantiated to date. If we can affirm the speculated mechanism underlying ${\cal L}_2$, we will not only validate Lambropoulos's conjecture but also be guided to the kind of physics that prevents a phase-preserving linear amplifier from being simulable by a paramp. As we now explain, ${\cal L}_2$ can be understood in terms of the elementary atom-photon interactions shown in Fig.~\ref{AtomPhotonInt}(b). \begin{figure} \caption{\label{AtomPhotonInt} Elementary atom-photon interactions in (a) the one-photon amplifier and (b) the two-photon amplifier.} \end{figure} Attempts to understand the photon statistics of the two-photon amplifier naturally treat the density operator of the signal mode $\hat{a}$ as the central object of analysis, and thus work in the Schr\"odinger\ picture. This is a major drawback in understanding the noise mechanism because the internal modes of the amplifier noise are traced out in such a description \cite{Lam67}.
We are therefore motivated to work in the Heisenberg\ picture where the amplifier noise appears explicitly as a time-dependent operator. This will allow us to track how the noise arises at the output and arrive at Fig.~\ref{AtomPhotonInt}(b). Before we analyse ${\cal A}_2(t)$ in the Heisenberg\ picture, it is instructive to review how such an analysis works for the example of ${\cal A}_1(t)$. Its Heisenberg-picture equivalent for the signal $\hat{a}(t)$ can be shown to be given by \cite{CHN+19,GZ10} \begin{align} \label{da/dt=Sig+Noise} d\hat{a}(t) = \frac{1}{2} \, \big( \kappa_{\uparrow} - \kappa_{\downarrow} \big) \, \hat{a}(t) \, dt + d\hat{W}(t) \; , \end{align} where $d\hat{s}(t) \equiv\hat{s}(t+dt)-\hat{s}(t)$ for arbitrary $\hat{s}(t)$. Equation \eqref{da/dt=Sig+Noise} may be derived from a familiar model of the field interacting with a two-level atom. In this case, $\kappa_{\uparrow}$ and $\kappa_{\downarrow}$ are the effective excited-state and ground-state populations in an atomic gain medium that implements one-photon interactions. The term $d\hat{W}(t)$ is a quantum Wiener increment and represents the noise being added to the signal as it is being amplified according to \eqref{da/dt=Sig+Noise}. It is an atomic operator that is independent of the signal and has zero mean. All its higher-order moments vanish except the second-order ones given by the quantum It\^o\ rules \cite{GZ10,Ito42,Ito44,Ito46,HP84,WM10} \begin{align} d\hat{W}^\dagger(t) \, d\hat{W}(t) = \kappa_{\uparrow} \, dt \; , \quad d\hat{W}(t) \, d\hat{W}^\dagger(t) = \kappa_{\downarrow} \, dt \; . \end{align} Since we are now working explicitly in continuous time, the input and output signals are to be identified as $\hat{a}(0)$ and $\hat{a}(t)$ respectively. Applying quantum It\^o\ calculus, \eqref{da/dt=Sig+Noise} can be shown to satisfy $[\hat{a}(t),\hat{a}\dg(t)]=\hat{1}$ for all $t$, as required in order to be consistent with quantum mechanics. 
The advantage of \eqref{da/dt=Sig+Noise} is that it allows us to see how the noise contributes to the amplifier output explicitly. In particular, we can extract some basic physics about the amplification of $\hat{a}$ by considering the evolution of the average photon number: \begin{align} \label{A1<dn>} d\ban{\hat{n}(t)} = {}& (\kappa_{\uparrow}-\kappa_{\downarrow}) \, \ban{\hat{n}(t)} \, dt + d\hat{W}^\dagger(t) \, d\hat{W}(t) \\ \label{StdLinAmpd<n>} = {}& (\kappa_{\uparrow}-\kappa_{\downarrow}) \, \ban{\hat{n}(t)} \, dt + \kappa_{\uparrow} \, dt \; . \end{align} The first two terms in \eqref{StdLinAmpd<n>} show that population inversion in the gain medium is necessary for a positive contribution to the signal's energy, i.e., for amplification. The third term given by $\kappa_{\uparrow}$ comes directly from the noise operator $d\hat{W}(t)$ and represents noise photons added to the signal. Furthermore, each term in \eqref{StdLinAmpd<n>} can be understood to correspond to an elementary atom-photon interaction (i.e.,~stimulated emission, absorption, or spontaneous emission) \cite{Aga13,Mil19}: The first term is proportional to both the intensity of the light reaching the atom $\an{\hat{n}(t)}$ as well as the effective atomic population of the excited state $\kappa_{\uparrow}$ and corresponds to stimulated emission. Similarly, we know that the number of absorption events in the gain medium should be proportional to $\an{\hat{n}(t)}$ and the effective ground-state population of the atoms. This corresponds to the term $-\kappa_{\downarrow}\an{\hat{n}(t)}$ in \eqref{StdLinAmpd<n>} where the negative sign indicates that absorption removes energy from the field. The only atom-photon interaction that does not depend on the signal's energy, but only on the excited-state population of the gain medium, is spontaneous emission, and is given by the last term in \eqref{StdLinAmpd<n>}. 
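The photon-number balance of Eq.~\eqref{StdLinAmpd<n>} can be checked directly against the master equation generated by ${\cal L}_1$: the closed-form solution of $d\an{\hat{n}}/dt=(\kappa_\uparrow-\kappa_\downarrow)\an{\hat{n}}+\kappa_\uparrow$ should match a numerical propagation of \eqref{Example1}. The sketch below does this in a truncated Fock space with illustrative parameter values.

```python
import numpy as np
from scipy.linalg import expm

# Sketch check of the one-photon amplifier: evolve L_1 numerically in a
# truncated Fock space and compare <n(t)> against the solution of
# d<n>/dt = (k_up - k_dn) <n> + k_up. Parameters are illustrative choices.
dim, k_up, k_dn, t = 30, 1.0, 0.4, 0.5
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # truncated annihilation operator
I = np.eye(dim)

def dissipator(L):
    # Lindblad dissipator as a superoperator; row-major vectorization:
    # vec(A rho B) = kron(A, B.T) vec(rho)
    LdL = L.conj().T @ L
    return np.kron(L, L.conj()) - 0.5 * np.kron(LdL, I) - 0.5 * np.kron(I, LdL.T)

L1 = k_up * dissipator(a.conj().T) + k_dn * dissipator(a)

rho = np.zeros((dim, dim)); rho[2, 2] = 1.0            # Fock state |2>, <n> = 2
rho_t = (expm(L1 * t) @ rho.flatten()).reshape(dim, dim)
n_num = np.trace(a.conj().T @ a @ rho_t).real

fac = np.exp((k_up - k_dn) * t)                         # net gain factor for <n>
n_ode = fac * 2 + k_up * (fac - 1) / (k_up - k_dn)      # ODE solution
print(n_num, n_ode)  # the two agree
```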
This highlights the well-known facts about linear amplifiers that rely on single-photon interactions: First, that stimulated emission and population inversion are essential for amplification, and second, that spontaneous emission is the physical mechanism responsible for adding noise to the signal. A summary of these processes is shown in Fig.~\ref{AtomPhotonInt}(a). The Heisenberg-picture equation for $\hat{a}$ corresponding to the two-photon amplifier of \eqref{TwoPhotonME} is \cite{CHN+19} \begin{align} \label{TwoPhotonIto} d\hat{a}(t) = {}& \big( \kappa_\Uparrow - \kappa_\Downarrow \big) \, \hat{a}^\dagger(t) \, \hat{a}^2(t) \, dt \nonumber \\ & + 2\,\kappa_\Uparrow \, \hat{a}(t) \,dt + \hat{a}^\dagger(t) \, d\hat{W}(t) \, . \end{align} This is an It\^o\ quantum stochastic differential equation \cite{Chi15,Par92,WZ65,Gou06,GC85} where $d\hat{W}(t)$ is again an atomic operator with zero mean and such that \begin{align} d\hat{W}^\dagger(t) \,d\hat{W}(t) = 4 \kappa_\Uparrow\,dt \;, \quad d\hat{W}(t) \,d\hat{W}^\dagger(t) = 4 \kappa_\Downarrow\,dt \; . \end{align} Again, it can be shown that \eqref{TwoPhotonIto} preserves $[\hat{a}(t),\hat{a}\dg(t)]=\hat{1}$ for all $t$ \cite{CHN+19}. The Heisenberg\ equation of motion for $\hat{a}$ corresponding to ${\cal L}_2$ may then be obtained from \eqref{TwoPhotonIto} by setting $\kappa_\Uparrow=\kappa_\Downarrow=\gamma/2$. This gives \begin{align} \label{ItoFormNIA} d\hat{a}(t) = \gamma \, \hat{a}(t) \, dt + \hat{a}\dg(t) \, d\hat{W}(t) \; . \end{align} Note that \eqref{ItoFormNIA} now carries a signal-dependent noise given by $\hat{a}\dg(t) d\hat{W}(t)\,$. This is the ``half noise half signal'' process which Lambropoulos spoke of in Ref.~\cite{Lam67}. It is also referred to as multiplicative noise in random-process theory \cite{Jac10,Gar09}.
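A classical (c-number) caricature of \eqref{ItoFormNIA} already displays the key feature of multiplicative noise: an amplitude gain of $e^{\gamma t}$ alongside a mean-square gain of $e^{4\gamma t}$. The sketch below replaces the operator noise by a complex Wiener increment with $\mathbb{E}[dW\,d\bar{W}]=2\gamma\,dt$, discarding all commutator effects (such as the $+1$ spontaneous term); it is an illustration of the scaling, not the paper's quantum model.

```python
import numpy as np

# Classical-noise caricature of da = gamma*a*dt + conj(a)*dW, with complex
# white noise obeying E[dW dW*] = 2*gamma*dt. Quantum (commutator) effects
# such as the +1 spontaneous term are deliberately dropped.
rng = np.random.default_rng(1)
gamma, T, steps, paths = 1.0, 0.3, 300, 50000
dt = T / steps
a = np.ones(paths, dtype=complex)  # a(0) = 1 on every path
for _ in range(steps):
    dW = np.sqrt(gamma * dt) * (rng.normal(size=paths) + 1j * rng.normal(size=paths))
    a = a + gamma * a * dt + np.conj(a) * dW

print(abs(a.mean()))           # ~ exp(gamma*T): amplitude gain g
print((np.abs(a)**2).mean())   # ~ exp(4*gamma*T): mean-square gain g^4
```

The mismatch between the two growth rates is the classical shadow of the $g$ versus $g^4$ scaling in \eqref{<a>NoiseInduced}--\eqref{<n>NoiseInduced}.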
We can now show exactly what the multiplicative noise in \eqref{ItoFormNIA} is in terms of elementary atom-photon interactions by considering how the average photon number evolves. Using quantum It\^o\ calculus we have \begin{align} d\ban{\hat{n}(t)} = {}& 2 \,\gamma \ban{\hat{n}(t)} \, dt + \ban{ \hat{a}(t) \,\hat{a}\dg(t) } \, d\hat{W}^\dagger\!(t) \, d\hat{W}(t) \\ \label{dnItoProduct} = {}& 2 \,\gamma \ban{\hat{n}(t)} \, dt + 2\,\gamma \, \big[ \ban{ \hat{n}(t)} + 1 \big] \, dt \; . \end{align} The first term in \eqref{dnItoProduct} is inherited from the $\gamma\,\hat{a}(t)dt$ term in \eqref{ItoFormNIA} and corresponds to one-photon stimulated emission as it depends on $\kappa_\Uparrow$ and $\an{\hat{n}(t)}$. Since the model restricts the atoms to have only two-photon transitions, this term by itself does not complete a full atomic transition from the excited to the ground state with the emission of two photons. To complete the picture we must take into account the photons from the remaining terms in \eqref{dnItoProduct}, which are noise photons insofar as they arise from the atomic operator $d\hat{W}(t)$. In contrast to \eqref{StdLinAmpd<n>}, there are now two types of noise photons. The first is linear in $\an{\hat{n}(t)}$, so it corresponds to a one-photon emission that depends on the signal strength reaching the atom. The fact that it is a noise photon suggests that it came from spontaneous emission, while the fact that it depends on the signal means that this spontaneous emission is ``stimulated''---conditioned on a stimulated emission having taken place just before it. The seemingly strange possibility of one-photon amplification in a two-photon model can now be resolved when we take the stimulated photon corresponding to the first term in \eqref{dnItoProduct} together with the signal-dependent noise photon to arrive at the two-photon process shown on the left of Fig.~\ref{AtomPhotonInt}(b).
This is the underlying mechanism responsible for linear (i.e.~one-photon) amplification in a gain medium with only two-photon transitions. The remaining type of noise photon is due to the $2\gamma$ term in \eqref{dnItoProduct}, which corresponds to two-photon spontaneous emission. This is shown on the right of Fig.~\ref{AtomPhotonInt}(b). Our physical picture of the multiplicative noise in \eqref{ItoFormNIA} thus allows us to see how it is signal dependent. It is precisely this signal-dependent noise that leads to the photon-number gain of $g^4$ in \eqref{<n>NoiseInduced}, which ultimately makes it impossible for the paramp to simulate ${\cal A}_2$, as shown in \eqref{<n>1-<n>2}. This can be seen explicitly from \eqref{dnItoProduct}, where the first term contributes photons at a rate $2\gamma\an{\hat{n}(t)}$ to the signal, while the signal-dependent noise contributes another $2\gamma\an{\hat{n}(t)}$ photons per unit time, making up a total rate of $4\gamma\an{\hat{n}(t)}$ [which leads to the fourth power of $g$ in \eqref{<n>NoiseInduced} and subsequently in \eqref{<n>1-<n>2}]. Because \eqref{ItoFormNIA} is the simplest form of a phase-preserving linear amplifier with multiplicative noise, it may be expected that other such amplifiers with more complicated signal-dependent noise can also violate \eqref{ParampConj}, as we showed with ${\cal A}_3(t)$ from Eq.~\eqref{ThreePhotonGenerator}. We note that non-degenerate variants of the left picture in Fig.~\ref{AtomPhotonInt}(b) (i.e.~a two-photon emission with unequal transition frequencies) have been observed in experiments and are known in the literature as singly stimulated emission \cite{HGO08,NBH+10,OIKA11} (see Ref.~\cite{HNGO11} and the references therein for more details).
In this section on the physical properties of our counterexamples we have (i) shown that multiplicative noise prevents a phase-preserving linear amplifier from being paramp simulable, and (ii) explained the physical basis of this multiplicative noise in terms of elementary atom-photon interactions. It is also possible to interpret \eqref{ItoFormNIA} and its associated linear amplification purely from the perspective of quantum stochastic processes. In this interpretation \eqref{ItoFormNIA} is understood to generate linear amplification as a result of the correlations between the amplifier-added noise and the signal. This follows from the Stratonovich\ form of \eqref{ItoFormNIA}, which is derived in Ref.~\cite{CHN+19}. Such a process may in principle be realised using ion traps \cite{SMrefs}. Finally, our discussion above sheds light on how ${\cal A}_2$ evades the claimed proof of the universality of the paramp model in Ref.~\cite{CCJP12}. The authors of Ref.~\cite{CCJP12} mathematically characterize a phase-preserving linear amplifier as a composition of a perfectly noiseless (and unphysical) amplifier with a noise map that restores physicality \footnote{See Equation (3.2) and the surrounding discussion in Ref.~\cite{CCJP12}.}. Crucially, this added noise was taken to be \emph{signal-independent}, thus excluding multiplicative noise of the kind found in ${\cal A}_2$ by fiat. \begin{acknowledgments} We would like to thank Carl Caves for email correspondence and Howard Wiseman for feedback on our paper draft. In addition we thank Berge Englert, Christian Miniatura, Alex Hayat, Aaron Danner, and Tristan Farrow for useful discussions on atom-photon interactions. This research is supported by: The MOE grant number RG 127/14, the National Research Foundation, Prime Minister's Office, Singapore under its Competitive Research Programme (CRP Award No. NRF-CRP-14-2014-02), the National Research Foundation of Singapore (NRF Fellowship Reference Nos.
NRF-NRFF2016-02 and NRF-CRP14-2014-02), the Ministry of Education Singapore (MOE2019-T1-002-015), the National Research Foundation Singapore and the Agence Nationale de la Recherche (NRF2017-NRFANR004 VanQuTe), and the Foundational Questions Institute (FQXi-RFP-IPW-1903). MH acknowledges support by the Air Force Office of Scientific Research under award FA2386-19-1-4038. \end{acknowledgments} \begin{thebibliography}{100} \bibitem{CTD+80} C. M. Caves, K. S. Thorne, R. W. P. Drever, V. D. Sandberg, and M. Zimmermann, Rev. Mod. Phys. {\bf 52}, 341 (1980). \bibitem{CDG+10} A. A. Clerk, M. H. Devoret, S. M. Girvin, F. Marquardt, and R. J. Schoelkopf, Rev. Mod. Phys. {\bf 82}, 1155 (2010). \bibitem{ZFB10} A. Zavatta, J. Fiur\'{a}\v{s}ek, and M. Bellini, Nat. Photon. {\bf 5}, 52 (2010). \bibitem{VMS+12} R. Vijay, C. Macklin, D. H. Slichter, S. J. Weber, K. W. Murch, R. Naik, A. N. Korotkov, and I. Siddiqi, Nature {\bf 490}, 77 (2012). \bibitem{HKL+14} F. Hudelist, J. Kong, C. Liu, J. Jing, Z. Y. Ou, and W. Zhang, Nat. Comm. {\bf 5}, 3049 (2014). \bibitem{RL09} T. C. Ralph and A. P. Lund, \emph{Proceedings of the 9th International Conference on Quantum Communication Measurement and Computing, (A. Lvovsky Ed.)}, 155, (AIP, 2009). \bibitem{XRLWP10} G. Y. Xiang, T. C. Ralph, N. Walk, and G. J. Pryde, Nat. Photon. {\bf 4}, 316 (2010). \bibitem{CWA+14} H. M. Chrzanowski, N. Walk, S. M. Assad, J. Janousek, S. Hosseini, T. C. Ralph, T. Symul, and P. K. Lam, Nat. Photon. {\bf 8}, 333 (2014). \bibitem{HZD+16} J. Y. Haw, J. Zhao, J. Dias, S. M. Assad, M. Bradshaw, R. Blandino, T. Symul, T. C. Ralph, and P. K. Lam, Nat. Comm. {\bf 7}, 1 (2016). \bibitem{PvE19} T. B. Propp and S. J. van Enk, Opt. Express {\bf 27}, 23454 (2019). \bibitem{Cav82maintext} C. M. Caves, Phys. Rev. D {\bf 26}, 1817 (1982). \bibitem{CCJP12} C. M. Caves, J. Combes, Z. Jiang, and S. Pandey, Phys. Rev. A {\bf 86}, 063802 (2012). \bibitem{CHN+19} See Supplementary Material. \bibitem{Lin76} G. Lindblad, Comm. Math.
Phys. {\bf 48}, 119 (1976). \bibitem{BP02} H.-P. Breuer and F. Petruccione, \emph{The Theory of Open Quantum Systems}, (Oxford University Press, 2002). \bibitem{Car02} H. J. Carmichael, \emph{Statistical Methods in Quantum Optics 1} (Second-corrected-printing), (Springer, 2002). \bibitem{DH14} P. D. Drummond and M. Hillery, \emph{The Quantum Theory of Nonlinear Optics}, (Cambridge University Press, 2014) \bibitem{Aga13} G. S. Agarwal, \emph{Quantum Optics}, (Cambridge University Press, 2013). \bibitem{SZ97} M. O. Scully and M. S. Zubairy, \emph{Quantum Optics} (Cambridge University Press, 1997). \bibitem{Lam67} P. Lambropoulos, Phys. Rev. {\bf 156}, 286 (1967). \bibitem{MW74} K. J. McNeil and D. F. Walls, J. Phys. A {\bf 7}, 617 (1974). \bibitem{HNGO11} A. Hayat, A. Nevet, P. Ginzburg, and M. Orenstein, Semicond. Sci. Technol. {\bf 26}, 083001 (2011). \bibitem{Gau03} D. J. Gauthier, Prog. Opt. {\bf 45}, 205 (2003). \bibitem{NEFE77} L. M. Narducci, W. W. Edison, P. Furcinitti, and D. C. Eteson, Phys. Rev. A {\bf 16}, 1665 (1977). \bibitem{NZT81} B. Nikolaus, D. Z. Zhang, and P. E. Toschek, Phys. Rev. Lett. {\bf 47}, 171 (1981). \bibitem{BRGDH87} M. Brune, J. M. Raimond, P. Goy, L. Davidovich, and S. Haroche, Phys. Rev. Lett. {\bf 59}, 1899 (1987). \bibitem{AGBZ90} I. Asharaf, J. Gea-Banacloche, and M. S. Zubairy, Phys. Rev. A {\bf 42}, 6704 (1990). \bibitem{Iro92} C. N. Ironside, IEEE J. Quantum Electron. {\bf 28}, 842 (1992). \bibitem{GWMM92} D. J. Gauthier, Q. Wu, S. E. Morin, and T. W. Mossberg, Phys. Rev. Lett. {\bf 68}, 464 (1992). \bibitem{NHO10} A. Nevet, A. Hayat, and M. Ornstein, Phys. Rev. Lett. {\bf 104}, 207404 (2010). \bibitem{RSSHS16} M. Reichert, A. L. Smirl, G. Salamo, D. J. Hagan, and E. W. Van Stryland, Phys. Rev. Lett. {\bf 117}, 073602 (2016). \bibitem{MRBB18} S. Melzer, C. Ruppert, A. D. Bristow, and M. Betz, Opt. Lett. {\bf 43}, 5066 (2018). \bibitem{GZ10} C. W. Gardiner and P. Zoller, \emph{Quantum Noise} (Third edition), (Springer, 2010). 
\bibitem{Ito42} K. It\^o, J. Pan-Japan Math. Coll. {\bf 1077}, 1352 (1942). \bibitem{Ito44} K. It\^o, Proc. Imp. Acad. Tokyo {\bf 20}, 519 (1944). \bibitem{Ito46} K. It\^o, Proc. Imp. Acad. Tokyo {\bf 22}, 32 (1946). \bibitem{HP84} R. L. Hudson and K. R. Parthasarathy, Commun. Math. Phys. {\bf 93}, 301 (1984). \bibitem{WM10} H. M. Wiseman and G. J. Milburn, \emph{Quantum Measurement and Control}, (Cambridge University Press, 2010). \bibitem{Mil19} P. W. Milonni, \emph{An Introduction to Quantum Optics and Quantum Fluctuations} (Oxford University Press, 2019). \bibitem{Chi15} M.-H. Chang, \emph{Quantum Stochastics}, (Cambridge University Press, 2015). \bibitem{Par92} K. R. Parthasarathy, \emph{An Introduction to Quantum Stochastic Calculus}, (Birkh\"{a}user, 1992). \bibitem{WZ65} E. Wong and M. Zakai, Int. J. Engng. Sci. {\bf 3}, 213 (1965). \bibitem{Gou06} J. Gough, J. Math. Phys. {\bf 47}, 113509 (2006). \bibitem{GC85} C. W. Gardiner and M. J. Collett, Phys. Rev. A {\bf 31}, 3761 (1985). \bibitem{Jac10} K. Jacobs, \emph{Stochastic Processes for Physicists: Understanding Noisy Systems}, (Cambridge University Press, 2010). \bibitem{Gar09} C. Gardiner, \emph{Stochastic Methods} (Fourth edition), (Springer, 2009). \bibitem{HGO08} A. Hayat, P. Ginzburg, and M. Orenstein, Nat. Photon. {\bf 2}, 238 (2008). \bibitem{NBH+10} A. Nevet, N. Berkovitch, A. Hayat, P. Ginzburg, S. Ginzach, O. Sorias, and M. Orenstein, Nano Lett. {\bf 10}, 1848 (2010). \bibitem{OIKA11} Y. Ota, S. Iwamoto, N. Kumagai, and Y. Arakawa, Phys. Rev. Lett. {\bf 107}, 233602 (2011). \bibitem{SMrefs} See Supplementary Material [url] for how an ion-trap realisation may be accomplished, which includes Refs.~\cite{LBMW03,LS13}. \bibitem{LBMW03} D. Leibfried, R. Blatt, C. Monroe, and D. Wineland, Rev. Mod. Phys. {\bf 75}, 281 (2003). \bibitem{LS13} T. E. Lee and H. R. Sadeghpour, Phys. Rev. Lett. {\bf 111}, 234101 (2013).
\end{thebibliography} \onecolumngrid \section*{\large Supplementary Material for ``Phase-Preserving Linear Amplifiers Not Simulable by the Parametric Amplifier''} \section{The two-photon amplifier} \subsection{Overview} We have already said in the main text that the degenerate two-photon amplifier can be modelled by the master equation \begin{align} \label{TwoPhotonMESuppMat} \ddt \; \rho(t) = \kappa_\downarrow \, {\cal D}[\hat{a}^2] \, \rho(t) + \kappa_\uparrow \, {\cal D}[\hat{a}\dg{}^2] \, \rho(t) \; . \end{align} This may be derived within the Born--Markov framework of open-systems theory \cite{DH14,BP02SM,Car02SM}. The procedure leading to the master equation \eqref{TwoPhotonMESuppMat} is fairly well understood so we will not derive it here. We will, however, derive the corresponding Heisenberg\ equation of motion for $\hat{a}$ in Sec.~\ref{HeiPic} since it is the Heisenberg-picture treatment that has played a critical role in understanding the physics of the multiplicative noise which we met in \eqref{TwoPhotonIto}--\eqref{dnItoProduct}. In addition, the Heisenberg-picture treatment of open systems is somewhat less well known compared to the Schr\"odinger-picture theory. The specific form of the Heisenberg\ equation of motion used in \eqref{TwoPhotonIto}--\eqref{dnItoProduct} assumes that the atomic baths behave as white noise so we must then take this limit after deriving the Heisenberg\ equation of motion. This then turns the Heisenberg\ equation for $\hat{a}$ into a quantum stochastic differential equation. Generally, a stochastic differential equation can be classified as one of two kinds \cite{Gar09, Jac10a}: The first kind is a stochastic differential equation of the Stratonovich\ form. The second kind is a stochastic differential equation of the It\^o\ form. One advantage of the Stratonovich\ form is that normal calculus can be used when manipulating these equations.
However, this makes the statistical moments of $\hat{a}$, such as the photon number, and two-time correlation functions more cumbersome to derive. On the other hand, a stochastic differential equation in the It\^o\ form requires one to learn new rules of differentiation and integration. This is known as It\^o\ calculus. It\^o\ equations have the advantage that their noise terms are always independent of the system variables and this leads to computational simplicity provided that It\^o\ calculus is correctly applied. These general properties of Stratonovich\ and It\^o\ calculi also apply to quantum stochastic differential equations \cite{Par92,GZ10SM}. The difference between the two forms of stochastic differential equations originates in the order in which the white-noise limit is taken. We obtain a Stratonovich\ quantum stochastic differential equation when we take the white-noise limit of the Heisenberg\ equation of motion at the end of its derivation (as opposed to the start). The rigorous justification of this is given by the Wong--Zakai theorem \cite{WZ65SM,Gou06SM}. Hence, when we take the white-noise limit of the resulting Heisenberg\ equation of motion for $\hat{a}$ in Sec.~\ref{HeiPic} we arrive at a Stratonovich\ equation. From this we will proceed to derive the average evolution within the Stratonovich\ framework in Sec.~\ref{NoisiAmplification}. This calculation allows us to understand linear amplification in \eqref{TwoPhotonMESuppMat} as the result of correlations between the noise and the signal. For this reason one may refer to the case of $\kappa_\uparrow=\kappa_\downarrow$ as a noise-induced amplifier. As just mentioned, the Stratonovich\ form of the Heisenberg\ equation makes it more difficult to calculate moments of $\hat{a}$, so we will convert our Stratonovich\ quantum stochastic differential equation for $\hat{a}$ into its equivalent It\^o\ form. This then gives us exactly \eqref{TwoPhotonIto} in the main text.
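The distinction between the two calculi can be seen already in a classical toy model. The short numerical sketch below (our own illustration, not part of the derivation; all variable names are ours) integrates the multiplicative-noise equation $dx = x\,dW$ with an Euler--Maruyama step, which converges to the It\^o\ solution, and with a Heun (midpoint) step, which converges to the Stratonovich\ solution. The It\^o\ mean stays at its initial value, while the Stratonovich\ mean acquires the noise-induced drift $x/2$ and grows like $e^{t/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 40_000, 500
dt = 1.0 / n_steps                # integrate up to t = 1

x_ito = np.ones(n_paths)          # Euler-Maruyama -> Ito solution
x_str = np.ones(n_paths)          # Heun (midpoint) -> Stratonovich solution
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    # Ito: evaluate the coefficient at the left endpoint of the step.
    x_ito = x_ito + x_ito * dW
    # Stratonovich (Heun): average the coefficient over the step.
    pred = x_str + x_str * dW
    x_str = x_str + 0.5 * (x_str + pred) * dW

mean_ito = x_ito.mean()   # stays near 1: no drift in the Ito mean
mean_str = x_str.mean()   # grows to ~exp(1/2): noise-induced drift x/2
```

The same white-noise increments fed to the two schemes thus produce different averages, which is precisely why the order in which the white-noise limit is taken matters.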
We then show, in Sec.~\ref{ProofCCR}, that on using It\^o\ calculus the canonical commutation relation $[\hat{a}(t),\hat{a}\dg(t)]=\hat{1}$ is preserved for all $t$ as it should be in the Heisenberg\ picture. \subsection{Amplitude equation of motion} \label{HeiPic} The two-photon amplifier given in the main text by the master equation \eqref{TwoPhotonME} and the It\^o\ stochastic equation \eqref{TwoPhotonIto} may be derived by modelling the signal as a single bosonic oscillator (with Hilbert space $\mathbbm{H}_A$) coupled to a bath of two-level atoms (with Hilbert space $\mathbbm{H}_B$) that mediate two-photon transitions. The atoms model the gain medium that is used for amplification. The full Hamiltonian $\hat{H}$ on $\mathbbm{H}_A \otimes \mathbbm{H}_B$ is \begin{align} \label{HamiltonianHP} \hat{H} = \hbar \, \omega_0 \, \hat{a}\dg \hat{a} + \sum_n \frac{\hbar\,\omega_n}{2} \; \hat{\pi}^z_n + \hat{a}^2 \, \hat{\Pi}^\dagger + \hat{a}\dg{}^2 \, \hat{\Pi} \; , \end{align} where $\omega_0$ is the oscillator's natural frequency and $\hat{\pi}^z_n$, $\hat{\pi}^+_n$, and $\hat{\pi}^-_n$ are atomic operators for the $n$th atom, defined by \begin{align} \hat{\pi}^z_n = \hat{\pi}^+_n \hat{\pi}^-_n - \hat{\pi}^-_n \hat{\pi}^+_n \;, \quad \hat{\pi}^+_n = \op{\uparrow_n}{\downarrow_n} \;, \quad \hat{\pi}^-_n = \op{\downarrow_n}{\uparrow_n} \;. \end{align} We have also defined the bath operators \begin{align} \hat{\Pi} = \hbar \sum_n \, \xi_n \, \hat{\pi}^-_n \;, \quad \hat{\Pi}^\dagger = \hbar \sum_n \xi^*_n \, \hat{\pi}^+_n \;. \end{align} The bath will be assumed to be at temperature $T$ so that its state $\rho_B$ is given by \begin{align} \rho_B = \bigotimes_n \, \rho_{n} \;, \quad \rho_{n} = \frac{\exp\!\big(\!-\!\beta\hbar\;\!\omega_n \hat{\pi}^z_n/\;\!2\big)}{Z_{n}} \;, \end{align} where we have defined $\beta= 1/k_{\text{\tiny B}}\,T$, and $k_{\text{\tiny B}}$ is the Boltzmann constant.
The normalisation of $\rho_n$ (also the partition function) is \begin{align} Z_{n} = {\rm Tr}\big[\exp\!\big(\!-\!\beta\hbar\;\!\omega_n \hat{\pi}^z_n/\;\!2\big)\big] = 2 \cosh\!\bigg( \frac{\beta \hbar\;\!\omega_n}{2} \bigg) \;. \end{align} It will also be useful to introduce shorthands for the atomic populations in the $n$th atom: \begin{align} \bra{\uparrow_n} \rho_n \ket{\uparrow_n} \equiv N_{\uparrow}(\omega_n,T) \;, \quad \bra{\downarrow_n} \rho_n \ket{\downarrow_n} \equiv N_{\downarrow}(\omega_n,T) \;. \end{align} The Heisenberg\ equation of motion for the oscillator's amplitude $\hat{a}$ is defined by \begin{align} \label{da/dt} \frac{d}{dt} \, \hat{a}(t) = -\frac{i}{\hbar} \; e^{i\hat{H}t/\hbar} \big[ \hat{a}(0), \hat{H}(0) \big] e^{-i\hat{H}t/\hbar} \;, \end{align} where $\hat{H}(0)=\hat{H}$. Noting that at the initial time the system and bath operators commute, we find \begin{align} \label{da/dt2} \frac{d}{dt} \, \hat{a}(t) = -i \, \omega_0 \, \hat{a}(t) - \frac{i}{\hbar} \, 2 \, \hat{a}\dg(t) \, \hat{\Pi}(t) \;, \end{align} where $\hat{\Pi}(t)=\exp(i\hat{H}t/\hbar)\,\hat{\Pi}\,\exp(-i\hat{H}t/\hbar)$. It helps to move into a rotating frame at the oscillator frequency by defining \begin{align} \label{aRF} \bar{a}(t) = \hat{a}(t) \, e^{i\,\omega_0 t} \;. \end{align} Differentiating $\bar{a}(t)$ and using \eqref{da/dt2} we get \begin{align} \label{da/dt3} \frac{d}{dt} \, \bar{a}(t) = - \frac{i}{\hbar} \, 2 \, \bar{a}^\dagger(t) \, \hat{\Pi}(t) \, e^{i\,2\omega_0\,t} \;. \end{align} We see that $\bar{a}(t)$ is coupled to $\hat{\Pi}(t)$. To deal with this one may substitute the formal solution for $\hat{\Pi}(t)$ back into \eqref{da/dt3} iteratively. However, if the system and bath are only weakly coupled then we can approximate the system evolution up to second order in the interaction strength.
This step constitutes the so-called Born approximation, after which we arrive at \begin{align} \label{dabar/dtBorn} \frac{d}{dt} \, \bar{a}(t) = - \frac{i}{\hbar} \, 2 \, \bar{a}^\dagger(t) \, \tilde{\Pi}(t) \, e^{i \,2\omega_0 t} - \frac{2}{\hbar^2} \, \bar{a}^\dagger(t) \int^t_0 dt' \: \bar{a}^2(t') \, \big[ \tilde{\Pi}(t), \tilde{\Pi}^\dagger(t') \big] \, e^{i\,2\omega_0(t-t')} \;, \end{align} where we have defined \begin{align} \label{IntPicPi} \tilde{\Pi}(t) = \hbar \sum_n \, \xi_n \, \hat{\pi}^-_n(0) \, e^{-i\,\omega_n t} \;. \end{align} The system's evolution is now affected by the history of $\tilde{\Pi}(t)$ through the commutator inside the integrand. We can simplify this by first replacing the bath commutator by its average, which is justified if we are going to use the Heisenberg\ equation of motion for calculating expectation values. The dependence on the history of $\tilde{\Pi}(t)$ can then be simplified by making the Markov approximation: This relies on the characteristic timescale over which $\bar{a}(t)$ evolves being much longer than the timescale over which bath correlations decay. In this regime we can then replace the system operators at time $t'<t$ by their values at the present time $t$ and extend the upper limit of the time integrals to infinity. Doing so allows us to compute the time integral in \eqref{dabar/dtBorn} by assuming the distribution of transition frequencies of the atoms to be sufficiently dense. We may then convert the sum over atomic degrees of freedom in $\tilde{\Pi}(t)$ into an integral by introducing a function $D(\omega)$ which counts how many atoms there are per transition frequency in the bath. That is, $D(\omega)\,d\omega$ is the number of atoms in the bath with a transition frequency in the range from $\omega$ to $\omega+d\omega$.
We then have \begin{align} \label{RFda/dt} \frac{d}{dt} \, \bar{a}(t) = \big( \kappa_\uparrow - \kappa_\downarrow \big) \, \bar{a}^\dagger(t) \, \bar{a}^2(t) - \frac{i}{\hbar} \, 2 \, \bar{a}^\dagger(t) \, \tilde{\Pi}(t) \, e^{i \,2\omega_0 t} \;, \end{align} where we have further defined \begin{align} \kappa_\uparrow = \gamma \, N_{\uparrow}(2\omega_0,T) \;, \quad \kappa_\downarrow = \gamma \, N_\downarrow(2\omega_0,T) \;, \quad \gamma \equiv 2 \;\!\pi \, D(2\omega_0) \, |\xi(2\omega_0)|^2 \;. \end{align} In \eqref{RFda/dt} we have neglected shifts in the oscillator's frequency due to the bath correlation functions on the grounds that they are typically very small \cite{BP02SM,Car02SM}. Physically this can be expected since atoms that are detuned from $2\omega_0$ would not have a strong two-photon coupling. The dominant coupling occurs on resonance and gives rise to $\kappa_\uparrow$ and $\kappa_\downarrow$. \subsection{Noise-induced amplification} \label{NoisiAmplification} \subsubsection{Effect of noise within Stratonovich\ calculus} Taking the expectation value of \eqref{RFda/dt} gives us \begin{align} \label{<RFda/dt>} \frac{d}{dt} \, \ban{\bar{a}(t)} = \big( \kappa_\uparrow - \kappa_\downarrow \big) \, \ban{\bar{a}^\dagger(t) \, \bar{a}^2(t)} - \frac{i}{\hbar} \, 2 \, \ban{\bar{a}^\dagger(t) \, \tilde{\Pi}(t)} \, e^{i \,2\omega_0 t} \;. \end{align} If we can calculate the expectation value of the noise term in \eqref{<RFda/dt>} then we know what effect it has on the signal on average. However, it is not immediately obvious how to do this since the evolution of $\bar{a}(t)$ under \eqref{RFda/dt} will correlate it with $\tilde{\Pi}(t)$. Therefore we must treat $\ban{\bar{a}^\dagger(t) \, \tilde{\Pi}(t)}$ with care. Typically it is difficult to proceed further without assuming anything about $\tilde{\Pi}(t)$. Therefore it is often useful to consider the white-noise limit of \eqref{RFda/dt}.
In so doing we obtain the Stratonovich\ equivalent to \eqref{TwoPhotonIto} from the main text (except for some scaling of the noise operator which we will take into account later). This means that we may approximate the autocorrelations of $\tilde{\Pi}(t)$ to have as small a correlation time as we like. Effectively one may take \begin{align} \ban{\tilde{\Pi}^\dagger(t) \, \tilde{\Pi}(t-\tau)} = \frac{\hbar^2}{2} \; \kappa_\uparrow \, \delta(\tau) \; , \quad \ban{\tilde{\Pi}(t) \, \tilde{\Pi}^\dagger(t-\tau)} = \frac{\hbar^2}{2} \; \kappa_\downarrow \, \delta(\tau) \; . \end{align} Integrating \eqref{RFda/dt} allows us to arrive at \begin{align} \label{<sys(t)noise(t)>1} \ban{\bar{a}^\dagger(t)\;\!\tilde{\Pi}(t)} = \ban{\bar{a}^\dagger(0)\;\!\tilde{\Pi}(t)} + \big( \kappa_\uparrow - \kappa_\downarrow \big) \int^t_0 dt' \, \ban{\bar{a}^\dagger{}^2(t')\,\bar{a}(t')\;\!\tilde{\Pi}(t)} + 2\,\frac{i}{\hbar} \int^t_0 dt' \, \ban{\tilde{\Pi}^\dagger(t')\;\!\bar{a}(t')\;\!\tilde{\Pi}(t)} \, e^{-i \,2\omega_0 t'} \;. \end{align} Because we are assuming $\tilde{\Pi}(t)$ to have short correlation times we can factorise multitime averages between noise operators at time $t$ and system operators at time $t'$ provided that $t>t'$. For example, for any system operator $\hat{s}$, \begin{align} \label{SysNoise1} \ban{\hat{s}(t')\,\tilde{\Pi}(t)} = \ban{\hat{s}(t')} \ban{\tilde{\Pi}(t)} = 0 \;, \quad \; t>t' \; , \end{align} where we have noted that $\tilde{\Pi}(t)$ has zero mean. When $t>t'$ the noise $\tilde{\Pi}(t)$ and system variable $\hat{s}(t')$ are in fact independent so we have \begin{align} \label{SysNoise2} \big[ \hat{s}(t'),\tilde{\Pi}(t) \big] = 0 \;, \quad \; t>t' \; . \end{align} The case of $t=t'$ then depends on the form of $\hat{s}$ and the operator that couples to the bath as defined by $\hat{V}$ \cite{GC85}. For the case considered here this can be seen to give $[\bar{a}(t),\tilde{\Pi}(t)]=0$.
Hence we have \begin{align} \label{<sys(t)noise(t)>2} \ban{\bar{a}^\dagger(t)\;\!\tilde{\Pi}(t)} = 2 \, \frac{i}{\hbar} \, \an{\bar{a}(t)} \, e^{-i\,2\omega_0 \;\!t} \int^\infty_0 d\tau \, \ban{\tilde{\Pi}^\dagger(t-\tau)\;\!\tilde{\Pi}(t)} \: e^{i\,2\omega_0 \tau} = i \;\! \hbar \, \kappa_\uparrow \, \an{\bar{a}(t)} \, e^{-i\,2\omega_0 \;\!t} \;. \end{align} Substituting this back into \eqref{<RFda/dt>} thus gives \begin{align} \label{RF<aPi>} \frac{d}{dt} \, \an{\bar{a}(t)} = \big( \kappa_\uparrow - \kappa_\downarrow \big) \, \ban{\bar{a}^\dagger(t) \, \bar{a}^2(t)} + 2\,\kappa_\uparrow \, \an{\bar{a}(t)} \;. \end{align} For ease of writing let us define \begin{align} \label{w(Pi)} \hat{w}(t) = - \frac{i}{\hbar} \, 2 \, \tilde{\Pi}(t) \, e^{i \,2\omega_0 t} \; , \end{align} and relabel $\bar{a}(t)$ as $\hat{a}(t)$ (keeping in mind that it is an equation of motion in the rotating frame). Considering the case of $\kappa_\uparrow=\kappa_\downarrow=\gamma/2$ we can then write \eqref{RFda/dt} simply as \begin{align} \label{NoisiampStr} \frac{d}{dt} \, \hat{a}(t) = \hat{a}\dg(t) \, \hat{w}(t) \; , \end{align} where the average of this is simply \eqref{RF<aPi>} written in terms of $\hat{w}(t)$, \begin{align} \label{NoisiampAvg} \ban{\hat{a}\dg(t) \, \hat{w}(t)} = \gamma \, \an{\hat{a}(t)} \; . \end{align} The derivation of \eqref{NoisiampAvg} proves that linear amplification can be induced by the amplifier added noise when it comes in the form of multiplicative noise. For this reason we can refer to \eqref{NoisiampStr} as a noise-induced amplifier (henceforth abbreviated to noisiamp). We can also understand noisi amplification as classical correlation between the internal noise source of the amplifier and the signal that is being amplified. This is the essential content of \eqref{NoisiampAvg}. Though such equations are not typically encountered in the amplifier literature, they are certainly allowed within the Born--Markov framework of open-systems theory.
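A classical caricature of \eqref{NoisiampStr} makes the mechanism transparent. Replacing the operators by a complex amplitude $a$ and the bath operator by complex white noise, the Stratonovich\ equation $da = \bar{a}\circ dW$ with $\mathbb{E}[|dW|^2]=2\sigma^2\,dt$ amplifies the mean amplitude as $e^{\sigma^2 t}$, the analogue of \eqref{NoisiampAvg} with $\gamma\leftrightarrow\sigma^2$. The sketch below is our own construction (variable names and numerical values are ours, and the numerical factors are specific to this classical model), integrated with a Heun scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 40_000, 500
t_final = 1.0
dt = t_final / n_steps
sigma2 = 0.5                  # classical stand-in for the rate gamma

a = np.ones(n_paths, dtype=complex)
s = np.sqrt(sigma2 * dt)      # each quadrature of dW has variance sigma2*dt
for _ in range(n_steps):
    dW = rng.normal(0.0, s, n_paths) + 1j * rng.normal(0.0, s, n_paths)
    pred = a + np.conj(a) * dW                       # Heun predictor
    a = a + 0.5 * (np.conj(a) + np.conj(pred)) * dW  # Stratonovich step

gain = np.abs(a.mean())       # ~ exp(sigma2 * t_final)
```

Each noise increment is mean-zero, yet the conjugate coupling correlates the noise with the signal within a step, and the ensemble-averaged amplitude grows: amplification induced purely by multiplicative noise.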
The important point to note here is that neither the Markov approximation, nor Stratonovich\ calculus, treat $\hat{w}(t)$ as true idealised white noise. All that is required is for $\hat{w}(t)$ to have a very small but nonzero correlation time, otherwise \eqref{NoisiampAvg} would be zero and there would be no point to the derivation above. In other words, if there is no correlation between the noise and signal, there is no amplification. Having said this, it is possible to convert \eqref{NoisiampStr} to a form where $\hat{w}(t)$ really is ideal white noise for which its correlations with any system variable always vanish. This is given by the It\^o\ form corresponding to \eqref{NoisiampStr} and we will consider it in Sec.~\ref{ProofCCR}. Before taking this on we briefly discuss how one might be able to observe noisi amplification in ion traps. \subsubsection{Realisation using ion traps} The noisiamp in \eqref{NoisiampStr} and \eqref{NoisiampAvg} may in principle be realised using ion traps as follows. Trapped ions can be thought of as possessing two degrees of freedom, an internal degree of freedom which we can effectively think of as a two-level atom, and a motional degree of freedom. The internal degree of freedom has basis states $\ket{g}$ (ground state) and $\ket{e}$ (excited state), while the Fock basis $\ket{n}$ is used for the motional degree of freedom. Implementing \eqref{NoisiampStr} and \eqref{NoisiampAvg} is equivalent to implementing \eqref{TwoPhotonMESuppMat} with $\kappa_\uparrow=\kappa_\downarrow=\gamma/2$. \begin{figure} \caption{\label{IonTrap} Laser-driven two-photon heating and cooling of the motional state of a trapped ion.} \end{figure} In order to implement the two-photon cooling in \eqref{TwoPhotonMESuppMat} with $\kappa_\uparrow=\kappa_\downarrow=\gamma/2$, the trapped ion interacts with a laser field detuned from the carrier transition by $-2\omega_0$, where $\omega_0$ is the natural frequency of the trap.
The ion then relaxes at the carrier frequency, effectively implementing two-photon loss ($\ket{g,n} \rightarrow \ket{g,n-2}$). Similarly, another laser field detuned by $2\omega_0$ is used to implement the two-photon heating process ($\ket{g,n} \rightarrow \ket{g,n+2}$). These processes are illustrated in Fig.~\ref{IonTrap}. The ion must be deep in the Lamb--Dicke regime \cite{LBMW03}, ensuring the sidebands are resolved and relaxation occurs predominantly at the carrier frequency. The Lamb--Dicke parameter is given by $\eta=(2\pi/\lambda)\sqrt{\hbar/2m\omega_0}\,\cos\theta$, where $\lambda$ is the wavelength of the incident laser field, $m$ is the mass of the ion, and $\theta$ is the angle between the laser field and the motion of the ion. The Lamb--Dicke parameter must satisfy $\eta^2(2n+1)\ll 1$, where $n$ is the Fock state of the ion's motion. To implement \eqref{TwoPhotonMESuppMat} with $\kappa_\uparrow=\kappa_\downarrow=\gamma/2$, the heating and cooling are required to have the same rate $\gamma/2$, which is controlled by the Rabi frequency $\Omega$ of the applied laser fields, and is given by $\gamma\approx\eta^2\Omega$. A similar realisation has been proposed in Ref.~\cite{LS13} to implement the quantum van der Pol oscillator. \subsection{Conversion to It\^o\ form and consistency with quantum mechanics} \label{ProofCCR} \subsubsection{The two-photon amplifier in It\^o\ form} Using \eqref{w(Pi)} we may write the Stratonovich\ equation \eqref{RFda/dt} as \begin{align} \label{da2PhStr} \frac{d}{dt} \; \hat{a}(t) = \big( \kappa_\uparrow - \kappa_\downarrow \big) \, \hat{a}\dg(t) \, \hat{a}^2(t) + \hat{a}\dg(t) \, \hat{w}(t) \; .
\end{align} From \eqref{<sys(t)noise(t)>2} it is not difficult to see that the It\^o\ equivalent of \eqref{da2PhStr} is given by \begin{align} \label{da2PhIto} d\hat{a}(t) = \big( \kappa_\uparrow - \kappa_\downarrow \big) \, \hat{a}^\dagger(t) \, \hat{a}^2(t) \, dt + 2\,\kappa_\uparrow \, \hat{a}(t) \,dt + \hat{a}^\dagger(t) \, d\hat{W}(t) \; , \end{align} where $d\hat{s}(t)\equiv\hat{s}(t+dt)-\hat{s}(t)$ for any operator $\hat{s}$ and $d\hat{W}(t)=\hat{w}(t)\,dt$ satisfies the It\^o\ rules \begin{align} \label{ItoRulesAppendix} d\hat{W}^\dagger(t) \, d\hat{W}(t) = 4 \, \kappa_\uparrow \, dt \;, \quad d\hat{W}(t) \, d\hat{W}^\dagger(t) = 4 \, \kappa_\downarrow \, dt \; . \end{align} Note that as we have mentioned earlier, the It\^o\ equation has the property that \begin{align} \ban{\hat{a}^\dagger(t)\,d\hat{W}(t)} = 0 \; . \end{align} However, there is now an extra term of order $dt$ in \eqref{da2PhIto} containing $2\,\kappa_\uparrow \, \hat{a}(t)$ that makes sure it has the same average as \eqref{da2PhStr}. While the Stratonovich\ and It\^o\ equations can have different appearances, the physics described by each within their respective calculus must be identical. Equation \eqref{da2PhIto} can be derived directly from \eqref{da2PhStr} by treating the time derivative as an implicit equation \cite{WM10SM}. Alternatively, \eqref{da2PhIto} can also be derived by treating $\hat{w}(t)$ as a quantum white-noise process from the start. This can be achieved using the time-evolution operator in the rotating frame: \begin{align} \label{ItoU(t,0)} \hat{U}(t,0) = {\rm T}_{\triangleleft} \bigg\{ \exp\bigg[ \!-\!i \int^t_0 dt' \, \hat{V}(t') \bigg] \bigg\} \; , \end{align} where ${\rm T}_{\triangleleft}$ denotes chronological time ordering.
Here $\hat{V}(t)$ is in the rotating frame with respect to the free evolution and where all such time dependencies are grouped into $\hat{w}(t)$ [see \eqref{w(Pi)}], \begin{align} \label{V(t)ItoDefn} \hat{V}(t) = \hat{a}^2(0) \, \hat{w}^\dagger(t) + \hat{a}\dg{}^2(0) \, \hat{w}(t). \end{align} The time dependence of $\hat{a}(t)$ in \eqref{da2PhIto} is thus defined by \begin{align} \label{a(t)ItoDefn} \hat{a}(t) = \hat{U}^\dagger(t,0) \, \hat{a}(0) \, \hat{U}(t,0) \; . \end{align} Often the time-evolution operator is specified by a Hudson--Parthasarathy equation (a quantum stochastic Schr\"odinger\ equation) \cite{HP84,Chi15,WM10SM} \begin{align} \label{HPE} d\hat{U}(t,0) = \bigg\{ \!-\!\frac{1}{2} \; \Big[ \, \kappa_\uparrow \, \hat{a}^2(t) \, \hat{a}\dg{}^2(t) + \kappa_\downarrow \, \hat{a}\dg{}^2(t) \, \hat{a}^2(t) \Big] \, dt - \frac{1}{2} \, \Big[ \hat{a}^2(t) \, d\hat{W}^\dagger(t) - \hat{a}\dg{}^2(t) \, d\hat{W}(t) \Big] \bigg\} \, \hat{U}(t,0) \; . \end{align} In practice it is easier to derive the It\^o\ quantum stochastic differential equation from the Hudson--Parthasarathy/stochastic Schr\"odinger\ equation \eqref{HPE}. We emphasise here that \eqref{ItoU(t,0)} [or \eqref{HPE}], along with \eqref{V(t)ItoDefn} and \eqref{ItoRulesAppendix}, provide an independent way of deriving \eqref{da2PhIto} that is devoid of any reference to baths at thermal equilibrium. Equation \eqref{V(t)ItoDefn} simply couples the signal represented by $\hat{a}(t)$ to a quantum white-noise process $\hat{w}(t)$. \subsubsection{Preservation of canonical commutation relation} We can show that \eqref{da2PhIto} preserves the canonical commutation relation for $\hat{a}$ and $\hat{a}\dg$. If at time $t$ we have $[\hat{a}(t),\hat{a}\dg(t)]=\hat{1}$, then we must have \begin{align} \label{d[a,adg]} d\big[\hat{a}(t),\hat{a}\dg(t)\big] = 0 \; .
\end{align} Omitting the time argument for ease of writing we have \begin{align} d\big[\hat{a},\hat{a}^\dagger\big] = {}& (d\hat{a}) \, \hat{a}^\dagger + \hat{a} \, (d\hat{a}^\dagger) + (d\hat{a}) (d\hat{a}^\dagger) - (d\hat{a}^\dagger) \hat{a} - \hat{a}^\dagger (d\hat{a}) - (d\hat{a}^\dagger) (d\hat{a}) \\ = {}& (\kappa_\uparrow-\kappa_\downarrow) (\hat{a}^\dagger \hat{a}^2 \hat{a}^\dagger + \hat{a} \, \hat{a}^\dagger{}^2 \hat{a} - 2 \, \hat{a}^\dagger{}^2 \hat{a}^2 ) \, dt - 4 \, ( \kappa_\uparrow-\kappa_\downarrow) \, \hat{a}^\dagger \hat{a} \, dt \;. \end{align} On normal ordering the first two terms in the parentheses on the right-hand side we arrive at \eqref{d[a,adg]}. As part of the proof of \eqref{d[a,adg]} we have also worked out the photon-number evolution in the general case when $\kappa_\uparrow\ne\kappa_\downarrow$. Its average gives \begin{align} \label{d<adga>/dt} \frac{d}{dt} \; \an{\hat{a}\dg\hat{a}} = 2 \, (\kappa_\uparrow-\kappa_\downarrow) \, \ban{\hat{a}\dg{}^2\hat{a}^2} + 8 \, \kappa_\uparrow \, \ban{\hat{a}\dg\hat{a}} + 4 \, \kappa_\uparrow \;. \end{align} In the main text we worked out the corresponding atom-photon interactions taking place when $\kappa_\uparrow=\kappa_\downarrow$ so that the nonlinear term in \eqref{d<adga>/dt} does not contribute to the noisiamp [see Fig.~\ref{AtomPhotonInt}(b) of the main text]. The nonlinear term here represents a two-photon generalisation of the linear (i.e.~one-photon) amplifier. We depict the necessary atom-photon interactions associated with the general two-photon amplifier in Fig.~\ref{TwoPhotonProcessSuppMat}.
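Equation \eqref{d<adga>/dt} can also be checked directly against the master equation \eqref{TwoPhotonMESuppMat} in a truncated Fock space. The sketch below is a numerical aside of ours (the truncation size, rates, and coherent-state amplitude are arbitrary choices): it compares ${\rm Tr}[\hat{n}\,{\cal L}\rho]$ with the right-hand side of \eqref{d<adga>/dt} for a coherent state.

```python
import numpy as np
from math import factorial

N = 40                                        # Fock-space truncation (arbitrary)
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)  # annihilation operator
ad = a.conj().T
n_op = ad @ a

def dissipator(c, rho):
    """Lindblad dissipator D[c] rho = c rho c^dag - (1/2){c^dag c, rho}."""
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

kup, kdn = 0.3, 0.7                           # arbitrary gain/loss rates
alpha = 1.0                                   # coherent-state amplitude
psi = np.array([alpha**k / np.sqrt(float(factorial(k))) for k in range(N)])
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

Lrho = kup * dissipator(ad @ ad, rho) + kdn * dissipator(a @ a, rho)
lhs = np.trace(n_op @ Lrho).real              # d<n>/dt from the master equation
rhs = (2.0 * (kup - kdn) * np.trace(ad @ ad @ a @ a @ rho).real
       + 8.0 * kup * np.trace(n_op @ rho).real
       + 4.0 * kup)
```

The two sides agree to machine precision once the coherent-state population near the truncation edge is negligible.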
\begin{figure} \caption{\label{TwoPhotonProcessSuppMat} Atom-photon interactions associated with the general two-photon amplifier.} \end{figure} \section{Phase properties} \label{PhaseCovarianceSM} \subsection{Phase covariance} We define an arbitrary linear amplifier to be phase covariant if and only if its map ${\cal A}(t)$ (assumed to be completely positive and trace preserving) commutes with the phase-shift map, \begin{align} \label{PhaseCovDefn} {\cal A}(t) \, {\cal P}_\varphi = {\cal P}_\varphi \, {\cal A}(t) \; . \end{align} The phase-shift map is defined by \begin{align} {\cal P}_\varphi \, \rho = \exp(-i\varphi\hat{n}) \, \rho \, \exp(i\varphi\hat{n}) \equiv \rho_\varphi \; , \end{align} where $\hat{n}=\hat{a}\dg\hat{a}$. This same property has been referred to as ``phase-preserving in the strict sense'' in Ref.~\cite{CCJP12SM}. To prove that ${\cal A}(t)$ is phase covariant, we can think of ${\cal A}(t)$ as many compositions of ${\cal A}(\delta t)$ in the limit that $\delta t \longrightarrow 0$. That is, if we define $\delta t=t/n$ then ${\cal A}(t)=[{\cal A}(\delta t)]^n$ as $n\longrightarrow\infty$. Hence to show that a linear amplifier is phase covariant, all we have to do is show that its map satisfies \eqref{PhaseCovDefn} for an infinitesimal time interval $dt$. Since our counterexamples to the paramp conjecture in the main text are of the form ${\cal A}(t)=\exp({\cal L} t)$ with ${\cal L}$ in the Lindblad form, we see that ${\cal A}(dt)=\mathbbm{1}+{\cal L}\,dt$. The condition ${\cal A}(dt)\,{\cal P}_\varphi={\cal P}_\varphi\,{\cal A}(dt)$ then becomes \begin{align} \label{InfPhasePres} {\cal L} \, {\cal P}_\varphi = {\cal P}_\varphi \, {\cal L} \;. \end{align} It is then possible to show that \eqref{InfPhasePres} is true. Here we will in fact prove that ${\cal A}(t)=\exp({\cal L} t)$ is a phase-covariant channel as long as ${\cal L}$ is any $m$-photon dissipator.
That is, \begin{align} \label{mPhotonCh} {\cal D}[\hat{a}^m] \, {\cal P}_\varphi = {\cal P}_\varphi \, {\cal D}[\hat{a}^m] \;, \quad {\cal D}[\hat{a}\dg{}^m] \, {\cal P}_\varphi = {\cal P}_\varphi \, {\cal D}[\hat{a}\dg{}^m] \;. \end{align} This covers both our counterexamples to the paramp conjecture. For ${\cal D}[\hat{a}^m]$ we have, \begin{align} {\cal D}[\hat{a}^m] \, {\cal P}_\varphi \, \rho = {}& \hat{a}^m \, \rho_\varphi \, \hat{a}\dg{}^m - \frac{1}{2} \; \hat{a}\dg{}^m \, \hat{a}^m \, \rho_\varphi - \frac{1}{2} \; \rho_\varphi \, \hat{a}\dg{}^m \, \hat{a}^m \\[0.25cm] = {}& \big( e^{-i\varphi\;\!\hat{n}} \, e^{i\varphi\;\!\hat{n}} \big) \, \hat{a}^m \, \rho_\varphi \, \hat{a}\dg{}^m \, \big( e^{-i\varphi\;\!\hat{n}} \, e^{i\varphi\;\!\hat{n}} \big) - \frac{1}{2} \; \big( e^{-i\varphi\;\!\hat{n}} \, e^{i\varphi\;\!\hat{n}} \big) \, \hat{a}\dg{}^m \, \big( e^{-i\varphi\;\!\hat{n}} \, e^{i\varphi\;\!\hat{n}} \big) \, \hat{a}^m \, \rho_\varphi \nonumber \\ & - \frac{1}{2} \; \rho_\varphi \, \hat{a}\dg{}^m \,\big( e^{-i\varphi\;\!\hat{n}} \, e^{i\varphi\;\!\hat{n}} \big)\, \hat{a}^m \, \big( e^{-i\varphi\;\!\hat{n}} \, e^{i\varphi\;\!\hat{n}} \big) \\[0.25cm] \label{D[am]} = {}& e^{-i\varphi\;\!\hat{n}} \, \big( e^{i\varphi\;\!\hat{n}} \, \hat{a}^m e^{-i\varphi\;\!\hat{n}} \big) \rho \, \big( e^{i\varphi\;\!\hat{n}} \, \hat{a}\dg{}^m e^{-i\varphi\;\!\hat{n}} \big) e^{i\varphi\;\!\hat{n}} - \frac{1}{2} \; e^{-i\varphi\;\!\hat{n}} \big( e^{i\varphi\;\!\hat{n}} \, \hat{a}\dg{}^m \, e^{-i\varphi\;\!\hat{n}} \big) \big( e^{i\varphi\;\!\hat{n}} \, \hat{a}^m \, e^{-i\varphi\;\!\hat{n}} \big) \, \rho \, e^{i\varphi\;\!\hat{n}} \nonumber \\ & - \frac{1}{2} \; e^{-i\varphi\;\!\hat{n}} \, \rho \, \big( e^{i\varphi\;\!\hat{n}} \, \hat{a}\dg{}^m \, e^{-i\varphi\;\!\hat{n}} \big) \big( e^{i\varphi\;\!\hat{n}} \, \hat{a}^m \, e^{-i\varphi\;\!\hat{n}} \big) e^{i\varphi\;\!\hat{n}} \;.
\end{align} This can be simplified by noting that $\exp(i\varphi\;\!\hat{n})\,\hat{a}\,\exp(-i\varphi\;\!\hat{n})=\hat{a}\,\exp(-i\;\!\varphi)$ from which we can also see that \begin{align} \label{am} \exp(i\varphi\;\!\hat{n}) \, \hat{a}^m \, \exp(-i\varphi\;\!\hat{n}) = \hat{a}^m \, e^{-im\;\!\varphi} \;. \end{align} Equation \eqref{D[am]} is thus \begin{align} {\cal D}[\hat{a}^m] \, {\cal P}_\varphi \, \rho = {}& e^{-i\varphi\;\!\hat{n}} \, \hat{a}^m \rho \, \hat{a}\dg{}^m \, e^{i\varphi\;\!\hat{n}} - \frac{1}{2} \; e^{-i\varphi\;\!\hat{n}} \, \hat{a}\dg{}^m \, \hat{a}^m \, \rho \, e^{i\varphi\;\!\hat{n}} - \frac{1}{2} \; e^{-i\varphi\;\!\hat{n}} \, \rho \, \hat{a}\dg{}^m \, \hat{a}^m \, e^{i\varphi\;\!\hat{n}} \\ = {}& e^{-i\varphi\;\!\hat{n}} \, \bigg( \hat{a}^m \rho \, \hat{a}\dg{}^m \, - \frac{1}{2} \; \hat{a}\dg{}^m \, \hat{a}^m \, \rho \, - \frac{1}{2} \; \rho \, \hat{a}\dg{}^m \, \hat{a}^m \, \bigg) \, e^{i\varphi\;\!\hat{n}} \nonumber \\ \label{PhCovDam} = {}& {\cal P}_\varphi \, {\cal D}[\hat{a}^m] \, \rho \;. \end{align} The proof for ${\cal D}[\hat{a}\dg{}^m]$ follows similarly on replacing $\hat{a}$ with $\hat{a}\dg$ and using \begin{align} \label{adgm} \exp(i\varphi\;\!\hat{n}) \, \hat{a}\dg{}^m \, \exp(-i\varphi\;\!\hat{n}) = \hat{a}\dg{}^m \, e^{im\;\!\varphi} \;. \end{align} \subsection{Phase sensitivity} Aside from phase covariance, another important property of linear amplifiers is whether or not they are phase sensitive \cite{Cav82,SZ97}. This tries to capture whether the amplification and added noise due to the linear amplifier will differ for different directions in phase space.
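The covariance identity \eqref{mPhotonCh} established above is also easy to confirm numerically in a truncated Fock space, where the phase rotations of the proof hold exactly. The following is a numerical aside of ours (truncation size, photon order $m$, phase $\varphi$, and the random state are arbitrary):

```python
import numpy as np

N, m, phi = 30, 2, 0.7                 # truncation, photon order, phase (arbitrary)
a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
ad = a.conj().T
P = np.diag(np.exp(-1j * phi * np.arange(N)))   # exp(-i*phi*n) on the Fock basis

def dissipator(c, rho):
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def phase_shift(rho):
    return P @ rho @ P.conj().T        # the map P_phi

rng = np.random.default_rng(2)
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = X @ X.conj().T
rho /= np.trace(rho).real              # random density matrix

am = np.linalg.matrix_power(a, m)      # a^m
adm = np.linalg.matrix_power(ad, m)    # adag^m
ok_loss = np.allclose(dissipator(am, phase_shift(rho)),
                      phase_shift(dissipator(am, rho)))
ok_gain = np.allclose(dissipator(adm, phase_shift(rho)),
                      phase_shift(dissipator(adm, rho)))
```

Both checks succeed because the truncated matrices satisfy $e^{i\varphi\hat{n}}\hat{a}^m e^{-i\varphi\hat{n}}=\hat{a}^m e^{-im\varphi}$ exactly, and an overall phase of the jump operator leaves the dissipator invariant.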
A linear amplifier is said to be phase insensitive if and only if for any value of $\varphi$, the quadrature \begin{align} \hat{x}_\varphi = \frac{1}{\rt{2}} \; \big[ \hat{a} \exp(-i\varphi) + \hat{a}\dg \exp(i\varphi) \big] \; , \end{align} is such that it satisfies \begin{gather} \an{\hat{x}_\varphi(t)} = g \, \an{\hat{x}_\varphi(0)} \; , \\ \an{[\Delta \hat{x}_\varphi(t)]^2} = g^2 \, \an{[\Delta \hat{x}_\varphi(0)]^2} + N \; , \end{gather} where $g$ and $N$ are independent of $\varphi$, and we have defined $\Delta\hat{x}_\varphi=\hat{x}_\varphi-\an{\hat{x}_\varphi}$. It is simple to show that for the noisiamp $g$ is as defined already, and \begin{align} N = g^2 \, \big( g^2 - 1 \big) \bigg[ \ban{\hat{n}(0)} +\frac{1}{2} \, \bigg] \; . \end{align} It is clear from these relations that the noisiamp is phase insensitive. \section{A three-photon counterexample} To demonstrate the non-uniqueness of the noisiamp as an example which cannot be described by the paramp, we proposed in the main text the example defined by \begin{align} \label{3PhotonL} \ddt \; \rho(t) = \frac{\gamma}{9} \, {\cal D}[\hat{a}^3]\, \rho(t) + \frac{\gamma}{9} \, {\cal D}[\hat{a}\dg{}^3] \, \rho(t) + \gamma \; {\cal D}[\hat{a}^2] \, \rho(t) \;. \end{align} This is clearly a phase-covariant linear amplifier by the results of Sec.~\ref{PhaseCovarianceSM}. It is simple to show from this that \begin{gather} \ddt \; \an{\hat{a}} = \gamma \, \ban{\hat{a}} \;, \\ \label{3Photon<n>} \ddt \; \an{\hat{n}} = 2\,\gamma\,\ban{\hat{n}^2} + 6\,\gamma\,\ban{\hat{n}} + 2 \, \gamma \;. \end{gather} It is obvious that $\an{\hat{a}(t)}=g\an{\hat{a}(0)}$ where $g=\exp(\gamma\,t)$. \begin{figure} \caption{\label{ThreePhotonEx} Lower bound \eqref{<n>LB} on the mean output photon number of the three-photon example, compared with the paramp line \eqref{<n>ParametricSM}.} \end{figure} However, the output average photon number is now coupled to its second moment. We can still show that it leads to unattainable values for the paramp by considering a lower bound of $\an{\hat{n}(t)}$ obtained by ignoring the first term of \eqref{3Photon<n>}.
Solving the resulting differential equation gives \begin{align} \label{<n>LB} \ban{\hat{n}(t)}_{\rm LB} = {}& g^6 \, \ban{\hat{n}(0)} + \frac{g^6-1}{3} \le \ban{\hat{n}(t)} \; . \end{align} The paramp with identical amplitude gain has \begin{align} \label{<n>ParametricSM} \ban{\hat{n}(t)} = {}& g^2 \ban{\hat{n}(0)} + \big(g^2-1\big)\,\ban{\hat{b}(0)\,\hat{b}^\dagger(0)} \;. \end{align} When considered as a function of $\an{\hat{n}(0)}$, \eqref{<n>ParametricSM} is a straight line with gradient $g^2$ and vertical intercept $(g^2-1)\,\an{\hat{b}(0)\,\hat{b}^\dagger(0)}$. This is shown as the black line in Fig.~\ref{ThreePhotonEx}. On the same axes \eqref{<n>LB} is shown as the red dashed line (not to scale). It is also a straight line, but with the larger gradient $g^6$ and vertical intercept $(g^6-1)/3$. The actual solution to \eqref{3Photon<n>} must therefore lie above the red dashed line, while the area below it (shaded region) is forbidden. Figure~\ref{ThreePhotonEx} clearly illustrates that no matter how $\an{\hat{b}(0)\,\hat{b}^\dagger(0)}$ is chosen (by choosing $\sigma$ in the ancillary mode), the paramp $\an{\hat{n}(t)}$ always has a segment in the forbidden region of the three-photon example. \end{document}
\begin{document} \title{When can quantum decoherence be mimicked by classical noise?} \author{Bing Gu} \altaffiliation[Present address:]{ Department of Chemistry, University of California, Irvine, CA 92697, USA} \affiliation{Department of Chemistry, University of Rochester, Rochester NY 14627, USA} \author{Ignacio Franco} \affiliation{Department of Chemistry, University of Rochester, Rochester NY 14627, USA} \affiliation{Department of Physics, University of Rochester, Rochester NY 14627, USA} \date{\today} \begin{abstract} Quantum decoherence arises due to uncontrollable entanglement between a system and its environment. However, the effects of decoherence are often thought of and modeled through a simpler picture in which the role of the environment is to introduce classical noise in the system's degrees of freedom. Here we establish necessary conditions that classical noise models need to satisfy to quantitatively model decoherence. Specifically, for pure-dephasing processes we identify well-defined statistical properties of the noise that are determined by the quantum many-point time correlation functions of the environmental operators that enter into the system-bath interaction. In particular, for the exemplifying spin-boson problem with a Lorentz-Drude spectral density we show that high-temperature quantum decoherence is quantitatively mimicked by colored Gaussian noise. In turn, for dissipative environments we show that classical noise models cannot describe decoherence effects due to spontaneous emission. These developments provide a rigorous platform to assess the validity of classical noise models of decoherence. \end{abstract} \maketitle \section{Introduction} The inevitable interaction between a quantum system and its surrounding environment leads to decoherence \cite{Breuer2002, Schlosshauer2007, Gu2017, Gu2018, Gu2018c, Hu2018}.
The decoherence occurs because such interaction leads to system-bath entanglement that turns a pure system state into a statistical mixture of states. Understanding quantum decoherence is important for a wide range of fields such as quantum computation and quantum information processing \cite{Nielsen2011}, quantum control \cite{Shapiro2003}, measurement theory, spectroscopy, and molecular structure and dynamics \cite{Valkunas2013}. There are several theoretical frameworks to understand quantum decoherence and the effective dynamics of open quantum systems \cite{Breuer2002}. The most rigorous of these consists of \emph{explicitly} solving the time-dependent Schr\"{o}dinger equation for the system and its environment and then tracing out the environmental degrees of freedom to obtain the system's reduced density matrix. However, this approach, while desirable \cite{Hu2018, Franco2008_charge, Franco2008}, is often intractable due to the exponentially increasing computational cost of solving the time-dependent Schr\"{o}dinger equation with system/environment size. This limitation has led to significant advances in developing methods in which the effect of the bath is treated \emph{implicitly} \cite{Breuer2002, Gu2017a}, such as perturbative quantum master equations \cite{Tanimura2006}, path integral techniques \cite{Walters2015} and hierarchical equations of motion \cite{Tanimura2012, Tanimura1989}. Despite this important progress, following the reduced dynamics of a primary system of interest interacting with a general quantum environment remains an outstanding challenge. Due to the conceptual and technical complexities in dealing with the system plus environment fully quantum mechanically, an alternative approach is to simply consider that the effect of the environment is to introduce classical noise in the system's degrees of freedom \cite{Budini2001, Yang2017, Stern1990, Kubo1969, Gelzinis2015, Leon-Montiel2013, Costa-Filho2017, Chenu2017, Spanner2009}.
In this picture, quantum dissipation is mimicked by stochastic terms in the equation of motion that introduce random transitions between system energy eigenstates. In turn, pure-dephasing processes are modeled by introducing dynamic disorder (or, equivalently, spectral diffusion) in which classical noise perturbs the energy of the system eigenstates, leading to an accumulated random phase. Decoherence arises by averaging over an ensemble of these stochastic but unitary quantum dynamics. Note that this implementation of decoherence through noise requires averaging over an ensemble of realizations, each one evolving unitarily. The corresponding ensemble of unitary evolutions represents a nonunitary evolution of the density matrix of the system. By contrast, ``true" decoherence occurs for a single quantum system that becomes entangled with environmental degrees of freedom. The unitary deterministic evolution of the system plus environment leads to a nonunitary evolution of the reduced density matrix of the system. This conceptual difference between noise and true decoherence is known~\cite{Schlosshauer2007, Joos2013}. However, unless this difference is probed explicitly, the noise model can mimic well the effects of decoherence, since both effectively lead to a damping of coherences. In fact, this stochastic picture with classical noise has been widely used in chemistry and physics to capture the loss of interference \cite{Stern1990, Spanner2009}, optical line shapes \cite{Kubo1969, Gelzinis2015}, noise-assisted energy transport \cite{Leon-Montiel2013}, non-Markovian dynamics \cite{Costa-Filho2017}, Landau-Zener transitions \cite{Wubs2006, Kayanuma1985} and central-spin problems \cite{Yang2017}, and in the quantum simulation of open many-body systems \cite{Chenu2017}. The fundamental question that arises in this context is: what are the regime of validity and the limitations of the classical noise picture?
An initial discussion of this problem was provided by Stern \emph{et al.} \cite{Stern1990}, where it is argued that the loss of quantum interference can be mimicked by the phase uncertainty introduced by classical noise. However, no formal criteria for the validity of classical noise were provided. Here we identify necessary conditions under which the decoherence effects induced by a quantum environment in a quantum system can be understood and modeled through classical noise. Such conditions are obtained by comparing the reduced dynamics of an open quantum system to the ensemble average of a series of unitary quantum trajectories generated by a stochastic Hamiltonian. We consider the effects of dissipation and pure dephasing independently and do not take into account their possible interference, which was recently demonstrated in Ref.~\cite{Gu2018}. This paper is organized as follows. Section \ref{sec:pure} introduces the decoherence functions that arise due to system-bath entanglement and due to classical noise in the pure-dephasing limit. Through a term-by-term comparison of their cumulant expansions, we isolate conditions on the classical noise that need to be satisfied to mimic the quantum dynamics. These conditions are determined by the many-point time correlation functions of the environment operators that enter into the system-bath interaction. The application of these conditions to the spin-boson model shows that the decoherence effects can be captured through colored Gaussian noise provided that the environment time-correlation function can be described by a set of exponentially decaying functions. In turn, Sec.~\ref{sec:diss} focuses on decoherence through quantum relaxation. We show that classical noise cannot describe decoherence induced by spontaneous emission and thus that these models are of limited applicability when spontaneous fluctuations play a critical role.
\section{Pure dephasing dynamics} \label{sec:pure} We first focus on pure dephasing dynamics and establish general criteria that need to be satisfied to employ classical noise to mimic quantum decoherence. Pure dephasing refers to a process in which the decoherence arises without energy transfer between system and environment. For a general composite system with Hamiltonian \begin{equation} H = H_\text{S} + H_\text{B} + H_\text{SB} \end{equation} where $H_\text{S}$ is the Hamiltonian of the quantum system, $H_\text{B}$ that of the environment and $H_\text{SB}$ the interaction between system and bath, the pure-dephasing condition arises when $[H_\text{S}, H_\text{SB}]=0$. Even when this condition is not strictly satisfied, pure dephasing may still be the dominant effect when the environment dynamics is non-resonant with the transition frequencies of the system, such that dissipation is much slower than pure-dephasing effects. For this reason, the pure-dephasing limit has been useful in describing electronic decoherence in molecules \cite{Gu2018, Hu2018}, elastic electron-phonon interaction in solid state systems, loss of quantum interference \cite{Stern1990}, line shapes in spectroscopic measurements \cite{Kubo1969}, vibrational dephasing in solvents \cite{Joutsuka2016} and the central spin problem \cite{Yang2017}. Below we define the decoherence functions that arise from system-bath entanglement and from noise-induced pure dephasing. By contrasting them we isolate conditions that the classical noise needs to satisfy to mimic the quantum decoherence. \subsection{ Quantum decoherence function} For pure-dephasing dynamics, the system-bath interaction can be written as \begin{equation} H_\text{SB} = \sum_\alpha \ket{\alpha}\bra{\alpha} \otimes B_\alpha \label{eq:hsb} \end{equation} where $\{ \ket{\alpha}\}$ are the eigenstates of $H_\text{S}$ and $B_\alpha$ is a bath operator.
Here we assume that the system and bath are uncorrelated at the initial time, such that the density matrix can be written as \begin{equation} \rho(0) = \rho_\text{S}(0) \otimes \rho_\text{B}(0), \label{eq:init} \end{equation} where $\rho_\text{S}$ is the reduced density matrix for the system and $\rho_\text{B}$ for the bath. The Liouville-von Neumann (LvN) equation in the interaction picture of $H_0 = H_\text{S} + H_\text{B}$ reads \begin{equation} i \frac{d}{dt} {\tilde{\rho}}(t) = [\tilde{H}_\text{SB}(t), \tilde{\rho}(t)] , \label{eq:113} \end{equation} where $\tilde{A}(t) = U_0^\dag(t) A U_0(t)$ is the operator $A$ in this interaction picture and $U_0(t) = e^{-i H_0 t}$. For notational convenience, for system operators $\tilde{A}_\text{S}(t) \equiv U_\text{S}^\dagger A_\text{S}U_\text{S} $ where $U_\text{S}=e^{-iH_\text{S} t}$. Similarly, for bath operators $\tilde{A}_\text{B}(t) \equiv U_\text{B}^\dagger A_\text{B}U_\text{B} $ where $U_\text{B}=e^{-iH_\text{B} t}$. Here and throughout we employ atomic units where $\hbar=1$. The solution to the LvN equation can be written as \begin{equation} \tilde{\rho}(t) = \tilde{U}(t) \rho(0) \tilde{U}^\dag(t) \label{eq:114} \end{equation} where $ \tilde{U}(t) = {\mathcal{T}} e^{-i \int_0^t \tilde{H}_\text{SB}(t')\,dt'}$ is the propagator in the interaction picture and ${\mathcal{T}}$ is the time-ordering operator.
Using \eq{eq:hsb}, it follows that $ \tilde{H}_\text{SB}(t) = \sum_{\alpha} \proj{\alpha} \otimes \tilde{B}_\alpha(t) $ and \begin{equation} \begin{split} \tilde{U}(t) &= {\mathcal{T}} \sum_{n=0}^\infty \frac{(-i)^n}{n!} \left( \int_0^t dt' \sum_{\alpha} \proj{\alpha} \otimes \tilde{B}_\alpha(t')\right)^n \\ &= \sum_{\alpha} \proj{\alpha} \otimes {\mathcal{T}} \sum_{n=0}^\infty \frac{(-i)^n}{n!} \left( \int_0^t dt' \tilde{B}_\alpha(t')\right)^n = \sum_{\alpha} \proj{\alpha} \otimes V_\alpha(t) \end{split} \label{eq:118} \end{equation} where $V_\alpha(t) \equiv {\mathcal{T}} \exp\left(-i \int_0^t \tilde{B}_\alpha(t')\,dt' \right)$. Inserting \eq{eq:118} into \eq{eq:114}, taking into account the uncorrelated initial system-bath state in \eq{eq:init}, and tracing out the bath degrees of freedom (which is denoted by $\text{Tr}_\text{B}[\cdots]$) yields the reduced density matrix for the system \begin{equation} \tilde{\rho}^{\text{S}}_{\alpha \beta}(t) = \bra{\alpha} \textrm{Tr}_\text{B}[ \tilde{\rho}(t)]\ket{\beta} = {\rho}^{\text{S}}_{\alpha \beta}(0) \Phi_{\alpha \beta}(t). \label{eq:solq} \end{equation} Here \begin{equation} \Phi_{\alpha \beta}(t) \equiv \text{Tr}_\text{B}[\rho_\text{B}(0) V^\dag_\beta(t)V_\alpha(t)] = \Braket{V^\dag_\beta(t) V_\alpha(t) }, \label{eq:df_def} \end{equation} is the quantum decoherence function (QDF), which characterizes the decoherence effects for pure-dephasing dynamics. In this pure-dephasing dynamics, the diagonal matrix elements of the reduced density matrix, representing populations in the energy eigenstates, are not influenced by the environment as $\braket{ V^\dag_\alpha(t)V_\alpha(t)} = 1$. However, the off-diagonal elements of the density matrix decay at a rate determined by $\Phi_{\alpha\beta}(t)$.
If the initial state of the environment is pure, i.e., $\rho_\text{B}(0) = \ket{\chi}\bra{\chi}$, the QDF becomes \begin{equation} \Phi_{ \alpha \beta}(t) = \braket{\chi | V^\dag_\beta(t) V_\alpha(t)|\chi }. \label{eq:115} \end{equation} In this case, the absolute square of the decoherence function $|\Phi_{ \alpha \beta}|^2$ is known as the Loschmidt echo $L_{\alpha\beta}(t)$ \cite{Goussev2012}. The Loschmidt echo measures the stiffness of the environment to the perturbation by the system and is deeply connected to quantum decoherence \cite{Cucchietti2003}. A particularly interesting case is a two-level system with initial state $\ket{\psi_0} = c_0\ket{0} + c_1\ket{1}$, for which the Loschmidt echo connects directly to the purity of the system, defined as $\mc{P}(t) = \text{Tr}_\text{S} [\rho_\text{S}^2(t)] $, through the relationship \begin{equation} \mc{P}(t) = 1 + 2|c_0|^2|c_1|^2 (L_{01}(t)-1) .\end{equation} \subsection{Noise-induced decoherence function} Consider now a quantum system that is subject to classical noise. The noise is supposed to cause spectral diffusion, i.e., to introduce stochastic dynamics in the energy eigenvalues of the system. The effective Hamiltonian of the system for a particular realization of the noise is \begin{equation} H(t) = H_\text{S} + \sum_\alpha \eta_\alpha(t) \ket{\alpha}\bra{\alpha} \end{equation} where $\{\eta_\alpha(t)\}$ are stochastic processes; for the Hamiltonian to be Hermitian the $\eta_\alpha(t)$ must be real. The density matrix for a single realization of the noise can be obtained from the LvN equation in the interaction picture of $H_\text{S}$ to yield \begin{equation} i \frac{d}{dt} \tilde{\rho}_{\alpha \beta}(t)= (\eta_\alpha(t) - \eta_\beta(t)) \tilde{\rho}_{\alpha \beta}(t).
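The purity relation above can be verified numerically. The following minimal sketch (the amplitudes $c_0$, $c_1$ and the damping factor $\Phi_{01}$ are illustrative assumptions, not values from the text) builds the dephased two-level density matrix and compares $\text{Tr}[\rho_\text{S}^2]$ against $1 + 2|c_0|^2|c_1|^2(L_{01}-1)$:

```python
import numpy as np

# Numerical check of the purity relation P = 1 + 2|c0|^2 |c1|^2 (L01 - 1).
# The amplitudes and the damping factor Phi01 are illustrative assumptions.
c0 = np.sqrt(0.3)
c1 = np.sqrt(0.7) * np.exp(0.4j)
Phi01 = 0.6 * np.exp(0.25j)      # off-diagonal damping factor Phi_{01}(t)
L01 = abs(Phi01) ** 2            # Loschmidt echo L_01 = |Phi_01|^2

# Reduced density matrix after pure dephasing: populations unchanged,
# coherence multiplied by Phi_01.
rho = np.array([[abs(c0) ** 2,                      c0 * np.conj(c1) * Phi01],
                [np.conj(c0) * c1 * np.conj(Phi01), abs(c1) ** 2]])

purity = np.trace(rho @ rho).real
predicted = 1 + 2 * abs(c0) ** 2 * abs(c1) ** 2 * (L01 - 1)
print(purity, predicted)  # identical up to rounding
```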
\label{eq:116} \end{equation} Taking a statistical average of the solution of \eq{eq:116} yields \begin{equation} \overline{\tilde{\rho}_{\alpha \beta}(t)} = \Phi^\text{noise}_{\alpha \beta}(t) \rho_{\alpha \beta}(0), \label{eq:solc} \end{equation} where we have introduced the noise-induced decoherence function (NIDF) \begin{equation} \Phi^\text{noise}_{\alpha \beta}(t) = \overline{e^{-i \int_0^t \Delta_{\alpha \beta}(s) \,ds }}, \end{equation} $\Delta_{\alpha \beta}(s) \equiv \eta_\alpha(s) - \eta_\beta(s)$ and the overline denotes statistical averaging. \subsection{Contrasting quantum and noise-induced decoherence functions} Comparing Eqs. \eqref{eq:solq} and \eqref{eq:solc}, it is clear that if the classical decoherence function coincides with the quantum decoherence function, i.e., \begin{equation} \Phi_{\alpha \beta}(t) = \Phi^\text{noise}_{\alpha \beta}(t) \qquad \forall \, \alpha, \beta, \label{eq:cond} \end{equation} the noise picture of decoherence accurately mimics the entanglement process that leads to the decoherence. This formal relation offers a general structure to understand how classical noise models can be related to physical pure-dephasing processes. However, it does not offer a practical prescription to relate the decoherence dynamics to the statistical properties of the noise, as the quantum decoherence function involves two time-ordered exponentials of the bath operators which are generally not available in closed form. To make further progress, below we introduce a useful operatorial identity for products of time-ordered exponentials and use it to develop a cumulant expansion of the quantum decoherence function.
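The NIDF average is straightforward to sample numerically. The sketch below uses discrete, zero-mean, white Gaussian noise as an assumed illustrative model (the text places no such restriction); for Gaussian noise the average equals the Gaussian characteristic function $\exp(-\mathrm{Var}[\phi]/2)$ of the accumulated phase, which the Monte-Carlo estimate should reproduce:

```python
import numpy as np

# Monte-Carlo sketch of the NIDF: Phi_noise(t) = average over noise
# realizations of exp(-i * int_0^t Delta(s) ds).  The noise model here
# (discrete white Gaussian increments) is an illustrative assumption;
# for zero-mean Gaussian noise the exact answer is exp(-Var[phase]/2).
rng = np.random.default_rng(0)
dt, nsteps, sigma = 0.05, 20, 2.0      # total time t = nsteps * dt = 1
ntraj = 200_000

delta = sigma * rng.standard_normal((ntraj, nsteps))  # Delta(s_k) samples
phase = delta.sum(axis=1) * dt                        # ~ int_0^t Delta ds
nidf = np.exp(-1j * phase).mean()                     # sampled NIDF

var_phase = sigma**2 * dt**2 * nsteps                 # exact Var[phase]
print(nidf, np.exp(-var_phase / 2))                   # real parts ~ 0.905
```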
\subsubsection{A useful operatorial identity} We now show that given two general Hermitian operators $A(t)$ and $B(t)$ \begin{equation} \bar{{\mathcal{T}}} e^{i \int_0^t B(\tau)\,d\tau} {\mathcal{T}} e^{-i \int_0^t A(\tau)\,d\tau} = {\mathcal{T}}_\mc{C} e^{-i \int_0^t (A(\tau_+) - B(\tau_-) )\,d\tau} \label{eq:identity} \end{equation} where $\bar{{\mathcal{T}}}$ is the anti-chronological time-ordering operator, and ${\mathcal{T}}_\mc{C}$ is the contour-ordering operator defined on the complex time contour $\mc{C}$ specified in Fig.~\ref{fig:kc}. The anti-chronological time-ordering operator rearranges earlier-time terms to the left of the later-time ones, and the contour-ordering operator rearranges earlier-in-contour terms to the right of the later-in-contour ones. This contour consists of two time branches, the upper branch going forward in time from $t_0+i\epsilon \rightarrow t+i\epsilon $ and the lower one going backward in time from $t-i\epsilon \rightarrow t_0-i\epsilon$, where $\epsilon = 0^+$ is an infinitesimal positive number. Equation \eqref{eq:identity} can be understood as a direct extension of the semigroup property of the evolution operator [$U(t,t') = U(t,t'')U(t'',t')$] from real time to a complex time contour. A formal proof is provided as follows. We first note that \begin{equation} \bar{{\mathcal{T}}} e^{i \int_0^t B(\tau)\,d\tau} {\mathcal{T}} e^{-i \int_0^t A(\tau)\,d\tau} = {\mathcal{T}}_\mc{C} e^{i \int_0^t B(\tau_-)\,d\tau } e^{-i \int_0^t A(\tau_+)\,d\tau} \label{eq:121} \end{equation} due to the fact that the effects of the two time-ordering operators on the left-hand side are taken care of by the contour-ordering operator. Here the subscript $ \pm $ indicates the upper/lower time branch of the contour.
Using the Baker-Campbell-Hausdorff formula \cite{Rossmann2002} $e^X e^Y = e^{X+Y+\half [X,Y] + \cdots }$ yields \begin{widetext} \begin{equation} {\mathcal{T}}_\mc{C} e^{i \int_0^t B(\tau_-)\,d\tau } e^{-i \int_0^t A(\tau_+)\,d\tau} = {\mathcal{T}}_\mc{C} \exp \left\{ i\int_0^t( B(\tau_-) - A(\tau_+))d\tau - \frac{i^2}{2} \iint_0^t [ {B}(\tau_-), A(\tau'_+)]\,d\tau d\tau' + \cdots \right\} \label{eq:tmp} \end{equation} \end{widetext} Now, commutators vanish under the contour-ordering operator, \begin{equation} {\mathcal{T}}_\mc{C} \{[A(\tau),B(\tau')] \}= {\mathcal{T}}_\mc{C} \{A(\tau)B(\tau') - B(\tau')A(\tau)\} = 0 \end{equation} as the two terms will be ordered in the same way by the contour-ordering operator. Thus all commutators and nested commutators in \eq{eq:tmp} vanish, yielding the identity in \eq{eq:identity}. The utility of \eq{eq:identity} is that it enables us to express the two time-ordered exponentials in $\Phi_{\alpha\beta}(t)$ in terms of a single contour-ordered exponential. As shown below, such an exponential admits a simple cumulant expansion that enables us to connect the desired statistical properties of the noise with quantum time-correlation functions. \subsubsection{Decoherence function in the contour} Using \eq{eq:identity} it follows that \begin{equation} V_\beta^\dag(t)V_\alpha(t) = {\mathcal{T}}_\mc{C} \exp\left(i\int_0^t (\tilde{B}_\beta(\tau_-) - \tilde{B}_\alpha(\tau_+))\,d\tau \right). \label{eq:112} \end{equation} This equation can be simplified further if we define a function on the contour as \begin{equation} \mc{B}_{\alpha \beta}(\tau) = \theta_\mc{C}(t-\tau) \tilde{B}_\alpha(\tau) + \theta_\mc{C}(\tau - t) \tilde{B}_\beta(\tau) \end{equation} where $\theta_\mc{C}(\tau-\tau')$ is the Heaviside step function defined on the contour, $\theta_\mc{C}(\tau-\tau') = 1$ if $\tau$ is later than $\tau'$ in the contour and $\theta_\mc{C}(\tau-\tau') = 0$ otherwise.
Using this definition, Eqs. \eqref{eq:112} and \eqref{eq:df_def}, the QDF can be written as a single contour-ordered exponential \begin{equation} \Phi_{ \alpha \beta}(t) = \Braket{{\mathcal{T}}_\mc{C} \left\{ e^{-i \int_\mc{C} \mc{B}_{\alpha \beta}(\tau) \,d\tau} \right\}} , \label{eq:solq2} \end{equation} where the contour integral is defined as $ \int_\mc{C} = \int_{0+i\epsilon}^{t+i\epsilon} - \int^{t-i\epsilon}_{0-i\epsilon}$. \begin{figure}[bt] \begin{tikzpicture}[ scale=2, line cap=round, dec/.style args={#1#2}{ decoration={markings, mark=at position #1 with {#2}}, postaction={decorate} } ] \draw[thick,->] (0,0) -- (2.8,0) node[anchor=north west] {Re $t$}; \draw[thick,->] (0,-0.6) -- (0,0.6) node[anchor=south east] {Im $t$}; \draw [->](0,0.1)--node[above,black]{$\mathcal{C}_+$} (1.0,0.1); \draw (1,0.1)-- (2,0.1); \draw [->](2.0,-0.1)--node[below,black]{$\mathcal{C}_-$} (1.0,-0.1) ; \draw (1,-0.1)-- (0,-0.1); \draw (2,0.1) arc (90:-90:1mm); \node at (2.1,0.0) [circle, scale=0.3, draw=black!80,fill=black!80] {}; \node at (2.15,-0.1) {$t$}; \node at (-0.1,-0) {$t_0$}; \end{tikzpicture} \caption{The complex time contour that is used in \eq{eq:identity}.} \label{fig:kc} \end{figure} \subsubsection{Cumulant expansion} With Eqs. \eqref{eq:solq2} and \eqref{eq:solc}, the condition \eq{eq:cond} becomes \begin{equation}\overline{e^{-i\int_0^t \Delta_{\alpha \beta}(s) \,ds }} = \Braket{{\mathcal{T}}_\mc{C} e^{-i \int_\mc{C} \mc{B}_{\alpha\beta}(\tau) \,d\tau}} \label{eq:final}. \end{equation} While formally exact, it is still nontrivial to infer directly from \eq{eq:final} whether it is possible to find random processes $ \{\eta_\alpha(t)\}$ that satisfy it.
Further progress can be made by performing a cumulant expansion of both sides of \eq{eq:final}, \begin{equation} \begin{split} \ln \Phi_{\alpha \beta}(t) &\equiv K^\text{q}_{\alpha \beta}(t) = \sum_{n=1}^\infty \frac{(-i)^n}{n!}\kappa_{\alpha \beta}^{\text{q}, (n)}(t), \\ \ln \Phi^\text{noise}_{\alpha \beta}(t) &\equiv K^\text{c}_{\alpha \beta}(t) = \sum_{n=1}^\infty \frac{(-i)^n}{n!}\kappa_{\alpha \beta}^{\text{c}, (n)}(t). \end{split} \end{equation} The cumulant expansion is the Taylor expansion of the logarithm of the decoherence function with respect to the system-bath coupling strength. This can be readily seen by parameterizing the system-bath interaction as $H_\text{SB} \rightarrow \lambda H_\text{SB}$. For the classical and quantum decoherence functions to be equivalent irrespective of the system-bath interaction strength, the cumulants of $\Phi_{\alpha \beta}(t)$ and $\Phi^\text{noise}_{\alpha \beta}(t)$ need to match order by order. This condition is, in fact, stricter than \eq{eq:final}. For the NIDF, the cumulant expansion can be obtained through the following recursive formula \cite{Smith1995} \begin{equation} \kappa^{\text{c}, (n)}_{\alpha \beta} = \mu^{\text{c}, (n)}_{\alpha\beta} - \sum_{m=1}^{n-1} \binom{n-1}{m-1} \kappa^{\text{c}, (m)}_{\alpha \beta} \mu^{\text{c}, (n-m)}_{\alpha \beta}, \label{eq:recursive} \end{equation} where \begin{equation} \mu^{\text{c}, (n)}_{\alpha\beta} = \idotsint_0^t \overline{\Delta_{\alpha \beta}(s_1) \cdots \Delta_{\alpha \beta}(s_n) }\, ds_1\cdots ds_n \label{eq:c_moments} \end{equation} are the moments of the stochastic variable $\Delta_{\alpha \beta}$ and $\binom{n}{m}$ denote the binomial coefficients.
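The moment-to-cumulant recursion above is easy to implement and sanity-check. The sketch below applies it to the raw moments of a zero-mean Gaussian variable (an illustrative choice), for which all cumulants beyond the second must vanish, the property that later makes Gaussian environments tractable:

```python
from math import comb

def cumulants_from_moments(mu):
    """Moment-to-cumulant recursion (cf. the recursive formula above):
    kappa_n = mu_n - sum_{m=1}^{n-1} C(n-1, m-1) * kappa_m * mu_{n-m},
    where mu[n] is the n-th raw moment and mu[0] = 1."""
    kappa = [0.0] * len(mu)
    for n in range(1, len(mu)):
        kappa[n] = mu[n] - sum(comb(n - 1, m - 1) * kappa[m] * mu[n - m]
                               for m in range(1, n))
    return kappa

# Sanity check on a zero-mean Gaussian with variance s2, whose raw
# moments are 0, s2, 0, 3 s2^2, 0, 15 s2^3: every cumulant beyond the
# second must vanish.
s2 = 1.7
mu = [1.0, 0.0, s2, 0.0, 3 * s2**2, 0.0, 15 * s2**3]
print(cumulants_from_moments(mu))
```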
One of the advantages of recasting the quantum decoherence function into a single exponential is that it becomes simpler to perform a cumulant expansion. A straightforward extension of the cumulant expansion for time-ordered exponentials by Kubo \cite{Kubo1962} leads to the conclusion that the quantum cumulants satisfy the same recursive formula \eq{eq:recursive}, that is, \begin{equation} \kappa^{\text{q}, (n)}_{\alpha \beta} = \mu^{\text{q}, (n)}_{\alpha\beta} - \sum_{m=1}^{n-1} \binom{n-1}{m-1} \kappa^{\text{q}, (m)}_{\alpha \beta} \mu^{\text{q}, (n-m)}_{\alpha \beta}, \label{eq:q_recursive} \end{equation} with the generalized quantum moments of the operator $\mc{B}_{\alpha \beta}$ defined as \begin{equation} \mu_{\alpha \beta}^{\text{q}, (n)} = \idotsint_\mc{C} \Braket{{\mathcal{T}}_\mc{C} \prod_{i=1}^n \mc{B}_{\alpha \beta}(\tau_i)} \,\prod_{i=1}^n d\tau_i. \label{eq:q_moments} \end{equation} With the cumulant expansion of both sides of \eq{eq:final}, the problem of whether classical noise can mimic quantum pure-dephasing dynamics is mapped to the much more manageable task of whether one can find a classical noise having correlation functions equivalent to the quantum time-correlation functions. The first-order cumulants of the quantum and noise-induced decoherence functions read \begin{equation} \kappa^{\text{q},(1)}_{\alpha\beta} = \int_\mc{C} d\tau \braket{\mc{B}_{\alpha \beta}(\tau)} = \int_0^t \braket{\tilde{B}_\alpha(s) - \tilde{B}_\beta(s)} \,ds , \end{equation} \begin{equation} \kappa^{\text{c},(1)}_{\alpha\beta} = \int_0^t \overline{ \Delta_{\alpha \beta}(s)} \,ds = \int_0^t \overline{ \eta_{\alpha}(s)-\eta_{\beta}(s) } \,ds . \end{equation} At the quantum level this cumulant is determined by the expectation value of the environment operators entering $H_\text{SB}$. At the noise level it is determined by the expectation value of the noise.
Since the expectation value of the environment operator is merely a real number, it is always possible to find noise with average $\overline{\eta_\alpha(t)} = \braket{\tilde{B}_\alpha(t)}$ such that $\kappa^{\text{q},(1)}_{\alpha\beta} = \kappa^{\text{c},(1)}_{\alpha\beta}$. A more stringent requirement comes from the second cumulant. As it is always possible to redefine the system Hamiltonian to make the expectation value of the environment operator vanish, we assume that the first cumulant vanishes in the following. From Eqs.~(\ref{eq:recursive})--(\ref{eq:q_moments}), it is straightforward to obtain the second cumulants for the QDF and NIDF, \begin{equation} \kappa^{\text{c}, (2)}_{\alpha \beta} = \iint_0^t dsds' \overline{\Delta_{\alpha \beta}(s)\Delta_{\alpha \beta}(s')}, \label{eq:second_c} \end{equation} and \begin{equation} \begin{split} \kappa^{\text{q}, (2)}_{\alpha\beta}(t) =& \iint_\mc{C}d\tau d\tau' \Braket{{\mathcal{T}}_\mc{C} \mc{B}_{\alpha \beta}(\tau) \mc{B}_{\alpha \beta}(\tau') } \\ = & 2\int_0^t ds\int_0^s ds' (D_{\alpha\alpha}(s,s') + D_{\beta \beta}(s',s)) \\ &-2\iint_0^t ds ds' D_{\beta \alpha}(s,s') \end{split} \label{eq:second_q} \end{equation} where $ D_{\alpha \beta}(s,s') = \Braket{ \tilde{B}_\alpha(s) \tilde{B}_\beta(s')}$ is the quantum time-correlation function of the environment. Because the classical noise is real, if the second cumulant of the QDF is complex, the classical noise cannot fully capture the effects of a quantum environment. Thus, a necessary condition to mimic the quantum decoherence with classical noise is that the cumulants are real. Higher-order cumulants can be important for anharmonic and many-body environments. Using \eq{eq:recursive}, it is now straightforward to obtain higher-order cumulants of the QDF.
For example, the third cumulant is given by \begin{equation} \kappa^{\text{q}, (3)}_{\alpha \beta} = \iiint_\mc{C} \braket{{\mathcal{T}}_\mc{C} \mc{B}_{\alpha \beta}(\tau_1) \mc{B}_{\alpha \beta}(\tau_2)\mc{B}_{\alpha \beta}(\tau_3)} \,d\tau_1d\tau_2 d\tau_3. \end{equation} If the higher-order quantum cumulants make significant contributions to the decoherence, the classical noise is required to have the corresponding higher-order correlations. This implies that, for such environments, the commonly used Gaussian noise model can be inadequate \cite{Yang2017, Kubo1963}. We expect such environments to arise in electronic decoherence in molecules, where the environment consists of molecular vibrations that can be far from harmonic, and in the central-spin model, where the environment consists of interacting spins. Surprisingly, the cumulants, often considered a convenient computational tool, carry direct physical meaning. To see this, we take a time-derivative of \eq{eq:solq} and use the definition of the cumulants to obtain \begin{equation} \frac{d}{dt} {\tilde{\rho}}^\text{S}_{\alpha\beta}(t) = \dot{K}^\text{q}_{\alpha\beta}(t) \tilde{\rho}^\text{S}_{\alpha\beta}(t). \label{eq:122} \end{equation} Equation \eqref{eq:122} is the equation of motion for the coherences in the interaction picture. Clearly, the time-derivatives of the cumulants are the generators of decoherence, and each cumulant corresponds to a particular order in the system-bath interaction.
Explicitly, expressing the coherence in the polar form ${\tilde{\rho}}^\text{S}_{\alpha\beta}(t) = A_{\alpha \beta}(t)e^{i\phi_{\alpha \beta}(t)}$, it follows from \eq{eq:122} that \begin{equation} \dot{A}_{\alpha\beta}(t) = \operatorname{Re} \dot{K}^\text{q}_{\alpha\beta}(t) {A_{\alpha\beta}(t)}, ~~~ \dot{\phi}_{\alpha \beta}(t) = \operatorname{Im} \dot{K}^\text{q}_{\alpha\beta}(t). \label{eq:124} \end{equation} Equation \eqref{eq:124} indicates that the real part of the time-derivative of the cumulants is responsible for decoherence, while the imaginary part accounts for the environment-induced energy shifts. \subsection{Spin-boson model} We now illustrate how the above criteria can be applied using a concrete example: the quintessential spin-boson problem. The Hamiltonian for the pure-dephasing spin-boson model is \begin{equation} H = - \frac{\omega_0}{2}\sigma_z + \sigma_z \sum_k g_k (a_k^\dag + a_k) + H_\text{B}, \label{eq:h_sb} \end{equation} where $\sigma_z$ is the Pauli $z$ matrix and $\omega_0$ the transition frequency of the two-level system. Here $H_\text{B} = \sum_k \omega_k a_k^\dag a_k$ describes a bosonic environment consisting of a distribution of harmonic oscillators of frequencies $\omega_k$, with $a_k, a^\dag_k$ being the annihilation and creation operators for the $k$-th mode, respectively. The coupling of the system with the environment leads to shifts in the system's energy levels, where $g_k$ is the coupling constant to the $k$-th harmonic mode. The environment is assumed to be initially in thermal equilibrium at inverse temperature $\beta = 1/(k_\text{B}T)$ with density matrix $\rho_\text{B} = e^{-\beta H_\text{B}}/Z $ where $Z = \text{Tr}_\text{B}[e^{-\beta H_\text{B}}]$ is the partition function.
For a time-independent Hamiltonian, this leads to a time-translation invariant time-correlation function \begin{equation} D_{\alpha\beta}(t,t') = D_{\alpha\beta}(t-t') .\end{equation} For a two-level system, only one decoherence function has to be considered, corresponding to $\alpha = 0, \beta = 1$. Since $\sigma_z = \ket{0}\bra{0} -\ket{1}\bra{1}$, one can identify $B_0 = - B_1 = \sum_k g_k (a_k + a^\dag_k) \equiv B$ and $D_{00} = D_{11} = -D_{01} \equiv D$. Using $\tilde{a}_k(t) = e^{-i\omega_k t}a_k$ and $\tilde{a}^\dag_k(t) = e^{i\omega_k t}a_k^\dag$, the time-correlation function $D(t)$ can be calculated as \begin{equation} \begin{split} D(t) &= \sum_k |g_k|^2 \left( \Braket{\tilde{a}_k(t) a_k^\dag} + \Braket{\tilde{a}_k^\dag(t)a_k} \right) \\ &= \sum_k |g_k|^2 [(1 + \bar{n}_k) e^{-i\omega_k t} + \bar{n}_k e^{i\omega_k t}] \end{split} \label{eq:tmp111} \end{equation} where $\bar{n}_k = \braket{a_k^\dag a_k}$ is the distribution function. At thermal equilibrium, $\bar{n}_k = 1/(e^{\beta \omega_k} -1)$, the Bose-Einstein distribution, and \eq{eq:tmp111} yields \begin{equation} \begin{split} D(t) &= \int_0^\infty \frac{d\omega}{\pi} J(\omega) [\coth(\beta \omega/2 ) \cos(\omega t) - i\sin(\omega t) ] \\ &= \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} J(\omega) [\coth(\beta \omega/2 ) \cos(\omega t) - i\sin(\omega t) ] \end{split} \label{eq:tcf} \end{equation} where the spectral density is defined as $ J(\omega) \equiv \pi \sum_k |g_k|^2 \delta(\omega - \omega_k)$ for $\omega > 0$ and extended to negative frequencies by $J(-\omega) = -J(\omega)$. This extension makes the integrand in \eq{eq:tcf} symmetric under $\omega \rightarrow -\omega$, hence the second equality. Since the environment is Gaussian, the QDF is determined by the first two cumulants \cite{Kubo1962, Wick1950}.
The first cumulant vanishes, and the second cumulant can be calculated by inserting \eq{eq:tcf} into \eq{eq:second_q}: \begin{equation} \kappa_{\alpha \beta}^{\text{q}, (2)}(t) = 8 \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} J(\omega) \coth(\beta \omega/2) \frac{1-\cos(\omega t)}{\omega^2}. \label{eq:sb_DF} \end{equation} Interestingly, the cumulant is real even though the time-correlation function is complex. This is due to the property of the quantum time-correlation function \begin{equation} D(-\tau) = D^*(\tau). \label{eq:sym} \end{equation} Because the cumulant is real, as described below, its effects on the dynamics can be mimicked by classical noise. Consider now the noise model intended to mimic the above decoherence dynamics, with Hamiltonian \begin{equation} H(t) = -\frac{\omega_0}{2}\sigma_z + \eta(t) \sigma_z, \end{equation} where the stochastic process $\eta(t)$ replaces the system-bath interaction in \eq{eq:h_sb}. Denoting the noise correlation function by $C(s,s') = \overline{\eta(s)\eta(s')}$, we show that if the noise satisfies the following three conditions: (i) $C(s,s') = C(s-s')$, (ii) $\overline{\eta(t)} = 0$, and (iii) $C(t) = S(t)$, where $S(t) \equiv \half \braket{\{B(t), B\}} = \half \braket{B(t)B + BB(t)}$, then the NIDF coincides with the QDF. The first condition implies that the noise is stationary, corresponding to the equilibrium state of the environment. The second condition reflects the vanishing of the first cumulant of the QDF. The third condition ensures that the second cumulants of the QDF and NIDF are equal.
To see this, note that $\Delta_{01}(t) = 2\eta(t)$; inserting the Fourier transform of the noise correlation function \begin{equation} C(t) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} C(\omega) e^{- i\omega t} \end{equation} into \eq{eq:second_c} yields \begin{equation} \kappa^{\text{c}, (2)}(t) = 8\int_{-\infty}^{\infty} \frac{d\omega}{2\pi} \frac{1-\cos(\omega t)}{\omega^2} C(\omega). \label{eq:117} \end{equation} Comparing \eq{eq:sb_DF} and \eq{eq:117}, it is clear that the condition $\kappa^{\text{q}, (2)}(t) = \kappa^{\text{c}, (2)}(t)$ is equivalent to \begin{equation} {C}(\omega) = J(\omega) \coth(\beta\omega/2). \label{eq:cond3} \end{equation} According to \eq{eq:tcf}, the right-hand side of \eq{eq:cond3} is the Fourier transform of the real part of the quantum time-correlation function. Using \eq{eq:sym}, it follows that $\operatorname{Re} D(t) = (1/2)(D(t) + D(-t)) = \half \braket{\{B(t), B\}} = S(t)$, which is precisely the third condition $C(t) = S(t)$. Equation \eqref{eq:cond3} suggests that for each spectral density there is a corresponding classical noise leading to the same pure-dephasing dynamics, provided that an adequate algorithm to generate the stochastic process is identified. Here we exemplify the analysis with the widely used Ohmic environment with a Lorentz-Drude cutoff. The spectral density for such environments is \begin{equation} J(\omega) = 2\lambda \frac{\omega_c \omega}{\omega^2 + \omega_c^2}, \end{equation} where $\omega_c$ is the cutoff frequency of the environment and $\lambda$ characterizes the system-bath interaction strength.
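Since the integrand in \eq{eq:sb_DF} is even in $\omega$, the second cumulant for this spectral density can be evaluated by folding the integral onto $\omega>0$ and applying elementary quadrature. The following is a minimal numerical sketch; the function name, frequency cutoff, and grid resolution are our own illustrative choices, not part of the paper.

```python
import numpy as np

def kappa2(t, lam=0.5, beta=1.0, omega_c=1.0, wmax=200.0, n=200_000):
    """Second cumulant kappa^{q,(2)}(t) of eq. (sb_DF):
    8 * int dw/(2 pi) J(w) coth(beta w / 2) (1 - cos(w t)) / w^2,
    folded onto w > 0 (the integrand is even in w)."""
    w = np.linspace(1e-6, wmax, n)
    J = 2.0 * lam * omega_c * w / (w**2 + omega_c**2)   # Ohmic-Drude density
    integrand = J / np.tanh(beta * w / 2.0) * (1.0 - np.cos(w * t)) / w**2
    return (8.0 / np.pi) * np.sum(integrand) * (w[1] - w[0])
```

The decoherence function then follows as $\Phi_{01}(t)=e^{-\kappa^{\text{q},(2)}(t)/2}$; the cumulant vanishes at $t=0$ and grows with $t$, so the coherence decays monotonically for this density.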
In the high-temperature limit $\beta \omega_c \ll 1$, $\coth(\beta \omega/2) \approx 2(\beta \omega)^{-1}$ and \begin{equation} J(\omega) \coth(\beta \omega/2) \approx 4\lambda k_\text{B} T \frac{\omega_c}{\omega^2 + \omega_c^2}. \label{eq:sd} \end{equation} Now let $\eta(t)$ be a colored Gaussian noise with correlation function $C(\tau) = 2 \lambda k_\text{B} T e^{-\omega_c |\tau|}$. This choice ensures that \eq{eq:cond3} is satisfied in the high-temperature limit, as can be seen by taking the Fourier transform of the noise correlation function and comparing with \eq{eq:sd}. Therefore, the quantum pure-dephasing effects of a high-temperature Ohmic bath can be fully captured by exponentially correlated colored Gaussian noise. \begin{figure}[htbp] \includegraphics[width=0.5\textwidth]{coherence} \caption{(a) Correlation function of the generated noise (red) in comparison to the target (black). (b) Quantum and noise-induced decoherence dynamics in a spin-boson model starting from a superposition of the ground and excited states with equal coefficients. Model parameters are: $\lambda/\omega_0 = 0.5, \beta \omega_0 = 1, \omega_c/\omega_0 = 1$. The exact result is obtained through $\Phi_{01}(t) = e^{-\half \kappa^{\text{q}, (2)}(t)}$ with the second cumulant computed using \eq{eq:sb_DF}. The stochastic simulations are obtained with 2000 realizations of the colored noise and with a time step $\omega_0 dt = 0.002$. No revivals of the coherence are observed in this model. } \label{fig:spin_boson} \end{figure} This conclusion is demonstrated in \fig{fig:spin_boson}, which contrasts the exact quantum results with stochastic simulations. The exact results are obtained by first inserting \eq{eq:sd} into \eq{eq:sb_DF} to obtain the second-order cumulant and thus the decoherence function $\Phi_{01}(t) = e^{-\half \kappa^{\text{q}, (2)}(t)}$.
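The stochastic side of this comparison is straightforward to reproduce: generate exponentially correlated Gaussian noise with the exact Ornstein-Uhlenbeck update (equivalent in distribution to the algorithm of Ref.~\cite{Fox1988}), accumulate the random phase $-2\int_0^t \eta\,ds$ acquired by the coherence, and average over trajectories. For this noise the Gaussian average can be done analytically, giving $\kappa^{(2)}(t) = (16\lambda k_\text{B}T/\omega_c^2)(\omega_c t - 1 + e^{-\omega_c t})$, which follows from integrating $C(\tau)$ twice over $[0,t]^2$. A sketch follows; the function name and default parameter values are our own choices for illustration.

```python
import numpy as np

def dephasing(tmax=2.0, dt=0.002, ntraj=2000, lam=0.5, kT=1.0, omega_c=1.0, seed=0):
    """Average |<exp(i phi)>| over noise realizations, with d(phi)/dt = -2 eta(t)
    and eta an Ornstein-Uhlenbeck process, C(tau) = 2 lam kT exp(-omega_c |tau|)."""
    rng = np.random.default_rng(seed)
    nsteps = int(round(tmax / dt))
    var = 2.0 * lam * kT                       # C(0)
    decay = np.exp(-omega_c * dt)
    kick = np.sqrt(var * (1.0 - decay**2))
    eta = np.sqrt(var) * rng.standard_normal(ntraj)   # stationary initial draw
    phi = np.zeros((nsteps + 1, ntraj))
    for k in range(nsteps):
        phi[k + 1] = phi[k] - 2.0 * eta * dt          # accumulate random phase
        eta = decay * eta + kick * rng.standard_normal(ntraj)  # exact OU update
    t = np.arange(nsteps + 1) * dt
    coh = np.abs(np.exp(1j * phi).mean(axis=1))
    exact = np.exp(-8.0 * lam * kT / omega_c**2
                   * (omega_c * t - 1.0 + np.exp(-omega_c * t)))
    return t, coh, exact
```

The Monte Carlo curve tracks the closed-form Gaussian average to within statistical error, mirroring the agreement shown in \fig{fig:spin_boson}.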
This decoherence function is exact (compare with Ref.~\onlinecite{Breuer2002b}), as the contributions of higher-order cumulants vanish in this case. The stochastic simulation is averaged over 2000 realizations of the exponentially correlated colored Gaussian noise generated using the algorithm in \cite{Fox1988}. The correlation function of the generated noise is shown in Fig.~\ref{fig:spin_boson}a. For each realization, the stochastic time-dependent Schr\"{o}dinger equation $i \frac{d}{dt} \ket{\psi(t)} = H(t)\ket{\psi(t)}$ with the initial condition $\ket{\psi(0)} = \frac{1}{\sqrt{2}} (\ket{0} + \ket{1})$ is integrated. As shown, the decoherence dynamics obtained with stochastic noise is in quantitative agreement with the exact quantum decoherence dynamics, consistent with our conclusion above. In the low-temperature regime and for other types of spectral densities, if $S(t)$ can be well described by a sum of exponential functions, \begin{equation} S(t) = \sum_n |c_n|^2 e^{- |t|/\tau_n}, \end{equation} one can choose a sum of exponentially correlated Gaussian noises \begin{equation} \eta(t) = \sum_n c_n \eta_n(t), \end{equation} where $\{\eta_n(t)\}$ are Gaussian stochastic processes with statistical properties \begin{equation} \overline{\eta_n(t)\eta_m^*(t')} = \delta_{nm} e^{- |t-t'|/\tau_n}. \end{equation} In this case, the noise correlation function is \begin{equation} C(t-t') = \sum_{n, m} c_n c_m^* \overline{\eta_n(t)\eta_m^*(t')} = \sum_n |c_n|^2 e^{- |t-t'|/\tau_n} = S(t-t'). \end{equation} Thus, the quantum decoherence dynamics can still be captured by classical noise. \section{Quantum dissipation} \label{sec:diss} Another major source of decoherence is quantum dissipation due to transitions between system eigenstates induced by the environment. The role of the dissipative environment is to drive the system from an initially out-of-equilibrium state to thermal equilibrium.
The question we seek to address here is when quantum decoherence induced by dissipation can be understood in terms of classical noise. This problem has been studied previously by Tanimura and Kubo \cite{Tanimura1989} with the hierarchical equations of motion. The conclusion of that formal study is that classical noise can only be made equivalent to a full quantum treatment at infinite temperature, i.e., as $\beta \rightarrow 0$. Below we provide a simpler analysis of this problem for Markovian environments and show that the physical reason behind this conclusion is that classical noise cannot describe the decoherence effects due to spontaneous emission induced by a dissipative environment. Here spontaneous emission is not restricted to electromagnetic environments but refers to a damping effect induced by the spontaneous fluctuations of any dissipative environment. The simplest model that allows isolating this basic physics is a two-level system $\ket{g}, \ket{e}$ interacting with a thermal environment. A standard full quantum treatment of this model within the dipole approximation leads to the equation of motion for the reduced density matrix \cite{Breuer2002a} \begin{equation} \begin{split} \dt {\rho}_{\text{S}}(t) =& -i [H_\text{S}, \rho_\text{S}] + \Gamma_e \left( \sigma_- \rho_\text{S}(t)\sigma_+ - \half \{ \sigma_+\sigma_- , \rho_\text{S}(t)\}\right)\\ & + \Gamma_a \left( \sigma_+ \rho_\text{S}(t)\sigma_- - \half \{ \sigma_-\sigma_+ , \rho_\text{S}(t)\}\right) \end{split} \label{eq:bloch} \end{equation} where $H_\text{S}= -\omega_0 \sigma_z/2$ is the system Hamiltonian, $\sigma_\pm$ are the raising/lowering operators, and $[A,B] = AB - BA$ and $\{A, B\} = AB + BA$ denote the commutator and anticommutator, respectively. The first term on the right-hand side of \eq{eq:bloch} accounts for the unitary dynamics generated by $H_\text{S}$, which does not contribute to decoherence.
The meaning of the remaining dissipative terms is best revealed by decomposing \eq{eq:bloch} in terms of the matrix elements \begin{equation} \dt \rho_{gg}^\text{S}(t) = \Gamma_e \rho_{ee}^\text{S}(t) - \Gamma_a \rho_{gg}^\text{S}(t), \end{equation} \begin{equation} \dt \rho_{eg}^\text{S}(t) = -i \omega_0 \rho_{eg}^\text{S}(t) - \Gamma_\text{d} \rho_{eg}^\text{S}(t), \end{equation} where $\Gamma_\text{d} = (\Gamma_e + \Gamma_a)/2$. Clearly, the second term in \eq{eq:bloch} accounts for the emission of energy to the environment, and the third for absorption. Here the emission rate $\Gamma_{e}$ is the sum of the stimulated emission rate (which equals the absorption rate $\Gamma_{a}$) and the spontaneous emission rate $\Gamma_0$, i.e., $\Gamma_e = \Gamma_a + \Gamma_0$. The off-diagonal matrix element (the coherence) in the eigenbasis of $H_\text{S}$ decays exponentially with decoherence rate $\Gamma_\text{d}$. Note that, as a consequence of the Markovian approximation involved in the derivation of \eq{eq:bloch}, the model does not capture the universal initial Gaussian regime for uncorrelated initial states, which gives rise to quantum Zeno effects \cite{Fischer2001, Facchi2004, Gu2018b}. Now consider the classical noise picture where the system is subject to a random term that induces transitions between system eigenstates, i.e., \begin{equation} H = H_\text{S} + \eta(t)\sigma_- + \eta^*(t) \sigma_+ .\end{equation} Here the stochastic variable $\eta(t)$ is allowed to be complex, while the dynamics for each noise realization remains unitary. For Markovian environments without memory effects, it is appropriate to choose $\overline{\eta(t)\eta^*(t')} = \gamma \delta(t-t')$.
In the interaction picture of $H_\text{S}$, the Liouville--von Neumann equation reads \begin{equation} i \dt \tilde{\rho}_\text{S}(t) = [\eta(t)\tilde{\sigma}_-(t) + \eta^*(t) \tilde{\sigma}_+(t) , \tilde{\rho}_\text{S}(t)]. \label{eq:lvn} \end{equation} A quantum master equation can be obtained as follows. Integrating \eq{eq:lvn} yields \begin{equation} \tilde{\rho}_\text{S}(t) = \rho_{\text{S}}(0) - i \int_0^t dt' [\eta(t')\tilde{\sigma}_-(t') + \eta^*(t') \tilde{\sigma}_+(t') , \tilde{\rho}_\text{S}(t')]. \label{eq:120} \end{equation} Inserting \eq{eq:120} back into the right-hand side of \eq{eq:lvn} and taking the statistical average over the stochastic process yields \begin{equation} \frac{d}{dt} \tilde{\rho}_\text{S}(t) = -\gamma [ \tilde{\sigma}_-(t), [\tilde{\sigma}_+(t), \tilde{\rho}_\text{S}(t)]] - \gamma [ \tilde{\sigma}_+(t) , [\tilde{\sigma}_-(t) , \tilde{\rho}_\text{S}(t)]]. \label{eq:123} \end{equation} Transforming into the Schr\"{o}dinger picture gives the quantum master equation \begin{equation} \begin{split} \dot{\rho}_\text{S}(t) =& -i [H_\text{S}, \rho_\text{S}(t)] + \gamma \left( \sigma_- \rho_\text{S}(t)\sigma_+ - \half \{ \sigma_+\sigma_- , \rho_\text{S}(t)\}\right) \\ &+ \gamma \left( \sigma_+ \rho_\text{S}(t)\sigma_- - \half \{ \sigma_-\sigma_+ , \rho_\text{S}(t) \} \right). \end{split} \label{eq:111} \end{equation} Comparing \eq{eq:111} and \eq{eq:bloch}, it becomes clear that the noise can mimic many of the effects of quantum relaxation provided that one identifies $\gamma$ with $\Gamma_a$. What is missing in this picture is the contribution of spontaneous emission: from \eq{eq:111} one obtains a decoherence rate $\gamma_\text{d} = \gamma = \Gamma_a$, which does not contain the spontaneous emission contribution. The absence of spontaneous emission has a direct consequence for relaxation.
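The consequence for the populations can be read off from the rate equation implied by \eq{eq:bloch}: $\dot{p}_e = -\Gamma_e p_e + \Gamma_a(1-p_e)$ relaxes to $p_e(\infty) = \Gamma_a/(\Gamma_a+\Gamma_e)$, which is thermal only because $\Gamma_e > \Gamma_a$; equating the two rates, as the classical-noise master equation \eq{eq:111} does, forces $p_e(\infty) = 1/2$. A closed-form check (the helper function is ours, not from the paper):

```python
import math

def p_excited(t, gamma_e, gamma_a, pe0=1.0):
    """Closed-form solution of dp_e/dt = -Gamma_e p_e + Gamma_a (1 - p_e):
    exponential relaxation to p_e(inf) = Gamma_a / (Gamma_a + Gamma_e)."""
    g = gamma_e + gamma_a
    pe_inf = gamma_a / g
    return pe_inf + (pe0 - pe_inf) * math.exp(-g * t)
```

With $\Gamma_e = 2\Gamma_a$ the excited population relaxes to $1/3$, while $\Gamma_e = \Gamma_a$ drives it to the infinite-temperature value $1/2$.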
Since the absorption and emission rates are then equal, the stationary state at long times is the non-physical infinite-temperature state. This problem can be fixed by going beyond the classical noise picture, for example by promoting the classical noise to quantum noise \cite{Gardiner2004} or by relaxing the constraint of unitary dynamics for each noise realization, as in the stochastic Liouville equation \cite{Hsieh2018}. \section{Conclusions} \label{sec:conc} To summarize, we have contrasted quantum decoherence, which arises as a single quantum system becomes entangled with environmental degrees of freedom, with the apparent decoherence that results from averaging over an ensemble of unitary evolutions generated by a Hamiltonian subject to classical noise. For dissipative environments, we showed that classical noise cannot describe the decoherence induced by spontaneous emission and, thus, that the classical noise picture can only become quantitative in the infinite-temperature limit. For pure-dephasing dynamics, we identified general conditions that determine whether the decoherence dynamics due to a quantum environment can be quantitatively mimicked by classical noise. Specifically, we showed that for the two dynamics to agree the cumulants of the quantum and noise-induced decoherence functions must coincide. These requirements impose restrictions on the statistical properties of the noise that are determined by the quantum many-point time-correlation functions of the environmental operators that enter into the system-bath interaction. These conditions are valid for {any} pure-dephasing problem, including anharmonic environments and nonlinear system-bath couplings. In particular, through the spin-boson model, we demonstrated numerically and analytically that the decoherence effects due to a harmonic Ohmic environment (in the high-temperature pure-dephasing limit) can be mimicked by exponentially correlated colored Gaussian noise.
This observation is consistent with a recent study~\cite{Rahman2019} of the quantum transport properties of a molecular junction subject to vibrational dephasing, which finds agreement between a fully quantum model (harmonic, Ohmic, pure-dephasing environment in the high-temperature limit) and a model in which the thermal environment manifests itself in (exponentially correlated Gaussian) fluctuating site energies. A challenge in employing classical noise models for environments with more complicated spectral densities is to generate noise with the correct statistical properties. The classical noise model has also been useful in quantum information processing \cite{Rossi2014}, particularly for the design of dynamical decoupling schemes to preserve coherence \cite{Yang2017}. In particular, in the context of optimal control computations, an effective stochastic model that captures the effects of a quantum environment is highly desirable \cite{Witzel2014}, as such computations are challenging for a full quantum model. Our results offer well-defined criteria to develop such models and to understand their limitations. \begin{acknowledgments} This material is based upon work supported by the National Science Foundation under CHE-1553939. \end{acknowledgments} \end{document}
\begin{document} \hypersetup{linkcolor=cobalt} \title{Correspondences in complex dynamics} \begin{quote} \center{\it In memory of Welington de Melo, whose intellectual honesty inspired us all.} \end{quote} \thispagestyle{empty} \begin{abstract} This paper surveys some recent results concerning the dynamics of two families of holomorphic correspondences, namely ${\mathcal F}_a:z \to w$ defined by the relation $$\left( \frac{aw-1}{w-1} \right)^2 + \left( \frac{aw-1}{w-1} \right) \left( \frac{az +1}{z+1} \right) + \left( \frac{az+1}{z+1} \right)^2 =3,$$ \noindent and $$\mathbf{f}_c(z)=z^{\beta} +c, \mbox{ where } 1<\beta=p/q \in \mathbb{Q},$$ \noindent which is the correspondence $\mathbf{f}_c:z \to w$ defined by the relation $$(w-c)^q=z^p.$$ Both can be regarded as generalizations of the family of quadratic maps $f_c(z)=z^2+c$. We describe dynamical properties of the family $\mathcal{F}_a$ which parallel properties enjoyed by quadratic polynomials, in particular a B\"ottcher map, periodic geodesics and a Yoccoz inequality, and we give a detailed account of the very recent theory of holomorphic motions for hyperbolic multifunctions in the family ${\bf f}_c$. \end{abstract} \tableofcontents \thispagestyle{empty} \setcounter{page}{1} \section{Introduction} A holomorphic correspondence on the Riemann sphere is a relation $z\mapsto w$ given implicitly by a polynomial equation $P(z,w)=0$. Any rational map is an example of a holomorphic correspondence: if $f(z)=p(z)/q(z)$, then $w=f(z)$ iff $P(z,w)=0$, where $P(z,w)= wq(z)-p(z)$. In particular, the family of quadratic polynomials $f_c(z)= z^2+c$ (parametrized by $c\in {\mathbb C}$) can be regarded as an analytic family of holomorphic correspondences. The grand orbits of any finitely generated Kleinian group can also be regarded as those of a holomorphic correspondence.
This paper is concerned with two families of holomorphic correspondences which generalize quadratic polynomials in different ways. The first is the family ${\mathcal F}_a:z \to w$ defined by \begin{equation} \label{plk} \left( \frac{aw-1}{w-1} \right)^2 + \left( \frac{aw-1}{w-1} \right) \left( \frac{az +1}{z+1} \right) + \left( \frac{az+1}{z+1} \right)^2 =3, \end{equation} \noindent where $a\in \mathbb{C}$ and $a\neq 1$, introduced in the early nineties by Bullett and Penrose \cite{Bullett1994}. They proved: \begin{thm}\label{quadmat} For every $a$ in the real interval $[4,7]$, the correspondence $\mathcal{F}_a$ is a mating between some quadratic map $f_c(z)=z^2+ c$ and the modular group $\Gamma=\operatorname{PSL}(2,\mathbb{Z})$. \end{thm} They also conjectured that the connectedness locus for this family is homeomorphic to the Mandelbrot set.\\ The second family is \begin{equation} \label{fxs} \mathbf{f}_c(z) = z^{\beta} +c, \ \ c\in{\mathbb C}, \end{equation} \noindent where $\beta>1$ is a rational number and $z^{\beta} = \exp \frac{1}{q}(\log z^p)$. If $\beta=p/q$ in lowest terms, then each member of the family \eqref{fxs} of multifunctions is a holomorphic correspondence, defined by the relation $(w-c)^q=z^p$. Hence $\mathbf{f}_c$ maps every $z\neq 0$ to a set consisting of $q$ points. If $p$ and $q$ are not relatively prime, we shall use the notation $z^{p/q}+c$ to express the holomorphic correspondence $(w-c)^q=z^p$; thus $z^2 +c$ and $z^{4/2} +c$ denote different correspondences. \\ In this paper we describe the dynamics of holomorphic correspondences from various perspectives, exploring the concepts of hyperbolicity and holomorphic motions for \eqref{fxs} and describing results concerning a B\"ottcher map, periodic geodesics, and a Yoccoz inequality for the family of matings \eqref{plk}.
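To make the multivaluedness of \eqref{fxs} concrete, the $q$ images of a point $z \neq 0$ under $(w-c)^q = z^p$ can be written explicitly as $w_k = c + |z|^{p/q} e^{i(p\arg z + 2\pi k)/q}$ for $k = 0, \dots, q-1$. A minimal computational sketch (the function name is ours):

```python
import cmath, math

def images(z, c, p, q):
    """All q solutions w of (w - c)^q = z^p, i.e. the image of z under z^{p/q} + c."""
    if z == 0:
        return [c]
    r = abs(z) ** (p / q)          # modulus |z|^{p/q}
    base = p * cmath.phase(z)      # argument of z^p
    return [c + r * cmath.exp(1j * (base + 2.0 * math.pi * k) / q) for k in range(q)]
```

In particular, `images(z, c, 4, 2)` returns two points while the map $z \mapsto z^2+c$ is single-valued, which is exactly the distinction drawn above between $z^{4/2}+c$ and $z^2+c$.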
As we shall see, the techniques involved in the two studies are independent, but as we have already noted, both families can be viewed as generalizations of the quadratic family, and our techniques for studying them are motivated by the notions of hyperbolicity, external rays, Yoccoz inequalities and local connectivity, which are inextricably related to one another in the study of quadratic polynomials $f_c(z)=z^2+c$. For this reason, it will be convenient to start by recalling some well known facts, techniques and open questions concerning this celebrated family of maps. Excellent sources for details are the books of Milnor \cite{Milnor} and de Faria and de Melo \cite{FM}. An overview of a century of complex dynamics is presented in the article by Mary Rees \cite{Rees}. \subsection{Dynamics of quadratic maps}\label{bts} Consider the action of $f_c(z)=z^2+c$ on the Riemann sphere $\widehat{\mathbb{C}}$. For any polynomial of degree $d \geq 2$ acting on $\widehat{\mathbb{C}}$, the point $z=\infty$ is a superattracting fixed point. Let $\mathcal{A}_c$ denote its basin of attraction. The filled Julia set $K_c= K_{f_c}$ is the set of points with bounded orbit, that is $K_c = \widehat{\mathbb{C}} \setminus \mathcal{A}_c$. The Julia set $\mathcal{J}_c=\mathcal{J}_{f_c}$ is the common boundary of these regions: $\mathcal{J}_c = \partial K_c = \partial \mathcal{A}_c$. The \emph{Mandelbrot set} $M$ is the connectedness locus of the family $f_c(z) = z^2 +c$; in other words, the set of all parameters $c \in \mathbb{C}$ such that $\mathcal{J}_c$ is connected. On the basin of attraction $\mathcal{A}_c$, the quadratic polynomial $f_c$ is conformally conjugate to the map $f_0(z)=z^2$ by the so-called \textit{B\"ottcher map} $\varphi_c$ (tangent to the identity at infinity). 
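The sets $K_c$ and $M$ just defined are precisely what the familiar escape-time pictures compute: $c \in M$ if and only if the critical orbit of $f_c$ stays bounded, and it suffices to test whether the orbit leaves the disk $|z| \leq 2$. A minimal numerical sketch (the iteration cap is an arbitrary choice of ours, so points very near $\partial M$ may be misclassified):

```python
def in_mandelbrot(c, max_iter=1000):
    """Escape-time test: c lies in M iff the critical orbit 0 -> c -> c^2+c -> ...
    stays in the disk |z| <= 2 (once |z| > 2 the orbit escapes to infinity)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True
```

The same loop run on a grid of $z$-values with $c$ fixed draws the filled Julia set $K_c$ instead.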
When $\mathcal{J}_c$ (or equivalently $K_c$) is connected, the B\"ottcher map extends to a conformal conjugacy $$\varphi_c: \mathbb{C} \setminus K_c \to \mathbb{C}\setminus \overline{\mathbb{D}}_{1}.$$ (An analogue of this map for the family $\mathcal{F}_a$ will appear in Section \ref{lmv}.) The \emph{external ray} $R_{\theta}^c \subset \mathbb{C} \setminus K_c$ with argument ${\theta} \in \mathbb{R}/\mathbb{Z}$ is the preimage under the B\"ottcher map $\varphi_c$ of the half-line $\{t e^{2\pi i \theta} : t \in (1, \infty)\} \subset \mathbb{C}\setminus \overline{\mathbb{D}}_{1}$. When $$\lim_{t\to 1^+}\varphi_c^{-1} (t e^{2\pi i \theta})=z,$$ we say that $R_{\theta}^c$ lands at $z$. We know that \textit{rational rays land} \cite{DH84, Milnor}, and that \textit{repelling and parabolic periodic points are landing points} of at least one and at most finitely many rays \cite{Milnor}. By Carath\'eodory's theorem, if $\mathcal{J}_c$ is locally connected, then every external ray lands. We remark that the B\"ottcher map and external rays can also be defined for degree $d$ polynomials, and in this case as well rational rays land and repelling and parabolic periodic points are landing points \cite{Milnor}. (\textit{Hyperbolic geodesics} play an analogous role for the family $\mathcal{F}_a$ and enjoy similar properties to external rays; see Section \ref{lmv}.)\\ Using the B\"ottcher map, Douady and Hubbard constructed a conformal homeomorphism between the complement of the Mandelbrot set and the complement of the closed unit disk, $$\Phi: \mathbb{C} \setminus M \to \mathbb{C}\setminus \overline{\mathbb{D}}_{1}, \qquad c \mapsto \varphi_c(c),$$ proving that the Mandelbrot set is compact and connected \cite{DH84}. This isomorphism also allows the definition of \emph{parameter space external rays}: the parameter ray of argument $\theta$ is $\mathcal{R}_{\theta}=\Phi^{-1}(R_{\theta}^0)$. If $M$ is locally connected, then every parameter ray lands.
Conjecturally, the Mandelbrot set is locally connected (MLC for short). This topological conjecture is crucial in one-dimensional complex dynamics, since it has been proved \cite{DH85} to imply \textit{density of hyperbolicity} for the quadratic family. A rational map is called \textit{hyperbolic} when all its critical points are attracted to attracting cycles. Hyperbolic maps are among the best understood rational maps. Indeed, if the quadratic polynomial $f_c$ is hyperbolic then (i) every orbit in the interior of the filled Julia set $K_c$ (if non-empty) converges to the finite attracting cycle (which is unique since $f_c$ is quadratic); (ii) every orbit outside $K_c$ converges to $\infty$; and (iii) $f_c$ is expanding and topologically mixing on the Julia set $\mathcal{J}_c =\partial K_c$. A major conjecture in holomorphic dynamics is: \begin{conj}[Density of hyperbolicity]\label{denhyp} The set of hyperbolic rational maps of degree $d$ is open and dense in the space $\operatorname{Rat}_d$ of rational maps of that degree. \end{conj} A version of this conjecture dates back to Fatou, and for this reason Conjecture \ref{denhyp} is often known as the \textit{Fatou conjecture}. Note that the open question is density, since openness of the set of hyperbolic maps is known.\\ Strongly related to hyperbolicity is the concept of structural stability. A map $f_a$ is \emph{structurally stable} if $f_c$ is topologically conjugate to $f_a$ for every $c$ in an open set containing $a$. For rational maps on the Riemann sphere one usually considers \textit{$J$-stability}, which roughly speaking means stability on a neighborhood of the Julia set \cite{Rees}. Ma\~n\'e, Sad and Sullivan \cite{Mane1983} have shown that the set of $J$-stable rational maps of degree $d$ is open and dense in $\operatorname{Rat}_d$.
Since in any family of holomorphic maps the set of hyperbolic parameters forms an open and dense subset of the $J$-stable parameters, Conjecture \ref{denhyp} is equivalent to the following (see \cite{McMullen94}): \begin{conj} A $J$-stable rational map of degree $d$ is hyperbolic. \end{conj} For quadratic polynomials, Conjecture \ref{denhyp} claims that the set of $c$ such that $f_c(z)=z^2+c$ is hyperbolic is an open and dense subset of the complex plane. On the other hand, density of $J$-stability implies that each of the infinitely many components $U$ of $\mathbb{C} \setminus M$ is the parameterization domain of a holomorphic motion $h_c: \mathcal{J}_a \to \mathcal{J}_c,$ $c\in U$ (holomorphic motions are defined in Section \ref{dfc}), with base point $a\in U$ arbitrarily fixed, and every $h_c$ being a \emph{quasi-conformal conjugacy}. If $U$ is a component of $\mathbb{C}\setminus \partial M$ having one point $a$ for which $f_a(z)= z^2+ a$ is hyperbolic, then $f_c$ is hyperbolic for every $c$ in $U$, and thus in the quadratic setting density of hyperbolicity is equivalent to conjecturing that \textit{every component of $\mathbb{C}\setminus \partial M$ is hyperbolic}. Note that, since $\mathcal{J}_0=\mathbb{S}^1,$ it follows that $\mathcal{J}_c$ is a \emph{quasicircle} (image of $\mathbb{S}^1$ under a quasiconformal homeomorphism) for every $c$ close to zero (more precisely, for every $c$ in the same hyperbolic component as $c=0$). (A generalization of this fact for $z^{\beta} +c$ is given by Theorem \ref{bfl}). In the late eighties J.-C. Yoccoz made a major contribution towards the MLC conjecture, proving that MLC holds at every point $c \in \partial M$ such that $f_c$ is not infinitely renormalizable. A key ingredient is what is now known as the \emph{Yoccoz inequality}. It can be shown that if $z$ is a repelling fixed point for a degree $d$ polynomial $P$ with connected filled Julia set, then just finitely many external rays $\gamma_i$, say $q'$, land at $z$. 
Each $\gamma_i$ is periodic with the same period, and there exists $p'<q'$ such that $P\circ \gamma_i = \gamma_{i+p'}$ for every $i$ (indices taken mod $q'$). The number of cycles of rays landing at $z$ is $m=\operatorname{gcd}(p',q')$, and $\theta= p/q = (p'/m)/(q'/m)$ is called the \emph{combinatorial rotation number} of $P$ at $z$. \begin{thm}[Yoccoz-Pommerenke-Levin inequality \cite{Hubbard91, Pommerenke86, Levin91}] \label{yocin} If $z$ is a repelling fixed point of a degree $d$ polynomial $P$ with connected filled Julia set, and $\theta=p/q$ is its combinatorial rotation number in lowest terms, then \begin{equation} \frac{\operatorname{Re} \tau}{ |\tau - 2\pi i \theta |^2 }\geq \frac{mq}{2\log d}, \end{equation} for some branch $\tau$ of $\log P'(z)$. \end{thm} (A Yoccoz inequality for the family $\mathcal{F}_a$ is developed by the first two authors in \cite{BL17}; see Theorem \ref{xxx}. While the original Yoccoz inequality is proven for degree $d$ polynomials, and so applies to iterates of degree $2$ polynomials and hence to periodic orbits, an inequality of the form presented in Theorem \ref{xxx} has so far only been proved for repelling fixed points.)\\ In 1994, C. McMullen made a deep contribution toward MLC by proving that every component of the interior of the Mandelbrot set meeting the real axis is hyperbolic \cite{McMullen94}. In the late nineties, M. Lyubich \cite{Lyubich97}, and independently Graczyk and Swiatek \cite{Swiatek98}, proved density of hyperbolicity for the real quadratic family. About ten years later Kozlovski, Shen and van Strien proved it for real polynomials of higher degree, by showing that any real polynomial can be approximated by hyperbolic real polynomials of the same degree \cite{SSK07}. However, density of hyperbolicity for degree $d$ rational maps on $\widehat{\mathbb C}$ is still open.
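Completing the square in Theorem \ref{yocin} shows that the inequality confines $\tau$ to the closed disk of radius $r = \log d/(mq)$ centred at $2\pi i\theta + r$, tangent to the imaginary axis at $2\pi i\theta$. A quick numerical check of this reformulation (helper names and the sample parameters are ours):

```python
import cmath, math

def yoccoz_disk(theta, d, m, q):
    """Disk {tau : Re(tau) / |tau - 2 pi i theta|^2 >= mq / (2 log d)}:
    center 2 pi i theta + r and radius r = log(d) / (m q)."""
    r = math.log(d) / (m * q)
    return complex(r, 2.0 * math.pi * theta), r

# Sample check with theta = 1/3, d = 2, m = 1, q = 3: a boundary point of the
# disk (chosen away from the tangency point 2 pi i theta) attains equality.
center, r = yoccoz_disk(theta=1.0 / 3.0, d=2, m=1, q=3)
K = 1 * 3 / (2.0 * math.log(2))              # the constant mq / (2 log d)
tau = center + r * cmath.exp(0.7j)           # a point on the boundary circle
lhs = tau.real / abs(tau - 2j * math.pi / 3.0) ** 2
```

This is the round disk drawn (for each $p/q$) in Figure \ref{Ydiscs}.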
\subsection{Dynamics of holomorphic correspondences} We now outline our main results described in this paper, concerning the families \eqref{plk} and \eqref{fxs}: these involve generalizations of the concepts presented in Section \ref{bts}. {\sl Readers who want to see the proofs -- as Welington always did --} can find those concerning family \eqref{plk} in \cite{BL16} and \cite{BL17}, and those concerning family \eqref{fxs} in \cite{SS17, Carlos, Rigidity, Siq17}. \paragraph{Part I.} We start with an abstract definition of matings between quadratic maps and $\operatorname{PSL}(2,\mathbb{Z})$ (Section \ref{wxz}), with the help of Minkowski's question mark function. This description dates back to 1994, when the first author together with C. Penrose \cite{Bullett1994} started investigating the family $\mathcal{F}_a$. The formal definitions of limit sets and the connectedness locus $\mathcal{C}_{\Gamma}$ for this family are given in Section \ref{gra}. There we also define a \textit{mating} between the modular group and a map in the parabolic quadratic family $$\operatorname{Per}_1(1) = \{ P_A(z)= z+1/z+A \, | \, A \in \mathbb{C} \}/(A\sim -A),$$ and present a result which is a significant advance on Theorem \ref{quadmat}, namely that for any $a\in \mathcal{C}_{\Gamma}$ the correspondence $\mathcal{F}_a$ is a mating between $\operatorname{PSL}(2,\mathbb{Z})$ and a parabolic map in $\operatorname{Per}_1(1)$ (see Theorem \ref{Mat}, and Figures \ref{ppp} and \ref{bbb}). We open Section \ref{lmv} by recalling the existence of a B\"ottcher map for the family $\mathcal{F}_a$ when $a \in \mathcal{C}_{\Gamma}$ (see Theorem \ref{efv}), and we then use it to construct periodic geodesics on the regular domain of $\mathcal{F}_a$ (an analogue of periodic external rays). These land (see Theorem \ref{land}), analogously to the rational external rays for the quadratic family of polynomials.
By a quite technical and deep argument \cite{BL17} it can be shown that when $a$ is in ${\mathcal C}_\Gamma$ every repelling fixed point $z$ of ${\mathcal F}_a$ is the landing point of exactly one periodic cycle of geodesics. It follows, as for polynomials, that $z$ has a well-defined combinatorial rotation number $\theta=p/q$. A geodesic in the cycle is stabilized by a Sturmian word $W_{p/q}$, in $\alpha$ and $\beta$, of rotation number $p/q$ (Sturmian words are defined in Section \ref{fp}: $W_{p/q}$ is unique up to cyclic permutation for any given $p/q$). \begin{figure} \caption{Disks in the $\tau$-plane permitted by the Yoccoz inequality: on the left for the matings $\mathcal{F}_a$.} \label{Ydiscs} \end{figure} \begin{thm}[Yoccoz inequality]\label{xxx} Let $a\in \mathcal{C}_{\Gamma}$ and $z$ be a repelling fixed point of $f_a$ whose combinatorial rotation number is $\theta=p/q$ in lowest terms. Then there is a branch $\tau$ of $\operatorname{log} f_a'(z)$ such that $$\frac{\operatorname{Re} \tau}{|\tau - 2\pi i \theta |^2} \geq \frac{q^2}{4p \operatorname{log} (\left \lceil{q/p}\right \rceil +1)}, \ \ \ \textrm{if} \ \ \theta \leq 1/2,$$ and $$\frac{\operatorname{Re} \tau}{|\tau - 2\pi i \theta |^2} \geq \frac{q^2}{4(q-p)\operatorname{log} (\left \lceil{q/(q-p)}\right \rceil +1)}, \ \ \ \textrm{if} \ \ \theta > 1/2. $$ \end{thm} The inequalities of both Theorems \ref{yocin} and \ref{xxx} have geometric interpretations as restricting the logarithm of the derivative at a repelling fixed point to a round disk for each $p/q$: writing $k$ for the right-hand side of either inequality, completing the square in $\operatorname{Re} \tau \geq k|\tau - 2\pi i \theta|^2$ confines $\tau$ to the closed disk of radius $1/(2k)$ tangent to the imaginary axis at $2\pi i \theta$. See Figure \ref{Ydiscs} for illustrations.\\ Theorem \ref{xxx} provides a key step in the strategy of the first two authors to prove that the part $M_{\Gamma}=\mathcal{C}_{\Gamma} \cap \{z: |z-4|\leq 3\}$ of the connectedness locus $\mathcal{C}_{\Gamma}$ of the family \eqref{plk} is homeomorphic to the connectedness locus $M_1$ of the parabolic family $\{z\mapsto z+ \frac{1}{z} +A: A\in \mathbb{C}\}/(A\sim -A)$.
With the result announced by Carsten Petersen and Pascale Roesch that $M_1$ is homeomorphic to the Mandelbrot set $M$ \cite{Petersen17}, this will finally prove the long-standing conjecture that $M_\Gamma$ (pictured in Figure \ref{MGamma}) is homeomorphic to $M$. \paragraph{Part II.} The last Section \ref{tra} describes the dynamics of hyperbolic correspondences in the family \eqref{fxs}. We start by defining Julia sets (see Figure \ref{fff} for an example). The main subject is the generalization of holomorphic motions, which involves the construction of a solenoid associated to the Julia set of $\mathbf{f}_c(z)=z^{\beta} +c$ (Theorem \ref{bfl}). For parameters $c$ close to zero, the dynamics of $z^{\beta}+c$ on its Julia set $\mathcal{J}_c$ is the projection of a (single-valued) dynamical system $f_c: U \to U$ given by a holomorphic map defined on a subset $U \subset \mathbb{C}^2.$ The maximal invariant set of $f_c$ is a solenoid whose projection is $\mathcal{J}_c.$ The projection of the holomorphic motion in $\mathbb{C}^2$ yields a branched holomorphic motion on the plane, as defined by Lyubich and Dujardin \cite{Lyubich2015} for polynomial automorphisms of $\mathbb{C}^2.$ Branched holomorphic motions are described in greater generality for the family \eqref{fxs} in \cite{SS17}. The advantage of the solenoid construction is that it makes it possible to apply certain techniques of Thermodynamic Formalism to the family of maps $f_c:U \to U$ and use them to estimate the Hausdorff dimension of $\mathcal{J}_c.$ For example, \begin{thm}[Hausdorff dimension]\label{tyu} If $q^2 < p$ then for every $c$ sufficiently close to zero, $$\operatorname{Dim}_{H} \mathcal{J}_c < 2, $$ \noindent where $\operatorname{Dim}_H$ denotes the Hausdorff dimension of $\mathcal{J}_c.$ \end{thm} In the family of Figure \ref{bcx} we have $p=5$ and $q=2.$ Since $2^2 < 5,$ it follows that $\mathcal{J}_c$ is the projection of a solenoid having zero Lebesgue measure. The assumption $q^2 < p$ may not be sharp.
The essential idea is that $\operatorname{Dim}_H\mathcal{J}_c \to 2$ as $\beta \to 1,$ which is supported by many numerical experiments. \begin{figure} \caption{Julia sets of $z^{\beta}+c$.} \label{bcx} \end{figure} \paragraph{Notation and terminology.} \begin{enumerate} \item Holomorphic correspondences are denoted by $\mathcal{F}, \mathcal{G}, \ldots$ in the context of matings, or by $\mathbf{f}, \mathbf{g}, \ldots$ when studying hyperbolic multifunctions. \item By the term \emph{multifunction} we mean any multivalued map. Every multifunction maps points to subsets. \item $\mathbb{S}^1= \{z\in \mathbb{C}: |z|=1\},$ $\widehat{\mathbb C} = \mathbb{C}\cup \{\infty\},$ $\mathbb{H}= \{z=x+iy \in \mathbb{C}:y>0\}, $ and $f^n= \underbrace{f\circ \cdots\circ f}_{n}.$ \item $\Gamma=\operatorname{PSL}(2,\mathbb{Z})$ is the modular group consisting of all M\"obius transformations $$z\mapsto \frac{az+b}{cz +d}, $$ \noindent where $ad-bc=1$ and $a,b,c,d \in \mathbb{Z}.$ The operation is the standard composition $\circ$. The \emph{generators} of the modular group that we shall use are the maps $$\alpha(z)=z+1 \mbox{ and } \beta(z)=\frac{z}{z+1}. $$ Consider \begin{equation}\label{scv} P(z,w) = (w-(z+1))(w(z+1) -z)=0.\end{equation} The grand orbits of $\operatorname{PSL}(2,\mathbb{Z})$ on ${\mathbb H}$ are identical to those of the holomorphic correspondence $\mathcal{H}: \widehat{\mathbb C} \rightarrow \widehat{\mathbb C}$ determined by $P(z,w)=0.$ \end{enumerate} \paragraph{Acknowledgments.} The authors would like to thank the Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo, which has supported the first two authors by the grant FAPESP 2016/50431-6, and the third by FAPESP 2016/16012-6. C.S.
is very grateful to Edson de Faria for the hospitality at IME-USP, to Sylvain Bonnot for suggesting the investigation of some interesting problems relating holomorphic correspondences to automorphisms of $\mathbb{C}^2,$ and to Daniel Smania for many discussions and key ideas on the dynamics of hyperbolic correspondences, especially those concerning Gibbs states and Hausdorff dimension. L.L. and C.S. would like to express their sincere gratitude to the scientific committee and organizers of the conference \emph{New trends in one-dimensional dynamics}, on the occasion of Welington de Melo's seventieth birthday, especially to Pablo Guarino and Maria J. Pacifico (significant content of this paper was previously announced at this conference). \section{Mating quadratic maps with $\operatorname{PSL}(2,\mathbb{Z})$} Recall that in the case of hyperbolic quadratic polynomials $f_c(z)=z^2+c$, the \emph{topological mating} between $f_c$ and $f_{c'}$ is the map $$g: \frac{K_c \cup K_{c'}}{\sim} \to \frac{K_c \cup K_{c'}}{\sim} $$ induced by $f_c$ and $f_{c'}$ on the quotient space, where $\sim$ is the smallest closed relation such that $\varphi_c(z) \sim \varphi_{c'}(\overline{z})$, for every $z\in \mathbb{S}^1$ ($\varphi_c$ is the boundary extension of the B\"ottcher coordinate and $K_c$ is a copy of the filled Julia set). The two maps are \emph{matable} if the quotient space is a sphere, and $g$ can be realized as a rational map. By applying Thurston's characterization of rational maps among critically finite branched coverings of the sphere, Tan Lei \cite{Tan} and Mary Rees \cite{MR} proved that two quadratic polynomials $f_c,f_{c'}$ with periodic critical points are matable if and only if $c$ and $c'$ do not belong to complex conjugate limbs of the Mandelbrot set.
Matings can also be constructed between Fuchsian groups: by applying the Bers Simultaneous Uniformization Theorem certain Fuchsian groups can be mated with (abstractly isomorphic) Fuchsian groups to yield \emph{quasifuchsian} Kleinian groups. (See \cite{Bullett2010} for a discussion of matings in various contexts in conformal dynamics.) What is a surprise when first encountered is that certain Fuchsian groups can be mated with polynomial maps (see Section \ref{wxz}). This is achieved in a larger category of conformal dynamical systems, containing both rational maps and finitely generated Kleinian groups, the category of holomorphic correspondences on the Riemann sphere. These are multifunctions $\mathcal{F}: \widehat{\mathbb C} \to \widehat{\mathbb C},$ for which there is a polynomial $P(z,w)$ in two complex variables such that $\mathcal{F}(z) = \left\{w\in \widehat{\mathbb C}: P(z,w)=0 \right\}$. \subsection{Mating quadratic polynomials with $\operatorname{PSL}(2,\mathbb{Z})$} \label{wxz} Examples of matings between quadratic polynomials and the modular group were discovered by the first author and Christopher Penrose in the early '90s. To understand their existence we first consider how one can construct an abstract (topological) model (see also \cite{Bullett1994} and \cite{BodilNuria} for more details). \paragraph{Topological mating: Minkowski's question mark function.} Let $$h:\hat{\mathbb R}_{\ge 0} \to [0,1]$$ denote the homeomorphism which sends $x \in \hat{\mathbb R}_{\ge 0}$ represented by the continued fraction $$ [x_0; x_1, x_2, \ldots] = x_0+\cfrac{1}{x_1+\cfrac{1}{x_2+\cfrac{1}{x_3+\ddots}}}$$ to the binary number $$ h(x)=0.\underbrace{1\ldots1}_{x_0}\underbrace{0\ldots0}_{x_1}\underbrace{1\ldots1}_{x_2}\ldots$$ This is a version of Minkowski's question mark function \cite{Mink04}. It conjugates the pair of maps $\alpha:x \to x+1$, $\beta:x \to x/(x+1)$ to the pair of maps $t \to (t+1)/2$, $t \to t/2$, respectively (the two inverse branches of the binary shift).
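For rational arguments $h$ can be computed exactly. The following is our own illustrative sketch (function names are ours, not from the papers under discussion): it encodes the continued-fraction entries as alternating blocks of binary digits, normalizing the expansion to end in a block of $1$s, which resolves the ambiguity $[\ldots;a]=[\ldots;a-1,1]$.

```python
from fractions import Fraction

def cf_digits(x: Fraction):
    """Continued fraction [x0; x1, ...] of a nonnegative rational, via Euclid."""
    digits, p, q = [], x.numerator, x.denominator
    while q:
        digits.append(p // q)
        p, q = q, p % q
    return digits

def h(x) -> Fraction:
    """The homeomorphism of the text: the continued-fraction blocks
    [x0; x1, x2, ...] become the binary blocks 1^x0 0^x1 1^x2 ..."""
    cf = cf_digits(Fraction(x))
    # Normalize so the final block consists of 1s: [...; a] = [...; a-1, 1].
    if len(cf) % 2 == 0:
        cf[-1:] = [cf[-1] - 1, 1]
    value, scale = Fraction(0), Fraction(1)
    for i, block in enumerate(cf):
        if i % 2 == 0:           # a block of `block` ones contributes to the value
            value += scale * (1 - Fraction(1, 2 ** block))
        scale /= 2 ** block      # every block shifts by `block` binary places
    return value
```

For instance $h(1)=1/2$ and $h(1/3)=1/8$, and one checks numerically the conjugacies $h(x+1)=(h(x)+1)/2$ and $h(x/(x+1))=h(x)/2$ used in the gluing construction below.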
If the Julia set $J(f_c)$ of $f_c: z \to z^2+c$ is connected and locally connected then the B\"ottcher map $\varphi_c:\widehat{\mathbb C}\setminus {\mathbb D} \to \widehat{\mathbb C} \setminus K(f_c)$ extends to a continuous surjection $S^1 \to J(f_c)$, which semi-conjugates the map $z \to z^2$ on $S^1$ (the binary shift) to the map $f_c$ on $J(f_c)$. We deduce that we may use the homeomorphism $h$ described above to glue the action of $f_c^{-1}$ on $J(f_c)$ to that of $\alpha$, $\beta$ on $\hat{\mathbb R}_{\ge 0}/_{\{0 \sim \infty\}}$. Equally well we can glue the action of $f_c^{-1}$ on $J(f_c)$ to that of $\alpha^{-1}$, $\beta^{-1}$ on $\hat{\mathbb R}_{\le 0}/_{\{0 \sim -\infty\}}$. We now take two copies $K_-$ and $K_+$ of the filled Julia set $K_c$ of $f_c$ and glue them together at the boundary point of external angle $0$ to form a space $K_- \vee K_+$. Each point $z\in K_c$ has a corresponding $z'$ defined by $f_c(z')=f_c(z)$. Consider the $(2:2)$ correspondence defined on $K_- \vee K_+$ by sending $\bullet$ $z \in K_-$ to $f_c(z) \in K_-$ and to $z' \in K_+$; $\bullet$ $z \in K_+$ to $f_c^{-1}(z) \in K_+$. It is an elementary exercise to check that this correspondence on $K_- \vee K_+$ can be glued to the correspondence defined by $\alpha$ and $\beta$ on the complex upper half-plane using the homeomorphisms $\hat{\mathbb R}_{\ge 0}/\{0 \sim \infty \} \to \partial K_-$ and $\hat{\mathbb R}_{\le 0}/\{0 \sim -\infty\} \to \partial K_+$ defined above. Thus we have a topological mating between the action of the modular group on the upper half-plane and our $(2:2)$ correspondence on $K_- \vee K_+$. 
\paragraph{Holomorphic mating.} Reassured by the existence of this topological construction, we define a (holomorphic) mating between a quadratic polynomial $f_c,\,\,c \in M$ and $\Gamma=\operatorname{PSL}(2,\mathbb{Z})$ to be a $(2:2)$ holomorphic correspondence $\mathcal{F}$ such that: \begin{enumerate} \item there exists a completely invariant open simply-connected region $\Omega$ and a conformal bijection $\varphi: \Omega \rightarrow \mathbb{H}$ conjugating $\mathcal{F}|_{\Omega}$ to $\alpha|_\mathbb{H}$ and $\beta|_\mathbb{H}$; \item $\widehat \mathbb{C} \setminus \Omega = \Lambda= \Lambda_- \cup \Lambda_+$, where $\Lambda_- \cap \Lambda_+=\{P\}$ (a single point) and there exist homeomorphisms $\phi_{\pm}: \Lambda_{\pm} \rightarrow K_c$ conjugating respectively $\mathcal{F}|_{\Lambda_-}$ to $f_c|_{K_c}$ and $\mathcal{F}|_{\Lambda_+}$ to $f^{-1}_c|_{K_c}$. \end{enumerate} In 1994 the first author and C. Penrose proved that for all parameters $a$ in the real interval $[4,7],$ the correspondence $\mathcal{F}_a$ is a mating between a quadratic polynomial $f_c(z)=z^2+c$, $c\in [-2,+1/4]\subset \mathbb{R}$, and the modular group $\Gamma=\operatorname{PSL}(2,\mathbb{Z})$ (see \cite{Bullett1994}). \subsection{The regular and limit sets of ${\mathcal F}_a$} \label{gra} Consider the family of holomorphic correspondences $\mathcal{F}_a: \widehat{\mathbb C} \to \widehat{\mathbb C},$ defined by the polynomial equation \eqref{plk}. The change of coordinate $\phi_a:\widehat{\mathbb C} \to \widehat{\mathbb C}$ given by $$\phi_a(z)= \frac{az+1}{z+1}$$ conjugates $\mathcal{F}_a$ to the correspondence \begin{equation}\label{pls} J \circ \operatorname{Cov}_0^{Q}, \end{equation} where $J$ is the (unique) conformal involution fixing $1$ and $a$, and $\operatorname{Cov}_0^{Q}$ is the deleted covering correspondence of the function $Q(z)=z^3-3z$, that is to say, the correspondence defined by the relation $$\frac{Q(w)-Q(z)}{w-z}=0, \mbox{ i.e.
}z^2+ zw+ w^2=3.$$ So $\mathcal{F}_a$ and $J \circ \operatorname{Cov}_0^{Q}$ are the same correspondence in different coordinates, and in that sense we write $\mathcal{F}_a= J \circ \operatorname{Cov}_0^{Q}.$ By a \emph{fundamental domain} for $\operatorname{Cov}_0^{Q}$ (respectively $J$) we mean any maximal open set $U$ which is disjoint from $\operatorname{Cov}_0^{Q}(U)$ (respectively $J(U)$). We require our fundamental domains to be simply-connected and bounded by Jordan curves (see Figure \ref{H}). \paragraph{Klein combination locus.} Let $P=1$ denote the common fixed point of $\operatorname{Cov}_0^{Q}$ and $J.$ The point $P$ is a \textit{parabolic fixed point}. The \emph{Klein combination locus} $\mathcal{K}$ is the subset of $\mathbb{C}$ consisting of all $a$ for which there are fundamental domains $\Delta_{Cov}$ and $\Delta_{J}$ of $\operatorname{Cov}_0^{Q}$ and $J$, respectively, such that $$\Delta_{Cov} \cup \Delta_J= \widehat{\mathbb C}\setminus \{P\}.$$ We call such a pair of fundamental domains a \emph{Klein combination pair}.\\ \begin{figure} \caption{Standard fundamental domains for $\operatorname{Cov}_0^{Q}$ and $J$.} \label{aaa} \label{H} \end{figure} In \cite{BL16} we show that $\{a\in \mathbb{C}: |a-4|\leq 3, a\neq 1 \} \subset \mathcal{K}$, and that when $a$ is in the interior of this disk the \emph{standard} fundamental domains (see Figure \ref{aaa}) are a Klein combination pair. More generally we prove that for every $a\in \mathcal{K},$ we can always choose a Klein combination pair whose boundaries $\partial \Delta_{Cov}$ and $\partial \Delta_J$ are transversal to the attracting-repelling axis at $P$.
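For numerical experiments with this correspondence it may help to note that the branches of $\operatorname{Cov}_0^{Q}$ are the two roots of a quadratic in $w$. The sketch below is our own illustration (function names are ours); it checks the defining property of the deleted covering correspondence, namely that both branch values of $w$ have the same image as $z$ under $Q(z)=z^3-3z$.

```python
import cmath

def cov_branches(z: complex):
    """The two solutions w of z^2 + z*w + w^2 = 3, viewed as a quadratic
    in w with coefficients (1, z, z^2 - 3)."""
    disc = cmath.sqrt(12 - 3 * z * z)   # discriminant z^2 - 4*(z^2 - 3)
    return (-z + disc) / 2, (-z - disc) / 2

def Q(z: complex) -> complex:
    """Q(z) = z^3 - 3z; the relation above is (Q(w) - Q(z))/(w - z) = 0."""
    return z ** 3 - 3 * z
```

Away from $z=\pm 1$ (where one branch value coincides with $z$) the two branch values are distinct, and both satisfy $Q(w)=Q(z)$.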
Now suppose $a\in \mathcal{K}$ and let $\Delta_{Cov}$ and $\Delta_J$ be a corresponding pair of fundamental domains of $\operatorname{Cov}_0^{Q}$ and $J$ such that $\partial \Delta_{Cov}$ and $\partial \Delta_J$ are transversal to the attracting-repelling axis at $P.$ It follows that $P\in \mathcal{F}^{n}_a(\overline{\Delta}),$ for every $n,$ and $\mathcal{F}_a(\Delta)$ is compactly contained in $\Delta \cup \{P\}.$ By definition, \begin{equation} \Lambda_{a,+} = \bigcap_{n=1}^{\infty} \mathcal{F}_a^n(\overline{\Delta}) \end{equation} \noindent (where $\mathcal{F}_a = J \circ \operatorname{Cov}_0^{Q}$) is the \emph{forward limit set} of $\mathcal{F}_a.$ Similarly, since $\Delta_{Cov}$ is forward invariant, the complement of $\Delta_{Cov}$ is invariant under $\mathcal{F}^{-1}_a$ and \begin{equation} \Lambda_{a,-}=\bigcap_{n=1}^{\infty} \mathcal{F}_a^{-n} (\widehat{\mathbb C}\setminus \Delta_{Cov}) \end{equation} \noindent is the \emph{backward limit set} of $\mathcal{F}_a.$ The sets $\Lambda_{a,-} $ and $\Lambda_{a,+}$ have only one point in common, the point $P.$ Their union, $\Lambda_a,$ is the \emph{limit set} of $\mathcal{F}_a.$ An example of a plot of a limit set of $\mathcal{F}_a$ is displayed in Figure \ref{ppp}. (In this plot we use the original coordinate system of \eqref{plk}, so $P=0$ and $J$ is the involution $z \leftrightarrow -z$.) \\ We have $\mathcal{F}_a^{-1}(\Lambda_{a,-}) = \Lambda_{a,-},$ and the restriction of $\mathcal{F}_a$ to this set is a (2\,:1) single-valued holomorphic map denoted by $f_a.$ The involution $J$ maps $\Lambda_{a,-} $ onto $\Lambda_{a,+}$ and determines a conjugacy from $f_a$ to $$\mathcal{F}_a^{-1}: \Lambda_{a,+} \to \Lambda_{a,+}.$$ The \emph{regular domain} of $\mathcal{F}_a$ is $\Omega_a = \widehat{\mathbb C}\setminus \Lambda_a.$ This set is completely invariant under $\mathcal{F}_a$ (forwards and backwards).
By the Klein Combination Theorem it can be shown that if $\Omega_a$ contains no critical points it is tiled by copies of the intersection of any pair of Klein combination domains \cite{Bullett00}. \begin{figure} \caption{A connected limit set for $\mathcal{F}_a$.} \label{ppp} \caption{Julia set of the hybrid equivalent member of $\operatorname{Per}_1(1)$.} \label{bbb} \end{figure} \paragraph{Connectedness locus.} The \emph{connectedness locus} $\mathcal{C}_{\Gamma}$ of the family $\mathcal{F}_a$ is the subset of $\mathcal{K}$ consisting of all $a$ such that the limit set $\Lambda_a$ is connected. When $a\in \mathcal{C}_{\Gamma}$, the regular domain $\Omega_a$ contains no critical points, and moreover is simply connected. Bullett and Penrose \cite{Bullett1994} conjectured that for every $a \in \mathcal{C}_{\Gamma},$ the correspondence $\mathcal{F}_a$ is a mating between some quadratic map $f_c(z)=z^2 +c$ and the modular group $\operatorname{PSL}(2,\mathbb{Z}).$ More recently, this conjecture was settled affirmatively by Bullett and Lomonaco \cite{BL16}, provided the quadratic family is replaced by a quadratic family of \textit{parabolic} maps (see figures \ref{ppp} and \ref{bbb}). \subsection{Mating parabolic maps with $\operatorname{PSL}(2,\mathbb{Z}).$} The family $\operatorname{Per}_1(1)$ consists of quadratic rational maps of the form $P_A(z) = z+ 1/z + A,$ where $A\in \mathbb{C}$. The maps in $\operatorname{Per}_1(1)$ all have a persistent parabolic fixed point at $\infty$ and critical points at $\pm 1$. The connectedness locus for the family $\operatorname{Per}_1(1)$ is the \textit{parabolic Mandelbrot set} $M_1$, which has been proved to be homeomorphic to the Mandelbrot set by C. Petersen and P. Roesch \cite{Petersen17}.
We say that $\mathcal{F}_a$ is a \emph{mating} between $P_A$ and $\operatorname{PSL}(2,\mathbb{Z})$ if: \begin{enumerate} \item on the completely invariant open simply-connected region $\Omega_a$ there exists a conformal bijection $\varphi_a: \Omega_a \rightarrow \mathbb{H}$ conjugating $\mathcal{F}_a: \Omega_a \to \Omega_a$ to $\alpha|_\mathbb{H}$ and $\beta|_\mathbb{H}$; and \item the (2\,:1) branch of $\mathcal{F}_a$ which fixes $\Lambda_{a,-}$ (given by the holomorphic map $f_a$) is hybrid equivalent to $P_A$ on the backward limit set $\Lambda_{a,-}$. \end{enumerate} In \cite{BL16}, using the theory of parabolic-like maps developed by the second author (see \cite{L}), the first two authors proved the following (see figures \ref{ppp} and \ref{bbb}): \begin{thm}\label{Mat} For every $a\in \mathcal{C}_{\Gamma},$ the correspondence $\mathcal{F}_a$ is a mating between a parabolic map in $\operatorname{Per}_1(1)$ and $\operatorname{PSL}(2,\mathbb{Z}).$ \end{thm} \begin{figure} \caption{A plot of $M_{\Gamma}$.} \label{MGamma} \end{figure} The following conjecture has been open for at least 20 years \cite{Bullett1994}: \begin{conj}\label{MM1} The Mandelbrot set is homeomorphic to $M_{\Gamma}$. \end{conj} The first two authors have developed a detailed strategy for proving that $M_{\Gamma}$ is homeomorphic to $M_1$. This, together with the proof by Petersen and Roesch that $M_1$ is homeomorphic to $M$, would finally prove Conjecture \ref{MM1}. A key step in the strategy to prove that $M_{\Gamma}$ is homeomorphic to $M_1$ makes use of a Yoccoz inequality for matings, which we prove using a generalization of the technique of external rays (the subject of the next section). \subsection{Periodic geodesics} \label{lmv} \paragraph{B\"ottcher coordinates.} Consider the holomorphic correspondence $\mathcal{H}$ on the upper half-plane obtained from the generators $\alpha(z)=z+ 1$ and $\beta(z)=z/(z+1)$ of $\operatorname{PSL}(2,\mathbb{Z})$, i.e.
defined by the polynomial equation \eqref{scv}. As part of the proof of Theorem \ref{Mat} it is shown in \cite{BL16} that: \begin{thm}[B\"ottcher map] \label{efv}If $a\in \mathcal{C}_{\Gamma},$ there is a unique conformal homeomorphism $\varphi_a: \Omega_a \to \mathbb{H}$ such that $$ \mathcal{H}\circ \varphi_a = \varphi_a \circ \mathcal{F}_a. $$ \end{thm} By the Schwarz lemma, the B\"ottcher map is an isometry with respect to the hyperbolic metric, and maps geodesics to geodesics. Geodesics in $\Omega_a$, or equivalently in ${\mathbb H}$, play a role for the correspondences ${\mathcal F}_a$ analogous to the role played by external rays for quadratic polynomials $f_c$. \paragraph{Periodic geodesics land.} By a finite word in $\alpha$ and $\beta$ we mean any M\"obius transformation $$W=g_1g_2 \cdots g_n:=g_1 \circ \cdots \circ g_n, $$ where $g_i \in \{\alpha, \beta\}.$ We can compose words in the obvious way, and also consider infinite sequences $(g_i)_1^{\infty}$ and bi-infinite sequences $(g_i)_{-\infty}^\infty$. A geodesic $\gamma$ in the hyperbolic plane is said to be \emph{periodic} if $W\circ \gamma= \gamma$ for some finite word $W$. (Note that $W$ must include both the letters $\alpha$ and $\beta$: these are parabolic transformations of ${\mathbb H}$, and no geodesic is invariant under a parabolic transformation.) Since $\mathbb{H}$ is geodesically complete, $\gamma$ is a curve $\mathbb{R} \to \mathbb{H},$ and the limits $$ \gamma(-\infty):=\lim_{t\to -\infty} \gamma(t) , \ \ \gamma(\infty):= \lim_{t\to \infty} \gamma(t)$$ are by definition the \emph{landing points} of $\gamma$. Every periodic geodesic in $\mathbb{H}$ lands, and the landing points lie in $\mathbb{R}\cup \{\infty\}$. If $a\in \mathcal{C}_{\Gamma},$ the regular domain is a hyperbolic Riemann surface, that is, it has a unique complete metric of constant curvature $-1$ determining its geometry.
A geodesic $\hat\gamma$ in $\Omega_a$ is periodic if $\varphi_a\circ \hat\gamma$ is a periodic geodesic of $\mathbb{H}.$ We say that a periodic geodesic $\hat\gamma:\mathbb{R} \to \Omega_a$ \emph{lands} if the limits $\hat\gamma(\infty)$ and $\hat\gamma(-\infty)$ exist. They are the \emph{right and left landing points,} respectively. \begin{thm}\label{land} If $a\in \mathcal{C}_{\Gamma},$ then every periodic geodesic lands. The left landing point belongs to $\Lambda_{a, -}$ and the right landing point is in $\Lambda_{a,+}.$ \end{thm} As a corollary, the B\"ottcher map extends to all landing points of periodic geodesics. Indeed it extends to all landing points of preperiodic geodesics, and moreover these correspond under $\varphi_a$ to the set of all quadratic irrationals in ${\mathbb R}$ (the set of real numbers with preperiodic continued fraction expansions). \subsection{Repelling fixed points, and Sturmian sequences}\label{fp} The following result is again analogous to a result for quadratic polynomials, but the proof is quite technical and deep (even more so than in the case of polynomials, which is already difficult, see \cite{BL17}), and at present we only have a proof for repelling fixed points, whereas for polynomials it is known for repelling and parabolic cycles: \begin{thm}\label{lp} A repelling fixed point in $\Lambda_-({\mathcal F}_a)$ of a correspondence ${\mathcal F}_a$ with $a\in {\mathcal C}_\Gamma$ is the landing point of exactly one periodic cycle of geodesics. \end{thm} This theorem has the consequence that to a repelling fixed point $z\in \Lambda_-$ of a correspondence $\mathcal{F}_a$ with $a \in {\mathcal C}_\Gamma$ we can associate a periodic geodesic $\hat\gamma$ which lands there, and a finite word $W$ in $\alpha$ and $\beta$ which fixes $\varphi_a\circ \hat\gamma$. 
Letting $f_a$ denote the (locally defined) branch of $\mathcal{F}_a$ which fixes $z$, we deduce that since $f_a$ is locally a homeomorphism the cyclic order of the images of $\hat\gamma$ around $z$ is preserved by $f_a$. Thus $f_a$ has a well-defined \emph{combinatorial rotation number} around $z$, and this number is rational. \paragraph{Sturmian sequences.} Recall that a sequence $(s_i) \in \{0,1\}^{\mathbb{N}}$ is \emph{Sturmian} if, for every $n,$ the number of $1$s in any two blocks of length $n$ differs by at most one. There is an obvious equivalent definition for bi-infinite sequences. If $(s_i)$ is Sturmian, then the points of the orbit of $x=0.s_1s_2 \ldots$ (binary) under $f(z)=z^2$ on the unit circle are necessarily in the same order as the points of some rigid rotation $R_{\theta}$, and vice versa. This $\theta$ is uniquely determined; it is by definition the \emph{rotation number} of $(s_i).$ Equivalently, $\theta$ is the limiting frequency of $1$s in the sequence \cite{BS94}. For each rational $p/q$ (modulo $1$) in lowest terms, there is a unique (up to cyclic permutation) finite word $W_{p/q}=(s_i)\in\{0,1\}^q$ such that the orbit of $x=0.\overline{s_1\ldots s_q}$ under $f(z)=z^2$ is in the same order around the circle as the points of an orbit of the rigid rotation $R_{p/q}$ (here $\overline{s_1\ldots s_q}$ denotes a recurring block). For example $W_{1/3}=001$ and $W_{2/5}=00101$. We call $W_{p/q}$ the finite Sturmian word of rotation number $p/q$, since the bi-infinite sequence made up of repeated copies of $W_{p/q}$ is the unique (up to shift) periodic Sturmian sequence of rotation number $p/q$.
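The words $W_{p/q}$, and the M\"obius transformations they define once $1$ and $0$ are replaced by $\alpha$ and $\beta$ (as in the remark that follows), are easy to compute. The sketch below is our own illustration (function names are ours): the word is read off as the itinerary of a rigid rotation by $p/q$, and from the trace of the resulting matrix in $\operatorname{SL}(2,\mathbb{Z})$ one reads off its leading eigenvalue; it is bounds on these eigenvalues that enter the proof of Theorem \ref{xxx} in \cite{BL17}.

```python
def sturmian_word(p: int, q: int) -> str:
    """The finite Sturmian word W_{p/q} (up to cyclic permutation):
    digit i is floor(i*p/q) - floor((i-1)*p/q), so 1s occur with frequency p/q."""
    return "".join(str(i * p // q - (i - 1) * p // q) for i in range(1, q + 1))

# alpha(z) = z + 1 and beta(z) = z/(z + 1) as matrices in SL(2, Z).
ALPHA, BETA = ((1, 1), (0, 1)), ((1, 0), (1, 1))

def word_matrix(word: str):
    """Matrix of the composition g_1 . ... . g_n, reading the word left to
    right with 1 -> alpha and 0 -> beta."""
    m = ((1, 0), (0, 1))
    for s in word:
        g = ALPHA if s == "1" else BETA
        m = tuple(
            tuple(sum(m[i][k] * g[k][j] for k in range(2)) for j in range(2))
            for i in range(2)
        )
    return m

def leading_eigenvalue(word: str) -> float:
    """Larger root of x^2 - t*x + 1, where t is the trace (determinant is 1)."""
    m = word_matrix(word)
    t = m[0][0] + m[1][1]
    return (t + (t * t - 4) ** 0.5) / 2
```

For example `sturmian_word(1, 3)` returns `"001"` and `sturmian_word(2, 5)` returns `"00101"`, matching the examples above; the matrix of $W_{1/3}$ has trace $4$ and leading eigenvalue $2+\sqrt 3$.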
Finally we remark that there is nothing special about the symbols $1$ and $0$: identical terminology for Sturmian sequences and words may be applied if we replace $1$ and $0$ by $\alpha$ and $\beta$ respectively.\\ We now return to the situation that $a\in \mathcal{C}_{\Gamma}$, and $z$ is a repelling fixed point of $f_a: \Lambda_{a,-} \to \Lambda_{a,-}.$ If $\hat\gamma$ is a periodic geodesic landing at $z$, it has a combinatorial rotation number $p/q$ (by Theorem \ref{lp}), and any finite word $W$ in $\alpha$ and $\beta$ which fixes $\varphi_a\circ\hat\gamma$ is Sturmian, hence (a cyclic permutation of) a power of $W_{p/q}$. By establishing and applying bounds for the eigenvalues of the Sturmian words $W_{p/q}$ in $\alpha$ and $\beta$, we prove our Yoccoz inequality, Theorem \ref{xxx} (see \cite{BL17}).\\ \section{Hyperbolic correspondences} \label{tra} We now turn to the study of the one parameter family of holomorphic correspondences defined by \eqref{fxs}. This family is perhaps the simplest generalization of the quadratic family as a multifunction. It will be useful to recall some well-known facts directly related to the dynamics of $\mathbf{f}_c(z)=z^{\beta} +c$ when $\beta>1$ is a rational number. \paragraph{Hyperbolic quadratic maps.} The notion of hyperbolicity can be given in several equivalent forms. According to the simplest one, $f_c(z)=z^2+c$ is \emph{hyperbolic} if $f_c^n(0)$ converges to an attracting cycle (finite or infinite). Since every finite attracting cycle attracts the orbit of a critical point, the map $f_c$ can have at most one finite attracting cycle. Any quadratic map with a finite attracting cycle corresponds to a point in the interior of the Mandelbrot set $M,$ and an equivalent form of the Fatou conjecture states that this is the only possibility for a quadratic map in the interior of $M. 
$ On the other hand, if $c$ is in the complement of $M, $ then $\mathcal{J}_c$ is a Cantor set and $f_c$ is hyperbolic because $f_c^n(0) \to \infty.$ The closure of the attracting cycles is denoted by $\mathcal{J}_c^*.$ It turns out that \emph{$f_c$ is hyperbolic iff the basin of attraction of $\mathcal{J}_c^*$ is $\widehat{\mathbb C} \setminus \mathcal{J}_c.$ } For this reason, we call $\mathcal{J}_c^*$ the dual Julia set of $f_c.$ This equivalent definition of hyperbolicity should be preserved in any generalization, mainly because of its intrinsic dynamical significance. We shall use this equivalent property to define hyperbolic correspondences and centers in the family $\mathbf{f}_c(z) = z^{\beta}+c,$ but first we need to extend the concepts of orbit, Julia set and multiplier of a cycle. \paragraph{Cycles.} Consider the family \eqref{fxs}. Every sequence $(z_i)_0^{\infty}$ for which the points satisfy $z_{i+1}\in \mathbf{f}_c(z_i)$ is a \emph{forward orbit}. A backward orbit is characterized by $z_{i+1} \in \mathbf{f}_c^{-1}(z_i).$ If $\varphi: U \to \mathbb{C}$ is an injective holomorphic map from a region $U$ of the plane such that $\varphi(z) \in \mathbf{f}_c(z),$ for every $z$ in $U,$ then $\varphi$ is a \emph{univalent branch} of $\mathbf{f}_c.$ By a cycle we mean any periodic forward orbit with minimal period $n.$ The quantity $$\lambda = \prod_{i=0}^{n-1} \varphi_i'(z_i), $$ where $\varphi_i$ is the unique univalent branch taking $z_i$ to $z_{i+1},$ is the \emph{multiplier} of the cycle. If $z=0$ then there is no univalent branch defined at $z;$ if some point of the cycle is $0,$ then by definition $\lambda=0.$ The cycle is \emph{repelling} if $|\lambda|> 1,$ and attracting if $|\lambda|< 1.$ \paragraph{Julia sets.} The Julia set of $\mathbf{f}_c$, denoted by $\mathcal{J}_c$, is the closure of the union of all repelling cycles of $\mathbf{f}_c.$ Similarly, the dual Julia set $\mathcal{J}_c^*$ is the closure of the union of all \emph{finite} attracting cycles.
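Forward orbits and multipliers in the family \eqref{fxs} can be explored numerically by listing all $q$ branch values at each step. The sketch below is our own illustration (function names are ours); the derivative formula follows from implicit differentiation of $(w-c)^q=z^p$.

```python
import cmath

def branch_values(z: complex, c: complex, p: int, q: int):
    """All q images w of z under the correspondence (w - c)^q = z^p."""
    if z == 0:
        return [c]                                    # the branches collide at 0
    root = cmath.exp((p / q) * cmath.log(z))          # one value of z^(p/q)
    return [c + root * cmath.exp(2j * cmath.pi * k / q) for k in range(q)]

def branch_derivative(z: complex, w: complex, c: complex, p: int, q: int) -> complex:
    """Derivative of the univalent branch through (z, w): implicit
    differentiation gives q (w - c)^(q-1) w' = p z^(p-1)."""
    return (p / q) * z ** (p - 1) / (w - c) ** (q - 1)
```

For instance, with $c=0$, $p=5$, $q=2$ the point $z=1$ is fixed (since $1^2=1^5$), and the branch fixing it has derivative $p/q=5/2$, so this fixed point is repelling; the multiplier of a longer cycle is the product of the branch derivatives along it.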
The dual Julia set extended by the attracting fixed point $\infty$ is denoted by $\mathcal{J}_c^{e*}=\mathcal{J}_c^*\cup \{\infty\}.$ \begin{figure} \caption{The Julia set of $z\mapsto z^{\frac{3}{2}}+c$.} \label{fff} \end{figure} \paragraph{Filled Julia set.} For every $c$ there is a bounded disk $B$ centered at $0$ whose complement is invariant under $\mathbf{f}_c,$ and every forward orbit of a point in $\mathbb{C} \setminus B$ converges exponentially fast to $\infty.$ We define \begin{equation} K_c = \bigcap_{n>0 } \mathbf{f}_c^{-n}(B) \end{equation} as the \emph{filled Julia set} of $\mathbf{f}_c.$ A point $z$ belongs to $K_c$ iff there is at least one bounded forward orbit under $\mathbf{f}_c$ starting at $z.$ The restriction $\mathbf{f}_c|_{K_c}$ is denoted by $\mathbf{g}_c: K_c \to K_c.$ \paragraph{Hyperbolic correspondences.} The $\omega$-limit set of a point $z$, denoted $\omega(z)$, consists of every $\zeta$ such that $z_{i_k} \to \zeta$ as $k\to \infty,$ for some bounded forward orbit $(z_i)$ starting at $z_0=z,$ and some subsequence $(z_{i_k}).$ We may use $\omega(z,\mathbf{f}_c)$ to make explicit the dependence on the dynamics of $\mathbf{f}_c.$ The dual Julia set is a \emph{hyperbolic attractor} for $\mathbf{g}_c$ if $\mathcal{J}_c^*$ is $\mathbf{g}_c$-forward invariant and supports an attracting conformal metric $\rho(z)|dz|,$ in the sense that $$\sup_{z, \varphi}\| \varphi'(z) \|_{\rho} <1, $$ where the $\sup$ is taken over all $z\in \mathcal{J}_c^*$ and all univalent branches $\varphi$ of $\mathbf{f}_c$ at $z$ such that $\varphi(z) \in \mathcal{J}_c^*.$ It is implicit in this definition that $ \mathcal{J}_c^*$ does not contain the critical point, for then no univalent branch is defined at $0.$ If $\mathcal{J}_c^*$ is a hyperbolic attractor for $\mathbf{g}_c,$ then the \emph{basin of attraction} of $\mathcal{J}_c^{*e}$ is well defined (in other words, it contains a neighborhood of $\mathcal{J}^{*e}_c$) and consists of all $z$ such that $\omega(z)\subset
\mathcal{J}_c^{*e}.$ \begin{defi}[Hyperbolicity] We say that $\mathbf{f}_c$ is \emph{hyperbolic} if $\mathcal{J}_c^*$ is a hyperbolic attractor for $\mathbf{g}_c$ and the basin of attraction of $\mathcal{J}_c^{e*}$ is $\hat{\mathbb{C}} \setminus \mathcal{J}_c. $ \end{defi} \paragraph{Carpets and connectedness locus.} A connected compact subset of the plane is \emph{full} if its complement in the Riemann sphere is connected. A set $\Lambda\subset \mathbb{C}$ is a hyperbolic repeller of $\mathbf{f}_c$ if (i) $\mathbf{f}_c^{-1}(\Lambda)=\Lambda;$ and (ii) $\Lambda$ supports an expanding conformal metric defined on a neighborhood of $\Lambda$ (see \cite{SS17}). A filled Julia set $K_c$ is a \emph{Carpet} if (i) $K_c$ is connected but not full; and (ii) $K_c$ is a hyperbolic repeller. Intuitively, every Carpet has holes, and by the contraction of the branches of $\mathbf{f}_c^{-1}$, every hole comes with infinitely many small copies. We say that $K_c$ is a \emph{Cantor repeller} if $K_c$ is a hyperbolic repeller and also a Cantor set. In this case, $\mathcal{J}_c=K_c.$ \noindent The connectedness locus $M_{\beta}$ of the family $\mathbf{f}_c$ is by definition the set of all parameters $c$ for which $K_c$ is connected. Another important subset of the parameter space is \begin{equation} M_{\beta,0} = \{c\in \mathbb{C}: 0 \in K_c\}. \end{equation} Notice that both sets generalize the definition of the Mandelbrot set for the quadratic family, but if $\beta$ is not an integer, there is no reason to believe that $M_{\beta} = M_{\beta,0}.$ \begin{figure} \caption{Filled Julia sets in the family $\mathbf{f}_c$.} \label{pcy} \end{figure} \begin{thm} \label{pcv} If $\beta=p/q$ and $p$ is prime, then $K_c$ is either full, a Carpet, or a Cantor repeller.
\end{thm} If $p$ is prime, it is possible to show that $M_{\beta,0} \subset M_{\beta};$ in other words, $K_c$ is connected if $0 \in K_c.$ If $c$ is in $M_{\beta} \setminus M_{\beta,0},$ then $K_c$ is a Carpet, and if $c$ is in the complement of $M_{\beta},$ then $K_c$ is a Cantor repeller. \paragraph{Centers.} A center is a point $c$ of the parameter space such that $$\mathbf{g}_c^n(0)=\{0\},$$ for some $n>0.$ This definition is motivated by a well-known fact from the quadratic family, where every bounded hyperbolic component $U$ has a center \cite{DH84, DH85}, defined as the unique point $c\in U$ for which the multiplier of the finite attracting cycle of $f_c$ is zero. Hence, in the case of the quadratic family, the number of bounded hyperbolic components is countably infinite, and every such component is encoded by a solution of $f_c^n(0)=0,$ for some $n>0.$ \paragraph{Simple centers.} A center is called \emph{simple} if there is only one orbit of $0$ under $\mathbf{g}_c,$ and this orbit is necessarily a cycle containing $0.$ Let $\mathcal{S}_d = \{a\in \mathbb{C}: a^{d-1}=-1\},$ for $d>1.$ For every pair $(d,a)$ in the infinite set $$\bigcup_{d>1} \{d\} \times \mathcal{S}_d, $$ the point $a$ is a simple center of the family of holomorphic correspondences $\mathbf{f}_c: z\mapsto w$ given by $(w-c)^{2} = z^{2d}.$ Indeed, it was shown in \cite{SS17} that the first two iterates of $0$ under $\mathbf{f}_a$ are $0 \mapsto a \mapsto a^d +a=0$ and $0\mapsto a \mapsto -a^{d} +a=-2a^d$ (since $a^{d-1}=-1$ gives $a^d=-a$), where $-2a^{d}$ is a point in the basin of infinity of $\mathbf{f}_a.$ \paragraph{Open problems.} A fundamental program for the family $\mathbf{f}_{c}(z)=z^{\beta} +c$ is given by the following problems: \begin{itemize} \item[I.] Show that every perturbation of a center corresponds to a hyperbolic correspondence; \item[II.] Show that the set $M_{\beta}'$ of hyperbolic parameters is indeed open and that every component of $M_{\beta}'$ is encoded by a center; \item[III.]
Decide whether the set of parameters for which $c\mapsto \mathcal{J}_c$ is continuous in the Hausdorff topology is open and dense (computer experiments seem to support this statement); \item[IV.] Show that every component of $\mathbb{C}\setminus \partial M_{\beta}$ is hyperbolic. \item[V.] Classify Julia sets with zero Lebesgue measure. \end{itemize} Problem I can be solved with a generalization of the proof of Theorem \ref{jkl} (see \cite{Rigidity} for a detailed exposition); the second seems within reach but remains unresolved; the third is in many aspects a generalization of the celebrated work of Ma\~n\'e, Sad and Sullivan \cite{Mane1983} (see also \cite{SS17} and Section \ref{dfc} for a discussion of holomorphic motions in the family \eqref{fxs}); and the fourth and fifth may be as difficult as the Fatou conjecture (which has been open for a century). Indeed, the Fatou conjecture is equivalent to the following assertion \cite{Mane1983}: if $c$ is in the interior of the Mandelbrot set, then the Julia set of $f_c(z)=z^2 +c$ has zero Lebesgue measure. Theorem \ref{tyu} is perhaps the first result towards this classification. \begin{thm}[Hyperbolicity] \label{jkl} If $c$ is in the complement of $M_{\beta,0},$ or $c$ is sufficiently close to a simple center, then $\mathbf{f}_c$ is hyperbolic. \end{thm} \subsection{Holomorphic motions} \label{dfc} Quasiconformal deformations of Julia sets in the family $\mathbf{f}_c$ can be explained by the theory of branched holomorphic motions introduced by Dujardin and Lyubich \cite{Lyubich2015} for polynomial automorphisms of $\mathbb{C}^2.$ For more details, see \cite{SS17}. First, let us recall some classical facts about holomorphic motions. Let $\Lambda \subset \mathbb{C}^n$ be a set, and let $U\subset \mathbb{C}$ be open.
A family of injections $h_c: \Lambda \to \mathbb{C}^n$ is a \emph{holomorphic motion} with base point $a\in U$ if (i) $h_a$ is the identity, and (ii) $c\mapsto h_c(z)$ is holomorphic on $U$, for every $z$ fixed in $\Lambda.$ \paragraph{Branched holomorphic motions.} Let $\Lambda$ and $U$ be subsets of $\mathbb{C},$ with $U$ open and nonempty. A branched holomorphic motion with base point $a\in U$ is a multifunction $\mathbf{h}: U \times \Lambda \to \mathbb{C}$ with the following properties: (i) $\mathbf{h}(a,z)=\{z\},$ for every $z\in \Lambda$ (in other words, $\mathbf{h}_a=\mathbf{h}(a, \cdot)$ is the identity); and (ii) there is a family $\mathcal{F}$ of holomorphic maps $f:U \to \mathbb{C}$ such that $$\bigcup_{z\in \Lambda} G_z(\mathbf{h}) = \bigcup_{f\in \mathcal{F}} G(f), $$ where $G(f)=\{(c, f(c));\, c\in U\}$ is the graph of $f$ and $G_{z}(\mathbf{h})$ is the graph of $c\mapsto \mathbf{h}_c(z).$ The key difference between the definitions of branched and (non-branched) holomorphic motions is that bifurcations are allowed in the branched family, so that $\mathbf{h}_c(z)$ is a set instead of a single point. \subsection{Solenoidal Julia sets.} Recently, Siqueira and Smania have presented another way of interpreting branched holomorphic motions on the plane, as projections of (non-branched) holomorphic motions on $\mathbb{C}^{2}.$ The method is general and applies to every hyperbolic Julia set \cite{SS17}, but we shall restrict attention to bifurcations near $c=0.$ There is a family of holomorphic maps $f_c: U_0 \to V_0$ such that $U_0$ and $V_0$ are open subsets of $\mathbb{C}^2,$ the closure of $U_0$ is contained in $V_0,$ and the maximal invariant set $$ S_c = \bigcap_{n=1}^{\infty} f_c^{-n}(V_0) $$ \noindent is the closure of the periodic points of $f_c$ (all periodic points are repelling in a certain generalized sense; see \cite{SS17}). This description holds for every $c$ in a neighborhood of zero.
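The branched dynamics underlying these constructions can be sampled numerically. Below is a minimal sketch (the choice $p=3$, $q=2$ is an illustrative assumption, as is the realization of $\mathbf{f}_c(z)=z^{p/q}+c$ as the correspondence $(w-c)^q=z^p$): each point has $q$ images, and a chaos-game forward orbit started near $0$ contracts, for small $c$, toward the dual Julia set $\mathcal{J}_c^*$, which lies in a small disk around $0$.

```python
import cmath
import random

def f_branches(z, c, p=3, q=2):
    """All q images of z under the correspondence (w - c)^q = z^p."""
    if z == 0:
        return [c] * q
    r = abs(z) ** (p / q)
    theta = p * cmath.phase(z)
    return [c + r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / q)
            for k in range(q)]

def random_forward_orbit(z0, c, n=200, seed=1):
    """Chaos-game iteration: pick one branch at random at each step."""
    rng = random.Random(seed)
    z = z0
    for _ in range(n):
        z = rng.choice(f_branches(z, c))
    return z

# Near c = 0 the branches contract around the origin, so a random forward
# orbit started at 0 stays in a small disk (consistent with J_c^* being a
# small Cantor set for small c != 0).
z = random_forward_orbit(0.0, 0.05 + 0.05j)
```

Running the chaos game on the inverse branches instead would, in the same spirit, sample the Julia set $\mathcal{J}_c$ as the repeller of the correspondence.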
Let $\pi: (z,w) \mapsto z$ denote the projection onto the first coordinate, and let $\pi_c: S_c \to \mathcal{J}_c$ denote its restriction to $S_c.$ The dynamics of $\mathbf{f}_c$ on $\mathcal{J}_c$ is a topological factor of $f_c: S_c \to S_c,$ in the sense that $\pi(S_c)=\mathcal{J}_c$ and $\pi$ sends two points in $S_c$ related by $f_c$ to two points in $\mathcal{J}_c$ related by $\mathbf{f}_c$: $\pi f_c(x)$ is an image of $\pi(x)$ under $\mathbf{f}_c,$ for every $x\in S_c.$ \begin{thm}[Holomorphic motions] \label{bfl} There is a holomorphic motion $h_c:S_0 \to \mathbb{C}^2$ with base point $c=0$ such that \begin{enumerate} \item $h_c(S_0) =S_c$ and $h_c$ is a conjugacy (homeomorphism) from $f_0:S_0 \to S_0$ to $f_c:S_c \to S_c.$ \item the projected motion $\mathbf{h}_c(z)= \pi_c \circ h_c \circ \pi_0^{-1}(z) $ is a branched holomorphic motion mapping $\mathcal{J}_0=\mathbb{S}^1$ to $\mathcal{J}_c=\mathbf{h}_c(\mathbb{S}^1).$ \item $S_0$ is a solenoid, and $\mathbf{f}_c$ is hyperbolic, for every $c$ in a neighborhood $U$ of $0.$ \end{enumerate} \end{thm} \noindent See \cite{SS17} for the solenoidal description of $S_0$ (indeed, $S_0$ is the Smale--Williams solenoid for certain values of $p$ and $q$). In Figure \ref{bcx}, the motion of $\mathcal{J}_c$ is illustrated in four steps. \subsection{Conformal iterated function systems} Dual Julia sets $\mathcal{J}_c^*$ in the family \eqref{fxs} often appear as limit sets of conformal iterated function systems (CIFS). This phenomenon is easy to explain when $c$ is close to zero, and it conveniently motivates further generalizations. Indeed, using the contraction of $\mathbf{f}_c$ around $z=0$ one can prove that for every $c\neq 0$ close to zero, there is an open disk $D$ such that $D_1=\mathbf{f}_c(D)$ is another disk avoiding zero and compactly contained in $D.$ Since $D_1$ is simply connected, there are $q$ conformal branches $f_j: D_1 \to \mathbb{C}$ such that $\mathbf{f}_c(z)=\{f_j(z) \}_j,$ for every $z\in D_1.$ Moreover, the images $f_j(D_1)$ are disjoint disks.
It follows that $$\mathbf{f}_c(D_1) \subset \mathbf{f}_c(D)=D_1; $$ and the family of maps $f_j: D_1 \to D_1$ is a CIFS. The limit set of this CIFS is $\Lambda=\bigcap_n H^n(D_1),$ where $$H(A) = \bigcup_{j=1}^{q} f_j(A)$$ is the Hutchinson operator, for $A\subset D_1.$ The most important fact derived from this construction is that $\Lambda$ is the closure of the attracting periodic orbits: $\Lambda=\mathcal{J}_c^*.$ This analysis has many generalizations, including holomorphic motions and Hausdorff dimension. Theorem \ref{tyu}, for example, is stated in great generality in \cite{Siq17}. In \cite{Rigidity} we give a general account establishing a rigidity result: $\mathcal{J}_c^*$ is finite at a simple center, but any perturbation of $c$ yields a hyperbolic correspondence whose dual Julia set is a Cantor set. For $c$ close to zero, for example, $\mathcal{J}_c^*$ is either a Cantor set if $c\neq 0$ (indeed, $\Lambda$ comes from a CIFS without overlaps) or, for $c=0,$ the singleton $\mathcal{J}_0^*=\{0\}.$ \Addresses \end{document}
\begin{document} \begin{CJK*}{GBK}{song} \CJKtilde\CJKindent \title{Generation of atomic NOON states via adiabatic passage*} \author{Qi-Gong Liu, Qi-Cheng Wu, Xin Ji\footnote{E-mail: [email protected]}, and Shou Zhang} \affiliation{Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002, People's Republic of China} \begin{abstract} We propose a scheme for generating atomic NOON states via adiabatic passage. In the scheme, a double $\Lambda$-type three-level atom is trapped in a bimodal cavity, and two sets of $\Lambda$-type three-level atoms are translated into and out of two single-mode cavities, respectively. The three cavities, connected by optical fibres, remain in vacuum states throughout. After a series of operations and suitable interaction times, we can obtain arbitrary large-$n$ NOON states of the two sets of $\Lambda$-type three-level atoms in distant cavities by performing a single projective measurement on the double $\Lambda$-type three-level atom. Owing to the adiabatic elimination of the atomic excited states and the use of adiabatic passage, our scheme is robust against atomic spontaneous emission, fibre decay, and cavity photon leakage. The scheme therefore has high fidelity and is feasible with currently available techniques. \\ {\bf{Keywords:}} {NOON states} $\cdot$ {Adiabatic passage} $\cdot$ {Cavity quantum electrodynamics} \end{abstract} \maketitle \section*{1. Introduction} Quantum entanglement, an interesting and attractive phenomenon in quantum mechanics, plays a significant role not only in testing quantum nonlocality, but also in a variety of quantum information tasks \cite{AKE,CHB,CHBS,CHBG,KMH,MHVB,SBZGCG,GV}. Multi-particle entangled states, such as GHZ states, W states, cluster states, and NOON states, are a fundamental resource for quantum information processing (QIP).
The NOON states, an interesting class of multi-particle entangled states, have the form \begin{equation}\label{01} |{\rm{NOON}}\rangle=\frac{1}{\sqrt{2}}(|n,0\rangle+|0,n\rangle), \end{equation} which contains $n$ indistinguishable particles in an equal superposition of all being in one of two possible modes. It is well established that NOON states have significant applications both in lithography \cite{ANB2000,MDA2001,KE2002} and in quantum metrology \cite{MMW2004,RTG2008}. In particular, they can be used to significantly improve the phase sensitivity in quantum interferometry and to beat the classical diffraction limit in quantum lithography \cite{JJB1996,PW2004,JJA2009,RTG2008}. Recently, much attention has been paid to the preparation of NOON states. Many theoretical and experimental schemes have been proposed for generating NOON states via optical components \cite{CKH1987,CCG2001,KE2002,HL2002,KP2002,MMW2004,HHF2007,HC2007,AI2010}, cold atomic ensembles \cite{YAC2010}, superconducting circuits \cite{STM2010,HW2011}, and cavity quantum electrodynamics (QED) \cite{KTK2007,RI2007,ZZR2010,RMD2011,LX2011,XXQ2012,NG2012,YRC2011,KL2013}. However, the success probabilities of the proposals based on linear optical components become very low as the number of particles increases \cite{KP2002}. Optical nonlinear processes have also been suggested for the generation of NOON states, yet they remain difficult to implement experimentally \cite{KTK2007}. The QED system, as a suitable candidate for demonstrating QIP and quantum state engineering \cite{SB1999}, has been studied extensively. Many proposals for preparing NOON states based on QED have been put forward, as mentioned above. However, all of these systems are affected by various external factors, which induce decoherence, reduce the success probability, and may even destroy the entanglement.
Although many kinds of physical systems have been used to avoid or reduce decoherence, the suppression is never perfect, so the preparation of multi-particle entangled NOON states remains a severe challenge, even though great progress has been made in recent decades. On the other hand, adiabatic passage \cite{ASP1993,JRK1989,KB1998,NVV1999}, as a useful tool for realizing QIP, has become increasingly powerful and attractive owing to its robustness against the spontaneous emission of excited states, cavity photon leakage, and some experimental parameter errors. As a result, the technique of adiabatic passage has been extensively studied for entanglement generation \cite{MAT2005,WL2000,RGU2001,CM2003,LBC2007,SYY2008,SBZ2009,XYL2009,YLL2010,YLLF2010,PJS2011}. In this paper, we propose a scheme for generating NOON states of two sets of atoms via adiabatic passage. In the scheme, a double $\Lambda$-type three-level atom is trapped in a bimodal cavity, and two sets of $\Lambda$-type three-level atoms are translated into and out of two single-mode cavities, respectively. After a series of operations and suitable interaction times, arbitrary large-$n$ entangled NOON states of the two sets of $\Lambda$-type three-level atoms in distant cavities can be obtained by performing a single projective measurement on the double $\Lambda$-type three-level atom. Our scheme has the following characteristics: (1) Because the evolution follows the dark state, the cavity modes are unpopulated during the whole interaction process; hence the scheme is robust against cavity decay. (2) Owing to the adiabatic elimination of the atomic excited states, the spontaneous emission rate can safely be regarded as zero. (3) Under certain conditions, the populations of the fibre modes are negligible; thus the decay of the fibres is effectively suppressed.
(4) Taking advantage of adiabatic passage, the scheme is insensitive to small fluctuations of the experimental parameters. (5) The scheme can be used to generate arbitrary large-$n$ NOON states in theory. The rest of the paper is organized as follows. In Sect. 2, the fundamental model and Hamiltonian are introduced. In Sect. 3, we propose a scheme to generate atomic NOON states via adiabatic passage. Finally, we discuss the fidelity of the scheme and summarize the conclusions in Sect. 4. \section*{2. The fundamental model and Hamiltonian} \begin{figure} \caption{The schematic setup for generating atomic NOON states. The two sets of $\Lambda$-type three-level atoms are stored in two transverse optical lattices and translated into and out of cavities 1 and 3, respectively, to interact with the cavity modes and the classical fields. The level configurations of the atoms are shown in Fig.~\ref{fig02}.} \label{fig01} \end{figure} \begin{figure} \caption{The level configurations of the $\Lambda$-type three-level atoms translated into and out of cavities 1 and 3. The states $|s\rangle_{L(R)}$ and $|k\rangle_{L(R)}$ are ground levels and $|r\rangle_{L(R)}$ is an excited level.} \label{fig02} \end{figure} \begin{figure} \caption{The level configuration of the double $\Lambda$-type three-level atom \cite{KL2013}.} \label{fig03} \end{figure} The schematic setup for generating atomic NOON states is shown in Fig.~\ref{fig01}. There are three distant optical cavities connected by two fibres (fibres A and B). The two sets of $\Lambda$-type three-level atoms are stored in two transverse optical lattices and translated simultaneously into and out of cavity 1 and cavity 3, respectively \cite{PX2006,JAS2004}. The level configurations of the atoms interacting with cavities 1 and 3 are shown in Fig.~\ref{fig02}(a) and Fig.~\ref{fig02}(b), respectively. The states $|s\rangle_{L(R)}$ and $|k\rangle_{L(R)}$ are two ground levels and $|r\rangle_{L(R)}$ is an excited level of the atoms interacting with cavity 1 (3).
The transition $|s\rangle_{L(R)}\leftrightarrow|r\rangle_{L(R)}$ is coupled to the mode of cavity 1 (3) with the coupling constant $g_{1(3)}$. The transition $|k\rangle_{L(R)}\leftrightarrow|r\rangle_{L(R)}$ is driven by a classical field with time-dependent Rabi frequency $\Omega_{L(R)}$. The frequency detunings between the atomic transitions $|s\rangle_{L(R)}\leftrightarrow|r\rangle_{L(R)}$, $|k\rangle_{L(R)}\leftrightarrow|r\rangle_{L(R)}$ and the relevant cavity mode and classical field are the same and are denoted by $\Delta_{1(3)}$. They satisfy the corresponding two-photon resonance conditions. The atom in the bimodal cavity 2 is a double $\Lambda$-type three-level atom. The relevant atomic levels and transitions are depicted in Fig.~\ref{fig03}. Such a level structure can be achieved in $^{40}\mathrm{Ca}^{+}$ \cite{SBZ2006,HZW2007,MC2009,KL2013}. Two degenerate ground states $|g_{L}\rangle$ and $|g_{R}\rangle$ correspond to the $^{40}\mathrm{Ca}^{+}$ hyperfine levels $|F=1/2,m=-1/2\rangle$ and $|F=1/2,m=1/2\rangle$ of the level $4S_{1/2}$, while two degenerate excited states $|e_{L}\rangle$ and $|e_{R}\rangle$ correspond to $|F=1/2,m=-1/2\rangle$ and $|F=1/2,m=1/2\rangle$ of the level $4P_{1/2}$. On the other hand, two intermediate metastable states $|f_{L}\rangle$ and $|f_{R}\rangle$ correspond to $|F=3/2,m=-1/2\rangle$ and $|F=3/2,m=1/2\rangle$ of the level $3D_{3/2}$, respectively. The transition $|f_{L(R)}\rangle\leftrightarrow|e_{L(R)}\rangle$ is driven by a classical field $F_{1}$ with Rabi frequency $\Omega_{1}$; $|e_{L(R)}\rangle\leftrightarrow|g_{L(R)}\rangle$ is coupled to the left (right) circularly polarized cavity mode with the coupling constant $g_{2l(r)}$; and the transition $|g_{L(R)}\rangle\leftrightarrow|f_{L(R)}\rangle$ is driven by another classical field $F_{2}$ with Rabi frequency $\Omega_{2}$. The frequency detunings of the cavity modes and the classical field $F_{1}$ from the respective atomic transitions are the same and are denoted by $\Delta_{2}$.
Now we consider the case in which cavity 1 and cavity 3 each contain one atom. After adiabatically eliminating the atomic excited levels in the large-detuning regime, the Hamiltonian of the atom-cavity system under the rotating-wave approximation can be written as ($\hbar=1$) \begin{eqnarray}\label{2} H_{ac}&=&\frac{\Omega_{L}^2}{\Delta_{1}}|k\rangle_{L}\langle{k}|+\frac{\Omega_{1}^2}{\Delta_{2}}|f_{L}\rangle\langle{f_{L}}|+\frac{\Omega_{1}^2}{\Delta_{2}}|f_{R}\rangle\langle{f_{R}}| +\frac{\Omega_{R}^2}{\Delta_{3}}|k\rangle_{R}\langle{k}|\cr &&+\frac{g^2}{\Delta_{1}}a_{1}^{\dag}a_{1}|s\rangle_{L}\langle{s}| +\frac{g^2}{\Delta_{2}}a_{2l}^{\dag}a_{2l}|g_{L}\rangle\langle{g_{L}}| +\frac{g^2}{\Delta_{2}}a_{2r}^{\dag}a_{2r}|g_{R}\rangle\langle{g_{R}}|+\frac{g^2}{\Delta_{3}}a_{3}^{\dag}a_{3}|s\rangle_{R}\langle{s}|\cr &&+\bigg(\frac{g\Omega_{L}}{\Delta_{1}}a_{1}^{\dag}|s\rangle_{L}\langle{k}| +\frac{g\Omega_{1}}{\Delta_{2}}a_{2l}^{\dag}|g_{L}\rangle\langle{f_{L}}| +\frac{g\Omega_{1}}{\Delta_{2}}a_{2r}^{\dag}|g_{R}\rangle\langle{f_{R}}|+\frac{g\Omega_{R}}{\Delta_{3}}a_{3}^{\dag}|s\rangle_{R}\langle k|+\rm H.c.\bigg), \end{eqnarray} where $a_{1(3)}^{\dag}$ and $a_{1(3)}$ are the creation and annihilation operators of cavity 1 (3), and $a_{2l(r)}^{\dag}$ and $a_{2l(r)}$ are the creation and annihilation operators of the left (right) circularly polarized mode of cavity 2. For convenience, we have set $g_{2l(r)}=g_{1(3)}=g$. The first four terms on the right-hand side of Eq. (\ref{2}) represent the atomic level shifts induced by the classical fields. These level shifts can be compensated straightforwardly by nonresonantly coupling auxiliary lasers to the corresponding atomic levels \cite{TP1997}.
Since the cavities are initially in vacuum states, the Hamiltonian $H_{ac}$ can be further simplified to \begin{eqnarray}\label{03} H_{ac}^{'}=\Omega_{Le}(t)a_{1}^{\dag}|s\rangle_{L}\langle{k}|+\Omega_{1e}(t)a_{2l}^{\dag}|g_{L}\rangle\langle{f_{L}}| +\Omega_{1e}(t)a_{2r}^{\dag}|g_{R}\rangle\langle{f_{R}}|+\Omega_{Re}(t)a_{3}^{\dag}|s\rangle_{R}\langle{k}|+\rm H.c., \end{eqnarray} where $\Omega_{Le}(t)=\Omega_{L}g/\Delta, \Omega_{Re}(t)=\Omega_{R}g/\Delta, \Omega_{1e}(t)=\Omega_{1}g/\Delta$ are the effective Rabi frequencies for the corresponding Raman transitions $|k\rangle_{L}\rightarrow|s\rangle_{L}, |k\rangle_{R}\rightarrow|s\rangle_{R},|f_{L(R)}\rangle\rightarrow|g_{L(R)}\rangle$, respectively. Here we have assumed $\Delta_{1,2,3}=\Delta$. In our scheme, the three cavities are connected by two optical fibres A and B. In the short-fibre limit, only one (resonant) mode of each fibre interacts with the corresponding cavity modes. The interaction Hamiltonian of the fibre-cavity system can be approximated as \begin{eqnarray}\label{04} H_{cf}=\eta_{A}[b_{A}(a_{1}^{\dag}+a_{2l}^{\dag})]+\eta_{B}[b_{B}(a_{2r}^{\dag}+a_{3}^{\dag})]+{\rm H.c.}, \end{eqnarray} where $b_{A}$ and $b_{B}$ are the annihilation operators of the resonant modes of fibres A and B, respectively, and the polarizations of the fibre modes A and B have been chosen as the left and right circular polarizations, respectively. $\eta_{A(B)}$ is the coupling strength between the fibre mode A (B) and the corresponding cavity modes. Lastly, in the interaction picture, the total Hamiltonian of the system can be written as \begin{eqnarray}\label{05} H_{eff}&=&H_{ac}^{'}+H_{cf}\cr &=&\Omega_{Le}(t)a_{1}^{\dag}|s\rangle_{L}\langle{k}|+\Omega_{1e}(t)a_{2l}^{\dag}|g_{L}\rangle\langle{f_{L}}| +\Omega_{1e}(t)a_{2r}^{\dag}|g_{R}\rangle\langle{f_{R}}|+\Omega_{Re}(t)a_{3}^{\dag}|s\rangle_{R}\langle{k}|\cr &&+\eta_{A}[b_{A}(a_{1}^{\dag}+a_{2l}^{\dag})]+\eta_{B}[b_{B}(a_{2r}^{\dag}+a_{3}^{\dag})]+\rm H.c..
\end{eqnarray} \section*{3. Generation of the NOON states} In this section, we show how to deterministically prepare the multi-particle NOON states. Initially, the cavity modes and the fibre modes are all in vacuum states, the double $\Lambda$-type three-level atom is prepared in the superposition state $1/\sqrt{2}(|f_{L}\rangle+|f_{R}\rangle)$ by the method of Ref.~\cite{CKL1998}, and the atoms interacting with cavities 1 and 3 are in the state $|s\rangle_{L}|s\rangle_{R}$; thus the initial state of the whole compound system is \begin{eqnarray}\label{06} \frac{1}{\sqrt{2}}(|f_{L}\rangle+|f_{R}\rangle)|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|00\rangle_{f}. \end{eqnarray} For the initial state $|f_{L}\rangle|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|00\rangle_{f}$, governed by Hamiltonian (\ref{05}), the evolution of the system remains in the single-excitation subspace spanned by the basis vectors \begin{eqnarray}\label{7} |\psi_{1}\rangle&=&|f_{L}\rangle|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|00\rangle_{f},\cr |\psi_{2}\rangle&=&|g_{L}\rangle|s\rangle_{L}|s\rangle_{R}|01^{l}0\rangle_{c}|00\rangle_{f},\cr |\psi_{3}\rangle&=&|g_{L}\rangle|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|1^{l}0\rangle_{f},\cr |\psi_{4}\rangle&=&|g_{L}\rangle|s\rangle_{L}|s\rangle_{R}|1^{l}00\rangle_{c}|00\rangle_{f},\cr |\psi_{5}\rangle&=&|g_{L}\rangle|k\rangle_{L}|s\rangle_{R}|000\rangle_{c}|00\rangle_{f}. \end{eqnarray} The Hamiltonian (\ref{05}) has a dark state (i.e., a zero-energy eigenstate of Hamiltonian (\ref{05})) \begin{eqnarray}\label{8} |\Psi_{D1}\rangle=\frac{1}{K_{0}}(\Omega_{Le}\eta_{A}|\psi_{1}\rangle-\Omega_{Le}\Omega_{1e}|\psi_{3}\rangle+\Omega_{1e}\eta_{A}|\psi_{5}\rangle), \end{eqnarray} where $K_{0}=\sqrt{\Omega_{Le}^{2}\eta_{A}^{2}+\Omega_{Le}^{2}\Omega_{1e}^{2}+\Omega_{1e}^{2}\eta_{A}^{2}}$.
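One can check directly that the state of Eq. (\ref{8}) is annihilated by the single-excitation chain $|\psi_{1}\rangle\leftrightarrow|\psi_{2}\rangle\leftrightarrow|\psi_{3}\rangle\leftrightarrow|\psi_{4}\rangle\leftrightarrow|\psi_{5}\rangle$ with couplings $\Omega_{1e},\eta_{A},\eta_{A},\Omega_{Le}$. A minimal numerical sketch (the parameter values below are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

# Couplings along the single-excitation chain
# |psi1> -Omega1e- |psi2> -etaA- |psi3> -etaA- |psi4> -OmegaLe- |psi5>
Omega_Le, Omega_1e, eta_A = 0.8, 1.3, 5.0  # illustrative values

H = np.zeros((5, 5))
for i, coupling in enumerate([Omega_1e, eta_A, eta_A, Omega_Le]):
    H[i, i + 1] = H[i + 1, i] = coupling

# Dark state of Eq. (8): amplitudes on |psi1>, |psi3>, |psi5> only
v = np.array([Omega_Le * eta_A, 0.0, -Omega_Le * Omega_1e, 0.0,
              Omega_1e * eta_A])
v /= np.linalg.norm(v)  # normalization by K0

assert np.allclose(H @ v, 0.0)  # zero-energy eigenstate of the chain
```

The bright states $|\psi_{2}\rangle$ and $|\psi_{4}\rangle$ (cavity-photon components) carry zero amplitude in the dark state, which is the origin of the robustness against cavity decay.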
Under the condition \begin{eqnarray}\label{9} \eta_{A}\gg \Omega_{Le},\Omega_{1e}, \end{eqnarray} the dark state (\ref{8}) reduces to \begin{eqnarray}\label{10} |\Psi_{D1}^{'}\rangle=\frac{1}{\sqrt{\Omega_{Le}^{2}+\Omega_{1e}^{2}}}(\Omega_{Le}|\psi_{1}\rangle+\Omega_{1e}|\psi_{5}\rangle). \end{eqnarray} If pulse shapes are designed such that \begin{eqnarray}\label{11} \lim_{t\to-\infty}\frac{\Omega_{1e}}{\Omega_{Le}}=0,~ \lim_{t\to+\infty}\frac{\Omega_{Le}}{\Omega_{1e}}=0, \end{eqnarray} the initial state $|\psi_{1}\rangle$ of the system is adiabatically transferred to $|\psi_{5}\rangle$. For the initial state $|f_{R}\rangle|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|00\rangle_{f}$, also governed by Hamiltonian (\ref{05}), the evolution of the system remains in the single-excitation subspace spanned by the basis vectors \begin{eqnarray}\label{12} |\psi_{6}\rangle&=&|f_{R}\rangle|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|00\rangle_{f},\cr |\psi_{7}\rangle&=&|g_{R}\rangle|s\rangle_{L}|s\rangle_{R}|01^{r}0\rangle_{c}|00\rangle_{f},\cr |\psi_{8}\rangle&=&|g_{R}\rangle|s\rangle_{L}|s\rangle_{R}|000\rangle_{c}|01^{r}\rangle_{f},\cr |\psi_{9}\rangle&=&|g_{R}\rangle|s\rangle_{L}|s\rangle_{R}|001^{r}\rangle_{c}|00\rangle_{f},\cr |\psi_{10}\rangle&=&|g_{R}\rangle|s\rangle_{L}|k\rangle_{R}|000\rangle_{c}|00\rangle_{f}. \end{eqnarray} The Hamiltonian (\ref{05}) has a dark state \begin{eqnarray}\label{13} |\Psi_{D2}\rangle=\frac{1}{K_{1}}(\Omega_{Re}\eta_{B}|\psi_{6}\rangle-\Omega_{Re}\Omega_{1e}|\psi_{8}\rangle+\Omega_{1e}\eta_{B}|\psi_{10}\rangle), \end{eqnarray} where $K_{1}=\sqrt{\Omega_{Re}^{2}\eta_{B}^{2}+\Omega_{Re}^{2}\Omega_{1e}^{2}+\Omega_{1e}^{2}\eta_{B}^{2}}$. Under the condition \begin{eqnarray}\label{14} \eta_{B}\gg \Omega_{Re},\Omega_{1e}, \end{eqnarray} the dark state (\ref{13}) reduces to \begin{eqnarray}\label{15} |\Psi_{D2}^{'}\rangle=\frac{1}{\sqrt{\Omega_{Re}^{2}+\Omega_{1e}^{2}}}(\Omega_{Re}|\psi_{6}\rangle+\Omega_{1e}|\psi_{10}\rangle).
\end{eqnarray} If pulse shapes are designed such that \begin{eqnarray}\label{16} \lim_{t\to-\infty}\frac{\Omega_{1e}}{\Omega_{Re}}=0,~ \lim_{t\to+\infty}\frac{\Omega_{Re}}{\Omega_{1e}}=0, \end{eqnarray} the initial state $|\psi_{6}\rangle$ of the system is adiabatically transferred to $|\psi_{10}\rangle$. After the above adiabatic processes, the system state \begin{eqnarray}\label{17} |\Psi_{1}\rangle&=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|k\rangle_{L}|s\rangle_{R}+|g_{R}\rangle|s\rangle_{L}|k\rangle_{R})|000\rangle_{c}|00\rangle_{f}\cr &=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|1,0\rangle+|g_{R}\rangle|0,1\rangle)|000\rangle_{c}|00\rangle_{f} \end{eqnarray} can be achieved. Here, $|1,0\rangle (|0,1\rangle)$ denotes $|k\rangle_{L}|s\rangle_{R} (|s\rangle_{L}|k\rangle_{R})$. After that, we turn off the classical field $F_{1}$ and apply the classical field $F_{2}$ with Rabi frequency $\Omega_{2}$ to the double $\Lambda$-type three-level atom to drive the transition $|g_{L}\rangle\rightarrow |f_{L}\rangle$ ($|g_{R}\rangle\rightarrow |f_{R}\rangle$). After an interaction time $\tau$ satisfying $\Omega_{2}\tau=\pi/2$, the state of the whole system becomes \begin{eqnarray}\label{18} \frac{1}{\sqrt{2}}(|f_{L}\rangle|k\rangle_{L1}|s\rangle_{R1}+|f_{R}\rangle|s\rangle_{L1}|k\rangle_{R1})|000\rangle_{c}|00\rangle_{f}. \end{eqnarray} Here, in order to describe the process for preparing the atomic NOON states clearly, we introduce the symbol $|x\rangle_{L(R)i}$ $(x=s,k;\ i=1,2,3,\cdots)$, which denotes that the $i$-th atom passing through cavity 1 (3) is in the state $|x\rangle$. The state in Eq. (\ref{18}) then serves as the new initial state. We turn off the classical field $F_{2}$ and simultaneously turn on the classical field $F_{1}$.
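The condition $\Omega_{2}\tau=\pi/2$ is the usual complete-transfer condition for a resonant Rabi oscillation between $|g_{L(R)}\rangle$ and $|f_{L(R)}\rangle$. A two-level sketch (ignoring all other levels; the value of $\Omega_{2}$ is an arbitrary illustrative choice) confirms the full population transfer up to a phase:

```python
import numpy as np

Omega2 = 1.0
tau = (np.pi / 2) / Omega2            # interaction time: Omega2 * tau = pi/2

# Exact propagator for H = Omega2 * sigma_x in the {|g>, |f>} basis:
# exp(-i H tau) = cos(Omega2 tau) I - i sin(Omega2 tau) sigma_x
theta = Omega2 * tau
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sigma_x

psi = U @ np.array([1.0, 0.0])        # start in |g>
assert np.isclose(abs(psi[1]) ** 2, 1.0)  # all population in |f> (phase -i)
```

The global phase $-i$ acquired by each branch is common to both components of the superposition in Eq. (\ref{17}), so it does not affect the state of Eq. (\ref{18}).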
After repeating the above adiabatic process, choosing a suitable interaction time each time, the whole system evolves successively into the states \begin{eqnarray}\label{19} |\Psi_{2}\rangle&=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|k\rangle_{L1}|k\rangle_{L2}|s\rangle_{R1}|s\rangle_{R2}+|g_{R}\rangle|s\rangle_{L1}|s\rangle_{L2}|k\rangle_{R1}|k\rangle_{R2})|000\rangle_{c}|00\rangle_{f}\cr &=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|2,0\rangle+|g_{R}\rangle|0,2\rangle)|000\rangle_{c}|00\rangle_{f},\cr |\Psi_{3}\rangle&=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|k\rangle_{L1}|k\rangle_{L2}|k\rangle_{L3}|s\rangle_{R1}|s\rangle_{R2}|s\rangle_{R3}+|g_{R}\rangle|s\rangle_{L1}|s\rangle_{L2}|s\rangle_{L3}|k\rangle_{R1}|k\rangle_{R2}|k\rangle_{R3})|000\rangle_{c}|00\rangle_{f},\cr &=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|3,0\rangle+|g_{R}\rangle|0,3\rangle)|000\rangle_{c}|00\rangle_{f},\cr |\Psi_{4}\rangle&=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|4,0\rangle+|g_{R}\rangle|0,4\rangle)|000\rangle_{c}|00\rangle_{f},\cr |\Psi_{5}\rangle&=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|5,0\rangle+|g_{R}\rangle|0,5\rangle)|000\rangle_{c}|00\rangle_{f},\cr &\vdots&\cr |\Psi_{n}\rangle&=&\frac{1}{\sqrt{2}}(|g_{L}\rangle|n,0\rangle+|g_{R}\rangle|0,n\rangle)|000\rangle_{c}|00\rangle_{f},\cr &\vdots& \end{eqnarray} Here, in the notation $|n,0\rangle$ ($|0,n\rangle$) $(n=1,2,3,\cdots)$, $|n\rangle$ denotes that the $n$ atoms that passed through cavity 1 (3) are in the state $|k\rangle_{L(R)}$, and $|0\rangle$ denotes that the $n$ atoms that passed through the other cavity are in the state $|s\rangle_{R(L)}$. Finally, we use a classical field $F_{3}$ to implement the Hadamard operation \begin{eqnarray}\label{20} |g_{L}\rangle\rightarrow\frac{1}{\sqrt{2}}(|g_{L}\rangle+|g_{R}\rangle),\cr |g_{R}\rangle\rightarrow\frac{1}{\sqrt{2}}(|g_{L}\rangle-|g_{R}\rangle), \end{eqnarray} after which the state $|\Psi_{n}\rangle$ becomes \begin{eqnarray}\label{21} \frac{1}{2}[|g_{L}\rangle(|n,0\rangle+|0,n\rangle)+|g_{R}\rangle(|n,0\rangle-|0,n\rangle)].
\end{eqnarray} Now, a single projective measurement is performed on the double $\Lambda$-type three-level atom. If the double $\Lambda$-type three-level atom is detected in the state $|g_{L}\rangle$, Eq. (\ref{21}) collapses to \begin{eqnarray}\label{22} |{\rm{NOON}}\rangle_{+}=\frac{1}{\sqrt{2}}(|n,0\rangle+|0,n\rangle), \end{eqnarray} and if the atom is detected in the state $|g_{R}\rangle$, Eq. (\ref{21}) collapses to \begin{eqnarray}\label{23} |{\rm{NOON}}\rangle_{-}=\frac{1}{\sqrt{2}}(|n,0\rangle-|0,n\rangle). \end{eqnarray} It is worth noting that, whatever the measurement result, a NOON state is always achieved. That is to say, the success probability of our protocol is unity in the ideal case. \section*{4. Discussion and conclusion} In order to generate the NOON states, the conditions of Eq. (\ref{11}) and Eq. (\ref{16}) should be satisfied in our scheme. We therefore design the pulse shapes of the laser fields $\Omega_{L}$, $\Omega_{R}$ and $\Omega_{1}$ as Gaussians \cite{UG1990,HG2004,YLZ2009}, \begin{eqnarray}\label{25} \Omega_{\xi}(t)=\Omega_{0}\exp\Big[-\frac{(t-T/2-t_{\xi})^2}{2\tau^2}\Big], \end{eqnarray} where $\xi=L,R,1$; $\Omega_{0}$ is the amplitude of $\Omega_{\xi}$; $T$ is the total adiabatic time; $\tau$ is the pulse width (set by the laser beam waist); and $t_{\xi}$ is the delay of the pulse with Rabi frequency $\Omega_{\xi}$ applied to the corresponding atoms. \begin{figure} \caption{(a) Time dependence of $\Omega_{\xi}(t)/\Omega_{0}$ of the laser fields. (b) Populations of $|\psi_{1}\rangle+|\psi_{6}\rangle$ and $|\psi_{5}\rangle+|\psi_{10}\rangle$ versus $gt$.} \label{fig04} \end{figure} To illustrate expression (\ref{25}), we plot the time dependence of $\Omega_{\xi}(t)/\Omega_{0}$ of the laser fields in Fig.~\ref{fig04}(a) and the population curves of $|\psi_{1}\rangle+|\psi_{6}\rangle$ and $|\psi_{5}\rangle+|\psi_{10}\rangle$ versus $gt$ in Fig.~\ref{fig04}(b). The system parameters are chosen as $g=1$ GHz, $\Omega_{0}=1.5g$, $T=100/g,\tau=12/g, t_{L}=t_{R}=-15/g, t_{1}=15/g$ and $\Delta=15g$.
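With the quoted parameters, the counterintuitive pulse ordering can be checked directly by evaluating the Gaussian pulses of Eq. (\ref{25}) at the start and end of the evolution (here $t=0$ and $t=T$ stand in for the limits $t\to\mp\infty$; since $\Omega_{\xi e}=\Omega_{\xi}g/\Delta$, ratios of the bare pulses equal ratios of the effective Rabi frequencies):

```python
import math

g = 1.0  # frequency unit (GHz in the text)
Omega0, T, tau = 1.5 * g, 100.0 / g, 12.0 / g
t_L, t_1 = -15.0 / g, 15.0 / g   # Omega_L (and Omega_R) precede Omega_1

def Omega(t, t_xi):
    """Gaussian pulse of Eq. (25)."""
    return Omega0 * math.exp(-((t - T / 2 - t_xi) ** 2) / (2 * tau ** 2))

# Counterintuitive ordering of Eq. (26): Omega_1/Omega_L -> 0 at early
# times and Omega_L/Omega_1 -> 0 at late times.
assert Omega(0.0, t_1) / Omega(0.0, t_L) < 1e-3
assert Omega(T, t_L) / Omega(T, t_1) < 1e-3
```

With these delays the ratio at both ends is $\exp[-(65^2-35^2)/(2\tau^2 g^2)]\approx 3\times10^{-5}$, comfortably satisfying the limits in Eq. (\ref{11}) and Eq. (\ref{16}).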
It can be seen from Fig.~\ref{fig04}(a) that the conditions \begin{eqnarray}\label{26} &&\lim_{t\to-\infty}\frac{\Omega_{1e}}{\Omega_{Le}}=0,~ \lim_{t\to+\infty}\frac{\Omega_{Le}}{\Omega_{1e}}=0,\cr &&\lim_{t\to-\infty}\frac{\Omega_{1e}}{\Omega_{Re}}=0,~ \lim_{t\to+\infty}\frac{\Omega_{Re}}{\Omega_{1e}}=0, \end{eqnarray} are satisfied during the whole evolution, as required for realizing the NOON states. In addition, we can see from Fig.~\ref{fig04}(b) that as $gt\rightarrow+\infty$ the population of $|\psi_{5}\rangle+|\psi_{10}\rangle$ approaches 1, which means the NOON states can be deterministically achieved via adiabatic evolution. Of course, all the above results correspond to the ideal case, because the influences of atomic spontaneous emission and photon leakage out of the cavities and fibres are not taken into account. In practice these effects are inevitable, so we study their impact on the fidelity below. For the purpose of generating atomic NOON states, we have used the large-detuning condition and adiabatically eliminated the excited states of the atoms; as a result, setting the spontaneous emission rate to zero is acceptable. Meanwhile, the effect of photon loss from the cavities can also be safely ignored in our scheme, because the cavity modes are never populated during the whole process thanks to the adiabatic passage along the dark state. The terms related to $|\psi_{3}\rangle$ and $|\psi_{8}\rangle$, however, have been discarded in the above calculations, since we assumed $|\eta_{A}|, |\eta_{B}|\gg |\Omega_{Le}|, |\Omega_{Re}|, |\Omega_{1e}|$ in Eq. (\ref{10}) and Eq. (\ref{15}).
In practice, however, the fibre modes may be excited; in other words, there is a probability that the state $(|\psi_{3}\rangle+|\psi_{8}\rangle)/\sqrt{2}$ is populated, and it may then evolve into $(|g_{L}\rangle|s\rangle_{L1}|s\rangle_{R1}+|g_{R}\rangle|s\rangle_{L1}|s\rangle_{R1})|000\rangle_{c}|00\rangle_{f}/\sqrt{2}$ due to the fibre decays, which would cause an error in generating the NOON states. Taking the photon leakage out of the fibres into account, the fidelity can be written as \begin{eqnarray}\label{24} F=1-\frac{\gamma_{f}}{2}\int_{0}^{T}\left(\frac{\Omega_{Le}^{2}\Omega_{1e}^{2}}{K_{0}^2}+\frac{\Omega_{Re}^{2}\Omega_{1e}^{2}}{K_{1}^2}\right)dt, \end{eqnarray} where $K_{0},K_{1}$ are given in Eq. (\ref{8}) and Eq. (\ref{13}); $\gamma_{f}$ denotes the decay rate of the fibres (here we assume the decay rates of the two fibres are the same); $T$ is again the total adiabatic time. \begin{figure} \caption{The effect of fibre loss $\gamma_{f}$ on the fidelity: (a) fidelity versus $\gamma_{f}/g$ for different $\Omega_{0}$; (b) fidelity versus $\eta/g$ for different $\gamma_{f}$.} \label{fig05} \end{figure} We investigate the effect of fibre loss on the fidelity of obtaining $|\Psi_{1}\rangle$ for different $\Omega_{0}$ values ($\Omega_{0}=0.75g, 1.5g, 2.25g$), as shown in Fig.~\ref{fig05}(a), where we set $\eta_{A,B}=\eta=0.6g$. It is evident from Fig.~\ref{fig05}(a) that the fidelity decreases slightly as $\gamma_{f}/g$ increases. However, even for a relatively large fibre decay rate $\gamma_{f}=0.3g$, we can still obtain NOON states with high fidelity provided $\Omega_{0}\leq1.5g$. Besides, we plot the fidelity as a function of $\eta/g$ for different $\gamma_{f}$ ($\gamma_{f} = 0.05g, 0.1g, 0.2g$) in Fig.~\ref{fig05}(b), with $\Omega_{0}=1.5g$. The fidelity increases gradually with $\eta/g$, though for larger decay rates $\gamma_{f}$ it approaches 1 more slowly. Nevertheless, for $\eta/g\geq0.6$ the fidelity is higher than 0.99 even with $\gamma_{f}=0.2g$.
\begin{figure} \caption{Fidelity of obtaining $|\Psi_{n}\rangle$ versus the particle number $n$ for different fibre decay rates, with $\Omega_{0}=1.5g$.} \label{fig06} \end{figure} In addition, we plot the fidelity of obtaining $|\Psi_{n}\rangle$ versus $n$ for different decay rates of the fibre modes, with $\Omega_{0}=1.5g$, in Fig.~\ref{fig06}. The other parameters are the same as those in Fig.~\ref{fig05}. It can be seen from Fig.~\ref{fig06} that the fidelity decreases as the particle number $n$ increases. Nevertheless, the fidelity can still reach 0.934 even when the particle number $n$ is as large as 10 for $\gamma_{f}=0.2g$, and as large as 20 for $\gamma_{f}=0.1g$. In conclusion, we have proposed a scheme for generating arbitrary large-$n$ NOON states via adiabatic passage. By using a sequence of pulsed laser fields, our atom-cavity-fibre system always remains in the dark states. Throughout the process, the influence of atomic spontaneous emission and of photon leakage out of the fibres and cavities is effectively suppressed via adiabatic elimination of the excited states and adiabatic passage. Furthermore, we estimate the fidelities of the NOON states for different parameters and show that our scheme is insensitive to small fluctuations of the experimental parameters. In short, the present scheme provides an efficient approach to realizing arbitrary large-$n$ atomic NOON states, and we hope it may be useful for quantum information processing in the near future. \begin{center}$\mathbf{Acknowledgments}$\end{center} This work was supported by the National Natural Science Foundation of China under Grant Nos. 11064016 and 61068001. \end{CJK*} \end{document}
\begin{document} \title [Attractors for the viscous Camassa-Holm equation] {Attractors for the viscous Camassa-Holm equation} \author{Milena Stanislavova} \author{Atanas Stefanov} \address{Milena Stanislavova\\ Department of Mathematics \\ University of Kansas\\ 1460 Jayhawk Blvd\\ Lawrence, KS 66045--7523} \email{[email protected]} \address{Atanas Stefanov\\ Department of Mathematics \\ University of Kansas\\ 1460 Jayhawk Blvd\\ Lawrence, KS 66045--7523} \email{[email protected]} \thanks{ First author supported in part by the NSF under grant \# EPS-0236913 and NSF-DMS 0508184. Second author supported in part by NSF-DMS 0300511.} \date{\today} \subjclass[2000]{35Q35, 35Q58, 37K40, 35B41, 35B65, 76B15} \keywords{Viscous Camassa-Holm equation, global solutions, attractors} \begin{abstract} We consider the viscous Camassa-Holm equation subject to an external force, where the viscosity term is given by a second-order differential operator in divergence form. We show that under some mild assumptions on the viscosity term, one has global well-posedness both in the periodic case and in the case of the whole line. In the periodic case, we show the existence of global attractors in the energy space $H^1$, provided the external force is in the class $L^2(I)$. Moreover, we establish an asymptotic smoothing effect, which states that the elements of the attractor are in fact in the smoother Besov space $B^2_{2,\infty}(I)$. Identical results (after adding an appropriate linear damping term) are obtained in the case of the whole line. \end{abstract} \maketitle \section{Introduction} The failure of weakly nonlinear dispersive equations, such as the celebrated Korteweg-de Vries equation, to model interesting physical phenomena like wave breaking, the existence of peaked waves, etc., was a motivation for the transition to full nonlinearity in the search for alternative models for nonlinear dispersive waves (\cite{Whitham}).
The first step in this direction was the derivation of the Green-Naghdi system of equations (see \cite{GN}), which is a Hamiltonian system that models fluid flows in thin domains. Writing the Green-Naghdi equations in Hamiltonian form and using an asymptotic expansion which keeps the Hamiltonian structure, Camassa and Holm (\cite{Camassa}) derived the Camassa-Holm equation in 1993. They obtained the strongly nonlinear equation $$ u_t-\frac{1}{4} u_{xxt}+\frac{3}{2} (u^2)_x-\frac{1}{8} (u_x^2)_x-\frac{1}{4} (u u_{xx})_x=0, $$ which was also found independently by Dai (\cite{Dai}) as a model for nonlinear waves in cylindrical hyperelastic rods and had been originally obtained by Fokas and Fuchssteiner (\cite{FF}) as an example of a bi-Hamiltonian equation. The equation possesses a Lax pair and is completely integrable in terms of the inverse scattering transform, see \cite{Cam1}, \cite{Camassa}. For recent and extensive treatments of the case of solutions on the real line, decaying at infinity, we refer to \cite{con5}, \cite{con1}, \cite{kaup}, while for the periodic case, one should consult \cite{con2}, \cite{con4}. A distinct feature of the Camassa-Holm equation is that it exhibits orbitally stable soliton solutions, which are weak solutions in the shape of peaked waves, \cite{Co2}, \cite{Co1}; see also \cite{lenells} for the most complete description of traveling waves available as of this writing. Camassa and Holm \cite{Camassa} have found that two solitary waves keep their shape and size after interaction, while the ultimate position of each wave is affected only by a phase shift due to the nonlinear interaction; see also \cite{Beals}, \cite{Co3}.
Finally, we mention the presence of breaking waves for this equation (\cite{Camassa}, \cite{Co}, \cite{McKean}, \cite{con3}), as well as the occurrence of global solutions (\cite{Co}, \cite{Co2}, \cite{Co3}, \cite{Co4}). Our main object of investigation will be the initial value problem for the Camassa-Holm equation, which takes the form \begin{equation} \label{eq:1} (CH) \ \ \left|\begin{array}{l} u_t-u_{txx}=2u_x u_{xx}+u u_{xxx}-3 u u_x\\ u(x,0)=u_0(x) \end{array}\right. \end{equation} By reorganizing the terms, one sees that this is equivalent to \begin{equation} \label{eq:10} u_t+\frac{1}{2}\partial_x(u^2)+\partial_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2]=0, \end{equation} where the Helmholtz operator $(1-\partial_x^2)^{-1}$ is defined in the standard way in Section \ref{sec:helm}. Here and for the rest of the paper, denote the nonlinearity of \eqref{eq:10} by $F(u,u_x)= \frac{1}{2}\partial_x(u^2)+\partial_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2]$. The viscous Camassa-Holm equation in one and more dimensions\footnote{These equations are also known as Navier-Stokes $\alpha$ models.} has been studied extensively in recent years. This was done in parallel with the nonviscous one, so we refer to the papers quoted above. In \cite{Stanislavova}, we have shown in particular that for \begin{equation} \label{eq:11} u_t+\frac{1}{2}\partial_x(u^2)+\partial_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2]=\varepsilon \partial_x^2 u, \end{equation} one has a global and unique solution in the energy class $H^1(\mathbf R^1)$. In \cite{Helge}, the authors have taken a more general type of viscosity and forcing terms. They have shown (among other things) global well-posedness for the equation \begin{equation} \label{eq:2} u_t+\frac{1}{2}\partial_x(u^2)+\partial_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2]=\partial_x(a(t,x)\partial_x u)+g(t,x,u), \end{equation} with initial data $u_0\in H^2(\mathbf R^1)$.
Here $a$ is bounded, positive and bounded away from zero, with a number of additional technical assumptions on $a$ and $g$. In this article, we shall consider a similar type of viscosity term $\partial_x(a \partial_x u)$, which is motivated by recent works in conservation laws and which seems to better model the underlying physical situations. We will however stick to the case of {\it time independent} $a=a(x)$, although our arguments work in the time dependent case as well, subject to some minor modifications. This is done to reduce unnecessary technicalities and is also dictated by our interest in the dynamical system (rather than the cocycle) properties of \eqref{eq:2}. It is also our goal to consider the question of global well-posedness of \eqref{eq:2} both on the whole line $\mathbf R^1$ and on any finite interval. As we shall see, the methods that we employ in the two cases are slightly different, but not conceptually so. The main difficulty in the case of $\mathbf R^1$, as usual, will be the non-compactness of the embedding $H^1(\mathbf R^1)\hookrightarrow L^2(\mathbf R^1)$. Let us take a moment to explain our results. First, under standard assumptions\footnote{In fact, for the existence theorem, the smoothness assumptions on $a$ that we work with are considerably less restrictive than those imposed by \cite{Helge}. Moreover, in the proof of well-posedness, it will suffice to assume only $a\in C^1(I)$.} on $a$ and $g$, {\it we show that the dynamical system \eqref{eq:2} has a unique global solution}, whenever $u_0\in H^1(\mathbf R^1)$ or $u_0\in H^1(0, 1)$ respectively. In the case of a finite interval, we are able to show {\it the existence of a global attractor}. This is done under a smallness assumption on the Lipschitz norm of $a$.
In addition, the attractor (which is initially a subset of $H^1(0, 1)$) turns out to be a subset of the smoother space $H^{2-\sigma}(0, 1)$, {\it that is, the semigroup associated with \eqref{eq:2} exhibits an asymptotic smoothing effect}. More precisely, we show that for every $\sigma>0$, the attractor is a bounded subset of $H^{2-\sigma}(0,1)$. For the case of \eqref{eq:2} considered as an integro-differential equation on the whole line $\mathbf R^1$, the existence of an attractor is not clear, although we have not explicitly found a counterexample. The main difficulty is that nothing seems to prevent a low-frequency buildup, which may cause an unrestricted growth of $\|u(t, \cdot)\|_{H^1}$. That is, we expect that for a wide class of initial data $u_0$ and right-hand sides $g$, $\limsup_{t\to \infty} \|u(t, \cdot)\|_{H^1(\mathbf R^1)}=\infty$. This would clearly prevent the existence of an attractor. On the other hand, if one adds an additional damping term (which is actually a relevant physical model, considered in two dimensions by Ilyin and Titi, \cite{Titi5}), one can show the existence of an attractor and boundedness in $H^{2-}$ in the case of the whole line as well. The discussion of that is in Section \ref{sec:conclusions}. \\ Now and throughout the paper, we will require that the operator $A u=-\partial_x(a(x) \partial_x u)$ be coercive. That is, assume that $a(x)$ is $C^2$ real valued, so that for some fixed $\varepsilon>0$ \begin{equation} \label{eq:c1} \varepsilon<a(x) <1/\varepsilon. \end{equation} Note that under these assumptions, we can define the (unbounded) operator $A$ as the Friedrichs extension of the unbounded operator defined by the quadratic form $$ q(u,u) = \int_I a(x) |u'(x)|^2 dx=:\dpr{u}{Au}, $$ with domain $\dot{H}^1(I)$ with the natural boundary conditions, where $I=\mathbf R^1$ or $I=(0,1)$.
That is, we impose the boundary condition $u(0)=u(1)$ in the periodic case and $\lim_{|x|\to \infty} u(x)=0$ in the case of the whole line. In particular, $A$ is a positive, self-adjoint operator and $-A$ generates a strongly continuous semigroup. \\ Our first theorem is a well-posedness type result. \begin{theorem} \label{theo:1} For the viscous Camassa-Holm equation \eqref{eq:2}, assume that $a=a(x)$ satisfies\footnote{For the well-posedness result, it is enough to assume only that $a\in C^1(0,1)$.} \eqref{eq:c1} and $g\in L^\infty_t L^2_x (I)$, where either $I=\mathbf R^1$ or $I=(0,1)$. Then for every initial data $u_0\in H^1(I)$, there is a unique global classical solution $u$ to \eqref{eq:2}. More specifically, $u\in C([0,\infty), H^1(I))$ and for every $0<T_1<T_2<\infty$, $u\in C^2([T_1, T_2], I)$. \end{theorem} Our next result concerns the existence of global attractors for \eqref{eq:2} in the case of a finite interval\footnote{As we have mentioned already, global attractors may not exist in the case $I=\mathbf R^1$.} $I=(0,1)$. For technical reasons, we need to impose a smallness condition $\norm{a'}{L^\infty}\ll\varepsilon$. We do not know whether such a condition is necessary, but it is possible that unless it holds, one gets unbounded orbits for some sets of initial data, thus rendering the statements regarding the existence of attractors false. \begin{theorem} \label{theo:2} Assume that $a$ satisfies \eqref{eq:c1} and $\norm{a'}{L^\infty}\leq \delta \varepsilon$ for some sufficiently small $\delta$. Let $g=g(x)\in L^2(0,1)$ have mean value zero, $\int_0^1 g(x) dx=0$. Then, the viscous Camassa-Holm equation \eqref{eq:2} has a global attractor, when considered as a dynamical system over the finite interval $I=(0,1)$ with initial data in $H^1_0(0,1)=H^1\cap \{f: \int_0^1 f(x) dx=0\}$.
\end{theorem} \noindent {\bf Remark:} The mean value zero condition imposed upon the forcing term $g$ is necessary for the existence of a global attractor; in fact, it is necessary merely for the uniform boundedness of the orbits. Indeed, an elementary computation shows that $ \partial_t \int_0^1 u (t,x) dx= \int_0^1 g(x) dx$, whence $\int_0^1 u(t,x) dx = \int_0^1 u_0(x) dx+ (\int g(x) dx) t$, which is not bounded as $t\to \infty$, unless $\int g(x) dx=0$. Our next theorem addresses precisely the asymptotic smoothing effect of the corresponding dynamics. \begin{theorem} \label{theo:3} The attractor $\mathcal A$ constructed in Theorem \ref{theo:2} is contained in $\cap_{\sigma>0} H^{2-\sigma}(0,1)$. Moreover, for all $\sigma>0$, we have the estimate $$ \sup\limits_{f \in \mathcal A} \norm{f}{H^{2-\sigma}(0,1)}\leq C_\sigma \norm{g}{L^2}. $$ That is, the attractor is a bounded subset of $H^{2-\sigma}$ with bounds depending only on the constants in the problem ($\varepsilon, \delta, \sigma$) and $\norm{g}{L^2}$. In fact, more generally, $\mathcal A$ is a bounded subset of $ B^{2}_{2, \infty}$, with the corresponding estimate \begin{equation} \label{eq:smooth} \sup\limits_{f\in \mathcal A} \sup\limits_k 2^{2k} \norm{P_{2^k} f}{L^2}= \sup\limits_{f\in \mathcal A} \sup\limits_k 2^{2k} \left(\sum\limits_{n=2^{k-1}}^{2^{k+1}} |\hat{f}(n)|^2\right)^{1/2} \leq C\norm{g}{L^2}. \end{equation} \end{theorem} We record that in the case of constant viscosity (i.e. $a=const>0$), all the conditions in Theorem \ref{theo:2} and Theorem \ref{theo:3} are satisfied. For the case of the whole line, consider the Camassa-Holm equation with an additional damping factor, as considered in two dimensions by Ilyin-Titi, \cite{Titi5}. Namely, let $\mu>0$ and consider \begin{equation} \label{e:1} u_t +\mu u + \frac{1}{2}\partial_x(u^2)+\partial_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2] =\partial_x(a(x) u_x)+g(x) \end{equation} with initial data $u(0,x)=f$.
We have the following \begin{theorem} \label{theo:9} Assume that $a$ satisfies \eqref{eq:c1} and either $\norm{a'}{L^\infty}<\delta\varepsilon$ for some sufficiently small $\delta$ {\it or} $a''(x)\leq 2 a(x)$. Then the equation \eqref{e:1} is globally well-posed in $H^1(\mathbf R^1)$. It also has a global attractor $\mathcal A$ and the semigroup has the smoothing property: $\mathcal A$ is a bounded subset of $B^2_{2, \infty}$. More precisely, $$ \sup\limits_{f\in \mathcal A}\sup_k 2^{2k} \norm{P_{2^k} f}{L^2}\leq C\norm{g}{L^2}. $$ \end{theorem} \noindent {\bf Remark} \begin{itemize} \item If $a=const>0$, all the conditions in Theorem \ref{theo:9} are met and the results hold. \item In contrast with Theorem \ref{theo:2}, note that we can impose the structural condition $a''(x)\leq 2 a(x)$, instead of the smallness of $\norm{a'}{L^\infty}$. \end{itemize} \noindent The paper is organized as follows. In Section \ref{sec:prelim}, we collect some useful facts from Fourier analysis and the theory of attractors, both in the finite and in the infinite domain setting. In Section \ref{sec:90}, we first show local well-posedness of the Cauchy problem for the viscous Camassa-Holm equation, by using some elementary properties of the $C_0$ semigroup generated by $A=-\partial_x(a(\cdot) \partial_x \cdot)$. This is done by a contraction mapping principle and yields a valid solution only for short times. We then derive\footnote{see Section \ref{sec:smoothness}} some additional $H^2$ smoothness estimates in order to exploit the underlying $H^1$ conservation law. In Section \ref{sec:global}, we show that $H^1$ {\it a priori} estimates hold on {\it any time interval} $(0,T)$ and thus global well-posedness is established. In Section \ref{sec:attractor_1}, we establish the existence of global attractors in the case of a finite interval. This is done by verifying the point dissipativeness and the uniform boundedness of the dynamics.
The uniform\footnote{Here uniform means uniformity with respect to a given bounded sequence of initial data.} vanishing of the high-frequency mass of the solutions, which is needed for the existence of attractors, is addressed in Section \ref{sec:vanish}. Incidentally, one obtains the smoothing estimate \eqref{eq:smooth}. Finally, in Section \ref{sec:conclusions}, we prove Theorem \ref{theo:9}. The methods here are quite similar to the ones used in the finite interval case. \\ \noindent {\bf Acknowledgement:} We are grateful to our colleague Bixiang Wang for numerous discussions regarding the abstract theory of attractors and their properties. \section{Preliminaries} \label{sec:prelim} In this section, we collect some useful (generally well-known) facts. We start with the definition of the Fourier transform in the whole space and in the periodic setting. \subsection{The Fourier transform and the Helmholtz operator $(1-\partial_x^2)^{-1}$} \label{sec:helm} The Fourier transform on $\mathbf R^1$ is (initially) defined on the functions in the Schwartz class $\mathcal S$ by $$ \hat{f}(\xi)=\int\limits_{\mathbf R^1} f(x) e^{-2\pi i x\xi} dx. $$ We record the inverse Fourier transform $$ f(x)=\int\limits_{\mathbf R^1} \hat{f}(\xi) e^{2\pi i x\xi } d\xi, $$ and Plancherel's identity $\norm{f}{L^2}= \|\hat{f}\|_{L^2}$ for all functions $f\in L^2$. On the interval $[0,1]$, we may introduce the Fourier transform $L^2([0,1])\to l^2(\mathcal Z)$, by setting $f\to \{a_k\}_{k\in\mathcal Z}$, where $$ a_k= \int\limits_0^1 f(x) e^{-2\pi i k x} dx. $$ The inverse Fourier transform in that case is the familiar Fourier expansion $$ f(x)=\sum\limits_{k \in \mathcal Z} a_k e^{2\pi i k x}, $$ and Plancherel's identity is $\norm{f}{L^2([0,1])}=\norm{\{a_k\}}{l^2}$. Note that here and for the rest of the paper $L^2([0,1])$ is the space of square integrable functions with period one.
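As a quick sanity check of these formulas (an illustration added here, not part of the paper), one can verify numerically that for a trigonometric polynomial the FFT recovers the coefficients $a_k=\int_0^1 f(x)e^{-2\pi i k x}dx$ and that Plancherel's identity holds:

```python
import numpy as np

# Illustrative check: for a band-limited periodic f, the Riemann sum
# (1/N) * sum_j f(x_j) e^{-2 pi i k x_j} = np.fft.fft(f)/N is exactly
# a_k = int_0^1 f(x) e^{-2 pi i k x} dx, and Plancherel's identity
# ||f||_{L^2([0,1])} = ||{a_k}||_{l^2} holds.
N = 64
x = np.arange(N) / N
# Test function with known coefficients: a_3 = 2, a_{-5} = -i.
f = 2.0 * np.exp(2j * np.pi * 3 * x) - 1j * np.exp(-2j * np.pi * 5 * x)

a = np.fft.fft(f) / N                      # a[k] for k >= 0, a[N+k] for k < 0

l2_f = np.sqrt(np.mean(np.abs(f) ** 2))    # L^2([0,1]) norm via quadrature
l2_a = np.linalg.norm(a)                   # l^2 norm of the coefficients
print(a[3], a[N - 5], l2_f, l2_a)
```

Both norms equal $\sqrt{|2|^2+|-i|^2}=\sqrt5$, as Plancherel's identity predicts.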
The Helmholtz operator $(1-\partial_x^2)^{-1}$ is the inverse of the operator $(1-\partial_x^2)$. It is well-defined on both $L^2(\mathbf R^1)$ and $L^2(0,1)$. For (nice decaying) functions $f:\mathbf R^1\to\mathcal C$, it may be defined via the Fourier transform by $\widehat{(1-\partial_x^2)^{-1} f}(\xi)= (1+4\pi^2|\xi|^2)^{-1} \hat{f}(\xi)$ or, more explicitly, via $$ (1-\partial_x^2)^{-1} f(x)=e^{-|\cdot|}/2 *f(x) =\frac{1}{2}\int\limits_{-\infty}^\infty e^{-|x-y|}f(y) dy. $$ For the case of a finite interval, we consider only the interval $(0,1)$ for notational convenience. We remark that the results in the general case can be recovered by a simple change of variables in the equation. Thus, for a function $f:(0,1)\to \mathcal C$ given by its Fourier expansion $ f(x)= \sum_k a_k e^{2\pi i k x}$, set $$ (1-\partial_x^2)^{-1} f (x)= \sum\limits_k \frac{a_k}{1+4\pi^2 k^2} e^{2\pi i k x}. $$ Next, we verify that, at least formally, the nonviscous Camassa-Holm equation \eqref{eq:10} satisfies the conservation law $$ \int_I (u^2(t,x) + u_x^2(t,x)) dx=const. $$ \subsection{Conservation law for \eqref{eq:10}} More specifically, let $I(t)= \int_I (u^2(t,x) + u_x^2(t,x)) dx$. If $u$ is a solution, which is sufficiently smooth and decaying\footnote{This needs justification in each instance, if one takes $u$ to be a solution of \eqref{eq:10}.}, we may take the time derivative $I'(t)$ to get $$ I'(t)= - 2 \int_I (u F(u, u_x) + u_x \partial_x [F(u, u_x)]) dx. $$ \begin{lemma} \label{le:5} Let $u\in C^2(\mathbf R^1)$, with square integrable second derivative. Then $$ \int_I (u F(u, u_x) + u_x \partial_x [F(u, u_x)]) dx=0. $$ \end{lemma} \begin{proof} This is a simple, although lengthy, computation. Note that in what follows, all the boundary terms are zero, either because $\lim_{|x|\to \infty} u=0$ (in the case $I=\mathbf R^1$), or by the periodic boundary conditions.
We have \begin{eqnarray*} & & \int_I (u F(u, u_x) + u_x \partial_x [F(u, u_x)]) dx= \\ & &= \int_I u \left(\frac{1}{2}\partial_x(u^2)+\partial_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2]\right) dx + \\ & & + \int_I u_x \left(\frac{1}{2}\partial^2_x(u^2)+\partial^2_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2]\right) dx \end{eqnarray*} We start with the terms on the second line above. We have by integration by parts \begin{eqnarray*} & & \frac{1}{2} \int_I u_x \partial^2_x(u^2) dx= - \int_I u_{xx} u_x u dx = -\frac{1}{2} \int_I \partial_x[u_x^2] u dx= \frac{1}{2} \int_I u_x^3 dx. \end{eqnarray*} For the second term, use that $\partial_x^2(1-\partial_x^2)^{-1}= -Id +(1-\partial_x^2)^{-1}$ to get \begin{eqnarray*} & & \int_I u_x \partial^2_x(1-\partial_x^2)^{-1} [u_x^2/2+u^2] dx = - \int_I u_x[u_x^2/2+u^2] dx + \\ & &+ \int_I u_x(1-\partial_x^2)^{-1}[u_x^2/2+u^2]dx = - \frac{1}{2} \int_I u_x^3 dx- \int_I u\partial_x (1-\partial_x^2)^{-1}[u_x^2/2+u^2]dx, \end{eqnarray*} where we have used that $\int_I u_x u^2 dx=0$. Putting everything together yields the Lemma. \end{proof} \subsection{Littlewood-Paley projections and function spaces} Fix a smooth, even function $\psi\in C_0^\infty(\mathbf R^1)$, so that $0\leq \psi\leq 1$, $\psi(\xi)=1$, whenever $|\xi|\leq 1$, $\psi$ is decreasing in $(0, \infty)$ and $\psi(\xi)=0$ for all $|\xi|\geq 3/2$. Let also $\varphi(\xi):=\psi(\xi)-\psi(2\xi)$. Clearly $\varphi(\xi)=1$ for all $3/4\leq |\xi|\leq 1$ and $\textup{supp}\, \varphi\subset \{1/2\leq |\xi|\leq 3/2\}$. For every integer $k$, define the {\it Littlewood-Paley operators}, acting on test functions $f\in \mathcal S({\mathbf R}^n)$ via \begin{eqnarray*} & & \widehat{P_{<2^k} f}(\xi):=\psi(2^{-k} \xi) \hat{f}(\xi),\\ & & \widehat{P_{2^k} f}(\xi):=\varphi(2^{-k} \xi) \hat{f}(\xi), \end{eqnarray*} Clearly the kernels of these operators are given by $2^{kn} \hat{\psi}(2^k \cdot)$ and $2^{kn} \hat{\varphi}(2^k \cdot)$ respectively and thus commute with differential operators.
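The cancellation in Lemma \ref{le:5} can also be observed numerically. The sketch below (an added illustration, with an arbitrarily chosen smooth periodic test function) discretizes $\partial_x$ and $(1-\partial_x^2)^{-1}$ spectrally on $[0,1]$ and evaluates $\int_0^1 (u F(u,u_x)+u_x\partial_x F(u,u_x))\,dx$; since every integration by parts in the proof holds exactly for the spectral discretization, the result vanishes to machine precision.

```python
import numpy as np

# Spectral sanity check of Lemma le:5 on the periodic interval [0,1]:
# int ( u*F(u,u_x) + u_x * d/dx F(u,u_x) ) dx = 0, with
# F(u,u_x) = (1/2) d/dx(u^2) + d/dx (1 - d^2/dx^2)^{-1} [u_x^2/2 + u^2].
# The test function u below is an arbitrary smooth choice, not from the paper.
N = 256
x = np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)              # integer frequencies

def dx(f):
    """Spectral derivative d/dx on the torus."""
    return np.fft.ifft(2j * np.pi * k * np.fft.fft(f)).real

def helmholtz_inv(f):
    """(1 - d^2/dx^2)^{-1} via its Fourier symbol (1 + 4 pi^2 k^2)^{-1}."""
    return np.fft.ifft(np.fft.fft(f) / (1.0 + 4.0 * np.pi**2 * k**2)).real

u = np.sin(2 * np.pi * x) + 0.3 * np.cos(4 * np.pi * x)
ux = dx(u)
F = 0.5 * dx(u**2) + dx(helmholtz_inv(0.5 * ux**2 + u**2))

integral = np.mean(u * F + ux * dx(F))        # grid mean = int_0^1 (...) dx
print(abs(integral))
```

With $N=256$ and frequencies of $u$ this low there is no aliasing, so the grid quadrature is exact for the trigonometric polynomials involved and the printed value is at round-off level.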
It is also easy to see that, since $\|2^{kn} \hat{\psi}(2^k \cdot)\|_{L^1}=C\|\hat{\psi}\|_{L^1}$ and similarly for the other kernel, {\it $P_{<2^k}, P_{2^k}$ are bounded on the $L^p$ spaces for all $1\leq p\leq \infty$}, with bounds independent of $k$. The Calder\'on commutator theorem states that the commutator \\ $[P_{2^k}, a] f:=P_{2^k}(a f)- a P_{2^k} f$ acts as a smoothing operator of order one\footnote{A similar statement holds for the commutator $[P_{<2^k}, a]$ as well.}. More precisely, we shall need (standard) estimates of the form \begin{eqnarray*} & & \norm{[P_{2^k}, a] f}{L^r}\leq C 2^{-k} \norm{\nabla a}{L^q}\norm{f}{L^p},\\ & & \norm{[P_{<2^k}, a] f}{L^r}\leq C 2^{-k} \norm{\nabla a}{L^q}\norm{f}{L^p},\\ & & \norm{[P_{2^k}, a]\nabla f}{L^r}\leq C \norm{\nabla a}{L^q}\norm{f}{L^p},\\ & & \norm{[P_{<2^k}, a]\nabla f}{L^r}\leq C \norm{\nabla a}{L^q}\norm{f}{L^p}, \end{eqnarray*} whenever $1\leq r, q, p\leq \infty$ and $1/r=1/p+1/q$. This whole theory can be developed for the case of a finite interval, with some notable differences, some of which we discuss below. The Littlewood-Paley operators acting on $L^2([0,1])$ are defined via $$ P_{\leq N} f (x) = \sum\limits_{k: |k|\leq N} a_k e^{2\pi i k x}, $$ that is, $P_{\leq N}$ truncates the terms in the Fourier expansion with frequencies $k: |k|> N$. Clearly $P_{\leq N}$ is a projection operator. More generally, we may define for all $0\leq N<M\leq \infty$ $$ P_{N\leq \cdot\leq M}f(x) = \sum\limits_{k: N\leq |k|\leq M} a_k e^{2\pi i k x}. $$ It is an elementary exercise in orthogonality that, whenever $[N_1, M_1]\cap [N_2, M_2]=\emptyset$, then $\int_0^1 P_{N_1\leq \cdot\leq M_1}f(x)P_{N_2\leq \cdot\leq M_2}g(x)dx=0$. For products of three functions, we have the following \begin{lemma} \label{le:80} Let $f, g, h:[0,1]\to \mathcal C$, with Fourier coefficients $\{f_n\}, \{g_n\}, \{h_n\}$ respectively.
Then $$ \int_0^1 f(x) g(x) h(x) dx=\sum\limits_{m, k\in\mathcal Z} f_m g_{-m-k} h_k. $$ As a consequence, for every $N\gg1$, \begin{equation} \label{eq:98} \int_0^1 (P_{>N} f(x)) g(x) (P_{<N/2} h(x)) dx = \int_0^1 (P_{>N} f(x)) (P_{>N/2} g(x)) (P_{<N/2} h(x)) dx. \end{equation} \end{lemma} \begin{proof} The proof is based on the Fourier expansion and the fact that $\int_0^1 e^{2\pi i n x} dx =\delta_n$. More specifically, \begin{eqnarray*} & & \int_0^1 f(x) g(x) h(x) dx = \sum\limits_{m,n, k\in\mathcal Z} f_m g_{n} h_k \int\limits_0^1 e^{2\pi i (m+n+k) x} dx = \\ & & = \sum\limits_{m,n, k\in\mathcal Z} f_m g_{n} h_k \delta_{m+n+k}= \sum\limits_{m, k\in\mathcal Z} f_m g_{-m-k} h_k. \end{eqnarray*} For \eqref{eq:98}, observe that if $|m|>N$ and $|k|<N/2$, then $|-m-k|>N/2$. \end{proof} Our next lemma is a well-known Sobolev embedding type result for the spaces $L^q(0,1)$. We state it in the form of the {\it Bernstein inequality}, since this is what we use later on. One can also formulate a version in terms of the Sobolev spaces defined below. \begin{lemma} \label{le:bern} Let $N$ be an integer and $f:[0,1]\to \mathcal C$. Then, for every $1\leq p\leq 2 \leq q\leq \infty$, $$ \norm{P_{<N} f}{L^q}\leq N^{1/p-1/q} \norm{f}{L^p}. $$ \end{lemma} \begin{proof} First, we establish the lemma for $p=2$, $q=\infty$. Let $f=\sum_n f_n e^{2\pi i n x}$. Then $$ \norm{P_{<N} f}{L^\infty}\leq \sum_{n :|n|<N} |f_n|\leq N^{1/2} (\sum_{n :|n|<N} |f_n|^2)^{1/2}\leq N^{1/2} \norm{f}{L^2}. $$ Since, by Plancherel's theorem, $P_{<N}$ is bounded on $L^2$ with norm one, it follows by interpolation that $\norm{P_{<N}}{L^2\to L^q}\leq N^{1/2-1/q}$. The rest of the range follows by duality. \end{proof} We now introduce some function spaces.
Take \begin{eqnarray*} & & \dot{H}^s(\mathbf R^1) = \{f:{\mathbf R}^1\to \mathcal C: (\int_{\mathbf R^1} |\hat{f}(\xi)|^2|\xi|^{2s}d\xi)^{1/2}< \infty\}, \\ & & H^s(\mathbf R^1) = L^2(\mathbf R^1)\cap \dot{H}^s(\mathbf R^1), \\ & & \dot{H}^s(0,1) = \{f:(0,1)\to \mathcal C: (\sum\limits_{k\in \mathcal Z}|a_k|^2|k|^{2s} )^{1/2}< \infty\}, \\ & & H^s((0,1)) = L^2(0,1)\cap \dot{H}^s(0,1). \end{eqnarray*} By Plancherel's theorem, $\norm{P_{2^k} f}{\dot{H}^s}\sim 2^{ks} \norm{P_{2^k} f}{L^2}$ and $\norm{P_{>2^k} f}{\dot{H}^s}\gtrsim 2^{ks} \norm{P_{2^k} f}{L^2}$. \\ {\bf Remark: } We note that while the Littlewood-Paley operators acting on functions in $L^2({\mathbf R}^n)$ enjoy the Calder\'on commutator estimates, {\it such commutator estimates fail for Littlewood-Paley operators acting on functions in $L^2(I)$}. We will also frequently use the fractional differentiation operators of order $s$, $-\infty<s<\infty$, defined via $$ \widehat{|\partial|^s f}(\xi):= |\xi|^s \hat{f}(\xi) $$ in the case of the whole line and $$ |\partial|^s (\sum\limits_k a_k e^{2\pi i k x}):= \sum\limits_{k\neq 0} a_k |k|^s e^{2\pi i k x} $$ in the case $I=(0,1)$. We would like to point out that $|\partial|^s:H^s_0\to L^2_0$ is an isometry and, in general, $$ \norm{|\partial|^{s_2} u}{\dot{H}^{s_1}}= \norm{u}{\dot{H}^{s_1+s_2}}. $$ As a corollary of Lemma \ref{le:bern}, we have that for all $\sigma>0$, there is $C_\sigma$, so that \begin{equation} \label{eq:sob} \norm{u}{L^\infty(0,1)}\leq |\int_0^1 u(x) dx|+C_\sigma\norm{u}{\dot{H}^{1/2+\sigma}}.
\end{equation} \subsection{Kato-Ponce Lemma in the finite interval case} Recall the Kato-Ponce product estimate: for all $s\geq 0$ and $1\leq p, q_1, r_1, q_2, r_2 \leq \infty$ with $1/p=1/q_1+1/r_1=1/q_2+1/r_2$, $$ \norm{|\partial|^s (f g)}{L^p({\mathbf R}^n)}\leq C_s(\norm{|\partial|^s f}{L^{q_1}({\mathbf R}^n)}\norm{g}{L^{r_1}({\mathbf R}^n)}+ \norm{f}{L^{r_2}({\mathbf R}^n)}\norm{|\partial|^s g}{L^{q_2}({\mathbf R}^n)}). $$ Unfortunately, we do not know of an analogue of such a fractional differentiation product estimate for the case of a finite interval. However, when $s$ is an integer, we have a similar, if somewhat weaker, estimate. \begin{lemma} \label{le:KP} Let $s\geq 0$ be an integer and $1\leq p, q_1, r_1, q_2, r_2 \leq \infty$ with $1/p=1/q_1+1/r_2=1/q_2+1/r_1$. Then for any $X\subset {\mathbf R}^n$, $$ \norm{\partial^s (f g)}{L^p(X)}\leq C_s(\norm{\partial^s f}{L^{q_1}(X)}+ \norm{f}{L^{q_2}(X)})( \norm{\partial^s g}{L^{r_1}(X)}+\norm{g}{L^{r_2}(X)}). $$ \end{lemma} \begin{proof} Recall the Leibniz differentiation formula $$ \partial^s (f g)= \sum\limits_{s_1=0}^s \frac{s!}{s_1! (s-s_1)!} \partial^{s_1} f \partial^{s-s_1} g, $$ and Young's inequality $ab \leq a^p/p+b^q/q$ for any $1<p,q<\infty$ with $1/p+1/q=1$. We have $$ \norm{\partial^s (f g)}{L^p(X)}\leq 2^s \sup\limits_{0\leq s_1\leq s} \norm{\partial^{s_1} f \partial^{s-s_1} g}{L^p}. $$ Thus, it will suffice to show that for any $s_1\in[0,s]$, $$ \norm{\partial^{s_1} f \partial^{s-s_1} g}{L^p}\leq C_s (\norm{\partial^s f}{L^{q_1}(X)}+ \norm{f}{L^{q_2}(X)})( \norm{\partial^s g}{L^{r_1}(X)}+\norm{g}{L^{r_2}(X)}). $$ Fix $s_1$ and denote $\alpha=s_1/s\in[0,1]$. If $\alpha=0$ or $\alpha=1$, an application of H\"older's inequality gives the result. If $\alpha\in(0,1)$, then in fact $1/s\leq \alpha\leq 1-1/s$. \\ Let $\tilde{q}, \tilde{r}$ be determined by \begin{eqnarray*} & & \tilde{q}^{-1}=\alpha q_1^{-1}+(1-\alpha) q_2^{-1},\\ & & \tilde{r}^{-1}=(1-\alpha)r_1^{-1}+\alpha r_2^{-1}.
\end{eqnarray*} Clearly $\tilde{q}^{-1}+\tilde{r}^{-1}=p^{-1}$, and by H\"older's inequality and the convexity of the norms \begin{eqnarray*} & & \norm{\partial^{s_1} f \partial^{s-s_1} g}{L^p}\leq \norm{\partial^{s_1} f }{L^{\tilde{q}}} \norm{\partial^{s-s_1} g }{L^{\tilde{r}}}\leq \norm{\partial^{s} f }{L^{q_1}}^\alpha \norm{f}{L^{q_2}}^{1-\alpha} \norm{\partial^{s} g }{L^{r_1}}^{1-\alpha} \norm{g }{L^{r_2}}^{\alpha}. \end{eqnarray*} By Young's inequality, the last expression is bounded by $$ C_\alpha(\norm{\partial^{s} f }{L^{q_1}}+\norm{f}{L^{q_2}})(\norm{\partial^{s} g }{L^{r_1}}+ \norm{g }{L^{r_2}}), $$ where $C_\alpha$ may be taken to be $2 \max(\alpha^{-2}, (1-\alpha)^{-2})\leq 2 s^2$. \end{proof} \subsection{Attractors} \label{sec:attractors} In this section, we offer some basic definitions and elementary properties of attractors. For the initial value problem for a well-posed evolution equation, $$ \frac{d}{dt} u(t)=F(u(t)), \ \ \ u(0)=u_0, $$ defined on a Hilbert space $H$, consider the solution semigroup $\{S(t)\}_{t \geq 0}$ defined by $S(t)u_0=u(t)$. $S(t)$ maps $H$ into $H$, satisfies the semigroup properties $$S(t+s)=S(t) S(s), \quad S(0)=Id,$$ and is continuous in the initial data for each $t \geq 0$. \begin{definition} Let $S(t)$ be a $C_0$ semigroup, acting on a normed space $H$. Then \begin{itemize} \item $S(t)$ is called {\it point dissipative} if there is a bounded set $B\subset H$ such that for any $u_0 \in H$, $S(t)u_0 \in B$ for all sufficiently large $t \geq 0$. That is, $$ \sup\limits_{u_0\in B}\limsup_{t\to \infty} \norm{S(t) u_0}{H}<\infty. $$ \item $S(t)$ is called {\it asymptotically compact} in $H$ if $S(t_n)u_n$ has a convergent subsequence for any {\it bounded sequence} $u_n$ whenever $t_n \to +\infty$. \end{itemize} \end{definition} \noindent Our next definition gives a precise meaning to the notion of attractor.
\begin{definition} $\mathcal A \subset H$ is called a global attractor for the evolution equation if it is compact, invariant ($S(t) \mathcal A=\mathcal A, \ t \geq 0$) and attracts every bounded set $X$ ($S(t) X \to \mathcal A, \ t \to \infty$). \end{definition} A classical result in dynamical systems is that an attractor exists if $\{S(t)\}_{t \ge 0}$ is both point dissipative and asymptotically compact. Next, we recall the Riesz-Rellich criterion for precompactness (see Theorem XIII.66, p. 248, \cite{SimoniV}). \begin{proposition} \label{rrthm} Let $S \subseteq L^p({\mathbf R}^n)$ with $1 \le p <\infty$. Then $S$ is precompact in $L^p({\mathbf R}^n)$ if and only if the following conditions are satisfied: (1) $S$ is bounded in $L^p({\mathbf R}^n)$; (2) $f \to 0$ in the $L^p$ sense at infinity uniformly in $S$, i.e., for any $\epsilon$, there is a bounded set $K \subset {\mathbf R}^n$ so that for all $f \in S$: $ \int_{{\mathbf R}^n \backslash K} |f(x)|^p dx \le \epsilon^p; $ (3) $f(\cdot -y) \to f$ uniformly in $S$ as $y \to 0$, i.e., for any $\epsilon$, there is $\delta$ so that $f\in S$ and $|y|<\delta$ imply that $ \int_{{\mathbf R}^n} |f(x-y) -f(x)|^p dx \le \epsilon^p. $ \end{proposition} As shown in \cite{Stanislavova}, \cite{wang} (see also Proposition 3 in \cite{Stanislavova1}), we may replace the difficult to verify condition $(3)$ in the Riesz-Rellich criterion above by an equivalent condition, which basically says that the ($L^2$ or the $H^1$) mass of the high-frequency component has to go uniformly to zero.
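As a concrete illustration of this high-frequency criterion (a numerical sketch, not part of the argument), the tail mass $\|P_{>N} f\|_{L^2}$ of a fixed function is controlled by $(2\pi N)^{-1}\|f'\|_{L^2}$, which is exactly why it vanishes uniformly over $H^1$-bounded sets; the check below uses discrete Fourier coefficients of a smooth $1$-periodic test function:

```python
import numpy as np

M = 1024
x = np.arange(M) / M
f = np.exp(np.cos(2 * np.pi * x))        # smooth 1-periodic test function
fh = np.fft.fft(f) / M                   # approximate Fourier coefficients a_n
n = np.fft.fftfreq(M, d=1.0 / M)         # integer frequencies n

# ||f'||_{L^2} via Plancherel: f' has coefficients 2*pi*i*n*a_n
fx_l2 = np.sqrt(np.sum((2 * np.pi * np.abs(n) * np.abs(fh)) ** 2))

for N in (4, 8, 16, 32):
    tail = np.sqrt(np.sum(np.abs(fh[np.abs(n) > N]) ** 2))  # ||P_{>N} f||_{L^2}
    # Chebyshev in frequency: tail <= ||f'||_{L^2} / (2*pi*N)
    assert tail <= fx_l2 / (2 * np.pi * N)
```

The same coefficient-level bound, applied uniformly over a family with $\sup\|f'\|_{L^2}<\infty$, gives the uniform vanishing of high frequencies on $H^1$-bounded sets.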
The exact formulation is \begin{proposition} \label{prop:3} Assume that \begin{itemize} \item $\sup\limits_n \norm{u_n(t_n,\cdot)}{H^1({\mathbf R}^n)} \leq C $ \item $\limsup\limits_n \norm{u_n(t_n,\cdot)}{H^1(|x|>N)} \to 0 \ \textup{as} \ N \to \infty$ \item $\limsup\limits_n \norm{P_{>N}u_n(t_n,\cdot)}{H^1({\mathbf R}^n)} \to 0 \ \textup{as} \ N \to \infty$ \end{itemize} Then the sequence $\{u_n(t_n,\cdot)\}$ is precompact in $H^1({\mathbf R}^n)$. The same results hold if one replaces $H^1({\mathbf R}^n)$ by $L^2({\mathbf R}^n)$ everywhere in the statement above. \end{proposition} In the case of finite domains, the second condition is of course automatically satisfied and we have \begin{proposition} \label{prop:4} For the sequence $\{u_n\}\subset H^1(0,1)$, assume \begin{itemize} \item $\sup\limits_n \norm{u_n(t_n,\cdot)}{H^1(0,1)} \leq C $ \item $\limsup\limits_n \norm{P_{>N} u_n(t_n,\cdot)}{H^1(0,1)}\to 0 \ \textup{as}\ N \to \infty$. \end{itemize} Then the sequence $\{u_n(t_n,\cdot)\}$ is precompact in $H^1(0,1)$. \end{proposition} We reproduce the short proof of Proposition \ref{prop:4}. \begin{proof} By Plancherel's theorem, it suffices to show that $b^k=\{a_n^k\}$, $k=1, \ldots$ is precompact in the weighted space $l^2_s$ if it is uniformly bounded and \\ $\lim_{N\to \infty} \limsup_k (\sum_{n:|n|>N} |n|^{2s} |a_n^k|^2)^{1/2}=0$. By the uniform boundedness of $\{b^k\}$ and the reflexivity of $l^2_s$, we have a weak limit $b=\{a_n\}\in l^2_s$ of some subsequence of $b^k$. Without loss of generality, assume $b^k\to b$ weakly. In particular, for all $n$, $a_n^k \to_k a_n$. We will show that actually $\lim_k\norm{b^k-b}{l^2_s}=0$. Fix $\sigma>0$ and find $N$, so that for all but finitely many $k$ $$ (\sum_{n:|n|>N} |n|^{2s} |a_n^k|^2)^{1/2}\leq \sigma/3. $$ Next, find $N_1$, so that $$ (\sum_{n:|n|>N_1} |n|^{2s} |a_n|^2)^{1/2}\leq \sigma/3.
$$ Finally, find $k_0$, so that for all $-\max(N, N_1)\leq n\leq \max(N, N_1)$ and for all $k>k_0$, we have $|a_n^k-a_n|\leq \sigma/(10 \max(N, N_1))$. We conclude that for all but finitely many $k>k_0$, we have $$ \norm{b^k-b}{l^2_s}\leq \sigma. $$ \end{proof} \section{Global well-posedness for the viscous Camassa-Holm equation} \label{sec:90} In this section, we show the global well-posedness for \eqref{eq:2} in both the finite interval case and the whole line case. The methods are identical in both cases, so we treat them in the same proof. As we have mentioned earlier, the unbounded operator $A: Au=-\partial_x (a(x)u_x)$, satisfying \eqref{eq:c1}, defines a $C_0$ (and in fact analytic) semigroup, see for example \cite{SimonII}, p. 252. \\ This allows us to reformulate \eqref{eq:2} in an equivalent integral equation form\footnote{for smooth and decaying solutions} \begin{equation} \label{eq:15} u = e^{-t A} u_0-\int\limits_0^t e^{(s-t)A} F(u, u_x)(s) ds. \end{equation} Our first step then will be to show a local well-posedness result. \subsection{Local well-posedness for \eqref{eq:2}} \label{sec:lwp} Regarding the simpler equation \eqref{eq:11}, we have taken the classical approach for the heat equation outlined in \cite{McOwen}. We will use the following lemma, which is a compilation of Theorem 3 (p. 298-300) and the discussion in Section 11.2.b, \cite{McOwen}. \begin{lemma} \label{le:17} Suppose $S(t)=e^{-t L}$ is a $C_0$-semigroup acting on both $L^2(I)$ and $\dot{H}^1(I)$. Assume also \begin{eqnarray} \label{eq:c10} & & \norm{S(t) g}{\dot{H}^1(I)}\leq C t^{-1/2} \norm{g}{L^2}.
\end{eqnarray} For the integral equation $$ u(t)= S(t) u_0 +\int\limits_0^t S(t-s) F(u)(s) ds, $$ there exists a time $T>0$, depending only on $\norm{u_0}{H^1}$, such that the integral equation has a unique local solution $u\in C([0,T], H^1)$, provided \begin{equation} \label{eq:163} \left\{\begin{array}{l} \norm{F(u)-F(v)}{L^2}\leq M_R\norm{u-v}{H^1} \\ \textup{whenever}\quad \norm{u}{H^1},\norm{v}{H^1}\leq R. \end{array} \right. \end{equation} \end{lemma} We first show the proof of Lemma \ref{le:17} and then verify \eqref{eq:c10} for the semigroup $S(t)=e^{-tA}$ and \eqref{eq:163} for the Camassa-Holm nonlinearity $F(u, u_x)$. \begin{proof}(Lemma \ref{le:17}) We set up a fixed point argument for the integral equation at hand. Set $X^R_T=\{u\in C([0,T), H^1(I)): \sup_{0<t<T}\norm{u(t, \cdot)}{H^1}\leq R \}$ and the map $$ \Lambda u(t, \cdot) = S(t) u_0 +\int\limits_0^t S(t-s) F(u)(s) ds. $$ We need to show that for appropriate $R=R(\norm{u_0}{H^1})$ and $T=T(R)$, $\Lambda:X^R_T\to X^R_T$ is a contraction. Take $R=10 \norm{u_0}{H^1}$. To see $\Lambda:X^R_T\to X^R_T$, we have by \eqref{eq:c10} and \eqref{eq:163} (applied for the case $v=0$), \begin{eqnarray*} & &\norm{\Lambda u(t, \cdot)}{H^1}\leq \norm{S(t)u_0}{H^1}+ \|\int\limits_0^t e^{(s-t)L} F(u)(s) ds\|_{L^2}+ C \|\int\limits_0^t e^{(s-t)L} F(u)(s) ds\|_{\dot{H}^1} \\ & & \leq C\norm{u_0}{H^1}+ \int\limits_0^t \norm{F(u)(s, \cdot)}{L^2}ds + \int\limits_0^t \frac{\norm{F(u)(s, \cdot)}{L^2}}{\sqrt{t-s}} ds \\ & & \leq C\norm{u_0}{H^1}+ C M_R (t+\sqrt{t})\sup\limits_{0<t<T} \norm{u}{H^1}. \end{eqnarray*} Clearly, choosing $T=T(R)$ small enough, $0<t<T$ and $\sup_{0<t<T} \norm{u}{H^1}\leq R$ will guarantee that the right hand side is less than $R$. One verifies similarly the contraction property of $\Lambda:X^R_T\to X^R_T$, by using the full strength of \eqref{eq:163}. \end{proof} First, we verify that $e^{-tA}$ is a semigroup on $\dot{H}^1$.
Observe that $\norm{u}{\dot{H}^1}\sim \norm{A^{1/2} u}{L^2}$. Indeed, $$ \norm{A^{1/2} u}{L^2(I)}^2 = \dpr{Au}{u}=\int_I a(x) u_x^2 dx\sim \norm{u_x}{L^2}^2, $$ by \eqref{eq:c1}. Then $$ \norm{e^{-t A} f}{\dot{H}^1}\sim \norm{A^{1/2} e^{-t A} f}{L^2}= \norm{ e^{-t A} A^{1/2} f}{L^2}\leq C\norm{A^{1/2} f}{L^2}\sim \norm{f}{\dot{H}^1}. $$ The estimate \eqref{eq:c10} is a standard property of analytic semigroups, see Corollary 1 and Corollary 2, \cite{SimonII}, p. 252. We choose to deduce it as a simple consequence of the functional calculus for the self-adjoint operator $A$. We have $\norm{e^{-tA} g}{\dot{H}^1}\sim \norm{A^{1/2} e^{-tA} g}{L^2} = t^{-1/2} \norm{f(tA) g}{L^2}$, where $f(y)=e^{-y}y^{1/2}$ is a well-defined bounded function on the spectrum of $A$. It follows that $$ \norm{e^{-tA} g}{\dot{H}^1}\leq Ct^{-1/2} \norm{f}{L^\infty(0,\infty)} \norm{g}{L^2} \leq Ct^{-1/2} \norm{g}{L^2}, $$ which is \eqref{eq:c10}. It remains to establish \eqref{eq:163} for the Camassa-Holm nonlinearity $F$. We actually prove a slightly more general statement. \begin{lemma} \label{le:20} Let $F$ be the nonlinearity for the Camassa-Holm equation, as defined earlier. Then for all nonnegative integers $s$, we have \begin{equation} \label{eq:164} \norm{F(u)-F(v)}{H^s}\leq M (\norm{u}{H^{s+1}}+\norm{v}{H^{s+1}}) \norm{u-v}{H^{s+1}}. \end{equation} \end{lemma} \begin{proof} We have by Lemma \ref{le:KP} and the Sobolev embeddings $H^{s+1}(I)\hookrightarrow H^{1/2+}(I) \hookrightarrow L^\infty(I)$, \begin{eqnarray*} & & \norm{\partial_x(u^2)-\partial_x(v^2)}{\dot{H}^s}\sim \norm{\partial_x^{s+1}[(u-v)(u+v)]}{L^2}\leq \\ & &\leq C( \norm{\partial_x^{s+1} (u-v)}{L^2}+\norm{(u-v)}{L^\infty})(\norm{u+v}{L^\infty}+ \norm{\partial_x^{s+1}(u+v)}{L^2})\leq \\ & & \leq M \norm{u-v}{H^{s+1}} (\norm{u}{H^{s+1}}+\norm{v}{H^{s+1}}). \end{eqnarray*} For the second term in $F$, consider first $s\geq 1$.
We have \begin{eqnarray*} & & \norm{\partial_x(1-\partial_x^2)^{-1}(u_x^2-v_x^2)}{\dot{H}^s}\leq C \norm{\partial_x^{s-1}[ (u_x-v_x) (u_x+v_x)]}{L^2} \leq \\ & & \leq C(\norm{\partial_x^{s}(u-v)}{L^\infty}+ \norm{u_x-v_x}{L^2}) (\norm{u}{H^1}+\norm{v}{H^1}+ \norm{\partial_x^s u}{L^\infty}+ \norm{\partial_x^s v}{L^\infty})\leq \\ & & \leq C \norm{u-v}{H^{s+1/2+}}( \norm{u}{H^{s+1/2+}}+\norm{v}{H^{s+1/2+}})\leq C \norm{u-v}{H^{s+1}}( \norm{u}{H^{s+1}}+\norm{v}{H^{s+1}}). \end{eqnarray*} When $s=0$, use either $$ \partial_x(1-\partial_x^2)^{-1} f(x)=\frac{1}{2} \int\limits_{{\mathbf R}} \mathrm{sgn}(x-y) e^{-|x-y|} f(y) dy, $$ or $$ \partial_x(1-\partial_x^2)^{-1} f(x)=\sum\limits_n \frac{2\pi i n}{1+4\pi^2 n^2} f_n e^{2\pi i n x}, $$ to conclude that $\partial_x(1-\partial_x^2)^{-1}:L^1(I)\to L^2(I)$. It follows that $$ \norm{\partial_x(1-\partial_x^2)^{-1}(u_x^2-v_x^2)}{L^2(I)} \lesssim \norm{(u_x-v_x)(u_x+v_x)}{L^1}\lesssim \norm{u-v}{H^1}(\norm{u}{H^1}+\norm{v}{H^1}). $$ For the third term in $F$, we easily estimate \begin{eqnarray*} & & \norm{\partial_x(1-\partial_x^2)^{-1}(u^2-v^2)}{\dot{H}^s}\leq C \norm{u-v}{H^{\max(s-1,0)}}(\norm{u}{L^\infty}+ \norm{v}{L^\infty})\leq \\ & &\leq C \norm{u-v}{H^{s+1}}(\norm{u}{H^{s+1}}+\norm{v}{H^{s+1}}). \end{eqnarray*} \end{proof} Note that one can represent $F(u)=\Lambda(u,u)$, where $\Lambda(u,v)$ is the bilinear form $$ \Lambda(u,v)= \frac{1}{2}\partial_x(u v)+\partial_x(1-\partial_x^2)^{-1} [u_x v_x/2+u v]. $$ It is easy to see that one can show (with the same exact proof) that for every integer $s\geq 0$ $$ \norm{\Lambda(\varphi,\psi)}{H^s}\leq C\norm{\varphi}{H^{s+1}}\norm{\psi}{H^{s+1}}. $$ A bilinear interpolation between the estimates above (which are valid for all integers) yields the corresponding estimates for non-integer values of $s$ as well. Setting $\varphi=\psi=u$, we obtain \begin{corollary} \label{cor:1} Let $s\geq 0$ and let $F$ be the Camassa-Holm nonlinearity. Then $$ \norm{F(u)}{H^s}\leq M \norm{u}{H^{s+1}}^2.
$$ \end{corollary} \subsection{$H^2$ smoothness of the local solutions} \label{sec:smoothness} In this section, we show the $H^2$ smoothness of the local $H^1$ solution constructed above. Besides the obvious importance of having this extra smoothness information, this will enable us (see Section \ref{sec:global} below) to iterate the local solution to a global one by utilizing the conservation (or rather dissipation) of the $H^1$ energy. We have \begin{proposition} \label{prop:1} Let $u$ be the $H^1$ solution to \eqref{eq:15}, with life span $T$. Then there exists a constant $C_\varepsilon$, so that for all $0<t<T$, $u\in C((0,t), H^2(I))$ and as a result $$ \norm{u(t, \cdot)}{H^2(I)}\leq \frac{C_\varepsilon}{\sqrt{t}} \norm{u_0}{H^1}+ C_\varepsilon t^{1/4} \sup\limits_{0<s<t} \norm{u(s, \cdot)}{H^1}^{3}+ C_\varepsilon\norm{u(t, \cdot)}{H^1}. $$ \end{proposition} \begin{proof} The argument required for the proof is to rerun the fixed point argument, this time in the smoother space $H^2(I)$. This amounts to showing $H^2$ {\it a priori} estimates for the solution, which is what we concentrate on. Apply $A$ to \eqref{eq:15}. This is justified, since the right hand side of \eqref{eq:15} is in the domain of $A$ by the semigroup properties of $e^{-t A}$. We have \begin{eqnarray*} \norm{Au}{L^2}\leq \norm{e^{-tA} A u_0}{L^2}+C \int\limits_0^t \norm{e^{(s-t)A} A F(u)(s)}{L^2(I)} ds. \end{eqnarray*} But $$ \norm{e^{-tA} A u_0}{L^2}= \norm{e^{-tA} A^{1/2} (A^{1/2} u_0)}{L^2}\leq C t^{-1/2} \norm{A^{1/2} u_0}{L^2}\sim C t^{-1/2} \norm{u_0}{H^1}.
$$ On the other hand, by the properties of the functional calculus for $A$, \begin{equation} \label{eq:901} \norm{e^{-z A} A F}{L^2}=|z|^{-1} \norm{f(zA) F}{L^2}\leq C |z|^{-1} (\sup\limits_{y>0} |e^{-y}y| ) \norm{F}{L^2}\leq C |z|^{-1} \norm{F}{L^2}, \end{equation} for all $z>0$, where now $f(y)=e^{-y}y$, while \begin{equation} \label{eq:902} \norm{e^{-z A} A F}{L^2}\leq C\norm{A F}{L^2}\leq C_\varepsilon \norm{F}{H^2}. \end{equation} The last inequality can be checked easily as follows: \begin{eqnarray*} & & \norm{A u}{L^2}^2=\int (a u_{xx}+a' u_x)^2 dx= \int (a^2 u_{xx}^2 - a a'' u_x^2) dx\leq \\ & &\leq \norm{a}{L^\infty}^2 \norm{u_{xx}}{L^2}^2+\norm{a}{L^\infty}\norm{a''}{L^\infty} \norm{u_{x}}{L^2}^2\lesssim \norm{u}{H^2}^2. \end{eqnarray*} A complex interpolation between \eqref{eq:901} and \eqref{eq:902} yields $$ \norm{e^{-z A} A F}{L^2}\leq C |z|^{-7/8} \norm{F}{H^{1/4}}. $$ Plugging this estimate back into the integral term yields \begin{eqnarray*} & & \int\limits_0^t \norm{e^{(s-t)A} A F(u)(s)}{L^2(I)} ds\leq C \int\limits_0^t \frac{\norm{F(s, \cdot)}{H^{1/4}}}{(t-s)^{7/8}}ds\leq C t^{1/8} \sup_{0<s<t} \norm{F(s, \cdot)}{H^{1/4}} . \end{eqnarray*} According to Corollary \ref{cor:1}, $\norm{F(s, \cdot)}{H^{1/4}}\leq C \norm{u}{H^{5/4}}^2$. By the Gagliardo-Nirenberg inequality, $\norm{u}{H^{5/4}}\leq \norm{u}{H^2}^{1/4}\norm{u}{H^1}^{3/4}$. Putting everything together, \begin{eqnarray*} \norm{Au(t, \cdot)}{L^2} &\leq & \frac{C}{\sqrt{t}}\norm{u_0}{H^1}+ C t^{1/8} \sup\limits_{0<s<t} \norm{u(s, \cdot)}{H^2}^{1/2}\norm{u(s, \cdot)}{H^1}^{3/2} \\ & &\leq \frac{C}{\sqrt{t}}\norm{u_0}{H^1}+ C_\sigma t^{1/4} \sup\limits_{0<s<t} \norm{u(s, \cdot)}{H^1}^{3}+ \sigma \sup\limits_{0<s<t} \norm{u(s, \cdot)}{H^2}, \end{eqnarray*} for any $\sigma>0$ and some $C_\sigma$. Observe now that $\norm{Au}{L^2}^2\geq \varepsilon^2 \norm{u}{H^2}^2/2-C \norm{u}{H^1}^2$.
Indeed, \begin{eqnarray*} & & \norm{Au}{L^2}^2= \int (a^2 u_{xx}^2 +(a')^2 u_x^2 +2 a a' u_x u_{xx}) dx\geq \\ & & \geq \int \frac{a^2}{2} u_{xx}^2 dx - \int (a')^2 u_x^2 dx \geq \varepsilon^2 \norm{u}{H^2}^2/2-C \norm{u}{H^1}^2. \end{eqnarray*} Let $G(t)=\sup\limits_{0<s\leq t} \norm{u(s, \cdot)}{H^2}$. Taking into account the last inequality provides $$ G(t)\leq \frac{C_\varepsilon}{\sqrt{t}}\norm{u_0}{H^1}+ C_{\sigma,\varepsilon} t^{1/4} \sup\limits_{0<s<t} \norm{u(s, \cdot)}{H^1}^{3}+ C\norm{u(t, \cdot)}{H^1}+ C_\varepsilon\sigma G(t). $$ Choosing $\sigma$ appropriately small, with $C_\varepsilon\sigma<1/2$, allows us to hide the last term, and as a result $$ \norm{u(t, \cdot)}{H^2}\leq G(t)\leq \frac{C_{\varepsilon}}{\sqrt{t}}\norm{u_0}{H^1}+ C_\varepsilon t^{1/4} \sup\limits_{0<s<t} \norm{u(s, \cdot)}{H^1}^{3}+ C_\varepsilon\norm{u(t, \cdot)}{H^1}. $$ \end{proof} \noindent {\bf Remark} The above argument can be extended (with no additional smoothness or other assumptions on $A$) to show that $u\in \cap_{m=0}^\infty D(A^m)$, with the corresponding estimates (away from zero) for $\norm{A^m u}{L^2}$ as in Proposition \ref{prop:1}. This is the usual regularity result that one expects for parabolic equations. \subsection{Global well-posedness for \eqref{eq:2}} \label{sec:global} Our approach to global well-posedness for the parabolic problem \eqref{eq:2} is to iterate the local well-posedness result to a global one. We will show that for the local $H^1$ solution, produced in Section \ref{sec:lwp}, one has the estimate \begin{equation} \label{eq:21} \norm{u(t, \cdot)}{H^1}^2\leq I(0) e^{C t} + C_\varepsilon(e^{C t}-1) \sup\limits_{0\leq s\leq t} \norm{g(s, \cdot)}{L^2}^2 \end{equation} for every $0<t<T$, where $T$ is its lifespan. Assuming \eqref{eq:21}, let us prove that the solution is global.
Fix $u_0\in H^1(I)$ and define for every (sufficiently large) integer $n$ $$ T_n=\sup\{ t: \textup{the } H^1 \textup{ solution is defined in } (0,t)\ \&\ \sup\limits_{0<t_1<t} \norm{u(t_1, \cdot)}{H^1}<n\}, $$ and $T^*=\limsup_n T_n$. If $T^*=\infty$, there is nothing to prove, the solution is global. If $T^*<\infty$, it must be that $\limsup_{t\to T^*}\norm{u(t, \cdot)}{H^1}=\infty$. On the other hand, take any sequence $t_n\to T^*$. By \eqref{eq:21}, $$ \limsup_{n\to \infty} \norm{u(t_n, \cdot)}{H^1}^2\leq I(0) e^{C T^*} + C_\varepsilon(e^{C T^*}-1) \sup\limits_{0\leq s\leq T^*} \norm{g(s, \cdot)}{L^2}^2 <\infty, $$ a contradiction. This implies that the solutions produced in Section \ref{sec:lwp} are global ones. Therefore, it remains to show \eqref{eq:21}. \subsubsection{Local boundedness of $t\to \norm{u(t, \cdot)}{H^1}$} In view of the $H^2$ smoothness, established in Proposition \ref{prop:1}, this follows in a standard way from Lemma \ref{le:5}. To this end, let $$ I(t)=\int_I (u^2(t,x)+u_x^2(t,x)) dx, $$ and differentiate in time. Then one may use the equation (because of the $H^2$ smoothness) to get \begin{eqnarray*} & & I'(t)= 2 \int_I (u u_t + u_x (u_t)_x) dx = -2 \int (u F(u, u_x)+u_x \partial_x F(u, u_x)) dx + \\ & & + 2\int_I u \partial_x (a(x) u_x)dx +2 \int_I u_x \partial^2_x (a(x) u_x)dx + 2\int_I u g(t,x) dx+ 2\int_I u_x g_x(t,x) dx. \end{eqnarray*} Note that by Lemma \ref{le:5}, $\int (u F(u, u_x)+u_x \partial_x F(u, u_x)) dx=0$. For the next term, clearly $$ \int_I u \partial_x (a(x) u_x)dx= -\int a(x) u_x^2 dx\leq 0. $$ Next, consider the term $ \int_I u_x \partial^2_x (a(x) u_x)dx$.
We have \begin{eqnarray*} & & \int_I u_x \partial^2_x (a(x) u_x)dx =- \int \partial_x(a u_x) u_{xx} dx= - \int a(x) u_{xx}^2 dx + \frac{1}{2}\int a''(x) u_x^2 dx\leq \\ & &\leq -\varepsilon \norm{u_{xx}}{L^2}^2+ \norm{a''}{L^\infty} \norm{u_x}{L^2}^2\leq -\varepsilon \norm{u_{xx}}{L^2}^2+ C \norm{u_x}{L^2}^2. \end{eqnarray*} We have used here $a(x)\geq \varepsilon$ and $a\in C^2(I)$. Finally, we have \begin{eqnarray*} & &|\int_I u g(t,x) dx+ \int_I u_x g_x(t,x) dx |\leq C(\norm{u}{L^2}+\norm{u_{xx}}{L^2}) \norm{g(t, \cdot)}{L^2}\leq \\ & & \leq \varepsilon \norm{u_{xx}}{L^2}^2/2+ \norm{u}{L^2}^2+ C_\varepsilon \norm{g(t, \cdot)}{L^2}^2. \end{eqnarray*} Altogether, \begin{eqnarray*} & &I'(t)\leq -\varepsilon \norm{u_{xx}}{L^2}^2/2+C(\norm{u(t, \cdot)}{L^2}^2+\norm{u_{x}(t, \cdot)}{L^2}^2)+ C_\varepsilon\norm{g(t, \cdot)}{L^2}^2\leq \\ & & \leq C I(t)+C_\varepsilon\norm{g(t, \cdot)}{L^2}^2. \end{eqnarray*} Rewrite this as \begin{eqnarray*} & & \frac{d}{d t} (e^{-C t} I(t))\leq C_\varepsilon e^{-C t} \norm{g(t, \cdot)}{L^2}^2, \end{eqnarray*} whence upon integration we get $$ I(t)\leq I(0) e^{C t} + C_\varepsilon(e^{C t}-1) \sup\limits_{0\leq s\leq t} \norm{g(s, \cdot)}{L^2}^2, $$ which is \eqref{eq:21}. \section{Global attractors for the viscous Camassa-Holm equation: The finite interval case} \label{sec:attractor_1} In this section, we prove Theorem \ref{theo:2}.
As we have discussed in Section \ref{sec:attractors}, and more specifically Proposition \ref{prop:4}, we will need to verify that for any $t_n\to \infty$, any $B>0$ and any sequence of initial data $\{u_n\}\subset H^1(0,1)$ with $\sup\limits_n \norm{u_n}{H^1}\leq B$, we have \begin{eqnarray} \label{eq:70} & & \sup_{u_0\in H^1_0} \limsup_{t\to \infty}\norm{S(t)u_0}{H^1}\leq C(g, \varepsilon), \\ \label{eq:71} & & \sup_n \norm{S(t_n)u_n}{H^1}\leq C(B; g, \varepsilon), \\ \label{eq:72} & & \lim_N \limsup_n\norm{P_{>N} S(t_n)u_n}{H^1}=0. \end{eqnarray} This section is devoted to showing \eqref{eq:70} and \eqref{eq:71}. The estimate \eqref{eq:72} is somewhat more complicated and it will be postponed until Section \ref{sec:vanish}. In the end, we will show the asymptotic smoothing effect, that is, the fact that the attractor lies in a smoother space. \subsection{Point dissipativeness: Proof of \eqref{eq:70}} Fix $u_0$ with $\norm{u_0}{H^1}\leq B$. Consider the solution to \eqref{eq:2} with initial data $u_0$, $u(t, \cdot)=S(t)u_0$. We have already shown the local boundedness of $t\to \norm{u(t, \cdot)}{H^1}$ (i.e., \eqref{eq:21}), which we now improve. Note that the extra conditions $\int_0^1 g(x)dx=\int_0^1 u_0(x)dx=0$ are crucial in our argument. Recall $I(t)=\int_I (u^2(t,x)+u_x^2(t,x)) dx$. We need to reexamine our estimates for $I'(t)$ above, in order to use to our advantage the smallness of $\norm{a'}{L^\infty}$. We have as before \begin{eqnarray*} & & I'(t)= 2 \int_I (u u_t + u_x (u_t)_x) dx = - 2\int_I a(x) (u_x)^2 dx -2 \int_I u_{xx} \partial_x (a(x) u_x)dx + \\ & & +2\int_I u g(t,x) dx- 2\int_I u_{xx} g(t,x) dx \leq -2\varepsilon(\norm{u_x}{L^2}^2 + \norm{u_{xx}}{L^2}^2)+ \\ & &+ 2 \norm{a'}{L^\infty} \norm{u_x}{L^2} \norm{u_{xx}}{L^2}+ (\norm{u}{L^2}+\norm{u_{xx}}{L^2})\norm{g}{L^2}.
\end{eqnarray*} Since $\norm{a'}{L^\infty}\leq \varepsilon$, it is easy to see that the term $2 \norm{a'}{L^\infty} \norm{u_x}{L^2} \norm{u_{xx}}{L^2}$ gets absorbed by $\varepsilon(\norm{u_x}{L^2}^2 + \norm{u_{xx}}{L^2}^2)$ and we get \begin{equation} \label{eq:22} I'(t)\leq -\varepsilon( \norm{u_x}{L^2}^2+\norm{u_{xx}}{L^2}^2) + (\norm{u}{L^2}+\norm{u_{xx}}{L^2})\norm{g}{L^2}. \end{equation} Note that by the conservation law $\partial_t \int_0^1 u(t,x) dx= \int_0^1 g(x) dx=0$ and $\int_0^1 u_0(x)dx=0$, we have $\int_0^1 u(t,x)dx=0$ for all $t$. Let $u(t,x)=\sum_{n\neq 0} a_n(t) e^{2\pi i n x}$. It follows that $$ \norm{u(t, \cdot)}{L^2}=\left(\sum\limits_{|n|\geq 1} |a_n|^2\right)^{1/2}\leq \left(\sum\limits_{|n|\geq 1} |n|^2 |a_n|^2\right)^{1/2}\leq C \|u_x(t, \cdot)\|_{L^2}. $$ Use $\norm{u(t, \cdot)}{L^2}\leq \|u_x(t, \cdot)\|_{L^2}\leq \|u_{xx}(t, \cdot)\|_{L^2}$ and the Cauchy-Schwarz inequality in \eqref{eq:22} to get \begin{eqnarray*} & & I'(t)\leq -\varepsilon (\norm{u_x}{L^2}^2+\norm{u_{xx}}{L^2}^2) + (\norm{u_x}{L^2}+\norm{u_{xx}}{L^2})\norm{g}{L^2}\leq \\ & & \leq -\varepsilon( \norm{u}{L^2}^2+\norm{u_{xx}}{L^2}^2)/2 + C \norm{g}{L^2}^2/\varepsilon \leq -\varepsilon I(t)/2 + C\norm{g}{L^2}^2/\varepsilon. \end{eqnarray*} We now finish with a Gronwall type argument, namely we rewrite the inequality above as $$ \frac{d}{dt} (I(t) e^{t \varepsilon /2}) \leq C e^{t \varepsilon /2} \norm{g}{L^2}^2/\varepsilon, $$ which after integration in time yields \begin{equation} \label{eq:23} I(t)\leq I(0) e^{- \varepsilon t/2} + C \norm{g}{L^2}^2/\varepsilon^2. \end{equation} It follows that $$ \limsup_{t\to \infty} I(t) \leq C \norm{g}{L^2}^2/\varepsilon^2, $$ which is the point dissipativeness of $S(t)$. \subsection{Uniform boundedness: Proof of \eqref{eq:71}} The uniform boundedness in fact follows from \eqref{eq:23} as well. Indeed, denote $I_n(t)=\norm{S(t)u_n}{L^2}^2+\norm{S(t)u_n}{\dot{H}^1}^2$.
Clearly $I_n(0)=\norm{u_n}{H^1}^2\leq B^2$. We have by \eqref{eq:23}, $$ I_n(t_n)\leq I_n(0) e^{- \varepsilon t_n/2} + C \norm{g}{L^2}^2/\varepsilon^2 \leq B^2+C \norm{g}{L^2}^2/\varepsilon^2. $$ \section{Uniform vanishing: Proof of \eqref{eq:72}} \label{sec:vanish} Fix a real number $B$. Let the initial data $u_0$ satisfy $\norm{u_0}{H^1}\leq B$, with a corresponding solution $u$. We know from the results of the previous sections that such solutions exist globally and belong to the class $C((t_1, t_2), H^2)$ for every $0<t_1<t_2<\infty$. Let $k$ be a (large) positive integer and denote $$ I_{>k}(t)=\int\limits_0^1 ((P_{>2^k} u)^2 + (P_{>2^k} u_x)^2) dx. $$ This is the high-frequency portion of the energy, which we are trying to show is small as $k\to \infty$, uniformly in $\norm{u_0}{H^1}$. We use an energy estimate reminiscent of the estimate for $I(t)$. After taking the time derivative, use the equation \eqref{eq:2} and $P_{>2^k}^2=P_{>2^k}$. We get \begin{eqnarray*} & & I_{>k}'(t)=2\int\limits_0^1 ( P_{>2^k} u P_{>2^k} u_t + P_{>2^k} u_x P_{>2^k} u_{tx}) dx= \\ & &= -2 \int\limits_0^1 (P_{>2^k} u F(u, u_x)+ P_{>2^k} u_x \partial_x F(u, u_x)) dx+ \\ & &+ 2\int\limits_0^1 (P_{>2^k} u \partial_x( a(x) u_x)+ P_{>2^k} u_x \partial^2_x( a(x) u_x))dx+ \\ & & + 2\int\limits_0^1 (P_{>2^k} u g + P_{>2^k} u_x g_x) dx=:N+V+F. \end{eqnarray*} There are three sorts of terms arising in the energy estimate. We start with those arising from the viscosity. \subsection{Viscosity terms} \label{sec:vis} Write \begin{eqnarray*} & & \frac{V}{2}= \int\limits_0^1 ((P_{>2^k} u) \partial_x( a(x) u_x)+ (P_{>2^k} u_x) \partial^2_x( a(x) u_x))dx= \\ & & = - \int\limits_0^1 \left[(P_{>2^k} u_x) a(x) u_x + (P_{>2^k} u_{xx})a(x) u_{xx}+ (P_{>2^k} u_{xx})a'(x) u_{x} \right]dx. \end{eqnarray*} We estimate the first and the third terms by H\"older's inequality and the uniform boundedness: $$ |\int\limits_0^1 (P_{>2^k} u_x) a(x) u_x dx|\leq \norm{a}{L^\infty} \norm{u_x}{L^2}^2\leq C(B; g, \varepsilon).
$$ Also, by H\"older and Cauchy-Schwarz, \begin{eqnarray*} & & |\int\limits_0^1 (P_{>2^k} u_{xx})a'(x) u_{x} dx|\leq \norm{a'}{L^\infty} \norm{u_x}{L^2} \norm{P_{>2^k} u_{xx}}{L^2}\leq \\ & & \leq \frac{\varepsilon}{100} \norm{P_{>2^k} u_{xx}}{L^2}^2+ \frac{C}{\varepsilon} \norm{a'}{L^\infty}^2 \norm{u_x}{L^2}^2= \frac{\varepsilon}{100} \norm{P_{>2^k} u_{xx}}{L^2}^2+C(B; g, \varepsilon). \end{eqnarray*} We need more delicate estimates for the second term $\int (P_{>2^k} u_{xx})a(x) u_{xx} dx$. The difficulty here lies in the fact that the commutators $[P_{>N},a]$ {\it are not smoothing operators}, when considered on $L^2[0,1]$ (in contrast with $L^2(\mathbf R^1)$). Write $u_{xx}= P_{>2^k} u_{xx}+P_{\leq 2^k}u_{xx}$ to get $$ \int (P_{>2^k} u_{xx})a(x) u_{xx} dx= \int (P_{>2^k} u_{xx})^2 a(x) dx+ \int (P_{>2^k} u_{xx})a(x) P_{\leq 2^k}u_{xx} dx. $$ Clearly, $\int (P_{>2^k} u_{xx})^2 a(x) dx\geq \varepsilon \norm{P_{>2^k} u_{xx}}{L^2}^2$, while we will show \begin{equation} \label{eq:50} \begin{array}{c} |\int (P_{>2^k} u_{xx})a(x) P_{\leq 2^k}u_{xx} dx|\leq \\ \leq C 2^k (\norm{a'}{L^\infty} \norm{P_{>2^{k-1}}u_{x}}{L^2} \norm{P_{>2^k} u_{xx}}{L^2}+ \norm{a_{>{2^{k-1}}}}{L^\infty}\norm{P_{>2^k} u_{xx}}{L^2} \norm{u_x}{L^2}). \end{array} \end{equation} To that end, write \begin{eqnarray*} & & \int (P_{>2^k} u_{xx})a(x) P_{\leq 2^k}u_{xx} dx= \int (P_{>2^k} u_{xx})a(x) P_{2^{k-1}< \cdot \leq 2^k}u_{xx} dx+ \\ & &+\int (P_{>2^k} u_{xx})a(x) P_{\leq 2^{k-1}}u_{xx} dx. \end{eqnarray*} For the first term, use that $a(x)=a(0)+\int\limits_0^x a'(y) dy$ and, by orthogonality, \\ $\int (P_{>2^k} u_{xx})a(0) P_{2^{k-1}< \cdot \leq 2^k}u_{xx} dx=0$.
We get \begin{eqnarray*} & & |\int (P_{>2^k} u_{xx})a(x) P_{2^{k-1}< \cdot \leq 2^k}u_{xx} dx| = |\int (P_{>2^k} u_{xx})(\int_0^x a'(y)dy) P_{2^{k-1}<\cdot \leq 2^k}u_{xx} dx|\leq \\ & & \leq \norm{a'}{L^\infty} \norm{P_{>2^k} u_{xx}}{L^2} \norm{P_{2^{k-1}< \cdot \leq 2^k} u_{xx}}{L^2} \leq 2^k \norm{a'}{L^\infty} \norm{P_{2^{k-1}< \cdot \leq 2^k} u_{x}}{L^2} \norm{P_{>2^k} u_{xx}}{L^2} \leq \\ & & \leq 2^k \norm{a'}{L^\infty} \norm{P_{>2^{k-1}} u_{x}}{L^2} \norm{P_{>2^k} u_{xx}}{L^2} . \end{eqnarray*} For the second term, use Lemma \ref{le:80}, more specifically \eqref{eq:98}. We have \begin{eqnarray*} & & |\int (P_{>2^k} u_{xx})a(x) P_{\leq 2^{k-1}}u_{xx} dx| = |\int (P_{>2^k} u_{xx})(P_{>2^{k-1}} a(x)) P_{\leq 2^{k-1}}u_{xx} dx|\leq \\ & &\leq \norm{P_{>2^k} u_{xx}}{L^2} \norm{P_{>2^{k-1}} a}{L^\infty} \norm{P_{\leq 2^{k-1}}u_{xx}}{L^2}\leq C 2^{k} \norm{a_{>2^{k-1}}}{L^\infty} \norm{P_{>2^k} u_{xx}}{L^2} \norm{P_{\leq 2^{k-1}} u_x}{L^2} \\ & & \leq C 2^{k} \norm{a_{>2^{k-1}}}{L^\infty} \norm{P_{>2^k} u_{xx}}{L^2} \norm{u_x}{L^2}. \end{eqnarray*} This establishes \eqref{eq:50}. Put together all the terms that arise from the viscosity and use the uniform boundedness \eqref{eq:71} and Young's inequality $ab\leq \varepsilon a^2 +(4\varepsilon)^{-1}b^2$ to obtain \begin{eqnarray*} & & V\leq - \frac{2\varepsilon}{3} \norm{P_{>2^k} u_{xx}}{L^2}^2+ C(B; g,\varepsilon, \delta) + \frac{2^{2k} \norm{a'}{L^\infty}^2}{\varepsilon} \norm{P_{>2^{k-1}}u_{x}}{L^2}^2+ \\ & & + 2^{2k} \norm{a_{>2^{k-1}}}{L^\infty}^2 C(B; g,\varepsilon)\leq - \frac{2\varepsilon}{3} \norm{P_{>2^k} u_{xx}}{L^2}^2+ 2^{2k} \delta^2 \varepsilon \norm{P_{>2^{k-1}}u_{x}}{L^2}^2+ C(B; g,\varepsilon) + \\ & &+ 2^{2k} \norm{a_{>2^{k-1}}}{L^\infty}^2 C(B; g,\varepsilon). \end{eqnarray*} The last inequality holds due to $\norm{a'}{L^\infty}\leq \delta \varepsilon$.
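The orthogonality used above for the term with the constant $a(0)$ — sharp Fourier projections onto disjoint frequency blocks, such as $P_{>2^k}$ and $P_{2^{k-1}<\cdot\leq 2^k}$, are $L^2$-orthogonal — can be checked on a discrete example (an illustrative sketch only; the projections below are sharp DFT cutoffs, mimicking the $P_{>N}$ of the text):

```python
import numpy as np

def P_gt(v, N):
    """Sharp Fourier projection onto frequencies |n| > N on [0, 1)."""
    vh = np.fft.fft(v)
    n = np.fft.fftfreq(len(v), d=1.0 / len(v))   # integer frequencies
    vh[np.abs(n) <= N] = 0
    return np.fft.ifft(vh).real

M = 256
x = np.arange(M) / M
u = np.cos(2 * np.pi * 3 * x) + 0.5 * np.sin(2 * np.pi * 40 * x)
hi = P_gt(u, 16)          # P_{>16} u : keeps only the frequency-40 piece
lo = u - hi               # P_{<=16} u
# disjoint frequency supports => the inner product vanishes for any constant weight
assert abs(np.mean(hi * lo)) < 1e-12
```

This is precisely why a constant coefficient $a(0)$ drops out of the cross term, while the variable part of $a$ has to be handled through $a'$ or the high modes $a_{>2^{k-1}}$.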
\subsection{Nonlinearity terms} \label{sec:nonl} For the nonlinearity terms, we have several easy terms, which we take care of first. Namely, according to Lemma \ref{le:20} (see \eqref{eq:164} with $s=0$) \begin{eqnarray*} & & |\int (P_{>2^k} u) F(u,u_x) dx|\leq \norm{ P_{>2^k} u}{L^2}\norm{F(u, u_x)}{L^2}\leq C \norm{u}{H^1}^3\leq C(B; g,\varepsilon). \end{eqnarray*} Also, by H\"older's inequality and the Sobolev embedding \eqref{eq:sob} \begin{eqnarray*} & & |\int (P_{>2^k} u_x) \partial_x^2 (u^2) dx|= |\int (P_{>2^k} u_{xx}) \partial_x (u^2) dx| \leq \norm{P_{>2^k} u_{xx}}{L^2} \norm{u_x}{L^2}\norm{u}{L^\infty}\leq \\ & &\leq C \norm{P_{>2^k} u_{xx}}{L^2} \norm{u}{H^1}^2\leq \frac{\varepsilon}{100} \norm{P_{>2^k} u_{xx}}{L^2}^2+ C(B; g,\varepsilon). \end{eqnarray*} Finally, \begin{eqnarray*} & & |\int P_{>2^k} u_x \partial_x^2 (1-\partial_x^2)^{-1}(u_x^2/2+u^2) dx|\leq C \norm{P_{>2^k} u_x}{L^\infty} \norm{u}{H^1}^2. \end{eqnarray*} However, by Lemma \ref{le:bern} \begin{eqnarray*} & & \norm{P_{>2^k} u_x}{L^\infty}\leq \sum\limits_{l\geq k} \norm{P_{2^l<\cdot\leq 2^{l+1}} u_x}{L^\infty} \leq \sum\limits_{l\geq k} 2^{l/2} \norm{P_{2^l<\cdot\leq 2^{l+1}} u_x}{L^2} \sim \\ & & \sim \sum\limits_{l\geq k} 2^{-l/2} \norm{P_{2^l<\cdot\leq 2^{l+1}} u_{xx}}{L^2}\leq C 2^{-k/2} \norm{P_{>2^k} u_{xx}}{L^2}\leq C \norm{P_{>2^k} u_{xx}}{L^2}, \end{eqnarray*} implying that $$ |\int P_{>2^k} u_x \partial_x^2 (1-\partial_x^2)^{-1}(u_x^2/2+u^2) dx|\leq \frac{\varepsilon}{100} \norm{P_{>2^k} u_{xx}}{L^2}^2 + C(B; g,\varepsilon). $$ \subsection{Forcing terms} \label{sec:forc} The forcing terms are easy to control.
\begin{eqnarray*} & & |\int P_{>2^k} u g + P_{>2^k} u_x g_x dx|= |\int P_{>2^k} u g - P_{>2^k} u_{xx} g dx|\leq \\ & & \leq (\norm{P_{>2^k} u}{L^2}+\norm{P_{>2^k} u_{xx}}{L^2}) \norm{g}{L^2}\leq \\ & & \leq \frac{\varepsilon}{100} (\norm{P_{>2^k} u}{L^2}^2+\norm{P_{>2^k} u_{xx}}{L^2}^2 )+ \frac{C}{\varepsilon} \norm{g}{L^2}^2\leq \\ & &\leq \frac{\varepsilon}{50} \norm{P_{>2^k} u_{xx}}{L^2}^2 + C(B; g, \varepsilon). \end{eqnarray*} \subsection{Conclusion of the argument for the uniform vanishing of the high frequencies} Put together all the estimates for the viscosity terms, forcing terms and nonlinearity terms. We obtain \begin{eqnarray*} & & I_{>k}'(t)\leq -\frac{\varepsilon}{2}\norm{P_{>2^k} u_{xx}}{L^2}^2+ C 2^{2k} \delta^2 \varepsilon \norm{P_{>2^{k-1}} u_{x}}{L^2}^2+ \\ & &+ 2^{2k}\norm{a_{>2^{k-1}}}{L^\infty}^2 C(B; g, \varepsilon)+ C(B; g, \varepsilon). \end{eqnarray*} Note first that $\norm{P_{>2^k} u_{xx}}{L^2}\geq 2^k \norm{P_{>2^k} u_{x}}{L^2}\geq c 2^k \sqrt{I_{>k}(t)}$. Next, we estimate the term $\norm{a_{>2^{k-1}}}{L^\infty}$. Let $a(x)=\sum\limits_l a_l e^{2\pi i l x}$. Then \begin{eqnarray*} & & \norm{a_{>2^{k-1}}}{L^\infty}\leq \sum\limits_{|l|>2^{k-1}} |a_l| \leq C 2^{-k} \sum\limits_{|l|>2^{k-1}} |l| |a_l| \leq C 2^{-k} (\sum\limits_{|l|>2^{k-1}} |a_l|^2 |l|^{4})^{1/2}(\sum\limits_{l\geq 1} l^{-2})^{1/2} \leq \\ & & \leq C 2^{-k} \norm{a''}{L^2(I)}\leq C 2^{-k} \norm{a''}{L^\infty}.
\end{eqnarray*}
We plug in this estimate to get
\begin{equation} \label{eq:100}
I_{>k}'(t)+ \f{2^{2k} \varepsilon}{4}I_{>k}(t)\leq C 2^{2k} \delta^2 \varepsilon \norm{P_{>2^{k-1}} u_{x}}{L^2}^2+ C(B; g, \varepsilon).
\end{equation}
Notice that, as before, we can rewrite \eqref{eq:100} as
$$
\f{d}{d t} (I_{>k}(t) e^{2^{2k} \varepsilon t/4}) \leq C 2^{2k} \delta^2 \varepsilon e^{2^{2k} \varepsilon t/4} \norm{P_{>2^{k-1}} u_{x}}{L^2}^2+ e^{2^{2k} \varepsilon t/4} C(B; g, \varepsilon),
$$
which after time integration yields
\begin{equation} \label{eq:104}
\begin{array}{l}
I_{>k}(t)\leq I_{>k}(0)e^{-2^{2k} \varepsilon t/4}+ C \delta^2 \sup\limits_{0\leq s\leq t} \norm{P_{>2^{k-1}} u_{x}(s, \cdot)}{L^2}^2+ 2^{-2k} C(B; g, \varepsilon)\leq \\
\leq I_{>k}(0)e^{-2^{2k} \varepsilon t/4}+ C \delta^2 \sup\limits_{0\leq s\leq t} I_{>k-1}(s) + 2^{-2k} C(B; g, \varepsilon).
\end{array}
\end{equation}
Informally, one expects $I_{>k-1}\sim I_{>k}$, and since $\delta^2\ll 1$, one would like to absorb the middle term and conclude the desired uniform vanishing. However, all we know is $I_{>k-1}\geq I_{>k}$, so this step cannot be performed directly. \\
To get around this difficulty, introduce
$$
I^n_{>k}(t)= \int ((u^n_{>2^k}(t, \cdot))^2+(\partial_x u^n_{>2^k}(t, \cdot))^2)dx,
$$
where $\{u^n\}\subset H^1$, with $\sup_n \norm{u^n}{H^1}\leq B$. Note that by the uniform boundedness \eqref{eq:71}, we have
$$
\sup_{n, k, t} I^n_{>k}(t)\leq \int ((u^n(t, \cdot))^2+(\partial_x u^n(t, \cdot))^2)dx\leq C(B; g, \varepsilon).
$$
Let also $h^n_k(t)= \sup_{0\leq s\leq t} I^n_{>k}(s)$. Recast \eqref{eq:104} for each $n$ as
\begin{equation} \label{eq:117}
h^n_k(t)\leq h^n_k(0)e^{-2^{2k} \varepsilon t/4} + C\delta^2 h^n_{k-1}(t)+ C 2^{-2k} C(B; g, \varepsilon).
\end{equation}
We will need $\delta$ so small that $C\delta^2\leq 1/8$. Denote also $h_k=\limsup_{n\to \infty} h^n_k(t_n)$ for some fixed sequence $t_n\to \infty$.
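Let us record why the initial-data term in \eqref{eq:117} disappears along the sequence $t_n$: by the uniform bound $\sup_{n,k,t} I^n_{>k}(t)\leq C(B; g, \varepsilon)$ we also have $h^n_k(0)\leq C(B; g, \varepsilon)$ uniformly in $n$ and $k$, so that, for each fixed $k$,
$$
\limsup_{n\to \infty} h^n_k(0)\, e^{-2^{2k} \varepsilon t_n/4}\leq C(B; g, \varepsilon) \lim_{n\to \infty} e^{-2^{2k} \varepsilon t_n/4}=0,
$$
since $t_n\to \infty$.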
Thus, we have
$$
h_k\leq h_{k-1}/8+ 2^{-2k} C(B; g, \varepsilon).
$$
Iterating this inequality, we obtain
\begin{eqnarray*}
& & h_k\leq h_{k-1}/8+ 2^{-2k} C(B; g, \varepsilon)\leq 8^{-2} h_{k-2}+ (2^{-2k}+2^{-2k-1}) C(B; g, \varepsilon)\leq \ldots \\
& & \leq 2^{-2k} C(B; g, \varepsilon)+ 2^{-3k} h_0\leq (2^{-2k}+ 2^{-3k}) C(B; g, \varepsilon),
\end{eqnarray*}
since by \eqref{eq:71}, $h_0\leq C(B; g, \varepsilon)$. It follows that
$$
\lim_{k\to \infty} \limsup_{n\to\infty} \norm{P_{>2^k} u^n(t_n, \cdot)}{H^1}=\lim_{k\to \infty}h_k =0,
$$
which is \eqref{eq:72}. Moreover, the attractor (whose existence is now established) is actually a {\it bounded subset} of $H^{2-\sigma}$ for all $\sigma>0$. Indeed, since every element of the attractor is of the form $u(\cdot)=\lim_n u^n(t_n, \cdot)$, we have by the last estimate
$$
\sup_k 2^{2k} \norm{P_{>2^k} u}{L^2}\leq C(B, g, \varepsilon),
$$
or $u\in B^2_{2, \infty}$. Of course, this implies
$$
\norm{u(\cdot)}{H^s}^2\sim \sum\limits_{k\geq 1} 2^{2k(s-1)} \norm{P_{\sim 2^k} u(\cdot)}{H^1}^2 \leq \sum\limits_{k\geq 1} 2^{2k(s-1)} 2^{-2k} C(B; g, \varepsilon, \delta)< C(B; g, \varepsilon)
$$
if $s<2$.
\section{Attractors for the viscous Camassa-Holm equation on the whole line} \label{sec:conclusions}
In this section, we indicate the main steps of the proof of Theorem \ref{theo:9}. Since most of the arguments are quite similar to those already presented in the case of the finite interval, we will frequently refer to the previous sections. To start with, let us point out that Theorem \ref{theo:1}, which applies to the (undamped) viscous Camassa-Holm equation \eqref{eq:2}, applies as stated to \eqref{e:1} as well.
The reader may easily reproduce the arguments from Section \ref{sec:90}, but we point out that the energy estimates in fact work better in the presence of the damping term $\mu u$; see the discussion regarding the proof of \eqref{e:2} below. To establish the asymptotic compactness of the dynamical system $S(t)$ associated with \eqref{e:1}, we resort to Proposition \ref{prop:3}, just as we have used the similar Proposition \ref{prop:4} in the case of the finite interval. Therefore, fix a sequence of times $\{t_n\}$ and a uniformly bounded sequence $\{u_n\}\subset H^1(\mathbf R^1)$, say $\sup_n \norm{u_n}{H^1}\leq B$. It remains to show
\begin{eqnarray}
\label{e:2}
& & \sup_{f\in H^1} \limsup_{t\to \infty}\norm{S(t)f}{H^1}\leq C(g, \mu, \varepsilon) \\
\label{e:3}
& & \sup\limits_n \norm{S(t_n)u_n}{H^1}\leq C(B, g, \varepsilon, \mu) \\
\label{e:4}
& & \lim_{N\to \infty} \limsup_n \norm{P_{>N} S(t_n) u_n}{H^1}=0 \\
\label{e:5}
& & \lim_{N\to \infty} \limsup_n \norm{S(t_n) u_n}{H^1(|x|>N)}=0.
\end{eqnarray}
Note that \eqref{e:2} is the point dissipativeness of $S(t)$, while \eqref{e:3}, \eqref{e:4}, \eqref{e:5} guarantee the asymptotic compactness of $S(t)$, according to Proposition \ref{prop:3}.
\subsection{Proof of \eqref{e:2}} Denote $ I(t)= \int_{\mathbf R^1} (u^2 +u_x^2) dx $ and compute
\begin{eqnarray*}
& & I'(t)= 2\int u u_t + u_x u_{x t} dx = 2\int u(-F(u, u_x) +\partial_x(a u_x)-\mu u+g) dx - \\
& & - 2 \int u_{xx} ( -F(u, u_x) +\partial_x(a u_x)-\mu u+g) dx= \\
& &=-2 \int (u F(u, u_x) +u_x \partial_x F(u, u_x)) dx-2 \int a(x) (u_x^2+ u_{xx}^2) dx - \\
& & - 2\int a'(x) u_{xx} u_x dx+2\int (u-u_{xx}) g \, dx -2 \mu \int (u^2+u_x^2) dx.
\end{eqnarray*}
We now split our considerations, depending on the assumptions on $a$.\\
{\bf Estimate under the assumption $\norm{a'}{L^\infty}\ll\varepsilon$.}\\
By Lemma \ref{le:5}, $\int (u F(u, u_x) +u_x \partial_x F(u, u_x)) dx=0$, and we estimate the rest by H\"older's inequality:
\begin{eqnarray*}
& & I'(t)\leq -2\varepsilon \int (u_x^2+ u_{xx}^2) dx+2 \norm{a'}{L^\infty} \norm{u_x}{L^2} \norm{u_{xx}}{L^2}+\\
& & +2 \norm{g}{L^2}(\norm{u}{L^2}+\norm{u_{xx}}{L^2}) - 2 \mu \int (u^2+u_x^2) dx.
\end{eqnarray*}
By the smallness of $\norm{a'}{L^\infty}$, we conclude \\
$\norm{a'}{L^\infty} \norm{u_x}{L^2} \norm{u_{xx}}{L^2}\leq \varepsilon( \norm{u_x}{L^2}^2+\norm{u_{xx}}{L^2}^2)/2$. On the other hand, by Young's inequality,
$$
\norm{g}{L^2}(\norm{u}{L^2}+\norm{u_{xx}}{L^2})\leq \mu \norm{u}{L^2}^2/2+ \varepsilon\norm{u_{xx}}{L^2}^2/4+\f{C}{\min(\mu, \varepsilon)}\norm{g}{L^2}^2.
$$
Altogether,
\begin{equation} \label{e:89}
I'(t)\leq -\f{\varepsilon}{2} \int (u_x^2+u_{xx}^2 )dx-\mu \int u^2 dx+ \f{C}{\min(\mu, \varepsilon)}\norm{g}{L^2}^2.
\end{equation}
We now show that an analogous estimate follows under the assumption $a''(x)\leq 2 a(x)$.
\\
{\bf Estimate under the assumption $a''(x)\leq 2 a(x)$.}\\
We perform one more integration by parts in the expression for $I'(t)$ to get
\begin{eqnarray*}
& & I'(t)= - 2 \int a(x) (u_x^2+ u_{xx}^2) dx +\int a''(x) u^2_x dx+2\int (u-u_{xx}) g \, dx - \\
& & - 2 \mu \int (u^2+u_x^2) dx \leq -2 \int a(x) u_{xx}^2 \, dx -2\mu \int (u_x^2 +u^2) dx +2\int (u-u_{xx}) g \, dx \leq \\
& & \leq -\min(\varepsilon, \mu) \int (u^2 + u_x^2) dx + \f{C}{\min(\mu, \varepsilon)} \norm{g}{L^2}^2.
\end{eqnarray*}
Thus, under either the smallness assumption $\norm{a'}{L^\infty}\ll\varepsilon$ or the assumption $a''(x)\leq 2 a(x)$, we have
$$
I'(t)+\f{\min(\varepsilon, \mu)}{2} I(t)\leq \f{C}{\min(\mu, \varepsilon)}\norm{g}{L^2}^2,
$$
which by Gronwall's inequality implies
$$
I(t)\leq e^{- \min(\varepsilon, \mu) t/2} I(0)+ \f{C}{\min(\mu, \varepsilon)^2}\norm{g}{L^2}^2= e^{- \min(\varepsilon, \mu) t/2} \norm{f}{H^1}^2 + \f{C}{\min(\mu, \varepsilon)^2}\norm{g}{L^2}^2.
$$
Taking the limit $t\to \infty$ establishes \eqref{e:2}.
\subsection{Proof of \eqref{e:3}} The uniform boundedness of the orbits follows from the last estimate as follows. Denote $I_n(t)=\norm{u_n(t, \cdot)}{H^1}^2$. We have
$$
I_n(t)\leq e^{- \min(\varepsilon, \mu) t/2} \norm{u_n(0, \cdot)}{H^1}^2 + \f{C}{\min(\mu, \varepsilon)^2}\norm{g}{L^2}^2\leq B^2+ \f{C}{\min(\mu, \varepsilon)^2}\norm{g}{L^2}^2,
$$
where $B=\sup_n \norm{u_n(0)}{H^1}$.
\subsection{Proof of \eqref{e:4}} The proof of \eqref{e:4} largely follows the argument for the similar estimate \eqref{eq:72}. Set
$$
I_{>2^k} (t)= \int_{\mathbf R^1} (u_{>2^k})^2+(\partial_x u_{>2^k})^2 dx
$$
and compute as in Section \ref{sec:vanish}
\begin{eqnarray*}
& & I_{>2^k}'(t)= 2 \int\limits P_{>2^k}^2 u F(u, u_x)+ P_{>2^k}^2 u_x \partial_x F(u, u_x) dx+ \\
& &+ 2 \int P_{>2^k} u \partial_x P_{>2^k}( a(x) u_x)dx+ P_{>2^k} u_x \partial^2_x P_{>2^k}( a(x) u_x)dx+ \\
& & + 2 \int (P_{>2^k}^2 u g + P_{>2^k}^2 u_x g_x) dx- 2 \mu \int ((P_{>2^k} u)^2 + (P_{>2^k} u_x)^2) dx.
\end{eqnarray*}
The estimates for the terms arising from the nonlinearity work just as in the case of the finite interval. Again, the damping terms can be ignored, because they give rise to terms with negative signs. In short, the estimates that we need can be summarized as
\begin{eqnarray*}
& & |\int\limits P_{>2^k}^2 u F(u, u_x)+ P_{>2^k}^2 u_x \partial_x F(u, u_x) dx| \leq C \norm{P_{>2^k} u_{xx}}{L^2}\norm{u}{H^1}^2\leq \\
& & \leq \varepsilon \norm{P_{>2^k} u_{xx}}{L^2}^2+ \f{C}{\varepsilon} \norm{u}{H^1}^4.
\end{eqnarray*}
Similarly, the terms arising from the forcing $g$ are estimated by
\begin{eqnarray*}
& & |\int (P_{>2^k}^2 u g + P_{>2^k}^2 u_x g_x )dx|\leq \varepsilon \norm{P_{>2^k} u_{xx}}{L^2}^2/100 + \f{C}{\varepsilon} \norm{g}{L^2}^2.
\end{eqnarray*}
Finally, the viscosity terms are in fact better behaved than the corresponding terms in the finite interval case, but one has to proceed in a slightly different fashion, due to the technical inconvenience that the operators $P_{>2^k}$ are not projections, i.e. $P_{>2^k}^2\neq P_{>2^k}$. We have
\begin{eqnarray*}
& & V = - 2 \int (P_{>2^k} u_x) P_{>2^k}( a(x) u_x)dx -2 \int (P_{>2^k} u_{xx}) \partial_x P_{>2^k}( a(x) u_x)dx = \\
& & = -2 \int (P_{>2^k} u_x) a(x) (P_{>2^k} u_x)dx- 2 \int (P_{>2^k} u_x) [P_{>2^k},a] u_x dx - \\
& & - 2 \int (P_{>2^k} u_{xx}) P_{>2^k} (a' u_x) dx - 2\int P_{>2^k} u_{xx} a(x) P_{>2^k} u_{xx} dx - \\
& & - 2 \int P_{>2^k} u_{xx} [P_{>2^k},a] u_{xx} dx\leq -2\varepsilon \int (P_{>2^k} u_{x})^2 + (P_{>2^k} u_{xx})^2 dx + \\
& & + 2 \int |(P_{>2^k} u_x) [P_{>2^k},a] u_x| dx+ 2 \int |(P_{>2^k} u_{xx}) P_{>2^k} (a' u_x)| dx+ \\
& & + 2 \int |P_{>2^k} u_{xx} [P_{>2^k},a] u_{xx} |dx.
\end{eqnarray*}
Note that
\begin{eqnarray*}
& & \int (P_{>2^k} u_{x})^2 + (P_{>2^k} u_{xx})^2 dx\geq \int (P_{>2^k} u_{xx})^2 dx \geq c 2^{2k} \int (P_{>2^k} u_{x})^2 dx\sim \\
& & \sim 2^{2k} \int (P_{>2^k} u)^2+ (P_{>2^k} u_{x})^2 dx= 2^{2k} I_{>2^k}(t).
\end{eqnarray*}
By the Calder\'on commutator estimates,
\begin{eqnarray*}
& & \int |(P_{>2^k} u_x) [P_{>2^k},a] u_x| dx\leq C 2^{-k} \norm{P_{>2^k} u_x}{L^2} \norm{a'}{L^\infty} \norm{u_x}{L^2}\leq C \norm{u_x}{L^2}^2 \norm{a'}{L^\infty}, \\
& & \int |(P_{>2^k} u_{xx}) P_{>2^k} (a' u_x)| dx\leq \norm{P_{>2^k} u_{xx}}{L^2}\norm{a'}{L^\infty} \norm{u_x}{L^2}\leq \\
& & \leq \varepsilon \norm{P_{>2^k} u_{xx}}{L^2}^2/100+ \frac{C}{\varepsilon} \norm{a'}{L^\infty}^2 \norm{u_x}{L^2}^2, \\
& & \int |P_{>2^k} u_{xx} [P_{>2^k},a] u_{xx} |dx\leq C \norm{P_{>2^k} u_{xx}}{L^2} \norm{a'}{L^\infty} \norm{u_x}{L^2}\leq \\
& & \leq \varepsilon \norm{P_{>2^k} u_{xx}}{L^2}^2/100+ \frac{C}{\varepsilon} \norm{a'}{L^\infty}^2 \norm{u_x}{L^2}^2.
\end{eqnarray*}
Altogether, the various terms in $I_{>2^k}'$ are estimated by
$$
I_{>2^k}'(t)\leq -\varepsilon 2^{2k} I_{>2^k}(t)+ \frac{C}{\varepsilon} \norm{a'}{L^\infty}^2 \norm{u_x}{L^2}^2+ \f{C}{\varepsilon}(\norm{g}{L^2}^2+\norm{u}{H^1}^4).
$$
By the uniform boundedness (i.e., \eqref{e:3}) and Gronwall's inequality, we deduce
$$
I_{>2^k}(t)\leq I_{>2^k}(0) e^{-\varepsilon 2^{2k} t}+ 2^{-2k} \f{C}{\varepsilon^2}(\norm{g}{L^2}^2+\sup\limits_{0\leq s\leq t}\norm{u(s, \cdot)}{H^1}^4+\norm{a'}{L^\infty}^2\sup\limits_{0\leq s\leq t}\norm{u(s, \cdot)}{H^1}^2 ).
$$
It follows that
$$
\limsup_n \norm{P_{>2^k} S(t_n) u_n}{H^1}\leq 2^{-k} C(B, g, \varepsilon)
$$
and therefore $\lim_{k\to \infty} \limsup_n \norm{P_{>2^k} S(t_n) u_n}{H^1}=0$, thus establishing \eqref{e:4}. Note that since $\limsup_n \norm{P_{>2^k} S(t_n) u_n}{H^1} \lesssim 2^{-k}$, it follows by the same argument as before that the attractor $\mathcal A\subset H^{2-\sigma}(\mathbf R^1)$ for every $\sigma>0$.
\subsection{Proof of \eqref{e:5}} Our last goal is to establish the uniform smallness of the $H^1$ energy functional away from large balls. Set
$$
J_{>N}(t)=\int (u^2(t,x)+u_x^2(t,x))(1-\psi(x/N)) dx.
$$
Compute the derivative
\begin{eqnarray*}
& & J_{>N}'(t)= 2\int (u u_t+u_x u_{xt})(1-\psi(x/N)) dx = \\
& &=-2\int (u F(u, u_x)+u_x \partial_x F(u, u_x)) (1-\psi(x/N)) dx- \\
& &- 2\mu\int (u^2+u_x^2)(1-\psi(x/N)) dx+\\
& & +2 \int (u \partial_x(a u_x)+ u_x \partial_x^2 (a u_x)) (1-\psi(x/N)) dx.
\end{eqnarray*}
The first term has already been handled in our previous paper \cite{Stanislavova}. According to Lemma 5 of \cite{Stanislavova}, the estimate is\footnote{This is actually not hard to justify. Observe that, by the conservation law $\int (u F(u, u_x)+u_x \partial_x F(u, u_x))dx=0$, all the integrations by parts in \eqref{e:8} that do not hit the term $(1-\psi(x/N))$ sum to zero. Therefore, the only terms that survive are those containing $N^{-1}\psi'(x/N)$. Observe that there are no $u_{xx}$ terms in those either, whence \eqref{e:8}.}
\begin{equation} \label{e:8}
|\int (u F(u, u_x)+u_x \partial_x F(u, u_x)) (1-\psi(x/N)) dx|\leq \f{C}{N}\norm{u(t, \cdot)}{H^1}^3.
\end{equation}
Next, integration by parts yields
\begin{eqnarray*}
& & \int (u \partial_x(a u_x)+ u_x \partial_x^2 (a u_x)) (1-\psi(x/N)) dx = \\
& & = -\int a u_x^2 (1-\psi(x/N)) dx+ N^{-1} \int a u u_x \psi'(x/N) dx-\\
& & - \int u_{xx} \partial_x (a u_x) (1-\psi(x/N)) dx+ N^{-1} \int u_x \partial_x(a u_x)\psi'(x/N) dx.
\end{eqnarray*}
The terms with the factor $N^{-1}$ are ``good'' terms. For the first such term, we estimate right away
$$
N^{-1} |\int a u u_x \psi'(x/N) dx|\leq C N^{-1}\norm{a}{L^\infty} \norm{u}{H^1}^2.
$$
For the second term containing $N^{-1}$, we have
\begin{eqnarray*}
& & N^{-1} \int u_x \partial_x(a u_x)\psi'(x/N) dx = N^{-1} \int a'(x) u_x^2 \psi'(x/N) dx - \\
& & - \f{1}{2N} \int u_x^2 \partial_x( a \psi'(x/N)) dx\leq \f{C}{N} \norm{u_x}{L^2}^2(\norm{a'}{L^\infty}+\norm{a}{L^\infty}),
\end{eqnarray*}
for some absolute constant $C$ (assuming, as we may, $N\geq 1$).
\\
Finally, we have to estimate the term $- \int u_{xx} \partial_x (a u_x) (1-\psi(x/N)) dx$. As before, we need to use either the smallness of $\norm{a'}{L^\infty}$ or the condition $a''(x)\leq 2 a(x)$. \\
{\bf Estimate under the assumption $a''(x)\leq 2 a(x)$.}\\
We have
\begin{eqnarray*}
& & - \int u_{xx} \partial_x (a u_x) (1-\psi(x/N)) dx= \\
& & = -\int a u_{xx}^2 (1-\psi(x/N)) dx- \int u_{xx} a' u_x (1-\psi(x/N)) dx\leq \\
& & \leq \f{1}{2} \int u_x^2 \partial_x (a'(1-\psi(x/N)) ) dx= \\
& &= \f{1}{2} \int u_x^2 a''(x) (1-\psi(x/N)) dx-\f{1}{2 N} \int a'(x) u_x^2 \psi'(x/N) dx\leq \\
& & \leq \f{1}{2} \int u_x^2 a''(x) (1-\psi(x/N)) dx+ \f{1}{2N}\norm{u_x}{L^2}^2 \norm{a'}{L^\infty}.
\end{eqnarray*}
All in all, we get
\begin{eqnarray*}
& &J_{>N}'(t)\leq \f{C}{N}(\norm{u(t, \cdot)}{H^1}^3+ (\norm{a}{L^\infty} +\norm{a'}{L^\infty}+\norm{a''}{L^\infty}) \norm{u(t, \cdot)}{H^1}^2)+\\
& &+ \int u_x^2 a''(x) (1-\psi(x/N)) dx-2\int a(x) u_x^2 (1-\psi(x/N)) dx -\\
& &- 2\mu \int (u^2 +u_x^2) (1-\psi(x/N)) dx.
\end{eqnarray*}
We now use the condition $a''(x)\leq 2 a(x)$ to conclude that the sum of the two middle terms is non-positive, whence
$$
J_{>N}'(t)\leq \f{C}{N}(\norm{u(t, \cdot)}{H^1}^3+ (\norm{a}{L^\infty} +\norm{a'}{L^\infty}+\norm{a''}{L^\infty}) \norm{u(t, \cdot)}{H^1}^2) - 2\mu J_{>N}(t).
$$
By the uniform bounds on $\norm{u(t, \cdot)}{H^1}$ (i.e., \eqref{e:3}) and the previous considerations, it follows that
\begin{equation} \label{eq:end}
J_{>N}'(t)+ \mu J_{>N}(t)\leq \f{C(B, g, \varepsilon, \mu)}{N}.
\end{equation}
We now show that \eqref{eq:end} also holds under an appropriate smallness assumption on $\norm{a'}{L^\infty}$.
\\
{\bf Estimate under the assumption $\norm{a'}{L^\infty}\ll\varepsilon$.}\\
We have
\begin{eqnarray*}
& & - \int u_{xx} \partial_x (a u_x) (1-\psi(x/N)) dx\leq \\
& & \leq \int |u_{xx}| |a'(x)||u_x| (1-\psi(x/N)) dx- \int u_{xx}^2 a(x)(1-\psi(x/N)) dx \leq \\
& &\leq \norm{a'}{L^\infty} (\int u_{xx}^2 (1-\psi(x/N)) dx )^{1/2} (\int u_x^2(1-\psi(x/N)) dx )^{1/2}-\\
& &- \varepsilon \int u_{xx}^2 (1-\psi(x/N)) dx\leq 2\f{\norm{a'}{L^\infty}^2}{\varepsilon} \int u_x^2(1-\psi(x/N)) dx\leq 2\delta^2 \varepsilon J_{>N}(t).
\end{eqnarray*}
Taking $\delta$ so small that $\delta^2\varepsilon<\mu$ ensures that the term $2\delta^2 \varepsilon J_{>N}(t)$ is subsumed by \\
$-2\mu \int (u^2+u_x^2)(1-\psi(x/N)) dx$, and therefore we arrive at \eqref{eq:end} again. \\
Gronwall's inequality applied to \eqref{eq:end} yields
$$
J_{>N}(t)\leq e^{-\mu t} J_{>N}(0)+ \f{C(B, g, \varepsilon, \mu)}{N}.
$$
Thus $\limsup_{t_n\to\infty} J_{>N}(t_n)\leq N^{-1} C(B, g, \varepsilon, \mu)$, whence
$$
\lim_{N\to\infty} \limsup_{t_n\to\infty} J_{>N}(t_n)=0.
$$
\begin{thebibliography}{100}
\bibitem{Beals} R. Beals, D. Sattinger and J. Szmigielski, \emph{Multipeakons and a theorem of Stieltjes}, Inverse Problems {\bf 15} (1999), 1--4.
\bibitem{Cam1} R. Camassa, D. Holm and J.M. Hyman, \emph{A new integrable shallow water equation}, Adv. Appl. Mech. {\bf 31} (1994), 1--33.
\bibitem{Camassa} R. Camassa and D. Holm, \emph{An integrable shallow water equation with peaked solitons}, Phys. Rev. Lett. {\bf 71} (1993), no. 11, 1661--1664.
\bibitem{chen} S. Chen, C. Foias, D. Holm, E. Olson, E.S. Titi and S. Wynne, \emph{The Camassa-Holm equations and turbulence. Predictability: quantifying uncertainty in models of complex phenomena (Los Alamos, NM, 1998)}, Phys. D {\bf 133} (1999), 49--65.
\bibitem{Coutand} D. Coutand, J. Peirce and S.
Shkoller, \emph{Global well-posedness of weak solutions for the Lagrangian averaged Navier-Stokes equations on bounded domains}, Commun. Pure Appl. Anal. {\bf 1} (2002), no. 1, 35--50.
\bibitem{Helge} G. Coclite, H. Holden and K. Karlsen, \emph{Wellposedness for a parabolic-elliptic system}, Disc. Cont. Dyn. Sys. {\bf 13} (2005), 659--682.
\bibitem{con2} A. Constantin, \emph{On the inverse spectral problem for the Camassa-Holm equation}, J. Funct. Anal. {\bf 155} (1998), no. 2, 352--363.
\bibitem{Co} A. Constantin, \emph{Existence of permanent and breaking waves for a shallow water equation: a geometric approach}, Ann. Inst. Fourier {\bf 50} (2000), 321--362.
\bibitem{con5} A. Constantin, \emph{On the scattering problem for the Camassa-Holm equation}, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. {\bf 457} (2001), no. 2008, 953--970.
\bibitem{con3} A. Constantin and J. Escher, \emph{Wave breaking for nonlinear nonlocal shallow water equations}, Acta Math. {\bf 181} (1998), no. 2, 229--243.
\bibitem{Co3} A. Constantin and J. Escher, \emph{Global weak solutions for a shallow water equation}, Indiana Univ. Math. J. {\bf 47} (1998), 1525--1545.
\bibitem{Co4} A. Constantin and J. Escher, \emph{Global existence and blow-up for a shallow water equation}, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) {\bf 26} (1998), 303--328.
\bibitem{con1} A. Constantin, V. Gerdjikov and R. Ivanov, \emph{Inverse scattering transform for the Camassa-Holm equation}, Inverse Problems {\bf 22} (2006), 2197--2207.
\bibitem{con4} A. Constantin and H.P. McKean, \emph{A shallow water equation on the circle}, Comm. Pure Appl. Math. {\bf 52} (1999), no. 8, 949--982.
\bibitem{Co2} A. Constantin and L. Molinet, \emph{Global weak solutions for a shallow water equation}, Comm. Math. Phys. {\bf 211} (2000), 45--61.
\bibitem{Co1} A. Constantin and W. Strauss, \emph{Stability of a class of solitary waves in compressible elastic rods}, Phys. Lett. A {\bf 270} (2000), no. 3-4, 140--148.
\bibitem{Dai} H. Dai and Y.
Huo, \emph{Solitary shock waves and other travelling waves in a general compressible hyperelastic rod}, R. Soc. Lond. Proc. Ser. A Math. Phys. Eng. Sci. {\bf 456} (2000), 331--363.
\bibitem{Davies} E. B. Davies, ``Heat kernels and spectral theory'', Cambridge Tracts in Mathematics, 92, Cambridge University Press, Cambridge, 1990.
\bibitem{Foias} C. Foias, D. Holm and E.S. Titi, \emph{The three dimensional viscous Camassa-Holm equations, and their relation to the Navier-Stokes equations and turbulence theory}, J. Dynam. Differential Equations {\bf 14} (2002), no. 1, 1--35.
\bibitem{FF} A. S. Fokas and B. Fuchssteiner, \emph{Symplectic structures, their B\"acklund transformation and hereditary symmetries}, Physica D {\bf 4} (1981), 47--66.
\bibitem{GN} A. E. Green and P. M. Naghdi, \emph{A derivation of equations for wave propagation in water of variable depth}, J. Fluid Mech. {\bf 78} (1976), 237--246.
\bibitem{Ilin} A. Ilyin and E.S. Titi, \emph{Attractors for the two-dimensional Navier-Stokes-$\alpha$ model: an $\alpha$-dependence study}, J. Dynam. Differential Equations {\bf 15} (2003), no. 4, 751--778.
\bibitem{Titi5} A. Ilyin and E.S. Titi, \emph{Sharp estimates for the number of degrees of freedom for the damped-driven 2D Navier-Stokes equations}, (2005), preprint, available at http://arxiv.org/pdf/math.AP/0507327.
\bibitem{kaup} D. J. Kaup, \emph{Evolution of the scattering coefficients of the Camassa-Holm equation, for general initial data}, Stud. Appl. Math. {\bf 117} (2006), no. 2, 149--164.
\bibitem{lenells} J. Lenells, \emph{Traveling wave solutions of the Camassa-Holm equation}, J. Differential Equations {\bf 217} (2005), no. 2, 393--430.
\bibitem{McKean} H.P. McKean, \emph{Breakdown of a shallow water equation}, Mikio Sato: a great Japanese mathematician of the twentieth century, Asian J. Math. {\bf 2} (1998), no. 4, 867--874.
\bibitem{McOwen} R.
McOwen, ``Partial Differential Equations: Methods and Applications'', Second Edition, Prentice Hall, 2003.
\bibitem{SimonII} M. Reed and B. Simon, ``Methods of modern mathematical physics. II. Fourier analysis, self-adjointness'', Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1975.
\bibitem{SimoniV} M. Reed and B. Simon, ``Methods of modern mathematical physics. IV. Analysis of operators'', Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1978.
\bibitem{Stanislavova} M. Stanislavova and A. Stefanov, \emph{On global finite energy solutions of the Camassa-Holm equation}, J. Four. Anal. Appl. {\bf 11} (2005), no. 5, 511--531.
\bibitem{Stanislavova1} M. Stanislavova, \emph{On the global attractor for the damped Benjamin-Bona-Mahony equation}, Disc. Cont. Dyn. Syst. (Supplement Volume) (2005).
\bibitem{wang} M. Stanislavova, A. Stefanov and B. Wang, \emph{Asymptotic smoothing and attractors for the generalized BBM equation on $\mathbf R^3$}, J. Diff. Eq. {\bf 219} (2005), 451--483.
\bibitem{Whitham} G. B. Whitham, ``Linear and Nonlinear Waves'', Wiley, New York, 1973.
\end{thebibliography}
\end{document}
\begin{document} \title[Modelling and well-posedness]{Modelling and well-posedness of evolutionary differential variational-hemivariational inequalities} \author[N. Skoglund Taki and K. Kumar]{Nadia Skoglund Taki and Kundan Kumar} \address{Center for Modeling of Coupled Subsurface Dynamics \\ Department of Mathematics\\ University of Bergen\\ Postbox 7800\\ 5020 Bergen\\ Norway} \date{\today} \email{[email protected], [email protected]} \keywords{Variational-hemivariational inequality, Well-posedness, Contact problem, Modelling} \subjclass[2020]{Primary: 47J20, 49K40; Secondary: 74M15} \begin{abstract} In this paper, we study the well-posedness of a class of evolutionary variational-hemivariational inequalities coupled with a nonlinear ordinary differential equation in Banach spaces. The proof is based on an iterative approximation scheme showing that the problem has a unique mild solution. In addition, we establish continuity of the flow map with respect to the initial data. Under the general framework, we consider two new applications for the modelling of frictional contact with viscoelastic materials, where the friction coefficient $\mu$ depends on an external state variable $\alpha$ and the slip rate $|\Dot{u}_\tau|$. In the first application, we consider Coulomb friction with normal compliance, and in the second, normal damped response. In both cases, we present a new first-order approximation of the Dieterich rate-and-state friction law. \end{abstract} \maketitle \section{Introduction} \noindent This work concerns the study of an evolutionary differential variational-hemivariational inequality modelling mathematical problems from contact mechanics. These systems are relevant for many physical phenomena ranging from engineering to biology (see, e.g., \cite{Dancer2003, Duvaut1976,shillor2004} and the references therein).
We are interested in frictional contact phenomena for viscoelastic materials with the linearized strain tensor, which have been studied intensively, see, e.g., some of the relevant books \cite{Duvaut1976,Migorski2012, shillor2004}. \\ \indent Let $V$ and $Y$ be two Banach spaces, and for $T>0$, we let $[0,T]$ be the time interval of interest. Then, the Cauchy problem under consideration reads \begin{subequations}\label{eq:prob} \begin{align}\label{eq:alpha_firsteq} &\Dot{\alpha}(t) = \mathcal{G}(t,\alpha(t),Mw(t)),\\ &\inner{\Dot{w}(t) + A(t, w(t)) - f(t)+ \mathcal{R}w(t) }{v- w(t) } + \varphi(t, \alpha(t),\mathcal{S}_\varphi w(t), Mw(t), Kv) \label{eq:w1}\\ &- \varphi(t, \alpha(t),\mathcal{S}_\varphi w(t), Mw(t), Kw(t))+ j^\circ(t,\alpha(t), \mathcal{S}_j w(t),Nw(t); Nv -Nw(t))\geq 0, \notag \end{align} for all $v\in V$, a.e. $t\in (0,T)$ with \begin{equation}\label{eq:w2} w(0) = w_0, \ \ \alpha(0) = \alpha_0. \end{equation} \end{subequations} Here, $A$ and $\mathcal{R}$ are nonlinear operators related to the viscoelastic constitutive laws. Further, $j^\circ$ is a generalized directional derivative of a functional $j$. The functionals $\varphi$ and $j$ are determined by contact boundary conditions. We require $\varphi$ to be convex in its last argument, while $j$ may be nonconvex with appropriate structures given later (see Section \ref{sec:problem_and_mainresult}). The operators $\mathcal{S}_\varphi$ and $\mathcal{S}_j$ relate to the contact conditions, and $\mathcal{G}$ is assumed to be a nonlinear operator related to the change in the external state variable $\alpha$. The data $f$ is related to the given body forces and surface traction, and $w_0$ and $\alpha_0$ represent the initial data. Lastly, $M$, $N$, and $K$ are bounded linear operators related to the tangential and normal trace operators. The Cauchy problem \eqref{eq:w1}-\eqref{eq:w2} is called a hemivariational inequality if $\varphi\equiv 0$ and variational inequality if $j^\circ\equiv 0$. 
Moreover, a solution to \eqref{eq:prob} is understood in the mild sense. \begin{definition}\label{def:sols} A pair of functions $(w,\alpha)$, where $\alpha\in C([0,T];Y)$ and $w : [0,T] \rightarrow V$ is measurable, is said to be a mild solution of \eqref{eq:prob} if $\alpha$ and $w$, respectively, satisfy \begin{equation*} \alpha(t) = \alpha_0 + \int_0^t \mathcal{G}(s,\alpha(s),Mw(s)) ds \end{equation*} and \eqref{eq:w1}-\eqref{eq:w2}. \end{definition} The main purpose of this paper is to extend the results from \cite{Migorski2022, Patrulescu2017} to prove well-posedness of \eqref{eq:prob} with applications to rate-and-state frictional contact problems. We prove that the pair $(w,\alpha)$ is a solution to \eqref{eq:prob} in the sense of Definition \ref{def:sols} and that the flow map depends continuously on the initial data. The problem setting is motivated by \cite{Pipping2015_phd,shillor2004,Patrulescu2017,sofonea2017}, and the techniques have taken inspiration from \cite{Migorski2019}. \subsection{Previous well-posedness results} Special cases of \eqref{eq:prob} have been investigated in the literature. The recent work \cite{Migorski2022} is closest to our setting. Its authors prove well-posedness for an ordinary differential equation coupled with a variational-hemivariational inequality, with applications to viscoplastic materials and viscoelasticity with adhesion. In fact, if we let $\varphi$ be independent of $Mw$ in its third argument and relax the generalized structure on $\varphi$ (see Remark \ref{remark:mubounded} for more details), then \eqref{eq:prob} reduces to the problem studied in \cite{Migorski2022}. However, keeping $Mw$ in $\varphi$ and generalizing the structure on $\varphi$ (see Remark \ref{remark:mubounded}) allows us to include the application of a first-order approximation of the so-called Dieterich rate-and-state friction law, \eqref{eq:regularized} and \eqref{eq:aginglaw} (introduced in Section \ref{sec:rateandstate}).
In the quasi-static case tackled in \cite{Patrulescu2017}, with $j^\circ \equiv 0$ and a simplified structure on $\varphi$ (see Remark \ref{remark:mubounded}), existence and uniqueness of the solution pair were proved by an implicit method, where \eqref{eq:alpha_firsteq} is rewritten to depend only on $w$. However, the setting of \cite{Patrulescu2017} is not directly applicable in our case, as the inertial term restricts the space-time regularity of $w$. Moreover, in \cite{Pipping2019,Pipping2015}, the system \eqref{eq:both1}-\eqref{eq:both2} is studied in a time-discrete setting with $\mathcal{S}_\varphi w \equiv \text{constant}$ (the normal stresses are constant, referred to as Tresca friction), making $\varphi$ independent of $Mw$ in its third argument, relaxing the structure on $\varphi$ (see Remark \ref{remark:mubounded}), and putting $j^\circ \equiv 0$. In contrast, we also consider the continuous dependence on initial data, which is not covered in \cite{Patrulescu2017,Pipping2019,Pipping2015}. Moreover, neglecting $\alpha$ and $Mw$ in $\varphi$ and $\alpha$ in $j^\circ$, existence and uniqueness are provided in \cite[Section 10.3]{sofonea2017}. If we let $\varphi \equiv 0$ and $j^\circ$ be independent of $\alpha$ and $\mathcal{S}_jw$, existence and uniqueness were proved in \cite[Section 6]{han}. We refer to \cite[p.2]{Migorski2022} for further discussion. \subsection{Physical setting}\label{sec:intro_physics} A mathematical model in contact mechanics needs several relations: a constitutive law, a balance equation, boundary conditions, interface laws, and initial conditions. The constitutive laws help us describe the material's mechanical reactions (stress-strain type). In most cases, constitutive laws originate from experiments, though they are verified to satisfy certain invariance principles.
We refer to \cite[Chapter 6]{han2002} for a general description of several diagnostic experiments which provide the information needed to construct constitutive laws for specific materials. The interface laws are prescribed on the possible contact surface. We refer to the interface laws in the tangential direction as friction laws and in the normal direction as contact conditions. The mathematical treatment of these problems gives rise to variational-hemivariational inequalities of the form \eqref{eq:w1}-\eqref{eq:w2}, where we put appropriate constraints on the operators to fit the applications of interest. \\ \indent We are mainly interested in studying frictional problems in which the friction coefficient depends on the time-evolving state of the contact surface. This is modelled via an ODE, where the \emph{state variable} $\alpha$ tracks information about the contact surface using the slip rate $|\Dot{u}_\tau(t)|$ obtained from solving \eqref{eq:w1}-\eqref{eq:w2}, and then updates the friction coefficient. We assume the following dependencies: \begin{align}\label{eq:mu_alpha} \mu &= \mu(|\Dot{u}_\tau(t)|,\alpha(t)), &\Dot{\alpha}(t) = G(\alpha(t),|\Dot{u}_\tau(t)|). \end{align} Laws with the dependencies seen in \eqref{eq:mu_alpha} are referred to as rate-and-state friction laws. Among the most common models are the experimentally derived Dieterich-Ruina laws, which are standard in geophysical applications in the earth sciences. We refer to \cite{Marone1998} for an overview and comparison of some commonly used laws. There have been physical issues with the standard rate law; e.g., the friction coefficient becomes negative as $|\Dot{u}_\tau(t)| \rightarrow 0$.
This is repaired by using the regularized or the truncated law (see \cite[Section 1.1-1.3]{Pipping2015_phd} and references therein), which are, respectively, given by \begin{subequations}\label{eq:both1} \begin{align}\label{eq:regularized} \mu(|\Dot{u}_\tau(t)|,\alpha(t)) &= a \: \mathrm{arcsinh} \bigg(\frac{|\Dot{u}_\tau(t)|}{2 v_\alpha(t)} \bigg), \\ \mu(|\Dot{u}_\tau(t)|,\alpha(t)) &=a\log^+ \bigg(\frac{|\Dot{u}_\tau(t)|}{v_\alpha(t)}\bigg) \ \ \text{ with } \log^+v= \begin{dcases} \log v, \ \ \text{ if } v\geq 1,\\ 0, \ \ \text{ otherwise}, \end{dcases} \label{eq:truncated} \end{align} \end{subequations} where $v_\alpha(t) = v_0e^{-\frac{1}{a}(\mu_0 + b\alpha(t))}$. The coefficients $a$, $b$, $v_0$, and $\mu_0$ are system parameters (see, e.g., \cite{Marone1998,Helmstetter},\cite[Section 1.2]{Pipping2015_phd}). The most popular rate-and-state friction laws are the so-called aging law and slip law, respectively described by \begin{subequations}\label{eq:both2} \begin{align}\label{eq:aginglaw} \Dot{\alpha} (t) &= \frac{v_0\mathrm{e}^{-\alpha(t)}- |\Dot{u}_\tau(t)|}{L},\\ \Dot{\alpha}(t) &= -\frac{|\Dot{u}_\tau(t)|}{L} \Big[\log\Big(\frac{|\Dot{u}_\tau(t)|}{v_0}\Big) + \alpha(t)\Big], \label{eq:sliplawrate} \end{align} \end{subequations} with $L$ being a system parameter (see, e.g., \cite{Helmstetter}). In the framework presented in this paper, we are not able to include \eqref{eq:both1}-\eqref{eq:both2} directly. The main issue is that $|\Dot{u}_\tau(t)|$ does not have an $L^\infty$-bound on the contact surface. We therefore consider a first-order Taylor approximation of $\mathrm{e}^{ \Hat{c} \alpha(t)}$ around $\alpha_0$, with $\Hat{c} = \frac{b}{a}$ and $\Hat{c} = -1$ corresponding to \eqref{eq:regularized} and \eqref{eq:aginglaw}, respectively. This leads to a first-order approximation of \eqref{eq:regularized} and \eqref{eq:aginglaw} (see Section \ref{sec:rateandstate}).
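To make the laws above concrete, the following sketch evaluates the regularized law \eqref{eq:regularized}, the truncated law \eqref{eq:truncated}, and the right-hand side of the aging law \eqref{eq:aginglaw}. The parameter values are illustrative only and are not taken from the cited references.

```python
import numpy as np

# Illustrative system parameters (not from the cited references)
a, b, v0, mu0, L = 0.010, 0.015, 1e-6, 0.6, 1e-4

def v_alpha(alpha):
    # v_alpha(t) = v0 * exp(-(mu0 + b*alpha(t)) / a)
    return v0 * np.exp(-(mu0 + b * alpha) / a)

def mu_regularized(slip_rate, alpha):
    # Regularized law: mu = a * arcsinh(|u_tau'| / (2 v_alpha));
    # unlike the standard log law, mu stays nonnegative as the slip rate -> 0.
    return a * np.arcsinh(slip_rate / (2.0 * v_alpha(alpha)))

def mu_truncated(slip_rate, alpha):
    # Truncated law: mu = a * log^+(|u_tau'| / v_alpha)
    r = slip_rate / v_alpha(alpha)
    return a * np.log(r) if r >= 1.0 else 0.0

def aging_law_rhs(alpha, slip_rate):
    # Aging law: alpha' = (v0 * exp(-alpha) - |u_tau'|) / L
    return (v0 * np.exp(-alpha) - slip_rate) / L
```

At the reference slip rate $v_0$ with $\alpha = 0$, both friction laws return a value close to $\mu_0$ and the aging law is in equilibrium, consistent with the steady-state interpretation of \eqref{eq:both1}-\eqref{eq:both2}.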
In \cite{Patrulescu2017}, the authors assume that $\mu(|\Dot{u}_\tau(t)|,\alpha(t))$ is bounded and Lipschitz with respect to both arguments. In addition to the boundedness assumption on the friction coefficient, \cite{Migorski2022} considers applications in the frictionless setting and $\mu = \mu(u_\nu(t))$. Our framework is therefore complementary to the frameworks in \cite{Patrulescu2017,Migorski2022}, and we need to use different techniques to prove the well-posedness of \eqref{eq:prob} in the sense of Definition \ref{def:sols}. A discussion of many different friction models can be found in \cite{Zmitrowicz}. \\ \\ \subsection{Contributions and outline} The novelties of this paper are: \begin{itemize} \item Well-posedness of \eqref{eq:prob} in the sense of Definition \ref{def:sols}, where the proof is based on an iterative decoupling approach that directly gives rise to a numerical method. \item Allowing a more complicated structure on $\varphi$ (see Remark \ref{remark:mubounded}), which relaxes the boundedness requirement on the friction coefficient but also covers the case when it is bounded. \item Two new applications related to friction: one is a contact problem with normal compliance, and the other is a contact problem with normal damped response. Both applications pertain to rate-and-state friction. \item Introducing a first-order approximation of the rate-and-state laws \eqref{eq:regularized} and \eqref{eq:aginglaw} which falls under the framework of this paper. \end{itemize} The paper is organized as follows. In Section \ref{sec:func_nonsmooth}, we introduce the function spaces and some basics of nonsmooth analysis in order to better understand the problem setting. In Section \ref{sec:problem_and_mainresult}, we present our problem statement and the assumptions on the data. The section ends with our main result, Theorem \ref{thm:mainresult}, which summarizes the well-posedness of \eqref{eq:prob} in the sense of Definition \ref{def:sols}.
The proof of the theorem is presented in Section \ref{sec:proof_main_result}, utilizing a preliminary result stated in Section \ref{sec:preliminary}. Next, two applications fitting our framework are introduced in Sections \ref{sec:application1}-\ref{sec:application2}. In Section \ref{sec:rateandstate}, we introduce a first-order approximation of \eqref{eq:regularized} and \eqref{eq:aginglaw}. Lastly, in Appendix \ref{appendix:comments_app}, we include remarks on the assumptions needed in the proof of Theorem \ref{thm:mainresult}. Appendices \ref{appendix:proof_part2}-\ref{appendix:assumption_phiandj} contain proofs of results that are mostly available, or similar to proofs found elsewhere, but needed throughout the paper. \subsection{Notation} We now present some notation that will be used in this paper. \begin{itemize} \item Let $0<T<\infty$ be the maximal time. \item Let $d$ denote the dimension. In the applications, $d=2,3$. \item A point in $\mathbb{R}^d$ is denoted by $x = (x_i)$, $1 \leq i \leq d$. \item $\mathbb{S}^d$ denotes the space of second order symmetric tensors on $\mathbb{R}^d$. \item We denote by $|\cdot|$ the Euclidean norm. \item $\Omega \subset \mathbb{R}^d$ is a bounded open connected subset with a Lipschitz boundary $\Gamma = \partial \Omega$. We split $\Gamma$ into three disjoint parts: $\Gamma_D$, $\Gamma_N$, and $\Gamma_C$ with $\mathrm{meas}(\Gamma_D)>0$, $\mathrm{meas}(\Gamma_C)>0$, i.e., nonzero Lebesgue measure, but $\Gamma_N$ is allowed to be empty. \item In the applications, $\Omega$ is the reference configuration of a viscoelastic deformable body sliding on a foundation. Moreover, $\Gamma_D$ denotes the Dirichlet boundary, $\Gamma_N$ the Neumann boundary, and $\Gamma_C$ the contact boundary. \item $\nu$ denotes the outward normal on $\Gamma$. \item We denote $\Bar{\Omega} = \Omega \cup \Gamma$.
\item $L^p(\Omega)$ denotes the space of Lebesgue $p$-integrable functions equipped with the norm $ \norm{v}_{L^p(\Omega)} = \big( \int_\Omega |v|^p dx \big)^{1/p}$ for $1\leq p <\infty$, with the usual modification for $L^\infty(\Omega)$. \item $C^\infty_c(\Omega)$ denotes the space of infinitely differentiable functions with compact support. \item We denote by $c$ a generic positive constant, which may change from line to line. \item For a function $h(t)$, we denote its time derivative by $\Dot{h}(t)$ and its second time derivative by $\Ddot{h}(t)$, assuming throughout that $h(t)$ has enough regularity for these derivatives to make sense. \item $(\Tilde{X},\norm{\cdot}_{\Tilde{X}})$ and $(\Tilde{Y},\norm{\cdot}_{\Tilde{Y}})$ denote arbitrary (separable reflexive) Banach spaces. For the convenience of the reader, this notation will be used when introducing general theory. If the theory is only needed for a specific (separable reflexive) Banach space, we write the specific space. \item The dual space of $\Tilde{X}$ will be denoted by $\Tilde{X}^\ast$. \item $2^{\Tilde{X}^\ast}$ denotes the set containing all subsets of $\Tilde{X}^\ast$. \item The dual product between the spaces $\Tilde{X}$ and $\Tilde{X}^\ast$ will be denoted by $\inner{\cdot}{\cdot}_{\Tilde{X}^\ast \times \Tilde{X}}$. \item When $\Tilde{X}$ is an inner product space, we denote its inner product by $\scalarprod{\cdot}{\cdot}_{\Tilde{X}}$. \item $U$, $V$, $X$, $Y$, and $Z$ denote real separable reflexive Banach spaces, and $H$ is a real Hilbert space. \item The embedding $V \subset H \subset V^\ast$ is referred to as an evolution triple. Here, the embedding $V \subset H$ is continuous and $V$ is dense in $H$. It follows that $H \subset V^\ast$ is continuously embedded (see, e.g., \cite[Section 7.2]{Roubicek2006}).
Moreover, $V \subset H$ is compactly embedded, leading to $H \subset V^\ast$ being compactly embedded (see, e.g., \cite[Remark 3.4.4]{Denkowski_application}). \item For simplicity of notation, the dual product between $V^\ast$ and $V$ is denoted by $\inner{\cdot}{\cdot} = \inner{\cdot}{\cdot}_{V^\ast \times V} $. \item $\mathcal{L}(\Tilde{X},\Tilde{Y})$ denotes the set of all bounded linear maps from $\Tilde{X}$ into $\Tilde{Y}$. \item We denote the operator norm of the operators $M: V \rightarrow U$, $N : V \rightarrow X$, and $K : V \rightarrow Z$ as $\norm{M} = \norm{M}_{\mathcal{L}(V,U)}$, $\norm{N} = \norm{N}_{\mathcal{L}(V,X)}$, and $\norm{K} = \norm{K}_{\mathcal{L}(V,Z)}$, respectively. \end{itemize} \tableofcontents \section{Function spaces and basics of nonsmooth analysis} \label{sec:func_nonsmooth} \noindent In this section, we present the function spaces and fundamental results. For further information, we refer to standard textbooks, e.g., \cite{Cheney2001,evans,Pedersen1989,Roubicek2006}. \subsection{Sobolev spaces}\label{sec:sobolev_spaces} This section defines the solution spaces and the usual Sobolev spaces, which will become useful in Section \ref{sec:problem_and_mainresult} and in the applications, i.e., Section \ref{sec:applications}. We define \begin{align}\label{eq:H_space} L^p(\Omega; \mathbb{R}^d ) &= \{ v = (v_i) : v_i \in L^p(\Omega), \ 1\leq i \leq d\},\\ L^p(\Omega; \mathbb{S}^d ) &= \{ \tau = (\tau_{ik}) : \tau_{ik} = \tau_{ki} \in L^p(\Omega), \ 1\leq i,k \leq d\}, \label{eq:Q_space} \end{align} for $1\leq p \leq \infty$. The associated norms will be denoted $\norm{\cdot}_{ L^p(\Omega; \mathbb{R}^d )}$ and $\norm{\cdot}_{L^p(\Omega; \mathbb{S}^d )}$, respectively. 
For $p=2$, \eqref{eq:H_space}-\eqref{eq:Q_space} are Hilbert spaces with the canonical inner products \begin{align*} \scalarprod{u}{v}_{ L^2(\Omega; \mathbb{R}^d )} &= \int_\Omega u_iv_i dx = \int_\Omega u\cdot v dx, &\scalarprod{\sigma}{\tau}_{L^2(\Omega; \mathbb{S}^d )} &= \int_\Omega \sigma_{ik} \tau_{ik} dx = \int_\Omega \sigma : \tau dx \end{align*} for all $u,v \in L^2(\Omega; \mathbb{R}^d) $, $\sigma,\tau \in L^2(\Omega; \mathbb{S}^d)$. Moreover, we define the spaces \begin{equation*} H^1(\Omega) = \{v \in L^2(\Omega) : \text{ the weak derivatives } \frac{\partial v }{\partial x_i}\text{ exist in } L^2(\Omega) , \ 1\leq i \leq d\} \end{equation*} and \begin{equation*} H^1(\Omega;\mathbb{R}^d) = \{ v = (v_i) : v_i \in H^1(\Omega), \ 1\leq i \leq d\}. \end{equation*} With abuse of notation, the trace of functions $v \in H^1(\Omega;\mathbb{R}^d)$ on $\Gamma$ will still be denoted by $v$. For the displacement, we use the space \begin{equation}\label{eq:V_space} V = \{v \in H^1(\Omega;\mathbb{R}^d) : v = 0 \ \text{ on }\Gamma_D \}. \end{equation} As a consequence of $\mathrm{meas}(\Gamma_D) > 0$, it follows by Korn's inequality, i.e., $\norm{\varepsilon(v)}_{L^2(\Omega; \mathbb{S}^d )}\geq c\norm{v}_{H^1(\Omega;\mathbb{R}^d)}$ (see, e.g., \cite[Lemma 6.2]{Kikuchi1988}), that $V$ is a Hilbert space with the canonical inner product \begin{equation*} \scalarprod{u}{v}_V = \int_\Omega \varepsilon(u) : \varepsilon(v) dx. \end{equation*} Here, $\varepsilon : H^1(\Omega;\mathbb{R}^d) \rightarrow L^2(\Omega; \mathbb{S}^d )$ is the deformation operator defined by \begin{equation*} \varepsilon(u) = (\varepsilon_{ik}(u)), \ \ \ \varepsilon_{ik}(u) = \frac{1}{2} \Big(\frac{\partial u_k }{\partial x_i} + \frac{\partial u_i}{\partial x_k}\Big). \end{equation*} We denote the associated norm on $V$ by $\norm{\cdot}_V$. 
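As a concrete check of the deformation operator, the following sketch (a hypothetical numerical illustration, not part of the analysis) approximates $\varepsilon(u)$ by central finite differences for a linear displacement field $u(x) = Bx$, for which $\varepsilon(u) = \frac{1}{2}(B + B^\top)$ exactly.

```python
import numpy as np

d = 2
B = np.array([[0.3, -0.1],
              [0.7,  0.2]])

def u(x):
    # linear displacement field u(x) = B x
    return B @ x

def eps(u_func, x, h=1e-6):
    # eps_ik(u) = (d u_k / d x_i + d u_i / d x_k) / 2 via central differences
    J = np.empty((d, d))
    for k in range(d):
        e = np.zeros(d)
        e[k] = h
        J[:, k] = (u_func(x + e) - u_func(x - e)) / (2.0 * h)  # J[i, k] = du_i/dx_k
    return 0.5 * (J + J.T)  # symmetric part of the displacement gradient

E = eps(u, np.array([1.0, 2.0]))
```

The result is symmetric and equals $\frac{1}{2}(B+B^\top)$ up to roundoff; in particular, the skew-symmetric part of $\nabla u$ (a rigid rotation) does not contribute to the strain.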
Moreover, if $\sigma$ is a regular function, say $\sigma \in C^1(\Bar{\Omega}; \mathbb{S}^d)$, the following Green's formula holds: \begin{align}\label{eq:greensformulasigma} \int_\Omega (\nabla \cdot \sigma) \cdot v dx = \int_\Gamma \sigma \nu \cdot v da - \int_\Omega \sigma : \varepsilon(v) dx \ \text{ for all } v\in H^1(\Omega;\mathbb{R}^d). \end{align} We also need the following trace theorem from \cite[Theorem 4.12]{Adams2003}. \begin{thm}\label{thm:trace} Let $\Omega \subset \mathbb{R}^d$ be bounded with Lipschitz boundary $\Gamma$ and let $\Gamma_C \subset \Gamma$ be such that $\mathrm{meas}(\Gamma_C)>0$. Then there exists a linear continuous operator $\gamma : V \rightarrow L^q(\Gamma_C;\mathbb{R}^d)$ satisfying \begin{align*} \norm{\gamma v}_{L^q(\Gamma_C;\mathbb{R}^d)} &\leq \norm{\gamma}_{\mathcal{L}(V,L^q(\Gamma_C;\mathbb{R}^d))} \norm{v}_V \end{align*} for all $v\in V$. If $d=2$, then $2\leq q<\infty$; if $d>2$, then $2\leq q\leq p^\ast = \frac{4}{d-2}$. \end{thm} \indent Let $\mathbb{X} = \prod_{i=1}^\ell \Tilde{X}_i$ be a Cartesian product space, for some $\ell\in \mathbb{Z}_+$, where $(\Tilde{X}_i,\norm{\cdot}_{\Tilde{X}_i})$ are normed spaces for $i=1,\dots, \ell$. Then, $\mathbb{X}$ is equipped with the norm \begin{equation*} \norm{(v_1,\dots,v_\ell)}_\mathbb{X} = \sum_{i=1}^\ell \norm{v_i}_{\Tilde{X}_i} \ \ \text{ for all $v_i \in \Tilde{X}_i$, $i=1,\dots,\ell$}. \end{equation*} Equivalently, we may equip $\mathbb{X}$ with the norm \begin{equation*} \norm{(v_1,\dots,v_\ell)}_\mathbb{X}^2 = \sum_{i=1}^\ell \norm{v_i}_{\Tilde{X}_i}^2 \ \ \text{ for all $v_i \in \Tilde{X}_i$, $i=1,\dots,\ell$}. \end{equation*} \subsection{Time-dependent spaces}\label{sec:bochner_spaces} Let $(V,H,V^\ast)$ be an evolution triple. \begin{definition}\label{def:Bochner} Let $\Tilde{X}$ be a Banach space, and $T > 0$.
The space $L^p(0,T; \Tilde{X})$, $1\leq p \leq \infty$, consists of all measurable functions $v : [0, T] \rightarrow \Tilde{X}$ such that \begin{align*} \int_0^T \norm{v(t)}_{\Tilde{X}}^p dt < \infty, \end{align*} with the usual modification for $L^\infty(0, T; \Tilde{X})$. For brevity, we use the standard short-hand notation \begin{equation*} L^p_{t}\Tilde{X} = L^p(0, t; \Tilde{X}) \end{equation*} for all $t \in [0,T]$ and $1\leq p \leq \infty$. \end{definition} We also introduce the solution space \begin{align}\label{eq:spaces_VastW} \mathcal{W}^{1,2}_T= \{ w \in L^2_{T}V : \Dot{w} \in L^2_{T}V^\ast\}, \end{align} equipped with the norm $\norm{w}_{\mathcal{W}^{1,2}_T}^2 = \norm{w}_{L^2_{T}V}^2 + \norm{\dot{w}}_{L^2_{T}V^\ast}^2$. The duality pairing between $L^2_TV^\ast$ and $L^2_TV$ is denoted by \begin{equation*} \inner{\Tilde{v}}{v}_{L^2_TV^\ast \times L^2_TV}= \int_0^T \inner{\Tilde{v}(s)}{v(s)} ds \ \ \text{ for all } \Tilde{v}\in L^2_TV^\ast, v\in L^2_TV. \end{equation*} We denote the space of continuous functions defined on $[0,T]$ with values in $\Tilde{X}$ by \begin{equation*} C([0,T]; \Tilde{X}) = \{h : [0,T] \rightarrow \Tilde{X} : h \text{ is continuous and } \norm{h}_{L^\infty_T{\Tilde{X}}} < \infty\}. \end{equation*} The next proposition can be found in, e.g., \cite[Proposition 3.4.14]{Denkowski_application} or \cite[Lemma 7.3]{Roubicek2006} and will help us establish estimates. \begin{prop}\label{prop:integrationbypartsformula} Let $(V,H,V^\ast)$ be an evolution triple, and $0<T< \infty$. Then, for any $v_1,v_2 \in \mathcal{W}^{1,2}_T$ (defined in \eqref{eq:spaces_VastW}), and for all $0 \leq s \leq t \leq T$, the following integration by parts formula holds: \begin{equation*} \scalarprod{v_1(t)}{v_2(t)}_H - \scalarprod{v_1(s)}{v_2(s)}_H = \int_s^t \big[ \inner{\Dot{v}_1(\tau)}{v_2(\tau)} + \inner{\Dot{v}_2(\tau)}{v_1(\tau)} \big] d \tau. \end{equation*} In addition, the embedding $\mathcal{W}^{1,2}_T \subset C([0,T];H)$ is continuous.
\end{prop} For more on evolution spaces and other time-dependent spaces, which are also referred to as Bochner spaces, see, e.g., \cite[Chapter 7]{Roubicek2006}, \cite[Section 3.4]{Denkowski_application}, \cite[Chapter 23]{Zeidler1990}, \cite[Chapter V, Section 5]{Yosida1968}, or \cite[Section 5.9.2 and Appendix E.5]{evans}.\\ \indent Lastly, we introduce the following Bochner space needed in the applications, i.e., in Section \ref{sec:applications}: \begin{definition} Let $V$ be a real Banach space. The Bochner space $W^{1,2}(0,T;V)$ consists of all functions $u \in L^2_TV$ such that $\Dot{u}$ exists in the weak sense and belongs to $L^2_TV$. The space $W^{1,2}(0,T;V)$ is equipped with the norm \begin{align*} \norm{u}_{W^{1,2}(0,T;V)}^2 = \norm{u}_{L^2_TV}^2 + \norm{\Dot{u}}_{L^2_TV}^2. \end{align*} \end{definition} \subsection{Generalized gradients} Let $\Tilde{X}$ be a reflexive Banach space. In contact mechanics, we are often interested in contact conditions of the form $\zeta_\nu \in \partial h(u_\nu)$, where $\zeta_\nu$ represents an interface force, $u_\nu = u \cdot \nu$ is the normal component of the displacement, and $\partial h(u_\nu)$ is the Clarke subdifferential of $h$, defined below. \begin{definition}\label{def:subdiffernetial} Let $h : \Tilde{X} \rightarrow \mathbb{R}$ be a locally Lipschitz function. The generalized (Clarke) directional derivative of $h$ at $\Tilde{x} \in \Tilde{X}$ in the direction $v\in \Tilde{X}$, denoted $h^\circ(\Tilde{x};v)$, is defined by \begin{equation*} h^\circ(\Tilde{x};v) = \limsup_{\Tilde{y}\rightarrow \Tilde{x}, \ \Tilde{\epsilon} \downarrow 0} \frac{h(\Tilde{y} + \Tilde{\epsilon} v) - h(\Tilde{y})}{\Tilde{\epsilon}}.
\end{equation*} Moreover, the subdifferential in the sense of Clarke of $h$ at $\Tilde{x}$, denoted $\partial h(\Tilde{x})$, is the subset of $\Tilde{X}^\ast$ of the form \begin{equation*} \partial h(\Tilde{x}) = \{ \zeta \in \Tilde{X}^\ast : h^\circ(\Tilde{x};v) \geq \inner{\zeta}{v}_{ \Tilde{X}^\ast \times \Tilde{X}} \ \ \text{ for all } v\in \Tilde{X} \}. \end{equation*} \end{definition} \begin{remark} To say that a function $h : \Tilde{X} \rightarrow \mathbb{R}$ is locally Lipschitz on $\Tilde{X}$ means that $h$ is Lipschitz continuous in a neighborhood of every point $\Tilde{x} \in \Tilde{X}$. \end{remark} \begin{remark} We refer the reader to, e.g., \cite[p.185-187]{han} and \cite[Section 6.3]{Migorski2012} for examples related to contact mechanics. Other examples may be found in, e.g., \cite{clarke,Denkowski_theory}. \end{remark} \begin{prop}\label{prop:chainrule_subdiff} Let $\Tilde{X}$ be a Banach space and $h : \Tilde{X} \rightarrow \mathbb{R}$ be locally Lipschitz on $\Tilde{X}$. Then $(\Tilde{x} ,v) \mapsto h^\circ(\Tilde{x} ;v)$ is upper semicontinuous. \end{prop} \begin{prop} \label{prop:convex_lsc_implies_locallyLipschitz} Let $\Tilde{X}$ be a Banach space. If $h: \Tilde{X} \rightarrow \mathbb{R}\cup\{\infty\}$ is proper, convex, and lower semicontinuous, then $h$ is locally Lipschitz on the interior of the domain of $h$. \end{prop} Proposition \ref{prop:chainrule_subdiff} can be found in \cite[Proposition 5.6.6]{Denkowski_theory}, and Proposition \ref{prop:convex_lsc_implies_locallyLipschitz} can be found in \cite[Proposition 5.2.10]{Denkowski_theory}. \begin{definition}\label{def:convex_subdiffernetial} Let $h : \Tilde{X} \rightarrow \mathbb{R} \cup \{\infty\}$ be a proper and convex function.
The (generally multivalued) mapping $\partial_c h : \Tilde{X} \rightarrow 2^{\Tilde{X}^\ast}$, written \begin{equation*} \partial_c h(\Tilde{x} ) = \{ x^\ast \in \Tilde{X}^\ast : h(v) - h(\Tilde{x} ) \geq \inner{x^\ast}{v-\Tilde{x} }_{ \Tilde{X}^\ast \times \Tilde{X}} \ \text{ for all } v\in \Tilde{X} \} \end{equation*} is called the convex subdifferential of $h$ at $\Tilde{x} \in \Tilde{X}$. \end{definition} \begin{prop} Let $h:\Tilde{X} \rightarrow \mathbb{R}$ be locally Lipschitz on $\Tilde{X}$. If $h$ is convex, then $\partial h$ coincides with $\partial_c h$. \end{prop} The proof of the above proposition is found in \cite[Proposition 2.2.7]{clarke}. Lastly, to show that $(w,\alpha)$ is indeed a solution to Problem \ref{prob:fullproblem} (see Section \ref{sec:problem_and_mainresult}), we require the following result found in \cite[Lemma 3.43]{Migorski2012}. \begin{lem}\label{lemma:phicontinuous} Let $\Tilde{X}$ and $\Tilde{Y}$ be two Banach spaces and $h : \Tilde{X} \times \Tilde{Y} \rightarrow \mathbb{R}$ be such that \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\arabic*),itemindent=1em,leftmargin=!] \item $h(\cdot,\Tilde{y})$ is continuous on $\Tilde{X}$ for all $\Tilde{y} \in \Tilde{Y}$. \label{list:phicont1} \item $h(\Tilde{x},\cdot)$ is locally Lipschitz on $\Tilde{Y}$ for all $\Tilde{x} \in \Tilde{X}$. \label{list:phicont2} \item There exists $c > 0$ such that for all $\zeta \in \partial h(\Tilde{x},\Tilde{y})$ we have \begin{align*} \norm{\zeta}_{\Tilde{Y}^\ast} \leq c (1 + \norm{\Tilde{x}}_{\Tilde{X}} + \norm{\Tilde{y}}_{\Tilde{Y}}), \end{align*} where $\partial h$ denotes the generalized gradient of $h(\Tilde{x},\cdot)$. \label{list:phicont3} \end{enumerate} Then $h$ is continuous on $ \Tilde{X} \times \Tilde{Y}$. 
\end{lem} To read more on generalized directional derivatives, subdifferentials, and nonsmooth analysis, see, e.g., \cite[Chapter 2]{clarke}, \cite[Chapter 5]{Denkowski_theory}, and \cite[Chapter 1-3]{Hu_application}. \section{Problem statement and main result}\label{sec:problem_and_mainresult} In this section, we first introduce the problem and then present the main result. \subsection{Problem statement} Let $(V,H,V^\ast)$ be an evolution triple, and let $U$, $X$, $Y$, $Z$ be real separable reflexive Banach spaces, with the other function spaces defined in Section \ref{sec:bochner_spaces}. We only seek a solution of \eqref{eq:prob} in the sense of Definition \ref{def:sols}. We are therefore interested in the following evolutionary differential variational-hemivariational inequality: \begin{prob}\label{prob:fullproblem} Find $w\in \mathcal{W}^{1,2}_T$ and $\alpha \in C([0,T];Y)$ such that \begin{subequations} \begin{align}\label{eq:full_nonlinear_problem_1} &\alpha(t) = \alpha_0 + \int_0^t \mathcal{G}(s,\alpha(s),Mw(s))ds,\\ &\inner{\Dot{w}(t) + A(t, w(t))- f(t) + \mathcal{R}w(t) }{v- w(t) } + \varphi(t, \alpha(t),\mathcal{S}_\varphi w(t), Mw(t), Kv) \label{eq:full_nonlinear_problem_2} \\ &- \varphi(t, \alpha(t),\mathcal{S}_\varphi w(t), Mw(t), Kw(t)) + j^\circ(t,\alpha(t), \mathcal{S}_jw(t),Nw(t); Nv -Nw(t)) \geq 0 \notag \end{align} for all $v\in V$, a.e. $t\in (0,T)$ with \begin{equation}\label{eq:intialdata_originalprob} w(0) = w_0. \end{equation} \end{subequations} \end{prob} We require the following assumptions on the operators and data: \\ $\underline{\H{A}}$: \label{assumptionA} $A : (0,T) \times V \rightarrow V^\ast \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $A(\cdot, v)$ is measurable on $(0,T)$ for all $v \in V$. \label{list:A_measurable} \item $ A(t,\cdot) \text{ is demicontinuous on } V \text{ for a.e.
} t \in (0,T)$, i.e., if $v^n \rightarrow v$ strongly in $V$, then $A(t,v^n) \rightarrow A(t,v)$ weakly in $V^\ast$ as $n \rightarrow \infty$ for a.e. $t\in (0,T)$. \label{list:A_demicont} \item $\norm{A(t,v)}_{V^\ast} \leq a_0(t) + a_1 \norm{v}_V$ for all $v\in V$, a.e. $t\in (0,T)$ with $a_0 \in L^2(0,T)$, $a_0 \geq 0, \ a_1 \geq 0$. \label{list:A_bounded} \item There is an $m_A >0$ such that $\inner{A(t,v_1) - A(t,v_2)}{v_1 - v_2} \geq m_A \norm{v_1 - v_2 }_V^2$ for all $v_i\in V$, $i=1,2$, a.e. $t\in (0,T)$. \label{list:A_maximalmonotone} \end{enumerate} $\underline{\H{\varphi}}$: \label{assumptionphi} $ \varphi : (0,T) \times Y \times X \times U \times Z \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\varphi(\cdot,y,z,\Tilde{w},\Tilde{v}) \text{ is measurable on } (0,T) \text{ for all } y\in Y, \ z\in X, \ \Tilde{w} \in U, \ \Tilde{v}\in Z$. \label{list:phi_measurable} \item $ \varphi(t,\cdot,\cdot,\cdot,\Tilde{v}) \text{ is continuous on } Y \times X \times U \text{ for all } \Tilde{v}\in Z, \text{ a.e. } t \in (0,T)$. \label{list:phi_cont} \item $\varphi(t,y,z,\Tilde{w},\cdot) \text{ is convex and lower semicontinuous on } Z \text{ for all}$ $y\in Y$, $z\in X$, $\Tilde{w}\in U$, a.e. $t\in (0,T)$. \label{list:phi_convex_lsc} \item $\norm{\partial_c \varphi (t,y,z,\Tilde{w},\Tilde{v})}_{Z^\ast}\leq c_{0\varphi} (t) + c_{1\varphi} \norm{y}_Y + c_{2\varphi} \norm{z}_X + c_{3\varphi} \norm{\Tilde{w}}_U + c_{4\varphi} \norm{\Tilde{v}}_Z$ $\text{ for all }$ $y\in Y, \ z\in X, \ \Tilde{w}\in U, \ \Tilde{v}\in Z, \text{ a.e.
} t\in (0,T)$, $\text{ with } c_{0\varphi} \in L^2(0,T),\text{ and } c_{0\varphi}, c_{1\varphi}, c_{2\varphi}, c_{3\varphi}, c_{4\varphi} \geq 0.$ \label{list:phi_bounded} \item $\text{There are $\beta_{i\varphi} \geq 0$ for $i=1,\dots,7$ such that }$ \begin{align*} & \varphi (t,y_1,z_1,\Tilde{w}_1, \Tilde{v}_2) - \varphi (t, y_1,z_1,\Tilde{w}_1, \Tilde{v}_1) +\varphi (t,y_2,z_2,\Tilde{w}_2, \Tilde{v}_1) - \varphi (t, y_2, z_2,\Tilde{w}_2, \Tilde{v}_2) \\ &\leq \beta_{1\varphi} \norm{\Tilde{w}_1}_U \norm{y_1 - y_2}_Y\norm{\Tilde{v}_1 - \Tilde{v}_2 }_Z +\beta_{2\varphi}\norm{y_1 - y_2}_Y\norm{\Tilde{v}_1 - \Tilde{v}_2 }_Z \\ &+ \beta_{3\varphi} \norm{z_1 - z_2}_X\norm{\Tilde{v}_1 - \Tilde{v}_2 }_Z + \beta_{4\varphi} \norm{\Tilde{w}_1 - \Tilde{w}_2}_U\norm{\Tilde{v}_1 - \Tilde{v}_2 }_Z \\ & + \beta_{5\varphi} \norm{y_2}_Y\norm{\Tilde{w}_1 - \Tilde{w}_2}_U\norm{\Tilde{v}_1 - \Tilde{v}_2}_Z + \beta_{6\varphi} \norm{\Tilde{w}_1}_U\norm{z_1-z_2}_X\norm{\Tilde{v}_1 - \Tilde{v}_2}_Z \\ &+ \beta_{7\varphi} \norm{y_1}_Y\norm{z_1-z_2}_X \norm{\Tilde{v}_1 - \Tilde{v}_2}_Z \end{align*} $\text{for all } y_i \in Y$, $z_i \in X$, $\Tilde{w}_i \in U$, $\Tilde{v}_i \in Z$, $i=1,2$, a.e. $t\in (0,T)$. \label{list:phi_estimate} \end{enumerate} $\underline{\H{j}}$: \label{assumptionj} $j : (0,T) \times Y \times X \times X \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $j(\cdot, y,z, \Tilde{v}) \text{ is measurable on } (0,T) \text{ for all } y\in Y, z,\Tilde{v} \in X$. \label{list:j_measurable} \item $j(t,y,z,\cdot) \text{ is locally Lipschitz on } X \text{ for all } y\in Y, z\in X, \text{ a.e. } t \in (0,T)$. 
\label{list:j_locallyLipschitz} \item For $\{y_n\}_{n\geq 1} \subset Y$ such that $y_n \rightarrow y$ strongly in $Y$, $\{z_n\}_{n\geq 1} \subset X$ such that $z_n \rightarrow z$ strongly in $X$, $\{\Tilde{v}_n\}_{n\geq 1} \subset X$ satisfying $\Tilde{v}_n \rightarrow \Tilde{v}$ strongly in $X$, and $\{\Tilde{u}_n\}_{n\geq 1} \subset X$ such that $\Tilde{u}_n \rightarrow \Tilde{u}$ strongly in $X$, when $n\rightarrow \infty$, we have \begin{equation*} \limsup_{n\rightarrow \infty} j^\circ(t, y_n,z_n, \Tilde{v}_n;\Tilde{u}_n) \leq j^\circ(t, y, z, \Tilde{v};\Tilde{u}) \end{equation*} for a.e. $t\in (0,T)$. \label{list:j_convergence} \item $ \norm{\partial j (t,y,z,\Tilde{v})}_{X^\ast}\leq c_{0j} (t) + c_{1j}\norm{y}_Y + c_{2j}\norm{z}_X + c_{3j} \norm{\Tilde{v}}_X \text{ for all } y\in Y, z,\Tilde{v}\in X$, $\text{ a.e. } t\in (0,T)$ with $c_{0j} \in L^2(0,T)$ and $c_{0j}, c_{1j} , c_{2j}, c_{3j} \geq 0$. \label{list:j_bounded} \item $\text{There are $m_j,\Bar{m}_j \geq 0$ such that }$ $j^\circ(t,y_1,z_1,\Tilde{v}_1; \Tilde{v}_2 - \Tilde{v}_1) + j^\circ(t,y_2,z_2,\Tilde{v}_2; \Tilde{v}_1 - \Tilde{v}_2) \leq m_j \norm{\Tilde{v}_1 - \Tilde{v}_2 }_X^2 + \Bar{m}_j (\norm{y_1 - y_2 }_Y + \norm{z_1 - z_2 }_X)\norm{\Tilde{v}_1 - \Tilde{v}_2 }_X$ $\text{for all } y_i\in Y, \ z_i,\Tilde{v}_i \in X,\ i=1,2, \text{ a.e. } t\in (0,T)$. \label{list:j_estimate} \end{enumerate} We note that hypothesis \hyperref[assumptionj]{$\H{j}$}\ref{list:j_estimate} is equivalent to \begin{align}\label{eq:jeq} &\inner{z^\ast_1- z^\ast_2}{\Tilde{v}_1 - \Tilde{v}_2}_{X^\ast \times X}\\ &\geq - m_j \norm{\Tilde{v}_1-\Tilde{v}_2}_X^2 - \bar{m}_j (\norm{y_1 - y_2}_Y + \norm{z_1 - z_2}_X) \norm{\Tilde{v}_1-\Tilde{v}_2}_X \notag \end{align} for all $z^\ast_i \in \partial j(t,y_i,z_i,\Tilde{v}_i)$, $y_i \in Y$, $z_i,\Tilde{v}_i \in X$, $i=1,2$, a.e. $t\in (0,T)$ with $m_j \geq 0$, $\bar{m}_j \geq 0$.
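For the reader's convenience, one direction of this equivalence can be sketched directly from Definition \ref{def:subdiffernetial}: since $z^\ast_i \in \partial j(t,y_i,z_i,\Tilde{v}_i)$ gives $\inner{z^\ast_i}{v}_{X^\ast \times X} \leq j^\circ(t,y_i,z_i,\Tilde{v}_i;v)$ for all $v \in X$, we obtain

```latex
\begin{aligned}
\inner{z^\ast_1 - z^\ast_2}{\Tilde{v}_1 - \Tilde{v}_2}_{X^\ast \times X}
&= -\inner{z^\ast_1}{\Tilde{v}_2 - \Tilde{v}_1}_{X^\ast \times X}
   -\inner{z^\ast_2}{\Tilde{v}_1 - \Tilde{v}_2}_{X^\ast \times X} \\
&\geq -j^\circ(t,y_1,z_1,\Tilde{v}_1;\Tilde{v}_2 - \Tilde{v}_1)
      -j^\circ(t,y_2,z_2,\Tilde{v}_2;\Tilde{v}_1 - \Tilde{v}_2) \\
&\geq -m_j \norm{\Tilde{v}_1 - \Tilde{v}_2}_X^2
      -\Bar{m}_j \big(\norm{y_1 - y_2}_Y + \norm{z_1 - z_2}_X\big)\norm{\Tilde{v}_1 - \Tilde{v}_2}_X ,
\end{aligned}
```

where the last step uses \hyperref[assumptionj]{$\H{j}$}\ref{list:j_estimate}; the converse direction follows as in the cited proof.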
With small modifications, the equivalence follows by the proof in \cite[Lemma 7, p.124]{sofonea2017}.\\ \\ $\underline{\H{\mathcal{R} }}$: \label{assumptionR} $ \mathcal{R} :L^2_TV \rightarrow L^2_TV^\ast$ is such that \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\mathcal{R} \text{ is a history-dependent operator, i.e., }$ \begin{align*} \norm{ \mathcal{R} v_1(t) - \mathcal{R} v_2 (t)}_{V^\ast} \leq c_{\mathcal{R}} \int_0^t \norm{v_1(s)-v_2(s)}_V ds \end{align*} $\text{ for all }v_i \in L^2_TV$, $i=1,2$, a.e. $t\in (0,T) \text{ with } c_{\mathcal{R} } >0$. \label{list:S_hist} \item $\mathcal{R}0$ belongs to a bounded subset of $L^2_TV^\ast$. \label{list:R0} \end{enumerate} $\underline{\H{\mathcal{S}_\varphi }}$: \label{assumptionS1} $ \mathcal{S}_\varphi :L^2_TV \rightarrow L^2_TX$ is such that \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\mathcal{S}_\varphi \text{ is a history-dependent operator, i.e., }$ \begin{align*} \norm{ \mathcal{S}_\varphi v_1(t) - \mathcal{S}_\varphi v_2 (t)}_X \leq c_{\mathcal{S}_\varphi } \int_0^t \norm{v_1(s)-v_2(s)}_V ds \end{align*} $\text{ for all }v_i \in L^2_TV$, $i=1,2$, a.e. $t\in (0,T) \text{ with } c_{\mathcal{S}_\varphi } >0$. \label{list:S_hist11} \item $\mathcal{S}_\varphi0$ belongs to a bounded subset of $L^2_TX$. \label{list:S011} \end{enumerate} $\underline{\H{\mathcal{S}_j}}$: \label{assumptionS2} $ \mathcal{S}_j :L^2_TV \rightarrow L^2_TX$ is such that \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\mathcal{S}_j \text{ is a history-dependent operator, i.e., }$ \begin{align*} \norm{ \mathcal{S}_j v_1(t) - \mathcal{S}_j v_2 (t)}_X \leq c_ {\mathcal{S}_j} \int_0^t \norm{v_1(s)-v_2(s)}_V ds \end{align*} $\text{ for all }v_i \in L^2_TV$, $i=1,2$, a.e. 
$t\in (0,T) \text{ with } c_{\mathcal{S}_j} >0$. \label{list:S_hist22} \item $\mathcal{S}_j0$ belongs to a bounded subset of $L^2_TX$. \label{list:S022} \end{enumerate} $\underline{\H{\mathcal{G}}}$: \label{assumptionG} $ \mathcal{G} : (0,T) \times Y \times U \rightarrow Y \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\mathcal{G}(\cdot,\alpha, \Tilde{v}) \text{ is measurable on } (0,T) \text{ for all } \alpha \in Y, \Tilde{v} \in U$. \label{list:G_measurable} \item There exists an $L_\mathcal{G}>0$ such that \begin{align*} \norm{ \mathcal{G}(t,\alpha_1(t),\Tilde{v}_1(t)) - \mathcal{G}(t,\alpha_2(t), \Tilde{v}_2(t))}_Y \leq L_\mathcal{G} \Big(\norm{\alpha_1(t) - \alpha_2(t) }_Y + \norm{\Tilde{v}_1(t) - \Tilde{v}_2(t) }_U \Big) \end{align*} $\text{for all } \alpha_i \in Y$, $\Tilde{v}_i \in U$, $i=1,2$, a.e. $t\in (0,T).$ \label{list:G_Lipschitz} \item $\norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY} < \infty$. \label{list:G_00} \end{enumerate} $\underline{\H{MNK}}$: \label{assumptionMNK} $M \in \mathcal{L}(V,U), \ \ N \in \mathcal{L}(V,X), \ \ K \in \mathcal{L}(V,Z)$. \\ \\ \indent We also assume the following regularity on the source term and initial data: \begin{subequations}\label{eq:initaldata} \begin{align}\label{eq:initaldata1} f \in L^2_TV^\ast,& \ \ \ \ w_0 \in V,\\ \alpha_0 &\in Y. \label{eq:initaldata2} \end{align} \end{subequations} Lastly, we require the following smallness-condition: \begin{equation}\label{eq:assumptionbound_mA_max} m_A > m_j\norm{N}^2 + \sqrt{2}(\beta_{4\varphi}+\beta_{5\varphi}\norm{\alpha_0}_{Y})\norm{K}\norm{M}. \end{equation} \begin{remark} Similar assumptions can be found in, e.g., \cite{Patrulescu2017,Migorski2022,Migorski2019,shillor2004,sofonea2017}. The same type of condition as \hyperref[assumptionj]{$\H{j}$}\ref{list:j_convergence} is found in, e.g., \cite{Migorski2022_2,Migorski2023}. 
If $j^\circ$ is independent of $\alpha$ and $\mathcal{S}_jw$ in \eqref{eq:w1}, we may relax assumption \hyperref[assumptionj]{$\H{j}$}\ref{list:j_convergence}; then \hyperref[assumptionj]{$\H{j}$}\ref{list:j_locallyLipschitz} and Proposition \ref{prop:chainrule_subdiff} are enough. \end{remark} \begin{remark}\label{remark:mubounded} If $\beta_{1\varphi} =\beta_{4\varphi} =\beta_{5\varphi} = \beta_{6\varphi}= \beta_{7\varphi} = 0$ in \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate}, then the friction coefficient $\mu$ is bounded. Consequently, Problem \ref{prob:fullproblem} reduces to the one found in \cite{Migorski2022}. In the quasi-static setting, taking $\beta_{1\varphi} = \beta_{5\varphi} = \beta_{6\varphi}= \beta_{7\varphi} = 0$, the problem is covered in \cite{Patrulescu2017}. Lastly, taking $\beta_{3\varphi} = \beta_{6\varphi}= \beta_{7\varphi} = 0$ reduces Problem \ref{prob:fullproblem} to the one found in \cite{Pipping2015,Pipping2019} for the first-order approximation of \eqref{eq:aginglaw} and \eqref{eq:regularized} introduced in Section \ref{sec:rateandstate}. \end{remark} We make a brief remark on the assumptions in Appendix \ref{appendix:comments_app}. \subsection{Main result} We now state the main result, Theorem \ref{thm:mainresult}; its first part is an existence and uniqueness result, and the second part shows that the flow map depends continuously on the initial data. The proof of Theorem \ref{thm:mainresult} is deferred to Section \ref{sec:proof_main_result}, after the preparation in Section \ref{sec:preliminary}.
\begin{thm}\label{thm:mainresult} Assume that \hyperref[assumptionA]{$\H{A}$}, \hyperref[assumptionphi]{$\H{\varphi}$}, \hyperref[assumptionj]{$\H{j}$}, \hyperref[assumptionR]{$\H{\mathcal{R}}$}, \hyperref[assumptionS1]{$\H{\mathcal{S}_\varphi}$}, \hyperref[assumptionS2]{$\H{\mathcal{S}_j}$}, \hyperref[assumptionG]{$\H{\mathcal{G}}$}, \hyperref[assumptionMNK]{$\H{MNK}$}, and \eqref{eq:initaldata}-\eqref{eq:assumptionbound_mA_max} hold. \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\alph*),itemindent=1em,leftmargin=1em] \item Then there exists a $T>0$ satisfying \begin{subequations}\label{eq:times_T} \begin{align} T= T(\norm{(f,w_0,\alpha_0)}_{L^2_T V^\ast \times V \times Y}) \end{align} and \begin{align} T(a) > T(b) \quad \text{if }\quad a< b \end{align} \end{subequations} such that $w\in \mathcal{W}^{1,2}_T \subset C([0,T];H)$ and $\alpha \in C([0,T];Y)$ form the unique solution to Problem \ref{prob:fullproblem}. \item Moreover, there exists a neighborhood of $(w_0 , \alpha_0)$ such that the flow map $F : V \times Y \rightarrow L^2_{T/2}V \times C([0,T/2];Y)$ defined by $(w_0 , \alpha_0) \mapsto (w,\alpha)$ is continuous. \label{list:continuous_dependence_on_inital_data} \end{enumerate} \end{thm} \begin{remark} If $\beta_{1\varphi} =\beta_{4\varphi} =\beta_{5\varphi} = \beta_{6\varphi}= \beta_{7\varphi} = 0$ in \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate}, we also obtain a global time of existence, i.e., the existence of a solution holds for any finite time $T>0$. \end{remark} \begin{remark} The theorem can easily be extended to include more than three history-dependent operators without needing any additional assumptions other than the ones already imposed on $\mathcal{R}$, $\mathcal{S}_\varphi$, and $\mathcal{S}_j$, i.e., \hyperref[assumptionR]{$\H{\mathcal{R}}$}, \hyperref[assumptionS1]{$\H{\mathcal{S}_\varphi}$}, and \hyperref[assumptionS2]{$\H{\mathcal{S}_j}$}, respectively.
\end{remark} \subsection{Strategy of the proof of Theorem \ref{thm:mainresult}} The proof of the theorem is divided into six steps. In the first step, we introduce an auxiliary problem to Problem \ref{prob:fullproblem}, calling this Problem \ref{prob:first_step}. Specifically, we fix five of the functions in \eqref{eq:full_nonlinear_problem_2} and leave \eqref{eq:full_nonlinear_problem_1} intact. We recast the auxiliary problem as a differential inclusion (introduced in Section \ref{sec:preliminary}) and use existing results to prove that Problem \ref{prob:first_step} has a unique solution (Steps \ref{linearization}-\ref{estimate1}). Next, we define an iterative scheme for Problem \ref{prob:fullproblem} using Problem \ref{prob:first_step}. This iterative scheme decouples \eqref{eq:full_nonlinear_problem_1} and Problem \ref{prob:first_step} at each step. Then, we study the difference between two successive iterates and show that the iterates form Cauchy sequences. We then pass to the limit to show that the iterative scheme converges to a solution of Problem \ref{prob:fullproblem} (Step \ref{convergence}). Finally, we show that the flow map depends continuously on the initial data (Step \ref{continuous}). \section{Preliminary result}\label{sec:preliminary} \noindent Before proving Theorem \ref{thm:mainresult}, we present an existence and uniqueness result for a differential inclusion problem; see, e.g., \cite{Aubin1984}. The forthcoming result will be used to prove existence of a solution to an auxiliary problem of \eqref{eq:full_nonlinear_problem_2}-\eqref{eq:intialdata_originalprob} in Problem \ref{prob:fullproblem}. To utilize this result, we need to introduce a differential inclusion which we relate to the auxiliary problem of \eqref{eq:full_nonlinear_problem_2}-\eqref{eq:intialdata_originalprob}. This will be made clear in Steps \ref{linearization}-\ref{exunique} of the proof of Theorem \ref{thm:mainresult}. \\ \indent We begin by introducing the inclusion problem.
\begin{prob}\label{prob:inclusion} Find $w\in \mathcal{W}^{1,2}_T$ such that \begin{align*} \Dot{w}(t) &+ A(t,w(t)) + \partial \psi (t,w(t)) \ni f(t) \ \text{ for a.e. } \ t\in (0,T),\\ w(0) &= w_0. \end{align*} \end{prob} For clarity on how to work with the preceding problem, we include the following definition: \begin{definition}\label{def:def_inclusion} A function $w\in \mathcal{W}^{1,2}_T$ is called a solution to Problem \ref{prob:inclusion} if there exists $w^\ast \in L^2_TV^\ast$ such that \begin{align*} &\Dot{w}(t) + A(t,w(t)) + w^\ast(t) = f(t) \ \ \text{ for a.e. } \ t\in (0,T),\\ &w^\ast(t) \in \partial \psi (t,w(t)) \end{align*} for a.e. $t\in (0,T)$ with \begin{equation*} w(0) = w_0. \end{equation*} \end{definition} In the preliminary existence and uniqueness result, we consider the following assumptions: \\ $\underline{\H{\psi}}$: \label{assumptionpsi} $\psi : (0,T) \times V \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\psi(\cdot, v) $ is measurable on $(0,T)$ for all $v \in V$. \label{list:psi_measurable} \item $ \psi(t,\cdot) \text{ is locally Lipschitz on } V \text{ for a.e. } t \in (0,T)$. \label{list:psi_locallyLipschitz} \item $\norm{\partial \psi (t,v)}_{V^\ast}\leq c_0 (t) + c_1 \norm{v}_V \text{ for all } v\in V$, $\text{ a.e. } t\in (0,T) \text{ with } c_0 \in L^2(0,T)$, $c_0 \geq 0, \ c_1 \geq 0$.\label{list:psi_bounded} \item $\text{There is a $m_\psi \geq 0$ such that }$ $ \inner{z_1 - z_2}{v_1 - v_2} \geq -m_\psi \norm{v_1 - v_2 }_V^2$ $\text{ for all } z_i \in \partial \psi(t,v_i)$, $z_i \in V^\ast, \ v_i\in V, \ i=1,2, \text{ a.e. } t\in (0,T)$. \label{list:psi_maximalmonotone} \end{enumerate} We further assume that the operator $A:(0,T) \times V \rightarrow V^\ast$ satisfies \hyperref[assumptionA]{$\H{A}$}, and the source term $f$ and the initial data $w_0$ satisfy \eqref{eq:initaldata1}. 
Additionally, we assume that the following smallness-condition holds: \begin{equation}\label{eq:assumptionbound_on_mA} m_A >m_\psi. \end{equation} \begin{thm}\label{thm:inclusionsol} Assume that \hyperref[assumptionA]{$\H{A}$}, \hyperref[assumptionpsi]{$\H{\psi}$}, \eqref{eq:initaldata1}, and \eqref{eq:assumptionbound_on_mA} hold. Then Problem \ref{prob:inclusion} has a unique solution $w\in \mathcal{W}^{1,2}_T$ in the sense of Definition \ref{def:def_inclusion} for any $T>0$. \end{thm} The theorem was proved in \cite[Theorem 3]{Migorski2019}. This result will be used in Step \ref{exunique} of the proof of Theorem \ref{thm:mainresult}. \section{Proof of Theorem \ref{thm:mainresult}}\label{sec:proof_main_result} With the preparation in Sections \ref{sec:func_nonsmooth}-\ref{sec:preliminary}, we proceed to the proof of Theorem \ref{thm:mainresult}. For the convenience of the reader, the proof is established in several steps, and some of the proofs have been moved to the appendix. We recall that the function spaces are defined in Section \ref{sec:bochner_spaces}. \\ \indent \textbf{Step \ref{linearization}}\rtask{linearization} \textit{(Auxiliary problem to the evolutionary hemivariational-variational inequality \eqref{eq:full_nonlinear_problem_2}-\eqref{eq:intialdata_originalprob})}. Let $(\alpha,\xi, \eta, g,\chi) \in C([0,T];Y) \times L^2_TV^\ast \times L^2_TX \times L^2_TV \times L^2_TX$ be given; we then define an auxiliary problem to \eqref{eq:full_nonlinear_problem_2}-\eqref{eq:intialdata_originalprob} in Problem \ref{prob:fullproblem}.
\begin{prob}\label{prob:first_step} Find $w_{\alpha\xi\eta g\chi} \in \mathcal{W}^{1,2}_T$ corresponding to $(\alpha,\xi,\eta, g,\chi) \in C([0,T];Y) \times L^2_TV^\ast \times L^2_TX \times L^2_TV\times L^2_TX$ such that \begin{align*} &\inner{\Dot{w}_{\alpha\xi\eta g\chi}(t) + A(t, w_{\alpha\xi\eta g\chi}(t)) - f(t) + \xi(t) }{v- w_{\alpha\xi\eta g\chi}(t) } + \varphi(t,\alpha(t), \eta(t), Mg(t), Kv) \\ &- \varphi(t,\alpha(t), \eta(t), Mg(t), Kw_{\alpha\xi\eta g\chi}(t)) +j^\circ(t,\alpha(t), \chi(t), Nw_{\alpha\xi\eta g\chi}(t); Nv -Nw_{\alpha\xi\eta g\chi}(t) ) \geq 0 \end{align*} for all $v\in V$, a.e. $t\in (0,T)$ with \begin{equation*} w_{\alpha\xi\eta g\chi} (0) = w_0. \end{equation*} \end{prob} \begin{remark} Comparing Problem \ref{prob:first_step} with \eqref{eq:full_nonlinear_problem_2}, we see that the auxiliary problem treats $\xi= \mathcal{R}w$, $\alpha$ (still denoted by $\alpha$), $\eta = \mathcal{S}_\varphi w$, $g=w$, and $\chi = \mathcal{S}_j w$ as known, in contrast to \eqref{eq:full_nonlinear_problem_2}. The subscripts on $w$ emphasize that a solution $w_{\alpha\xi\eta g\chi}$ to Problem \ref{prob:first_step} corresponds to $(\alpha,\xi,\eta, g,\chi) \in C([0,T];Y) \times L^2_TV^\ast \times L^2_TX \times L^2_TV \times L^2_TX$. This also helps to distinguish between a solution to Problem \ref{prob:fullproblem} and a solution to Problem \ref{prob:first_step}. \end{remark} \textbf{Step \ref{exunique}}\rtask{exunique} \textit{(Existence of a solution to Problem \ref{prob:first_step})}. Let $(\alpha,\xi,\eta, g,\chi) \in C([0,T];Y) \times L^2_TV^\ast \times L^2_TX \times L^2_TV\times L^2_TX$ be given. We wish to utilize Theorem \ref{thm:inclusionsol} in order to prove that Problem \ref{prob:first_step} has a solution.
We therefore define the functional $\psi_{\alpha\xi\eta g\chi} : (0,T) \times V \rightarrow \mathbb{R}$ by \begin{align}\label{eq:psixietag} \psi_{\alpha\xi\eta g\chi}(t,v) &= \inner{\xi(t)}{v} + \varphi(t,\alpha(t),\eta(t),Mg(t),Kv) + j(t,\alpha(t),\chi(t),Nv) \end{align} for all $v\in V$, a.e. $t\in (0,T)$. Verification of the hypotheses of Theorem \ref{thm:inclusionsol} follows from the same approach as the first part of the proof in \cite[Theorem 5]{Migorski2019}. We investigate the assumption \hyperref[assumptionpsi]{$\H{\psi}$} and the smallness-condition \eqref{eq:assumptionbound_on_mA}, as there are some modifications in comparison to \cite[Theorem 5]{Migorski2019}. Keeping Proposition \ref{prop:convex_lsc_implies_locallyLipschitz} in mind, we only comment on the changes and refer the reader to \cite[Theorem 5]{Migorski2019} for a detailed verification. Using \eqref{eq:jeq}, we find that \hyperref[assumptionpsi]{$\H{\psi}$} holds with $c_0(t) = \norm{\xi(t)}_{V^\ast} + c_{0j} (t)\norm{N} +c_{2j} \norm{N} \norm{\chi(t)}_X + c_{0\varphi} (t)\norm{K} + c_{2\varphi}\norm{K} \norm{\eta(t)}_X + (c_{1j}\norm{N} + c_{1\varphi} \norm{K}) \norm{\alpha(t)}_Y + c_{3\varphi}\norm{K}\norm{M} \norm{g(t)}_V$, $c_1 = c_{3j}\norm{N}^2+ c_{4\varphi}\norm{K}^2$, and $m_\psi = m_j \norm{N}^2$. This, together with the smallness-condition \eqref{eq:assumptionbound_mA_max}, leads to \eqref{eq:assumptionbound_on_mA}. Thus, we conclude by Theorem \ref{thm:inclusionsol} that there exists a solution $w_{\alpha\xi\eta g\chi} \in \mathcal{W}^{1,2}_T$ of Problem \ref{prob:inclusion} with $\psi_{\alpha\xi\eta g\chi}$ defined in \eqref{eq:psixietag}. It remains to show that the existence of a solution to Problem \ref{prob:inclusion} implies the existence of a solution to Problem \ref{prob:first_step}.
This is a consequence of Definitions \ref{def:subdiffernetial} and \ref{def:convex_subdiffernetial}, and basic results on generalized gradients; see, e.g., \cite[Theorem 3.7, Proposition 3.10-3.12]{han}, where these properties are summarized, and \cite[Lemma 7, p.124]{sofonea2017}. A more detailed approach to this part can be found in, e.g., \cite[Section 6]{han} or \cite[p.190-192]{sofonea2017}. \textbf{Step \ref{unique}}\rtask{unique} \textit{(Uniqueness of a solution to Problem \ref{prob:first_step})}. Uniqueness is immediate from the proof in \cite[Theorem 98]{sofonea2017} with $\alpha_j = m_j\norm{N}^2$ and the smallness-condition \eqref{eq:assumptionbound_mA_max}, i.e., $m_A > m_j\norm{N}^2$. \textbf{Step \ref{estimate1}}\rtask{estimate1} \textit{(Estimate on the solution to Problem \ref{prob:first_step}, that is, $w_{\alpha\xi\eta g\chi} \in \mathcal{W}^{1,2}_T \subset C([0,T];H)$)}. We now find an estimate on the solution to Problem \ref{prob:first_step}, which will come in handy later: \begin{prop}\label{prop:estimatewfirst} Under the assumptions of Theorem \ref{thm:mainresult}, for given $(\alpha,\xi,\eta,g,\chi) \in$ \\$C([0,T];Y) \times L^2_TV^\ast \times L^2_TX\times L^2_TV \times L^2_TX$, let $w_{\alpha\xi\eta g\chi}$ be a solution to Problem \ref{prob:first_step}.
Then, there exists a constant $c>0$ independent of $w_{\alpha\xi\eta g\chi}$ such that \begin{align}\label{eq:estwwww} &\norm{w_{\alpha\xi\eta g\chi}}_{L^\infty_TH}^2 + \norm{w_{\alpha\xi\eta g\chi}}_{\mathcal{W}^{1,2}_T}^2 \\ \notag &\leq c(1 +\norm{w_0}_V^2+ \norm{f}_{L^2_TV^\ast}^2 + \norm{\xi}_{L^2_TV^\ast}^2 + \norm{\alpha}_{L^\infty_TY}^2 \\ \notag & + \norm{\eta}_{L^2_TX}^2 + \norm{g}_{L^2_TV}^2 + \norm{\chi}_{L^2_TX}^2 + T^k\norm{\alpha}_{L^\infty_TY}^2 \norm{g}_{L^2_TV}^2 ) \\ \notag &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M} }{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha}_{L^\infty_TY} \norm{g}_{L^2_TV}^2 \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M} }{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha}_{L^\infty_TY} \norm{w_{\alpha\xi\eta g\chi}}_{L^2_TV}^2 \notag \end{align} for some $k\geq 1/2$. \end{prop} The proof of Proposition \ref{prop:estimatewfirst} is postponed to Appendix \ref{appendix:proof_part2}. \textbf{Step \ref{convergence}}\rtask{convergence} \textit{(Scheme for the approximated solution to Problem \ref{prob:fullproblem})}. For $n\in \mathbb{Z}_+$, let $\alpha^{n-1} \in C([0,T];Y)$, and $w^{n-1} \in \mathcal{W}^{1,2}_T$ be known. 
We construct the approximate solutions $\{(w^n,\alpha^n)\}_{n\geq 1} \subset \mathcal{W}^{1,2}_T \times C([0,T];Y)$ to Problem \ref{prob:fullproblem}, where $(w^{n},\alpha^n)$ is a solution of the scheme: \begin{subequations} \begin{align}\label{eq:algorithm11} &\inner{\Dot{w}^{n}(t) + A(t, w^{n}(t)) - f(t) + \mathcal{R}w^{n-1}(t) }{v- w^{n}(t) } \\ \notag &\hspace{1.1cm}+ \varphi(t,\alpha^{n-1}(t), \mathcal{S}_\varphi w^{n-1}(t), Mw^{n-1}(t), Kv) \\ \notag &\hspace{1.1cm}- \varphi(t,\alpha^{n-1}(t), \mathcal{S}_\varphi w^{n-1}(t), Mw^{n-1}(t), Kw^{n}(t)) \\ &\hspace{1.1cm}+ j^\circ(t,\alpha^{n-1}(t), \mathcal{S}_j w^{n-1}(t), Nw^{n}(t); Nv -Nw^{n}(t) ) \geq 0 \notag \end{align} for all $v\in V$, a.e. $t \in (0,T)$, and \begin{equation}\label{eq:boundaryconditions_walphaxietag1} w^{n}(0) = w_0. \end{equation} \begin{equation}\label{eq:algorithm12} \alpha^n (t) = \alpha_0 + \int_0^t \mathcal{G}(s,\alpha^n(s),Mw^n(s)) ds \ \ \text{ for a.e. } t\in (0,T), \end{equation} \end{subequations} with $w^0 = w_0 \in V$ and $\alpha^0 = \alpha_0 \in Y$. \\ \indent\textbf{Step \ref{convergence}.\ref{welldefinedscehem}}\rsubtask{welldefinedscehem} \textit{(Existence and uniqueness of $(w^n,\alpha^n) \in\mathcal{W}^{1,2}_T \times C([0,T];Y)$ to \eqref{eq:algorithm11}-\eqref{eq:algorithm12} for all $n\in \mathbb{Z}_+$)}. We establish existence and uniqueness by induction on $n$.
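Before carrying out the induction, it may help to see the decoupled structure of \eqref{eq:algorithm11}-\eqref{eq:algorithm12} in a concrete computation. The following is a minimal numerical sketch on a hypothetical scalar model: an implicit Euler step stands in for the variational inequality, the Volterra integral $\int_0^t w\,ds$ stands in for the history terms, and all coefficients are illustrative; this is not the paper's actual operator setting.

```python
import numpy as np

# Toy instance of the decoupled scheme (hypothetical scalar model; mA, G,
# and the history term below are illustrative). At iterate n:
#   (1) solve  dw/dt + mA*w = f(t) - R[w^{n-1}](t),  w(0) = w0, where the
#       history term R[w](t) = int_0^t w(s) ds is frozen at the previous
#       iterate (implicit Euler in time);
#   (2) update alpha^n(t) = alpha0 + int_0^t G(s, alpha^n, w^n) ds by
#       forward Euler, using the freshly computed w^n.

T, nt = 1.0, 2001
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]
mA, w0, alpha0 = 2.0, 1.0, 0.5
f = np.cos(t)
G = lambda s, a, w: -a + 0.3 * w             # Lipschitz in (a, w)

def iterate(w_prev):
    # frozen history term: left Riemann sums of the previous iterate
    hist = np.concatenate(([0.0], np.cumsum(w_prev[:-1]) * dt))
    w = np.empty(nt); w[0] = w0
    for i in range(1, nt):                   # implicit Euler for w
        w[i] = (w[i-1] + dt * (f[i] - hist[i])) / (1.0 + dt * mA)
    a = np.empty(nt); a[0] = alpha0
    for i in range(1, nt):                   # forward Euler for alpha
        a[i] = a[i-1] + dt * G(t[i-1], a[i-1], w[i-1])
    return w, a

w = np.full(nt, w0)                          # initial guess w^0 = w_0
diffs = []
for n in range(8):
    w_new, alpha = iterate(w)
    diffs.append(np.sqrt(np.sum((w_new - w) ** 2) * dt))   # L^2_T distance
    w = w_new
print(diffs)
```

With these toy data the successive $L^2_T$ distances between iterates decay geometrically, which is the behavior the Cauchy-sequence argument of the proof establishes in the abstract setting.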
First, applying Minkowski's inequality, Young's inequality, integrating over the time interval $(0,t') \subset (0,T)$, and lastly applying the Cauchy-Schwarz inequality to hypotheses \hyperref[assumptionR]{$\H{\mathcal{R}}$}, \hyperref[assumptionS1]{$\H{\mathcal{S}_\varphi}$}, and \hyperref[assumptionS2]{$\H{\mathcal{S}_j}$}, respectively, yields \begin{align}\label{eq:hist_est_R} \int_0^{t'}\norm{\mathcal{R}w^{n-1}(t)}_{V^\ast}^2dt &\leq 2 \int_0^{t'}\norm{\mathcal{R}w^{n-1}(t) -\mathcal{R}0(t)}_{V^\ast}^2 dt + 2 \int_0^{t'} \norm{\mathcal{R}0(t)}_{V^\ast}^2 dt\\ \notag &\leq 2T^2c_\mathcal{R}^2 \int_0^{t'} \norm{w^{n-1}(t)}_{V}^2 dt + 2 \norm{\mathcal{R}0}_{L^2_{t'}V^\ast}^2 \\ \int_0^{t'}\norm{\mathcal{S}_\varphi w^{n-1}(t)}_X^2dt &\leq 2T^2c_{\mathcal{S}_\varphi }^2 \int_0^{t'} \norm{w^{n-1}(t)}_{V}^2 dt + 2 \norm{\mathcal{S}_\varphi 0}_{L^2_{t'}X}^2 \label{eq:hist_est_S1}\\ \int_0^{t'}\norm{\mathcal{S}_jw^{n-1}(t)}_X^2dt &\leq 2T^2c_{\mathcal{S}_j}^2 \int_0^{t'} \norm{w^{n-1}(t)}_{V}^2 dt + 2 \norm{\mathcal{S}_j0}_{L^2_{t'}X}^2 \label{eq:hist_est_S2} \end{align} for all $t' \in [0,T]$. We combine \eqref{eq:hist_est_R}-\eqref{eq:hist_est_S2} with the estimate \eqref{eq:estwwww} in Proposition \ref{prop:estimatewfirst} for $w_{\alpha\xi\eta g\chi} = w^n$, $\xi = \mathcal{R}w^{n-1}$, $\alpha = \alpha^{n-1}$, $\eta = \mathcal{S}_\varphi w^{n-1}$, $g=w^{n-1}$, and $\chi = \mathcal{S}_j w^{n-1}$.
This implies \begin{align}\label{eq:est_w_n} \norm{w^n}_{L^\infty_TH}^2 &+ \norm{w^n}_{\mathcal{W}^{1,2}_T}^2 \\ &\leq c(1 +\norm{f}_{L^2_TV^\ast}^2 + \norm{w_0}_V^2+ \norm{w^{n-1}}_{L^2_TV}^2 + \norm{\alpha^{n-1}}_{L^\infty_TY}^2 \notag \\ &+ T^k \norm{\alpha^{n-1}}_{L^\infty_TY}^2 \norm{w^{n-1}}_{L^2_TV}^2 ) \notag \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M} }{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha^{n-1}}_{L^\infty_TY} \norm{w^{n-1}}_{L^2_TV}^2 \notag \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M}}{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha^{n-1}}_{L^\infty_TY} \norm{w^{n}}_{\mathcal{W}^{1,2}_T}^2 \notag \end{align} for all $n\in \mathbb{Z}_+$ and some $k\geq 1/2$. Applying Minkowski's inequality to \eqref{eq:algorithm12} yields \begin{align*} \norm{\alpha^n(t)}_Y \leq \norm{\alpha_0}_Y + \int_0^t \norm{\mathcal{G}(s,\alpha^n(s),Mw^n(s))}_Y ds \end{align*} for a.e. $t\in (0,T)$. We observe that by Minkowski's inequality, \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_Lipschitz} and \hyperref[assumptionMNK]{$\H{MNK}$} \begin{align}\label{eq:est_on_G} \norm{\mathcal{G}(s,\alpha^n(s),Mw^n(s))}_Y &\leq \norm{\mathcal{G}(s,\alpha^n(s),Mw^n(s)) - \mathcal{G}(s,0,0)}_Y + \norm{\mathcal{G}(s,0,0)}_Y\\ &\leq L_\mathcal{G} \norm{\alpha^n(s)}_Y + L_\mathcal{G}\norm{M}\norm{w^n(s)}_V + \norm{\mathcal{G}(s,0,0)}_{Y} \notag \end{align} for a.e. $s\in (0,t) \subset (0,T)$. Accordingly, the Cauchy-Schwarz inequality and \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_00} imply \begin{align*} \norm{\alpha^n(t)}_Y &\leq c(\norm{\alpha_0}_Y + T\norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY} + T^{1/2} \norm{w^n}_{L^2_TV}) + c\int_0^t\norm{\alpha^n(s)}_Y ds \end{align*} for a.e. $t\in (0,T)$.
By Gr\"{o}nwall's inequality (see, e.g., \cite{evans}), we have \begin{subequations} \begin{align}\label{eq:alpha_estimate1} \norm{\alpha^n}_{L^\infty_TY} \leq c(\norm{\alpha_0}_Y &+ T^k\norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY} + T^k \norm{w^n}_{L^2_TV} )(1+cT\mathrm{e}^{cT}), \end{align} and from Young's inequality \begin{align}\label{eq:alpha_estimate2} \norm{\alpha^n}_{L^\infty_TY}^2 \leq c(\norm{\alpha_0}_Y^2 &+ T^k(\norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY}^2 + T^k\norm{w^n}_{L^2_TV}^2 ))(1+cT^2\mathrm{e}^{2cT}) \end{align} \end{subequations} for $k\geq 1/2$. We will show the uniform bound by induction on $n$. \\ \indent For $n=1$, with the initial guesses $w^0 = w_0$ and $\alpha^0 = \alpha_0$, \eqref{eq:est_w_n} becomes \begin{align*} \norm{w^1}_{L^\infty_TH}^2 + \norm{w^1}_{\mathcal{W}^{1,2}_T}^2 &\leq c(1 +\norm{w_0}_V^2+ \norm{f}_{L^2_TV^\ast}^2 + \norm{\alpha_0}_{Y}^2 + T^k \norm{\alpha_0}_{Y}^2 \norm{w_0}_{V}^2 ) \\ \notag &+ \frac{T}{2\sqrt{2}}\frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M} }{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}}\norm{\alpha_0}_Y \norm{w_0}_{V}^2 \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M}}{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}}\norm{\alpha_0}_Y\norm{w^1}_{\mathcal{W}^{1,2}_T}^2. \notag \end{align*} We choose $T>0$ such that \begin{align*} T^k &\sim \frac{1}{c(\norm{\alpha_0}_Y)} \end{align*} is small enough for $k\geq 1/2$. Consequently, by the smallness-assumption \eqref{eq:assumptionbound_mA_max} \begin{align}\label{eq:estiamte_w_1} \norm{w^1}_{L^\infty_TH}^2+ \norm{w^1}_{\mathcal{W}^{1,2}_T}^2 &\leq c (1 + \norm{f}_{L^2_TV^\ast}^2 + \norm{w_0}_V^2 + \norm{\alpha_0}_Y^2 ).
\end{align} We next define the complete metric space \begin{equation*} X_T(a) = \{ h \in C([0,T];Y) : \norm{h}_{L^\infty_TY} \leq a, \ \text{with } a\in \mathbb{R}_+\} \end{equation*} and the operator $\Lambda : X_T(a) \rightarrow X_T(a)$ by \begin{align}\label{eq:Lambda} \Lambda\alpha^1 (t) = \alpha_0 + \int_0^t \mathcal{G}(s,\alpha^1(s),Mw^1(s)) ds \end{align} for $w^1 \in \mathcal{W}^{1,2}_T$. We verify that $\alpha^1$ is indeed a solution to \eqref{eq:algorithm12} for $n=1$ in the next lemma. \begin{lem}\label{lemma:lambda_fixedpoint} Let $w^1 \in \mathcal{W}^{1,2}_T$ be a solution of \eqref{eq:algorithm11}-\eqref{eq:boundaryconditions_walphaxietag1} with $w^0 = w_0\in V$ and $\alpha^0 = \alpha_0 \in Y$. Under the assumptions of Theorem \ref{thm:mainresult}, the operator $\Lambda : X_T(a) \rightarrow X_T(a)$, defined by \eqref{eq:Lambda}, has a unique fixed-point, i.e., there exists a constant $0 \leq L < 1$ such that \begin{equation*} \norm{\Lambda \alpha^1_1 - \Lambda \alpha^1_2}_{X_T(a)} \leq L \norm{ \alpha^1_1 - \alpha^1_2}_{X_T(a)}. \end{equation*} \end{lem} The proof of Lemma \ref{lemma:lambda_fixedpoint} is moved to Appendix \ref{appendix:lambda_fixedpoint} as it follows from standard ODE arguments combined with the estimate \eqref{eq:estiamte_w_1}, and the assumptions \hyperref[assumptionG]{$\H{\mathcal{G}}$} and \hyperref[assumptionMNK]{$\H{MNK}$}. \indent Next, we investigate the case $n=2$.
This implies \begin{align*} \norm{w^2}_{L^\infty_TH}^2 + \norm{w^2}_{\mathcal{W}^{1,2}_T}^2 &\leq c(1 +\norm{w_0}_V^2+ \norm{f}_{L^2_TV^\ast}^2 + \norm{\alpha^1}_{L^\infty_TY}^2 + T^k \norm{\alpha^1}_{L^\infty_TY}^2 \norm{w^1}_{L^2_TV}^2 ) \\ \notag &+ \frac{1}{2\sqrt{2}}\frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M} }{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha^1}_{L^\infty_TY} \norm{w^1}_{L^2_TV}^2 \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M}}{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha^1}_{L^\infty_TY} \norm{w^2}_{\mathcal{W}^{1,2}_T}^2 \notag \end{align*} for $k\geq 1/2$. From the estimate \eqref{eq:alpha_estimate1} for $n=1$, we have that \begin{align*} \norm{w^2}_{L^\infty_TH}^2 + &\norm{w^2}_{\mathcal{W}^{1,2}_T}^2 \leq c (1+\norm{w_0}_V^2 + \norm{f}_{L^2_TV^\ast}^2 + \norm{\alpha^1}_{L^\infty_TY}^2 +T^k \norm{\alpha^1}_{L^\infty_TY}^2 \norm{w^1}_{L^2_TV}^2) \\ &\hspace{0.2cm}+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M}(\norm{\alpha_0}_{Y}+ T^k c(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast \times V\times Y}))}{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{w^1}_{L^2_TV}^{2}\\ &\hspace{0.2cm}+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M}(\norm{\alpha_0}_{Y}+ T^k c(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast \times V\times Y}))}{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{w^2}_{\mathcal{W}^{1,2}_T}^2 \end{align*} for $k\geq 1/2$. We choose $T>0$ such that \begin{align}\label{eq:T_k} T^k &\sim \frac{1}{c(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast \times V\times Y})} \end{align} is small enough for some $k\geq 1/2$.
From the smallness-condition \eqref{eq:assumptionbound_mA_max} and the choice of $T>0$, we have that \begin{align*} \frac{\sqrt{2}\beta_{5\varphi} \norm{K}\norm{M}}{m_A - m_j \norm{N}^2-\sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}}(\norm{\alpha_0}_{Y}+ T^kc(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast \times V\times Y})) <1. \end{align*} Accordingly, \eqref{eq:estiamte_w_1} implies \begin{align*} \norm{w^2}_{L^\infty_TH}^2 + \norm{w^2}_{\mathcal{W}^{1,2}_T}^2 \leq c (1+ \norm{f}_{L^2_TV^\ast}^2 + \norm{w_0}_V^2 + \norm{\alpha_0}_Y^2 ). \end{align*} Moreover, the verification that $\alpha^2 \in C([0,T];Y)$ is indeed a solution to \eqref{eq:algorithm12} for $n=2$ follows the same approach as for $n=1$. \\ \indent The induction step follows the same procedure as for $n=2$. Consequently, $(w^n, \alpha^n) \in \mathcal{W}^{1,2}_T \times C([0,T];Y)$ is the approximate solution of \eqref{eq:algorithm11}-\eqref{eq:algorithm12}. Further, we obtain the following uniform bound \begin{align}\label{eq:uniformlybounded} \norm{w^n}_{L^\infty_TH}^2 + \norm{w^n}_{\mathcal{W}^{1,2}_T}^2 +\norm{\alpha^n}_{L^\infty_TY}^2 \leq c (1 + \norm{f}_{L^2_TV^\ast}^2 + \norm{w_0}_V^2 + \norm{\alpha_0}_Y^2) \end{align} for all $n \in \mathbb{Z}_+$. Additionally, it follows from Proposition \ref{prop:integrationbypartsformula} that $\{ w^n \}_{n\geq 1} \subset C([0,T];H)$. \\ \indent\textbf{Step \ref{convergence}.\ref{passinglimit}} \rsubtask{passinglimit} \textit{(Convergence of the approximated solution)}. We first show that $\{(w^n,\alpha^n)\}_{n\geq 1}$ is a Cauchy sequence in $L^2_TV \times C([0,T];Y)$. This is summarized in the proposition below. \begin{prop}\label{prop:cauchysequences} Let $w^0 = w_0 \in V$ and $\alpha^0 = \alpha_0 \in Y$. Under the hypotheses of Theorem \ref{thm:mainresult}, let $\{(w^n,\alpha^n)\}_{n\geq 1} \subset \mathcal{W}^{1,2}_T \times C([0,T];Y)$ be the solution of \eqref{eq:algorithm11}-\eqref{eq:algorithm12}.
Then, $\{(w^n,\alpha^n)\}_{n\geq 1}$ is a Cauchy sequence in $L^2_TV \times C([0,T];Y)$. In addition, $\{\mathcal{R}w^n\}_{n\geq 1}$, $\{\mathcal{S}_\varphi w^n\}_{n\geq 1}$, and $\{\mathcal{S}_jw^n\}_{n\geq 1}$ are Cauchy sequences in $L^2_TV^\ast$, $L^2_TX$ and $L^2_TX$, respectively. \end{prop} \begin{remark} To cover the case where we obtain global time of existence when $\beta_{1\varphi} = \beta_{4\varphi} = \beta_{5\varphi}= \beta_{6\varphi} = \beta_{7\varphi} = 0$, the proof needs to be slightly modified. This case is included in Corollary \ref{cor:cauchysequences}. \end{remark} \begin{proof} Let $e_{\Dot{w}}^n = \Dot{w}^n - \Dot{w}^{n-1}$, $e_{w}^n = w^n - w^{n-1}$, and $e_\alpha^n = \alpha^n - \alpha^{n-1}$. To begin with, we add \eqref{eq:algorithm11} for two iterations at the levels $n$ and $n-1$. Then, choosing $v=w^{n-1}$ and $v=w^n$ for the levels $n$ and $n-1$, respectively, implies \begin{align*} \inner{e^n_{\Dot{w}}(t)}{e^n_{w}(t) } &+ \inner{ A(t, w^n(t)) - A(t, w^{n-1}(t)) }{w^n(t) - w^{n-1}(t) } \\ \notag &\leq \inner{ \mathcal{R}w^{n-2}(t) - \mathcal{R}w^{n-1}(t) }{e_{w}^n(t)} \\ \notag &+\varphi(t, \alpha^{n-1}(t),\mathcal{S}_\varphi w^{n-1}(t), Mw^{n-1}(t), Kw^{n-1}(t)) \\ \notag &- \varphi(t, \alpha^{n-1}(t),\mathcal{S}_\varphi w^{n-1}(t), Mw^{n-1}(t), Kw^{n}(t)) \\ \notag &+ \varphi(t,\alpha^{n-2}(t),\mathcal{S}_\varphi w^{n-2}(t), Mw^{n-2}(t), Kw^{n}(t)) \\ \notag &- \varphi(t,\alpha^{n-2}(t),\mathcal{S}_\varphi w^{n-2}(t), Mw^{n-2}(t), Kw^{n-1}(t))\\ \notag &+ j^\circ(t,\alpha^{n-1}(t),\mathcal{S}_j w^{n-1}(t),Nw^{n}(t); -Ne_w^{n}(t)) \\ \notag &+ j^\circ(t,\alpha^{n-2}(t),\mathcal{S}_j w^{n-2}(t), Nw^{n-1}(t); Ne_w^{n}(t) ) \end{align*} for a.e. $t \in (0,T)$ with $w^n(0) = w^{n-1}(0) = w_0$. 
We deduce from hypotheses \hyperref[assumptionA]{$\H{A}$}\ref{list:A_maximalmonotone}, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate}, \hyperref[assumptionj]{$\H{j}$}\ref{list:j_estimate}, \hyperref[assumptionMNK]{$\H{MNK}$}, and the Cauchy-Schwarz inequality that \begin{align*} &\inner{e_{\Dot{w}}^n(t)}{e_w^{n}(t) } + (m_A-m_j\norm{N}^2) \norm{e_w^n(t)}_V^2 \\ &\leq \norm{\mathcal{R}w^{n-1}(t) - \mathcal{R}w^{n-2}(t)}_{V^\ast}\norm{e_w^n(t)}_V \\ &+(\Bar{m}_j \norm{N}+\beta_{2\varphi} \norm{K}) \norm{ e_\alpha^{n-1}(t)}_Y\norm{e_w^n(t)}_V \\ &+ \beta_{1\varphi} \norm{K} \norm{M} \norm{w^{n-1}(t)}_V \norm{e_\alpha^{n-1}(t)}_Y\norm{e_w^n(t)}_V \\ &+ \beta_{3\varphi} \norm{K} \norm{ \mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X\norm{e_w^n(t)}_V \\ &+ \beta_{4\varphi} \norm{K}\norm{M} \norm{e_w^{n-1}(t)}_V \norm{e_w^n(t)}_V \\ &+ \beta_{5\varphi} \norm{K} \norm{M}\norm{\alpha^{n-2}(t)}_Y\norm{e_w^{n-1}(t)}_V \norm{e_w^n(t)}_V\\ &+\beta_{6\varphi} \norm{K} \norm{M}\norm{w^{n-1}(t)}_V\norm{ \mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X\norm{e_w^n(t) }_V \\ &+ \beta_{7\varphi} \norm{K} \norm{\alpha^{n-1}(t)}_Y \norm{ \mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X\norm{e_w^n(t)}_V \\ &+ \Bar{m}_j \norm{N} \norm{\mathcal{S}_j w^{n-1}(t) - \mathcal{S}_jw^{n-2}(t)}_X \norm{e_w^{n}(t)}_V \end{align*} for a.e. $t \in (0,T)$. Integrating over the time interval $(0,t')\subset (0,T)$, we observe after using the integration by parts formula in Proposition \ref{prop:integrationbypartsformula} (with $v_1=v_2=e_w^n(t)$ for a.e.
$t\in(0,T)$) and the Cauchy-Schwarz inequality that \begin{align*} &I_1 := \frac{1}{2}\norm{e_w^n(t')}_H^2 - \frac{1}{2}\norm{e_w^n(0)}_H^2 + (m_A- m_j \norm{N}^2)\int_0^{t'} \norm{e_w^n(t)}_V^2 dt \\ &\leq \int_0^{t'} \norm{ \mathcal{R}w^{n-1}(t) - \mathcal{R}w^{n-2}(t)}_{V^\ast} \norm{e_w^n(t)}_V dt \\ &+ (\bar{m}_{j} \norm{N} + \beta_{2\varphi} \norm{K}) \int_0^{t'} \norm{e_\alpha^{n-1}(t)}_Y\norm{e_w^n(t)}_V dt \\ &+ \beta_{1\varphi} \norm{K} \norm{M} \int_0^{t'} \norm{w^{n-1}(t)}_V \norm{e_\alpha^{n-1}(t)}_Y \norm{e_w^n(t)}_V dt\\ &+ \beta_{3\varphi} \norm{K} \int_0^{t'} \norm{ \mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X\norm{e_w^n(t)}_V dt \\ &+ \beta_{4\varphi} \norm{K}\norm{M} \int_0^{t'} \norm{e_w^{n-1}(t)}_V \norm{e_w^n(t)}_V dt\\ &+ \beta_{5\varphi} \norm{K} \norm{M}\int_0^{t'} \norm{\alpha^{n-2}(t)}_Y \norm{e_w^{n-1}(t) }_V \norm{e_w^n(t)}_V dt\\ &+ \beta_{6\varphi} \norm{K} \norm{M} \int_0^{t'} \norm{w^{n-1}(t)}_V \norm{ \mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X\norm{e_w^n(t)}_V dt\\ &+ \beta_{7\varphi} \norm{K} \int_0^{t'} \norm{\alpha^{n-1}(t)}_Y \norm{ \mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X\norm{e_w^n(t)}_V dt \\ &+ \bar{m}_{j} \norm{N} \int_0^{t'} \norm{ \mathcal{S}_j w^{n-1}(t) - \mathcal{S}_j w^{n-2}(t)}_X\norm{e_w^n(t)}_V dt\\ &=: I_{1,1} + I_{1,2} + I_{1,3} + I_{1,4} + I_{1,5} + I_{1,6} + I_{1,7} + I_{1,8} + I_{1,9} \end{align*} for a.e. $t'\in (0,T)$. We may apply the Cauchy-Schwarz inequality to $I_{1,1}$, $I_{1,2}$, $I_{1,4}$ , $I_{1,5}$, and $I_{1,9}$. For $I_{1,3}$ and $I_{1,7}$, we respectively apply H\"{o}lder's inequality with $\frac{1}{2} + \frac{1}{\infty}+ \frac{1}{2} = 1$ and $ \frac{1}{\infty}+\frac{1}{2} +\frac{1}{2} = 1$. 
To treat $I_{1,6}$, we observe by the Cauchy-Schwarz inequality and \hyperref[assumptionS1]{$\H{\mathcal{S}_\varphi}$}\ref{list:S_hist11} that \begin{align} \label{eq:S_hist_est2} \norm{\mathcal{S}_\varphi w^{n-1}(t) - \mathcal{S}_\varphi w^{n-2}(t)}_X^2 &\leq c_{\mathcal{S}_\varphi }^2 T \int_0^t \norm{e_w^{n-1}(s)}_V^2 ds \end{align} for a.e. $t \in (0,T)$. Therefore, we may use H\"{o}lder's inequality with $\frac{1}{2}+ \frac{1}{\infty} +\frac{1}{2} = 1$. Similarly, by \hyperref[assumptionR]{$\H{\mathcal{R}}$}\ref{list:S_hist} and \hyperref[assumptionS2]{$\H{\mathcal{S}_j}$}\ref{list:S_hist22}, respectively, we obtain \begin{align} \label{eq:R_hist_est} \norm{\mathcal{R}w^{n-1}(t) - \mathcal{R}w^{n-2}(t)}_{V^\ast}^2 &\leq c_\mathcal{R}^2 T \int_0^t \norm{e_w^{n-1}(s)}_V^2 ds \\ \label{eq:S_hist_est1} \norm{\mathcal{S}_jw^{n-1}(t) - \mathcal{S}_j w^{n-2}(t)}_X^2 &\leq c_{\mathcal{S}_j}^2 T \int_0^t \norm{e_w^{n-1}(s)}_V^2 ds \end{align} for a.e. $t \in (0,T)$. In addition, using that $w^n(0) = w^{n-1}(0)$ and the smallness-assumption \eqref{eq:assumptionbound_mA_max}, we obtain \begin{align*} (m_A- m_j \norm{N}^2)\int_0^{t'} \norm{e_w^n(t)}_V^2 dt \leq I_1 \end{align*} for all $t' \in [0,T]$.
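The Volterra-type bounds \eqref{eq:S_hist_est2}-\eqref{eq:S_hist_est1} are, at bottom, the Cauchy-Schwarz inequality applied to the inner integral. As a sanity check, the prototype inequality can be verified numerically for the hypothetical choice $\mathcal{S}w(t)=\int_0^t w(s)\,ds$ (so that the constant is $1$); the discrete version holds exactly for left Riemann sums.

```python
import numpy as np

# Sanity check of the prototype Volterra bound
#   |S w1(t) - S w2(t)|^2 <= T * int_0^t |w1(s) - w2(s)|^2 ds
# for the hypothetical choice S w(t) = int_0^t w(s) ds; the inequality is
# Cauchy-Schwarz applied to the inner integral, and it holds exactly for
# the left Riemann sums used below.

T, nt = 2.0, 4001
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]

def S(w):                                    # S w(t) = int_0^t w(s) ds
    return np.concatenate(([0.0], np.cumsum(w[:-1]) * dt))

violation = -np.inf
for a, b in [(1.0, 2.0), (3.0, 0.5), (0.2, 5.0)]:   # a few test pairs
    w1 = np.sin(a * t) + t
    w2 = np.cos(b * t)
    lhs = (S(w1) - S(w2)) ** 2
    rhs = T * S((w1 - w2) ** 2)
    violation = max(violation, float(np.max(lhs - rhs)))
print(violation)
```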
Gathering the above and dividing by $\norm{e_w^n}_{L^2_TV}$, we have \begin{align}\label{eq:I_2123} I_2 &:= (m_A -m_j \norm{N}^2 ) \bigg[ \int_0^{t'} \norm{e_w^n(t)}^2_V dt \bigg]^{1/2}\\ &\leq T^k( c_\mathcal{R} + \beta_{3\varphi} \norm{K} c_{\mathcal{S}_\varphi } + \Bar{m}_j \norm{N} c_{\mathcal{S}_j} + \beta_{7\varphi}\norm{K}c_{\mathcal{S}_\varphi}\norm{\alpha^{n-1}}_{L^\infty_{T}Y}) \notag \\ &\times \bigg[ \int_0^{t'}\int_0^t \norm{e_w^{n-1}(s)}_V^2 ds dt \bigg]^{1/2} \notag \\ &+(\bar{m}_{j}\norm{N}+\beta_{2\varphi}\norm{K})\bigg[ \int_0^{t'}\norm{e_\alpha^{n-1}(t)}_Y^2dt \bigg]^{1/2} \notag \\ &+ \beta_{1\varphi} \norm{K} \norm{M} \norm{w^{n-1}}_{L^2_TV} \norm{e_\alpha^{n-1}}_{L^\infty_{t'}Y} \notag\\ &+ ( \beta_{4\varphi}+\beta_{5\varphi} \norm{\alpha^{n-2}}_{L^\infty_{T}Y})\norm{K}\norm{M} \bigg[ \int_0^{t'} \norm{ e_w^{n-1}(t) }^2_V dt\bigg]^{1/2} \notag\\ &+ T^k\beta_{6\varphi} c_{\mathcal{S}_\varphi} \norm{K} \norm{M} \norm{w^{n-1}}_{L^2_{T}V} \bigg[ \int_0^{t'} \norm{e_w^{n-1}(t)}_V^2 dt \bigg]^{1/2} \notag \\ &=: I_{2,1} + I_{2,2} + I_{2,3} + I_{2,4} + I_{2,5} \notag \end{align} for all $t'\in [0,T]$ and $k\geq 1/2$. Next, we subtract \eqref{eq:algorithm12} for two iterations at the levels $n-1$ and $n-2$. Utilizing Minkowski's inequality and \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_Lipschitz} yields \begin{align}\label{eq:est_alpha123} \norm{e_\alpha^{n-1}(t)}_Y &\leq L_\mathcal{G} \int_0^t \norm{e_\alpha^{n-1}(s)}_Y ds + L_\mathcal{G} \norm{M} \int_0^t \norm{e_w^{n-1}(s)}_V ds \end{align} for a.e. $t\in(0,T)$. Applying a standard Gr\"{o}nwall argument and the Cauchy-Schwarz inequality yields \begin{align}\label{eq:etimate_alpha} \norm{e_\alpha^{n-1}(t)}_Y &\leq cT^k(1+cT\mathrm{e}^{cT})\Big[ \int_0^t\norm{e_w^{n-1}(s)}^2_V ds\Big]^{1/2} \end{align} for a.e. $t \in (0,T)$.
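The passage from \eqref{eq:est_alpha123} to \eqref{eq:etimate_alpha} is the standard Gr\"{o}nwall step: an integral inequality $u(t)\leq a + c\int_0^t u(s)\,ds$ implies $u(t)\leq a\,\mathrm{e}^{ct}$. A minimal discrete check, taking the extremal (equality) case on a grid with purely illustrative constants:

```python
import numpy as np

# Discrete illustration of the Gronwall step: if u(t) <= a + c*int_0^t u ds,
# then u(t) <= a*exp(c*t). We build the extremal (equality) case on a grid;
# the constants a, c, T are illustrative only.

T, nt, a, c = 1.5, 20001, 0.7, 2.0
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]

u = np.empty(nt)
u[0] = a
integral = 0.0
for i in range(1, nt):                 # u(t_i) = a + c * int_0^{t_i} u ds
    integral += u[i-1] * dt            # left Riemann sum
    u[i] = a + c * integral

bound = a * np.exp(c * t)
print(float(u[-1]), float(bound[-1]))
```

On this grid $u$ satisfies $u_i = a(1+c\,\Delta t)^i \leq a\,\mathrm{e}^{c t_i}$, so the computed trajectory stays below the exponential bound at every node.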
From \eqref{eq:uniformlybounded}, $I_{2,4}$ becomes \begin{align*} I_{2,4} &\leq (\beta_{4\varphi}+\beta_{5\varphi} \norm{\alpha_0}_Y)\norm{K}\norm{M} \bigg[ \int_0^{t'} \norm{ e_w^{n-1}(t) }^2_V dt\bigg]^{1/2}\\ &+ T^k c(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast \times V\times Y}) \bigg[ \int_0^{t'} \norm{ e_w^{n-1}(t) }^2_V dt\bigg]^{1/2}\\ &=: I_{2,4,1} + I_{2,4,2}. \end{align*} Combining $I_{2,4,2}$ and $I_{2,5}$, we define \begin{align*} I_{2,6} := T^k (\beta_{6\varphi} c_{\mathcal{S}_\varphi} \norm{w^{n-1}}_{L^2_{T}V} + c(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast \times V\times Y}) )\norm{K}\norm{M}\bigg[ \int_0^{t'} \norm{ e_w^{n-1}(t) }^2_V dt\bigg]^{1/2} \end{align*} for some $k\geq 1/2$. Accordingly, we have that \begin{align*} I_2 \leq I_{2,1} + I_{2,2} + I_{2,3} + I_{2,4,1} + I_{2,6}. \end{align*} Applying Young's inequality to $I_{2,4,1}$ and $I_{2,1} + I_{2,2} + I_{2,3} + I_{2,6}$ and then the arithmetic-quadratic mean inequality to the latter term, we obtain \begin{align*} (I_2)^2 &\leq (I_{2,1} + I_{2,2} + I_{2,3} + I_{2,4,1} + I_{2,6} )^2 \\ &\leq 2(I_{2,4,1})^2 + 2(I_{2,1}+ I_{2,2} + I_{2,3}+I_{2,6})^2 \notag\\ &\leq 2(I_{2,4,1})^2 + 8\Big[(I_{2,1})^2 + (I_{2,2})^2 + (I_{2,3})^2+(I_{2,6})^2\Big]. \notag \end{align*} From \eqref{eq:etimate_alpha} and Young's inequality, the terms $(I_{2,2})^2$ and $(I_{2,3})^2$ become \begin{align*} (I_{2,2})^2 &\leq T^k c(1+cT\mathrm{e}^{2cT}) \int_0^{t'} \norm{e_w^{n-1}(t)}_V^2 dt \\ (I_{2,3})^2 &\leq T^k c(1+cT^2\mathrm{e}^{2cT}) \int_0^{t'} \norm{e_w^{n-1}(t)}_V^2 dt \end{align*} for some $k\geq 1/2$.
Gathering the above estimates and noting that $m_A - m_j \norm{N}^2>0$ by the smallness-assumption \eqref{eq:assumptionbound_mA_max} yields \begin{align}\label{eq:estimate_before_iteration} I_3 &:= \int_0^{t'} \norm{e_w^{n}(t)}_V^2 dt \\ \notag &\leq T^k \frac{c}{(m_A -m_j \norm{N}^2)^2} \int_0^{t'}\int_0^t \norm{e_w^{n-1}(s)}_V^2 ds dt \\ \notag &+ \frac{2(\beta_{4\varphi} + \beta_{5\varphi}\norm{\alpha_0}_Y)^2 \norm{K}^2 \norm{M}^2}{(m_A -m_j \norm{N}^2)^2} \int_0^{t'}\norm{e_w^{n-1}(t)}^2_V dt \\ \notag &+T^k\frac{c(1+cT^2\mathrm{e}^{2cT})}{(m_A -m_j \norm{N}^2)^2} \int_0^{t'}\norm{e_w^{n-1}(t) }^2_V dt \\ &=: \Big(T^kR + \frac{2(\beta_{4\varphi} + \beta_{5\varphi}\norm{\alpha_0}_Y)^2 \norm{K}^2 \norm{M}^2}{(m_A -m_j \norm{N}^2)^2} \Big) \int_0^{t'}\norm{e_w^{n-1}(t) }^2_V dt \notag \end{align} for all $t' \in [0,T]$ and some $k\geq 1/2$. Here, \begin{align*} R &= \frac{c(\norm{(f,w_0,\alpha_0)}_{L^2_TV^\ast\times V \times Y})}{(m_A - m_j\norm{N}^2)^2}. \end{align*} The aim is to make the right-hand side of \eqref{eq:estimate_before_iteration} tend to zero as $n\rightarrow \infty$. To this end, we combine the smallness condition \eqref{eq:assumptionbound_mA_max} with the assumption on the final time $T$. \\ \indent Iterating over $n\in \mathbb{Z}_+$, we have that \begin{align*} I_3 \leq c\bigg(T^kR + \frac{2(\beta_{4\varphi} + \beta_{5\varphi}\norm{\alpha_0}_Y)^2 \norm{K}^2 \norm{M}^2}{(m_A -m_j \norm{N}^2)^2} \bigg)^n (\norm{w^1}_{L^2_TV}^2 + \norm{w_0}_V^2). \end{align*} Choosing $T>0$ small enough such that \begin{align*} T^k \sim \frac{1}{R} \end{align*} for some $k\geq 1/2$, the smallness-assumption \eqref{eq:assumptionbound_mA_max} implies that \begin{align}\label{eq:sometimething} T^kR + \frac{2(\beta_{4\varphi} + \beta_{5\varphi}\norm{\alpha_0}_Y)^2 \norm{K}^2 \norm{M}^2}{(m_A -m_j \norm{N}^2)^2} <1.
\end{align} Hence, combining with \eqref{eq:estiamte_w_1} and passing to the limit $n\rightarrow \infty$ gives \begin{align*} \lim_{n\rightarrow\infty} c \bigg(T^kR + \frac{2(\beta_{4\varphi} + \beta_{5\varphi}\norm{\alpha_0}_Y)^2 \norm{K}^2 \norm{M}^2}{(m_A -m_j \norm{N}^2)^2} \bigg)^n =0 \end{align*} as desired. \\ \indent Consequently, iterating over $n \in \mathbb{Z}_+$ in \eqref{eq:estimate_before_iteration}, and then passing to the limit $n \rightarrow \infty$, gives us that $\{w^n\}_{n\geq1}$ is a Cauchy sequence in $L^2_TV$, and \eqref{eq:etimate_alpha} implies that $\{\alpha^n\}_{n\geq 1}$ is a Cauchy sequence in $C([0,T];Y)$. Moreover, $\{\mathcal{R}w^n\}_{n\geq 1}$ is a Cauchy sequence in $L^2_TV^\ast$ by \eqref{eq:R_hist_est}. Similarly, $\{\mathcal{S}_\varphi w^n\}_{n\geq 1}$ is a Cauchy sequence in $L^2_TX$ by \eqref{eq:S_hist_est2}, and $\{\mathcal{S}_j w^n\}_{n\geq 1}$ is a Cauchy sequence in $L^2_TX$ by \eqref{eq:S_hist_est1}. This concludes the proof. \end{proof} \begin{cor}\label{cor:cauchysequences} Let $w^0 = w_0 \in V$ and $\alpha^0 = \alpha_0 \in Y$. Under the hypothesis of Theorem \ref{thm:mainresult} with $\beta_{1\varphi} = \beta_{4\varphi} = \beta_{5\varphi}= \beta_{6\varphi} = \beta_{7\varphi} = 0$, let $\{(w^n,\alpha^n)\}_{n\geq 1} \subset \mathcal{W}^{1,2}_T \times C([0,T];Y)$ be the solution of \eqref{eq:algorithm11}-\eqref{eq:algorithm12} for any time $T>0$. Then, $\{(w^n,\alpha^n)\}_{n\geq 1}$ is a Cauchy sequence in $L^2_TV \times C([0,T];Y)$. In addition, $\{\mathcal{R}w^n\}_{n\geq 1}$, $\{\mathcal{S}_\varphi w^n\}_{n\geq 1}$, and $\{\mathcal{S}_jw^n\}_{n\geq 1}$ are Cauchy sequences in $L^2_TV^\ast$, $L^2_TX$ and $L^2_TX$, respectively. \end{cor} The proof of Corollary \ref{cor:cauchysequences} can be found in Appendix \ref{appendix:cor_cauchy}. \\ \\ \indent\textbf{Step \ref{convergence}.\ref{weaksol}} \rsubtask{weaksol} \textit{(Passing the limit in \eqref{eq:algorithm11}-\eqref{eq:algorithm12})}.
From Proposition \ref{prop:cauchysequences}, it follows as $n\rightarrow \infty$ that \begin{subequations} \begin{align}\label{eq:strong_convergences1} w^n \rightarrow w \text{ strongly in } &L^2_TV, &\alpha^n \rightarrow \alpha \text{ strongly in } C([0,T];Y), \\ \mathcal{S}_\varphi w^n \rightarrow \mathcal{S}_\varphi w \text{ strongly in } &L^2_TX, &\mathcal{S}_j w^n \rightarrow \mathcal{S}_j w \text{ strongly in } L^2_TX, \\ \mathcal{R}w^n \rightarrow \mathcal{R}w \text{ strongly in }& L^2_TV^\ast. \label{eq:strong_convergences2} \end{align} \end{subequations} We are now in a position to pass to the limit in \eqref{eq:algorithm11}-\eqref{eq:algorithm12}. First, by \eqref{eq:uniformlybounded}, we have that $\{w^n\}_{n\geq 1}$ and $\{\Dot{w}^n\}_{n\geq 1}$ are uniformly bounded in $L^2_TV$ and $L^2_TV^\ast$, respectively. Then, by Eberlein–\v{S}mulian's theorem, as $L^2_TV$ is a reflexive Banach space, we have, upon passing to a subsequence, that \begin{align*} w^n \rightarrow w& \ \ \text{ weakly in } L^2_TV, &\Dot{w}^n \rightarrow \Dot{w} \ \ \text{ weakly in } L^2_TV^\ast. \end{align*} Now, with \eqref{eq:strong_convergences1}-\eqref{eq:strong_convergences2} in mind, we find, arguing by contradiction (if any of the convergences below failed along a subsequence, we would contradict \eqref{eq:strong_convergences1}-\eqref{eq:strong_convergences2}), that \begin{subequations} \begin{align} \label{eq:strong_convergences_t1} w^n(t) \rightarrow w(t) \text{ strongly in } &V \ \ \text{ for a.e. }t\in (0,T),\\ \alpha^n(t) \rightarrow \alpha(t) \text{ strongly in } &Y \ \ \text{ for a.e. }t\in (0,T), \label{eq:strong_convergences_talpha}\\ \mathcal{S}_\varphi w^n(t) \rightarrow \mathcal{S}_\varphi w(t) \text{ strongly in } &X \ \ \text{ for a.e. }t\in (0,T), \label{eq:strong_convergences_S2}\\ \mathcal{S}_jw^n(t) \rightarrow \mathcal{S}_j w(t) \text{ strongly in } &X \ \ \text{ for a.e. 
}t\in (0,T), \label{eq:strong_convergences_S1}\\ \mathcal{R}w^n(t) \rightarrow \mathcal{R}w(t) \text{ strongly in } &V^\ast \ \ \text{ for a.e. }t\in (0,T),\label{eq:strong_convergences_tR} \end{align} as $n \rightarrow \infty$. By similar arguments, we find by \eqref{eq:uniformlybounded} that $\{w^n(t)\}_{n\geq 1}$ for a.e. $t\in (0,T)$ is uniformly bounded in $V$ (see, e.g., the first part of the proof of \cite[Lemma 13]{Zeng2018}). Since $V$ is a reflexive Banach space, it follows by Eberlein–\v{S}mulian's theorem, up to a subsequence, that $w^n(t) \rightarrow \Tilde{w}(t)$ weakly in $V$ for a.e. $t\in (0,T)$. By uniqueness of limits, we have by \eqref{eq:strong_convergences_t1} that $\Tilde{w}(t) = w(t)$ for a.e. $t\in (0,T)$. Following the same reasoning shows that $\{\Dot{w}^n(t)\}_{n\geq 1}$ for a.e. $t\in (0,T)$ is uniformly bounded in $V^\ast$. Thus, as $n \rightarrow \infty$ \begin{align}\label{eq:weak_wdot} \Dot{w}^n(t) \rightarrow \Dot{w}(t) \text{ weakly in } &V^\ast \ \ \text{ for a.e. }t\in (0,T). \end{align} \end{subequations} From \eqref{eq:strong_convergences_t1} and \hyperref[assumptionA]{$\H{A}$}\ref{list:A_demicont}, we get \begin{equation}\label{eq:Aconvergenceweak} \lim_{n\rightarrow \infty} \inner{A(t, w^{n}(t)) }{v} = \inner{A(t, w(t))}{v} \ \ \text{ for all } v\in V, \text{ a.e. } t\in (0,T). \end{equation} Moreover, we utilize \hyperref[assumptionj]{$\H{j}$}\ref{list:j_convergence}, \eqref{eq:strong_convergences_t1}-\eqref{eq:strong_convergences_talpha}, \eqref{eq:strong_convergences_S1}, and \hyperref[assumptionMNK]{$\H{MNK}$} to obtain \begin{align*} \limsup_{n\rightarrow \infty} \ & j^\circ(t,\alpha^{n-1}(t), \mathcal{S}_jw^{n-1}(t),Nw^{n}(t);Nv-Nw^{n}(t)) \\ &\leq j^\circ(t,\alpha(t),\mathcal{S}_jw(t),Nw(t);Nv-Nw(t)) \end{align*} for all $v \in V$, a.e. $t\in(0,T)$. 
Next, by \eqref{eq:strong_convergences_t1}, \eqref{eq:strong_convergences_tR}-\eqref{eq:weak_wdot}, \eqref{eq:Aconvergenceweak}, and the Cauchy-Schwarz inequality, we find that \begin{align*} \lim_{n\rightarrow \infty} \inner{A(t, w^{n}(t)) }{v- w^{n}(t) } &= \inner{A(t, w(t))}{v- w(t)}, \\ \lim_{n\rightarrow \infty} \inner{\Dot{w}^{n}(t)}{v-w^{n}(t)} &= \inner{\Dot{w}(t)}{v-w(t)},\\ \lim_{n\rightarrow \infty} \inner{\mathcal{R}w^{n}(t)}{v-w^{n}(t)} &= \inner{\mathcal{R}w(t)}{v-w(t)},\\ \lim_{n\rightarrow \infty}\inner{f(t)}{v-w^{n}(t)} &= \inner{f(t)}{v-w(t)} \end{align*} for all $v\in V$, a.e. $t\in (0,T)$. Furthermore, let $\mathbb{E} = Y \times X \times U$ be equipped with the norm $\norm{(x,y,z)}_{\mathbb{E}} = \norm{x}_Y + \norm{y}_X + \norm{z}_U$. We wish to deduce that $\varphi(t,\cdot,\cdot,\cdot,\cdot)$ is continuous on $\mathbb{E} \times Z$ for a.e. $t\in (0,T)$ by applying Lemma \ref{lemma:phicontinuous}. The conditions \ref{list:phicont1} and \ref{list:phicont3} follow directly from \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_cont},\ref{list:phi_bounded}. Lastly, we find that condition \ref{list:phicont2} holds by Proposition \ref{prop:convex_lsc_implies_locallyLipschitz}. Indeed, as $\varphi$ is lower semicontinuous and convex in its last argument, by \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_convex_lsc}, and the fact that $\varphi$ is finite (does not take the values $\pm\infty$), it then follows by \eqref{eq:strong_convergences_t1}-\eqref{eq:strong_convergences_S2} and \hyperref[assumptionMNK]{$\H{MNK}$} that \begin{align*} &\lim_{n\rightarrow \infty} \big[\varphi(t,\alpha^{n-1}(t),\mathcal{S}_\varphi w^{n-1}(t),Mw^{n-1}(t),Kv) \\ &\qquad- \varphi(t,\alpha^{n-1}(t),\mathcal{S}_\varphi w^{n-1}(t),Mw^{n-1}(t),Kw^{n-1}(t)) \big]\\ &\qquad= \varphi(t,\alpha(t),\mathcal{S}_\varphi w(t),Mw(t),Kv) - \varphi(t,\alpha(t),\mathcal{S}_\varphi w(t),Mw(t),Kw(t)) \end{align*} for all $v\in V$, a.e. $t\in(0,T)$.
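The first of the limits collected above, for the operator $A$, can be justified by the elementary decomposition (a sketch, using that weakly convergent sequences are bounded):
\begin{align*}
\inner{A(t, w^{n}(t))}{v - w^{n}(t)} - \inner{A(t, w(t))}{v - w(t)}
&= \inner{A(t, w^{n}(t)) - A(t, w(t))}{v - w(t)} \\
&\quad + \inner{A(t, w^{n}(t))}{w(t) - w^{n}(t)},
\end{align*}
where the first term on the right-hand side tends to zero by \eqref{eq:Aconvergenceweak} applied to the fixed test element $v - w(t) \in V$, and the second is bounded by $\norm{A(t,w^{n}(t))}_{V^\ast} \norm{w(t) - w^{n}(t)}_V \rightarrow 0$, since $\{A(t,w^{n}(t))\}_{n\geq 1}$ is bounded in $V^\ast$ and $w^{n}(t) \rightarrow w(t)$ strongly in $V$ by \eqref{eq:strong_convergences_t1}.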
Next, we have by \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_Lipschitz} that $\mathcal{G}(t,\cdot,\cdot)$ is continuous on $Y \times U$ for a.e. $t\in (0,T)$. From \eqref{eq:est_on_G}, \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_00}, and \eqref{eq:uniformlybounded}, we obtain the desired dominating bound, i.e., a bound that is integrable and independent of $n$. Combining \eqref{eq:strong_convergences_t1}, \eqref{eq:strong_convergences_talpha}, and \hyperref[assumptionMNK]{$\H{MNK}$}, we may apply the dominated convergence theorem to conclude \begin{align*} \lim_{n\rightarrow \infty}\int_0^t \mathcal{G}(s,\alpha^n(s),Mw^n(s))ds = \int_0^t \mathcal{G}(s,\alpha(s),Mw(s))ds \end{align*} for all $t\in [0,T]$. Thus, passing to the upper limit $n\rightarrow \infty$ in \eqref{eq:algorithm11}-\eqref{eq:algorithm12} shows that $(w,\alpha) \in \mathcal{W}^{1,2}_T \times C([0,T];Y)$ is indeed a solution to Problem \ref{prob:fullproblem}. \\ \indent \textbf{Step \ref{continuous}\rtask{continuous}} \textit{(Continuous dependence on initial data)}. For simplicity of notation, we define $\mathbb{U} = L^2_{T/2}V \times C([0,T/2];Y)$ equipped with the norm $\norm{(x,y)}_\mathbb{U}^2 = \norm{x}_{L^2_{T/2}V}^2 +\norm{y}_{L^\infty_{T/2}Y}^2$. Consider two sets of initial data $(w_{01},\alpha_{01}),(w_{02},\alpha_{02}) \in V \times Y$. We aim to prove that for all $\lambda >0$ there exists a $\delta>0$, to be fixed later, such that \begin{subequations} \begin{align}\label{eq:continuous1} \norm{(w_{01},\alpha_{01})-(w_{02},\alpha_{02})}_{V \times Y} < \delta \end{align} implies \begin{align}\label{eq:continuous2} \norm{(w_1,\alpha_1) - (w_2,\alpha_2)}_\mathbb{U} < \lambda. \end{align} \end{subequations} We consider the time interval $(0,T/2)$ with $T$ as in \eqref{eq:times_T}, so that the two solutions below are defined on a common time interval.
Here, $(w_1,\alpha_1),(w_2,\alpha_2) \in \mathcal{W}^{1,2}_{T/2} \times C([0,T/2];Y)$ are two solutions to Problem \ref{prob:fullproblem} corresponding to $(w_{01},\alpha_{01}),(w_{02},\alpha_{02})$. That is, for $i=1,2$, $(w_i,\alpha_i) \in \mathcal{W}^{1,2}_{T/2} \times C([0,T/2];Y)$ is the solution to \begin{subequations} \begin{align*} &\alpha_i(t) = \alpha_{0i} + \int_0^t \mathcal{G}(s,\alpha_i(s),Mw_i(s))ds,\\ &\inner{\Dot{w}_i (t) + A(t, w_i(t))- f(t) + \mathcal{R}w_i(t) }{v- w_i(t) } \label{eq:cont2}\\ &+ \varphi(t,\alpha_i(t), \mathcal{S}_\varphi w_i(t), Mw_i(t), Kv) - \varphi(t, \alpha_i(t), \mathcal{S}_\varphi w_i(t), Mw_i(t), Kw_i(t)) \notag\\ &+ j^\circ(t,\alpha_i(t),\mathcal{S}_j w_i(t), Nw_i(t); Nv -Nw_i(t))\geq 0 \notag \end{align*} for all $v\in V$, a.e. $t\in (0,T/2)$ with \begin{align} w_i(0) = w_{0i}. \end{align} \end{subequations} Let $\{(w_i^n,\alpha^n_i)\}_{n\geq 1} \subset \mathcal{W}_{T/2}^{1,2} \times C([0,T/2];Y)$ be two solutions of \eqref{eq:algorithm11}-\eqref{eq:algorithm12} corresponding to the data $(w_{0i},\alpha_{0i}) \in V \times Y$ for $i=1,2$. Then observe that \begin{align*} &\norm{(w_1,\alpha_1) - (w_2,\alpha_2)}_{\mathbb{U}}\\ &\leq \norm{(w_1^n,\alpha_1^n) - (w_1,\alpha_1)}_{\mathbb{U}} +\norm{(w_2^n,\alpha_2^n) - (w_2,\alpha_2)}_{\mathbb{U}} +\norm{(w_1^n,\alpha_1^n) - (w_2^n ,\alpha_2^n )}_{\mathbb{U}}\\ &=: B_1 + B_2 + B_3. \end{align*} By Step \ref{convergence}.\ref{passinglimit}, we have that $(w_i^n,\alpha_i^n) \rightarrow (w_i,\alpha_i)$ strongly in $\mathbb{U}$ as $n\rightarrow \infty$ for $i=1,2$. Consequently, for $n$ large enough, \begin{align*} B_1 + B_2 < \delta. \end{align*} For $B_3$, we need the continuity of the flow map associated with \eqref{eq:algorithm11}-\eqref{eq:algorithm12}. We add the two inequalities, choosing $v=w^n_k(t)$ in the inequality for $w^n_i(t)$ with $i,k=1,2$, $i\neq k$, for a.e. $t\in(0,T/2)$.
Let $(w^{n-1}_1,\alpha^{n-1}_1),(w^{n-1}_2,\alpha^{n-1}_2) \in \mathcal{W}^{1,2}_{T/2} \times C([0,T/2];Y)$ be given, and $W^0 = W_0 := w_{01} - w_{02} \in V$ and $\Sigma^0 = \Sigma_0 := \alpha_{01} - \alpha_{02}\in Y$. Then $W^n = w^n_1-w^n_2$ and $\Sigma^n= \alpha_1^n - \alpha_2^n$ solve \begin{subequations} \begin{align}\label{eq:ineq_W_n} &\inner{\Dot{W}^n(t) + A(t, w^n_1(t)) - A(t, w^n_2(t)) + \mathcal{R}w^{n-1}_1(t)-\mathcal{R}w^{n-1}_2(t) }{W^n(t)} \\ &+ \varphi(t,\alpha^{n-1}_1(t), \mathcal{S}_\varphi w^{n-1}_1(t), Mw^{n-1}_1(t), Kw^n_2(t)) \notag\\ &- \varphi(t,\alpha^{n-1}_1(t), \mathcal{S}_\varphi w^{n-1}_1(t), Mw^{n-1}_1(t), Kw^n_1(t)) \notag\\ &+ \varphi(t,\alpha^{n-1}_2(t), \mathcal{S}_\varphi w^{n-1}_2(t), Mw^{n-1}_2(t), Kw^n_1(t)) \notag \\ &- \varphi(t,\alpha^{n-1}_2(t), \mathcal{S}_\varphi w^{n-1}_2(t), Mw^{n-1}_2(t), Kw^n_2(t)) \notag\\ &+j^\circ(t,\alpha^{n-1}_1(t), \mathcal{S}_j w^{n-1}_1(t), Nw^n_1(t); -W^n(t) ) \notag\\ &+j^\circ(t,\alpha^{n-1}_2(t), \mathcal{S}_j w^{n-1}_2(t), Nw^n_2(t); W^n(t) ) \geq 0 \notag \end{align} for a.e. $t\in (0,T/2)$ with \begin{equation}\label{eq:bc_W_n} W^n (0) = W_0 , \end{equation} and \begin{equation}\label{eq:ineq_A_n} \Sigma^n (t) = \Sigma_0 + \int_0^t[\mathcal{G}(s,\alpha^n_1(s),Mw^n_1(s))- \mathcal{G}(s,\alpha^n_2(s),Mw^n_2(s))]ds \end{equation} \end{subequations} for a.e. $t\in (0,T/2)$. To find the desired estimates, we need the following lemma. \begin{lem}\label{lemma:est_W_n} Let $w^0_i = w_{0i} \in V$ and $\alpha^0_i = \alpha_{0i} \in Y$ for $i=1,2$. Under the assumptions of Theorem \ref{thm:mainresult}, let $\{(W^n,\Sigma^n)\}_{n\geq 1} \subset \mathcal{W}_{T/2}^{1,2} \times C([0,T/2];Y)$ be the solution to \eqref{eq:ineq_W_n}-\eqref{eq:ineq_A_n}.
Then \begin{align}\label{eq:estU} & \norm{W^n}_{L^2_{T/2}V}^2 \\ &\leq c(\norm{W_0}_V^2 + \norm{\Sigma^{n-1}}_{L^\infty_{T/2}Y}^2 + \norm{W^{n-1}}_{L^2_{T/2}V}^2 + \norm{w^{n-1}_1}_{L^2_{T/2}Y}^2 \norm{\Sigma^{n-1}}_{L^\infty_{T/2}Y}^2 \notag \\ &+ T^k \norm{w_1^{n-1}}_{L^2_{T/2}V}^2\norm{W^{n-1}}_{L^2_{T/2}V}^2 + T^k \norm{\alpha_1^{n-1}}_{L^\infty_{T/2}Y}^2 \norm{W^{n-1}}_{L^2_{T/2}V}^2 ) \notag \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K} \norm{M} }{m_A - m_j\norm{N}^2 - \sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha_2^{n-1}}_{L^\infty_{T/2}Y}\norm{W^{n-1}}_{L^2_{T/2}V}^2 \notag \\ &+ \frac{1}{2\sqrt{2}}\frac{\sqrt{2}\beta_{5\varphi} \norm{K} \norm{M} }{m_A - m_j\norm{N}^2 - \sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha_2^{n-1}}_{L^\infty_{T/2}Y} \norm{W^n}_{L^2_{T/2}V}^2\notag \end{align} for all $n\in \mathbb{Z}_+$ and some $k\geq 1/2$. \end{lem} The proof of Lemma \ref{lemma:est_W_n} is postponed to Appendix \ref{appendix:proof_W_n}. From \eqref{eq:T_k} and \eqref{eq:uniformlybounded}, the estimate \eqref{eq:estU} becomes \begin{align*} \norm{W^n}_{L^2_{T/2}V}^2 &\leq c(\norm{W_0}_V^2 +\norm{\Sigma^{n-1}}_{L^\infty_{T/2}Y}^2 + \norm{W^{n-1}}_{L^2_{T/2}V}^2)\\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K} \norm{M} }{m_A - m_j\norm{N}^2 - \sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha_2^{n-1}}_{L^\infty_{T/2}Y}\norm{W^{n-1}}_{L^2_{T/2}V}^2 \\ &+ \frac{1}{2\sqrt{2}} \frac{\sqrt{2}\beta_{5\varphi} \norm{K} \norm{M} }{m_A - m_j\norm{N}^2 - \sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}} \norm{\alpha_2^{n-1}}_{L^\infty_{T/2}Y} \norm{W^n}_{L^2_{T/2}V}^2. \end{align*} In a similar manner as we obtained \eqref{eq:alpha_estimate2}, we see from \eqref{eq:ineq_A_n}, a standard Gr\"{o}nwall argument, the Cauchy-Schwarz inequality, and Young's inequality that \begin{align*} \norm{\Sigma^{n-1}}_{L^\infty_{T/2}Y}^2 \leq c(\norm{\Sigma_0}_Y^2 + T^k \norm{W^{n-1}}_{L^2_{T/2}V}^2 ). 
\end{align*} By induction on $n$, following the same procedure as in Step \ref{convergence}.\ref{welldefinedscehem}, we obtain \begin{align*} \norm{(W^n,\Sigma^n)}_{L^2_{T/2}V \times L^\infty_{T/2}Y}^2 &\leq c(\norm{(W_0,\Sigma_0)}_{ V \times Y}^2 ) \end{align*} for all $n\in \mathbb{Z}_+$. Consequently, we may choose a $\delta >0$ in \eqref{eq:continuous1} to obtain \eqref{eq:continuous2}.\\ \indent \textbf{Step \ref{mainproof}\rtask{mainproof}} \textit{(Proof of Theorem \ref{thm:mainresult})}. We now have all the tools to prove the main theorem. \begin{proof}[Proof of Theorem \ref{thm:mainresult}] Combining Steps \ref{linearization}-\ref{continuous} gives the well-posedness of Problem \ref{prob:fullproblem}. \end{proof} \section{Viscoelastic frictional contact problems}\label{sec:applications} \noindent We present two applications to frictional contact: the first concerns contact with normal compliance, and the second contact with normal damped response. Moreover, in Section \ref{sec:rateandstate}, we introduce a first-order approximation of the rate-and-state friction law \eqref{eq:regularized} and \eqref{eq:aginglaw} that is included in our framework. Let $u : \Omega \times [0,T] \rightarrow \mathbb{R}^d$ denote the displacement, $\sigma : \Omega \times [0,T] \rightarrow \mathbb{S}^d$ the stress tensor, and $\alpha : \Gamma_C \times [0,T] \rightarrow \mathbb{R}$ the external state variable. In addition, $f_0$ denotes the body forces, $f_N$ the surface traction, and $\rho$ the density. \begin{figure} \caption{A standard illustration of a sliding block.} \label{fig:klosse} \end{figure}\noindent We let the spaces $H=L^2(\Omega;\mathbb{R}^d)$, $Q=L^2(\Omega; \mathbb{S}^d)$, $V$, and $\mathcal{W}^{1,2}_T$ be defined by \eqref{eq:H_space}, \eqref{eq:Q_space}, \eqref{eq:V_space}, and \eqref{eq:spaces_VastW}, respectively. We refer to Sections \ref{sec:sobolev_spaces}-\ref{sec:bochner_spaces} for further definitions of the function spaces.
Further, let $X=L^4(\Gamma_C)$ and $U=L^4(\Gamma_C;\mathbb{R}^d)$. Let $\gamma_\nu : V \rightarrow X$ denote the normal trace operator, and $\gamma_\tau : V \rightarrow U$ denote the tangential trace operator. It then follows by Theorem \ref{thm:trace} that $\gamma_\tau$ and $\gamma_\nu$ are well-defined for $d=2,3$. For all $v\in V$, we let $v_\nu = \gamma_\nu v = v \cdot \nu$ denote the normal components on $\Gamma$, and $v_\tau =\gamma_\tau v = v - v_\nu\nu$ the tangential components on $\Gamma$. Similarly, let $\sigma_\nu = (\sigma\nu) \cdot \nu$, and $\sigma_\tau = \sigma\nu - \sigma_\nu\nu$ be the normal and tangential components of the tensor $\sigma$ on $\Gamma$, respectively. \subsection{Dynamic frictional contact problem with normal compliance}\label{sec:application1} In this section, we present a system of equations describing the evolution of a viscoelastic body in frictional contact with a foundation. Viscoelastic contact problems with normal compliance and friction are discussed in, e.g., \cite[Section 8.3]{shillor2004}. The normal compliance condition is used as an approximation of the Signorini non-penetration condition. More on this can be found in \cite[Chapter 5]{han2002}, \cite[Chapter 11]{Kikuchi1988} and \cite{Sofonea2012}. 
We wish to study the following problem: \begin{prob}\label{prob:application} Find the displacement $u: \Omega \times [0,T] \rightarrow \mathbb{R}^d$ and the external state variable $\alpha : \Gamma_C \times [0,T] \rightarrow \mathbb{R}$ such that \begin{subequations} \begin{align} \label{eq:sigma} \sigma(t) &= \mathcal{A} \varepsilon (\Dot{u}(t)) + \mathcal{B} \varepsilon (u(t)) + \int_0^t \mathcal{C}(t-s, \varepsilon(\Dot{u}(s))) ds & \text{ on } \Omega \times (0,T)\\ \label{eq:momentumeq} \rho \Ddot{u}(t) &= \nabla \cdot \sigma(t) + f_0(t) & \text{ on } \Omega \times (0,T)\\ u(t) &= 0, \ \ \Dot{u}(t) = 0 & \text{ on } \Gamma_D \times (0,T) \label{eq:direchelet} \\ \sigma(t) {\nu} &= f_N(t) & \text{ on } \Gamma_N \times (0,T) \label{eq:traction} \\ -\sigma_\nu(t) &= p(u_\nu(t)) & \text{ on } \Gamma_C \times (0,T) \label{eq:sigma_nu} \\ |\sigma_\tau(t)| &\leq \mu (|\Dot{u}_\tau(t)|, \alpha(t)) |\sigma_\nu(t)| & \text{ on } \Gamma_C \times (0,T) \label{eq:prob1_friction_eq2} \\ -\sigma_\tau(t) &= \mu (|\Dot{u}_\tau(t)|, \alpha(t)) |\sigma_\nu(t)| \frac{\Dot{u}_\tau (t)}{|\Dot{u}_\tau(t)|}, \ \ \text{ if } \Dot{u}_\tau(t) \neq 0 & \text{ on } \Gamma_C \times (0,T) \label{eq:prob1_friction_eq}\\ \Dot{\alpha}(t) &= G (\alpha(t), |\Dot{u}_\tau(t)|) & \text{ on } \Gamma_C \times (0,T) \label{eq:alphaeq} \end{align} with the initial conditions \begin{align} u(0) &= u_0, \ \ \Dot{u}(0) = w_0 & \text{ on } \Omega \label{eq:bbc1} \\ \alpha (0) &= \alpha_{0} & \text{ on } \Gamma_C. \label{eq:bbc} \end{align} \end{subequations} \end{prob} In the above problem, \eqref{eq:sigma} is a general viscoelastic constitutive law, where $\mathcal{A}$ is a viscosity operator, $\mathcal{B}$ an elasticity operator, and $\mathcal{C}$ is referred to as a relaxation tensor. We note that $\mathcal{A}\varepsilon(\Dot{u})$ and $\mathcal{B}\varepsilon(u)$ are short-hand notation for $\mathcal{A}(x,\varepsilon(\Dot{u}))$ and $\mathcal{B}(x,\varepsilon(u))$, respectively. 
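As a side illustration, the state evolution \eqref{eq:alphaeq} can be integrated numerically in its mild form. The following is a minimal sketch in which the right-hand side $G(\alpha, r) = 1 - \alpha r / L$ is a hypothetical aging-law-type choice with characteristic slip distance $L$; this particular $G$, the function names, and the forward-Euler discretization are illustrative assumptions, not part of the formulation (any $G$ satisfying the Lipschitz assumption $\H{G}$ below would do).

```python
import math

def integrate_state(G, alpha0, r_of_t, T, n_steps):
    """Forward-Euler sketch of the mild form of the state evolution:
    alpha(t) = alpha0 + int_0^t G(alpha(s), r(s)) ds,
    where r(s) plays the role of the slip rate |u_tau'(s)|."""
    dt = T / n_steps
    alpha = alpha0
    for i in range(n_steps):
        alpha += dt * G(alpha, r_of_t(i * dt))
    return alpha

# Hypothetical aging-law-type right-hand side with characteristic
# slip distance L_char (an illustrative choice).
L_char = 1.0
G = lambda alpha, r: 1.0 - alpha * r / L_char

# Constant slip rate r = 1 and alpha0 = 0: the exact solution of
# alpha' = 1 - alpha is alpha(t) = 1 - exp(-t).
alpha_T = integrate_state(G, alpha0=0.0, r_of_t=lambda t: 1.0, T=1.0, n_steps=100000)
```

For a constant slip rate and $\alpha_0 = 0$ the approximation can be checked against the exact solution $\alpha(t) = 1 - \mathrm{e}^{-t}$, whose value at $t=1$ the Euler iterate approaches as the step size decreases.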
Moreover, \eqref{eq:momentumeq} is a momentum balance equation, \eqref{eq:direchelet} denotes the Dirichlet boundary conditions, and \eqref{eq:traction} the traction applied to the surface. The equation \eqref{eq:sigma_nu} is a contact condition, where $p$ is a prescribed function describing the penetration condition. Next, \eqref{eq:prob1_friction_eq2}-\eqref{eq:prob1_friction_eq} denote a generalized Coulomb friction law, and \eqref{eq:alphaeq} describes the evolution of the external state variable, see Section \ref{sec:intro_physics} for a discussion on this equation. Lastly, \eqref{eq:bbc1}-\eqref{eq:bbc} are initial conditions. We wish to investigate \eqref{eq:sigma}-\eqref{eq:bbc} under the following assumptions: \\ $\underline{\H{\mathcal{A}}}$: \label{assumptionAcal} $\mathcal{A} : \Omega \times \mathbb{S}^d \rightarrow \mathbb{S}^d \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item For any $ \varepsilon \in \mathbb{S}^d, \ x\mapsto \mathcal{A} (x,\varepsilon)$ is measurable on $\Omega$. \item There exists $L_\mathcal{A} > 0$ such that $ |\mathcal{A} (x,\varepsilon_1) - \mathcal{A} (x,\varepsilon_2)| \leq L_\mathcal{A} |\varepsilon_1 - \varepsilon_2|$ for all $\varepsilon_1,\varepsilon_2 \in \mathbb{S}^d$, a.e. $x \in \Omega$. \label{list:Acal_bounded} \item There exists $m_\mathcal{A}>0$ such that $ (\mathcal{A}(x,\varepsilon_1) - \mathcal{A}(x,\varepsilon_2)): (\varepsilon_1 - \varepsilon_2) \geq m_\mathcal{A} |\varepsilon_1 - \varepsilon_2|^2$, for all $\varepsilon_1, \varepsilon_2 \in \mathbb{S}^d$, a.e. $x \in \Omega$. \label{list:Acal_maximalmonotone} \label{list:Acal_measurable} \item $\mathcal{A} (x,0) = 0$ for a.e. $x\in \Omega$.
\label{list:Acal_0} \end{enumerate} \noindent $\underline{\H{\mathcal{B}}}$: \label{assumptionB} $\mathcal{B} : \Omega \times \mathbb{S}^d \rightarrow \mathbb{S}^d \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item For any $ \varepsilon \in \mathbb{S}^d, \ x\mapsto \mathcal{B} (x,\varepsilon)$ is measurable on $\Omega$. \label{list:Bcal_measurable} \item There exists $L_\mathcal{B} > 0$ such that $|\mathcal{B} (x,\varepsilon_1) - \mathcal{B} (x,\varepsilon_2)| \leq L_\mathcal{B} |\varepsilon_1 - \varepsilon_2|$ for all $\varepsilon_1,\varepsilon_2 \in \mathbb{S}^d$, a.e. $x\in \Omega$. \label{list:Bcal_bounded} \item $\norm{\mathcal{B} (\cdot,0)}_Q < \infty $. \label{list:Bcal_0} \end{enumerate} \noindent $\underline{\H{\mu}}$: \label{assumptionmu} $ \mu : \Gamma_C \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item The mapping $ x\mapsto \mu(x,r,y)$ is measurable on $\Gamma_C$ for all $r ,y \in \mathbb{R}$. \label{list:mu1_measurable} \item There exist $L_{1\mu},L_{2\mu},L_{3\mu} \geq 0$ such that $|\mu(x,r_1,y_1) - \mu (x,r_2,y_2)| \leq ( L_{1\mu} + L_{2\mu}|y_2|)|r_1-r_2| + L_{3\mu}|r_1||y_1-y_2| $ for all $r_1,r_2\in \mathbb{R}$, $y_1,y_2\in \mathbb{R}$, a.e. $x\in \Gamma_C$. \label{list:mu1_Lipschitz} \item There exist $\kappa_1,\kappa_2,\kappa_3 \geq 0$ such that $ |\mu(x,r,y) | \leq \kappa_1 + \kappa_2|y|+\kappa_3|r|$ for all $r \in \mathbb{R}$, $y\in \mathbb{R}$, a.e. $x\in \Gamma_C$. \label{list:mu1_est} \end{enumerate} \noindent $\underline{\H{p}}$: \label{assumptionp} $ p : \Gamma_C \times \mathbb{R} \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] 
\item The mapping $ x\mapsto p(x,r)$ is measurable on $\Gamma_C$ for all $r \in \mathbb{R}$. \label{list:p_measurable} \item There exists $L_p > 0$ such that $|p(x,r_1) - p (x,r_2)| \leq L_p |r_1-r_2|$ for all $r_1,r_2 \in \mathbb{R}$, a.e. $x\in \Gamma_C$. \label{list:p_Lipschitz} \item There exists $ p^\ast >0 $ such that $p(x,r) \leq p^\ast$ for all $r\in \mathbb{R}$, a.e. $x\in \Gamma_C$. \label{list:p_bounded} \end{enumerate} \noindent $\underline{\H{G}}$: \label{assumptionGapp} $G : \Gamma_C \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item The mapping $x\mapsto G(x,\alpha,r)$ is measurable on $\Gamma_C$ for all $\alpha,r \in \mathbb{R}$. \label{list:G_app_measurable} \item There exists $L_G > 0$ such that $|G(x,\alpha_1,r_1) - G (x,\alpha_2,r_2)| \leq L_G (|\alpha_1-\alpha_2| + |r_1-r_2|)$ for all $\alpha_1,\alpha_2,r_1,r_2 \in \mathbb{R}$, a.e. $x\in \Gamma_C$. \label{list:G_app_Lipschitz} \item $\norm{G(\cdot,0,0)}_{L^2(\Gamma_C)} < \infty$. \label{list:G_app_0} \end{enumerate} \noindent $\underline{\H{\mathcal{C}}}$: \label{assumptionC} $\mathcal{C} : \Omega \times (0,T) \times \mathbb{S}^d \rightarrow \mathbb{S}^d \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $\mathcal{C}(x,t,\varepsilon) = (c(x,t) \varepsilon)$ for all $\varepsilon \in \mathbb{S}^d$, a.e. $(x,t)\in \Omega \times (0,T)$. \item $c(x,t) = (c_{ijkl}(x,t))$ with $c_{ijkl} = c_{jikl} = c_{lkij} \in L^\infty_TL^\infty(\Omega)$. \end{enumerate} \begin{equation}\label{eq:denisty} \text{The mass density is assumed to be a positive constant } \rho>0. \end{equation} \begin{equation}\label{eq:assumptiononsourceterms} f_0 \in L^2_TH, \ \ f_N \in L^2_TL^2(\Gamma_N;\mathbb{R}^d). 
\end{equation} with the initial data satisfying \begin{subequations}\label{eq:assumptionondata} \begin{align}\label{eq:assumptionondata1} u_0,\ &w_0 \in V, \\ \alpha_0 &\in L^2(\Gamma_C). \label{eq:assumptionondata2} \end{align} \end{subequations} \begin{remark} Similar assumptions on the operators and data are found in, e.g., \cite{Migorski2022,Patrulescu2017,sofonea2017}. In comparison, the assumptions \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz}-\ref{list:mu1_est} are generalized. \end{remark} We refer the reader to Appendix \ref{appendix:comments_app} for a discussion on applications under these assumptions. \subsubsection{Variational formulation} \label{sec:variationalformulation_1} We give a formal derivation of the variational formulation of Problem \ref{prob:application}, i.e., we assume sufficiently regular functions, since we are only interested in a mild solution (see Definition \ref{def:sols}). We refer to, e.g., \cite[Section 5.2]{shillor2004} for a more detailed derivation, in particular how to deal with the contact conditions. Inserting \eqref{eq:momentumeq} into Green's formula \eqref{eq:greensformulasigma} yields \begin{align*} \int_{\Omega} \rho \Ddot{u}(t) \cdot \big[v-\Dot{u}(t)\big] dx + \int_{\Omega} \sigma(t) : \big[\varepsilon (v)-\varepsilon (\Dot{u}(t))\big] dx &= \int_{\Omega} f_0(t) \cdot \big[v-\Dot{u}(t)\big] dx \\ &+ \int_{\Gamma} \sigma(t)\nu \cdot \big[v-\Dot{u}(t)\big] da \end{align*} for all $v\in V$, a.e. $t\in (0,T)$. From the terms on $\Gamma_C$, we deduce \begin{align*} &\int_{\Gamma_C} \sigma_\tau(t) \cdot \big[v_\tau - \Dot{u}_\tau(t)\big] da + \int_{\Gamma_C} \sigma_\nu(t) \big[v_\nu - \Dot{u}_\nu(t) \big] da \\ &\geq \int_{\Gamma_C} \mu (|\Dot{u}_\tau(t)|, \alpha(t)) p(u_\nu(t)) \big[ |\Dot{u}_\tau(t)| - |v_\tau| \big] da + \int_{\Gamma_C} p(u_\nu(t)) \big[\Dot{u}_\nu(t) - v_\nu \big] da \end{align*} for all $v\in V$, a.e. $t \in (0,T)$.
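The tangential part of this boundary inequality follows pointwise from the friction law; we sketch the argument under the sign convention $p \geq 0$, implicit in the normal compliance model, so that $|\sigma_\nu(t)| = p(u_\nu(t))$ by \eqref{eq:sigma_nu}. If $\Dot{u}_\tau(t) \neq 0$, then \eqref{eq:prob1_friction_eq} gives
\begin{align*}
\sigma_\tau(t) \cdot \Dot{u}_\tau(t) = - \mu(|\Dot{u}_\tau(t)|,\alpha(t))\, |\sigma_\nu(t)|\, |\Dot{u}_\tau(t)|,
\end{align*}
while \eqref{eq:prob1_friction_eq2} and the Cauchy-Schwarz inequality give $\sigma_\tau(t) \cdot v_\tau \geq -\mu(|\Dot{u}_\tau(t)|,\alpha(t))\, |\sigma_\nu(t)|\, |v_\tau|$. Hence
\begin{align*}
\sigma_\tau(t) \cdot \big[ v_\tau - \Dot{u}_\tau(t) \big] \geq \mu(|\Dot{u}_\tau(t)|,\alpha(t))\, |\sigma_\nu(t)| \big( |\Dot{u}_\tau(t)| - |v_\tau| \big),
\end{align*}
and the same inequality holds when $\Dot{u}_\tau(t) = 0$, directly from \eqref{eq:prob1_friction_eq2}.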
Combining the above yields \begin{align*} &\int_{\Omega} \rho \Ddot{u}(t) \cdot \big[v-\Dot{u}(t)\big] dx + \int_{\Omega} \sigma (t) : \big[\varepsilon (v)- \varepsilon (\Dot{u}(t))\big] dx \\ \notag &+ \int_{\Gamma_C} \mu (|\Dot{u}_\tau(t)|, \alpha(t)) p(u_\nu(t)) \big[|v_\tau |-|\Dot{u}_\tau(t) |\big] da + \int_{\Gamma_C} p(u_\nu(t)) \big[v_\nu - \Dot{u}_\nu(t) \big] da \\ & \geq \int_{\Gamma_N} f_N(t) \cdot \big[v-\Dot{u}(t)\big] da + \int_{\Omega} f_0(t) \cdot \big[v-\Dot{u}(t)\big] dx \notag \end{align*} for all $v\in V$, a.e. $t\in (0,T)$. We write the above inequality slightly more compactly. We observe that the map $t \mapsto \int_{\Gamma_N} f_N(t) \cdot v da + \int_{\Omega} f_0(t) \cdot v dx$ is linear and bounded in $V$. Consequently, the Riesz representation theorem implies the existence of $f(t) \in V^\ast$ such that \begin{align}\label{eq:f_inner} \inner{f(t)}{v} = \scalarprod{f_N(t)}{v}_{L^2(\Gamma_N;\mathbb{R}^d)} + \scalarprod{ f_0(t)}{v}_{H} \ \ \text{ for all } v\in V, \ \ \text{ a.e. } t\in (0,T). \end{align} As mentioned, we are interested in a mild solution of \eqref{eq:alphaeq} (see Definition \ref{def:sols}), so we integrate \eqref{eq:alphaeq} over the time interval $(0,t)$ and use the initial condition \eqref{eq:bbc} to obtain this equation in the desired form. We may now formulate a variational inequality of Problem \ref{prob:application}.
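The boundedness of this linear functional follows from the Cauchy-Schwarz inequality, the trace theorem, and the continuous embedding $V \hookrightarrow H$ (a sketch; $c$ denotes a generic constant):
\begin{align*}
\Big| \scalarprod{f_N(t)}{v}_{L^2(\Gamma_N;\mathbb{R}^d)} + \scalarprod{f_0(t)}{v}_{H} \Big|
&\leq \norm{f_N(t)}_{L^2(\Gamma_N;\mathbb{R}^d)} \norm{v}_{L^2(\Gamma_N;\mathbb{R}^d)} + \norm{f_0(t)}_{H} \norm{v}_{H} \\
&\leq c \big( \norm{f_N(t)}_{L^2(\Gamma_N;\mathbb{R}^d)} + \norm{f_0(t)}_{H} \big) \norm{v}_V
\end{align*}
for all $v \in V$, a.e. $t \in (0,T)$, which is finite for a.e. $t$ by \eqref{eq:assumptiononsourceterms}.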
\begin{prob}\label{prob:weaksol} Find $u : \Omega \times [0,T] \rightarrow \mathbb{R}^d$ and $\alpha: \Gamma_C \times [0,T]\rightarrow \mathbb{R}$ such that \begin{subequations} \begin{align}\label{eq:alphayo} &\alpha(t) = \alpha_0 + \int_0^t G (\alpha(s), |\Dot{u}_\tau(s)|) ds,\\ &\int_{\Gamma_C} \mu (|\Dot{u}_\tau(t)|, \alpha(t)) p(u_\nu(t)) \big[|v_\tau |-|\Dot{u}_\tau(t) |\big] da + \int_{\Gamma_C} p(u_\nu(t)) \big[v_\nu - \Dot{u}_\nu(t) \big] da \label{eq:restofeq} \\ &+\int_{\Omega} \rho \Ddot{u}(t) \cdot \big[v-\Dot{u}(t)\big] dx + \scalarprod{\mathcal{A} \varepsilon (\Dot{u}(t))}{ \varepsilon (v)- \varepsilon (\Dot{u}(t)) }_Q \notag\\ &+ \scalarprod{\mathcal{B} \varepsilon (u(t)) + \int_0^t \mathcal{C}(t-s, \varepsilon(\Dot{u}(s))) ds }{ \varepsilon (v)- \varepsilon (\Dot{u}(t))}_Q \geq \inner{f(t)}{v-\Dot{u}(t)} \notag \end{align} \end{subequations} for all $v\in V$, a.e. $t\in (0,T)$ with \begin{equation*} u(0) = u_0, \ \ \Dot{u}(0) = w_0. \end{equation*} \end{prob}\noindent \begin{remark} Conversely, under the assumption of sufficient regularity, showing that \eqref{eq:alphayo} is equivalent to \eqref{eq:alphaeq} together with \eqref{eq:bbc} is an application of the fundamental theorem of calculus. Moreover, choosing the test functions $v=\Dot{u} \pm \Tilde{v}$ with $\Tilde{v} \in C^\infty_c(\Omega)$ in \eqref{eq:restofeq} implies that Problem \ref{prob:weaksol} is indeed equivalent to Problem \ref{prob:application}, see, e.g., \cite[Section 2.6]{Pipping2015_phd} or \cite[Section 5.2]{shillor2004}. \end{remark} The well-posedness result for Problem \ref{prob:application} is summarized below. \begin{thm}\label{thm:finalthm} Assume that \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}, \hyperref[assumptionB]{$\H{\mathcal{B}}$}, \hyperref[assumptionC]{$\H{\mathcal{C}}$}, \hyperref[assumptionmu]{$\H{\mu}$}, \hyperref[assumptionp]{$\H{p}$}, \hyperref[assumptionGapp]{$\H{G}$}, and \eqref{eq:denisty}-\eqref{eq:assumptionondata} holds. 
Then, there exists a $T>0$ satisfying \eqref{eq:times_T} such that Problem \ref{prob:weaksol} has a unique solution $(u,\alpha)$ under the smallness-assumption \begin{align}\label{eq:smallness1} m_\mathcal{A} > \sqrt{2} \: p^\ast &(L_{1\mu}\sqrt{\mathrm{meas}(\Gamma_C)} + L_{2\mu}\norm{\alpha_0}_Y)\\ &\times \norm{(\gamma_\tau,\gamma_\nu)}_{\mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d) \times L^4(\Gamma_C))}\norm{\gamma_\tau}_{\mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d))}. \notag \end{align} In addition, $(u,\alpha)$ has the following regularity: \begin{equation*} u\in W^{1,2}(0,T;V),\ \ \Dot{u} \in \mathcal{W}^{1,2}_T \subset C([0,T];H), \ \ \alpha \in C([0,T];L^2(\Gamma_C)). \end{equation*} Moreover, there exists a neighborhood around $(u_0,w_0,\alpha_0)$ so that the flow map $\Tilde{F} : V \times V \times L^2(\Gamma_C) \rightarrow W^{1,2}(0,T/2;V) \times C([0,T/2]; L^2(\Gamma_C))$ defined by $(u_0,w_0,\alpha_0) \mapsto (u,\alpha)$ is continuous. \end{thm} \begin{remark} A smallness-assumption similar to \eqref{eq:smallness1} is used in, e.g., \cite{Patrulescu2017}. \end{remark} \begin{remark}\label{remark:rho} We will show that there exists a solution for $\rho \equiv 1$. For $\rho>0$, we may take $w(t) = w(\rho t)$. \end{remark} \subsubsection{Proof of Theorem \ref{thm:finalthm}} Our aim is to use Theorem \ref{thm:mainresult} to prove Theorem \ref{thm:finalthm}. Our first task is to rewrite Problem \ref{prob:weaksol} in the same form as Problem \ref{prob:fullproblem}. Then, we will verify the hypothesis of Theorem \ref{thm:mainresult}. \\ \indent Let $Y=L^2(\Gamma_C)$ and $Z=U\times X$.
We then define the operators $A : (0, T) \times V \rightarrow V^\ast $, $\mathcal{R} : L^2_TV \rightarrow L^2_TV^\ast$, $\mathcal{S}_\varphi : L^2_TV \rightarrow L^2_T X$, $\mathcal{G} : (0,T) \times Y \times U \rightarrow Y$, $M : V \rightarrow U$, and $N : V \rightarrow X$, respectively, by \begin{subequations} \begin{align}\label{eq:defining_ARS} \inner{A(t,w)}{v} &= \int_\Omega \mathcal{A} \varepsilon (w) : \varepsilon(v) dx, \ \text{ for } w,v\in V, \text{ a.e. } t\in (0,T),\\ \inner{\mathcal{R}w(t)}{v} &= \int_\Omega \mathcal{B} ( \varepsilon(\int_0^t w(s) ds + u_0)) : \varepsilon(v) dx \label{eq:definition_R}\\ &+ \int_\Omega \int_0^t \mathcal{C}(t-s, \varepsilon(w(s))) ds : \varepsilon(v) dx, \notag \\ & \text{for } w\in L^2_TV, \ v\in V, \text{ a.e. } t\in (0,T), \notag\\ \mathcal{S}_\varphi w(t) &= \gamma_\nu \Big( \int_0^t w(s) ds + u_{0} \Big) , \ \text{ for } w\in L^2_TV, \text{ a.e. } t\in (0,T), \label{eq:definition_S}\\ \mathcal{G}(t,\alpha,Mw) &= G(\alpha,|w_{\tau}|), \text{ for } \alpha \in Y, \ w \in V, \text{ a.e. } t\in (0,T), \label{eq:and_G}\\ Mv = &v_\tau, \ \ Nv = v_\nu, \ \ \text{ for } v\in V. \label{eq:def_MN} \end{align} We define $K : V \rightarrow Z$ by \begin{equation}\label{eq:def_K} Kv = (Mv,Nv), \ \ \text{ for } v\in V, \end{equation} which is linear and bounded, i.e., $K \in \mathcal{L}(V,Z)$. We define the functional $\varphi : (0, T) \times Y \times X \times U \times Z \rightarrow \mathbb{R}$ by \begin{equation}\label{eq:defintion_phi} \varphi(t,y,z,\Tilde{w},\Tilde{v}) = \int_{\Gamma_C} p(z) v^{(2)} da + \int_{\Gamma_C} \mu (|\Tilde{w}| , y) p(z) |v^{(1)}| da \end{equation} for $y \in Y$, $z\in X$, $\Tilde{w} \in U$, $\Tilde{v}=(v^{(1)},v^{(2)}) \in Z$, a.e. $t\in (0,T)$. We let $f:(0,T) \rightarrow V^\ast$ be defined as in \eqref{eq:f_inner}.
\end{subequations} This yields the following generalization of Problem \ref{prob:weaksol}: \begin{prob}\label{prob:soondone} Find $w\in \mathcal{W}^{1,2}_T$ and $\alpha \in C([0,T];Y)$ such that \begin{align*} &\alpha(t) = \alpha_0 + \int_0^t \mathcal{G}(s,\alpha(s),Mw(s))ds,\\ &\inner{\rho\Dot{w}(t)}{v-w(t)}+ \inner{A(t,w(t))}{v-w(t)} + \inner{\mathcal{R}w(t)}{v-w(t)} \\ &+ \varphi(t, \alpha(t),\mathcal{S}_\varphi w(t),Mw(t),Kv) - \varphi(t, \alpha(t),\mathcal{S}_\varphi w(t),Mw(t),Kw(t)) \geq \inner{f(t)}{v-w(t)} \end{align*} for all $v\in V$, a.e. $t\in (0,T)$, with $w(0) = w_0$. \end{prob} \begin{remark} Since Problem \ref{prob:weaksol} is contained in Problem \ref{prob:soondone}, it suffices to show well-posedness of Problem \ref{prob:soondone}. \end{remark} \begin{lem}\label{lemma:assumptionphi2} Under the assumptions of Theorem \ref{thm:finalthm}, the hypothesis of Theorem \ref{thm:mainresult} holds for \eqref{eq:defining_ARS}-\eqref{eq:defintion_phi}. Here, $c_{0\varphi}(t) = \kappa_1(\mathrm{meas}(\Gamma_C))^{3/2}$, $c_{1\varphi}= \kappa_2\mathrm{meas}(\Gamma_C)$, $c_{2\varphi} = L_p(\mathrm{meas}(\Gamma_C))^{5/4} $, $c_{3\varphi} = \kappa_3(\mathrm{meas}(\Gamma_C))^{5/4}$, $\beta_{1\varphi}= p^\ast L_{3\mu}$, $\beta_{3\varphi} = L_p(1+ \kappa_1) \sqrt{\mathrm{meas}(\Gamma_C)} $, $\beta_{4\varphi} = p^\ast L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)}$, $\beta_{5\varphi} = p^\ast L_{2\mu}$, $\beta_{6\varphi} = \kappa_3 L_p(\mathrm{meas}(\Gamma_C))^{1/4}$, $\beta_{7\varphi} = \kappa_2 L_p$, and $c_{4\varphi}=\beta_{2\varphi} = 0$. \end{lem} To maintain the flow of the article, the proof of Lemma \ref{lemma:assumptionphi2} is placed in Appendix \ref{appendix:proofs_app1}. We are now ready to prove Theorem \ref{thm:finalthm}. \begin{proof}[Proof of Theorem \ref{thm:finalthm}] The proof relies on Theorem \ref{thm:mainresult} with $j^\circ \equiv 0$. In light of Lemma \ref{lemma:assumptionphi2}, the hypotheses of Theorem \ref{thm:mainresult} are fulfilled. 
Consequently, $(w,\alpha)\in \mathcal{W}^{1,2}_T \times C([0,T];Y)$ is a unique solution of Problem \ref{prob:soondone}. Moreover, we define the function $u : [0,T] \rightarrow V$ by \begin{equation}\label{eq:definition_of_u} u(t) = u_0 + \int_0^t w(s) ds \end{equation} for all $t\in[0,T]$. As a consequence of Bochner space theory, the fact that $w \in L^2_TV \subset L^1_T V$ and \eqref{eq:definition_of_u}, we have that $u \in C([0,T];V)$ and $\Dot{u} = w$. Consequently, $\Dot{u} \in \mathcal{W}^{1,2}_T$, which implies $u \in W^{1,2}(0,T;V)$. Furthermore, we define the set \begin{equation*} \mathbb{Y} = V \times V \times Y, \end{equation*} and show that the flow map $\Tilde{F} : \mathbb{Y} \rightarrow W^{1,2}(0,T/2;V) \times C([0,T/2];Y)$ defined by $(u_{0},w_{0},\alpha_{0}) \mapsto(u,\alpha) $ is continuous. That is, we claim that for all $\lambda >0$, there exists a $\delta>0$, chosen later, such that \begin{subequations} \begin{equation}\label{eq:cont_1111} \norm{(u_{01},w_{01},\alpha_{01})- (u_{02},w_{02},\alpha_{02})}_\mathbb{Y} < \delta \end{equation} implies \begin{equation}\label{eq:lastlast123} \norm{(u_1,\alpha_1) - (u_2,\alpha_2)}_{W^{1,2}(0,T/2;V) \times L^\infty_{T/2}Y} < \lambda. \end{equation} \end{subequations} To check this, let us use the continuous dependence result in Theorem \ref{thm:mainresult}. We observe that \begin{align}\label{eq:oneestleft} &\norm{(w_{01},\alpha_{01})- (w_{02},\alpha_{02})}_{ V \times Y}^2 \leq \norm{(u_{01},w_{01},\alpha_{01})- (u_{02},w_{02},\alpha_{02})}_{\mathbb{Y}}^2 . \end{align} From \eqref{eq:cont_1111}, \eqref{eq:oneestleft}, and Theorem \ref{thm:mainresult}\ref{list:continuous_dependence_on_inital_data}, we obtain \begin{equation}\label{eq:estwest} \norm{(w_1,\alpha_1) - (w_2,\alpha_2)}_{L^2_{T/2}V \times L^\infty_{T/2}Y }^2 < \delta. 
\end{equation} Next, by \eqref{eq:definition_of_u}, the triangle inequality, Minkowski's inequality, Young's inequality, the Cauchy-Schwarz inequality, and integrating over the time interval $(0,T/2)$, we have \begin{align}\label{eq:thisisit} \norm{u_1 - u_2}_{L^2_{T/2}V}^2 \leq c(T\norm{u_{01}- u_{02}}_V^2 + T^2\norm{w_1 - w_2}_{L^2_{T/2}V}^2). \end{align} Combining \eqref{eq:estwest}-\eqref{eq:thisisit}, while remembering that $w_i = \Dot{u}_i$ for $i=1,2$, we may choose a $\delta >0$ so that \eqref{eq:lastlast123} is obtained. \end{proof} \subsection{Dynamic frictional contact problem with normal damped response}\label{sec:application2} In our second application, we consider contact with normal damped response, i.e., a wet material or some lubrication between the foundation and the reference configuration of a viscoelastic body (see, e.g., \cite{Sofonea2012}). We present the problem: \begin{prob}\label{prob:application3} Find the displacement $u: \Omega \times [0,T] \rightarrow \mathbb{R}^d$ and the external state variable $\alpha : \Gamma_C \times [0,T] \rightarrow \mathbb{R}$ such that \begin{subequations} \begin{align}\label{eq:constitutive} \sigma(t) &= \mathcal{A} \varepsilon (\Dot{u}(t)) + \mathcal{B} \varepsilon (u(t)) + \int_0^t \mathcal{C}(t-s, \varepsilon(\Dot{u}(s))) ds & \text{ on } \Omega \times (0,T)\\ \label{eq:momentumeq3} \rho \Ddot{u}(t) &= \nabla \cdot \sigma(t) + f_0(t) & \text{ on } \Omega \times (0,T)\\ u(t) &= 0, \ \ \Dot{u}(t) = 0 & \text{ on } \Gamma_D \times (0,T)\\ \sigma(t) {\nu} &= f_N(t) & \text{ on } \Gamma_N \times (0,T) \label{eq:sigmanu2} \\ -\sigma_\nu(t) &\in \partial j_\nu(\Dot{u}_\nu(t)) & \text{ on } \Gamma_C \times (0,T) \label{eq:sigma_nu2} \\ |\sigma_\tau(t)| &\leq \mu (|\Dot{u}_\tau(t)|, \alpha(t)) & \text{ on } \Gamma_C \times (0,T) \label{eq:prob1_friction_eq4} \\ -\sigma_\tau(t) &= \mu (|\Dot{u}_\tau(t)|, \alpha(t)) \frac{\Dot{u}_\tau (t)}{|\Dot{u}_\tau(t)|}, \ \ \text{ if } \Dot{u}_\tau(t) \neq 0 & \text{ on }
\Gamma_C \times (0,T) \label{eq:prob1_friction_eq3}\\ \Dot{\alpha}(t) &= G (\alpha(t), |\Dot{u}_\tau(t)|) & \text{ on } \Gamma_C \times (0,T) \label{eq:alpha2} \end{align} with \begin{align} u(0) &= u_0, \ \ \Dot{u}(0) = w_0 & \text{ on } \Omega\\ \alpha (0) &= \alpha_{0} & \text{ on } \Gamma_C . \label{eq:laaaaaaaaast} \end{align} \end{subequations} \end{prob} The conditions \eqref{eq:constitutive}-\eqref{eq:sigmanu2} and \eqref{eq:alpha2}-\eqref{eq:laaaaaaaaast} were already discussed below Problem \ref{prob:application}. However, \eqref{eq:sigma_nu2} is a general form of the contact condition for normal damped response, describing contact with a lubricated foundation \cite[Section 6.3]{Migorski2012}. The equations \eqref{eq:prob1_friction_eq4}-\eqref{eq:prob1_friction_eq3} are a version of Coulomb's law of dry friction, generalizing \cite[Problem 68, p.268]{sofonea2017}. We investigate Problem \ref{prob:application3} under the hypotheses \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}, \hyperref[assumptionB]{$\H{\mathcal{B}}$}, \hyperref[assumptionmu]{$\H{\mu}$}, \hyperref[assumptionGapp]{$\H{G}$}, \hyperref[assumptionC]{$\H{\mathcal{C}}$}, and \eqref{eq:denisty}-\eqref{eq:assumptionondata}. In addition, we require an assumption on $j_\nu$ in \eqref{eq:sigma_nu2}:\\ \\ $\underline{\H{j_\nu}}$: \label{assumptionjnu} $ j_\nu : \Gamma_C \times \mathbb{R} \rightarrow \mathbb{R} \text{ is such that }$ \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $j_\nu(\cdot,r)$ is measurable on $\Gamma_C$ for all $r\in \mathbb{R}$, and there exists $\Bar{e} \in L^4(\Gamma_C)$ such that $j_\nu(\cdot,\Bar{e}(\cdot)) \in L^1(\Gamma_C )$. \item $j_\nu(x,\cdot)$ is locally Lipschitz on $\mathbb{R}$ for a.e. $ x\in \Gamma_C $. \item $|\partial j_\nu (x,r)| \leq \Bar{c}_0 + \Bar{c}_1 |r| $ for all $r \in \mathbb{R}$, a.e.
$x \in \Gamma_C$ with $\Bar{c}_0, \Bar{c}_1 \geq 0$. \label{list:j_nu_bounded} \item $j_\nu^\circ (x, r_1; r_2 - r_1) + j_\nu^\circ (x, r_2; r_1 - r_2) \leq m_{j_\nu} |r_1 - r_2|^2$ for all $r_1, r_2 \in \mathbb{R}$, a.e. $ x\in \Gamma_C $ with $m_{j_\nu} \geq 0$. \label{list:j_nu_est} \end{enumerate} We refer the reader to Appendix \ref{appendix:comments_app} for a discussion of the assumptions on the operators and functions in Problem \ref{prob:application3}. \subsubsection{Variational formulation} We make use of the derivations in Section \ref{sec:application1}, but include the new term for the normal stress. By definition of the Clarke subgradient (see Definition \ref{def:subdiffernetial}) and \eqref{eq:sigma_nu2}, we have \begin{align*} -\sigma_\nu(t) \Tilde{v}_\nu \leq j_\nu^\circ (\Dot{u}_\nu(t);\Tilde{v}_\nu) \ \ \text{ for all } \Tilde{v}\in V, \ \text{a.e. } t\in (0,T). \end{align*} Integrating over $\Gamma_C$ and choosing $\Tilde{v}_\nu = v_\nu-\Dot{u}_\nu(t)$ gives \begin{align}\label{eq:ineq_jnu0} \int_{\Gamma_C}\sigma_\nu(t) [v_\nu - \Dot{u}_\nu(t)] da \leq \int_{\Gamma_C} j_\nu^\circ (\Dot{u}_\nu(t);v_\nu-\Dot{u}_\nu(t))da \end{align} for all $v\in V$, a.e. $t\in(0,T)$.
Combining \eqref{eq:ineq_jnu0} with the calculations from Section \ref{sec:variationalformulation_1}, we have the following problem: \begin{prob}\label{prob:weaksol2} Find $u : \Omega \times [0,T] \rightarrow \mathbb{R}^d$ and $\alpha: \Gamma_C \times [0,T]\rightarrow \mathbb{R}$ such that \begin{subequations} \begin{align}\label{eq:alpha_as_an_eq} &\alpha(t) = \alpha_0 + \int_0^t G (\alpha(s), |\Dot{u}_\tau(s)|) ds,\\ \label{eq:as_an_ineq} &\int_{\Gamma_C} \mu (|\Dot{u}_\tau(t)| , \alpha(t))[|v_\tau |-|\Dot{u}_\tau(t)|] da + \int_{\Gamma_C} j^\circ_\nu (\Dot{u}_\nu(t); v_\nu - \Dot{u}_\nu(t)) da \\ \notag &+\int_{\Omega} \rho \Ddot{u}(t) \cdot [v-\Dot{u}(t)] dx + \scalarprod{\mathcal{A} \varepsilon (\Dot{u}(t)) }{\varepsilon (v)- \varepsilon (\Dot{u}(t))}_Q\\ & + \scalarprod{\mathcal{B} \varepsilon (u(t)) + \int_0^t \mathcal{C}(t-s, \varepsilon(\Dot{u}(s))) ds }{ \varepsilon (v)- \varepsilon (\Dot{u}(t)) }_Q \geq \inner{f(t)}{v-\Dot{u}(t)} \notag \end{align} for all $v\in V$, a.e. $t\in (0,T)$ with \begin{equation*} u(0) = u_0, \ \ \Dot{u}(0) = w_0. \end{equation*} \end{subequations} \end{prob}\noindent The well-posedness result for Problem \ref{prob:weaksol2} is stated in the following theorem. \begin{thm}\label{thm:wellposed_app2} Assume that \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}, \hyperref[assumptionB]{$\H{\mathcal{B}}$}, \hyperref[assumptionmu]{$\H{\mu}$}, \hyperref[assumptionGapp]{$\H{G}$}, \hyperref[assumptionC]{$\H{\mathcal{C}}$}, \eqref{eq:denisty}-\eqref{eq:assumptionondata}, and \hyperref[assumptionjnu]{$\H{j_\nu}$} holds. 
Then, there exists a $T>0$ satisfying \eqref{eq:times_T} so that Problem \ref{prob:weaksol2} has a unique solution $(u,\alpha)$ under the smallness-condition \begin{align}\label{eq:smallness_app2} m_\mathcal{A} &> m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,L^4(\Gamma_C))}^2\\ &+ \sqrt{2} \:( L_{1\mu}\sqrt{\mathrm{meas}(\Gamma_C)}+ L_{2\mu}\norm{\alpha_0}_Y) \norm{\gamma_\tau}^2_{\mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d))}. \notag \end{align} In addition, we have the regularity: \begin{equation}\label{eq:regularity_second_application} u\in W^{1,2}(0,T;V),\ \ \ \Dot{u} \in \mathcal{W}^{1,2}_T \subset C([0,T];H), \ \ \ \alpha \in C([0,T];L^2(\Gamma_C)). \end{equation} Moreover, the flow map depends continuously on the initial data. \end{thm} \begin{remark} The constraint \eqref{eq:smallness_app2} can also be found in, e.g., \cite[Theorem 4.4]{han} for $\alpha_\varphi c_\alpha^2 = \sqrt{2} \:( L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)}+ L_{2\mu}\norm{\alpha_0}_Y) \norm{\gamma_\tau}^2_{\mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d))}$ and $\alpha_j c_j^2 = m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,L^4(\Gamma_C))}^2$. \end{remark} \begin{remark} We show that there exists a solution for $\rho \equiv 1$, and then take $w(t) = w(\rho t)$. \end{remark} \subsubsection{Proof of Theorem \ref{thm:wellposed_app2}} We follow the same approach as in Section \ref{sec:application1}: we will use Theorem \ref{thm:mainresult} to prove Theorem \ref{thm:wellposed_app2}. To use this theorem, we first need to rewrite Problem \ref{prob:weaksol2} into the same form as Problem \ref{prob:fullproblem}. Then we will verify the hypothesis of Theorem \ref{thm:mainresult}. \\ \indent Take $Y=L^2(\Gamma_C)$, $X=L^4(\Gamma_C)$, and $U=Z=L^4(\Gamma_C; \mathbb{R}^d)$.
We define $A : (0, T) \times V \rightarrow V^\ast $, $\mathcal{R} : L^2_TV \rightarrow L^2_TV^\ast$, $\mathcal{G} : (0,T) \times Y \times U \rightarrow Y$, and $f : (0,T) \rightarrow V^\ast$ as in \eqref{eq:defining_ARS}-\eqref{eq:and_G} and \eqref{eq:f_inner}, respectively. We choose $M : V \rightarrow U$ and $N : V \rightarrow X$ to be as in \eqref{eq:def_MN} and $K\equiv M$. Moreover, we define the functional $\varphi : (0, T) \times Y \times U \times Z \rightarrow \mathbb{R}$ by \begin{subequations} \begin{equation}\label{eq:defintion_phi_2} \varphi(t,y,\Tilde{w},\Tilde{v}) = \int_{\Gamma_C} \mu (|\Tilde{w}|,y) |\Tilde{v}| da \ \ \text{ for } y\in Y, \ \Tilde{w} \in U,\ \Tilde{v} \in Z, \ \text{ a.e. } t\in (0,T) \end{equation} and the functional $j : (0,T) \times X \rightarrow \mathbb{R}$ by \begin{align}\label{eq:j_nu} j(t,\Tilde{w}) = \int_{\Gamma_C} j_\nu (\Tilde{w}) da \ \ \text{ for } \Tilde{w} \in X, \ \text{ a.e. } t\in (0,T). \end{align} \end{subequations} The problem then takes the following form. \begin{prob}\label{prob:done_afterthis} Find $(w,\alpha)\in \mathcal{W}^{1,2}_T \times C([0,T];Y)$ such that \begin{align*} &\alpha(t) = \alpha_0 + \int_0^t \mathcal{G}(s,\alpha(s),Mw(s))ds,\\ &\inner{\rho\Dot{w}(t)}{v-w(t)}+ \inner{A(t,w(t))}{v-w(t)} + \inner{\mathcal{R}w(t)}{v-w(t)} + j^\circ(t,Nw(t);Nv-Nw(t))\\ & + \varphi(t, \alpha(t),Mw(t),Kv) - \varphi(t, \alpha(t),Mw(t),Kw(t)) \geq \inner{f(t)}{v-w(t)} \end{align*} for all $v\in V$, a.e. $t\in (0,T)$ with $w(0) = w_0$. \end{prob} To see that it suffices to prove existence of a solution to Problem \ref{prob:done_afterthis} in order for Problem \ref{prob:weaksol2} to have a solution, we introduce the following result, which is similar to a result found in \cite[Lemma 8, p.126]{sofonea2017} (see also \cite[Theorem 3.47]{Migorski2012}). The result will also be useful to prove uniqueness. \begin{cor}\label{cor:j} Assume that \hyperref[assumptionjnu]{$\H{j_\nu}$} holds.
Then, the functional $j$ defined by \eqref{eq:j_nu} has the following properties: \begin{enumerate}[labelindent=0pt,labelwidth=\widthof{\ref{last-item}},label=(\roman*),itemindent=1em,leftmargin=!] \item $j(\cdot,v)$ is measurable on $(0,T)$ for all $v\in X$. \label{list:finite} \item $j(t, \cdot)$ is locally Lipschitz on $X$ for a.e. $t\in (0,T)$. \label{list:locally} \item For all $\Tilde{w},v \in X$, we have $ j^\circ (t,\Tilde{w};v) \leq \int_{\Gamma_C} j_\nu^\circ (\Tilde{w};v) da$. \label{list:equality} \end{enumerate} \end{cor} \begin{lem}\label{lemma:assumptionon_j} Under the assumptions of Theorem \ref{thm:wellposed_app2}, \hyperref[assumptionphi]{$\H{\varphi}$} holds for $\varphi$ defined by \eqref{eq:defintion_phi_2} for $c_{0\varphi}(t) = \kappa_1 (\mathrm{meas}(\Gamma_C))^{3/2}$, $c_{1\varphi}= \kappa_2 \mathrm{meas}(\Gamma_C)$, $ c_{3\varphi} = \kappa_3 (\mathrm{meas}(\Gamma_C))^{5/4}$, $\beta_{1\varphi} =L_{3\mu}$,\\ $\beta_{4\varphi} =L_{1\mu}\sqrt{\mathrm{meas}(\Gamma_C)}$, $\beta_{5\varphi} =L_{2\mu}$, and $c_{4\varphi}=\beta_{2\varphi} = \beta_{3\varphi}= \beta_{6\varphi} = \beta_{7\varphi}=0$. \\ \indent Moreover, $j$ defined by \eqref{eq:j_nu} satisfies \hyperref[assumptionj]{$\H{j}$} for $c_{0j}(t) = 2^{3/4} (\mathrm{meas}(\Gamma_C))^{3/4} \Bar{c}_0$, \\ $c_{3j} = 2^{3/4} \sqrt{\mathrm{meas}(\Gamma_C)}\Bar{c}_1$, $m_j = m_{j_\nu}\sqrt{\mathrm{meas}(\Gamma_C)}$, and $c_{1j} = c_{2j} = 0$. \end{lem} The proof of Lemma \ref{lemma:assumptionon_j} is placed in Appendix \ref{appendix:assumption_phiandj}. We may now prove Theorem \ref{thm:wellposed_app2}. \begin{proof}[Proof of Theorem \ref{thm:wellposed_app2}] We wish to utilize Theorem \ref{thm:mainresult}. The hypothesis of Theorem \ref{thm:mainresult} holds by \eqref{eq:smallness_app2}, Lemma \ref{lemma:assumptionphi2} and \ref{lemma:assumptionon_j}, and Corollary \ref{cor:j}\ref{list:finite}-\ref{list:locally}. 
With the help of Corollary \ref{cor:j}\ref{list:equality}, we may conclude that there exists a solution to Problem \ref{prob:weaksol2}. Moreover, the fact that the flow map depends continuously on the initial data follows by the same approach as in the proof of Theorem \ref{thm:finalthm}. To obtain uniqueness, we let $(u_1,\alpha_1),(u_2,\alpha_2) \in W^{1,2}(0,T;V) \times C([0,T];Y)$ be two pairs of solutions to Problem \ref{prob:weaksol2}. Choosing the test functions $v=\Dot{u}_2(t)$ and $v=\Dot{u}_1(t)$ for a.e. $t\in (0,T)$, respectively, in \eqref{eq:as_an_ineq} and adding the two resulting inequalities yields \begin{align}\label{eq:inequality} &\scalarprod{ \mathcal{A} \varepsilon (\Dot{u}_1(t)) - \mathcal{A} \varepsilon (\Dot{u}_2(t)) }{ \varepsilon (\Dot{u}_1(t))- \varepsilon (\Dot{u}_2(t)) }_Q \\ \notag &+ \int_{\Omega} \rho \big[\Ddot{u}_1(t)- \Ddot{u}_2(t)\big] \cdot \big[\Dot{u}_1(t)-\Dot{u}_2(t)\big] dx\\ \notag &\leq \int_{\Gamma_C} \big[\mu (|\Dot{u}_{1\tau}(t)| , \alpha_1(t)) - \mu (|\Dot{u}_{2\tau}(t)| , \alpha_2(t))\big]\big[|\Dot{u}_{2\tau}(t) |-|\Dot{u}_{1\tau}(t)|\big] da\\ \notag &+ \int_{\Gamma_C} [j^\circ_\nu (\Dot{u}_{1\nu}(t); \Dot{u}_{2\nu}(t) - \Dot{u}_{1\nu}(t)) + j^\circ_\nu (\Dot{u}_{2\nu}(t); \Dot{u}_{1\nu}(t) - \Dot{u}_{2\nu}(t))] da\\ \notag &+ \scalarprod{ \mathcal{B} \varepsilon (u_1(t))- \mathcal{B} \varepsilon (u_2(t))}{ \varepsilon (\Dot{u}_2(t))- \varepsilon (\Dot{u}_1(t)) }_Q \\ &+ \scalarprod{\int_0^t[ \mathcal{C}(t-s, \varepsilon(\Dot{u}_1(s))) - \mathcal{C}(t-s, \varepsilon(\Dot{u}_2(s)))] ds }{ \varepsilon (\Dot{u}_2(t))- \varepsilon (\Dot{u}_1(t)) }_Q \notag \end{align} for a.e. $t\in (0,T)$. Applying \hyperref[assumptionGapp]{$\H{G}$}\ref{list:G_app_Lipschitz}, a standard Gr\"{o}nwall argument, the Cauchy-Schwarz inequality, and Minkowski's inequality to \eqref{eq:alpha_as_an_eq} yields \begin{align}\label{eq:alphaineq} \int_{\Gamma_C}|\alpha_1(t)-\alpha_2(t)|^2 da &\leq c\int_0^t \int_{\Omega} |\Dot{u}_{1}(s) - \Dot{u}_{2}(s)|^2 dx ds \end{align} for a.e.
$t\in (0,T)$. We next apply \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}\ref{list:Acal_maximalmonotone}, \hyperref[assumptionB]{$\H{\mathcal{B}}$}\ref{list:Bcal_bounded}, \hyperref[assumptionC]{$\H{\mathcal{C}}$}, \hyperref[assumptionjnu]{$\H{j_\nu}$}\ref{list:j_nu_est}, \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz}, \eqref{eq:denisty}, the Cauchy-Schwarz inequality, and Young's inequality to \eqref{eq:inequality}: \begin{align*} & m_\mathcal{A} \norm{\varepsilon (\Dot{u}_1(t)) - \varepsilon (\Dot{u}_2(t))}_Q^2 + \int_{\Omega} \rho \big[\Ddot{u}_1(t)- \Ddot{u}_2(t)\big] \cdot \big[\Dot{u}_1(t)-\Dot{u}_2(t)\big] dx\\ \notag &\leq L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\Dot{u}_{1\tau}(t) - \Dot{u}_{2\tau}(t)}_{L^4(\Gamma_C;\mathbb{R}^d)}^2 \\ &+ L_{2\mu} \norm{\alpha_1(t)}_{L^2(\Gamma_C)} \norm{\Dot{u}_{1\tau}(t) - \Dot{u}_{2\tau}(t)}_{L^4(\Gamma_C;\mathbb{R}^d)}^2\\ &+L_{3\mu} \norm{\Dot{u}_{2\tau}(t)}_{L^4(\Gamma_C)} \norm{\alpha_1(t)-\alpha_2(t)}_{L^2(\Gamma_C)} \norm{\Dot{u}_{1\tau}(t) - \Dot{u}_{2\tau}(t)}_{L^4(\Gamma_C;\mathbb{R}^d)} \\ & + m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)}\norm{\Dot{u}_{1\nu}(t) - \Dot{u}_{2\nu}(t)}_{L^4(\Gamma_C)}^2 \\ &+ L_\mathcal{B} \norm{\varepsilon (u_1(t)) - \varepsilon (u_2(t))}_Q\norm{\varepsilon (\Dot{u}_1(t)) - \varepsilon (\Dot{u}_2(t))}_Q\\ &+ \norm{\mathcal{C}}_{L^\infty_TL^\infty(\Omega;\mathbb{S}^d)} \norm{\int_0^t [ \varepsilon(\Dot{u}_1(s)) - \varepsilon(\Dot{u}_2(s)) ]ds}_Q \norm{ \varepsilon (\Dot{u}_2(t))- \varepsilon (\Dot{u}_1(t)) }_Q \notag \end{align*} for a.e. $t\in(0,T)$. Next, we use the fact that if $\Dot{u}\in \mathcal{W}^{1,2}_T$, then $\frac{d}{dt}\norm{\Dot{u}(t)}_H^2 = 2 \inner{\Ddot{u}(t)}{\Dot{u}(t)}$ \cite[Theorem 3 in Section 5.9.2]{evans}, together with \eqref{eq:definition_of_u}, and integrate over the time interval $(0,t') \subset (0,T)$. We observe by \eqref{eq:smallness_app2} that $m_\mathcal{A} -m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,X)}^2>0$.
In addition, we use \eqref{eq:alphaineq} and then apply H\"{o}lder's inequality and Minkowski's inequality to obtain \begin{align*} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{t'}V}^2 &\leq \frac{(L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)} + L_{2\mu}\norm{\alpha_1}_{L^\infty_TY})\norm{\gamma_\tau}_{\mathcal{L}(V,U)}^2 }{(m_\mathcal{A} -m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,X)}^2 )} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{t'}V}^2 \\ &+c \norm{\Dot{u}_{2}}_{L^2_TV} \norm{\alpha_1-\alpha_2}_{L^\infty_{t'}Y} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{t'}V}\\ &+ c \Big[ \int_0^{t'} \int_0^t \norm{\Dot{u}_1(s) - \Dot{u}_2(s)}_V^2 ds dt \Big]^{1/2}\norm{ \Dot{u}_1- \Dot{u}_2 }_{L^2_{t'} V} \\ &=: I + II + III \end{align*} for a.e. $t'\in (0,T)$. Using Minkowski's inequality, \hyperref[assumptionGapp]{$\H{G}$}\ref{list:G_app_Lipschitz}, and the Cauchy-Schwarz inequality yields \begin{align*} \norm{\alpha_1(t)}_{L^2(\Gamma_C)} &\leq \norm{\alpha_0}_{L^2(\Gamma_C)} + L_G\int_0^t [\norm{\alpha_1(s)}_{L^2(\Gamma_C)} + \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\Dot{u}_{1\tau}(s)}_{L^4(\Gamma_C)}]ds\\ &\leq \norm{\alpha_0}_{L^2(\Gamma_C)} + L_G\int_0^t \norm{\alpha_1(s)}_{L^2(\Gamma_C)}ds + cT^k\int_0^t \norm{\Dot{u}_1(s)}^2_Vds \end{align*} for a.e. $t\in (0,T)$ and $k\geq 1/2$. From Gr\"{o}nwall's inequality, we deduce \begin{align*} \norm{\alpha_1}_{L^\infty_TL^2(\Gamma_C)} &\leq (\norm{\alpha_0}_{L^2(\Gamma_C)} + cT^k\norm{\Dot{u}_1}_{L^2_TV})(1+cT\mathrm{e}^{cT}) \end{align*} for some $k\geq 1/2$.
Applying Young's inequality to $I$ and $II+III$, and then the arithmetic-quadratic mean inequality to the latter term, while keeping in mind \eqref{eq:smallness_app2}-\eqref{eq:regularity_second_application} and \eqref{eq:alphaineq}, we obtain \begin{align*} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{t'}V}^2 &\leq\frac{ 2(L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)} + L_{2\mu}(\norm{\alpha_0}_Y + T^kc) )^2\norm{\gamma_\tau}_{\mathcal{L}(V,U)}^4}{(m_\mathcal{A} -m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,X)}^2 )^2 } \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{t'}V}^2 \\ &+ c \int_0^{t'} \int_0^t \norm{\Dot{u}_1(s) - \Dot{u}_2(s)}_V^2 ds dt \end{align*} for all $t'\in (0,T)$. As a consequence of \eqref{eq:smallness_app2}, choosing $T>0$ such that \begin{align*} T^k \sim \frac{m_\mathcal{A} -m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,X)}^2}{cL_{2\mu}\norm{\gamma_\tau}_{\mathcal{L}(V,U)}^2} \end{align*} is small enough for some $k\geq 1/2$, we have \begin{equation*} \frac{\sqrt{2}(L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)} + L_{2\mu}(\norm{\alpha_0}_Y + T^kc))\norm{\gamma_\tau}_{\mathcal{L}(V,U)}^2 }{(m_\mathcal{A} -m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,X)}^2 )} < 1, \end{equation*} and thus \begin{align*} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{t'}V}^2 \leq c \int_0^{t'} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_tV}^2 dt \end{align*} for all $t'\in (0,T)$. Applying a standard Gr\"{o}nwall argument yields \begin{align}\label{eq:ineq_to_go} \norm{\Dot{u}_1 - \Dot{u}_2}_{L^2_{T}V}^2 \leq 0. \end{align} By the definition of $u$, i.e., \eqref{eq:definition_of_u}, the smallness-condition \eqref{eq:smallness_app2} and \eqref{eq:alphaineq}-\eqref{eq:ineq_to_go}, we conclude that $(u,\alpha)$ is the unique solution to Problem \ref{prob:weaksol2}. \end{proof} \subsection{Application to rate-and-state friction} \label{sec:rateandstate} In this section, we discuss the application to rate-and-state friction.
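For the reader's convenience, we write out the regularized friction coefficient \eqref{eq:regularized} and the aging law \eqref{eq:aginglaw} to which the expansion below is applied (these forms can be read off from the computations in the formal argument that follows):

```latex
\begin{align*}
\mu(|\Dot{u}_\tau(t)|,\alpha(t)) &= a \: \mathrm{arcsinh}
\bigg( \frac{\mathrm{e}^{\frac{\mu_0 + b \alpha(t)}{a}} |\Dot{u}_\tau(t)|}{2 v_0} \bigg),
\qquad
G(\alpha(t),|\Dot{u}_\tau(t)|) = \frac{v_0 \, \mathrm{e}^{-\alpha(t)} - |\Dot{u}_\tau(t)|}{L}.
\end{align*}
```

Here $\alpha$ is the state variable and $a$, $b$, $\mu_0$, $v_0$, and $L$ are the rate-and-state parameters.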
To fit these laws to our framework, we use a first-order Taylor approximation of $\mathrm{e}^{\Hat{c}\alpha(t)}$ around $\alpha_0$, that is, \begin{align}\label{eq:expo} \mathrm{e}^{\Hat{c}\alpha(t)} = \mathrm{e}^{\Hat{c}\alpha_0} (1 + \Hat{c} (\alpha(t)-\alpha_0) )+ \mathcal{O} ((\alpha(t)-\alpha_0)^2) \end{align} with $\Hat{c} \in \{-1, \frac{b}{a}\}$. Using the above approximation in \eqref{eq:regularized} and \eqref{eq:aginglaw} gives us \begin{subequations} \begin{align}\label{eq:alpha_approx} G(\alpha(t),|\Dot{u}_\tau(t)|) &= \frac{v_0 \mathrm{e}^{-\alpha_0}(1- \alpha(t) +\alpha_0 ) - |\Dot{u}_\tau(t)|}{L}, \\ \mu(|\Dot{u}_\tau(t)|,\alpha(t)) &=a\: \mathrm{arcsinh} \bigg(\frac{ \mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}|\Dot{u}_\tau(t)|(1+\frac{b}{a}(\alpha(t)-\alpha_0))}{2v_0} \bigg). \label{eq:mu_approx} \end{align} \end{subequations} We will make a formal argument to justify that these are first-order approximations. \begin{proof}[Formal arguments for \eqref{eq:alpha_approx}-\eqref{eq:mu_approx}] To simplify notation, we let $y= \alpha(t)$ and $r=\Dot{u}_\tau(t)$. We wish to have the same order of error for the approximation of \eqref{eq:regularized} and \eqref{eq:aginglaw} as in \eqref{eq:expo}. For \eqref{eq:alpha_approx}, we are considering $\Hat{c} = -1$. We directly obtain \begin{align*} \frac{v_0}{L}(\mathrm{e}^{-y}-\mathrm{e}^{-\alpha_0}(1-(y-\alpha_0))) = \mathcal{O} ((y-\alpha_0)^2). \end{align*} Next, considering \eqref{eq:mu_approx}, we are interested in the approximation \eqref{eq:expo} for $\Hat{c} = \frac{b}{a}$. We let $g= \frac{1 }{2v_0}\mathrm{e}^{\frac{\mu_0+ by}{a}}$ and $f =\frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b\alpha_0}{a}}(1+ \frac{b}{a}(y-\alpha_0))$. Then, by the mean value theorem \begin{align*} \mathrm{arcsinh} (|r|g) - \mathrm{arcsinh} (|r|f) &= \frac{|r|}{\sqrt{1 + z^2}} (g-f) \end{align*} with $z\in (|r|f,|r|g)$ where $f\leq g$.
So, we have \begin{align*} \mathrm{arcsinh} (|r|g) = \mathrm{arcsinh} (|r|f) + \frac{|r|}{\sqrt{1 + z^2}} \mathcal{O}((y-\alpha_0)^2). \end{align*} We also note that \begin{align*} 0 \leq \frac{|r|}{\sqrt{1 + z^2}} \leq \frac{|r|}{\sqrt{1 + (|r|f)^2}} \leq \frac{1}{|f|} = \frac{1}{|\Tilde{c}||1+\frac{b}{a}(y-\alpha_0)|} , \end{align*} where $\Tilde{c} = \frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b\alpha_0}{a}}$, so that $f \sim \Tilde{c}$ if $y$ is close to $\alpha_0$. Consequently, we have the same order of error as in \eqref{eq:expo} as desired. \end{proof} \begin{remark}\label{remark:ratestate1} Above, we gave formal arguments that our model is a first-order expansion around the initial value $\alpha_0$. We now investigate whether our approximate model has the same qualitative behavior as the original model. Following \cite{rice2001rate}, the key restriction on $G$ is that when the slip rate $|\dot{u}_\tau|$ is constant, the equation $\dot{\alpha} = G(\alpha, |\dot{u}_\tau|)$ has a stable solution that evolves monotonically towards a definite value of $\alpha$, denoted $\alpha^\ast=\alpha^\ast(|\dot{u}_\tau|)$, at which $G(\alpha^\ast,|\dot{u}_\tau|) = 0$. This holds true if $\dfrac{\partial G}{\partial \alpha} < 0$, which is easily verified for the approximation of $G$ given by \eqref{eq:alpha_approx}. \\ \indent For the friction term, again following \cite{rice2001rate}, we seek $\dfrac{\partial \mu}{\partial |\dot{u}_\tau|} > 0$ and $\dfrac{\partial \mu}{\partial \alpha} >0$. The first condition is consistent with the experimental observations and holds if $\alpha$ is close to $\alpha_0$. The latter condition agrees with the established convention for the state variable; larger values mean greater strength. This is also consistent with the usual interpretation of $\alpha$ as a measure of contact maturity and the fact that more mature contact is stronger. Consequently, this shows that qualitatively our approximate model has the same behavior as the original model problem.
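The quadratic error behavior can also be checked numerically. The following minimal Python sketch (our own illustration, not part of the analysis; it uses the parameter values of Table \ref{tab:table1} and treats all quantities as spatially constant) compares the exact laws with the first-order approximations \eqref{eq:alpha_approx}-\eqref{eq:mu_approx}: dividing the perturbation $\delta = \alpha - \alpha_0$ by $10$ should reduce the error by a factor of roughly $100$, consistent with the $\mathcal{O}((\alpha-\alpha_0)^2)$ remainder.

```python
import math

# Rate-and-state parameters, values as in Table 1 (spatially constant here)
a, b = 0.011, 0.014
L, v0, mu0 = 5e-5, 1e-9, 0.7
r = 1e-9                     # slip rate |u_tau'|
alpha0 = math.log(v0 / L)    # initial state

def mu_exact(alpha):
    # regularized friction coefficient
    return a * math.asinh(r / (2 * v0) * math.exp((mu0 + b * alpha) / a))

def mu_approx(alpha):
    # first-order expansion around alpha0
    return a * math.asinh(r / (2 * v0) * math.exp((mu0 + b * alpha0) / a)
                          * (1 + b / a * (alpha - alpha0)))

def G_exact(alpha):
    # aging law
    return (v0 * math.exp(-alpha) - r) / L

def G_approx(alpha):
    # first-order expansion around alpha0
    return (v0 * math.exp(-alpha0) * (1 - alpha + alpha0) - r) / L

for exact, approx in ((mu_exact, mu_approx), (G_exact, G_approx)):
    err = lambda d: abs(exact(alpha0 + d) - approx(alpha0 + d))
    # ratio should be ~100 for an O(delta^2) error
    print(err(1e-2) / err(1e-3))
```

The printed ratios being close to $100$ reflects the quadratic remainder established above.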
One can also see in Figures \ref{fig:sub1}-\ref{fig:sub2} that there is a neighborhood where the first-order approximations \eqref{eq:alpha_approx}-\eqref{eq:mu_approx} are close to the original equations for the values used in Table \ref{tab:table1}. \end{remark} \begin{table}[h] \begin{center} \caption{Parameters used for the experiments are found in \cite{parameters}.} \label{tab:table1} \begin{tabular}{c|c} \textbf{Symbol} & \textbf{Value}\\ \hline $a$ & $0.011$ \\ $b$ & $0.014$ \\ $L$ & $5\cdot 10^{-5}$ m \\ $v_0$ & $10^{-9}$ m/s \\ $\mu_0$ & $0.7$ \\ $|\Dot{u}_\tau(t=0)|$ & $10^{-9}$ m/s\\ $\alpha_0$ & $\ln\big(\frac{v_0}{L}\big)$ \\ \end{tabular} \end{center} \end{table} \begin{figure} \caption{With the parameters in Table \ref{tab:table1}.} \label{fig:sub1} \end{figure} \begin{figure} \caption{The friction coefficient $\mu = \mu(|\Dot{u}_\tau|,\alpha)$.} \label{fig:sub2} \end{figure} We require the following assumption on the coefficients in \eqref{eq:alpha_approx}-\eqref{eq:mu_approx}: \begin{subequations}\label{eq:abLmu0} \begin{align} \alpha_0 &\in L^\infty (\Gamma_C), \\ v_0^{-1}, L^{-1} &\in L^\infty(\Gamma_C),\\ v_0,\mu_0,a,b&\in L^\infty(\Gamma_C), \\ L,\, a,\, v_0 &\: \text{ are nonzero}. \end{align} \end{subequations} Then, we have the following consequences of Theorems \ref{thm:finalthm} and \ref{thm:wellposed_app2}, respectively. \begin{cor}\label{cor:cor} Assume that hypotheses \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}, \hyperref[assumptionB]{$\H{\mathcal{B}}$}, \hyperref[assumptionp]{$\H{p}$}, \hyperref[assumptionC]{$\H{\mathcal{C}}$}, \eqref{eq:denisty}-\eqref{eq:assumptionondata1}, and \eqref{eq:abLmu0} hold.
Then, there exists a $T>0$ satisfying \eqref{eq:times_T} such that Problem \ref{prob:weaksol} with \eqref{eq:alpha_approx}-\eqref{eq:mu_approx} has a unique solution $(u,\alpha)$ under the smallness-condition \begin{align}\label{eq:cor1_smallness_ass1} m_\mathcal{A} &> \sqrt{2} \: p^\ast(\sqrt{\mathrm{meas}(\Gamma_C)} \norm{a-b\alpha_0}_{L^\infty(\Gamma_C)} + \norm{b}_{L^\infty(\Gamma_C)}\norm{\alpha_0}_{L^2(\Gamma_C)} ) \\ &\times \norm{\frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} \norm{\gamma_\tau}_{ \mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d))} \norm{(\gamma_\tau,\gamma_\nu)}_{\mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d)) \times \mathcal{L}(V,L^4(\Gamma_C)) }.\notag \end{align} We obtain the following regularity: \begin{equation*} u\in W^{1,2}(0,T;V),\ \ \Dot{u} \in \mathcal{W}^{1,2}_T \subset C([0,T];H), \ \ \alpha \in C([0,T];L^2(\Gamma_C)). \end{equation*} Moreover, there exists a neighborhood around $(u_0,w_0,\alpha_0)$ so that the flow map $\Tilde{F} : V \times V \times L^\infty(\Gamma_C) \rightarrow W^{1,2}(0,T/2;V) \times C([0,T/2]; L^2(\Gamma_C))$, $(u_0,w_0,\alpha_0) \mapsto (u,\alpha)$ is continuous. \end{cor} \begin{cor}\label{cor:cor2} Assume that \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}, \hyperref[assumptionB]{$\H{\mathcal{B}}$}, \hyperref[assumptionmu]{$\H{\mu}$}, \hyperref[assumptionGapp]{$\H{G}$}, \hyperref[assumptionC]{$\H{\mathcal{C}}$}, \eqref{eq:denisty}-\eqref{eq:assumptionondata1}, \hyperref[assumptionjnu]{$\H{j_\nu}$}, and \eqref{eq:abLmu0} hold.
Then, there exists a $T>0$ satisfying \eqref{eq:times_T} so that Problem \ref{prob:weaksol2} with \eqref{eq:alpha_approx}-\eqref{eq:mu_approx} has a unique solution $(u,\alpha)$ under the smallness-condition \begin{align}\label{eq:cor2_smallness_ass1} m_\mathcal{A} &>m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\gamma_\nu}_{\mathcal{L}(V,L^4(\Gamma_C))}^2+\sqrt{2} \norm{\frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} \\ &\times (\sqrt{\mathrm{meas}(\Gamma_C)} \norm{a-b\alpha_0}_{L^\infty(\Gamma_C)} + \norm{b}_{L^\infty(\Gamma_C)}\norm{\alpha_0}_{L^2(\Gamma_C)} ) \norm{\gamma_\tau}_{ \mathcal{L}(V,L^4(\Gamma_C;\mathbb{R}^d))}^2. \notag \end{align} In addition, we have the regularity: \begin{equation*} u\in W^{1,2}(0,T;V),\ \ \ \Dot{u} \in \mathcal{W}^{1,2}_T \subset C([0,T];H), \ \ \ \alpha \in C([0,T];L^2(\Gamma_C)). \end{equation*} Moreover, the flow map depends continuously on the initial data. \end{cor} \begin{remark} We observe from \eqref{eq:cor1_smallness_ass1} and \eqref{eq:cor2_smallness_ass1} that either \begin{align*} \norm{\frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} (\sqrt{\mathrm{meas}(\Gamma_C)} \norm{a-b\alpha_0}_{L^\infty(\Gamma_C)} + \norm{b}_{L^\infty(\Gamma_C)}\norm{\alpha_0}_{L^2(\Gamma_C)} ) \end{align*} is small enough, or we must compensate by adding more viscosity. \end{remark} \begin{proof}[Proof of Corollary \ref{cor:cor}] We only need to verify \hyperref[assumptionGapp]{$\H{G}$} and \hyperref[assumptionmu]{$\H{\mu}$} for \eqref{eq:alpha_approx} and \eqref{eq:mu_approx}, respectively. The rest follows by Theorem \ref{thm:finalthm} since $L^\infty(\Gamma_C) \subset L^2(\Gamma_C)$. We start by verifying \hyperref[assumptionGapp]{$\H{G}$} for \eqref{eq:alpha_approx}.
From \eqref{eq:abLmu0}, we directly have that \hyperref[assumptionGapp]{$\H{G}$}\ref{list:G_app_Lipschitz}-\ref{list:G_app_0} holds with $L_G = c(\norm{v_0}_{L^\infty(\Gamma_C)},\norm{L^{-1}}_{L^\infty(\Gamma_C)},\norm{\alpha_0}_{L^\infty(\Gamma_C)})$. Next, we verify \hyperref[assumptionmu]{$\H{\mu}$} for \eqref{eq:mu_approx}. \begin{claim}\label{claim:2} We have the following inequalities: \begin{subequations} \begin{align} \label{eq:arc2} |\mathrm{arcsinh}(\beta \xi)| &\leq |\beta|+|\xi|,\\ \label{eq:arc_lip} |\mathrm{arcsinh}(\beta_1\xi_1)-\mathrm{arcsinh}(\beta_2\xi_2)| &\leq |\beta_1| |\xi_1-\xi_2| + |\xi_2| |\beta_1-\beta_2| \end{align} \end{subequations} for all $\beta, \ \xi\in \mathbb{R}$ and $\beta_i , \ \xi_i \in \mathbb{R}$ for $i=1,2$. \end{claim} \indent By \eqref{eq:arc2}, we have \begin{align*} &|\mu(x,|r|,y)| \\ &\leq \norm{\frac{1}{\sqrt{2v_0}}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{2a}}}_{L^\infty(\Gamma_C)}\Big[ (\norm{a}_{L^\infty(\Gamma_C)} + \norm{b\alpha_0}_{L^\infty(\Gamma_C)} ) + \norm{b}_{L^\infty(\Gamma_C)} |y| + \norm{a}_{L^\infty(\Gamma_C)} |r|\Big] \end{align*} which is \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_est} for $\kappa_1 = \norm{\frac{1}{\sqrt{2v_0}}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{2a}}}_{L^\infty(\Gamma_C)} (\norm{a}_{L^\infty(\Gamma_C)} + \norm{b\alpha_0}_{L^\infty(\Gamma_C)})$, \\$\kappa_2 = \norm{\frac{1}{\sqrt{2v_0}}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{2a}}}_{L^\infty(\Gamma_C)}\norm{b}_{L^\infty(\Gamma_C)}$, and $\kappa_3 = \norm{\frac{1}{\sqrt{2v_0}}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{2a}}}_{L^\infty(\Gamma_C)}\norm{a}_{L^\infty(\Gamma_C)}$.
Lastly, to verify \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz}, we use \eqref{eq:arc_lip} and the triangle inequality \begin{align*} &|\mu(x,|r_1|,y_1) - \mu(x,|r_2|,y_2)| \\ &= |a| \: | \mathrm{arcsinh} \bigg(\frac{ \mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}|r_1|(1+\frac{b}{a}(y_1-\alpha_0))}{2v_0} \bigg) - \mathrm{arcsinh} \bigg(\frac{ \mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}|r_2|(1+\frac{b}{a}(y_2-\alpha_0))}{2v_0} \bigg) | \\ &\leq \norm{\frac{1}{2v_0} \mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} |a| \Big[|r_1|\frac{|b|}{|a|}|y_1 -y_2| + |(1+\frac{b}{a}(y_2-\alpha_0))| \: |r_1-r_2| \Big] \\ &\leq \norm{\frac{1}{2v_0} \mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} \Big[|b||r_1| |y_1-y_2| + (|a-b\alpha_0|+|b||y_2|) \: |r_1-r_2| \Big]. \end{align*} Thus, \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz} holds with $L_{1\mu} =\norm{\frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} \norm{a-b\alpha_0}_{L^\infty(\Gamma_C)}$ and $L_{2\mu} = L_{3\mu} = \norm{\frac{1}{2v_0}\mathrm{e}^{\frac{\mu_0 + b \alpha_0 }{a}}}_{L^\infty(\Gamma_C)} \norm{b}_{L^\infty(\Gamma_C)}$. We lastly observe that the smallness-condition \eqref{eq:smallness1} holds by \eqref{eq:cor1_smallness_ass1}. \begin{proof}[Proof of Claim \ref{claim:2}] Since $|\mathrm{arcsinh}(\beta\xi)|$ is symmetric for $\beta,\xi\in\mathbb{R}$, we may prove \eqref{eq:arc2} for $\beta,\xi>0$. By the mean-value theorem, it follows that \begin{align} \label{eq:arc456} \mathrm{arcsinh}(\beta) \leq \beta. \end{align} By definition $\mathrm{arcsinh}(\beta) = \log(\beta + \sqrt{1+\beta^2})$, then by the fact that $(\beta+\sqrt{1+\beta^2})(\xi+\sqrt{1+\xi^2}) \geq \beta\xi+\sqrt{1+(\beta \xi)^2}$ and the increasing property of the logarithm, we have \begin{align*} \mathrm{arcsinh}(\beta \xi) \leq \mathrm{arcsinh}(\beta) + \mathrm{arcsinh}(\xi). \end{align*} Then, from \eqref{eq:arc456}, we conclude that \eqref{eq:arc2} holds.
Similarly, without loss of generality, we may assume that $\beta>\xi$, then by the mean-value theorem \begin{align}\label{eq:arc123} \mathrm{arcsinh}(\beta) - \mathrm{arcsinh}(\xi) \leq \beta-\xi. \end{align} Combining \begin{align*} &| \mathrm{arcsinh} (\beta_1\xi_1 ) - \mathrm{arcsinh}(\beta_2\xi_2)| \\ &\leq | \mathrm{arcsinh} (\beta_1\xi_1 ) - \mathrm{arcsinh}(\beta_1\xi_2)| + | \mathrm{arcsinh} (\beta_1\xi_2 ) - \mathrm{arcsinh}(\beta_2\xi_2)| \end{align*} with \eqref{eq:arc123} implies \eqref{eq:arc_lip}. \end{proof} \end{proof} \begin{proof}[Proof of Corollary \ref{cor:cor2}] The proof follows by Theorem \ref{thm:wellposed_app2} and the verification of \hyperref[assumptionmu]{$\H{\mu}$} and \hyperref[assumptionGapp]{$\H{G}$} for \eqref{eq:alpha_approx}-\eqref{eq:mu_approx}, which is done in the proof of Corollary \ref{cor:cor}. In addition, the smallness-condition \eqref{eq:smallness_app2} holds by \eqref{eq:cor2_smallness_ass1}. \end{proof} \SkipTocEntry \section{Comments on assumptions}\label{appendix:comments_app} \noindent We include a small discussion on applications fitting our assumptions: \begin{itemize} \item We first consider the equation $G = G(\alpha, |\Dot{u}_\tau|)$ under the assumptions \hyperref[assumptionGapp]{$\H{G}$}. In Section \ref{sec:rateandstate}, we introduced two applications to rate-and-state friction that are included in the framework introduced in this paper. Another application is, e.g., the slip law \begin{align*} G(\alpha, |\Dot{u}_\tau|) = |\Dot{u}_\tau|, \end{align*} which also fits the frameworks in \cite{Patrulescu2017,Migorski2022}. \item The assumption \hyperref[assumptionp]{$\H{p}$} is also used in, e.g., \cite{Patrulescu2017} and \cite[Section 10.3]{sofonea2017}.
They hold for, e.g., a constant function and \begin{equation*} p(r) = \begin{dcases} c_p(r^+)^m, \ \ \ \text{ if } r \leq r^\ast\\ c_p(r^\ast)^m, \ \ \ \text{ if } r > r^\ast, \end{dcases} \end{equation*} where $r^\ast$ is a positive cut-off limit related to the wear and hardness of the material, $r^+= \max\{0,r\}$, $m\in \mathbb{N}$, and $c_p>0$ is a surface stiffness coefficient. We refer the reader to, e.g., \cite{Migorski2012,shillor2004} for more applications. We observe that $p$ is Lipschitz as we may write \begin{align*} r_1^m - r_2^m = (r_1-r_2)\sum_{k=0}^{m-1}r_{1}^{k}r_{2}^{m-1-k} \end{align*} and $p$ is bounded (see, e.g., \cite[Section 6.3]{Migorski2012}). \item For applications of the function $j_{\nu}$ fitting the assumptions \hyperref[assumptionjnu]{$\H{j_\nu}$}, we refer the reader to, e.g., \cite[Section 6.3]{Migorski2012} and \cite[p.185-187]{han}. \item One example in linear viscoelasticity where the operators $\mathcal{A}$ and $\mathcal{B}$ satisfy \hyperref[assumptionAcal]{$\H{\mathcal{A}}$} and \hyperref[assumptionB]{$\H{\mathcal{B}}$}, respectively, is the Kelvin-Voigt constitutive law \begin{equation*} \sigma_{ij} = a_{ijkl}\varepsilon_{kl}(\Dot{u}) + b_{ijkl}\varepsilon_{kl}(u), \end{equation*} where $\sigma_{ij}$, $a_{ijkl}$, and $b_{ijkl}$ are the components of $\sigma$, $\mathcal{A}$, and $\mathcal{B}$, respectively, under the assumption $a_{ijkl} \in L^\infty(\Omega)$ and \begin{equation}\label{eq:sym} a_{ijkl} = a_{jikl} = a_{klij}. \end{equation} In addition, there exists $m_\mathcal{A}>0$ such that \begin{align*} a_{ijkl} \varepsilon_{ij}\varepsilon_{kl} \geq m_\mathcal{A} |\varepsilon|^2 \ \ \text{ for all } \varepsilon \in \mathbb{S}^d, \end{align*} i.e., the usual ellipticity condition. This implies $a_0 = 0$ in \hyperref[assumptionAcal]{$\H{\mathcal{A}}$}\ref{list:Acal_bounded}, and $a_1 = L_\mathcal{A}$. 
We also assume that $b_{ijkl} \in L^\infty(\Omega)$ has the same type of symmetry property as \eqref{eq:sym} (see, e.g., \cite[Remark 3.1]{han2015}). \end{itemize} For further applications, we refer the reader to, e.g., \cite[Section 6]{han2002}, \cite[Section 6]{Migorski2012}, and \cite[Section 4]{Sofonea2012}. \SkipTocEntry\section{Proof of Proposition \ref{prop:estimatewfirst}} \label{appendix:proof_part2} \begin{proof}[Proof of Proposition \ref{prop:estimatewfirst}] For simplicity of notation, let $w = w_{\alpha\xi\eta g\chi}$. We start by deriving estimates on $A$, $\varphi$, and $j^\circ$. According to \hyperref[assumptionA]{$\H{A}$}\ref{list:A_maximalmonotone}, \begin{align*} &\inner{A(t,w(t))}{w(t)} - \inner{A(t,0)}{w(t)} = \inner{A(t,w(t))- A(t,0)}{w(t)} \geq m_A \norm{w(t)}_V^2 \end{align*} for a.e. $t\in (0,T)$. Invoking \hyperref[assumptionA]{$\H{A}$}\ref{list:A_bounded}, i.e., $\norm{A(t,0)}_{V^\ast} \leq a_0 (t)$, and the Cauchy-Schwarz inequality gives \begin{align}\label{eq:estimateonA} \inner{A(t,w(t))}{w(t)} \geq m_A \norm{w(t)}_V^2 + \inner{A(t,0)}{w(t)} \geq m_A \norm{w(t)}_V^2 - a_0(t)\norm{w(t)}_V \end{align} for a.e. $t\in (0,T)$. Next, we take a closer look at $j^\circ$. Keeping in mind Definition \ref{def:subdiffernetial} and applying \hyperref[assumptionj]{$\H{j}$}\ref{list:j_bounded}-\ref{list:j_estimate} and the Cauchy-Schwarz inequality gives \begin{align}\label{eq:forsurethis} &j^\circ(t,\alpha(t),\chi(t),Nw(t);Nv-Nw(t)) \\ \notag &\hspace{1.1cm}\leq m_j \norm{N(w(t)-v)}_X^2 + \Bar{m}_j(\norm{\alpha(t)}_Y +\norm{\chi(t)}_X) \norm{Nw(t)-Nv}_X \notag\\ &\hspace{1.1cm}- j^\circ(t,0,0,Nv;Nw(t)-Nv)\notag\\ &\hspace{1.1cm}\leq m_j\norm{N}^2 \norm{w(t)-v}_V^2 + \Bar{m}_j(\norm{\alpha(t)}_Y +\norm{\chi(t)}_X)\norm{N} \norm{w(t) - v}_V \notag \\ &\hspace{1.1cm}+( c_{0j}(t) + c_{3j}\norm{N} \norm{v}_V )\norm{N} \norm{w(t) - v}_V \notag \end{align} for all $v\in V$, a.e. $t\in (0,T)$.
Similarly, we observe by Definition \ref{def:convex_subdiffernetial}, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_bounded}-\ref{list:phi_estimate}, and the Cauchy-Schwarz inequality that \begin{align}\label{eq:this2} &\varphi(t,\alpha(t),\eta(t),Mg(t),Kv) - \varphi(t,\alpha(t),\eta(t),Mg(t),Kw(t)) \\ \notag &\leq \beta_{2\varphi}\norm{K} \norm{\alpha(t)}_Y\norm{w(t)-v}_V +\beta_{3\varphi} \norm{K}\norm{\eta(t)}_X\norm{w(t)-v}_V \\ \notag &+ \beta_{4\varphi}\norm{K}\norm{M}\norm{g(t)}_V\norm{w(t)-v}_V + \beta_{5\varphi} \norm{K}\norm{M}\norm{\alpha(t)}_Y\norm{g(t)}_V\norm{w(t)-v}_V \notag \\ \notag &+ \big[c_{0\varphi}(t) + c_{4\varphi}\norm{K}\norm{v}_V\big]\norm{K}\norm{w(t)-v}_V \\ &=: K_1 + K_2 + K_3 + K_4 + K_5 + K_6 \notag \end{align} for all $v\in V$, a.e. $t\in (0,T)$. We are now in a position to find the desired estimate. Choosing $v=0$ in Problem \ref{prob:first_step}, while keeping in mind \hyperref[assumptionMNK]{$\H{MNK}$}, reads \begin{align*} &\inner{\Dot{w}(t)}{w(t)} + \inner{A(t,w(t))}{w(t)} \\ &\leq \inner{f(t)}{w(t) } - \inner{\xi(t) }{w(t) } + \varphi(t, \alpha(t),\eta(t), Mg(t), 0) - \varphi(t,\alpha(t), \eta(t), Mg(t), Kw(t)) \\ &+j^\circ(t, \alpha(t),\chi(t), Nw(t); -Nw(t) ) \end{align*} for a.e. $t\in (0,T)$. Take $v=0$ in \eqref{eq:forsurethis}-\eqref{eq:this2} and combine with \eqref{eq:estimateonA}. Next, we integrate over the time interval $(0,T)$ and apply the Cauchy-Schwarz inequality to \eqref{eq:forsurethis}, $K_1$, $K_2$, $K_3$, and $K_5$. We apply H\"{o}lder's inequality with $\frac{1}{\infty} + \frac{1}{2} + \frac{1}{2} = 1$ to $K_4$. Applying the integration by parts formula in Proposition \ref{prop:integrationbypartsformula} (with $v_1=v_2=w(t)$ for a.e. 
$t\in(0,T)$) yields \begin{align*} &\norm{w(t)}_{H}^2 + (m_A- m_j\norm{N}^2)\norm{w}_{L^2_TV}^2 \\ &\leq \norm{w(0)}_H^2 + \norm{f}_{L^2_TV^\ast}\norm{w}_{L^2_TV} + \norm{\xi}_{L^2_TV^\ast}\norm{w}_{L^2_TV} + \norm{a_0}_{L^2(0,T)}\norm{w}_{L^2_TV} \\ &+ \Bar{m}_j\norm{N} \norm{\chi}_{L^2_TX}\norm{w}_{L^2_TV} + \norm{N} \norm{c_{0j}}_{L^2(0,T)}\norm{w}_{L^2_TV} + \norm{K} \norm{c_{0\varphi}}_{L^2(0,T)}\norm{w}_{L^2_TV} \\ & + T^{1/2}( \Bar{m}_j\norm{N} + \beta_{2\varphi}\norm{K}) \norm{\alpha}_{L^\infty_TY}\norm{w}_{L^2_TV} +\beta_{3\varphi}\norm{K} \norm{\eta}_{L^2_TX}\norm{w}_{L^2_TV} \\ &+ \beta_{4\varphi}\norm{K}\norm{M} \norm{g}_{L^2_TV}\norm{w}_{L^2_TV} + \beta_{5\varphi}\norm{K}\norm{M} \norm{\alpha}_{L^\infty_TY} \norm{g}_{L^2_TV}\norm{w}_{L^2_TV} \end{align*} for a.e. $t\in (0,T)$. Next, using the fact that $w(0) = w_0$, that $V\subset H$ is a continuous embedding, and applying Young's inequality implies that there exists $\epsilon >0$ such that \begin{align*} &\norm{w(t)}_{H}^2 + (m_A- m_j\norm{N}^2 - \frac{9}{2}\epsilon )\norm{w}_{L^2_TV}^2 \\ &\leq c(\norm{w_0}_V^2 + \frac{1}{2\epsilon}\norm{f}_{L^2_TV^\ast}^2 + \frac{1}{2\epsilon}\norm{\xi}_{L^2_TV^\ast}^2 +\frac{1}{2\epsilon} \norm{a_0}_{L^2(0,T)}^2 + \frac{1}{2\epsilon} \norm{\chi}_{L^2_TX}^2 \\ &+ \frac{1}{2\epsilon} \norm{c_{0j}}_{L^2(0,T)}^2 + \frac{1}{2\epsilon} \norm{c_{0\varphi}}_{L^2(0,T)}^2 + \frac{1}{2\epsilon} \norm{\alpha}_{L^\infty_TY}^2 + \frac{1}{2\epsilon} \norm{\eta}_{L^2_TX}^2 + \frac{1}{2\epsilon} \norm{g}_{L^2_TV}^2 ) \\ &+ \frac{\beta_{5\varphi}\norm{K}\norm{M}}{2} \norm{\alpha}_{L^\infty_TY} \norm{g}_{L^2_TV}^2 + \frac{1}{2}\beta_{5\varphi}\norm{K}\norm{M} \norm{\alpha}_{L^\infty_TY}\norm{w}_{L^2_TV}^2 \end{align*} for a.e. $t\in (0,T)$. Now, we choose $\frac{9}{2} \epsilon = \sqrt{2}\beta_{4\varphi}\norm{K}\norm{M}>0$ which implies that $m_A - m_j \norm{N}^2 -\frac{9}{2}\epsilon >0$ from \eqref{eq:assumptionbound_mA_max}. 
\\ \indent It remains to find a bound on $\Dot{w}$ in $L^2_TV^\ast$. Let us first rearrange Problem \ref{prob:first_step}. Then the estimates \eqref{eq:forsurethis}-\eqref{eq:this2} and the Cauchy-Schwarz inequality imply \begin{align*} &\inner{\Dot{w}(t)}{w(t)-v} + \inner{A(t,w(t))}{w(t)-v} \\ &\leq \norm{f(t)}_{V^\ast} \norm{v-w(t)}_V + \norm{\xi(t)}_{V^\ast} \norm{v-w(t)}_V + c_{0j}(t)\norm{N}\norm{v-w(t)}_V \\ &+\Bar{m}_j\norm{N} \norm{\chi(t)}_X \norm{v-w(t)}_V + c_{0\varphi}(t)\norm{K}\norm{v - w(t)}_V \\ \notag & +(\bar{m}_j\norm{N}+\beta_{2\varphi}\norm{K} )\norm{\alpha(t)}_Y\norm{v-w(t)}_V +\beta_{3\varphi} \norm{K}\norm{\eta(t)}_X\norm{v-w(t)}_V \\ \notag &+ \beta_{4\varphi}\norm{K}\norm{M}\norm{g(t)}_V\norm{v-w(t)}_V + \beta_{5\varphi} \norm{K}\norm{M}\norm{\alpha(t)}_Y\norm{g(t)}_V\norm{v-w(t)}_V \notag \\ \notag &+ \big[c_{0\varphi}(t) + c_{4\varphi}\norm{K}\norm{v}_V\big]\norm{K}\norm{v-w(t)}_V \notag \end{align*} for all $v\in V$, a.e. $t\in (0,T)$. Next, we choose $\Tilde{v} = v- w(t) \in V$ with $v \in V$ arbitrary. By duality, we deduce \begin{align*} &\norm{\Dot{w}(t)}_{V^\ast} + \norm{A(t,w(t))}_{V^\ast} = \sup_{ \Tilde{v} \in V,\ \norm{\Tilde{v} }_V = 1} \Big(\inner{\Dot{w}(t)}{\Tilde{v} }_{V^\ast \times V} + \inner{A(t,w(t))}{\Tilde{v}}_{V^\ast \times V} \Big)\\ &\leq \norm{f(t)}_{V^\ast} + \norm{\xi(t)}_{V^\ast} + \norm{N} c_{0j}(t) +\norm{K} c_{0\varphi}(t) + \Bar{m}_j\norm{N}\norm{\chi(t)}_X + m_j\norm{N}^2 \\ &+ (\bar{m}_j\norm{N} + \beta_{2\varphi}\norm{K}) \norm{\alpha(t)}_Y +\beta_{3\varphi} \norm{K}\norm{\eta(t)}_X + \beta_{4\varphi}\norm{K}\norm{M}\norm{g(t)}_V \\ &+ \beta_{5\varphi} \norm{K}\norm{M}\norm{\alpha(t)}_Y\norm{g(t)}_V + \big[c_{0\varphi}(t) + c_{4\varphi}\norm{K}(1+\norm{w}_V)\big] \notag \end{align*} for a.e. $t\in (0,T)$. We conclude the proof by the arithmetic-quadratic mean inequality, integrating over the time interval $(0,T)$, and utilizing H\"{o}lder's inequality and the estimate for $\norm{w}_{L^2_TV}^2$.
\end{proof} \SkipTocEntry\section{Proof of Lemma \ref{lemma:lambda_fixedpoint}} \label{appendix:lambda_fixedpoint} \begin{proof}[Proof of Lemma \ref{lemma:lambda_fixedpoint}] The proof relies on the Banach fixed-point theorem. We therefore need to verify that the map is indeed well-defined and that it is a contractive mapping on $X_T(a)$. For the sake of presentation, we split the proof into two steps. \\ \indent \textbf{Step i} \textit{(The operator $\Lambda$ is well-defined on $X_T(a)$)}. That is, we show that $\Lambda \alpha^1 \in X_T(a)$. We first prove that, for given $\alpha^1 \in X_T(a)$, we have $\norm{\Lambda \alpha^1}_{L^\infty_TY} \leq a$. We apply Minkowski's inequality to \eqref{eq:Lambda}, then we utilize the estimate \eqref{eq:est_on_G} with $\alpha^n= \alpha^1$ and $w^n= w^1$ together with \hyperref[assumptionMNK]{$\H{MNK}$} and \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_00}. This yields \begin{align*} \norm{\Lambda \alpha^1(t)}_Y &\leq \norm{\alpha_0}_Y + \int_0^t \norm{ \mathcal{G}(s,\alpha^1(s),Mw^1(s))}_Y ds \\ &\leq \norm{\alpha_0}_Y + \int_0^t \big[L_\mathcal{G} \norm{\alpha^1(s)}_Y + L_\mathcal{G} \norm{M} \norm{w^1(s)}_V \big]ds + T\norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY} \end{align*} for a.e. $t\in (0,T)$. From H\"{o}lder's inequality and the Cauchy-Schwarz inequality, we obtain \begin{align*} \norm{\Lambda \alpha^1(t)}_Y &\leq \norm{\alpha_0}_Y + L_\mathcal{G} T \norm{\alpha^1}_{L^\infty_TY} + L_\mathcal{G} \norm{M} T^{1/2} \norm{w^1}_{L^2_TV} + T \norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY}. \end{align*} We then see from Young's inequality and the estimate \eqref{eq:estiamte_w_1} that \begin{align*} &\norm{\Lambda \alpha^1}_{L^\infty_TY}^2 \leq c(1 + \norm{\alpha^1}_{L^\infty_TY}^2 ). \end{align*} Choosing $a$ such that it provides the desired upper bound concludes this part. \\ \indent It remains to show that $\Lambda \alpha^1(t)$ is continuous in $Y$ for a.e. $t\in (0,T)$ for given $\alpha^1 \in X_T(a)$ and $w^1 \in \mathcal{W}^{1,2}_T$.
Let $t, t'\in [0,T]$; without loss of generality, we assume that $t < t'$. We then use Minkowski's inequality, the estimate \eqref{eq:est_on_G} (with $\alpha^n= \alpha^1$ and $w^n= w^1$), H\"{o}lder's inequality, the Cauchy-Schwarz inequality, \hyperref[assumptionMNK]{$\H{MNK}$}, and \hyperref[assumptionG]{$\H{\mathcal{G}}$}\ref{list:G_00} to deduce \begin{align*} &\norm{\Lambda\alpha^1(t') - \Lambda\alpha^1(t)}_Y \\ &\leq \int_t^{t'} \norm{ \mathcal{G}(s,\alpha^1(s),Mw^1(s))}_Y ds\\ &\leq \int_t^{t'} \big[ L_\mathcal{G} \norm{\alpha^1(s)}_Y + L_\mathcal{G}\norm{M}\norm{w^1(s)}_V + \norm{\mathcal{G}(s,0,0)}_{Y} \big] ds\\ &\leq L_\mathcal{G} |t'-t| \norm{\alpha^1}_{L^\infty(t,t';Y)} + L_\mathcal{G}|t'-t|^{1/2}\norm{M} \norm{w^1}_{L^2(t,t';V)} + |t'-t| \norm{\mathcal{G}(\cdot,0,0)}_{L^\infty(t,t';Y)} . \end{align*} The estimate \eqref{eq:estiamte_w_1} implies \begin{align*} \norm{\Lambda\alpha^1(t') - \Lambda\alpha^1(t)}^2_Y &\leq c |t'-t|^{1/2} + c|t'-t| \norm{\alpha^1}_{L^\infty_TY} + |t'-t| \norm{\mathcal{G}(\cdot,0,0)}_{L^\infty_TY} \end{align*} Passing to the limit $|t'-t| \rightarrow 0$, we conclude that indeed $\Lambda \alpha^1\in X_T(a)$. \\ \indent\textbf{Step ii} \textit{(The application $\Lambda : X_T(a) \rightarrow X_T(a)$ is a contractive mapping)}. Let $\alpha^1_i \in X_T(a)$, $i=1,2$, and let $ w^1 \in \mathcal{W}^{1,2}_T$ be the unique solution to \eqref{eq:algorithm11}-\eqref{eq:algorithm12}. We introduce a new norm in $C([0,T];Y)$ \begin{equation*} \norm{z}_\gamma = \max_{s\in[0,T]} \mathrm{e}^{-\gamma s} \norm{z(s)}_Y \end{equation*} where $\gamma>0$ will be chosen later. We notice that $C([0,T];Y)$ with $\norm{\cdot}_\gamma$ is complete and $\norm{\cdot}_\gamma$ is equivalent to the norm on $C([0,T];Y)$.
Then, from \eqref{eq:Lambda}, we have \begin{align*} \mathrm{e}^{-\gamma t} \norm{\Lambda\alpha^1_1(t) - \Lambda\alpha^1_2(t)}_Y &\leq L_\mathcal{G} \mathrm{e}^{-\gamma t} \int_0^t \mathrm{e}^{\gamma s} \mathrm{e}^{-\gamma s} \norm{\alpha^1_1(s) - \alpha^1_2(s)}_Y ds\\ &\leq L_\mathcal{G} \mathrm{e}^{-\gamma t} \norm{\alpha^1_1 - \alpha^1_2}_\gamma \int_0^t \mathrm{e}^{\gamma s} ds \\ &\leq \frac{L_\mathcal{G}}{\gamma} \norm{\alpha^1_1 - \alpha^1_2}_\gamma . \end{align*} Choosing $\gamma > L_\mathcal{G}$ implies that $\Lambda$ is a contraction on $X_T(a)$, and thus we may conclude by the Banach fixed-point theorem that $\alpha^1 \in X_T(a)$ is the unique fixed point of \eqref{eq:Lambda}. \end{proof} \SkipTocEntry\section{Proof of Corollary \ref{cor:cauchysequences}} \label{appendix:cor_cauchy} \begin{proof}[Proof of Corollary \ref{cor:cauchysequences}] We follow the proof of Proposition \ref{prop:cauchysequences} until \eqref{eq:I_2123}. Since $\beta_{1\varphi} = \beta_{4\varphi} = \beta_{5\varphi} =\beta_{6\varphi}=\beta_{7\varphi} = 0$, while keeping in mind \eqref{eq:assumptionbound_mA_max}, \eqref{eq:I_2123} becomes \begin{align*} \bigg[ \int_0^{t'} \norm{e_w^n(t)}^2_V dt \bigg]^{1/2} &\leq T^k\frac{c_\mathcal{R} + \beta_{3\varphi} \norm{K} c_{\mathcal{S}_\varphi } + \Bar{m}_j \norm{N} c_{\mathcal{S}_j} }{ m_A -m_j \norm{N}^2 } \bigg[ \int_0^{t'}\int_0^t \norm{e_w^{n-1}(s)}_V^2 ds dt \bigg]^{1/2} \\ &+\frac{\bar{m}_{j}\norm{N}+\beta_{2\varphi}\norm{K}}{ m_A -m_j \norm{N}^2 }\bigg[ \int_0^{t'}\norm{e_\alpha^{n-1}(t)}_Y^2dt \bigg]^{1/2} \end{align*} for all $t' \in (0,T)$ and some $k\geq 1/2$.
From a standard Gr\"{o}nwall argument (see, e.g., \cite[Lemma 3.2]{Sofonea2012}) combined with Young's inequality, and the Cauchy-Schwarz inequality applied to \eqref{eq:est_alpha123}, we obtain \begin{align}\label{eq:etimate_alpha} \norm{e_\alpha^{n-1}(t)}_Y^2 \leq c T^k \int_0^t\norm{e_w^{n-1}(s)}_V^2ds + cT^k\int_0^t \int_0^s e^{c(t-s)} \norm{e_w^{n-1}(r)}_V^2 dr ds \end{align} for a.e. $t \in (0,T)$ and some $k\geq 1/2$. Applying Young's inequality and \eqref{eq:etimate_alpha}, we have \begin{align*} \int_0^{t} \norm{e_w^n(t_{n})}^2_V dt_{n} &\leq c \int_0^t\int_0^{t_n} \norm{e_w^{n-1}(t_{n-1})}_V^2 dt_{n-1} dt_{n} \\ &+ c \int_0^{t} \int_0^{t_{n}} e^{c(t-t_{n})} \norm{e_w^{n-1}(t_{n-1})}_V^2 dt_{n-1} dt_{n}. \end{align*} Iterating over $n \in \mathbb{Z}_+$ implies \begin{align*} \int_0^{t}& \norm{e_w^n(t_{n})}^2_V dt_{n} \\ &\leq c (1+ e^{ct}) \int_0^{t} \int_0^{t_{n}} \int_0^{t_{n-1}} \dots \int_0^{t_3} \int_0^{t_2} \norm{e_w^{1}(t_1)}_V^2 dt_1 dt_2 \dots dt_{n-2} dt_{n-1}dt_{n} \\ &\leq c \Big( \norm{w^{1}}_{L^2_TV}^2 + T\norm{w_0}_V^2\Big)(1+ e^{cT}) \int_0^t \int_0^{t_{n}} \int_0^{t_{n-1}} \dots \int_0^{t_3} dt_2 \dots dt_{n-2} dt_{n-1} dt_n. \end{align*} We observe that \begin{align*} \int_0^t \int_0^{t_{n}} \int_0^{t_{n-1}} \dots \int_0^{t_3} dt_2 \dots dt_{n-2} dt_{n-1} dt_n = \frac{t^{n-1}}{(n-1)!} \leq \frac{T^{n-1}}{(n-1)!}. \end{align*} Now, since $T<\infty$ and \begin{equation*} n! \sim \sqrt{2\pi n}(n/e)^n, \end{equation*} we have \begin{align*} \lim_{n\rightarrow \infty} c\frac{T^{n-1}}{(n-1)!} =0. \end{align*} We may therefore conclude the proof in the same way as in the proof of Proposition \ref{prop:cauchysequences}.
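As a quick numerical sanity check of this factorial decay (a sketch with hypothetical values of the constants $c$ and $T$; note that even $T>1$ causes no difficulty):

```python
import math

# c * T**(n-1) / (n-1)! -> 0 as n -> infinity for any fixed T (hypothetical c, T)
c, T = 10.0, 4.0
terms = [c * T ** (n - 1) / math.factorial(n - 1) for n in range(1, 60)]
# the sequence grows while n - 1 is below T, then decays super-exponentially
print(max(terms), terms[-1])
```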
\end{proof} \SkipTocEntry\section{Proof of Lemma \ref{lemma:est_W_n}} \label{appendix:proof_W_n} \begin{proof}[Proof of Lemma \ref{lemma:est_W_n}] For \eqref{eq:ineq_W_n}, we utilize the conditions \hyperref[assumptionA]{$\H{A}$}\ref{list:A_maximalmonotone}, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate}, \hyperref[assumptionj]{$\H{j}$}\ref{list:j_estimate}, and the Cauchy-Schwarz inequality to obtain \begin{align*} &\inner{\Dot{W}^n(t)}{W^n(t)} + (m_A-m_j\norm{N}^2 ) \norm{W^n(t)}_V^2 \\ &\leq \norm{\mathcal{R}w^{n-1}_1(t) - \mathcal{R}w^{n-1}_2(t) }_{V^\ast} \norm{W^n(t)}_V \\ &+\beta_{1\varphi}\norm{K} \norm{M}\norm{w_1^{n-1}(t)}_V\norm{\Sigma^{n-1}(t)}_Y \norm{W^n(t)}_V \\ & + (\beta_{2\varphi}\norm{K} +\Bar{m}_j\norm{N}) \norm{\Sigma^{n-1}(t)}_Y \norm{W^n(t)}_V \\ &+ \beta_{3\varphi}\norm{K} \norm{\mathcal{S}_\varphi w_1^{n-1}(t)-\mathcal{S}_\varphi w_2^{n-1}(t)}_X\norm{W^n(t)}_V \\ &+ \beta_{4\varphi}\norm{K}\norm{M} \norm{W^{n-1}(t)}_V\norm{W^n(t)}_V + \beta_{5\varphi} \norm{K} \norm{M}\norm{\alpha_2^{n-1}(t)}_Y \norm{W^{n-1}(t)}_V \norm{W^n(t)}_V \\ &+ \beta_{6\varphi} \norm{K} \norm{M}\norm{w_1^{n-1}(t)}_V\norm{\mathcal{S}_\varphi w_1^{n-1}(t)-\mathcal{S}_\varphi w_2^{n-1}(t)}_X\norm{W^n(t)}_V \\ &+ \beta_{7\varphi} \norm{K} \norm{\alpha^{n-1}_1(t)}_Y\norm{\mathcal{S}_\varphi w_1^{n-1}(t)-\mathcal{S}_\varphi w_2^{n-1}(t)}_X\norm{W^n(t)}_V \\ &+ \Bar{m}_j\norm{N}\norm{\mathcal{S}_j w_1^{n-1}(t)-\mathcal{S}_j w_2^{n-1}(t)}_X\norm{W^n(t)}_V \end{align*} for a.e. $t\in (0,T/2)$. First, we integrate over the time interval $(0,t')\subset (0,T/2)$ and apply the integration by parts formula in Proposition \ref{prop:integrationbypartsformula} (with $v_1=v_2=W(t)$ for a.e. $t\in (0,T/2)$), Young's inequality, H\"{o}lder's inequality, $W(0) = W_0$, and the fact that $V\subset H$ is a continuous embedding.
Secondly, applying the Cauchy-Schwarz inequality to \hyperref[assumptionR]{$\H{\mathcal{R}}$}, \hyperref[assumptionS1]{$\H{\mathcal{S}_\varphi}$}, and \hyperref[assumptionS2]{$\H{\mathcal{S}_j}$}, respectively, gives the following estimates \begin{align*} \norm{\mathcal{R}w_1^{n-1}(t) - \mathcal{R}w_2^{n-1}(t)}_{V^\ast}^2 &\leq c_\mathcal{R}^2 \frac{T}{2} \int_0^t \norm{W^{n-1}(s)}_V^2 ds , \\ \norm{\mathcal{S}_\varphi w_1^{n-1}(t) - \mathcal{S}_\varphi w^{n-1}_2(t)}_X^2 &\leq c_{\mathcal{S}_\varphi }^2 \frac{T}{2} \int_0^t \norm{W^{n-1}(s)}_V^2 ds, \\ \norm{\mathcal{S}_jw_1^{n-1}(t) - \mathcal{S}_j w_2^{n-1}(t)}_X^2 &\leq c_{\mathcal{S}_j}^2 \frac{T}{2} \int_0^t \norm{W^{n-1}(s)}_V^2 ds \end{align*} for a.e. $t \in (0,T/2)$. This yields \begin{align*} & (m_A - m_j\norm{N}^2 - 4\epsilon)\norm{W^n}_{L^2_{T/2}V}^2 \\ &\leq c(\norm{W_0}_V^2 + \frac{1}{2\epsilon} \norm{w_1^{n-1}}_{L^2_{T/2}V}^2 \norm{\Sigma^{n-1}}_{L^\infty_{T/2}Y}^2 + \frac{1}{2\epsilon} \norm{\Sigma^{n-1}}_{L^\infty_{T/2}Y}^2 \\ & + \frac{2}{\epsilon}\norm{W^{n-1}}_{L^2_{T/2}V}^2 + \frac{T^k}{2\epsilon}\norm{w_1^{n-1}}_{L^2_{T/2}V}^2 \norm{W^{n-1}}_{L^2_{T/2}V}^2 + \frac{T^k}{2\epsilon} \norm{\alpha_1^{n-1}}_{L^\infty_{T/2}Y}^2 \norm{W^{n-1}}_{L^2_{T/2}V}^2 ) \\ &+ \frac{\beta_{5\varphi} \norm{K} \norm{M}}{2} \norm{\alpha_2^{n-1}}_{L^\infty_{T/2}Y}\norm{W^{n-1}}_{L^2_{T/2}V}^2 + \frac{1}{2}\beta_{5\varphi} \norm{K} \norm{M} \norm{\alpha_2^{n-1}}_{L^\infty_{T/2}Y} \norm{W^n}_{L^2_{T/2}V}^2 \end{align*} for $k\geq 1/2$. Keeping in mind \eqref{eq:assumptionbound_mA_max}, we may choose $4\epsilon = \sqrt{2} \beta_{4\varphi}\norm{M}\norm{K} >0$ to obtain the desired result.
\end{proof} \SkipTocEntry\section{Proof of Lemma \ref{lemma:assumptionphi2}} \label{appendix:proofs_app1} \begin{proof}[Proof of Lemma \ref{lemma:assumptionphi2}] The assumptions on $A$, \hyperref[assumptionA]{$\H{A}$}, hold directly by the hypothesis \hyperref[assumptionAcal]{$\H{\mathcal{A}}$} with $m_A = m_\mathcal{A}$, $a_0 =0$, and $a_1 = L_\mathcal{A}$ (see, e.g., \cite[p.273]{sofonea2017}). Secondly, \hyperref[assumptionG]{$\H{\mathcal{G}}$} holds by \hyperref[assumptionGapp]{$\H{G}$} with $L_\mathcal{G} = L_G$ and $\mathcal{R}$ is history-dependent with $c_\mathcal{R} = L_\mathcal{B} + \norm{\mathcal{C}}_{L^\infty_TL^\infty(\Omega;\mathbb{S}^d)}$ by \hyperref[assumptionC]{$\H{\mathcal{C}}$} and \hyperref[assumptionB]{$\H{\mathcal{B}}$}\ref{list:Bcal_bounded}, see, e.g., \cite[p.275]{sofonea2017}. We observe that \hyperref[assumptionR]{$\H{\mathcal{R}}$}\ref{list:R0} holds by a duality argument, the Cauchy-Schwarz inequality, \hyperref[assumptionB]{$\H{\mathcal{B}}$}\ref{list:Bcal_bounded}, and \eqref{eq:assumptionondata1}. Moreover, it follows directly by Minkowski's inequality and the properties of the trace operator that \hyperref[assumptionS1]{$\H{\mathcal{S}_\varphi}$}\ref{list:S_hist11}-\ref{list:S011} holds with $c_{\mathcal{S}_\varphi } = \norm{\gamma_\nu}_{\mathcal{L}(V,Y)}$. Additionally, \eqref{eq:initaldata}-\eqref{eq:assumptionbound_mA_max} is a consequence of \eqref{eq:assumptiononsourceterms}-\eqref{eq:assumptionondata} and a duality argument. \\ \indent The verification of \hyperref[assumptionphi]{$\H{\varphi}$} requires some work. Firstly, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_measurable} holds as the variables in \eqref{eq:defintion_phi} do not explicitly depend on $t$. Next, we show that $\varphi(t,\cdot,\cdot,\cdot,\Tilde{v})$ is continuous on $Y \times X \times U$ for all $\Tilde{v}=(v^{(1)},v^{(2)})\in Z$, a.e. $t\in (0,T)$. That is, ensuring that \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_cont} holds. 
Let $(y,z,\Tilde{w}),(y_0,z_0,\Tilde{w}_0) \in Y \times X \times U$ be such that $\norm{(y,z,\Tilde{w})- (y_0,z_0,\Tilde{w}_0)}_{ Y \times X \times U} \rightarrow 0$. Then \begin{align*} &|\varphi(t, y,z,\Tilde{w},\Tilde{v}) - \varphi(t, y_0,z_0,\Tilde{w}_0,\Tilde{v})| \\ &\leq \int_{\Gamma_C}|p(z)-p(z_0)||v^{(2)}|da + \Big|\int_{\Gamma_C}[\mu(|\Tilde{w}|,y) p(z) - \mu(|\Tilde{w}_0|,y_0) p(z_0)] |v^{(1)}| da \Big| . \end{align*} To simplify the notation, we let \begin{equation*} \mu = \mu (|\Tilde{w}| , y), \ \ \ \mu_0 = \mu (|\Tilde{w}_{0}| , y_0), \end{equation*} so that \begin{align*} &|\varphi(t, y,z,\Tilde{w},\Tilde{v}) - \varphi(t, y_0,z_0,\Tilde{w}_0,\Tilde{v})| \\ &\leq \int_{\Gamma_C}|p(z)-p(z_0)||v^{(2)}|da + \int_{\Gamma_C}|p(z)||\mu - \mu_0| |v^{(1)}| da + \int_{\Gamma_C}|\mu_0||p(z) - p(z_0)| |v^{(1)}| da\\ &=: I + II + III. \end{align*} We estimate $I$ directly by H\"{o}lder's inequality and \hyperref[assumptionp]{$\H{p}$}\ref{list:p_Lipschitz}, while for $II$ and $III$, we use H\"{o}lder's inequality, \hyperref[assumptionp]{$\H{p}$}\ref{list:p_bounded}, and \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz}. This reads \begin{align*} I &\leq L_p\sqrt{\mathrm{meas}(\Gamma_C)}\norm{z-z_0}_X \norm{v}_Z, \\ II &\leq p^\ast \big(L_{3\mu}\norm{\Tilde{w}}_U\norm{y - y_0}_Y\\ &+ \big(L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)}+ L_{2\mu} \norm{y_0}_Y\big) \norm{\Tilde{w}-\Tilde{w}_0}_U\big) \norm{v}_Z, \\ III &\leq L_p\big(\kappa_1 \sqrt{\mathrm{meas}(\Gamma_C)} + \kappa_2 \norm{y_0}_Y +\kappa_3(\mathrm{meas}(\Gamma_C))^{1/4} \norm{\Tilde{w}_0}_U \big) \norm{z - z_0}_X \norm{v}_Z. \end{align*} \indent To verify \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_convex_lsc}, we first note that a continuous function is lower semicontinuous. So, it suffices to show that $\varphi(t,y,z,\Tilde{w},\cdot)$ is Lipschitz continuous on $Z$ for all $y\in Y$, $z\in X$, $\Tilde{w} \in U$, a.e. $t\in (0,T)$. 
Let $\Tilde{v}_i =(v^{(1)}_i, v^{(2)}_i)\in Z $, $i=1,2$; then \hyperref[assumptionp]{$\H{p}$}\ref{list:p_Lipschitz}-\ref{list:p_bounded}, \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_est}, and H\"{o}lder's inequality yield \begin{align}\label{eq:star} &|\varphi(t, y,z,\Tilde{w},\Tilde{v}_1) - \varphi(t, y,z,\Tilde{w},\Tilde{v}_2)| \\ \notag &\leq \int_{\Gamma_C}|p(z)||v^{(2)}_{1} - v^{(2)}_{2}|da +p^\ast \int_{\Gamma_C}|\mu(|\Tilde{w}|,y)| \big||v^{(1)}_{1}| - |v^{(1)}_{2}|\big| da \\ &\leq L_p \sqrt{\mathrm{meas}(\Gamma_C)} \norm{z}_X \norm{\Tilde{v}_1 - \Tilde{v}_2}_Z \notag\\ &+ \big( \kappa_1(\mathrm{meas}(\Gamma_C))^{3/4} + \kappa_2(\mathrm{meas}(\Gamma_C))^{1/4} \norm{y}_Y +\kappa_3\sqrt{\mathrm{meas}(\Gamma_C)} \norm{\Tilde{w}}_U \big) \norm{\Tilde{v}_1 - \Tilde{v}_2}_Z \notag \end{align} for all $y \in Y$, $z\in X$, $\Tilde{w}\in U$, a.e. $t\in (0,T)$. The triangle inequality and the linearity of the integral guarantee the convexity in the last argument of $\varphi$. Thus, we have that \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_convex_lsc} holds. \\ \indent From \eqref{eq:star} and $Z^\ast = U^\ast \times X^\ast = (L^4(\Gamma_C;\mathbb{R}^d))^\ast \times (L^4(\Gamma_C))^\ast = L^{4/3}(\Gamma_C;\mathbb{R}^d) \times L^{4/3}(\Gamma_C)$, we observe that \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_bounded} holds for $c_{0\varphi}(t) = \kappa_1(\mathrm{meas}(\Gamma_C))^{3/2}$, $c_{1\varphi}= \kappa_2\mathrm{meas}(\Gamma_C)$, $c_{2\varphi} = L_p(\mathrm{meas}(\Gamma_C))^{5/4} $, $c_{3\varphi} = \kappa_3(\mathrm{meas}(\Gamma_C))^{5/4}$, and $c_{4\varphi} = 0$. We lastly verify \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate}. Let $y_i\in Y$, $z_i \in X$, $\Tilde{w}_i\in U$, $\Tilde{v}_i =(v^{(1)}_i,v^{(2)}_i) \in Z$, for $i=1,2$. 
Then \begin{align*} &\varphi(t,y_1,z_1,\Tilde{w}_1,\Tilde{v}_2) - \varphi(t,y_1,z_1,\Tilde{w}_1,\Tilde{v}_1) + \varphi(t,y_2,z_2,\Tilde{w}_2,\Tilde{v}_1) - \varphi(t,y_2,z_2,\Tilde{w}_2,\Tilde{v}_2) \\ &= \int_{\Gamma_C} \big[p(z_1)- p(z_2)\big] \big[v^{(2)}_{2}-v^{(2)}_{1}\big] da + \int_{\Gamma_C} \mu (|\Tilde{w}_{1}| , y_1) p(z_1) \big[|v^{(1)}_{2}|-|v^{(1)}_{1}|\big] da\\ &+ \int_{\Gamma_C} \mu (|\Tilde{w}_{2}| , y_2) p(z_2) \big[|v^{(1)}_{1}|- |v^{(1)}_2|\big] da. \end{align*} For simplicity, we define \begin{align*} \mu_1 &= \mu (|\Tilde{w}_{1}| , y_1), & \mu_2 = \mu (|\Tilde{w}_{2}| , y_2), \end{align*} then by \hyperref[assumptionp]{$\H{p}$}\ref{list:p_bounded}, \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz}, and the triangle inequality \begin{align*} &\varphi(t,y_1,z_1,\Tilde{w}_1,\Tilde{v}_2) - \varphi(t,y_1,z_1,\Tilde{w}_1,\Tilde{v}_1) + \varphi(t,y_2,z_2,\Tilde{w}_2,\Tilde{v}_1) - \varphi(t,y_2,z_2,\Tilde{w}_2,\Tilde{v}_2) \\ &= \int_{\Gamma_C} \big[p(z_1)- p(z_2)\big] \big[v^{(2)}_{2}-v^{(2)}_{1}\big] da + \int_{\Gamma_C} p(z_2)\big[\mu_1 -\mu_2\big] \big[|v^{(1)}_{2}|-|v^{(1)}_{1}|\big] da\\ &+ \int_{\Gamma_C} \mu_1\big[p(z_1)- p(z_2)\big] \big[|v^{(1)}_{2}|- |v^{(1)}_1|\big] da\\ &\leq L_p\int_{\Gamma_C} |z_1- z_2| |v^{(2)}_{2}-v^{(2)}_{1}| da + p^\ast\int_{\Gamma_C} L_{1\mu} |\Tilde{w}_1 - \Tilde{w}_2| |v^{(1)}_{2} - v^{(1)}_{1}| da \\ &+ p^\ast \int_{\Gamma_C} L_{2\mu}|y_2||\Tilde{w}_1 - \Tilde{w}_2| |v^{(1)}_{2} - v^{(1)}_{1}| da + p^\ast \int_{\Gamma_C} L_{3\mu}|\Tilde{w}_1| |y_1 - y_2||v^{(1)}_{2} - v^{(1)}_{1}| da \\ &+ L_p\int_{\Gamma_C} (\kappa_1 + \kappa_2|y_1| + \kappa_3|\Tilde{w}_1|) |z_1- z_2| |v^{(1)}_{2}- v^{(1)}_1| da\\ &=: I + II + III + IV + V. 
\end{align*} Next, we apply H\"{o}lder's inequality and \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_est} to obtain \begin{align*} I &\leq L_p \sqrt{\mathrm{meas}(\Gamma_C)} \norm{z_1- z_2}_X \norm{v^{(2)}_1-v^{(2)}_2}_X,\\ II &\leq p^\ast L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\Tilde{w}_1 - \Tilde{w}_2}_U \norm{v^{(1)}_{1} - v^{(1)}_{2}}_U,\\ III &\leq p^\ast L_{2\mu} \norm{y_2}_Y \norm{\Tilde{w}_1 - \Tilde{w}_2}_U \norm{v^{(1)}_1 -v^{(1)}_2}_U,\\ IV &\leq p^\ast L_{3\mu} \norm{\Tilde{w}_1}_U \norm{y_1 - y_2}_Y \norm{v^{(1)}_1 -v^{(1)}_2}_U,\\ V &\leq \kappa_1 \sqrt{\mathrm{meas}(\Gamma_C)} L_p \norm{z_1 - z_2}_X \norm{v^{(1)}_1 -v^{(1)}_2}_U + \kappa_2L_p \norm{y_1}_Y \norm{z_1 - z_2}_X \norm{v^{(1)}_1 -v^{(1)}_2}_U\\ &+ \kappa_3 (\mathrm{meas}(\Gamma_C))^{1/4} L_p \norm{\Tilde{w}_1}_U \norm{z_1 - z_2}_X \norm{v^{(1)}_1 -v^{(1)}_2}_U. \end{align*} Hence, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate} holds with $\beta_{1\varphi}= p^\ast L_{3\mu}$, $\beta_{3\varphi} = L_p(1+ \kappa_1) \sqrt{\mathrm{meas}(\Gamma_C)} $,\\ $\beta_{4\varphi} = p^\ast L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)}$, $\beta_{5\varphi} = p^\ast L_{2\mu}$, $\beta_{6\varphi} = \kappa_3 L_p(\mathrm{meas}(\Gamma_C))^{1/4}$, $\beta_{7\varphi} = \kappa_2 L_p$, and $\beta_{2\varphi} = 0$. \end{proof} \SkipTocEntry\section{Proof of Lemma \ref{lemma:assumptionon_j}} \label{appendix:assumption_phiandj} \begin{proof}[Proof of Lemma \ref{lemma:assumptionon_j}] We will first prove that $\varphi$ defined by \eqref{eq:defintion_phi_2} satisfies \hyperref[assumptionphi]{$\H{\varphi}$}. Firstly, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_measurable} holds as the variables in \eqref{eq:defintion_phi_2} do not explicitly depend on $t$. Next, \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_cont} holds, i.e., $\varphi(t,\cdot,\cdot,\Tilde{v})$ is continuous on $Y \times U$ for all $\Tilde{v} \in Z$, a.e. $t\in (0,T)$. 
Indeed, let $(y,\Tilde{w}),(y_0,\Tilde{w}_0) \in Y \times U$ be such that $\norm{(y,\Tilde{w}) - (y_0,\Tilde{w}_0)}_{Y \times U} \rightarrow 0$. It then follows from \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz} and H\"{o}lder's inequality that \begin{align*} |\varphi(t,y,\Tilde{w},\Tilde{v}) - \varphi(t,y_0,\Tilde{w}_0,\Tilde{v})| &= \Big| \int_{\Gamma_C} \big[\mu (|\Tilde{w}|, y) - \mu (|\Tilde{w}_0|, y_0)\big] |\Tilde{v}| da\Big| \\ &\leq L_{3\mu} \norm{\Tilde{w}}_U\norm{y- y_0}_Y\norm{\Tilde{v}}_Z\\ &+ (L_{1\mu}\sqrt{\mathrm{meas}(\Gamma_C)}+ L_{2\mu} \norm{y_0}_Y) \norm{ \Tilde{w}- \Tilde{w}_0}_U \norm{\Tilde{v}}_Z \end{align*} for all $\Tilde{v} \in Z$. Moreover, convexity in the last argument of $\varphi$ follows by the triangle inequality and the linearity of the integral. Additionally, a continuous function is lower semicontinuous, so it suffices to show that $\varphi$ is continuous in its last argument. Let $\Tilde{v}_1,\Tilde{v}_2\in Z$. Then the Cauchy-Schwarz inequality and \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_est} give \begin{align}\label{eq:varphi_est_app} &|\varphi(t,y,\Tilde{w},\Tilde{v}_1) - \varphi(t,y,\Tilde{w},\Tilde{v}_2)|\\ \notag &\leq (\kappa_1 (\mathrm{meas}(\Gamma_C))^{3/4} + \kappa_2 (\mathrm{meas}(\Gamma_C))^{1/4} \norm{y}_Y + \kappa_3 \sqrt{\mathrm{meas}(\Gamma_C)} \norm{\Tilde{w}}_U) \norm{\Tilde{v}_{1} - \Tilde{v}_{2}}_Z \notag \end{align} for $y\in Y$, $\Tilde{w} \in U$, $\Tilde{v}_{i} \in Z$, $i=1,2$, a.e. $t\in (0,T)$. This proves \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_convex_lsc}. Similarly as in the proof of Lemma \ref{lemma:assumptionphi2}, \eqref{eq:varphi_est_app} implies that \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_bounded} holds for $c_{0\varphi}(t) = \kappa_1 (\mathrm{meas}(\Gamma_C))^{3/2}$, $c_{1\varphi}= \kappa_2 \mathrm{meas}(\Gamma_C)$, $ c_{3\varphi} = \kappa_3 (\mathrm{meas}(\Gamma_C))^{5/4}$, and $c_{2\varphi} = c_{4\varphi} = 0$. 
Next, we investigate \hyperref[assumptionphi]{$\H{\varphi}$}\ref{list:phi_estimate}. From hypothesis \hyperref[assumptionmu]{$\H{\mu}$}\ref{list:mu1_Lipschitz} and H\"{o}lder's inequality, we obtain \begin{align*} &\varphi(t,y_1,\Tilde{w}_1,\Tilde{v}_2) - \varphi(t,y_1,\Tilde{w}_1,\Tilde{v}_1) + \varphi(t,y_2,\Tilde{w}_2,\Tilde{v}_1)- \varphi(t,y_2,\Tilde{w}_2,\Tilde{v}_2) \\ &\leq (L_{1\mu} \sqrt{\mathrm{meas}(\Gamma_C)} + L_{2\mu}\norm{y_2}_Y) \norm{\Tilde{w}_1 - \Tilde{w}_2}_U \norm{\Tilde{v}_1 - \Tilde{v}_2}_Z + L_{3\mu} \norm{\Tilde{w}_1}_U \norm{y_1 - y_2}_Y \norm{\Tilde{v}_1 - \Tilde{v}_2}_Z \end{align*} for all $y_i \in Y$, $\Tilde{w}_i \in U$, $\Tilde{v}_{i} \in Z$ for $i=1,2$, a.e. $t\in (0,T)$. We set $\beta_{1\varphi} =L_{3\mu}$, $\beta_{4\varphi} = L_{1\mu}\sqrt{\mathrm{meas}(\Gamma_C)}$, $\beta_{5\varphi} =L_{2\mu}$, and $\beta_{2\varphi} = \beta_{3\varphi}= \beta_{6\varphi} = \beta_{7\varphi}=0$. Lastly, we prove that $j$ defined by \eqref{eq:j_nu} satisfies \hyperref[assumptionj]{$\H{j}$}. Corollary \ref{cor:j}\ref{list:finite}-\ref{list:locally} guarantees that \hyperref[assumptionj]{$\H{j}$}\ref{list:j_measurable}-\ref{list:j_locallyLipschitz} hold. It remains to show \hyperref[assumptionj]{$\H{j}$}\ref{list:j_convergence}-\ref{list:j_estimate}. From Corollary \ref{cor:j}\ref{list:equality}, Proposition \ref{prop:chainrule_subdiff}, and Fatou's lemma, we obtain \hyperref[assumptionj]{$\H{j}$}\ref{list:j_convergence}. We find that \hyperref[assumptionj]{$\H{j}$}\ref{list:j_bounded} holds by \hyperref[assumptionjnu]{$\H{j_\nu}$}\ref{list:j_nu_bounded}, Young's inequality, and H\"{o}lder's inequality. 
Indeed, \begin{align*} |\xi|^{4/3} &\leq (\bar{c}_0 + \bar{c}_1 |v|)^{4/3} \leq 8^{1/3}(\bar{c}_0^{4/3} + \bar{c}_1^{4/3} |v|^{4/3}) \end{align*} implies \begin{align*} \norm{\xi}_{X^\ast} \leq 2^{3/4} (\mathrm{meas}(\Gamma_C))^{3/4} \bar{c}_0 + 2^{3/4}\sqrt{\mathrm{meas}(\Gamma_C)} \bar{c}_1 \norm{v}_X , \end{align*} so that the condition holds with $c_{0j}(t) = 2^{3/4} (\mathrm{meas}(\Gamma_C))^{3/4} \bar{c}_0$, $c_{3j} = 2^{3/4}\sqrt{\mathrm{meas}(\Gamma_C)}\bar{c}_1$, and $c_{1j} = c_{2j} = 0$. For \hyperref[assumptionj]{$\H{j}$}\ref{list:j_estimate}, we use Corollary \ref{cor:j}\ref{list:equality} (see, e.g., \cite{Migorski2022}), \hyperref[assumptionjnu]{$\H{j_\nu}$}\ref{list:j_nu_est}, and the Cauchy-Schwarz inequality to obtain \begin{align*} j^\circ (t,v_1 ; v_2 - v_1) + j^\circ (t,v_2 ; v_1 - v_2) &\leq m_{j_\nu} \int_{\Gamma_C} |v_1 - v_2|^2 da \leq m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)} \norm{v_1 - v_2}_{X}^2, \end{align*} so that $m_j = m_{j_\nu} \sqrt{\mathrm{meas}(\Gamma_C)}$, concluding the proof. \end{proof} \end{document}
\begin{document} \title{Supplementary Information: Quantum Violation of an Instrumental Test} \date{\today} \author{Rafael Chaves} \email{[email protected]} \affiliation{International Institute of Physics, Federal University of Rio Grande do Norte, 59078-970, P. O. Box 1613, Natal, Brazil} \author{Gonzalo Carvacho} \affiliation{Dipartimento di Fisica - Sapienza Universit\`{a} di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy} \author{Iris Agresti} \affiliation{Dipartimento di Fisica - Sapienza Universit\`{a} di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy} \author{Valerio Di Giulio} \affiliation{Dipartimento di Fisica - Sapienza Universit\`{a} di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy} \author{Leandro Aolita} \affiliation{Instituto de F\'{i}sica, Universidade Federal do Rio de Janeiro, Caixa Postal 68528, Rio de Janeiro, RJ 21941-972, Brazil} \author{Sandro Giacomini} \affiliation{Dipartimento di Fisica - Sapienza Universit\`{a} di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy} \author{Fabio Sciarrino} \email{[email protected]} \affiliation{Dipartimento di Fisica - Sapienza Universit\`{a} di Roma, P.le Aldo Moro 5, I-00185 Roma, Italy} \subsection{Geometric description of the set of classical correlations in the instrumental DAG} As discussed in the main text, the Markov condition for the instrumental DAG implies that any observable probability distribution compatible with it can be decomposed as \begin{equation} \label{deco_app} p(a,b \vert x )=\sum_{\lambda} p(a \vert x, \lambda)p(b \vert a,\lambda)p(\lambda). \end{equation} As we can see, this decomposition defines a convex set (the convex sum over the probabilities for the variable $\Lambda$). 
For discrete variables, this set is in fact a polytope, that is, a convex set with finitely many extremal points that can equivalently be described by finitely many linear inequalities representing the boundaries of the polytope (the non-trivial of which are the instrumental inequalities). As mentioned in the main text, the probabilities appearing in the decomposition above can without loss of generality be expressed as deterministic (response) functions. More precisely, we have that \begin{eqnarray} &&a=D_{a}(x,\lambda), \\ &&b=D_{b}(a,\lambda). \end{eqnarray} That is, for a given value of $\lambda$ the variable $a$ is a deterministic function of $x$; similarly $b$ is a deterministic function of $a$. We can think of the variable $\lambda$ as selecting which function to use in a given run of the physical process (with probability $p(\lambda)$). The number of possible deterministic functions to choose from intrinsically depends on the domain of the variables. Namely, the number of deterministic functions for variable $A$ is given by $\vert a \vert ^{\vert x \vert} $ and for variable $B$ by $\vert b \vert ^{\vert a \vert}$, where $\vert a \vert$ is the number of values taken by the random variable $A$ (similarly for $B$ and $X$). Thus the total number of deterministic functions is given by $\vert a \vert ^{\vert x \vert} \vert b \vert ^{\vert a \vert}$, each of these corresponding to one of the extremal points of the convex set defined by \eqref{deco_app}. 
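As an illustrative cross-check (not part of the original derivation), the counting above can be verified directly for the scenario considered in this Supplementary Information, with binary $a,b$ and three settings $x$: enumerating the $\vert a \vert^{\vert x \vert}\vert b \vert^{\vert a \vert}=2^3\cdot 2^2=32$ deterministic strategies also recovers the classical bound $3$ of the instrumental inequality $\mathcal{I}$ tested below (we assume the usual $\pm 1$ encoding of the outcomes).

```python
from itertools import product

def strategies():
    # a = D_a(x, lam), b = D_b(a, lam): all |a|^|x| * |b|^|a| function pairs,
    # one per extremal point of the classical instrumental polytope
    return [(Da, Db) for Da in product((0, 1), repeat=3)
                     for Db in product((0, 1), repeat=2)]

def inequality_value(Da, Db):
    # I = -<B>_1 + 2<B>_2 + <A>_1 - <AB>_1 + 2<AB>_3, outcomes encoded as +/-1,
    # settings x = 1,2,3 mapped to indices 0,1,2
    A = [1 - 2 * Da[x] for x in range(3)]
    B = [1 - 2 * Db[Da[x]] for x in range(3)]
    return -B[0] + 2 * B[1] + A[0] - A[0] * B[0] + 2 * A[2] * B[2]

verts = strategies()
assert len(verts) == 2 ** 3 * 2 ** 2                   # 32 extremal points
assert max(inequality_value(*v) for v in verts) == 3   # classical bound B_Classical
```

The maximum over the deterministic strategies is exactly the facet bound $B_{\mathrm{Classical}}=3$, since the optimum of a linear functional over a polytope is attained at a vertex.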
Given the list of extremal points of a polytope, we can find its dual description in terms of facets, that is, linear inequalities in the probabilities $p(a,b \vert x)$ that generically can be written as \begin{equation} \sum_{a,b,x} c_{a,b,x} p(a,b \vert x) \leq B_{\mathrm{Classical}}, \end{equation} where $c_{a,b,x}$ are real coefficients and $B_{\mathrm{Classical}}$ corresponds to the maximum value achievable by the extremal points. Further, this dualization procedure can be performed using standard convex optimization software \cite{PORTA}. A given correlation is compatible with the instrumental DAG if and only if it respects all of these inequalities. If any of them is violated we thus have an unambiguous proof that at least one of the causal assumptions is incompatible with the correlation under test. \subsection{Quantum and post-quantum violations of the instrumental inequality} Our interest is to find the optimal quantum violations of the inequality (eq. (6) in the main text) \begin{equation} \mathcal{I}= -\mean{B}_1+2\mean{B}_2+\mean{A}_1-\mean{AB}_1+2\mean{AB}_3 \leq 3. \label{new_instrumental_SI} \end{equation} That is, to find the optimal quantum state and measurements producing correlations according to \begin{equation} \label{PQ} p_{\mathrm{Q}}(a,b \vert x)=\mathrm{Tr} \left[ (M^{x}_{a} \otimes M^{a}_{b}) \varrho \right]. \end{equation} To that aim we have restricted our attention to pure two-qubit states of the form $\ket{\Psi}=\cos{(\theta)}\ket{\uparrow \uparrow}+\sin{(\theta)}\ket{\downarrow \downarrow}$ (because of the convexity of the set of quantum correlations we do not need to consider mixed states in the optimization). As for the measurement observables we have considered qubit projective measurements. 
Thus, the variable $A$ is associated with the outcome of the measurement performed on the first qubit of the pair and, following the prescription of the instrumental DAG, the measurement settings for the second qubit (whose measurement outcomes correspond to the variable $B$) can depend on the outcome of the first measurement. By numerically optimizing (in Mathematica) the measurement settings for each value of $\theta$ we observe that every entangled state ($\theta \neq n \pi/2$) violates the inequality and that the maximal violation is obtained for maximally entangled states ($\theta =\pi/4$) (see Fig. \ref{fig:Vplot}). To prove analytically that every entangled pure state of two qubits (considering without loss of generality $0 < \theta \leq \pi/4$) violates the inequality it is enough to set $O^{x=1}=-(\sigma_X+\sigma_Z)/\sqrt{2}$, $O^{x=2}=\sigma_X$ and $O^{x=3}=\sigma_Z$ for the first qubit (producing the outcome $a$) and $O^{a=0}=\cos{(\theta)}\sigma_Z+\sin{(\theta)}\sigma_X$ and $O^{a=1}=\sigma_Z$ for the second qubit (producing the outcome $b$). The corresponding value of inequality \eqref{new_instrumental_SI} is given by \begin{equation} \label{non_optimal} \frac{1}{4}\left(4+(8+3\sqrt{2})\cos(\theta)-2\sqrt{2}\cos(2\theta)-\sqrt{2}\cos(3\theta)\right), \end{equation} which can easily be seen to be larger than $3$ for any $0 < \theta \leq \pi/4$ (see Fig. \ref{fig:Vplot}). 
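The analytic expression \eqref{non_optimal} can be checked numerically (an added illustration): at $\theta=\pi/4$ it evaluates exactly to $2+\sqrt{2}\approx 3.414$, it equals $3$ at $\theta=0$ (product state), and it stays strictly above $3$ on $(0,\pi/4]$.

```python
import math

def violation(theta):
    # value of the instrumental inequality for the (non-optimal)
    # analytic settings of the text, eq. (non_optimal)
    return 0.25 * (4 + (8 + 3 * math.sqrt(2)) * math.cos(theta)
                   - 2 * math.sqrt(2) * math.cos(2 * theta)
                   - math.sqrt(2) * math.cos(3 * theta))

# maximally entangled state: 2 + sqrt(2) ~ 3.414 > 3
assert abs(violation(math.pi / 4) - (2 + math.sqrt(2))) < 1e-12
# the product state theta = 0 saturates the classical bound
assert abs(violation(0.0) - 3) < 1e-12
# every sampled 0 < theta <= pi/4 violates the bound 3
assert all(violation(k * (math.pi / 4) / 200) > 3 for k in range(1, 201))
```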
We notice that this choice of parameters does not lead to the optimal violation for a given $\theta$ but it is enough for the purpose of showing violations. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{Vplot.pdf} \caption{\textbf{Violation of the instrumental inequality \eqref{new_instrumental_SI} as a function of $\theta$:} every pure entangled state violates the inequality and the maximum is achieved by the maximally entangled state. The blue curve shows the optimal violation obtained numerically by considering projective measurements on both qubits. The black curve shows the (non-optimal) violation following \eqref{non_optimal}. The red line shows the classical limit.} \label{fig:Vplot} \end{figure} Instead of considering quantum resources, we can ask what is the maximum violation of the instrumental inequality provided by non-signalling (post-quantum) resources. To that aim, consider a given non-signalling (NS) distribution $p_{\mathrm{NS}}(a,b \vert x,y)$ respecting the NS constraints \begin{eqnarray} \nonumber & & p_{\mathrm{NS}}(a \vert x)= \sum_{b}p_{\mathrm{NS}}(a,b \vert x,y)=\sum_{b}p_{\mathrm{NS}}(a,b \vert x,y^{\prime}), \\ \nonumber & & p_{\mathrm{NS}}(b \vert y)= \sum_{a}p_{\mathrm{NS}}(a,b \vert x,y)=\sum_{a}p_{\mathrm{NS}}(a,b \vert x^{\prime},y), \end{eqnarray} for all $x,x^{\prime},y,y^{\prime}$. NS correlations can be represented by the black-box process illustrated in Fig. \ref{fig:Bbox}. Just as in the classical and quantum cases, one of the inputs of this black box can depend on the output obtained earlier. That is, in this case we are considering ``wirings'' of probability distributions \cite{Barrett2005}, as illustrated in Fig. \ref{fig:Bbox}. 
The probability distribution for the instrumental scenario obtainable from NS resources via these wirings is thus given by \begin{equation} p(a,b \vert x)= p_{\mathrm{NS}}(a,b \vert x,f(a)), \end{equation} where $f(a)$ is an arbitrary deterministic function with $\vert a \vert$ possible inputs and $\vert a \vert$ possible outputs that describes a wiring from Alice's output to Bob's input. In turn, for a fixed $f(a)$ it is a simple linear program (LP) to find the maximum violation of the instrumental inequality \eqref{new_instrumental_SI}. Since there are finitely many functions $f(a)$ there is also a finite number of LPs one has to solve. Doing that, we obtain that the maximum violation is $\mathcal{I}_{\mathrm{NS}}=5$. The latter is obtained for $f(a)=a$ and with $p_{\mathrm{NS}}(a,b \vert x,y)$ the simple variation of the Popescu-Rohrlich (PR) box \cite{Popescu1994} shown in Table \ref{tabPR}. \begin{table}[hp] $p_{\mathrm{NS}}(a,b \vert x,y)=$ \begin{tabular}{c | c || c c | c c |} $x\backslash y$ & & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{1} \\ \hline & $a\backslash b$ & 0 & 1 & 0 & 1 \\ \hline \hline \multirow{2}{*}{0} & 0 & $\frac{1}{2}$ & $\frac{1}{2}$ & $\frac{1}{2}$ & $\frac{1}{2}$ \\ & 1 & 0 & 0 & 0 & 0 \\ \hline \multirow{2}{*}{1} & 0 & $\frac{1}{2}$ & 0 & 0 & $\frac{1}{2}$ \\ & 1 & 0 & $\frac{1}{2}$ & $\frac{1}{2}$ & 0 \\ \hline \multirow{2}{*}{2} & 0 & $\frac{1}{2}$ & 0 & $\frac{1}{2}$ & 0 \\ & 1 & 0 & $\frac{1}{2}$ & 0 & $\frac{1}{2}$ \\ \hline \end{tabular} \caption{A matrix representation of the non-signalling distribution maximally violating the instrumental inequality. The rows stand for the $x$ and $a$ values while the columns stand for $y$ and $b$. For $x=0$ we see that $A$ always outputs $a=0$ while $B$ produces a random bit $b$. 
Restricting to the inputs $x=1,2$ and transforming them as $x=1 \rightarrow x=0$ and $x=2 \rightarrow x=1$ we have a symmetry of the PR box given by $p(a,b \vert x,y)=\frac{1}{2}\delta_{a \oplus b, (x+1)y}$.}\label{tabPR} \end{table} \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{black_box.png} \caption{\textbf{Black-box representation of a physical process:} a) The black box associated to a probability distribution $p(a,b \vert x,y)$. b) The wired box used to produce the probability $p(a,b \vert x)$.} \label{fig:Bbox} \end{figure} \subsection{Quantifying causal relaxations in the instrumental DAG} Having observed a violation of an instrumental inequality, it is natural to ask by how much we have to relax the underlying causal assumptions of the instrumental model in order to explain the observed correlations. As discussed in the main text, the instrumental DAG subsumes two causal assumptions: i) the independence of the instrumental variable $X$ from the hidden variable $\Lambda$ ($p(x,\lambda)=p(x)p(\lambda)$) and ii) the fact that all the correlations between $X$ and $B$ are mediated via $A$ and $\Lambda$ ($p(b\vert a,x,\lambda)=p(b\vert a,\lambda)$). The graphical language of DAGs immediately allows us to represent the causal relaxations in both cases (see Fig. \ref{fig:relax}). In the first case, we allow for correlations between $X$ and $\Lambda$ mediated by a common ancestor $\Gamma$. In turn, in the second DAG we allow for a direct causal influence (a directed edge) between variables $X$ and $B$. The DAG representation, however, does not specify the strength of the arrows, that is, by how much we have to relax the corresponding causal assumptions. To that aim, we first have to introduce a measure of correlations and/or direct causal influence. 
Below we introduce the measures we consider and, using the techniques developed in \cite{Chaves2015b}, show how they can be estimated from given observed correlations via a linear program. For the first case, the observed probability distribution has a decomposition given by \begin{eqnarray} \label{eq:Mdep} p(a,b \vert x )= & & \sum_{\lambda,\gamma} p(a \vert x, \lambda)p(b \vert a,\lambda)p(\lambda,\gamma \vert x) \\ \nonumber = & &\frac{1}{p(x)} \sum_{\lambda,\gamma} p(a \vert x, \lambda)p(b \vert a,\lambda)p(\lambda\vert \gamma) p(x \vert \gamma)p(\gamma) . \end{eqnarray} Without loss of generality, we model such correlations by introducing an additional hidden variable $\Gamma$ which serves as a common ancestor for $X$ and $\Lambda$. This suggests decomposing this common ancestor as $\gamma=(\gamma_x,\gamma_{\lambda})$. We can assume $x=\gamma_x$ and $\lambda=\gamma_{\lambda}$ (that is, they are deterministic functions) without loss of generality. If the observable variables $a$, $b$ and $x$ are discrete then finitely many different instances of $\gamma$ suffice to fully characterize the common ancestor's influence. \begin{figure}[t!] \centering \includegraphics[width=1\columnwidth]{DAG_V4_new.png} \caption{\textbf{Causal relaxations of the instrumental DAG:} a) Measurement independence relaxation where the hidden variable $\Lambda$ is allowed to be correlated with the instrumental variable $X$. 
b) A relaxation where a direct causal influence from $X$ into $B$ is allowed.} \label{fig:relax} \end{figure} Alternatively, we can represent \eqref{eq:Mdep} as \begin{equation} \mathbf{p} = T \mathbf{q}, \end{equation} where $p(a,b|x)$ is represented by a vector $\mathbf{p}$ with components $\mathbf{p}_j$ labeled by the indexes $j=(a,b,x)$ and the distribution of $\gamma$ is associated with a vector with components $\mathbf{q}_\gamma=p(\Gamma = \gamma)$. In turn, $T$ is a matrix with elements $T_{j,\gamma} = \frac{1}{p(x)}\delta_{x,\gamma_x}\delta_{a,D_a(x,\gamma_{\lambda})} \delta_{b,D_b(a,\gamma_{\lambda})}= \frac{1}{p(x)}\delta_{a,D_a(\gamma_x,\gamma_{\lambda})} \delta_{b,D_b(a,\gamma_{\lambda})}$. Since the correlations between $X$ and $\Lambda$ are mediated by a third variable $\Gamma$, it is reasonable to consider as a measure of correlations the total variation distance given by \begin{equation} \mathcal{M}_{X:\Lambda}=\sum_{x,\lambda} \vert p(x,\lambda)-p(x)p(\lambda) \vert. \end{equation} There are certainly other good measures one could employ; however, the total variation distance has the advantage of providing a non-trivial lower bound, via the Pinsker inequality \cite{Fedotov2003}, to the mutual information between $X$ and $\Lambda$: \begin{equation} I(X:\Lambda) \geq \frac{1}{2}\mathcal{M}_{X:\Lambda}^2\log{\mathrm{e}}, \end{equation} a quantity that is highly non-linear as a function of the probabilities and thus can in practice only be estimated numerically. 
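As an added numerical illustration of the Pinsker bound (the joint distribution below is a hypothetical toy example, not data from the experiment), one can compute $\mathcal{M}_{X:\Lambda}$ and the mutual information directly and verify the inequality:

```python
import math

def tv_and_mi(p_joint):
    # total variation measure M_{X:Lambda} and mutual information I(X:Lambda)
    # in bits, for a joint distribution p_joint[x][lam]
    px = [sum(row) for row in p_joint]
    pl = [sum(col) for col in zip(*p_joint)]
    M = sum(abs(p_joint[x][l] - px[x] * pl[l])
            for x in range(len(px)) for l in range(len(pl)))
    I = sum(p_joint[x][l] * math.log2(p_joint[x][l] / (px[x] * pl[l]))
            for x in range(len(px)) for l in range(len(pl))
            if p_joint[x][l] > 0)
    return M, I

# correlated toy distribution: uniform marginals, M = 4 * 0.15 = 0.6
M, I = tv_and_mi([[0.4, 0.1], [0.1, 0.4]])
assert abs(M - 0.6) < 1e-12
# Pinsker: I(X:Lambda) >= (1/2) M^2 log2(e)
assert I >= 0.5 * M ** 2 * math.log2(math.e)
```

Here $I\approx 0.278$ bits against the Pinsker lower bound $\approx 0.260$ bits, so the bound is non-trivial but not tight.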
In the second case (the one discussed in the main text), we want to quantify the causal influence of the arrow $X \rightarrow B$, that is, the causal model allows for distributions $p(a,b \vert x)$ of the form \begin{equation} p(a,b \vert x )=\sum_{\lambda} p(a \vert x, \lambda)p(b \vert a,x,\lambda)p(\lambda). \end{equation} Similarly to what we have done in the case of measurement dependence, we can represent this model as \begin{equation} \mathbf{p} = W \mathbf{q}, \end{equation} where again $p(a,b|x)$ is represented by a vector $\mathbf{p}$ with components $\mathbf{p}_j$ labeled by the indexes $j=(a,b,x)$ and the distribution of $\lambda$ is associated with a vector with components $\mathbf{q}_\lambda=p(\Lambda = \lambda)$. In turn, $W$ is a matrix with elements $W_{j,\lambda} = \delta_{a,D_a(x,\lambda)} \delta_{b,D_b(a,x,\lambda)}$. Since in this case we do have a direct arrow between the variables, it is more appropriate to consider a measure of causal influence rather than a simple measure of correlations. As discussed in the main text, in this case we consider the measure of causal influence defined as \begin{equation} \mathcal{C}_{X \rightarrow B} = \sup_{x,x^{\prime},a,b} \sum_{\lambda}p(\lambda)\vert p(b\vert a,do(x),\lambda)-p(b\vert a,do(x^{\prime}),\lambda) \vert. \end{equation} Further, we notice that in this case, since the variables $X$ and $B$ have no common ancestors, the intervention (the do operation) is totally equivalent to a simple conditioning, that is, $p(b \vert do(x))=p(b \vert x)$ and thus \begin{equation} \mathcal{C}_{X \rightarrow B} = \sup_{x,x^{\prime},a,b} \sum_{\lambda}p(\lambda)\vert p(b\vert a,x,\lambda)-p(b\vert a,x^{\prime},\lambda) \vert. 
\end{equation} Given some observed probability distribution or given the violation of some instrumental inequality (represented by a given matrix $V$ such that $V \mathbf{p}$ gives the corresponding inequality), our aim is to estimate the minimum relaxation necessary to explain it, in other terms the minimum value of $\mathcal{M}_{X:\Lambda}$ or $\mathcal{C}_{X \rightarrow B}$. That is, we are interested in the following minimization problems: \begin{eqnarray} \underset{ \mathbf{q} \in \mathbbm{R}^{n}}{\textrm{minimize}} & & \quad \mathcal{M}_{X:\Lambda} \label{min_1} \\ \nonumber \textrm{s.t.} & & \quad V T\mathbf{q} = V \mathbf{p} \\ & & \quad \langle \mathbf{1}_n, \mathbf{q} \rangle = 1 \\ \nonumber & & \quad \mathbf{q} \geq \mathbf{0}_n \end{eqnarray} or \begin{eqnarray} \underset{ \mathbf{q} \in \mathbbm{R}^{n}}{\textrm{minimize}} & & \quad \mathcal{C}_{X \rightarrow B} \label{min_2} \\ \nonumber \textrm{s.t.} & & \quad V W\mathbf{q} = V \mathbf{p} \\ & & \quad \langle \mathbf{1}_n, \mathbf{q} \rangle = 1 \\ \nonumber & & \quad \mathbf{q} \geq \mathbf{0}_n. \end{eqnarray} Following the results in \cite{Chaves2015b}, these optimization problems can be cast as linear programs, and thus analytical and computationally efficient solutions can be found. Notice that the same approach can be applied to compute the average causal effect $\mathrm{ACE}_{A \rightarrow B}$ and the direct causal influence $\mathcal{C}_{X \rightarrow B}$ (eqs. (1) and (12) of the main text). \subsection{A simple example showing how post-selection can simulate non-local correlations} Here we illustrate how, in the usual Bell scenario, the post-selection of data given by $y=a$ can be used to simulate non-local and signalling correlations through local hidden variable (LHV) models. 
However, as discussed in the main text, such post-selection is allowed if, instead of testing quantum mechanics against LHV models, we test it against non-local hidden variable models, more specifically a model where the outcome $A$ has a direct causal influence over $Y$ (thus also allowing for measurement dependence). Consider the mixture (with equal weights) of two deterministic local strategies such that $p(a,b,x,y)=\frac{1}{4}p(a,b \vert x,y)$, where for the first strategy we have $a=x$ and $b=0$ and for the second we have $a=x\oplus 1$ and $b=y$. As shown in Table \ref{tab1} below (using a matrix representation), under the post-selection $y=a$ the first strategy is mapped to a distribution such that $p(x,y)=1/2$ if $x\oplus y=0$ ($0$ otherwise), and the second is mapped to a distribution such that $p(x,y)=1/2$ if $x\oplus y=1$ ($0$ otherwise). That is, this post-selection of data generates measurement dependence between the hidden variable (choosing which deterministic strategy to use) and the measurement choices. Using the two mentioned local deterministic strategies and after the post-selection, we have a distribution $p(a,b,x,y)=\frac{1}{4}p(a,b \vert x,y)=\frac{1}{4}\delta_{a,y}\delta_{b,y(x \oplus 1)}$ that achieves the maximum violation of the CHSH inequality ($=4$) and is clearly signalling.
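The maximal CHSH value quoted above can be verified directly from the post-selected behaviour. The following minimal sketch (illustrative Python, not part of the experiment's analysis code) evaluates $S=E(0,0)+E(0,1)+E(1,0)-E(1,1)$ for $p(a,b\vert x,y)=\delta_{a,y}\,\delta_{b,y(x\oplus 1)}$:

```python
# Sanity check: the post-selected behaviour p(a,b|x,y) = [a = y][b = y*(x XOR 1)]
# reaches the algebraic maximum CHSH value of 4.
def p(a, b, x, y):
    return float(a == y and b == y * (x ^ 1))

def E(x, y):
    # Correlator E(x,y) = sum_{a,b} (-1)^(a+b) p(a,b|x,y)
    return sum((-1) ** (a + b) * p(a, b, x, y) for a in (0, 1) for b in (0, 1))

chsh = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(chsh)  # -> 4.0
```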
\begin{table}[hp]
\begin{tabular}{c | c || c c | c c |}
$x\backslash y$ & & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{1} \\
\hline
 & $a\backslash b$ & 0 & 1 & 0 & 1 \\
\hline \hline
\multirow{2}{*}{0} & 0 & $\frac{1}{4}$ & 0 & $\frac{1}{4}$ & 0 \\
 & 1 & 0 & 0 & 0 & 0 \\
\hline
\multirow{2}{*}{1} & 0 & 0 & 0 & 0 & 0 \\
 & 1 & $\frac{1}{4}$ & 0 & $\frac{1}{4}$ & 0 \\
\hline
\end{tabular}
\quad $\xrightarrow{\,y=a\,}$ \quad
\begin{tabular}{c | c || c c | c c |}
$x\backslash y$ & & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{1} \\
\hline
 & $a\backslash b$ & 0 & 1 & 0 & 1 \\
\hline \hline
\multirow{2}{*}{0} & 0 & $\frac{1}{2}$ & 0 & 0 & 0 \\
 & 1 & 0 & 0 & 0 & 0 \\
\hline
\multirow{2}{*}{1} & 0 & 0 & 0 & 0 & 0 \\
 & 1 & 0 & 0 & $\frac{1}{2}$ & 0 \\
\hline
\end{tabular}
\quad \quad , \quad \quad
\begin{tabular}{c | c || c c | c c |}
$x\backslash y$ & & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{1} \\
\hline
 & $a\backslash b$ & 0 & 1 & 0 & 1 \\
\hline \hline
\multirow{2}{*}{0} & 0 & 0 & 0 & 0 & 0 \\
 & 1 & $\frac{1}{4}$ & 0 & 0 & $\frac{1}{4}$ \\
\hline
\multirow{2}{*}{1} & 0 & $\frac{1}{4}$ & 0 & 0 & $\frac{1}{4}$ \\
 & 1 & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\quad $\xrightarrow{\,y=a\,}$ \quad
\begin{tabular}{c | c || c c | c c |}
$x\backslash y$ & & \multicolumn{2}{c|}{0} & \multicolumn{2}{c|}{1} \\
\hline
 & $a\backslash b$ & 0 & 1 & 0 & 1 \\
\hline \hline
\multirow{2}{*}{0} & 0 & 0 & 0 & 0 & 0 \\
 & 1 & 0 & 0 & 0 & $\frac{1}{2}$ \\
\hline
\multirow{2}{*}{1} & 0 & $\frac{1}{2}$ & 0 & 0 & 0 \\
 & 1 & 0 & 0 & 0 & 0 \\
\hline
\end{tabular}
\caption{The two upper matrices show the matrix representation of the probability distribution $p(a,b,x,y)=\frac{1}{4}p(a,b \vert x,y)$ for the local deterministic strategy where $a=x$ and $b=0$, and how it transforms after performing the post-selection $y=a$. The two lower matrices show the same for the deterministic strategy $a=x\oplus 1$ and $b=y$.}\label{tab1}
\end{table}

\subsection{Addressing the $X \rightarrow B$ loophole}

Differently from a usual Bell scenario, the instrumental causal structure is not subject to the locality loophole, since the measurement outcome $A$ has a direct causal influence over the outcome $B$, also implying that in general $X$ and $B$ will be correlated. As we explain below, the instrumental scenario does introduce a new kind of loophole, which can in principle be addressed by relying on interventions. Before discussing the loopholes in the instrumental scenario, we first draw parallels with the usual loopholes and with the meaning of device-independence in a usual Bell scenario. The term device-independence refers to the fact that we can infer properties of our system of interest without relying on any assumptions about the internal mechanism of our measurement devices. However, one still has to rely on some global assumptions, external to the inner workings of the devices. From a causal networks perspective (of which Bell's theorem is a particular case), device-independence refers to the fact that, once we impose the external mechanisms (the causal relations between the variables), our conclusions are independent of the internal mechanisms generating the value of a given variable (linear or non-linear response functions, Gaussian or non-Gaussian distributions, etc.).
For example, in a usual Bell scenario we associate the violation of a Bell inequality with a device-independent proof that the underlying quantum state is entangled (independently of which observable we have measured, of the dimension of the quantum state, etc.). However, to reach this conclusion we have to impose the locality and measurement independence assumptions. The question is then: how can we be sure to fulfill such assumptions in an experiment? To close the locality loophole we can rely on physical arguments, since by invoking special relativity we can ensure the locality assumption by measuring particles that are space-like separated. However, the presence of a possible measurement dependence (correlating the choice of observables and the source preparing the system to be measured) cannot be ruled out on physical grounds, thus introducing a loophole that cannot be closed. Clearly, however, measurement independence is an assumption external to the measurement devices. From the perspective of applications (such as cryptography), device-independence means that we can trust our results even without knowing our measurement apparatus, which can then be treated as a black box. However, we do have to trust that the causal assumptions are fulfilled; otherwise an eavesdropper can easily simulate apparently quantum non-local correlations with classical resources (for example by correlating the source and the measurement choices, as done above). The same is true for the instrumental scenario. As opposed to the locality loophole in Bell's theorem, a direct causal influence between the variables $X$ and $B$ (not mediated by the variable $A$) cannot be ruled out on physical grounds such as special relativity. However, just as in the Bell case, once an instrumental inequality is violated we do not rely on any assumption about the measurement apparatus but only on external/causal assumptions.
For example, in a clinical trial the variable $X$ would stand for the treatment assigned to a patient (a drug or placebo), $A$ would represent the compliance of the patient (taking the treatment or not) and $B$ would be the treatment response. As discussed in the main text, in such a randomized experiment it is natural to assume that $X$ has no direct influence over $B$; however, if the patients discover that they are actually receiving the placebo treatment, this might not be true anymore. That is, this loophole cannot be conclusively closed (unless interventions are made, see below); the best one can do is to design the best possible experiment (e.g., patients are unaware of the received treatment). Something similar happens with the measurement independence assumption in a Bell test. The best one can do is to improve our experimental trust in measurement independence, for instance using cosmic photons \cite{Handsteiner2017} or human randomness \cite{BBT2017}. Finally, as a potential way of ruling out such a loophole one can make use of interventions. Notice that in the instrumental scenario the variables $X$ and $B$ are in general correlated, but all these correlations are mediated by the variable $A$. This means that by intervening on $A$ (thus breaking its incoming causal link from $X$) the variables $X$ and $B$ should become independent, that is, $p(b \vert x, \mathrm{do}(a))=p(b\vert \mathrm{do}(a))$. However, if under interventions on $A$ we still observe correlations between $X$ and $B$, this could only be due to some direct influence of $X$ on $B$, thus allowing us to detect such a causal link. Of course, interventions are device-dependent operations, but at least they provide an experimental way of addressing such loopholes. To illustrate, we give a very simple example.
Suppose we have a probability distribution achieving the maximum algebraic value of \eqref{new_instrumental_SI}, given by $\mathcal{I}_{\mathrm{max}}=7$. In this case we clearly need the direct causal influence $X \rightarrow B$. One way of achieving this value is via a deterministic strategy such that $a=0 \quad \forall x$, and $b=1$ if $x=1$ while $b=0$ if $x=2,3$ (thus $A$ has in fact no causal influence over $B$). One can easily check that $p(b \vert x, \mathrm{do}(a)) \neq p(b\vert \mathrm{do}(a))$, allowing one to conclude that some direct causal influence between $X$ and $B$ is present.

\subsection{The effect of detection inefficiencies}

As with Bell tests, the experimental violation of the instrumental inequality is also subject to experimental imperfections. Here we analyze in detail the effect of detection inefficiencies, that is, the fact that some of the generated photons might not lead to a detection click. In our setup two entangled photons are generated and then measured in a projective basis; that is, once the measurement direction is defined, we have two possible outcomes: $0$ (corresponding to the eigenvalue $+1$) and $1$ (corresponding to the eigenvalue $-1$). In this situation the non-detection of a photon, labelled by $\emptyset$, can be handled by two different approaches. First, one can treat the no-click event as a third possible outcome of the performed measurement. However, notice that the instrumental inequality \eqref{new_instrumental_SI} refers to a scenario where only two outcomes are allowed. Thus, to treat the no-click as a third event we would have to derive new instrumental inequalities taking that into account. The second possibility, the one we follow here, is to remain in a description with two outcomes only and thus treat the no-click by a binning approach, that is, whenever a no-click happens we simply label $\emptyset$ as either $0$ or $1$.
In this case we can simply use inequality \eqref{new_instrumental_SI}. Quite generally, a measurement with two outcomes can be represented by the POVM operators \cite{Chaves2011}
\begin{eqnarray}
\label{eq:POVM}
M_{\uparrow}= \eta_{\uparrow} \ket{\uparrow}\bra{\uparrow} +(1-\eta_{\downarrow}) \ket{\downarrow}\bra{\downarrow}, \\ \nonumber
M_{\downarrow}= \eta_{\downarrow} \ket{\downarrow}\bra{\downarrow} +(1-\eta_{\uparrow}) \ket{\uparrow}\bra{\uparrow},
\end{eqnarray}
where $\eta_{\uparrow}=\eta_{\downarrow}=1$ represents the case of perfect projective measurements and $\ket{\uparrow},\ket{\downarrow}$ represent some arbitrary qubit basis. To model the binning of the no-click event we can set $\eta_{\uparrow}=1$ and $\eta_{\downarrow}=\eta$ (where $\eta$ is the detection efficiency of the photon detector). This corresponds to the case where, whenever we obtain $\emptyset$, we simply relabel it as the outcome $0$ (corresponding to the projection onto $\uparrow$). In other terms, the event $\emptyset$ is binned with the event $0$. Using this modeling and optimizing over the entangled states and measurement bases, we have obtained the minimum detection efficiencies required for a detection-loophole-free violation of inequality \eqref{new_instrumental_SI}. The results are shown in Fig. \ref{fig:etaplot}.
\begin{figure}[t!]
\center
\includegraphics[width=1\columnwidth]{etaplot.pdf}
\caption{\textbf{Critical values above which one can violate the instrumental inequality \eqref{new_instrumental_SI}:} At $(\eta_a,\eta_b)=(0.8285,1)$ the optimal states are maximally entangled, while at $(\eta_a,\eta_b)=(1,0.667)$ we need an almost separable state.}
\label{fig:etaplot}
\end{figure}

We observe that there is an asymmetry between the detection efficiencies $\eta_a$ and $\eta_b$ corresponding to the measurement outcomes $A$ and $B$, respectively. Setting $\eta_a=1$ we obtain the critical value $\eta^{\mathrm{crit}}_b \approx 0.667$, below which no violation of the instrumental inequality is possible anymore. In turn, for $\eta_b=1$ we observe $\eta^{\mathrm{crit}}_a \approx 0.828$. That is, the violation is more robust against detection inefficiencies of the measurement corresponding to the $B$ variable. Instead, if we set $\eta_a=\eta_b=\eta$ we obtain $\eta^{\mathrm{crit}} \approx 0.905$. Further, we observe that for detection inefficiencies in $a$ the optimal states are always maximally entangled. In turn, for detection inefficiencies in $b$ the best violations are obtained by states with reduced entanglement, which in fact tend to separable states as $\eta_b \rightarrow \eta^{\mathrm{crit}}_b$. This is similar to the effect in usual Bell tests \cite{Eberhard1993}, where the best critical detection efficiencies are obtained for states close to separability.

\subsection{The effects of other sources of experimental errors}

As shown in the main text, the expected quantum violation of the instrumental inequality is $\mathcal{I}_{\mathrm{Q}}=1+2\sqrt{2}$, achieved by a maximally entangled state of two qubits.
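As a quick consistency check on the binning model of Eq. \eqref{eq:POVM}, note that $M_{\uparrow}+M_{\downarrow}=\mathbb{I}$ for any pair of efficiencies, so the two elements form a valid POVM at every point $(\eta_a,\eta_b)$ of the scan. A minimal numerical sketch (illustrative Python with NumPy assumed; this is not the optimization code behind Fig. \ref{fig:etaplot}):

```python
# Check that the inefficient-detector binning operators of Eq. (eq:POVM)
# form a valid two-outcome POVM: M_up + M_dn = identity for any efficiencies.
import numpy as np

up = np.array([1.0, 0.0])
dn = np.array([0.0, 1.0])
P_up = np.outer(up, up)   # |up><up|
P_dn = np.outer(dn, dn)   # |dn><dn|

def povm(eta_up, eta_dn):
    M_up = eta_up * P_up + (1 - eta_dn) * P_dn
    M_dn = eta_dn * P_dn + (1 - eta_up) * P_up
    return M_up, M_dn

M_up, M_dn = povm(0.828, 0.667)
print(np.allclose(M_up + M_dn, np.eye(2)))  # -> True
```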
In our experimental photonic setup, in order to prepare one of these states it is necessary to use a Spontaneous Parametric Down-Conversion (SPDC) type-II source. It has been shown \cite{Noise_Cabello_1,Noise_Cabello_3} that these sources suffer from two different kinds of noise: white noise and colored noise. Thus, the experimental state given by the mixture of these two noises can be modeled as
\begin{eqnarray}
\label{eq:noise_source}
\varrho_{noise}= & & v\ket{\phi^{+}}\bra{\phi^{+}}+ \\ \nonumber
& & +(1-v)\left[\lambda\frac{\ket{\phi^{+}}\bra{\phi^{+}}+ \ket{\phi^{-}}\bra{\phi^{-}}}{2}+(1-\lambda)\frac{\mathbb{I}}{4}\right].
\end{eqnarray}
White noise is responsible for the appearance of the maximally mixed state, and colored noise for the appearance of $\ket{\phi^{-}}\bra{\phi^{-}}$. Considering this state and optimizing over the measurement settings for each value of $v$ and $\lambda$, we obtain the results shown in Fig. \ref{fig:Noise_Graphic}. The point marked in this figure represents the experimental violation expected from the characterization of our EPR source, which corresponds to $v\approx0.94$ and $\lambda\approx 0.33$, yielding $\mathcal{I}_{\mathrm{Q}}=3.643$. Since half-wave plates and fibers do not introduce significant noise, we can consider this point as the theoretically expected violation for the experiment based on post-selection of data. Therefore, our experimental result of $\mathcal{I}_{\mathrm{Q}}=3.621 \pm 0.023$ is in almost perfect agreement with the theoretical one.
\begin{figure}[h!]
\center
\includegraphics[width=1\columnwidth]{Noise_Plotting_resub.png}
\caption{\textbf{EPR source noise effect on the inequality parameter:} This graph represents, by the color scale shown in the label, the maximum value of $\mathcal{I}_{\mathrm{Q}}$ for each $v$ and $\lambda$ according to the state in \eqref{eq:noise_source}. The black cross denotes the estimated working point of our EPR source, while the dashed line separates the region of violation (to the right) from the region of no violation of the instrumental inequality (to the left).}
\label{fig:Noise_Graphic}
\end{figure}

\begin{figure}[h!]
\center
\includegraphics[width=1\columnwidth]{etaplot_graph.png}
\caption{\textbf{No-click and EPR source noise effect on the inequality parameter:} Maximum violations reached in a no-click scenario by maximizing over the state and measurement settings, considering the presence of colored and white noise at the working point of our EPR source. The magnitude of the violation is represented by a color scale separated by contours. The dashed line separates the region where a violation occurs (on the right) from the region where $\mathcal{I}_Q$ is lower than the classical bound (on the left).}
\label{fig:Noise_Graphic_NoClick}
\end{figure}

In order to also confirm the Pockels cell experimental results, we need to model the non-perfect efficiency of the crystal implementing $\sigma_{z}$. We thus have to take into account the possibility that the Pockels cell acts as the identity instead of $\sigma_{z}$. This can be modeled by a Pockels cell visibility $v_{pc}$, which can be interpreted as the probability with which the $\sigma_{z}$ transformation is actually applied.
Thus, the case $a=0$ (the case when $\sigma_{z}$ should be applied) can be computed as
\begin{equation}
\label{eq:noise_pc}
p_{\mathrm{Q}}(0,b|x)=\Tr\left[\left(M^{x}_{0}\otimes M^{1}_{b}\right)\varrho^{pc}_{noise}\right],
\end{equation}
where
\begin{equation}
\varrho^{pc}_{noise}=(1-v_{pc})\,\varrho_{noise}+v_{pc}\,\mathbb{I}^{A}\otimes\sigma^{B}_{z}\,\varrho_{noise}\,\mathbb{I}^{A}\otimes\sigma^{B}_{z}.
\end{equation}
All other probabilities should be computed just as in equation \eqref{PQ}, but considering a state given by $\varrho_{noise}$. We have evaluated the crystal visibility experimentally and found $v_{pc}=0.87$. Together with the other sources of error, we find the expected violation to be $\mathcal{I}_{\mathrm{Q}}=3.342$, which is in very good agreement with our experimental result of $\mathcal{I}_{\mathrm{Q}}=3.358 \pm 0.020$. We want to highlight that the previous result can also be obtained by considering a POVM (positive operator-valued measure) instead of modifying the probability as done in \eqref{eq:noise_pc}.
To show this, we can define such a POVM by the following operators:
\begin{eqnarray}
&&E_0^0=v_{pc}\ket{\tilde{0}}\bra{\tilde{0}} + (1-v_{pc})\ket{\slashed{0}}\bra{\slashed{0}}, \label{eq:povm1}\\
&&E_1^0=v_{pc}\ket{\tilde{1}}\bra{\tilde{1}} + (1-v_{pc})\ket{\slashed{1}}\bra{\slashed{1}}, \label{eq:povm2}
\end{eqnarray}
where $\ket{\slashed{0}},\ket{\slashed{1}}$ are the eigenvectors of $\frac{\sigma_z-\sigma_x}{\sqrt{2}}$, while $\ket{\tilde{0}},\ket{\tilde{1}}$ are the eigenvectors of $\frac{\sigma_z+\sigma_x}{\sqrt{2}}$. Replacing the projective measurements $M^0_b$ with Eqs. \eqref{eq:povm1} and \eqref{eq:povm2}, we obtain again $\mathcal{I}_Q =3.342$. Now we consider how the no-click events, combined with the EPR source noise (considered in the post-selection experiment), would affect the maximal violation achievable. To take into account no-click events and the possibility that the maximum value of $\mathcal{I}_{\mathrm{Q}}$ is not always reached by a maximally entangled state for all values of $\eta_{a}$ and $\eta_{b}$, we modified the state in Eq.
\eqref{eq:noise_source} as follows:
\begin{eqnarray}
& &\ket{\phi_{\theta}}\equiv\cos{\theta}\ket{\uparrow\uparrow}+\sin{\theta}\ket{\downarrow\downarrow},\\
& &\ket{\phi_{\theta}^{\bot}}\equiv\sin{\theta}\ket{\uparrow\uparrow}-\cos{\theta}\ket{\downarrow\downarrow},
\end{eqnarray}
\begin{eqnarray}
\varrho_{noise}= & & v\ket{\phi_{\theta}}\bra{\phi_{\theta}}+ \\ \nonumber
& & +(1-v)\left[\lambda\frac{\ket{\phi_{\theta}}\bra{\phi_{\theta}}+ \ket{\phi_{\theta}^{\bot}}\bra{\phi_{\theta}^{\bot}}}{2}+(1-\lambda)\frac{\mathbb{I}}{4}\right].
\end{eqnarray}
Fixing the values of $\lambda$ and $v$ ($0.33$ and $0.94$, respectively) and maximizing the violation over the measurement settings and over $\theta$ in the POVM scenario depicted in Eq. \eqref{eq:POVM}, with $\eta_{\uparrow}^{a,b}=1$ and $\eta_{\downarrow}^{a,b}=\eta_{a,b}$, the contours in Fig. \ref{fig:Noise_Graphic_NoClick} were obtained. Since the parameters $\lambda$ and $v$ reported above were estimated exploiting the accidental-counts correction method, all the previous analyses are valid only when we compare our theoretical expectation values with the experimental ones where such correction was taken into account. Therefore, in order to perform a similar analysis predicting the violation values without correction, we just have to modify the state $\varrho_{noise}$ by adding a further term of white noise:
\begin{eqnarray}
\varrho_{noise}^{noacc} = &&\gamma \biggl[ v\ket{\phi^+}\bra{\phi^+}+ \biggr. \\
&&\biggl.+(1-v) \left( \lambda\frac{\ket{\phi^+}\bra{\phi^+}+\ket{\phi^-}\bra{\phi^-}}{2}+(1-\lambda) \frac{\mathbb{I}}{4}\right) \biggr]+ \nonumber \\
&&+(1-\gamma)\frac{\mathbb{I}}{4}, \nonumber
\end{eqnarray}
where $\gamma$ is a parameter that can be experimentally determined as
\begin{equation}\label{eq:gamma}
\gamma=1 - \frac{\sum_{a,b}Acc_{a,b}}{\sum_{a,b}Coinc_{a,b}}.
\end{equation}
In Eq. \eqref{eq:gamma}, $Acc_{a,b}$ and $Coinc_{a,b}$ are, respectively, the accidental counts and the total coincidence counts observed by the detectors associated with the outcomes $a$ and $b$. From our data we have obtained $\gamma \approx 0.971$, which leads to the following theoretical expectation values:
\begin{eqnarray}
&&\mathcal{I}_Q = 3.537,~~~\textnormal{for the post-selection experiment,}\nonumber \\
&&\mathcal{I}_Q = 3.245,~~~\textnormal{for the active feed-forward experiment.}\nonumber
\end{eqnarray}
These results are again in agreement with the experimental values reported in the main text.
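For concreteness, the estimation of $\gamma$ from raw counts amounts to a one-line computation. The counts below are made up for illustration; they are not our experimental data:

```python
# Illustrative sketch of Eq. (eq:gamma): gamma = 1 - sum(Acc)/sum(Coinc),
# with hypothetical accidental and total coincidence counts per (a, b) pair.
acc = {(0, 0): 14, (0, 1): 11, (1, 0): 13, (1, 1): 12}        # hypothetical accidentals
coinc = {(0, 0): 500, (0, 1): 480, (1, 0): 510, (1, 1): 510}  # hypothetical coincidences

gamma = 1 - sum(acc.values()) / sum(coinc.values())
print(gamma)  # -> 0.975
```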
\begin{thebibliography}{11}
\makeatletter
\providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} }
\providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi }
\providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi }
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Christof}\ and\ \citenamefont {L{\"o}bel}(2009)}]{PORTA}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Christof}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {L{\"o}bel}},\ }\href@noop {} {\enquote {\bibinfo {title} {\texttt{PORTA} -- \texttt{PO}lyhedron \texttt{R}epresentation \texttt{T}ransformation \texttt{A}lgorithm},}\ } (\bibinfo {year} {2009})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Barrett}\ \emph {et~al.}(2005)\citenamefont {Barrett}, \citenamefont {Linden}, \citenamefont {Massar}, \citenamefont {Pironio}, \citenamefont {Popescu},\ and\ \citenamefont {Roberts}}]{Barrett2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Barrett}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Linden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Massar}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pironio}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Roberts}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Nonlocal correlations as an information-theoretic resource},}\ }\href {\doibase 10.1103/PhysRevA.71.022101} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {022101} (\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Popescu}\ and\ \citenamefont {Rohrlich}(1994)}]{Popescu1994}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rohrlich}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum nonlocality as an axiom},}\ }\href {http://dx.doi.org/10.1007/BF02058098} {\bibfield {journal} {\bibinfo {journal} {Foundations of Physics}\ }\textbf {\bibinfo {volume} {24}},\ \bibinfo {pages} {379--385} (\bibinfo {year} {1994})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Chaves}\ \emph {et~al.}(2015)\citenamefont {Chaves}, \citenamefont {Kueng}, \citenamefont {Brask},\ and\ \citenamefont {Gross}}]{Chaves2015b}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Chaves}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kueng}}, \bibinfo {author} {\bibfnamefont {J.~B.}\ \bibnamefont {Brask}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gross}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Unifying framework for relaxations of the causal assumptions in {B}ell's theorem},}\ }\href {\doibase 10.1103/PhysRevLett.114.140403} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\ \bibinfo {pages} {140403} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fedotov}\ \emph {et~al.}(2003)\citenamefont {Fedotov}, \citenamefont {Harremo{\"e}s},\ and\ \citenamefont {Topsoe}}]{Fedotov2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Fedotov}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Harremo{\"e}s}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Topsoe}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Refinements of {P}insker's inequality},}\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {IEEE Transactions on Information Theory}\ }\textbf {\bibinfo {volume} {49}},\ \bibinfo {pages} {1491--1498} (\bibinfo {year} {2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Handsteiner}\ \emph {et~al.}(2017)\citenamefont {Handsteiner}, \citenamefont {Friedman}, \citenamefont {Rauch}, \citenamefont {Gallicchio}, \citenamefont {Liu}, \citenamefont {Hosp}, \citenamefont {Kofler}, \citenamefont {Bricher}, \citenamefont {Fink}, \citenamefont {Leung}, \citenamefont {Mark}, \citenamefont {Nguyen}, \citenamefont {Sanders}, \citenamefont {Steinlechner}, \citenamefont {Ursin}, \citenamefont {Wengerowsky}, \citenamefont {Guth}, \citenamefont {Kaiser}, \citenamefont {Scheidl},\ and\ \citenamefont {Zeilinger}}]{Handsteiner2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Handsteiner}}, \bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont {Friedman}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rauch}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gallicchio}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Liu}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Hosp}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kofler}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bricher}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fink}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Leung}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mark}}, \bibinfo {author} {\bibfnamefont {H.~T.}\ \bibnamefont {Nguyen}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Sanders}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Steinlechner}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ursin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wengerowsky}}, \bibinfo {author} {\bibfnamefont {A.~H.}\ \bibnamefont {Guth}}, \bibinfo {author} {\bibfnamefont {D.~I.}\ \bibnamefont {Kaiser}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Cosmic {B}ell test: Measurement settings from {M}ilky {W}ay stars},}\ }
}\mathbf{h}ref {\mathbf{d}oibase 10.1103/PhysRevLett.118.060401} {\mathbf{b}ibfield {journal} {\mathbf{b}ibinfo {journal} {Phys. Rev. Lett.}\ }\mathbf{t}extbf {\mathbf{b}ibinfo {volume} {118}},\ \mathbf{b}ibinfo {pages} {060401} (\mathbf{b}ibinfo {year} {2017})}\BibitemShut {NoStop} \mathbf{b}ibitem [{BBT()}]{BBT2017} \BibitemOpen \mathbf{h}ref {http://thebigbelltest.org/} {\mathbf{b}ibinfo {journal} {http://thebigbelltest.org/}\ }\BibitemShut {NoStop} \mathbf{b}ibitem [{\perp\!\!\!\perptenamefont {Chaves}\ and\ \perp\!\!\!\perptenamefont {Brask}(2011)}]{Chaves2011} \BibitemOpen \mathbf{b}ibfield {journal} { }\mathbf{b}ibfield {author} {\mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {Rafael}\ \mathbf{b}ibnamefont {Chaves}}\ and\ \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {Jonatan~Bohr}\ \mathbf{b}ibnamefont {Brask}},\ }\mathbf{b}ibfield {title} {\mathbf{e}nquote {\mathbf{b}ibinfo {title} {Feasibility of loophole-free nonlocality tests with a single photon},}\ }\mathbf{h}ref {\mathbf{d}oibase 10.1103/PhysRevA.84.062110} {\mathbf{b}ibfield {journal} {\mathbf{b}ibinfo {journal} {Phys. Rev. A}\ }\mathbf{t}extbf {\mathbf{b}ibinfo {volume} {84}},\ \mathbf{b}ibinfo {pages} {062110} (\mathbf{b}ibinfo {year} {2011})}\BibitemShut {NoStop} \mathbf{b}ibitem [{\perp\!\!\!\perptenamefont {Eberhard}(1993)}]{Eberhard1993} \BibitemOpen \mathbf{b}ibfield {author} {\mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {Philippe~H.}\ \mathbf{b}ibnamefont {Eberhard}},\ }\mathbf{b}ibfield {title} {\mathbf{e}nquote {\mathbf{b}ibinfo {title} {Background level and counter efficiencies required for a loophole-free einstein-podolsky-rosen experiment},}\ }\mathbf{h}ref {\mathbf{d}oibase 10.1103/PhysRevA.47.R747} {\mathbf{b}ibfield {journal} {\mathbf{b}ibinfo {journal} {Phys. Rev. 
A}\ }\mathbf{t}extbf {\mathbf{b}ibinfo {volume} {47}},\ \mathbf{b}ibinfo {pages} {R747--R750} (\mathbf{b}ibinfo {year} {1993})}\BibitemShut {NoStop} \mathbf{b}ibitem [{\perp\!\!\!\perptenamefont {Cabello}\ \mathbf{e}mph {et~al.}(2005)\perp\!\!\!\perptenamefont {Cabello}, \perp\!\!\!\perptenamefont {Feito},\ and\ \perp\!\!\!\perptenamefont {Lamas-Linares}}]{Noise_Cabello_1} \BibitemOpen \mathbf{b}ibfield {author} {\mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {Ad\'an}\ \mathbf{b}ibnamefont {Cabello}}, \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {\'Alvaro}\ \mathbf{b}ibnamefont {Feito}}, \ and\ \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {Ant\'{\i}a}\ \mathbf{b}ibnamefont {Lamas-Linares}},\ }\mathbf{b}ibfield {title} {\mathbf{e}nquote {\mathbf{b}ibinfo {title} {Bell's inequalities with realistic noise for polarization-entangled photons},}\ }\mathbf{h}ref {\mathbf{d}oibase 10.1103/PhysRevA.72.052112} {\mathbf{b}ibfield {journal} {\mathbf{b}ibinfo {journal} {Phys. Rev. A}\ }\mathbf{t}extbf {\mathbf{b}ibinfo {volume} {72}},\ \mathbf{b}ibinfo {pages} {052112} (\mathbf{b}ibinfo {year} {2005})}\BibitemShut {NoStop} \mathbf{b}ibitem [{\perp\!\!\!\perptenamefont {Ca\~nas}\ \mathbf{e}mph {et~al.}(2013)\perp\!\!\!\perptenamefont {Ca\~nas}, \perp\!\!\!\perptenamefont {Barra}, \perp\!\!\!\perptenamefont {G\'omez}, \perp\!\!\!\perptenamefont {Lima}, \perp\!\!\!\perptenamefont {Sciarrino},\ and\ \perp\!\!\!\perptenamefont {Cabello}}]{Noise_Cabello_3} \BibitemOpen \mathbf{b}ibfield {author} {\mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {G.}~\mathbf{b}ibnamefont {Ca\~nas}}, \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {J.~F.}\ \mathbf{b}ibnamefont {Barra}}, \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {E.~S.}\ \mathbf{b}ibnamefont {G\'omez}}, \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {G.}~\mathbf{b}ibnamefont {Lima}}, \mathbf{b}ibinfo {author} {\mathbf{b}ibfnamefont {F.}~\mathbf{b}ibnamefont {Sciarrino}}, \ and\ \mathbf{b}ibinfo {author} 
{\mathbf{b}ibfnamefont {A.}~\mathbf{b}ibnamefont {Cabello}},\ }\mathbf{b}ibfield {title} {\mathbf{e}nquote {\mathbf{b}ibinfo {title} {Detection efficiency for loophole-free bell tests with entangled states affected by colored noise},}\ }\mathbf{h}ref {\mathbf{d}oibase 10.1103/PhysRevA.87.012113} {\mathbf{b}ibfield {journal} {\mathbf{b}ibinfo {journal} {Phys. Rev. A}\ }\mathbf{t}extbf {\mathbf{b}ibinfo {volume} {87}},\ \mathbf{b}ibinfo {pages} {012113} (\mathbf{b}ibinfo {year} {2013})}\BibitemShut {NoStop} \mathbf{e}nd{thebibliography} \mathbf{e}nd{document}
\begin{document} \title{Two versions of a specific natural extension} \author{Karma Dajani} \address{Department of Mathematics\\ Utrecht University\\ Postbus 80.000\\ 3508 TA Utrecht\\ the Netherlands} \email{[email protected]} \author{Charlene Kalle} \address{Department of Mathematics\\ Utrecht University\\ Postbus 80.000\\ 3508 TA Utrecht\\ the Netherlands} \email{[email protected]} \subjclass{Primary, 37A05, 11K55.} \keywords{greedy expansion, natural extension, absolutely continuous invariant measure} \maketitle \begin{abstract} We give two versions of the natural extension of a specific greedy $\beta$-transformation with deleted digits. We use the natural extension to obtain an explicit expression for the invariant measure, equivalent to the Lebesgue measure, of this $\beta$-transformation. \end{abstract} \newtheorem{prop}{Proposition}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{remark}{Remark}[section] \theoremstyle{definition} \newtheorem{defn}{Definition}[section] \newtheorem{ex}{Example}[section] \section{Introduction} The classical greedy $\beta$-transformation, $T_c$, is defined for each real number $\beta >1$ and has been studied extensively. It maps the interval $[ 0, \frac{\lfloor \beta \rfloor}{\beta -1}]$ to itself and is defined as follows. $$T_c x = \left\{ \begin{array}{ll} \beta x -j, & \text{if } x \in [\frac{j}{\beta}, \frac{j+1}{\beta}), \; j \in \{ 0,1, \ldots, \lfloor \beta \rfloor -1 \},\\ \beta x - \lfloor \beta \rfloor, & \text{if } x \in [ \frac{\lfloor \beta \rfloor}{\beta}, \frac{\lfloor \beta \rfloor}{\beta -1}], \end{array} \right. $$ where $\lfloor \beta \rfloor$ denotes the largest integer less than or equal to $\beta$.
The importance of this transformation lies in the fact that it can be used to generate $\beta$-expansions for all elements in the interval $[ 0, \frac{\lfloor \beta \rfloor}{\beta -1}]$ in the following way. Let $x \in [ 0, \frac{\lfloor \beta \rfloor}{\beta -1}]$ and define the sequence of digits $\{ b_n \}_{n \ge 1}$ by setting $$ b_1 = b_1(x) = \left\{ \begin{array}{ll} j, & \text{if } x \in [\frac{j}{\beta}, \frac{j+1}{\beta}), \; j \in \{ 0,1, \ldots, \lfloor \beta \rfloor -1 \},\\ \lfloor \beta \rfloor, & \text{if } x \in [ \frac{\lfloor \beta \rfloor}{\beta}, \frac{\lfloor \beta \rfloor}{\beta -1}], \end{array} \right. $$ and for $n \ge 1$, set $b_n = b_n(x) = b_1(T_c^{n-1}x)$. Then $T_c x = \beta x - b_1$ and inverting this relation gives $x = \frac{b_1}{\beta} + \frac{T_c x}{\beta}$. Repeating this $n$ times leads to $ x = \sum_{i=1}^{n} \frac{b_i}{\beta^i} + \frac{T_c^n x}{\beta^n}$ and for $n \to \infty$, this converges to $$x= \sum_{i=1}^{\infty} \frac{b_i}{\beta^i}.$$ This last expression is called a $\beta$-expansion of $x$ with digits in the set $\{ 0,1, \ldots, \lfloor \beta \rfloor \}$. More specifically, the expansion obtained by iterating the transformation $T_c$ is called the greedy $\beta$-expansion of $x$, since for each $n \ge 1$, if $b_1, \ldots, b_{n-1}$ are known, then $b_n$ is the largest element of the set $\{ 0, 1, \ldots, \lfloor \beta \rfloor \}$, such that $$\sum_{i=1}^{n} \frac{b_i}{\beta^i} \le x.$$ There exists an invariant measure for $T_c$, that is absolutely continuous with respect to the Lebesgue measure. From now on, we will call such a measure an {\it acim} and we will use $\lambda$ to denote the 1-dimensional Lebesgue measure. The acim for $T_c$ has the interval $[0,1)$ as its support. In 1957 R\'enyi proved the existence of such a measure (\cite{Ren1}) and in 1959 and 1960 Gel'fond and Parry gave, independently of one another, an explicit expression of the density of this measure (see \cite{Gel1} and \cite{Par1}). 
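The digit algorithm just described is easy to implement. The following minimal Python sketch (an illustration, not part of the paper; the test point $x=0.7$ and the golden-mean choice of $\beta$ are arbitrary) iterates $T_c$ and checks that the partial sums of the resulting expansion converge to $x$:

```python
import math

def greedy_digits(x, beta, n):
    """First n digits of the greedy beta-expansion of x,
    generated by iterating the classical transformation T_c."""
    fb = math.floor(beta)
    digits = []
    for _ in range(n):
        # b_1(x) = j on [j/beta, (j+1)/beta), and floor(beta) on the last piece
        d = min(math.floor(beta * x), fb)
        digits.append(d)
        x = beta * x - d  # one step of T_c
    return digits

beta = (1 + math.sqrt(5)) / 2  # golden mean, the case studied later in the paper
digits = greedy_digits(0.7, beta, 40)
partial_sum = sum(d / beta ** (i + 1) for i, d in enumerate(digits))
```

Since the remainder after $n$ digits is $T_c^n x/\beta^n$, the partial sums converge to $x$ at a geometric rate.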
This density function $h_c$ is given by $$ h_c: [0,1) \to [0,\infty) : x \mapsto \frac{1}{F(\beta)} \sum_{n=0}^{\infty} \frac{1}{\beta^n} 1_{[0, T_{c}^n 1)}(x), $$ where $F(\beta) = \int_0^1 \sum_{n \,:\, x < T_{c}^n 1} \frac{1}{\beta^n} \, d\lambda(x)$ is a normalizing constant.\\ \indent The greedy $\beta$-transformation with deleted digits is a generalization of the classical greedy $\beta$-transformation. For each $\beta >1$ and each set of real numbers $A= \{ a_0, a_1, \ldots, a_m\}$ satisfying \begin{itemize} \item[(i)] $a_0=0$, \item[(ii)] $a_0 < a_1 < \ldots < a_m$, \item[(iii)] $\max_{1\le j \le m} (a_j - a_{j-1}) \le \frac{a_m}{\beta -1}$, \end{itemize} the greedy $\beta$-transformation with deleted digits is defined from the interval $[ 0, \frac{a_m}{\beta-1} ]$ to itself by $$ T_{dd} \; x = \left\{ \begin{array}{ll} \beta x - a_j, & \text{if } x \in [ \frac{a_j}{\beta}, \frac{a_{j+1}}{\beta} ), \; j \in \{0, \ldots, m-1\},\\ \beta x - a_m, & \text{if } x \in [ \frac{a_m}{\beta}, \frac{a_m}{\beta-1} ]. \end{array} \right. $$ Notice that we get $T_c$ by taking $A=\{ 0,1, \ldots , \lfloor \beta \rfloor \}$. The transformation was first defined in \cite{DK2} and its definition was based on a recursive algorithm given by Pedicini in \cite{Ped1}. In \cite{DK2} the greedy $\beta$-transformations with deleted digits are also defined for digit sets $A$ not satisfying $a_0=0$, but it is shown in the same paper that these transformations are isomorphic to the one given above. So without loss of generality we can assume that $a_0=0$. The transformation $T_{dd}$ can be used to generate $\beta$-expansions with digits in the set $A$ for all elements in the interval $[ 0, \frac{a_m}{\beta-1} ]$ in exactly the same way as described above for the classical transformation.
For $x \in [ 0, \frac{a_m}{\beta-1} ]$, set $$d_1 = d_1(x) = \left\{ \begin{array}{ll} a_j, & \text{if } x \in [ \frac{a_j}{\beta}, \frac{a_{j+1}}{\beta} ), \; j \in \{0, \ldots, m-1\},\\ a_m, & \text{if } x \in [ \frac{a_m}{\beta}, \frac{a_m}{\beta-1} ], \end{array} \right.$$ and for $n \ge 1$, set $d_n = d_n (x) = d_1 (T_{dd}^{n-1}x)$. Then $T_{dd} x =\beta x -d_1$ and for each $x \in [ 0, \frac{a_m}{\beta-1} ]$ we can form the expression \begin{equation}\label{q:greedyexpdd} x = \sum_{n=1}^{\infty} \frac{d_n}{\beta^n}. \end{equation} Expression (\ref{q:greedyexpdd}) is called the greedy $\beta$-expansion with deleted digits of $x$. This expansion is called greedy for the same reasons as before. At each step the digit given by $T_{dd}$ is the largest element of the set $A$ that ``fits in that position of the expansion'', i.e.~if $d_1, \ldots, d_{n-1}$ are already known, then $d_n$ is the largest element of $A$, such that $$ \sum_{i=1}^{n} \frac{d_i}{\beta^i} \le x.$$ Pedicini studied $\beta$-expansions with deleted digits in \cite{Ped1}.\\ \indent In \cite{DK1} it is shown that the transformation $T_{dd}$ admits an acim that is unique and ergodic. The support of this invariant measure is an interval of the form $[0, a_{j_0}-a_{j_0-1} )$, where $$ j_0 = \min \{j: T_{dd}[0,a_j-a_{j-1}) \subseteq [0, a_j-a_{j-1}) \; \lambda \text{ a.e. }, 1 \le j \le m \}.$$ An explicit expression for the density of this measure, however, is given only under certain conditions.
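The greedy property stated above can be checked digit by digit. Here is a hedged Python sketch (illustration only; the digit set $A=\{0,2,3\}$ with the golden mean is the case considered later in the paper, and the test point $x=1.3$ is arbitrary): it iterates $T_{dd}$ and verifies that each digit is the largest element of $A$ that keeps the partial sum below $x$.

```python
def dd_digits(x, beta, A, n):
    """First n digits produced by the greedy transformation with
    deleted digits: on [a_j/beta, a_{j+1}/beta) the digit is a_j."""
    digits = []
    for _ in range(n):
        d = max(a for a in A if a <= beta * x)  # largest admissible digit
        digits.append(d)
        x = beta * x - d  # one step of T_dd
    return digits

beta = (1 + 5 ** 0.5) / 2
A = [0, 2, 3]
x = 1.3
digits = dd_digits(x, beta, A, 30)

# greedy property: d_n is the largest a in A with s_{n-1} + a/beta^n <= x
s, greedy_ok = 0.0, True
for n, d in enumerate(digits, start=1):
    best = max(a for a in A if s + a / beta ** n <= x + 1e-12)
    greedy_ok = greedy_ok and best == d
    s += d / beta ** n
```

The two selection rules agree because $d_n$ is the largest $a \in A$ with $a \le \beta \, T_{dd}^{n-1}x$, and $T_{dd}^{n-1}x = \beta^{n-1}(x - s_{n-1})$.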
In this paper we will construct two versions of the natural extension of the dynamical system $$([0, a_{j_0}-a_{j_0-1} ), \mathcal B([0, a_{j_0}-a_{j_0-1} )), \mu, T),$$ where $ \mathcal B([0, a_{j_0}-a_{j_0-1} ))$ is the Borel $\sigma$-algebra on $[0, a_{j_0}-a_{j_0-1} )$, $T$ is the specific greedy $\beta$-transformation with deleted digits that will be defined below, and $\mu$ is the probability measure on $([0, a_{j_0}-a_{j_0-1} ), \mathcal B([0, a_{j_0}-a_{j_0-1} )))$, obtained by ``pulling back'' the invariant measure that we will define on the natural extension. Notice that the dynamical system $([0, a_{j_0}-a_{j_0-1} ), \mathcal B([0, a_{j_0}-a_{j_0-1} )), \mu, T)$ is not invertible. The natural extension is the smallest invertible dynamical system that contains this system. The original system can be obtained from the natural extension through a surjective, measurable and measure preserving map that preserves the dynamics of both systems. This map is called a factor map and in this paper it will simply be the projection onto the first coordinate. For more information on natural extensions, see \cite{Roh1} or \cite{Cor1}. By defining the right measure on the natural extension, we can obtain an expression for the density function of the invariant measure of the specific transformation $T$. One of the versions given in this paper may serve as a starting point for finding an explicit expression for the invariant measure of the greedy $\beta$-transformations with deleted digits in general.\\ \indent The transformation we will consider is the greedy $\beta$-transformation with deleted digits with $\beta = \frac{1+\sqrt 5}{2}$, the positive solution to the equation $x^2-x-1=0$, and with digit set $A = \{ 0,2,3\}$. The support of the acim is the interval $[0,2)$ and therefore we will define the transformation on this interval only.
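As a quick numerical sanity check (a sketch, not part of the paper), this choice of $\beta$ and $A$ fits the framework above: $\beta^2 = \beta + 1$, and $A = \{0,2,3\}$ satisfies condition (iii), since the largest digit gap is $2$ while $a_m/(\beta-1) = 3/(\beta-1) = 3\beta$.

```python
beta = (1 + 5 ** 0.5) / 2   # positive root of x^2 - x - 1 = 0
A = [0, 2, 3]

root_residual = beta ** 2 - beta - 1            # should vanish
max_gap = max(b - a for a, b in zip(A, A[1:]))  # largest gap between digits
gap_bound = A[-1] / (beta - 1)                  # equals 3*beta for the golden mean
```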
Let the partition $\Delta = \{ \Delta(0), \Delta(2), \Delta(3) \}$ of the interval $[0,2)$ be given by $$ \Delta(0) = \left[ 0, \frac{2}{\beta} \right), \quad \Delta(2) = \left[ \frac{2}{\beta}, \frac{3}{\beta} \right), \quad \Delta(3) = \left[ \frac{3}{\beta}, 2 \right).$$ Then $T: [0,2) \to [0,2)$ is defined by $Tx = \beta x - j$ on $\Delta(j)$, $j \in \{0,2,3\}$. We will use the first section of this paper to fix some notation. In the second and third sections we define two versions of the natural extension of the dynamical system $([0,2), \mathcal B ([0,2)), \mu, T)$. For the classical greedy $\beta$-transformation, versions of the natural extension are given in \cite{DKS} and by Brown and Yin in \cite{Bro1}. The first version we will give is a generalization of the natural extension defined in \cite{DKS}. The second version is defined on a subset of $\mathbb R^2$ and uses the transformation from the first version. We end the paper with a concluding remark. \section{Expansions and fundamental intervals} The transformation $T: [0,2) \to [0,2)$ is defined by setting $Tx = \beta x - j$ on $\Delta(j)$, $j \in \{0,2,3\}$. We can use this transformation to generate expansions of all points in the interval $[0,2)$, with base $\beta$ and digits in the set $\{0,2,3\}$ as was described in the introduction. So for all $x \in [0,2)$ we have the expression (\ref{q:greedyexpdd}). We also write $x =_{\beta} d_1 d_2 d_3 \ldots$, which is understood to mean the same as (\ref{q:greedyexpdd}). Two expansions that will play an important role in what follows are the expansions of the points 1 and $\frac{1}{\beta^3}$. Notice that $\frac{1}{\beta^3} = 2\beta - 3$ would be the image of 2 under $T$ if $T$ were defined on the closed interval $[0,2]$. 
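The greedy digits of the two distinguished points $1$ and $\frac{1}{\beta^3}$ can be computed by iterating $T$ directly. A small Python sketch (illustration only, not from the paper):

```python
beta = (1 + 5 ** 0.5) / 2

def T_digits(x, n):
    """Greedy digits of x for the partition Delta(0), Delta(2), Delta(3)."""
    out = []
    for _ in range(n):
        if x < 2 / beta:
            d = 0
        elif x < 3 / beta:
            d = 2
        else:
            d = 3
        out.append(d)
        x = beta * x - d  # one step of T
    return out

digits_of_one = T_digits(1.0, 11)
digits_of_cube = T_digits(1 / beta ** 3, 11)
```

Both orbits are eventually periodic with period $3$, which produces the repeating digit patterns displayed next.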
We have \begin{eqnarray} 1 &=& \sum_{n=1}^{\infty} \frac{d_n^{(2)}}{\beta^n} = \frac{2}{\beta^2} + \frac{2}{\beta^5} + \frac{2}{\beta^8} + \frac{2}{\beta^{11}} + \ldots =_{\beta} 02\overline{002}, \label{q:exp1}\\ \frac{1}{\beta^3} &=& \sum_{n=1}^{\infty} \frac{d_n^{(3)}}{\beta^n} = \frac{2}{\beta^5} + \frac{2}{\beta^8} + \frac{2}{\beta^{11}} + \ldots =_{\beta} 00\overline{002},\label{q:exp2} \end{eqnarray} where the bars on the right hand side of the previous equations indicate a repeating sequence in the expansions. By the {\it orbit of a point} $x$ {\it under} $T$ we mean the set $ \{ T^n x : n \ge 0\}$. Figure \ref{f:goldenattractor} shows the graph of $T$ and the orbits of the points 1 and $\frac{1}{\beta^3}$. \begin{figure} \caption{The transformation $T$ and the orbits of $1$ and $\frac{1}{\beta^3}$.} \label{f:goldenattractor} \end{figure} Using $T$ and $\Delta$, we can define a sequence of partitions $\{ \Delta^{(n)} \}_{n \ge 1}$ of $[0,2)$ by setting $ \Delta^{(n)} = \bigvee_{i=0}^{n-1} T^{-i} \Delta$. We call the elements of $\Delta^{(n)}$ {\it fundamental intervals of rank} $n$. Since they will have the form $$ \Delta(b_0) \cap T^{-1} \Delta(b_1) \cap \ldots \cap T^{-(n-1)} \Delta(b_{n-1})$$ for some $b_0, b_1, \ldots, b_{n-1} \in \{0,2,3 \}$, we will denote them by $\Delta(b_0 \ldots b_{n-1})$. We will call $\Delta(b_0 \ldots b_{n-1}) \in \Delta^{(n)}$ {\it full} if $T^n \Delta(b_0 \ldots b_{n-1}) = [0,2)$ and {\it non-full} otherwise. Notice that a fundamental interval of rank $n$ specifies the first $n$ digits, $d_1, \ldots, d_n$, of the greedy expansion of the elements it contains. So, $$ \Delta(b_0 \ldots b_{n-1}) = \{ x \in [0,2) : d_i(x) = b_{i-1}, \; 1 \le i \le n \}.$$ For full fundamental intervals, we have the following obvious lemma. \begin{lemma}\label{l:full} Let $\Delta(a_0 \ldots a_{p-1})$ and $\Delta(b_0 \ldots b_{q-1})$ be two full fundamental intervals of rank $p$ and $q$ respectively.
Then the set $\Delta(a_0 \ldots a_{p-1} b_0 \ldots b_{q-1})$ is a full fundamental interval of rank $p+q$. \end{lemma} From the next lemma, it follows that the full fundamental intervals generate the Borel $\sigma$-algebra on $[0,2)$. \begin{lemma}\label{l:generate} For each $n \ge 1$, let $D_n$ be the union of those full fundamental intervals of rank $n$ that are not subsets of any full fundamental interval of lower rank. Then $$ \sum_{n=1}^{\infty} \lambda(D_n) =2.$$ \end{lemma} \begin{proof} Notice that $$ \lambda (D_1) = \lambda (\Delta(0)) = \frac{2}{\beta}, \quad \lambda (D_3) = \lambda (\Delta(200)) = \frac{2}{\beta^3}$$ and for $k \ge 1$, $$ \lambda (D_{3(k+1)}) = \lambda (\Delta(202\underbrace{002\ldots 002}_{k-1 \text{ times}}000) \cup \Delta(300\underbrace{002\ldots 002}_{k-1 \text{ times}}000)) = \frac{4}{\beta^{3(k+1)}}.$$ For all the other values of $n$, $D_n = \emptyset$. So $$ \sum_{n=1}^{\infty} \lambda(D_n) = \frac{2}{\beta} + \frac{2}{\beta^3} + \sum_{k=1}^{\infty} \frac{4}{\beta^{3(k+1)}} = \frac{2}{\beta} + \frac{2}{\beta^3} + \frac{4}{\beta^3} \left[ \frac{1}{1-1/\beta^3}-1 \right] =2. \qedhere $$ \end{proof} \begin{remark}\label{r:generate} {\rm The fact that $\Delta(0)$ is a full fundamental interval of rank 1 allows us to construct full fundamental intervals of arbitrarily small Lebesgue measure. This together with the previous lemma guarantees that we can write each interval in $[0,2)$ as a countable union of full fundamental intervals. Thus, the full fundamental intervals generate the Borel $\sigma$-algebra on $[0,2)$. } \end{remark} \section{Two rows of rectangles} To find an expression for the acim of $T$, we will define two versions of the natural extension of the dynamical system $([0,2), \mathcal B([0,2)), \mu, T)$. For the definition of the first version, we will use a subcollection of the collection of fundamental intervals.
For $n \ge 1$, let $B_n$ denote the collection of all non-full fundamental intervals of rank $n$ that are not a subset of any full fundamental interval of lower rank. The elements of $B_n$ can be explicitly given as follows. $$ B_1 = \{ \Delta(2), \Delta(3) \}, \quad B_2 = \{ \Delta(20), \Delta(30)\}$$ and for $k \ge 1$, \begin{eqnarray*} B_{3k} &=& \{ \Delta (202\underbrace{002 \ldots 002}_{k-1 \text{ times}}), \Delta (300\underbrace{002 \ldots 002}_{k-1 \text{ times}}) \},\\ B_{3k+1} &=& \{ \Delta (202\underbrace{002 \ldots 002}_{k-1 \text{ times}}0), \Delta (300\underbrace{002 \ldots 002}_{k-1 \text{ times}}0) \},\\ B_{3k+2} &=& \{ \Delta (202\underbrace{002 \ldots 002}_{k-1 \text{ times}}00), \Delta (300\underbrace{002 \ldots 002}_{k-1 \text{ times}}00) \}. \end{eqnarray*} Then $$ T \Delta(2) = [0,1), \; T\Delta(3) = [0, 1/ \beta^3 ), \; T^2 \Delta(20) = [0,\beta), \; T^2\Delta(30) = [0, 1/ \beta^2)$$ and for $k \ge 1$ \begin{eqnarray*} T^{3k} \Delta (202\underbrace{002 \ldots 002}_{k-1 \text{ times}}) &=& T^{3k} \Delta (300\underbrace{002 \ldots 002}_{k-1 \text{ times}}) = [0, 1/ \beta),\\ T^{3k+1} \Delta (202\underbrace{002 \ldots 002}_{k-1 \text{ times}}0) &=& T^{3k+1} \Delta (300\underbrace{002 \ldots 002}_{k-1 \text{ times}}0) = [0,1),\\ T^{3k+2} \Delta (202\underbrace{002 \ldots 002}_{k-1 \text{ times}}00) &=&T^{3k+2} \Delta (300\underbrace{002 \ldots 002}_{k-1 \text{ times}}00) = [0,\beta). \end{eqnarray*} For each $n \ge 1$, $B_n$ contains exactly two elements, one which has $b_0=2$ and one for which $b_0 =3$. So for fixed $b_0$, we can speak of the element $\Delta (b_0 \ldots b_{n-1})$ of $B_n$. We will define two sequences of sets $\{ R_{(2,n)} \}_{n \ge 1}$ and $\{ R_{(3,n)} \}_{n \ge 1}$, that represent the images of the elements of $B_n$ under $T^n$ and we will order them in two rows by assigning two extra parameters to each rectangle. 
Let $$ R_0 = [0,2) \times [0,2) \times \{0\} \times \{ 0 \}$$ and for each $n \ge 1$, $j \in \{2,3\}$ define the sets $$ R_{(j,n)} = T^n \Delta(j d^{(j)}_1 \ldots d^{(j)}_{n-1}) \times \Delta(\underbrace{0 \ldots 0}_{n \text{ times}}) \times \{ j \} \times \{ n \},$$ where the digits $d_n^{(j)}$ are the digits from the greedy expansions of $1$ and $\frac{1}{\beta^3}$ as given in (\ref{q:exp1}) and (\ref{q:exp2}). Then $R = R_0 \cup \bigcup_{n=1}^{\infty} (R_{(2,n)} \cup R_{(3,n)})$. Let $\mathcal B_0$ denote the Borel $\sigma$-algebra on $R_0$ and on each of the rectangles $R_{(j,n)}$, let $\mathcal B_{(j,n)}$ denote the Borel $\sigma$-algebra defined on it. We can define a $\sigma$-algebra on $R$ as the disjoint union of all these $\sigma$-algebras, $$ \mathcal B = \coprod_{j,n} \mathcal B_{(j,n)} \amalg \mathcal B_0.$$ Let $\bar \lambda$ be the measure on $(R, \mathcal B)$, given by the Lebesgue measure on each rectangle. Then $\bar \lambda (R) = 32 -14\beta$. If we set $\nu = \frac{1}{32-14\beta} \bar \lambda$, then $(R, \mathcal B, \nu)$ is a probability space.\\ \indent The transformation $\mathcal T$ that we are going to define on this space will map $R_{(j,n)}$ onto $R_{(j,n+1)}$ if $\Delta (j d^{(j)}_1 \ldots d^{(j)}_{n-1}0)$ is non-full; otherwise a part of $R_{(j,n)}$ is mapped onto $R_{(j,n+1)}$ and the other part is mapped into $R_0$. We will define $\mathcal T$ piecewise on these sets.\\ On $R_0$, let $$ \mathcal T (x,y,0,0) = \left\{ \begin{array}{ll} (Tx, \frac{y}{\beta}, 0,0) \in R_0, & \mbox{if } x \in \Delta (0),\\ (Tx,\frac{y}{\beta},j,1) \in R_{(j,1)}, & \mbox{if } x \in \Delta (j), \; j \in \{2,3\} \end{array} \right.
$$ and for $(x,y,j,n) \in R_{(j,n)}$, let $$ \mathcal T (x,y,j,n) = \left\{ \begin{array}{ll} (Tx, y^{(j)},0,0) \in R_0, & \mbox{if } \Delta (j d^{(j)}_1 \ldots d^{(j)}_{n-1}0) \text{ is full}\\ &\; \text{and } x \in \Delta(0),\\ (Tx, \frac{y}{\beta},j,n+1) \in R_{(j,n+1)}, & \mbox{if } \Delta (j d^{(j)}_1 \ldots d^{(j)}_{n-1}0)\\ &\; \text{is non-full or } x \not \in \Delta(0),\\ \end{array} \right. $$ where $$ y^{(j)} =\displaystyle \frac{j}{\beta} + \frac{d_1^{(j)}}{\beta^2}+\frac{d_2^{(j)}}{\beta^3} + \ldots + \frac{d_{n-1}^{(j)}}{\beta^n} + \frac{y}{\beta}. $$ \noindent Figure \ref{f:piles} shows the space $R$. \begin{figure} \caption{The space $R$ consists of all these rectangles.} \label{f:piles} \end{figure} \begin{remark}\label{r:bijective} {\rm Notice that for $k\ge 1$, $\mathcal T$ maps all rectangles $R_{(2,n)}$ for which $n \neq 3k-1$ and all rectangles $R_{(3,n)}$ for which $n \neq 3k+2$ bijectively onto $R_{(2,n+1)}$ and $R_{(3,n+1)}$ respectively. The rectangles $R_{(2,3k-1)}$ and $R_{(3,3k+2)}$ are partly mapped onto $R_{(2,3k)}$ and $R_{(3,3k+3)}$ and partly into $R_0$. From Lemma \ref{l:generate} it follows that $\mathcal T$ is bijective. } \end{remark} Let $\pi_1: R \to [0,2)$ be the projection onto the first coordinate. To show that $(R, \mathcal B, \nu, \mathcal T)$ is a version of the natural extension with $\pi_1$ as a factor map, we need to prove all of the following. \begin{itemize} \item[(i)] $\pi_1$ is a surjective, measurable and measure preserving map from $R$ to $[0,2)$. \item[(ii)] For all $x \in R$, we have $(T \circ \pi_1)(x) = (\pi_1 \circ \mathcal T)(x)$. \item[(iii)] $\mathcal T:R \to R$ is an invertible transformation. \item[(iv)] $\mathcal B = \bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1} (\mathcal B([0,2)))$, where $\bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1} (\mathcal B([0,2)))$ is the smallest $\sigma$-algebra containing the $\sigma$-algebras $\mathcal T^n \pi_1^{-1} (\mathcal B ([0,2)))$ for all $n \ge 0$.
\end{itemize} It is clear that $\pi_1$ is surjective and measurable and that $T \circ \pi_1 = \pi_1 \circ \mathcal T$. Since $\mathcal T$ expands by a factor $\beta$ in the first coordinate and contracts by a factor $\beta$ in the second coordinate, it is also clear that the measure $\nu$ is invariant under $\mathcal T$. Then $\mu = \nu \circ \pi_1^{-1}$ defines a $T$-invariant probability measure on $([0,2), \mathcal B([0,2)))$ and $\pi_1$ is measure preserving. This shows (i) and (ii). The invertibility of $\mathcal T$ follows from Remark \ref{r:bijective}, so that leaves only (iv). To prove (iv) we will have a closer look at the structure of the fundamental intervals and we will introduce some more notation. \vskip .3cm For a fundamental interval, $\Delta(b_0 \ldots b_q)$, the block of digits $b_0 \ldots b_q$ consists of several subblocks, each of which forms a full fundamental interval itself, except for possibly the last subblock. This last subblock will form a full fundamental interval if $\Delta(b_0 \ldots b_q)$ is full and it will form a non-full fundamental interval otherwise. We take these subblocks as small as possible, i.e.~a new subblock starts as soon as the previous subblock forms a full fundamental interval. Therefore, each of these subblocks consists only of the digit $0$ or is the beginning of the greedy expansion of $1$ or $\frac{1}{\beta^3}$, followed by the digit $0$, except possibly for the last subblock. For example, the block of digits from the fundamental interval $\Delta (2000300002002000)$ can be divided into the three subblocks, $200$, $0$ and $300002002000$. To make this subdivision more precise, we need the notion of return time.
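The subdivision just described can also be computed mechanically. The following sketch (an illustration, not from the paper) tracks the image $[0,u)$ of the cylinder digit by digit and starts a new subblock each time the image fills all of $[0,2)$, i.e.\ each time the cylinder built so far is full; applied to the example block $2000300002002000$ it returns the three subblocks listed above.

```python
beta = (1 + 5 ** 0.5) / 2

def subblocks(digits):
    """Split a digit block into minimal subblocks, each of which forms a
    full fundamental interval (except possibly the last one)."""
    right = {0: 2 / beta, 2: 3 / beta, 3: 2.0}  # right endpoints of Delta(j)
    blocks, current, u = [], "", 2.0            # current cylinder image is [0, u)
    for b in digits:
        current += str(b)
        u = beta * min(u, right[b]) - b         # image of the refined cylinder
        if abs(u - 2.0) < 1e-9:                 # cylinder is full: close the subblock
            blocks.append(current)
            current, u = "", 2.0
    if current:                                 # possibly non-full last subblock
        blocks.append(current)
    return blocks

example = subblocks([2, 0, 0, 0, 3, 0, 0, 0, 0, 2, 0, 0, 2, 0, 0, 0])
```

The update rule uses that $T$ maps $[0,u) \cap \Delta(b)$ onto $[0, \beta \min(u, r_b) - b)$, where $r_b$ is the right endpoint of $\Delta(b)$.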
For points $(x,y) \in R_0$ define the {\it first return time to} $R_0$ by $$r_1(x,y) = \min \{ n \ge 1: \mathcal T^n (x,y,0,0) \in R_0 \}$$ and for $k \ge 1$, let the $k${\it -th return time to} $R_0$ be given recursively by $$r_k (x,y) =\min \{ n > r_{k-1} (x,y): \mathcal T^n (x,y,0,0) \in R_0\}.$$ Notice that this notion depends only on $x$, i.e. for all $y, y'$ with $(x,y), (x,y') \in R_0$ and all $k \ge 1$, $r_k(x,y)=r_k (x,y')$. So we can write $r_k (x)$ instead of $r_k (x,y)$. In this sense, for each $x \in [0,2)$ we can talk about the $k$-th return time of this element. If $\Delta (b_0 \ldots b_{q-1}) \in \Delta^{(q)}$, then for all $n \le q$, $\mathcal T^n$ maps the whole set $\Delta(b_0 \ldots b_{q-1}) \times [0,2) \times \{0\} \times \{0\} \subseteq R_0$ to the same rectangle in $R$. So the first several return times to $R_0$ are equal for all elements in $\Delta(b_0 \ldots b_{q-1})$. This means we can talk about the $k$-th return time to $R_0$ of this entire fundamental interval $\Delta(b_0 \ldots b_{q-1})$. Now suppose that $\Delta(b_0 \ldots b_{q-1}) \in \Delta^{(q)}$ is a full fundamental interval. Then there is a $\kappa \ge 1$ and there are numbers $r_i$, $1 \le i \le \kappa$ such that $r_i = r_i(x)$ for all $x \in \Delta (b_0 \ldots b_{q-1})$ and $r_{\kappa}=q$. Put $r_0=0$, then we can divide the block of digits $b_0 \ldots b_{q-1}$ into $\kappa$ subblocks $C_1, \ldots, C_{\kappa}$, where $$ C_i = b_{r_{i-1}} \ldots b_{r_i-1}.$$ So $\Delta(b_0 \ldots b_{q-1}) = \Delta (C_1 \ldots C_{\kappa})$. These subblocks, $C_i$, have the following properties. \begin{itemize} \item[(i)] If $|C_i|$ denotes the length of block $C_i$, then $|C_i| = r_i-r_{i-1}$ for all $i \in \{ 1,2, \ldots, \kappa \}$. \item[(ii)] If $b_{r_i}=0$, then $r_{i+1} = r_i+1$. \item[(iii)] If $b_{r_i} = j \in \{ 2,3\}$, then the block $C_{i+1}$ is equal to $j$ followed by the first part of the greedy expansion of $1$ if $j=2$ and that of $1/ \beta^3$ if $j=3$.
So $ C_{i+1} = j d^{(j)}_1 \ldots d^{(j)}_{|C_{i+1}|-1}$. \item[(iv)] For all $i \in \{ 1, \ldots, \kappa \}$, $\Delta(C_i)$ is a full fundamental interval of rank $|C_i|$. \end{itemize} The above procedure gives for each full fundamental interval $\Delta(b_0 \ldots b_{q-1})$, a subdivision of the block of digits $b_0 \ldots b_{q-1}$ into subblocks $C_1, \ldots ,C_{\kappa}$, such that $\Delta(C_i)$ is a full fundamental interval of rank $| C_i |$ and $\Delta(b_0 \ldots b_{q-1})=\Delta(C_1 \ldots C_{\kappa})$. The next lemma is the last step in proving that the system $(R, \mathcal B, \nu, \mathcal T)$ is a version of the natural extension. \begin{lemma}\label{l:bigvee} The $\sigma$-algebra $\mathcal B$ on $R$ and the $\sigma$-algebra $\bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1}(\mathcal B([0,2)))$ are equal. \end{lemma} \begin{proof} First notice that by Lemma \ref{l:generate}, each of the $\sigma$-algebras $\mathcal B_{(j,n)}$ is generated by the direct products of the full fundamental intervals, contained in the rectangle $R_{(j,n)}$. Also, $\mathcal B_0$ is generated by the direct products of the full fundamental intervals. It is clear that $\bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1}(\mathcal B([0,2))) \subseteq \mathcal B$. For the other inclusion, first take a generating rectangle in $R_0$: $$\Delta(a_0 \ldots a_{p-1}) \times \Delta(b_0 \ldots b_{q-1}) \times \{0 \} \times \{0 \},$$ where $\Delta(a_0 \ldots a_{p-1})$ and $\Delta(b_0 \ldots b_{q-1})$ are full fundamental intervals. For the set $\Delta(b_0 \ldots b_{q-1})$ construct the subblocks $C_1, \ldots, C_{\kappa}$ as before. By Lemma \ref{l:full} $\Delta (C_{\kappa} C_{\kappa-1} \ldots C_1 a_0 \ldots a_{p-1})$ is a full fundamental interval of rank $p+q$.
Then $$ \pi_1^{-1} (\Delta (C_{\kappa} C_{\kappa-1} \ldots C_1 a_0 \ldots a_{p-1})) \cap R_0 \quad$$ $$ \quad = \Delta (C_{\kappa} C_{\kappa-1} \ldots C_1 a_0 \ldots a_{p-1}) \times [0,2) \times \{0 \} \times \{ 0 \}.$$ It is a well-known fact that for each full fundamental interval $\Delta(d_0 \ldots d_{n-1})$ and each $i \in \{ 1, \ldots, n-1 \}$, we have $ T^i \Delta (d_0 \ldots d_{n-1}) = \Delta(d_i \ldots d_{n-1})$. This, together with the definitions of the blocks $C_i$ and the transformation $\mathcal T$ leads to $$ \mathcal T^q ( \pi_1^{-1} (\Delta (C_{\kappa} C_{\kappa-1} \ldots C_1 a_0 \ldots a_{p-1})) \cap R_0) \quad \quad$$ $$ \quad = \Delta (a_0 \ldots a_{p-1}) \times \Delta (C_1 C_2 \ldots C_{\kappa}) \times \{ 0 \} \times \{ 0 \}.$$ So $$ \Delta(a_0 \ldots a_{p-1}) \times \Delta(b_0 \ldots b_{q-1}) \times \{0 \} \times \{0 \} \subseteq \bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1}(\mathcal B([0,2))).$$ Now let $\Delta(a_0 \ldots a_{p-1}) \times \Delta(b_0 \ldots b_{q-1}) \times \{j \} \times \{n \}$ be a generating rectangle for $\mathcal B_{(j,n)}$, for $j \in \{2,3\}$ and $n \ge 1$. So $\Delta(a_0 \ldots a_{p-1})$ and $\Delta(b_0 \ldots b_{q-1})$ are again full fundamental intervals. Notice that $$\Delta(b_0 \ldots b_{q-1}) \subseteq \Delta(\underbrace{0 \ldots 0}_{n \text{ times}}),$$ which means that $q \ge n$. Also $b_i =0$ and thus $r_{i+1}=i+1$ for all $i \in \{ 0, \ldots, n-1\}$. So, if we divide $b_0 \dots b_{q-1}$ into subblocks $C_i$ as before, we get that $C_1 = C_2 = \ldots =C_n =0$, that $\kappa \ge n$ and that $|C_{n+1}| + \ldots + |C_{\kappa}|=q-n$. Consider the set $$ C = \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1} j d^{(j)}_1 \ldots d_{n-1}^{(j)}a_0 \ldots a_{p-1}). $$ We will show the following.\\ Claim: The set $C$ is a fundamental interval of rank $p+q$ and $T^q C=\Delta(a_0 \ldots a_{p-1})$. 
\vskip .2cm \noindent First notice that $$ C= \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1}) \cap T^{n-q} \Delta(j d^{(j)}_1 \ldots d_{n-1}^{(j)}) \cap T^{-q}\Delta(a_0 \ldots a_{p-1}).$$ So obviously, $$ T^q C \subseteq T^q \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1}) \cap T^n \Delta(j d^{(j)}_1 \ldots d_{n-1}^{(j)}) \cap \Delta(a_0 \ldots a_{p-1}). $$ By Lemma \ref{l:full}, $\Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1})$ is a full fundamental interval of rank $q-n$, so $ T^q \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1}) = [0,2)$. Now, by the definition of $R_{(j,n)}$ we have that \begin{equation}\label{q:deltaa} \Delta(a_0 \ldots a_{p-1}) \subseteq T^n \Delta(j d^{(j)}_1 \ldots d_{n-1}^{(j)}), \end{equation} and thus $T^q C \subseteq \Delta(a_0 \ldots a_{p-1})$.\\ For the other inclusion, let $z \in \Delta (a_0 \ldots a_{p-1})$. By (\ref{q:deltaa}), there is an element $y$ in $\Delta(j d^{(j)}_1 \ldots d_{n-1}^{(j)})$, such that $T^n y =z$. And since $ T^{q-n} \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1}) = [0,2)$, there is an $x \in \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1})$ with $T^{q-n}x = y$, so $T^q x = z$. This means that $$ z \in T^q \Delta (C_{\kappa} C_{\kappa-1} \ldots C_{n+1}) \cap T^n \Delta(j d^{(j)}_1 \ldots d_{n-1}^{(j)}) \cap \Delta(a_0 \ldots a_{p-1}).$$ So $ T^q C= \Delta(a_0 \ldots a_{p-1})$ and this proves the claim. \vskip .2cm \noindent Consider the set $D= \pi_1^{-1} (C) \cap R_0$. Then as before, we have $$\mathcal T^{q-n} D = \Delta (j d^{(j)}_1 \ldots d^{(j)}_{n-1}a_0 \ldots a_{p-1}) \times \Delta(C_{n+1} C_{n+2} \ldots C_{\kappa}) \times \{0 \} \times \{0 \}.$$ And after $n$ more steps, \begin{eqnarray*} \mathcal T^q D &=& \Delta(a_0 \ldots a_{p-1}) \times \Delta (\underbrace{00\ldots 0}_{n \text{ times}} C_{n+1} \ldots C_{\kappa}) \times \{ j \} \times \{ n \}\\ &=& \Delta(a_0 \ldots a_{p-1}) \times \Delta(b_0 \ldots b_{q-1}) \times \{j \} \times \{n \}. 
\end{eqnarray*} So, $$\Delta(a_0 \ldots a_{p-1}) \times \Delta(b_0 \ldots b_{q-1}) \times \{j \} \times \{n \} \in \bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1}(\mathcal B([0,2)))$$ and thus we see that $$ \mathcal B = \bigvee_{n=0}^{\infty} \mathcal T^n \pi_1^{-1}(\mathcal B([0,2))). \qedhere$$ \end{proof} \noindent This leads to the following theorem. \begin{theorem} The dynamical system $(R, \mathcal B, \nu, \mathcal T)$ is a version of the natural extension of the dynamical system $([0,2), \mathcal B([0,2)), \mu, T)$, where $\mu = \nu \circ \pi_1^{-1}$ is an invariant probability measure of $T$, equivalent to the Lebesgue measure on $[0,2)$, whose density function, $h:[0,2) \to [0,2)$, is given by \begin{eqnarray*} h(x) &=& \frac{1}{16-7\beta} [(1+2\beta)1_{[0, 1/ \beta^3)}(x) + (2+\beta)1_{[1/ \beta^3, 1/ \beta^2)}(x)\\ && + 2\beta 1_{[1/ \beta^2, 1/ \beta)}(x) + \beta^2 1_{[1/ \beta, 1)}(x) + \beta 1_{[1, \beta)}(x) + 1_{[\beta, 2)}(x)]. \end{eqnarray*} \end{theorem} \begin{proof} The proof follows from Remark \ref{r:generate}, the properties of $\pi_1$ and Lemma \ref{l:bigvee}. \end{proof} \section{Towering the orbits} For the second version of the natural extension, we will define a transformation on a certain subset of $[0,2)\times [0,2\beta)$, using the transformation $\mathcal T$, defined in the previous section. Define for $n \ge 1$ the following intervals: $$ I_{(2,n)} = \left[\frac{2}{\beta^2} + \frac{2}{\beta^2}\sum_{j=1}^{n-1} \frac{1}{\beta^j}, \frac{2}{\beta^2} + \frac{2}{\beta^2}\sum_{j=1}^{n} \frac{1}{\beta^j} \right)$$ and $$ I_{(3,n)} = \left[2 + \frac{2}{\beta^2}\sum_{j=1}^{n-1} \frac{1}{\beta^j}, 2 + \frac{2}{\beta^2}\sum_{j=1}^{n} \frac{1}{\beta^j}\right),$$ where $\displaystyle \sum_{j=1}^{0} \frac{1}{\beta^j} =0$. Let $I_0 = [0,\frac{2}{\beta^2})$. 
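Before moving on, two facts used above admit a quick numerical sanity check (not part of the original argument): the density $h$ from the theorem integrates to $1$ over $[0,2)$, and the right endpoints of $I_{(2,n)}$ and $I_{(3,n)}$ increase to $2$ and $2\beta$ respectively, so that these intervals together with $I_0$ exhaust $[0,2\beta)$. A minimal sketch in Python, with the variable names (`beta`, `breaks`, `coeffs`) chosen here purely for illustration:

```python
# Numerical sanity check for the invariant density and the interval partition,
# with beta the golden ratio (so beta**2 == beta + 1).
beta = (1 + 5 ** 0.5) / 2

# Breakpoints and coefficients of the piecewise-constant density h on [0, 2),
# before normalization by 1/(16 - 7*beta).
breaks = [0.0, beta ** -3, beta ** -2, beta ** -1, 1.0, beta, 2.0]
coeffs = [1 + 2 * beta, 2 + beta, 2 * beta, beta ** 2, beta, 1.0]

# h integrates to 1: the unnormalized integral equals 16 - 7*beta exactly.
integral = sum(c * (b - a) for c, (a, b) in zip(coeffs, zip(breaks, breaks[1:])))
assert abs(integral - (16 - 7 * beta)) < 1e-12

# The right endpoints of I_{(2,n)} and I_{(3,n)} increase to
# 2/beta**2 + 2/beta = 2 and 2 + 2/beta = 2*beta, so together with
# I_0 = [0, 2/beta**2) these intervals partition [0, 2*beta).
geom_tail = (2 / beta ** 2) * sum(beta ** -j for j in range(1, 80))
assert abs((2 / beta ** 2 + geom_tail) - 2) < 1e-12
assert abs((2 + geom_tail) - 2 * beta) < 1e-12
print("density and partition checks pass")
```

Both checks rest only on the golden-ratio identities $\beta^2=\beta+1$ and $1/\beta+1/\beta^2=1$.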
Notice that all of these intervals are disjoint and that $\displaystyle \bigcup_{n=1}^{\infty} I_{(2,n)} = \left[ \frac{2}{\beta^2}, 2 \right)$ and $\displaystyle \bigcup_{n=1}^{\infty} I_{(3,n)} = [2, 2 \beta)$, so that these intervals together with $I_0$ form a partition of $[0,2\beta)$. Now define the subset $I \subseteq [0,2)\times [0,2\beta)$ by $$I = ([0,2) \times I_0) \cup \bigcup_{n=1}^{\infty} (([0, T^{n-1} 1) \times I_{(2,n)}) \cup ([0, T^{n-1} \frac{1}{\beta^3}) \times I_{(3,n)})) $$ and let the function $\phi:I \to R$ be given by $$\phi(x,y) = \left\{ \begin{array}{ll} (x, \beta^2 (y-\frac{2}{\beta^2}-\frac{2}{\beta^2}\sum_{j=1}^{n-1} \frac{1}{\beta^j}), 2,n), & \text{if } y \in I_{(2,n)},\\ (x, \beta^2 (y-2-\frac{2}{\beta^2}\sum_{j=1}^{n-1}\frac{1}{\beta^j}), 3,n), & \text{if } y \in I_{(3,n)},\\ (x, \beta^2 y,0,0), & \text{if } y \in I_0,\\ \end{array} \right.$$ so that in each case $y$ is translated by the left endpoint of its interval and scaled by $\beta^2$. So $\phi$ maps $I_0$ to $R_0$ and for all $n \ge 1$, $j \in \{ 2,3 \}$, $\phi$ maps $I_{(j,n)}$ to $R_{(j,n)}$. Clearly, $\phi$ is a measurable bijection. Define the transformation $\tilde {\mathcal T}: I \to I$ by $$\tilde {\mathcal T} (x,y) = \phi^{-1} (\mathcal T (\phi(x,y))).$$ It is straightforward to check that $\tilde{\mathcal T}$ is invertible. In Figure \ref{f:tower} we see this transformation. \begin{figure} \caption{The transformation $\tilde{\mathcal T}$.} \label{f:tower} \end{figure} Let $\mathcal I$ be the collection of Borel sets on $I$. If $\lambda_2$ is the 2-dimensional Lebesgue measure, then $$ \lambda_2 (I) = 78-46 \beta = \frac{1}{\beta^2} \bar \lambda(R).$$ Define a measure $\tilde{\nu}$ on $(I, \mathcal I)$ by setting $\tilde{\nu} (E) = (\nu \circ \phi)(E)$, for all $E \in \mathcal I$. Then $\phi$ is measure preserving and the systems $(R, \mathcal B, \nu, \mathcal T)$ and $(I, \mathcal I, \tilde{\nu}, \tilde{\mathcal T})$ are isomorphic.
Notice that $\tilde{\nu}$ is the normalized 2-dimensional Lebesgue measure on $(I, \mathcal I)$ and that the projection of $\tilde{\nu}$ on the first coordinate gives $\mu$ again. The following lemma is now enough to show that $(I, \mathcal I, \tilde{\nu}, \tilde{\mathcal T})$ is a version of the natural extension of $([0,2), \mathcal B([0,2)), \mu, T)$. \begin{lemma} The $\sigma$-algebras $\mathcal I$ and $\bigvee_{n=0}^{\infty} \tilde{\mathcal T}^n \pi_1^{-1} (\mathcal B([0,2))) $ are equal. \end{lemma} \begin{proof} It is easy to see that $\bigvee_{n=0}^{\infty} \tilde{\mathcal T}^n \pi_1^{-1} (\mathcal B([0,2))) \subseteq \mathcal I$. For the other inclusion, notice that the direct products of full fundamental intervals contained in $$([0,2) \times I_0) \cup \bigcup_{n=1}^{\infty} ( [0, T^{n-1} 1) \times I_{(2,n)}),$$ generate the restriction of $\mathcal I$ to this set. If $\Delta(b_0 \ldots b_{n-1}) \in \Delta^{(n)}$ is full in $[0, \frac{2}{\beta})$, then the set $2+ \Delta(b_0 \ldots b_{n-1})$ is a subset of $[2, 2\beta)$. So the direct products of full fundamental intervals in $[0, \beta)$ and sets of the form $2+ \Delta(b_0 \ldots b_{n-1})$ contained in $\bigcup_{n=1}^{\infty} ([0, T^{n-1} \frac{1}{\beta^3}) \times I_{(3,n)})$, generate the restriction of $\mathcal I$ to this set. Since $\tilde{\mathcal T}$ is isomorphic to $\mathcal T$, the fact that $$\mathcal I \subseteq \bigvee_{n=0}^{\infty} \tilde{\mathcal T}^n \pi_1^{-1} (\mathcal B([0,2)))$$ now can be proven in a way similar to the proof of Lemma \ref{l:bigvee}. \end{proof} \section{Concluding remark} In the previous sections we have defined two dynamical systems that are versions of the natural extension of the dynamical system $([0,2), \mathcal B([0,2)), \mu, T)$, where $T$ is the greedy $\beta$-transformation with deleted digits for $\beta = \frac{1+\sqrt 5}{2}$ and $A=\{ 0,2,3\}$. 
This allowed us to find the density function of the invariant measure of $T$ that is equivalent to the Lebesgue measure on $[0,2)$. An important feature of the transformation $T$, used in both versions, is that the orbits of the points $1$ and $\frac{1}{\beta^3}$ and the interval $\Delta(3)$ are disjoint. If this were not the case, defining a version of the natural extension of a greedy $\beta$-transformation with three deleted digits would require extra effort. The first version of the natural extension is probably the one that can be adapted most easily to such a situation. \end{document}
\begin{document} \title[Yang-Mills flow for semistable bundles]{Continuity of the Yang-Mills flow on the set of semistable bundles} \dedicatory{Dedicated to Duong H. Phong, with admiration, on the occasion of his 65th birthday.} \author[Sibley]{Benjamin Sibley} \address{Benjamin Sibley, Simons Center for Geometry and Physics \\ State University of New York \\ Stony Brook, NY 11794-3636, USA} \email{[email protected]} \author[Wentworth]{Richard Wentworth} \address{Richard Wentworth, Department of Mathematics \\ University of Maryland \\ College Park, MD 20742, USA} \email{[email protected]} \urladdr{\href{http://www.math.umd.edu/~raw/}{http://www.math.umd.edu/~raw/}} \thanks{ R.W.'s research supported in part by NSF grant DMS-1564373. The authors also acknowledge support from NSF grants DMS-1107452, -1107263, -1107367 ``RNMS: GEometric structures And Representation varieties'' (the GEAR Network).} \keywords{Yang-Mills flow, semistable bundles, Donaldson--Uhlenbeck compactification} \subjclass[2010]{14D20, 14J60, 32G13, 53C07} \maketitle \noindent \section{Introduction} Let $(X,\omega)$ be a compact K\"ahler manifold of dimension $n$ and $(E,h)\to X$ a $C^\infty$ hermitian vector bundle on $X$. The celebrated theorem of Donaldson-Uhlenbeck-Yau states that if $A$ is an integrable unitary connection on $(E,h)$ that induces an $\omega$-slope stable holomorphic structure on $E$, then there is a complex gauge transformation $g$ such that $g(A)$ satisfies the Hermitian-Yang-Mills (HYM) equations. The proof in \cite{UhlenbeckYau:86} uses the continuity method applied to a deformation of the Hermitian-Einstein equations for the metric $h$. The approach in \cite{Donaldson:85,Donaldson:87} deforms the metric using a nonlinear parabolic equation, the \emph{Donaldson flow}. 
Deforming the metric is equivalent to acting by a complex gauge transformation modulo unitary ones, and in this context the Donaldson flow is equivalent (up to unitary gauge transformations) to the Yang-Mills flow on the space of integrable unitary connections. The proof in \cite{Donaldson:87} assumes that $X$ is a projective algebraic manifold (more precisely, that $\omega$ is a Hodge metric) whereas the argument in \cite{UhlenbeckYau:86} does not. The methods of Uhlenbeck-Yau and Donaldson were combined by Simpson \cite{Simpson:88} to prove convergence of the Yang-Mills flow for stable bundles on all compact K\"ahler manifolds. The Yang-Mills flow thus defines a map $\mathcal A^s(E,h)\to M_{\text{\rm\tiny HYM}}^\ast(E,h)$ from the space of smooth integrable connections on $(E,h)$ inducing stable holomorphic structures to the moduli space $M^\ast_{\text{\rm\tiny HYM}}(E,h)$ of irreducible HYM connections.\footnote{The notion of (semi)stability depends on the choice of K\"ahler class $[\omega]$; however, the class will remain fixed throughout, and we shall suppress this dependency from the notation.} Continuity of this map follows by a comparison of Kuranishi slices (see \cite{FujikiSchumacher:87,Miyajima:89}). When the holomorphic bundle $\mathcal E=(E,\bar\partial_A)$ is strictly semistable, then the Donaldson flow fails to converge unless $\mathcal E$ splits holomorphically into a sum of stable bundles (i.e.\ it is \emph{polystable}). If $n=1$ it is still true, however, that the Yang-Mills flow converges to a smooth HYM connection on $E$ for any semistable initial condition. This was proven by Daskalopoulos and R{\aa}de \cite{Daskal:92,Rade:92}. Moreover, the holomorphic structure of the limiting connection is isomorphic to the polystable holomorphic bundle $\mathrm{Gr}(\mathcal E)$ obtained from the associated gradation of the Jordan-H\"older filtration of $\mathcal E$.
For $n\geq 2$, there is an obstruction to a smooth splitting into an associated graded bundle, and $\mathrm{Gr}(\mathcal E)$ may not be locally free. The new phenomenon of bubbling occurs, and one must talk of convergence \emph{in the sense of Uhlenbeck}, that is, away from a singular set of complex codimension at least $2$ (see Theorem \ref{thm:uhlenbeck} below). In \cite{DaskalWentworth:04} (see also \cite{DaskalWentworth:07b}) it was shown for $n=2$ that the Yang-Mills flow converges in the sense of Uhlenbeck to the reflexification $\mathrm{Gr}(\mathcal E)^{\ast\ast}$, which is a polystable bundle. The bubbling locus, which in this case is a collection of points with multiplicities, is precisely the set where $\mathrm{Gr}(\mathcal E)$ fails to be locally free \cite{DaskalWentworth:07a}. The extension of these results to higher dimensions was achieved in \cite{Sibley:15,SibleyWentworth:15}. Here, even the reflexified associated graded sheaf may fail to be locally free, and one must use the notion of an \emph{admissible} HYM connection introduced by Bando and Siu \cite{BandoSiu:94}. Convergence of the flow to the associated graded sheaf for semistable bundles in higher dimensions was independently proven by Jacob \cite{Jacob:15}. In a different direction, a compactification of $M_{\text{\rm\tiny HYM}}^\ast$ was proposed by Tian in \cite{Tian:00} and further studied in \cite{TianYang:02}. This may be viewed as a higher dimensional version of the Donaldson-Uhlenbeck compactification of ASD connections on a smooth manifold of real dimension $4$ (cf.\ \cite{FreedUhlenbeck:84,DonaldsonKronheimer:90}). It is based on a finer analysis of the bubbling locus for limits of HYM connections that is similar to the one carried out for harmonic maps by Fang-Hua Lin \cite{Lin:99}.
More precisely, Tian proves that the top dimensional stratum is rectifiable and calibrated by $\omega$ with integer multiplicities, and as a consequence of results of King \cite{King:71} and Harvey-Shiffman \cite{HarveyShiffman:74}, it represents an analytic cycle. The compactification is then defined by adding ideal points containing, in addition to an admissible HYM connection, the data of a codimension $2$ cycle in an appropriate cohomology class (see Section \ref{sec:uhlenbeck}). At least when $X$ is projective, the space $\widehat M_{\text{\rm\tiny HYM}}$ of ideal HYM connections is a compact topological space (Hausdorff), and the compactification of $M^\ast_{\text{\rm\tiny HYM}}$ is obtained by taking its closure $\overline M_{\text{\rm\tiny HYM}}\subset \widehat M_{\text{\rm\tiny HYM}}$. Under this assumption, we recently showed, in collaboration with Daniel Greb and Matei Toma, that $\overline M_{\text{\rm\tiny HYM}}$ admits the structure of a seminormal complex algebraic space \cite{GSTW:18}. The purpose of this note is to point out the compatibility of this construction with the Yang-Mills flow. For example, in the case of a Riemann surface, the flow defines a continuous deformation retraction of the entire semistable stratum onto the moduli space of semistable bundles. This is precisely what is to be expected from Morse theory (see \cite{AtiyahBott:82}). In higher dimensions, as mentioned above, bubbling along the flow needs to be accounted for. The result is the following. \begin{theorem*} Let $(E,h)$ be a hermitian vector bundle over a compact K\"ahler manifold $(X,\omega)$ with $[\omega]\in H^2(X,\mathbb{Z})$. Let $\mathcal A^{ss}(E,h)$ be the set of semistable integrable unitary connections on $(E,h)$ with the smooth topology (see Section \ref{sec:uhlenbeck}). Then the Yang-Mills flow defines a continuous map \begin{equation}\label{eqn:F} \mathscr F: \mathcal A^{ss}(E,h)\to \widehat M_{\text{\rm\tiny HYM}}(E,h)\ .
\end{equation} In particular, the restriction of $\mathscr F$ gives a continuous map $\overline{\mathcal A^s}(E,h)\to \overline M_{\text{\rm\tiny HYM}}(E,h)$, where \break $\overline{\mathcal A^s}(E,h)\subset \mathcal A^{ss}(E,h)$ is the closure of $\mathcal A^s(E,h)$ in the smooth topology. \end{theorem*} The proof of the Main Theorem is a consequence of the work in \cite{GSTW:18}, with small modifications. For the case of K\"ahler surfaces, this result was claimed in \cite[Thm.\ 2]{DaskalWentworth:07a}. Unfortunately, there is an error in the proof of Lemma 8 of that paper, and hence also in the proof of Theorem 2. The Main Theorem above validates the statement in \cite[Thm.\ 2]{DaskalWentworth:07a}, at least in the projective case. We do not know if the result holds when $X$ is only K\"ahler. The advantage of projectivity is that a twist of the bundle is generated by global holomorphic sections. These behave well with respect to Uhlenbeck limits and provide a link between the algebraic geometry of geometric invariant theory quotients and the analytic compactification. We review this in Section \ref{sec:quot} below. \section{Uhlenbeck limits and admissible HYM connections} \label{sec:uhlenbeck} In this section we briefly review the compactification of $M_{\text{\rm\tiny HYM}}^\ast(E,h)$ by ideal HYM connections. As in the introduction, let $(E,h)$ be a hermitian vector bundle on a compact K\"{a}hler manifold $(X, \omega)$ of dimension $n$, and let $\mathfrak g_E$ denote the bundle of skew-hermitian endomorphisms of $E$. The space $\mathcal A(E,h)$ of $C^\infty$ unitary connections on $E$ is an affine space over $\Omega^1(X,\mathfrak g_E)$, and we endow it with the smooth topology. A connection $A\in\mathcal A(E,h)$ is called \emph{integrable} if its curvature form $F_A$ is of type $(1,1)$. Let $\mathcal A^{1,1}(E,h)$ denote the set of integrable unitary connections on $(E,h)$.
Then $\mathcal A^{1,1}(E,h)\subset\mathcal A(E,h)$ inherits a topology as a closed subset. The locus $\mathcal A^s(E,h)$ of \emph{stable} holomorphic structures is open in $\mathcal A^{1,1}(E,h)$ (cf.\ \cite[Thm.\ 5.1.1]{LubkeTeleman:95}). Under the assumption that $\omega$ is a Hodge metric we shall prove below that the subset $\mathcal A^{ss}(E,h)$ of \emph{semistable} holomorphic structures is also open in $\mathcal A^{1,1}(E,h)$ (see Corollary \ref{cor:open}). We call the contraction $\sqrt{-1}\Lambda F_A$ of $F_A$ with the K\"ahler metric the \emph{Hermitian-Einstein tensor}. It is a hermitian endomorphism of $E$. The key definition is the following (cf.\ \cite{BandoSiu:94} and \cite[Sect.\ 2.3]{Tian:00}). \begin{definition}\label{def:admissible} An \emph{admissible connection} is a pair $(A,S)$ where \begin{enumerate} \item $S\subset X$ is a closed subset of finite Hausdorff $(2n-4)$-measure; \item $A$ is a smooth integrable unitary connection on $E\bigr|_{X\backslash S}$; \item $\int_{X\backslash S} |F_A|^2\, dvol_X < +\infty$; \item $\sup_{X\backslash S} | \Lambda F_A| < +\infty$. \end{enumerate} An admissible connection is called \emph{admissible HYM} if there is a constant $\mu$ such that $\sqrt{-1}\Lambda F_A=\mu\cdot {\bf I}$ on $X\backslash S$. \end{definition} The fundamental weak compactness result is the following. \begin{theorem}[Uhlenbeck \cite{UhlenbeckPreprint}] \label{thm:uhlenbeck} Let $A_i$ be a sequence of smooth integrable connections on $X$ with uniformly bounded Hermitian-Einstein tensors. 
Then for any $p>n$ there is \begin{enumerate} \item a subsequence $($still denoted $A_{i}$$)$, \item a closed subset $S_{\infty}\subset X$ of finite $(2n-4)$-Hausdorff measure, \item a connection $A_\infty$ on a hermitian bundle $E_\infty\to X\backslash S_{\infty}$, and \item local isometries $E_\infty \simeq E$ on compact subsets of $X\backslash S_{\infty}$ \end{enumerate} such that with respect to the local isometries, and modulo unitary gauge equivalence, $A_{i}\to A_\infty$ weakly in $ L^p_{1,loc}(X\backslash S_{\infty})$. \end{theorem} We call the limiting connection $A_\infty$ an \emph{Uhlenbeck limit}. The set $$ S_\infty= \bigcap_{\sigma_0\geq \sigma>0}\Bigl\{ x\in X \mid \liminf_{i\to\infty} \sigma^{4-2n}\int_{B_\sigma(x)}|F_{A_i}|^2 \frac{\omega^n}{n!}\geq \varepsilon_0\Bigr\}\ , $$ where $\sigma_0$ and $\varepsilon_0$ are universal constants depending only on the geometry of $X$, is called the \emph{(analytic) singular set}. For the definition of a gauge-theoretic compactification more structure is needed. This is provided by the following, which is a consequence of work of Tian \cite{Tian:00} and Hong-Tian \cite{HongTian:04}. \begin{proposition} \label{prop:admissible} The Uhlenbeck limit of a sequence of smooth HYM connections on $(E,h)$ is an admissible HYM connection. Moreover, the corresponding singular set $S_\infty$ is a holomorphic subvariety of codimension at least $2$. The same is true for Uhlenbeck limits of sequences along the Yang-Mills flow. \end{proposition} To be more precise, there is a decomposition $S_\infty=|\mathcal C_\infty|\cup S(A_\infty)$, where \begin{equation}\label{eqn:singset} S(A_\infty):=\biggl\{ x\in X \,\biggm|\, \lim_{\sigma\downarrow 0}\sigma^{4-2n}\int_{B_{\sigma}(x)}\left\vert F_{A_\infty}\right\vert ^{2}\frac{\omega^n}{n!}\neq 0\ \biggr\} \end{equation} has codimension $\geq 3$, and $|\mathcal C_\infty|$ is the support of a codimension $2$ cycle $\mathcal C_\infty$.
The cycle appears as the limiting current of the Yang-Mills energy densities, just as in the classical approach of Donaldson-Uhlenbeck in real dimension $4$. This structure motivates the following. \begin{definition}[{\cite[Def.\ 3.15]{GSTW:18}}] \label{def:ideal-connection} An {\em ideal HYM connection} is a triple $(A,\mathcal C, S(A))$ satisfying the following conditions: \begin{enumerate} \item $\mathcal C$ is an $(n-2)$-cycle on $X$; \item the pair $(A, |\mathcal C| \cup S(A))$ is an admissible HYM connection on the hermitian vector bundle $(E,h)\to X$, where $S(A)$ is given as in eq.\ \eqref{eqn:singset}; \item $[{\rm ch}_2(A)]={\rm ch}_2(E)+[\mathcal C]$, in $H^4(X,\mathbb{Q})$. \end{enumerate} \end{definition} Here we have denoted by ${\rm ch}_{2}(A)$ the $(2,2)$-current given by \begin{equation*} {\rm ch}_{2}(A)(\Omega):=-\frac{1}{8\pi^2}\int_{X}\tr(F_{A}\wedge F_{A})\wedge\Omega\ , \end{equation*} for smooth $(2n-4)$-forms $\Omega$. This is well defined by Definition \ref{def:admissible} (3), and in \cite[Prop.\ 2.3.1]{Tian:00} it is shown to be a closed current. It thus defines a cohomology class as above. By \cite{BandoSiu:94}, there is a polystable reflexive sheaf $\mathcal{E}$ extending the holomorphic bundle $(E|_{X\backslash (|\mathcal C|\cup S(A))},\overline{\partial}_{A})$. The singular set $\sing(\mathcal E)$ of $\mathcal E$, that is, the locus where $\mathcal E$ fails to be locally free, coincides with $S(A)$ (see \cite[Thm.\ 1.4]{TianYang:02}). By the proof of \cite[Prop.\ 3.3]{SibleyWentworth:15}, ${\rm ch}_{2}(A)$ represents the class ${\rm ch}_{2}(\mathcal{E})$.
Thus we may alternatively regard an ideal connection as a pair $(\mathcal{E},\mathcal{C})$, where $\mathcal E$ is a reflexive sheaf, $\mathcal{C}$ is a codimension $2$ cycle with $ {\rm ch}_{2}(\mathcal{E})={\rm ch}_{2}(E)+[\mathcal{C}] $, and where the underlying smooth bundle of $\mathcal E$ on the complement of $|\mathcal{C}|\cup \sing(\mathcal E)$ is isomorphic to $E$. See \cite[Sec.\ 3.3]{GSTW:18} for more details. There is an obvious notion of gauge equivalence of ideal HYM connections. The main result is the following. \begin{theorem} \label{thm:ideal-convergence} Assume $\omega$ is a Hodge metric. Let $(A_i, \mathcal{C}_i, S(A_i))\in \widehat M_\text{\rm\tiny HYM}(E,h)$. Then there is a subsequence $($also denoted by $\{i\}$$)$, and an ideal HYM connection $(A_\infty, \mathcal{C}_\infty, S(A_\infty))$ such that $\mathcal{C}_i$ converges to a subcycle of $\mathcal{C}_\infty$, and (up to gauge transformations) $A_i\to A_\infty$ in $C^\infty_{loc}$ on $X\backslash (|\mathcal{C}_\infty|\cup S(A_\infty))$. Moreover, \begin{equation} \label{eqn:currents-converge} {\rm ch}_2(A_i)-\mathcal{C}_i\longrightarrow {\rm ch}_2(A_\infty)-\mathcal{C}_\infty \end{equation} in the mass norm; in particular, also in the sense of currents. \end{theorem} For more details we refer to \cite{Tian:00,TianYang:02,GSTW:18}. \section{The method of holomorphic sections} \label{sec:quot} Admissibility of a connection is precisely the correct analytic notion to make contact with complex analysis. Bando \cite{Bando:91} and Bando-Siu \cite{BandoSiu:94} show that bundles with admissible connections admit sufficiently many local holomorphic sections to prove coherence of the sheaf of $L^2$-holomorphic sections. This local statement only requires the K\"ahler condition. The key difference between the projective vs.\ K\"ahler case is, of course, the abundance of global holomorphic sections. These provide a link between the algebraic and analytic moduli.
They are also well-behaved with respect to limits. The technique described here mimics that introduced by Jun Li in \cite{Li:93}. We henceforth assume $[\omega]\in H^2(X,\mathbb{Z})$. Let $L\to X$ be a complex line bundle with $c_1(L)=[\omega]$. Define the numerical invariant: \begin{equation} \label{eqn:hilbert-poly} \tau_E(m):=\int_X {\rm ch}(E\otimes L^{m}){\rm td}(X) \ . \end{equation} Since $\omega$ is a $(1,1)$-class, $L$ may be endowed with a holomorphic structure $\mathcal L$ making it the ample line bundle defining the polarization of $X$. We also fix a hermitian metric on $L$ with respect to which the Chern connection of $\mathcal L$ has curvature $-2\pi i\omega$. Use the following notation: $\mathcal E(m):= \mathcal E\otimes\mathcal L^m$. The key property we exploit is the following, which is a consequence of Maruyama's boundedness result \cite{Maruyama:81}, as well as the Hirzebruch-Riemann-Roch theorem. \begin{proposition} \label{prop:maruyama} There is $M\geq 1$ such that for all $m\geq M$ and all $A\in \mathcal A^{ss}(E,h)$, if $\mathcal E=(E,\bar\partial_A)$ then the bundle $\mathcal E(m)$ is globally generated and all higher cohomology groups vanish. In particular, $\dim H^0(X, \mathcal E(m))=\tau_E(m)$ for $m\geq M$. \end{proposition} In the following, we shall assume $m$ has been fixed sufficiently large (possibly larger than in the previous proposition). Fix a vector space $V$ of dimension $\tau_E(m)$, and let \begin{equation} \label{eqn:H} \mathcal H=V\otimes \mathcal O_X(-m)\ . \end{equation} The \emph{Grothendieck Quot scheme} $\mathrm{Quot}(\mathcal H, \tau_E)$ is a projective scheme parametrizing isomorphism classes of quotients $\mathcal H \to \mathcal F\to 0$, where $\mathcal F\to X$ is a coherent sheaf with Hilbert polynomial $\tau_E$ \cite{Grothendieck:61,AltmanKleiman:80}.
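To see why $\tau_E$ deserves to be called a Hilbert polynomial, note that $c_1(\mathcal L)=[\omega]$ gives ${\rm ch}(E\otimes L^m)={\rm ch}(E)\,e^{m\omega}$ in cohomology, so the Hirzebruch-Riemann-Roch theorem identifies \eqref{eqn:hilbert-poly} with the Euler characteristic of any holomorphic structure $\mathcal E=(E,\bar\partial_A)$; this standard expansion is recorded here for orientation:

```latex
\begin{equation*}
\tau_E(m) = \chi(X,\mathcal E(m))
          = \int_X {\rm ch}(E)\, e^{m\omega}\, {\rm td}(X)
          = \rank(E)\,\frac{m^n}{n!}\int_X \omega^n + O(m^{n-1})\ .
\end{equation*}
```

In particular $\tau_E$ is a polynomial in $m$ of degree $n$ depending only on the topological type of $E$ and the polarization.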
Proposition \ref{prop:maruyama} states that there is a uniform $m$ such that for every $A\in \mathcal A^{ss}(E,h)$ there is a quotient $\mathcal H\to \mathcal E\to 0$ in $\mathrm{Quot}(\mathcal H, \tau_E)$ with $\mathcal E\simeq (E,\bar\partial_A)$. The next result begins the comparison between Uhlenbeck limits and limits in $\mathrm{Quot}(\mathcal H, \tau_E)$. \begin{proposition} \label{prop:limit_sections} Let $\{A_i\}\subset \mathcal A^{ss}(E,h)$, and suppose $A_i\to A_\infty$ in the sense of Uhlenbeck (Theorem \ref{thm:uhlenbeck}), and assume uniform bounds on the Hermitian-Einstein tensors. Then there is a quotient $\mathcal H\to \mathcal F_\infty\to 0$ in $\mathrm{Quot}(\mathcal H, \tau_E)$ and an inclusion $\mathcal F_\infty\hookrightarrow \mathcal E_\infty$ such that $\mathcal F_\infty^{\ast\ast}\simeq \mathcal E_\infty$. \end{proposition} The proof of this result for sequences of HYM connections is in \cite[Sec.\ 4.2]{GSTW:18}, but the proof there works as well under the weaker assumption of bounded Hermitian-Einstein tensor. Indeed, the first key point is the application of the Bochner formula to obtain uniform bounds on $L^2$-holomorphic sections. The precise statement is that if $s\in H^0(X,\mathcal E(m))$ then there is a constant $C$ depending only on the geometry of $X$, $m$, and the uniform bound on the Hermitian-Einstein tensor, such that $\sup_X|s|\leq C \Vert s\Vert_{L^2}$. In this way one can extract a convergent subsequence of orthonormal sections to obtain a map $q_\infty: \mathcal H \to \mathcal E_\infty$. The limiting sections may no longer form a basis of $H^{0}(X,\mathcal E_{\infty}(m))$, nor necessarily do they generate the fibers of $\mathcal E_\infty$. Remarkably, though, it is still the case that the image sheaf $\widetilde{\mathcal E}_\infty\subset\mathcal E_\infty$ of $q_\infty$ has rank equal to $\rank(E)$ and Hilbert polynomial $\tau_E$ (for this one may have to twist with a further power of $\mathcal L$).
In fact, the quotient sheaf $\mathcal{T}_{\infty }=\mathcal{E}_{\infty }/ \widetilde{\mathcal{E}}_{\infty }$ turns out to be supported in complex codimension $2$ (the first Chern class is preserved under Uhlenbeck limits). Hence, in particular, $(\widetilde{\mathcal E}_\infty)^{\ast\ast}\simeq \mathcal E_\infty$. See \cite[proof of Lemma 4.3]{GSTW:18} for more details. The second ingredient in the proof is the fact that $\mathrm{Quot}(\mathcal H, \tau_E)$ is compact in the analytic topology. Hence, after passing to a subsequence, we may assume the $q_i$ converge. Convergence in $\mathrm{Quot}(\mathcal H, \tau_E)$ means the following: there is a convergent sequence of quotients $\mathcal F_i\to \mathcal F_\infty$ and isomorphisms $f_i$ making the following diagram commute: \begin{equation*} \begin{split} \xymatrix{ \mathcal H\ar@{=}[d] \ar[r] & \mathcal F_i \ar[d]^{f_i} \ar[r] & 0 \\ \mathcal H\ar[r]^{q_i}&\mathcal E_i\ar[r] & 0 } \end{split} \end{equation*} The proof is completed, as in \cite[Lemma 4.4]{GSTW:18}, by showing that $\mathcal F_\infty\simeq \widetilde{\mathcal E}_\infty$. The crucial point in showing this is that the two sheaves are quotients of $\mathcal H$ with the same Hilbert polynomial. \section{Analytic cycles and the blow-up set} \label{sec:cycle} In the case of the stronger notion of convergence of Uhlenbeck-Tian, we go one step further and identify the cycle associated to the sheaf $\mathcal F_{\infty}$ with the cycle $\mathcal{C}_{\infty}$ that arises from bubbling of the connections. The candidate is the following: for any torsion-free sheaf $\mathcal F\to X$, define a codimension $2$ cycle $\mathcal{C}_{\mathcal{F}}$ from the top dimensional stratum of the support of $\mathcal F^{\ast\ast}/\mathcal F$. See for example \cite[Sec.\ 2.5.3]{GSTW:18}.
\begin{proposition} \label{prop:cycles} Let $A_{i}$ be a sequence of connections as in Proposition \ref{prop:limit_sections}, and suppose furthermore that they converge to an ideal connection $(A_{\infty },\mathcal{C}_{\infty },S(A_{\infty }))$ in the sense of Theorem \ref{thm:ideal-convergence}. Let $\mathcal H\to \mathcal F_\infty$ be as in the statement of Proposition \ref{prop:limit_sections}. Then $\mathcal{C}_{\infty }= \mathcal{C}_{\mathcal{F}_{\infty }}$. \end{proposition} The proof of this result follows from the discussion in \cite[Sec.\ 4.3]{GSTW:18} (see in particular Prop.\ 4.7). Although the result there is stated for sequences of HYM connections, this is required only to obtain the same sup-norm inequality on the global sections of $\mathcal{E}(m)$ that was used to obtain Proposition \ref{prop:limit_sections}. Thus, the uniform bound on the Hermitian-Einstein tensor suffices. Let us sketch the argument. The first key point is that $[\mathcal{C}_{\mathcal{F}_{\infty }}]=[\mathcal{C}_{\infty }]$ in rational cohomology. Indeed, the connection $A_{\infty }$ is defined on the smooth locus of the sheaf $\mathcal{F}_{\infty }^{\ast \ast }$ and is smooth there, and ${\rm ch}_{2}(A_{\infty })$ defines a closed current (see Section \ref{sec:uhlenbeck}). It then follows as in the proof of \cite[Prop.\ 3.3]{SibleyWentworth:15} that $ [{\rm ch}_{2}(A_{\infty })]={\rm ch}_{2}(\mathcal{F}_{\infty }^{\ast \ast })$.
The exact sequence \begin{equation*} 0\longrightarrow \mathcal{F}_{\infty }\longrightarrow \mathcal{F}_{\infty }^{\ast \ast }\longrightarrow \mathcal{T}_{\infty }\longrightarrow 0 \end{equation*} implies that \begin{equation*} [{\rm ch}_{2}(A_{\infty })]={\rm ch}_{2}(\mathcal{F}_{\infty })+{\rm ch}_{2}(\mathcal{T}_{\infty })={\rm ch}_{2}(E)+[\mathcal{C}_{\mathcal{F}_{\infty }}]\ , \end{equation*} where in the second equality we have used the fact that the Chern classes of $\mathcal{F}_{\infty }$ are the same as those of $E$, and Proposition 3.1 of \cite{SibleyWentworth:15} (see also the latest arxiv version of this reference). By the convergence of the currents in Theorem \ref{thm:ideal-convergence} and Chern-Weil theory, we have \begin{equation*} {\rm ch}_{2}(E)+[\mathcal{C}_{\infty }]=[{\rm ch}_{2}(A_{i})]+[\mathcal{C}_{\infty }]=[{\rm ch}_{2}(A_{\infty })]. \end{equation*} Combining these two equalities gives the statement. What remains to be shown is that given any irreducible component $Z\subset \supp(\mathcal{T}_{\infty })$, for the associated multiplicity $m_{Z}$ as defined in \cite[Sec.\ 2.5.3]{GSTW:18}, we have an equality \begin{equation*} m_{Z}=\lim_{i\rightarrow \infty }\frac{1}{8\pi ^{2}}\int_{\Sigma }\bigl(\tr(F_{A_{i}}\wedge F_{A_{i}})-\tr(F_{A_{\infty }}\wedge F_{A_{\infty }})\bigr)\ , \end{equation*} where $\Sigma $ is a generic real $4$-dimensional slice intersecting $Z$ transversely in a single smooth point. The point is that if $Z$ is contained in the support $|\mathcal{C}_{\infty }|$ then it must be equal to one of the irreducible components. In this case, the number on the right hand side of the equality above is exactly the multiplicity of this component in the cycle $\mathcal{C}_{\infty }$, and otherwise this number is zero (see \cite[Lemma 3.13]{GSTW:18} and \cite[Lemma 4.1]{SibleyWentworth:15} and again note that the proof is completely general).
If the equality holds, this number cannot be zero, since $m_{Z}$ is strictly positive by definition, and therefore $Z$ must be a component of $\mathcal{C}_{\infty }$, and the multiplicities agree. Since $\mathcal{C}_{\infty }$ and $\mathcal{C}_{\mathcal{F}_{\infty }}$ are equal in cohomology, there can be no other irreducible components of $\mathcal{C}_{\infty }$, and so $\mathcal{C}_{\infty }=\mathcal{C}_{\mathcal{F}_{\infty }}$. For more details, see the proof of \cite[Prop.\ 4.7]{GSTW:18}. \begin{remark} It should be emphasized that Proposition \ref{prop:cycles} does not claim that the support of $\mathcal F_\infty^{\ast\ast}/\mathcal F_\infty$ coincides with the full bubbling locus $|\mathcal{C}_\infty|\cup S(A_\infty)$; only the top dimensional strata are necessarily equal. This differs from what occurs, for example, along the Yang-Mills flow (see \cite[Thm.\ 1.1]{SibleyWentworth:15}). It would be interesting to understand the behavior of the higher codimensional pieces from this perspective. There are recent examples due to Chen-Sun indicating that this should be subtle (see \cite{ChenSun:18b,ChenSun:18a}). \end{remark} \section{A remark on the topology of the Quot scheme} In this section we consider the relationship between the Quot scheme $\mathrm{Quot}(\mathcal H,\tau_E)$ discussed in Section \ref{sec:quot}, and the infinite dimensional space $\mathcal A^{1,1}(E,h)$ of integrable connections. We are thus interested in the points in $\mathrm{Quot}(\mathcal H,\tau_E)$ where the quotient sheaf is locally free and has underlying $C^\infty$ bundle isomorphic to $E$. Such a point corresponds to an isomorphism class of holomorphic structures on $E$, or equivalently, to a complex gauge orbit in $\mathcal A^{1,1}(E,h)$. Conversely, a connection $A\in \mathcal A^{1,1}(E,h)$ gives a holomorphic bundle which, provided $m$ is sufficiently large, can be realized as a quotient.
We wish to show that this correspondence between complex gauge orbits in $\mathcal A^{1,1}(E,h)$ and points in $\mathrm{Quot}(\mathcal H,\tau_E)$ can be made continuous in the respective topologies. Since the complex gauge orbit space in $\mathcal A^{1,1}(E,h)$ is non-Hausdorff in general, we will lift to a map from open sets in $\mathcal A^{1,1}(E,h)$ itself. This gives rise to the following notion. Let $U\subset \mathcal A^{1,1}(E,h)$. We call $\sigma : U\to \mathrm{Quot}(\mathcal H,\tau_E)$ a \emph{classifying map} if the quotient $\sigma(A)$ is a holomorphic bundle isomorphic to $(E,\bar\partial_A)$. Recall from Section \ref{sec:quot} that the bundle $\mathcal H$ depends on a sufficiently large choice of $m$, which we omit from the notation. Then the result is the following. \begin{theorem} \label{thm:classifying} Fix $A_0\in \mathcal A^{1,1}(E,h)$. Then for $m$ sufficiently large (depending on $A_0$), there is an open neighborhood $U\subset\mathcal A^{1,1}(E,h)$ of $A_0$ and a continuous classifying map $\sigma : U\to \mathrm{Quot}(\mathcal H,\tau_E)$. On $\mathcal A^{ss}(E,h)$, the twist $m$ may be chosen uniformly. \end{theorem} Throughout the proof, as in Section \ref{sec:quot}, we fix a hermitian structure $h_L$ on $L$ such that the curvature of the Chern connection of $(\mathcal L, h_L)$ defines a K\"ahler metric $\omega$ on $X$. \begin{proof} Let $d(m,n)=\tau_E(m)\cdot \dim H^0(X,\mathcal L^n)$. For $n\gg 1$, $\mathrm{Quot}(\mathcal H,\tau_E)$ is embedded in the Grassmannian $G(d(m,n), \tau_E(m+n))$ of $\tau_E(m+n)$-dimensional quotients of $\mathbb{C}^{d(m,n)}$. More precisely, suppose $q: \mathcal H\to \mathcal E$ is a point in $\mathrm{Quot}(\mathcal H,\tau_E)$, and let $\mathcal K=\ker q$. There is a sufficiently large $n$ (uniform over the whole Quot scheme) such that \begin{equation} \label{eqn:vanishing} H^i(X,\mathcal K(m+n))=H^i(X,\mathcal E(m+n))=\{0\}\ ,\ i\geq 1 \end{equation} (cf.\ \cite[Lemmas 1.7.2 and 1.7.6]{HuybrechtsLehn:10}).
We therefore have a short exact sequence: \begin{equation} \label{eqn:quotient} 0\longrightarrow H^0(X, \mathcal K(m+n))\longrightarrow H^0(X, \mathcal H(m+n))\longrightarrow H^0(X, \mathcal E(m+n))\longrightarrow 0\ . \end{equation} Since the middle term has dimension $d(m,n)$, and by \eqref{eqn:vanishing} the last term has dimension $\tau_E(m+n)$, we obtain a point in $G(d(m,n), \tau_E(m+n))$. For $n$ sufficiently large, this is an embedding. Given $A_0\in \mathcal A^{1,1}(E,h)$, $\mathcal E_0=(E,\bar\partial_{A_0})$, choose $m$ such that $\mathcal E_0(m)$ is globally generated and has no higher cohomology. Set $V_0=H^0(X, \mathcal E_0(m))$. Then $\dim V_0=\tau_E(m)$. Given $s\in V_0$, $f\in \mathcal O_X$, the map $${\rm ev} : V_0\otimes \mathcal O_X\longrightarrow \mathcal E_0(m) : s\otimes_{\mathbb{C}} f\mapsto fs $$ realizes $\mathcal E_0(m)$ as a quotient of $V_0\otimes \mathcal O_X$. After twisting back by $\mathcal O_X(-m)$, we have a quotient $\mathcal H\to \mathcal E_0\to 0$. For $A\in U$ (the open set $U$ remains to be specified), in order to realize $\mathcal E =(E,\bar\partial_A)$ as a quotient of $\mathcal H$, it suffices to give an isomorphism of $V_0$ with $V_A=\ker\bar\partial_A\subset \Gamma(E\otimes L^m)$, for then $\mathcal E$ is obtained through this isomorphism followed by evaluation ${\rm ev}$ as above. Note here that we assume $U$ has already been chosen sufficiently small so that $\mathcal E(m)$ is globally generated and has no higher cohomology. This is the first condition on $U$, and it can be arranged by semicontinuity of cohomology. On $\Gamma(E\otimes L^m)$ we have an $L^2$-inner product. Since $V_A$ and $V_0$ are subspaces of $\Gamma(E\otimes L^m)$, we can define a map $\pi_A: V_A\to V_0$ by orthogonal projection. Let us write this explicitly. For $s\in V_A$, let $\pi_A(s)=s_0=s+u_s$, where $u_s\in V_0^\perp$. We require $\bar\partial_{A_0}s_0=0$, or $\bar\partial_{A_0}(s+u_s)=0$.
If we write $\bar\partial_A=\bar\partial_{A_0}+a$, $a\in \Omega^{0,1}(X,\mathfrak g_E)$, then the above is $\bar\partial_{A_0}u_s=as$. Let $G_0$ be the Green's operator for the $\bar\partial_{A_0}$-Laplacian acting on $\Omega^{0,1}(X,E\otimes L^m)$. In general, the Green's operator inverts the Laplacian up to projection onto the orthogonal complement of the harmonic forms in $\Omega^{0,1}(X,E\otimes L^{m})$. We have assumed vanishing of $H^1(X,\mathcal E_0(m))$, so in our case $G_0$ is a genuine inverse. Set $u_s=\bar\partial_{A_0}^\ast G_0(as)$. Then $$ \bar\partial_{A_0}u_s=\bar\partial_{A_0}\bar\partial_{A_0}^\ast G_0(as)=\square_{A_0}G_0(as)-\bar\partial_{A_0}^\ast\bar\partial_{A_0}G_0(as)=as\ , $$ as desired. Here, we have used the fact that $\bar\partial_{A_0}G_0=G_0\bar\partial_{A_0}$, and that, by the integrability of $\bar\partial_{A}$ and $s\in V_{A}$, $\bar\partial_{A_0}(as)=0$. Notice that this definition of $u_s$ guarantees that it is orthogonal to $V_0$. Moreover, since $\bar\partial_{A_0}^\ast G_0$ is a bounded operator, we have an estimate \begin{equation} \label{eqn:u-est} \Vert u_s\Vert_{L^2}\leq B \Vert as\Vert_{L^2} \leq B(\sup|a|) \Vert s\Vert_{L^2}\ . \end{equation} In particular, for $\sup|a|$ sufficiently small, $\Vert\pi_A(s)\Vert_{L^2}\geq (1/2)\Vert s\Vert_{L^2}$, so $\pi_A$ is injective and therefore an isomorphism. The classifying map is then defined as the quotient: $$ \sigma(A) : \mathcal H\xrightarrow{\pi_A^{-1}\otimes {\rm id}}V_A\otimes \mathcal O_X(-m)\xrightarrow{\rm ev} \mathcal E\longrightarrow 0\ . $$ It remains to show that $\sigma$ is continuous. We begin with a few preliminaries. For $s\in \Gamma(E\otimes L^m)$, let $$\widetilde \pi_A : \Gamma(E\otimes L^m)\longrightarrow \Gamma(E\otimes L^m) : s\mapsto s+\bar\partial_{A_0}^\ast G_0(as)\ , $$ so that $\widetilde \pi_A$ restricted to $V_A$ is $\pi_A$.
Again using that $\bar\partial_{A_0}^\ast G_0$ is a bounded operator, we have $$ \Vert (\widetilde \pi_{A_1}-\widetilde \pi_{A_2})s\Vert_{L^2}= \Vert\bar\partial_{A_0}^\ast G_0((a_1-a_2)s)\Vert_{L^2}\leq B\Vert (a_1-a_2)s\Vert_{L^2}\leq B\sup|a_1-a_2|\Vert s\Vert_{L^2}\ . $$ It follows that $\widetilde \pi_A$ is continuous in $A$. By the argument following \eqref{eqn:u-est}, it is also uniformly invertible for $A\in U$, with $\Vert\widetilde \pi_A^{-1}\Vert\leq 2$. Hence, \begin{align*} (\widetilde \pi_{A_1}^{-1}-\widetilde \pi_{A_2}^{-1})s &= \widetilde \pi_{A_1}^{-1} (\widetilde \pi_{A_2}-\widetilde \pi_{A_1}) \widetilde \pi_{A_2}^{-1}s \\ \Vert (\widetilde \pi_{A_1}^{-1}-\widetilde \pi_{A_2}^{-1})s\Vert_{L^2} &\leq 4 \Vert \widetilde \pi_{A_1}-\widetilde \pi_{A_2}\Vert\cdot \Vert s\Vert_{L^2}\leq 4B\sup|a_1-a_2|\Vert s\Vert_{L^2}\ . \end{align*} We conclude that the map $\pi_A^{-1} : V_0\to \Gamma(E\otimes L^m)$, whose image is $V_A$, is continuous for $A\in U$, and in fact satisfies an estimate: \begin{equation} \label{eqn:inverse-estimate} \Vert (\pi_{A_1}^{-1}- \pi_{A_2}^{-1})s_0\Vert_{L^2} \leq 4B\sup|a_1-a_2|\Vert s_0\Vert_{L^2}\ , \end{equation} for all $s_0\in V_0$. The second ingredient we shall need is the following. We may assume that there is a uniform bound on the Hermitian-Einstein tensors for all $A\in U$. It follows as in Section \ref{sec:quot} that we have an estimate $\sup|s|\leq C\Vert s\Vert_{L^2}$ for all $s\in V_A$. Hence, \begin{equation} \label{eqn:sup-bound} \sup|\pi_A^{-1}(s_0)|\leq C\Vert \pi_A^{-1}(s_0)\Vert_{L^2}\leq 2C\Vert s_0\Vert_{L^2}\ , \end{equation} for all $s_0\in V_0$. Finally, let $s_0\in V_0$ and $f\in H^0(X,\mathcal L^n)$. Since $f$ is holomorphic, any norm of $f$ is bounded by its $L^2$ norm.
Using \eqref{eqn:inverse-estimate} and \eqref{eqn:sup-bound}, there is a constant $C_1>0$ such that: \begin{align} \Vert f(\pi_{A_1}^{-1}-\pi_{A_2}^{-1})s_0\Vert_{L^2}^2 &\leq \Vert f\Vert_{L^4}^2\Vert (\pi_{A_1}^{-1}-\pi_{A_2}^{-1})s_0\Vert_{L^4}^2 \leq C_1\Vert f\Vert_{L^2}^2 \Vert (\pi_{A_1}^{-1}-\pi_{A_2}^{-1})s_0\Vert_{L^2} \Vert s_0\Vert_{L^2} \notag\\ &\leq C_1 (\sup|a_1-a_2|) \Vert s_0\Vert_{L^2}^2 \Vert f\Vert_{L^2}^2\ .\label{eqn:L4} \end{align} To prove continuity of $\sigma$, we show that the corresponding quotients \eqref{eqn:quotient} vary continuously in the Grassmannian for $A\in U$. First, notice that \begin{equation*} \label{eqn:Hmn} H^0(X,\mathcal H(m+n))\simeq V_0\otimes H^0(X,\mathcal L^n)\ . \end{equation*} On $V_0\otimes H^0(X,\mathcal L^n)$, we choose the tensor product metric of the $L^2$ metrics on $V_0$ and $H^0(X,\mathcal L^n)$. The map induced by $\sigma$ is described as follows: for each $A\in U$ we have $$ T_A : V_0\otimes H^0(X,\mathcal L^n)\longrightarrow \Gamma(E\otimes L^{m+n}) : s\otimes_{\mathbb{C}} f\mapsto \pi_A^{-1}(s)\otimes_{\mathcal O_X}f $$ with image $H^0(X, \mathcal E(m+n))$. Moreover, it follows as in \eqref{eqn:L4} that $T_A$ is continuous in $A$ for $A\in U$. Let $P_A$ denote the orthogonal projection to $\ker T_A$. The topology of the Grassmannian may be defined through projection operators, so it suffices to show that $P_A$ is continuous in $A\in U$. Because the dimensions of the kernels of $T_A$ are constant on $U$, this reduces to showing that for any sequence $A_j\to A$ and $s_j\in \ker T_{A_j} \subset V_0\otimes H^0(X,\mathcal L^n)$, $\Vert s_j\Vert=1$, there is a subsequence such that $s_j\to s\in \ker T_A$.
Indeed, if this is the case we may choose an orthonormal basis of such sections, $\{s_j^\alpha\}$, so that for any $s\in V_0\otimes H^0(X,\mathcal L^n)$, $$ P_{A_j} s=\sum_\alpha \langle s, s_j^\alpha\rangle s_j^\alpha\ , $$ and the right hand side converges to $P_A s$, and so $\Vert P_{A_j}-P_A\Vert\to 0$. By finite dimensionality of $V_0\otimes H^0(X,\mathcal L^n)$, we may assume $s_j\to s$ for some $s\in V_0\otimes H^0(X,\mathcal L^n)$. Let $s_j=s_j^0+s_j^1$ be the orthogonal decomposition with respect to the splitting $\ker T_A\oplus (\ker T_A)^\perp$. In particular, there is a constant $c>0$ such that \begin{equation} \label{eqn:T-est} \Vert T_A s_j^1\Vert\geq c\Vert s_j^1\Vert\ . \end{equation} But then $$ 0=T_{A_j}s_j=(T_{A_j}-T_A)s_j + T_A s_j^1 $$ and so $$ (T_A-T_{A_j})s_j=T_A s_j^1 \ \Longrightarrow \ \Vert T_A s_j^1\Vert \to 0\ . $$ The estimate \eqref{eqn:T-est} implies $s_j^1\to 0$. Hence, $s\in \ker T_A$, and continuity of $\sigma$ is proven. The uniformity of $m$ in the second statement follows from Proposition \ref{prop:maruyama}. \end{proof} The semistable quotients in $\mathrm{Quot}(\mathcal H,\tau_E)$ form an open set \cite[Thm.\ 2.8]{Maruyama:76}. Combining this with Theorem \ref{thm:classifying} we obtain \begin{corollary} \label{cor:open} The set $\mathcal A^{ss}(E,h)$ is open in $\mathcal A^{1,1}(E,h)$. \end{corollary} \section{Proof of the Main Theorem} \label{sec:proof} As seen in Section \ref{sec:quot}, a consequence of the assumption that $X$ is projective is a representation of holomorphic bundles and Uhlenbeck limits as quotients. The existence of many holomorphic sections also passes to certain line bundles on moduli spaces. This fact implies strong separation properties and will be used in this section to deduce the Main Theorem. Let $A\in \mathcal A^{ss}(E,h)$ be a smooth integrable unitary connection such that the induced holomorphic bundle $\mathcal E=(E,\bar\partial_{A})$ is semistable.
Then there is a Seshadri filtration $\{0\}=\mathcal F_0\subset\mathcal F_1\subset\cdots \subset \mathcal F_\ell=\mathcal E$ such that the successive quotients $\mathcal{Q}_i=\mathcal F_i/\mathcal F_{i-1}$, $i=1,\ldots,\ell$, are stable torsion-free sheaves, all of slope equal to that of $\mathcal E$. Let ${\rm Gr}(\mathcal E)=\oplus_{i=1}^\ell \mathcal{Q}_i$ and $\mathcal{C}$ the cycle defined by the codimension $2$ support of ${\rm Gr}(\mathcal E)^{\ast\ast}/{\rm Gr}(\mathcal E)$ (see Section \ref{sec:cycle}). By the result of Bando-Siu referenced in Section \ref{sec:uhlenbeck}, there is an admissible HYM connection $A_\infty$ on ${\rm Gr}(\mathcal E)^{\ast\ast}$, such that $(A_\infty,\mathcal{C}, S(A_\infty))$ defines an ideal HYM connection in the sense of Definition \ref{def:ideal-connection}. \begin{theorem}[\cite{DaskalWentworth:04,DaskalWentworth:07a,Sibley:15,SibleyWentworth:15}] \label{thm:flow} Let $A_0\in\mathcal A^{ss}(E,h)$, and set $\mathcal E_0=(E,\bar\partial_{A_0})$. Then the Yang-Mills flow $A_t$ with initial condition $A_0$ converges in the sense of Theorem \ref{thm:ideal-convergence} to an ideal connection $[A_\infty, \mathcal{C}_\infty, S(A_\infty)]$, where $A_\infty$ is the admissible HYM connection on ${\rm Gr}(\mathcal E_0)^{\ast\ast}$, and $\mathcal{C}_\infty$ is the codimension $2$ cycle defined by the torsion-free sheaf ${\rm Gr}(\mathcal E_0)$. \end{theorem} The theorem above states that the map $\mathscr F$ in \eqref{eqn:F} is given purely in terms of the holomorphic initial data and the solution for admissible HYM connections on reflexive sheaves. We wish to prove that $\mathscr F$ is continuous. For this we invoke the moduli space construction of Greb-Toma \cite{GrebToma:17}. Let $R^{\mu ss}\subset \mathrm{Quot}(\mathcal H, \tau_E)$ denote the open subset consisting of quotients that are slope semistable torsion-free sheaves.
Then there exists a (seminormal) projective variety $M^{\mu ss}$ and a morphism (in particular, continuous map) $R^{\mu ss}\to M^{\mu ss} : \mathcal F\mapsto [\mathcal F]$ with the following properties: \begin{enumerate} \item If $\mathcal F_1\simeq \mathcal F_2$, then $[\mathcal F_1]=[\mathcal F_2]$ in $M^{\mu ss}$ (cf.\ the discussion preceding \cite[Def.\ 2.19]{GSTW:18}); \item If $[\mathcal F_1]=[\mathcal F_2]$ in $M^{\mu ss}$, then $\mathcal F_1^{\ast\ast}\simeq \mathcal F_2^{\ast\ast}$ and $\mathcal{C}_{\mathcal F_1}=\mathcal{C}_{\mathcal F_2}$ \cite[Thm.\ 5.5]{GrebToma:17}. \end{enumerate} The association $[\mathcal F]\mapsto (\mathcal F^{\ast\ast}, \mathcal{C}_\mathcal F)$ gives a well-defined map $\overline \Phi : \overline M^\mu(E,h)\to \overline M_{{\rm HYM}}(E,h)$, where $\overline{M}^{\mu }(E,h)$ is the closure of $M^\ast_{{\rm HYM}}(E,h)$ in $M^{\mu ss}$. There is a diagram \begin{equation} \begin{split} \label{eqn:moduli} \xymatrix{ \overline{\mathcal A^s}(E,h) \ar[r]^Q\ar[dr]_{\mathscr F} & \overline M^\mu(E,h) \ar[d]^{\overline\Phi}\\ &\overline M_{{\rm HYM}}(E,h) } \end{split} \end{equation} Here, the map $\overline{\mathcal A^s}(E,h)\stackrel{Q}{\longrightarrow} \overline{M}^{\mu}(E,h)$ is defined by realizing a semistable bundle as a quotient in $R^{\mu ss}$ (see the discussion following Proposition \ref{prop:maruyama}), and sending this quotient to its equivalence class in $M^{\mu ss}$. By construction, $Q$ may be locally exhibited as the composition of the map $R^{\mu ss}\to M^{\mu ss}$ with a classifying map $\sigma$ as discussed in the previous section. The former map is a morphism of complex spaces and is therefore continuous. By Theorem \ref{thm:classifying}, $\sigma$ is continuous as well. Since continuity is a local property, we deduce the continuity of $Q$.
Now one of the main results of \cite{GSTW:18} is Theorem 4.11, which states that $\overline \Phi$ is also continuous. By Theorem \ref{thm:flow}, the diagram \eqref{eqn:moduli} commutes, and we therefore conclude that $\mathscr F$ is continuous on $\overline{\mathcal A^s}(E,h)$. To address the general situation, we first reduce the problem as in \cite[Sec.\ 4]{DaskalWentworth:07a}. Let $A_i\to A_0$ be a sequence in $\mathcal A^{ss}(E,h)$ converging in the $C^\infty$ topology, and let $[A_\infty, \mathcal{C}_\infty^A, S(A_\infty)]=\mathscr F(A_0)$. By the compactness theorem \cite[Thm.\ 3.23]{GSTW:18}, we may assume that, after passing to a subsequence, there is an ideal connection such that $\mathscr F(A_i)\to [B_\infty, \mathcal{C}_\infty^B, S(B_\infty)]$. We must show that the two limits agree. Let $A_{i,t}$ denote the Yang-Mills flow at time $t$ of $A_i$. Smooth dependence on initial conditions implies that for each fixed $T>0$, $A_{i,t}\to A_t$ smoothly as $i\to +\infty$, uniformly for $t\in [0,T)$. \begin{lemma} \label{lem:A-subsequence} There is a subsequence $($also denoted {$\{i\}$}$)$ and $t_i\to +\infty$, such that $A_{i,t_i}\to [A_\infty, \mathcal{C}_\infty^A, S(A_\infty)]$ in the sense of Theorem \ref{thm:ideal-convergence}. \end{lemma} \begin{proof} The proof relies on several properties. First, since $A_{i,t}\to A_t$ for every $t\geq 0$, by a diagonalization argument we may choose a sequence $A_{i,t_i}$ so that (up to gauge), $A_{i,t_i}\to A_\infty$ weakly in $L^p_{1,loc}$ away from $|\mathcal{C}_\infty^A|\cup S(A_\infty)$. Next, by the result in \cite{HongTian:04}, any sequence $A_{i,t_i}$, $t_i\to +\infty$, has a subsequence that converges to an ideal connection. This is shown in \cite{HongTian:04} for a sequence of times along a single flow, but the argument extends more generally. The key points are Theorem 8 and Proposition 9 of \cite{HongTian:04}, and these hold uniformly for a smoothly convergent sequence of initial conditions.
Note that there is a uniform bound on the Hermitian-Einstein tensor. Given this fact, we are exactly in the set-up of the proof of \cite[Proposition 3.20]{GSTW:18}. The conclusion of that result is that any limiting ideal HYM connection of $\{A_{i,t_i}\}$ must coincide with $[A_\infty, \mathcal{C}_\infty^A, S(A_\infty)]$. \end{proof} The Yang-Mills flow lies in a single complex gauge orbit, and $\mathscr F(A_{i,t})=\mathscr F(A_i)$. Therefore, using the same argument as above applied to connections along the flow, we also have the following. \begin{lemma} \label{lem:B-subsequence} There are complex gauge transformations $g_i$ such that if $B_i=g_i(A_{i,t_i})$, then after passing to a subsequence, $B_i\to [B_\infty, \mathcal{C}^B_\infty, S(B_\infty)]$ in the sense of Theorem \ref{thm:ideal-convergence}. \end{lemma} We now apply Propositions \ref{prop:limit_sections} and \ref{prop:cycles} to both sequences $A_{i,t_i}$ and $B_i$. One obtains quotients $q_i^A\to q_\infty^A: \mathcal H\to \mathcal F^A_\infty$ and $q_i^B\to q_\infty^B: \mathcal H\to \mathcal F^B_\infty$ in $\mathrm{Quot}(\mathcal H, \tau_E)$. Moreover, $(\mathcal F_\infty^A)^{\ast\ast}\simeq \mathcal E^A_\infty$ and $(\mathcal F_\infty^B)^{\ast\ast}\simeq \mathcal E^B_\infty$. In particular, since $\mathcal E_\infty^{A,B}$ have admissible Hermitian-Einstein metrics, the $\mathcal F_\infty^{A,B}$ are slope polystable, and so they lie in $R^{\mu ss}$. Also, $\mathcal{C}^{A}_\infty=\mathcal{C}_{\mathcal{F}_{\infty }^{A}}$ and $\mathcal{C}_\infty^{B}=\mathcal{C}_{\mathcal{F}_{\infty }^{B}}$. Now the quotients $q_i^A$ and $q_i^B$ are isomorphic for each $i$, since $B_i=g_i(A_{i,t_i})$. Furthermore, these bundles are semistable. Hence, by item (1) above, $[\mathcal F_i^A]=[\mathcal F_i^B]$ in $M^{\mu ss}$ for every $i$.
Since their limits are also semistable (in fact polystable), we conclude again from item (1) above and the continuity of the projection to $M^{\mu ss}$ that $[\mathcal{F}_{\infty }^{A}]=[\mathcal{F}_{\infty }^{B}]$. It then follows from item (2) that $\mathcal E_\infty^A\simeq \mathcal E_\infty^B$, and $\mathcal{C}_\infty^A=\mathcal{C}_{\mathcal{F}_{\infty }^{A}}=\mathcal{C}_{\mathcal{F}_{\infty }^{B}}=\mathcal{C}_\infty^B$. From the discussion following Definition \ref{def:ideal-connection}, the limiting ideal HYM connections coincide. This completes the proof of the Main Theorem. \end{document}
\begin{document} \title{Rank-one transformations, odometers, and finite factors} \author[M. Foreman, S. Gao, A. Hill, C.E. Silva, B. Weiss]{Matthew Foreman, Su Gao, Aaron Hill, Cesar E. Silva, Benjamin Weiss} \address{Mathematics Department, UC Irvine, Irvine, CA 92697, USA} \email{[email protected]} \address{Department of Mathematics, University of North Texas, 1155 Union Circle \#311430, Denton, TX 76203, USA} \email{[email protected]} \address{Proof School, 973 Mission Street, San Francisco, CA, 94103, USA} \email{[email protected]} \address{Department of Mathematics and Statistics, Williams College, Williamstown, MA 01267, USA} \email{[email protected]} \address{Institute of Mathematics, Hebrew University of Jerusalem, Jerusalem, Israel} \email{[email protected]} \date{\today} \subjclass[2010]{Primary 37A05, 37A35} \keywords{rank-one transformation, odometer, factor, isomorphism, totally ergodic} \begin{abstract} In this paper we give explicit characterizations, based on the cutting and spacer parameters, of (a) which rank-one transformations factor onto a given finite cyclic permutation, (b) which rank-one transformations factor onto a given odometer, and (c) which rank-one transformations are isomorphic to a given odometer. These naturally yield characterizations of (d) which rank-one transformations factor onto some (unspecified) finite cyclic permutation, (d$^\prime$) which rank-one transformations are totally ergodic, (e) which rank-one transformations factor onto some (unspecified) odometer, and (f) which rank-one transformations are isomorphic to some (unspecified) odometer. \end{abstract} \maketitle \thispagestyle{empty} \section{Introduction} The ultimate motivation of the work done in this paper is the isomorphism problem in ergodic theory as formulated by von Neumann in his seminal paper \cite{vN} of 1932. There he asked for an explicit process to determine when two measure-preserving transformations are measure-theoretically isomorphic.
Two important theorems in this direction are von Neumann's theorem classifying discrete spectrum transformations by their eigenvalues, and Ornstein's theorem classifying Bernoulli transformations by their entropy. To our knowledge, no other complete isomorphism invariants that classify a class of transformations have been found, though of course notions such as mixing, weak mixing, etc., are invariant under isomorphism. In \cite{FRW}, Foreman, Rudolph, and Weiss showed that the isomorphism relation on the class of all ergodic transformations is complete analytic, in particular not Borel. In some sense, this brings a negative conclusion to the von Neumann program. However, in \cite{FRW} the authors also showed that the isomorphism problem is Borel on the generic class of (finite measure-preserving) rank-one transformations. Thus this provides hope that there should exist some explicit method for determining whether two rank-one transformations are isomorphic. In particular, if one is given a specific rank-one transformation, there should be an explicit description of all rank-one transformations that are isomorphic to it. In this paper we give such explicit descriptions, provided that the given rank-one transformation is an odometer. All the transformations we consider in this paper are invertible finite measure-preserving transformations. Another reason for considering odometers is the role they played in a question of Ferenczi. In his survey article \cite{Fe}, Ferenczi asked whether every odometer is isomorphic to a symbolic rank-one transformation. This question is connected to whether two common definitions of rank-one---the constructive geometric definition and the constructive symbolic definition---are equivalent. 
As noted by the referee, in the Introduction to Adams--Ferenczi--Petersen \cite{AFP}, the authors mention how one can use Remark 2.10 in Danilenko \cite{D16} to answer this question in the affirmative, and also show how to construct a symbolic rank-one transformation that is isomorphic to any given odometer. The results in this paper can be thought of as a continuation of the work in \cite{AFP}, \cite{D16}. Namely, we explicitly describe {\em all} rank-one transformations that are isomorphic to any given odometer (Theorem~\ref{isomorphictothisodometer}). In addition, we also explicitly describe all rank-one transformations that are isomorphic to some (unspecified) odometer (Theorem~\ref{isomorphictosomeodometer}). Rank-one transformations are determined by two sequences of parameters, known as the cutting parameter and the spacer parameter (see Section~\ref{Pre} for the precise definitions). In this paper we give explicit descriptions, in terms of the cutting parameter and spacer parameter, of when a rank-one transformation factors onto a given finite cyclic transformation, or factors onto an (infinite) odometer, or is isomorphic to a given odometer. Note that a measure-preserving transformation factors onto a non-trivial finite cyclic transformation if and only if it is not totally ergodic. Thus the results in this paper give an explicit description of when an arbitrary rank-one transformation is totally ergodic. This generalizes a result of \cite{GH}, where Gao and Hill gave an explicit description of which rank-one transformations with bounded cutting parameter are totally ergodic. The rest of the paper is organized as follows. In Section~\ref{Pre} we recall the constructive geometric definition and the constructive symbolic definition of rank-one transformations. We also explicitly define odometers and finite cyclic transformations.
In Section~\ref{Fin} we give an explicit description of all rank-one transformations that factor onto a given finite cyclic transformation, as well as a description of the rank-one transformations that admit a finite factor. In Section~\ref{Odo} we describe all rank-one transformations that factor onto a given odometer. As a corollary, we get a description of all rank-one transformations that factor onto some odometer. Finally, in Section~\ref{Iso} we describe all rank-one transformations that are isomorphic to a given odometer. Again, this gives rise to a description of all rank-one transformations that are isomorphic to some odometer. \vskip 12pt {\em Acknowledgment.} The research in this paper was done at the AIM SQuaRE titled {\it The isomorphism problem for rank-one transformations}. The authors would like to acknowledge the American Institute of Mathematics for the support of this research. M.F. acknowledges the US NSF grant DMS-1700143 for support for this research. S.G. acknowledges the US NSF grants DMS-1201290 and DMS-1800323 for the support of his research. Since August 2019, C.S. has been serving as a Program Director in the Division of Mathematical Sciences at the National Science Foundation (NSF), USA, and as a component of this job, he received support from NSF for research, which included work on this paper. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We would like to thank the referee for a careful reading and suggestions that shortened our proofs. \section{Preliminaries}\label{Pre} \subsection{Measure-preserving transformations} We will be concerned with Lebesgue spaces, which we shall denote by $(X,\mu)$ or $(Y,\nu)$, and typically not mention the $\sigma$-algebra.
We shall assume that the measure of the space is 1; in most cases, unless we explicitly specify the contrary, we will also assume our measures to be nonatomic and call the spaces standard Lebesgue spaces. A map $\phi:(X,\mu)\to (Y,\nu)$ is {\it measure-preserving} if for all measurable sets $A$, $\phi^{-1}(A)$ is measurable and $\mu(\phi^{-1}(A))=\nu(A)$. A {\it transformation} $T:(X,\mu)\to (X,\mu)$ is a measure-preserving map that is invertible on a set of full measure and whose inverse is measure-preserving. We will call $(X,\mu,T)$ a measure-preserving system and, by abuse of notation, also a measure-preserving transformation. If $(X, \mu, T)$ and $(Y, \nu, S)$ are measure-preserving transformations, then a {\em factor} map from $T$ to $S$ is a measure-preserving map $\phi: (X, \mu) \to (Y, \nu)$ such that for $\mu$-almost every $x \in X$, $\phi \circ T (x) = S \circ \phi (x)$. We say that $T$ {\em factors onto} $S$ if there exists a factor map $\phi$ from $(X,\mu,T)$ onto $(Y,\nu,S)$. If $(X, \mu, T)$ and $(Y, \nu, S)$ are measure-preserving transformations, then an {\em isomorphism} between $T$ and $S$ is a factor map $\phi$ from $(X, \mu,T)$ to $(Y, \nu,S)$ that is invertible a.e. We note here that neither factor maps nor isomorphisms need to be defined on the entire underlying space $(X, \mu)$, only on a subset of $X$ of full measure, and that two measure isomorphisms are considered the same if they agree on a set of full measure. \subsection{Rank-one transformations} The constructive geometric definition of a rank-one transformation is given below (see e.g., \cite{Fe}). It describes a recursive cutting and stacking process that produces infinitely many Rokhlin towers (or columns) to approximate the transformation.
\begin{definition} A measure-preserving transformation $T$ on a standard Lebesgue space $(X, \mu)$ is {\it rank-one} if there exist sequences of positive integers $r_n > 1$, for $n\in\N=\{0, 1, 2, \dots\}$, and nonnegative integers $s_{n,i}$, for $n\in\N$ and $0 < i \leq r_n$, such that, if $h_n$ is defined by $$ h_0 = 1; h_{n+1} = r_nh_n +\sum_{0<i\leq r_n}s_{n,i}, $$ then \begin{equation}\label{r1} \sum^{+\infty}_{n=0} \frac{h_{n+1}-r_nh_n}{h_{n+1}}< +\infty; \end{equation} and there are subsets of $X$, denoted by $B_n$ for $n\in\N$, by $B_{n,i}$ for $n\in \N$ and $0<i\leq r_n$, and by $C_{n,i,j}$ for $n\in\N$, $0<i\leq r_n$ and $0<j\leq s_{n,i}$ (if $s_{n,i}= 0$ then there are no $C_{n,i,j}$), such that for all $n\in\N$: \begin{itemize} \item $\{B_{n,i}\,:\, 0 < i \leq r_n\}$ is a partition of $B_n$, \item the $T^k(B_n)$, $0\leq k < h_n$, are disjoint, \item $T^{h_n}(B_{n,i}) = C_{n,i,1}$ if $s_{n,i} \neq 0$ and $i \leq r_n$, \item $T^{h_n}(B_{n,i}) = B_{n,i+1}$ if $s_{n,i} = 0$ and $i < r_n$, \item $T(C_{n,i,j}) = C_{n,i,j+1}$ if $j < s_{n,i}$, \item $T(C_{n,i,s_{n,i}}) = B_{n,i+1}$ if $i < r_n$, \item $B_{n+1} = B_{n,1}$, \end{itemize} and the collection $\bigcup_{n=0}^\infty\{B_n, T(B_n), \dots, T^{h_n-1}(B_n)\}$ is dense in the $\sigma$-algebra of all $\mu$-measurable subsets of $X$. \end{definition} Assumption (\ref{r1}) of this definition is equivalent to the finiteness of the measure $\mu$. In this definition the sequence $(r_n)$ is called the {\em cutting parameter}, the sets $C_{n,i,j}$ are called the {\em spacers}, and the doubly-indexed sequence $(s_{n,i})$ is called the {\em spacer parameter}. For each $n\in\N$, the collection $\{B_n, T(B_n), \dots, T^{h_n-1}(B_n)\}$ gives the {\em stage-$n$ tower}, with $B_n$ as the {\em base} of the tower, and each $T^k(B_n)$, where $0 \leq k < h_n$, a {\em level} of the tower. The stage-$n$ tower has height $h_n$. At stage $n+1$, the stage-$n$ tower is cut into $r_n$ many $n$-blocks of equal measure. 
Each block has a base $B_{n,i}$ for some $0 < i\leq r_n$ and has height $h_n$. These $n$-blocks are then stacked up, with spacers inserted in between. At future stages, these $n$-blocks are further cut into thinner blocks, but they always have height $h_n$. Note that the base of the stage-$m$ tower, $B_m$, is partitioned into $\{ B_{m,i}\,:\, 0<i\leq r_m\}$, where each $B_{m,i}$ is now a level of the stage-$(m+1)$ tower, with $B_{m,1}=B_{m+1}$ being the base of the stage-$(m+1)$ tower. It is clear by induction that for any $n\geq m$, $B_m$ is partitioned into various levels of the stage-$n$ tower. We let $I_{m,n}$, for $n\geq m$, denote the set of indices for all levels of the stage-$n$ tower that form a partition of $B_m$, i.e., $$ I_{m,n}=\{ i\, :\, T^i(B_n)\subseteq B_m, 0\leq i<h_n\}. $$ Note that $B_m=\bigcup_{i\in I_{m,n}}T^i(B_n)$. $I_{m,n}$ is a finite set of natural numbers that can be inductively computed from the cutting and spacer parameters. For example, $$ I_{m,m+1}=\{0, h_m+s_{m,1}, 2h_m+s_{m,1}+s_{m,2}, \dots, (r_m-1)h_m+\sum_{0<i<r_m}s_{m,i}\}. $$ We next turn to the constructive symbolic definition of rank-one transformations. This often gives a succinct way to describe a concrete rank-one transformation. We will be talking about finite words over the alphabet $\{0,1\}$. Let $F$ be the set of all finite words over the alphabet $\{0,1\}$ that start with 0. A {\em generating rank-one sequence} is an infinite sequence $(v_n)$ of finite words in $F$ defined by induction on $n\in\N$: $$v_0 = 0; v_{n+1} = v_n1^{s_{n,1}}v_n1^{s_{n,2}}\cdots v_n1^{s_{n,r_n}}$$ for some integers $r_n>1$ and non-negative integers $s_{n,i}$ for $0 < i\leq r_n$. We continue to refer to the sequence $(r_n)$ as the cutting parameter and the doubly-indexed sequence $(s_{n,i})$ as the spacer parameter. Note that the cutting and spacer parameters uniquely determine a generating rank-one sequence. 
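Both the height recursion in the geometric definition and the word recursion above are straightforward to compute. The following minimal Python sketch (the function names are ours, not from the paper) builds the heights $h_n$, the generating words $v_n$, and the index sets $I_{m,m+1}$ from given cutting and spacer parameters; by induction $h_n$ equals the length of $v_n$, and $I_{m,m+1}$ lists the starting positions of the $r_m$ copies of $v_m$ inside $v_{m+1}$:

```python
def tower_heights(r, s, n_stages):
    """Heights h_0, ..., h_{n_stages} via h_0 = 1 and
    h_{n+1} = r_n * h_n + sum_{0 < i <= r_n} s_{n,i}."""
    h = [1]
    for n in range(n_stages):
        h.append(r[n] * h[-1] + sum(s[n]))
    return h

def rank_one_words(r, s, n_stages):
    """Generating rank-one sequence: v_0 = '0' and
    v_{n+1} = v_n 1^{s_{n,1}} v_n 1^{s_{n,2}} ... v_n 1^{s_{n,r_n}}."""
    v = ['0']
    for n in range(n_stages):
        v.append(''.join(v[-1] + '1' * s[n][i] for i in range(r[n])))
    return v

def I_next(r_m, s_m, h_m):
    """I_{m,m+1} = {0, h_m + s_{m,1}, 2h_m + s_{m,1} + s_{m,2}, ...}."""
    out, pos = [], 0
    for i in range(r_m):
        out.append(pos)
        pos += h_m + s_m[i]
    return out

# Example: r_0 = 3 with spacers (s_{0,1}, s_{0,2}, s_{0,3}) = (1, 0, 2),
# and r_1 = 2 with spacers (s_{1,1}, s_{1,2}) = (0, 1).
r, s = [3, 2], [[1, 0, 2], [0, 1]]
print(tower_heights(r, s, 2))      # [1, 6, 13]
print(rank_one_words(r, s, 2)[1])  # 010011
print(I_next(3, [1, 0, 2], 1))     # [0, 2, 3]
```

In particular, taking $r_n=2$ and all $s_{n,i}=0$ recovers $h_n=2^n$, the cutting-and-stacking construction underlying the standard dyadic odometer.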
A generating rank-one sequence converges to an infinite rank-one word $V\in \{0,1\}^{\N}$. We write $V = \lim_{n}v_n$. \begin{definition} Given an infinite rank-one word $V$, the {\em symbolic rank-one system} induced by $V$ is a pair $(X, \sigma)$, where $$ X = X_V = \{x \in\{0,1\}^\Z\,:\, \mbox{every finite subword of $x$ is a subword of $V$}\}$$ and $\sigma: X \to X$ is the shift map defined by $$\sigma(x)(k) = x(k + 1)\ \mbox{for all $k\in\Z$}. $$ \end{definition} Under the same assumption (\ref{r1}) as in the constructive geometric definition, the symbolic rank-one system carries a unique non-atomic, invariant probability measure. In this case the symbolic rank-one system is isomorphic to the rank-one transformation constructed with the same cutting and spacer parameters. The symbolic definition does not explicitly describe odometers (see Subsection \ref{cyclic} below for definitions), which are considered rank-one transformations. This was the motivation of Ferenczi's question in \cite{Fe} as discussed in the introduction. In contrast, we note that in the topological setting, Gao and Ziegler have recently proved in \cite{GZ} that (infinite) odometers are not topologically isomorphic to symbolic rank-one systems (which are called rank-one subshifts in \cite{GZ}). When we work with a rank-one transformation we will use both the terminology and the notation of this subsection. \subsection{Finite cyclic permutations and odometers\label{cyclic}} Here we precisely describe what we mean by ``finite cyclic permutation'' in the context of measure-preserving transformations. If $k\in\N$ with $k>1$ and $n\in\N$, we denote by $[n]_k$ the unique $m\in\N$ with $m<k$ and $n\equiv m\mod k$. For each $k \in \N$ with $k>1$, let $X_k = \{0, 1, \ldots, k-1\}$, let $\mu_k$ be the measure on $X_k$ in which each point has measure $1/k$, and let $f_k: X_k \rightarrow X_k$ be given by $f_k (i) = [i+1]_k$.
We let $\Z/k\Z$ denote the transformation $(X_k, \mu_k, f_k)$ and refer to such a transformation as a finite cyclic permutation. These are the only transformations we consider whose underlying measure space is atomic; we will still refer to $(X_k, \mu_k, f_k)$ as a transformation, and it should be clear from the context (for instance, when we denote a transformation by $T$) when a transformation is defined on a non-atomic space. It is natural to speak of a factor map from a measure-preserving transformation $T$ to $(X_k, \mu_k, f_k)$, but since $T$ is implicitly defined on a non-atomic space, it is not possible for such a factor map to be an isomorphism. Now we describe what we mean by an odometer (see \cite{Do}). Loosely, an odometer is an inverse limit of a coherent sequence of finite cyclic permutations. To be more precise, suppose we have a sequence $(k_n : n \in \N)$ of positive integers greater than 1 such that for all $n \in \N$, $k_n | k_{n+1}$. We now define $X$ as the collection of sequences $\alpha = (\alpha_n : n\in \N) \in \Pi_{n \in \N} \Z / k_n\Z$ such that for all $m,n \in \N$ with $m \leq n$, $[\alpha_n]_{k_m} = \alpha_m$. There is a natural measure $\mu$ on $X$ satisfying the following: for all $n \in \N$ and all $i \in \{0, 1, \ldots, k_n-1\}$ the set $\{\alpha \in X : \alpha_n = i \}$ has measure $1/k_n$. There is also a natural bijection $f: X \rightarrow X$ defined by $$f(\alpha) = (f_{k_n}(\alpha_n) : n \in \N) = ([\alpha_0 + 1]_{k_0}, [\alpha_1 + 1]_{k_1}, \dots ).$$ A transformation $(X, \mu, f)$ obtained in this way is called an {\it odometer}. For example, if $k_n=2^{n+1}$, one obtains the standard dyadic odometer. The following characterization of when two such odometers are isomorphic is well known.
Suppose $(k_n : n \in \N)$ and $(k_n^\prime : n \in \N)$ are sequences of positive integers greater than 1 such that for all $n \in \N$, $k_n | k_{n+1}$ and $k_n^\prime | k_{n+1}^\prime$. Then the odometers corresponding to these two sequences are isomorphic if and only if $$ \{m\in \N\,:\, \exists n \in \N\ (m | k_n)\}=\{m\in\N\,:\, \exists n \in \N\ (m | k_n^\prime)\}.$$ Because of this characterization we often describe an odometer by an infinite collection $K$ of natural numbers that is closed under taking factors. If one has such a set $K$, then it is easy to produce a sequence $(k_n: n \in \N)$ of integers $>1$ such that $k_n | k_{n+1}$, for all $n \in \N$, and for which $$K = \bigcup_{n \in \N} \{m \in \N : m | k_n\}.$$ Moreover, any choice of such a sequence $(k_n : n \in \N)$ will give rise to the same odometer, up to isomorphism. We can now let $\mathcal{O}_K$ denote (any) one of the odometers produced by choosing such a sequence $(k_n: n \in \N)$. There are canonical ways to choose $\mathcal{O}_K$ based on the maximum power of each prime that occurs in $K$, but we will not go into the details of this canonical choice in this paper. It is worth noting that the characterization in the preceding paragraph guarantees that if $K \neq K^\prime$ are infinite collections of natural numbers that are closed under factors, then $\mathcal{O}_K \not\cong \mathcal{O}_{K^\prime}.$ Here we collect the important facts about $\mathcal{O}_K$ that we will use in this paper. \begin{enumerate} \item For each $k \in K$, there is a canonical factor map $\pi_k$ from $\mathcal{O}_K$ to $\Z/k\Z$. \item For all $k, k^\prime \in K$ with $k | k^\prime$, and for all $x$ in the underlying set of $\mathcal{O}_K$, $\pi_k (x) = [\pi_{k^\prime} (x)]_k$. \item The collection of sets $\{ \pi_k^{-1} (i): k \in K, 0 \leq i < k \}$ generates the $\sigma$-algebra on $\mathcal{O}_K$.
\item If a measure-preserving transformation factors onto $\Z/k\Z$ for all $k \in K$, then it also factors onto $\mathcal{O}_K$. If, moreover, the fibers of these maps generate the $\sigma$-algebra on $(X, \mu)$, then that factor map is an isomorphism. The argument for this is similar to the construction of the Kronecker factor of a transformation, see e.g. \cite{Qu}. \end{enumerate} \subsection{The notion of $\epsilon$-containment} In this subsection we define a precise notion of almost containment and briefly describe some of its properties; this is a standard notion in measure theory, also called being $(1-\epsilon)$-full. \begin{definition} Let $A$ and $B$ be measurable subsets of positive measure of a measure space $(X, \mu)$ and let $\epsilon >0$. We say that $A$ is {\em $\epsilon$-contained} in $B$, and write $A \subseteq_{\epsilon} B$, provided that $$\frac{\mu (A \setminus B)}{\mu (A)} < \epsilon.$$ Equivalently, we say that $A$ is {\it $(1-\epsilon)$-full} of $B$ if $\mu(A\cap B)>(1-\epsilon)\mu(A)$. \end{definition} Here are the basic facts we will need; the reader may refer to e.g. \cite{Si}. \begin{enumerate} \item If $A \subseteq_\epsilon B$ and $A$ is partitioned into sets $A_1, A_2, \ldots, A_r$, then there is some $i \leq r$ such that $A_i \subseteq_\epsilon B$. \item If $A$ is partitioned into sets $A_1, A_2, \ldots, A_r$ and for all $i \leq r$, $A_i \subseteq_\epsilon B$, then $A \subseteq_\epsilon B$. \item Let $(X, \mu, T)$ be a measure-preserving transformation. If $A \subseteq_\epsilon B$ and $z \in \Z$, then $T^z (A) \subseteq_\epsilon T^z(B)$. \item Let $(X, \mu, T)$ be a rank-one transformation. If $B \subseteq X$ has positive measure, then there is some $n \in \N$ and some $0 \leq i < h_n$ such that $T^i(B_n) \subseteq_{\epsilon} B$. \end{enumerate} \section{Factoring onto a finite cyclic permutation}\label{Fin} It is quite easy to build a rank-one transformation that factors onto a cyclic permutation of $k$ elements.
Simply ensure that for some $N \in \N$, the height of the stage-$N$ tower is a multiple of $k$, and furthermore insist that every time spacers are inserted after stage $N$ the number of spacers inserted is a multiple of $k$. If a rank-one transformation is constructed in this way, then one can define, for all $m \geq N$, a function $\pi_m$ from the stage-$m$ tower to $\Z / k\Z$ by $\pi_m (x) = [i]_k$, where $x$ belongs to level $i$ of the stage-$m$ tower. The method of construction guarantees that if $x$ belongs to the stage-$m$ tower and $n \geq m$, then $\pi_m (x) = \pi_n (x)$. The domains of the functions $\pi_m$ are increasing and their measures tend to one. Thus, we can define $\pi$ from a full-measure subset of $X$ to $\Z / k\Z$ by $$\pi (x) = \lim_{m \rightarrow \infty} \pi_m (x).$$ This map $\pi$ is clearly a factor map. The theorem below gives a full characterization of which rank-one transformations factor onto a cyclic permutation of $k$ elements. \begin{theorem} \label{finitefactor1} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation and let $1 < k \in \N$. The following are equivalent. \begin{enumerate} \item[\rm (i)] $(X, \mu, T)$ factors onto $\Z / k\Z$. \item[\rm (ii)] $\forall \eta > 0, \exists N \in \N, \forall n \geq m \geq N, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ \end{enumerate} \end{theorem} \begin{proof} First we will show that (i) implies (ii). Suppose that $\pi : X \rightarrow \Z/k\Z$ is a factor map. The fibers $\pi^{-1} (0), \pi^{-1} (1), \pi^{-1} (2), \ldots, \pi^{-1} (k-1)$ form a partition of $X$ into sets of measure $1/k$ such that $T(\pi^{-1} (j)) = \pi^{-1} ([j+1]_k)$, for all $j \in \Z/k\Z$. Let $\eta >0$ and choose $\epsilon$ smaller than both $\eta/2$ and $1/2$.
Since the levels of the towers generate the $\sigma$-algebra of $X$, there exists $N\in\N$ such that for all $n>m\geq N$, every level of the stage-$n$ tower is $\epsilon$-contained in $\pi^{-1}(j)$ for some $j\in \Z/k\Z$. Fix $j_0 \in \Z/k\Z$ such that $B_m \subseteq_\epsilon \pi^{-1}(j_0)$. We claim that among the levels of the stage-$n$ tower that comprise the base of the stage-$m$ tower, the fraction of those that are $\epsilon$-contained in $ \pi^{-1}(j_0)$ must be at least $1-2\epsilon$. In other words, letting $I^\prime = \{i \in I_{m,n}: T^i(B_n) \not\subseteq_\epsilon \pi^{-1} (j_0)\}$, we claim that \begin{equation}\label{fraction}\frac{|I^\prime|}{|I_{m,n}|} < 2\epsilon. \end{equation} Suppose this is not the case. For each $i \in I^\prime$, the level $T^i(B_n)$ is $\epsilon$-contained in $\pi^{-1}(j)$ for some $j \neq j_0$, and hence $\mu\left( T^i(B_n) \setminus \pi^{-1}(j_0) \right) \geq (1-\epsilon)\mu(B_n)$. Since $$B_m \setminus \pi^{-1}(j_0) \supseteq \bigcup_{i \in I^\prime} \left( T^i(B_n) \setminus \pi^{-1}(j_0) \right),$$ we have that $$\mu \left( B_m \setminus \pi^{-1}(j_0) \right) \geq |I^\prime| \cdot \mu(B_n) \cdot (1 - \epsilon) = \frac{ |I^\prime|}{|I_{m,n}|} \cdot \mu(B_m) \cdot (1 - \epsilon).$$ Therefore, $$\frac{\mu \left( B_m \setminus \pi^{-1}(j_0) \right) }{ \mu(B_m)} \geq \frac{ |I^\prime|}{|I_{m,n}|} \cdot (1 - \epsilon) \geq (2 \epsilon) \cdot (1-\epsilon) > \epsilon,$$ since $\epsilon < 1/2$. This contradicts the fact that $B_m$ is $\epsilon$-contained in $\pi^{-1}(j_0)$ and completes the proof of \eqref{fraction}. Since the levels of the stage-$n$ tower that are $\epsilon$-contained in $\pi^{-1}(j_0)$ are all in the same congruence class mod $k$, there is some $j \in \Z / k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < 2 \epsilon < \eta,$$ completing the proof that (i) implies (ii). Next we will show that (ii) implies (i). Assuming (ii) we construct a factor map $\pi : X \rightarrow \Z/ k\Z$. For all $\alpha \in \N$, let $\eta_\alpha = \frac{1}{2^{\alpha+2}}$ and use (ii) to produce $N_\alpha \in \N$.
We may assume that the sequence $(N_\alpha : \alpha \in \N)$ is increasing and that for each $\alpha$, $N_\alpha$ is large enough that the measure of the stage-$N_\alpha$ tower is at least $1 - \frac{1}{2^{\alpha +1}}$. Now, for each $\alpha \in \N$ we also choose $j_\alpha \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{N_\alpha,N_{\alpha+1}} : [i]_k \neq j_\alpha \}|}{| I_{N_\alpha,N_{\alpha+1}}|} < \eta_\alpha .$$ For all $\alpha \in \N$, define a function $\phi_{\alpha}$ from the stage-$N_\alpha$ tower to $\Z/k\Z$ as follows: If $x$ belongs to level $i$ of the stage-$N_\alpha$ tower, then $\phi_\alpha (x) = [i]_k$. Since $\phi_{\alpha+1}(x) = j_\alpha$ for most $x$ in the base of the stage-$N_\alpha$ tower, and every point of the stage-$N_\alpha$ tower has the form $T^l(x)$ for such an $x$ and some $0 \leq l < h_{N_\alpha}$, the reader can verify that for all $\alpha \in \N$, $$\mu \left( \{x \in \textnormal{dom}(\phi_\alpha): \phi_{\alpha+1} (x) \neq [\phi_\alpha (x) + j_\alpha]_k \} \right) < \eta_\alpha.$$ Now, for each $\alpha \in \N$, we let $J_\alpha = \sum_{\beta < \alpha} j_\beta$. Also, for each $\alpha \in \N$ we define a function $\pi_\alpha$ from the stage-$N_\alpha$ tower to $\Z/k\Z$ by $\pi_\alpha (x) = [\phi_\alpha (x) - J_\alpha]_k$.
Since $\phi_\alpha$ and $\pi_\alpha$ have the same domain for all $\alpha \in \N$, and since, for $x \in \textnormal{dom} (\pi_\alpha)$, $\pi_{\alpha+1} (x) = \pi_{\alpha} (x)$ if and only if $\phi_{\alpha+1} (x) = [\phi_{\alpha} (x) + j_\alpha]_k$, the estimate $\mu \left( \{x \in \textnormal{dom}(\phi_\alpha): \phi_{\alpha+1} (x) \neq [\phi_\alpha (x) + j_\alpha]_k \} \right) < \eta_\alpha$ established above implies that for all $\alpha \in \N$, $$\mu \left( \{x \in \textnormal{dom}(\pi_\alpha): \textnormal{ for all $\beta \geq \alpha$, } \pi_\alpha (x) = \pi_{\beta} (x) \} \right) \geq 1 - \frac{1}{2^\alpha }.$$ It follows that for $\mu$-almost every $x \in X$, the sequence $(\pi_\alpha (x) : \alpha \in \N)$ eventually stabilizes and we can define $$\pi (x) = \lim_{\alpha \rightarrow \infty} \pi_\alpha (x).$$ To see that $\pi$ is a factor map, fix a typical $x \in X$ and choose $\alpha$ sufficiently large so that $\pi_\alpha (x) = \pi(x)$, $\pi_\alpha(T(x)) = \pi(T(x))$, and $x$ belongs to a non-top level of the stage-$N_\alpha$ tower. If $x$ belongs to level $i$ of the stage-$N_\alpha$ tower, then $T(x)$ belongs to level $i+1$ of the stage-$N_\alpha$ tower, which implies that $\phi_\alpha (T(x)) = [\phi_\alpha (x) + 1]_k$. Now, $$\pi (T(x)) = \pi_\alpha (T(x)) = [\phi_{\alpha} (T(x)) - J_\alpha]_k = [\phi_{\alpha} (x) + 1 - J_\alpha]_k = [\pi (x) +1]_k. $$ Therefore, $\pi: X \rightarrow \Z/k\Z$ is a factor map. \end{proof} As a corollary, we obtain a characterization of the rank-one transformations that factor onto some (unspecified) non-trivial finite cyclic permutation, a condition that is well-known to be equivalent to the transformation not being totally ergodic. \begin{corollary}\label{cortoterg} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation. The following are equivalent. \begin{enumerate} \item $T$ factors onto some finite cyclic permutation.
\item $\exists k \in \N$ with $k>1$, $\forall \eta > 0, \exists N \in \N, \forall n \geq m \geq N, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ \end{enumerate} \end{corollary} We end with an equivalent characterization suggested by the referee. The proof is similar to that of Theorem \ref{finitefactor1}. \begin{theorem} \label{finitefactor2} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation and let $1 < k \in \N$. The following are equivalent. \begin{enumerate} \item[\rm (i)] $(X, \mu, T)$ factors onto $\Z / k\Z$. \item[\rm (ii)] There is an increasing sequence $(q_n)$ such that $$\sum_{n=1}^\infty \frac{ |\{i \in I_{q_n,q_{n+1}} : i \not\equiv 0 \mod k \}|}{| I_{q_n,q_{n+1}}|} < \infty.$$ \end{enumerate} \end{theorem} \section{Factoring onto an odometer}\label{Odo} We now give characterizations of which rank-one transformations factor onto a given odometer, and which rank-one transformations factor onto some (unspecified) odometer. These characterizations are essentially corollaries of Theorem \ref{finitefactor1}. \begin{theorem}\label{T:factortoodometer} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation and let $\mathcal{O}_K$ be an odometer. The following are equivalent. \begin{enumerate} \item[\rm (i)] $(X, \mu, T)$ factors onto $\mathcal{O}_K$. \item[\rm (ii)] $\forall k \in K, \forall \eta > 0, \exists N \in \N, \forall n \geq m \geq N, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ \end{enumerate} \end{theorem} \begin{proof} Suppose $(X, \mu, T)$ factors onto $\mathcal{O}_K$. Then for each $k \in K$, one can compose this factor map with a factor map from $\mathcal{O}_K$ to $\Z/ k\Z$ to get a factor map from $(X, \mu, T)$ to $\Z/ k\Z$. Together with Theorem \ref{finitefactor1}, this implies condition (ii). Now suppose that condition (ii) holds.
By Theorem \ref{finitefactor1} we know that $(X, \mu, T)$ factors onto $\Z/ k\Z$ for every $k \in K$. Therefore, $(X, \mu, T)$ factors onto $\mathcal{O}_K$. \end{proof} By a proof is similar to that of Theorem~\ref{T:factortoodometer} we obtain the following corollary. \begin{corollary} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation. The following are equivalent. \begin{enumerate} \item[\rm (i)] $(X, \mu, T)$ factors onto some odometer $\mathcal{O}$. \item[\rm (ii)] $\forall M \in \N, \exists k \geq M, \forall \eta > 0, \exists N \in \N, \forall n \geq m \geq N, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ \end{enumerate} \end{corollary} \section{Being isomorphic to a given odometer}\label{Iso} It turns out that it is not too hard to construct a rank-one transformation that is isomorphic to a given odometer. Let $K$ be an infinite set of natural numbers that is closed under factors. First choose a sequence $(k_n: n \in \N)$ of natural numbers such that the factors of the partial products $\prod_{m<n}k_m$ are precisely the set $K$ and for which $$\sum_{n \in \N} \frac{1}{k_n}< \infty. $$ Then build a rank-one transformation by a symbolic construction as follows. For $n \in \N$, let $v_0 = 0$ and let $v_{n+1} = (v_n)^{k_n-1} 1^{v_n}$. Then the resulting transformation $T$ is what is called {\em essentially $0$-expansive} by Adams, Ferenczi, and Petersen in \cite{AFP}, and their method shows that $T$ is isomorphic to the odometer $\mathcal{O}_K$. A definition of an isomorphism is also implicit in our results below. In this section we characterize in general when a rank-one transformation is isomorphic to a given odometer. The idea is to build on our characterization for rank-one transformations which factor onto a given odometer, and then to examine when a factor map turns out to be an isomorphism. The following result gives the explicit details. 
\begin{theorem} \label{isomorphictothisodometer} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation and let $\mathcal{O}_K$ be an odometer. The following are equivalent. \begin{enumerate} \item[\rm (I)] $T$ is isomorphic to $\mathcal{O}_K$. \item[\rm (II)] Both of the following hold. \begin{enumerate} \item[\rm (IIa)] $\forall k \in K, \forall \eta > 0, \exists N \in \N, \forall n \geq m \geq N, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ \item[\rm (IIb)] $\forall l \in \N, \forall \epsilon>0, \exists k \in K, \exists N \in \N, \forall m \geq N, \exists D \subseteq \Z / k\Z$ such that $$\frac{|\{i < h_m: [i]_k \in D\} \Delta I_{l,m} |}{|I_{l,m}|} < \epsilon.$$ \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} First assume (II). Using condition (IIa) and the proof of Theorem \ref{finitefactor1} we construct, for each $k\in K$, a factor map $\pi_k: X \rightarrow \Z/k\Z$. Recall that $\pi_k$ is built using a sequence of approximating maps $(\pi_{k, \alpha}: \alpha \in \N)$. It suffices to show that for every $l \in \N$ and every $\delta > 0$, there is some $k \in K$ and some $E \subseteq \Z/ k\Z$ such that $$\mu (B_l \Delta \pi_k^{-1} [ E] ) < \delta.$$ Let $l \in \N$ and $\delta >0$. Let $\epsilon = \delta/2$. First, we use condition (IIb) above to produce $k \in K$ and $N >l$ such that for all $m \geq N$, there exists some $D \subseteq \Z/k\Z$ such that $$\frac{|\{i < h_m: [i]_k \in D\} \Delta I_{l,m} |}{|I_{l,m}|} < \epsilon.$$ Since $k \in K$, we have a factor map $\pi_k : X \rightarrow \Z/k\Z$ that is built using the approximating maps $\pi_{k, \alpha}$. Choose a specific $\alpha \in \N$ so that $\frac{1}{2^{\alpha}} < \delta/2$ and such that $N_\alpha$ is greater than the $N$ produced in the preceding paragraph. Using the fact that $N_\alpha>N$ and using features of the approximating maps $\pi_{k, \alpha}$ we get the following.
\begin{enumerate} \item [(i)] There exists some $D \subseteq \Z/k\Z$ such that $$\frac{|\{i < h_{N_\alpha}: [i]_k \in D\} \Delta I_{l,N_\alpha} |}{|I_{l,N_\alpha}|} < \epsilon.$$ \item [(ii)] There exists $E \subseteq \Z/k\Z$ such that $$\bigcup_{d \in D} ( \bigcup_{\substack{0 \leq i < h_{N_\alpha}\\ [i]_k =d}} T^i (B_{N_\alpha})) = \bigcup_{e \in E} \pi_{k, \alpha}^{-1} (e). $$ \item [(iii)] $\mu (\{x \in \textnormal{dom}(\pi_{k, \alpha}): \pi_{k, \alpha} (x) = \pi_k(x)\}) \geq 1 - \frac{1}{2^{\alpha}}$. \end{enumerate} Using these properties one can show that $$\mu (B_l \Delta \pi_k^{-1} [ E] ) < \delta,$$ completing the proof that $(X, \mu, T)$ is isomorphic to $\mathcal{O}_K$. Now we assume that $(X, \mu, T)$ is isomorphic to $\mathcal{O}_K$ and let $\phi$ be an isomorphism between $T$ and $\mathcal{O}_K$. For each $k \in K$ we can compose $\phi$ with the canonical factor map of $\mathcal{O}_K$ onto $\Z/k\Z$ to get a factor map $\pi_{k}$ from $X$ to $\Z/k\Z$. For such a $k\in K$, Theorem \ref{finitefactor1} guarantees that $\forall \eta > 0, \exists N \in \N, \forall n \geq m \geq N, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ Thus we have condition (IIa). Next, exchanging the variable $\epsilon$ for $\delta$ in condition (IIb), we will prove that $\forall l \in \N, \forall \delta>0, \exists k \in K, \exists N \in \N, \forall m \geq N, \exists D \subseteq \Z / k\Z$ such that $$\frac{|\{i < h_m: [i]_k \in D\} \Delta I_{l,m} |}{|I_{l,m}|} < \delta.$$ Let $l \in \N$ and $\delta>0$. Let $\epsilon = \delta \cdot \mu(B_l)/4$. The reader can verify that there exists some $k \in K$ and $E \subseteq \Z/k\Z$ such that \begin{equation} \mu (B_l \Delta \pi_k^{-1} (E)) < \epsilon. \tag{*} \end{equation} We next claim that there exists $N \in \N$ such that for all $m \geq N$ there exists some $j \in \Z/k\Z$ such that for all $0 \leq i < h_m$, $T^{i}(B_m) \subseteq_\epsilon \pi_k^{-1} ([i +j]_k)$.
This claim can be proved by methods similar to those used in the proof of Theorem \ref{finitefactor1}. Fix such an $N \in \N$ that also satisfies $\mu\left(\bigcup_{0\leq i<h_N}T^i(B_N)\right) >1-\epsilon$ and let $m \geq N$. We now claim that there exists $D \subseteq \Z/k\Z$ such that \begin{equation} \mu ( \bigcup_{\substack{0 \leq i < h_m\\ [i]_k \in D}} T^i(B_m) \Delta \ \pi_k^{-1} (E) ) < 3 \epsilon. \tag{**} \end{equation} Combining equations (*) and (**) we now have that $$\mu ( \bigcup_{\substack{0 \leq i < h_m\\ [i]_k \in D} }T^i(B_m) \Delta \ B_l) < 4 \epsilon.$$ To finish the proof of the theorem, note that $$\begin{array}{l} \displaystyle\frac{|\{i < h_m: [i]_k \in D\} \Delta I_{l,m} |}{|I_{l,m}|} =\frac{\displaystyle \mu ( \bigcup_{\substack{0 \leq i < h_m\\ [i]_k \in D}} T^i(B_m) \Delta \bigcup_{i \in I_{l,m} } T^i(B_m) ) }{\displaystyle \mu ( \bigcup_{i \in I_{l,m} } T^i(B_m) )} \\ =\frac{\displaystyle \mu ( \bigcup_{\substack{0 \leq i < h_m\\ [i]_k \in D}} T^i(B_m) \Delta B_l ) }{\displaystyle \mu \left( B_l \right)} <\displaystyle\frac{4\epsilon}{\mu(B_l)} = \delta.\end{array}$$ \end{proof} Next we characterize when a rank-one transformation is isomorphic to some (unspecified) odometer. \begin{theorem} \label{isomorphictosomeodometer} Let $(X, \mu, T)$ be a rank-one measure-preserving transformation. The following are equivalent. \begin{enumerate} \item[\rm (I)] $T$ is isomorphic to an odometer. \item[\rm (II)] For all $l \in \N$ and all $\epsilon>0$, there is some $k \in \N$ such that for all $\eta >0$ there exists an $N \in \N$ such that for all $n > m \geq N$, \begin{enumerate} \item[\rm (IIa)] There is some $j \in \Z / k\Z$ such that $$\frac{|\{i \in I_{m,n}: [i]_k \neq j\}|}{|I_{m,n}|} < \eta$$ \item[\rm (IIb)] There is some $D \subseteq \Z / k\Z$ such that $$\frac{|\{i < h_m: [i]_k \in D\} \Delta I_{l,m} |}{|I_{l,m}|} < \epsilon$$ \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Suppose $T$ is isomorphic to an odometer. Let $K$ be the set of finite factors of that odometer.
Let $l \in \N$ and $\epsilon >0$. Using condition (IIb) of Theorem \ref{isomorphictothisodometer} we can find some $k \in K$ and some $N_1 \in \N$, such that $\forall m \geq N_1, \exists D \subseteq \Z / k\Z$ such that $$\frac{|\{i < h_m: [i]_k \in D\} \Delta I_{l,m} |}{|I_{l,m}|} < \epsilon.$$ For any $\eta >0$ we can use that specific $k\in K$ and condition (IIa) of Theorem \ref{isomorphictothisodometer} to find $N_2 \in \N$ such that $\forall n \geq m \geq N_2, \exists j \in \Z/k\Z$ such that $$\frac{ |\{i \in I_{m,n} : [i]_k \neq j \}|}{| I_{m,n}|} < \eta.$$ Letting $N = \max\{N_1, N_2\}$ we complete condition (II) of the theorem. Suppose now that condition (II) holds. For all $l \in \N$ and all $\epsilon >0$, produce $k_{l, \epsilon}$, and $N_{l, \epsilon}$ according to condition (II). Let $$K = \{k \in \N : k | k_{l, \epsilon} \textnormal{ for some $l \in \N$ and $\epsilon>0$}\}.$$ It is clear that $K$ is closed under factors. We leave it to the reader to show that $K$ is infinite by showing that if $l \in \N$ and $\epsilon<1$, then $k_{l, \epsilon} \geq h_l$. Now, consider $\mathcal{O}_K$. We will prove that $T$ is isomorphic to $\mathcal{O}_K$ by showing that conditions (IIa) and (IIb) of Theorem \ref{isomorphictothisodometer} hold. First, let $k \in K$. Choose $l \in \N$ and $\epsilon>0$ such that $k | k_{l, \epsilon}$. We chose $k_{l, \epsilon}$ using condition (II) of this theorem. Theorem \ref{finitefactor1} guarantees that $T$ factors onto $\Z / k_{l, \epsilon}\Z$. Therefore, $T$ must also factor onto $\Z/k\Z$. Now Theorem \ref{finitefactor1} guarantees that condition (IIa) of Theorem \ref{isomorphictothisodometer} holds. Condition (IIb) of Theorem \ref{isomorphictothisodometer} follows immediately from our assumption that condition (II) of this theorem holds and our choice of $K$. \end{proof} Before closing we consider an example of a rank-one transformation that factors onto an odometer but is not isomorphic to any odometer.
\noindent {\bf Example.} Let $T$ be the rank-one transformation corresponding to the symbolic definition $v_0=0$ and $$ v_{n+1}=v_nv_n1^{2^{n+1}}v_nv_n. $$ Then the length of $v_n$, or equivalently the height $h_n$ of the stage-$n$ tower, is $2^n(2^{n+1}-1)$. Using Theorem~\ref{finitefactor1} it is easy to verify that $T$ has all powers of $2$ as finite factors. Thus $T$ factors onto the dyadic odometer. As noted by the referee, ergodicity of the dyadic powers and non-ergodicity of the odd powers follows from \cite[Theorem H]{D19}. An argument using Theorem~\ref{finitefactor1} also shows that $T$ does not have any other finite factors. Indeed, suppose $T$ has an odd finite factor $a$. If no multiple of $a$ is of the form $2^k-1$ for any $k$, then the condition in Theorem~\ref{finitefactor1} fails, since the elements of $I_{m,n}$ come in pairs, with a difference $h_m=2^m(2^{m+1}-1)$ between them. On the other hand, suppose $a$ has a multiple of the form $2^{m+1}-1$ for some $m$. Then note that the elements of $I_{m,n}$ come in quadruples, with the sequence of differences $h_m$, $h_m+2^{m+1}$, $h_m$ in between them. This also implies that at least half of the indices of $I_{m,n}$ disagree on the congruence class mod $a$, and thus the condition in Theorem~\ref{finitefactor1} fails. Therefore the maximal odometer factor of $T$ is the dyadic odometer. Finally, a similar argument shows that condition (IIb) of Theorem~\ref{isomorphictothisodometer} fails. Consequently $T$ is not isomorphic to the dyadic odometer. In conclusion, $T$ is not isomorphic to any odometer. \thebibliography{999} \bibitem{AFP} T. Adams, S. Ferenczi, K. Petersen, \textit{Constructive symbolic presentations of rank one measure-preserving systems,} {Colloq. Math.} 150 (2017), no. 2, 243--255. \bibitem{D16} A.I. Danilenko, \textit{Actions of finite rank: weak rational ergodicity and partial rigidity}, Ergodic Theory Dynam. Systems 36 (2016), no. 7, 2138--2171. \bibitem{D19} A.I.
Danilenko, \textit{Rank-one actions, their (C,F)-models and constructions with bounded parameters}, J. Anal. Math. 139 (2019), no. 2, 697--749. \bibitem{Do} T. Downarowicz, \textit{Survey of odometers and Toeplitz flows.} {Algebraic and topological dynamics}, 7--37, \textit{Contemp. Math.}, 385, Amer. Math. Soc., Providence, RI, 2005. \bibitem{Fe} S. Ferenczi, \textit{Systems of finite rank}, {Colloq. Math.} 73:1 (1997), 35--65. \bibitem{FRW} M. Foreman, D. J. Rudolph, B. Weiss, \textit{The conjugacy problem in ergodic theory}, {Ann. of Math.} 173 (2011), 1529--1586. \bibitem{GH} S. Gao, A. Hill, \textit{Bounded rank-one transformations}, {J. Anal. Math.} 129 (2016), 341--365. \bibitem{GZ} S. Gao, C. Ziegler, \textit{Topological factors of rank-one subshifts}, {Proc. Amer. Math. Soc. Ser. B} 7 (2020), 118--126. \bibitem{Qu} M. Queffelec, \textit{Substitution dynamical systems and spectral analysis}, LNM 1294, Springer, NY, 2010. \bibitem{Si} C.E. Silva, \textit{Invitation to ergodic theory}, SML 42. American Mathematical Society, Providence, RI, 2008. \bibitem{vN} J. von Neumann, \textit{Zur Operatorenmethode in der klassischen Mechanik}, {Ann. of Math.} 33 (1932), 587--642. \section{Conclusions and questions} The work in this paper gives a fairly complete understanding of which rank-one transformations are isomorphic to (or factor onto) a given element (or some element) of the class of odometers (or the class of finite cyclic permutations). There are several natural questions that arise if one moves beyond odometers and finite cyclic permutations. Here are a few such questions. \begin{enumerate} \item For a specific rank-one transformation that is not an odometer, can one explicitly describe which rank-one transformations are isomorphic to that given transformation? For example, can one explicitly describe which rank-one transformations are isomorphic to Chacon's transformation?
Can one explicitly describe which rank-one transformations are isomorphic to a given irrational rotation? \item For a natural class of transformations, can one explicitly describe which rank-one transformations are isomorphic to some element of that class? For example, can one explicitly describe which rank-one transformations are isomorphic to some (unspecified) irrational rotation? Can one explicitly describe which rank-one transformations are isomorphic to some (unspecified) compact group rotation? \item For a specific class of transformations that is closed under factors, can one describe explicitly which rank-one transformations factor onto some element of that class? For example, can one explicitly describe which rank-one transformations have some (unspecified) compact group rotation as a factor? If the answer to this last question is yes, then one can also characterize which rank-one transformations are weakly mixing. \end{enumerate} \end{document}
\begin{document} \title{Domain Invariant Representation Learning with Domain Density Transformations} \begin{abstract} Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains. Naively training a model on the aggregate set of data (pooled from all source domains) has been shown to perform suboptimally, since the information learned by that model might be domain-specific and generalize imperfectly to target domains. To tackle this problem, a predominant domain generalization approach is to learn some domain-invariant information for the prediction task, aiming at a good generalization across domains. In this paper, we propose a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains. We next introduce the use of generative adversarial networks to learn such domain transformations in a possible implementation of our method in practice. We demonstrate the effectiveness of our method on several widely used datasets for the domain generalization problem, on all of which we achieve competitive results with state-of-the-art models. \end{abstract} \section{Introduction} \label{introduction} Domain generalization refers to the machine learning scenario where the model is trained on multiple source domains so that it is expected to generalize well to unseen target domains. The key difference between domain generalization~\cite{khosla2012undoing,muandet2013domain,ghifary2015domain} and domain adaptation~\cite{zhao2019learning,zhang2019bridging,combes2020domain,tanwani2020domain} is that, in domain generalization, the learner does not have access to data of the target domain, making the problem much more challenging. 
One of the most common domain generalization approaches is to learn an invariant representation across domains, aiming at a good generalization performance on target domains. For instance, in the representation learning framework, the prediction function $y=f(x)$, where $x$ is data and $y$ is a label, is obtained as a composition $y=h \circ g (x)$ of a deep representation network $z=g(x)$, where $z$ is a learned representation of data $x$, and a smaller classifier $y=h(z)$, predicting label $y$ given representation $z$, both of which are shared across domains. With this framework, we can aim to learn an ``invariant'' representation $z$ across the source domains with the ``hope'' of a better generalization to the target domain. Most existing ``domain-invariance''-based methods in domain generalization focus on marginal distribution alignment~\cite{muandet2013domain,ajakan2014domain,sun2016deep,shen2018wasserstein,li2018domainb}, which leaves them prone to distributional shifts when the conditional data distribution is not stable. In particular, marginal alignment refers to making the representation distribution $p(z)$ the same across domains. This is essential since if $p(z)$ for the target domain differs from that of the source domains, the classification network $h(z)$ would face out-of-distribution data at test time. Conditional alignment refers to aligning the conditional distribution of the label given the representation, $p(y|z)$, so that the classification network (trained on the source domains) gives accurate predictions at test time. The formal definitions of these two types of alignment are discussed in Section~\ref{sec:approach}. \begin{figure} \caption{\small\textbf{An example of two domains}} \label{ex} \end{figure} In Figure~\ref{ex} we illustrate an example where the representation $z$ satisfies the marginal alignment but not the conditional alignment.
Specifically, $x$ is distributed uniformly on the circle with radius 2 (and centered at the origin) for domain 1 and distributed uniformly on the circle with radius 3 (centered at the origin) for domain 2. The representation $z$ defined by the mapping $z=g(x)=x/||x||_2$ will align the marginal distribution $p(z)$, i.e., $z$ is now distributed uniformly on the unit circle for both domains. However, the conditional distribution $p(y|z)$ is not aligned between the two domains ($y$ is represented by color), which means using this representation for classification is suboptimal, and in this extreme case would lead to 0\% accuracy when domain 2 is the target domain. This is an extreme case of misalignment, but it does illustrate the importance of the conditional alignment. Therefore, we need to align both the marginal and the conditional distributions for a domain-invariant representation. Recently, there have been several attempts \cite{li2018domain,li2018deep,zhao2020domain} to align the joint distribution of the representation and the label $p(y,z)$ in a domain generalization problem by aligning the distribution of $z$ across domains for each class, i.e., $p(z|y)$ (given that the label distribution $p(y)$ is unchanged across domains). However, the key drawback of these methods is that they either do not scale well with the number of classes or have limited performance on real-world computer vision datasets (see Section~\ref{experiments}). In this paper, we focus on learning a domain-invariant representation that aligns both the marginal and the conditional distributions in domain generalization problems. We present theoretical results regarding the necessary and sufficient conditions for the existence of a domain-invariant representation, and subsequently propose a method to learn such representations by enforcing the invariance of the representation network under domain density transformation functions.
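The toy example of Figure~\ref{ex} can be checked numerically. Below is a minimal sketch: the uniform circle sampling and the normalizing map $g$ follow the description above, while the per-domain label assignment is only indicated in comments since the figure's exact coloring is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_circle(radius, n):
    """Sample n points uniformly on the circle of the given radius."""
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)

x1 = sample_circle(2.0, 1000)  # domain 1: radius-2 circle
x2 = sample_circle(3.0, 1000)  # domain 2: radius-3 circle

# z = g(x) = x / ||x||_2 maps both domains onto the unit circle, so the
# marginal p(z) is aligned ...
z1 = x1 / np.linalg.norm(x1, axis=1, keepdims=True)
z2 = x2 / np.linalg.norm(x2, axis=1, keepdims=True)
assert np.allclose(np.linalg.norm(z1, axis=1), 1.0)
assert np.allclose(np.linalg.norm(z2, axis=1), 1.0)

# ... but if the label depends on the angle differently in each domain (as in
# the figure), the same z carries different labels per domain, so p(y|z) is
# not aligned and no classifier on z can be correct for both domains at once.
```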
A simple intuition for our approach is that if we enforce the representation to be invariant under the transformations among the source domains, the representation will become more robust under other domain transformations. Furthermore, we introduce an implementation of our method in practice, in which the domain transformation functions are learned through the training process of generative adversarial networks (GANs)~\cite{goodfellow2014generative,choi2018stargan}. We conduct extensive experiments on several widely used datasets and observe a significant improvement over relevant baselines. We also compare our methods against other state-of-the-art models and show that our method achieves competitive results. Our contribution in this work is threefold: \begin{itemize} \item We revisit the domain-invariant representation learning problem and shed light on it with several observations: a necessary and sufficient condition for the existence of a domain-invariant representation and a connection between domain-independent representation and a marginally-aligned representation. \item We propose a theoretically grounded method for learning a domain-invariant representation based on domain density transformation functions. We also demonstrate that we can learn the domain transformation functions with GANs in order to implement our approach in practice. \item We empirically show the effectiveness of our method by performing experiments on widely used domain generalization datasets (e.g., Rotated MNIST, VLCS and PACS) and compare our method with relevant baselines (especially CIDG \cite{li2018domain}, CIDDG \cite{li2018deep} and DGER \cite{zhao2020domain}). \end{itemize} \section{Related Work} \paragraph{Domain generalization:} Domain generalization is a challenging task in real-world machine learning problems, where the data distribution of a target domain might differ from that of the training source domains.
Therefore, extensive research has been developed to handle that domain-shift problem, aiming at a better generalization performance in the unseen target domain. A predominant approach for domain generalization is domain invariance \cite{muandet2013domain,li2018domain,li2018deep,arjovsky2019invariant,wang2020respecting,akuzawa2019adversarial,ilse2020diva,zhao2020domain,ajakan2014domain,li2018domainb,sun2016deep,shen2018wasserstein}, which learns a domain-invariant representation (which we define as aligning the marginal distribution of the representation, the conditional distribution of the output given the representation, or both). We are particularly interested in CIDG \cite{li2018domain}, CIDDG \cite{li2018deep} and DGER \cite{zhao2020domain}, which also learn a representation that aligns the joint distribution of the representation and the label given that the class distribution is unchanged across domains. It should be noted that \citet{zhao2020domain} assume the label is distributed uniformly in all domains, while our proposed method only requires the assumption that the label distribution is unchanged across domains (and not necessarily uniform). We also show later in our paper that the invariance of the class-label distribution across domains turns out to be the necessary and sufficient condition for the existence of a domain-invariant representation. Moreover, we provide a unified theoretical discussion of the two types of alignment, and then propose a method to learn a representation that aligns both the marginal and conditional distributions via domain density transformation functions for the domain generalization problem. Note that there exist several related works, such as~\citet{ajakan2014domain,ganin2016domain}, that use an adversarial loss with a domain discriminator to align the marginal distribution of the representation among domains, but they are different from our approach.
In particular, our method only uses GANs or normalizing flows to learn the transformations among domains, and learns a representation that is invariant under these functions, without using an adversarial loss on the representation (which can lead to very unstable training \cite{goodfellow2016nips,kodali2017convergence}). There also exist works \cite{liu2016coupled,hoffman2018cycada,bousmalis2017unsupervised,russo2018source} in the domain adaptation literature that use generative modeling to learn a domain transformation function from source to target images, and use the transformed images to train a classifier. Our method differs from these by enforcing the representation to be invariant under the domain transformations, and we show theoretically that the representation learned this way is domain-invariant both marginally and conditionally. Meanwhile, the above works use the domain transformation to transform the images and train the classifier directly on the transformed data, and are not effective or applicable for domain generalization. Another line of methods that has recently received a surge of interest applies the idea of meta-learning to domain generalization problems \cite{du2020learning,balaji2018metareg,li2018learning,behl2019alphamaml}. The core idea behind these works is that if we train a model that can adapt well among the source domains, it will be more likely to adapt to unseen target domains. Recently, approaches \cite{ding2017deep,chattopadhyay2020learning,seo2019learning} have emerged that make use of domain specificity, together with domain invariance, for the prediction problem. The argument here is that domain invariance, while generalizing well between domains, might be insufficient for the prediction of each specific domain, and thus domain specificity is necessary. We would like to emphasize that our method is not a direct competitor of meta-learning based and domain-specificity based methods.
In fact, we expect that our method can be used in conjunction with these methods to get the best of both worlds for better performance. \paragraph{Density transformation between domains:} Since our method is based on domain density transformations, we briefly review some related works here. To transform the data density between domains, one can use several types of generative models. Two common methods are based on GANs \cite{zhu2017unpaired,choi2018stargan,choi2020stargan} and normalizing flows \cite{grover2020alignflow}. Although our method is not limited to the choice of the generative model used for learning the domain transformation functions, we opt to use GANs, specifically StarGAN \cite{choi2018stargan}, for scalability. This is just an implementation choice to demonstrate the use and effectiveness of our method in practice, and it is unrelated to our theoretical results. \paragraph{Connection to contrastive learning:} Our method can be interpreted intuitively as a way to learn a representation network that is invariant (robust) under domain transformation functions. On the other hand, contrastive learning \cite{chen2020simple,chen2020big,misra2020self} is also a representation learning paradigm, in which the model learns similarities between images. In particular, contrastive learning encourages the representation of an input to be similar under different transformations (usually image augmentations). However, the transformations in contrastive learning are not learned and do not serve the purpose of making the representation robust under domain transformations. Our method first learns the transformations between domains and then uses them to learn a representation that is invariant under domain shifts.
\section{Theoretical Approach} \label{sec:approach} \subsection{Problem Statement} \label{sec:problem_statement} \begin{figure*} \caption{\small \textbf{Graphical model}} \label{graphical_model} \end{figure*} Let us denote the data distribution for a domain $d\in \mathcal D$ by $p(x,y|d)$, where the variable $x \in \mathcal X$ represents the data and $y \in \mathcal Y$ is its corresponding label. The graphical model for our domain generalization framework is depicted in Figure~\ref{graphical_model}, in which the joint distribution factorizes as follows: \begin{equation} p(d,x,y,z) = p(d)p(y)p(x|y,d)p(z|x)\;. \end{equation} In the domain generalization problem, since the data distribution $p(x,y|d)$ varies between domains, we expect changes in the marginal data distribution $p(x|d)$, the conditional data distribution $p(y|x,d)$, or both. In this paper, we assume that $p(y|d)$ is invariant across domains, i.e., the marginal distribution of the label $y$ does not depend on the domain $d$---this assumption is shown to be the key condition for the existence of a domain-invariant representation (see Remark~\ref{theorem1}). This is practically reasonable since in many classification datasets, the class distribution can be assumed to be unchanged across domains (usually a uniform distribution over the classes, e.g., balanced datasets). Our aim is to find a domain-invariant representation $z$, represented by the mapping $p(z|x)$, that can be used for the classification of label $y$ and generalizes across domains. In practice, this mapping can be deterministic (in that case, $p(z|x)=\delta_{g_\theta(x)}(z)$ with some function $g_\theta$, where $\delta$ is the Dirac delta distribution) or probabilistic (e.g., a normal distribution with the mean and standard deviation output by a network parameterized by $\theta$).
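The factorization above can be read as an ancestral sampling procedure. The following sketch samples from the graphical model with illustrative toy distributions (the probability tables and the tanh encoder are assumptions made for this example, not part of the method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy tables: 2 domains, 2 classes, scalar x and z.
p_d = np.array([0.5, 0.5])   # p(d)
p_y = np.array([0.5, 0.5])   # p(y): invariant across domains by assumption

def sample_x(y, d):
    # p(x|y,d): the class sets the mean, the domain shifts it (domain shift).
    return rng.normal(loc=2.0 * y + 0.5 * d, scale=1.0)

def sample_z(x):
    # p(z|x): here a deterministic encoder, i.e. p(z|x) = delta_{g(x)}(z).
    return np.tanh(x)

# Ancestral sampling following the factorization p(d) p(y) p(x|y,d) p(z|x).
d = rng.choice(2, p=p_d)
y = rng.choice(2, p=p_y)
x = sample_x(y, d)
z = sample_z(x)
```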
For all of our experiments, we use a deterministic mapping for efficient inference at test time, while in this section, we present our theoretical results for the general case of a distribution $p(z|x)$. In most existing domain generalization approaches, the domain-invariant representation $z$ is defined using one of the two following definitions: \begin{definition} \textbf{(Marginal Distribution Alignment)} The representation $z$ is said to satisfy the marginal distribution alignment condition if $p(z|d)$ is invariant w.r.t. $d$. \end{definition} \begin{definition} \textbf{(Conditional Distribution Alignment)} The representation $z$ is said to satisfy the conditional distribution alignment condition if $p(y|z,d)$ is invariant w.r.t. $d$. \end{definition} However, when the joint data distribution varies between domains, it is crucial to align both the marginal and the conditional distribution of the representation $z$. To this end, this paper aims to learn a representation $z$ that satisfies both the marginal and conditional alignment conditions. We justify our assumption of independence between $y$ and $d$ (thus $p(y|d)=p(y)$) by the following remark, which shows that this assumption turns out to be the necessary and sufficient condition for learning a domain-invariant representation. Note that this condition is also used in several existing works~\cite{zhao2020domain,li2018domain,li2018deep}. \begin{remark} \label{theorem1} The invariance of $p(y|d)$ across domains $d$ is the necessary and sufficient condition for the existence of a domain-invariant representation (that aligns both the marginal and conditional distributions). \end{remark} \begin{proof} Provided in the appendix. \end{proof} It is also worth noting that methods that learn a domain-independent representation, for example~\cite{ilse2020diva}, only align the marginal distribution.
This comes directly from the following remark: \begin{remark} A representation $z$ satisfies the marginal distribution alignment condition if and only if $I(z,d)=0$, where $I(z,d)$ is the mutual information between $z$ and $d$. \end{remark} \begin{proof} Provided in the appendix. \end{proof} The question remains of how to learn a non-trivial domain-invariant representation that satisfies both distribution alignment conditions. This will be discussed in the following subsection. \begin{figure*} \caption{\small\textbf{Domain density transformation}} \label{concept} \end{figure*} \subsection{Learning a Domain-Invariant Representation with Domain Density Transformation Functions} To present our method, we will make some assumptions about the data distribution. Specifically, for any two domains $d,d'$, we assume that there exists an invertible and differentiable function, denoted by $f_{d,d'}$, that transforms the density $p(x|y,d)$ to $p(x'|y,d'), \forall y$. Let $f_{d',d}$ denote the inverse of $f_{d,d'}$, i.e., $f_{d',d}:=(f_{d,d'})^{-1}$. Due to the invertibility and differentiability of the $f$'s, we can apply the change of variables theorem \cite{rudin2006real,bogachev2007measure} to the distributions above. In particular, with $x' = f_{d,d'}(x)$ (and thus $x=f_{d',d}(x')$), we have: \begin{equation} p(x|y,d) = p(x'|y,d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1} \label{cv}, \end{equation} where $J_{f_{d',d}}(x')$ is the Jacobian matrix of the function $f_{d',d}$ evaluated at $x'$.
Multiplying both sides of Eq.~\ref{cv} by $p(y|d)=p(y|d')$, we get \begin{equation} p(x,y|d) = p(x',y|d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1}; \end{equation} and marginalizing both sides of the above equation over $y$ gives us \begin{equation} p(x|d) = p(x'|d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1}. \label{cv2} \end{equation} By using Eq.~\ref{cv} and Eq.~\ref{cv2}, we can prove the following theorem, which offers an efficient way to learn a domain-invariant representation, given the transformation functions $f$'s between domains. \begin{theorem} Let $f_{d,d'}$ be an invertible and differentiable function (with inverse $f_{d',d}$) that transforms the data density from domain $d$ to $d'$ (as described above). If the representation $z$ satisfies \begin{equation} \label{z_defi} p(z|x)=p(z|f_{d,d'}(x)), \; \forall x, \end{equation} then it aligns both the marginal and the conditional data distributions of domains $d$ and $d'$. \end{theorem} \begin{proof} Provided in the appendix. \end{proof} This theorem indicates that, if we can find the functions $f$ that transform the data densities among the domains, we can learn a domain-invariant representation $z$ by encouraging the representation to be invariant under all the transformations $f$. This idea is illustrated in Figure~\ref{concept}.
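Both Eq.~\ref{cv2} and the theorem can be illustrated numerically in one dimension. The sketch below assumes domain $d$ has standard Gaussian data, uses a hypothetical transformation $f_{d,d'}(x)=2x+1$, and constructs an artificial encoder that happens to be invariant under it (all of these are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# f_{d,d'}(x) = 2x + 1 maps domain d, x ~ N(0,1), to domain d', x' ~ N(1, 2^2);
# the inverse f_{d',d}(x') = (x'-1)/2 has constant Jacobian |det J| = 1/2.
f, jac = (lambda x: 2.0 * x + 1.0), 0.5

# Check Eq. (cv2): p(x|d) = p(x'|d') |det J_{f_{d',d}}(x')|^{-1}.
x = np.linspace(-3.0, 3.0, 101)
assert np.allclose(gauss_pdf(x, 0.0, 1.0), gauss_pdf(f(x), 1.0, 2.0) / jac)

# Check the theorem: an encoder invariant under f aligns p(z|d) and p(z|d').
# f has fixed point -1 and log2|f(x)+1| = 1 + log2|x+1|, so g(f(x)) = g(x).
g = lambda x: np.sin(2.0 * np.pi * np.log2(np.abs(x + 1.0)))
x_d = rng.normal(size=20000)             # samples from p(x|d)
x_dp = f(rng.normal(size=20000))         # samples from p(x|d')
assert np.allclose(g(x_d), g(f(x_d)))    # the invariance condition (z_defi)
# z = g(x) then has the same distribution in both domains (first two moments):
assert abs(g(x_d).mean() - g(x_dp).mean()) < 0.05
assert abs((g(x_d) ** 2).mean() - (g(x_dp) ** 2).mean()) < 0.05
```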
We can therefore use the following learning objective to learn a domain-invariant representation $z=g_\theta(x)$: \begin{equation}\label{eq:obj_func} \mathbb{E}_{d}\left[\mathbb{E}_{p(x,y|d)}\left[l(y,g_\theta(x))+\mathbb{E}_{d'}[dis(g_\theta(x),g_\theta(f_{d,d'}(x)))]\right]\right]. \end{equation} Assume that we have a set of $K$ source domains $D_s=\{d_1,d_2,...,d_K\}$; the objective function in Eq.~\ref{eq:obj_func} then becomes: \begin{align} \mathbb{E}_{d,d'\in D_s,p(x,y|d)}\left[l(y,g_\theta(x))+dis(g_\theta(x),g_\theta(f_{d,d'}(x)))\right], \label{obj} \end{align} where $l(y,g_\theta(x))$ is the prediction loss of a network that predicts $y$ given $z=g_\theta(x)$, and $dis$ is a distance metric that enforces the invariance condition in Eq.~\ref{z_defi}. In our implementation, we use a squared-error distance, i.e., $dis(g_\theta(x),g_\theta(f_{d,d'}(x))) = ||g_\theta(x)-g_\theta(f_{d,d'}(x))||^2_2$, since it performs the best in practice. However, we also considered other distances such as the contrastive distance, which we discuss in more detail in the appendix. This theorem motivates us to learn such domain transformation functions for our domain-invariant representation learning framework. In the next section, we show how one can incorporate this idea into real-world domain generalization problems by learning the transformations with generative adversarial networks. \section{A Practical Implementation Using Generative Adversarial Networks} In practice, we can learn the functions $f$ that transform the data distributions between domains by using several generative modeling frameworks, e.g., normalizing flows \cite{grover2020alignflow} or GANs \cite{zhu2017unpaired,choi2018stargan,choi2020stargan}. One advantage of normalizing flows is that the transformation is naturally invertible by design of the neural network.
However, existing frameworks (e.g., \citet{grover2020alignflow}) require two flows to transform between each pair of domains, making them less scalable (the number of flows scales linearly with the number of domains). Moreover, an initial implementation of our method using AlignFlow showed performance similar to that of the version using GANs. Therefore, we opt to use GANs for better scalability. In particular, we use the StarGAN~\cite{choi2018stargan} model, which is a unified network (only requiring a single network to transform across all domains) designed for image domain transformations. It should be noted that the transformations learned by StarGAN are differentiable everywhere or almost everywhere with typical choices of the activation function (e.g., tanh or ReLU), and the cycle-consistency loss enforces each pair of transformations to approximate a pair of inverse functions. The goal of StarGAN is to learn a unified network $G$ that transforms the data density among multiple domains. In particular, the network $G(x,d,d')$ (i.e., $G$ is conditioned on the image $x$ and the two different domains $d,d'$) transforms an image $x$ from domain $d$ to domain $d'$. Unlike the original StarGAN model, which only takes the image $x$ and the desired destination domain $d'$ as its input, in our implementation we feed both the original domain $d$ and the desired destination domain $d'$, together with the original image $x$, to the generator $G$. The generator's goal is to fool a discriminator $D$ into thinking that the transformed image belongs to the destination domain $d'$. In other words, the equilibrium state of StarGAN, in which $G$ completely fools $D$, is reached when $G$ successfully transforms the data density of the original domain to that of the destination domain. After training, we use $G(.,d,d')$ as the function $f_{d,d'}(.)$ described in the previous section and perform the representation learning via the objective function in Eq.~\ref{obj}.
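The conditioning of $G$ on both domains can be implemented, for instance, by spatially replicating one-hot domain labels and concatenating them to the image channels. The sketch below illustrates this input construction; it is an assumption for illustration, and the actual StarGAN conditioning details may differ.

```python
import numpy as np

def generator_input(x, d, d_prime, num_domains):
    """Concatenate spatially replicated one-hot maps of the source domain d
    and destination domain d' to the image channels (illustrative only)."""
    c, h, w = x.shape
    def onehot_map(k):
        m = np.zeros((num_domains, h, w), dtype=x.dtype)
        m[k] = 1.0
        return m
    return np.concatenate([x, onehot_map(d), onehot_map(d_prime)], axis=0)

x = np.random.default_rng(0).random((3, 32, 32))  # a CHW "image"
inp = generator_input(x, d=0, d_prime=2, num_domains=4)
assert inp.shape == (3 + 4 + 4, 32, 32)   # image + source map + target map
assert inp[3].min() == 1.0                # source one-hot channel for d = 0
assert inp[3 + 4 + 2].max() == 1.0        # target one-hot channel for d' = 2
```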
Three important loss functions of the StarGAN architecture are: \begin{itemize} \item The domain classification loss $\mathcal{L}_{cls}$, which encourages the generator $G$ to generate images that closely belong to the desired destination domain $d'$. \item The adversarial loss $\mathcal{L}_{adv}$, which is the classification loss of a discriminator $D$ that tries to distinguish between real images and the synthetic images generated by $G$. The equilibrium state of StarGAN is reached when $G$ completely fools $D$, which means the distribution of the generated images (via $G(x,d,d'), x\sim p(x|d)$) becomes the distribution of the real images of the destination domain $p(x'|d')$. This is our objective, i.e., to learn a function that transforms domains' densities. \item The reconstruction loss $\mathcal{L}_{rec}=\mathbb{E}_{x,d,d'}[||x-G(x',d',d)||_1]$, where $x'=G(x,d,d')$, which ensures that the transformations preserve the image's content. Note that this also aligns with our interest, since we want $G(.,d',d)$ to be the inverse of $G(.,d,d')$, which minimizes $\mathcal{L}_{rec}$. \end{itemize} We can enforce the generator $G$ to transform the data distribution within the class $y$ (e.g., $p(x|y,d)$ to $p(x'|y,d')$ $\forall y$) by sampling each minibatch with data from the same class $y$, so that the discriminator distinguishes the transformed images from the real images of class $y$ and domain $d'$. However, we found that this constraint can be relaxed in practice, and the generator almost always transforms the image within the original class $y$. As mentioned earlier, after training the StarGAN model, we can use the generator $G(.,d,d')$ as our $f_{d,d'}(.)$ function and learn a domain-invariant representation via the learning objective in Eq.~\ref{obj}. We name this implementation of our method DIRT-GAN (Domain Invariant Representation learning with domain Transformations via Generative Adversarial Networks).
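Putting the pieces together, the training objective in Eq.~\ref{obj} reduces, per minibatch, to a prediction loss plus the squared-error invariance penalty. A minimal NumPy sketch with a hypothetical linear encoder and classifier (the actual implementation uses deep networks and the learned StarGAN transformations):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(logits, y):
    """Mean softmax cross-entropy, playing the role of the prediction loss l."""
    logits = logits - logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-logp[np.arange(len(y)), y].mean())

def dirt_loss(x, y, f_dd, W_g, W_h, lam=1.0):
    """Prediction loss plus the squared-error invariance penalty (see text)."""
    z = x @ W_g               # z = g_theta(x), a linear encoder for illustration
    z_tran = f_dd(x) @ W_g    # g_theta(f_{d,d'}(x))
    pred = cross_entropy(z @ W_h, y)
    invar = float(((z - z_tran) ** 2).sum(axis=1).mean())  # dis(., .)
    return pred + lam * invar

x = rng.normal(size=(64, 8))        # a minibatch from some domain d
y = rng.integers(0, 3, size=64)
W_g = rng.normal(size=(8, 4))       # encoder weights
W_h = rng.normal(size=(4, 3))       # classifier weights
f_identity = lambda x: x            # trivial stand-in for f_{d,d'}

# With the identity transformation the invariance penalty vanishes.
loss = dirt_loss(x, y, f_identity, W_g, W_h)
assert loss > 0.0
```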
\section{Experiments} \label{experiments} \subsection{Datasets} To evaluate our method, we perform experiments on three datasets that are commonly used in the literature for domain generalization. \paragraph{Rotated MNIST \cite{ghifary2015domain}:} In this dataset, 1,000 MNIST images (100 per class) \cite{lecun-mnisthandwrittendigit-2010} are chosen to form the first domain (denoted $\mathcal{M}_0$); counter-clockwise rotations of $15^{\circ},30^{\circ},45^{\circ},60^{\circ}$ and $75^{\circ}$ are then applied to create five additional domains, denoted $\mathcal{M}_{15},\mathcal{M}_{30},\mathcal{M}_{45},\mathcal{M}_{60}$ and $\mathcal{M}_{75}$. The task is classification with ten classes (digits 0 to 9). \paragraph{VLCS \cite{ghifary2015domain}:} contains 10,729 images from four domains, where each domain is a sub-dataset. The four datasets are VOC2007 (V), LabelMe (L), Caltech-101 (C), and SUN09 (S). The task is classification with five classes. \paragraph{PACS \cite{Li2017dg}:} contains 9,991 images from four different domains: art painting, cartoon, photo, sketch. The task is classification with seven classes. \subsection{Experimental Setting} For all datasets, we perform ``leave-one-domain-out'' experiments, where we choose one domain as the target domain, train the model on all remaining domains and evaluate it on the chosen domain. Following standard practice, we use 90\% of the available data as training data and 10\% as validation data, except for the Rotated MNIST experiment, where we do not use a validation set and simply report the performance of the last epoch. For the \textbf{Rotated MNIST} dataset, we use a network of two $3\times 3$ convolutional layers and a fully connected layer as the representation network $g_\theta$ to get a representation $z$ of 64 dimensions. A single linear layer is then used to map the representation $z$ to the ten output classes. This architecture is the deterministic version of the network used by \citet{ilse2020diva}.
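The Rotated MNIST construction described above can be reproduced with a simple rotation routine. A pure-NumPy nearest-neighbor sketch on stand-in data (the published dataset may use a different interpolation scheme):

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_ccw(img, degrees):
    """Rotate a square image about its center by `degrees`
    (nearest-neighbor resampling, zeros outside the support)."""
    h, w = img.shape
    t = np.deg2rad(degrees)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse map: for each output pixel, locate its source pixel.
    src_x = np.cos(t) * (xs - cx) + np.sin(t) * (ys - cy) + cx
    src_y = -np.sin(t) * (xs - cx) + np.cos(t) * (ys - cy) + cy
    sx = np.rint(src_x).astype(int)
    sy = np.rint(src_y).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

# Build the six domains M_0, ..., M_75 from a stand-in image set
# (10 random 28x28 "images" here instead of the 1,000 MNIST digits).
images = rng.random((10, 28, 28))
domains = {a: np.stack([rotate_ccw(im, a) for im in images])
           for a in (0, 15, 30, 45, 60, 75)}
assert np.array_equal(domains[0][0], images[0])   # 0 degrees is the identity
```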
We train our network for 500 epochs with the Adam optimizer \cite{kingma2014adam}, using a learning rate of 0.001 and a minibatch size of 64, and report performance on the test domain after the last epoch. For the \textbf{VLCS} and \textbf{PACS} datasets, for a fair comparison against our main baselines, we use the most common choices of backbone networks for those datasets in existing works as the representation networks $g_\theta$, i.e., Alexnet \cite{krizhevsky2012imagenet} for VLCS and Resnet18 \cite{he2016deep} for PACS. We replace the last fully connected layer of the backbone with a linear layer of dimension 256 so that our representation has 256 dimensions. As with the Rotated MNIST experiment, we use a single layer to map from the representation $z$ to the output. We train the network for 100 epochs with plain stochastic gradient descent (SGD) using learning rate 0.001, momentum 0.9, minibatch size 64, and weight decay 0.001. Data augmentation is also standard practice for real-world computer vision datasets like VLCS and PACS, and during training we augment our data as follows: crops of random size and aspect ratio, resizing to $224\times 224$ pixels, random horizontal flips, random color jitter, randomly converting the image tile to grayscale with 10\% probability, and normalization using the ImageNet channel means and standard deviations. The StarGAN \cite{choi2018stargan} model implementation is taken from the authors' original source code with no significant modifications. For each set of source domains, we train the StarGAN model for 100,000 iterations with a minibatch of 16 images per iteration. Our code is available at \href{https://github.com/atuannguyen/DIRT}{https://github.com/atuannguyen/DIRT}. We train our model on an NVIDIA Quadro RTX 6000. \begin{table*}[t!] \footnotesize \centering \caption{\small \textbf{Rotated MNIST}.
Reported numbers are mean accuracy and standard deviation over 5 runs} \label{mnist_exp} \small \setlength{\tabcolsep}{3pt} \centering \begin{tabular}{cccccccc} \toprule & \multicolumn{6}{c}{Domains} & \\ \cmidrule(r){2-7} Model & $\mathcal{M}_0$ & $\mathcal{M}_{15}$ & $\mathcal{M}_{30}$ & $\mathcal{M}_{45}$ & $\mathcal{M}_{60}$ & $\mathcal{M}_{75}$ & Average \\ \midrule HIR \cite{wang2020respecting} & 90.34 & 99.75 & 99.40 & 96.17 & 99.25 & 91.26 & 96.03 \\ DIVA \cite{ilse2020diva} & 93.5 & 99.3 & 99.1 & 99.2 & 99.3 & 93.0 & 97.2 \\ DGER \cite{zhao2020domain} & 90.09 & 99.24 & 99.27 & 99.31 & 99.45 & 90.81 & 96.36 \\ \midrule DA \cite{ganin2016domain} & 86.7 & 98.0 & 97.8 & 97.4 & 96.9 & 89.1 & 94.3 \\ LG \cite{shankar2018generalizing} & 89.7 & 97.8 & 98.0 & 97.1 & 96.6 & 92.1 & 95.3 \\ HEX \cite{wang2019learning} & 90.1 & 98.9 & 98.9 & 98.8 & 98.3 & 90.0 & 95.8 \\ ADV \cite{wang2019learning} & 89.9 & 98.6 & 98.8 & 98.7 & 98.6 & 90.4 & 95.2 \\ \midrule DIRT-GAN (ours) & 97.2($\pm$0.3) & 99.4($\pm$0.1) & 99.3($\pm$0.1) & 99.3($\pm$0.1) & 99.2($\pm$0.1) & 97.1($\pm$0.3) & \textbf{98.6}\\ \bottomrule \end{tabular} \end{table*} \subsection{Results} \begin{figure*} \caption{\small\textbf{Visualization of the representation space}} \label{mnist_pca} \end{figure*} \paragraph{Rotated MNIST Experiment.} Table~\ref{mnist_exp} shows the performance of our model on the Rotated MNIST dataset. The main baselines we consider in this experiment are HIR \cite{wang2020respecting}, DIVA \cite{ilse2020diva} and DGER \cite{zhao2020domain}, which are domain-invariance based methods. Our method clearly outperforms these baselines, illustrating the effectiveness of our approach to learning a domain-invariant representation. We also include other best-performing models for this dataset in the second half of the table. To the best of our knowledge, we set a new state of the art on the Rotated MNIST dataset.
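The representation-space plots in Figure~\ref{mnist_pca} project each 64-dimensional $z$ onto its first two principal components; this projection can be computed directly with an SVD. A sketch on stand-in representation vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_project(Z, k=2):
    """Project rows of Z onto their first k principal components (via SVD)."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T

# Stand-in 64-dimensional representations for one domain (random here;
# in the paper these are encoder outputs z = g_theta(x)).
Z = rng.normal(size=(500, 64))
P = pca_project(Z)
assert P.shape == (500, 2)
```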
We further analyze the distribution of the representation $z$ by performing principal component analysis to reduce the dimension of $z$ from 64 to two principal components. We visualize the representation space for two domains $\mathcal{M}_{30}$ and $\mathcal{M}_{60}$, with each point indicating the representation $z$ of an image $x$ in the two-dimensional space and its color indicating the label $y$. Figures \ref{mnist_pca}a and \ref{mnist_pca}b show the representation space of our method (in domains $\mathcal{M}_{30}$ and $\mathcal{M}_{60}$ respectively). It is clear that both the marginal (judged by the general distribution of the points) and the conditional (judged by the positions of colors) are relatively aligned. Meanwhile, Figures \ref{mnist_pca}c and \ref{mnist_pca}d show the representation space with naive training (in domains $\mathcal{M}_{30}$ and $\mathcal{M}_{60}$ respectively), showing the misalignment in the marginal distribution (judged by the general distribution of the points) and the conditional distribution (for example, the distributions of blue points and green points). \begin{table*}[t!] \footnotesize \centering \caption{\small \textbf{VLCS}. 
Reported numbers are mean accuracy and standard deviation over 5 runs} \label{vlcs_exp} \centering \begin{tabular}{ccccccc} \toprule & & \multicolumn{5}{c}{VLCS} \\ \cmidrule(r){3-7} Model & Backbone & V & L & C & S & Average \\ \cmidrule(r){1-7} CIDG \cite{li2018domain} & Alexnet & 65.65 & 60.43 & 91.12 & 60.85 & 69.51\\ CIDDG \cite{li2018deep} & Alexnet & 64.38 & 63.06 & 88.83 & 62.10 & 69.59\\ DGER \cite{zhao2020domain} & Alexnet & 73.24 & 58.26 & 96.92 & 69.10 & 74.38 \\ HIR \cite{wang2020respecting} & Alexnet & 69.10 & 62.22 & 95.39 & 65.71 & 73.10 \\ \cmidrule{1-7} JiGen \cite{carlucci2019domain} & Alexnet & 70.62 & 60.90 & 96.93 & 64.30 & 73.19 \\ \cmidrule(r){1-7} DIRT-GAN (ours) & Alexnet & 72.1($\pm$1.0) & 64.0($\pm$0.9) & 97.3($\pm$0.2) & 72.2($\pm$1.1) & \textbf{76.4} \\ \bottomrule \end{tabular} \end{table*} \begin{table*}[t!] \footnotesize \centering \caption{\small \textbf{PACS}. Reported numbers are mean accuracy and standard deviation over 5 runs} \label{pacs_exp} \centering \begin{tabular}{ccccccc} \toprule & & \multicolumn{5}{c}{PACS} \\ \cmidrule(r){3-7} Model & Backbone & Art Painting & Cartoon & Photo & Sketch & Average \\ \cmidrule(r){1-7} DGER \cite{zhao2020domain} & Resnet18 & 80.70 & 76.40 & 96.65 & 71.77 & 81.38 \\ \cmidrule{1-7} JiGen \cite{carlucci2019domain} & Resnet18 & 79.42 & 75.25 & 96.03 & 71.35 & 79.14 \\ MLDG \cite{li2018learning} & Resnet18 & 79.50 & 77.30 & 94.30 & 71.50 & 80.70 \\ MetaReg \cite{balaji2018metareg} & Resnet18 & 83.70 & 77.20 & 95.50 & 70.40 & 81.70 \\ CSD \cite{piratla2020efficient} & Resnet18 & 78.90 & 75.80 & 94.10 & 76.70 & 81.40 \\ DMG \cite{chattopadhyay2020learning} & Resnet18 & 76.90 & 80.38 & 93.35 & 75.21 & 81.46 \\ \cmidrule(r){1-7} DIRT-GAN (ours) & Resnet18 & 82.56($\pm$ 0.4) & 76.37($\pm$ 0.3) & 95.65($\pm$ 0.5) & 79.89($\pm$ 0.2) & \textbf{83.62} \\ \bottomrule \end{tabular} \end{table*} \paragraph{VLCS and PACS.} Tables~\ref{vlcs_exp} and \ref{pacs_exp} show the results for the VLCS and PACS
datasets. On these real-world computer vision datasets, we consider HIR \cite{wang2020respecting}, CIDG \cite{li2018domain}, CIDDG \cite{li2018deep} and DGER \cite{zhao2020domain} as our main domain-invariance baselines. We also include other approaches (meta-learning based or domain-specificity based) in the second half of the tables for reference. Our method significantly outperforms the other invariant-representation baselines, namely CIDG, CIDDG and DGER, with the same backbone architectures, showing the effectiveness of our representation alignment method. \section{Conclusion} \label{conclusion} To conclude, in this work we propose a theoretically grounded approach to learning a domain-invariant representation for the domain generalization problem by using domain transformation functions. We also provide insights into domain-invariant representation learning through several theoretical observations. We then introduce a practical implementation of our method, with the domain transformations learned by a StarGAN architecture, and empirically show that our approach outperforms other domain-invariance based methods. Our method also achieves competitive results on several datasets when compared to other state-of-the-art models. A potential limitation of our method is that we need to train an additional network (StarGAN) to learn to transform the data density among domains. However, this network is only used during training, so the computation required at test time is the same as for other models. In the future, we plan to incorporate our method into meta-learning based and domain-specificity based approaches for improved performance. We also plan to extend the domain-invariant representation learning framework to more challenging scenarios, for example, where domain information is unavailable (i.e., we have a dataset pooled from multiple source domains but do not know the domain identity of each data instance).
\section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{see Section~\ref{conclusion}} \item Did you discuss any potential negative societal impacts of your work? \answerNA{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{see our Supplementary File} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{see Section~\ref{experiments}} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{See Section~\ref{experiments}} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{} \item Did you include any new assets either in the supplemental material or as a URL? 
\answerYes{We include our source code in the Supplementary File} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \appendix \section{Proofs} In the following proofs, we treat all variables as continuous and use integral notation throughout. If some of the variables are discrete, it is straightforward to replace the corresponding integrals with sums, and the proofs still hold. \subsection{Remark 1} \begin{proof} $ $\newline \begin{itemize} \item[i)] If there exists a representation $z$ defined by the mapping $p(z|x)$ that aligns both the marginal and the conditional distributions, then $\forall d,d',y$ we have: \begin{align} p(y,z|d) &= p(z|d)p(y|z,d) \nonumber\\ &=p(z|d')p(y|z,d')=p(y,z|d'). \label{eqyz} \end{align} By marginalizing both sides of Eq.~\ref{eqyz} over $z$, we get $p(y|d)=p(y|d')$\,. \item[ii)] If $p(y|d)$ is unchanged w.r.t. the domain $d$, then we can always find a domain-invariant representation, for example, $p(z|x)=\delta_0(z)$ in the deterministic case (mapping all $x$ to 0), or $p(z|x)=\mathcal{N}(z;0,1)$ in the probabilistic case. These representations are trivial and not of interest, since they are uninformative of the input $x$.
However, the reader can verify that they do align both the marginal and the conditional distributions of the data. \end{itemize} \end{proof} \subsection{Remark 2} \begin{proof} $ $\newline \begin{itemize} \item If $I(z,d)=0$, then $p(z|d)=p(z)$, which means $p(z|d)$ is invariant w.r.t. $d$. \item If $p(z|d)$ is invariant w.r.t. $d$, then $\forall z,d$: \begin{align} p(z) &= \int p(z|d')p(d') \text{d}d' = \int p(z|d)p(d') \text{d}d' \nonumber \\ & \quad(\text{since } p(z|d')=p(z|d)\ \forall d') \nonumber \\ &= p(z|d)\int p(d') \text{d}d' = p(z|d) \nonumber\\ &\implies I(z,d)=0 \end{align} \end{itemize} \end{proof} \subsection{Theorem 1} \begin{proof} $ $\newline \begin{itemize} \item[i)] Marginal alignment: $\forall z$ we have: \begin{align} p(z|d)&=\int p(x|d)p(z|x)\text{d}x \nonumber\\ &=\int p(f_{d',d}(x')|d)p(z|f_{d',d}(x'))\left|\text{det } J_{f_{d',d}}(x')\right|\text{d}x' \nonumber\\ \intertext{\quad(by the change of variables $x'=f_{d,d'}(x)$ in the multiple integral)} &= \int p(x'|d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1}p(z|x')\left|\text{det } J_{f_{d',d}}(x')\right|\text{d}x' \nonumber\\ \intertext{\quad(since $p(f_{d',d}(x')|d)=p(x'|d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1}$ and $p(z|f_{d',d}(x'))=p(z|x')$)} &= \int p(x'|d')p(z|x')\text{d}x' \nonumber\\ &= p(z|d') \end{align} \item[ii)] Conditional alignment: $\forall z,y$ we have: \begin{align} p(z|y,d)&=\int p(x|y,d)p(z|x)\text{d}x \nonumber\\ &=\int p(f_{d',d}(x')|y,d)p(z|f_{d',d}(x'))\left|\text{det } J_{f_{d',d}}(x')\right|\text{d}x' \nonumber\\ \intertext{\quad(by the change of variables $x'=f_{d,d'}(x)$ in the multiple integral)} &= \int p(x'|y,d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1}p(z|x')\left|\text{det } J_{f_{d',d}}(x')\right|\text{d}x' \nonumber\\ \intertext{\quad(since $p(f_{d',d}(x')|y,d)=p(x'|y,d')\left|\text{det } J_{f_{d',d}}(x')\right|^{-1}$ and $p(z|f_{d',d}(x'))=p(z|x')$)}
&= \int p(x'|y,d')p(z|x')\text{d}x' \nonumber\\ &= p(z|y,d') \end{align} Note that \begin{equation} p(y|z,d)=\frac{p(y,z|d)}{p(z|d)}=\frac{p(y|d)p(z|y,d)}{p(z|d)}. \end{equation} Since $p(y|d)=p(y)=p(y|d')$, $p(z|y,d)=p(z|y,d')$ and $p(z|d)=p(z|d')$, we have: \begin{equation} p(y|z,d)=\frac{p(y|d')p(z|y,d')}{p(z|d')}=p(y|z,d'). \end{equation} \end{itemize} \end{proof} \section{Discussion on the choice of the distance metric between representations} As discussed in Section 3.2, we enforce the representation network $g_\theta$ to be invariant under the domain transformations $f_{d,d'}$ (for any two domains $d,d'$), i.e., $g_\theta(x)=g_\theta(f_{d,d'}(x))$. In our implementation, we use the squared error distance between $g_\theta(x)$ and $g_\theta(f_{d,d'}(x))$. Admittedly, this distance tends to have the side effect of shrinking the norm of the representation. However, as visualized in Section 5.3, it successfully aligns the distributions of the representation. We also considered other distances, such as the contrastive distance and the cosine distance. Table~\ref{ablation} presents an ablation study with different choices of the distance metric, for the Rotated MNIST experiment with target domain $\mathcal{M}_{75}$. Note that in the Rotated MNIST dataset, domains $\mathcal{M}_{75}$ and $\mathcal{M}_{0}$ are (equally) the hardest target domains; we therefore choose $\mathcal{M}_{75}$ for this ablation study to compare the performance of the variants.
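As an illustration of the three candidate metrics, the following minimal NumPy sketch (illustrative only, not the paper's actual training code; the toy vectors and the margin value are our own assumptions) computes each distance between a representation $g_\theta(x)$ and its transformed counterpart $g_\theta(f_{d,d'}(x))$. The gradient of the squared error with respect to a representation points partly along the representation itself, which is the source of the norm-shrinking side effect noted above, whereas the cosine distance is blind to norms altogether.

```python
import numpy as np

def squared_error(z1, z2):
    # Squared Euclidean distance; its gradient w.r.t. z1 is 2*(z1 - z2),
    # so minimizing it can also shrink the norm of the representations.
    return np.sum((z1 - z2) ** 2)

def cosine_distance(z1, z2):
    # 1 - cosine similarity; invariant to the norms of z1 and z2.
    return 1.0 - np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))

def contrastive_distance(z1, z2, z_neg, margin=1.0):
    # A simple margin-based contrastive term: pull the positive pair
    # together, push a negative representation at least `margin` away.
    pos = np.sum((z1 - z2) ** 2)
    neg = max(0.0, margin - np.linalg.norm(z1 - z_neg)) ** 2
    return pos + neg

# Toy representations of an image and its domain-translated version.
z = np.array([1.0, 2.0, 2.0])
z_t = np.array([1.1, 1.9, 2.1])
z_other = np.array([-2.0, 0.5, -1.0])  # representation of a different image

print(squared_error(z, z_t))        # small for an aligned pair
print(cosine_distance(z, 2.0 * z))  # 0.0: blind to scale, hence to norm collapse
print(contrastive_distance(z, z_t, z_other))
```

In words: the squared error penalizes any deviation between the pair, including in norm; the cosine distance assigns zero cost to an arbitrary rescaling, which matches its weaker performance in the ablation.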
\begin{table*}[ht] \footnotesize \centering \caption{\small \textbf{Ablation study}: Rotated MNIST experiments with $\mathcal{M}_{75}$ as the target domain.} \begin{tabular}{cc} \toprule Distance Metric & Accuracy \\ \midrule Squared Error Distance & 97.1($\pm$0.3) \\ Contrastive Distance & 95.8($\pm$0.9) \\ Cosine Distance & 90.1($\pm$0.3) \\ \bottomrule \end{tabular} \label{ablation} \end{table*} As the squared error distance gives the best performance and is also the most stable in practice, we use it in our implementation. \end{document}
\begin{document} \author[Robert Laterveer] {Robert Laterveer} \address{Institut de Recherche Math\'ematique Avanc\'ee, CNRS -- Universit\'e de Strasbourg,\ 7 Rue Ren\'e Des\-car\-tes, 67084 Strasbourg CEDEX, FRANCE.} \email{[email protected]} \title[Zero--cycles on self--products of surfaces]{Zero--cycles on self--products of surfaces: some new examples verifying Voisin's conjecture} \begin{abstract} An old conjecture of Voisin describes how $0$--cycles of a surface $S$ should behave when pulled--back to the self--product $S^m$ for $m>p_g(S)$. We exhibit some surfaces with large $p_g$ that verify Voisin's conjecture. \end{abstract} \keywords{Algebraic cycles, Chow groups, motives, Voisin conjecture, Kimura finite--dimensionality conjecture} \subjclass[2010]{Primary 14C15, 14C25, 14C30.} \maketitle \section{Introduction} Let $X$ be a smooth projective variety over $\mathbb{C}$, and let $A^i(X)_{\mathbb{Z}}:=CH^i(X)$ denote the Chow groups of $X$ (i.e. the groups of codimension $i$ algebraic cycles on $X$ with $\mathbb{Z}$--coefficients, modulo rational equivalence \cite{F}). Let $A^i_{hom}(X)_{\mathbb{Z}}$ (and $A^i_{AJ}(X)_{\mathbb{Z}}$) denote the subgroups of homologically trivial (resp. Abel--Jacobi trivial) cycles. The Bloch--Beilinson--Murre conjectures present a beautiful and coherent dream--world in which Chow groups are determined by cohomology and the coniveau filtration \cite{J2}, \cite{J4}, \cite{Mur}, \cite{Kim}, \cite{MNP}, \cite{Vo}. The following particular instance of this dream--world was first formulated by Voisin: \begin{conjecture}[Voisin 1993 \cite{V9}]\label{conj} Let $S$ be a smooth projective surface. Let $m$ be an integer larger than the geometric genus $p_g(S)$.
Then for any $0$--cycles $a_1,\ldots,a_m\in A^2_{AJ}(S)_{\mathbb{Z}}$, one has \[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) a_{\sigma(1)}\times\cdots\times a_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{2m}(S^m)_{\mathbb{Z}}\ .\] (Here $\mathfrak S_m$ is the symmetric group on $m$ elements, and $ \hbox{sgn}(\sigma)$ is the sign of the permutation $\sigma$.) \end{conjecture} For surfaces of geometric genus $0$, Conjecture \ref{conj} reduces to Bloch's conjecture \cite{B}. For surfaces $S$ of geometric genus $1$, Conjecture \ref{conj} takes on a particularly simple form: in this case, the conjecture stipulates that any $a_1, a_2\in A^2_{AJ}(S)_{\mathbb{Z}}$ should verify the equality \[ a_1\times a_2 =a_2\times a_1\ \ \ \hbox{in}\ A^4(S\times S)_{\mathbb{Z}}\ .\] This conjecture is still open for a general $K3$ surface; examples of surfaces of geometric genus $1$ verifying this conjecture are given in \cite{V9}, \cite{16.5}, \cite{19}, \cite{21}. One can also formulate versions of Conjecture \ref{conj} for higher--dimensional varieties; this is studied in \cite{V9}, \cite{17}, \cite{24.4}, \cite{24.5}, \cite{BLP}, \cite{LV}, \cite{Ch}. On a historical note, it is interesting to observe that Voisin's Conjecture \ref{conj} antedates Kimura's conjecture ``all varieties have finite--dimensional motive'' \cite{Kim}. Both conjectures have a similar flavour: Chow groups of a surface $S$ should have controlled behaviour when pulled--back to the self--product $S^m$, for large $m$. The difference between Voisin's conjecture and Kimura's conjecture lies in the index $m$ which is much lower in Voisin's conjecture. In fact (as explained in \cite{BLP}), Voisin's conjecture follows from a combination of Kimura's conjecture with a strong form of the generalized Hodge conjecture. The goal of the present note is to collect some (easy) examples of surfaces with geometric genus larger than $1$ verifying Voisin's conjecture. 
\begin{nonumbering}[=Corollaries \ref{main1}, \ref{cor2}, \ref{cor4} and \ref{last}] The following surfaces verify Conjecture \ref{conj}: \item {(\romannumeral1)} generalized Burniat type surfaces in the family $\mathcal S_{16}$ of \cite{BCF} ($p_g(S)=3$); \item {(\romannumeral2)} the hypersurfaces $S\subset A/\iota$ considered in \cite{LNP}, where $A$ is an abelian threefold and $\iota$ is the $-1$-involution ($p_g(S)=3$); \item {(\romannumeral3)} minimal surfaces $S$ of general type with $p_g(S)=q(S)=3$ and $K^2_S=6$; \item{(\romannumeral4)} the double cover of certain cubic surfaces (among which the Fermat cubic) branched along the Hessian ($p_g(S)=4$); \item {(\romannumeral5)} the Fano surface of lines in a smooth cubic threefold ($p_g(S)=10$); \item{(\romannumeral6)} the quotient $S=F/\iota$, where $F$ is the Fano surface of conics in a Verra threefold and $\iota$ is a certain involution ($p_g(S)=36$); \item{(\romannumeral7)} the surface of bitangents $S$ of a general quartic in $\mathbb{P}^3$ ($p_g(S)=45$); \item{(\romannumeral8)} the singular locus $S$ of a general EPW sextic ($p_g(S)=45$). \end{nonumbering} A by--product of the proof is that these surfaces all have finite--dimensional motive, in the sense of Kimura \cite{Kim} (this appears to be a new observation for cases (\romannumeral6)--(\romannumeral8)). Also, certain instances of the generalized Hodge conjecture are verified: \begin{nonumberingc}[=Corollary \ref{ghc}] Let $S$ be any of the above surfaces, and let $m>p_g(S)$. Then the sub--Hodge structure \[ \wedge^m H^2(S,\mathbb{Q})\ \subset\ H^{2m}(S^m,\mathbb{Q}) \] is supported on a divisor. 
\end{nonumberingc} The surfaces considered in this note have an interesting feature in common (which makes it easy to prove Conjecture \ref{conj} for them): for many of them, intersection product induces a surjection \[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ .\] In the other cases (cases (\romannumeral2), (\romannumeral4), (\romannumeral6)--(\romannumeral8), which have $q(S)=0$), the surface $S$ is dominated by a surface $T$ with the property that the intersection product map \[ A^1_{hom}(T)\otimes A^1_{hom}(T)\ \to\ A^2_{AJ}(T)\ \] surjects onto $\operatorname{im} \bigl( A^2_{AJ}(S)\to A^2_{AJ}(T)\bigr)$. Using this feature, to prove Conjecture \ref{conj} for these surfaces one is reduced to a problem concerning $0$--cycles on abelian varieties. This last problem has recently been solved by Vial \cite{Ch}, using a strong version of the generalized Hodge conjecture for generic abelian varieties. \vskip0.6cm \begin{convention} In this note, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. A {\sl subvariety\/} is a (possibly reducible) reduced subscheme which is equidimensional. {\bf Unless indicated otherwise, all Chow groups will be with rational coefficients}: we will denote by $A_j(X)$ the Chow group of $j$--dimensional cycles on $X$ with $\mathbb{Q}$--coefficients (and by $A_j(X)_{\mathbb{Z}}$ the Chow groups with $\mathbb{Z}$--coefficients); for $X$ smooth of dimension $n$, the notations $A_j(X)$ and $A^{n-j}(X)$ are used interchangeably. The notations $A^j_{hom}(X)$, $A^j_{AJ}(X)$ will be used to indicate the subgroups of homologically trivial, resp. Abel--Jacobi trivial cycles. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$. We will write $H^j(X)$ to indicate singular cohomology $H^j(X,\mathbb{Q})$.
\end{convention} \section{Generalized Burniat type surfaces with $p_g=3$} \begin{definition}[\cite{BCF}]\label{gbt} Let $A=E_1\times E_2\times E_3$ be a product of elliptic curves. A {\em generalized Burniat type surface\/} (or ``GBT surface'') is a quotient $S=Y/G$, where $Y\subset A$ is a smooth hypersurface corresponding to the square of a principal polarization, and $G\cong \mathbb{Z}_2^3$ acts freely. \end{definition} \begin{remark} GBT surfaces are minimal surfaces of general type with $p_g(S)=q(S)$ ranging from $0$ to $3$. There are $16$ irreducible families of GBT surfaces, labelled $\mathcal S_1,\ldots \mathcal S_{16}$ in \cite{BCF}. The families $\mathcal S_1, \mathcal S_2$ have moduli--dimension $4$, the other families are $3$--dimensional. \end{remark} \begin{theorem}[Peters \cite{Chris}]\label{Gbt} Let $S$ be a GBT surface with $p_g(S)=3$ (i.e., $S$ is in the family labelled $\mathcal S_{16}$ in \cite{BCF}), and let $A$ be the abelian threefold as in definition \ref{gbt}. \noindent (\romannumeral1) $S$ has finite--dimensional motive, and there are natural isomorphisms \[ A^2_{(2)}(A)\ \xrightarrow{\cong}\ A^2_{AJ}(S)\ \xrightarrow{\cong}\ A^3_{(2)}(A)\ .\] (Here $A^\ast_{(\ast)}(A)$ refers to Beauville's decomposition \cite{Beau}.) \noindent (\romannumeral2) Intersection product induces a surjection \[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ .\] \end{theorem} \begin{proof} Part (\romannumeral1) is \cite[Theorem 4.2]{Chris}. Part (\romannumeral2) follows from (\romannumeral1), in view of the fact that intersection product induces a surjection \[ A^1_{hom}(A)\otimes A^1_{hom}(A)\ \twoheadrightarrow\ A^2_{(2)}(A) \ \] \cite[Proposition 4]{Beau}. 
\end{proof} Property (\romannumeral2) of Theorem \ref{Gbt} is relevant to Conjecture \ref{conj}: \begin{proposition}\label{handy0} Let $S$ be a smooth projective surface, and assume that intersection product induces a surjection \[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ .\] Then $S$ has finite--dimensional motive. Also, Conjecture \ref{conj} is true for $S$ with $m>{q(S)\choose 2}$. (In particular, in case of equality $p_g(S)= {q(S)\choose 2}$ the full Conjecture \ref{conj} is true for $S$.) \end{proposition} \begin{proof} Let $\alpha\colon S\to A:=\hbox{Alb}(S)$ be the Albanese map. There is a commutative diagram \[ \begin{array}[c]{ccc} A^1_{hom}(S)\otimes A^1_{hom}(S) &\to& A^2_{AJ}(S)\\ &&\\ \ \ \ \ \uparrow{\scriptstyle (\alpha^\ast,\alpha^\ast)}&& \ \ \ \ \uparrow{\scriptstyle \alpha^\ast}\\ &&\\ A^1_{hom}(A)\otimes A^1_{hom}(A) &\to& A^2_{(2)}(A)\\ \end{array} \] (where horizontal maps are induced by intersection product, and $A^\ast_{(\ast)}(A)$ refers to the Beauville decomposition \cite{Beau} of the Chow ring of any abelian variety). As the left vertical map is an isomorphism, the assumption implies that the right vertical map is surjective. In view of \cite[Theorem 3.11]{V3}, this implies $S$ has finite--dimensional motive. (For an alternative proof of \cite[Theorem 3.11]{V3} in terms of birational motives, cf. \cite[Theorem B.7]{LNP}. For a similar result, cf. \cite[Proposition 2.1]{Diaz}.) Next, let us consider Conjecture \ref{conj} for $S$. Thanks to Rojtman's result \cite{Ro}, it suffices to establish Conjecture \ref{conj} for $0$--cycles with $\mathbb{Q}$--coefficients. Because $\alpha^\ast\colon A^2_{(2)}(A)\to A^2_{AJ}(S)$ is surjective, to prove Conjecture \ref{conj} for $S$ it suffices to prove (a version of) Conjecture \ref{conj} for elements $b_1,\ldots,b_m\in A^2_{(2)}(A)$. 
We now reduce to $0$--cycles on $A$: given $b_j\in A^2_{(2)}(A)$, let \[ c_j:= b_j\cdot h^{q-2}\ \ \in\ A^q_{(2)}(A)\ ,\ \ \ j=1,\ldots,m\ ,\] be $0$--cycles, where $q:=q(S)$ is the dimension of $A$ and $h\in A^1(A)$ is a symmetric ample divisor. Let us consider the $\mathfrak S_m$--invariant ample divisor \[ H:= \sum_{j=1}^m (pr_j)^\ast(h)\ \ \ \in\ A^1(A^m)\ .\] From K\"unnemann's hard Lefschetz result \cite{Kun}, we know that the map \[ \cdot H^{m(q-2)}\colon\ \ A^{2m}_{(2m)}(A^m)\ \to\ A^{qm}_{(2m)}(A^m) \] is an isomorphism. On the other hand, \[ \begin{split} c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}&= \bigl(b_{\sigma(1)}\times\cdots\times b_{\sigma(m)} \bigr)\cdot \bigl( h^{q-2}\times\cdots\times h^{q-2}\bigr)\\ &= \bigl(b_{\sigma(1)}\times\cdots\times b_{\sigma(m)} \bigr)\cdot H^{m(q-2)}\ \ \ \hbox{in}\ A^{qm}_{(2m)}(A^m)\\ \end{split}\] (since intersecting $A^2(A)$ with a power $h^r$, $r>q-2$, gives $0$). We are thus reduced to proving that for any $c_1,\ldots,c_m\in A^q_{(2)}(A)$, where $m>{q\choose 2}$, there is equality \[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) \, c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{qm}(A^m)\ .\] At this point, we can invoke the following general result on $0$--cycles on abelian varieties to conclude: \begin{theorem}[Vial \cite{Ch}] Let $A$ be an abelian variety of dimension $g$, and let $c_1,\ldots,c_m\in A^g_{(k)}(A)$.
If $k$ is even and $m>{g\choose k}$, there is vanishing \[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) \, c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{mg}(A^m)_{}\ .\] If $k$ is odd and $m>{g\choose k}$, there is vanishing \[ \sum_{\sigma\in\mathfrak S_m} c_{\sigma(1)}\times\cdots\times c_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{mg}(A^m)_{}\ .\] \end{theorem} \begin{proof} This is \cite[Theorem 4.1]{Ch}, whose proof uses the concept of ``generically defined cycles on abelian varieties'', and a strong form of the generalized Hodge conjecture for powers of generic abelian varieties, due to Hazama \cite[Theorem 2.12]{Ch}. The case $k=g$ was proven earlier (and differently) in \cite[Example 4.40]{Vo}. \end{proof} This ends the proof of Proposition \ref{handy0}. \end{proof} We can now prove that surfaces in the family $\mathcal S_{16}$ verify Voisin's conjecture: \begin{corollary}\label{main1} Let $S$ be a GBT surface with $p_g(S)=3$ (i.e., $S$ is in the family labelled $\mathcal S_{16}$ in \cite{BCF}). Then $S$ verifies Conjecture \ref{conj}: for any $m>3$ and $a_1,\ldots,a_m\in A^2_{AJ}(S)$, there is equality \[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) a_{\sigma(1)}\times\cdots\times a_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{2m}(S^m)\ .\] \end{corollary} \begin{proof} This follows from Proposition \ref{handy0}, in view of Theorem \ref{Gbt} plus the fact that $q(S)=p_g(S)=3$. \end{proof} We recall that the truth of Conjecture \ref{conj} implies a certain instance of the generalized Hodge conjecture: \begin{corollary}\label{ghc} Let $S$ be a surface verifying Conjecture \ref{conj}, and let $m>p_g(S)$. Then the sub--Hodge structure \[ \wedge^m H^2(S,\mathbb{Q})\ \subset\ H^{2m}(S^m,\mathbb{Q}) \] is supported on a divisor. \end{corollary} \begin{proof} This is already observed in \cite{V9}. 
Consider the Chow motive $\wedge^m h^2(S)$ defined by the idempotent \[ \Gamma:= \bigl(\sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) \Gamma_\sigma\bigr)\circ \bigl(\pi^2_S\times\cdots\times \pi^2_S\bigr)\ \ \ \in\ A^{2m}(S^m\times S^m)\ .\] Conjecture \ref{conj} is equivalent to saying that $A_0(\wedge^m h^2(S))=0$. Applying the Bloch--Srinivas argument \cite{BS} to $\Gamma$, one obtains a rational equivalence \[ \Gamma=\gamma\ \ \ \hbox{in}\ A^{2m}(S^m\times S^m)\ ,\] where $\gamma$ is a cycle supported on $S^m\times D$ for some divisor $D\subset S^m$. On the other hand, $\Gamma$ acts on $H^{2m}(S^m,\mathbb{Q})$ as projector on $\wedge^m H^2(S,\mathbb{Q})$. It follows that $ \wedge^m H^2(S,\mathbb{Q})$ is supported on $D$. \end{proof} \section{A criterion} The approach of the last section can be conveniently rephrased as follows: \begin{proposition}\label{handy} Let $S$ be a smooth projective surface. Assume that $S$ has finite--dimensional motive, and that cup product induces an isomorphism \[ C\colon\ \ \wedge^2 H^1(S,\mathcal O_S) \ \xrightarrow{\cong}\ H^2(S,\mathcal O_S)\ .\] Then Conjecture \ref{conj} is true for $S$. \end{proposition} \begin{proof} Surjectivity of $C$, combined with finite--dimensionality of the motive of $S$, ensures that intersection product induces a surjection \[ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \twoheadrightarrow\ A^2_{AJ}(S)\ \] \cite{moib}. The assumption that $C$ is an isomorphism implies that $p_g(S)={{q(S)}\choose{2}}$. The result now follows from Proposition \ref{handy0}. \end{proof} This takes care of two more cases announced in the introduction: \begin{corollary}\label{cor2} Conjecture \ref{conj} is true for the following surfaces: \item {(\romannumeral1)} minimal surfaces of general type with $p_g(S)=q(S)=3$ and $K^2=6$; \item {(\romannumeral2)} the Fano surface of lines in a cubic threefold ($p_g(S)=10$). 
\end{corollary} \begin{proof} In case (\romannumeral1), it is known that $S$ is the symmetric square $S=C^{(2)}$, where $C$ is a genus $3$ curve \cite{CCML} (cf. also \cite[Theorem 9]{BCP}). Thus, the assumptions of Proposition \ref{handy} are clearly satisfied. As for case (\romannumeral2), it is well--known that this surface satisfies the assumptions of Proposition \ref{handy} (finite--dimensionality is proven in \cite{Diaz} and \cite{22}). Alternatively, one could apply Proposition \ref{handy0} directly (the assumption of Proposition \ref{handy0} is satisfied by the Fano surface thanks to \cite{B}; an alternative proof is sketched in \cite[Remark 20.8]{SV}). \end{proof} \section{A variant criterion} Let us now state a variant of Proposition \ref{handy0}: \begin{proposition}\label{handy1} Let $S$ be a smooth projective surface. Assume that $S=S^\prime/<\iota>$, where $\iota$ is an automorphism of a surface $S^\prime$ such that intersection product induces a surjection \[ A^1_{hom}(S^\prime)\otimes A^1_{hom}(S^\prime) \ \twoheadrightarrow\ A^2_{AJ}(S^\prime)^\iota\ .\] Then $S$ has finite--dimensional motive. Also, Conjecture \ref{conj} is true for $S$ with $m>{q(S^\prime)\choose 2}$. (In particular, if $p_g(S)={q(S^\prime)\choose 2}$ the full Conjecture \ref{conj} is true for $S$.) \end{proposition} \begin{proof} This is proven just as Proposition \ref{handy0}.
\end{proof} This takes care of several more cases announced in the introduction: \begin{corollary}\label{cor4} Conjecture \ref{conj} is true for the following surfaces: \item {(\romannumeral1)} surfaces $S=T/<\iota>$, where $T$ is a smooth divisor in the linear system $\vert 2\Theta\vert$ on a principally polarized abelian threefold, and $\iota$ is the $(-1)$--involution ($p_g(S)=3$); \item{(\romannumeral2)} the quotient $S=F/\iota$, where $F$ is the Fano surface of conics in a general Verra threefold and $\iota$ is a certain involution ($p_g(S)=36$); \item{(\romannumeral3)} the surface of bitangents $S$ of a general quartic in $\mathbb{P}^3$ ($p_g(S)=45$); \item{(\romannumeral4)} the surface $S$ that is the singular locus of a general EPW sextic ($p_g(S)=45$). \end{corollary} \begin{proof} \noindent \item{(\romannumeral1)} The surface $S$ verifies the assumptions of Proposition \ref{handy1} with $S^\prime=T$, according to \cite[Subsection 7.2]{LNP}. \noindent \item{(\romannumeral3)} More generally, one may consider the surface $S$ studied by Welters \cite{Wel} and defined as follows. Let $Y$ be a {\em quartic double solid\/}, i.e. $Y\to\mathbb{P}^3$ is a double cover branched along a smooth quartic $Q$. Let $T$ be the surface of conics contained in $Y$, and let $\iota\in\aut(T)$ be the involution induced by the covering involution of $Y$. Then the surface $S:=T/<\iota>$ is a smooth surface of general type with $p_g(S)=45$. (The generic quartic $K3$ surface $Q$ does not contain a line. In this case, as explained in \cite{Fer} (cf. also \cite[Example 3.5]{Beau1} and \cite[Remark 8.5]{Huy1}), the surface $S$ is (isomorphic to) the so--called ``surface of bitangents'', which is the fixed locus of Beauville's anti--symplectic involution \[ Q^{[2]}\ \to\ Q^{[2]} \] first considered in \cite{Beau0}. As noted in \cite[Example 3.5]{Beau1}, this gives another proof of the fact that $p_g(S)=45$.) Voisin has proven \cite[Corollaire 3.2(b)]{V8} (cf. 
also \cite[Remarque 3.4]{V8}) that intersection product induces a surjection \[ A^1_{hom}(T)\otimes A^1_{hom}(T)\ \twoheadrightarrow\ A^2_{AJ}(T)^\iota=A^2_{AJ}(S)\ .\] Since $p_g(S)=45$ and $q(T)=10$ \cite{Wel}, the assumptions of Proposition \ref{handy1} are met. \noindent \item {(\romannumeral2)} A {\em Verra threefold\/} $Y$ is a divisor of bidegree $(2,2)$ in $\mathbb{P}^2\times\mathbb{P}^2$ (these varieties were introduced in \cite{Ver}). Let $F$ be the Fano surface of conics of bidegree $(1,1)$ contained in $Y$. As observed in \cite[Section 5]{IKKR}, $F$ admits an involution $\iota$ such that $(F,\iota)$ enters into the set--up of Voisin's work \cite{V8}. Thus, \cite[Corollaire 3.2(b)]{V8} implies that intersection product induces a surjection \[ A^1_{hom}(F)\otimes A^1_{hom}(F)\ \twoheadrightarrow\ A^2_{AJ}(F)^\iota=A^2_{AJ}(S)\ .\] Since $q(F)=9$ and $p_g(S)=36$ \cite[Proposition 5.1]{IKKR}, the assumptions of Proposition \ref{handy1} are again met. \noindent \item {(\romannumeral4)} Let $Y$ be a transverse intersection of the Grassmannian $Gr(2,5)\subset\mathbb{P}^9$ with a codimension $2$ linear subspace and a quadric (i.e., $Y$ is an {\em ordinary Gushel--Mukai threefold\/}, in the language of \cite{DK}, \cite{DK1}). For generic $Y$, the surface $F$ of conics contained in $Y$ is smooth and irreducible. There exists a birational involution $\iota\in\hbox{Bir}(F)$ such that intersection product induces a surjection \[ A^1_{hom}(F)\otimes A^1_{hom}(F)\ \twoheadrightarrow\ A^2_{AJ}(F)^\iota\ \] \cite[Corollaire 3.2(b)]{V8}. The surface $F$ and the birational involution $\iota$ are also studied in \cite{Lo} and \cite{DIM}. There exists a (geometrically meaningful) birational morphism $F\to F_m$, where $F_m$ is smooth and such that $\iota$ extends to a morphism $\iota_m$ on $F_m$ \cite{Lo}, \cite[Section 6]{DIM}, \cite[Section 5.1]{IM}.
For $Y$ generic, the quotient $S:=F_m/<\iota_m>$ is smooth, and it is isomorphic to the singular locus of the EPW sextic associated to $Y$. (This is contained in \cite{Lo}, \cite{DIM}. The double cover $F_m\to S$ is also described in \cite[Theorem 5.2(2)]{DK3}.) Since $A^1_{hom}(), A^2_{AJ}()$ are birational invariants among smooth varieties, Voisin's result implies there is also a surjection \[ A^1_{hom}(F_m)\otimes A^1_{hom}(F_m)\ \twoheadrightarrow\ A^2_{AJ}(F_m)^{\iota_m}=A^2_{AJ}(S)\ .\] It is known that $q(F_m)=10$ \cite{Lo} and $p_g(S)=45$ \cite{OG0} (this can also be deduced from \cite{Beau1}), and so Proposition \ref{handy1} applies. \end{proof} \begin{remark} In cases (\romannumeral2), (\romannumeral3) and (\romannumeral4) of Corollary \ref{cor4}, the surface $S$ is the fixed locus of an anti--symplectic involution of a hyperk\"ahler fourfold. For the surface of bitangents, this is Beauville's involution on the Hilbert square $Q^{[2]}$. For the singular locus $S$ of a general EPW sextic, this is (isomorphic to) the fixed locus of the anti--symplectic involution of the associated double EPW sextic. For the surface $S$ of (\romannumeral2), this is the anti--symplectic involution of the ``double EPW quartic'' (double EPW quartics form a $19$--dimensional family of hyperk\"ahler fourfolds, introduced in \cite{IKKR}). Is this merely a coincidence, or is there something fundamental going on? Do other two--dimensional fixed loci of anti--symplectic involutions of hyperk\"ahler fourfolds also enter into the set--up of Proposition \ref{handy1}? \end{remark} \begin{remark} Inspired by the famous results concerning the Fano surface of the cubic threefold, Voisin \cite{V8} systematically studies the Fano surface $F$ of conics contained in Fano threefolds $Y$.
Under certain conditions, she is able to prove \cite[Corollaire 3.2]{V8} that there is a birational involution $\iota$ on $F$, with the property that \[ A^1_{hom}(F)\otimes A^1_{hom}(F)\ \to\ A^2_{AJ}(F)^{<\iota>} \] is surjective (and so one could hope to apply Proposition \ref{handy1} to find more examples of surfaces verifying Conjecture \ref{conj}). Examples given in \cite{V8} (other than those mentioned in Corollary \ref{cor4} above) include: \noindent \item{(1)} Fano threefolds $Y$ of index $1$, Picard number $1$ and genus $g\in[7,10]\cup\{12\}$ \cite[Section 2.4]{V8}; \noindent \item{(2)} a general complete intersection of two quadrics in $\mathbb{P}^5$ \cite[Section 2.7]{V8}; \noindent \item{(3)} the intersection of the Grassmannian $Gr(2,5)\subset\mathbb{P}^9$ with a general codimension $3$ linear subspace \cite[Section 2.7]{V8}. (In all these cases, $\iota$ is actually the identity.) In case (1), the surface of conics $F$ is not very interesting. (For $g=12$, $F\cong\mathbb{P}^2$ \cite[Proposition B.4.1]{KPS}; for $g=10$, $F$ is an abelian surface \cite[Proposition B.5.5]{KPS}; for $g=9$, $F$ is a $\mathbb{P}^1$--bundle over a curve \cite[Proposition 2.3.6]{KPS}; for $g=8$, $F$ is isomorphic to the Fano surface of a cubic threefold \cite[Proposition B.6.1]{KPS}; for $g=7$, $F$ is the symmetric product of a curve of genus $7$ \cite{Kuz05}. These results are also discussed in \cite[Section 3.1]{IM0}.) The other two cases also turn out to reduce to known cases: Indeed, for case (2) the Fano surface of lines is isomorphic to the Jacobian of a genus $2$ curve \cite[Theorem 2]{DR}. For case (3), the Fano threefold $Y$ is birational to a cubic threefold $Y^\prime$, and the Fano surface of conics on $Y$ is birational to the Fano surface of lines on $Y^\prime$ \cite[Theorem B and Section 6]{Puts}.
Since Conjecture \ref{conj} is obviously a birationally invariant statement, Conjecture \ref{conj} for the Fano surface of case (3) thus reduces to Corollary \ref{cor2}(\romannumeral2). \end{remark} \begin{remark} There are interesting relations between the surfaces of Corollary \ref{cor4} and other Fano surfaces: In case (\romannumeral2), the general Verra threefold $Y$ is birational to a one--nodal ordinary Gushel--Mukai threefold $\bar{X}$, and there is an induced birational map between the Fano surface of lines $F(Y)$ and the Fano surface of conics $F(\bar{X})$ \cite[Section 5.4 and Proposition 6.6]{DIM2}. In case (\romannumeral3), the general quartic double solid $Y$ is known to be birational to a one--nodal ordinary degree $10$ Fano threefold $\bar{X}$, and there is an induced birational map between the Fano surface of lines $F(Y)$ and the Fano surface of conics $F(\bar{X})$ \cite[Proposition 5.2]{DIM}. \end{remark} \section{Double covers of cubic surfaces} \begin{theorem}[Ikeda \cite{Ike}]\label{ike} Let $Y\subset\mathbb{P}^3$ be a smooth cubic surface, and let $\bar{S}\to Y$ be the double cover of $Y$ branched along its Hessian. Let $S\to\bar{S}$ be a minimal resolution of singularities. The surface $S$ is a minimal surface of general type with $p_g(S)=4$ and $K^2=6$. \end{theorem} \begin{remark} The intersection of $Y$ with its Hessian is smooth (and so $S=\bar{S}$) precisely when $Y$ has no Eckardt points. In this case, the Picard rank of $S$ is $28$ \cite[Theorem 6.1]{Ike}. At the other extreme, if $Y$ is the Fermat cubic (which is the only cubic surface attaining the maximal number of Eckardt points) the Picard rank of $S$ is $44$ \cite[Theorem 6.6]{Ike}, and so in this case $S$ is a $\rho$--maximal surface (in the sense of \cite{Beau3}). For more on Eckardt points of cubic surfaces, cf. \cite[Chapter 2 Section 3.6]{Huy}. 
\end{remark} Let us now prove Voisin's conjecture for some of Ikeda's double covers: \begin{corollary}\label{last} Let $Y\subset\mathbb{P}^3$ be a smooth cubic surface, and let $S$ be a double cover as in theorem \ref{ike}. Assume that $Y$ is in the pencil \[ x_0^3 + x_1^3 +x_2^3 -3\lambda x_0 x_1 x_2 + x_3^3 =0 \ .\] Then $S$ verifies Conjecture \ref{conj}: for any $m>4$ and $a_1,\ldots,a_m\in A^2_{hom}(S)_{\mathbb{Z}}$, there is equality \[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma) a_{\sigma(1)}\times\cdots\times a_{\sigma(m)}=0\ \ \ \hbox{in}\ A^{2m}(S^m)_{\mathbb{Z}}\ .\] \end{corollary} \begin{proof} A first part of the argument works for arbitrary smooth cubic surfaces $Y$; only in the last step will we use that $Y$ is of a specific type. Let us assume $Y\subset\mathbb{P}^3$ is any smooth cubic, defined by a cubic polynomial $f(x_0,\ldots,x_3)$. Let $Z\subset\mathbb{P}^4$ be the smooth cubic threefold defined by \[ f(x_0,\ldots,x_3)+x_4^3=0\ ,\] so $Z$ has the structure of a triple cover \[ \rho\colon\ \ Z\ \to\ \mathbb{P}^3 \] branched along $Y$. Let $F(Z)$ denote the Fano surface of lines contained in $Z$. Ikeda \cite{Ike} shows that there is a dominant rational map of degree $3$ \[ f\colon\ \ F(Z)\ \dashrightarrow\ S \ ,\] and an isomorphism \[ f^\ast\colon\ \ H^2_{tr}(S,\mathbb{Q})\ \xrightarrow{\cong}\ H^2_{tr}(F(Z),\mathbb{Q})^{Gal(\rho)}\ .\] This implies that there is an isomorphism of homological motives \begin{equation}\label{homiso} {}^t \Gamma_f\colon\ \ \ t(S)\ \xrightarrow{\cong}\ t(F(Z))^{Gal(\rho)}:=(F(Z),{1\over 3}\sum_{g\in Gal(\rho)} \Gamma_g\circ \pi^2_{tr},0)\ \ \ \hbox{in}\ \mathcal M_{\rm hom}\ .\end{equation} (Here for any surface $T$, the motive $t(T):=(T,\pi^2_{tr},0)\in\mathcal M_{\rm rat}$ denotes the {\em transcendental part of the motive\/} as in \cite{KMP}.) According to \cite{Diaz} and \cite{22}, the Fano surface $F(Z)$ has finite--dimensional motive (in the sense of Kimura \cite{Kim}, \cite{An}, \cite{J4}). 
The surface $S$, being rationally dominated by $F(Z)$, also has finite--dimensional motive. Thus, one may upgrade (\ref{homiso}) to an isomorphism of Chow motives \[ {}^t \Gamma_f\colon\ \ \ t(S)\ \xrightarrow{\cong}\ t(F(Z))^{Gal(\rho)}\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] In particular, this implies that there is an isomorphism of Chow groups \[ f^\ast\colon A^2_{hom}(S)=A^2_{AJ}(S)\ \xrightarrow{\cong}\ A^2_{AJ}(F(Z))^{Gal(\rho)}\ .\] Let $A$ be the 5--dimensional Albanese variety of $F(Z)$ (which is isomorphic to the intermediate Jacobian of $Z$). As observed in \cite{Diaz}, the inclusion $F(Z)\hookrightarrow A$ induces an isomorphism \[ A^2_{(2)}(A)\cong A^2_{AJ}(F(Z))\ .\] In particular, there is a restriction--induced isomorphism \[ A^2_{(2)}(A)^{Gal(\rho)}\cong A^2_{AJ}(F(Z))^{Gal(\rho)}\ ,\] where we simply use the same letter $\rho$ for the action induced by the triple cover $\rho\colon Z\to\mathbb{P}^3$. Consequently, it suffices to prove a version of Conjecture \ref{conj} for cycles in $ A^2_{(2)}(A)^{Gal(\rho)}$. Also, using K\"unnemann's hard Lefschetz theorem (for some $Gal(\rho)$--invariant ample divisor), one reduces to a statement for cycles in $ A^5_{(2)}(A)^{Gal(\rho)}$ (i.e., $0$--cycles). This last statement can be proven, subject to some restrictions on the cubic surface $Y$, thanks to the following result: \begin{proposition}[Vial \cite{Ch}]\label{factors} Let $B$ be an abelian variety of dimension $g$, and assume $B$ is isogenous to $ E_1^{r_1}\times E_2^{r_2}\times E_3^{r_3}$, where the $E_j$ are elliptic curves. Let $\Gamma\in A^g(B\times B)$ be an idempotent which lies in the sub--algebra generated by symmetric divisors. Assume that $\Gamma^\ast H^{j,0}(B)=0$ for all $j$. Then also \[ \Gamma_\ast A^g(B)=0\ .\] \end{proposition} \begin{proof} This is a special case of \cite[Theorem 3.15]{Ch}, whose hypotheses are more general. \end{proof} It remains to verify that Proposition \ref{factors} applies to our set--up. 
If the cubic threefold $Z=Z_\lambda$ is in the pencil \[ x_0^3 + x_1^3 +x_2^3 -3\lambda x_0 x_1 x_2 + x_3^3 +x_4^3=0 \ ,\] its intermediate Jacobian $A$ is isogenous to $E_0^3\times E_\lambda^2$, where $E_\lambda$ is the elliptic curve \[ x_0^3 + x_1^3 +x_2^3 -3\lambda x_0 x_1 x_2=0\ \] \cite{Rou}. We can apply Proposition \ref{factors} with $B:=A^m$ and \[ \Gamma:= \bigl(\sum_{g\in Gal(\rho)} \Gamma_g\times \cdots \times\Gamma_g\bigr) \circ \bigl(\sum_{\sigma\in \mathfrak S_m} \hbox{sgn}(\sigma)\,\Gamma_\sigma\bigr) \circ \bigl( \pi^8_A\times \cdots \times \pi^8_A\bigr) \ \ \ \in A^{5m}(A^m\times A^m)\ .\] Here $\pi^8_A$ is part of the Chow--K\"unneth decomposition of \cite{DM}, with the property that \[ A^5_{(2)}(A)=(\pi^8_A)_\ast A^5(A)\ .\] Since $g\in Gal(\rho)$ and $\sigma\in \mathfrak S_m$ are homomorphisms of abelian varieties, and the $\pi^8_A$ are symmetrically distinguished (in the sense of O'Sullivan \cite{OS}) and generically defined (in the sense of Vial \cite{Ch}), the correspondence $\Gamma$ is in the sub--algebra generated by symmetric divisors \cite[Proposition 3.11]{Ch}. In particular, the correspondence $\Gamma$ is symmetrically distinguished, and so (since it is idempotent in cohomology) it is idempotent. The correspondence ${}^t \Gamma$ acts on cohomology as a projector onto \[ \wedge^m \bigl( H^2(A)^{Gal(\rho)}\bigr)\ .\] Since \[ \dim \hbox{Gr}^0_F H^2(A)^{Gal(\rho)}=p_g(S)=4\ ,\] we have that $\Gamma^\ast=({}^t \Gamma)_\ast$ is zero on $H^{j,0}(B)$ as soon as $m>4$. Applying Proposition \ref{factors}, we can prove Conjecture \ref{conj} for $A^5_{(2)}(A)^{Gal(\rho)}$ (and hence, as explained above, also for $A^2_{AJ}(S)$): let $b_1,\ldots,b_m\in A^5_{(2)}(A)^{Gal(\rho)}$, where $m>4$.
Then \[ \sum_{\sigma\in\mathfrak S_m} \hbox{sgn}(\sigma)\, b_{\sigma(1)}\times b_{\sigma(2)}\times\cdots\times b_{\sigma(m)}=\Gamma_\ast (b_1\times b_2\times\cdots\times b_m)=0\ \ \ \hbox{in}\ A^{5m}(A^m)\ .\] \end{proof} \begin{remark} The argument of Corollary \ref{last} also applies to double covers of some other cubic surfaces. For instance, let $Y$ be a cubic surface, let $S$ be the double cover as in Theorem \ref{ike}, and let $J(Z)$ be the intermediate Jacobian of the associated cubic threefold. If $J(Z)$ is $\rho$--maximal, then $S$ verifies Conjecture \ref{conj}. Indeed, $\rho$--maximality implies that $J(Z)$ is isogenous to $E^5$ for some elliptic curve $E$ \cite[Proposition 3]{Beau3}, and so Proposition \ref{factors} applies. \end{remark} \vskip1cm \begin{nonumberingt} Thanks to the wonderful staff of the Executive Lounge at the Schilik Math Research Institute. \end{nonumberingt} \vskip1cm \end{document}
\begin{document} \title{On the performance of algorithms for the minimization of $\ell_1$-penalized functionals} \author{Ignace Loris\\Mathematics Department, Vrije Universiteit Brussel\\ Pleinlaan 2, 1050 Brussel, Belgium} \date{12/12/2008} \maketitle \begin{abstract} The problem of assessing the performance of algorithms used for the minimization of an $\ell_1$-penalized least-squares functional, for a range of penalty parameters, is investigated. A criterion that uses the idea of `approximation isochrones' is introduced. Five different iterative minimization algorithms are tested and compared, as well as two warm-start strategies. Both well-conditioned and ill-conditioned problems are used in the comparison, and the contrast between these two categories is highlighted. \end{abstract} \section{Introduction} In recent years, applications of sparse methods in signal analysis and inverse problems have received a great deal of attention. The term \emph{compressed sensing} is used to describe the ability to reconstruct a \emph{sparse} signal or object from far fewer linear measurements than would be needed traditionally \cite{Donoho2006}. A promising approach, applicable to the regularization of linear inverse problems, consists of using a sparsity-promoting penalization. A particularly popular penalization of this type is the $\ell_1$ norm of the object in the basis or frame in which the object is assumed to be sparse. In \cite{Daubechies.Defrise.ea2004} it was shown that adding an $\ell_1$ norm penalization to a least squares functional (see expression (\ref{l1functional}) below) regularizes ill-posed linear inverse problems. The minimizer of this functional has many components exactly equal to zero. Furthermore, the iterative soft-thresholding algorithm (IST) was shown to converge in norm to the minimizer of this functional (earlier work on $\ell_1$ penalties is in \cite{Santosa.Symes1986,Tibshirani1996}).
It has also been noted that the convergence of the IST algorithm can be rather slow in cases of practical importance, and research interest in speeding up the convergence or developing alternative algorithms is growing \cite{DaFoL2008,Kim.Koh.ea2007,Figueiredo.Nowak.ea2008,Hale.Yin.ea2007,Beck.Teboulle2008}. There are already several different algorithms for the minimization of an $\ell_1$ penalized functional. Therefore, it is necessary to discuss robust ways of evaluating and comparing the performance of these competing methods. The aim of this manuscript is to propose a procedure that assesses the strengths and weaknesses of these minimization algorithms for a range of penalty parameters. Often, authors compare algorithms only for a single value of the penalty parameter and may thereby fail to deliver a complete picture of the convergence speed of the algorithms. For the reader, it is difficult to know if the parameter has been tuned to favor one or the other method. Another issue plaguing the comparison of different minimization algorithms for problem (\ref{minimizer}) below is the confusion with sparse recovery. Finding a sparse solution of a linear equation and finding the minimizer of (\ref{minimizer}) are closely related but (importantly) \emph{different} problems: the latter will be sparse (the higher the penalty parameter, the sparser), but the former does not necessarily minimize the $\ell_1$ penalized functional for any value of the penalty parameter. Contrary to many discussions in the literature, we will look at the minimization problem (\ref{minimizer}) independently of its connection to sparse recovery and compressed sensing. The central theme of this note is the introduction of the concept of \emph{approximation isochrone}, and the illustration of its use in the comparison of different minimization algorithms. It proves to be an effective tool in revealing when algorithms do well and under which circumstances they fail.
As an illustration, we compare five different iterative algorithms. For this we use a strongly ill-conditioned linear inverse problem that finds its origin in a problem of seismic tomography \cite{Loris.Nolet.ea2007}, a Gaussian random matrix (i.e. the matrix elements are taken from a normal distribution) as well as two additional synthetic matrices. In the existing literature, most tests are done using only a matrix of random numbers, but we believe that it is very important to also consider other matrices. Actual inverse problems may depend on an operator with a less well-behaved spectrum or with other properties that could make the minimization more difficult. Tests with such operators are usually not reported. Here we compare four operators. Among other things, we find that the strongly singular matrices are more demanding on the algorithms. We limit ourselves to the case of real signals, and do not consider complex variables. In this manuscript, the usual $2$-norm of a vector $x$ is denoted by $\|x\|$ and the $1$-norm is denoted by $\|x\|_1$. The largest singular value of a matrix $K$ is denoted by $\|K\|$. \section{Problem statement} \label{problemsection} After the introduction of a suitable basis or frame for the object and the image space, the minimization problem under study can be stated in its most basic form, without referring to any specific physical origin, as the minimization of the convex functional \begin{equation} F_\lambda(x)=\|Kx-y\|^2+2\lambda \|x\|_1\label{l1functional} \end{equation} in a real vector space ($x\in \mathbb{R}^p$, $\lambda\geq 0$), for a fixed linear operator $K\in \mathbb{R}^{m\times p}$ and data $y\in \mathbb{R}^m$. In the present analysis we will assume the linear operator $K$ and the data $y$ are such that the minimizer of (\ref{l1functional}) is unique. This is a reasonable assumption as one imposes penalty terms, typically, to make the solution to an inverse problem unique.
We set \begin{equation} \bar x(\lambda)=\arg\min_x\|Kx-y\|^2+2\lambda \|x\|_1.\label{minimizer} \end{equation} The penalty parameter $\lambda$ is positive; in applications, it has to be chosen depending on the context. Problem (\ref{minimizer}) is equivalent to the constrained minimization problem: \begin{equation} \tilde x(\rho)=\arg\min_{\|x\|_1\leq \rho}\|Kx-y\|^2 \qquad \label{constrmin} \end{equation} with an implicit relationship between $\rho$ and $\lambda$: $\rho=\|\bar x(\lambda)\|_1$. It follows from the equations (\ref{kkt}) below that the inverse relationship is: $\lambda=\max_i |(K^T(y-K \tilde x(\rho)))_i|$. Under these conditions one has that: $\bar x(\lambda)=\tilde x(\rho)$. One also has that $\bar x(\lambda)=0$ for all $\lambda\geq \lambda_\mathrm{max}\equiv\max_i|(K^Ty)_i|$. \subsection{Direct method} An important thing to note is that the minimizer $\bar x$ (and thus also $\tilde x$) can in principle be found in a finite number of steps using the homotopy/LARS method \cite{Osborne.Presnell.ea2000,Efron.Hastie.ea2004}. This direct method starts from the variational equations which describe the minimizer $\bar x$: \begin{equation} \begin{array}{lclcl} (K^T(y-K\bar x))_i&=&\lambda\; \mathrm{sgn}(\bar x_i) & \mathrm{if} & \bar x_i\neq 0 \\ |(K^T(y-K\bar x))_i|&\leq&\lambda & \mathrm{if} & \bar x_i= 0 \end{array}\label{kkt} \end{equation} Because of the piece-wise linearity of the equations (\ref{kkt}), it is possible to construct $\bar x(\lambda_\mathrm{stop})$ by letting $\lambda$ in (\ref{kkt}) descend from $\lambda_\mathrm{max}$ to $\lambda_\mathrm{stop}$, and by solving a linear system at every value of $\lambda$ where a component in $\bar x(\lambda)$ goes from zero to nonzero or, exceptionally, from nonzero to zero. The first such point occurs at $\lambda=\lambda_\mathrm{max}$. 
The linear systems that have to be solved at each of these breakpoints are `small', starting from $1\times 1$ and ending with $s\times s$, where $s$ is the number of nonzero components in $\bar x(\lambda_\mathrm{stop})$. Such a method thus constructs $\bar x(\lambda)$ exactly for all $\lambda_\mathrm{max}\geq\lambda\geq \lambda_\mathrm{stop}$, or equivalently all $\bar x$ with $0\leq\|\bar x(\lambda)\|_1\leq \|\bar x(\lambda_\mathrm{stop})\|_1$. It also follows that $\bar x(\lambda)$ is a piecewise linear function of $\lambda$. Implementations of this direct algorithm exist \cite{IMM2005-03897,Donoho.Stodden.ea2007,Loris2007}, and exhibit a time complexity that is approximately cubic in $s$ (the number of nonzero components of $\bar x(\lambda)$). If one is interested in sparse recovery, this is not necessarily a problem as time complexity is linear for small $s$. In fact, the algorithm can be quite fast, certainly if one takes into account that the exact minimizer (up to computer round-off) is obtained. A plot of the time complexity as a function of the number of nonzero components in $\bar x(\lambda)$ is given in figure \ref{complexityfig} (left hand side) for an example operator $K^{(1)}$ and data $y$ (see start of section \ref{comparisonsection} for a description of $K^{(1)}$). The plot shows that the direct algorithm is usable in practice for about $s\leq 10^3$. This graph also illustrates the fact that the size of the support of $\bar x(\lambda)$ does not necessarily grow monotonically with decreasing penalty parameter (this depends on the operator and data).
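The optimality conditions (\ref{kkt}) also provide a direct numerical test of whether a candidate vector minimizes (\ref{l1functional}), independently of how it was computed. The following is a minimal sketch in Python/NumPy; the function names are ours, not taken from the cited implementations:

```python
import numpy as np

def lambda_max(K, y):
    """Smallest penalty for which the minimizer is x = 0 (see text)."""
    return np.max(np.abs(K.T @ y))

def kkt_violation(K, y, x, lam, tol=1e-12):
    """Maximum violation of the variational equations (kkt) at x.

    Returns 0 (up to round-off) if and only if
      (K^T(y - Kx))_i = lam * sgn(x_i)   whenever x_i != 0,
      |(K^T(y - Kx))_i| <= lam           whenever x_i == 0.
    """
    g = K.T @ (y - K @ x)
    on = np.abs(x) > tol  # support of x
    v_on = np.max(np.abs(g[on] - lam * np.sign(x[on]))) if on.any() else 0.0
    v_off = max(np.max(np.abs(g[~on])) - lam, 0.0) if (~on).any() else 0.0
    return max(v_on, v_off)
```

For instance, `kkt_violation(K, y, np.zeros(p), lam)` vanishes exactly when $\lambda\geq\lambda_\mathrm{max}$, in agreement with the statement $\bar x(\lambda)=0$ for $\lambda\geq\lambda_\mathrm{max}$ above.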
\begin{figure} \caption{Left: Time complexity of the direct algorithm mentioned in section \ref{problemsection}. Right: errors made by the direct method.}\label{complexityfig} \end{figure} It follows immediately from equation (\ref{kkt}) that the minimizer $\bar x(\lambda)$ satisfies the fixed point equation \begin{equation} \bar x=S_\lambda[\bar x+K^T(y-K\bar x)]\label{fixedpoint} \end{equation} where $S_\lambda$ is the well-known soft-thresholding operator applied component-wise: \begin{equation} S_\lambda(u)=\left\{\begin{array}{llccr} u-\lambda & \qquad\qquad & u&\geq&\lambda\\ 0 & & |u|&\leq&\lambda\\ u+\lambda & & u&\leq& -\lambda \end{array}\right. \end{equation} In practice we have to take into account that computers work with inexact floating-point arithmetic. The definition of an exact solution, in this case, is that $\bar x(\lambda)$ satisfies the fixed-point equation (\ref{fixedpoint}) up to computer precision: \begin{equation} \|\bar x-S_\lambda[\bar x+K^T(y-K\bar x)]\|/\|\bar x\| \approx10^{-16}.\label{inexactfixedpoint} \end{equation} The direct algorithm mentioned before can be implemented using floating point arithmetic (\cite{IMM2005-03897,Donoho.Stodden.ea2007} do this). The implementation \cite{Loris2007} can handle both exact arithmetic (with integer, rational numbers) and floating point arithmetic. An example of the errors made by the direct method is illustrated in figure \ref{complexityfig} (right). We will still use the term `exact solution' as long as condition (\ref{inexactfixedpoint}) is satisfied.
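The soft-thresholding operator and the stopping test (\ref{inexactfixedpoint}) translate directly into a few lines of code; a minimal sketch (our naming, not from the cited implementations):

```python
import numpy as np

def soft_threshold(u, lam):
    """Component-wise soft-thresholding operator S_lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def fixed_point_residual(K, y, x, lam):
    """Relative residual ||x - S_lam[x + K^T(y - Kx)]|| / ||x||
    of the fixed-point equation; x counts as an `exact solution'
    when this is of the order of machine precision (~1e-16)."""
    r = x - soft_threshold(x + K.T @ (y - K @ x), lam)
    return np.linalg.norm(r) / np.linalg.norm(x)
```

Note that `soft_threshold` implements all three branches of the case distinction at once, since $\max(|u|-\lambda,0)$ vanishes exactly on the middle branch.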
\subsection{Iterative algorithms} There exist several iterative methods that can be used for the minimization problems (\ref{minimizer}) or (\ref{constrmin}): \begin{enumerate} \item The iterative soft-thresholding (IST) algorithm already mentioned in the introduction can be written as: \begin{equation} x^{(n+1)}=S_\lambda[x^{(n)}+K^T(y-Kx^{(n)})],\qquad x^{(0)}=0.\label{tlw} \end{equation} Under the condition $\|K\|<1$ the limit of this sequence coincides with the minimizer $\bar x(\lambda)$ and $F_\lambda(x^{(n)})$ decreases monotonically as a function of $n$ \cite{Daubechies.Defrise.ea2004}. For $\|K\|<\sqrt{2}$, there is still (weak) convergence \cite[Corollary 5.9]{Combettes.Wajs2005}, but the functional (\ref{l1functional}) is no longer guaranteed to decrease at every step. This algorithm is probably the easiest to implement.\label{tlwalg} \item a projected steepest descent method \cite{DaFoL2008} (and related \cite[expression (59)]{Combettes1997}): \begin{equation} x^{(n+1)}=P_\rho[x^{(n)}+\beta^{(n)}\,r^{(n)}],\qquad x^{(0)}=0, \end{equation} with $r^{(n)}=K^T(y-Kx^{(n)})$ and $\beta^{(n)}=\|r^{(n)}\|^2/\|Kr^{(n)}\|^2$. $P_\rho(\cdot)$ denotes the orthogonal projection onto an $\ell_1$ ball of radius $\rho$, and can be implemented efficiently by soft-thresholding with an appropriate variable threshold.\label{psdalg} \item the `GPSR-algorithm' (gradient projection for sparse reconstruction), another iterative projection method, formulated in the auxiliary variables $u,v\geq 0$ with $x=u-v$ \cite{Figueiredo.Nowak.ea2008}.\label{gpsralg} \item the `$\ell_1$-ls-algorithm', an interior point method using preconditioned conjugate gradient substeps (this method solves a linear system in each outer iteration step) \cite{Kim.Koh.ea2007}.\label{l1lsalg} \item `FISTA' (fast iterative soft-thresholding algorithm) is a variation of the iterative soft-thresholding algorithm. Define the (non-linear) operator $T$ by $T(x)=S_\lambda[x+K^T(y-Kx)]$.
Then the FISTA algorithm is: \begin{equation} x^{(n+1)}=T\left(x^{(n)}+\frac{t^{(n)}-1}{t^{(n+1)}} \left(x^{(n)}-x^{(n-1)}\right)\right)\qquad x^{(1)}=0, \label{fista} \end{equation} where $t^{(n+1)}=\frac{1+\sqrt{1+4(t^{(n)})^2}}{2}$ and $t^{(1)}=1$. It has virtually the same complexity as algorithm \ref{tlwalg}, but can be shown to have better convergence properties \cite{Beck.Teboulle2008}.\label{fistaalg} \end{enumerate} \subsection{Warm-start strategies} There also exist so-called \emph{warm-start} strategies. These methods start from $\bar x(\lambda_0=\lambda_\mathrm{max})=0$ and try to approximate $\bar x(\lambda_k)$ for $k:0,\ldots,N$ by starting from an approximation of $\bar x(\lambda_{k-1})$ already obtained in the previous step instead of always restarting from $0$. They can be used for finding an approximation of a whole range of minimizers $\bar x(\lambda_k)$ for a set of penalty parameters $\lambda_\mathrm{max}=\lambda_0>\lambda_1>\lambda_2>\ldots>\lambda_N=\lambda_\mathrm{stop}$ or, equivalently, for a set of $\ell_1$-radii $0=\rho_0<\rho_1<\rho_2<\ldots<\rho_N=\rho_\mathrm{stop}$. Two examples of such methods are: \begin{enumerate} \item[(A)] `fixed-point continuation' method \cite{Hale.Yin.ea2007}: \begin{equation} x^{(n+1)}=S_{\lambda_{n+1}}[x^{(n)}+K^T(y-Kx^{(n)})] \end{equation} with $\lambda_0=\lambda_\mathrm{max}$ and $\lambda_{n+1}=\alpha\lambda_{n}$ and $\alpha<1$ such that $\lambda_N=\lambda_\mathrm{stop}$ (after a pre-determined number $N$ of steps). In other words, the threshold is decreased geometrically instead of being fixed as in the IST method and $x^{(n)}$ is interpreted as an approximation of $\bar x(\lambda_n)$. \item[(B)] adaptive steepest descent \cite{DaFoL2008}: \begin{equation} x^{(n+1)}=P_{\rho_{n+1}}[x^{(n)}+\beta^{(n)}\,K^T(y-Kx^{(n)})],\qquad x^{(0)}=0, \end{equation} with $r^{(n)}=K^T(y-Kx^{(n)})$, $\beta^{(n)}=\|r^{(n)}\|^2/\|Kr^{(n)}\|^2$, and $\rho_{n+1}=(n+1) \rho_\mathrm{stop}/N$. 
Here the radius $\rho_n$ increases arithmetically instead of being fixed as in algorithm \ref{psdalg} and $x^{(n)}$ is interpreted as an approximation of $\tilde x(\rho_n)$. \end{enumerate} Such algorithms have the advantage of providing an approximation of the Pareto curve (a plot of $\|K\bar x(\lambda)-y\|^2$ vs. $\|\bar x(\lambda)\|_1$, also known as trade-off curve) as they go, instead of just calculating the minimizer corresponding to a fixed penalty parameter. This is useful for determining a suitable value of the penalty parameter $\lambda$ in applications. \section{Approximation isochrones} \label{isosection} In this section, we discuss the problem of assessing the speed of convergence of a given minimization algorithm. The minimization problem (\ref{l1functional}) is often used in compressed sensing. In this context, an iterative minimization algorithm may be tested as follows: one chooses a (random) sparse input vector $x^\mathrm{input}$, calculates the image under the linear operator and adds noise $y=Kx^\mathrm{input}+n$. One then uses the algorithm in question to try to reconstruct the input vector $x^\mathrm{input}$ as a minimizer of (\ref{minimizer}), choosing $\lambda$ in such a way that the resulting $x^{(N)}$ best corresponds to the input vector $x^\mathrm{input}$. This procedure is not useful in our case because it does not compare the iterates $x^{(n)}$ ($n:0\ldots N$) with the actual minimizer $\bar x$ of functional (\ref{minimizer}). Even though it is sparse, the input vector $x^\mathrm{input}$ most likely does not satisfy equations of type (\ref{kkt}), and hence does not constitute an actual minimizer of (\ref{l1functional}). Such an evaluation is done e.g. in \cite[section IV.A]{Figueiredo.Nowak.ea2008}.\\ In this note we are interested in describing how well an algorithm does in finding the true minimizer of (\ref{l1functional}), not in how suitable an algorithm may be for compressed sensing applications.
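As a concrete point of reference for the experiments that follow, the two thresholding-based iterations (\ref{tlw}) and (\ref{fista}) transcribe to a few lines of Python. This is an illustrative sketch assuming $\|K\|<1$, not the implementations that are benchmarked in this note:

```python
import numpy as np

def soft(u, lam):
    """Component-wise soft-thresholding S_lam."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def ist(K, y, lam, n_iter=2000):
    """Iterative soft-thresholding, eq. (tlw); requires ||K|| < 1."""
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = soft(x + K.T @ (y - K @ x), lam)
    return x

def fista(K, y, lam, n_iter=2000):
    """FISTA, eq. (fista): same cost per iteration as IST,
    plus a momentum (over-relaxation) term."""
    x = x_old = np.zeros(K.shape[1])
    t = 1.0
    for _ in range(n_iter):
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + ((t - 1.0) / t_new) * (x - x_old)  # extrapolated point
        x_old, x = x, soft(z + K.T @ (y - K @ z), lam)
        t = t_new
    return x
```

On a small well-conditioned example both iterations converge to the same minimizer; the point of the isochrone analysis below is precisely to quantify how quickly they do so across a whole range of penalty parameters.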
We consider sparse recovery and $\ell_1$-penalized functional minimization as two separate issues. Here, we want to focus on the latter. Another unsatisfactory method of evaluating the convergence of a minimization algorithm is to look at the behavior of the functional $F_\lambda(x^{(n)})$ as a function of $n$. For small penalties, it is quite possible that the minimum is almost reached by a vector $x^{(N)}$ that is still quite far from the real minimizer $\bar x$. Suppose one has developed an iterative algorithm for the minimization of the $\ell_1$-penalized functional (\ref{l1functional}), i.e. a method for the computation of $\bar x(\lambda)$ in expression (\ref{minimizer}). As we are interested in evaluating an algorithm's capabilities of minimizing the functional (\ref{l1functional}), it is reasonable that one would compare the iterates $x^{(n)}$ with the exact minimizer $\bar x(\lambda)$. I.e. evaluation of the convergence speed should look at the quantity $\|x^{(n)}-\bar x\|$ as a function of time. A direct procedure exists for calculating $\bar x(\lambda)$ and thus it is quite straightforward to make such an analysis for a whole range of values of the penalty parameter $\lambda$ (as long as the support of $\bar x(\lambda)$ is not excessively large). We will see that the weaknesses of the iterative algorithms are already observable for penalty parameters corresponding to quite sparse $\bar x(\lambda)$, i.e. for $\bar x$ that are still relatively easy to compute with the direct method. In doing so, one has three parameters that need to be included in a graphical representation: the relative reconstruction error $e=\|x^{(n)}-\bar x\|/\|\bar x\|$, the penalty parameter $\lambda$ and also the time $t$ needed to obtain the approximation $x^{(n)}$. Making a 3D plot is not a good option because, when printed on paper or displayed on screen, it is difficult to accurately interpret the shape of the surface. 
It is therefore advantageous to try to make a more condensed representation of the algorithm's outcome. One particularly revealing possibility we suggest is to use the $\lambda$-$e$-plane to plot the \emph{isochrones} of the algorithm. For a fixed amount of computer time $t$, these isochrones trace out the approximation accuracy $e$ that can be obtained for varying values of the penalty parameter $\lambda$. There are two distinct advantages in doing so. Firstly, it becomes immediately clear for which range of $\lambda$ the algorithm in question converges quickly and, by labeling the isochrones, in which time frame. Secondly, it is clear where the algorithm has trouble approaching the real minimizer: this is characterized by isochrones that are very close to each other and away from the $e=0$ line. Hence a qualitative evaluation of the convergence properties can be made by noticing where the isochrones are widely and uniformly spaced (good convergence properties), and where the isochrones seem to stick together (slow convergence). Quantitatively, one immediately sees what the remaining relative error is. Another advantage of this representation is that (in some weak sense) the isochrones do not depend on the computer used: if one uses a faster/slower computer to make the calculations, the isochrones `do not change place' in the sense that only their labels change. In other words, this way of representing the convergence speed of the algorithm accurately depicts the region where convergence is fast or slow, independently of the computer used. \begin{figure} \caption{Approximation isochrones ($t=1,2,\ldots,10$ minutes) for the iterative thresholding algorithm (\ref{tlw}).}\label{singleisochronefig} \end{figure} In figure \ref{singleisochronefig}, an example of such a plot is given. The operator is again $K^{(1)}\in \mathbb{R}^{1848\times 8192}$ (more details are given at the start of section \ref{comparisonsection}).
The algorithm assessed in this plot is the iterative thresholding algorithm (a). One clearly sees that iterative thresholding does well for $\lambda\geq \lambda_\mathrm{max}/2^{6}$, but runs into trouble for $\lambda\leq \lambda_\mathrm{max}/2^{8}$. This means that the iterative thresholding algorithm (\ref{tlw}) has trouble finding the minimizer with more than about 50 nonzero components (out of a possible 8192 degrees of freedom). The direct method is still practical in this regime as shown by figure \ref{complexityfig} (left), and it was used to find the `exact' $\bar x(\lambda)$'s. In fact, here the direct method is faster than the iterative methods. \section{Comparison of minimization algorithms} \label{comparisonsection} In this section we will compare the six iterative algorithms (a), (a'), (b)--(e), and the two warm-start algorithms (A)--(B) mentioned in section \ref{problemsection}. We will use four qualitatively different operators $K$ for making this comparison. Firstly, we will use an ill-conditioned matrix $K^{(1)}$ stemming from a geo-science inverse problem. It was already used in figures \ref{complexityfig} and \ref{singleisochronefig}. It contains $1848$ 2-D integration kernels discretized on a $64\times 64$ grid, and expanded in a ($2\times$ redundant) wavelet frame. Hence this matrix has $1848$ rows and $8192$ ($=2\times 64^2$) columns. The spectrum is pictured in figure~\ref{spectrumpic}. Clearly it is severely ill-conditioned. The matrix $K^{(2)}$ is of the same size, but contains random numbers taken from a Gaussian distribution. Its spectrum is also in figure~\ref{spectrumpic} and has a much better condition number (ratio of largest singular value to smallest nonzero singular value). This type of operator is often used in the evaluation of algorithms for minimization of an $\ell_1$-penalized functional. As we shall see, the different minimization algorithms will not perform equally well for these two operators.
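The isochrone data behind plots such as figure \ref{singleisochronefig} can be collected with a simple loop: for each penalty parameter one runs the algorithm and records the relative error $e$ at fixed wall-clock checkpoints. A minimal sketch, under the assumption that the exact minimizer has been precomputed with the direct method (names are ours):

```python
import time
import numpy as np

def isochrone_errors(step, x0, x_exact, checkpoints):
    """Iterate `step` (one update of some algorithm, for one fixed
    penalty parameter) and record e = ||x - x_exact|| / ||x_exact||
    at the given wall-clock times (in seconds).  Repeating this over
    a grid of penalty parameters and connecting the points of equal
    time yields one isochrone per checkpoint."""
    x, errors, t0 = x0, [], time.perf_counter()
    for t in checkpoints:
        while time.perf_counter() - t0 < t:
            x = step(x)
        errors.append(np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))
    return errors
```

Because only elapsed wall-clock time is used as the stopping criterion, the resulting curves shift in label but not in shape when run on a faster or slower machine, which is the machine-independence property noted in section \ref{isosection}.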
In order to further discuss the influence of both the spectrum and the orientation of the null space of the operator on the algorithms' behavior, we will also use two other operators. $K^{(3)}$ will have the same well-behaved spectrum as $K^{(2)}$, but an unfavorably oriented null space: the null space will contain many directions that are almost parallel to an edge or a face of the $\ell_1$ ball. $K^{(4)}$ will have the same ill-behaved spectrum as $K^{(1)}$, but it will have the same singular vectors as the Gaussian matrix $K^{(2)}$. More precisely, $K^{(3)}$ is constructed artificially from $K^{(2)}$ by setting columns $4000$ through $8192$ equal to column $4000$. This creates an operator with a null space that contains many vectors parallel to a side or edge of the $\ell_1$ ball. A small perturbation is added in the form of another random Gaussian matrix to yield the intermediate matrix $A$. The singular value decomposition $A=USV^T$ ($U^{-1}=U^T$ and $V^{-1}=V^T$) is calculated, and in this decomposition the spectrum is replaced by the spectrum of $K^{(2)}$. This then forms the matrix $K^{(3)}$. $K^{(4)}$ is constructed by calculating the singular value decomposition of $K^{(2)}=USV^T$ and replacing the singular values in $S$ by those of $K^{(1)}$. In all cases, $K^{(i)}$ is normalized to have its largest singular value $\approx 0.999$ (except when studying algorithm (a'), where we use the normalization $\approx 0.999\times\sqrt{2}$). \begin{figure} \caption{Left: The singular values $\lambda_n$ ($n=1,\ldots, 1848$) of the operators $K^{(1)}$ and $K^{(2)}$.} \label{spectrumpic} \end{figure} \subsection{A severely ill-conditioned operator} \label{illcondsection} In figure \ref{isochronefig}, we compare the six algorithms (a)--(e) mentioned in section \ref{problemsection} for the same ill-conditioned operator $K^{(1)}\in \mathbb{R}^{1848\times 8192}$ as in figure \ref{singleisochronefig}.
We again choose penalty parameters $\lambda_\mathrm{max}\geq\lambda\geq\lambda_\mathrm{max}/2^{14}$ and show the isochrones corresponding to $t=1,2,\ldots,10$ minutes. Panel (a) is identical to figure \ref{singleisochronefig}. All six algorithms do well for large penalties (left-hand sides of the graphs). For smaller values of $\lambda$ ($8<\log_2 \lambda_\mathrm{max}/\lambda$) the isochrones come closer together, meaning that convergence progresses very slowly for algorithms (a), (a'), (b) and (c). For algorithm (d), the isochrones are still reasonably uniformly spaced even for smaller values of the penalty parameter. In this case the projected algorithms do better than iterative thresholding, but the $\ell_1$-ls algorithm (d) is to be preferred for small penalty parameters. The FISTA algorithm (e) seems to perform best of all for small parameters $\lambda$. \begin{figure} \caption{These pictures contain the approximation isochrones for the algorithms (a)--(e) mentioned in section \ref{problemsection}.} \label{isochronefig} \end{figure} Apart from the shape of the isochrone curves, it is also important to appreciate the top horizontal scales of these plots. The top scale indicates the size of the support of the corresponding minimizer $\bar x(\lambda)$. We see that all algorithms have great difficulty in finding minimizers with more than about $100$ nonzero components. In this range of the number of nonzero components in $\bar x(\lambda)$, the direct method is faster for $K^{(1)}$. A skeptic might argue that, in the case of figures \ref{singleisochronefig} and \ref{isochronefig}, the minimizer $\bar x(\lambda)$ might not be unique for small values of $\lambda$, and that this is the reason why the isochrones do not tend to $e=0$ after about 10 minutes ($\approx 4500$ iterations). This is not the case. If one runs the iterative methods for a much longer time, one sees that the error $e$ does go to zero.
Such a plot is made in figure \ref{singlelambdaplot} for one choice of $\lambda$ ($\lambda\approx\lambda_\mathrm{max}/2^{11.115}$). \begin{figure} \caption{Main graph: Relative error $\|x^{(n)}-\bar x\|/\|\bar x\|$ as a function of time for $\lambda\approx\lambda_\mathrm{max}/2^{11.115}$.} \label{singlelambdaplot} \end{figure} What we notice here is that, for iterative soft-thresholding, $\|x^{(n+1)}-x^{(n)}\|/\|x^{(n)}\|=\|x^{(n)}-S_\lambda[x^{(n)}+K^T(y-K x^{(n)})]\|/\|x^{(n)}\|$ is small ($\approx 10^{-5}$ after $4500$ iterations in this example). But this does \emph{not} mean that the algorithm has almost converged (as might be suggested by the inset of figure \ref{singlelambdaplot}); on the contrary, it indicates that the algorithm is progressing only very slowly for this value of $\lambda$! The difference $\|x^{(n+1)}-x^{(n)}\|/\|x^{(n)}\|$ should be of the order of $10^{-16}$ for one to be able to conclude convergence, as already announced in formula (\ref{inexactfixedpoint}). \subsection{A Gaussian random matrix} \label{wellcondsection} In this subsection we choose $K=K^{(2)}$. It is much less ill-conditioned than the matrix in the previous subsection. In figure \ref{isochronefig3} we make the same comparison of the six iterative algorithms (a)--(e) for the operator $K^{(2)}$. We choose penalty parameters $\lambda_\mathrm{max}\geq\lambda\geq\lambda_\mathrm{max}/2^{14}$ and show the isochrones corresponding to $t=6,12,\ldots,60$ seconds, i.e.\ the time scale is $10$ times smaller than for the ill-conditioned matrix $K^{(1)}$ in the previous subsection. Again all the algorithms do reasonably well for large penalty parameters, but performance diminishes for smaller values of $\lambda$. \begin{figure} \caption{These pictures contain the approximation isochrones for the algorithms (a)--(e) mentioned in section \ref{problemsection}.} \label{isochronefig3} \end{figure} The iterative soft-thresholding method with $\|K^{(2)}\|\approx\sqrt{2}$ in (a') does slightly better than iterative soft-thresholding with $\|K^{(2)}\|\approx 1$ in panel (a).
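For concreteness, the iterative soft-thresholding update $x^{(n+1)}=S_\lambda[x^{(n)}+K^T(y-Kx^{(n)})]$ used by algorithms (a) and (a') can be sketched in a few lines. This is only a toy illustration (NumPy, with a small random matrix standing in for $K$), not the code used in the experiments:

```python
import numpy as np

def soft_threshold(z, lam):
    # Componentwise soft-thresholding operator S_lambda
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def iterative_thresholding(K, y, lam, n_iter=500):
    # Iterative soft-thresholding x <- S_lam[x + K^T (y - K x)],
    # started from x = 0; requires ||K|| < 1 for convergence.
    x = np.zeros(K.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + K.T @ (y - K @ x), lam)
    return x

# Toy usage: random K normalized so its largest singular value is < 1
rng = np.random.default_rng(0)
K = rng.standard_normal((20, 50))
K /= 1.01 * np.linalg.norm(K, 2)
y = K @ np.concatenate([rng.standard_normal(5), np.zeros(45)])
x = iterative_thresholding(K, y, lam=0.01)
```

With $\|K\|<1$ this iteration monotonically decreases the functional $\|Kx-y\|^2+2\lambda\|x\|_1$; as discussed above, however, small successive differences $\|x^{(n+1)}-x^{(n)}\|$ do not by themselves certify convergence.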
The GPSR method (c) does better than the other methods for large values of the penalty (up to $|\mathrm{supp}\, \bar x|\approx 500$), but loses out for smaller values. The $\ell_1$-ls method (d) does well compared to the other algorithms as long as the penalty parameter is not too large. The FISTA method (e) does best, except for the very smallest values of $\lambda$ (right-hand sides of the plots), where the projected steepest descent does better on this time scale. Apart from the different time scales ($1$ minute vs. $10$ minutes), there is another, probably even more important, difference between the behavior of the algorithms for $K^{(1)}$ and $K^{(2)}$. In the latter case the size of the support of the minimizers that are recoverable by the iterative algorithms ranges from $0$ to about $1800$ (which is about the maximum possible for $1848$ data). This is much more than in the case of the matrix $K^{(1)}$ in figure \ref{isochronefig}, where only minimizers with about $120$ nonzero coefficients were recoverable. \subsection{Further examples} Here we compare the various iterative algorithms for the operators $K^{(3)}$ and $K^{(4)}$. The former was constructed such that many elements of its null space are almost parallel to the edges of the $\ell_1$ ball, in an effort to make the minimization (\ref{l1functional}) more challenging. The latter operator has the same singular vectors as the random Gaussian matrix $K^{(2)}$ but the ill-conditioned spectrum of $K^{(1)}$. This will illustrate how the spectrum of $K$ can influence the algorithms used for solving (\ref{l1functional}). Figure~\ref{K3pic} shows the isochrone lines for the various algorithms applied to the operator $K^{(3)}$. To make comparison with figure~\ref{isochronefig3} straightforward, the total time span is again $1$ minute subdivided in $6$s intervals. Convergence first progresses faster than in figure~\ref{isochronefig3}: the isochrone corresponding to $t=6$s lies lower than in figure~\ref{isochronefig3}.
For $\lambda<\lambda_\mathrm{max}/2^6$ it is clear that the various algorithms perform worse for the operator $K^{(3)}$ than for the matrix $K^{(2)}$. As these two operators have identical spectra, this implies that a well-conditioned spectrum alone is no guarantee for good performance of the algorithms. \begin{figure} \caption{The six algorithms (a)--(e) are compared for the operator $K^{(3)}$.} \label{K3pic} \end{figure} Figure~\ref{K4pic} shows the isochrone plots for the operator $K^{(4)}$, which has the same spectrum as $K^{(1)}$, implying that it is very ill-conditioned. The singular vectors of $K^{(4)}$ are the same as for the random Gaussian matrix $K^{(2)}$. We see that the ill-conditioning of the spectrum influences the performance of the algorithms in a negative way, as compared to the Gaussian operator $K^{(2)}$. \begin{figure} \caption{The six algorithms (a)--(e) are compared in case of the operator $K^{(4)}$.} \label{K4pic} \end{figure} \subsection{Warm-start strategies} For the warm-start algorithms (A)--(B), it is not possible to draw isochrones because these methods are not purely iterative. They depend on a preset maximum number of iterates and a preset end value for the penalty parameter $\lambda$ or the $\ell_1$ norm $\rho$ of $x$. It is possible, for a fixed total computation time and a fixed value of $\lambda_\mathrm{stop}$ or $\rho_\mathrm{stop}$, to plot $\|\bar x(\lambda_n)-x^{(n)}\|$ vs. $\lambda_n$ or $\|\tilde x(\rho_n)-x^{(n)}\|$ vs. $\rho_n$. This gives a condensed picture of the performance of such an algorithm, as it includes information on the remaining error for various values of $\lambda$ or $\rho$. In figure \ref{fpcfig}, the warm-start methods (A) and (B) are compared in the range $0\leq \|\bar x\|_1\leq 15$ for the matrix $K^{(1)}$. For each experiment, ten runs were performed, corresponding to total computation times of $1, 2, \ldots, 10$ minutes. For each run, the parameter $\rho_\mathrm{max}$ was chosen to be 15 in algorithm (B).
For algorithm (A), $\lambda_\mathrm{stop}$ was chosen such that $\|\bar x(\lambda_\mathrm{stop})\|_1=15$ as well. From the pictures we see that the algorithms (A)--(B) do not do very well for small values of the penalty parameter $\lambda$. Their big advantage is clear: for the price of a single run, an acceptable approximation of \emph{all} minimizers $\bar x(\lambda)$ with $\lambda_\mathrm{max}\geq\lambda\geq\lambda_\mathrm{stop}$ is obtained. The algorithm (B) does somewhat better than algorithm (A). \begin{figure} \caption{The behavior of two warm-start strategies for the operator $K^{(1)}$.} \label{fpcfig} \end{figure} In figure \ref{gaussfpcfig}, the same type of comparison is made for the matrix $K^{(2)}$. In this case, ten runs are performed with a total computation time per run equal to $6,12,\ldots,60$ seconds ($60$s corresponds to about $460$ iterative soft-thresholding steps or $300$ projected steepest descent steps). This is ten times less than in the case of the matrix $K^{(1)}$. Both the `fixed point continuation method' (A) and the `projected steepest descent' (B) do acceptably well, even up to very small values of the penalty parameter $\lambda$ (large values of $\rho$). \begin{figure} \caption{The behavior of two warm-start strategies for the Gaussian random matrix $K^{(2)}$.} \label{gaussfpcfig} \end{figure} All the calculations in this note were done on a 2\,GHz processor with 2\,GB of RAM. \section{Conclusions} The problem of assessing the convergence properties of minimization algorithms for $\ell_1$-penalized least squares functionals was discussed. We start from the rather obvious observation that convergence speed can only refer to the behavior of $e=\|x^{(n)}-\bar x\|/\|\bar x\|$ as a function of time.
Luckily, in the case of functional (\ref{l1functional}), the exact minimizer $\bar x$ (up to computer round-off) can be found in a finite number of steps: even though the variational equations (\ref{kkt}) are nonlinear, they can still be solved exactly using the LARS/homotopy method. A direct calculation of the minimizers $\bar x(\lambda)$ is thus possible, at least when the number of nonzero coefficients in $\bar x$ is not too large (e.g. in the numerical experiments in \cite{Figueiredo.Nowak.ea2008}, one has $4096$ degrees of freedom and only about 160 nonzeros). We provided a graph that indicates the time complexity of the exact algorithm as a function of $s$, the number of nonzero components in $\bar x(\lambda)$. It showed us that computing time rises approximately cubically as a function of $s$ (it is linear at first). We also gave an example where the size of the support of $\bar x(\lambda)$ does not decrease monotonically as $\lambda$ decreases. The direct method is certainly practical for $|\,\mathrm{supp}\, \bar x(\lambda)|\leq 10^3$ or so. It is impossible to completely characterize the performance of an iterative algorithm in just a single picture. A good qualitative appreciation, however, can be gained from looking at the approximation isochrones introduced in this note. These lines in the $\lambda$-$e$-plane tell us for which values of the penalty parameter $\lambda$ convergence is adequately fast, and for which values it is unacceptably slow. We look at the region $e\in [0, 1]$, because this is probably most interesting for doing real inversions. One could also use a logarithmic scale for $e$, and look at very small values of $e$ approaching computer epsilon. But that is probably not of principal interest to people doing real inverse problems.
The main contribution is thus the concept of the approximation isochrone, together with figures \ref{singleisochronefig}, \ref{isochronefig}, \ref{isochronefig3}, \ref{K3pic} and \ref{K4pic}, which compare six different algorithms for four operators. For large penalty parameters, all algorithms mentioned in this note do well for our particular example operators, with a small preference for the GPSR method. The biggest differences are found for small penalty parameters. Algorithms (a), (a'), (b) and (c) risk being useless in this case. The $\ell_1$-ls algorithm (d) seems to be more robust, but it loses out for large penalties. The FISTA method (e) appears to work best for small values of the penalty parameter. We uncovered two aspects that may influence the convergence speed of the iterative algorithms. Firstly, we saw that convergence is slower for ill-conditioned operators than for well-conditioned operators (comparison of $K^{(2)}$ and $K^{(4)}$). Furthermore, the number of nonzero coefficients of a recoverable minimizer is smaller for an ill-conditioned operator. In particular, the operator $K^{(1)}$, which comes out of a real physical problem, presents a challenge for all algorithms for small values of the penalty parameter. Secondly, the orientation of the null space with respect to the edges of the $\ell_1$ ball also influences the speed of convergence of the iterative algorithms, even if the operator is well-conditioned (comparison of $K^{(2)}$ and $K^{(3)}$). We also compared two warm-start strategies and showed that their main advantage is to yield a whole set of minimizers (for different penalty parameters) in a single run. We found that the adaptive steepest descent method (B), proposed in \cite{DaFoL2008} but tested here for the first time, does better than the fixed point continuation method (A).
\section{Acknowledgments} Part of this work was done while the author was a post-doctoral research fellow of the F.W.O-Vlaanderen (Belgium) at the Vrije Universiteit Brussel, and part was done as `Francqui Foundation intercommunity post-doctoral researcher' at the D\'epartement de Math\'ematique, Universit\'e Libre de Bruxelles. Discussions with Ingrid Daubechies and Christine De Mol are gratefully acknowledged. The author acknowledges the financial support of the VUB through the GOA-62 grant and of the FWO-Vlaanderen through grant G.0564.09N. \end{document}
\begin{document} \title{Macaulay's theorem for vector-spread algebras} \author{Marilena Crupi, Antonino Ficarra, Ernesto Lax} \address{Marilena Crupi, Department of mathematics and computer sciences, physics and earth sciences, University of Messina, Viale Ferdinando Stagno d'Alcontres 31, 98166 Messina, Italy} \email{[email protected]} \address{Antonino Ficarra, Department of mathematics and computer sciences, physics and earth sciences, University of Messina, Viale Ferdinando Stagno d'Alcontres 31, 98166 Messina, Italy} \email{[email protected]} \address{Ernesto Lax, Department of mathematics and computer sciences, physics and earth sciences, University of Messina, Viale Ferdinando Stagno d'Alcontres 31, 98166 Messina, Italy} \email{[email protected]} \subjclass[2020]{Primary 05E40; Secondary 13A02, 13D02, 13D40, 13F55.} \keywords{vector-spread monomial ideals, vector-spread strongly stable ideals, Macaulay's Theorem, Kruskal--Katona's Theorem, Hilbert functions, Betti numbers.} \maketitle \begin{abstract} Let $S=K[x_1,\dots,x_n]$ be the standard graded polynomial ring, with $K$ a field, and let ${\bf t}=(t_1,\dots,t_{d-1})\in{\NZQ Z}_{\ge 0}^{d-1}$, $d\ge 2$, be a $(d-1)$-tuple whose entries are nonnegative integers. To a {\bf t}-spread ideal $I$ in $S$, we associate a unique $f_{\bf t}$-vector, and we prove that if $I$ is {\bf t}-spread strongly stable, then there exists a unique {\bf t}-spread lex ideal which shares the same $f_{\bf t}$-vector as $I$, \emph{via} the combinatorics of the {\bf t}-spread shadows of special sets of monomials of $S$. Moreover, we characterize the possible $f_{\bf t}$-vectors of {\bf t}-spread strongly stable ideals, generalizing the well-known theorems of Macaulay and Kruskal--Katona. Finally, we prove that among all {\bf t}-spread strongly stable ideals with the same $f_{\bf t}$-vector, the {\bf t}-spread lex ideals have the largest Betti numbers.
\end{abstract} \section*{Introduction} One of the most well-studied and important numerical invariants of a graded ideal in a standard graded polynomial ring is its Hilbert function, which gives the sizes of the graded components of the ideal. There is an extensive literature on this topic (see, for instance, \cite{JT} and the references therein). Usually, Hilbert functions are described using the well-known Macaulay expansion with binomials. This fact often implies the use of combinatorial tools, and the arguments consist of very clever computations with binomials. The crucial idea of Macaulay is that there exist special monomial ideals, the so-called \emph{lex ideals}, that attain all possible Hilbert functions. The pivotal property is that a lex ideal grows as slowly as possible. The ``squarefree'' analogue of Macaulay's theorem is known as the Kruskal--Katona theorem. Indeed, if Macaulay's theorem describes the possible Hilbert functions of the graded ideals in polynomial rings, the possible $f$-vectors of a simplicial complex are characterized in the theorem of Kruskal--Katona \cite{JT, GK, JK}. In fact, the Hilbert function of the Stanley--Reisner ring of a simplicial complex $\Delta$ is determined by the $f$-vector of $\Delta$, and vice versa. The Kruskal--Katona theorem is a fundamental result in topological combinatorics and discrete geometry which quickly aroused much interest in face enumeration questions for various classes of simplicial complexes, polytopes, and manifolds. Furthermore, it may also be interpreted as a theorem on Hilbert functions of quotients of exterior algebras \cite{AHH}. Lex ideals as well as squarefree lex ideals play a key role in the study of the minimal free resolutions of monomial ideals.
Indeed, if one considers the stable and squarefree stable ideals and the formulas for computing their graded Betti numbers \cite{JT}, one can deduce the Bigatti--Hulett theorem \cite{BAM, HH}, which says that lex ideals have the largest graded Betti numbers among all graded ideals with the same Hilbert function (see also \cite{HH2, KP}). Let $S=K[x_1,\dots,x_n]$ be the standard graded polynomial ring, with $K$ a field, and let ${\bf t}=(t_1,\dots,t_{d-1})\in{\NZQ Z}_{\ge0}^{d-1}$, $d\ge 2$, be a $(d-1)$-tuple whose entries are nonnegative integers. Recently, in \cite{F1}, the class of ${\bf t}$\textit{-spread strongly stable ideals} has been introduced. It is a special class of monomial ideals which generalizes the class of $t$-spread strongly stable ideals of \cite{EHQ}, where $t$ is a nonnegative integer. In more detail, a monomial $u=x_{j_1}x_{j_2}\cdots x_{j_\ell}$ ($1\le j_1\le j_2\le\cdots\le j_\ell\le n$) of degree $\ell\le d$ of $S$ is called a \textit{vector-spread monomial of type ${\bf t}$}, or simply a \textit{${\bf t}$-spread monomial}, if $j_{i+1}-j_{i}\ge t_{i}$, for $i=1,\dots,\ell-1$, and a \textit{${\bf t}$-spread monomial ideal} is a monomial ideal of $S$ generated by ${\bf t}$-spread monomials. A \textit{${\bf t}$-spread strongly stable ideal} is a ${\bf t}$-spread monomial ideal with an additional combinatorial property (Definition \ref{def:stronglylex}). For ${\bf t}\in \{(0, \ldots, 0), (1, \ldots, 1)\}$, one obtains the classical notions of strongly stable ideal and squarefree strongly stable ideal, respectively \cite{JT}. The aim of this article is to generalize Macaulay's theorem to the class of ${\bf t}$-spread strongly stable ideals. The crucial role is played by the class of \textit{${\bf t}$-spread lex ideals} (Definition \ref{def:stronglylex}).
Since ${\bf t} \in {\NZQ Z}_{\ge0}^{d-1}$, \emph{i.e.}, the entries of ${\bf t}$ can also be zero, in order to unify the theory about the classification of the Hilbert functions of graded ideals of $S$, we focus on the classification of the possible $f_{{\bf t}}$-vectors of ${\bf t}$-spread strongly stable ideals. In more detail, we answer the following question: \emph{under which conditions is a given sequence of positive integers $f = (f_{-1}, f_{0}, \ldots, f_{d-1})$ the $f_{{\bf t}}$-vector of a ${\bf t}$-spread strongly stable ideal?} The plan of the paper is as follows. Section \ref{sec1} contains some preliminaries and notions that will be used in the article. We introduce the notion of \textit{${\bf t}$-spread shadow} of a set of monomials of $S$ and the notions of ${\bf t}$\textit{-spread strongly stable set (ideal)} and ${\bf t}$\textit{-spread lex set (ideal)} (Definitions \ref{def:stronglylexset}, \ref{def:stronglylex}). The combinatorics of such sets is deeply analyzed in Section \ref{sec2}. The key result of the section is Theorem \ref{Thm:BayerVectSpread}, which allows us to prove that to every ${\bf t}$-spread strongly stable ideal $I$ one can associate a unique ${\bf t}$-spread lex ideal which shares the same $f_{\bf t}$-vector as $I$ (Corollary \ref{Cor:SubstituteTLex}). Moreover, we point out why this is not possible for an arbitrary ${\bf t}$-spread monomial ideal. Section \ref{sec3} contains the main result of the article, which gives the classification of all possible $f_{\bf t}$-vectors of ${\bf t}$-spread strongly stable ideals (Theorem \ref{thm:main}). The classification is obtained by introducing a new operator (Definition \ref{Def:vector-spreadOp}) which, for suitable values of ${\bf t}$, is analogous either to the operator $a\longrightarrow a^{\langle d\rangle}$ or to the operator $a\longrightarrow a^{(d)}$, which are involved in Macaulay's theorem and in the Kruskal--Katona theorem, respectively \cite{JT}.
Finally, in Section \ref{sec4}, as an application of the results in the previous sections, we state an upper bound for the graded Betti numbers of the class of all ${\bf t}$-spread strongly stable ideals with a given $f_{{\bf t}}$-vector (Theorem \ref{thm:upperbound}). We prove that the ${\bf t}$-spread lex ideals attain the maximal Betti numbers among all ${\bf t}$-spread strongly stable ideals with a given $f_{{\bf t}}$-vector. Such a statement generalizes the well-known result proved independently by Bigatti \cite{BAM} and Hulett \cite{HH} for graded ideals in a polynomial ring with coefficients in a field of characteristic zero, and afterwards generalized by Pardue \cite{KP} to arbitrary characteristic. The article contains some examples, developed using \emph{Macaulay2} \cite{GDS}, illustrating the main results. \section{Preliminaries and notations}\label{sec1} Let $S=K[x_1,\dots,x_n]$ be the standard graded polynomial ring, with $K$ a field, and let ${\bf t}=(t_1,\dots,t_{d-1})\in{\NZQ Z}_{\ge0}^{d-1}$, $d\ge 2$, be a $(d-1)$-tuple whose entries are nonnegative integers. A monomial $u=x_{j_1}x_{j_2}\cdots x_{j_\ell}$ ($1\le j_1\le j_2\le\cdots\le j_\ell\le n$) of degree $\ell\le d$ of $S$ is called a \textit{vector-spread monomial of type ${\bf t}$}, or simply a \textit{${\bf t}$-spread monomial}, if $j_{i+1}-j_{i}\ge t_{i}$, for $i=1,\dots,\ell-1$. If $I$ is a graded ideal of $S$, we denote by $I_j$ the $j$th graded component of $I$ and by $\indeg(I)$ the initial degree of $I$, \emph{i.e.}, the smallest $j$ such that $I_j\neq 0$. Moreover, for a monomial ideal $I\subset S$, we denote by $G(I)$ its unique minimal set of monomial generators. Furthermore, if $j\ge0$ is an integer, we set $G(I)_j=\{u\in G(I):\deg(u)=j\}$. A \textit{${\bf t}$-spread monomial ideal} is a monomial ideal of $S$ generated by ${\bf t}$-spread monomials.
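For concreteness, the spreading condition $j_{i+1}-j_i\ge t_i$ is straightforward to test by computer. The following Python sketch (the encoding of a monomial by its sorted index sequence, and the function name, are our own illustrative conventions) checks it:

```python
def is_t_spread(indices, t):
    # A monomial x_{j_1} x_{j_2} ... x_{j_l} (with j_1 <= ... <= j_l) is
    # t-spread when j_{i+1} - j_i >= t_i for i = 1, ..., l-1.
    return all(indices[i + 1] - indices[i] >= t[i]
               for i in range(len(indices) - 1))

# x_1 x_4^2 x_5 in K[x_1,...,x_7] corresponds to the index tuple (1, 4, 4, 5)
print(is_t_spread((1, 4, 4, 5), (3, 0, 1)))  # True
print(is_t_spread((1, 4, 4, 5), (3, 0, 2)))  # False: the last gap is 1 < 2
```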
For instance, $I=(x_1x_4^2x_5,x_1x_4^2x_6,x_1x_5x_7)$ is a $(3,0,1)$-spread monomial ideal of the polynomial ring $S=K[x_1, \ldots, x_7]$, but it is not $(3,0,2)$-spread, as $x_1x_4^2x_5\in G(I)$ is not a $(3,0,2)$-spread monomial. Note that any monomial (ideal) is ${\bf 0}$-spread, where ${\bf 0}=(0,0,\dots,0)$. If $t_i\ge1$, for all $i$, a ${\bf t}$-spread monomial (ideal) is a \textit{squarefree} monomial (ideal). We denote by $M_{n,\ell,{\bf t}}$ the set of all ${\bf t}$-spread monomials of degree $\ell$ in $S$. If $\ell \leq d$, by \cite[Corollary 2.4]{F1}, \begin{equation}\label{Formula:|Mn,l,t|} \lvert M_{n,\ell,{\bf t}} \rvert = \binom{n+(\ell-1)-\sum_{j=1}^{\ell-1}t_j}{\ell}. \end{equation} For a set $L$ of monomials of $S$, one defines the \textit{vector-spread shadow}, or simply the \textit{${\bf t}$-spread shadow}, of $L$ to be the set \begin{align*} \Shad_{\bf t}(L)\ &=\ \big\{x_iw:w\in L, x_iw\ \textup{is}\ {\bf t}\textup{-spread}, \, i=1,\dots,n \big\}. \end{align*} Note that $\Shad_{\bf t}(M_{n,\ell,{\bf t}})=M_{n,\ell+1,{\bf t}}$ for all $\ell\ge 0$ and that $\Shad_{\bf t}(L)=\emptyset$ whenever all monomials in $L$ have degree $\ge d$. Moreover, one quickly observes that if $L$ is a set of monomials of $S$, then the definition of $\Shad_{\bf 0}(L)$ coincides with the classical notion of shadow of $L$ \cite[Chapter 6]{JT}. If $I$ is a monomial ideal of $S$, we denote by $[I_j]_{\bf t}$ the set of all ${\bf t}$-spread monomials in $I_j$. Furthermore, we set \[f_{{\bf t},\ell-1}(I) = \vert M_{n,\ell,{\bf t}}\vert - \vert [I_\ell]_{\bf t}\vert, \quad 0\le\ell\le d,\] and define the vector \[f_{\bf t} (I)= (f_{{\bf t},-1}(I), f_{{\bf t},0}(I), \ldots, f_{{\bf t}, d-1}(I)).\] Such a vector is called the \textit{$f_{\bf t}$-vector} of $I$. Note that $f_{{\bf t},-1}(I)=1$. Let $i,j$ be integers; we set $[i,j]=\{k\in{\NZQ Z}: i\leq k\leq j\}$. Note that $[i,j]\ne\emptyset$ if and only if $i\le j$.
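Both the counting formula above and the identity $\Shad_{\bf t}(M_{n,\ell,{\bf t}})=M_{n,\ell+1,{\bf t}}$ can be checked by brute-force enumeration for small parameters. The following Python sketch (the index-tuple encoding and the function names are our own conventions) does so for $n=7$ and ${\bf t}=(3,0,1)$:

```python
from math import comb

def t_spread_monomials(n, ell, t):
    # All t-spread monomials of degree ell in x_1,...,x_n, encoded as
    # index tuples (j_1,...,j_ell) with j_{i+1} - j_i >= t_i.
    def extend(prefix):
        if len(prefix) == ell:
            yield prefix
            return
        lo = prefix[-1] + t[len(prefix) - 1] if prefix else 1
        for j in range(lo, n + 1):
            yield from extend(prefix + (j,))
    return set(extend(()))

def t_spread_shadow(L, n, t):
    # The t-spread shadow: all t-spread products x_i * w with w in L.
    shadow = set()
    for w in L:
        for i in range(1, n + 1):
            u = tuple(sorted(w + (i,)))
            if all(u[k + 1] - u[k] >= t[k] for k in range(len(u) - 1)):
                shadow.add(u)
    return shadow

n, t = 7, (3, 0, 1)
M2 = t_spread_monomials(n, 2, t)
M3 = t_spread_monomials(n, 3, t)
# |M_{n,ell,t}| = binom(n + (ell-1) - (t_1+...+t_{ell-1}), ell)
assert len(M3) == comb(n + 2 - (t[0] + t[1]), 3) == 20
# Shad_t(M_{n,2,t}) = M_{n,3,t}
assert t_spread_shadow(M2, n, t) == M3
```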
Moreover, if $i=1$, we denote the set $[1,j]$ simply by $[j]$. One can observe that, for ${\bf t} =(t_1, \ldots, t_{d-1})$ with $t_i\ge 1$ for all $i$, $I$ is the Stanley--Reisner ideal $I_\Delta$ of a uniquely determined simplicial complex $\Delta$ on vertex set $[n]$, with $f_{\bf 1}(I)$ as $f$-vector, where ${\bf 1} =(1, \ldots, 1)$. \begin{Definition} \label{def:stronglylexset} \rm Let $L\subseteq M_{n,\ell,{\bf t}}$. $L$ is called a ${\bf t}$\textit{-spread strongly stable set} if for all $u\in L$ and all $j<i$ such that $x_i$ divides $u$ and $x_j(u/x_{i})$ is ${\bf t}$-spread, one has $x_j(u/x_{i})\in L$. $L$ is called a ${\bf t}$-\textit{spread lex set} if for all $u\in L$ and all $v\in M_{n,\ell,{\bf t}}$ such that $v\ge_{\lex}u$, one has $v\in L$. \end{Definition} Here $\ge_{\lex}$ stands for the lex order induced by $x_1>\dots>x_n$ \cite{JT}. For our convenience, throughout the article, we assume the empty set to be both a ${\bf t}$-spread strongly stable set and a ${\bf t}$-spread lex set. \begin{Definition} \label{def:stronglylex} \rm Let $I$ be a ${\bf t}$-spread ideal. $I$ is said to be a ${\bf t}$\textit{-spread strongly stable ideal} if $[I_{\ell}]_{\bf t}$ is a ${\bf t}$-spread strongly stable set, for all $\ell$. $I$ is said to be a ${\bf t}$\textit{-spread lex ideal} if $[I_{\ell}]_{\bf t}$ is a ${\bf t}$-spread lex set, for all $\ell$. \end{Definition} One can observe that any {\bf t}-spread lex set (ideal) is a {\bf t}-spread strongly stable set (ideal). Moreover, for ${\bf t}={\bf 0}$ (${\bf t}={\bf 1}$) one obtains the classical notions of (squarefree) strongly stable ideal and (squarefree) lex ideal \cite{JT}. \section{Combinatorics on vector-spread shadows}\label{sec2} In this section, for ${\bf t} =(t_1, \ldots, t_{d-1})\in{\NZQ Z}_{\ge0}^{d-1}$, $d\ge 2$, we deal with the combinatorics of the {\bf t}-spread shadows of {\bf t}-spread strongly stable sets and {\bf t}-spread lex sets.
As a consequence, given a {\bf t}-spread strongly stable ideal $I$ of the polynomial ring $S$, we prove the existence of a unique {\bf t}-spread lex ideal of $S$ with the same $f_{\bf t}$-vector as $I$.\\ Let $u$ be a monomial. We set $\max(u)=\max\{i:x_i\ \textup{divides}\ u\}$. \begin{Lemma}\label{Lemma:ShadVectSS} Let $L\subseteq M_{n,\ell,{\bf t}}$ be a ${\bf t}$-spread strongly stable set. Then $$ \Shad_{\bf t}(L)=\big\{wx_j:w\in L,\ j\ge\max(w),\ wx_j\ \textup{is}\ {\bf t}\textup{-spread}\big\}. $$ \end{Lemma} \begin{proof} Let $u\in\Shad_{\bf t}(L)$. Then, $u=w x_j$ for some $w\in L$. If $\max (w)\leq j$ there is nothing to prove. Suppose $j<\max(w)$. We can write $u=w'x_{\max (u)}$, with $w' = u/x_{\max (u)}$ and $\max(w')\le\max(u)$. The proof is complete if we show that $w'\in L$. Let $u=x_{j_1}\cdots x_{j_{\ell+1}}$ with $j_1\le\dots\le j_{\ell+1}$. Then, $j=j_p$ for some $p<\ell+1$ and $w'=x_{j_1}\cdots x_{j_\ell}=x_{j_p}(w/x_{\max(w)})$ is a ${\bf t}$-spread monomial because $u$ is ${\bf t}$-spread. Moreover, $w'\in L$ since $j_p<\max(w)=j_{\ell+1}$, $w\in L$ and $L$ is a ${\bf t}$-spread strongly stable set. \end{proof} \begin{Proposition}\label{Prop:Shadt(L)} Let $L\subseteq M_{n,\ell,{\bf t}}$ be a ${\bf t}$-spread set. \begin{enumerate} \item[\textup{(a)}] If $L$ is a ${\bf t}$-spread strongly stable set, then $\Shad_{\bf t}(L)\subseteq M_{n,\ell+1,{\bf t}}$ is a ${\bf t}$-spread strongly stable set. \item[\textup{(b)}] If $L$ is a ${\bf t}$-spread lex set, then $\Shad_{\bf t}(L)\subseteq M_{n,\ell+1,{\bf t}}$ is a ${\bf t}$-spread lex set. \end{enumerate} \end{Proposition} \begin{proof} Let $u=wx_j\in\Shad_{\bf t}(L)$. For the proofs of both (a) and (b), by Lemma \ref{Lemma:ShadVectSS}, we can assume that $\max(w)\le j$. \\ (a) Let $i<k$ be such that $x_k$ divides $u$ and $u'=x_i(u/x_k)$ is ${\bf t}$-spread. We prove that $u'\in\Shad_{\bf t}(L)$. If $k=j$, then $u'=x_iw\in\Shad_{\bf t}(L)$ by definition. Suppose $k\ne j$.
Since $j=\max(u)$, we have $i<k<j$ and consequently $x_k$ divides $w$. Therefore, $u'=x_i(w/x_k)x_j\in\Shad_{\bf t}(L)$, because $x_i(w/x_k)\in L$: indeed $x_i(w/x_k)$ is a ${\bf t}$-spread monomial, $i<k$ and $L$ is a ${\bf t}$-spread strongly stable set. \\ (b) Let $v\in M_{n,\ell+1,{\bf t}}$ with $v>_{\lex}u$. We prove that $v\in\Shad_{\bf t}(L)$. By definition of the lex order, it follows that $v/x_{\max(v)}\ge_{\lex} u/x_{\max(u)}$. The hypothesis on $L$ guarantees that $v/x_{\max(v)}\in L$. Hence, $v=(v/x_{\max(v)})x_{\max(v)}\in\Shad_{\bf t}(L)$. \end{proof} Let $L\subseteq M_{n,\ell,{\bf t}}$ be a set of monomials, where $\ell\leq d$. For every $i\in\{1,\ldots,n\}$ we denote by $m_i (L)$ the number of monomials $u\in L$ such that $\max(u)=i$, and we set $m_{\leq j}(L) = \sum_{i=1}^{j} m_i(L)$. Note that $m_i(L)=0$ if $i\leq \sum_{j=1}^{\ell-1} t_j$. \begin{Lemma}\label{Lemma:m_i} Let $L\subseteq M_{n,\ell,{\bf t}}$ be a $\bf t$-spread strongly stable set with $\ell<d$. Then \begin{enumerate} \item[\textup{(a)}] $m_i(\Shad_{\bf t}(L))=m_{\leq i-t_\ell}(L)$ for all $i$; \item[\textup{(b)}] $\left\lvert \Shad_{\bf t}(L) \right\rvert = \sum_{k=1+\sum_{j=1}^{\ell-1} t_j}^{n-t_\ell} m_{\leq k}(L)$. \end{enumerate} \end{Lemma} \begin{proof} (a) If $i\leq\sum_{j=1}^{\ell} t_j$, the claim is trivial. Let $i\geq 1+\sum_{j=1}^{\ell} t_j$. Consider the map \[\varphi : \left\{ u\in\Shad_{\bf t}(L)\, :\, \max(u)=i \right\} \longrightarrow \left\{ w \in L \, :\, \max(w)\leq i-t_\ell \right\},\] defined as follows. Let $u\in\Shad_{\bf t}(L)$ with $\max(u)=i$. By Lemma \ref{Lemma:ShadVectSS}, $u=w x_i$, where $w=u/x_i\in L$ is uniquely determined and satisfies $\max(w)\le i-t_\ell$. Thus we set $\varphi(u)=w$. The map $\varphi$ is well defined by the uniqueness of $w$. To prove (a), it is enough to show that $\varphi$ is a bijection. $\varphi$ is clearly injective. To prove that $\varphi$ is surjective, let $w\in L$ with $\max(w)\leq i-t_\ell$.
Then, $u=w x_i$ is ${\bf t}$-spread because $\max(w)\leq i - t_\ell$. Since $\max(u)=i$, $u$ belongs to the domain of $\varphi$ and $\varphi(u)=w$, as desired. \noindent(b) Since $$ \Shad_{\bf t}(L)=\bigcup_{i=1+\sum_{j=1}^{\ell}t_j}^{n}\big\{u\in\Shad_{\bf t}(L):\max(u)=i\big\}, $$ where the union is disjoint, by (a), $|\{u\in\Shad_{\bf t}(L):\max(u)=i\}|=m_i(\Shad_{\bf t}(L))=m_{\le i-t_\ell}(L)$, and so \begin{align*} \left\lvert \Shad_{\bf t}(L) \right\rvert & = \sum_{i=1+\sum_{j=1}^{\ell}t_j}^{n} m_i (\Shad_{\bf t}(L)) \\ & = \sum_{i=1+\sum_{j=1}^{\ell}t_j}^{n} m_{\leq i-t_\ell} (L) \\ & = \sum_{k=1+\sum_{j=1}^{\ell-1}t_j}^{n-t_\ell} m_{\leq k} (L). \end{align*} \end{proof} The following result is a vector-spread analogue of a well-known theorem due to Bayer, see \cite[Theorem 6.3.3]{JT}. The proof is very similar to \cite[Theorem 2]{CAC}, but we include it in full detail for the convenience of the reader, stressing the needed changes. \begin{Theorem}\label{Thm:BayerVectSpread} Let $L\subset M_{n,\ell,{\bf t}}$ be a {\bf t}-spread lex set and let $N\subset M_{n,\ell,{\bf t}}$ be a {\bf t}-spread strongly stable set. Suppose $\left\lvert L \right\rvert \leq \left\lvert N \right\rvert$. Then $m_{\leq i}(L)\leq m_{\leq i}(N)$ for all $i$. \end{Theorem} \begin{proof} We first observe that $N=N_0 \union N_1 x_n$, where $N_0$ and $N_1$ are the unique {\bf t}-spread strongly stable sets such that \begin{align*} N_0\ &=\ \{u\in N:\max(u)<n\},& N_1\ &=\ \{u/x_n:u\in N,\max(u)=n\}. \end{align*} Similarly, we can write $L=L_0\cup L_1x_n$, where $L_0$ and $L_1$ are ${\bf t}$-spread lex sets defined as above. We proceed by induction on $n\ge1$, with the base case being trivial. Let $n>1$. Firstly, observe that $m_{\leq n}(L)=\left\lvert L \right\rvert$ and $\left\lvert N \right\rvert = m_{\leq n}(N)$. Hence the assertion holds for $i=n$. Note that $m_{\leq n-1}(L)=|L_0|$ and $m_{\leq n-1}(N)=|N_0|$.
Thus, proving that $m_{\leq n-1}(L)\leq m_{\leq n-1}(N)$ is equivalent to proving that \begin{equation}\label{eq:N0L0Crucial} |L_0|\leq|N_0|. \end{equation} Assume for a moment that inequality \eqref{eq:N0L0Crucial} holds. Then, applying our inductive hypothesis to the sets $L_0,N_0\subset M_{n-1,\ell,{\bf t}}$ we obtain $$ m_{\le i}(L)=m_{\le i}(L_0)\le m_{\le i}(N_0)=m_{\le i}(N)\ \ \ \text{for}\ \ \ i=1,\dots,n-1, $$ as desired. Thus it remains to prove the inequality \eqref{eq:N0L0Crucial}. Let $N_0^* \subset M_{n-1,\ell,{\bf t}}$ be a {\bf t}-spread lex set with $\left\lvert N_0^* \right\rvert = \left\lvert N_0 \right\rvert$ and $N_1^* \subset M_{n-t_{\ell-1},\ell-1,{\bf t}}$ be a {\bf t}-spread lex set with $\left\lvert N_1^* \right\rvert = \left\lvert N_1 \right\rvert$. Let $N^* = N_0^*\cup N_1^* x_n$. We claim that $N^*$ is a {\bf t}-spread strongly stable set. Let $u\in N^*$. We shall prove that for every $j<i$ such that $x_i$ divides $u$ and $x_j(u/x_{i})$ is ${\bf t}$-spread, we have $x_j(u/x_{i}) \in N^*$. If $u\in N_0^*$ there is nothing to prove, since $N_0^*$ is a ${\bf t}$-spread lex set and hence ${\bf t}$-spread strongly stable. Suppose $u\in N_1^* x_n$; then we can write $u=wx_n$, where $w\in N_1^*$. If $i<n$, then $w'= x_j(w/x_{i})$ belongs to $N_1^*$ and $x_j(u/x_i)=w'x_n\in N_1^* x_n$. If $i=n$, then $x_j(u/x_i)=x_jw$. Now, if $x_n$ divides $x_jw$, then again $x_jw\in N_1^*x_n$. Otherwise, if $x_n$ does not divide $x_jw$, then $x_jw\in N^*$ if and only if $x_jw\in N_0^*$. Thus, we must show that $\Shad_{\bf t}(N_1^*)\subset N_0^*$. To this end, it is sufficient to prove that $|\Shad_{\bf t}(N_1^*)|\leq |N_0^*|$, as both sets are ${\bf t}$-spread lex sets (Proposition \ref{Prop:Shadt(L)}(b)). By Lemma \ref{Lemma:m_i} and the induction hypothesis we obtain \begin{align*} |\Shad_{\bf t}(N_1^*)| & = \sum_{i=1+\sum_{j=1}^{\ell-2}t_j}^{n-t_{\ell-1}} m_{\leq i}(N_1^*)\leq \sum_{i=1+\sum_{j=1}^{\ell-2}t_j}^{n-t_{\ell-1}} m_{\leq i}(N_1) \\ & = |\Shad_{\bf t}(N_1)|\leq |N_0|= |N_0^*|.
\end{align*} This shows that $N^*$ is a {\bf t}-spread strongly stable set. Since $|N|=|N^*|$, we may replace $N$ by $N^*$ and assume that $N_0$ is a {\bf t}-spread lex set. We suppose $n>1 + \sum_{j=1}^{\ell-1} t_j$, otherwise $M_{n,\ell,{\bf t}}=\{x_1x_{1+t_1}\cdots x_{1+\sum_{j=1}^{\ell-1} t_j}\}$ and the assertion is trivial. Let $m=x_{j_1}\cdots x_{j_{\ell}}$ be a {\bf t}-spread monomial and $\alpha : M_{n,\ell,{\bf t}} \rightarrow M_{n,\ell,{\bf t}}$ be the map defined as follows: \begin{enumerate} \item[{\normalfont(a)}] if $j_{\ell} \neq n$, then $\alpha(m)=m$; \item[{\normalfont(b)}] if $j_{\ell} = n$ and $m\neq \min_{>_{\lex}}M_{n,\ell,{\bf t}}=x_{n-\sum_{j=1}^{\ell -1}t_j} x_{n-\sum_{j=2}^{\ell -1}t_j}\cdots x_{n-t_{\ell -1}}x_n$, then there exists $r\in [2,\ell]$ such that $j_r > j_{r-1} + t_{r-1}$. Hence, if $r$ is the largest integer with this property, we define $$\alpha(m) = x_{j_1}\cdots x_{j_{r-1}} x_{j_r - 1}\cdots x_{j_{\ell -1}-1}x_{n-1};$$ \item[{\normalfont(c)}] if $j_{\ell} = n$ and $m=\min_{>_{\lex}}M_{n,\ell,{\bf t}}=x_{n-\sum_{j=1}^{\ell -1}t_j} x_{n-\sum_{j=2}^{\ell -1}t_j}\cdots x_{n-t_{\ell -1}}x_n$, then $$\alpha(m)=x_{n-1-\sum_{j=1}^{\ell -1}t_j} x_{n-1-\sum_{j=2}^{\ell -1}t_j}\cdots x_{n-1-t_{\ell -1}}x_{n-1}.$$ \end{enumerate} Such a map $\alpha$ is well defined and is easily seen to be a lexicographic order-preserving map, \textit{i.e.}, if $m_1,m_2\in M_{n,\ell,{\bf t}}$ and $m_1 <_{\lex} m_2$, then $\alpha(m_1)<_{\lex}\alpha(m_2)$, too. To prove \eqref{eq:N0L0Crucial}, since both $L_0$ and $N_0$ are {\bf t}-spread lex sets, it is enough to show that $\min_{>_{\lex}} L_0 \geq_{\lex} \min_{>_{\lex}} N_0$. Let $u=\min_{>_{\lex}} L=x_{i_1}\cdots x_{i_{\ell}}$ and $v=\min_{>_{\lex}} N=x_{j_1}\cdots x_{j_{\ell}}$. We claim that $\alpha(u)=\min_{>_{\lex}}L_0$ and $\alpha(v)=\min_{>_{\lex}}N_0$.
Indeed, \begin{enumerate} \item[{\normalfont(a)}] if $v\in N_0$, then $\alpha(v)=v\in N_0$; \item[{\normalfont(b)}] if $v\in N_1 x_n$ and $\alpha(v)=x_{j_1}\cdots x_{j_{r-1}} x_{j_r - 1}\cdots x_{j_{\ell -1}-1}x_{n-1}$, where $r\in [2,\ell]$ is the largest integer such that $j_r > j_{r-1} + t_{r-1}$, then \begin{enumerate} \item[{\normalfont(i)}] if $r=\ell$, then $\alpha(v)=x_{j_1}\cdots x_{j_{\ell-1}} x_{n-1}=(v/x_n)x_{n-1}\in N_0$, because $N$ is a {\bf t}-spread strongly stable set. \item[{\normalfont(ii)}] if $r<\ell$, since $N$ is a {\bf t}-spread strongly stable set, we have \begin{align*} v_1&= x_{j_r -1}(v/x_{j_r})\in N,\\ v_2&= x_{j_{r+1} -1}(v_1/x_{j_{r+1}})\in N,\\ &\phantom{..}\vdots\\ v_{\ell-r}&= x_{j_{\ell-1} -1}(v_{\ell -r-1}/x_{j_{\ell-1}})\in N, \end{align*} then $\alpha(v)=x_{n-1}(v_{\ell -r}/x_n)\in N_0$. \end{enumerate} \item[{\normalfont(c)}] if $v\in N_1 x_n$ and $\alpha(v)=x_{j_1 -1}\cdots x_{j_{\ell -1}-1}x_{n-1}$, we have \begin{align*} v_1&=x_{j_1 -1}(v/x_{j_1})\in N,\\ v_2&=x_{j_2 -1}(v_1/x_{j_2})\in N,\\ &\phantom{..}\vdots\\ v_{\ell-1}&=x_{j_{\ell-1} -1}(v_{\ell-2}/x_{j_{\ell-1}})\in N, \end{align*} and $\alpha(v)=x_{n-1}(v_{\ell -1}/x_n)\in N_0$, because $N$ is a {\bf t}-spread strongly stable set. \end{enumerate} Hence, in all possible cases $\alpha(v)\in N_0$. Thus, we have $\min_{>_{\lex}}N_0 \leq_{\lex} \alpha(v)$, and $\min_{>_{\lex}}N_0 \geq_{\lex} v =\min_{>_{\lex}}N$. Since $\max(\min_{>_{\lex}}N_0)<n$, we have $$ \min_{>_{\lex}}N_0 = \alpha(\min_{>_{\lex}} N_0) \geq_{\lex} \alpha(v) \geq_{\lex} \min_{>_{\lex}} N_0, $$ and so $\min_{>_{\lex}} N_0 = \alpha(v)$. Similarly one can prove that $\min_{>_{\lex}} L_0 = \alpha(u)$. Finally, since $L$ is a {\bf t}-spread lex set and $|L|\leq |N|$, we have $u\geq_{\lex}v$. Consequently, $\min_{>_{\lex}}L_0 = \alpha(u) \geq_{\lex} \alpha(v)= \min_{>_{\lex}}N_0$ and the proof is complete.
\end{proof} As a striking consequence of the previous result, we prove that to every ${\bf t}$-spread strongly stable ideal $I$ one can associate a unique ${\bf t}$-spread lex ideal which shares the same $f_{\bf t}$-vector as $I$. We point out that if $I$ is an arbitrary ${\bf t}$-spread ideal of $S$, then such a ${\bf t}$-spread lex ideal does not always exist \cite[Remark 2]{CAC}. Nevertheless, there can exist a ${\bf t}$-spread ideal $I$ of $S$ which is not ${\bf t}$-spread strongly stable but for which there exists a ${\bf t}$-spread lex ideal with the same $f_{\bf t}$-vector as $I$ (see, for instance, \cite[Remark 4.10]{ACF} and \cite[Remark 2]{CAC}). \begin{Corollary}\label{Cor:SubstituteTLex} Let $I\subset S$ be a ${\bf t}$-spread strongly stable ideal. Then there exists a unique ${\bf t}$-spread lex ideal $I^{{\bf t},\lex}\subset S$ such that $f_{\bf t}(I)=f_{\bf t}(I^{{\bf t},\lex})$. \end{Corollary} \begin{proof} We construct a ${\bf t}$-spread lex ideal $J$ satisfying $f_{\bf t}(J)=f_{\bf t}(I)$ as follows. For all $0\leq \ell\leq d$, let $L_\ell$ be the unique {\bf t}-spread lex set of $M_{n,\ell,{\bf t}}$ with $|L_\ell|=|[I_\ell]_{\bf t}|$. For $\ell>d$, we set $L_{\ell}=\emptyset$. For all $\ell\ge0$, we denote by $J_\ell$ the $K$-vector space spanned by the monomials in the set $$ L_\ell\cup\Shad_{\bf 0}(B_{\ell-1}), $$ where $B_{-1}=\emptyset$ and for $\ell\ge1$, $B_{\ell-1}$ is the set of monomials in $J_{\ell-1}$. Then, we set $$ J=\bigoplus_{\ell\ge0}J_{\ell}. $$ By abuse of notation, we denote by $[J_{\ell}]$ the set of monomials spanning $J_{\ell}$. We claim that $J$ satisfies our statement. Firstly, we must show that $J$ is a ${\bf t}$-spread lex ideal. For this purpose, it is enough to show that $$ \Shad_{\bf 0}([J_{\ell-1}])\subseteq[J_\ell], $$ for all $\ell\ge1$. But $B_{\ell-1}=[J_{\ell-1}]$, and so $$ \Shad_{\bf 0}([J_{\ell-1}])=\Shad_{\bf 0}(B_{\ell-1})\subseteq L_{\ell}\cup\Shad_{\bf 0}(B_{\ell-1})=[J_\ell].
$$ It remains to prove that $f_{\bf t}(J)=f_{\bf t}(I)$, \emph{i.e.}, $|[J_\ell]_{\bf t}|=|[I_{\ell}]_{\bf t}|$ for $0\leq \ell\leq d$. Let $\delta=\indeg(I)= \indeg(J)$. Then $\delta\le d$ and $|[J_\ell]_{\bf t}|=|[I_{\ell}]_{\bf t}|=0$ for all $0\le\ell<\delta$. Now, let $\ell\ge\delta$. Since $|L_{\ell}|=|[I_\ell]_{\bf t}|$, we have that $$ |[J_\ell]_{\bf t}|=|L_\ell\cup\Shad_{\bf t}(B_{\ell-1})|=|[I_\ell]_{\bf t}| $$ if and only if $\Shad_{\bf t}(B_{\ell-1})\subseteq L_\ell$. We proceed by finite induction on $\delta\le\ell\le d$. For the base case $\ell=\delta$, just note that $B_{\delta-1}=\emptyset$. Now, let $\ell>\delta$; then $\Shad_{\bf t}(B_{\ell-2})\subseteq L_{\ell-1}$ by the inductive hypothesis. Hence, \begin{align*} \Shad_{\bf t}(B_{\ell-1})\ &=\ \Shad_{\bf t}(L_{\ell-1}\cup\Shad_{\bf 0}(B_{\ell-2}))\\ &=\ \Shad_{\bf t}(L_{\ell-1}\cup\Shad_{\bf t}(B_{\ell-2}))\\ &=\ \Shad_{\bf t}(L_{\ell-1}). \end{align*} Indeed, $$ \Shad_{\bf t}(\Shad_{\bf 0}(B_{\ell-2}))=\Shad_{\bf t}(\Shad_{\bf t}(B_{\ell-2})). $$ It is clear that the second set is included in the first one. For the other inclusion, let $u\in\Shad_{\bf t}(\Shad_{\bf 0}(B_{\ell-2}))$. Then $u=vx_ix_j$ with $\max(v)\le i\le j$ and $\deg(v)=\ell-2$. Since $u$ is ${\bf t}$-spread and clearly not a generator of $J$, there exists $w\in G(J)$ that properly divides $u$. Note that $\deg(w)\le\ell-2$. By the ${\bf t}$-spread lex property, there also exists $w'\in G(J)$ that divides $v$. Hence, $v\in[J_{\ell-2}]$ and $u\in\Shad_{\bf t}(\Shad_{\bf t}(B_{\ell-2}))$. Thus, it remains to prove that $\Shad_{\bf t}(L_{\ell-1})\subseteq L_{\ell}$. Both sets are {\bf t}-spread lex sets. Therefore, the previous inclusion holds if and only if $|\Shad_{\bf t}(L_{\ell-1})|\le|L_{\ell}|$.
By Lemma \ref{Lemma:m_i}(b) and Theorem \ref{Thm:BayerVectSpread} applied to the sets $L_{\ell-1}$ and $[I_{\ell-1}]_{\bf t}$ satisfying $|L_{\ell-1}|=|[I_{\ell-1}]_{\bf t}|$, we have, \begin{align*} \left\lvert \Shad_{\bf t}(L_{\ell-1}) \right\rvert\ &= \sum_{k=1+\sum_{j=1}^{\ell-2} t_j}^{n-t_{\ell-1}} m_{\leq k}(L_{\ell-1})\le\sum_{k=1+\sum_{j=1}^{\ell-2} t_j}^{n-t_{\ell-1}} m_{\leq k}([I_{\ell-1}]_{\bf t})\\ &=\ |\Shad_{\bf t}([I_{\ell-1}]_{\bf t})|\le|[I_{\ell}]_{\bf t}|=|L_\ell|. \end{align*} The inductive proof is complete. We denote $J$ by $I^{{\bf t},\lex}$. It is clear that $I^{{\bf t},\lex}$ is the unique ideal meeting the requirements of the statement. \end{proof} \begin{Example}\label{Ex:Itlex} \rm Let ${\bf t}=(1,0,2)$ and $n=6$. Consider the following ${\bf t}$-spread strongly stable ideal of $S=K[x_1,\dots,x_6]$: $$ I=(x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4^2,x_3x_4^2x_6). $$ Then, \begin{align*} [I_\ell]_{\bf t}\ &=\ \emptyset,\ \text{for}\ \ell=0,1,\\[0.3em] [I_2]_{\bf t}\ &=\ \{x_1x_2,x_1x_3,x_1x_4,x_2x_3\},\\[0.3em] [I_3]_{\bf t}\ &=\ \{x_1x_2^2,x_1x_2x_3,x_1x_2x_4,x_1x_2x_5,x_1x_2x_6,x_1x_3^2,x_1x_3x_4,x_1x_3x_5,x_1x_3x_6,\\ &\phantom{=\ \{.}x_1x_4^2,x_1x_4x_5,x_1x_4x_6,x_2x_3^2,x_2x_3x_4,x_2x_3x_5,x_2x_3x_6,x_2x_4^2\},\\[0.3em] [I_4]_{\bf t}\ &=\ \{x_1x_2^2x_4,x_1x_2^2x_5,x_1x_2^2x_6,x_1x_2x_3x_5,x_1x_2x_3x_6,x_1x_2x_4x_6,x_1x_3^2x_5,x_1x_3^2x_6,\\ &\phantom{=\ \{.}x_1x_3x_4x_6,x_1x_4^2x_6,x_2x_3^2x_5,x_2x_3^2x_6,x_2x_3x_4x_6,x_2x_4^2x_6,x_3x_4^2x_6\},\\[0.3em] [I_\ell]_{\bf t}\ &=\ \emptyset,\ \text{for all}\ \ell\ge5. \end{align*} Therefore, \begin{align*} f_{\bf t}(I)\ &=\ (f_{{\bf t},-1}(I),f_{{\bf t},0}(I),f_{{\bf t},1}(I),f_{{\bf t},2}(I),f_{{\bf t},3}(I))\\ &=\ (1,6,11,18,0). \end{align*} Note that the value of $f_{{\bf t},3}(I)$ depends on the fact that $[I_4]_{\bf t}=M_{6,4,{\bf t}}$. Moreover, $L_\ell=\emptyset$ for $\ell=0,1$ and for $\ell\ge5$. 
Whereas, for $\ell=2,3,4$, we have \begin{align*} L_2\ &=\ \{x_1x_2,x_1x_3,x_1x_4,x_1x_5\},\\[0.3em] L_3\ &=\ \{x_1x_2^2,x_1x_2x_3,x_1x_2x_4,x_1x_2x_5,x_1x_2x_6,x_1x_3^2,x_1x_3x_4,x_1x_3x_5,x_1x_3x_6,\\ &\phantom{=\ \{.}x_1x_4^2,x_1x_4x_5,x_1x_4x_6,x_1x_5^2,x_1x_5x_6,x_1x_6^2,x_2x_3^2,x_2x_3x_4\},\\[0.3em] L_4\ &=\ \{x_1x_2^2x_4,x_1x_2^2x_5,x_1x_2^2x_6,x_1x_2x_3x_5,x_1x_2x_3x_6,x_1x_2x_4x_6,x_1x_3^2x_5,x_1x_3^2x_6,\\ &\phantom{=\ \{.}x_1x_3x_4x_6,x_1x_4^2x_6,x_2x_3^2x_5,x_2x_3^2x_6,x_2x_3x_4x_6,x_2x_4^2x_6,x_3x_4^2x_6\}. \end{align*} Hence, $$ I^{{\bf t},\lex}=(x_1x_2,x_1x_3,x_1x_4,x_1x_5,x_1x_6^2,x_2x_3^2,x_2x_3x_4,x_2x_4^2x_6,x_3x_4^2x_6). $$ \end{Example} \section{The vector-spread Macaulay theorem}\label{sec3} The purpose of this section is to give a classification of all possible $f_{\bf t}$-vectors of a ${\bf t}$-spread strongly stable ideal. We follow the steps of the classical Macaulay theorem, see \cite[Theorem 6.3.8]{JT}. We quote the following result from \cite[Lemma 6.3.4]{JT}. \begin{Lemma}\label{Lemma:BinExp} Let $\ell$ be a positive integer. Then each positive integer $a$ has a unique expansion $$ a=\sum_{j=p}^{\ell}\binom{a_j}{j}, $$ with $a_\ell>a_{\ell-1}>\dots>a_{p}\ge p\ge1$. \end{Lemma} The previous expansion is called the \textit{binomial or Macaulay expansion of $a$ with respect to $\ell$}. \begin{Definition}\label{Def:vector-spreadOp} \rm Let $n$ be a positive integer and ${\bf t}=(t_1,\dots,t_{d-1})\in{\NZQ Z}_{\ge0}^{d-1}$, $d\ge2$, such that $n>\sum_{j=1}^{d-1}t_j$. For all $1\le\ell<d$, we define a \textit{{\bf t}-spread operator} as follows: for any positive integer $a\le|M_{n,\ell,{\bf t}}|$, let $a=\sum_{j=p}^{\ell}\binom{a_j}{j}$ be the binomial expansion of $a$ with respect to $\ell$. We define $$ a^{(n,\ell,{\bf t})}=\sum_{j=p+1}^{\ell+1}\binom{a_{j-1}+1-t_\ell}{j}. $$ \end{Definition} Let $u\in M_{n,\ell,{\bf t}}$.
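The ${\bf t}$-spread operator of Definition \ref{Def:vector-spreadOp} is easy to evaluate in practice: one computes the Macaulay expansion greedily and then shifts each binomial coefficient. The following Python sketch (the function names are ours and not part of the paper) implements this; it reproduces the value $2023^{(31,3,(0,1,3,1))}=7296$ computed in the example later in this section.

```python
from math import comb

def macaulay_expansion(a, l):
    """Greedy binomial (Macaulay) expansion of a with respect to l:
    a = C(a_l, l) + ... + C(a_p, p) with a_l > ... > a_p >= p >= 1.
    Returns the list [a_l, a_{l-1}, ..., a_p]."""
    coeffs, j = [], l
    while a > 0:
        x = j
        # largest x with C(x, j) <= a
        while comb(x + 1, j) <= a:
            x += 1
        coeffs.append(x)
        a -= comb(x, j)
        j -= 1
    return coeffs

def t_spread_operator(a, l, t):
    """a^{(n, l, t)} = sum_{j=p+1}^{l+1} C(a_{j-1} + 1 - t_l, j),
    where t is 1-indexed as in the paper, so t_l = t[l - 1]."""
    t_l = t[l - 1]
    total, j = 0, l
    for a_j in macaulay_expansion(a, l):  # a_j for j = l, l-1, ..., p
        # the term indexed j+1 in the definition uses a_j, shifted by 1 - t_l
        total += comb(max(a_j + 1 - t_l, 0), j + 1)
        j -= 1
    return total

print(macaulay_expansion(2023, 3))               # [23, 22, 21]
print(t_spread_operator(2023, 3, (0, 1, 3, 1)))  # 7296
```

For ${\bf t}={\bf 0}$ the shift disappears and the sketch reduces to the classical Macaulay operator $a^{(\ell)}$.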
We define the \textit{initial ${\bf t}$-spread lexsegment set determined by $u$} to be the set \begin{align*} \mathcal{L}_{\bf t}^i(u)\ &=\ \{v\in M_{n,\ell,{\bf t}}:v\ge_{\lex}u\}. \end{align*} Note that any ${\bf t}$-spread lex set $L\subset M_{n,\ell,{\bf t}}$ is an initial ${\bf t}$-spread lexsegment set. Definition \ref{Def:vector-spreadOp} is justified by the next result. \begin{Theorem}\label{Thm:M{n,l,t}VectorOp} Let $u\in M_{n,\ell,{\bf t}}$ with $\ell<d$ and $a=|M_{n,\ell,{\bf t}}\setminus\mathcal{L}_{\bf t}^i(u)|$. Then, $$ \big|M_{n,\ell+1,{\bf t}}\setminus\Shad_{\bf t}(\mathcal{L}_{\bf t}^i(u))\big|\ =\ a^{(n,\ell,{\bf t})}. $$ \end{Theorem} In order to prove the theorem, we need some preliminary lemmata. Hereafter, suppose we can write a positive integer $a$ as \begin{equation}\label{eq:fakeBinExp} a=\binom{a_\ell}{\ell}+\dots+\binom{a_p}{p}+\dots+\binom{a_1}{1}, \end{equation} where $a_\ell>a_{\ell-1}>\dots>a_p\ge p$ and $a_j<j$ for $j=1,\dots,p-1$. Then, by Lemma \ref{Lemma:BinExp}, $a=\sum_{j=p}^{\ell}\binom{a_j}{j}$ is the (unique) binomial expansion of $a$ with respect to $\ell$. However, for our convenience, we refer to \eqref{eq:fakeBinExp} also as a binomial expansion of $a$. Given $\emptyset\neq A \subseteq [n]$, we set \[ M_{A,\ell,{\bf t}} = M_{n,\ell,{\bf t}} \cap K[x_a : a\in A]. \] Moreover, if ${\bf t}=(t_1,\dots,t_{d-1})\in{\NZQ Z}_{\ge0}^{d-1}$, we set ${\bf t}_{\ge k}=(t_k,\dots,t_{d-1})$. \begin{Lemma}\label{Lemma:utilda} Let $u=x_{i_1}\cdots x_{i_{\ell}}\in M_{n,\ell,{\bf t}}$. Then \begin{equation}\label{Lemma:M-Lu} M_{n,\ell,{\bf t}} \setminus \mathcal{L}_{\bf t}^i(u) = \Union_{k=1}^{\ell} x_{i_1}\cdots x_{i_{k-1}} M_{[i_{k}+1,n],\ell-(k-1),\, {\bf t}_{\ge k}}.
\end{equation} This union is disjoint, and the binomial expansion of $\left\lvert M_{n,\ell,{\bf t}} \setminus \mathcal{L}_{\bf t}^i(u)\right\rvert$ is \begin{equation}\label{Lemma:|M-Lu|} \left\lvert M_{n,\ell,{\bf t}} \setminus \mathcal{L}_{\bf t}^i(u)\right\rvert = \sum_{j=1}^{\ell} \binom{a_j}{j}, \end{equation} where $a_j=n-i_{\ell-(j-1)}+j-1-\sum_{h=\ell-(j-1)}^{\ell-1}t_h$, for all $j\in[\ell]$. \end{Lemma} \begin{proof} Since $\geq_{\lex}$ is a total order, we have $M_{n,\ell,{\bf t}} \setminus \mathcal{L}_{\bf t}^i(u) = \left\{v\in M_{n,\ell,{\bf t}}: v<_{\lex}u\right\}$.\linebreak Let $v=x_{j_1}\cdots x_{j_\ell} \in M_{n,\ell,{\bf t}}$, with $v<_{\lex} u$. Then $i_1=j_1,\ldots,i_{k-1}=j_{k-1}$ and $i_k<j_k$, for some $k\in [\ell]$. Hence $v=x_{i_1}\cdots x_{i_{k-1}} w$, where $w\in M_{[i_{k}+1,n],\ell-(k-1), {\bf t}_{\ge k}}$ and \eqref{Lemma:M-Lu} follows. To prove \eqref{Lemma:|M-Lu|} one can apply \eqref{Formula:|Mn,l,t|}, observing that the union in \eqref{Lemma:M-Lu} is disjoint and that $\lvert M_{[i_{k}+1,n],\ell-(k-1), {\bf t}_{\ge k}} \rvert = \lvert M_{n-i_k,\ell-(k-1), {\bf t}_{\ge k}} \rvert$. It remains to prove that \eqref{Lemma:|M-Lu|} is the binomial expansion of $\left\lvert M_{n,\ell,{\bf t}} \setminus \mathcal{L}_{\bf t}^i(u)\right\rvert$. Let $p=\min\{j:a_j\ge j\}$. By Lemma \ref{Lemma:BinExp}, it is enough to show the following facts: \begin{enumerate} \item[(i)] $a_\ell>a_{\ell-1}>\dots>a_p\ge p$, and \item[(ii)] $a_j<j$, for $j=1,\dots,p-1$. \end{enumerate} Statement (ii) follows from the definition of $p$. For the proof of (i), let $\ell>j\ge p$. Then, we have $$ a_{j+1}-a_{j}=i_{\ell-(j-1)}-i_{\ell-j}+1-t_{\ell-j}\ge t_{\ell-j}+1-t_{\ell - j}=1, $$ since $i_{\ell-(j-1)}-i_{\ell-j}\ge t_{\ell-j}$. Thus, $$ a_{j+1}\ge a_j+1, $$ and so $a_\ell>a_{\ell-1}>\dots>a_p\ge p$, as desired. \end{proof} Let $L\subseteq M_{n,\ell,{\bf t}}$ be a ${\bf t}$-spread lex set, with $\ell<d$. 
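As a sanity check, the counting formula of Lemma \ref{Lemma:utilda} can be compared with a direct enumeration of $M_{n,\ell,{\bf t}}$. The following Python sketch (ours; the function names are not part of the paper) does this for $M_{6,4,(1,0,2)}$, the set appearing in Example \ref{Ex:Itlex}. Under the lex order with $x_1>\dots>x_n$, a monomial is lex-smaller than another exactly when its ascending index tuple is larger at the first difference, so tuple comparison suffices.

```python
from math import comb

def t_spread_tuples(n, l, t):
    """Index tuples (i_1, ..., i_l) with 1 <= i_1, i_{j+1} - i_j >= t_j and
    i_l <= n, i.e. the t-spread monomials x_{i_1} ... x_{i_l} of M_{n,l,t}."""
    def rec(prefix, j):
        if j == l:
            yield tuple(prefix)
            return
        lo = 1 if j == 0 else prefix[-1] + t[j - 1]
        for i in range(lo, n + 1):
            yield from rec(prefix + [i], j + 1)
    yield from rec([], 0)

def count_lex_smaller(u, n, t):
    """Number of monomials of M_{n,l,t} lex-smaller than u, via the lemma:
    a_j = n - i_{l-(j-1)} + j - 1 - sum of the t_h involved."""
    l = len(u)
    total = 0
    for j in range(1, l + 1):
        a_j = n - u[l - j] + j - 1 - sum(t[h - 1] for h in range(l - j + 1, l))
        total += comb(max(a_j, 0), j)
    return total

n, t = 6, (1, 0, 2)
monos = sorted(t_spread_tuples(n, 4, t))  # ascending tuples = descending lex
print(len(monos))  # 15, matching |M_{6,4,(1,0,2)}|
# the formula agrees with direct enumeration for every u
assert all(count_lex_smaller(u, n, t) == sum(v > u for v in monos) for u in monos)
```

For instance, the count $1$ returned for $u=x_2x_4^2x_6$ reflects that only $x_3x_4^2x_6$ is lex-smaller than $u$ in $M_{6,4,(1,0,2)}$.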
By Proposition \ref{Prop:Shadt(L)}(b), $\Shad_{\bf t}(L)\subseteq M_{n,\ell+1,{\bf t}}$ is again a ${\bf t}$-spread lex set. Let $$ u=\min_{>_{\lex}}L=x_{i_1}x_{i_2}\cdots x_{i_\ell}. $$ Then $L=\mathcal{L}_{\bf t}^i(u)$. Hence, if we set $\widetilde{L}=\Shad_{\bf t}(L)$ and $\widetilde{u}=\min_{>_{\lex}}\Shad_{\bf t}(L)$, then $\widetilde{L}= \mathcal{L}_{\bf t}^i(\widetilde{u})$. Therefore, to determine the ${\bf t}$-spread shadow $\widetilde{L}$ of $L$ it is enough to determine the monomial $\widetilde{u}$. This is accomplished in the next lemma. \begin{Lemma}\label{Lemma:utildavect} With the notation and assumptions as above, we have \begin{equation}\label{eq:utilda} \widetilde{u}=\big(\prod_{h=1}^{\ell-r}x_{i_h}\big)\big(\prod_{h=1}^{r+1}x_{n-\sum_{p=\ell-(r-h)}^{\ell}t_p}\big), \end{equation} where we set $i_0=t_0=0$ and \begin{equation}\label{eq:rwidetilde(u)} r=\min\Big\{s\in [0,\ell] : n-i_\ell+\sum_{h=1}^{s}(i_{\ell-(h-1)}-i_{\ell-h}-t_{\ell-h})\ge t_\ell \Big\}. \end{equation} \end{Lemma} \begin{proof} Let us prove that $\widetilde{u}$ belongs to $\widetilde{L}$. To this end, it is enough to show that $v=\widetilde{u}/x_n\in L$. Note that $$ n-i_\ell+\sum_{h=1}^{s}(i_{\ell-(h-1)}-i_{\ell-h}-t_{\ell-h})=n-i_{\ell-s}-\sum_{h=1}^s t_{\ell-h}, $$ for all $s\in[0,\ell]$. Thus, $r=\min\big\{s\in [0,\ell] : n-i_{\ell-s}-\sum_{h=1}^s t_{\ell-h}\ge t_\ell \big\}$. Hence, $$ \big(n-\sum_{h=1}^r t_{\ell-h}\big)-i_{\ell-r}\ge t_\ell. $$ By definition of $>_{\lex}$ we have $v\ge_{\lex}u$. Since $v$ is ${\bf t}$-spread, it follows that $v\in L$. To prove that $\widetilde{u}=\min_{>_{\lex}}\widetilde{L}$, suppose by contradiction that there exists $w\in\widetilde{L}$ such that $w<_{\lex}\widetilde{u}$. Write $w=x_{j_1}\cdots x_{j_{\ell+1}}$, $\widetilde{u}=x_{k_1}\cdots x_{k_{\ell+1}}$. Then, $j_1=k_1,\dots,j_{q-1}=k_{q-1}$ and $j_q>k_q$, for some $q\in[\ell+1]$. If $q\ge \ell-r+1$, then $j_q>k_q=n-\sum_{p=q}^{\ell}t_p$.
This is absurd, because all monomials $x_{s_1}\cdots x_{s_{\ell+1}}\in M_{n,\ell+1,{\bf t}}$ satisfy the inequalities $s_{q}\le n-\sum_{h=q}^{\ell}t_h$, $q\in[\ell+1]$. If $1\le q\le \ell-r$, then $j_q>k_q=i_q$. By Lemma \ref{Lemma:ShadVectSS}, $w/x_{j_{\ell+1}}=w'\in L$. Hence $\min_{>_{\lex}} L=u>_{\lex}w'$, a contradiction. Finally, $\widetilde{u}=\min_{>_{\lex}}\widetilde{L}$. \end{proof} The next example illustrates the previous lemma. \begin{Example} \rm Let ${\bf t}=(2,1,2)$, $S=K[x_1,\dots,x_8]$, $L=\mathcal{L}_{\bf t}^i(u)$ for some $u\in M_{8,3,{\bf t}}$. Set $\Shad_{\bf t}(L)=\widetilde{L}$ and $\widetilde{u}=\min_{>_{\lex}}\widetilde{L}$. Let $r$ be the integer defined in (\ref{eq:rwidetilde(u)}). Let $u=x_2x_4x_6$. Since $n-\max(u)=2=t_3$, we have $r=0$ and $\widetilde{u}=ux_n=x_2x_4x_6x_8$. Let $u=x_2x_6x_7$. Then, $r=2$ and $\widetilde{u}=x_2x_5x_6x_8$. Let $u=x_4x_6x_7$. Then, $r=3$ and $\widetilde{u}=x_3x_5x_6x_8$. In such a case $\Shad_{\bf t}(L)=M_{8,4,{\bf t}}$. \end{Example} \begin{proof}[Proof of Theorem \ref{Thm:M{n,l,t}VectorOp}] As before, let $L=\mathcal{L}_{\bf t}^i(u)$, $\widetilde{L}=\Shad_{\bf t}(L)$ and $\widetilde{u}=\min_{>_{\lex}}\widetilde{L}$. Write $\widetilde{u}=x_{k_1}x_{k_2}\cdots x_{k_{\ell+1}}$, where the indices $k_j$ are determined in (\ref{eq:utilda}) and $$ r=\min\Big\{s\in [0,\ell] : n-i_\ell+\sum_{h=1}^{s}(i_{\ell-(h-1)}-i_{\ell-h}-t_{\ell-h})\ge t_\ell \Big\}. $$ Then, by Lemma \ref{Lemma:utilda}, we have the binomial expansions \begin{align*} |M_{n,\ell,{\bf t}}\setminus L|\ =\ \sum_{j=1}^{\ell}\binom{a_j}{j},\ \ \ \ \ \ \ \ |M_{n,\ell+1,{\bf t}}\setminus\widetilde{L}|\ =\ \sum_{j=1}^{\ell+1}\binom{\widetilde{a}_j}{j}, \end{align*} where \begin{enumerate} \item[(a)] $a_j=n-i_{\ell-(j-1)}+j-1-\sum_{h=\ell-(j-1)}^{\ell-1}t_h$, for all $j\in[\ell]$, and \item[(b)] $\widetilde{a}_j=n-k_{\ell+1-(j-1)}+j-1-\sum_{h=\ell+1-(j-1)}^{\ell}t_h$, for all $j\in[\ell+1]$.
\end{enumerate} It remains to prove that $|M_{n,\ell+1,{\bf t}}\setminus\widetilde{L}|=a^{(n,\ell,{\bf t})}$. Firstly, we establish how the coefficients $a_j$ and $\widetilde{a}_j$ are related. Note that, for $j\in[r+1]$, we have \begin{align*} \widetilde{a}_j&=n-k_{\ell+1-(j-1)}+j-1-\sum_{h=\ell+1-(j-1)}^{\ell}t_h\\ &=n-\Big(n-\sum_{p=\ell+1-(j-1)}^{\ell}t_p\Big)+j-1-\sum_{h=\ell+1-(j-1)}^{\ell}t_h\\ &=j-1. \end{align*} Since $\binom{j-1}{j}=0$, we may write as well $$ |M_{n,\ell+1,{\bf t}}\setminus\widetilde{L}|=\sum_{j=r+2}^{\ell+1}\binom{\widetilde{a}_j}{j}. $$ On the other hand, since $k_{\ell+1-(j-1)}=i_{\ell+1-(j-1)}$ for $j\in[r+2,\ell+1]$, we have $$ \widetilde{a}_j=a_{j-1}+1-t_\ell. $$ Therefore, \begin{align*} \big| M_{n,\ell+1,{\bf t}} \setminus\widetilde{L}\big| &= \sum_{j=r+2}^{\ell+1} \binom{a_{j-1}+1-t_{\ell}}{j}. \end{align*} Let $p=\min\{j:a_j\ge j\}$. The theorem is proved if we show that $$ \big| M_{n,\ell+1,{\bf t}} \setminus\widetilde{L}\big|=\big| M_{n,\ell,{\bf t}} \setminus L\big|^{(n,\ell,{\bf t})}=\sum_{j=p+1}^{\ell+1}\binom{a_{j-1}+1-t_{\ell}}{j}. $$ If $p+1=r+2$ this is clear. Suppose $p+1>r+2$. Then, it is enough to show that $$ \binom{a_{j-1}+1-t_{\ell}}{j}=0, \ \ \ \text{for all}\ r+2\le j\le p. $$ If $j\le p$, then $a_{j-1}<j-1$. Hence $a_{j-1}+1-t_\ell\le a_{j-1}+1<j$ and $\binom{a_{j-1}+1-t_{\ell}}{j}=0$, as desired. Now let $r+2>p+1$. We must prove that $$ \binom{a_{j-1}+1-t_{\ell}}{j}=0, $$ for all $p+1\le j\le r+1$. Set $a_{\ell+1}=n+\ell-\sum_{j=1}^{\ell-1}t_j$. Then $$ r=\min\{s\in[0,\ell]:a_{s+1}-s\ge t_\ell\}. $$ If $j\le r+1$, then $j-2\le r-1$. Hence $a_{(j-2)+1}-(j-2)=a_{j-1}-(j-2)<t_\ell$. It follows that $a_{j-1}+1-t_\ell<a_{j-1}+2-t_\ell<j$ and $\binom{a_{j-1}+1-t_{\ell}}{j}=0$, as desired. \end{proof} \begin{Example} \rm Let $n=31$, ${\bf t}=(0,1,3,1)$, $a=2023$ and $\ell=3$.
Then $$ a=\sum_{j=1}^{\ell}\binom{a_j}{j}=\binom{23}{3}+\binom{22}{2}+\binom{21}{1} $$ is the binomial expansion of $a$ with respect to $\ell$. Therefore, since $r=0$, we have \begin{align*} a^{(n,\ell,{\bf t})}=2023^{(31,3,(0,1,3,1))}\ &=\ \sum_{j=r+2}^{\ell+1}\binom{a_{j-1}+1-t_\ell}{j}\\ &=\ \binom{19}{2}+\binom{20}{3}+\binom{21}{4}=7296. \end{align*} \end{Example} Now, we can state and prove the main result in the article. \begin{Theorem}\label{thm:main} Let $f=(f_{-1},f_0,\dots,f_{d-1})$ be a sequence of non-negative integers. The following conditions are equivalent: \begin{enumerate} \item[\textup{(i)}] there exists a ${\bf t}$-spread strongly stable ideal $I\subset S=K[x_1,\dots,x_n]$ such that $$f_{\bf t}(I)=f;$$ \item[\textup{(ii)}] $f_{-1}=1$ and $f_{\ell+1}\le f_{\ell}^{(n,\ell+1,{\bf t})}$, for all $\ell=-1,\dots,d-2$. \end{enumerate} \end{Theorem} \begin{proof} (i) $\implies$ (ii). Assume that $f_{\bf t}(I)=f$. By Corollary \ref{Cor:SubstituteTLex}, we may replace $I$ by $I^{{\bf t},\lex}$ without changing the $f_{\bf t}$-vector. Thus, we may assume as well that $I$ is a ${\bf t}$-spread lex ideal. Then $f_{-1}=f_{{\bf t},-1}(I)=1$ and for all $-1\leq\ell\leq d-2$ we have $\Shad_{\bf t}([I_{\ell+1}]_{\bf t})\subseteq[I_{\ell+2}]_{\bf t}$. Hence, \begin{align*} f_{\ell+1}=f_{{\bf t},\ell+1}(I)=|M_{n,\ell+2,{\bf t}}|-|[I_{\ell+2}]_{\bf t}|\ &\le\ |M_{n,\ell+2,{\bf t}}|-|\Shad_{\bf t}([I_{\ell+1}]_{\bf t})|\\&=\ |M_{n,\ell+2,{\bf t}}\setminus\Shad_{\bf t}([I_{\ell+1}]_{\bf t})|\\&=\ f_{{\bf t},\ell}(I)^{(n,\ell+1,{\bf t})}=f_\ell^{(n,\ell+1,{\bf t})}, \end{align*} where the last equality follows from Theorem \ref{Thm:M{n,l,t}VectorOp}. Statement (ii) is proved. \noindent(ii) $\implies$ (i). First we prove that $$ f_{\ell}\ \le\ |M_{n,\ell+1,{\bf t}}|,\ \ \ \text{for all}\ \ \ \ell=-1,\dots,d-1. $$ For $\ell=-1$, $f_{-1}=1=|M_{n,0,{\bf t}}|$ because there is only one ${\bf t}$-spread monomial of degree 0, namely $u=1$. Now we proceed by induction. 
Let $\ell\ge0$. By the hypothesis (ii), we have $f_{\ell}\le f_{\ell-1}^{(n,\ell,{\bf t})}$ and, by induction, $f_{\ell-1}\le|M_{n,\ell,{\bf t}}|$. Thus, there exists a unique monomial $u\in M_{n,\ell,{\bf t}}$ such that $|M_{n,\ell,{\bf t}}\setminus\mathcal{L}_{\bf t}^i(u)|=f_{\ell-1}$. By Theorem \ref{Thm:M{n,l,t}VectorOp}, we have $f_{\ell-1}^{(n,\ell,{\bf t})}=|M_{n,\ell+1,{\bf t}}\setminus\Shad_{\bf t}(\mathcal{L}_{\bf t}^i(u))|$. This shows that $f_{\ell-1}^{(n,\ell,{\bf t})}\le|M_{n,\ell+1,{\bf t}}|$ and consequently we have $f_{\ell}\le|M_{n,\ell+1,{\bf t}}|$, as desired. For all $\ell\in [0,d]$, let $L_\ell$ be the unique ${\bf t}$-spread lex set of $M_{n,\ell,{\bf t}}$ such that $|L_{\ell}|=|M_{n,\ell,{\bf t}}|-f_{\ell-1}$. For $\ell>d$ we set $L_{\ell}=\emptyset$. As in Corollary \ref{Cor:SubstituteTLex}, we construct the ideal $I=\bigoplus_{\ell\ge0}I_\ell$ where $I_\ell$ is the $K$-vector space spanned by the set $$ L_\ell\cup\Shad_{\bf 0}(B_{\ell-1}), $$ where $B_{-1}=\emptyset$ and for $\ell \geq 1$, $B_{\ell-1}$ is the set of monomials generating $I_{\ell-1}$. As in Corollary \ref{Cor:SubstituteTLex}, one shows that $I$ is a {\bf t}-spread lex ideal. Hence, it remains to prove that $f_{\bf t}(I)=f$. As in the proof of Corollary \ref{Cor:SubstituteTLex}, this boils down to proving that $\Shad_{\bf t}(L_\ell)\subseteq L_{\ell+1}$, for all $\ell\in [0,d-1]$. Since $f_{\ell}\le f_{\ell-1}^{(n,\ell,{\bf t})}$ we have $$ |M_{n,\ell+1,{\bf t}}\setminus L_{\ell+1}|\le|M_{n,\ell,{\bf t}}\setminus L_\ell|^{(n,\ell,{\bf t})}=|M_{n,\ell+1,{\bf t}}\setminus\Shad_{\bf t}(L_{\ell})|, $$ where the last equality follows from Theorem \ref{Thm:M{n,l,t}VectorOp}. Thus $|\Shad_{\bf t}(L_\ell)|\le|L_{\ell+1}|$. Hence $\Shad_{\bf t}(L_\ell)\subseteq L_{\ell+1}$, because both are ${\bf t}$-spread lex sets. The proof is complete. \end{proof} \begin{Example} \rm Let ${\bf t}=(1,0,2)$, $d=4$ and $n=6$. 
Consider the following vector \begin{align*} f\ &=\ (f_{-1},f_{0},f_{1},f_{2},f_{3})\ =\ (1,6,11,18,0). \end{align*} Then $f_{-1}=1$ and $f_{\ell+1}\le f_{\ell}^{(6,\ell+1,{\bf t})}$, for all $\ell=-1,\dots,2$. Therefore, by Theorem \ref{thm:main} there exists a ${\bf t}$-spread strongly stable ideal of $S=K[x_1,\dots,x_6]$ whose $f_{\bf t}$-vector is $f$. The ideal $I$ of Example \ref{Ex:Itlex} is such an ideal. \end{Example} \section{An application} \label{sec4} In this final section, as an application, we recover the vector-spread version of the well-known result proved independently by Bigatti \cite{BAM} and Hulett \cite{HH} (see also \cite{HH2, KP}). More precisely, we prove that in the class of all ${\bf t}$-spread strongly stable ideals with a given $f_{\bf t}$-vector, the ${\bf t}$-spread lex ideals have the largest graded Betti numbers. \begin{Theorem}\label{thm:upperbound} Let $I\subset S=K[x_1,\dots,x_n]$ be a ${\bf t}$-spread strongly stable ideal. Then, $$ \beta_{i,j}(I)\ \le\ \beta_{i,j}(I^{{\bf t},\lex}), \ \ \ \text{for all}\ i\ \text{and}\ j. $$ \end{Theorem} \begin{proof} By \cite[Corollary 5.2]{F1}, we have \begin{equation}\label{eneherzogqureshiformulabetti} \beta_{i,i+j}(I)\ =\ \sum_{u\in G(I)_j}\binom{\max(u)-1-\sum_{h=1}^{j-1}t_h}{i}. \end{equation} We are going to write (\ref{eneherzogqureshiformulabetti}) in a more suitable way. We observe that $I$ is a {\bf t}-spread ideal and thus $$ G(I)_j\ =\ [I_j]_{\bf t}\setminus \Shad_{\bf t}([I_{j-1}]_{\bf t}).
$$ Hence, we can write the Betti number in (\ref{eneherzogqureshiformulabetti}) as a difference $A-B$, where \begin{align*} A\ =&\ \sum_{u\in [I_j]_{\bf t}} \binom{\max(u)-1-\sum_{h=1}^{j-1}t_h}{i}=\sum_{k=1}^n m_k([I_j]_{\bf t})\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}\\ =&\ \sum_{k=1}^n\Big(m_{\le k}([I_j]_{\bf t})-m_{\le k-1}([I_j]_{\bf t})\Big)\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}\\ =&\ \sum_{k=1}^nm_{\le k}([I_j]_{\bf t})\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}-\sum_{k=1}^{n-1}m_{\le k}([I_j]_{\bf t})\binom{k-\sum_{h=1}^{j-1}t_h}{i} \end{align*} and \begin{align*} B\ =&\ \sum_{u\in \Shad_{\bf t}([I_{j-1}]_{\bf t})}\binom{\max(u)-1-\sum_{h=1}^{j-1}t_h}{i}\\=&\ \sum_{k=1+\sum_{h=1}^{j-1}t_h}^n\!\!\!\!m_k(\Shad_{\bf t}([I_{j-1}]_{\bf t}))\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}\\=&\ \sum_{k=1+\sum_{h=1}^{j-1}t_h}^n m_{\le k-t_{j-1}}([I_{j-1}]_{\bf t})\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}, \end{align*} where the last equality follows from Lemma \ref{Lemma:m_i}(a). Furthermore, we can write $A=A_1-A_2$ with \begin{align*} A_1\ =&\ m_{\le n}([I_j]_{\bf t})\binom{n-1-\sum_{h=1}^{j-1}t_h}{i},\\[0.8em] A_2\ =&\ \sum_{k=1}^{n-1}m_{\le k}([I_j]_{\bf t})\bigg[\binom{k-\sum_{h=1}^{j-1}t_h}{i}-\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}\bigg]\\ =&\ \sum_{k=1+\sum_{h=1}^{j-1}t_h}^{n-1}m_{\le k}([I_j]_{\bf t})\binom{k-1-\sum_{h=1}^{j-1}t_h}{i-1}. \end{align*} Therefore, we obtain \begin{equation}\label{presentationbettinumbers} \begin{aligned} \beta_{i,i+j}(I)\ &=\ m_{\le n}([I_j]_{\bf t})\binom{n-1-\sum_{h=1}^{j-1}t_h}{i}\\ &-\sum_{k=1+\sum_{h=1}^{j-1}t_h}^{n-1}m_{\le k}([I_j]_{\bf t})\binom{k-1-\sum_{h=1}^{j-1}t_h}{i-1}\\ &-\sum_{k=1+\sum_{h=1}^{j-1}t_h}^n m_{\le k-t_{j-1}}([I_{j-1}]_{\bf t})\binom{k-1-\sum_{h=1}^{j-1}t_h}{i}. \end{aligned} \end{equation} Now, we compute the graded Betti numbers $\beta_{i,i+j}(I^{{\bf t},\lex})$. Recall that $I$ and $I^{{\bf t},\lex}$ share the same $f_{\bf t}$-vector. Therefore, $|[I^{{\bf t},\lex}_j]_{\bf t}|=|[I_{j}]_{\bf t}|$, for all $j$.
Applying Theorem \ref{Thm:BayerVectSpread}, we have $m_{\le k}([I^{{\bf t},\lex}_j]_{\bf t})\le m_{\le k}([I_j]_{\bf t})$ for all $k\in[n]$. Moreover, $$ m_{\le n}([I_j]_{\bf t})=|[I_j]_{\bf t}|=|[I^{{\bf t},\lex}_j]_{\bf t}|=m_{\le n}([I^{{\bf t},\lex}_j]_{\bf t}). $$ Therefore, replacing in (\ref{presentationbettinumbers}), for all $k$ and $j$, every occurrence of $m_{\le k}([I_j]_{\bf t})$ with $m_{\le k}([I^{{\bf t},\lex}_j]_{\bf t})$, we get the Betti number $\beta_{i,i+j}(I^{{\bf t},\lex})$. Since the terms $m_{\le n}(\cdot)$ agree and every remaining occurrence of $m_{\le k}(\cdot)$ appears with a negative sign, this replacement by smaller values can only increase the right-hand side. Finally, $\beta_{i,i+j}(I)\le\beta_{i,i+j}(I^{{\bf t},\lex})$, for all $i,j\ge0$. \end{proof} \begin{Remark}\em Note that in the previous result, we allow $K$ to be an arbitrary field. \end{Remark} \begin{Example} \rm Consider again the ${\bf t}$-spread strongly stable ideal $I$ of Example \ref{Ex:Itlex}. Then, the Betti tables of $I$ and $I^{{\bf t},\lex}$ are, respectively, $$ \begin{matrix} & 0 & 1 & 2\\ \hline 2:& 4 & 4 & 1 \\ 3:& 1 & 2 & 1 \\ 4:& 1 & 2 & 1 \end{matrix}\quad\quad\quad \begin{matrix} & 0 & 1 & 2 & 3 & 4\\ \hline 2: & 4 & 6 & 4 & 1 & .\\ 3: & 3 & 7 & 7 & 4 & 1\\ 4: & 2 & 4 & 2 & . & . \end{matrix} $$ From these tables we see that $\beta_{i,i+j}(I)\le\beta_{i,i+j}(I^{{\bf t},\lex})$ for all $i$ and $j$. \end{Example} \end{document}
\begin{document} \title{Evaluation of a Flow-Based Hypergraph Bipartitioning Algorithm} \begin{abstract} In this paper, we propose HyperFlowCutter, an algorithm for balanced hypergraph bipartitioning. It is based on minimum $S$-$T$ hyperedge cuts and maximum flows. It computes a sequence of bipartitions that optimize cut size and balance in the Pareto sense, being able to trade one for the other. HyperFlowCutter builds on the FlowCutter algorithm for partitioning graphs. We propose additional features, such as handling disconnected hypergraphs, novel methods for obtaining starting $S,T$ pairs as well as an approach to refine a given partition with HyperFlowCutter. Our main contribution is ReBaHFC, a new algorithm which obtains an initial partition with the fast multilevel hypergraph partitioner PaToH and then improves it using HyperFlowCutter as a refinement algorithm. ReBaHFC is able to significantly improve the solution quality of PaToH at little additional running time. The solution quality is only marginally worse than that of the best-performing hypergraph partitioners KaHyPar and hMETIS, while being one order of magnitude faster. Thus ReBaHFC offers a new time-quality trade-off in the current spectrum of hypergraph partitioners. For the special case of perfectly balanced bipartitioning, only the much slower plain HyperFlowCutter yields slightly better solutions than ReBaHFC, while only PaToH is faster than ReBaHFC. \end{abstract} \section{Introduction}\label{sec:introduction} Given a hypergraph $H=(V,E)$, a hyperedge cut $C \subset E$ is a set of hyperedges whose removal disconnects $H$. The \emph{balanced hypergraph bipartitioning problem} is to find a hyperedge cut of minimum cardinality whose removal separates $H$ into two blocks of roughly equal size -- up to $(1+\varepsilon)\frac{|V|}{2}$.
Hypergraph partitioning has applications in VLSI design, database sharding, and high performance computing, in particular load balancing and reducing communication for highly parallel computations as well as accelerating sparse matrix vector multiplications. This problem is NP-hard~\cite{l-caicl-90} and it is hard to approximate, even for graphs~\cite{bj-fnp-92}. Therefore, practical algorithms use heuristics. Most of them are based on the \emph{multilevel} framework~\cite{hl-amapg-95}. In this paper we consider a different approach based on the max-flow min-cut duality. The basic idea has already been used in the Flow-Balanced-Bipartition algorithm (FBB) of Yang and Wong~\cite{yw-enfbm-96}. So far it has received little further attention, since it is too slow compared to multilevel algorithms and cannot solve current instances in feasible time. More recently, FlowCutter~\cite{hs-gbpo-18} (FC) for graph bipartitioning has been introduced independently of FBB\@. It is designed for computing very small node separators in road networks with a rather loose balance constraint; $\varepsilon=0.33$ is recommended for the application of accelerating shortest path computations~\cite{dsw-cch-15}. Based on ideas similar to those of FBB but equipped with more engineering, it computes both unbalanced and highly balanced bipartitions of high quality on the Walshaw graph partitioning benchmark~\cite{swc-tgpa-04}. \subparagraph{Contribution.} We present HyperFlowCutter, an algorithm which computes a series of hypergraph bipartitions with increasing balance, up to perfectly balanced solutions. With HyperFlowCutter, we extend FlowCutter to hypergraphs and contribute additional features. We provide a method to handle disconnected hypergraphs, which FlowCutter and FBB cannot handle. Our main contribution is ReBaHFC, an algorithm to refine a given partition using HyperFlowCutter.
It is a natural extension of the max-flow based refinement of the k-way hypergraph partitioner KaHyPar~\cite{hss-nfbrm-18}. We provide a thoroughly engineered implementation as well as an extensive experimental evaluation on the benchmark set of Heuer and Schlag~\cite{hs-icshp-17}, comparing HyperFlowCutter and ReBaHFC against the state-of-the-art hypergraph partitioning tools KaHyPar~\cite{hs-icshp-17,hss-nfbrm-18}, PaToH~\cite{patoh} and hMETIS~\cite{kaks-mhpav-99,kk-mkwhp-99}. In our experiments we use the fast algorithm PaToH to obtain initial partitions for ReBaHFC. When using the quality preset of PaToH, ReBaHFC computes solutions for $\varepsilon = 0.03$, which are only slightly worse than those of the best-performing partitioners KaHyPar and hMETIS and significantly better than those of PaToH. ReBaHFC is only marginally slower than PaToH and thus, like PaToH, it is one order of magnitude faster than KaHyPar and hMETIS, when using the quality preset, and two orders of magnitude faster, when using the default preset. Furthermore, ReBaHFC with the PaToH default preset computes better solutions than PaToH with its quality preset. Thus ReBaHFC offers new time-quality trade-offs. For the special case of perfectly balanced bipartitioning, only the much slower plain HyperFlowCutter yields marginally better solutions than ReBaHFC, while only PaToH is faster than ReBaHFC. \subparagraph{Outline.} After discussing related work in Section~\ref{sec:related} and presenting notation and preliminaries in Section~\ref{sec:preliminaries}, we introduce the core algorithm of HyperFlowCutter for $S$-$T$ hyperedge cuts in Section~\ref{sec:core}. Then we show how to handle disconnected hypergraphs in Section~\ref{sec:disconnectedhypergraphs}, propose our refinement algorithm ReBaHFC in Section~\ref{sec:refinement} and finally discuss the experimental evaluation in Section~\ref{sec:experiments}. 
\section{Related Work}\label{sec:related} For an overview of the field of hypergraph partitioning we refer to survey articles~\cite{bmsw-gpgcd-13,pm-hpc-07,ak-rdnps-95}. The most common approach among hypergraph partitioning tools is the multilevel framework. Multilevel algorithms repeatedly \emph{contract} vertices to obtain a hierarchy of \emph{coarser} hypergraphs while trying to preserve the cut structure. On the coarsest hypergraph an \emph{initial partition} is computed in some way. Then the contractions are reversed step-by-step and after every uncontraction a \emph{refinement} algorithm tries to improve the solution. Most multilevel algorithms use a variant of the Fiduccia-Mattheyses (FM)~\cite{fm-a-82} or Kernighan-Lin (KL)~\cite{kl-efppg-70} local vertex moving heuristics. These algorithms move vertices between blocks, prioritized by cut improvement. The multilevel framework has been immensely successful because it provides a global view on the problem through local operations on the coarse levels. Furthermore, it allows a great deal of engineering and tuning, which have a drastic impact on solution quality. Even though this framework has been used since the 1990s, the implementations are still improving today. Well-known multilevel hypergraph partitioners include PaToH~\cite{patoh} (scientific computing), hMETIS~\cite{kaks-mhpav-99,kk-mkwhp-99} (VLSI design), KaHyPar~\cite{hs-icshp-17,hss-nfbrm-18} (general purpose, n-level), Zoltan~\cite{dbhbc-phpsc-06} (scientific computing, parallel), Zoltan-AlgD~\cite{scs-rbcmh-19} (algebraic distances based coarsening, sequential), Mondriaan~\cite{vb-atddd-05} (sparse matrices), MLPart~\cite{ahk-mcp-98} (circuit partitioning) and Par$k$way~\cite{tk-pmahp-08} (parallel). Compared to graph partitioning, the performance of local vertex moving suffers from the presence of large hyperedges with vertices scattered over multiple blocks, since many moves have zero cut improvement.
On coarse levels of the multilevel hierarchy, this problem is alleviated since hyperedges contain fewer vertices. A second remedy is flow-based refinement algorithms. For graphs, Sanders and Schulz~\cite{ss-emgpa-11} extract a size-constrained corridor around the cut and compute a minimum cut within this corridor. If the cut is balanced, an improved solution has been found; otherwise the step is repeated with a smaller corridor. Heuer~et al.\xspace~\cite{hss-nfbrm-18} extend this approach to hypergraphs by using \emph{Lawler networks}~\cite{l-cph-73}. The Lawler network of a hypergraph is a flow network such that minimum $S$-$T$ hyperedge cuts can be computed via max-flow. In their Flow-Balanced-Bipartition algorithm (FBB), Yang and Wong~\cite{yw-enfbm-96} use incremental maximum flows on the Lawler network to compute $\varepsilon$-balanced hypergraph bipartitions. Liu and Wong~\cite{hw-nfbmp-98} enhance FBB with a \emph{most-desirable-minimum-cut} heuristic, which is inspired by the correspondence between $S$-$T$ minimum cuts and closed node sets due to Picard and Queyranne~\cite{pq-osamc-82}. It is similar to the \emph{most-balanced-minimum-cut} heuristics used in the multilevel graph partitioning tool KaHiP~\cite{ss-emgpa-11} and KaHyPar-MF~\cite{hss-nfbrm-18} as well as the \emph{avoid-augmenting-paths} piercing heuristics of FlowCutter~\cite{hs-gbpo-18} and HyperFlowCutter. Li~et al.\xspace~\cite{llc-lvlsi-95} propose a push-relabel algorithm, which operates directly on the hypergraph. Furthermore, they present heuristics rooted in VLSI design for choosing sets of initial seed vertices $S$ and $T$ as well as piercing vertices. The performance of their approach in contexts other than VLSI design remains unclear. For perfectly balanced graph partitioning, diffusion-based methods have been successful~\cite{mms-a-09}.
Furthermore Sanders and Schulz~\cite{ss-tlagh-13} propose an algorithm based on detecting negative cycles, which is used on top of their evolutionary partitioner. Delling and Werneck~\cite{dw-bbgb-12} provide an efficient implementation of an optimal branch-and-bound algorithm. Additionally there are metaheuristic approaches such as PROBE~\cite{cbm-aprob-07}, as well as multilevel memetic algorithms due to Benlic and Hao~\cite{bh-aemma-10,bh-ammai-11,bh-aemts-11}. \section{Preliminaries}\label{sec:preliminaries} A \emph{hypergraph} $H=(V,E)$ consists of a set of $n$ vertices $V$ and a set of $m$ hyperedges $E$, where a hyperedge $e$ is a subset of the vertices $V$. A vertex $v\in V$ is \emph{incident} to hyperedge $e \in E$ if $v \in e$. The vertices incident to $e$ are called the \emph{pins} of $e$. We denote the incident hyperedges of $v$ by $I(v)$ and its degree by $\deg(v):=|I(v)|$. Furthermore let $p := \sum_{e \in E} |e|$ denote the total number of pins in $H$. All hypergraphs in this paper are unweighted. $H$ can be represented as a bipartite graph $G=(V \cup E, \{ (v,e) \in V \times E \mid v \in e\})$ with bipartite node set $V \cup E$ and an edge for every pin. This is also referred to as the \emph{star expansion} of $H$. $H$ is \emph{connected} if its star expansion is connected. Let $V[E'] := \bigcup_{e' \in E'} e'$ denote the vertex set induced by the hyperedge set $E'$. To avoid confusion, we use the terms vertices, hyperedges and pins for hypergraphs, and we use the terms nodes and edges for graphs. \subsection{Hypergraph Partitioning} A \emph{bipartition} of a hypergraph $H$ is a partition $(A,B)$ of the vertices $V$ into two non-empty, disjoint sets (called blocks). The \emph{cut} ${\cut(A,B) := \{ e \in E \mid e \cap A \neq \emptyset \wedge e \cap B \neq \emptyset \} }$ consists of all hyperedges with pins in both blocks. The \emph{size} of a cut is the number of cut hyperedges $|\cut(A,B)|$. Let $\varepsilon \in [0,1)$. 
A bipartition $(A,B)$ is $\varepsilon$-balanced if $\max(|A|,|B|) \leq \lceil (1+\varepsilon)\frac{n}{2} \rceil$. The \emph{balanced hypergraph bipartitioning problem} is to find an $\varepsilon$-balanced bipartition $(A,B)$ which minimizes the cut. The special case $\varepsilon = 0$ is called \emph{perfectly balanced bipartitioning}. \subsection{Maximum Flows} A flow network $\mathcal{N}=(\mathcal{V},\mathcal{E},S,T,c)$ is a simple symmetric directed graph $(\mathcal{V},\mathcal{E})$ with two non-empty \emph{terminal} node sets $S,T\subsetneq \mathcal{V}$, $S \cap T = \emptyset$, the source and target node set, as well as a capacity function $c : \mathcal{E} \mapsto \mathbb{R}_{\geq 0}$. Any node that is not a source node and not a target node is a \emph{non-terminal} node. A flow in $\mathcal{N}$ is a function $f:\mathcal{E} \mapsto \mathbb{R}$ subject to the \emph{capacity constraint} $f(e) \leq c(e)$ for all edges $e$, \emph{flow conservation} ${\sum_{(u,v)\in \mathcal{E}} f((u,v)) = 0}$ for all non-terminal nodes $v$ and \emph{skew symmetry} ${f((u,v))=-f((v,u))}$ for all edges~$(u,v)$. The \emph{value} of a flow ${|f| := \sum_{s \in S, (s,u)\in \mathcal{E}} f((s,u))}$ is the amount of flow leaving $S$. The \emph{residual capacity} $r_f(e) := c(e) - f(e)$ is the additional amount of flow that can pass through $e$ without violating the capacity constraint. The residual network with respect to $f$ is the directed graph $\mathcal{N}_f = (\mathcal{V},\mathcal{E}_f)$ where $\mathcal{E}_f := \{e \in \mathcal{E} | r_f(e) > 0\}$. A node $v$ is \emph{source-reachable} if there is a path from $S$ to $v$ in $\mathcal{N}_f$; it is \emph{target-reachable} if there is a path from $v$ to $T$ in $\mathcal{N}_f$. We denote the source-reachable and target-reachable nodes by $S_r$ and $T_r$, respectively. An \emph{augmenting path} is an $S$-$T$ path in $\mathcal{N}_f$. The flow $f$ is a \emph{maximum flow} if $|f|$ is maximal among all possible flows in $\mathcal{N}$.
This is the case iff there is no augmenting path in $\mathcal{N}_f$. An $S$-$T$ edge cut is a set of edges whose removal disconnects $S$ and $T$. The value of a maximum flow equals the weight of a minimum-weight $S$-$T$ edge cut~\cite{ff-mftn-56}. The \emph{source-side cut} consists of the edges from $S_r$ to $\mathcal{V} \setminus S_r$ and the \emph{target-side cut} consists of the edges from $T_r$ to $\mathcal{V} \setminus T_r$. The bipartition $(S_r, \mathcal{V} \setminus S_r)$ is induced by the source-side cut and $(\mathcal{V} \setminus T_r, T_r)$ is induced by the target-side cut. Note that $\mathcal{V} \setminus S_r \setminus T_r$ is not necessarily empty. \subsection{Hyperedge Cuts via Maximum Flows}\label{sec:hg_cuts_via_flow} Lawler~\cite{l-cph-73} uses maximum flows to compute minimum $S$-$T$ hyperedge cuts without balance constraints. On the star expansion, the standard construction to model node capacities as edge capacities~\cite{amo-nf-93} is applied to the hyperedge-nodes. A hyperedge $e$ is expanded into an \emph{in-node} $e_i$ and an \emph{out-node} $e_o$ joined by a directed \emph{bridge edge} $(e_i, e_o)$ with unit capacity. For every pin $u \in e$ there are two directed \emph{external edges} $(u, e_i), (e_o, u)$ with infinite capacity. The transformed graph is called the \emph{Lawler network}. A minimum $S$-$T$ edge cut in the Lawler network consists only of bridge edges, which directly correspond to $S$-$T$ cut hyperedges in $H$. Instead of using the Lawler network, we emulate max-flow algorithms directly on the hypergraph, using an approach first proposed by Pistorius and Minoux~\cite{pm-aidlm-03}. In the paper it is formulated for unit weight hyperedges and the Edmonds-Karp flow algorithm~\cite{ek-tiaenf-72} but it can be easily extended to handle weighted hyperedges and emulate any flow algorithm. For every hyperedge $e \in E$, we store the pins sending and receiving flow via $e$. 
In this work, we consider only unit weight hyperedges and thus need to store only one pin $\flowfrom(e)$ sending flow into $e$, and one pin $\flowto(e)$ receiving flow from $e$. To keep the description simple, it relies on this assumption as well. Let $u$ be a fixed vertex. The idea is to enumerate short paths of the form $u$ $\rightarrow e \in I(u) \rightarrow v \in e$ that correspond to paths in the residual Lawler network. This allows us to emulate algorithms for traversing the residual Lawler network directly on the hypergraph, such as Breadth-First-Search or Depth-First-Search, as well as other types of local operations, e.\,g.\xspace, the \emph{discharge} and \emph{push} operations in push-relabel algorithms~\cite{gt-namfp-88}. For every hyperedge $e \in I(u)$ we do the following. If $e$ has no flow, we enumerate all pins $v \in e$. These paths correspond to $(u, e_i, e_o, v)$ in the Lawler network. If $e$ has flow and $u = \flowto(e)$ we also enumerate all pins $v \in e$. However, these paths correspond to $(u, e_o, e_i, v)$ in the Lawler network. If $e$ has flow and $u = \flowfrom(e)$, there is no path in the residual Lawler network starting at $u$ that uses $e$. If $e$ has flow and $\flowfrom(e) \neq u \neq \flowto(e)$, we enumerate just one path $(u,e,\flowfrom(e))$, corresponding to $(u, e_i, \flowfrom(e))$ in the Lawler network. If we can push flow from the vertex $\flowfrom(e)$ to $T$, we can redirect the flow that the vertex $\flowfrom(e)$ sends into $e$, and instead route flow coming from $u$ to $\flowto(e)$. Then $u$ becomes the new $\flowfrom(e)$. We use this approach in our implementation because it is significantly more efficient than the Lawler network in practice. In the last case we can avoid scanning all pins of $e$. In a preliminary experiment, computing flow directly on the hypergraph yielded a speedup of 15 over using the Lawler network, for a hypergraph with maximum hyperedge size of only 36. 
The speedup will be more extreme on hypergraphs with larger hyperedges. Via the Lawler network and the above emulation approach, the notions of flow, source-reachable vertices and source-side cuts translate naturally from graphs to hypergraphs. We use the notation and terminology already known from max-flows in graphs. \section{HyperFlowCutter}\label{sec:hfc} In the following we outline the core HyperFlowCutter algorithm, which can only be used on connected hypergraphs. Then we discuss how to handle disconnected hypergraphs, and finally show how to improve an existing partition using HyperFlowCutter. \subsection{The Core Algorithm}\label{sec:core} \begin{figure} \caption{Compute minimum $S$-$T$ cuts.} \label{fig:fc_initialstate} \caption{Add $S_r$ to $S$ and choose a piercing vertex.} \label{fig:fc_pierce} \caption{Mixed hyperedge (white square) with incidence relations (orange) and an isolated vertex (white disk).} \label{fig:fc_isolated} \caption{Flow augmentation and computing $S_r, T_r$ in Fig.~\ref{fig:fc_initialstate}.} \label{fig:illuHFC} \end{figure} The idea of the \emph{core} HyperFlowCutter algorithm is to solve a sequence of incremental $S$-$T$ max-flow min-cut problems with monotonically increasing cut size and balance, until an $\varepsilon$-balanced bipartition is found. They are incremental in the sense that the terminals $S,T$ of the current flow problem are subsets of the terminals in the next flow problem, which allows us to reuse previously computed flows. Given starting terminal sets $S_\text{init},T_\text{init}$, we set $S:= S_\text{init}$, $T := T_\text{init}$. First, we compute a maximum $S$-$T$ flow. We terminate if the bipartition $(S_r, V \setminus S_r)$ induced by the source-side cut or $(V \setminus T_r, T_r)$ induced by the target-side cut is $\varepsilon$-balanced. Otherwise, we add the source-reachable vertices $S_r$ to $S$, if $|S_r| \leq |T_r|$, or we add $T_r$ to $T$ if $|S_r| > |T_r|$.
Assume $|S_r| \leq |T_r|$ without loss of generality. Further, we add one or more vertices, called \emph{piercing vertices}, to $S$. This step is called \emph{piercing} the cut. It ensures that the next flow problem yields a different bipartition. Subsequently, we augment the previous flow to a max-flow that respects the new sources. We repeat these steps until an $\varepsilon$-balanced bipartition is found. Note that after adding $S_r$ to $S$, the flow is still a maximum $S$-$T$ flow, even though the added vertices are now exempt from flow conservation. Using the smaller side allows it to catch up with the larger side. In particular, this ensures that $\varepsilon$-balance is always possible, as neither side grows beyond $\lceil n/2 \rceil$ vertices. A hyperedge with pins in both $S$ and $T$ is \emph{mixed}, all other hyperedges are \emph{non-mixed}. We consider two options to find piercing vertices. The preferred option is to choose all pins $e \setminus S \setminus T$ of a non-mixed hyperedge $e$ in the source-side cut. Adding multiple vertices is faster in practice. This small detail is a major difference to FBB~\cite{yw-enfbm-96} and is necessary to make the running time feasible on the used benchmark set. If the source-side cut contains only mixed hyperedges, we choose a single non-terminal vertex incident to the source-side cut. We prefer piercing vertices which are not reachable from $T$, as these avoid augmenting paths in the next iteration, and thus the cut size does not increase. Avoiding augmenting paths has the highest precedence, followed by choosing hyperedges over single vertices. Ties are broken randomly. If the piercing vertices are not reachable from $T$, we do not recompute $T_r$ and we skip flow augmentation, but we do recompute $S_r$. 
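To make the loop concrete, the following is a minimal runnable sketch of this grow-and-pierce loop, written for ordinary graphs with unit edge capacities rather than hypergraphs with Lawler networks; it omits isolated-vertex handling, always pierces with a single vertex, and uses plain BFS augmentation. All names (`flowcutter`, `bfs_augment`, `reachable`) are ours, not taken from the HyperFlowCutter implementation.

```python
from collections import defaultdict, deque
from math import ceil

def bfs_augment(cap, flow, S, T):
    """Find one augmenting S-T path in the residual graph and push one
    unit of flow along it (all capacities are 1). Returns True on success."""
    parent = {s: None for s in S}
    q = deque(S)
    while q:
        u = q.popleft()
        for v in cap[u]:
            if v not in parent and cap[u][v] - flow[u][v] > 0:
                parent[v] = u
                if v in T:
                    while parent[v] is not None:  # walk back and augment
                        flow[parent[v]][v] += 1
                        flow[v][parent[v]] -= 1
                        v = parent[v]
                    return True
                q.append(v)
    return False

def reachable(cap, flow, roots, backward=False):
    """S_r: vertices reachable from `roots` in the residual graph;
    with backward=True, T_r: vertices that can reach `roots`."""
    seen, q = set(roots), deque(roots)
    while q:
        x = q.popleft()
        for y in cap[x]:
            res = cap[y][x] - flow[y][x] if backward else cap[x][y] - flow[x][y]
            if y not in seen and res > 0:
                seen.add(y)
                q.append(y)
    return seen

def flowcutter(n, edges, s, t, eps):
    """Grow S and T by piercing until an eps-balanced bipartition appears.
    Assumes a connected graph. Returns (source-side block, cut size)."""
    cap = defaultdict(dict)
    for u, v in edges:
        cap[u][v] = cap[v][u] = 1
    flow = defaultdict(lambda: defaultdict(int))
    S, T = {s}, {t}
    bound = ceil((1 + eps) * n / 2)
    while True:
        while bfs_augment(cap, flow, S, T):
            pass
        Sr = reachable(cap, flow, S)
        Tr = reachable(cap, flow, T, backward=True)
        if max(len(Sr), n - len(Sr)) <= bound:
            return Sr, sum(1 for u in Sr for v in cap[u] if v not in Sr)
        if max(n - len(Tr), len(Tr)) <= bound:
            return set(range(n)) - Tr, sum(1 for u in Tr for v in cap[u] if v not in Tr)
        # grow the smaller side, then pierce its cut with one extra vertex,
        # preferring a vertex not reachable from the other side (this avoids
        # a new augmenting path, so the cut size does not increase)
        grow, other = (Sr, Tr) if len(Sr) <= len(Tr) else (Tr, Sr)
        fixed = T if grow is Sr else S
        side = set(grow)
        cand = [v for u in side for v in cap[u] if v not in side and v not in fixed]
        side.add(next((v for v in cand if v not in other), cand[0]))
        if grow is Sr:
            S = side
        else:
            T = side
```

On the path graph $0{-}1{-}\dots{-}5$ with $s=0$, $t=5$ and $\varepsilon=0$, the sketch pierces twice on the source side and once on the target side before returning the perfectly balanced bipartition $\{0,1,2\}$ with cut size 1.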
We experimented with other piercing methods, including trying to avoid mixed hyperedges, the distance-based piercing of FlowCutter, as well as piercing based on a machine learning technique named ensemble classification. We discuss ensemble classification again in the experimental section, although in a different context. None of these approaches yielded consistent or significant quality improvements over just avoiding augmenting paths and random tie-breaking, which is why we use only those. \subparagraph{Asymptotic Complexity.} The asymptotic running time of Core HyperFlowCutter is $\mathcal{O}(cp)$ where $c$ is the cut size of the $\varepsilon$-balanced partition and $p$ is the number of pins in the hypergraph. Roughly speaking, the term $p$ stems from performing up to one hypergraph traversal per flow unit. Here we use that $n\leq p, m \leq p$ holds for connected hypergraphs. Note that the running time is output-sensitive; however, the factor $c$ is rather pessimistic in practice, since the flow algorithm finds many augmenting paths in a single traversal. The original proof for FlowCutter~\cite{hs-gbpo-18} is applicable, but implementing the piercing step requires a little care. Selecting piercing vertices takes $\mathcal{O}(c)$ time per iteration, and we have at most $n \leq p$ iterations. Selecting a non-mixed hyperedge for piercing takes $\mathcal{O}(c)$ time, by iterating over the cut hyperedges. Selecting single piercing vertices which avoid augmenting paths whenever possible is slightly more involved when restricted to $\mathcal{O}(c)$ time. For every cut hyperedge $e$ we additionally track its pins that are not reachable from either side. This can be implemented with a linked list, from which we delete vertices once they become reachable from a side. An alternative implementation divides the memory storing the pins of $e$ into three regions: the pins in $S_r$, in $T_r$ or not reachable.
Then we can check whether $e$ has pins that are not reachable from either side, and if so pick one. In practice, this adds significantly to the complexity of the implementation and the piercing step is never critical for running time, so our implementation simply scans all non-terminal boundary vertices. Our implementation has an $\mathcal{O}(n+m)$ memory footprint by transferring and re-engineering the implementation details from~\cite{hs-gbpo-18}. This is dominated by the $\mathcal{O}(n+m+p)$ memory for storing the hypergraph. \subparagraph{Isolated Vertices.} We call a vertex $v \notin S \cup T$ \emph{isolated} if every incident hyperedge $e \in I(v)$ is mixed. Figure~\ref{fig:fc_isolated} illustrates an isolated vertex. An isolated vertex cannot be reached from either $S$ or $T$ via hyperedges not in the cut. Mixed hyperedges remain in both the source-side cut and the target-side cut. Thus isolated vertices can be moved freely between the two blocks to increase balance. It never makes sense to permanently add them to a side, so we exclude them from the piercing step. Furthermore, this needs to be reflected when checking for $\varepsilon$-balance. For checking the bipartition $(S_r, V \setminus S_r)$ we assume that up to $n/2 - |S_r|$ of the isolated vertices are part of $S_r$; analogously for $T_r$. \subparagraph{Maximum Flow Algorithm.}\label{sec:maxflowalgo} In our implementation, we adapt Dinic's maximum flow algorithm~\cite{d-aspmf-70} to operate directly on the hypergraph, as described in Section~\ref{sec:hg_cuts_via_flow}. Dinic's algorithm has two alternating phases that are repeated until a maximum flow is found: computing hop distance labels of nodes, using Breadth-First-Search, and finding augmenting paths using Depth-First-Search, such that the distance labels always increase by one along the path. We interleave the first phase of Dinic's algorithm with the computation of reachable vertices, in order to avoid duplicate hypergraph traversal.
This intrusive optimization is important for improving the running time of flow augmentation in practice, as the part of the running time of the flow algorithm that cannot be charged towards computing reachable vertices is dominated by the part that can be. This is not possible with push-relabel algorithms~\cite{gt-namfp-88}, which is why they were ruled out after short experimentation. We experimented with the Edmonds-Karp flow algorithm~\cite{ek-tiaenf-72}, modified to augment flow along multiple vertex-disjoint paths in one Breadth-First-Search by propagating vertex labels through the layers. It is slightly slower than Dinic's algorithm for plain HyperFlowCutter but infeasible for the refinement variant of HyperFlowCutter, since there are fewer vertices to route flow through and thus the amount of flow augmented per Breadth-First-Search is limited by a few bottleneck vertices. \subsection{Disconnected Hypergraphs}\label{sec:disconnectedhypergraphs} The HyperFlowCutter core algorithm is limited to connected hypergraphs since it computes $S$-$T$-cuts. An obvious approach for handling disconnected hypergraphs is connecting components artificially. We refrained from this because a component that intersects neither $S$ nor $T$ would be added to $S$ or $T$ in its entirety. Instead, we run the core algorithm up to $\varepsilon = 0$ on every connected component. The core algorithm computes multiple bipartitions with different balances. We systematically try all possible ways to combine the bipartitions of the components into a bipartition of $H$. This can be stated as a generalization of the well-known \textsc{SubsetSum} problem. \textsc{SubsetSum} is a weakly NP-hard decision problem, which asks whether a subset of an input multiset of positive integers $A=\{a_1,\dots, a_z\}$, the \emph{items}, sums to an input target sum $W$.
Finding a bipartition with zero cut is equivalent to \textsc{SubsetSum}, where the items are the sizes of the components and $W$ is the minimum size of the smaller block. We are interested in any subset summing to at least $W$. Let $A$ be sorted in increasing order and let $Q(i,S)$ be a boolean variable, which is true iff a subset of the first $i$ items sums to $S$. The standard pseudo-polynomial time dynamic program (DP)~\cite[Section 35.5]{clrs-ia-01} for \textsc{SubsetSum} computes solutions for all possible target sums. It fills the DP table $Q$ by iterating through the items in increasing order and setting $Q(i,S)$ to true if $Q(i-1,S-a_i)$ or $Q(i-1,S)$ is true. For filling row $i$, only row $i-1$ is required, so the memory footprint is not quadratic. We now turn to non-zero cut bipartitions by allowing items to be split in different ways and associating costs with the splits. The core algorithm computes multiple bipartitions $P_i$ on component $C_i$, at most one for every possible size of the smaller side. These correspond directly to the different ways we can split items. The associated cost is the cut size. We modify the standard \textsc{SubsetSum} DP to minimize the added cuts instead of finding any subset: every component/item is split in only one way per solution, i.\,e.\xspace, exactly one of its bipartitions is selected, and both the smaller and the larger side of that component bipartition are tried for the smaller side of the bipartition of $H$. The worst case asymptotic running time of this DP is $\mathcal{O}(\sum_{i=1}^z \sum_{j=1}^{i-1} |P_i||P_j|)$. We propose some optimizations to make the approach faster in practice. First we solve standard \textsc{SubsetSum} to check whether $H$ has an $\varepsilon$-balanced bipartition with zero cut. For the \emph{gap-filler} optimization, we find the largest $g \in \mathbb{N}$ such that for every $x \in [0,g]$ there are connected components whose sizes sum to $x$. Computing $g$ is possible in $\mathcal{O}(n)$ time.
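The standard all-target-sums \textsc{SubsetSum} DP described above admits a compact single-row formulation. The sketch below is a minimal illustration with toy data (names and numbers are ours, not from the paper's implementation):

```python
def subset_sums(items):
    """Standard pseudo-polynomial SubsetSum DP over all target sums.
    reachable[s] is True iff some subset of `items` sums to s.
    Only one DP row is kept, so the memory footprint is O(sum(items))."""
    total = sum(items)
    reachable = [False] * (total + 1)
    reachable[0] = True                # the empty subset
    for a in items:
        # iterate sums downward so each item is used at most once per subset
        for s in range(total, a - 1, -1):
            if reachable[s - a]:
                reachable[s] = True
    return reachable

# toy component sizes of a disconnected hypergraph with n = 14 vertices
sums = subset_sums([3, 5, 6])
```

For a perfectly balanced zero-cut bipartition of these $n = 14$ vertices one would need a subset summing to $7$; here `sums[7]` is `False` (the achievable sums are $0,3,5,6,8,9,11,14$), so every perfectly balanced bipartition must split some component.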
Let $C_1, \dots, C_z$ be sorted by increasing size, which takes $\mathcal{O}(n)$ time using counting sort~\cite[Section 8.2]{clrs-ia-01}. Then $g=\sum_{j=1}^{k-1} |C_j|$ for the smallest $k$ such that $|C_k| > 1 + \sum_{j=1}^{k-1} |C_j|$. It is never beneficial to split the components $C_1, \dots, C_{k-1}$. For most hypergraphs we consider in our experiments, we do not invoke the DP because, due to gap-filler, we split only the largest component. For the hypergraphs on which we do invoke the DP, its running time is negligible. Nonetheless, it is easy to construct a worst case instance, where the quadratic running time is prohibitive. For a robust algorithm, we propose to sample bipartitions from every $P_i$ so that the worst case running time falls below some input threshold. The samples should include balanced bipartitions to guarantee that a balanced partition on $H$ can be combined from those on $H[C_i]$. \subsection{HyperFlowCutter as a Refinement and Balancing Algorithm}\label{sec:refinement} Instead of partitioning from scratch, HyperFlowCutter can be used to refine an existing balanced bipartition $\pi = (V_1, V_2)$, or repair the balance of an unbalanced bipartition. We fix two blocks of vertices $V'_i \subset V_i$ such that $|V'_i| \leq \alpha \cdot n$ for a \emph{relative block-size threshold} parameter $\alpha \in [0, 0.5]$. To obtain $V'_i$ we run Breadth-First-Search from the boundary vertices on the side of $V_i$ until $|V_i|-\alpha \cdot n$ vertices have been visited. The $\alpha \cdot n$ vertices not visited by the Breadth-First-Search are set as $V'_i$. Then we run HyperFlowCutter with $S=V'_1, T=V'_2$. We call this algorithm \emph{ReBaHFC}. This idea is equivalent to the way KaHyPar and KaHiP extract \emph{corridors} around the cut for their flow-based refinement. Only the semantics of the size constraint are different to our approach. However, KaHyPar and KaHiP only compute one flow. 
If the associated bipartition is not balanced, a smaller flow network is derived. This is repeated until the bipartition is balanced. ReBaHFC does not need to repeatedly re-scale flow networks. In this work we only perform refinement as a post-processing step to a given partition, whereas KaHyPar and KaHiP employ flow-based refinement on the multilevel hierarchy. In future work we hope to integrate HyperFlowCutter-based refinement into KaHyPar. Using ReBaHFC could eliminate the significant overhead of repeated rescaling and improve solution quality -- in particular when the minimum cut is just short of being balanced. We use the fast multilevel partitioner PaToH~\cite{patoh} to obtain initial partitions. We briefly discuss properties of PaToH and differences between its presets. For coarsening, PaToH uses agglomerative clustering, based on the number of common hyperedges divided by cluster size. For initial partitioning, a portfolio of graph growing, bin packing and neighborhood expansion algorithms is used. For refinement PaToH uses a pass of FM~\cite{fm-a-82} followed by a pass of KL~\cite{kl-efppg-70}, each initialized with boundary nodes. In order to improve cut size, Walshaw~\cite{w-mrcop-04} proposed to iterate the multilevel scheme by contracting only nodes from the same block, which maintains the cut size, thus allowing refinement on coarse levels starting from a relatively high quality partition. The existing partition serves as initial partition on the coarsest level. One iteration is called a V-cycle. Contraction can be stopped at different stages. The quality preset PaToH-Q uses 3 full V-cycles and 3 shorter V-cycles as opposed to the single V-cycle of the default preset PaToH-D. To accelerate partitioning, both presets temporarily discard hyperedges which have more pins than some threshold. PaToH-D sets a lower threshold than PaToH-Q.
\section{Experimental Evaluation}\label{sec:experiments} We implemented HyperFlowCutter and ReBaHFC in C++17 and compiled our code using g++8 with flags \texttt{-O3 -mtune=native -march=native}. The source code is available on GitHub\footnote{Source code available at~\url{https://github.com/kit-algo/HyperFlowCutter}}. Experiments are performed sequentially on a cluster of Intel Xeon E5-2670 (Sandy Bridge) nodes with two Octa-Core processors clocked at 2.6 GHz with 64 GB RAM, 20~MB L3- and 8$\times$256 KB L2-Cache, using only one core of a node. We use the benchmark set\footnote{Benchmark set and detailed statistics available at \url{https://algo2.iti.kit.edu/schlag/sea2017/}} of Heuer and Schlag~\cite{hs-icshp-17}, which has been used to evaluate KaHyPar. It consists of 488 hypergraphs from four sources: the ISPD98 VLSI Circuit Benchmark Suite~\cite{a-tispd-98} (VLSI, 18 hypergraphs), the DAC 2012 Routability-Driven Placement Benchmark Suite~\cite{naslw-tdacr-12} (DAC, 10), the SuiteSparse Matrix Collection~\cite{dh-tufsm-11} (SPM, 184) and the international SAT Competition 2014~\cite{sat2014} (Literal, Primal, Dual, 92 hypergraphs each). The set contains 173 disconnected hypergraphs; in particular, all DAC instances are disconnected. Refer to~\cite{hs-icshp-17} for more information on how the hypergraphs were derived. Unless mentioned otherwise, experiments are performed on the full benchmark set. In the following we describe the configuration of ReBaHFC and plain HyperFlowCutter, before comparing them to competing algorithms in Section~\ref{sec:comparison}.
\subsection{General HyperFlowCutter Configuration}\label{sec:parameterstudy} \begin{table}[tb] \centering \caption{Average and quantile speedups of the hybrid and interleaved execution strategies.}\label{table:execution_strategy} \begin{tabular}{l *{8}{r}} \toprule & avg & min & 0.1 & 0.25 & median & 0.75 & 0.9 & max \\ \midrule hybrid & 2.21 & 0.4 & 0.96 & 1.07 & 1.29 & 1.55 & 2.57 & 49.66 \\ interleaved & 3.88 & 0.55 & 1.0 & 1.14 & 1.38 & 1.74 & 2.66 & 175.47 \\ \bottomrule \end{tabular} \end{table} To improve the solution quality of HyperFlowCutter, we run it $q \in \mathbb{N}$ times with different terminal pairs and take the minimum cut. To improve running time we run them simultaneously, in an \emph{interleaved} fashion, as already described in~\cite{hs-gbpo-18}, so that the output-sensitive running time depends on the smallest found $\varepsilon$-balanced cut, not the largest. We always schedule the terminal pair with the currently smallest cut to progress. Table~\ref{table:execution_strategy} shows the average and some quantile speedups when interleaving the execution of 20 random terminal vertex pairs, instead of running them one after another; repeated for 5 random seeds. Because consecutive execution exhibits more memory locality, we also tested a hybrid strategy where the instance with the currently smallest cut is allowed to make multiple progress iterations. Interleaving outperforms consecutive execution by a factor of 3.88 on average and also consistently beats hybrid execution. This shows that saving work is more important than memory locality. These numbers stem from a preliminary experiment on a 139 hypergraph subset of the full benchmark set. The subset contains 78 sparse matrices, 17 Primal, 23 Dual, 6 Literal SAT instances, 15 VLSI and 0 DAC instances. It contains only connected hypergraphs (all DAC instances are disconnected) in order to measure the impact of interleaving, not the setup overhead for many small connected components. 
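The interleaved schedule (always advance the terminal pair with the currently smallest cut, stop at the first balanced cut) can be illustrated with a toy simulation. This is our sketch, not the paper's code: each "instance" is modeled as a precomputed non-decreasing sequence of cut sizes whose last entry is its first $\varepsilon$-balanced cut.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

// Toy model of interleaved execution: a min-heap keyed by the current cut
// picks the instance to advance next; we stop as soon as one instance
// reaches its (final) balanced cut, so total work depends on the smallest
// balanced cut found, not the largest.
int interleaved_min_cut(const std::vector<std::vector<int>>& cuts) {
    using Item = std::tuple<int, std::size_t, std::size_t>;  // (cut, instance, step)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    for (std::size_t i = 0; i < cuts.size(); ++i)
        pq.emplace(cuts[i][0], i, 0);
    while (!pq.empty()) {
        auto [cut, i, step] = pq.top();
        pq.pop();
        if (step + 1 == cuts[i].size()) return cut;  // first balanced cut found
        pq.emplace(cuts[i][step + 1], i, step + 1);  // advance this instance one step
    }
    return -1;  // no instances given
}
```

With instances $\langle 5,9,12\rangle$, $\langle 3,4,6\rangle$, $\langle 7,20\rangle$, the schedule advances the second instance to its balanced cut of 6 without ever touching the expensive later steps of the others.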
\subsection{ReBaHFC Configuration}\label{sec:config:rebahfc} \begin{figure} \caption{Improvement ratios $1 - \frac{\cut(\mathrm{ReBaHFC})}{\cut(\mathrm{PaToH})}$ of ReBaHFC over the initial PaToH partition.} \label{fig:experimental:rebahfc:improvementratios} \end{figure} We now discuss the configuration for ReBaHFC. The imbalance for the initial partition is set to the same value as the desired imbalance $\varepsilon$ for the output partition, which proved superior to larger imbalances on initial partitions. The block-size threshold parameter $\alpha$ should depend on $\varepsilon$, so we settled on $\alpha = 0.4$ for $\varepsilon = 0.03$ and $\alpha = 0.46$ for $\varepsilon = 0$. We resize the blocks once and run HyperFlowCutter five times, interleaved as described in the previous section. This number provides decent quality without increasing the running time too much. We consider two variants: ReBaHFC-D, which uses PaToH with the default preset, and ReBaHFC-Q, which uses PaToH with the quality preset. The parameter study which led to these choices is discussed in more detail in Appendix~\ref{sec:experimental:config_rebahfc}. Figure~\ref{fig:experimental:rebahfc:improvementratios} shows how much ReBaHFC improves the initial partition. We run the two ReBaHFC variants for $\varepsilon = 0, 0.03$ on all hypergraphs of the benchmark set with five different random seeds and plot the ratio $1 - \frac{\cut(\mathrm{ReBaHFC})}{\cut(\mathrm{PaToH})}$ per run. Note that there is no comparison between the curves, and higher values are better for ReBaHFC. Table~\ref{table:rebahfc_impr_by_category} reports how often ReBaHFC improves the initial partition, for different hypergraph classes. As expected, ReBaHFC-Q improves fewer solutions than ReBaHFC-D since the PaToH baseline is already better. Furthermore, ReBaHFC has more opportunities for refinement with $\varepsilon = 0.03$, in particular on the DAC and VLSI instances, whereas it struggles with the Primal and Literal SAT instances for $\varepsilon = 0$.
Note that other refinement algorithms do not always improve solutions either. In particular, local moving based refinement algorithms struggle with zero-gain moves in the presence of large hyperedges, and the flow-based refinement in KaHyPar can yield unbalanced solutions or reproduce the existing solution. These results show that HyperFlowCutter is a promising candidate for a refinement algorithm integrated in a multilevel partitioner, which is a direction we hope to investigate in future work. The PaToH runs in the experiments from Section~\ref{sec:comparison} use other random seeds than those used internally in ReBaHFC. Hence stand-alone PaToH can sometimes find smaller cuts than ReBaHFC. \begin{table}[tb] \centering \caption{Overview, by hypergraph class, of how often ReBaHFC improves the initial partition.} \label{table:rebahfc_impr_by_category} \input{Corridor_rebahfc_impr_instances_by_class.latex_tabular} \end{table} \subsection{Plain HyperFlowCutter Configuration}\label{sec:config:plain_hfc} For the experiments on perfectly balanced partitioning we run plain HyperFlowCutter with up to $q=100$ terminal pairs and take the minimum cut. This value was already used for FlowCutter~\cite{hs-gbpo-18}. With plain HyperFlowCutter we want to push the envelope on solution quality for $\varepsilon = 0$, regardless of running time -- because, as the experiments show, ReBaHFC already provides a good time-quality trade-off. The simplest method for choosing starting terminals is to select random vertices. We unsuccessfully experimented with \emph{pseudo-peripheral} terminals, i.\,e.\xspace two vertices that are intuitively far away from each other and at the boundary of the hypergraph. Instead we propose a selection method based on ensemble classification. Ensemble classification is a technique used in machine learning to build a strong classifier from multiple weak ones. We compute $10$ bipartitions $\pi_1, \dots, \pi_{10}$ with PaToH-D.
Let $x \equiv y \Leftrightarrow \pi_i(x)=\pi_i(y)$ for all $i=1,\dots,10$ be the equivalence relation in which two vertices are equivalent if they are in the same block for all ensemble bipartitions. An equivalence class likely lies within one block of a good bipartition and is thus suited as a terminal set. We order the equivalence classes by size in descending order and group two successive classes as one terminal pair. Generally speaking, the larger equivalence classes make for better terminal pairs. Based on experiments in Appendix~\ref{sec:experimental:config_plain}, we use 3 ensemble terminal pairs and 97 random vertex pairs. The reported running time for plain HyperFlowCutter always includes the running time for the 10 PaToH-D runs. On 42 of the 488 hypergraphs, plain HyperFlowCutter with $100$ terminal pairs exceeds the eight hour time limit. One downside of interleaving executions is that the solution is only available once all terminal pairs have been processed. Instead of interleaving all 100 executions, we run four waves of $\langle 1,5,14,80 \rangle$ terminal pairs consecutively and interleave execution within waves. An improved bipartition is available after every wave, so that, even if the time limit is exceeded, a solution is available as long as the first wave has been completed. We chose the wave sizes so that completing waves four and three corresponds to 100 and 20 terminal pairs, respectively, as these values were used in~\cite{hs-gbpo-18}. The first wave consists of the first ensemble terminal pair, the second and third waves consist of 5 and 14 random terminal pairs, respectively, and the fourth wave consists of 78 random as well as two additional ensemble terminal pairs. There are 438 hypergraphs for which the fourth wave finishes, 35 for which the third but not the fourth wave finishes, 6 for the second, 1 for the first, and there are 8 hypergraphs which are partitioned with zero cut, using just the subset sum preprocessing.
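The equivalence-class construction above can be sketched as follows; this is our illustration of the idea (names and signature are ours). Each vertex gets a signature of its block ids across all ensemble bipartitions; vertices sharing a signature form one class, and classes are returned in descending order of size so that consecutive classes can be paired as terminals.

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Group vertices 0..n-1 by their block pattern across all ensemble
// bipartitions (partitions[i][v] = block of vertex v in bipartition i),
// then sort the resulting classes by size in descending order.
std::vector<std::vector<int>>
equivalence_classes(const std::vector<std::vector<int>>& partitions, int n) {
    std::map<std::vector<int>, std::vector<int>> classes;
    for (int v = 0; v < n; ++v) {
        std::vector<int> signature;
        for (const auto& pi : partitions) signature.push_back(pi[v]);
        classes[signature].push_back(v);  // same signature => equivalent
    }
    std::vector<std::vector<int>> result;
    for (auto& [sig, members] : classes) result.push_back(std::move(members));
    std::sort(result.begin(), result.end(),
              [](const auto& a, const auto& b) { return a.size() > b.size(); });
    return result;
}
```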
\subsection{Comparing ReBaHFC and HyperFlowCutter against State-of-the-Art Hypergraph Partitioners}\label{sec:comparison} \begin{figure} \caption{Comparison between the algorithms for $\varepsilon = 0.03$. Left: Absolute running times for every hypergraph and random seed. Right: Performance plot relating the minimum cut per algorithm and hypergraph to the overall best cut for that hypergraph. Lower values are better.} \label{fig:experimental:eps0.03} \end{figure} In this section, we compare ReBaHFC and plain HyperFlowCutter against state-of-the-art hypergraph partitioners. After discussing our comparison methodology, we present results for two settings, namely $\varepsilon = 0.03$ and $\varepsilon = 0$. \subparagraph{Methodology.} We run each partitioner five times with different random seeds and report the minimum cut. For every run we set a time limit of eight hours. We use the \emph{performance plots} introduced in~\cite{shhmss-hplrb-16} to compare algorithms on a per-hypergraph basis regarding cut size. For each algorithm and hypergraph these plots contain a \emph{performance ratio} $1 - \text{best}/\text{algorithm}$, which relates the minimum cut found by any algorithm to the minimum cut found by this algorithm. The ratios of each algorithm are sorted in increasing order. A ratio of $0$ means that this algorithm found the smallest overall cut; hence the number of ratios equal to $0$ is the number of hypergraphs on which this algorithm is the best. Furthermore, algorithm A dominates algorithm B if the curve of A is strictly below that of B. We use values greater than $1$ to indicate that algorithms exceeded the time limit or produced unbalanced solutions; this is clearly marked in the plots. To include partitions with zero cut, we set the performance ratio to $0$ if the algorithm found the zero-cut partition, and to $1$ otherwise.
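The performance-ratio definition, including its zero-cut special cases, amounts to a few lines. A minimal sketch (our code, not the paper's):

```cpp
// Performance ratio 1 - best/algorithm, with the zero-cut special cases
// from the text: ratio 0 if this algorithm found the zero-cut partition,
// ratio 1 if only some other algorithm did.
double performance_ratio(long best, long algo) {
    if (algo == 0) return 0.0;  // this algorithm found the zero-cut partition
    if (best == 0) return 1.0;  // another algorithm found a zero cut
    return 1.0 - static_cast<double>(best) / static_cast<double>(algo);
}
```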
The performance plots use a cube root scaled y-axis in order to reduce right skewness~\cite{cuberoots} and to give a fine-grained view on the smaller improvements. For comparing algorithms regarding running time we use a combination of a scatter plot, which shows every measured running time, and a box plot (0.25, median, 0.75 quantiles, whiskers at most extreme points within distance $1.5 \cdot \operatorname{IQR}$ from the upper/lower quartile). The running time plots use a fifth root scaled y-axis for a fine-grained view on areas with smaller running times, which contain more data points. \subparagraph{Comparison for 3\% imbalance.} For $\varepsilon = 0.03$ we compare ReBaHFC against the state-of-the-art hypergraph partitioning tools KaHyPar-MF (the latest version of KaHyPar with flow-based refinement) and hMETIS-R (the recursive bisection variant of hMETIS), as well as PaToH-D (default preset) and PaToH-Q (quality preset). We use the library interface of PaToH. According to the hMETIS manual, hMETIS-R is preferred over hMETIS-K (direct k-way) for bipartitions, so we exclude hMETIS-K. These tools were chosen because they provide the best solution quality according to~\cite{ahss-ehpa-17, hs-icshp-17}. We chose $\varepsilon = 0.03$ as this is a commonly used value in the literature. Plain HyperFlowCutter is excluded from this part of the experiments because it is not competitive. Figure~\ref{fig:experimental:eps0.03} shows the running times and a performance plot on the full benchmark set for $\varepsilon = 0.03$. In addition to the running time plot, we compare algorithms by the geometric mean of their running times. We use the geometric mean in order to give instances of different sizes a comparable influence. KaHyPar-MF finds the smallest cut on 292 hypergraphs, hMETIS-R on 257, ReBaHFC-Q on 228, ReBaHFC-D on 177, PaToH-Q on 136 and PaToH-D on 75 of the 488 hypergraphs. 
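The geometric mean used for aggregating running times is conveniently computed as the exponential of the mean logarithm, which avoids overflow on many instances. A minimal sketch (our code):

```cpp
#include <cmath>
#include <vector>

// Geometric mean of running times: exp of the mean log. Compared to the
// arithmetic mean, this gives instances of different sizes (and hence
// vastly different running times) a comparable influence.
double geometric_mean(const std::vector<double>& times) {
    double log_sum = 0.0;
    for (double t : times) log_sum += std::log(t);
    return std::exp(log_sum / static_cast<double>(times.size()));
}
```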
While KaHyPar-MF is the best algorithm regarding solution quality, it is also the slowest, exceeding the time limit on 11 hypergraphs. For the instances on which ReBaHFC-Q does not find the best solution it provides solution quality similar to hMETIS-R and only marginally worse than KaHyPar-MF. In particular its solution quality compared to the best cut deteriorates less than that of hMETIS-R. With 2.23s PaToH-Q is one order of magnitude faster than KaHyPar (34.1s) and hMETIS-R (20.1s), whereas ReBaHFC-Q (2.32s) is only slightly slower than PaToH-Q. Furthermore ReBaHFC-D (0.68s) finds more of the best solutions than PaToH-Q at a running time between PaToH-D (0.5s) and PaToH-Q. Thus ReBaHFC-Q and ReBaHFC-D provide new Pareto points in the time-quality trade-off. In Appendix~\ref{sec:experimental:performanceplots_instanceclasses} Figure~\ref{fig:experimental:performanceplots_instanceclasses} shows performance plots for the different hypergraph classes of the benchmark set. ReBaHFC is particularly good on the DAC and SPM instances. There are hypergraphs on which ReBaHFC is faster than PaToH. These are disconnected hypergraphs, for which ReBaHFC invokes PaToH on smaller sub-hypergraphs, due to the gap-filler optimization and the \textsc{SubsetSum} preprocessing described in Section~\ref{sec:disconnectedhypergraphs}. \begin{figure} \caption{Comparison between the algorithms for $\varepsilon = 0$. Left: Absolute running times for every hypergraph and random seed. Right: performance plot relating the minimum cut per algorithm and hypergraph to the overall best cut for that hypergraph. Lower values are better.} \label{fig:experimental:eps0.0} \end{figure} \subparagraph{Comparison for perfectly balanced partitioning.} Even though the setting $\varepsilon = 0$ has received no attention in hypergraph partitioning and only some attention in graph partitioning~\cite{ss-tlagh-13, mms-a-09, cbm-aprob-07, bh-aemma-10, bh-ammai-11, bh-aemts-11, dw-bbgb-12}, we consider it here. 
Previous studies on perfectly balanced partitioning for graphs have focused on high quality solutions through running time intensive metaheuristics such as evolutionary algorithms \cite{ss-tlagh-13, bh-ammai-11, bh-aemma-10} or tabu search~\cite{bh-aemts-11}, and even an exact branch-and-bound algorithm~\cite{dw-bbgb-12}. Therefore, we include KaHyPar-EVO~\cite{ass-mmhp-18} (the evolutionary algorithm of KaHyPar) as well as plain HyperFlowCutter in addition to the already considered algorithms. We exclude hMETIS-R from this comparison since it rejects $\varepsilon < 0.002$ for bipartitions. We include plain HyperFlowCutter with up to 100 terminal pairs as described in Section~\ref{sec:parameterstudy} and denote this configuration as HFC-100. The evolutionary algorithm KaHyPar-EVO generates, manages and improves a pool of solutions until a time limit is exceeded, and outputs the minimum cut out of all generated solutions. We set the instance-wise time limit to the maximum of the running times of HFC-100 and KaHyPar-MF to evaluate whether KaHyPar-EVO can yield better solution quality when given the same running time as HFC-100. As opposed to the original paper, we configure KaHyPar-EVO to use flow-based refinement, which further improves solution quality. KaHyPar-MF is unable to find any balanced bipartition on 4 hypergraphs, whereas KaHyPar-EVO always finds one. Furthermore, KaHyPar-MF exceeds the time limit on 7 hypergraphs and KaHyPar-EVO on an additional 17, without reporting intermediate solutions. Figure~\ref{fig:experimental:eps0.0} shows the running times and a performance plot of all tested algorithms. HFC-100 produces the best solutions on 245 hypergraphs, followed by ReBaHFC-Q (230), ReBaHFC-D (122), PaToH-Q (121), PaToH-D (40), KaHyPar-EVO (28) and finally KaHyPar-MF (15). This shows that, given exorbitant running time, HFC-100 produces high quality solutions for $\varepsilon = 0$.
However, the time-quality trade-off is clearly in favor of ReBaHFC-Q, especially since its solution quality is closer to the best cut on the instances where it does not find the best cut, as opposed to HFC-100. PaToH is better than KaHyPar for $\varepsilon = 0$ because it includes a KL~\cite{kl-efppg-70} refinement pass, as opposed to KaHyPar, which only uses FM~\cite{fm-a-82}.\looseness=-1 \section{Conclusion} In this paper we propose and evaluate HyperFlowCutter, a hypergraph bipartitioning algorithm based on maximum flow computations. It enumerates partitions with increasing balance up to perfect balance. We also propose and evaluate ReBaHFC, a refinement algorithm based on HyperFlowCutter. In our experimental evaluation on a large set of hypergraphs, we show that while ReBaHFC is unable to beat the state-of-the-art hypergraph partitioners in terms of quality, it comes close while being an order of magnitude faster. Thus, it offers a new trade-off between quality and running time. For the special case of perfectly balanced bipartitioning, the plain HyperFlowCutter algorithm, while being slow, computes the highest-quality solutions. In this setting, ReBaHFC still beats all remaining partitioners and is also much faster. In future work, it would be interesting to integrate the refinement step of ReBaHFC into multilevel partitioners to see if it can further improve their solution quality. \appendix \section{ReBaHFC Parameter Discussion}\label{sec:experimental:config_rebahfc} In this section we discuss our parameter choices for ReBaHFC. We use PaToH to obtain initial partitions for ReBaHFC because it is between one and two orders of magnitude faster than KaHyPar and hMETIS, depending on whether the default or quality preset is used. In addition to constructing corridors using Breadth-First-Search, we also tried using PaToH to resize the blocks again.
Regarding solution quality, the two methods are roughly equivalent, which can be seen in Figure~\ref{fig:experimental:performanceplots_rebahfcconfiguration}. However, we would like to investigate the suitability of HyperFlowCutter as a refinement algorithm in a multilevel framework such as KaHyPar, which is particularly suitable since it already contains the necessary infrastructure for its own flow-based refinement. Further, the overhead of one or two additional invocations of PaToH led us to prefer corridor construction using Breadth-First-Search. We tested several block-size threshold values $\alpha$. For $\varepsilon = 0.03$ we tested $\alpha \in \{0.4, 0.42, 0.46, 0.48\}$, with $\alpha = 0.4$ working best. For $\varepsilon = 0$ we tested $\alpha \in \{0.46, 0.475, 0.49, 0.498\}$, with $\alpha = 0.46$ working best. Smaller values for $\alpha$ are possible but not recommended, since the larger flow problems would increase the running time too much. Furthermore, we experimented with two different imbalance parameters $\varepsilon' \in \{0.03, 0.05\}$ for the initial partition, with requested imbalance $\varepsilon = 0.03$ for the output partition. Figure~\ref{fig:larger_external_eps_does_not_help} shows that using an imbalance $\varepsilon' > \varepsilon$ for the initial partition yields worse output partitions. \begin{figure} \caption{\centering ReBaHFC-Q, $\varepsilon = 0.03$.} \caption{\centering ReBaHFC-D, $\varepsilon = 0.03$.} \caption{\centering ReBaHFC-Q, $\varepsilon = 0$.} \caption{\centering ReBaHFC-D, $\varepsilon = 0$.} \caption{\centering ReBaHFC-Q with $\varepsilon = 0.03$ and block resizing using corridors. This plot shows that it is better to use $\varepsilon' = \varepsilon$ for the initial partition.} \caption{Performance plots to compare the two block resizing strategies and block-size thresholds $\alpha$.
For both PaToH presets we get the same best-performing block-size threshold $\alpha = 0.4$ for $\varepsilon = 0.03$ and $\alpha=0.46$ for $\varepsilon = 0$. The block resizing strategies barely differ in quality.} \label{fig:larger_external_eps_does_not_help} \label{fig:experimental:performanceplots_rebahfcconfiguration} \end{figure} \FloatBarrier \section{Plain HyperFlowCutter Parameter Discussion}\label{sec:experimental:config_plain} In this section we discuss the configuration choices for plain HyperFlowCutter. Recall that we run plain HyperFlowCutter for $\varepsilon = 0$ with up to 100 terminal pairs using interleaved execution in waves, as introduced in Section~\ref{sec:config:plain_hfc}. All experiments in this section are for $\varepsilon = 0$. \subparagraph{Ensemble Terminals.} \begin{table} \caption{Percentage of cases in the randomized experiment which yield a better/worse partition when replacing $Y$ randomly chosen, randomly generated terminal pairs out of 100 with the $Y$ first ensemble terminal pairs.} \label{table:ensemble} \centering \begin{tabular}{l *5{r}} \toprule $Y$ & 1 & 3 & 5 & 7 & 10 \\ \midrule better [\%] & +23.3 & +28.1 & +29.4 & +30.4 & +31.1 \\ worse [\%] & -0.5 & -1.4 & -2.1 & -2.7 & -4.1 \\ \bottomrule \end{tabular} \end{table} We evaluate the ensemble terminal pairs that were introduced in Section~\ref{sec:config:plain_hfc}. To assess the impact of replacing randomly generated terminal pairs by ensemble terminal pairs, we propose a randomized experiment. We replace $Y \in \{1,3,5,7,10\}$ randomly selected terminal pairs out of 100 by the $Y$ first ensemble terminal pairs and measure how often this improves the minimum cut. The results in Table~\ref{table:ensemble} are accumulated over five runs of plain HyperFlowCutter with different random seeds, for each of which we accumulate 500 random samples of $Y$ ejected terminal pairs. Any choice of $Y$ other than $1$ improves the solution in roughly 27\% of the cases (better $-$ worse).
On 62.3\% of the hypergraphs, the first ensemble terminal pair is the best out of 10. Thus we use only $Y=3$ ensemble terminal pairs for the final configuration. This experiment was conducted on the 139 hypergraph parameter tuning subset used in Section~\ref{sec:parameterstudy}. \begin{figure} \caption{Ratios $1 - \frac{\cut(\mathrm{HFC}\text{-}X)}{\cut(\mathrm{HFC}\text{-}1)}$ by which HFC-$X$ improves the bipartition of HFC-1.} \label{fig:experimental:waves} \end{figure} \subparagraph{Waves and Impact of Number of Terminal Pairs.} In addition to providing intermediate solutions in case the time limit is exceeded, the interleaved execution in waves allows us to evaluate the impact of the number of terminal pairs on the entire benchmark set, without re-running the experiments. We call the configuration which takes the minimum of the first $\langle 1,2,3,4 \rangle$ waves finished within the time limit HFC-$\langle 1,6,20,100 \rangle$, respectively. Figure~\ref{fig:experimental:waves} shows the sorted per-hypergraph ratios $1 - \frac{\cut(\mathrm{HFC}\text{-}X)}{\cut(\mathrm{HFC}\text{-}1)}$, which indicate by how much HFC-$\{6,20,100\}$ improved the bipartition of HFC-1. A ratio of zero means HFC-$X$ computed the same solution as HFC-1. In this plot the ratios share the same x-axis, since hypergraphs are sorted lexicographically by the ratios of HFC-$6$, HFC-$20$ and then HFC-$100$, as opposed to performance plots. This is possible since adding terminals only improves the solution. The purely green leg of 224 hypergraphs consists of instances on which HFC-1 still produces the best solutions. This confirms the impact of ensemble terminal pairs on quality. The green and blue leg from 225 to 259 contains the instances where HFC-100 beats HFC-1 but HFC-20 and HFC-6 do not. The first all-color leg from 260 to 311 contains the instances where HFC-100 and HFC-20 beat HFC-1 but HFC-6 does not, and on the remaining 177 instances all three compute smaller cuts than HFC-1.
\FloatBarrier \section{Performance Plots for different Hypergraph Classes}\label{sec:experimental:performanceplots_instanceclasses} \begin{figure} \caption{\centering Sparse Matrices.} \caption{\centering Dual SAT.} \caption{\centering Primal SAT.} \caption{\centering Literal SAT.} \caption{\centering ISPD98 VLSI.} \caption{\centering DAC.} \caption{Performance plots for $\varepsilon = 0.03$ comparing the algorithms on the different hypergraph classes of the benchmark set.} \label{fig:experimental:performanceplots_instanceclasses} \end{figure} \end{document}
\begin{document} \title{Geometrically formal homogeneous metrics of positive curvature} \author{Manuel Amann} \address{Karlsruher Institut f\"ur Technologie\\ 76133 Karlsruhe, Germany} \email{[email protected]} \author{Wolfgang Ziller} \address{University of Pennsylvania, Philadelphia, PA 19104, USA} \email{[email protected]} \thanks{The first author was supported by IMPA and a research grant of the German Research Foundation DFG. The second author was supported by CAPES-Brazil, IMPA, the National Science Foundation and the Max Planck Institute in Bonn.} \begin{abstract} A Riemannian manifold is called geometrically formal if the wedge product of harmonic forms is again harmonic, which implies in the compact case that the manifold is topologically formal in the sense of rational homotopy theory. A manifold admitting a Riemannian metric of positive sectional curvature is conjectured to be topologically formal. Nonetheless, we show that among the homogeneous Riemannian metrics of positive sectional curvature a geometrically formal metric is either symmetric, or a metric on a rational homology sphere. \end{abstract} \maketitle Compact manifolds of positive sectional curvature form an intriguing field of study. On the one hand, there are few known examples, and on the other hand the two main conjectures in the subject, the two Hopf conjectures, are still wide open. The most basic examples of positive curvature are the rank one symmetric spaces $\mathbb{S}^n$, $\mathbb{C\mkern1mu P}^n$, $\mathbb{H\mkern1mu P}^n$ and $\mathrm{Ca}\mathbb{P}^2$.
Homogeneous spaces of positive curvature have been classified \cite{Be,BB}: there are the homogeneous flag manifolds due to Wallach, $W^6=\operatorname{SU}(3)/T^2$, $W^{12}=\operatorname{Sp}(3)/\operatorname{Sp}(1)^3$ and $W^{24}=\mathsf{F}_4/\operatorname{Spin}(8)$, the Berger spaces $B^7=\operatorname{SO}(5)/\operatorname{SO}(3)$ and $B^{13}=\operatorname{SU}(5)/\operatorname{Sp}(2)\cdot \operatorname{S}^1$, and the Aloff--Wallach spaces $W^7_{p,q}=\operatorname{SU}(3)/\operatorname{diag}(z^p,z^q,\bar z^{p+q})$ with \linebreak[4]$\gcd(p,q)=1$, $p\geq q> 0$. See e.g. \cite{Zi2} for a detailed discussion. Furthermore, we have the biquotient examples due to Eschenburg \cite{E1,E2} and Bazaikin \cite{Baz} and the more recent cohomogeneity one example in \cite{De,GVZ}. All the known examples have the following remarkable properties: They are rationally elliptic spaces, i.e.~their rational homotopy groups $\pi_i(M)\otimes \mathbb{Q}$ vanish from a certain degree $i$ on, and the even dimensional ones have positive Euler characteristic. For general simply-connected positively (or more generally non-negatively) curved manifolds, the Bott-Grove-Halperin conjecture claims rational ellipticity, whilst the Hopf conjecture asserts that their Euler characteristic is positive in even dimensions. A (simply-connected) topological space is called (topologically) \emph{formal} if its rational homotopy type is a formal consequence of its rational cohomology algebra, or, equivalently in the case of a manifold, if its real cohomology algebra is weakly equivalent to its de Rham algebra. It is a classical result of rational homotopy theory that rationally elliptic spaces with positive Euler characteristic are formal, see e.g. \cite{FHT}.
In fact, one easily sees that all known examples of positive curvature are formal, in even as well as in odd dimensions. It is thus natural to conjecture that positively curved manifolds are formal in general. We mention here that the situation is different in non-negative curvature. Homogeneous spaces $G/H$ naturally admit non-negative curvature and are rationally elliptic. If $\operatorname{rk} H= \operatorname{rk} G$ they have positive Euler characteristic and are hence formal. On the other hand, in \cite{Ama12b, Kot11} one finds many examples of non-formal homogeneous spaces. Other classical examples of formal spaces are compact symmetric spaces and compact K\"ahler manifolds. In the case of symmetric spaces this simply follows from the fact that harmonic forms are parallel. Thus in \cite{Kot01} the notion of geometric formality was introduced: A Riemannian metric is \emph{geometrically formal} if wedge products of harmonic forms are again harmonic. On a compact manifold the Hodge decomposition implies that a manifold admitting a geometrically formal metric is also topologically formal. See \cite{Bae12} and \cite{Kot12} for some recent results on geometrically formal metrics in dimension 3 and 4, and \cite{Kot01,Kot03,Kot09,Kot11,OP,GN} for obstructions to geometric formality. There are very few known examples of compact geometrically formal manifolds. In fact, to our knowledge they all belong to the following classes (see \cite{Kot01,Kot12,Kot11,Bae12}): \begin{itemize} \item a Riemannian metric all of whose harmonic forms are parallel, \item a homogeneous metric on a manifold whose rational cohomology is isomorphic to the cohomology of $\mathbb{S}^p\times\mathbb{S}^q$ with either $p$ and $q$ both odd, or $p$ even and $q$ odd with $p>q$, \item Riemannian products of the above and finite quotients by a group of isometries.
\end{itemize} In the homogeneous case geometric formality is an obvious consequence of homogeneity, since harmonic forms must be invariant under the identity component of the isometry group. Homogeneous spaces which have the rational cohomology of the product of spheres are classified in \cite{Kr}, and in \cite{Kot11} it was shown that many of them are not homotopy equivalent to symmetric spaces. There are other metrics where all harmonic forms are parallel, besides the compact symmetric spaces. For example, any metric on a rational homology sphere or a K\"ahler metric on a rational $\mathbb{C\mkern1mu P}^n$, e.g. the twistor space of the quaternionic symmetric space $\operatorname{G}_2/\operatorname{SO}(4)$. If one allows the manifold not to be simply connected, there are many such examples, e.g. fake $\mathbb{C\mkern1mu P}^2$ and $\mathbb{C\mkern1mu P}^4$, see \cite{GN}, which are compact quotients of complex hyperbolic space. Although these spaces may be called topologically formal, this property usually does not have the strong consequences known from rational homotopy theory unless the space is nilpotent. For quotients of products, as for example $(M\times \mathbb{R}^n)/\Gamma$ with $M$ geometrically formal, one simply observes that geometric formality is a local property. It is the main result of this article that geometric formality is also rare in positive curvature: \begin{main*}\label{theoA} A homogeneous geometrically formal metric of positive curvature is either symmetric or a metric on a rational homology sphere. \end{main*} In \cite{Kot11}, Theorem 25, it was shown that a metric on a non-trivial $\mathbb{S}^2$ bundle over $\mathbb{C\mkern1mu P}^2$ cannot be geometrically formal. This includes the 6 dimensional flag manifold $W^6$, as well as the inhomogeneous Eschenburg biquotient.
We will show that any homogeneous metric on the other two flag manifolds $W^{12}$ and $W^{24}$ cannot be geometrically formal. Of course, every metric on a sphere is geometrically formal, and every homogeneous metric on $\mathbb{C\mkern1mu P}^{2n}$, $\mathbb{H\mkern1mu P}^n$ and $\mathrm{Ca}\mathbb{\mkern1mu P}^2$ is symmetric. The Berger space $B^7$ is geometrically formal as well, since it is a rational homology sphere. This leaves the Berger space $B^{13}$, the Aloff--Wallach spaces, and the homogeneous metrics on $\mathbb{C\mkern1mu P}^{2n+1}$. For the Aloff--Wallach spaces, it was shown in \cite{Kot11} that the normal homogeneous metric is not geometrically formal, but this metric does not have positive curvature. The recent example of positive curvature in \cite{De,GVZ} is a rational homology sphere and hence geometrically formal. It would be interesting to know if the only other known examples of positive curvature, i.e. the 7 dimensional Eschenburg spaces and 13 dimensional Bazaikin spaces, can admit geometrically formal metrics. They have the same cohomology as $W_{p,q}$ and $B^{13}$, but our methods do not apply in this case since the isometry group is too small. It would also be interesting to have some other examples of homogeneous spaces where some of the homogeneous metrics are geometrically formal. Although the methods in this paper can be used to check this, an example seems to be difficult to find. Any relationship in the cohomology ring puts strong restrictions on a geometrically formal metric. To prove the theorem we use the elementary fact that the de Rham cohomology is isomorphic to the finite dimensional algebra of invariant forms, and hence closed and harmonic forms can be computed explicitly. The Berger space $B^{13}$ has the rational cohomology of $\mathbb{C\mkern1mu P}^2\times \mathbb{S}^9$ and the Aloff--Wallach space $W_{p,q}$ that of $\mathbb{S}^2\times \mathbb{S}^5$. 
Hence there is a unique harmonic 2-form $\eta$, and geometric formality implies that $\eta^3$ resp. $\eta^2$ must be 0 as a form. It turns out that even among the closed invariant forms there are none whose corresponding power vanishes. For $W^{12}$ and $W^{24}$ there are relations in the cohomology ring that contradict geometric formality. In the case of $\mathbb{C\mkern1mu P}^{2n+1}$, the situation is more interesting. Here the condition is that $\eta^k$ must be harmonic for all $k$. But already the harmonic 4-form changes with the metric and is the square of the harmonic 2-form only if the metric is symmetric. We point out that this metric is also almost K\"ahler, hence gives examples of such metrics which are not geometrically formal. In Section 1 we explain some background about homogeneous spaces and their cohomology. In Section 2 we deal with $B^{13}$ and in Section 3 with the Aloff--Wallach spaces. In Section 4 we discuss $W^{12}$ and $W^{24}$, and in Section 5 $\mathbb{C\mkern1mu P}^{2n+1}$. \section{Preliminaries}\label{prelim} We first discuss the methods we will use to prove our main theorem. \noindent Let $M=G/H$ be a homogeneous space with $H$ the stabilizer group at a base point $p_0\in M$. Using a fixed biinvariant metric $Q$ on the Lie algebra ${\mathfrak{g}} $, we define an orthogonal splitting $$ {\mathfrak{g}} ={\mathfrak{h}} +{\mathfrak{m}} \ \ \text{with identification} \ \ {\mathfrak{m}} \simeq T_{p_0}M $$ induced by the action fields $X^*$ via $X\in{\mathfrak{m}} \to X^*(p_0)$. The action of $H$ on $T_{p_0}M$ then becomes the adjoint action ${\ensuremath{\operatorname{Ad}} _H}$ on ${\mathfrak{m}} $.
Choose an $\ensuremath{\operatorname{Ad}} _H$ invariant and $Q$ orthogonal decomposition $$ {\mathfrak{m}} ={\mathfrak{m}} _0\oplus {\mathfrak{m}} _1\oplus\ldots \oplus{\mathfrak{m}} _k $$ such that ${\ensuremath{\operatorname{Ad}} _H}_{|{\mathfrak{m}} _0}=\Id$ and ${\ensuremath{\operatorname{Ad}} _H}_{|{\mathfrak{m}} _i}$ is irreducible. A metric of the form $$g={g_0}_{|{\mathfrak{m}} _0}+\lambda_1Q_{|{\mathfrak{m}} _1} +\lambda_2Q_{|{\mathfrak{m}} _2}+\ldots+\lambda_kQ_{|{\mathfrak{m}} _k}$$ with $g_0$ an inner product on ${\mathfrak{m}} _0$ and $\lambda_i$ positive constants, is then a $G$ invariant metric on $M$. If the $\ensuremath{\operatorname{Ad}} _H$ representations ${\mathfrak{m}} _i$, $i=1,\ldots,k$, are all inequivalent, every $G$ invariant metric has this form. If ${\mathfrak{m}} _i\simeq{\mathfrak{m}} _j$ the inner products between ${\mathfrak{m}} _i$ and ${\mathfrak{m}} _j$ can be described by $1,2$ or $4$ arbitrary constants, depending on whether the representations are real, complex, or quaternionic. We will use the elementary fact that the DeRham cohomology is isomorphic to the cohomology of $G$ invariant forms. By homogeneity this in turn is isomorphic to the cohomology of the complex $$ H^*_{DR}(M)\simeq H^*\left((\Lambda^* {\mathfrak{m}} )^H,d\right) $$ of forms on ${\mathfrak{m}} $ invariant under the isotropy action. The differential of a $k$-form $\omega\in (\Lambda^k {\mathfrak{m}} )^H$ is again $H$ invariant and can be computed via the following formula: \begin{equation}\label{dw} d\omega(u_1,\ldots,u_{k+1})=\sum_{i<j}(-1)^{i+j} \omega([u_i,u_j]_{{\mathfrak{m}} },u_1,\ldots, \hat{u}_i,\ldots, \hat{u}_j,\ldots, u_{k+1}) \end{equation} for $u_i\in{\mathfrak{m}} $, where $[u_i,u_j]_{{\mathfrak{m}} }$ denotes the projection of $[u_i,u_j]$ into ${\mathfrak{m}} $.
On $\Lambda^* {\mathfrak{m}} $ we use the inner product that makes $e_{i_1}\wedge e_{i_2}\wedge \ldots \wedge e_{i_r}$, $i_1< i_2 <\ldots<i_r$, into an orthonormal basis of $\Lambda^r {\mathfrak{m}} $ for any orthonormal basis $e_i$ of ${\mathfrak{m}} $. We denote the codifferential by $\delta$. Since $\langle \omega, d\eta\rangle=\langle\delta \omega , \eta\rangle$, a $G$ invariant form $\omega\in (\Lambda^r {\mathfrak{m}} )^H$ is harmonic if and only if $$ d\omega=0 \ \ \text{ and}\ \ \langle \omega,d\eta\rangle=0 \ \ \text{for all}\ \ \eta\in (\Lambda^{r-1} {\mathfrak{m}} )^H. $$ This reduces the computation of the DeRham cohomology and the harmonic forms to a finite dimensional, purely Lie algebraic computation. The equations are in fact linear in the coefficients of $\omega$ in some basis, and quadratic in the coefficients of the metric. In order to simplify the computation of the differentials $d\omega$ we observe the following. Let $e_i$ be a basis of ${\mathfrak{g}} $ where each basis vector lies either in ${\mathfrak{h}} $ or ${\mathfrak{m}} $ and denote, by abuse of notation, the dual basis of 1-forms again by $e_i$. Although the 1-forms $e_i$ are in general not $\ensuremath{\operatorname{Ad}} (H)$ invariant, and hence do not represent forms on $G/H$, we can nevertheless formally use \eqref{dw} to compute $de_i$. By using the product rule we can then compute $d\omega$ for any $r$-form $\omega=\sum a_{i_1\ldots i_r}\, e_{i_1}\wedge e_{i_2}\wedge\ldots\wedge e_{i_r}$, in particular for the $\ensuremath{\operatorname{Ad}} (H)$ invariant forms. To see that the formula in \eqref{dw} satisfies the product rule, observe that we could replace $[u_i,u_j]_{{\mathfrak{m}} }$ by $[u_i,u_j]$ since the ${\mathfrak{h}} $ component will evaluate to 0. But then it becomes the usual formula for the Lie algebra cohomology of ${\mathfrak{g}} $ and hence satisfies a product rule.
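For 1-forms, with the sign convention of \eqref{dw}, the formula reduces to a single bracket term,
$$ d\omega(u_1,u_2)=-\,\omega([u_1,u_2]_{{\mathfrak{m}} }) \ \ \text{for}\ \ u_1,u_2\in{\mathfrak{m}} , $$
so the differentials of the dual basis 1-forms $e_i$ are determined by the structure constants of the bracket relations, projected to ${\mathfrak{m}} $.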
Notice though that in this generality $d^2\omega$ does not have to be 0, unless $\omega$ is $H$ invariant. This is due to the fact that the proof that it vanishes, in the case of the Lie algebra cohomology, uses the Jacobi identity, which does not hold if we take the ${\mathfrak{m}} $ component of all Lie brackets. \section{The Berger space $B^{13}$}\label{sec01} For the 13 dimensional Berger space $B^{13}=\ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2)\cdot\ensuremath{\operatorname{S}} ^1$, the embedding $\ensuremath{\operatorname{Sp}} (2)\cdot\ensuremath{\operatorname{S}} ^1\subset\ensuremath{\operatorname{SU}} (5)$ is given by $\ensuremath{\operatorname{diag}} (zA,\bar{z}^4)$ for $A\in \ensuremath{\operatorname{Sp}} (2)\subset\ensuremath{\operatorname{SU}} (4)$ and $z\in\ensuremath{\operatorname{S}} ^1$. The manifold $B^{13}$ has the same DeRham cohomology as $\mathbb{C\mkern1mu P}^2\times \mathbb{S}^9$. One can see this for example by using the two homogeneous fibrations $$\ensuremath{\operatorname{S}} ^1\to \ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2)\to B^{13} \ \text{and}\ \ensuremath{\operatorname{SU}} (4)/\ensuremath{\operatorname{Sp}} (2)\to \ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{Sp}} (2) \to \ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{SU}} (4)$$ and the fact that $\ensuremath{\operatorname{SU}} (5)/\ensuremath{\operatorname{SU}} (4)=\mathbb{S}^9$, $\ensuremath{\operatorname{SU}} (4)/\ensuremath{\operatorname{Sp}} (2)=\ensuremath{\operatorname{SO}} (6)/\ensuremath{\operatorname{SO}} (5)=\mathbb{S}^5$ and that $B^{13}$ is simply connected. Thus there exists one harmonic 2-form $\eta$. Geometric formality requires $\eta^2$ to be harmonic, and $\eta^3=0$ on the level of forms.
We actually do not need to explicitly compute the harmonic forms since we will show that there are no closed invariant 2-forms $\omega$ with $\omega^3=0$. To compute the invariant forms, we first make the following observations. $\ensuremath{\operatorname{Sp}} (n)$ acts on ${\mathbb{H}} ^n$ via matrix multiplication and $\ensuremath{\operatorname{Sp}} (n)\cdot \ensuremath{\operatorname{Sp}} (1)$ via $(A,q)v=Avq^{-1}$ for $A\in\ensuremath{\operatorname{Sp}} (n), q\in\ensuremath{\operatorname{Sp}} (1)$ and $v\in {\mathbb{H}} ^n$. It is well known that the algebra $\Lambda^*({\mathbb{H}} ^n)^{\ensuremath{\operatorname{Sp}} (n)}$ of invariant forms is generated by the 3 symplectic forms, corresponding to the K\"ahler forms $\omega_I, \omega_J,\omega_K$ associated to the 3 complex structures coming from right multiplication with $I,J,K\in \ensuremath{\operatorname{Sp}} (1)$. Right multiplication with $\ensuremath{\operatorname{Sp}} (1)$ acts on $\operatorname{span}\{I,J,K\} \simeq{\mathbb{R}} ^3$ via matrix multiplication by $\ensuremath{\operatorname{SO}} (3)$ under the two fold cover $\ensuremath{\operatorname{Sp}} (1)\to \ensuremath{\operatorname{SO}} (3)$. Thus if $\ensuremath{\operatorname{S}} ^1\subset\ensuremath{\operatorname{Sp}} (1)$ is given by $e^{it}$, the algebra \begin{equation}\label{invariant} \Lambda^*({\mathbb{H}} ^n)^{\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1} \text{ is spanned by } \omega_I \text{ and its powers}.
\end{equation} From the inclusions $\ensuremath{\operatorname{Sp}} (2)\cdot\ensuremath{\operatorname{S}} ^1\subset \ensuremath{\operatorname{SU}} (4)\cdot\ensuremath{\operatorname{S}} ^1=\ensuremath{\operatorname{U}} (4)\subset\ensuremath{\operatorname{SU}} (5)$ it easily follows that the decomposition of ${\mathfrak{m}} $ into irreducibles under the action of $H=\ensuremath{\operatorname{Sp}} (2)\cdot \ensuremath{\operatorname{S}} ^1$ is given by ${\mathfrak{m}} =V\oplus W$ with $\dim V=5$ and $\dim W=8$. On $V$ the factor $\ensuremath{\operatorname{S}} ^1$ acts trivially and $\ensuremath{\operatorname{Sp}} (2)$ via matrix multiplication by $\ensuremath{\operatorname{SO}} (5)$ under the two fold cover $\ensuremath{\operatorname{Sp}} (2)\to \ensuremath{\operatorname{SO}} (5)$. On $W$ it acts via $(A,z)v=Avz^{-1}$ with $(A,z)\in \ensuremath{\operatorname{Sp}} (2)\times\ensuremath{\operatorname{S}} ^1$. It follows that $\Lambda^*(V)^H$ is spanned by a 0-form and a 5-form, the volume form $v$, and $\Lambda^*(W)^H$ by $\omega_I$ and its powers by \eqref{invariant}. On the other hand $\Lambda^k(V)\otimes\Lambda^l(W)$ with $k,l>0$ contains no invariant forms since the $\ensuremath{\operatorname{S}} ^1$ factor clearly acts non-trivially. Thus $(\Lambda{\mathfrak{m}} )^H$ is spanned by $v$ and $\omega_I$ as an algebra. Since there is only one invariant 2-form, $\omega_I$ must be harmonic, and similarly $\omega_I^2$ as well. In order to obtain the cohomology ring of $B^{13}$, we need $dv\ne0$, but the only possibility, up to a multiple, is $dv=\omega_I^3$. This implies that $\omega_I^3\ne 0$ and hence no invariant metric can be geometrically formal. \section{The Aloff--Wallach spaces $W_{p,q}$} Let $H=\ensuremath{\operatorname{S}} ^1_{k}=\ensuremath{\operatorname{diag}} (e^{ik_1t}, e^{ik_2t},e^{ik_3t})\subset G=\ensuremath{\operatorname{SU}} (3)$ where $k_i$ are fixed integers with $\sum k_i=0$.
The quotient $G/H=\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{S}} ^1_{k}$ was studied by Aloff--Wallach \cite{AW}, who showed that it admits a homogeneous metric with positive sectional curvature if none of the $k_i$ is 0. We will show that in fact none of the homogeneous metrics, even in this special case, can be geometrically formal. This was shown to be the case for the metric induced by the biinvariant metric on $\ensuremath{\operatorname{SU}} (3)$, but this metric does not have positive curvature. It is well known that the rational cohomology ring of $\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{S}} ^1_{k}$ is that of $\mathbb{S}^2\times\mathbb{S}^5$, but the spaces can be distinguished by a torsion group in $H^4$. Thus there exists one harmonic 2-form $\eta$. To be geometrically formal, we need $\eta^2=0$ on the level of forms. As in the previous case, we will again show that there are no closed 2-forms with square 0, although the computation in this case is much more involved. We choose the following basis for the Lie algebra of $\ensuremath{\operatorname{SU}} (3)$. To describe it, let $E_{ij}$ be the matrix which has a 1 in row $i$ and column $j$, and 0 otherwise. Set \begin{align*} E_1&= E_{12}- E_{21}, & E_2&=-E_{13}+E_{31}, & E_3&=E_{23}-E_{32}\\ F_1&=iE_{12}+i E_{21}, & F_2&=i E_{13}+i E_{31}, & F_3&=i E_{23}+i E_{32}\\ H_1&=iE_{11}-i E_{22}, & H_2&=-i E_{11}+i E_{33}, & H_3&=i E_{22}-i E_{33}. \end{align*} We also choose the biinvariant metric on ${\mathfrak{su}} (3)$ given by $\langle A,B\rangle =-\frac 12 \ensuremath{\operatorname{tr}} (AB)$, in which $E_i,F_i$ are orthonormal. Furthermore, $H_i$ have unit length, are orthogonal to $E_i,F_i$, and $\langle H_i,H_j\rangle=-\frac 12$ for $i\ne j$.
For the Lie brackets we have: \begin{align*} [H_i,E_i]&=2 F_i, & [H_i,F_i]&=-2 E_i, & [H_i,E_j]&=-F_j , & [H_i,F_j]&=E_j\\ [E_i,E_j]&=E_k, & [F_i,F_j]&=-E_k, & [E_i,F_j]&=-F_k , & [E_i,F_i]&=2 H_i \end{align*} where $i,j,k$ is a cyclic permutation of $1,2,3$. For the decomposition of ${\mathfrak{g}} $ we choose $${\mathfrak{g}} ={\mathfrak{h}} + V_0 + V_1 +V_2+V_3$$ where $$ V_0= \operatorname{span}(\varepsilon),\quad V_i= \operatorname{span}(E_i,F_i) \ \text{for}\ i=1,\ldots,3. $$ Here $\varepsilon$ needs to be $Q$-orthogonal to ${\mathfrak{h}} $ and of unit length, i.e. $$ \varepsilon=\sum r_i H_i \ \text{with}\ \ (r_1-r_2)k_1+(r_3-r_1)k_2+(r_2-r_3)k_3=0 \ \text{and}\ \sum r_i^2 -\sum_{i<j} r_ir_j=1. $$ The subspaces $V_i$ are invariant under the isotropy action by $H$. On $V_0$ it acts trivially, and infinitesimally on $V_i$ as: \begin{align*} \operatorname{ad}(\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))E_1&=(\theta_1-\theta_2)F_1, & \operatorname{ad}(\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))F_1&=-(\theta_1-\theta_2)E_1\\ \operatorname{ad}(\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))E_2&=(\theta_3-\theta_1)F_2, & \operatorname{ad}(\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))F_2&=-(\theta_3-\theta_1)E_2\\ \operatorname{ad}(\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))E_3&=(\theta_2-\theta_3)F_3, & \operatorname{ad}(\ensuremath{\operatorname{diag}} (i\theta_1,i\theta_2,i\theta_3))F_3&=-(\theta_2-\theta_3)E_3 \end{align*} where $\theta_i=k_i\cdot t$. For the differential forms we use the basis of 1-forms dual to the basis $\varepsilon,E_i,F_i$ and by abuse of notation use the same letters.
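Indeed, $\varepsilon=\sum r_iH_i=\ensuremath{\operatorname{diag}} (i(r_1-r_2),\, i(r_3-r_1),\, i(r_2-r_3))$, so with ${\mathfrak{h}} $ spanned by $\ensuremath{\operatorname{diag}} (ik_1,ik_2,ik_3)$ one computes
$$ Q\big(\varepsilon, \ensuremath{\operatorname{diag}} (ik_1,ik_2,ik_3)\big)=\tfrac12\big((r_1-r_2)k_1+(r_3-r_1)k_2+(r_2-r_3)k_3\big) $$
and
$$ Q(\varepsilon,\varepsilon)=\tfrac12\big((r_1-r_2)^2+(r_3-r_1)^2+(r_2-r_3)^2\big)=\sum r_i^2-\sum_{i<j} r_ir_j, $$
which are exactly the two conditions above.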
Using \eqref{dw} and the above Lie brackets one easily obtains the following exterior derivatives of 1-forms: \renewcommand{\arraystretch}{1.4} \stepcounter{equation} \begin{table}[ht] \begin{center} \begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $E_1$ & $\hspace{10pt} E_2\wedge E_3-F_2 \wedge F_3+s_1F_1\wedge \varepsilon $ \\ $E_2$ & $\hspace{10pt} E_3\wedge E_1-F_3\wedge F_1+s_2F_2\wedge \varepsilon$ \\ $E_3$ & $\hspace{10pt} E_1\wedge E_2-F_1\wedge F_2+s_3F_3\wedge \varepsilon$ \\ $F_1$ & $-E_2\wedge F_3-F_2\wedge E_3-s_1E_1\wedge \varepsilon$ \\ $F_2$ & $-E_3\wedge F_1-F_3\wedge E_1-s_2E_2\wedge \varepsilon$ \\ $F_3$ & $-E_1\wedge F_2-F_1\wedge E_2-s_3E_3\wedge \varepsilon$\\ $\varepsilon$ & $s_1 E_1\wedge F_1+s_2 E_2\wedge F_2+s_3 E_3\wedge F_3$ \end{tabular} \end{center} \caption{Differentials of one-forms}\label{1forms} \end{table} where $$ s_i=2r_i-r_j-r_k \ \text{with}\ i,j,k \ \text{distinct}. $$ In this computation we need to use the fact that $$ (H_1)_{\mathfrak{m}} =Q(H_1,\varepsilon)\varepsilon=Q\big(H_1,\sum r_iH_i\big)\varepsilon=\big(r_1-\tfrac12 (r_2+r_3)\big)\varepsilon $$ and hence $(2H_i)_{\mathfrak{m}} =s_i\varepsilon $. As explained above, these one-forms are not all well defined on $G/H$ but are useful for computing the exterior derivative of 2-forms via the product formula for forms. The discussion now depends on the values of the 3 integers $k_i$, and we distinguish 3 cases. \subsection{All three $k_i$ are distinct.} Assume that $H=\ensuremath{\operatorname{S}} ^1_{k}=\ensuremath{\operatorname{diag}} (e^{ik_1t}, e^{ik_2t},e^{ik_3t})$ with all $k_i$ distinct. Since the differences $k_i-k_j$ are then also all distinct, the actions of $H$ on $V_i$ are all non-trivial and inequivalent. Hence an invariant metric depends on 4 parameters. The only invariant 1-form is $\varepsilon$, and the only invariant 2-forms are the volume forms of $V_i$, i.e. $\omega_i=E_i\wedge F_i$.
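For the arguments below, note that $\omega_i\wedge\omega_i=0$ and the products $\omega_i\wedge\omega_j$, $i<j$, are linearly independent, so for $\omega=\sum a_i\omega_i$ one has
$$ \omega^2=2\sum_{i<j}a_ia_j\,\omega_i\wedge\omega_j, $$
and hence $\omega^2=0$ forces $a_ia_j=0$ for all $i\ne j$, i.e. at most one coefficient can be nonzero.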
Without having to compute the harmonic forms explicitly, it is clear that an invariant metric cannot be geometrically formal: $\omega^2=0$ implies that at most one of the $a_i$ is nonzero, while a closed form must satisfy $\sum a_i=0$ (see the remark below), so a closed 2-form with vanishing square is 0. \begin{rem*} One easily sees that the form $\omega=\sum a_i\omega_i$ is closed iff $\sum a_i=0$ and harmonic if in addition $\sum a_is_it_i^2=0$, where $t_i$ is the length of $E_i$ and $F_i$. \end{rem*} \subsection{One of the $k_i$ vanishes.} Here we can assume, since cyclic permutations of the $k_i$ and changing the sign of all 3 do not change the homogeneous space, that $(k_1,k_2,k_3)=(0,-1,1)$. These are in fact precisely those Aloff--Wallach spaces which do not admit an invariant metric with positive curvature. Nevertheless we will now show that even here there are no geometrically formal metrics. The action of $\ensuremath{\operatorname{Ad}} _H$ on $V_i$ is a rotation of speed 1 on $V_1$ and $V_2$, and of speed 2 on $V_3$. Thus the space of invariant metrics is 6-dimensional. $\varepsilon$ is still the only invariant 1-form, but now we have 5 invariant 2-forms: $$ \omega_i=E_i\wedge F_i,\ i=1,2, 3, \ \text{and}\ \omega_4=E_1\wedge E_2+F_1\wedge F_2,\ \omega_5=F_1\wedge E_2-E_1\wedge F_2. $$ For $\varepsilon$ we choose $$ \varepsilon=(H_1+2H_2)/\sqrt{3}=\ensuremath{\operatorname{diag}} (-i,-i,2i)/\sqrt{3} \ \text{and hence}\ (s_1,s_2,s_3)=(0,3,-3).
$$ From Table \ref{1forms} we easily obtain the exterior derivatives of the invariant 2-forms: \stepcounter{equation} \begin{table}[ht] \begin{center} \begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $\omega_1$ & $E_1\wedge E_2 \wedge F_3-E_1\wedge E_3 \wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_2$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3 $ \\ $\omega_3$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_4$ & $ 3\,\omega_5\wedge \varepsilon$ \\ $\omega_5$ & $ 3\,\omega_4\wedge \varepsilon$ \end{tabular} \end{center} \caption{Differentials of 2-forms for $(k_1,k_2,k_3)=(0,-1,1)$ } \end{table} Thus the 2-form $\omega=\sum a_i\omega_i$ is closed if and only if $\sum a_i=0$ and $a_4=a_5=0$, and as in the previous case it follows that $\omega^2=0$ implies $a_i=0$ for all $i$. \subsection{Two of the $k_i$ are equal.} Up to permutations, we can assume that $(k_1,k_2,k_3)=(-2,1,1)$. Thus $\ensuremath{\operatorname{Ad}} _H$ acts with speed 3 on $V_1$ and $V_2$, but with opposite orientation, and trivially on $V_3$ and $V_0$. The metric is thus arbitrary on $V_0\oplus V_3$. Since the actions on $V_1$ and $V_2$ are also equivalent, an invariant metric depends on 10 parameters. Now the invariant 1-forms are $\varepsilon,\ E_3$ and $F_3$, and the invariant 2-forms are: $$ \omega_i=E_i\wedge F_i,\ i=1,2, 3, \ \omega_4=E_1\wedge E_2-F_1\wedge F_2,\ \omega_5=F_1\wedge E_2+E_1\wedge F_2,\ \omega_6=E_3\wedge \varepsilon,\ \omega_7=F_3\wedge \varepsilon. $$ For $\varepsilon$ we choose $$ \varepsilon=H_3 \ \text{and hence}\ (s_1,s_2,s_3)=(-1,-1,2).
$$ The differentials of the invariant 2-forms are: \stepcounter{equation} \begin{table}[ht] \begin{center} \begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $\omega_1$ & $E_1\wedge E_2 \wedge F_3-E_1\wedge E_3 \wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_2$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3 $ \\ $\omega_3$ & $E_1\wedge E_2\wedge F_3-E_1\wedge E_3\wedge F_2+E_2\wedge E_3\wedge F_1-F_1\wedge F_2\wedge F_3$ \\ $\omega_4$ & $ -2( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge F_3+2\omega_5 \wedge \varepsilon$ \\ $\omega_5$ & $-2( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge E_3-2\omega_4 \wedge \varepsilon $ \\ $\omega_6$ & $( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge E_3+\omega_4 \wedge \varepsilon$\\ $\omega_7$ & $( E_1 \wedge F_1 +E_2 \wedge F_2) \wedge F_3-\omega_5 \wedge \varepsilon$ \end{tabular} \end{center} \caption{Differentials of 2-forms for $(k_1,k_2,k_3)=(-2,1,1)$ } \end{table} \noindent Thus a 2-form $\omega=\sum a_i\omega_i$ is closed if and only if $$ a_1+a_2+a_3=0,\ -2a_4+a_7=0,\ -2a_5+a_6=0, $$ in other words, $a_1+a_2+a_3=0$, $a_7=2a_4$ and $a_6=2a_5$. This leaves us with a $4$-dimensional space of closed forms in degree $2$. One easily sees that the square of such a closed form is 0 iff all $a_i$ vanish. This finishes the proof for the Aloff--Wallach spaces. \section{The flag manifolds $W^{12}$ and $W^{24}$}\label{flag} The cohomology ring of the 3 flag manifolds $W^{6}$, $W^{12}$ and $W^{24}$ is well known, and can be computed by using Borel's method for the cohomology ring of a homogeneous space $G/H$, see e.g. \cite{Bo1,Bo2}. In our case this is particularly simple since $\operatorname{rank} H=\operatorname{rank} G$ and since we can restrict ourselves to real coefficients. The result is that it is generated by 3 elements $a_1,a_2,a_3\in H^{2k}(M,{\mathbb{R}} )$, where $k=1,2,4$ for the 3 different flag manifolds.
The relations come from the Weyl group invariant polynomials, i.e., the symmetric polynomials in the $a_i$ vanish. If we choose the generators $x=a_1+a_2$ and $y=a_1-a_2$, the cohomology ring is: $$ H^*(M,{\mathbb{R}} )={\mathbb{R}} [x,y]/\big( x^3,\ y^2+3x^2\big) $$ with basis $x,y$ in dimension $2k$, as well as $x^2,xy$ in dimension $4k$, and the fundamental class $y^3$ in dimension $6k$. The two relations $ x^3=0$ and $ y^2=-3x^2$ put strong restrictions on a geometrically formal metric. For $W^6$, the method in \cite{Kot11} used the fact that $y$ must be a symplectic form, whereas $x$ has a kernel, contradicting $ y^2=-3x^2$. This proof does not seem to work when $k>1$. Instead, we restrict ourselves to homogeneous metrics and use the algebra of invariant forms. For all three flag manifolds $G/H$ we have the splitting $$ {\mathfrak{m}} =V_1\oplus V_2\oplus V_3 $$ into $\ensuremath{\operatorname{Ad}} _H$ irreducibles, with $\dim V_i=2k$. Using representation theory, one easily sees that there are no invariant forms in degree $<2k$. In degree $2k$ we clearly have the $\ensuremath{\operatorname{Ad}} _H$ invariant volume forms $\omega_i$ of the modules $V_i$. Some differential must be nonzero since $b_{2k}=2$. For $\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{T}} ^2$ and $\ensuremath{\operatorname{Sp}} (3)/\ensuremath{\operatorname{Sp}} (1)^3$ we also have inner automorphisms (e.g. $\ensuremath{\operatorname{Ad}} (E_{12}-E_{21})$ for $\ensuremath{\operatorname{SU}} (3)/\ensuremath{\operatorname{T}} ^2$) which interchange the 3 modules $V_i$. For $F_4/\operatorname{Spin}(8)$ we have the triality automorphism of $\operatorname{Spin}(8)$. This outer automorphism of $\operatorname{Spin}(8)$ extends to an inner automorphism of $F_4$, see e.g. \cite{WZ}, Theorem 3.2, and takes $V_1$ to $V_2$, $V_2$ to $V_3$, and $V_3$ to $V_1$.
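These relations follow directly from the vanishing of the symmetric polynomials in the $a_i$: from $a_1+a_2+a_3=0$ we get $a_3=-(a_1+a_2)=-x$, the vanishing of the second elementary symmetric polynomial gives $a_1a_2=x^2$, and hence
$$ 0=a_1a_2a_3=-x^3 \quad\text{and}\quad y^2=(a_1-a_2)^2=(a_1+a_2)^2-4a_1a_2=x^2-4x^2=-3x^2. $$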
Thus there exist diffeomorphisms of $G/H$ which interchange the volume forms $\omega_i$, which implies that $d\omega_i\ne 0$ for all $i$. By rescaling the $\omega_i$ if necessary we can assume that $\omega=\sum a_i\omega_i$ is closed iff $a_1+a_2+a_3=0$. From the description of the forms $\omega_i$ it is also clear that $\omega_i^2=0$, that the $\omega_i\wedge \omega_j$, $i<j$, are linearly independent, and that $\operatorname{vol}=\omega_1\wedge \omega_2\wedge \omega_3$ is a volume form. Thus $\omega^3=6a_1a_2a_3 \operatorname{vol}$ is nonzero iff all three $a_i$ are nonzero. Hence $x$ must be one of 3 forms, depending on which $a_i$ vanishes. Assume, say, that $a_3=0$ and hence $x=\omega_1-\omega_2$ up to a multiple. Then $y=\sum a_i\omega_i$ for some nonzero $a_i$ with $\sum a_i=0$. The $2k$-dimensional classes $x$ and $y$ are the only closed invariant forms and are hence harmonic. But then the relation $y^2=-3x^2$ in cohomology must also hold on the level of $4k$-forms. Since $$ x^2=-2\omega_1\wedge \omega_2 \ \text{and}\ y^2=2a_1a_2\omega_1\wedge \omega_2 +2a_1a_3\omega_1\wedge \omega_3+2a_2a_3\omega_2\wedge \omega_3, $$ it follows that $a_1a_3=a_2a_3=0$, which implies that $y$ is a multiple of $x$. But this is not possible since $x$ and $y$ are linearly independent in cohomology. This finishes the proof for the 3 Wallach flag manifolds. \section{The complex projective space $\mathbb{C\mkern1mu P}^{2n+1}$}\label{CPn} For $\mathbb{C\mkern1mu P}^{2n+1}=\ensuremath{\operatorname{SU}} (2n+2)/S(\ensuremath{\operatorname{U}} (2n+1)\ensuremath{\operatorname{U}} (1))$ it is well known that the set of homogeneous metrics can be described as follows, see e.g. \cite{Zi1}. First, observe that $\ensuremath{\operatorname{Sp}} (n+1)\subset \ensuremath{\operatorname{SU}} (2n+2)$ acts transitively on $\mathbb{C\mkern1mu P}^{2n+1}$ with stabilizer $\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1$.
From the inclusions $\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1\subset \ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{Sp}} (1)\subset\ensuremath{\operatorname{Sp}} (n+1)$ we obtain the twistor fibration $\mathbb{S}^2\to \mathbb{C\mkern1mu P}^{2n+1}\to \mathbb{H\mkern1mu P}^{n}$, and every homogeneous metric is a Riemannian submersion metric where one scales the metric induced by a biinvariant metric on $\ensuremath{\operatorname{Sp}} (n+1)$ with a factor $t$ on the fiber. Of course, on $\mathbb{C\mkern1mu P}^{2n+1}$ we have only one harmonic 2-form $\alpha_H$, and a metric is geometrically formal if $\alpha_H^k$ is again harmonic for $k=1,2,\ldots,2n+1$. We will show that already for $\alpha_H^2$ this is only the case for the symmetric metric. We need to explicitly express the invariant forms in some basis. We choose the embedding of $\ensuremath{\operatorname{Sp}} (n)$ in $\ensuremath{\operatorname{Sp}} (n+1)$ as the upper block embedding, i.e. the stabilizer of the last basis vector in its action on ${\mathbb{H}} ^{n+1}$. We first describe a basis of its orthogonal complement. Recall that $E_{i,j}$ is the matrix which has a 1 in row $i$ and column $j$, and 0 otherwise. Set $$ e_1=iE_{n+1,n+1},\ \ e_2=jE_{n+1,n+1},\ \ e_3=kE_{n+1,n+1}, \ \ Y_\alpha=E_{\alpha,n+1}-E_{n+1,\alpha}$$ $$ Y_{\alpha,1}=i E_{\alpha,n+1}+iE_{n+1,\alpha} ,\ \ Y_{\alpha,2}=j E_{\alpha,n+1}+jE_{n+1,\alpha} ,\ \ Y_{\alpha,3}=k E_{\alpha,n+1}+kE_{n+1,\alpha}$$ where $\alpha$ runs from $1$ to $n$. Let $H=\ensuremath{\operatorname{Sp}} (n)\cdot\ensuremath{\operatorname{S}} ^1\subset\ensuremath{\operatorname{Sp}} (n+1)$ where $\ensuremath{\operatorname{S}} ^1=e^{it}\subset \ensuremath{\operatorname{Sp}} (1)$.
Then the orthogonal complement of ${\mathfrak{h}} $ in ${\mathfrak{g}} $ splits as $$ {\mathfrak{m}} = V\oplus W={\mathbb{C}} \oplus{\mathbb{H}} ^n \ \text{with}\ V=\operatorname{span}(e_2,e_3) \ \text{and} \ W=\operatorname{span}( Y_\alpha,Y_{\alpha,1},Y_{\alpha,2},Y_{\alpha,3}), \ \alpha=1,\ldots,n $$ and $\ensuremath{\operatorname{diag}} (A,e^{it})\in H$ acts on ${\mathfrak{m}} $ as $(z,v) \to (e^{2it}z,Ave^{-it})$. $H$ acts irreducibly on $V$ and $W$ and hence the metric depends on 2 parameters. We denote by $\langle \ , \ \rangle_t$ the metric on ${\mathfrak{m}} $ where the $e_i$ have length $t$ and the basis vectors in $W$ have length 1. Extended to a homogeneous metric on $G/H$, the symmetric metric then corresponds to $t=1$. For the ${\mathfrak{m}} $ component of the Lie brackets of vectors in ${\mathfrak{m}} $ we have: $$ [e_2,e_3]_{\mathfrak{m}} =0\ ,\ [e_i,Y_\alpha]_{\mathfrak{m}} =-Y_{\alpha,i}\ ,\ [e_i,Y_{\alpha,i}]_{\mathfrak{m}} =Y_{\alpha}\ ,\ [e_i,Y_{\alpha,j}]_{\mathfrak{m}} =Y_{\alpha,k}\ ,\ [Y_{\alpha},Y_{\alpha,i}]_{\mathfrak{m}} =-2e_i$$ $$ [Y_{\alpha,i},Y_{\alpha,j}]_{\mathfrak{m}} =2e_k\ ,\ [Y_{\alpha},Y_{\beta}]_{\mathfrak{m}} = [Y_{\alpha},Y_{\beta,i}]_{\mathfrak{m}} =[Y_{\alpha,i},Y_{\beta,i}]_{\mathfrak{m}} =[Y_{\alpha,i},Y_{\beta,j}]_{\mathfrak{m}} =0$$ where $i,j,k$ is a cyclic permutation of $1,2,3$ and $\alpha,\beta$ are distinct.
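As a sample computation, write $A=E_{\alpha,n+1}$ and $B=E_{n+1,\alpha}$, so that $Y_\alpha=A-B$ and $Y_{\alpha,2}=j(A+B)$. Since $AB=E_{\alpha,\alpha}$, $BA=E_{n+1,n+1}$ and $A^2=B^2=0$, one finds
$$ [Y_{\alpha},Y_{\alpha,2}]=j\big((A-B)(A+B)-(A+B)(A-B)\big)=2j(E_{\alpha,\alpha}-E_{n+1,n+1})=2jE_{\alpha,\alpha}-2e_2, $$
and the first summand lies in ${\mathfrak{sp}} (n)\subset{\mathfrak{h}} $, so that indeed $[Y_{\alpha},Y_{\alpha,2}]_{\mathfrak{m}} =-2e_2$.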
As in the previous case, we first compute the differentials of 1-forms: \renewcommand{\arraystretch}{1.4} \stepcounter{equation} \begin{table}[ht] \begin{center} \begin{tabular}{c|c} $w$ & $\dif w$ \\ \hline $Y_\alpha$ & $\hspace{10pt} e_2\wedge Y_{\alpha,2}+ e_3\wedge Y_{\alpha,3} $ \\ $Y_{\alpha,1}$ & $\hspace{10pt} e_2\wedge Y_{\alpha,3}- e_3\wedge Y_{\alpha,2}$ \\ $Y_{\alpha,2}$ & $\hspace{0pt} -e_2\wedge Y_{\alpha}\ \ + e_3\wedge Y_{\alpha,1}$ \\ $Y_{\alpha,3}$ & $\hspace{-7pt} -e_2\wedge Y_{\alpha,1}- e_3\wedge Y_{\alpha}$ \\ $e_2$ & $\hspace{10pt}\sum_\alpha ( -2Y_{\alpha}\wedge Y_{\alpha,2}+ 2Y_{\alpha,3}\wedge Y_{\alpha,1})$ \\ $e_3$ & $\hspace{10pt} \sum_\alpha ( -2Y_{\alpha}\wedge Y_{\alpha,3}+ 2Y_{\alpha,1}\wedge Y_{\alpha,2})$ \end{tabular} \end{center} \caption{Differentials of one-forms on $\mathbb{C\mkern1mu P}^{2n+1}$}\label{1formsCPn} \end{table} We now determine the $H$ invariant forms. Clearly, in $\Lambda^*(V)$ we only have the volume element $v=e_2\wedge e_3\in\Lambda^2(V)$. Recall that the algebra $\Lambda^*({\mathbb{H}} ^n)^{\ensuremath{\operatorname{Sp}} (n)}$ of invariant forms is generated by the 3 symplectic forms, corresponding to the K\"ahler forms $\omega_I, \omega_J,\omega_K$ associated to the 3 complex structures coming from right multiplication with $I,J,K\in \ensuremath{\operatorname{Sp}} (1)$ on $W={\mathbb{H}} ^n$. Thus $\Lambda^*(W)^{\ensuremath{\operatorname{Sp}} (n)}$ is generated by: { \fontsize{11}{15} \selectfont $$ \omega_I=\sum_\alpha ( Y_{\alpha}\wedge Y_{\alpha,1}- Y_{\alpha,2}\wedge Y_{\alpha,3})\ , \ \omega_J=\sum_\alpha ( Y_{\alpha}\wedge Y_{\alpha,2}- Y_{\alpha,3}\wedge Y_{\alpha,1}), \ \omega_K=\sum_\alpha ( Y_{\alpha}\wedge Y_{\alpha,3}- Y_{\alpha,1}\wedge Y_{\alpha,2}) $$ } and hence $$ \Lambda^*({\mathfrak{m}} )^{\ensuremath{\operatorname{Sp}} (n)} \ \text{is generated by }\ e_2,\ e_3,\ \omega_I ,\ \omega_J,\ \omega_K.
$$ In this algebra we can identify the $H$ invariant forms by determining the action of the circle in $H$ and diagonalizing it via complexification. On $e_2, e_3$ the circle $e^{it}\in \ensuremath{\operatorname{S}} ^1\subset H$ acts via a rotation $R(2t)$ since it is given by conjugation. On the two-plane spanned by $Y_{\alpha},\ Y_{\alpha,1}$ one easily checks that it acts via a rotation $R(-t)$ and on the two-plane spanned by $Y_{\alpha,2},\ Y_{\alpha,3}$ as a rotation $R(t)$. Hence it acts trivially on $\omega_I$, and on the two-plane spanned by $\omega_J, \omega_K$ it acts as $R(2t)$. This action is diagonal in the basis $e_2+ie_3,e_2-ie_3, \omega_I, \omega_J+i\omega_K, \omega_J-i\omega_K$ and acts via $\theta^2 + (\theta^*)^2 + \Id+ \theta^2 + (\theta^*)^2$. Thus we obtain invariant forms, besides $\omega_I$, by taking real and imaginary parts of $(e_2+ie_3)\wedge(e_2-ie_3)$ and $( \omega_J+i\omega_K)\wedge (\omega_J-i\omega_K)$ as well as $(e_2+ie_3)\wedge(\omega_J-i\omega_K)$. This gives us the following basis for the invariant forms in low degrees: $$ \Lambda^2({\mathfrak{m}} )^H=\operatorname{span}(v,\ \omega_I), \ \text{where }\ v=e_2\wedge e_3 $$ and $$ \Lambda^3({\mathfrak{m}} )^H=\operatorname{span}(\beta_1,\beta_2) \ \text{where }\ \beta_1=e_2\wedge \omega_J+e_3\wedge \omega_K \ \text{and }\ \beta_2=e_2\wedge \omega_K-e_3\wedge \omega_J $$ and the invariant 4-forms $$ \Lambda^4({\mathfrak{m}} )^H=\operatorname{span}(\omega_I^2,\ v\wedge \omega_I,\ \omega_J^2+ \omega_K^2). $$ Notice that in the above language $de_2=- 2\omega_J $ and $de_3=- 2\omega_K $, and that we have the relations $v\wedge v=v\wedge\beta_1=v\wedge\beta_2=0$. Using Table \ref{1formsCPn} and the product formula, one easily sees that: $$ dv=2\beta_2,\ \ d\omega_I=2\beta_2,\ \ d\omega_J=2e_3\wedge\omega_I,\ \ d\omega_K=-2e_2\wedge\omega_I, \ \ d\beta_1=-2(\omega_J^2+ \omega_K^2 ) -4 v\wedge\omega_I. $$ Thus $\alpha_H=v-\omega_I$ is the only closed 2-form, which is hence harmonic.
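As an illustration of how these differentials follow from Table \ref{1formsCPn} and the product rule, the first one can be checked directly:

```latex
\begin{align*}
dv = d(e_2\wedge e_3) &= de_2\wedge e_3 - e_2\wedge de_3
     = -2\,\omega_J\wedge e_3 + 2\,e_2\wedge\omega_K \\
   &= 2\,(e_2\wedge\omega_K - e_3\wedge\omega_J) = 2\beta_2 ,
\end{align*}
```

using $de_2=-2\omega_J$, $de_3=-2\omega_K$, and the fact that a 2-form and a 1-form commute under the wedge product.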
We now claim that $\alpha_H^2$ can only be harmonic for the symmetric metric. For this, we compute the differentials of the 4-forms: $$ d\omega_I^2=4\omega_I\wedge\beta_2,\ d(v\wedge \omega_I)=2\omega_I\wedge\beta_2,\ d(\omega_J^2+ \omega_K^2)=-4\omega_I\wedge\beta_2. $$ Thus we have two closed 4-forms: $$ \alpha_H^2=\omega_I^2-2v\wedge \omega_I,\ \ \text{and }\ \omega_J^2+ \omega_K^2+2 v\wedge \omega_I, $$ and we need to determine which linear combination is harmonic. For this it needs to be orthogonal to the derivative of the invariant 3-forms, which is $d\beta_1$ since $d\beta_2=\frac12\, d(d\omega_I)=0$. Thus the 4-form is harmonic iff $$ \langle a(\omega_I^2-2v\wedge \omega_I ) + b ( \omega_J^2+ \omega_K^2+2 v\wedge \omega_I), (\omega_J^2+ \omega_K^2 ) +2 v\wedge\omega_I \rangle_t=0. $$ Observe that the inner products between the 3 symplectic forms are all the same, say equal to $L$, and that they are orthogonal to $v\wedge \omega_I$. Furthermore, $\langle v\wedge\omega_I,v\wedge\omega_I\rangle= \langle v, v\rangle\cdot \langle \omega_I,\omega_I\rangle= t^2L$. Thus we need $$ 2aL+4bL-4bt^2L=2L(a+2b(1-t^2))=0. $$ But this implies that the only value of $t$ where $\alpha_H^2$ is harmonic is $t=1$, i.e. the symmetric metric. This finishes the proof of our main Theorem. We note that in the terminology from \cite{Na} we proved that a homogeneous metric on $\mathbb{C\mkern1mu P}^{2n+1}$ which is 2-formal, i.e., one for which the product of harmonic 2-forms is again harmonic, is already symmetric. We remark further that the metrics with positive sectional curvature are described as follows. For $B^{13}$ and $ \mathbb{C\mkern1mu P}^{2n+1}$ we consider the fibrations $\mathbb{S}^2\to \mathbb{C\mkern1mu P}^{2n+1}\to \mathbb{H\mkern1mu P}^{n}$ and $\mathbb{R\mkern1mu P}^5\to B^{13}\to \mathbb{C\mkern1mu P}^4$ and scale the fibers with $t$. The metric then has positive curvature iff $0<t<\frac43$.
For the more complicated description of the homogeneous positively curved metrics on $W_{p,q}$ see \cite{Pu}, for the ones on the flag manifolds see \cite{Va}, and for the ones on spheres see \cite{VZ}. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \begin{thebibliography}{10000} \bibitem[Am]{Ama12b} M.~Amann, \emph{Non-formal homogeneous spaces}, Math. Z. \textbf{274} (2013), 1299--1325. \bibitem[AW]{AW} S.~Aloff and N.~Wallach, \emph{An infinite family of 7-manifolds admitting positively curved Riemannian structures}, Bull. Amer. Math. Soc. \textbf{81} (1975), 93--97. \bibitem[Ba]{Bae12} C.~B\"ar, \emph{Geometrically formal 4-manifolds with nonnegative sectional curvature}, arXiv:1212.1325v2, 2012. \bibitem[Baz]{Baz} Y.~Bazaikin, \emph{On a family of $13$-dimensional closed Riemannian manifolds of positive curvature}, Siberian Math. J. \textbf{37} (1996), 1068--1085. \bibitem[BB]{BB} L.~B\'erard Bergery, \emph{Les vari\'et\'es riemanniennes homog\`enes simplement connexes de dimension impaire \`a courbure strictement positive}, J. Math. Pures Appl. \textbf{55} (1976), 47--68. \bibitem[Bo1]{Bo1} A.~Borel, \emph{Sur la cohomologie des espaces principaux et des espaces homog\`enes de groupes de Lie compacts}, Ann. of Math. \textbf{57} (1953), 115--207. \bibitem[Bo2]{Bo2} A.~Borel, \emph{Sur l'homologie et la cohomologie des groupes de Lie compacts connexes}, Amer. J. Math. \textbf{76} (1954), 273--342. \bibitem[Be]{Be} M.~Berger, \emph{Les vari\'et\'es riemanniennes homog\`enes normales simplement connexes \`a courbure strictement positive}, Ann. Scuola Norm. Sup. Pisa \textbf{15} (1961), 179--246. \bibitem[De]{De} O.~Dearricott, \emph{A 7-manifold with positive curvature}, Duke Math. J. \textbf{158} (2011), 307--346. \bibitem[E1]{E1} J.~H.~Eschenburg, \emph{New examples of manifolds with strictly positive curvature}, Invent. Math. \textbf{66} (1982), 469--480.
\bibitem[E2]{E2} J.~H.~Eschenburg, \emph{Freie isometrische Aktionen auf kompakten Lie-Gruppen mit positiv gekr\"ummten Orbitr\"aumen}, Schriftenr. Math. Inst. Univ. M\"unster \textbf{32} (1984). \bibitem[FHT]{FHT} Y.~F\'elix, S.~Halperin and J.-C.~Thomas, \emph{Rational homotopy theory}, Graduate Texts in Mathematics, Vol. 205, Springer-Verlag, New York, 2001. \bibitem[GN]{GN} J.-F.~Grosjean and P.-A.~Nagy, \emph{On the cohomology algebra of some classes of geometrically formal manifolds}, Proc. London Math. Soc. \textbf{98} (2009), 607--630. \bibitem[GVZ]{GVZ} K.~Grove, L.~Verdiani and W.~Ziller, \emph{An exotic $T_1\mathbb{S}^4$ with positive curvature}, Geom. Funct. Anal. \textbf{21} (2011), 499--524. \bibitem[Ko1]{Kot01} D.~Kotschick, \emph{On products of harmonic forms}, Duke Math. J. \textbf{107} (2001), 521--531. \bibitem[Ko2]{Kot12} D.~Kotschick, \emph{Geometric formality and non-negative scalar curvature}, arXiv:1212.3317, 2012. \bibitem[KT1]{Kot03} D.~Kotschick and S.~Terzi\'c, \emph{On formality of generalized symmetric spaces}, Math. Proc. Cambridge Philos. Soc. \textbf{134} (2003), 491--505. \bibitem[KT2]{Kot09} D.~Kotschick and S.~Terzi\'c, \emph{Chern numbers and the geometry of partial flag manifolds}, Comment. Math. Helv. \textbf{84} (2009), 587--616. \bibitem[KT3]{Kot11} D.~Kotschick and S.~Terzi\'c, \emph{Geometric formality of homogeneous spaces and biquotients}, Pacific J. Math. \textbf{249} (2011), 157--176. \bibitem[Kr]{Kr} L.~Kramer, \emph{Homogeneous spaces, Tits buildings, and isoparametric hypersurfaces}, Mem. Amer. Math. Soc. \textbf{752}, American Mathematical Society, Providence, RI, 2002. \bibitem[Na]{Na} P.-A.~Nagy, \emph{On length and product of harmonic forms in K\"ahler geometry}, Math. Z. \textbf{254} (2006), 199--218. \bibitem[OP]{OP} L.~Ornea and M.~Pilca, \emph{Remarks on the product of harmonic forms}, Pacific J. Math. \textbf{250} (2011), 353--363.
\bibitem[PW]{PW} P.~Petersen and F.~Wilhelm, \emph{An exotic sphere with positive sectional curvature}, Preprint, 2008. \bibitem[Pr]{Pr} G.~Prasad and S.-K.~Yeung, \emph{Arithmetic fake projective spaces and arithmetic fake Grassmannians}, Amer. J. Math. \textbf{131} (2009), 379--407. \bibitem[P\"{u}]{Pu} T.~P\"{u}ttmann, \emph{Optimal pinching constants of odd dimensional homogeneous spaces}, Invent. Math. \textbf{138} (1999), 631--684. \bibitem[Va]{Va} F.~M.~Valiev, \emph{Precise estimates for the sectional curvatures of homogeneous Riemannian metrics on Wallach spaces}, Sib. Mat. Zhurn. \textbf{20} (1979), 248--262. \bibitem[VZ]{VZ} L.~Verdiani and W.~Ziller, \emph{Positively curved homogeneous metrics on spheres}, Math. Z. \textbf{261} (2009), 473--488. \bibitem[Wa]{Wa} N.~Wallach, \emph{Compact homogeneous Riemannian manifolds with strictly positive curvature}, Ann. of Math. \textbf{96} (1972), 277--295. \bibitem[Zi1]{Zi1} W.~Ziller, \emph{Homogeneous Einstein metrics on spheres and projective spaces}, Math. Ann. \textbf{259} (1982), 351--358. \bibitem[WZ]{WZ} M.~Wang and W.~Ziller, \emph{On isotropy irreducible Riemannian manifolds}, Acta Math. \textbf{166} (1991), 223--261. \bibitem[Zi2]{Zi2} W.~Ziller, \emph{Examples of Riemannian manifolds with nonnegative sectional curvature}, in: Metric and Comparison Geometry, Surv. Diff. Geom. 11, eds. K.~Grove and J.~Cheeger, (2007), 63--102. \end{thebibliography} \end{document}
\begin{document} \title{Applications of Kronecker's limit formula for elliptic Eisenstein series} \author{Jay Jorgenson \and Anna-Maria von Pippich \and Lejla Smajlovi\'{c} \footnote{\noindent The first named author acknowledges grant support from the NSF and PSC-CUNY. We thank Professor Floyd Williams for making available to us the unpublished dissertation \cite{Vassileva96} which was written by his student I. N. Vassileva. The results in the manuscript were of great interest to us, and we hope the document will become available to the mathematical community.}} \maketitle \begin{abstract}\noindent We develop two applications of the Kronecker limit formula associated to elliptic Eisenstein series: A factorization theorem for holomorphic modular forms, and a proof of Weil's reciprocity law. Several examples of the general factorization results are computed, specifically for certain moonshine groups, congruence subgroups, and, more generally, non-compact subgroups with one cusp. In particular, we explicitly compute the Kronecker limit function associated to certain elliptic points for a few small level moonshine groups. \end{abstract} \vskip .15in \section{Introduction and statement of results} \subsection{Non-holomorphic Eisenstein series.} Let $\Gamma$ be a Fuchsian group of the first kind which acts on the hyperbolic upper half-plane $\mathbb H$ by fractional linear transformations, and let $M = \Gamma \backslash \mathbb H$ be the finite volume quotient. One can view $M$ as a finite volume hyperbolic Riemann surface, possibly with cusps and elliptic fixed points. In a slight abuse of notation, we will use $M$ to denote both the Riemann surface as well as a (Ford) fundamental domain of $\Gamma$ acting on $\mathbb H$. The abelian subgroups of $\Gamma$ are classified as three distinct types: parabolic, hyperbolic and elliptic. Accordingly, there are three types of scalar-valued non-holomorphic Eisenstein series, whose definitions we now recall.
Parabolic subgroups are characterized by having a unique fixed point $P$ on the extended upper-half plane $\widehat{\mathbb H}$. The fixed point $P$ is known as a cusp of $M$, and the associated parabolic subgroup is denoted by $\Gamma_{P}$. The parabolic Eisenstein series ${\cal E}^{\mathrm{par}}_{P}(z,s)$ associated to $P$ is a defined for $z\in M$ and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$, by the series \begin{equation*} {\cal E}^{\mathrm{par}}_{P}(z,s) = \sum\limits_{\eta \in \Gamma_{P}\backslash \Gamma}\textrm{Im}(\sigma_{P}^{-1}\eta z)^{s}, \end{equation*} where $\sigma_{P}$ is the scaling matrix for the cusp $P$, i.e. the element of $\mathrm{PSL}_2(\mathbb{R})$ such that, when extending the action of $\sigma_{P}$ to $\widehat{\mathbb H}$, we have that $\sigma_{P}\infty = P$. Hyperbolic subgroups have two fixed points on the extended upper-half plane $\widehat{\mathbb H}$. Let us denote a hyperbolic subgroup by $\Gamma_{\gamma}$ for $\gamma \in \Gamma$, and let $\mathcal{L}_{\gamma}$ signify the geodesic path in $\mathbb H$ connecting the two fixed points of hyperbolic element $\gamma$. Following Kudla and Millson from \cite{KM79}, one defines a scalar-valued hyperbolic Eisenstein series for $z\in M$ and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$ by the series \begin{equation}\label{hyp_eisen} {\cal E}^{\mathrm{hyp}}_{\gamma}(z,s) =\sum\limits_{\eta \in \Gamma_{\gamma}\backslash \Gamma} \cosh(d_{\mathrm{hyp}}(\eta z, \mathcal{L}_{\gamma}))^{-s}, \end{equation} where $d_{\mathrm{hyp}}(\eta z, \mathcal{L}_{\gamma})$ is the hyperbolic distance from the point $\eta z$ to $\mathcal{L}_{\gamma}$. Elliptic subgroups have finite order and have a unique fixed point within $\mathbb H$. In fact, for any point $w \in M$, there is an elliptic subgroup $\Gamma_{w}$ which fixes $w$, where in all but a finite number of cases $\Gamma_{w}$ is the identity element. 
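For $\Gamma = \mathrm{PSL}_2(\mathbb{Z})$ and the cusp $P=\infty$ (so $\sigma_P = \mathrm{Id}$), the series above reduces to $\tfrac12\sum_{\gcd(c,d)=1} \Im(z)^s/|cz+d|^{2s}$, and its $\Gamma$-invariance can be checked numerically by brute-force truncation. The following sketch is illustrative only (the function name, truncation bound and test point are our own choices); the symmetric truncation is exactly invariant under $z\mapsto -1/z$, since $(c,d)\mapsto(d,-c)$ permutes the index set, but only approximately invariant under $z\mapsto z+1$.

```python
from math import gcd

def eisenstein_parabolic(z: complex, s: float, N: int = 200) -> float:
    """Truncation of the parabolic Eisenstein series for PSL(2,Z) at infinity:
    (1/2) * sum over coprime (c, d) with |c|, |d| <= N of Im(z)^s / |c z + d|^(2 s)."""
    y = z.imag
    total = 0.0
    for c in range(-N, N + 1):
        for d in range(-N, N + 1):
            if (c, d) != (0, 0) and gcd(c, d) == 1:
                total += y**s / abs(c * z + d) ** (2 * s)
    return total / 2.0  # each coset is represented by the two pairs +-(c, d)

z = 0.3 + 1.1j
e0 = eisenstein_parabolic(z, 2.0)
e_inv = eisenstein_parabolic(-1 / z, 2.0)   # exact for symmetric truncation
e_shift = eisenstein_parabolic(z + 1, 2.0)  # agrees only up to truncation error
```

Increasing `N` shrinks the truncation error in the last comparison at roughly the rate $O(N^{-2})$ for $s = 2$.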
Elliptic Eisenstein series were defined in an unpublished manuscript from 2004 by Jorgenson and Kramer and were studied in depth in the 2010 dissertation \cite{vP10} by von Pippich. Specifically, for $z\in M$, $z\not=w$, and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$, the elliptic Eisenstein series is defined by \begin{equation}\label{ell_eisen} {\cal E}^{\textrm{ell}}_{w}(z,s) =\sum\limits_{\eta \in \Gamma_{w}\backslash \Gamma} \sinh(d_{\mathrm{hyp}}(\eta z, w))^{-s}, \end{equation} where $d_{\mathrm{hyp}}(\eta z, w)$ denotes the hyperbolic distance from $\eta z$ to $w$. \subsection{Known properties and relations} There are some fundamental differences between the three types of Eisenstein series defined above. Hyperbolic Eisenstein series are in $L^{2}(M)$, whereas parabolic and elliptic series are not. Elliptic Eisenstein series are defined as a sum over cosets of a finite subgroup of $\Gamma$, and indeed the series (\ref{ell_eisen}) can be extended to a sum over all of $\Gamma$, which would introduce a multiplicative factor equal to the order of $\Gamma_{w}$. However, the hyperbolic and parabolic series are necessarily formed by sums over cosets of infinite subgroups of $\Gamma$. Parabolic Eisenstein series are eigenfunctions of the hyperbolic Laplacian; however, elliptic and hyperbolic Eisenstein series satisfy a differential-difference equation which involves the value of the series at $s+2$. Despite their differences, there are several intriguing ways in which the Eisenstein series interact. Since the hyperbolic Eisenstein series are in $L^{2}(M)$, the expression (\ref{hyp_eisen}) admits a spectral expansion which involves the parabolic Eisenstein series; see \cite{JKvP10} and \cite{KM79}. If one considers a degenerating sequence of Riemann surfaces obtained by pinching a geodesic, then the associated hyperbolic Eisenstein series converge to parabolic Eisenstein series on the limit surface; see \cite{Fa07} and \cite{GJM08}.
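Both \eqref{hyp_eisen} and \eqref{ell_eisen} are built from the hyperbolic distance, which on the upper half-plane is given in closed form by $\cosh d_{\mathrm{hyp}}(z,w) = 1 + |z-w|^{2}/(2\,\Im(z)\Im(w))$. A minimal numerical sketch of this formula and of the identity-coset term of \eqref{ell_eisen} (the function names are our own):

```python
import math

def d_hyp(z: complex, w: complex) -> float:
    """Hyperbolic distance on the upper half-plane:
    cosh d(z, w) = 1 + |z - w|^2 / (2 Im(z) Im(w))."""
    return math.acosh(1.0 + abs(z - w) ** 2 / (2.0 * z.imag * w.imag))

def elliptic_term(z: complex, w: complex, s: float) -> float:
    """Contribution of the identity coset to the elliptic Eisenstein series:
    sinh(d_hyp(z, w))^(-s), for z != w and Re(s) > 1."""
    return math.sinh(d_hyp(z, w)) ** (-s)

# Along the imaginary axis the distance is log(y2 / y1):
d1 = d_hyp(1j, 2j)
# The distance is invariant under the Moebius map z -> -1/z:
d2 = d_hyp(-1 / (0.2 + 1.0j), -1 / (-0.5 + 2.0j))
d3 = d_hyp(0.2 + 1.0j, -0.5 + 2.0j)
```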
If one studies a family of elliptically degenerating surfaces obtained by re-uniformizing at a point with increasing order, then the corresponding elliptic Eisenstein series converge to parabolic Eisenstein series on the limit surface; see \cite{GvP09}. Finally, there are some basic similarities amongst the series. Each series admits a meromorphic continuation to all $s\in\mathbb{C}$. The poles of the meromorphic continuations have been identified and are closely related, in all cases involving data associated to the continuous and non-cuspidal discrete spectrum of the hyperbolic Laplacian and, for hyperbolic and elliptic series, involving data associated to the cuspidal spectrum as well. Finally, and most importantly for this article, the hyperbolic and elliptic Eisenstein series are holomorphic at $s=0$, and for all known instances, the parabolic Eisenstein series also is holomorphic at $s=0$. In all these cases, the value of each Eisenstein series at $s=0$ is a constant as a function of $z$. The coefficient of $s$ in the Taylor series expansion about $s=0$ shall be called the Kronecker limit function. \subsection{Kronecker limit functions} The classical Kronecker's limit formula is the following statement, which we quote from \cite{Siegel80}. 
If we consider the case when $\Gamma = \textrm{PSL}_2(\mathbb{Z})$, then \begin{align*} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= \frac{3}{\pi(s-1)} -\frac{1}{2\pi}\log\bigl(|\Delta(z)|\Im(z)^{6}\bigr)+C+O(s-1) \,\,\,\,\,\textrm{as $s \rightarrow 1$,} \end{align*} where $C=6(1-12\,\zeta'(-1)-\log(4\pi))/\pi$, and with Dedekind's Delta function $\Delta(z)$ given by $$ \Delta(z) = \left[q_{z}^{1/24}\prod\limits_{n=1}^{\infty}\left(1 - q_{z}^{n}\right)\right]^{24} = \eta(z)^{24} \,\,\,\,\,\textrm{with $q_{z} = e^{2\pi i z}$.} $$ By employing the well-known functional equation for $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)$, Kronecker's limit formula can be reformulated as \begin{equation*} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= 1+ \log\bigl(|\Delta(z)|^{1/6}\Im(z)\bigr)s+O(s^2) \,\,\,\,\,\textrm{as $s \rightarrow 0$.} \end{equation*} For general Fuchsian groups of the first kind, Goldstein \cite{Go73} studied the analogue of Kronecker's limit formula associated to parabolic Eisenstein series. We will use the results from \cite{Go73} throughout this article. The hyperbolic Eisenstein series in \cite{KM79} are form-valued, and the series are defined by an infinite sum which converges for $\textrm{Re}(s) > 0$. The main result in \cite{KM79} is that the form-valued hyperbolic Eisenstein series is holomorphic at $s=0$, and the value is equal to the harmonic form that is the Poincar\'e dual to the one-cycle in the homology group $H_{1}(M,\mathbb R)$ corresponding to the hyperbolic geodesic $\gamma$ fixed by $\Gamma_{\gamma}$. The analogue of Kronecker's limit formula for elliptic Eisenstein series was first proved in \cite{vP10} and \cite{vP15}.
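As a numerical sanity check on the objects entering these formulas, the weight-twelve modularity $\Delta(-1/z) = z^{12}\,\Delta(z)$ of Dedekind's Delta function can be verified directly from the product expansion above; the test point and truncation below are arbitrary choices.

```python
import cmath

def dedekind_delta(z: complex, terms: int = 200) -> complex:
    """Dedekind's Delta via q * prod_{n=1}^{terms} (1 - q^n)^24, q = exp(2 pi i z);
    the tail is negligible once Im(z) is bounded away from zero."""
    q = cmath.exp(2j * cmath.pi * z)
    prod = 1 + 0j
    for n in range(1, terms + 1):
        prod *= (1 - q**n) ** 24
    return q * prod

z = 0.25 + 1.0j
lhs = dedekind_delta(-1 / z)
rhs = z**12 * dedekind_delta(z)
```

Since $|q| = e^{-2\pi\Im(z)}$, the truncated product already agrees with the infinite one to machine precision at these points.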
In \cite{vP10} and \cite{vP15} it is shown that at $s=0$ the series (\ref{ell_eisen}) admits the Laurent expansion \begin{align}\label{Kronecker_elliptic} \mathrm{ord}(w)\,\mathcal{E}^{\mathrm{ell}}_{w}(z,s)&- \frac{2^{s}\sqrt{\pi}\,\Gamma(s-\frac{1}{2})}{\Gamma(s)} \sum\limits_{k=1}^{p_{\Gamma}}\mathcal{E}^{\mathrm{par}}_{p_{k}}(w,1-s) \,\mathcal{E}^{\mathrm{par}}_{p_{k}}(z,s)= \notag \\ &-c -\log\bigl(|H_{\Gamma}(z,w)|^{\mathrm{ord}(w)}(\Im(z))^{c}\bigr)\cdot s+O(s^2)\,\,\,\,\,\textrm{as $s \rightarrow 0$,} \end{align} where $p_k$, $k=1,\ldots, p_{\Gamma}$, are the cusps of $M$, $c=2\pi/\vol_{\mathrm{hyp}}(M)$, and $H_{\Gamma}(z,w)$ is a holomorphic automorphic function with respect to $\Gamma$ which vanishes only when $z=\eta w$ for some $\eta\in\Gamma$. Two explicit computations are given in \cite{vP10} and \cite{vP15} for $\Gamma = \PSL_2(\mathbb{Z})$ when considering the elliptic Eisenstein series $\mathcal{E}^{\mathrm{ell}}_{w}(z,s)$ associated to the points $w=i$ and $w=\rho = (1+i\sqrt{3})/2$. In these cases, the elliptic Kronecker limit function $H_{\Gamma}(z,w)$ at the points $w=i$ and $w=\rho$ is such that \begin{equation}\label{elliptic_at_i} \abs{H_{\Gamma}(z,i)}= \exp(-B_i)\abs{E_6(z)}, \text{ where } B_i=-3(24\zeta'(-1)-\log(2\pi)+4\log\Gamma(1/4)) \end{equation} and \begin{equation}\label{elliptic_at_rho} \abs{H_{\Gamma}(z,\rho)}=\exp(-B_{\rho})\abs{E_4(z)}, \text{ where } B_{\rho}=-2(24\zeta'(-1)-2\log(2\pi/\sqrt{3})+6\log\Gamma(1/3)).
\end{equation} The Kronecker limit formula for elliptic Eisenstein series translates into the asymptotic formulas \begin{equation}\label{elliptic Eis_at_i} \mathcal{E}^{\mathrm{ell}}_{i}(z,s)= -\log(\vert E_{6}(z)\vert \vert\Delta(z)\vert^{-1/2})\cdot s + O(s^{2}) \,\,\,\,\,\text{\rm as $s\rightarrow 0$,} \end{equation} and \begin{equation}\label{elliptic Eis_at_rho} \mathcal{E}^{\mathrm{ell}}_{\rho}(z,s)= -\log(\vert E_{4}(z)\vert \vert\Delta(z)\vert^{-1/3})\cdot s + O(s^{2}) \,\,\,\,\,\text{\rm as $s\rightarrow 0$,} \end{equation} where $E_{4}$ and $E_{6}$ are the classical holomorphic Eisenstein series on $\PSL_2(\mathbb{Z})$ of weight four and six, respectively. Before continuing, let us state what we believe to be an interesting side comment. The Kronecker limit function associated to elliptic Eisenstein series is naturally defined as the coefficient of $s$ in the Laurent expansion of the elliptic Eisenstein series near $s=0$. As we show below, one can realize the Kronecker limit function for parabolic Eisenstein series for groups with one cusp as the coefficient of $s$ in the Laurent expansion of the parabolic Eisenstein series at $s=0$. One has yet to study the Laurent expansion near $s=0$, in particular the coefficient of $s$, for the scalar-valued hyperbolic Eisenstein series; for that matter, we have not fully understood the analogous question for the vector of parabolic Eisenstein series for general groups. We expect that one can develop a systematic theory by focusing on the coefficients of $s$ in all cases. \subsection{Important comment and assumption}\label{subsection_assumption} At this time, we do not have a complete understanding of the behavior of the parabolic Eisenstein series ${\cal E}^{\mathrm{par}}_{P}(z,s)$ near $s=0$. If the group has one cusp, the functional equation of the Eisenstein series shows that ${\cal E}^{\mathrm{par}}_{P}(z,0)=1$. In notation to be set below, its scattering determinant is zero at $s=0$.
However, this is not true when there is more than one cusp. For example, on page 536 of \cite{He83}, the author computes the scattering matrix for $\Gamma_{0}(N)$ for square-free $N$, from which it is clear that $\Phi(s)$ is holomorphic but not zero at $s=0$. Specifically, it remains to determine if the parabolic Eisenstein series is holomorphic at $s=0$, which is a question we were unable to answer in complete generality. \vskip .10in \it Throughout this article, we assume that ${\cal E}^{\mathrm{par}}_{P}(z,s)$ is holomorphic at $s=0$. \rm \vskip .10in \noindent The assumption is true in all the instances where specific examples are developed. \subsection{Main results} The purpose of the present paper is to further study the Kronecker limit function associated to elliptic Eisenstein series. We develop two applications. To begin, we examine the relation (\ref{Kronecker_elliptic}) and study the contribution near $s=0$ of the term involving the parabolic Eisenstein series. As with the parabolic Eisenstein series, the resulting expression is particularly simple in the case when the group $\Gamma$ has one cusp. However, in all cases, we obtain an asymptotic formula for $\mathcal{E}^{\mathrm{ell}}_{w}(z,s)$ near $s=0$ which allows us to prove asymptotic bounds for the elliptic Kronecker limit function in any parabolic cusp associated to $\Gamma$. As a consequence, we are able to prove the main result of this article, namely a factorization theorem which expresses holomorphic forms on $M$ of arbitrary weight as products of the elliptic Kronecker limit functions. The product formulas are developed in detail in the case of so-called moonshine groups, which are discrete groups obtained by adding the Fricke involutions to the congruence subgroups $\Gamma_{0}(N)$. As an application of the factorization theorem, we establish further examples of relations similar to (\ref{elliptic_at_i}), (\ref{elliptic_at_rho}), \eqref{elliptic Eis_at_i} and \eqref{elliptic Eis_at_rho}. 
For example, the moonshine group $\Gamma = \overline{\Gamma_0(2)^+} = \Gamma_0(2)^+/\{\pm \textrm{Id}\}$ has $e_{2}=1/2 + i/2$ as a fixed point of order four. In section 6.2, we prove that the elliptic Kronecker limit function $H_2(z,e_2)$ associated to the point $e_2$ is such that $$ \abs{H_2(z,e_2)} = \exp(-B_{2,e_2})\abs{E_{4}^{(2)}(z)}^{1/2}, $$ where $E_{4}^{(2)}(z)$ is the weight four holomorphic Eisenstein series associated to $\Gamma_{0}(2)^{+}$ and $$ B_{2,e_2}=- \left( 24\zeta'(-1) + \log(8\pi^2) - \frac{11}{6} \log 2 +\frac{1}{12} \log\left( \left| \Delta(1/2 + i/2) \cdot \Delta(1+i) \right| \right)\right). $$ In this case, the Kronecker limit formula for the elliptic Eisenstein series $\mathcal{E}^{\mathrm{ell}}_{e_{2}}(z,s)$ reads as \begin{equation*} \mathcal{E}^{\mathrm{ell}}_{e_{2}}(z,s)= -\log\left(\vert E_{4}^{(2)}(z)\vert^{1/2} \vert \Delta(z)\Delta(2z) \vert ^{-1/12}\right)\cdot s + O(s^{2}) \,\,\,\,\,\text{\rm as $s\rightarrow 0$,} \end{equation*} or, equivalently, as \begin{equation} \label{ell Eis at e_2} \mathcal{E}^{\mathrm{ell}}_{e_{2}}(z,s)= -\log\left(\frac{1}{\sqrt{5}}\vert E_{4}(z) + 4E_4(2z)\vert^{1/2} \vert \Delta(z)\Delta(2z) \vert ^{-1/12}\right)\cdot s + O(s^{2}) \,\,\,\,\,\text{\rm as $s\rightarrow 0$.} \end{equation} The factorization theorem allows one to formulate numerous examples of this type, of which we develop a few for certain moonshine and congruence subgroups. Second, we use the elliptic Kronecker limit formula to give a new proof of Weil's reciprocity formula. A number of authors have obtained generalizations of Weil's reciprocity law; see, for example, the elegant presentation in \cite{Kh08} which discusses various reciprocity laws over $\mathbb C$ as well as Deligne's article \cite{De91} where the author re-interprets Tate's local symbol and obtains a number of generalizations and applications.
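The equivalence of the two expansions leading to \eqref{ell Eis at e_2} rests on the identity $E_{4}^{(2)}(z) = \tfrac{1}{5}\bigl(E_{4}(z)+4E_{4}(2z)\bigr)$, an instance of the averaging formula \eqref{E_k, p proposit fla} from \cite{JST13}. One can check numerically, from the $q$-expansion of $E_4$ alone, that $E_{4}(z)+4E_{4}(2z)$ transforms with weight four under the level-two Fricke involution $z\mapsto -1/(2z)$; the truncation and test point in this sketch are arbitrary choices of ours.

```python
import cmath

def sigma3(n: int) -> int:
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

def E4(z: complex, terms: int = 120) -> complex:
    """Weight-four Eisenstein series: E_4(z) = 1 + 240 sum_{n>=1} sigma_3(n) q^n."""
    q = cmath.exp(2j * cmath.pi * z)
    return 1 + 240 * sum(sigma3(n) * q**n for n in range(1, terms + 1))

def F(z: complex) -> complex:
    # 5 * E_4^{(2)}(z) = E_4(z) + 4 E_4(2z)
    return E4(z) + 4 * E4(2 * z)

z = 0.3 + 1.0j
w = -1 / (2 * z)        # Fricke involution of level 2
lhs = F(w)
rhs = 4 * z**4 * F(z)   # weight-four factor: (sqrt(2) z)^4 = 4 z^4
```

Analytically the check follows from $E_4(-1/u)=u^4E_4(u)$ applied with $u=z$ and $u=2z$.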
It would be interesting to study the possible connection between the functional analytic method of the present and companion article \cite{JvPS14} with the algebraic ideas in \cite{De91} and results surveyed in \cite{Kh08}. An outline of this article is as follows. In section 2 we establish notation and cite various results from the literature. In section 3, we reformulate Kronecker's limit formula for parabolic Eisenstein series as an asymptotic statement near $s=0$. From the results in section 3, we then prove, in section 4, the asymptotic behavior in the cusps of the elliptic Kronecker limit function. Specific examples are given for moonshine groups $\overline{\Gamma_{0}(N)^{+}}$ with square-free level $N$ and congruence subgroups $\overline{\Gamma_{0}(p)}$ with prime level $p$. In section 5 we prove the factorization theorem which states, in somewhat vague terms, that any holomorphic form on $M$ can be written as a product of elliptic Kronecker limit functions, up to a multiplicative constant. In addition, from the asymptotic formula from section 4, one is able to obtain specific information associated to the multiplicative constant in the aforementioned description of the factorization theorem. In section 6 we give examples of the factorization theorem for holomorphic Eisenstein series for the modular group, for moonshine groups of levels $2$ and $5$, for general moonshine groups, and for congruence subgroups $\overline{\Gamma_{0}(p)}$ of prime level. Finally, in section 7, we present our proof of Weil's reciprocity using the elliptic Kronecker limit functions and state a few concluding remarks. \section{Background material} \subsection{Basic notation} \label{notation} Let $\Gamma\subseteq\mathrm{PSL}_{2}(\mathbb{R})$ denote a Fuchsian group of the first kind acting by fractional linear transformations on the hyperbolic upper half-plane $\mathbb{H}:=\{z=x+iy\in\mathbb{C}\, |\,x,y\in\mathbb{R};\,y>0\}$. 
We let $M:=\Gamma\backslash\mathbb{H}$, which is a finite volume hyperbolic Riemann surface, and denote by $p:\mathbb{H}\longrightarrow M$ the natural projection. We assume that $M$ has $e_{\Gamma}$ elliptic fixed points and $p_{\Gamma}$ cusps. We identify $M$ locally with its universal cover $\mathbb{H}$. We let $\mu_{\mathrm{hyp}}$ denote the hyperbolic metric on $M$, which is compatible with the complex structure of $M$ and has constant negative curvature equal to minus one. The hyperbolic line element $ds^{2}_{\mathrm{hyp}}$, resp.~the hyperbolic Laplacian $\Delta_{\mathrm{hyp}}$, are given as \begin{align*} ds^{2}_{\mathrm{hyp}}:=\frac{dx^{2}+dy^{2}}{y^{2}},\quad\textrm{resp.} \quad\Delta_{\mathrm{hyp}}:=-y^{2}\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right). \end{align*} By $d_{\mathrm{hyp}}(z,w)$ we denote the hyperbolic distance from $z\in\mathbb{H}$ to $w\in\mathbb{H}$. \subsection{Moonshine groups} Let $N=p_1\cdots p_r$ be a square-free positive integer. The subset of $\SL_2(\mathbb{R})$, defined by \begin{align*} \Gamma_0(N)^+:=\left\{ e^{-1/2}\begin{pmatrix}a&b\\c&d\end{pmatrix}\in \SL_2(\mathbb{R}): \,\,\, ad-bc=e, \,\,\, a,b,c,d,e\in\mathbb{Z}, \,\,\, e\mid N,\ e\mid a, \ e\mid d,\ N\mid c \right\} \end{align*} is an arithmetic subgroup of $\SL_2(\mathbb{R})$. We use the terminology ``moonshine group'' of level $N$ to describe $\Gamma_0(N)^+$ because of the important role these groups play in ``monstrous moonshine''. Previously, the groups $\Gamma_0(N)^+$ were studied in \cite{Hel66} where it was proved that if a subgroup $G\subseteq\SL_2(\mathbb{R})$ is commensurable with $\SL_2(\mathbb{Z})$, then there exists a square-free positive integer $N$ such that $G$ is a subgroup of $\Gamma_0(N)^+$.
We also refer to page 27 of \cite{Sh71} where the groups $\Gamma_0(N)^+$ are cited as examples of groups which are commensurable with $\SL_2(\mathbb{Z})$ but not necessarily conjugate to a subgroup of $\SL_2(\mathbb{Z})$. Let $\{\pm \textrm{Id}\}$ denote the set of two elements consisting of the identity matrix $\textrm{Id}$ and its product with $-1$. In general, if $\Gamma$ is a subgroup of $\SL_2(\mathbb{R})$, we let $\overline{\Gamma} := \Gamma /\{\pm \textrm{Id}\}$ denote its projection into $\textrm{PSL}_2(\mathbb{R})$. \subsection{Holomorphic Eisenstein series} Following \cite{Se73}, we define a weakly modular form $f$ of weight $2k$ for $k \geq 1$ associated to $\Gamma$ to be a function $f$ which is meromorphic on $\mathbb H$ and satisfies the transformation property $$ f\left(\frac{az+b}{cz+d}\right) = (cz+d)^{2k}f(z) \,\,\,\,\,\textrm{for all $\begin{pmatrix}a&b\\c&d\end{pmatrix} \in \Gamma$.} $$ Let $\Gamma$ be a Fuchsian group of the first kind that has at least one class of parabolic elements. By rescaling, if necessary, we may always assume that the parabolic subgroup of $\Gamma$ has a fixed point at $\infty$, with identity scaling matrix. In this situation, any weakly modular form $f$ will satisfy the relation $f(z+1)=f(z)$, so we can write $$ f(z) = \sum\limits_{n=-\infty}^{\infty}a_{n}q_z^{n} \,\,\,\,\,\textrm{where $q_z =e(z)= e^{2\pi iz}$.} $$ If $a_{n} = 0$ for all $n < 0$, then $f$ is said to be holomorphic at the cusp at $\infty$. A holomorphic modular form with respect to $\Gamma$ is a weakly modular form which is holomorphic on $\mathbb H$ and in all of the cusps of $\Gamma$. Examples of holomorphic modular forms are the holomorphic Eisenstein series, which are defined as follows. Let $\Gamma_{\infty}$ denote the subgroup of $\Gamma$ which stabilizes the cusp at $\infty$.
For $k \geq 2$, let \begin{equation} \label{E_2k, Gamma} E_{2k,\Gamma}(z) := \sum_{\left( \begin{smallmatrix} * & * \\ c & d \\ \end{smallmatrix} \right) \in \Gamma_{\infty} \setminus \Gamma } (cz + d)^{-2k}. \end{equation} It is elementary to show that the series on the right-hand side of \eqref{E_2k, Gamma} is absolutely convergent for all integers $k \geq 2$ and defines a holomorphic modular form of weight $2k$ with respect to $\Gamma$. Furthermore, the series $E_{2k, \Gamma}$ is bounded and non-vanishing at cusps and such that \begin{equation*} E_{2k, \Gamma} (z) = 1 + O (\exp(-2\pi \Im (z))), \text{ as } \Im (z) \to \infty. \end{equation*} When $\Gamma=\mathrm{PSL}_2(\mathbb{Z})$, we denote $E_{2k, \mathrm{PSL}_2(\mathbb{Z})}$ by $E_{2k}$. The holomorphic forms $E_{2k}(z)$ have the $q-$expansions $$ E_{2k}(z) = 1- \frac{4k}{B_{2k}} \sum_{n=1}^{\infty} \sigma_{2k-1}(n) q_z^n, $$ where $B_{2k}$ denotes the $2k-$th Bernoulli number and $\sigma_l$ is the generalized divisor function, which is defined by $\sigma_l(m) = \sum\limits_{d \mid m} d^l$. By convention, we set $\sigma(m)=\sigma_1(m)$. On the full modular surface, there is no weight $2$ holomorphic modular form. Consider, however, the function $E_2(z)$ defined by its $q$-expansion $$ E_2(z) = 1-24 \sum_{n=1}^{\infty} \sigma(n) q_z^n $$ which transforms according to the formula $$ E_2(\gamma z) = (cz+d)^2 E_2(z) + \frac{6}{\pi i}c (cz+d), $$ for $\left( \begin{smallmatrix} * & * \\ c & d \\ \end{smallmatrix} \right) \in \textrm{PSL}_2(\mathbb{Z})$. It is elementary to show that for a prime $p$, the function \begin{equation} \label{E_2,p} E_{2,p}(z) := E_2(z)-pE_2(pz) \end{equation} is a weight 2 holomorphic form associated to the congruence subgroup $\overline{\Gamma_0(p)}$ of $\textrm{PSL}_2(\mathbb{Z})$. The $q-$expansion of $E_{2,p}$ is \begin{equation}\label{q-exp E_2,p} E_{2,p}(z)= (1-p) - 24\sum_{n=1}^{\infty}\sigma(n) (q_z^n - pq_z^{pn}). 
\end{equation} When $\Gamma = \overline{\Gamma_{0}^+(N)}$, we denote the forms $E_{2k, \overline{\Gamma_{0}^+(N)}}$ by $E_{2k}^{(N)}$. In \cite{JST13} it is proved that $E_{2k}^{(N)}(z)$ may be expressed as a linear combination of forms $E_{2k}(z)$, with dilated arguments, namely \begin{align}\label{E_k, p proposit fla} E_{2k}^{(N)}(z)= \frac1{\sigma_k(N)} \sum_{v \mid N}v^k E_{2k}(vz). \end{align} \subsection{Scattering matrices} Assume that the surface $M$ has $p_{\Gamma}$ cusps, and let $P_{j}$, with $j=1,\ldots, p_{\Gamma}$, denote the individual cusps. Denote by $\phi_{jk}$, with $j,k=1, \ldots, p_{\Gamma}$, the entries of the hyperbolic scattering matrix $\Phi_M(s)$, which are computed from the constant terms in the Fourier expansion of the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{P_j}(z,s)$ associated to the cusp $P_{j}$ in an expansion in the cusp $P_{k}$. For all $j,k = 1,\ldots, p_{\Gamma}$, each function $\phi_{jk}$ has a simple pole at $s=1$ with residue equal to $1/\vol_{\mathbb{H}yp}(M)$. Furthermore, $\phi_{jk}$ has a Laurent series expansion at $s=1$ which we write as \begin{equation}\label{phi exp at s=1} \phi_{jk}(s)= \frac{1}{\vol_{\mathbb{H}yp}(M) (s-1)} + \beta_{jk} + \gamma_{jk}(s-1) + O((s-1)^2), \text{ as } s\to 1. \end{equation} After a slight renormalization and trivial generalization, Theorem 3-1 from \cite{Go73} asserts that the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{P_j}(z,s)$ admits the Laurent expansion \begin{equation} \label{KronLimitPArGen} \mathcal{E}^{\mathrm{par}}_{P_j}(z,s)= \frac{1}{\vol_{\mathbb{H}yp}(M) (s-1)} + \beta_{jj} - \frac{1}{\vol_{\mathbb{H}yp}(M)} \log \abs{\eta_{P_j}^4(z) \Im(z)} + f_j(z) (s-1) + O((s-1)^2), \end{equation} as $s \to 1$, for $j=1,\ldots, p_{\Gamma}$. As the notation suggests, the function $\eta_{P_j}(z)$ is a holomorphic form for $\Gamma$ and is a generalization of the classical eta function for the full modular group.
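Returning briefly to the forms of the previous subsection: the transformation formula for $E_2$ and the weight-two modularity of $E_{2,p}$ in \eqref{E_2,p} can be confirmed numerically from the $q$-expansions. The following Python sketch is purely illustrative; the truncation order, the test matrix $\gamma=\left(\begin{smallmatrix} 1 & 0 \\ 2 & 1\end{smallmatrix}\right)$ and the base point are ad hoc choices, not taken from the text.

```python
import cmath

def E2(z, nmax=400):
    # E_2(z) = 1 - 24 * sum_{n>=1} sigma(n) q^n with q = exp(2*pi*i*z)
    sigma = [0] * (nmax + 1)
    for d in range(1, nmax + 1):           # sieve the divisor sums sigma(n)
        for n in range(d, nmax + 1, d):
            sigma[n] += d
    q = cmath.exp(2j * cmath.pi * z)
    return 1 - 24 * sum(sigma[n] * q**n for n in range(1, nmax + 1))

p = 2
def E2p(z):
    # E_{2,p}(z) = E_2(z) - p E_2(p z): expected to be weight-2 modular on Gamma_0(p)
    return E2(z) - p * E2(p * z)

z = 0.1 + 0.8j
c, d = p, 1                                # gamma = [[1, 0], [p, 1]] lies in Gamma_0(p)
gz = z / (c * z + d)

# quasimodular transformation of E_2 on PSL_2(Z)
err_E2 = abs(E2(gz) - ((c*z + d)**2 * E2(z) + (6 / (cmath.pi * 1j)) * c * (c*z + d)))
# the correction terms cancel in E_{2,p}, leaving a genuine weight-2 transformation
err_E2p = abs(E2p(gz) - (c*z + d)**2 * E2p(z))
```

Both residuals are at the level of rounding error, in agreement with the displayed transformation formulas.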
To be precise, $\eta_{P_j}(z)$ is an automorphic form corresponding to the multiplier system $v(\sigma)= \exp(i\pi S_{\Gamma,j}(\sigma))$, where $S_{\Gamma,j}(\sigma)$ is a generalization of a Dedekind sum attached to the cusp $P_j$ of $M$, for each $j=1,\ldots,p_{\Gamma}$, meaning a real number uniquely determined for every $\sigma = \left( \begin{smallmatrix} \ast & \ast \\ c & d \\ \end{smallmatrix} \right)\in \Gamma $ which satisfies the relation $$ \log\eta_{P_j}(\sigma(z))=\log\eta_{P_j}(z) + \frac{1}{2} \log (cz+d) + \pi i S_{\Gamma,j}(\sigma). $$ The coefficient $f_j(z)$ multiplying $(s-1)$ in formula \eqref{KronLimitPArGen} is a certain function, whose behavior is not of interest to us in this paper. This term would presumably lead to a definition of generalized Dedekind sums; see, for example, \cite{Ta86}. Finally, let us set the notation \begin{equation}\label{phi exp at s=0} \phi_{jk}(s)= a_{jk} + b_{jk}s + c_{jk}s^2 + O(s^3) \,\,\,\,\,\textrm{as $s \rightarrow 0$} \end{equation} for the coefficients in the Laurent expansion of $\phi_{jk}$ near $s=0$. Note that the form of this expansion is justified by the assumption made in subsection \ref{subsection_assumption}. \section{Kronecker's limit formula for parabolic Eisenstein series}\label{sec: Kron limir parabolic} \vskip .10in In this section we will re-write the Kronecker limit formula for the parabolic Eisenstein series as an expression involving the Laurent expansion near $s=0$. We begin with the following lemma which states certain relations amongst coefficients appearing in \eqref{phi exp at s=1} and \eqref{phi exp at s=0}. \it To repeat, we assume that each parabolic Eisenstein series ${\cal E}^{\mathrm{par}}_{P_j}(z,s)$ is holomorphic at $s=0$.
\rm \begin{lemma} With the notation in \eqref{phi exp at s=1} and \eqref{phi exp at s=0}, we have, for each $k, l = 1,\ldots,p_{\Gamma}$, the following relations: \begin{equation} \label{sum a_jk} \sum_{j=1}^{p_{\Gamma}} a_{jk} = 0, \end{equation} \begin{equation} \label{sum with b_jk} \sum_{j=1}^{p_{\Gamma}}\left( - \frac{b_{jk}}{\vol_{\mathbb{H}yp}(M)} + a_{jk}\beta_{jl}\right) = \delta_{kl}, \end{equation} \begin{equation}\label{sum with c_jk} \sum_{j=1}^{p_{\Gamma}}\left(- \frac{c_{jk}}{\vol_{\mathbb{H}yp}(M)} +b_{jk}\beta_{jl}\right) = \sum_{j=1}^{p_{\Gamma}} a_{jk}\gamma_{jl}, \end{equation} where $\delta_{kl}$ is the Kronecker symbol. \end{lemma} \begin{proof} The relations \eqref{sum a_jk} through \eqref{sum with c_jk} are immediate consequences of the functional equation for the scattering matrix, namely the formula $\Phi_M(s)\Phi_M(1-s) = \textrm{Id}$. In particular, the formulae are obtained by computing the coefficients of $s^{-1}$, $1$, and $s$ in the Laurent expansion near $s=0$. \end{proof} \vskip .10in \begin{proposition}\label{prop: Kronecker limit as s to 0} With the notation in \eqref{phi exp at s=1} and \eqref{phi exp at s=0}, the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{P_j}(z,s)$ has a Taylor series expansion at $s=0$ which can be written as \begin{multline} \label{parabolic Kron limit as s to 0} \mathcal{E}^{\mathrm{par}}_{P_j}(z,s) = \sum_{k=1}^{p_{\Gamma}} \left[ - \frac{b_{jk}}{\vol_{\mathbb{H}yp}(M)} + a_{jk}\left( \beta_{kk} - \frac{1}{\vol_{\mathbb{H}yp}(M)} \log \left|\eta_{P_k}^4(z)\Im z \right|\right) \right] +\\ + s \cdot \sum_{k=1}^{p_{\Gamma}} \left[ - \frac{c_{jk}}{\vol_{\mathbb{H}yp}(M)} + b_{jk}\left( \beta_{kk} - \frac{1}{\vol_{\mathbb{H}yp}(M)} \log \left|\eta_{P_k}^4(z)\Im z \right|\right) +a_{jk}f_k(z) \right] + O(s^2).
\end{multline} \end{proposition} \begin{proof} The result is a straightforward computation based on the functional equation $$ (\mathcal{E}^{\mathrm{par}}_{P_1}(z,s)\,\, \ldots\,\, \mathcal{E}^{\mathrm{par}}_{P_{p_{\Gamma}}}(z,s))^{T} = \Phi_M(s) (\mathcal{E}^{\mathrm{par}}_{P_1}(z,1-s)\,\, \ldots\,\, \mathcal{E}^{\mathrm{par}}_{P_{p_{\Gamma}}}(z,1-s))^{T} $$ together with the expansions \eqref{KronLimitPArGen} and \eqref{phi exp at s=0}. \end{proof} In the case when $p_{\Gamma}=1$, the relations \eqref{sum a_jk} through \eqref{sum with c_jk} and Proposition \ref{prop: Kronecker limit as s to 0} become particularly simple and yield an elegant statement. As is standard, the cusp is normalized to be at $\infty$, and the associated Eisenstein series, eta function, scattering coefficients, etc.~are written with the subscript $\infty$. \begin{corollary} \label{Kron limit as s to 0, one cusp} The Kronecker limit formula for parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\infty}$ on a finite volume Riemann surface with one cusp at $\infty$ can be written as \begin{equation} \label{KronLimas s to 0} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= 1+ \log (\abs{\eta_{\infty}^4(z)} \Im(z))s + O(s^2), \text{ as } s \to 0. \end{equation} \end{corollary} \vskip .06in \begin{example}\rm \label{ex: moonshine groups} In the case when $\Gamma=\overline{\Gamma_0^+(N)}$, for a square-free, positive integer $N$, the quotient space $X_N:=\overline{\Gamma_0^+(N)} \backslash \mathbb{H}$ has one cusp. The automorphic form $\eta_{\infty}$ is explicitly computed in \cite{JST13}, where it is proved that \begin{align*} \eta_{\infty}(z) = \sqrt[2^r]{\prod_{v \mid N} \eta(vz)}. \end{align*} Here $r$ denotes the number of prime divisors of $N$. \end{example} \vskip .06in \begin{example}\rm \label{ex: congruence subgr} In the case when $\Gamma$ is the group $\overline{\Gamma_0(N)}$, for a positive integer $N$, the corresponding quotient space $M_N:=\overline{\Gamma_{0}(N)}\backslash \mathbb{H}$ has many cusps.
Using a standard fundamental domain, $M_{N}$ has cusps at $\infty$, at $0$ and, in the case when $N$ is not prime, at the rational points $1/v$, where $v \mid N$ is such that $(v, \frac{N}{v}) =1$, where $(\cdot,\, \cdot)$ stands for the greatest common divisor. As in the above example, let us use the subscript $\infty$ to denote data associated to the cusp at $\infty$. In particular, the automorphic form $\eta_{\infty}$ in the example under consideration was explicitly computed in \cite{Vassileva96}, where it is proved that $$ \eta_{\infty}(z)= \sqrt[\varphi(N)]{\prod_{v \mid N} \eta(vz) ^{v \mu(N/v)}}, $$ where $\varphi(N)$ is the Euler $\varphi-$function and $\mu$ denotes the M\"obius function. In the case of other cusps $P_k$, the automorphic form $\eta_{P_k}$ was also computed in \cite{Vassileva96}, but the expressions are more involved, so we do not repeat the formulas here. Also, for the cusp at $\infty$ and the principal congruence subgroup $\Gamma(N)$, the eta-function is computed in Theorem 1, page 405 of \cite{Ta86}. \end{example} \vskip .10in \section{Kronecker's limit formula for elliptic Eisenstein series} The function $H_{\Gamma}(z,w)$, defined in (\ref{Kronecker_elliptic}), is called the \textit{elliptic Kronecker limit function at $w$}. It satisfies the transformation rule \begin{align}\label{H e j transf. rule} H_{\Gamma}(\gamma z, w) = \varepsilon_{w}(\gamma) (cz + d)^{2C_{w}} H_{\Gamma}(z,w), \text{ for any } \gamma = \begin{pmatrix} * & * \\ c & d \end{pmatrix} \in \Gamma, \end{align} where $\varepsilon_{w}(\gamma) \in \mathbb{C}$ is a constant of absolute value $1$, independent of $z$, and \begin{equation}\label{C_w} C_w= 2\pi /(\mathrm{ord}(w) \vol_{\mathbb{H}yp}(M)), \end{equation} see \cite{vP10}, Proposition 6.1.2, or \cite{vP15}.
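The multiplier-system relation for $\log \eta_{P_j}$ stated earlier implies that $\vert \eta_{P_j}^4(z)\, \Im(z)\vert$ is $\Gamma$-invariant, which is the reason this combination can appear in the Kronecker limit formula \eqref{KronLimitPArGen}. For $\overline{\Gamma_0(2)}$, where the product formula above gives $\eta_{\infty}(z)=\eta(2z)^2/\eta(z)$, the invariance is easy to test numerically. The following Python sketch is illustrative only; the group element and base point are arbitrary choices.

```python
import cmath

def eta(z, nmax=300):
    # Dedekind eta function via its q-product: eta(z) = q^{1/24} prod_{n>=1} (1 - q^n)
    q = cmath.exp(2j * cmath.pi * z)
    prod, qn = 1 + 0j, 1 + 0j
    for n in range(1, nmax + 1):
        qn *= q
        prod *= 1 - qn
    return cmath.exp(1j * cmath.pi * z / 12) * prod

def eta_inf(z):
    # parabolic Kronecker limit function of Gamma_0(2) at infinity: eta(2z)^2 / eta(z)
    return eta(2 * z) ** 2 / eta(z)

def invariant(z):
    # |eta_inf^4(z) Im(z)|, which the multiplier relation makes Gamma_0(2)-invariant
    return abs(eta_inf(z)) ** 4 * z.imag

z = 0.1 + 0.8j
gz = z / (2 * z + 1)       # image of z under [[1, 0], [2, 1]] in Gamma_0(2)
err = abs(invariant(gz) - invariant(z))
```

The residual is at the level of rounding error, consistent with the half-integral weight behavior $\vert\eta_{\infty}(\gamma z)\vert = \vert cz+d\vert^{1/2}\vert\eta_{\infty}(z)\vert$.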
Since $H_{\Gamma}(z,w)$, as a function of $z$, is finite and non-zero at the cusp $P_{1} = \infty$, we may re-scale the function and assume, without loss of generality, that $H_{\Gamma}(z,w)$ is real at the cusp $\infty$. \vskip .10in We begin by studying the asymptotic behavior of $H_{\Gamma}(\sigma_{P_l}z,w)$ as $y=\Im(z) \to\infty$, for $l=1, \ldots, p_{\Gamma}$. \begin{proposition} \label{prop: behavior of H(z,w)} For any cusp $P_l$, with $l=1,\ldots,p_{\Gamma}$, let \begin{equation} \label{B_e_j} B_{w,P_l}=-C_{w} \left( 2-\log 2 + \log \left| \eta_{P_l}^4(w) \Im(w) \right| - \beta_{ll}\vol_{\mathbb{H}yp}(M)\right). \end{equation} Then there exists a constant $a_{w, P_l} \in \mathbb{C}$ of modulus one such that \begin{equation*} H_{\Gamma}(\sigma_{P_l}z, w) = a_{w,P_l} \exp(-B_{w,P_l}) |c_l z + d_l|^{2C_w} + O(\exp(-2\pi \Im(z))), \text{ as } \Im(z)\rightarrow \infty\,, \end{equation*} where $\sigma_{P_l} =\left( \begin{smallmatrix} * & * \\ c_l & d_l \end{smallmatrix}\right)$ is the scaling matrix for the cusp $P_l$ and $C_w$ is defined by \eqref{C_w}. \end{proposition} \begin{proof} The proof closely follows the proof of \cite{vP10}, Proposition 6.2.2, when combined with the Taylor series expansion \eqref{parabolic Kron limit as s to 0} of the parabolic Eisenstein series at $s=0$. For the convenience of the reader, we now present the complete argument. Combining the equation \eqref{Kronecker_elliptic} with the proof of Proposition 6.1.1 from \cite{vP10}, taking $e_j=w$, we can write $$ -\log(\vert{H_{\Gamma}(z, w)} \vert \Im(z)^{C_{w}}) = \mathcal{K}_{w} (z), $$ where the function $\mathcal{K}_{w} (z)$ can be expressed as the sum of two terms: a term $\mathcal{F}_{w} (z)$ arising from the spectral expansion and a term $\mathcal{G}_{w} (z)$ which can be expressed as a sum over the group.
Furthermore, for $z\in\mathbb{H}$ such that $\Im z > \Im (\gamma w)$ for all $\gamma \in \Gamma$, the parabolic Fourier expansion of $\mathcal{K}_{w}(\sigma_{P_l} z)$ is given by $$ \mathcal{K}_{w}(\sigma_{P_l} z) = \sum_{m\in\mathbb{Z}} b_{m,w,P_l}(y)e(mx) $$ with coefficients $b_{m,w,P_l}(y)$ given by $$ b_{m,w,P_l}(y)=\int\limits_{0}^{1}\mathcal{K}_{w}(\sigma_{P_l} z)e(-mx)\,dx. $$ Since the hyperbolic Laplacian is $\mathrm{SL}_2-$invariant, we easily generalize computations from p. 128 of \cite{vP10} to deduce that $$ \mathcal{K}_{w} (\sigma_{P_l}z) = -C_{w}\log y + A_{w,P_l}y + B_{w,P_l} + \sum_{m=1}^\infty (A_{m; w,P_l} e(mz) + \overline{A}_{m; w,P_l} e(-m\overline{z})), $$ for some constants $A_{w, P_l}, B_{w,P_l} \in \mathbb{R}$ and complex constants $A_{m;w,P_l}$. Let us introduce the notation \begin{align}\label{f e j} f_{w,P_l}(z):=\exp\left(-2\sum_{m=1}^\infty A_{m; w,P_l} e(mz) \right), \end{align} from which one can immediately write \begin{align}\label{K e j main formula} \mathcal{K}_{w} (\sigma_{P_l} z) = A_{w,P_l}y + B_{w,P_l} - \log(\abs{f_{w,P_l}(z)} \Im(z)^{C_{w}}). \end{align} When employing \eqref{K e j main formula}, we can re-write \eqref{Kronecker_elliptic} as \begin{align}\label{Fla for comparison} \mathcal{E}^{\mathrm{ell}}_{w}(\sigma_{P_l} z,s) &- h_{w}(s)\sum_{j=1}^{p_{\Gamma}} \mathcal{E}^{\mathrm{par}}_{P_j}(w,1-s) \mathcal{E}^{\mathrm{par}}_{P_j}(\sigma_{P_l} z,s) = \\ &-C_{w} + (A_{w,P_l}y + B_{w,P_l} - \log(\vert{f_{w, P_l}(z)}\vert \Im(z)^{C_{w}})) \cdot s + O(s^2) \nonumber, \end{align} as $s \rightarrow 0$, where \begin{equation} \label{h_w} h_{w}(s):= \frac{2^s \sqrt{\pi} \,\Gamma(s-1/2)}{\mathrm{ord}(w)\Gamma(s)}. \end{equation} As in \cite{vP10}, pp.
129--130, we use the functional equation of the parabolic Eisenstein series and consider the constant term in the Fourier series expansion, as a function of $z$, of the function \begin{equation} \label{const term} \mathcal{E}^{\mathrm{ell}}_{w}(\sigma_{P_l} z,s) - h_{w}(s) \sum_{j=1}^{p_{\Gamma}} \mathcal{E}^{\mathrm{par}}_{P_j}(w,1-s) \mathcal{E}^{\mathrm{par}}_{P_j}(\sigma_{P_l} z,s)=\mathcal{E}^{\mathrm{ell}}_{w}(\sigma_{P_l} z,s) - h_{w}(s) \sum_{j=1}^{p_{\Gamma}} \mathcal{E}^{\mathrm{par}}_{P_j}(w,s) \mathcal{E}^{\mathrm{par}}_{P_j}(\sigma_{P_l} z,1-s). \end{equation} The constant term is given by $$ -h_{w}(s)\sum_{j=1}^{p_{\Gamma}} \phi_{jl}(1-s)y^s\mathcal{E}^{\mathrm{par}}_{P_j}(w,s)= -\frac{\sqrt{\pi}}{\mathrm{ord}(w)} \frac{\Gamma(s-1/2)}{\Gamma(s)}(2y)^s \sum_{j=1}^{p_{\Gamma}} \phi_{jl}(1-s)\mathcal{E}^{\mathrm{par}}_{P_j}(w,s). $$ Recall the expansions \begin{equation} \label{gamma s-1/2} \Gamma(s-1/2)= -2\sqrt{\pi}\left(1+(2-\gamma-2\log 2) s + O(s^2)\right), \end{equation} \begin{equation} \label{gamma s} \frac{1}{\Gamma(s)}= s \left(1+ \gamma s + O(s^2)\right), \text{ and }(2y)^s= 1+ s\log(2y) + O(s^2), \end{equation} which hold when $s\to 0$, where, as usual, $\gamma$ denotes the Euler constant. When combining these expressions with \eqref{phi exp at s=1}, we can write the asymptotic expansions near $s=0$ of the constant term in the Fourier series expansion of \eqref{const term} as \begin{equation}\label{F.series const intermediate} \frac{2\pi}{\mathrm{ord}(w)} \left(1+ (2+\log y -\log 2)s + O(s^2)\right) \cdot \sum_{j=1}^{p_{\Gamma}} \left( -\frac{1}{\vol_{\mathbb{H}yp}(M)} + \beta_{jl}s + O(s^2) \right)\mathcal{E}^{\mathrm{par}}_{P_j}(w,s). \end{equation} Let us now compute the first two terms in the Taylor series expansion at $s=0$ of the expression \begin{equation}\label{F.series const intermediate 2} \sum_{j=1}^{p_{\Gamma}} \left( -\frac{1}{\vol_{\mathbb{H}yp}(M)}+ \beta_{jl}s + O(s^2) \right)\mathcal{E}^{\mathrm{par}}_{P_j}(w,s). 
\end{equation} By applying \eqref{parabolic Kron limit as s to 0}, we conclude that the constant term in the Taylor series expansion of (\ref{F.series const intermediate 2}) is $$ \sum_{j=1}^{p_{\Gamma}} \sum_{k=1}^{p_{\Gamma}} \frac{-1}{\vol_{\mathbb{H}yp}(M)} \left( -\frac{b_{jk}}{\vol_{\mathbb{H}yp}(M)} +a_{jk} \beta_{kk} -\frac{a_{jk}}{\vol_{\mathbb{H}yp}(M)} \log \left| \eta_{P_k}^4(w) \Im(w) \right|\right). $$ Applying relations \eqref{sum a_jk} and \eqref{sum with b_jk} we then obtain, by manipulation of the sums, that the constant term in (\ref{F.series const intermediate 2}) is equal to $\displaystyle -1/\vol_{\mathbb{H}yp}(M)$. The factor multiplying $s$ is equal to \begin{multline*} \sum_{j=1}^{p_{\Gamma}} \sum_{k=1}^{p_{\Gamma}} \frac{-1}{\vol_{\mathbb{H}yp}(M)} \left( -\frac{c_{jk}}{\vol_{\mathbb{H}yp}(M)} +b_{jk} \beta_{kk} -\frac{b_{jk}}{\vol_{\mathbb{H}yp}(M)} \log \left| \eta_{P_k}^4(w) \Im(w) \right| + a_{jk} f_k(w)\right) \\ +\sum_{j=1}^{p_{\Gamma}} \sum_{k=1}^{p_{\Gamma}} \beta_{jl} \left( -\frac{b_{jk}}{\vol_{\mathbb{H}yp}(M)} +a_{jk} \beta_{kk} -\frac{a_{jk}}{\vol_{\mathbb{H}yp}(M)} \log \left| \eta_{P_k}^4(w) \Im(w) \right|\right). \end{multline*} Applying relations \eqref{sum a_jk} to \eqref{sum with c_jk} we get that $$ \sum_{j=1}^{p_{\Gamma}} \sum_{k=1}^{p_{\Gamma}}a_{jk}f_k(w)=0 $$ and \begin{align*} \sum_{k=1}^{p_{\Gamma}}&\left( \frac{-1}{\vol_{\mathbb{H}yp}(M)} \log \left| \eta_{P_k}^4(w) \Im(w) \right| + \beta_{kk} \right) \sum_{j=1}^{p_{\Gamma}} \left( -\frac{b_{jk}}{\vol_{\mathbb{H}yp}(M)} + a_{jk} \beta_{jl}\right) \\&= \frac{-1}{\vol_{\mathbb{H}yp}(M)} \log \left| \eta_{P_l}^4(w) \Im(w) \right| + \beta_{ll} \end{align*} as well as $$ \sum_{j=1}^{p_{\Gamma}} \sum_{k=1}^{p_{\Gamma}} \left( \frac{-c_{jk}}{\vol_{\mathbb{H}yp}(M)} + b_{jk} \beta_{jl}\right)= \sum_{j=1}^{p_{\Gamma}} \sum_{k=1}^{p_{\Gamma}} a_{jk}\gamma_{jl} =0. 
$$ Therefore, the factor multiplying $s$ in the Taylor series expansion of \eqref{F.series const intermediate 2} is equal to $$ \frac{-1}{\vol_{\mathbb{H}yp}(M)} \log \left| \eta_{P_l}^4(w) \Im(w) \right| + \beta_{ll}. $$ Inserting this into \eqref{F.series const intermediate}, we see that the constant term in the Fourier series expansion of \eqref{const term} is given by $$ -C_w-C_w\left( 2-\log 2 + \log y + \log \left| \eta_{P_l}^4(w) \Im(w) \right| - \beta_{ll}\vol_{\mathbb{H}yp}(M) \right)s + O(s^2), $$ as $s \to 0$. Comparing this result with the right-hand side of formula \eqref{Fla for comparison}, having in mind the definition of the number $C_{w}$, we immediately deduce that $A_{w, P_l}=0$, $$ B_{w,P_l}=-C_{w} \left( 2-\log 2 + \log \left| \eta_{P_l}^4(w) \Im(w) \right| - \beta_{ll}\vol_{\mathbb{H}yp}(M)\right) $$ and $$ \mathcal{K}_{w} (\sigma_{P_l} z) = -\log(\vert{H_{\Gamma}(\sigma_{P_l} z, w)}\vert |c_l z +d_l |^{-2C_w} \Im(z)^{C_{w}}) = B_{w,P_l} - \log(\vert{f_{w,P_l}(z)}\vert \Im(z)^{C_{w}} ), $$ where the function $f_{w,P_l}$ is defined by \eqref{f e j}. From \eqref{f e j} we deduce that $$ \abs{f_{w,P_l}(z)} = \exp \left( -2 \Re \left( \sum_{m=1}^{\infty} A_{m;w, P_l} e(mz) \right)\right) = 1 + O(\exp(-2\pi \Im (z))), $$ as $\Im (z) \to \infty$. Therefore, $$ \abs{H_{\Gamma}(\sigma_{P_l} z, w)} = \exp(-B_{w,P_l})|c_l z +d_l |^{2C_w} + O(\exp(-2\pi \Im (z))), \text{ as } \Im(z) \to \infty\,, $$ and the proof is complete. \end{proof} \vskip .06in \begin{example}{\bf Moonshine groups.}\rm \label{ex: constants B_N} Let $N=p_1 \cdot \ldots \cdot p_r$ be a squarefree number. Let $X_N= \overline{\Gamma_0(N)^+} \setminus \mathbb{H}$. The surface $X_N$ possesses one cusp at $\infty$ with identity scaling matrix.
The scattering determinant $\varphi_N$ associated to the only cusp of $X_N$ at $\infty$ is computed in \cite{JST12}, where it was shown that $$ \varphi_N(s)=\sqrt{\pi}\frac{\Gamma(s-1/2)}{\Gamma(s)}\frac{\zeta(2s-1)}{\zeta(2s)}\cdot D_N(s), $$ where $\zeta(s)$ is the Riemann zeta function and $$ D_N(s)=\prod_{j=1}^r\frac{p_j^{1-s}+1}{p_j^s+1}= \frac{1}{N^{s-1}}\prod_{j=1}^r\frac{p_j^{s-1}+1}{p_j^s+1}. $$ Let $b_{N}$ denote the constant term in the Laurent series expansion of $\varphi_N(s)$ at $s=1$. One can compute $b_N$ by expanding the functions $D_N(s)$, $\Gamma(s)$ and $\zeta(s)$ in their Laurent expansions at $s=1$, which yields the expressions $$ D_N(s)= \frac{2^r}{\sigma(N)}\left(1 + (s-1) \left(\sum_{j=1}^{r} \frac{(1-p_j)\log p_j}{2(p_j+1)} - \log N\right) + O((s-1)^2) \right), $$ and \begin{align} \sqrt{\pi}\frac{\Gamma(s-1/2)}{\Gamma(s)} &= \pi \left(1-2\log 2 (s-1) + O((s-1)^2)\right), \label{gamma exp} \end{align} as well as \begin{align} \frac{\zeta(2s-1)}{\zeta(2s)} &= \frac{6}{\pi^2} \left( \frac{1}{2(s-1)} - \log (2\pi) + 1-12\zeta'(-1) + O(s-1)\right). \label{zeta exp} \end{align} Multiplying expansions \eqref{gamma exp} and \eqref{zeta exp} and using that $$ \frac{1}{\vol_{\mathbb{H}yp} (X_N)} = \frac{3 \cdot 2^r}{\pi \,\sigma(N)}, $$ which was proved in \cite{JST13}, we arrive at the expression \begin{equation} \label{b_N} b_N= - \frac{1}{\vol_{\mathbb{H}yp} (X_N) }\left( \sum_{j=1}^{r} \frac{(p_j -1)\log p_j}{2(p_j+1)}- \log N + 2\log (4\pi) + 24\zeta'(-1) - 2\right).
\end{equation} With this formula, Proposition \ref{prop: behavior of H(z,w)}, and Example \ref{ex: moonshine groups}, we conclude that the elliptic Kronecker limit function $H_N(z,w) := H_{\overline{\Gamma_0^+(N)}} (z,w)$ associated to the point $w \in X_N$ may be written as $$ H_N(z,w)= a_{N,w}\exp(-B_{N,w}) + O(\exp(-2\pi\Im (z))), \text{ as } \Im (z) \to \infty, $$ where $a_{N,w}$ is a complex constant of modulus one and \begin{align*} B_{N,w} &= - \frac{2\pi}{\mathrm{ord}(w)\vol_{\mathbb{H}yp} (X_N)}\left( \sum_{j=1}^{r} \frac{(p_j -1)\log p_j}{2(p_j+1)}-\log N + C+ \log \left(\sqrt[2^r]{\prod_{v \mid N} \abs{\eta(v w)}^4} \cdot \Im (w)\right) \right)\notag \end{align*} with $C:=\log (8\pi^2) + 24\zeta'(-1)$. \end{example} \vskip .06in \begin{example}{\bf Congruence subgroups of prime level.} \rm Let $M_p= \overline{\Gamma_0(p)}\setminus \mathbb{H}$, where $p$ is a prime. The surface $M_p$ has two cusps, at $\infty$ and $0$. The scaling matrix for the cusp at $\infty$ is the identity matrix. The scattering matrix in this setting is computed in \cite{He83} and is given by $$ \Phi_{M_p}(s)= \sqrt{\pi} \frac{\Gamma(s-1/2)}{\Gamma(s)} \frac{\zeta(2s-1)}{\zeta(2s)} \cdot \frac{1}{p^{2s}-1} \left( \begin{array}{cc} p-1 & p^s-p^{1-s} \\ p^s-p^{1-s} & p-1 \\ \end{array} \right). $$ Using the expansions \eqref{gamma exp} and \eqref{zeta exp}, together with $\vol_{\mathbb{H}yp}(M_p)=\pi(p+1)/3$ and the expansion $$ \frac{p-1}{p^{2s}-1}= \frac{1}{p+1}-\frac{2p^2 \log p}{(p-1)(p+1)^2} (s-1) + O((s-1)^2) \text{ as } s \to 1\,, $$ we conclude that the coefficients $\beta_{11}$ and $\beta_{22}$ in the Laurent series expansion \eqref{phi exp at s=1} are given by $$ \beta_{11}=\beta_{22}= -\frac{2}{\vol_{\mathbb{H}yp} (M_p)}\left( \log (4\pi p) + 12\zeta'(-1) -1 + \frac{\log p}{p^2 -1} \right).
$$ Therefore, from Proposition \ref{prop: behavior of H(z,w)}, when applied to the cusp at $\infty$, and Example \ref{ex: congruence subgr}, we conclude that the elliptic Kronecker limit function $\widetilde{H}_p(z,w) := H_{\overline{\Gamma_0(p)}} (z,w)$ associated to the point $w \in M_p$ can be written as $$ \widetilde{H}_p(z,w)= \widetilde{a}_{p,w}\exp(-\widetilde{B}_{p,w}) + O(\exp(-2\pi\Im (z))), \text{ as } \Im (z) \to \infty, $$ where $\widetilde{a}_{p,w}$ is a complex constant of modulus one and \begin{align*} \widetilde{B}_{p,w} &= - \frac{2\pi}{\mathrm{ord}(w)\vol_{\mathbb{H}yp} (M_p)}\left( \frac{2 p^2 \log p }{p^2-1} +C +\log \left(\abs{\sqrt[p-1]{\frac{\eta(p w) ^p}{\eta(w)}} \cdot \Im (w)}\right) \right) \end{align*} with $C:=\log (8\pi^2) + 24\zeta'(-1)$. \end{example} \vskip .10in \section{A factorization theorem} In (\ref{elliptic_at_i}) and (\ref{elliptic_at_rho}) one has an evaluation of the elliptic Kronecker limit function in the special case when $\Gamma = \mathrm{PSL}_2(\mathbb{Z})$ and $w=i$ or $w=\rho= \exp(2\pi i /3)$ are the elliptic fixed points of $\mathrm{PSL}_2(\mathbb{Z})$. The following theorem generalizes these results. \begin{theorem} \label{thm: factorization} Let $M = \Gamma\setminus \mathbb{H}$ be a finite volume Riemann surface with at least one cusp, which we assume to be at $\infty$ with identity scaling matrix. Let $k$ be a fixed positive integer such that there exists a weight $2k$ holomorphic form $f_{2k}$ on $M$ which is non-vanishing at all cusps and with $q-$expansion at $\infty$ given by \begin{equation} \label{q exp. of f_2k} f_{2k}(z)= b_{f_{2k}} + \sum_{n=1}^{\infty}b_{f_{2k}}(n)q_z^n. \end{equation} Let $Z(f_{2k})$ denote the set of all zeros of $f_{2k}$ counted according to their multiplicities, and let us define the function $$ H_{f_{2k}}(z):= \prod_{w \in Z(f_{2k})} H_{\Gamma}(z,w), $$ where, as above, $H_{\Gamma}(z,w)$ is the elliptic Kronecker limit function.
Then there exists a complex constant $c_{f_{2k}}$ such that \begin{equation} \label{factorization fla} f_{2k}(z) = c_{f_{2k}}H_{f_{2k}}(z) \end{equation} and $$ \abs{c_{f_{2k}}} =\abs{b_{f_{2k}}} \exp \left( \sum_{w\in Z(f_{2k})} B_{w, \infty} \right ), $$ where $B_{w,\infty}$ is defined in \eqref{B_e_j}. \end{theorem} \begin{proof} Assume that $f_{2k}$ possesses $m+l\geq 1$ zeros on $M$, where $m$ zeros are at the elliptic points $e_j$ of $M$, $j=1,\ldots,m$, and $l$ zeros are at the non-elliptic points $w_i \in M$; of course, all zeros are counted with multiplicities. Then $H_{f_{2k}}(z)$ is a holomorphic function on $M$ which vanishes if and only if $z \in Z(f_{2k})$ and which, according to \eqref{H e j transf. rule}, satisfies the transformation rule $$ H_{f_{2k}}(\gamma z) = \varepsilon_{f_{2k}}(\gamma)(cz+d)^{C_{f_{2k}}} H_{f_{2k}}(z), \text{ for any } \gamma = \begin{pmatrix} * & * \\ c & d \end{pmatrix} \in \Gamma, $$ where $\varepsilon_{f_{2k}}(\gamma)$ is a constant of modulus one and $$ C_{f_{2k}} = \frac{4\pi}{\vol_{\mathbb{H}yp} (M)} \left(\sum_{j=1}^{m} \frac{1}{n_{e_j}} + l \right). $$ The classical Riemann-Roch theorem relates the number of zeros of a holomorphic form to its weight and the genus of $M$ in the case when $M$ is smooth and compact. A generalization of the relation follows from Proposition 7, page II-7, of \cite{SCM66}, which, in the case under consideration, yields the formula \begin{align} \label{zeros f-la} k \cdot \frac{\vol_{\mathbb{H}yp}(M)}{2\pi}= \sum_{e \in \mathcal{E}_N} \frac{1}{n_e} v_{e}(f) + \sum_{z\in M \setminus \mathcal{E}_N} v_{z}(f), \end{align} where $\mathcal{E}_N$ denotes the set of elliptic points in $M$, $n_e$ is the order of the elliptic point $e\in \mathcal{E}_N$ and $v_z(f)$ denotes the order of the zero $z$ of $f$.
Since $Z(f_{2k})$ is the set of all vanishing points of $f_{2k}$, formula \eqref{zeros f-la} implies that $$ 2k \cdot \frac{\vol_{\mathbb{H}yp} (M)}{4\pi} = \sum_{j=1}^{m} \frac{1}{n_{e_j}} + l, $$ hence $C_{f_{2k}} = 2k$. In other words, $H_{f_{2k}}(z)$ is a holomorphic function on $M$, vanishing if and only if $z \in Z(f_{2k})$ and satisfying the transformation rule $$ H_{f_{2k}}(\gamma z) = \varepsilon_{f_{2k}}(\gamma)(cz+d)^{2k} H_{f_{2k}}(z), \text{ for any } \gamma = \begin{pmatrix} * & * \\ c & d \end{pmatrix} \in \Gamma. $$ By Proposition \ref{prop: behavior of H(z,w)}, applied to each $w \in Z(f_{2k})$ and each cusp $P_l$ of $M$, with $l=1,\ldots,p_{\Gamma}$, the function $$ F_{f_{2k}}(z):= \frac{H_{f_{2k}}(z)}{f_{2k}(z)} $$ is a non-vanishing holomorphic function on $M$ which is bounded and non-zero at the cusp at $\infty$ and has at most polynomial growth in any other cusp of $M$. Therefore, $\log \vert F_{f_{2k}}(z)\vert$ is a harmonic function on $M$ whose growth in any cusp is such that $\log \vert F_{f_{2k}}(z)\vert$ is $L^{2}$ on $M$. As a result, $\log \vert F_{f_{2k}}(z)\vert$ admits a spectral expansion; see \cite{He83} or \cite{Iwa02}. Since $\log \vert F_{f_{2k}}(z)\vert$ is harmonic, one can use integration by parts to show that $\log \vert F_{f_{2k}}(z)\vert$ is orthogonal to any eigenfunction of the Laplacian. Therefore, from the spectral expansion, one concludes that $\log \vert F_{f_{2k}}(z)\vert$ is constant, hence so is $F_{f_{2k}}(z)$. The evaluation of the constant is obtained by considering the limiting behavior as $z$ approaches $\infty$. With all this, the proof of \eqref{factorization fla} is complete. \end{proof} \section{Examples of factorization} \subsection{An arbitrary surface with one cusp} In the case when a surface $M$ has one cusp, we get the following special case of Theorem \ref{thm: factorization}.
\begin{corollary} \label{cor:factorization, one cusp} Let $M = \Gamma \setminus \mathbb{H}$ be a finite volume Riemann surface with one cusp, which we assume to be at $\infty$ with identity scaling matrix. Then the weight $2k$ holomorphic Eisenstein series $E_{2k, \Gamma}$ defined in \eqref{E_2k, Gamma} can be represented as $$ E_{2k, \Gamma}(z) = a_{E_{2k, \Gamma}} B_{E_{2k, \Gamma}}\prod_{w \in Z(E_{2k, \Gamma})}H_{\Gamma}(z,w), $$ where $a_{E_{2k, \Gamma}}$ is a complex constant of modulus one and $$ B_{E_{2k, \Gamma}} = \prod_{w \in Z(E_{2k, \Gamma})} \exp \left(C_w \left(\log 2 -2 + \beta_M \vol_{\mathbb{H}yp}(M) \right)\right) \cdot \left|\eta_{\infty}^4(w) \Im (w) \right|^{-C_w} . $$ As before, $\eta_{\infty}$ is the parabolic Kronecker limit function defined in section \ref{sec: Kron limir parabolic}, formula \eqref{KronLimitPArGen}, and $\beta_M$ is the constant term in the Laurent series expansion of the scattering determinant on $M$. \end{corollary} In this case, due to a very simple form of the Kronecker limit formula for parabolic Eisenstein series as $s\to 0$, the factorization theorem yields an interesting form of the Kronecker limit formula for elliptic Eisenstein series, which we state as the following proposition. \begin{proposition} \label{prop:Ell Kron limit one cusp} Let $M = \Gamma \setminus \mathbb{H}$ be a finite volume Riemann surface with one cusp, which we assume to be at $\infty$ with identity scaling matrix. Let $k$ be a fixed positive integer such that there exists a weight $2k$ holomorphic form $f_{2k}$ on $M$ with $q-$expansion at $\infty$ given by \eqref{q exp. of f_2k}. Then \begin{equation} \label{ell kroneck limit one cusp} \sum_{w\in Z(f_{2k})} \mathcal{E}^{\mathrm{ell}}_{w}(z,s)= -s\log\left( |f_{2k}(z)| |\eta_{\infty} ^4(z)|^{-k}\right) + s\log|b_{f_{2k}}| + O(s^2) \end{equation} as $s\to 0$, where $Z(f_{2k})$ denotes the set of all zeros of $f_{2k}$ counted with multiplicities. 
\end{proposition} \begin{proof} We start with formula \eqref{Kronecker_elliptic}, which we divide by $\mathrm{ord}(w)$, and take the sum over all $w \in Z(f_{2k})$ to get \begin{align} \label{Kronecker limit ell 2} \sum_{w\in Z(f_{2k})} \mathcal{E}^{\mathrm{ell}}_{w}(z,s)&- \mathcal{E}^{\mathrm{par}}_{\infty}(z,s) \sum_{w\in Z(f_{2k})} h_w(s) \mathcal{E}^{\mathrm{par}}_{\infty}(w,1-s) = \notag \\ &-\sum_{w\in Z(f_{2k})} C_w \left( 1 + s \log(\Im z)\right) - \log\left( \prod_{w\in Z(f_{2k})} |H_{\Gamma}(z,w)| \right)\cdot s +O(s^2) \end{align} as $s\to 0$, where $C_w$ and $h_w$ are defined by \eqref{C_w} and \eqref{h_w} respectively. One now expands the second term on the left-hand side of \eqref{Kronecker limit ell 2} into a Taylor series at $s=0$ by applying formulas \eqref{gamma s-1/2}, \eqref{gamma s}, \eqref{KronLimas s to 0} and \eqref{KronLimitPArGen}. After multiplication, we get the expression \begin{multline} \label{parabolic sum} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s) \sum_{w\in Z(f_{2k})} h_w(s) \mathcal{E}^{\mathrm{par}}_{\infty}(w,1-s) \\ = \sum_{w\in Z(f_{2k})} C_w \left( 1 + s \left[2-\log2 - \beta_M \vol_{\mathbb{H}yp}(M) + \log |\eta_{\infty}^4(w) \Im (w) | + \log |\eta_{\infty}^4(z) \Im (z) | \right]\right)+O(s^2) \end{multline} as $s \to 0$. Theorem \ref{thm: factorization} yields that \begin{equation} \label{log of prod} \log\left( \prod_{w\in Z(f_{2k})} |H_{\Gamma}(z,w)| \right) = \log |f_{2k}(z)| - \sum_{w\in Z(f_{2k})} B_{w,\infty} - \log |b_{f_{2k}}|, \end{equation} where $B_{w,\infty}$ is defined by \eqref{B_e_j} for the cusp $P_l=\infty$. Finally, from formula \eqref{zeros f-la}, we get that $$ \sum_{w\in Z(f_{2k})} C_w =k. $$ Therefore, by inserting \eqref{B_e_j}, \eqref{log of prod} and \eqref{parabolic sum} into \eqref{Kronecker limit ell 2}, we immediately deduce \eqref{ell kroneck limit one cusp}. The proof is complete.
\end{proof} \begin{remark}\rm In the case $\Gamma=\mathrm{PSL}_2(\mathbb{Z})$, the parabolic Kronecker limit function is given by $\eta_{\infty}(z)= \eta(z)=\Delta(z)^{1/24}$. Then, for $k=3$ and $f_{2k}=E_6$, we have $b_{E_6}=1$ and $Z(E_6)= \{i\}$, hence Proposition \ref{prop:Ell Kron limit one cusp} yields \eqref{elliptic Eis_at_i}. Analogously, for $k=2$ and $f_{2k}=E_4$, we have $b_{E_4}=1$ and $Z(E_4)= \{\rho\}$, and Proposition \ref{prop:Ell Kron limit one cusp} gives \eqref{elliptic Eis_at_rho}. Furthermore (see \cite{vP10}, p.~131), we have $B_{E_6,\Gamma}=\exp(B_i)$ and $B_{E_4,\Gamma}=\exp(B_{\rho})$, where $B_i$ and $B_{\rho}$ are given by \eqref{elliptic_at_i} and \eqref{elliptic_at_rho} respectively. \end{remark} Let us now develop further examples of surfaces with one cusp and explicitly compute the constant $B_{E_{2k, \Gamma}}$ in these special cases. \begin{comment} \subsection{The modular group} \mathrm{PSL}_2(\mathbb{Z}) When $\Gamma=\mathrm{PSL}_2(\mathbb{Z})= \overline{\Gamma_0(1)^+}$ the corresponding surface has one cusp at $\infty$ with identity scaling matrix and two elliptic points; an order two point at $w=i$ and an order three point at $w=\rho =\exp(2\pi i /3)$. Let us denote the elliptic Kronecker limit function by $H_1(z,w)$. Classically, it is known that $w=i$ is the only vanishing point of the holomorphic form $E_6(z)$, and $w=\rho$ is the only vanishing point of $E_4(z)$. The volume of the surface is $\pi/3$, hence $C_i= 3$ and $C_{\rho}= 2$. Furthermore, from \eqref{b_N} with $N=1$, we see that the constant term in the Laurent series expansion of the scattering determinant at $s=1$ is $\beta_1 =-\frac{3}{\pi} (2 \log (4\pi) + 24\zeta'(-1) -2)$.
Therefore, Corollary \ref{cor:factorization, one cusp} yields that $$ E_6(z)=a_6 B_{E_6}H_1(z,i)\,\,\, \text{and} \,\,\, E_4(z)=a_4 B_{E_4}H_1(z,\rho), $$ where $a_6$ and $a_4$ are constants of modulus one and $$ \log (B_{E_6})=-3[(\log(8\pi^2) + 24\zeta'(-1)) +4 \log |\eta(i)|], $$ $$ \log (B_{E_4})=-2[(\log(8\pi^2) + 24\zeta'(-1)) +4 \log |\eta(\rho)| +\log(\sqrt{3}/2)]. $$ The computations on p.~131 of \cite{vP10} imply that $$ 4 \log |\eta(i)| = 4\log \Gamma(1/4) - \log 2 - 3 \log (2\pi) $$ and $$ 4 \log |\eta(\rho)| + \log(\sqrt{3}/2) = 6\log \Gamma(1/3) - \log 2 + \log 3- 4 \log (2\pi). $$ As a result, we have that $\log (B_{E_6})=B_i$ and $\log (B_{E_4})=B_{\rho}$, where $B_i$ and $B_{\rho}$ are given by \eqref{elliptic_at_i} and \eqref{elliptic_at_rho} respectively. This shows that Proposition 6.2.2 from \cite{vP10} is a corollary of a special case of Theorem \ref{thm: factorization}. Furthermore, the parabolic Kronecker limit function for the full modular group is $\eta_{\infty}(z)= \eta(z)$, therefore, taking $f_{2k}=E_6$ and $Z(f_{2k})= \{i\}$ in Proposition \ref{prop:Ell Kron limit one cusp} we immediately deduce \eqref{elliptic Eis_at_i}. Analogously, taking $f_{2k}=E_4$ and $Z(f_{2k})= \{\rho\}$ in Proposition \ref{prop:Ell Kron limit one cusp} we deduce \eqref{elliptic Eis_at_rho}. \end{comment} \subsection{Moonshine groups of square-free level} \begin{example}\rm Consider the surface $X_2$. There exist one elliptic point of order two, $e_1=i/\sqrt{2}$, and one elliptic point of order four, $e_2=1/2 + i/2$. The surface $X_2$ has genus zero and one cusp, hence $\vol_{\mathrm{hyp}}(X_2)=\pi/2$. The transformation rule for $E_6^{(2)}$ implies that the form must vanish at the points $e_1$ and $e_2$. Furthermore, formula \eqref{zeros f-la} when applied to $X_{2}$ becomes \begin{align} \label{zeros f-la N=2} \frac{2k}{8}= v_{\infty}(f) + \frac{1}{4}v_{e_2}(f)+ \frac{1}{2} v_{e_1}(f) + \sum_{z\in X_2 \setminus \{e_1,e_2\}} v_{z}(f).
\end{align} Taking $k=3$, we conclude that $e_1$ and $e_2$ are the only vanishing points of $E_6^{(2)}$ and the order of vanishing is one at each point. Therefore, in the notation of Theorem \ref{thm: factorization} and Example \ref{ex: constants B_N}, we have that the form $H_6^{(2)}(z)= H_{E_6^{(2)}}(z)$ is given by $H_6^{(2)} (z) := H_2(z, e_1)H_2(z, e_2)$. Assuming that the phase of $H_6^{(2)} (z)$ is such that it attains real values at the cusp $\infty$, we have that \begin{equation} \label{E 6,2} E_6^{(2)}(z) = C_{2,6} H_6^{(2)} (z), \end{equation} where the absolute value of the constant $C_{2,6}$ is given by $|C_{2,6}|=e^{B_{2,e_1}+B_{2,e_2}}$ with $$ B_{2,e_1}=-2 \left( 24\zeta'(-1) + \log(8\pi^2) - \frac{4}{3} \log 2 +\frac{1}{12} \log\left( \left| \Delta(i\sqrt{2}) \cdot \Delta(i/\sqrt{2}) \right| \right)\right) $$ and $$ B_{2,e_2}=- \left( 24\zeta'(-1) + \log(8\pi^2) - \frac{11}{6} \log 2 +\frac{1}{12} \log\left( \left| \Delta(1/2 + i/2) \cdot \Delta(1+i) \right| \right)\right). $$ Let us now consider the case when $k=2$. From \eqref{zeros f-la N=2}, we have that only $e_1$ and $e_2$ can be vanishing points of $E_4^{(2)}$. However, there are two possibilities: Either $e_2$ is an order two vanishing point, and $E_4^{(2)}(z)\neq 0$ for all $z\neq e_2$ in a fundamental domain $\mathcal{F}_2$ of $X_2$, or $e_1$ is an order one vanishing point and $E_4^{(2)}(z)\neq 0$ for all points $z\neq e_1$ in $\mathcal{F}_2$. If the latter possibility is true, then $E_6^{(2)}(z) /E_4^{(2)}(z)$ would be a weight $2$ holomorphic modular form which vanishes only at $e_2$, which is not possible since there is no weight two modular form on $X_N$ for any squarefree $N$ such that the surface $X_N$ has genus zero; see \cite{JST14}. Therefore, $E_4^{(2)}$ vanishes at $e_{2}$ of order two, and there are no other vanishing points of $E_4^{(2)}$ on $X_2$. 
Hence, in the notation of Theorem \ref{thm: factorization}, we have $ H_4^{(2)} (z):= H_{E_4^{(2)}}(z) = H_2(z, e_2)^2$, implying that \begin{equation} \label{E 4,2} E_4^{(2)}(z) =C_{2,4}H_2(z, e_2)^2, \end{equation} where $|C_{2,4}|=e^{2B_{2,e_2}}$. This proves that $H_2(z, e_2)^2$ is a weight four holomorphic modular form on $\overline{\Gamma_0(2)^+}$. If we combine \eqref{E 6,2} with \eqref{E 4,2} we get $$ H_2(z, e_1)^2= \frac{C_{2,4}}{C_{2,6}^2} \cdot\frac{( E_6^{(2)}(z))^2}{ E_4^{(2)}(z)}; $$ in other words, $H_2(z, e_1)^2$ is a weight eight holomorphic modular form on $\overline{\Gamma_0(2)^+}$. Furthermore, application of Proposition \ref{prop:Ell Kron limit one cusp} with $f_{2k} = E_4^{(2)}$ and $Z(f_{2k})=\{ e_2\}$ (with multiplicity two) together with Example \ref{ex: moonshine groups} and the representation formula \eqref{E_k, p proposit fla} yields \eqref{ell Eis at e_2}. By applying Proposition \ref{prop:Ell Kron limit one cusp} with $f_{2k} = E_6^{(2)}$ and $Z(f_{2k})=\{e_1, e_2\}$ together with formula \eqref{ell Eis at e_2} we get the elliptic Kronecker limit formula for $\mathcal{E}^{\mathrm{ell}}_{e_1}(z,s)$: $$ \mathcal{E}^{\mathrm{ell}}_{e_1}(z,s)=-s\log\left( |E_6^{(2)}(z)| |E_4^{(2)}(z)|^{-1/2} |\Delta(z) \Delta(2z)|^{-1/6}\right) + O(s^2) \text{ as } s \to 0. $$ \end{example} \vskip .06in \begin{example} \rm Consider the surface $X_5$. There exist three elliptic points of order two, namely $e_1=i/\sqrt{5}$, $e_2=2/5 + i/5$, and $e_3=1/2 + i/(2\sqrt{5})$. The surface $X_{5}$ has genus zero and one cusp, hence $\vol_{\mathrm{hyp}}(X_5)=\pi$. Using the transformation rule for $E_6^{(5)}$, one concludes that the holomorphic form $E_6^{(5)}$ must vanish at $e_1$, $e_2$ and $e_3$. By the dimension formula \eqref{zeros f-la}, one sees that $e_1$, $e_2$ and $e_3$ are the only zeros of $E_6^{(5)}$.
Theorem \ref{thm: factorization} then implies that \begin{equation}\label{E_6_5} E_6^{(5)}(z)= C_{5,6}H_{6}^{(5)}(z), \end{equation} where the absolute value of the constant $C_{5,6}$ is given by $|C_{5,6}|=e^{B_{5,e_1}+B_{5,e_2}+B_{5,e_3}}$ and \begin{multline*} B_{5,e_1}+B_{5,e_2}+B_{5,e_3}= -3\left(24\zeta'(-1) + \log (8\pi^2)\right) - \log 50 \\ +\frac{1}{12}\log \left(\abs{\Delta(i/\sqrt{5})\Delta(i\sqrt{5}) \Delta(2/5 + i/5) \Delta(2+i) \Delta(1/2 + i/(2\sqrt{5})) \Delta(5/2 + i\sqrt{5}/2)}\right) . \end{multline*} One can view (\ref{E_6_5}) as an analogue of the Jacobi triple product formula. \end{example} \vskip .06in \begin{remark} \rm Let $N=p_1 \cdot \ldots \cdot p_r$ be a squarefree number. Then the surface $X_{N}$ has one cusp. Numerous results are known concerning the topological structure of $X_{N}$; see, for example, \cite{Cum04} and references therein. As a consequence, one can develop a number of results similar to the above examples when $N=2$ or $N=5$. In particular, Theorem \ref{thm: factorization} holds, so one can factor any holomorphic Eisenstein series $E_{2k}^{(N)}$ of weight $2k$ into a product of elliptic Kronecker limit functions, up to a factor of modulus one. \end{remark} \subsection{Congruence subgroups of prime level} Consider the surface $M_p$ for a prime $p$. The smallest positive integer $k$ such that there exists a weight $2k$ holomorphic form is $k=1$. As a result, we have the following corollary of Theorem \ref{thm: factorization}. \begin{corollary} Let $f_{2k,p}$ denote a weight $2k\geq 2$ holomorphic form on the surface $M_p = \overline{\Gamma_0(p)}\setminus \mathbb{H}$ bounded at the cusps and such that the constant term in its $q$-expansion is equal to $b_{f_{2k},p}$.
Then, $$ f_{2k,p}(z)= a_{f_{2k},p} \widetilde{B}_{f_{2k},p} \prod_{w \in Z(f_{2k},p)} \widetilde{H}_p(z,w), $$ where $a_{f_{2k},p}$ is a complex constant of modulus one and $$ \widetilde{B}_{f_{2k},p}=\abs{b_{f_{2k},p}} \prod_{w \in Z(f_{2k},p)} \left(\exp \left[-C_w \left( \frac{2p^2 \log p}{p^2-1} + C \right) \right] \abs{\sqrt[p-1]{\frac{\eta(p w) ^p}{\eta(w)}} \, \Im (w)}^{-C_w}\right) $$ with $C:=\log (8\pi ^2) +24\zeta'(-1)$. \end{corollary} Let us now compute the constants $\widetilde{B}_{f_{2k},p}$ for two cases. \begin{example}\rm If $p=2$, then the surface $M_2$ has only one elliptic point, $e=1/2 + i/2$, which has order two. Furthermore, $\vol_{\mathrm{hyp}}(M_2) = \pi$, hence formula \eqref{zeros f-la} with $k=1$ implies that the holomorphic form $E_{2,2}$ defined by \eqref{E_2,p} with $p=2$ vanishes only at $e$, and the vanishing is to order one. From the $q$-expansion \eqref{q-exp E_2,p} we have that $\abs{b_{E_{2,2},2}}=2-1=1$. Since $C_e = 1$, we get $$ E_{2,2}(z)=a_2 \cdot \frac{1}{16 \sqrt[3]{4}\,\pi^2} \exp(- 24\zeta'(-1)) \abs{\frac{\eta (1/2 + i/2)}{\eta(1+i)^2}} \widetilde{H}_2(z,e), $$ for some complex constant $a_2$ of modulus one. In other words, the elliptic Kronecker limit function $\widetilde{H}_2(z,e)$ is a weight two modular form on $\overline{\Gamma_0(2)}$. \end{example} \begin{example}\rm If $p=3$, then the surface $M_3$ has only one elliptic point, $e=1/2 + \sqrt{3}i/6$, which has order three. The volume of the surface $M_3$ is $4 \pi/3$, hence formula \eqref{zeros f-la} with $k=1$ implies that the holomorphic form $E_{2,3}$ vanishes only at $e$, of order two. Furthermore, $\abs{b_{E_{2,3},3}}=2$ and $C_e =1/2$, hence $$ E_{2,3}(z)= a_3 \cdot \frac{1}{12 \sqrt[4]{27}\, \pi^2} \exp(-24 \zeta'(-1)) \abs{\sqrt{\frac{\eta\left( 1/2 + i\sqrt{3}/6\right)}{\eta\left( 3/2 + i\sqrt{3}/2\right)^3}}}\widetilde{H}_3(z,e)^2, $$ for some complex constant $a_3$ of modulus one.
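The vanishing statement above can be tested numerically. Since the defining formula \eqref{E_2,p} is not restated in this section, the following Python sketch assumes the classical normalization $E_{2,p}(z)=pE_2(pz)-E_2(z)$ with $E_2(z)=1-24\sum_{n\geq 1}\sigma_1(n)q^n$; this normalization has constant term $p-1$, consistent with the values of $\abs{b}$ recorded above. All helper names are ours.

```python
# Numerical check (under the assumed normalization E_{2,p}(z) = p E_2(pz) - E_2(z))
# that E_{2,3} vanishes at the elliptic point e = 1/2 + i*sqrt(3)/6 of M_3.
import cmath, math

def sigma1(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def E2(tau, terms=80):
    """Quasi-modular Eisenstein series E_2 via its q-expansion."""
    q = cmath.exp(2j * math.pi * tau)
    return 1 - 24 * sum(sigma1(n) * q ** n for n in range(1, terms + 1))

def E2p(p, tau):
    """Assumed weight-two form p*E_2(p z) - E_2(z) on Gamma_0(p)."""
    return p * E2(p * tau) - E2(tau)

e = 0.5 + 1j * math.sqrt(3) / 6
val_at_e = abs(E2p(3, e))     # numerically ~ 0: e is a zero of E_{2,3}
val_away = abs(E2p(3, 2j))    # a generic point, where E_{2,3} is far from 0
print(val_at_e, val_away)
```

The truncation of the $q$-expansion is harmless here since $|q|=e^{-\pi/\sqrt{3}}<1$ at $e$; the analogous computation for $p=2$ at $e=1/2+i/2$ should behave the same way.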
\end{example} \section{Additional considerations} In this section, we use the elliptic Kronecker limit function to prove Weil's reciprocity law. In addition, we state various concluding remarks. \subsection{Weil reciprocity} To conclude this article, we will use equation \eqref{Kronecker_elliptic} to prove Weil's reciprocity law, which, for the convenience of the reader, we now state. \vskip .10in \begin{theorem}{\bf [Weil Reciprocity]}\label{Weil_reciprocity} Let $f$ and $g$ be meromorphic functions on the smooth, compact Riemann surface $M$. Let $D_{f}$ and $D_{g}$ denote the divisors of $f$ and $g$, respectively, which we assume to have disjoint support and which we write as $$ D_{f} = \sum m_{f}(P)P \,\,\,\,\,\textrm{and} \,\,\,\,\,D_{g} = \sum m_{g}(P)P. $$ Then $$ \prod\limits_{w_{j}\in D_{g}}f(w_{j})^{m_{g}(w_{j})} = \prod\limits_{z_{i}\in D_{f}} g(z_{i})^{m_{f}(z_{i})}. $$ \end{theorem} \vskip .10in \begin{proof} Consider the function $$ I(s;f,g) = \sum\limits_{z_{i}\in D_{f}}\sum\limits_{w_{j}\in D_{g}}m_{f}(z_{i})m_{g}(w_{j}) {\cal E}^{\textrm{ell}}_{w_{j}}(z_{i},s). $$ We shall compute the asymptotic expansion of $I(s;f,g)$ near $s=0$. Since both $D_{f}$ and $D_{g}$ have degree zero, we immediately have, for any constant $c$, the equations $$ \sum\limits_{z_{i}\in D_{f}}\sum\limits_{w_{j}\in D_{g}}m_{f}(z_{i})m_{g}(w_{j}) c = 0 $$ and $$ \sum\limits_{z_{i}\in D_{f}}\sum\limits_{w_{j}\in D_{g}}m_{f}(z_{i})m_{g}(w_{j}) \log\left((\mathrm{Im}(z_{i}))^{c}\right)=0. $$ Since $M$ is assumed to be smooth and compact, the terms in \eqref{Kronecker_elliptic} involving the parabolic Eisenstein series do not appear.
Hence, we have the asymptotic expansion \begin{align}\label{exp_I} I(s;f,g) = -\sum\limits_{z_{i}\in D_{f}}\sum\limits_{w_{j}\in D_{g}}m_{f}(z_{i})m_{g}(w_{j}) \log\left(\vert H(z_i,w_j)\vert\right)\cdot s+ O(s^{2}) \,\,\,\,\,\textrm{as $s \rightarrow 0$.} \end{align} Weil's reciprocity formula will be proved by evaluating $$ \lim\limits_{s \rightarrow 0}s^{-1}I(s;f,g) $$ in two different ways: first by summing over the points in $D_{f}$ and then over the points in $D_{g}$, and second by interchanging the order of summation. To begin, we claim there exist constants $a_{f}$ and $a_{g}$ such that $$ f(w) = a_{f}\prod\limits_{z_{i}\in D_{f}}H(z_{i},w)^{m_{f}(z_{i})} \,\,\,\,\,\textrm{and}\,\,\,\,\, g(z) = a_{g}\prod\limits_{w_{j}\in D_{g}}H(z,w_{j})^{m_{g}(w_{j})}. $$ Indeed, both sides of each proposed equality are meromorphic functions with the same divisors, hence differ by a multiplicative constant. Since both $D_{f}$ and $D_{g}$ have degree zero, one has that $$ \prod\limits_{z_{i}\in D_{f}}\vert a_{g}\vert^{m_{f}(z_{i})} =\prod\limits_{w_{j}\in D_{g}}\vert a_{f}\vert^{m_{g}(w_{j})} = 1. $$ Therefore, we can write the leading term in \eqref{exp_I} in two ways, yielding the identity \begin{equation}\label{Weilabs} \prod\limits_{w_{j}\in D_{g}}\vert f(w_{j})\vert^{m_{g}(w_{j})} = \prod\limits_{z_{i}\in D_{f}}\vert g(z_{i})\vert^{m_{f}(z_{i})}. \end{equation} It remains to argue that (\ref{Weilabs}) holds without the absolute value signs, which can be completed as follows. First, apply the above arguments in a fundamental domain $\cal F$ of $M$ whose interior contains the support of $D_{f}$ and $D_{g}$. On such a domain, one can choose a well-defined branch of $H(z,w)$, hence we arrive at the equality \begin{equation}\label{Weil} \prod\limits_{w_{j}\in D_{g}}f(w_{j})^{m_{g}(w_{j})} = \prod\limits_{z_{i}\in D_{f}} g(z_{i})^{m_{f}(z_{i})} \end{equation} viewing all points $z_{i}$ and $w_{j}$ as lying in $\cal F$.
Now, when tessellating by $\eta \in \Gamma$, one introduces multiplicative factors of the form \begin{equation}\label{multfactor} \prod\limits_{w_{j}\in D_{g}}\varepsilon_{\Gamma}(\eta)^{m_{g}(w_{j})} \,\,\,\,\,\textrm{and}\,\,\,\,\, \prod\limits_{z_{i}\in D_{f}} \varepsilon_{\Gamma}(\eta)^{m_{f}(z_{i})}. \end{equation} Since $D_{f}$ and $D_{g}$ have degree zero, each term in (\ref{multfactor}) is equal to one. Therefore, one gets a well-defined extension of (\ref{Weil}) to all $z, w \in \mathbb{H} $, which completes the proof of Theorem \ref{Weil_reciprocity}. \end{proof} \subsection{Unitary characters and Artin formalism} As with parabolic Eisenstein series, one can extend the study of elliptic Eisenstein series to include the presence of a unitary character. More precisely, let $\pi: \Gamma \rightarrow U(n)$ denote an $n$-dimensional unitary representation of the group $\Gamma$ with associated character $\chi_{\pi}$. Let us define \begin{equation}\label{ell_eisen_pi} {\cal E}^{\textrm{ell}}_{w}(z,s;\pi) =\sum\limits_{\eta \in \Gamma} \chi_{\pi}(\eta)\sinh(d_{\mathrm{hyp}}(\eta z, w))^{-s} \end{equation} to be the elliptic Eisenstein series twisted by $\chi_{\pi}$. Note that if $n=1$ and $\pi$ is trivial, then the above definition is equal to $\textrm{ord}(w)$ times the series in (\ref{ell_eisen}). (Again, we kept the definition (\ref{ell_eisen}) in order to be consistent with the notation in \cite{JvPS14}). In general terms, the meromorphic continuation of (\ref{ell_eisen_pi}) can be studied using the methodology of \cite{JvPS14}, which depends on the spectral expansion and small-time asymptotics of the associated heat kernel. As a result, we feel it is safe to say that one can subsequently prove the continuation of (\ref{ell_eisen_pi}). Having established the meromorphic continuation of (\ref{ell_eisen_pi}), one can then study the elliptic Kronecker limit functions.
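For $\mathrm{Re}(s)>1$ the defining series converges absolutely, so the twisted series \eqref{ell_eisen_pi} with trivial one-dimensional $\pi$ can be approximated by direct truncation. The following Python sketch is ours (the truncation bound \texttt{B}, the coset bookkeeping and all helper names are illustrative, not from the paper); it sums over representatives of $\mathrm{PSL}_2(\mathbb{Z})$ written as $T^k\gamma_{c,d}$ with $c\geq 0$.

```python
# Truncated evaluation of the series (ell_eisen_pi) with trivial character for
# PSL_2(Z); the bound B and helper names are illustrative assumptions, not the
# authors' notation.  Requires z not Gamma-equivalent to w (else a term blows up).
import math

def egcd(x, y):
    """Extended Euclid: returns (g, u, v) with u*x + v*y = g = gcd(x, y)."""
    if y == 0:
        return (x, 1, 0)
    g, u, v = egcd(y, x % y)
    return (g, v, u - (x // y) * v)

def hyp_dist(z, w):
    """Hyperbolic distance on the upper half-plane."""
    return math.acosh(1 + abs(z - w) ** 2 / (2 * z.imag * w.imag))

def elliptic_eisenstein(z, w, s, B):
    """Sum of sinh(d_hyp(eta z, w))^(-s) over eta in PSL_2(Z), truncated at B."""
    total = 0.0
    for c in range(B + 1):
        for d in ([1] if c == 0 else range(-B, B + 1)):
            if c > 0 and math.gcd(c, abs(d)) != 1:
                continue
            if c == 0:
                a0, b0 = 1, 0
            else:
                _, u, v = egcd(d, c)      # u*d + v*c = 1
                a0, b0 = u, -v            # hence a0*d - b0*c = 1
            for k in range(-B, B + 1):    # left translations T^k
                a, b = a0 + k * c, b0 + k * d
                eta_z = (a * z + b) / (c * z + d)
                total += math.sinh(hyp_dist(eta_z, w)) ** (-s)
    return total

v = elliptic_eisenstein(2j, 1j, 6, 15)    # rapid convergence for s = 6
print(v)
```

By the remark following \eqref{ell_eisen_pi}, for trivial $\pi$ this quantity is $\mathrm{ord}(w)$ times the series in (\ref{ell_eisen}); here $w=i$ has $\mathrm{ord}(w)=2$ in $\mathrm{PSL}_2(\mathbb{Z})$.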
It would be interesting to place the study in the context of the Artin formalism relations (see \cite{JLa94} and references therein). The system of elliptic Eisenstein series associated to the representations $\pi$ will satisfy additive Artin formalism relations, and, through exponentiation, the corresponding elliptic Kronecker limit functions will satisfy multiplicative Artin formalism relations. It would be interesting to carry out these computations in the setting of the congruence groups $\Gamma_{0}(N)$ as subgroups of the moonshine groups $\Gamma_{0}(N)^{+}$, for instance, in order to relate them to the above-mentioned computations for parabolic Kronecker limit functions. It is possible that a similar approach could yield further relations amongst the elliptic Kronecker limit functions. \subsection{The factorization theorem in other cases} \subsubsection{\bf Factorization for compact surfaces} \rm If $M$ is compact then, in a sense, Theorem \ref{thm: factorization} becomes the following. In the notation of the proof of Theorem \ref{thm: factorization}, the quotient $$ F_{f_{2k}}(z):= \frac{H_{f_{2k}}(z)}{f_{2k}(z)} $$ is a non-vanishing, bounded, holomorphic function on $M$, hence is constant, thus $$ f_{2k}(z) = c_{f_{2k}} H_{f_{2k}}(z):= c_{f_{2k}} \prod_{w \in Z(f_{2k})} H_{\Gamma}(z,w) $$ for some constant $c_{f_{2k}}$. The point now is to develop a strategy by which one can evaluate $c_{f_{2k}}$. Perhaps the most natural approach would be to study the limiting value of $$ \widetilde{H}_{\Gamma}(z) := \lim\limits_{w \rightarrow z} \frac{H_{\Gamma}(z,w)}{z-w}, $$ which needs to be considered in the correct sense as a holomorphic form on $M$. One can then express $c_{f_{2k}}$ in terms of the first non-zero coefficient of $f_{2k}$ about a point $z \in Z(f_{2k})$, a product of the forms $H_{\Gamma}(z,w)$ for two different points in $Z(f_{2k})$ and $\widetilde{H}_{\Gamma}(z)$. Such formulae could be quite interesting in various cases of arithmetic interest.
We will leave the development of such identities for future investigation. \subsubsection{\bf Factorization for surfaces with more than one cusp} \rm It is evident that one can generalize Theorem \ref{thm: factorization} to the case when the holomorphic form $f_{2k}$ vanishes at one or several cusps. In such an instance, one includes factors of the parabolic Kronecker limit function in the construction of $H_{f_{2k}}$. The parabolic Kronecker limit function is bounded and non-vanishing in any cusp other than the one to which it is associated, and the (fractional) order to which it vanishes follows from Theorem 1 of \cite{Ta86}. As with Theorem \ref{thm: factorization}, one can express any holomorphic modular form as a product of parabolic and elliptic Kronecker limit functions, up to a multiplicative constant. Furthermore, the multiplicative constant can be computed, up to a factor of modulus one, from the value of the various functions at a cusp. \noindent Jay Jorgenson \\ Department of Mathematics \\ The City College of New York \\ Convent Avenue at 138th Street \\ New York, NY 10031 U.S.A. \\ e-mail: [email protected] \noindent Anna-Maria von Pippich \\ Fachbereich Mathematik \\ Technische Universit\"at Darmstadt \\ Schlo{\ss}gartenstr. 7 \\ D-64289 Darmstadt \\ Germany \\ e-mail: [email protected] \noindent Lejla Smajlovi\'c \\ Department of Mathematics \\ University of Sarajevo\\ Zmaja od Bosne 35, 71 000 Sarajevo\\ Bosnia and Herzegovina\\ e-mail: [email protected] \end{document}
\begin{document} \title{\bf Relation between the skew-rank of an oriented graph and the independence number of its underlying graph\footnote{S. L. acknowledges the financial support from the National Natural Science Foundation of China (Grant Nos. 11271149, 11371062, 11671164), the Program for New Century Excellent Talents in University (Grant No. NCET-13-0817) and the Special Fund for Basic Scientific Research of Central Colleges (Grant No. CCNU15A02052). H.W. acknowledges the support from Simons Foundation (Grant No. 245307).}} \author{Jing Huang $^a$ \and Shuchao Li $^{a,}$\footnote{Corresponding author} \and Hua Wang $^b$} \date{} \maketitle \begin{center} $^a$ Faculty of Mathematics and Statistics, Central China Normal University, \\ Wuhan 430079, P.R. China\\ [email protected] (J.~Huang), [email protected] (S.C.~Li) $^b$ Department of Mathematical Sciences, Georgia Southern University,\\ Statesboro, GA 30460, United States\\ [email protected] (H.~Wang) \end{center} \begin{abstract} An oriented graph $G^\sigma$ is a digraph without loops or multiple arcs whose underlying graph is $G$. Let $S\left(G^\sigma\right)$ be the skew-adjacency matrix of $G^\sigma$ and $\alpha(G)$ be the independence number of $G$. The rank of $S(G^\sigma)$ is called the skew-rank of $G^\sigma$, denoted by $sr(G^\sigma)$. Wong et al. [European J. Combin. 54 (2016) 76-86] studied the relationship between the skew-rank of an oriented graph and the rank of its underlying graph. In this paper, the correlations involving the skew-rank, the independence number, and some other parameters are considered. First we show that $sr(G^\sigma)+2\alpha(G)\geqslant 2|V_G|-2d(G)$, where $|V_G|$ is the order of $G$ and $d(G)$ is the dimension of cycle space of $G$. We also obtain sharp lower bounds for $sr(G^\sigma)+\alpha(G)$, $sr(G^\sigma)-\alpha(G)$ and $sr(G^\sigma)/\alpha(G)$, and characterize all corresponding extremal graphs.
\end{abstract} \noindent{\bf Keywords}: Skew-rank; Oriented graph; Evenly-oriented; Independence number \noindent{AMS subject classification:} 05C50 \setcounter{section}{0} \section{Introduction}\setcounter{equation}{0} We will start with introducing some background information that will lead to our main results. Some important previously established facts will also be presented. \subsection{Background} Let $G=(V_G, E_G)$ be a graph with vertex set $V_G=\{v_1,v_2,\ldots,v_n\}$ and edge set $E_G$. Denote by $P_n, C_n$ and $K_n$ a path, a cycle and a complete graph of order $n$, respectively. The set of neighbors of a vertex $v$ in $G$ is denoted by $N_G(v)$ or simply $N(v).$ Unless otherwise stated, we follow the traditional notations and terminologies (see, for instance, \cite{5}). The \textit{adjacency matrix} $A(G)$ of $G$ is an $n\times n$ matrix whose $(i, j)$-entry is 1 if vertices $v_i$ and $v_j$ are adjacent and 0 otherwise. Given a graph $G$, the oriented graph $G^\sigma$ is obtained from $G$ by assigning each edge of $G$ a direction. We call $G$ the \textit{underlying graph} of $G^\sigma$. The \textit{skew-adjacency matrix} associated to $G^\sigma$, denoted by $S(G^\sigma)$, is defined to be an $n\times n$ matrix $[s_{x,y}]$ such that $s_{x,y} = 1$ if there is an arc from $x$ to $y$, $s_{x,y} = -1$ if there is an arc from $y$ to $x$ and $s_{x,y} = 0$ otherwise. The \textit{rank} of $G$, denoted by $r(G)$, is the rank of $A(G)$. The \textit{skew-rank} of $G^\sigma$, denoted by $sr(G^\sigma)$, is the rank of $S(G^\sigma)$. It is easy to see that $sr(G^\sigma)$ is even since $S(G^\sigma)$ is skew-symmetric. The value $$d(G):=|E_G|-|V_G|+\omega(G)$$ is called the \textit{dimension} of cycle space of $G$, where $\omega(G)$ is the number of the components of $G$. Two distinct edges in a graph $G$ are \textit{independent} if they do not share a common end-vertex.
A \textit{matching} is a set of pairwise independent edges of $G$, while a \textit{maximum matching} of $G$ is a matching with the maximum cardinality. The \textit{matching number} of $G$, written as $\alpha'(G)$, is the cardinality of a maximum matching of $G.$ Two vertices of a graph $G$ are said to be \textit{independent} if they are not adjacent. A subset $I$ of $V_G$ is called an \textit{independent set} if any two vertices of $I$ are independent in $G$. An independent set $I$ is \textit{maximum} if $G$ has no independent set $I'$ with $|I'|>|I|$. The number of vertices in a maximum independent set of $G$ is called the \textit{independence number} of $G$ and is denoted by $\alpha(G)$. An oriented graph is called \textit{acyclic} (resp. \textit{connected,\, bipartite}) if its underlying graph is acyclic (resp. connected,\, bipartite). A graph is called an \textit{empty graph} if it has no edges. We call $v$ a \textit{cut-vertex} of a connected oriented graph $G^\sigma$ if $G^\sigma-v$ is disconnected. The study of the skew spectra of oriented graphs has attracted much attention. Anuradha and Balakrishnan \cite{02} investigated the skew spectrum of the Cartesian product of two oriented graphs. Anuradha et al. \cite{03} considered the skew spectrum of bipartite graphs. Hou and Lei \cite{015} studied the coefficients of the characteristic polynomial of the skew-adjacency matrix of an oriented graph. Xu \cite{025} established a relation between the spectral radius and the skew spectral radius. Cavers et al. \cite{05} systematically studied skew-adjacency matrices of directed graphs. Among various specific topics, the minimal skew-rank of oriented graphs is of particular interest to researchers. The graphs with minimum skew-rank 2 or 4 are characterized in \cite{006}. The bicyclic oriented graphs with skew-rank 2, 4 and 6 are determined in \cite{8} and \cite{9}, respectively.
Recently, Mallik and Shader \cite{10} studied the minimum rank of all real skew-symmetric matrices described by a graph. Wong, Ma and Tian \cite{1} presented a beautiful relation between the skew-rank of an oriented graph and the rank of its underlying graph. Huang and Li \cite{004} further extended these results. For more properties and applications of the skew-rank of oriented graphs, we refer the readers to \cite{12,13,14,11}. Very recently, Ma, Wong and Tian \cite{6} determined the relationship between $sr(G^\sigma)$ and the matching number $\alpha'(G)$. They \cite{31} also characterized the relationship between $r(G)$ and $p(G)$ (the number of pendant vertices of $G$), from which one can obtain the same relationship between $sr(G^\sigma)$ and $p(G)$. It is natural to further this study by considering the correlation between $sr(G^\sigma)$ and some other parameters of its underlying graph. In this paper we first establish the sharp lower bound on $sr(G^\sigma)+2\alpha(G)$ of an oriented graph. We then apply the same fundamental idea to determine sharp lower bounds on $sr(G^\sigma)+\alpha(G), sr(G^\sigma)-\alpha(G)$ and $sr(G^\sigma)/\alpha(G)$ and characterize the corresponding extremal oriented graphs. \subsection{Main results} Let $C_k=v_1v_2\cdots v_kv_1$ be a cycle of length $k$. The \textit{sign} of $C_k^\sigma$ with respect to $\sigma$ is defined to be the sign of $\left(\prod_{i=1}^{k-1}s_{i,i+1}\right)\cdot s_{k,1}.$ An even oriented cycle $C_k^\sigma$ is called \textit{evenly-oriented} (resp. \textit{oddly-oriented}) if its sign is positive (resp. negative). An induced subgraph of $G^\sigma$ is an induced subgraph of $G$ where each edge preserves the original orientation in $G^\sigma$. For an induced subgraph $H^\sigma$ of $G^\sigma$, let $G^\sigma-H^\sigma$ be the subgraph obtained from $G^\sigma$ by removing all vertices of $H^\sigma$ and their incident edges. 
For $W\subseteq V_{G^\sigma}$, $G^\sigma-W$ is the subgraph obtained from $G^\sigma$ by removing all vertices in $W$ and all incident edges. A vertex of $G^\sigma$ is called a \textit{pendant vertex} if it is of degree one in $G$, whereas a vertex of $G^\sigma$ is called a \textit{quasi-pendant vertex} if it is adjacent to a pendant vertex in $G$. Given a graph $G$ with pairwise vertex-disjoint cycles, let $\mathscr{C}_G$ denote the set of all cycles of $G$. Contracting each cycle of $G$ to a single vertex yields a graph $T_G$, which is clearly acyclic. Note that the graph $T_G-W_{\mathscr{C}}$ (where $W_{\mathscr{C}}$ is the set of vertices corresponding to the cycles in $G$) is the same as the graph obtained from $G$ by removing all the vertices on cycles and their incident edges. We denote this graph by $\Gamma_G.$ For example, in Fig.~\ref{fig:1}, $T_G$ is obtained from $G$ by contracting each cycle into a single vertex, and $\Gamma_G$ is obtained from $G$ by removing all the vertices on cycles and their incident edges. \begin{figure} \caption{ Graphs $G,\, T_G$, and $\Gamma_G$.} \label{fig:1} \end{figure} Following the above notation, our first main result reads as follows. \begin{thm}\label{theo:2.1} Let $G^\sigma$ be a simple connected graph on $n$ vertices. Then \begin{equation}\label{eq:1.1} sr(G^\sigma)+2\alpha(G)\geqslant2n-2d(G). \end{equation} The equality in $(\ref{eq:1.1})$ holds if and only if the following conditions hold for $G^\sigma:$ \begin{wst} \item[{\rm (i)}] the cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint; \item[{\rm (ii)}] each cycle of $G^\sigma$ is odd or evenly-oriented; \item[{\rm (iii)}]$\alpha(T_G)=\alpha(\Gamma_G)+d(G)$. \end{wst} \end{thm} For example, let $G$ be as in Fig.~\ref{fig:1}.
If all the even cycles in $G^\sigma$ are evenly-oriented, then $G^\sigma$ satisfies conditions (i)-(iii) (note that $\alpha(T_G)=5,\, \alpha(\Gamma_G)=2,\,d(G)=3)$ and $sr(G^\sigma)+2\alpha(G)=2n-2d(G)$ holds with $n=17,\,sr(G^\sigma)=12$ and $\alpha(G)=8$. In the case that $G$ is bipartite, the following is a direct consequence of Theorem~\ref{theo:2.1}. \begin{cor}\label{cor:2.2} Let $G^\sigma$ be a simple connected bipartite graph with $n$ vertices. Then $sr(G^\sigma)+2\alpha(G)\geqslant2n-2d(G)$ with equality if and only if the following conditions hold for $G^\sigma:$ \begin{wst} \item[{\rm (i)}] the cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint; \item[{\rm (ii)}] each cycle of $G^\sigma$ is evenly-oriented; \item[{\rm (iii)}]$\alpha(T_G)=\alpha(\Gamma_G)+d(G)$. \end{wst} \end{cor} Note that $\alpha(G)+\alpha'(G)=|V_G|$ if $G$ is bipartite and $d(G)$ is exactly the number of cycles if the cycles of $G$ are pairwise vertex-disjoint. Hence, when $G$ is bipartite, Corollary~\ref{cor:2.2} is equivalent to Theorem~\ref{theo:2.3} below, obtained in \cite{6}, which shows the correlation between the skew-rank of an oriented graph, the matching number, and the dimension of cycle space of its underlying graph. \begin{thm}[\cite{6}]\label{theo:2.3} Let $G^\sigma$ be a simple connected graph. Then $sr(G^\sigma)-2\alpha'(G)\geqslant-2d(G)$ with equality if and only if the following conditions hold for $G^\sigma:$ \begin{wst} \item[{\rm (i)}] the cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint; \item[{\rm (ii)}] each cycle of $G^\sigma$ is evenly-oriented; \item[{\rm (iii)}] $\alpha'(T_G)=\alpha'(\Gamma_G)$. \end{wst} \end{thm} Along the same line, we establish sharp lower bounds on $sr(G^\sigma)+\alpha(G), sr(G^\sigma)-\alpha(G)$, and $sr(G^\sigma)/\alpha(G)$ in the next three theorems. \begin{thm}\label{theo:2.4} Let $G^\sigma$ be a simple connected graph with $n$ vertices and $m$ edges.
Then \begin{equation}\label{eq:1.2} sr(G^\sigma)+\alpha(G)\geqslant 4n-2m-\sqrt{n(n-1)-2m+\frac{1}{4}}-\frac{5}{2} \end{equation} with equality if and only if $G\cong S_n$ or $G\cong C_3$. \end{thm} \begin{thm}\label{theo:2.5} Let $G^\sigma$ be a simple connected graph with $n$ vertices and $m$ edges. Then \begin{equation*} sr(G^\sigma)-\alpha(G)\geqslant 4n-2m-3\sqrt{n(n-1)-2m+\frac{1}{4}}-\frac{7}{2} \end{equation*} with equality if and only if $G\cong S_n$ or $G\cong C_3$. \end{thm} \begin{thm}\label{theo:2.6} Let $G^\sigma$ be a simple connected graph with $n$ vertices and $m$ edges. Then \begin{equation*} \frac{sr(G^\sigma)}{\alpha(G)}\geqslant \frac{4(2n-m-1)}{\sqrt{4n(n-1)-8m+1}+1}-2 \end{equation*} with equality if and only if $G\cong S_n$ or $G\cong C_3$. \end{thm} In Section~\ref{sec:2} we first establish some technical lemmas that help us characterize the extremal graphs. We present the proofs of our main results in Section~\ref{sec:3}. We briefly comment on our findings and propose some questions in Section~\ref{sec:4}. \subsection{Preliminaries} For the rest of our introduction we recall the following important facts. \begin{lem}[\cite{2}] \label{lem:3.1} Let $G^\sigma$ be an oriented graph: \begin{wst} \item[{\rm (i)}] If $H^\sigma$ is an induced subgraph of $G^\sigma$, then $sr(H^\sigma)\leqslant sr(G^\sigma);$ \item[{\rm (ii)}] If $G_1^\sigma, G_2^\sigma, \ldots, G_t^\sigma$ are all the components of $G^\sigma,$ then $sr(G^\sigma)=\sum^t_{i=1}sr(G_i^\sigma);$ \item[{\rm (iii)}] $sr(G^\sigma)\geqslant 0$ with equality if and only if $G^\sigma$ is an empty graph. \end{wst} \end{lem} The following observation immediately follows from the definition of the independence number. \begin{lem}\label{lem:3.2} Let $G$ be a simple connected graph.
Then \begin{wst} \item[{\rm (i)}]$\alpha(G)-1\leqslant\alpha(G-v)\leqslant\alpha(G)$ for any $v\in V_G$; \item[{\rm (ii)}] $\alpha(G-e)\geqslant\alpha(G)$ for any $e\in E_G$. \end{wst} \end{lem} \begin{lem}[\cite{5}] \label{lem:3.3} Let $P_n$ be a path of order $n$. Then $r(P_n)=n$ if $n$ is even, and $r(P_n)=n-1$ if $n$ is odd. \end{lem} \begin{lem}[\cite{2}] \label{lem:3.4} Let $F^\sigma$ be an oriented acyclic graph with matching number $\alpha'(F)$. Then $sr(F^\sigma)=r(F)=2\alpha'(F).$ \end{lem} \begin{lem}[\cite{28}] \label{lem:3.5} Let $G$ be a bipartite graph with $n$ vertices. Then $\alpha(G)+\alpha'(G)=n$. \end{lem} \begin{lem}[\cite{23}] \label{lem:3.6} Let $C_n^\sigma$ be an oriented cycle of order $n$. Then $sr(C_n^\sigma)=n$ if $C_n^\sigma$ is oddly-oriented, $sr(C_n^\sigma)=n-2$ if $C_n^\sigma$ is evenly-oriented, and $sr(C_n^\sigma)=n-1$ if $n$ is odd. \end{lem} \begin{lem}[\cite{2}] \label{lem:3.7} Let $y$ be a pendant vertex of $G^\sigma$ and let $x$ be the neighbor of $y$. Then $sr(G^\sigma)=sr(G^\sigma-x)+2= sr(G^\sigma-x-y)+2$. \end{lem} \begin{lem}[\cite{1}] \label{lem:3.8} Let $x$ be a vertex of $G^\sigma$. Then $sr(G^\sigma-x)$ is equal to either $sr(G^\sigma)$ or $sr(G^\sigma)-2.$ \end{lem} The following lemma on the dimension of cycle space of $G$ follows directly from the definition of $d(G)$. \begin{lem}[\cite{1}] \label{lem:3.9} Let $G$ be a graph with $x\in V_G$. \begin{wst} \item[{\rm (i)}] $d(G)=d(G-x)$ if $x$ is not on any cycle of $G;$ \item[{\rm (ii)}] $d(G-x)\leqslant d(G)-1$ if $x$ lies on a cycle; \item[{\rm (iii)}] $d(G-x)\leqslant d(G)-2$ if $x$ is a common vertex of distinct cycles; \item[{\rm (iv)}] If the cycles of $G$ are pairwise vertex-disjoint, then $d(G)$ is exactly the number of cycles in $G$. \end{wst} \end{lem} The next result is on the rank of an acyclic graph. Let $T$ be a tree with at least one edge; we denote by $\widetilde{T}$ the subtree obtained from $T$ by removing all pendant vertices of $T$.
\begin{lem}[\cite{6}] \label{lem:3.10} Let $T$ be a tree with at least one edge. Then \begin{wst} \item[{\rm (i)}] $r(\widetilde{T})<r(T);$ \item[{\rm (ii)}] If $r(T-D)=r(T)$ for a subset $D$ of $V_T$, then there is a pendant vertex $v$ such that $v\notin D.$ \end{wst} \end{lem} Recall that $p(G)$ is the number of pendant vertices of $G$. From Lemmas~\ref{lem:3.4}, \ref{lem:3.5} and \ref{lem:3.10} we immediately obtain the following. \begin{cor} \label{cor:3.11} Let $T$ be a tree with at least one edge. Then \begin{wst} \item[{\rm (i)}] $\alpha(T)<\alpha(\widetilde{T})+p(T)$; \item[{\rm (ii)}] If $\alpha(T)=\alpha(T-D)+|D|$ for a subset $D$ of $V_T$, then there is a pendant vertex $v$ such that $v\notin D.$ \end{wst} \end{cor} \section{Technical lemmas}\label{sec:2} In this section we present a few technical lemmas. First we establish \eqref{eq:1.1}. \begin{lem}\label{lem:new} The inequality \eqref{eq:1.1} holds. \end{lem} \begin{proof} We proceed by induction on $d(G).$ If $d(G)=0$, then $G^\sigma$ is an oriented tree and the result follows immediately from Lemmas~\ref{lem:3.4} and \ref{lem:3.5}. Now suppose that $G^\sigma$ has at least one cycle, i.e., $d(G) \geqslant 1,$ and let $x$ be a vertex on some cycle. By Lemma~\ref{lem:3.9}(ii) we have \begin{eqnarray}\label{eq:3.1} d(G-x)\leqslant d(G)-1. \end{eqnarray} By the induction hypothesis one has \begin{eqnarray}\label{eq:3.2} sr(G^\sigma-x)+2\alpha(G-x)\geqslant2(n-1)-2d(G-x). \end{eqnarray} By Lemma~\ref{lem:3.1}(i) and Lemma~\ref{lem:3.2}(i), we obtain \begin{eqnarray}\label{eq:3.3} sr(G^\sigma-x)\leqslant sr(G^\sigma),\ \ \ \alpha(G-x)\leqslant\alpha(G). \end{eqnarray} The inequality \eqref{eq:1.1} then follows from (\ref{eq:3.1})-(\ref{eq:3.3}). \end{proof} For convenience we call a graph $G^\sigma$ ``lower-optimal'' if it achieves equality in \eqref{eq:1.1}. In the rest of this section we aim to provide some fundamental characterizations of lower-optimal oriented graphs.
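To fix ideas, here is a small example (a routine check, not needed later). Let $C_3^\sigma$ be any orientation of the triangle. Then $n=3$, $d(C_3)=1$ and $\alpha(C_3)=1$, while Lemma~\ref{lem:3.6} gives $sr(C_3^\sigma)=3-1=2$ since $3$ is odd. Hence
$$
sr(C_3^\sigma)+2\alpha(C_3)=2+2=4=2\cdot 3-2\cdot 1=2n-2d(C_3),
$$
so $C_3^\sigma$ is lower-optimal.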
\begin{lem}\label{lem:4.1} Let $x$ be a vertex on a cycle of $G^\sigma$. If $G^\sigma$ is lower-optimal, then \begin{wst} \item[{\rm (i)}] $sr(G^\sigma)=sr(G^\sigma-x);$ \item[{\rm (ii)}] $\alpha(G)=\alpha(G-x);$ \item[{\rm (iii)}] $d(G)=d(G-x)+1;$ \item[{\rm (iv)}] $G^\sigma-x$ is lower-optimal; \item[{\rm (v)}] $x$ lies on just one cycle of $G$ and $x$ is not a quasi-pendant vertex of $G$. \end{wst} \end{lem} \begin{proof} The lower-optimal condition for $G^\sigma$ together with the proof of Lemma~\ref{lem:new} forces equalities in (\ref{eq:3.1})-(\ref{eq:3.3}). Consequently we have (i)-(iv). By (iii) and Lemma~\ref{lem:3.9}(iii) we obtain that $x$ lies on just one cycle of $G$. If $x$ is a quasi-pendant vertex adjacent to a pendant vertex $y$, then by Lemma~\ref{lem:3.7}, we have $sr(G^\sigma)=sr(G^\sigma-x)+2$, which is a contradiction to (i). This completes the proof of (v). \end{proof} The next observation, although simple, is very helpful to our proof. \begin{lem}\label{lem:4.2} Let $y$ be a pendant vertex of $G$ with neighbor $x$. Then $\alpha(G)=\alpha(G-x)=\alpha(G-x-y)+1$. \end{lem} \begin{proof} It is routine to check that $\alpha(G-x)=\alpha(G-x-y)+1$. In order to complete the proof, it suffices to show that $\alpha(G)=\alpha(G-x)$. In fact, let $I$ be a maximum independent set of $G$. If $x\notin I$, then $I$ is also a maximum independent set of $G-x$ and we have $\alpha(G)=|I|=\alpha(G-x)$. If $x\in I$, then $y\notin I$, thus $(I\backslash \{x\})\cup\{y\}$ is an independent set of $G-x$. Hence we have $\alpha(G-x)\geqslant\left|(I\backslash \{x\})\cup\{y\}\right|=|I|=\alpha(G)$. By Lemma~\ref{lem:3.2}(i), we have $\alpha(G-x)\leqslant\alpha(G)$. Therefore we have $\alpha(G)=\alpha(G-x)=\alpha(G-x-y)+1$ as desired. \end{proof} Given an induced oriented subgraph $H^\sigma$ of $G^\sigma,$ let $v_i$ be in $V_{G^\sigma} \setminus V_{H^\sigma}$. 
Then the induced oriented subgraph of $G^\sigma$ with vertex set $V_{H^\sigma}\bigcup\{v_i\}$ is simply written as $H^\sigma+v_i$. The following lemma summarizes a few known results. \begin{lem}\label{lem:4.3} Let $C_q^\sigma$ be a pendant oriented cycle of $G^\sigma$ with $x$ being a vertex of $C_q$ of degree $3,$ and let $H^\sigma = G^\sigma-C_q^\sigma, M^\sigma = H^\sigma + x.$ Then \begin{eqnarray*} sr(G^\sigma)=\left\{ \begin{array}{lll} q-1+sr(M^\sigma), & \hbox{if $q$ is odd;} \ \ \ \ \ \text{\rm (see \cite{1})}\\ q-2+sr(M^\sigma), & \hbox{if $C_q^\sigma$ is evenly-oriented;} \ \ \ \ \ \text{\rm (see \cite{1})} \\ q+sr(H^\sigma), & \hbox{if $C_q^\sigma$ is oddly-oriented.} \ \ \ \ \ \text{\rm (see \cite{004})} \end{array} \right. \end{eqnarray*} \end{lem} Following the same direction we establish a few more facts in the rest of this section. \begin{lem}\label{lem:4.4} Let $C_q^\sigma$ be a pendant oriented cycle of $G^\sigma$ with $x$ being the unique vertex of $C_q$ of degree 3. Let $H^\sigma= G^\sigma-C_q^\sigma$ and $M^\sigma = H^\sigma + x$, if $G^\sigma$ is lower-optimal, then \begin{wst} \item[{\rm (i)}] $q$ is odd or $C_q^\sigma$ is evenly-oriented; \item[{\rm (ii)}] $sr(G^\sigma)=q-1+sr(H^\sigma), \alpha(G)=\alpha(H)+\frac{q-1}{2}$ if $q$ is odd and $sr(G^\sigma)=q-2+sr(H^\sigma), \alpha(G)=\alpha(H)+\frac{q}{2}$ if $C_q^\sigma$ is evenly-oriented; \item[{\rm (iii)}] both $H^\sigma$ and $M^\sigma$ are lower-optimal; \item[{\rm (iv)}] $sr(M^\sigma)=sr(H^\sigma)$ and $\alpha(M) = \alpha(H)+1.$ \end{wst} \end{lem} \begin{proof} (i)\ Supposing for contradiction that $C_q^\sigma$ is oddly-oriented, then by Lemma~\ref{lem:4.3} we have \begin{eqnarray}\label{eq:3.4} sr(G^\sigma)=q+sr(H^\sigma). \end{eqnarray} Note that $x$ lies on the cycle $C_q$. Hence, by Lemma~\ref{lem:4.1}(ii) we have \begin{eqnarray}\label{eq:3.5} \alpha(G)=\alpha(G-x)=\alpha(P_{q-1})+\alpha(H)=\frac{q}{2}+\alpha(H). 
\end{eqnarray} As $C_q$ is a pendant cycle of $G$, we have \begin{eqnarray}\label{eq:3.6} d(G)=d(M)+1=d(H)+1. \end{eqnarray} Suppose $|V_G|=n$. Since $G^\sigma$ is lower-optimal, we have \begin{eqnarray}\label{eq:3.7} sr(G^\sigma)+2\alpha(G)=2n-2d(G). \end{eqnarray} From (\ref{eq:3.4})-(\ref{eq:3.7}) we have $sr(H^\sigma)+2\alpha(H)=2(n-q)-2d(H)-2$, which is a contradiction to (\ref{eq:1.1}). This completes the proof of (i). Next we show (ii)-(iv) according to the following two possible cases. \noindent {\bf{Case 1.}}\ $q$ is odd. (ii)\ Note that $x$ lies on a cycle of $G$, by Lemma~\ref{lem:4.1}(i)-(ii) we have \begin{align} sr(G^\sigma)&=sr(G^\sigma-x)=sr(P_{q-1}^\sigma)+sr(H^\sigma)=q-1+sr(H^\sigma),\label{eq:3.8}\\ \alpha(G)&=\alpha(G-x)=\alpha(P_{q-1})+\alpha(H)=\frac{q-1}{2}+\alpha(H).\label{eq:3.9} \end{align} (iii)-(iv)\ From (\ref{eq:3.6})-(\ref{eq:3.9}) we have $sr(H^\sigma)+2\alpha(H)=2(n-q)-2d(H)$, implying that $H^\sigma$ is lower-optimal. Since $q$ is odd, by Lemma~\ref{lem:4.3}, we have \begin{eqnarray}\label{eq:3.10} sr(G^\sigma)=q-1+sr(M^\sigma). \end{eqnarray} Combining (\ref{eq:3.8}) and (\ref{eq:3.10}) yields \begin{eqnarray}\label{eq:3.11} sr(H^\sigma)=sr(M^\sigma). \end{eqnarray} Furthermore, \begin{eqnarray} 2\alpha(H)&=&2(n-q)-sr(H^\sigma)-2d(H)\notag\\ &=&2(n-q+1)-sr(M^\sigma)-2d(H)-2\notag\\ &=&2(n-q+1)-sr(M^\sigma)-2d(M)-2\notag\\ &\leqslant&2\alpha(M)-2,\label{eq:3.12} \end{eqnarray} where the first equality follows from the lower-optimal condition for $H^\sigma$, the second and the third equalities follow from (\ref{eq:3.11}) and (\ref{eq:3.6}), respectively. And the last inequality (\ref{eq:3.12}) follows from applying (\ref{eq:1.1}) to $M^\sigma$. Thus we have $\alpha(H)\leqslant\alpha(M)-1$. It follows from Lemma~\ref{lem:3.2}(i) that $\alpha(H)\geqslant\alpha(M)-1$. Hence $\alpha(H)=\alpha(M)-1$. Consequently we have $sr(M^\sigma)+2\alpha(M)=2(n-q+1)-2d(M)$, implying that $M^\sigma$ is also lower-optimal. 
\noindent {\bf{Case 2.}}\ $C_q^\sigma$ is evenly-oriented. (ii)\ Since $x$ lies on a cycle of $G$, by Lemma~\ref{lem:4.1}(i)-(ii) we have \begin{align} sr(G^\sigma)&=sr(G^\sigma-x)=sr(P_{q-1}^\sigma)+sr(H^\sigma)=q-2+sr(H^\sigma),\label{eq:3.13}\\ \alpha(G)&=\alpha(G-x)=\alpha(P_{q-1})+\alpha(H)=\frac{q}{2}+\alpha(H).\label{eq:3.14} \end{align} (iii)\ Let $x_1$ be on $C_q$ such that it is adjacent to $x$. By applying Lemma~\ref{lem:4.1} to $G^\sigma$ (resp. $G$) and Lemma~\ref{lem:3.7} (resp. Lemma~\ref{lem:4.2}) to $G^\sigma-x_1$ (resp. $G-x_1$) we have \begin{align} sr(G^\sigma)&=sr(G^\sigma-x_1)=q-2+sr(M^\sigma),\label{eq:3.15}\\ \alpha(G)&=\alpha(G-x_1)=\frac{q-2}{2}+\alpha(M)\label{eq:3.16}. \end{align} From (\ref{eq:3.6})-(\ref{eq:3.7}) and (\ref{eq:3.13})-(\ref{eq:3.14}), one has $sr(H^\sigma)+2\alpha(H)=2(n-q)-2d(H)$, implying that $H^\sigma$ is lower-optimal. Combining (\ref{eq:3.6})-(\ref{eq:3.7}) and (\ref{eq:3.15})-(\ref{eq:3.16}), we have $sr(M^\sigma)+2\alpha(M)=2(n-q+1)-2d(M)$, which implies that $M^\sigma$ is also lower-optimal. (iv)\ \ Combining (\ref{eq:3.13}) and (\ref{eq:3.15}) yields $sr(M^\sigma)=sr(H^\sigma),$ whereas equalities (\ref{eq:3.14}) and (\ref{eq:3.16}) lead to $\alpha(M)=\alpha(H)+1.$ This completes the proof. \end{proof} \begin{lem}\label{lem:4.5} Let $y$ be a pendant vertex of $G^\sigma$ with neighbor $x$, and let $H^\sigma=G^\sigma-y-x.$ If $G^\sigma$ is lower-optimal, then \begin{wst} \item[{\rm (i)}] $x$ does not lie on any cycle of $G;$ \item[{\rm (ii)}] $H^\sigma$ is also lower-optimal. 
\end{wst} \end{lem} \begin{proof} (i)\ \ Since $x$ is a quasi-pendant vertex of $G$, Lemma~\ref{lem:4.1}(v) implies that $x$ does not lie on any cycle of $G.$ (ii)\ \ By Lemmas~\ref{lem:3.7} and \ref{lem:4.2}, we have \begin{eqnarray} sr(G^\sigma)=sr(H^\sigma)+2 \hbox{ and } \alpha(G)=\alpha(H)+1.\label{eq:4.17} \end{eqnarray} Since $x$ does not lie on any cycle of $G$, by Lemma~\ref{lem:3.9}(i) we have \begin{eqnarray}\label{eq:4.18} d(G)=d(H). \end{eqnarray} Equalities (\ref{eq:4.17})-(\ref{eq:4.18}), together with the lower-optimal condition of $G^\sigma$, imply that $sr(H^\sigma)+2\alpha(H)=2(n-2)-2d(H)$, i.e., $H^\sigma$ is lower-optimal. \end{proof} \begin{lem}\label{lem:4.6} If $G^\sigma$ is lower-optimal, then \begin{wst} \item[{\rm (i)}] the cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint; \item[{\rm (ii)}] each cycle (if any) of $G^\sigma$ is odd or evenly-oriented; \item[{\rm (iii)}] $\alpha(G)=\alpha(T_G)+\sum_{C\in \mathscr{C}_G}\left\lfloor\frac{|V_C|}{2}\right\rfloor-d(G).$ \end{wst} \end{lem} \begin{proof} If $G$ contains cycles, then let $x$ be a vertex on some cycle. By Lemma~\ref{lem:4.1}(iii) we have $d(G)=d(G-x)+1$. By Lemma~\ref{lem:3.9}(iii) $x$ cannot be a common vertex of distinct cycles, hence the cycles of $G^\sigma$ are pairwise vertex-disjoint. This completes the proof of (i). We proceed by induction on the order $n$ of $G$ to prove (ii) and (iii). The initial case $n=1$ is trivial. Suppose that (ii) and (iii) hold for any lower-optimal oriented graph of order smaller than $n$, and suppose $G^\sigma$ is a lower-optimal oriented graph of order $n\geqslant 2.$ If $T_G$ is an empty graph, then $G^\sigma$ is a single oriented cycle. Thus (ii) follows from the fact that a single oriented cycle $C_q^\sigma$ is lower-optimal if and only if $q$ is odd or $C_q^\sigma$ itself is evenly-oriented.
And (iii) follows from the fact that $\alpha(C_q)=\frac{q-1}{2}$ if $q$ is odd and $\alpha(C_q)=\frac{q}{2}$ if $C_q^\sigma$ is evenly-oriented. If $T_G$ has at least one edge, then $T_G$ contains at least one pendant vertex, say $y$. Then $y$ is either a pendant vertex of $G$ or $y\in W_{\mathscr{C}}$, in which case $G$ contains a pendant cycle. We now consider both cases. \noindent {\bf{Case 1.}}\ $G$ contains a pendant vertex $y$. In this case, let $x$ be the neighbor of $y$ in $G$ and let $H^\sigma=G^\sigma-x-y.$ By Lemma~\ref{lem:4.5}, $x$ is not a vertex on any cycle of $G$ and $H^\sigma$ is also lower-optimal. By the induction hypothesis we have \begin{wst} \item[{\rm (a)}] each cycle (if any) of $H^\sigma$ is odd or evenly-oriented; \item[{\rm (b)}] $\alpha(H)=\alpha(T_H)+\sum_{C\in \mathscr{C}_H}\left\lfloor\frac{|V_C|}{2}\right\rfloor-d(H).$ \end{wst} Since all cycles of $G$ are also cycles of $H$, Assertion (a) implies that each cycle (if any) of $G^\sigma$ is odd or evenly-oriented. Hence (ii) holds. Since $x$ does not lie on any cycle of $G$, by Lemma~\ref{lem:3.9}(i) we have \begin{eqnarray}\label{eq:3.17} d(G)=d(H). \end{eqnarray} Recall that $y$ is also a pendant vertex of $T_G$ adjacent to $x$ and $T_H=T_G-x-y$; then by Lemma~\ref{lem:4.2}, Assertion (b) and (\ref{eq:3.17}) we have $$ \alpha(G)=\alpha(H)+1=\alpha(T_H)+\sum_{C\in \mathscr{C}_H}\left\lfloor\frac{|V_C|}{2}\right\rfloor-d(H)+1=\alpha(T_G)+\sum_{C\in \mathscr{C}_G}\left\lfloor\frac{|V_C|}{2}\right\rfloor-d(G). $$ Thus (iii) holds. \noindent {\bf{Case 2.}}\ $G$ has a pendant cycle $C_q$. In this case, let $x$ be the unique vertex of $C_q$ of degree 3, $H^\sigma=G^\sigma-C_q^\sigma$ and $M^\sigma=H^\sigma+x.$ It follows from Lemma~\ref{lem:4.4}(iii) that $M^\sigma$ is lower-optimal.
Applying the induction hypothesis to $M^\sigma$ yields \begin{wst} \item[{\rm (c)}] each cycle of $M^\sigma$ is odd or evenly-oriented; \item[{\rm (d)}] $\alpha(M)=\alpha(T_M)+\sum_{C\in \mathscr{C}_M}\left\lfloor\frac{|V_C|}{2}\right\rfloor-d(M).$ \end{wst} Assertion (c) and Lemma~\ref{lem:4.4}(i) imply that each cycle of $G^\sigma$ is odd or evenly-oriented since $\mathscr{C}_G=\mathscr{C}_M\cup\{C_q\}.$ Thus, (ii) holds. Combining Lemma~\ref{lem:4.4}(ii), Lemma~\ref{lem:4.4}(iv) and Assertion (d) we have \begin{eqnarray}\label{eq:3.18} \alpha(G)=\alpha(M)+\left\lfloor\frac{q}{2}\right\rfloor-1=\alpha(T_M)+\sum_{C\in \mathscr{C}_M}\left\lfloor\frac{|V_C|}{2}\right\rfloor+\left\lfloor\frac{q}{2}\right\rfloor-d(M)-1. \end{eqnarray} As $C_q$ is a pendant cycle of $G$, we have \begin{eqnarray}\label{eq:3.19} d(G)=d(M)+1. \end{eqnarray} Note that $T_M\cong T_G$ and $\left\lfloor\frac{q}{2}\right\rfloor+\sum_{C\in \mathscr{C}_M}\left\lfloor\frac{|V_C|}{2}\right\rfloor=\sum_{C\in \mathscr{C}_G}\left\lfloor\frac{|V_C|}{2}\right\rfloor$. Together with (\ref{eq:3.18})-(\ref{eq:3.19}) we have \begin{eqnarray*} \alpha(G)=\alpha(T_G)+\sum_{C\in \mathscr{C}_G}\left\lfloor\frac{|V_C|}{2}\right\rfloor-d(G), \end{eqnarray*} as desired. \end{proof} \section{Proofs of main results}\label{sec:3}\setcounter{equation}{0} We will first provide the proof of Theorem~\ref{theo:2.1}, based on which the other proofs follow. \subsection{Theorem~\ref{theo:2.1}} Lemma~\ref{lem:new} already established (\ref{eq:1.1}). We now characterize all the oriented graphs $G^\sigma$ which attain the lower bound by considering the necessary and sufficient conditions for the equality in (\ref{eq:1.1}). For ``sufficiency'', we proceed by induction on the order $n$ of $G$ to show that $G^\sigma$ is lower-optimal if $G^\sigma$ satisfies the conditions (i)-(iii). The $n=1$ case is trivial.
Suppose that any oriented graph of order smaller than $n$ which satisfies (i)-(iii) is lower-optimal, and suppose $G^\sigma$ is an oriented graph with order $n\geqslant 2$ that satisfies (i)-(iii). Since the cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint, Lemma~\ref{lem:3.9}(iv) states that $G$ has exactly $d(G)$ cycles, implying that $|W_{\mathscr{C}}|=d(G)$. If $T_G$ is an empty graph, it follows from (ii) that $G^\sigma$ is an odd cycle or an evenly-oriented cycle, leading to the fact that $G^\sigma$ is lower-optimal. So in what follows, we assume that $T_G$ has at least one edge. Note that $\alpha(T_G)=\alpha(\Gamma_G)+d(G)=\alpha(T_G-W_{\mathscr{C}})+d(G)$. Then by Corollary~\ref{cor:3.11}(ii), there exists a pendant vertex of $T_G$ not in $W_{\mathscr{C}}$. Thus $G$ contains at least one pendant vertex, say $y$. Let $x$ be the unique neighbor of $y$ in $G$ and let $H^\sigma=G^\sigma-x-y$. Then $y$ is also a pendant vertex of $T_G$ adjacent to $x$. By Lemma~\ref{lem:4.2}, we have \begin{eqnarray}\label{eq:3.20} \alpha(T_G)=\alpha(T_G-x)=\alpha(T_H)+1. \end{eqnarray} If $x\in W_{\mathscr{C}}$, then the graph $\Gamma_G\cup d(G)K_1$ can be obtained from $(T_G-x)\cup K_1$ by removing some edges. By Lemma~\ref{lem:3.2}(ii), we get \begin{eqnarray}\label{eq:3.21} \alpha(\Gamma_G)+d(G)\geqslant\alpha(T_G-x)+1. \end{eqnarray} Now from (\ref{eq:3.20})-(\ref{eq:3.21}) we have $\alpha(\Gamma_G)\geqslant\alpha(T_G-x)-d(G)+1=\alpha(T_G)-d(G)+1$, a contradiction to (iii). Thus $x$ does not lie on any cycle of $G$. Then $y$ is also a pendant vertex of $\Gamma_G$ adjacent to $x$ and $\Gamma_H=\Gamma_G-x-y$. By Lemma~\ref{lem:4.2} we have \begin{eqnarray}\label{eq:3.21t} \alpha(\Gamma_G)=\alpha(\Gamma_H)+1. \end{eqnarray} As $x$ does not lie on any cycle of $G$, Lemma~\ref{lem:3.9}(i) implies that \begin{eqnarray}\label{eq:3.22t} d(G)=d(H).
\end{eqnarray} Now from condition (iii) and (\ref{eq:3.20}), (\ref{eq:3.21t})-(\ref{eq:3.22t}), we have $\alpha(T_H)=\alpha(\Gamma_H)+d(H).$ Since all cycles of $G$ are cycles of $H$, we conclude that $H^\sigma$ satisfies conditions (i)-(iii). By the induction hypothesis we have \begin{eqnarray}\label{eq:3.22} sr(H^\sigma)+2\alpha(H)=2(n-2)-2d(H). \end{eqnarray} Furthermore, it follows from Lemmas~\ref{lem:3.7} and \ref{lem:4.2} that \begin{eqnarray}\label{eq:3.23} sr(G^\sigma)=sr(H^\sigma)+2 \hbox{ and } \alpha(G)=\alpha(H)+1. \end{eqnarray} By (\ref{eq:3.22t})-(\ref{eq:3.23}) we have $sr(G^\sigma)+2\alpha(G)=2n-2d(G),$ implying that $G^\sigma$ is lower-optimal. \setlength{\baselineskip}{15pt} For ``necessity'', let $G^\sigma$ be lower-optimal. By Lemma~\ref{lem:4.6}, the oriented cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint, and each oriented cycle of $G^\sigma$ is odd or evenly-oriented. This implies (i) and (ii). We proceed by induction on the order $n$ of $G$ to prove (iii). The $n=1$ case is trivial. Suppose that (iii) holds for all lower-optimal oriented graphs of order smaller than $n$, and suppose $G^\sigma$ is a lower-optimal oriented graph of order $n\geqslant 2.$ If $T_G$ is an empty graph, then $G^\sigma$ is an odd cycle or an evenly-oriented cycle, in which case (iii) follows immediately. Now suppose $T_G$ has at least one edge; then $T_G$ has at least one pendant vertex, say $y$. As before, either $G$ contains $y$ as a pendant vertex, or $G$ contains a pendant cycle. \noindent {\bf{Case 1.}}\ $G$ has a pendant vertex $y$. Let $x$ be the neighbor of $y$ in $G$ and $H^\sigma=G^\sigma-x-y.$ By Lemma~\ref{lem:4.5}, $x$ is not on any cycle of $G$ and $H^\sigma$ is also lower-optimal. Applying the induction hypothesis to $H^\sigma$ yields \begin{eqnarray}\label{eq:3.24} \alpha(T_H)=\alpha(\Gamma_H)+d(H).
\end{eqnarray} Since $x$ does not lie on any cycle of $G$, Lemma~\ref{lem:3.9}(i) states that \begin{eqnarray}\label{eq:3.25} d(G)=d(H). \end{eqnarray} Note that $y$ is also a pendant vertex of $T_G$ (resp. $\Gamma_G$) adjacent to $x$ and $T_H=T_G-x-y$ (resp. $\Gamma_H=\Gamma_G-x-y$); then by Lemma~\ref{lem:4.2} we have \begin{eqnarray}\label{eq:3.26} \alpha(T_G)=\alpha(T_H)+1 \hbox{ and } \alpha(\Gamma_G)=\alpha(\Gamma_H)+1. \end{eqnarray} From (\ref{eq:3.24})-(\ref{eq:3.26}) we have $$ \alpha(T_G)=\alpha(\Gamma_G)+d(G), $$ as desired. \noindent {\bf{Case 2.}}\ $G$ has a pendant cycle $C_q$. Let $x$ be the unique vertex of $C_q$ of degree 3 and $H^\sigma=G^\sigma-C_q^\sigma$. By Lemma~\ref{lem:4.4}(iii), $H^\sigma$ is lower-optimal. Applying the induction hypothesis to $H^\sigma$ yields \begin{eqnarray}\label{eq:3.27} \alpha(T_H)=\alpha(\Gamma_H)+d(H). \end{eqnarray} From Lemma~\ref{lem:4.4}(ii) we have \begin{eqnarray}\label{eq:3.28} \alpha(G)=\alpha(H)+\left\lfloor\frac{q}{2}\right\rfloor. \end{eqnarray} Note that $\mathscr{C}_G=\mathscr{C}_H\cup\{C_q\}.$ Together with (\ref{eq:3.28}) and Lemma~\ref{lem:4.6}(iii) we have \begin{eqnarray} \alpha(T_G)&=&\alpha(H)+\left\lfloor\frac{q}{2}\right\rfloor-\sum_{C\in \mathscr{C}_G}\left\lfloor\frac{|V_C|}{2}\right\rfloor+d(G)\notag\\ &=&\alpha(H)-\sum_{C\in \mathscr{C}_H}\left\lfloor\frac{|V_C|}{2}\right\rfloor+d(G).\label{eq:3.29} \end{eqnarray} Since $H^\sigma$ is lower-optimal, Lemma~\ref{lem:4.6}(iii) states that \begin{eqnarray}\label{eq:3.30} \alpha(T_H)=\alpha(H)-\sum_{C\in \mathscr{C}_H}\left\lfloor\frac{|V_C|}{2}\right\rfloor+d(H). \end{eqnarray} As $C_q$ is a pendant cycle of $G$, we have \begin{eqnarray}\label{eq:3.31} d(G)=d(H)+1. \end{eqnarray} Combining (\ref{eq:3.29})-(\ref{eq:3.31}) yields \begin{eqnarray}\label{eq:3.32} \alpha(T_G)=\alpha(T_H)+1.
\end{eqnarray} Since $\Gamma_G\cong\Gamma_H$, the required equality $\alpha(T_G)=\alpha(\Gamma_G)+d(G)$ follows from (\ref{eq:3.27}) and (\ref{eq:3.31})-(\ref{eq:3.32}). This completes the proof.\qed \subsection{Theorems~\ref{theo:2.4}, \ref{theo:2.5} and \ref{theo:2.6}}\setcounter{equation}{0} The proofs of Theorems~\ref{theo:2.4}, \ref{theo:2.5} and \ref{theo:2.6} follow almost directly from Theorem~\ref{theo:2.1}, and are rather similar to each other in nature. Here we only provide the proof of Theorem~\ref{theo:2.4} and leave the rest to the readers. The \textit{join} of two disjoint graphs $G_1$ and $G_2$, denoted by $G_1\vee G_2$, is the graph obtained from $G_1\cup G_2$ by joining each vertex of $G_1$ to each vertex of $G_2$ by an edge. First we recall the following fact. \begin{lem}[\cite{29}] \label{lem:5.1} Let $G$ be a simple connected graph with $n$ vertices and $m$ edges. Then $$ \frac{1}{2}\left[(2m+n+1)-\sqrt{(2m+n+1)^2-4n^2}\right]\leqslant\alpha(G)\leqslant\sqrt{n(n-1)-2m+\frac{1}{4}}+\frac{1}{2}. $$ The equality on the right holds if and only if $G\cong K_{n-\alpha(G)}\vee \alpha(G)K_1$. \end{lem} \noindent{\bf Proof of Theorem~\ref{theo:2.4}.}\ Note that for a given simple connected graph $G$ with $|V_G|=n$ and $|E_G|=m$, we have $d(G)=m-n+1$. Together with (\ref{eq:1.1}) and Lemma~\ref{lem:5.1}, we have \begin{eqnarray*} sr(G^\sigma)+\alpha(G)\geqslant4n-2m-2-\alpha(G)\geqslant4n-2m-\sqrt{n(n-1)-2m+\frac{1}{4}}-\frac{5}{2} \end{eqnarray*} as stated in \eqref{eq:1.2}. Now we prove the necessary and sufficient conditions for equality in (\ref{eq:1.2}). \setlength{\baselineskip}{15pt} ``Sufficiency:'' First consider the case that $G\cong S_n$. If $n=1$, then (\ref{eq:1.2}) holds trivially. If $n\geqslant2$, then we have $sr(G^\sigma)=2$ and $\alpha(G)=n-1$. Together with the fact that $m=n-1$ we have that equality holds in (\ref{eq:1.2}). Now we consider the case $G\cong C_3$.
By Lemma~\ref{lem:3.6} we have $sr(G^\sigma)=2.$ Note that in this case $\alpha(G)=1$ and $m=n=3$. Hence we have equality in (\ref{eq:1.2}). ``Necessity:'' Combining Theorem~\ref{theo:2.1} and Lemma~\ref{lem:5.1} we have that the equality in (\ref{eq:1.2}) holds if and only if $G^\sigma$ is lower-optimal and $G\cong K_{n-\alpha(G)}\vee \alpha(G)K_1$. Note that the cycles (if any) of $G^\sigma$ are pairwise vertex-disjoint. Hence, either $n-\alpha(G)=1$, or $n-\alpha(G)=2$ and $\alpha(G)=1$, which implies $G\cong S_n$ or $G\cong C_3$. This completes the proof.\qed \section{Concluding remarks}\label{sec:4} It is well-known that the AutoGraphiX system determines classes of extremal or near-extremal graphs with a variable neighborhood search heuristic. As part of a larger study \cite{0001}, the AutographiX2 (AGX2) \cite{0002,0003,0004} system was used to study the following type of problem. For each pair of graph invariants $i_1(G)$ and $i_2(G)$, eight bounds of the following form were considered: \begin{equation}\label{eq:6.1} {\underline{b}} \le A\cdot i_1(G)\oplus B\cdot i_2(G)\le \overline{b}, \end{equation} where $\oplus$ denotes one of the operations $+,-,\times, /$, \, $A, B$ are two constants, while $\underline{b}$ and $\overline{b}$ are, respectively, lower and upper bounding functions. In this paper we considered the invariants $i_1(G)=sr(G^\sigma)$ and $i_2(G)=\alpha(G)$ where $G$ is the underlying graph of $G^\sigma.$ Theorem~\ref{theo:2.1} provides a sharp lower bound on $sr(G^\sigma)+2\alpha(G)$; whereas Theorems~\ref{theo:2.4}, \ref{theo:2.5} and \ref{theo:2.6} provide sharp lower bounds on $sr(G^\sigma)+\alpha(G), sr(G^\sigma)-\alpha(G)$ and $sr(G^\sigma)/\alpha(G)$.
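As a quick numerical sanity check of the equality case $G\cong S_n$ in Theorem~\ref{theo:2.4} (a sketch of ours, not part of the paper; the chosen orientation and the helper name are ours), one can compute the rank of the skew-adjacency matrix $S(G^\sigma)$, with entries $1$ and $-1$ for each arc, and compare both sides of \eqref{eq:1.2}:

```python
import numpy as np

def skew_rank_star(n):
    """Rank of the skew-adjacency matrix of the star S_n,
    oriented (arbitrarily) with every arc from the center to a leaf."""
    S = np.zeros((n, n))
    for leaf in range(1, n):
        S[0, leaf] = 1    # arc center -> leaf
        S[leaf, 0] = -1
    return np.linalg.matrix_rank(S)

for n in range(2, 10):
    m = n - 1        # S_n is a tree
    alpha = n - 1    # the n-1 leaves form a maximum independent set
    lhs = skew_rank_star(n) + alpha
    rhs = 4*n - 2*m - np.sqrt(n*(n - 1) - 2*m + 0.25) - 2.5
    assert abs(lhs - rhs) < 1e-9   # equality in (1.2): both sides equal n+1
```

Both sides reduce to $n+1$ because $n(n-1)-2(n-1)+\frac{1}{4}=\left(n-\frac{3}{2}\right)^2$.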
It is natural to extend this study by examining the following bounds: \begin{itemize} \item sharp upper bounds on $sr(G^\sigma)+2\alpha(G)$; \item sharp upper bounds on $sr(G^\sigma)+\alpha(G), sr(G^\sigma)-\alpha(G)$ and $sr(G^\sigma)/\alpha(G)$; \item sharp upper and lower bounds on $sr(G^\sigma)\cdot \alpha(G)$. \end{itemize} \end{document}
\begin{document} \title{Kempner-like harmonic series} \begin{abstract} Inspired by a question asked on the list {\tt mathfun}, we revisit {\em Kempner-like series}, i.e., harmonic sums $\sum' 1/n$ where the integers $n$ in the summation have ``restricted'' digits. First we give a short proof that $\lim_{k \to \infty}(\sum_{s_2(n) = k} 1/n) = 2 \log 2$, where $s_2(n)$ is the sum of the binary digits of the integer $n$. Then we propose two generalizations. One generalization addresses the case where $s_2(n)$ is replaced with $s_b(n)$, the sum of the digits of $n$ in base $b$: we prove that $\lim_{k \to \infty}\sum_{s_b(n) = k} 1/n = (2 \log b)/(b-1)$. The second generalization replaces the sum of digits in base $2$ with any block-counting function in base $2$, e.g., the function $a(n)$ counting the ---possibly overlapping--- occurrences of $11$ in the base-$2$ expansion of $n$, for which we obtain $\lim_{k \to \infty}\sum_{a(n) = k} 1/n = 4 \log 2$. \end{abstract} \section{Introduction} A nice, now classical, 1914 result of Kempner \cite{Kempner} states that the sum of the inverses of the integers whose expansion in base $10$ contains no occurrence of a given digit ($\neq 0$) converges. This fact might seem surprising at first sight, but looking, e.g., at all integers whose decimal expansion has no $9$ in it, one sees that larger and larger ranges of integers are excluded (think of all integers between $9 \cdot 10^k$ and $10^{k+1} - 1$). After the 1914 paper of Kempner \cite{Kempner} and the 1926 paper of Irwin \cite{Irwin}, several papers were devoted to generalizations or extensions of this result, as well as to numerical computations of the corresponding series. The reader can look at, e.g., \cite{Alexander, Baillie1, Baillie2, Behforooz, Boas, Craven, Farhi, Fischer, Gordon, Klove, KS, LP, MS, Nathanson1, Nathanson2, Nathanson3, SB, SLF, Wadhwa75, Wadhwa79, WW} and the references therein.
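A quick back-of-the-envelope computation (ours, not taken from the cited papers) makes the convergence plausible: there are $8\cdot 9^{d-1}$ integers with $d$ digits and no digit $9$, each at least $10^{d-1}$, so the full series is at most $8\sum_{d\geq 1}(9/10)^{d-1}=80$. A minimal sketch:

```python
# Partial sum of Kempner's series: 1/n over the integers below 10^6
# whose decimal expansion contains no digit 9.
partial = sum(1.0 / n for n in range(1, 10**6) if '9' not in str(n))

# The block-counting argument above bounds the full series by 80.
assert 0.0 < partial < 80.0
```

Pushing the cutoff higher, the partial sums approach the value of the series, approximately $22.92$.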
In particular the paper of Farhi \cite{Farhi} proves the somewhat unexpected result that, if $c_{j,10}(n)$ denotes the number of occurrences of a fixed digit $j \in \{ 0, 1, \dots, 9\}$ in the base-$10$ expansion of $n$, then $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ c_{j,10}(n) = k}} \frac{1}{n} = 10 \log 10. $$ Replacing base $10$ with base $2$ and letting $c_{1,2}(n)$ denote the number of $1$'s in the binary expansion of $n$, we could expect that $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ c_{1,2}(n) = k}} \frac{1}{n} = 2 \log 2. $$ The series on the lefthand side is precisely the one occurring in a recent question on {\tt mathfun}, which was forwarded to one of the authors by J. Shallit. Actually, in the post $c_{1,2}(n)$ is replaced with $s_2(n)$, which is of course the same function; the question was to determine the value of the limit when $k$ goes to infinity. First we give a short proof of the following theorem, which answers the {\tt mathfun} question. \begin{theorem}\label{th:base2} The following equality holds: \begin{equation}\label{base2} \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ s_2(n) = k}} \frac{1}{n} = 2 \log 2. \end{equation} \end{theorem} Then we investigate two natural generalizations of this result. In the first one we replace the sum of binary digits with the sum of $b$-ary digits. \begin{theorem}\label{th:sum-digits-base-b} Let $b \geq 2$ be an integer. Let $s_b(n)$ be the sum of digits of $n$ in base $b$. Then \begin{equation}\label{base-b} \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ s_b(n) = k}} \frac{1}{n} = \frac{2 \log b}{b-1}\cdot \end{equation} \end{theorem} In the second generalization we replace the sum of digits in base $2$, i.e., the number of $1$'s in base $2$, with $a_w(n)$, the number of occurrences of a word $w$ (a fixed string of consecutive digits) in the binary expansion of the integer $n$. \begin{theorem}\label{th:general-base2} Let $w$ be a binary word with $r$ letters.
Then \begin{equation}\label{general-base2} \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n} = 2^r \log 2. \end{equation} \end{theorem} \begin{remark} We have essentially limited the references to ``missing digits'' given before Theorem~\ref{th:base2} to harmonic series and Dirichlet series whose summation indexes ``miss'' digits or combinations of digits. Integers with missing digits in a given base are called ``ellipsephic''. They occur in several papers (e.g., \cite{Aloui, AMM, Col2009, Biggs2021, Biggs2023, CDGJLM}). Nicholas Yu indicates in \cite[Footnote, p.~6]{Hu-N}: \begin{quote} ``This word is a translation of the French {\em ellips\'ephique}, which Mauduit coined as a port-\linebreak manteau of the Greek words \textgreek{>'elleiy\v{i}s} ({\em \'elleipsis}, ``ellipisis'') and \textgreek{yhf'io} ({\em psif{\'\i}o}, ``digit''). We \linebreak prescribe the English pronunciation [\textipa{""{\i}.l{\i}p"sEf.{\i}k}].'' \end{quote} \noindent The word in French was proposed by Christian Mauduit. Its origin is given by Sylvain Col in \cite[p.~12]{Col}: \begin{quote} ``[... les progressions arithm\'etiques]. C. Mauduit les a baptis\'es entiers {\em ellips\'ephiques} en r\'e-\linebreak f\'erence \`a la superposition de deux mots grecs, \textgreek{elliptikos} \ (litt\'eralement {\em elliptique}) et \linebreak \textgreek{ynhon} (litt\'eralement {\em petit caillou poli par l'eau}~; ces cailloux \'etaient notamment utilis\'es \linebreak pour voter et r\'ealiser les calculs) et signifie {\em qui a des chiffres manquants}. [Parmi les...]'' \linebreak \end{quote} \noindent Note that there were some misprints in the Greek words in the original quotations above. Namely ``\textgreek{>'elleiy\v{i}s}'', ``\textgreek{yhf'io}'', `` \textgreek{elliptikos}'', and ``\textgreek{ynhon}'' should be replaced respectively with ``\textgreek{>'elleiyis}'', ``\textgreek{y{\~{h}}fos}'', ``\textgreek{>elleiptik\'os}'', and ``\textgreek{y{\~{h}}fos}''. 
\end{remark} \section{A short proof of Theorem~\ref{th:base2}} The fact that the series $A_k := \displaystyle\sum_{\substack{n \geq 1 \\ s_2(n) = k}} \frac{1}{n}$ converges can be easily proved by a counting argument (adapting, e.g., a proof given in \cite{Irwin}) or by noting that $s_2(n)$ is the number of $1$'s in the binary expansion of $n$ and using the proof of Lemma~1 in \cite{Allouche-Shallit1989}. Let us suppose $k \geq 2$. Splitting the sum into even and odd indices, and recalling that $s_2(2n) = s_2(n)$ and $s_2(2n+1) = s_2(n) + 1$, we obtain: $$ A_k = \sum_{\substack{n \geq 1 \\ s_2(2n) = k}} \frac{1}{2n} + \sum_{\substack{n \geq 0 \\ s_2(2n+1) = k}} \frac{1}{2n+1} = \sum_{\substack{n \geq 1 \\ s_2(n) = k}} \frac{1}{2n} + \sum_{\substack{n \geq 0 \\ s_2(n) = k-1}} \frac{1}{2n+1} = \frac{1}{2} A_k + \sum_{\substack{n \geq 0 \\ s_2(n) = k-1}} \frac{1}{2n+1} $$ which we rewrite as $$ A_k = 2 \sum_{\substack{n \geq 0 \\ s_2(n) = k-1}} \frac{1}{2n+1} = A_{k-1} + B_k $$ where $B_k := 2\displaystyle\sum_{\substack{n \geq 1 \\ s_2(n) = k-1}} \left(\frac{1}{2n+1} - \frac{1}{2n}\right)$. Thus, we have, for $k \geq 2$, $$ \begin{array}{llll} A_k &- \ &A_{k-1} &= \ B_k \\ A_{k-1} &- &A_{k-2} &= \ B_{k-1} \\ \ldots \\ A_2 &- &A_1 &= \ B_2. \end{array} $$ Hence, summing these equalities, $$ A_k - A_1 = \sum_{2 \leq j \leq k} B_j = 2\sum_{2 \leq j \leq k} \sum_{\substack{n \geq 1 \\ s_2(n) = j-1}} \left(\frac{1}{2n+1} - \frac{1}{2n}\right) $$ i.e., $$ A_k - A_1 = 2\sum_{\substack{n \geq 1 \\ s_2(n) \leq k-1}} \left(\frac{1}{2n+1} - \frac{1}{2n}\right). $$ The righthand term clearly tends to $2\displaystyle\sum_{n \geq 1} \left(\frac{1}{2n+1} - \frac{1}{2n}\right)$ when $k$ tends to infinity, thus the lefthand term has a limit and $$ \lim_{k \to \infty} A_k = A_1 - 2 \sum_{n \geq 1} \left(\frac{1}{2n} - \frac{1}{2n+1}\right) $$ Now $$ A_1 = \sum_{\substack{n \geq 1 \\ s_2(n) = 1}} \frac{1}{n} = \sum_{j \geq 0} \frac{1}{2^j} = 2. 
$$ Hence $$ \lim_{k \to \infty} A_k = 2 \left(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} \dots\right) = 2 \log 2. \ \ \ \ \ \Box $$ Before proving our first generalization (Theorem~\ref{th:sum-digits-base-b}), we will prove a general result on convergence of sequences and a corollary which will be useful. \section{A general result on convergence of sequences} \begin{theorem}\label{non-classical} Let $P(X) = \sum_{0 \leq k \leq d} a_k X^{d-k}$ be a polynomial having all its roots in ${\mathbb C}$ of modulus $< 1$. For a sequence $(u_n)_{n \geq 0}$, define the sequence $(u^{(P)}_n)_{n \geq d}$ by $u^{(P)}_n := \sum_{0 \leq k \leq d} a_k u_{n-k}$. Then the sequence $(u_n)_{n \geq 0}$ tends to $0$ if and only if the sequence $(u^{(P)}_n)_{n \geq d}$ tends to $0$. \end{theorem} \proof Since one direction is trivial, we only prove that if $(u^{(P)}_n)_{n \geq d}$ tends to $0$, then $(u_n)_{n \geq 0}$ tends to $0$. First we look at the case $d=1$. Suppose that $z$ is a complex number with $|z| < 1$. We prove that if the sequence $(w_n)_{n \geq 1}$ tends to $0$, where $w_n := u_n - z u_{n-1}$, then the sequence $(u_n)_{n \geq 0}$ tends to $0$. Namely, if $(w_n)_{n \geq 1}$ tends to $0$, we have $$ \forall \varepsilon > 0, \ \exists n_0 \geq 1, \text{\ such that for \ } n \geq n_0 \text{\ one has \ } |w_n| \leq (1-|z|) \varepsilon. $$ But, for all $p \in {\mathbb N}$, one has by an easy induction $$ u_{n_0+p} = \sum_{0 \leq k \leq p} z^k w_{n_0+p-k} + z^{p+1} u_{n_0-1}. $$ Hence, for $p$ larger than some $p_0$, one has $|u_{n_0+p}| \leq \varepsilon + |z|^{p+1} |u_{n_0-1}| \leq 2 \varepsilon$. Now we can address the general case where $P(X) = \sum_{0 \leq k \leq d} a_k X^{d-k} = \prod_{1 \leq j \leq d} (X - z_j)$, with $|z_j| < 1$ for all $j$. Defining $\varphi_j((u_n)_n) := ((u_n - z_j u_{n-1})_n)$, it is easy to see that $(u^{(P)}_n)_n = (\varphi_d \circ \varphi_{d-1} \circ \dots \circ \varphi_1)((u_n)_n)$. 
Thus it suffices to apply $d$ times the case $d=1$ above. \endpf \begin{corollary}\label{the-corollary} Let $b$ be an integer $> 1$. Then $$ \lim_{n \to \infty} ((b-1)u_n + (b-2)u_{n-1} + \dots + 2 u_{n-b+3} + u_{n-b+2}) = \ell \text{ \ if and only if \ } \lim_{n \to \infty} u_n = \dfrac{2\ell}{b(b-1)}\cdot $$ \end{corollary} \proof Of course one can suppose $b > 2$. Again one direction is trivial. For the other direction, up to replacing $(u_n)_{n \geq 0}$ with $(u'_n)_{n \geq 0}$, where $u'_n := u_n - 2\ell/(b(b-1))$, we can suppose that $\ell = 0$. Now, in order to apply Theorem~\ref{non-classical} with $P(X) = (b-1)X^{b-2} + (b-2)X^{b-3} + \dots + 2X + 1$, it suffices to prove that all the (complex) roots of this polynomial have modulus $< 1$. We note that $(1-X) P(X) = 1 + X + X^2 + \dots + X^{b-2} - (b-1) X^{b-1}$. Hence if $P(z) = 0$ for some $z$ with $|z| \geq 1$, then $(b-1) |z|^{b-1} \leq 1 + |z| + \dots + |z|^{b-2} \leq (b-1) |z|^{b-2}$, hence $|z| = 1$. Furthermore equality in the triangle inequality implies here that $z$ is real and non-negative, hence equal to $1$. Since $1$ is not a root of $P$, this gives the desired contradiction: hence, necessarily $|z| < 1$. \endpf \section{A first generalization. Proof of Theorem~\ref{th:sum-digits-base-b}} We begin with a lemma. \begin{lemma}\label{harmonic} We have the following properties. \begin{itemize} \item[{\rm (i)}] The sum $\sum_{s_b(n) = k} \frac{1}{n} $ is finite. \item[{\rm (ii)}] For any $j \in \{0, 1, \dots, b-1\}$, $$ \lim_{k \to \infty} \sum_{s_b(n) = k} \left(\frac{1}{bn+j} - \frac{1}{bn}\right) = 0. $$ \item[{\rm (iii)}] For $n \geq 1$, let $H_n = 1 + \frac{1}{2} + \dots + \frac{1}{n}$ be the $n$-th harmonic number. Then, for $b > 1$, $$ \lim_{k \to \infty} \sum_{1 \leq s_b(n) \leq k} \left(\sum_{0 \leq j \leq b-1}\frac{1}{bn+j} - \frac{1}{n}\right) = \log b - H_{b-1}.
$$ \end{itemize} \end{lemma} \proof To prove (i) we note that the number of nonnegative integer solutions of $x_1 + x_2 + \dots + x_j = k$ is equal to $\displaystyle {j+k-1 \choose k}$, hence $$ \sum_{\substack{b^{j-1} \leq n \leq b^j \\ s_b(n) = k}} \frac{1}{n} \leq \frac{{j+k-1 \choose k}}{b^{j-1}} \sim \frac{j^k}{k! b^{j-1}}\cdot $$ The convergence of the series $\displaystyle\sum_{j \geq 1} \frac{j^k}{k! b^{j-1}}$ implies the existence of $\displaystyle \sum_{s_b(n) = k} \frac{1}{n}\cdot$ In order to prove (ii), we note that $$ \sum_{0 \leq j \leq b-1} \frac{1}{bn+j} - \frac{1}{n} = \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right) \text{\ \ is a sum of nonpositive terms.} $$ Hence (ii) holds if and only if $\displaystyle \lim_{k \to \infty} \sum_{s_b(n) = k} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right)=0$. But $$ \sum_{s_b(n) = k} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right) = \sum_{s_b(n) \leq k} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right) \ - \sum_{s_b(n) \leq k-1} \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right). $$ Thus, it suffices to prove (iii). Finally we prove (iii). Define $v_n := \displaystyle \sum_{0 \leq j \leq b-1} \frac{1}{bn+j} - \frac{1}{n} = \sum_{0 \leq j \leq b-1}\left(\frac{1}{bn+j} - \frac{1}{bn}\right).$ We have $$ \sum_{1 \leq n \leq N} v_n = H_{bN+b-1} - H_{b-1} - H_N $$ which tends to $\log b - H_{b-1}$ when $N$ tends to infinity. Now, $v_n \leq 0$, and, if $n \leq b^k - 1$, then $s_b(n) \leq k (b - 1)$. Hence $$ \sum_{1 \leq n \leq b^k-1} v_n \geq \sum_{1 \leq s_b(n) \leq k(b-1)} v_n \geq \ \sum_{n \geq 1} v_n = \log b - H_{b-1}. $$ Since $\displaystyle \sum_{1 \leq n \leq b^k-1} v_n$ tends to $\log b - H_{b-1}$, we obtain the desired result. \endpf \noindent {\it Proof of Theorem~\ref{th:sum-digits-base-b}} Define $\displaystyle u_k := \sum_{s_b(n) = k} \frac{1}{n}$.
Then, splitting the integers according to their value modulo $b$, we have $$ u_k = \sum_{s_b(bn) = k} \frac{1}{bn} + \sum_{1 \leq j \leq b-1}\sum_{\substack{n \geq 0 \\ s_b(bn+j) = k}} \frac{1}{bn+j} $$ thus, using that $s_b(bn+j) = s_b(n) + j$ for $j \in \{0,1, \dots, b-1\}$, $$ u_k = \frac{1}{b} u_k + \sum_{1 \leq j \leq b-1}\sum_{\substack{n \geq 0 \\ s_b(n) = k-j}} \frac{1}{bn+j}\cdot $$ Hence $$ \left(1-\frac1b\right)(u_1+\dots+u_k) = \sum_{j=1}^{b-1}\sum_{\substack{n \geq 0 \\ s_b(n) \leq k-j}}\frac {1}{bn+j}\cdot $$ Thus, by subtracting $\left(1-\frac1b\right)(u_1+\dots+u_{k-1})$ with the definition of the $u_j$'s, $$ \left(1-\frac{1}{b}\right) u_k =\sum_{j=1}^{b-1} \sum_{\substack{n \geq 0 \\ s_b(n) \leq k-j}} \frac{1}{bn+j} - \left(1-\frac{1}{b}\right)\sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n}\cdot $$ Hence \begin{equation}\label{key} \left(1-\frac{1}{b}\right) u_k = H_{b-1} + \sum_{j=1}^{b-1} \sum_{\substack{n \geq 1 \\ s_b(n) \leq k-j}} \frac{1}{bn+j} - \left(1-\frac{1}{b}\right)\sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n}\cdot \end{equation} Let us define as in Lemma~\ref{harmonic}(iii) $w_{k-1}$ by $$ w_{k-1} := \sum_{1 \leq s_b(n) \leq k-1} \left(\sum_{0 \leq j \leq b-1}\frac {1}{bn+j}-\frac{1}{n}\right) $$ We have $$ \begin{array}{lll} w_{k-1} &=& \displaystyle\sum_{1 \leq s_b(n) \leq k-1} \ \ \sum_{1 \leq j \leq b-1} \frac {1}{bn+j} - \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n} \\ &=& \ \ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{1 \leq s_b(n) \leq k-1} \frac {1}{bn+j} - \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n} \\ &=& \ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{1 \leq s_b(n) \leq k-j} \frac {1}{bn+j} + \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{k-j+1 \leq s_b(n) \leq k-1} \frac {1}{bn+j} - \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n} \\ &=& \ \displaystyle\sum_{1 \leq j \leq b-1} \ \ \sum_{1 \leq s_b(n) \leq k-j} \frac {1}{bn+j} + \displaystyle\sum_{2 
\leq j \leq b-1} \ \ \sum_{k-j+1 \leq s_b(n) \leq k-1} \frac {1}{bn+j} - \left(1 - \frac{1}{b}\right) \sum_{1 \leq s_b(n) \leq k-1} \frac{1}{n}\cdot \\ \end{array} $$ Hence we can write, using Equation~(\ref{key}), $$ \left(1 - \frac{1}{b}\right) u_k = H_{b-1}+w_{k-1}-R_k $$ with $$ R_k = \sum_{2 \leq j \leq b-1} \ \ \sum_{k-j+1 \leq s_b(n) \leq k-1} \frac {1}{bn+j}. $$ Then, when $k \to \infty$, we can write, by using Lemma~\ref{harmonic}(ii), $$ \begin{array}{lll} R_k &=& \displaystyle\sum_{2 \leq j \leq b-1} \ \sum_{1 \leq i \leq j-1} \ \sum_{s_b(n)= k-i} \frac{1}{bn+j} \\ &=& \displaystyle\frac{1}{b} \ \sum_{2 \leq j \leq b-1}\sum_{1 \leq i \leq j-1} u_{k-i} + o(1) \\ &=& \displaystyle\frac{1}{b} \sum_{1 \leq i \leq b-2} (b-i-1) u_{k-i} + o(1). \end{array} $$ This gives $$ H_{b-1} + w_{k-1} = \left(1 - \frac{1}{b}\right) u_k + R_k = \left(1 - \frac{1}{b}\right) u_k + \frac{1}{b} \sum_{1 \leq i \leq b-2} (b-i-1) u_{k-i} + o(1) $$ but, from Lemma~\ref{harmonic}(iii), $H_{b-1} + w_{k-1} $ tends to $\log b$. Hence $$ \left(1 - \frac{1}{b}\right) u_k + \frac{1}{b} \sum_{1 \leq i \leq b-2} (b-i-1) u_{k-i} \to \log b $$ or, equivalently, $$ \sum_{0 \leq i \leq b-2} (b-i-1) u_{k-i} \to b \log b. $$ Applying Corollary~\ref{the-corollary} yields $$ u_k \to \frac{2\log b}{b-1}\cdot \ \ \ \Box $$ \section{A second generalization. Proof of Theorem~\ref{th:general-base2}} Comparing the equalities (see, e.g., \cite{Allouche-Shallit1990} and the references therein for the second equality) $$ \lim_{k \to \infty} \sum_{s_2(n) = k} \frac{1}{n} = 2 \log 2 \ \ \text{and} \ \ \sum_{n \geq 1} \frac{s_2(n)}{n(n+1)} = 2 \log 2 $$ it is tempting to prove {\it directly} that the two left-hand quantities are equal. We did not succeed, but we found that a method that proves the second equality can be used to prove the first one, thus yielding a generalization to all base-$2$ pattern counting sequences. Let $w$ be a word of $0$'s and $1$'s.
We let $a_w(n)$ denote the number of occurrences of $w$ in the binary expansion of the integer $n$. As usual, if $w$ begins with $0$ and is not of the form $0^{\ell}$ ---the word consisting of $\ell$ digits equal to $0$--- we assume that the binary expansion of $n$ begins with an arbitrarily long prefix of $0$'s. And if $w = 0^{\ell}$, we use the classical binary expansion of $n$ beginning with $1$. (For example, taking the respective binary expansions of $5$ and $8$, namely $5 = (0...0)101$ and $8 = 1000$, one has $a_{01}(5) = a_{01}(0...0101) = 2$ and $a_{000}(8) = a_{000}(1000) = 1$.) Also recall that $|w|$ is the length (i.e., the number of letters) of the word $w$. First we prove the following lemma. \begin{lemma}\label{general-lemma} Define $a_w(n)$ as above. Let $(f(n))_{n \geq 1}$ be a sequence of positive reals such that $\sum_n f(n) < +\infty$. Let $c_k := \displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} f(n)$ \ and \ $d_k := \displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n}$. Then \newline \begin{itemize} \item[\rm (i)] One has $d_k < +\infty$. \item[\rm (ii)] The series $\displaystyle\sum_{k \geq 0} c_k$ converges. \item[\rm (iii)] The sequence $(c_k)_{k \geq 0}$ tends to $0$ when $k$ tends to infinity. \end{itemize} \end{lemma} \proof The first assertion can be found, e.g., in the proof of Lemma~1 in \cite{Allouche-Shallit1989}. The third assertion is a consequence of the second one. Finally, to prove that $\sum c_k$ converges, we write (note that all terms are positive): $$ \sum_{k \geq 0} c_k = \sum_{k \geq 0}\sum_{\substack{n \geq 1 \\ a_w(n) = k}} f(n) = \sum_{n \geq 1} f(n) < +\infty. 
\ \ \ \ \ \Box $$ \noindent {\it Proof of Theorem~\ref{th:general-base2}} The idea for proving this theorem is to compare the quantity $\displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n}$ with a series $\displaystyle\sum_{\substack{n \geq 1 \\ a_w(n) = k}} g_w(n)$ whose sum is known and converges to some limit $A_w$ for $k \to \infty$. If furthermore $g_w(n) - 1/(2^{|w|}n) = {\mathcal O}_w(1/n^2)$, then Lemma~\ref{general-lemma} will imply the existence and the value of $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n} = 2^{|w|} A_w. $$ The choice of the function $g$ will use \cite{Allouche-Shallit1989} where the authors prove that there exists a rational function $b_w$ such that for all $k \geq 0$, one has $$ \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \log(b_w(n)) = - \log 2 \ \ \ \text{(independent of $k$)}. $$ The paper \cite{Allouche-Shallit1989} explains how to construct $b_w$. This construction is given by a more explicit recursive algorithm in \cite{Allouche-Hajnal-Shallit}. Taking $g_w$ defined by $g_w = - \log(b_w)$, it will suffice to prove that $- \log(b_w(n)) - 1/(2^{|w|}n) = {\mathcal O}_w(1/n^2)$. 
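Before carrying out this estimate, the limit being targeted can be illustrated numerically in the simplest case $w = 1$, where $a_w(n) = s_2(n)$ and the predicted limit is $2\log 2$. The following Python sketch is no part of the proof: the cutoff $2^{21}$ is an arbitrary choice, and since convergence in $k$ is slow, the figures obtained are only indicative.

```python
from math import log

# Accumulate the truncated sums A_k = sum of 1/n over n < N with
# binary digit sum s_2(n) = k, in a single pass.  The cutoff N is an
# arbitrary choice, so all values below are only approximations.
N = 2 ** 21
A = {}
for n in range(1, N):
    k = bin(n).count("1")          # k = s_2(n), the binary digit sum
    A[k] = A.get(k, 0.0) + 1.0 / n

# A_1 is the geometric series sum_{j >= 0} 2^{-j} = 2, while for
# moderate k the value of A_k is already close to 2*log(2) ~ 1.3863.
```

One observes that the truncated $A_k$ decrease with $k$ toward a value close to $2\log 2$, in agreement with Theorem~\ref{th:base2}.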
We have that, if $w = w_1 w_2 \dots w_m$, then $\log(b_w(n))$ is given in \cite{Allouche-Hajnal-Shallit} by $$ \log(b_w(n)) = Q_w(w_1 w_2 \dots w_{m-1}, w_m, n), $$ where, for $z = z_1 \dots z_r$ and $t$ two binary words, $Q_w$ is recursively defined by: $$ Q_w(z,t,n) := \begin{cases} \log(2^{|t|} n + \nu(t)) - \log(2^{|t|} n + \nu(t) + 1) \ \ &\text{if $r = 0$}, \\ Q_w(\varepsilon, t, n) - Q_w(\varepsilon, \overline{z_r} t, n) &\text{if $r=1$ and $z$ is a suffix of $w$}, \\ Q_w(z_2 z_3 \dots z_r, t, n) - Q_w(\overline{z_1} z_2 \dots z_{r-1}, z_r t, n) &\text{if $r \geq 2$ and $z$ is a suffix of $w$}, \\ Q_w(z_1 z_2 \dots z_{r-1}, z_r t, n) &\text{if $r \geq 1$ and $z$ is not a suffix of $w$}, \end{cases} $$ where, for $x \in\{0, 1\}$, one defines $\overline{x} := 1 - x$, $\nu(t)$ is the value of the word $t$ when interpreted as a binary expansion, and $\varepsilon$ is the empty word (the word with no letter). Also recall that $|t|$ is the length (i.e., the number of letters) of the word $t$. The behavior of $Q_w(z,t,n)$ when $n$ tends to infinity can be determined by induction on $|z|$. We claim that, for all $t$, $$ Q_w(z,t,n) = - \frac{1}{2^{|t| + |z|} n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot $$ If $|z| = 0$, i.e., $z = \varepsilon$, we have $$ Q_w(z,t,n) = \log(2^{|t|}n + \nu(t)) - \log(2^{|t|}n + \nu(t) + 1) = - \frac{1}{2^{|t|}n} + {\mathcal O}_t\left(\frac{1}{n^2}\right) = - \frac{1}{2^{|t|+|z|}n} + {\mathcal O}_t\left(\frac{1}{n^2}\right)\cdot $$ Suppose that the property holds for $|z|=r-1$ for some $r \geq 1$. Let us prove that it holds for $|z|=r$. Let $z = z_1 z_2 \dots z_r$.
If $|z| = r = 1$ and $z$ is a suffix of $w$, then, using the case $|z| = 0$ above, $$ \begin{array}{lll}Q_w(z,t,n) = Q_w(\varepsilon, t, n) - Q_w(\varepsilon, \overline{z_r} t, n) &=& - \displaystyle\frac{1}{2^{|t|}n} + \frac{1}{2^{|t|+1}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right) = - \frac{1}{2^{|t|+1}n} + {\mathcal O}_t\left(\frac{1}{n^2}\right) \\ &=& - \displaystyle\frac{1}{2^{|t|+|z|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot \\ \end{array} $$ If $r \geq 2$ and $z$ is a suffix of $w$, then, using the induction hypothesis $$ \begin{array}{lll} Q_w(z,t,n) &=& \displaystyle Q_w(z_2 z_3 \dots z_r, t, n) - Q_w(\overline{z_1} z_2 \dots z_{r-1}, z_r t, n) = - \frac{1}{2^{r-1+|t|}n} + \frac{1}{2^{r-1+|t|+1} n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\\ &=& - \displaystyle\frac{1}{2^{r+|t|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right) = -\frac{1}{2^{|z|+|t|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot \end{array} $$ If $r \geq 1$ and $z$ is not a suffix of $w$, then, using the induction hypothesis $$ Q_w(z,t,n) = Q_w(z_1 z_2 \dots z_{r-1}, z_r t, n) = - \frac{1}{2^{r-1+|t|+1}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right) = -\frac{1}{2^{|z|+|t|}n} + {\mathcal O}_{z,t}\left(\frac{1}{n^2}\right)\cdot $$ Thus, we obtain $$ g_w(n) = -\log(b_w(n)) = - Q_w(w_1 w_2 \dots w_{m-1}, w_m, n) = \frac{1}{2^{|w|} n} + {\mathcal O}_w\left(\frac{1}{n^2}\right)\cdot \ \ \ \ \ \Box $$ \begin{example} If $w = 11$ (note that $(-1)^{a_w(n)}$ is a classical sequence, called the Golay-Shapiro or Rudin-Shapiro sequence), then $$ \lim_{k \to \infty} \sum_{\substack{n \geq 1 \\ a_w(n) = k}} \frac{1}{n} = 4 \log 2. $$ \end{example} \begin{remark} A similar study could probably be undertaken for any integer base $b \geq 2$, combining ideas in \cite{Allouche-Shallit1989, Allouche-Hajnal-Shallit, Hu-Y}.
\end{remark} \noindent {\bf Acknowledgments} We warmly thank Jeff Shallit and Manon Stipulanti for discussions about Kempner series, and B. Morin for sharing her expertise in ancient Greek. \end{document}
\begin{document} \title[varieties of complexes]{Varieties of complexes of fixed rank} \author{Darmajid} \author{Bernt Tore Jensen} \begin{abstract} We study varieties of complexes of projective modules with fixed ranks, and relate these varieties to the varieties of their homologies. We show that for an algebra of global dimension at most two, these two varieties are related by a pair of morphisms which are smooth with irreducible fibres.\\ \\ {\scriptsize \textit{Keywords and phrases}: varieties of complexes of projective modules, algebra of global dimension at most two, homologies of complexes, smooth morphisms.} \end{abstract} \maketitle \section*{Introduction} Let $\Bbbk$ be an algebraically closed field. For a finite dimensional $\Bbbk$-algebra $A$, Huisgen-Zimmermann and Saorin \cite{SH-Z01} define an affine variety which parameterizes bounded complexes of $A$-modules. Jensen, Su, and Zimmermann \cite{JSZ05} use varieties of complexes of projective modules to study degeneration in derived categories. These varieties, denoted by $comproj_{\mathbf{d}}^{A}$, are defined as the affine variety of all differentials $( \partial _{i})_{i\in \mathbb{Z}}$ for a fixed choice of projective modules with multiplicities of indecomposable projective summands encoded by a sequence of vectors $\mathbf{d}=(\mathbf{d}_i)_{i\in \mathbb{Z}}$, called a dimension array. In \cite{JS05} Jensen and Su use these to show that there is a well-defined notion of type of singularity in the derived category of a finite dimensional algebra, and prove that this type coincides with the type of singularity for the homology for hereditary algebras. The purpose of this paper is to study geometric properties of varieties of complexes of projective modules with fixed ranks and to relate properties of these varieties to varieties of representations via the homology functor. Let $\Lambda$ denote a $\Bbbk$-algebra of global dimension at most two.
We define $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda}$ to be the subset of $comproj_{\mathbf{d}}^{\Lambda}$ where the differential $\partial=(\partial_i)_{i\in \mathbb{Z}}$ has fixed ranks encoded in the sequence of dimension vectors $\mathbf{r}$. Rank is a lower semi-continuous function on $comproj_{\mathbf{d}}^{\Lambda}$, and so we have a disjoint union of locally closed subsets $$comproj_{\mathbf{d}}^{\Lambda}= \bigcup_{\mathbf{r}}comproj_{\mathbf{d},\mathbf{r}}^{\Lambda}.$$ We construct a variety $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$ whose points are complexes together with their homology, and two morphisms $\pi $ and $\rho $ $$\xymatrix{& comhom_{\mathbf{d},\mathbf{r}}^{\Lambda} \ar[dl]_\pi \ar[dr]^\rho & \\ comproj_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda} && rep_{\mathbf{h}}^{\Lambda}}$$ such that $\rho \left( \pi ^{-1}\left(X\right) \right)$ is the orbit of the homology $H^{\ast }\left( X\right) $ for all $X\in comproj_{\mathbf{d},\mathbf{r}}^{\Lambda}$ and $rep_{\mathbf{h}}^{\Lambda}$ is the variety of $\mathbb{Z}$-graded $\Lambda$-modules with dimension vectors given by the sequence $\mathbf{h}$. Our main result is the following. \begin{theorem2} The morphisms $\pi$ and $\rho $ are smooth with irreducible rational fibres. \end{theorem2} We prove the theorem by showing that $\rho$ and $\pi$ are compositions of open immersions, vector bundles and $G$-bundles for irreducible algebraic groups $G$. We have the following application of our result. \begin{corollary2} There is a bijection between the set of irreducible components of $comproj_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda}$ and the irreducible components of $im(\rho)$. \end{corollary2} We also have the following special case. \begin{corollary2} If $\Lambda$ is hereditary, then $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda}$ is irreducible, smooth and rational. \end{corollary2} The remainder of this paper is organized as follows.
In Section 1 we recall basic definitions on representations of quivers and the definition of the variety $comproj_{\mathbf{d}}^{\Lambda}$. In Section 2 we give the definition of $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$ and the morphisms $\pi$ and $\rho$. The smoothness of $\pi$ and $\rho$ is proved in Section 3, and in Section 4 we discuss some applications and examples. \section{Preliminaries} In this section we recall some facts on varieties of representations of quivers and varieties of complexes. For details, we refer to \cite{ASS06}, \cite{R86}, \cite{JS05} and \cite{JSZ05}. Note that the assumption that the algebra has global dimension at most two is not needed in the rest of this section. \subsection{Representations of Quivers} A quiver $Q$ consists of a set of vertices $Q_0$, a set of arrows $Q_1$ and two maps $s,e:Q_{1}\rightarrow Q_{0}$ which assign to each arrow $\alpha \in Q_{1}$ its source and end vertex, respectively. The path algebra $\Bbbk Q$ has basis equal to the set of paths in $Q$, and multiplication of two paths $\beta$ and $\alpha$ is the composed path $\beta \alpha$, if $\alpha$ ends where $\beta$ starts, and zero otherwise. Any finite dimensional algebra $\Lambda$ is Morita equivalent to an algebra $\Bbbk Q/\mathcal{I}$ for an admissible ideal $\mathcal{I}$. For simplicity we assume that $\Lambda=\Bbbk Q/\mathcal{I}$. A representation $V=\left( V_{a},V_{\alpha }\right)_{a\in Q_0,\alpha\in Q_1} $ of a quiver $Q$ consists of a $Q_0$-graded vector space, i.e. a family of vector spaces $V_{a}$ indexed by the vertices $a\in Q_{0}$, together with a family of linear maps $V_{\alpha }:V_{s\left( \alpha \right) }\rightarrow V_{e\left( \alpha \right) }$ indexed by the arrows $\alpha \in Q_{1}$. The dimension vector $dim\left( V\right)\in \mathbb{N}^{Q_0}$ of $V$ is the vector with components $dim_{\Bbbk }\left( V_a\right)$. A representation of $\left( Q,\mathcal{I}\right) $ is a representation with maps that satisfy the relations in $\mathcal{I}$.
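To make the definitions above concrete, a representation can be stored as one matrix per arrow, and the condition of satisfying a relation becomes a matrix identity. The following Python sketch is a toy illustration, not taken from the paper: the quiver $1\to 2\to 3$ with arrows $\alpha,\beta$, the admissible relation $\beta\alpha$, and the matrices are all invented for the example.

```python
# Toy quiver: 1 --alpha--> 2 --beta--> 3, with the relation beta*alpha.
# A representation assigns V_alpha : k^2 -> k^2 and V_beta : k^2 -> k^1;
# matrices are nested lists, multiplied by a naive matmul.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def satisfies_relation(V_beta, V_alpha):
    # the representation satisfies the relation beta*alpha iff the
    # composite linear map V_beta o V_alpha is zero
    P = matmul(V_beta, V_alpha)
    return all(x == 0 for row in P for x in row)

# An invented representation with dimension vector (2, 2, 1):
V_alpha = [[1, 0], [0, 0]]
V_beta  = [[0, 1]]
```

Here `satisfies_relation(V_beta, V_alpha)` holds since the image of $V_{\alpha}$ lies in the kernel of $V_{\beta}$.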
A homomorphism $\varphi :V\rightarrow W$ between two representations $V$ and $W$ is a family of $\Bbbk$-linear maps $\left( \varphi _{a}:V_{a}\rightarrow W_{a}\right) _{a\in Q_{0}}$ such that for any arrow $\alpha :a\rightarrow b$ the equality $ W_{\alpha }\circ \varphi _{a}=\varphi _{b}\circ V_{\alpha }$ holds. The vector space of homomorphisms, denoted by $Hom_{\Lambda}(V,W)$, is therefore a subspace of the space of $Q_0$-graded maps $Hom(V,W)=\prod_{a\in Q_0}Hom(V_a,W_a)$. By the rank $rank(f)$ of a homomorphism $f$ we mean the dimension vector of the image $im(f)$. The kernel of $f$ is denoted by $ker(f)$. The category of representations is equivalent to the category of finite-dimensional left $\Lambda$-modules. We use this equivalence to identify modules with representations, and vice versa. Given a dimension vector $\mathbf{d}=(d_a)_{a\in Q_0}$, we denote by $rep_{\mathbf{d}}^{\Lambda}$ the affine variety of representations of $\left( Q,\mathcal{I}\right) $ with $V_{a}=\Bbbk ^{d_{a}}$ for all $a\in Q_{0} $. Using the standard basis of $\Bbbk ^{d_{a}}$, any representation in $rep_{\mathbf{d}}^{\Lambda}$ is given by a tuple of matrices. The group $$Gl_\mathbf{d}=\prod_{a\in Q_0}Gl_{d_a}$$ acts on $rep_{\mathbf{d}}^{\Lambda}$ by conjugation such that orbits correspond to isomorphism classes of representations with dimension vector $\mathbf{d}$. We denote the $Q_0$-graded space $\oplus_{a\in Q_0}\Bbbk^{d_a}$ by $\Bbbk^\mathbf{d}$.
That is, $\Theta$ is defined as the matrix with $$\Theta \mathbf{d}=dim(P^\mathbf{d}).$$ Following \cite{JS05}, for every sequence of dimension vectors $\mathbf{d}:\mathbb{Z}\rightarrow \mathbb{N }_{0}^{Q_0}$ for which $\mathbf{d}_{i}=\left( 0,0,\ldots ,0\right) $ for all $i\gg0$ and $i\ll0$, $comproj_{\mathbf{d}}^{\Lambda}$ is the affine subvariety of \begin{equation*} \prod\limits_{i\in \mathbb{Z}}Hom_{\Lambda}(P^{\mathbf{d} _{i}},P^{\mathbf{d}_{i-1}}) \end{equation*} consisting of sequences of maps $\left( \partial _{i}:P^{\mathbf{d} _{i}}\rightarrow P^{\mathbf{d}_{i-1}}\right) _{i\in \mathbb{Z}}$ such that $\partial _{i}\partial _{i+1}=0$ for all $i\in \mathbb{Z}$. Clearly, $comproj_{\mathbf{d}}^{\Lambda}$ parameterizes complexes of projective $\Lambda$-modules with fixed dimension vectors in each degree. The sequence $\mathbf{d}$ is called a bounded dimension array. If the sequence $\mathbf{d}$ is not bounded to the left, then $comproj_{\mathbf{d}}^{\Lambda}$ is defined as a limit of affine varieties of truncated complexes \cite{JS05}. Unless we explicitly say otherwise, any dimension array $\mathbf{d}$ is assumed to be bounded. The group \begin{equation*} G_{\mathbf{d}}:=\prod\limits_{i\in \mathbb{Z}}Aut_{\Lambda} P^{\mathbf{d}_{i}} \end{equation*} acts on $comproj_{\mathbf{d}}^{\Lambda}$ by conjugation and two complexes in $comproj_{\mathbf{d}}^{\Lambda}$ are in the same orbit if and only if they are quasi-isomorphic (\cite{JSZ05}, Lemma 1). Moreover, for any two complexes $M$ and $N$ there exists a (not necessarily bounded) dimension array $\mathbf{d}$ and complexes $M^{\prime },N^{\prime }\in comproj_{\mathbf{d}}^{\Lambda}$ such that $M$ and $N$ are quasi-isomorphic to $M^{\prime }$ and $N^{\prime }$, respectively (\cite{JSZ05}, Lemma 2).
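For the simplest algebra $\Lambda = \Bbbk$ (one vertex, no arrows, so projective modules are just vector spaces), a point of $comproj_{\mathbf{d}}^{\Lambda}$ is a finite sequence of matrices with vanishing consecutive products, and $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda}$ additionally fixes the ranks. The following Python sketch of this membership test is only an illustration under that simplifying assumption; the matrices in the example are invented.

```python
def rank(A):
    """Rank of a matrix (list of rows) by Gaussian elimination."""
    M = [row[:] for row in A]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def in_comproj(ds, rs):
    """ds = [d_1, d_2, ...]: matrices of the differentials; rs: ranks.
    Checks d_i * d_{i+1} = 0 and rank(d_i) = r_i for each i."""
    ok_complex = all(x == 0
                     for i in range(len(ds) - 1)
                     for row in matmul(ds[i], ds[i + 1]) for x in row)
    return ok_complex and [rank(d) for d in ds] == rs

# Invented example: k^2 --d2--> k^2 --d1--> k^2 with d1*d2 = 0, ranks (1, 1).
d1 = [[1, 0], [0, 0]]
d2 = [[0, 0], [0, 1]]
```

Replacing `d2` by `d1` breaks the complex condition, so the same rank data no longer gives a point of the variety.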
For a sequence $\mathbf{r}:\mathbb{Z}\rightarrow \mathbb{N}_{0}^{Q_{0}}$ we define $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda}$ to be the locally closed $G_{\mathbf{d}}$-stable subvariety of $comproj_{\mathbf{d}}^{\Lambda}$ consisting of complexes $(\partial_i)_{i\in \mathbb{Z}}$ such that $rank\left( \partial _{i}\right) ={\mathbf r}_{i}$ for each $i\in \mathbb{Z}$. \section{The Variety $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$} In this section we will construct a variety $comhom_{\mathbf{d},\mathbf{r} }^{\Lambda }$ consisting of complexes together with their homology. Recall that $\Lambda$ is assumed to be an algebra of global dimension at most 2. Let $\left( \partial _{i}\right) _{i\in \mathbb{Z}}\in comproj_{{\mathbf{d}} , {\mathbf{r}}}^{\Lambda }$. We define two sequences \begin{equation*} \mathbf{k},\mathbf{h}:\mathbb{Z}\rightarrow \mathbb{N}_{0}^{Q_{0}} \end{equation*} with ${\mathbf{k}}_{i}$ equal to the dimension vector of the kernel of $ \partial _{i-1}$ and $\mathbf{h}_{i}$ equal to the dimension vector of the homology in degree $i-1$. That is, ${\mathbf{k}}_{i}=\Theta{\mathbf{d}}_{i-1}-{\mathbf{r} }_{i-1}$ and ${\mathbf{h}}_{i}={\mathbf{k}}_{i}-{\mathbf{r}}_{i}$, computed component-wise. \begin{lemma} For any $\left( \partial _{i}\right) _{i\in \mathbb{Z}} \in comproj_{{ \mathbf{d}},{\mathbf{r}}}^{\Lambda }$, the representation $ker\left( \partial _{i}\right)$ is a projective representation. \end{lemma} \begin{proof} For any $i$, the sequence \begin{equation*} 0\rightarrow ker\left( \partial _{i}\right)\rightarrow P^{\mathbf{d} _{i}}\rightarrow P^{\mathbf{d}_{i-1}}\rightarrow \frac{P^{\mathbf{d}_{i-1}}}{ im\left( \partial _{i}\right) }\rightarrow 0 \end{equation*} is the start of a projective resolution of $\frac{{P}^{\mathbf{d} _{i-1}}}{ im\left( \partial _{i}\right) }$, and so $ker\left( \partial _{i}\right)$ is projective, since $\Lambda $ has global dimension at most $2$.
\end{proof} The lemma allows us to fix projective representations $M_{i}\in rep_{\mathbf{k}_{i}}^{\Lambda }$ such that $M_{i}\cong ker\left( \partial _{i-1}\right)$ for any $\left( \partial _{i}\right)\in comproj_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda}$. Hence, there are $\Lambda$-monomorphisms $\eta _{i}:M_{i}\rightarrow P^{\mathbf{d}_{i-1}}$, $\Lambda$-homomorphisms $\phi _{i}:P^{\mathbf{d}_{i}}\rightarrow M_{i}$, and epimorphisms $\gamma _{i}:M_{i}\rightarrow \Bbbk ^{\mathbf{h}_{i}}$ such that $\partial _{i}=\eta _{i}\phi _{i}$, $\partial _{i-1}\eta _{i}=0$, and $im\left( \phi _{i}\right) =ker\left( \gamma _{i}\right) $ for any $i\in \mathbb{Z}$. \begin{lemma} \label{comhom}Let $\left( \partial _{i}\right) _{i\in \mathbb{Z}}\in comproj_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda }$. Then, \begin{equation*} rank\left( \phi _{i}\right) =\mathbf{r}_{i} \end{equation*} and there are unique representations $H_{i}\in rep_{\mathbf{h}_{i}}^{\Lambda }$ such that $\gamma _{i}:M_{i}\rightarrow H_i$ are $\Lambda $-epimorphisms and $H_{i}\cong ker\left( \partial _{i-1}\right) /im\left( \partial _{i}\right) $ for any $ i\in \mathbb{Z}$. \end{lemma} \begin{proof} The condition $\partial _{i}=\eta _{i}\phi _{i}$ means that $ker\left( \phi _{i}\right) \subseteq ker\left( \partial _{i}\right) $ and the injectivity of $\eta _{i}$ ensures that $ker\left( \partial _{i}\right) \subseteq ker\left( \phi _{i}\right) $. Thus, $ker\left( \phi _{i}\right) =ker\left( \partial _{i}\right) $. By the isomorphism theorem, we have \begin{equation*} im\left( \phi _{i}\right) \cong \frac{P^{\mathbf{d}_{i}}}{ker\left( \phi _{i}\right) }=\frac{P^{\mathbf{d}_{i}}}{ker\left( \partial _{i}\right) } \cong im\left( \partial _{i}\right) . \end{equation*} Hence, \begin{equation*} rank\left( \phi _{i}\right) =rank\left( \partial _{i}\right) =\mathbf{r}_{i}. \end{equation*} Now, let $\left( \left( H_{i}\right) _{\alpha }\right) _{\alpha \in Q_{1}}$ be the structure of a representation on $\Bbbk ^{\mathbf{h}_{i}}$. 
Since $\gamma _{i}:M_{i}\rightarrow \Bbbk ^{\mathbf{h}_{i}}$ is an epimorphism, we have $\left( H_{i}\right) _{\alpha }\circ \left( \gamma _{i}\right) _{a}=\left( \gamma _{i}\right) _{b}\circ \left( M_{i}\right) _{\alpha }$ for any arrow $\alpha :a\rightarrow b$ in $Q_{1}$. The surjectivity of $\gamma _{i}$ implies that $\gamma _{i}$ has a right inverse $\gamma _{i}^{-1}=\left( \left( \gamma _{i}\right) _{a}^{-1}\right) _{a\in Q_{0}}$. Thus, \begin{equation*} \left( H_{i}\right) _{\alpha }=\left( \gamma _{i}\right) _{b}\circ \left( M_{i}\right) _{\alpha }\circ \left( \gamma _{i}\right) _{a}^{-1}. \end{equation*} Hence, the representation $H_{i}=\left( \Bbbk ^{\left( \mathbf{h}_{i}\right) _{a}},\left( H_{i}\right) _{\alpha }\right) $ is uniquely determined by the epimorphism $\gamma _{i}$. We complete the proof by showing that $H_{i}\cong ker\left( \partial _{i-1}\right) /im\left( \partial _{i}\right)$. The condition $\partial _{i-1}\eta _{i}=0$ means that $im\left( \eta _{i}\right) \subseteq ker\left( \partial _{i-1}\right) $. On the other hand, by the isomorphism theorem and the injectivity of $\eta _{i}$, we obtain $im\left( \eta _{i}\right) \cong \frac{M_{i}}{ker\left( \eta _{i}\right) }\cong M_{i}\cong ker\left( \partial _{i-1}\right) $. Hence $im\left( \eta_{i}\right) =ker\left( \partial _{i-1}\right) $. Therefore, \begin{equation*} H_{i}\cong \frac{M_{i}}{ker\left( \gamma _{i}\right) }=\frac{M_{i}}{im\left( \phi _{i}\right) }\cong \frac{ker\left( \partial _{i-1}\right) }{im\left( \partial _{i}\right) }. \end{equation*} \end{proof} Now, we define $comhom_{{\mathbf{d}},{ \mathbf{r}}}^{\Lambda }$ to be the locally closed subvariety of the affine space \begin{eqnarray*} &&\prod\limits_{i\in \mathbb{Z}}\left( Hom_{\Lambda }\left( P^{\mathbf{d} _{i}},P^{\mathbf{d}_{i-1}}\right) \times Hom_{\Lambda }\left( M_{i},P^{ \mathbf{d}_{i-1}}\right) \times \right. \notag \\ &&\ \ \ \ \ \left. 
Hom_{\Lambda }\left( P^{\mathbf{d}_{i}},M_{i}\right) \times Hom_{\Lambda }\left( M_{i},\Bbbk ^{\mathbf{h}_{i}}\right) \times rep_{ \mathbf{h}_{i}}^{\Lambda }\right) \end{eqnarray*} consisting of tuples and representations $\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}},$ \begin{equation*} \xymatrix{\cdots \ar[r]^{\partial_{i+1}} & P^{\mathbf{d}_i} \ar[rr]^{\partial_i} \ar[dr]^{\phi_i} & & P^{\mathbf{d}_{i-1}} \ar[r]^{\partial_{i-1}} & \cdots \\ & & M_i \ar[d]^{\gamma_i} \ar[ur]^{\eta_i} \\ && H_i}, \end{equation*} defined by the conditions $\partial _{i}\partial _{i+1}=0$, $\partial_{i}=\eta_{i}\phi_{i}$, $\partial _{i-1}\eta _{i}=0$, $\gamma _{i}\phi _{i}=0$, $\eta _{i}$ is a $\Lambda$-monomorphism, $\phi _{i}$ has rank $\mathbf{r}_{i}$, and $\gamma _{i}$ is an epimorphism. By Lemma \ref{comhom}, for any $\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}}\in comhom_{{\mathbf{d}},{\mathbf{r}} }^{\Lambda }$ we have $\left( \partial _{i}\right) _{i\in \mathbb{Z}}\in comproj_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda }$, $ker(\gamma _{i})=im(\phi _{i})$, and $H_{i}$ is the unique point in $rep_{\mathbf{h}_{i}}^{\Lambda }$ such that $\gamma _{i}:M_i\rightarrow H_i$ is a $\Lambda $-epimorphism and $H_{i}\cong ker\left( \partial _{i-1}\right) /im\left( \partial _{i}\right) $. Let $$rep^\Lambda_{\mathbf{h}}=\prod_{i\in\mathbb{Z}} rep^\Lambda_{\mathbf{h}_i},$$ which admits an action by $$Gl_\mathbf{h}=\prod_{i\in\mathbb{Z}}Gl_{\mathbf{h}_i}$$ such that orbits are isomorphism classes of $\mathbb{Z}$-graded modules. 
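The dimension bookkeeping behind $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$ is pure arithmetic: ${\mathbf{k}}_{i}=\Theta{\mathbf{d}}_{i-1}-{\mathbf{r}}_{i-1}$ and ${\mathbf{h}}_{i}={\mathbf{k}}_{i}-{\mathbf{r}}_{i}$, computed component-wise. A minimal Python sketch of this computation follows; the Cartan matrix and the dimension and rank data in the example are invented (the one-vertex case $\Lambda = \Bbbk$, where $\Theta$ is the identity).

```python
def homology_dim_vectors(Theta, d, r):
    """h_i = Theta*d_{i-1} - r_{i-1} - r_i, component-wise.
    Theta: Cartan matrix (list of rows); d, r: dicts i -> dimension vector;
    missing rank vectors are treated as zero."""
    zero = [0] * len(Theta)
    h = {}
    for i in d:
        if i - 1 not in d:
            continue
        Td = [sum(Theta[a][b] * d[i - 1][b] for b in range(len(Theta)))
              for a in range(len(Theta))]
        h[i] = [Td[a] - r.get(i - 1, zero)[a] - r.get(i, zero)[a]
                for a in range(len(Theta))]
    return h

# Invented one-vertex example: the complex k^2 -> k^2 -> k^2 with both
# ranks equal to 1 has zero homology in the middle degree.
Theta = [[1]]
d = {0: [2], 1: [2], 2: [2]}
r = {1: [1], 2: [1]}
```

For this data the middle homology dimension vector is $(0)$, while at the right end the cokernel of the last differential has dimension vector $(1)$.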
The projections $\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}}\mapsto (\partial_{i})_{i\in \mathbb{Z}}$ and $\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}}\mapsto (H_i)_{i\in \mathbb{Z}}$ induce a pair of morphisms $$\xymatrix{& comhom_{\mathbf{d},\mathbf{r}}^{\Lambda} \ar[dl]_\pi \ar[dr]^\rho & \\ comproj_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda} &&rep_{\mathbf{h}}^{\Lambda}}$$ where $\rho(\pi^{-1}(X))$ is the $Gl_\mathbf{h}$-orbit of the homology of the complex $X$. Also, $\pi(\rho^{-1}(Y))$ is the set of complexes $X$ with $H^*X\cong Y$. These actions lift to an action of $G_\mathbf{d}\times Gl_\mathbf{h}\times Aut_\Lambda M$ on $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$, where $M=\prod_{i\in \mathbb{Z}}M_i$ and $$Aut_\Lambda M=\prod_{i\in \mathbb{Z}}Aut_\Lambda M_i.$$ Moreover, given $Z,Z'\in comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$, $Z$ and $Z'$ are conjugate if and only if $\pi(Z)$ and $\pi(Z')$ are conjugate. If $Z$ and $Z'$ are conjugate, then $\rho(Z)$ and $\rho(Z')$ are conjugate, but the converse is not true in general, as complexes are not determined by their homology in global dimension two. The map $\pi$ is surjective. We describe the image of $\rho$. Let $$rep_{\mathbf{h},\mathbf{r}}^{\Lambda }\subseteq rep_{\mathbf{h}}^{\Lambda }$$ be the subset of representations $(H_i)_{i\in \mathbb{Z}}$ which admit a presentation $$\xymatrix{P^{\mathbf{d}_i} \ar[r] & M_i \ar[r] & H_i \ar[r] & 0}.$$ The following lemma is well known, but we include it for the sake of completeness. \begin{lemma}\label{openrep} The subset $rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$ is open in $rep_{\mathbf{h}}^{\Lambda }$. \end{lemma} \begin{proof} It is enough to show that the subset $rep_{\mathbf{h}_{i},\mathbf{r}_{i}}^{\Lambda }$ is open in $rep_{\mathbf{h}_{i}}^{\Lambda }$ for a fixed integer $i$.
It is well known that the map $H\mapsto dim\left( \text{Ext}^{t}\left( H,S_{a}\right) \right) $, where $S_{a}$ is the simple top of the projective representation $P_{a}$, is upper semicontinuous on $rep_{\mathbf{h}_{i}}^{\Lambda }$; see \cite{S85} for details. The dimensions of Ext$^{0}(H,S_{a})$ and Ext$^{1}(H,S_{a})$ are equal to the multiplicities of the projective $P_{a}$ in the first and second projective module, respectively, of a minimal projective presentation of $H$. Therefore it follows that the set of representations $H$ admitting a projective presentation $P\rightarrow M\rightarrow H\rightarrow 0$ with $P$ and $M$ fixed is an open subvariety of $rep_{\mathbf{h}_{i}}^{\Lambda }$. Hence, $rep_{\mathbf{h}_{i},\mathbf{r}_{i}}^{\Lambda }$ is open in $rep_{\mathbf{h}_{i}}^{\Lambda }$ and the lemma follows. \end{proof} The fact that $im(\rho)$ is open in $rep_{\mathbf{h}}^{\Lambda }$ follows from Lemma \ref{openrep} and the following lemma. \begin{lemma} $im(\rho)=rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$. \end{lemma} \begin{proof} That $im(\rho)\subseteq rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$ follows by the construction of $comhom_{{\mathbf{d}},{\mathbf{r}}}^{\Lambda}$. For the converse, let $(H_i)_{i\in \mathbb{Z}}\in rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$ and let $$0\rightarrow M_{i+1} \stackrel{\eta_{i+1}} {\longrightarrow}P^{\mathbf{d}_i} \stackrel{\phi_i}{\longrightarrow} M_i \stackrel{\gamma_i}{\longrightarrow} H_i \rightarrow 0$$ be a projective resolution of $H_i$, for all $i\in \mathbb{Z}$. The tuple $(\eta_i\phi_i,\eta_i,\phi_i,\gamma_i,H_i)_{i\in \mathbb{Z}}$ maps onto $(H_i)_{i\in \mathbb{Z}}$, and so $im(\rho)\supseteq rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$.
\end{proof} \begin{remark} It is possible to define a variety $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}$, and maps $\pi$ and $\rho$, for any algebra $\Lambda$, by considering tuples $\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},M_i,H_{i}\right) _{i\in \mathbb{Z}}$, where $M_i$ is the kernel of $\partial_{i-1}$. \end{remark} \section{Proof of the theorem} The aim of this section is to prove the main theorem stated in the introduction. That is, that the two morphisms $\pi$ and $\rho$, defined in the previous section, are smooth with irreducible rational fibres. We do so by showing that they are compositions of isomorphisms, principal $G$-bundles, open immersions, and vector bundles. Recall the following fact from elementary linear algebra, which has the consequence that right and left inverses of matrices of full rank induce morphisms of varieties. Let $Mat_{i\times j}$ denote the vector space of $i\times j$ matrices. \begin{lemma} \label{InversMatriks}Let ${U}\in Mat_{i\times j}\left( \Bbbk \right) $ and ${W}\in Mat_{j\times i}\left( \Bbbk \right) $ with $rank\left( {U}\right) =rank\left( {W}\right) =j\leq i$. Then, a left inverse of the matrix ${U}$ is \begin{equation*} {U}^{-1}=\left( {U}^{T}{U}\right) ^{-1}{U}^{T}= \frac{adj\left( {U}^{T}{U}\right) \cdot {U}^{T}}{ det\left( {U}^{T}{U}\right) } \end{equation*} and a right inverse of the matrix ${W}$ is \begin{equation*} {W}^{-1}={W}^{T}\left( {WW}^{T}\right) ^{-1}=\frac{ {W}^{T}\cdot adj\left( {WW}^{T}\right) }{det\left( {WW} ^{T}\right) } \end{equation*} where ${U}^{T}$ and $adj\left( {U}\right) $ denote the transpose and the adjugate of the matrix ${U}$, respectively. \end{lemma} We will need the following result on homomorphisms of vector bundles. We refer to \cite{P97} for details. \begin{proposition} \label{VBundle}Let $f:E\rightarrow F$ be a map of vector bundles over $X$. Suppose that the rank of $f_{x}$ remains constant as $x$ varies over $X$.
Then $ker\left( f\right) $ and $im\left( f\right) $ are sub-bundles of $E$ and $F$, respectively. \end{proposition} \subsection{The smooth morphism from $comhom_{\mathbf{d},\mathbf{r} }^{\Lambda }$ to $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }$} The aim of this subsection is to prove the following lemma, which is the first half of the theorem. \begin{lemma} \label{phi}The morphism $\pi :comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }\rightarrow comproj_{\mathbf{d} ,\mathbf{r}}^{\Lambda }$ given by \begin{equation*} \pi \left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}}=(\partial _{i})_{i\in \mathbb{Z}} \end{equation*} is smooth with irreducible rational fibres. \end{lemma} We prove the lemma by decomposing $\pi $ into smooth morphisms \begin{equation*} \pi =\pi _{4}\circ \pi _{3}\circ \pi _{2}\circ \pi _{1} \end{equation*} where $\pi _{i}:X_{i-1}\rightarrow X_{i}$ are projection maps and $X_{i}$ are defined as follows. Let \begin{equation*} \begin{array}{rl} X_{0} & :=comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }, \\ X_{1} & \text{is the set of tuples }\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i}\right) _{i\in \mathbb{Z}}\text{ obtained by projecting } X_{0}\text{ via }\pi_1,\\ X_{2} & \text{is the set of tuples }\left( \partial _{i},\eta _{i},\phi _{i}\right) _{i\in \mathbb{Z}}\text{ obtained by projecting }X_{1}\text{ via }\pi_2, \\ X_{3} & \text{is the set of tuples }\left( \partial _{i},\eta _{i}\right) _{i\in \mathbb{Z}}\text{ obtained by projecting }X_{2}\text{ via }\pi_3, \\ X_{4} & :=comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }, \end{array} \end{equation*} and the maps $\pi _{i}$ are restrictions of the projection maps. 
\begin{lemma} \label{PhiSatu}The map $\pi _{1}:comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }\rightarrow X_{1}$, given by \begin{equation*} \pi _{1}(\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}})=\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i}\right) _{i\in \mathbb{Z}} \end{equation*} is an isomorphism of varieties. \end{lemma} \begin{proof} We prove the lemma by constructing an inverse of $\pi _{1}$. After fixing bases we may view all maps and representations as tuples of matrices. Let $\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i}\right) _{i\in \mathbb{Z}}\in X_{1}$. Since $\gamma _{i}:M_{i}\rightarrow \Bbbk ^{\mathbf{h}_{i}}$ is an epimorphism, it has a right inverse $\gamma _{i}^{-1}=\left( (\gamma _{i})_{a}^{-1}\right) _{a\in Q_{0}}$ by Lemma \ref{InversMatriks}. By Lemma \ref{comhom}, we construct a representation $H_{i}$ with $(H_{i})_{\alpha }=(\gamma _{i})_{b}(M_{i})_{\alpha }(\gamma _{i})_{a}^{-1}$ for any arrow $\alpha:a\rightarrow b$ in $Q_1$. Thus, $H_{i}$ is the unique representation in $rep_{\mathbf{h}_{i}}^{\Lambda}$ which makes $\gamma _{i}:M_{i}\rightarrow H_{i}$ a $\Lambda$-epimorphism. Then, for any $i\in\mathbb{Z}$, $H_i$ admits a presentation $P^{\mathbf{d}_i}\rightarrow M_i\rightarrow H_i\rightarrow 0$. Hence, $\left( H_{i}\right) _{i\in \mathbb{Z}}\in rep^{\Lambda}_{\mathbf{h},\mathbf{r}}$. So we have a morphism \begin{equation*} \left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i}\right) _{i\in \mathbb{Z} }\mapsto \left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},\left( (\gamma _{i})_{b}(M_{i})_{\alpha }(\gamma _{i})_{a}^{-1}\right) _{\alpha :a\rightarrow b}\right) _{i\in \mathbb{Z}} \end{equation*} which is the inverse of $\pi _{1}$. This completes the proof of the lemma. \end{proof} The proof of the following lemma is similar to the proof of Lemma 17 in \cite{JS05}, and so we skip the details.
\begin{lemma} \label{PhiDua}The map $\pi _{2}:X_{1}\rightarrow X_{2}$, given by \begin{equation*} \pi_2( \left( \partial _{i} , \eta _{i} ,\phi _{i} ,\gamma _{i} \right) _{i\in \mathbb{Z}})= \left( \partial _{i}, \eta _{i} ,\phi _{i}\right) _{i\in \mathbb{Z}} \end{equation*} is a locally trivial $Gl_{\mathbf{h}}$-bundle. \end{lemma} Any locally trivial $Gl_{\mathbf{h}}$-bundle is a smooth morphism, and so $\pi_2$ is smooth. \begin{lemma} \label{PhiTiga}The map $\pi _{3}:X_{2}\rightarrow X_{3}$, given by \begin{equation*} \pi_3( \left( \partial _{i} , \eta _{i} , \phi _{i}\right) _{i\in \mathbb{Z} })= \left( \partial _{i}, \eta _{i} \right) _{i\in \mathbb{Z}} \end{equation*} is an isomorphism of varieties. \end{lemma} \begin{proof} We prove that $\pi _{3}$ is an isomorphism by constructing an inverse morphism $(\pi _{3})^{-1}$. We fix bases and assume that all maps are given by matrices. Let $\left( \partial _{i},\eta _{i}\right) _{i\in \mathbb{Z}}\in X_{3}$. Since $\eta _{i}$ is a $\Lambda$-monomorphism, it has a left inverse $\eta _{i}^{-1}=\left( \left( \eta _{i}\right) _{a}^{-1}\right) _{a\in Q_{0}}$. For any $i\in \mathbb{Z}$, we define the $\Lambda $-homomorphism $\phi _{i}=\eta _{i}^{-1}\partial _{i}$. By Lemma \ref{InversMatriks}, the map $(\pi _{3})^{-1}$ defined by \begin{equation*} (\pi _{3})^{-1}(\left( \partial _{i},\eta _{i}\right) _{i\in \mathbb{Z} })=\left( \partial _{i},\eta _{i},\eta _{i}^{-1}\partial _{i}\right) _{i\in \mathbb{Z}} \end{equation*} is a morphism of varieties, and it is the inverse of $\pi _{3}$, which completes the proof of the lemma. \end{proof} Finally, the smoothness of $\pi _{4}$ follows from the following lemma and the fact that vector bundles and open immersions are smooth.
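The point of the explicit formula in Lemma \ref{InversMatriks} is that the left inverse $\eta _{i}^{-1}=(\eta _{i}^{T}\eta _{i})^{-1}\eta _{i}^{T}$ is given by polynomials in the entries of $\eta _{i}$ divided by $det(\eta _{i}^{T}\eta _{i})$, which is what makes $(\pi _{3})^{-1}$ a morphism on the locus of full-rank maps. A quick exact-arithmetic check of the formula (on an arbitrarily chosen full-column-rank matrix; illustrative only):

```python
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(c) for c in zip(*X)]

# A rank-2 matrix U in Mat_{3x2}(k), so a left inverse exists.
U = [[Fraction(1), Fraction(2)],
     [Fraction(0), Fraction(1)],
     [Fraction(1), Fraction(1)]]

G = matmul(transpose(U), U)                 # the 2x2 Gram matrix U^T U
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
adj = [[G[1][1], -G[0][1]],                 # adjugate of G, as in the lemma
       [-G[1][0], G[0][0]]]
Ginv = [[e / det for e in row] for row in adj]

Uinv = matmul(Ginv, transpose(U))           # left inverse (U^T U)^{-1} U^T
assert matmul(Uinv, U) == [[1, 0], [0, 1]]  # U^{-1} U = I
```

Every entry of `Uinv` is a ratio of polynomial expressions in the entries of `U` with the same denominator $det(U^{T}U)$, nonvanishing exactly on the full-rank locus.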
\begin{lemma} \label{PhiEmpat}The map $\pi_{4}:X_{3}\rightarrow comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }$, given by \begin{equation*} \pi_4(\left( \partial _{i} , \eta _{i} \right) _{i\in \mathbb{Z}})=\left( \partial _{i} \right) _{i\in \mathbb{Z}} \end{equation*} is the composition of an open immersion with a vector bundle with base $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }$. \end{lemma} \begin{proof} Without loss of generality we may write \begin{equation*} X_{3}\subseteq comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }\times \prod\limits_{i\in \mathbb{Z}}Inj_{\Lambda }\left( M_{i},P^{\mathbf{d}_{i-1}}\right) \end{equation*} where $Inj_{\Lambda }\left( M_{i},P^{\mathbf{d}_{i-1}}\right) =\left\{ \eta \in Hom_{\Lambda }\left( M_{i},P^{\mathbf{d}_{i-1}}\right) |\eta \text{ is injective}\right\} $ and $((\partial _{i})_{i\in \mathbb{Z}},(f_{i})_{i\in \mathbb{Z}})\in X_{3}$ if $\partial _{i-1}f_{i}=0,$ for all $i\in \mathbb{Z}$. By the rank condition, $Inj_{\Lambda }\left( M_{i},P^{\mathbf{d}_{i-1}}\right) $ is open in $Hom_{\Lambda }\left( M_{i},P^{\mathbf{d}_{i-1}}\right) $. Let \begin{equation*} Y\subseteq comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }\times \prod_{i\in \mathbb{Z}}Hom_{\Lambda }(M_{i},P^{\mathbf{d}_{i-1}}) \end{equation*} consist of pairs $((\partial _{i})_{i\in \mathbb{Z}},(f_{i})_{i\in \mathbb{Z}})$ such that $\partial _{i-1}f_{i}=0,$ for all $i\in \mathbb{Z}$. Hence $X_{3}$ is an open subset of $Y$ and therefore there is an open immersion $X_{3}\rightarrow Y$ with image those pairs $((\partial _{i})_{i\in \mathbb{Z}},(f_{i})_{i\in \mathbb{Z}})$ with $f_{i}$ injective for all $i\in \mathbb{Z}$.
The projection \begin{equation*} \pi ^{\prime }:Y\rightarrow comproj_{\mathbf{d},\mathbf{r}}^{\Lambda } \end{equation*} is the kernel of the morphism of trivial vector bundles \begin{equation*} comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }\times \prod_{i\in \mathbb{Z} }Hom_{\Lambda }(M_{i},P^{\mathbf{d}_{i-1}})\rightarrow comproj_{\mathbf{d}, \mathbf{r}}^{\Lambda }\times \prod_{i\in \mathbb{Z}}Hom_{\Lambda }(M_{i},P^{ \mathbf{d}_{i-2}}) \end{equation*} given by \begin{equation*} ((\partial _{i})_{i\in \mathbb{Z}},(f_{i})_{i\in \mathbb{Z}})\mapsto ((\partial _{i})_{i\in \mathbb{Z}},(\partial _{i-1}f_{i})_{i\in \mathbb{Z}}). \end{equation*} On fibres, the kernel is isomorphic to $\prod_{i\in \mathbb{Z}}Hom_{\Lambda }(M_{i},ker(\partial _{i-1}))$, which has constant dimension since $M_{i}$ is projective, and so $\pi ^{\prime }$ is a vector bundle by Proposition \ref{VBundle}. The map $\pi _{4}$ is therefore the composition of an open immersion with a vector bundle with base $comproj_{\mathbf{d},\mathbf{r} }^{\Lambda }$. The lemma follows. \end{proof} As before, let $M=\prod_{i\in \mathbb{Z}}M_i$, and $Aut_\Lambda M = \prod_{i\in \mathbb{Z}}Aut_\Lambda M_i$. \begin{corollary} $\pi_4$ is a locally trivial $Aut_\Lambda M$-bundle. \end{corollary} \begin{proof} The proof of Lemma \ref{PhiEmpat} shows that on fibres, the kernel of the trivial bundle is isomorphic to $\prod\limits_{i\in \mathbb{Z} }Hom_{\Lambda }\left( M_{i},ker\left( \partial _{i-1}\right) \right) $. Since $X_3$ is open in $Y$, on fibres the maps that belong to $X_3$ are the injective maps. Now, the set of all injective maps in $Hom_{\Lambda }\left( M_{i},ker\left( \partial _{i-1}\right) \right)$ is isomorphic to $Aut_{\Lambda }\left( M_{i}\right)$ since $M_{i}\cong ker\left( \partial _{i-1}\right)$. Hence, $\pi _{4}$ is locally trivial with fibres $Aut_{\Lambda }M$-equivariantly isomorphic to the group $Aut_{\Lambda }M$.
\end{proof} Having proved that the $\pi_i$ are smooth, we can conclude that $\pi$ is smooth. Also, the previous four lemmas show that the fibres are rational and irreducible, and so Lemma \ref{phi} follows. As a consequence of the proofs, we have the following dimension formula. \begin{corollary} \label{dimpi} $$dim (comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }) = dim (comhom_{\mathbf{d},\mathbf{r}}^{\Lambda}) - \sum_{i\in \mathbb{Z}}( \mathbf{h}_i^T\mathbf{h}_i + (\Theta^{-1}\mathbf{k}_i)^T\mathbf{k}_i).$$ \end{corollary} \begin{proof} The sum $\sum_{i\in \mathbb{Z}} \mathbf{h}_i^T\mathbf{h}_i$ computes the dimension of the fibre of $\pi_2$ and the sum $\sum_{i\in \mathbb{Z}} (\Theta^{-1}\mathbf{k}_i)^T\mathbf{k}_i$ is the fibre dimension of $\pi_4$. The formula follows. \end{proof} \subsection{The smooth morphism from $comhom_{\mathbf{d},\mathbf{r} }^{\Lambda }$ to $rep_{\mathbf{h}}^{\Lambda }$} The aim now is to prove the following lemma, which is the second part of the theorem. \begin{lemma} \label{rho}The morphism $\rho :comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }\rightarrow rep_{\mathbf{h}}^{\Lambda } $ given by \begin{equation*} \rho ((\partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i})_{i\in \mathbb{Z}})=(H_{i})_{i\in \mathbb{Z}} \end{equation*} is smooth. \end{lemma} Similarly to the case of $\pi$, we decompose \begin{equation*} \rho =\rho _{4}\circ \rho _{3}\circ \rho _{2}\circ \rho _{1} \end{equation*} and prove that each $\rho _{i}$ is a smooth morphism.
We let $\rho _{i}:Z_{i-1}\rightarrow Z_{i}$ be projection maps, where \begin{equation*} \begin{array}{rl} Z_{0} & :=comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }, \\ Z_{1} & \text{is the set of tuples }(\eta _{i},\phi _{i},\gamma _{i},H_{i})_{i\in \mathbb{Z}}\text{ obtained by projecting }Z_{0}\text{ via }\rho_1, \\ Z_{2} & \text{is the set of tuples }(\phi _{i},\gamma _{i},H_{i})_{i\in \mathbb{Z}}\text{ obtained by projecting }Z_{1}\text{ via }\rho_2, \\ Z_{3} & \text{is the set of tuples }(\gamma _{i},H_{i})_{i\in \mathbb{Z}}\text{ obtained by projecting }Z_{2}\text{ via }\rho_3, \\ Z_{4} & :=rep_{\mathbf{h},\mathbf{r}}^{\Lambda }, \end{array} \end{equation*} and the maps $\rho _{i}$ are restrictions of the projection maps. \begin{lemma} \label{RhoSatu}The map $\rho _{1}:comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }\rightarrow Z_{1} $ given by \begin{equation*} \rho _{1}(\left( \partial _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}})=\left( \eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}} \end{equation*} is an isomorphism of varieties. \end{lemma} \begin{proof} The morphism $\rho_{1}$ has an inverse given by \begin{equation*} \left( \eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z} }\mapsto \left( \eta _{i}\phi _{i},\eta _{i},\phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z}}, \end{equation*} due to the equation $\partial _{i}=\eta _{i}\phi _{i}$ in the definition of $comhom_{\mathbf{d},\mathbf{r}}^{\Lambda }$. \end{proof} Since open immersions and vector bundles are smooth, the following lemma proves smoothness of $\rho _{2}$. \begin{lemma} \label{RhoDua}The map $\rho _{2}:Z_{1}\rightarrow Z_{2}$ defined by \begin{equation*} \rho_2(\left( \eta _{i}, \phi _{i} , \gamma _{i}, H_{i} \right) _{i\in \mathbb{Z}})= \left( \phi _{i} , \gamma _{i} , H_{i}\right) _{i\in \mathbb{Z} } \end{equation*} is the composition of an open immersion with a vector bundle with base $Z_2$.
\end{lemma} \begin{proof} Let $Y_{1}$ be defined as $Z_{1}$, but without the restriction that $\eta _{i}$ should be a monomorphism. Then $Y_1$ is isomorphic to the kernel of the homomorphism of trivial vector bundles $$\prod_{i\in \mathbb{Z}}Hom(M_i,P^{\mathbf{d}_{i-1}})\times Z_2 \rightarrow \prod_{i\in\mathbb{Z}}Hom(M_i,M_{i-1})\times Z_2$$ given by $$(\eta_i,\phi_i,\gamma_i,H_i)_{i\in\mathbb{Z}}\mapsto (\phi_{i-1}\eta_i,\phi_i,\gamma_i,H_i)_{i\in\mathbb{Z}}.$$ By Lemma \ref{comhom}, $im(\eta_i)=ker(\partial_{i-1})=ker(\phi_{i-1})$ so that $\phi_{i-1}\eta_i=0$. On fibres, the kernel is isomorphic to $\prod_{i\in \mathbb{Z}}Hom_\Lambda(M_i,ker(\partial_{i-1}))$, which has constant dimension since $M_i$ is projective. Hence, $Y_1\rightarrow Z_2$ is a vector bundle by Proposition \ref{VBundle}. There is an open immersion $Z_{1}\rightarrow Y_{1}$ with image those tuples with $\eta _{i}$ injective for all $i$. Therefore $\rho _{2}$ is the composition of an open immersion with a vector bundle with base $Z_{2}$, and the proof is complete. \end{proof} Although the fibres of $\rho_2$ are isomorphic to $Aut_\Lambda M$, they are not in general closed under the action of $Aut_\Lambda M$, and so $\rho_2$ is not an $Aut_\Lambda M$-bundle. This is because the homology of a complex does not determine its quasi-isomorphism class for an algebra of global dimension two. \begin{lemma} \label{RhoTiga}The map $\rho _{3}:Z_{2}\rightarrow Z_{3}$ defined by \begin{equation*} \rho _{3}(\left( \phi _{i},\gamma _{i},H_{i}\right) _{i\in \mathbb{Z} })=\left( \gamma _{i},H_{i}\right) _{i\in \mathbb{Z}} \end{equation*} is the composition of an open immersion with a vector bundle with base $ Z_{3} $.
\end{lemma} \begin{proof} Let $Y_{2}$ be defined as $Z_{2}$ by changing the property $im\left( \phi _{i}\right) =ker\left( \gamma _{i}\right) $ into $\gamma _{i}\phi _{i}=0$ for all $i\in \mathbb{Z}.$ Similar to the proof of Lemma \ref{PhiEmpat}, the projection $Y_{2}\rightarrow Z_{3}$ is the kernel of a homomorphism between two trivial vector bundles with base $Z_{3}$, which has fibres isomorphic to $\prod_{i\in \mathbb{Z}}Hom_{\Lambda }(P^{\mathbf{d} _{i}},ker(\gamma _{i}))$ and so $Y_{2}\rightarrow Z_{3}$ is a vector bundle. There is an open immersion $Z_{2}\rightarrow Y_{2}$ with image those tuples with $im\left( \phi _{i}\right) =ker\left( \gamma _{i}\right) $ for all $i$. Therefore $\rho _{3}$ is the composition of an open immersion with a vector bundle with base $Z_{3}$, and the proof is complete. \end{proof} \begin{lemma} \label{RhoEmpat}The map $\rho _{4}:Z_{3}\rightarrow rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$ defined by \begin{equation*} \rho _{4}(\left( \gamma _{i},H_{i}\right) _{i\in \mathbb{Z}})=\left( H_{i}\right) _{i\in \mathbb{Z}} \end{equation*} is the composition of an open immersion with a vector bundle with base $rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$. \end{lemma} \begin{proof} Let $Y_{3}$ be defined as $Z_{3}$, but without the restriction that $\gamma _{i}$ is an epimorphism. We prove that the morphism $Y_{3}\rightarrow rep_{\mathbf{h},\mathbf{r}}^{\Lambda }$ is the kernel of a map between trivial vector bundles. Choose $i\in \mathbb{Z}$ and for simplicity write $(\gamma,H)=(\gamma_i,H_i)$. 
There is a homomorphism of vector bundles \begin{equation*} Hom\left( M_i,\Bbbk ^{\mathbf{h}_i}\right) \times rep_{\mathbf{h}_i,\mathbf{r}_i}^{\Lambda } \rightarrow \prod\limits_{\alpha :a\rightarrow b}Hom_{\Bbbk }\left( \Bbbk ^{(\mathbf{k}_i)_{a}},\Bbbk ^{(\mathbf{h}_i)_{b}}\right)\times rep_{\mathbf{h}_i,\mathbf{r}_i}^{\Lambda } \end{equation*} \begin{equation*} \left( \left( \gamma _{a}\right) ,\left( H_{\alpha }\right) \right) \mapsto \left( \left( H_{\alpha }\circ \gamma _{a}-\gamma _{b}\circ (M_i)_{\alpha }\right) ,\left( H_{\alpha }\right) \right) . \end{equation*} On fibres, the kernel is isomorphic to $Hom_{\Lambda }\left( M_i,H\right) $ which has constant dimension since $M_i$ is projective, and so the kernel is a sub-bundle by Proposition \ref{VBundle}. Since $Z_3\subseteq Y_3$ is open it follows that $\rho_4$ is the composition of an open immersion with a vector bundle. \end{proof} Lemma \ref{rho} follows from the previous four lemmas. This concludes the proof of the theorem. We also have the following formula relating the dimension of $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }$ to the dimension of $rep_{\mathbf{h},\mathbf{r}}^\Lambda$. \begin{corollary} $$dim (comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }) = dim (rep_{\mathbf{h},\mathbf{r}}^\Lambda) + \sum_{i\in \mathbb{Z}}( \mathbf{d}_i^T\mathbf{r}_i - (\Theta^{-1}\mathbf{k}_i)^T\mathbf{k}_i).$$ \end{corollary} \begin{proof} The sum $\sum_{i\in \mathbb{Z}} (\mathbf{h}_i^T\mathbf{h}_i+\mathbf{d}_i^T\mathbf{r}_i)$ is the dimension of the fibre of $\rho$. The corollary now follows from Corollary \ref{dimpi}. \end{proof} \section{Applications and Examples} We start by proving the two corollaries stated in the introduction. \begin{proof}[Proof of Corollary 2.] Each of the morphisms $\pi_i:X_{i-1}\rightarrow X_i$ induces a bijection between the irreducible components of $X_{i-1}$ and those of its image $im(\pi_i)=X_i$.
As $\pi$ is surjective, we have a bijection between the irreducible components of $comproj^\Lambda_{\mathbf{d},\mathbf{r}}$ and those of $comhom^\Lambda_{\mathbf{d},\mathbf{r}}$. Similarly, there is a bijection between the irreducible components of $comhom^\Lambda_{\mathbf{d},\mathbf{r}}$ and those of the image of $\rho$. The corollary follows. \end{proof} \begin{proof}[Proof of Corollary 3.] Since $\Lambda$ is hereditary, $rep_{\mathbf{h} }^{\Lambda }$ is a vector space, which is both smooth and rational. Using the morphisms $\rho_i$ and $\pi_i$ we therefore have that $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }$ is both smooth and rational. The irreducibility of $comproj_{\mathbf{d},\mathbf{ r}}^{\Lambda }$ follows from Corollary 2. \end{proof} We end this paper with an example. \begin{example} Let $\Lambda$ be given by the quiver \begin{equation*} \xymatrix{1 \ar[r]^\alpha & 2 \ar[r]^\beta & 3} \end{equation*} with the relation $\beta \alpha =0.$ Consider the dimension array \begin{equation*} \mathbf{d}=(\mathbf{d}_{0},\mathbf{d}_{1},\mathbf{d}_{2})=\left( \left( \begin{array}{c} 2 \\ 2 \\ 2 \end{array} \right) ,\left( \begin{array}{c} 2 \\ 4 \\ 1 \end{array} \right) ,\left( \begin{array}{c} 2 \\ 3 \\ 2 \end{array} \right) \right) \end{equation*} Then $comproj_{\mathbf{d}}^{\Lambda }$ consists of pairs of matrices $x$ and $y$, \begin{equation*} 0\rightarrow P_{1}^{2}\oplus P_{2}^{3}\oplus P_{3}^{2}\xrightarrow{x} P_{1}^{2}\oplus P_{2}^{4}\oplus P_{3}^{1}\xrightarrow{y}P_{1}^{2}\oplus P_{2}^{2}\oplus P_{3}^{2}\rightarrow 0 \end{equation*} where \begin{equation*} x=\left( \begin{array}{ccc} x_{1} & x_{2} & 0 \\ 0 & x_{3} & x_{4} \\ 0 & 0 & x_{5} \end{array} \right) \text{ and }y=\left( \begin{array}{ccc} y_{1} & y_{2} & 0 \\ 0 & y_{3} & y_{4} \\ 0 & 0 & y_{5} \end{array} \right) \end{equation*} and each $x_{i}$, $y_{i}$ is a matrix block, e.g. \begin{equation*} x_{2}=\left( \begin{array}{cc} x_{2,11} & x_{2,12} \\ x_{2,21} & x_{2,22} \end{array} \right) .
\end{equation*} The defining equations of $comproj_{\mathbf{d}}^{\Lambda }$ are obtained from the matrix equations \begin{equation*} yx=0\text{ and }y_{2}x_{5}=0, \end{equation*} the first giving a complex and the second coming from the relation $\beta \alpha =0$. Let \begin{equation*} \mathbf{r}=(\mathbf{r}_{1},\mathbf{r}_{2})=\left( \left( \begin{array}{c} 0 \\ 2 \\ 1 \end{array} \right) ,\left( \begin{array}{c} 0 \\ 1 \\ 1 \end{array} \right) \right) \end{equation*} denote the ranks of the matrices, i.e.\ the dimension vectors of the images of $y$ and $x$, respectively. We compute and find \begin{equation*} M_{0}=M_{1}=M_{2}=P^{\mathbf{d}_{0}}. \end{equation*} The dimension vectors of the corresponding homology modules are \begin{equation*} \mathbf{h}=(\mathbf{h}_{0},\mathbf{h}_{1},\mathbf{h}_{2})=\left( \left( \begin{array}{c} 2 \\ 2 \\ 3 \end{array} \right) ,\left( \begin{array}{c} 2 \\ 3 \\ 3 \end{array} \right) ,\left( \begin{array}{c} 2 \\ 4 \\ 4 \end{array} \right) \right) . \end{equation*} Now $rep_{\mathbf{h}_{0}}^{\Lambda }$ has three irreducible components, given as the orbit closures of the modules \begin{equation*} P_{1}^{2}\oplus P_{3}^{3},~~~S_{1}^{2}\oplus P_{2}^{2}~~~\text{and~~~} P_{1}\oplus P_{2}\oplus P_{3}^{2}\oplus S_{1}, \end{equation*} where $S_{i}$ is the simple representation at vertex $i$. Of these, only the latter two have the required presentation \begin{equation*} P^{\mathbf{d}_{1}}\rightarrow M_{0}\rightarrow H_{0}\rightarrow 0. \end{equation*} The variety $rep_{\mathbf{h}_{1}}^{\Lambda }$ also has three irreducible components, given by \begin{equation*} S_{1}^{2}\oplus P_{2}^{3},~~~P_{1}^{2}\oplus P_{2}\oplus P_{3}^{2}~~~\text{ and}~~\ S_{1}\oplus P_{1}\oplus P_{2}^{2}\oplus P_{3}, \end{equation*} where only the latter two have the required presentation.
In total there are therefore 4 irreducible components in $rep_{\mathbf{h},\mathbf{r}}^{\Lambda } $, and therefore also in $comproj_{\mathbf{d},\mathbf{r}}^{\Lambda }.$ \end{example} \noindent{\bf Acknowledgement.} The authors would like to thank the referee for many valuable suggestions. \noindent \textsc{Darmajid} : Algebra Research Division, Institut Teknologi Bandung.\\ Jalan Ganeca no.10, Bandung, Indonesia.\\ Email : [email protected].\\ \newline \noindent \textsc{Bernt Tore Jensen} : Gj\o vik University College \\ Teknologivn 22, 2815, Gj\o vik, Norway.\\ Email : [email protected] \end{document}
\begin{document} \title[single commutators]{On single commutators in II$_1$--factors} \author[Dykema]{Ken Dykema$^{*}$} \author[Skripka]{Anna Skripka$^\dag$} \address{K.D., Department of Mathematics, Texas A\&M University, College Station, TX 77843-3368, USA} \email{[email protected]} \address{A.S., Department of Mathematics, University of Central Florida, 4000 Central Florida Blvd., P.O.\ Box 161364, Orlando, FL 32816-1364, USA} \email{[email protected]} \thanks{\footnotesize ${}^{*}$Research supported in part by NSF grant DMS--0901220. ${}^{\dag}$Research supported in part by NSF grant DMS-0900870} \subjclass[2000]{47B47, 47C15} \keywords{Commutators, II$_1$--factors} \date{February 1, 2011} \begin{abstract} We investigate the question of whether all elements of trace zero in a II$_1$--factor are single commutators. We show that all nilpotent elements are single commutators, as are all normal elements of trace zero whose spectral distributions are discrete measures. Some other classes of examples are considered. \end{abstract} \maketitle \section{Introduction} In an algebra $\Afr$, the {\em commutator} of $B,C\in\Afr$ is $[B,C]=BC-CB$, and we denote by $\Comm(\Afr)\subseteq\Afr$ the set of all commutators. A {\em trace} on $\Afr$ is by definition a linear functional that vanishes on $\Comm(\Afr)$. The algebra $M_n(k)$ of $n\times n$ matrices over a field $k$ has a unique trace, up to scalar multiplication; (we denote the trace sending the identity element to $1$ by $\tr_n$). It is known that every element of $M_n(k)$ that has null trace is necessarily a commutator (see~\cite{S36} for the case of characteristic zero and \cite{AM57} for the case of an arbitrary characteristic). For the complex field, $k=\Cpx$, a natural generalization of the algebra $M_n(\Cpx)$ is the algebra $B(\HEu)$ of all bounded operators on a separable, possibly infinite dimensional Hilbert space $\HEu$.
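For diagonal matrices, the fact that trace zero implies being a single commutator can be seen from the classical shift construction (the same construction is used in Section~\ref{sec:normal} below): take $B$ the upper shift matrix and $C=B^{*}D$, where $D$ is the diagonal matrix of partial sums of the eigenvalues. The following sketch, with an arbitrarily chosen trace-zero tuple of eigenvalues and not part of the paper's argument, verifies $[B,C]=A$:

```python
# B is the upper shift matrix and C = B^T D, with D the diagonal of
# partial sums l1, l1+l2, ..., l1+...+l_{n-1}, 0 (B is real here, so B^* = B^T).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(B, C):
    BC, CB = matmul(B, C), matmul(C, B)
    n = len(B)
    return [[BC[i][j] - CB[i][j] for j in range(n)] for i in range(n)]

lams = [3, -1, 1j, -2 - 1j]        # eigenvalues; the sum (trace) is zero by design
n = len(lams)
A = [[lams[i] if i == j else 0 for j in range(n)] for i in range(n)]

B = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
partial = [sum(lams[:k + 1]) for k in range(n - 1)] + [0]
D = [[partial[i] if i == j else 0 for j in range(n)] for i in range(n)]
Bt = [[B[j][i] for j in range(n)] for i in range(n)]
C = matmul(Bt, D)

assert commutator(B, C) == A       # A is the single commutator [B, C]
```

Ordering the eigenvalues so that the partial sums stay small is what gives norm control on $B$ and $C$; the sketch only checks the algebraic identity.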
Thanks to the groundbreaking paper~\cite{BP65} of Brown and Pearcy, $\Comm(B(\HEu))$ is known: the commutators in $B(\HEu)$ are precisely the operators that are not of the form $\lambda I+K$ for $\lambda$ a nonzero complex number, $I$ the identity operator and $K$ a compact operator (and an analogous result holds when $\HEu$ is nonseparable). Characterizations of $\Comm(B(X))$ for some Banach spaces $X$ are found in~\cite{A72}, \cite{A73}, \cite{D09} and~\cite{DJ10}. The von Neumann algebra factors form a natural family of algebras including the matrix algebras $M_n(\Cpx)$ and $B(\HEu)$ for infinite dimensional Hilbert spaces $\HEu$; (these together are the type~I factors). The set $\Comm(\Mcal)$ was determined by Brown and Pearcy~\cite{BP66} for $\Mcal$ a factor of type III and by Halpern~\cite{H69} for $\Mcal$ a factor of type II$_\infty$. The case of type II$_1$ factors remains open. A type II$_1$ factor is a von Neumann algebra $\Mcal$ whose center is trivial and that has a trace $\tau:\Mcal\to\Cpx$, which is then unique up to scalar multiplication; by convention, we always take $\tau(1)=1$. The following question seems natural, in light of what is known for matrices: \begin{ques}\label{qn:comm} Do we have \[ \Comm(\Mcal)=\ker\tau \] for any one particular II$_1$--factor $\Mcal$, or even for all II$_1$--factors? \end{ques} Some partial results are known. Fack and de la Harpe~\cite{FH80} showed that every element of $\ker\tau$ is a sum of ten commutators (with control of the norms of the elements). The number ten was improved to two by Marcoux~\cite{M06}. Pearcy and Topping, in~\cite{PT69}, showed that in the type II$_1$ factors of Wright (which do not have separable predual), every self--adjoint element of trace zero is a commutator.
In section~\ref{sec:normal}, we employ the construction of Pearcy and Topping for the Wright factors and a result of Hadwin~\cite{Had98} to show first that all normal elements of trace zero in the Wright factors are commutators. We then use this same construction to derive that in any II$_1$--factor, every normal element with trace zero and purely atomic distribution is a single commutator. In section~\ref{sec:nilpotent}, we show that all nilpotent operators in II$_1$--factors are commutators. Finally, in section~\ref{sec:ques}, we provide classes of examples of elements of II$_1$--factors that are not normal and not nilpotent but are single commutators, and we ask some specific questions suggested by our examples and results. \noindent{\bf Acknowledgement.} The authors thank Heydar Radjavi for stimulating discussions about commutators, and Gabriel Tucci for help with his operators. \section{Some normal operators} \label{sec:normal} The following lemma (but with a constant of $2$) was described in Concluding Remark (1) of~\cite{PT69}, attributed to unpublished work of John Dyer. That the desired ordering of eigenvalues can be made with bounding constant~$4$ follows from work of Steinitz~\cite{St13}; the value $2$ follows from~\cite{GS80}; and the better constant in the version below (which is not actually needed in our application of it) is due to work of Banaszczyk~\cite{Ba87}, \cite{Ba90}. \begin{lemma}\label{lem:Anormal} Let $A\in M_n(\Cpx)$ be a normal element with $\tr_n(A)=0$. Then there are $B,C\in M_n(\Cpx)$ with $A=[B,C]$ and $\|B\|\,\|C\|\le\frac{\sqrt5}2\|A\|$. \end{lemma} \begin{proof} After conjugating with a unitary, we may without loss of generality assume $A=\diag(\lambda_1,\ldots,\lambda_n)$ and we may choose the diagonal elements to appear in any prescribed order.
We have $A=[B,C]$ where \begin{equation}\label{eq:B} B=\left( \begin{matrix} 0&1&0&\cdots&0 \\ 0&0&1 \\ \vdots & &\ddots&\ddots \\ 0& & \cdots & 0 &1 \\ 0& 0 & \cdots & &0 \end{matrix}\right) \end{equation} and $C=B^*D$, where \begin{equation}\label{eq:D} D=\diag(\lambda_1,\,\lambda_1+\lambda_2,\,\ldots,\lambda_1+\cdots+\lambda_{n-1},0). \end{equation} By work of Banaszczyk~\cite{Ba87}, \cite{Ba90}, any list $\lambda_1,\ldots,\lambda_n$ of complex numbers whose sum is zero can be reordered so that for all $k\in\{1,\ldots,n-1\}$ we have \begin{equation}\label{eq:lambdasum} \left|\sum_{j=1}^k\lambda_j\right|\le\frac{\sqrt5}2\max_{1\le j\le n}|\lambda_j|. \end{equation} This ensures $\|B\|\le1$ and $\|C\|\le\frac{\sqrt5}2\|A\|$. \end{proof} The II$_1$--factors of Wright~\cite{W54} are the quotients of the von Neumann algebra of all bounded sequences in $\prod_{n=1}^\infty M_n(\Cpx)$ by the ideal $I_\omega$, consisting of all sequences $(a_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ such that $\lim_{n\to\omega}\tr_n(a_n^*a_n)=0$, where $\omega$ is a nontrivial ultrafilter on the natural numbers. The trace of the element of $\Mcal$ associated to a bounded sequence $(b_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ is $\lim_{n\to\omega}\tr_n(b_n)$. (See~\cite{McD70} or~\cite{J72} for ultrapowers of finite von Neumann algebras.) The following result in the case of self--adjoint operators is due to Pearcy and Topping~\cite{PT69}. \begin{thm}\label{thm:PT} If $\Mcal$ is a Wright factor and if $T\in\Mcal$ is normal with $\tau(T)=0$, then $T\in\Comm(\Mcal)$. \end{thm} \begin{proof} Let $T\in\Mcal$ be normal and let $X$ and $Y$ be the real and imaginary parts of $T$, respectively. Let $(S_n)_{n=1}^\infty\in\prod_{n=1}^\infty M_n(\Cpx)$ be a representative of $T$, with $\|S_n\|\le\|T\|$ for all $n$. Let $X_n$ and $Y_n$ be the real and imaginary parts of $S_n$. Then the mixed $*$--moments of the pair $(X_n,Y_n)$ converge as $n\to\omega$ to the mixed $*$--moments of $(X,Y)$.
By standard methods, we can construct some commuting, self--adjoint, traceless $n\times n$ matrices $H_n$ and $K_n$ such that $H_n$ converges in moments to $X$ and $K_n$ converges in moments to $Y$, as $n\to\infty$. Now using a result of Hadwin (Theorem 2.1 of~\cite{Had98}), we find $n\times n$ unitaries $U_n$ such that \[ \lim_{n\to\omega}\|U_nX_nU_n^*-H_n\|_2=0 \qquad \lim_{n\to\omega}\|U_nY_nU_n^*-K_n\|_2=0, \] where $\|Z\|_2=\tr_n(Z^*Z)^{1/2}$ is the Euclidean norm resulting from the normalized trace on $M_n(\Cpx)$. This shows that $T$ has a representative $(T_n)_{n=1}^\infty$, where $T_n=U_n^*(H_n+iK_n)U_n$ is normal and, of course, traceless. By Lemma~\ref{lem:Anormal}, for each $n$ there are $B_n,C_n\in M_n(\Cpx)$ with $\|B_n\|=1$ and $\|C_n\|\le\frac{\sqrt5}2\|T\|$ such that $T_n=[B_n,C_n]$. Let $B,C\in\Mcal$ be the images (in the quotient $\prod_{n=1}^\infty M_n(\Cpx)/I_\omega$) of $(B_n)_{n=1}^\infty$ and $(C_n)_{n=1}^\infty$, respectively. Then $T=[B,C]$. \end{proof} The {\em distribution} of a normal element $T$ in a II$_1$--factor is the compactly supported Borel probability measure on the complex plane obtained by composing the trace with the projection--valued spectral measure of $T$. \begin{thm}\label{thm:normalhyp} If $R$ is the hyperfinite II$_1$--factor and if $\mu$ is a compactly supported Borel probability measure on the complex plane such that $\int z\,\mu(dz)=0$, then there is a normal element $T\in\Comm(R)$ whose distribution is $\mu$. \end{thm} \begin{proof} We will consider a particular instance of the construction from the proof of Theorem~\ref{thm:PT}. Let $\Mcal$ be a factor of Wright, with tracial state $\tau$. Let $L$ be the maximum modulus of elements of the support of $\mu$.
We may choose complex numbers $(\lambda^{(n)}_j)_{j=1}^n$ for $n\ge1$ such that the measures $\frac1n\sum_{j=1}^n\delta_{\lambda_j^{(n)}}$ converge in weak$^*$--topology to $\mu$ and all have support contained inside the disk of radius $L$ centered at the origin and such that $\sum_{j=1}^n\lambda^{(n)}_j=0$ for each $n$. Let $T_n=\diag(\lambda^{(n)}_1,\ldots,\lambda^{(n)}_n)\in M_n(\Cpx)$ and let $T\in\Mcal$ be the element associated to the sequence $(T_n)_{n=1}^\infty$. Then the distribution of $T$ is $\mu$. By~\cite{Ba87}, \cite{Ba90}, we can order these $\lambda^{(n)}_1,\ldots,\lambda^{(n)}_n$ so that $\big|\sum_{j=1}^k\lambda^{(n)}_j\big|\le\frac{\sqrt5}2\|T\|$ for all $1\le k\le n$. Then, as in the proof of Lemma~\ref{lem:Anormal}, we have $T_n=[B_n,B_n^*D_n]$ where $B_n$ and $D_n$ are the $n\times n$ matrices $B$ and $D$ of~\eqref{eq:B} and~\eqref{eq:D}, respectively. If $B,D\in\Mcal$ are the images in the quotient of the sequences $(B_n)_{n=1}^\infty$ and $(D_n)_{n=1}^\infty$, respectively, then $T=[B,B^*D]$. However, note that $B\in\Mcal$ is a unitary element such that $\tau(B^k)=0$ for all $k>0$. Moreover, the set $\{B^kDB^{-k}\mid k\in\Ints\}$ generates a commutative von Neumann subalgebra $\Ac$ of $\Mcal$ and every element of $\Ac$ is the image (under the quotient mapping) of a sequence $(A_n)_{n=1}^\infty$ where each $A_n\in M_n(\Cpx)$ is a diagonal matrix. Thus, the unitary $B$ acts by conjugation on $\Ac$, and, moreover, we have $\tau(AB^k)=0$ for all $A\in\Ac$ and all $k>0$. Therefore the von Neumann subalgebra generated by $\Ac\cup\{B\}$ is a case of the group--measure-space construction, $\Ac\rtimes\Ints$, and is a hyperfinite von Neumann algebra by \cite{Co76} and can, thus, be embedded into the hyperfinite II$_1$--factor $R$. \end{proof} The above proof actually shows the following. 
\begin{cor} Given any compactly supported Borel probability measure $\mu$ on the complex plane with $\int z\,\mu(dz)=0$, there is $f\in L^\infty([0,1])$ and a probability-measure-preserving transformation $\alpha$ of $[0,1]$ such that the distribution of $f-\alpha(f)$ equals $\mu$ and the supremum norm of $f$ is no more than $\frac{\sqrt5}2$ times the maximum modulus of the support of $\mu$. \end{cor} \begin{thm}\label{thm:atomic} If $\Mcal$ is any II$_1$--factor and $T\in\Mcal$ is a normal element whose distribution is purely atomic and with trace $\tau(T)=0$, then $T\in\Comm(\Mcal)$. \end{thm} \begin{proof} $\Mcal$ contains a (unital) subfactor $R$ isomorphic to the hyperfinite II$_1$--factor. By Theorem~\ref{thm:normalhyp}, there is an element $\Tt\in\Comm(R)$ whose distribution equals the distribution of $T$. Since this distribution is purely atomic, there is a unitary $U\in\Mcal$ such that $U\Tt U^*=T$. Thus, $T\in\Comm(\Mcal)$. \end{proof} \section{Nilpotent operators} \label{sec:nilpotent} The von Neumann algebra $\Mcal$ is embedded in $B(\HEu)$ as a strong--operator--topology closed, self--adjoint subalgebra. If $T\in\Mcal$, we denote the self--adjoint projection onto $\ker(T)$ by $\kerproj(T)$ and the self--adjoint projection onto the closure of the range of $T$ by $\ranproj(T)$. Both of these belong to $\Mcal$, and we have \[\tau(\kerproj(T))+\tau(\ranproj(T))=1.\] The following decomposition follows from the usual sort of analysis of subspaces that one does also in the finite-dimensional setting. \begin{lemma}\label{lem:UT} Let $\Mcal$ be a II$_1$--factor and let $T\in\Mcal$ be nilpotent, $T^n=0$.
Then there are integers $n\ge k_1>k_2>\ldots>k_m\ge1$ and for each $j\in\{1,\ldots,m\}$ there are equivalent projections $f^{(j)}_1,\ldots,f^{(j)}_{k_j}$ in $\Mcal$ such that \begin{enumerate}[(i)] \item $f^{(j)}:=f^{(j)}_1+\cdots+f^{(j)}_{k_j}$ commutes with $T$, \item $f^{(1)}+\cdots+f^{(m)}=1$, \item the $k_j\times k_j$ matrix of $f^{(j)}T$ with respect to these projections $f^{(j)}_1,\ldots,f^{(j)}_{k_j}$ is strictly upper triangular. \end{enumerate} \end{lemma} In other words, the lemma says that $T$ lies in a unital $*$--subalgebra of $\Mcal$ that is isomorphic to $M_{k_1}(\Afr_1)\oplus\cdots\oplus M_{k_m}(\Afr_m)$ for certain compressions $\Afr_j$ of $\Mcal$ by projections, and the direct summand component of $T$ in each $M_{k_j}(\Afr_j)$ is a strictly upper triangular matrix. \begin{proof} The proof is by induction on $n$. The case $n=1$ is clear, because then $T=0$. Assume $n\ge2$. We consider the usual system $p_1,p_2,\ldots,p_n$ of pairwise orthogonal projections with respect to which $T$ is upper triangular: \begin{align*} p_1&=\kerproj(T), \\ p_j&=\kerproj(T^j)-\kerproj(T^{j-1}),\quad(2\le j\le n). \end{align*} Then we have \begin{gather} \tau(\ranproj(Tp_j))=\tau(p_j),\qquad(2\le j\le n), \label{eq:Tpj} \\ \ranproj(Tp_j)\le\kerproj(T^{j-1})=p_1+p_2+\cdots+p_{j-1},\qquad(2\le j\le n), \label{eq:Tpjle} \\ \ranproj(Tp_j)\wedge(p_1+ p_2+\cdots+p_{j-2})=0,\qquad(3\le j\le n). \label{eq:rpw} \end{gather} Indeed, for~\eqref{eq:Tpj}, it will suffice to show $\kerproj(Tp_j)=1-p_j$. For this, note that if $p_j\xi=\xi$ and $T\xi=0$, then $\xi\in\ker T\subseteq\ker T^{j-1}$. Since $p_j\perp\kerproj(T^{j-1})$, this gives $\xi=0$. The relation~\eqref{eq:Tpjle} is clear. For~\eqref{eq:rpw}, if $q:=\ranproj(Tp_j)\wedge\kerproj(T^{j-2})\ne0$, then by standard techniques (see, e.g., Lemma~2.2.1 of~\cite{CD09}), we would have a nonzero projection $r\le p_j$ such that $q=\ranproj(Tr)\le\kerproj(T^{j-2})$. 
However, this would imply $r\le\kerproj(T^{j-1})$, which contradicts $p_j\perp\kerproj(T^{j-1})$. Let \begin{align*} q_n&=p_n\,, \\ q_{n-j}&=\ranproj(T^jq_n),\qquad(1\le j\le n-1). \end{align*} Then we have \begin{gather} q_k=\ranproj(Tq_{k+1})\le p_1+\cdots+p_k,\qquad(1\le k\le n-1), \label{eq:Tqk} \\ q_k\wedge(p_1+\cdots+p_{k-1})=0,\qquad(2\le k\le n). \label{eq:qk} \end{gather} Now~\eqref{eq:Tpj} and~\eqref{eq:Tqk} together imply $\tau(q_k)=\tau(q_{k+1})$, and from~\eqref{eq:qk} we have $\tau(q_1\vee\cdots\vee q_k)=k\tau(q_1)$. Thus, we have pairwise equivalent and orthogonal projections $f_1,\ldots,f_n$ defined by \begin{align*} f_n&=q_n\,, \\ f_k&=(q_k\vee\cdots\vee q_n)-(q_{k+1}\vee\cdots\vee q_n),\qquad(1\le k\le n-1), \end{align*} $T$ commutes with $f:=f_1+\cdots+f_n$ and $Tf$ is strictly upper triangular when written as an $n\times n$ matrix with respect to $f_1,\ldots,f_n$. Moreover, we have $(T(1-f))^{n-1}=T^{n-1}(1-f)=0$ and the induction hypothesis applies to $T(1-f)$. \end{proof} \begin{prop} Let $\Mcal$ be a II$_1$--factor. Then $\Comm(\Mcal)$ contains all nilpotent elements of $\Mcal$. \end{prop} \begin{proof} By Lemma~\ref{lem:UT}, we only need to observe that a strictly upper triangular matrix in $M_n(\Afr)$ is a single commutator, for any algebra $\Afr$. But this is easy: if \[ A=\left( \begin{matrix} 0&a_{1,2}&a_{1,3}&\cdots&a_{1,n} \\ 0&0 &a_{2,3}&\cdots&a_{2,n} \\ \vdots & &\ddots&\ddots&\vdots \\ & & & &a_{n-1,n} \\ 0 & & \cdots & &0 \end{matrix}\right), \] then $A=BC-CB$, where $B$ is the matrix in~\eqref{eq:B}, \begin{equation} C=\left( \begin{matrix} 0&0&\cdots&0 \\ 0&c_{2,2}&\cdots&c_{2,n} \\ \vdots& &\ddots&\vdots \\ 0&\cdots&0 &c_{n,n} \end{matrix}\right), \end{equation} and where the $c_{i,j}$ are chosen so that \begin{align*} a_{1,j}&=c_{2,j}\,,\qquad (2\le j\le n), \\ a_{p,j}&=c_{p+1,j}-c_{p,j-1}\,,\qquad(2\le p<j\le n). 
\end{align*} \end{proof} \section{Examples and questions} \label{sec:ques} \begin{example} A particular case of Theorem~\ref{thm:atomic} is that if $p$ is a projection (with irrational trace) in any II$_1$--factor $\Mcal$, then $p-\tau(p)1\in\Comm(\Mcal)$. We note that a projection with rational trace is contained in some unital matrix subalgebra $M_n(\Cpx)\subseteq\Mcal$; therefore, the case of a projection with rational trace is an immediate application of Shoda's result~\cite{S36}. \end{example} \begin{ques} In light of Theorem~\ref{thm:atomic}, it is natural to ask: does $\Comm(\Mcal)$ contain all normal elements of $\Mcal$ whose trace is zero? (Note that each such element is the limit in norm of a sequence of elements of the sort considered in Theorem~\ref{thm:atomic}.) It is of particular interest to focus on normal elements that generate maximal self--adjoint abelian subalgebras (masas) in $\Mcal$. Does it make a difference whether the masa is singular or semi-regular? (See~\cite{SS08}.) \end{ques} A particular case: \begin{ques} If $a$ and $b$ freely generate the group $\Fb_2$, let $\lambda_a$ and $\lambda_b$ be the corresponding unitaries generating the group von Neumann algebra $L(\Fb_2)$. Do we have $\lambda_a\in\Comm(L(\Fb_2))$? \end{ques} Our next examples come from ergodic theory. \begin{example}\label{ex:ergodic} Let $\alpha$ be an ergodic, probability-measure-preserving transformation of a standard Borel probability space $X$ that is not weakly mixing. Consider the hyperfinite II$_1$--factor $R$ realized as the crossed product $R=L^\infty(X)\rtimes_\alphat\Ints$ where $\alphat$ is the automorphism of $L^\infty(X)$ arising from $\alpha$ by $\alphat(f)=f\circ\alpha$. For $f\in L^\infty(X)$, we let $\pi(f)$ denote the corresponding element of $R$, and we write $U\in R$ for the implementing unitary, so that $U\pi(f)U^*=\pi(\alphat(f))$.
By a standard result in ergodic theory (see, for example, Theorem 2.6.1 of~\cite{P83}), there is an eigenfunction, i.e., $h\in L^\infty(X)\backslash\{0\}$ so that $\alphat(h)=\zeta h$ for some $\zeta\ne1$; moreover, all eigenfunctions $h$ of an ergodic transformation must have $|h|$ constant. If $g\in L^\infty(X)$, then \begin{align*} [U\pi(g),\pi(h)]=U\pi\big(g\big(h-\alphat^{-1}(h)\big)\big). \end{align*} Since $h-\alphat^{-1}(h)$ is invertible, by making appropriate choices of $g$ we get $U\pi(f)=[U\pi(g),\pi(h)]\in\Comm(R)$ for all $f\in L^\infty(X)$. \end{example} \begin{ques} If $\alpha$ is a weakly mixing transformation of $X$ (for example, a Bernoulli shift), then, with the notation of Example~\ref{ex:ergodic}, do we have $U\pi(f)\in\Comm(R)$ for all $f\in L^\infty(X)$? \end{ques} \begin{example} Assume that $\alphat$ from Example \ref{ex:ergodic} has infinitely many distinct eigenvalues. This is the case for every compact ergodic action $\alpha$ (for example, an irrational rotation of the circle or the odometer action), but can also hold for a non-compact action (for example, a skew rotation of the torus). For every finite set $F\subset\Ints\setminus\{0\}$, there is an eigenvalue $\zeta$ such that $\zeta^k\neq 1$, for any $k\in F$. Let $h$ be an eigenfunction of $\alphat$ corresponding to this eigenvalue $\zeta$; clearly, $|h|$ is a constant. Then, for $g_k\in L^\infty(X)$, \[ \left[\sum_{k \in F}U^k\pi(g_k),\pi(h)\right]=\sum_{k\in F} \left[U^k\pi(g_k),\pi(h)\right] =\sum_{k\in F}U^k\pi\big(g_k\big(h-\alphat^{-k}(h)\big)\big). \] Thus, for any $f_k\in L^\infty(X)$, by choosing $g_k=f_k \big(h-\alphat^{-k}(h)\big)^{-1}$, we obtain \[ \sum_{k\in F} U^k\pi(f_k)\in\Comm(R). \] \end{example} \begin{ques} It is natural to ask Question~\ref{qn:comm} in the particular case of quasinilpotent elements $T$ of $\Mcal$: must they lie in $\Comm(\Mcal)$? 
From Proposition~4 of~\cite{MW79}, it follows that every quasinilpotent operator $T$ in a II$_1$--factor has trace zero. (Alternatively, use L.\ Brown's analogue~\cite{B86} of Lidskii's theorem in II$_1$--factors and the fact that the Brown measure of $T$ must be concentrated at $0$). \end{ques} \begin{ques} Consider the quasinilpotent DT--operator $T$ (see~\cite{DH04}), which is a generator of the free group factor $L(\Fb_2)$. Do we have $T\in\Comm(L(\Fb_2))$? \end{ques} \begin{example}\label{ex:Tucci} Consider G.\ Tucci's quasinilpotent operator \[ A=\sum_{n=1}^\infty a_n V_n\in R, \] from~\cite{T08}, where $a=(a_n)_{n=1}^\infty\in\ell^1_+$, the set of summable sequences of nonnegative numbers. Here $R=\overline{\bigotimes_1^\infty M_2(\Cpx)}$ is the hyperfinite II$_1$--factor and \begin{equation}\label{def:Vn} V_n=I^{\otimes n-1}\otimes\left(\begin{smallmatrix}0&1\\0&0\end{smallmatrix}\right)\otimes I\otimes I\otimes\cdots. \end{equation} Tucci showed in Remark~3.7 (p.\ 2978) of~\cite{T08} that $A$ is a single commutator whenever $a=(b_nc_n)_{n=1}^\infty$ for some $b=(b_n)_{n=1}^\infty\in\ell^1$ and $c=(c_n)_{n=1}^\infty\in\ell^1$, by writing $A=[B,C]$, where \begin{align} B&=\sum_{n=1}^\infty b_nV_nV_n^*, \label{eq:Bop} \\ C&=\sum_{n=1}^\infty c_nV_n\,. \label{eq:C} \end{align} Note that, for $a\in\ell^1_+$, there exist $b$ and $c$ in $\ell^1$ such that $a=(b_nc_n)_{n=1}^\infty$ if and only if $\sum_{n=1}^\infty a_n^{1/2}<\infty$, i.e., if and only if $a\in\ell^{1/2}_+$. \end{example} The rest of the paper is concerned with some further results and remarks about Tucci's operators. We might try to extend the formula $A=[B,C]$ for $B$ and $C$ as in~\eqref{eq:Bop} and~\eqref{eq:C}, respectively, to other sequences $a\in\ell^1_+$, i.e.\ for $b$ and $c$ not necessarily in $\ell^1$, and where the convergence in~\eqref{eq:Bop} and~\eqref{eq:C} might be in some weaker topology. We first turn our attention to~\eqref{eq:C}. 
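As a concrete aid to intuition (not needed for the proofs), the generators $V_n$ of \eqref{def:Vn}, truncated to finitely many tensor factors, can be realized as explicit $2^m\times 2^m$ matrices. The following sketch (the truncation level $m$ is our own arbitrary choice) checks the elementary facts used in the proof of Proposition~\ref{prop:cl1} below:

```python
import numpy as np
from functools import reduce

# Finite-dimensional model of Tucci's generators: V_k acts as the matrix
# unit E_{12} in the k-th tensor factor of M_2(C)^{(x) m} = M_{2^m}(C).
# We check: V_k^2 = 0, the normalized trace of V_k^* V_k is 1/2, and
# <V_k x, x> = 1/2 for the normalized all-ones vector x.
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])
I2 = np.eye(2)

def V(k, m):
    """V_k truncated to the first m tensor factors."""
    factors = [E12 if j == k else I2 for j in range(1, m + 1)]
    return reduce(np.kron, factors)

m = 4
x = np.ones(2 ** m) / 2 ** (m / 2)      # normalized all-ones column vector
for k in range(1, m + 1):
    Vk = V(k, m)
    assert np.allclose(Vk @ Vk, 0)                        # V_k^2 = 0
    assert np.isclose(np.trace(Vk.T @ Vk) / 2 ** m, 0.5)  # tau(V_k^* V_k) = 1/2
    assert np.isclose(x @ Vk @ x, 0.5)                    # <V_k x, x> = 1/2
```

In particular, for $C_n=\sum_{k=1}^n c_kV_k$ with $c_k\ge0$, one gets $\langle C_nx,x\rangle=\frac12\sum_{k=1}^n c_k$, the estimate at the heart of Proposition~\ref{prop:cl1}.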
Denoting the usual embedding $R\hookrightarrow L^2(R,\tau)$ by $X\mapsto\Xh$, from \eqref{def:Vn} we see that the vectors $\Vh_n$ are orthogonal and all have $L^2(R,\tau)$-norm equal to $1/\sqrt2$; therefore, the series~\eqref{eq:C} converges in $L^2(R,\tau)$ as soon as $c\in\ell^2$, and we have \begin{equation}\label{eq:Ch} \Ch=\sum_{n=1}^\infty c_n\Vh_n. \end{equation} We easily see (below) that only for $c\in\ell^1$ is there a bounded operator $C\in R$ such that $\Ch$ is given by~\eqref{eq:Ch}. \begin{prop}\label{prop:cl1} Let $c\in\ell^2$. Suppose there is a bounded operator $C\in R$ such that $\Ch$ is given by~\eqref{eq:Ch}. Then $c\in\ell^1$. \end{prop} \begin{proof} For any sequence $(\zeta_n)_{n=1}^\infty$ of complex numbers of modulus $1$, there is an automorphism of $R$ sending $V_n$ to $\zeta_nV_n$ for all $n$. Thus, without loss of generality we may assume $c_n\ge0$ for all $n$. Letting $E_n:R\to M_2(\Cpx)^{\otimes n}\otimes I\otimes I\otimes\cdots\cong M_{2^n}(\Cpx)$ be the conditional expectation onto the tensor product of the first $n$ copies of the $2\times 2$ matrices (see Example~\ref{ex:Tucci}), we must have $C_n:=E_n(C)=\sum_{k=1}^n c_kV_k\in M_{2^n}(\Cpx)$. Let $x=2^{-n/2}(1,1,\ldots,1)^t$ be the normalization of the column vector of length $2^n$ with all entries equal to $1$. Taking the usual inner product in $\Cpx^{2^n}$, we see $\langle V_kx,x\rangle=1/2$ for all $k\in\{1,\ldots,n\}$. Thus, \[ \frac12\sum_{k=1}^nc_k=\big|\,\langle C_n x,x\rangle\,\big|\le\|C_n\|\le\|C\|. \] This shows $c\in\ell^1$. \end{proof} Let us now investigate the series~\eqref{eq:Bop} for some sequence $b=(b_n)_{n=1}^\infty$ of complex numbers. We claim that this series gives rise (in a weak sense explained below) to a bounded operator if and only if $b\in\ell^1$. Indeed, for $K$ a finite subset of $\Nats$, we have \[ \left\|\sum_{n\in K}b_nV_nV_n^*\right\|_{L^2(R,\tau)}^2 =\;\frac14\sum_{n\in K}|b_n|^2+\frac14\left|\sum_{n\in K}b_n\right|^2.
\] Now suppose $K_1\subseteq K_2\subseteq\cdots$ are finite sets whose union is all of $\Nats$. Then $\sum_{n\in K_p}b_nV_nV_n^*$ converges in $L^2(R,\tau)$ as $p\to\infty$ if and only if $b\in\ell^2$ and $y:=\lim_{p\to\infty}\sum_{n\in K_p}b_n$ exists. Then the limit in $L^2(R,\tau)$ is \begin{equation}\label{eq:Bh} \Bh=\sum_{n=1}^\infty b_n\left(V_nV_n^*-\frac12\right)^{\widehat{\;}}+\frac y2 \oneh. \end{equation} If there is a bounded operator $B$ such that $\Bh$ is given by~\eqref{eq:Bh}, then for every finite $F\subseteq\Nats$, the conditional expectation $E_F(B)$ of $B$ onto the (finite dimensional) subalgebra of $R$ generated by $\{V_nV_n^*\mid n\in F\}$ will be $\sum_{n\in F}b_n(V_nV_n^*-\frac12)+\frac y2$. Taking the projection $P=\prod_{n\in F}V_nV_n^*$, we have $E_F(B)P=\frac12(y+\sum_{n\in F}b_n)P$, so \[ \left|\frac12\left(y+\sum_{n\in F}b_n\right)\right|\le\|E_F(B)\|\le\|B\|. \] As $F$ was arbitrary, this implies $b\in\ell^1$. Suppose $b_nc_n=\frac1{n^r}$ and $b=(b_n)_1^\infty\in\ell^1$. Letting $(b^*_n)_1^\infty$ denote the nonincreasing rearrangement of $(|b_n|)_1^\infty$, we have $b^*_n=o(\frac1n)$ and standard arguments show $c^*_n\ge\frac{K}{n^{r-1}}$ for some constant $K$. Thus, by Proposition~\ref{prop:cl1}, Tucci's formula for writing $A=[B,C]$ does not work if $a_n=\frac1{n^r}$ for $1<r\le2$, while of course for $r>2$ it works just fine. \begin{ques}\label{qn:Tuccir} Fix $1<r\leq 2$, and let \[ A=\sum_{n=1}^\infty \frac1{n^r}V_n\in R \] be Tucci's quasinilpotent operator in the hyperfinite II$_1$--factor. Do we have $A\in\Comm(R)$? \end{ques} \begin{bibdiv} \begin{biblist} \bib{AM57}{article}{ author={Albert, A. A.}, author={Muckenhoupt, B.}, title={On matrices of trace zero}, journal={Michigan Math. J.}, volume={3}, year={1957}, pages={1--3} } \bib{A72}{article}{ author={Apostol, Constantin}, title={Commutators on $\ell^p$ spaces}, journal={Rev. Roumaine Math. 
Pures Appl.}, volume={17}, year={1972}, pages={1513--1534} } \bib{A73}{article}{ author={Apostol, Constantin}, title={Commutators on $c_0$ and $\ell^\infty$ spaces}, journal={Rev. Roumaine Math. Pures Appl.}, volume={18}, year={1973}, pages={1025--1032} } \bib{Ba87}{article}{ author={Banaszczyk, Wojciech}, title={The Steinitz constant of the plane}, journal={J. reine angew. Math.}, volume={373}, year={1987}, pages={218--220} } \bib{Ba90}{article}{ author={Banaszczyk, Wojciech}, title={A note on the Steinitz constant of the Euclidean plane}, journal={C. R. Math. Rep. Acad. Sci. Canada}, volume={12}, year={1990}, pages={97--102} } \bib{BP65}{article}{ author={Brown, Arlen}, author={Pearcy, Carl}, title={Structure of commutators of operators}, journal={Ann. of Math. (2)}, volume={82}, year={1965}, pages={112--127} } \bib{BP66}{article}{ author={Brown, Arlen}, author={Pearcy, Carl}, title={Commutators in factors of type III}, journal={Canad. J. Math.}, volume={18}, year={1966}, pages={1152--1160} } \bib{B86}{article}{ author={Brown, Lawrence G.}, title={Lidskii's theorem in the type II case}, conference={ title={Geometric methods in operator algebras}, address={Kyoto}, date={1983} }, book={ series={Pitman Res. Notes Math. Ser.}, volume={123}, publisher={Longman Sci. Tech.}, address={Harlow}, date={1986} }, pages={1--35} } \bib{CD09}{article}{ author={Collins, Beno\^it}, author={Dykema, Ken}, title={On a reduction procedure for Horn inequalities in finite von Neumann algebras}, journal={Oper. Matrices}, volume={3}, year={2009}, pages={1-40} } \bib{Co76}{article}{ author={Connes, Alain}, title={Classification of injective factors}, journal={Ann. Math.}, volume={104}, pages={73--115}, year={1976} } \bib{D09}{article}{ author={Dosev, Detelin}, title={Commutators on $\ell_1$}, journal={J. Funct. 
Anal.}, volume={256}, year={2009}, pages={3490--3509} } \bib{DJ10}{article}{ author={Dosev, Detelin}, author={Johnson, William B.}, title={Commutators on $\ell_\infty$}, journal={Bull. London Math. Soc.}, volume={42}, year={2010}, pages={155--169} } \bib{DH04}{article}{ author={Dykema, Ken}, author={Haagerup, Uffe}, title={Invariant subspaces of the quasinilpotent DT--operator}, journal={J. Funct. Anal.}, volume={209}, year={2004}, pages={332--366} } \bib{FH80}{article}{ author={Fack, Thierry}, author={de la Harpe, Pierre}, title={Sommes de commutateurs dans les alg\`ebres de von Neumann finies continues}, journal={Ann. Inst. Fourier (Grenoble)}, volume={30}, year={1980}, pages={49--73} } \bib{GS80}{article}{ author={Grinberg, V.S.}, author={Sewast'janow, S.V.}, title={Regarding the value of Steinitz's constant}, journal={Funktsional. Anal. i Prilozhen}, volume={14}, year={1980}, pages={56--57}, translation={ journal={Functional Anal. and Appl.}, volume={14}, year={1980}, pages={125--126} } } \bib{Had98}{article}{ author={Hadwin, Don}, title={Free entropy and approximate equivalence in von Neumann algebras}, conference={ title={Operator algebras and operator theory}, address={Shanghai}, date={1997} }, book={ series={Contemp. Math.}, volume={228}, publisher={Amer. Math. Soc.}, address={Providence, RI}, year={1998} }, pages={111--131} } \bib{H69}{article}{ author={Halpern, Herbert}, title={Commutators in properly infinite von Neumann algebras}, journal={Trans. Amer. Math. Soc.}, volume={139}, year={1969}, pages={55--73} } \bib{J72}{article}{ author={Janssen, Gerhard}, title={Restricted ultraproducts of finite von Neumann algebras}, conference={ title={Contributions to non-standard analysis}, address={Oberwolfach}, date={1970}, }, book={ series={Studies in Logic and Found.
Math.}, volume={69}, publisher={North--Holland}, address={Amsterdam}, date={1972} }, pages={101--114} } \bib{M06}{article}{ author={Marcoux, Laurent}, title={Sums of small numbers of commutators}, journal={J. Operator Theory}, volume={56}, year={2006}, pages={111--142} } \bib{McD70}{article}{ author={McDuff, Dusa}, title={Central sequences and the hyperfinite factor}, journal={Proc. London Math. Soc. (3)}, volume={21}, year={1970}, pages={443--461} } \bib{MW79}{article}{ author={Murphy, Gerard J.}, author={West, T. T.}, title={Spectral radius formulae}, journal={Proc. Edinburgh Math. Soc.}, volume={22}, year={1979}, pages={271--275} } \bib{PT69}{article}{ author={Pearcy, Carl}, author={Topping, David}, journal={J. Funct. Anal.}, title={Commutators and certain II$_1$--factors}, volume={3}, year={1969}, pages={69--78} } \bib{P83}{book}{ author={Petersen, Karl}, title={Ergodic theory}, publisher={Cambridge Univ. Press.}, series={Cambridge studies in advanced mathematics}, volume={2}, year={1983} } \bib{S36}{article}{ author={Shoda, Kenjiro}, title={Einige S\"atze \"uber Matrizen}, journal={Japanese J. Math.}, volume={13}, year={1936}, pages={361--365} } \bib{SS08}{book}{ author={Sinclair, Allan M.}, author={Smith, Roger R.}, title={Finite von Neumann algebras and masas}, series={London Mathematical Society Lecture Note Series}, volume={351}, publisher={Cambridge University Press}, address={Cambridge}, year={2008} } \bib{St13}{article}{ author={Steinitz, Ernst}, title={Bedingt konvergente Reihen und konvexe Systeme}, journal={J. reine angew. Math.}, volume={143}, year={1913}, pages={128--175} } \bib{T08}{article}{ author={Tucci, Gabriel}, title={Some quasinilpotent generators of the hyperfinite II$_1$ factor}, journal={J. Funct. Anal.}, volume={254}, year={2008}, pages={2969--2994} } \bib{W54}{article}{ author={Wright, Fred}, title={A reduction for algebras of finite type}, journal={Ann. of Math.
(2)}, volume={60}, year={1954}, pages={560--570} } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \thispagestyle{empty} \begin{frontmatter} \title{Towards the interpretation of time-varying regularization parameters in streaming penalized regression models$^*$} \author[1]{Lenka {Zbo\v{n}\'{a}kov\'{a}}\corref{cor1}} \author[2]{Ricardo Pio {Monti}} \author[1,3,4]{Wolfgang Karl {H\"{a}rdle }} \address[1]{C.A.S.E. - Center for Applied Statistics \& Economics, Humboldt-Universit\"{a}t zu Berlin, Spandauer Str. 1, 10178 Berlin, Germany} \address[2]{Gatsby Computational Neuroscience Unit, UCL, 25 Howland Street, London, W1T 4JG} \address[3]{Sim Kee Boon Institute for Financial Economics, Singapore Management University, 50 Stamford Road, Singapore 178899, Singapore} \address[4]{The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, 361005 China} \received{1 May 2013} \finalform{10 May 2013} \accepted{13 May 2013} \availableonline{15 May 2013} \communicated{S. Sarkar} \begin{abstract} High-dimensional, streaming datasets are ubiquitous in modern applications. Examples range from finance and e-commerce to the study of biomedical and neuroimaging data. As a result, many novel algorithms have been proposed to address challenges posed by such datasets. In this work, we focus on the use of $\ell_1$ regularized linear models in the context of (possibly non-stationary) streaming data. Recently, it has been noted that the choice of the regularization parameter is fundamental in such models and several methods have been proposed which iteratively tune such a parameter in a~time-varying manner; thereby allowing the underlying sparsity of estimated models to vary. Moreover, in many applications, inference on the regularization parameter may itself be of interest, as such a parameter is related to the underlying \textit{sparsity} of the model. 
However, in this work, we highlight and provide extensive empirical evidence regarding how various (often unrelated) statistical properties in the data can lead to changes in the regularization parameter. In particular, through various synthetic experiments, we demonstrate that changes in the regularization parameter may be driven by changes in the true underlying sparsity, signal-to-noise ratio or even model misspecification. The purpose of this letter is, therefore, to highlight and catalog various statistical properties which induce changes in the associated regularization parameter. We conclude by presenting two applications: one relating to financial data and another to neuroimaging data, where the aforementioned discussion is relevant. \end{abstract} \end{frontmatter} \section{Introduction} \let\thefootnote\relax\footnote{ \hspace{-5mm}$^*$Financial support from the Deutsche Forschungsgemeinschaft via CRC 649 ``Economic Risk'', the IRTG 1792 ``High Dimensional Non Stationary Time Series'', as well as the Czech Science Foundation under grant no. 19-28231X, the Yushan Scholar Program and the European Union's Horizon 2020 research and innovation program ``FIN-TECH: A Financial supervision and Technology compliance training programme'' under the grant agreement No 825215 (Topic: ICT-35-2018, Type of action: CSA), Humboldt-Universit\"at zu Berlin, is gratefully acknowledged.\\\textit{This is a post-peer-review, pre-copyedit version of an article published in Pattern Recognition Letters. The final authenticated version is available online at:} \url{http://dx.doi.org/10.1016/j.patrec.2019.06.021}\\ } High-dimensional, streaming datasets pose a unique challenge to modern statisticians. To date, the challenges associated with high-dimensional and streaming data have been extensively studied independently. In the case of the former, a~popular avenue of research is the use of regularization methods such as the Lasso \citep{Hastie2016}.
Such methods effectively address issues raised by high-dimensional data by assuming the underlying model is sparse, i.e., that it has only a small number of non-zero coefficients. Sparse models are often easier to both estimate and interpret. Concurrently, many methods have been developed to handle streaming datasets; popular examples include sliding window methods and their generalizations to weighted moving averages \citep{Hayking2008}. Recently, the intersection of these two avenues of research has begun to receive increasing attention as large-scale, streaming datasets become commonplace. Prominent examples include \cite{Bottou2010} and \cite{Duchi2011}, who propose methods through which to efficiently estimate $\ell_1$ penalized models in a streaming data context. However, an important aspect that has been largely overlooked is the optimal choice of the regularization parameter. While it is possible to employ a fixed regularization parameter, it may be the case that the statistical properties of the data vary over time, suggesting that the optimal choice of the regularization parameter may itself also vary over time. Examples of large-scale, non-stationary datasets, where the choice of the regularization parameter has been reported to be time-varying, include finance \citep{Yu2017} and neuroscience \citep{Monti2017a}. We note that many methods have been proposed for selecting the regularization parameter in the context of non-streaming data, the standard approach being to employ some variant of cross-validation or bootstrapping, e.g. in \citet{Hastie2016} or \citet{Chern2018}. However, such methods are infeasible in the domain of streaming datasets due to limited computational resources. More importantly, the statistical properties of a data stream may vary, further complicating the use of sub-sampling methods. Recently, methods to handle time-varying regularization parameters have been proposed.
\cite{Monti2018sadm} propose a novel framework through which to iteratively infer a time-varying regularization parameter via the use of adaptive filtering. The proposed framework is developed for penalized linear regression (i.e., the Lasso) and subsequently extended to penalized generalized linear models. \cite{zbonakova2017time} study the dynamics of the regularization parameter, focusing particularly on quantile regression in the context of financial data. Using a sliding-window method, they demonstrate that the choice of a time-varying regularization parameter based on the adjusted Bayesian information criterion (BIC) is closely correlated with financial volatility. The BIC was employed, as such a choice of parameter is optimal in terms of model consistency. While the aforementioned methods correspond to valuable contributions, the purpose of this paper is to highlight potential shortcomings when interpreting time-varying regularization parameters. In particular, we enumerate several (often unrelated) statistical properties of the underlying data which may lead to changes in the optimal choice of the regularization parameter. This paper, therefore, serves to highlight important issues associated with the interpretation of time-varying regularization parameters as well as the associated model parameters. The remainder of this paper is organized as follows. We formally outline the challenge of tuning time-varying regularization parameters as well as related work in Section \ref{sec::PreLim}. In Section \ref{sec::ExpRes}, we present extensive empirical results, highlighting how various aspects of the underlying data may result in changes in the estimated regularization parameter. Computations included in this work were performed with the help of the R software environment \citep{rcore2014}, and we provide code to reproduce all experiments on the \href{http://quantlet.de}{\protect\quantnet{Quantlet}} platform.
\section{Preliminaries and related work}\label{sec::PreLim} In this work, we focus on streaming linear regression problems. Formally, it is assumed that we observe a sequence of pairs $(X_t, y_t)$, where $X_t \in \mathbb{R}^p$ corresponds to a $p$-dimensional vector of predictor variables and $y_t \in \mathbb{R}$ is a univariate response. The objective in penalized streaming linear regression is to accurately predict future responses, $y_{t+1}$, from predictors $X_{t+1}$ via a linear model. Following the work of \cite{tibshirani1996regression}, an $\ell_1$ penalty, parameterized by $\lambda \in \mathbb{R}_+$, is subsequently introduced in order to encourage sparse solutions as well as to ensure the associated optimization problem is well-posed. For a~pre-specified choice of fixed regularization parameter, $\lambda$, time-varying regression coefficients can be estimated by minimizing the following convex objective: \begin{equation} L_t (\beta, \lambda) = \sum_{i=1}^t w_i \left ( y_i - X_i^{\top} \beta \right )^2 + \lambda ||\beta||_1, \label{ConvexObjectiveLasso} \end{equation} where $w_i >0$ are weights indicating the importance given to past observations \citep{aggarwal2007data}. For example, it is natural to let $w_i$ decay monotonically with the chronological distance of the $i$th observation from the present. While the weights $w_i$ may be tuned using a fixed forgetting factor, throughout this work we opt for the use of a sliding window due to its simplicity. In the context of non-stationary data the optimal estimates of the regression coefficients, $\hat \beta_t$, may vary over time, and several methods have been proposed in order to address this issue \citep{Bottou2010, Duchi2011}. However, the same argument can be posed in terms of the associated regularization parameter, $\lambda$.
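The weighted objective (\ref{ConvexObjectiveLasso}) can be minimized with a few lines of coordinate descent. The sketch below is our own illustration in plain Python/numpy (the paper's computations were carried out in R, and the function name `weighted_lasso_cd` is hypothetical), not the authors' implementation:

```python
import numpy as np

def weighted_lasso_cd(X, y, lam, w=None, n_iter=200):
    """Coordinate descent for sum_i w_i*(y_i - x_i'beta)^2 + lam*||beta||_1."""
    n, p = X.shape
    if w is None:
        w = np.ones(n)
    beta = np.zeros(p)
    Xw = X * w[:, None]                          # rows scaled by their weights
    col_sq = 2.0 * np.einsum('ij,ij->j', Xw, X)  # 2 * sum_i w_i * x_ij^2
    r = y.astype(float).copy()                   # residuals (beta starts at 0)
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * beta[j]               # partial residual excluding j
            rho = 2.0 * (Xw[:, j] @ r)           # weighted gradient term
            # soft-thresholding update for coordinate j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * beta[j]
    return beta
```

A sliding window of length $W$ then corresponds to weights that are one for the $W$ most recent observations and (approximately) zero before that, while a fixed forgetting factor $r$ corresponds to $w_i = r^{t-i}$.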
The choice of such a parameter dictates the severity of the associated $\ell_1$ penalty, implying that different choices of $\lambda$ will result in vastly different estimated models. While there exists a large range of methodologies through which to iteratively update the regression coefficients, the choice of the regularization parameter has, until recently, been largely overlooked. Lately, \cite{Monti2018sadm} proposed a framework through which to learn a time-varying regularization parameter in a streaming scenario. The proposed framework is motivated by adaptive filtering theory \citep{Hayking2008} and seeks to iteratively update the regularization parameter via stochastic gradient descent. In related work, \cite{zbonakova2017time} focus on the choice of the regularization parameter in the context of a quantile regression model. They propose the use of sliding windows and information theoretic quantities to select the associated regularization parameter. Formally, \cite{Osborne2000lassodual} clearly outline the relationship between the Lasso parameter, $\lambda$, and the data. They note that the regularization parameter may be interpreted as the Lagrange multiplier associated with a constraint on the $\ell_1$ norm of the regression coefficients. As such, considering the dual formulation yields: \begin{equation} \lambda = \frac{\lbrace \textbf{Y} - \textbf{X}\hat{\beta}(\lambda)\rbrace ^{\top}\textbf{X}\hat{\beta}(\lambda)}{||\hat{\beta}(\lambda)||_1}, \label{eq:lambda} \end{equation} where we have ignored the weights, $w_i$, and use bold notation to denote vectors and matrices. Note that in equation (\ref{eq:lambda}) we explicitly denote the dependence of the estimated regression coefficients on $\lambda$. As a result, we observe three main effects driving the optimal choice of the regularization parameter: \begin{enumerate} \item Variance or magnitude of the residuals: $ \textbf{Y}- \textbf{X} \hat \beta(\lambda)$.
As the variance of the residuals increases, so does the associated regularization parameter, leading to an increase in the sparsity of $\hat \beta(\lambda)$. This is natural, as an increase in the variance of the residuals is indicative of a drop in the signal-to-noise ratio of the data. \item The $\ell_1$ or $\ell_0$ norm of the model coefficients: $||\hat{\beta}(\lambda)||_1$. As this term appears in the denominator of equation (\ref{eq:lambda}), it is inversely correlated with the regularization parameter. This is to be expected, as we require a small regularization parameter in order to accurately recover regression coefficients with large $\ell_1$ norm. \item Covariance structure of the design matrix: $\textbf{X}$. The term related to the covariance structure of the design matrix, $\textbf{X}^{\top}\textbf{X}$, appears in the numerator of equation (\ref{eq:lambda}). This suggests that the covariance matrix of the predictors will have a significant impact on the value of the regularization parameter, $\lambda$. We note that this effect will also affect the $\ell_1$ and $\ell_0$ norms of the model coefficients, resulting in a complicated relationship with the regularization parameter. In Section \ref{Sim_rho} we demonstrate the non-linear nature of this relationship. \end{enumerate} As such, it follows that multiple aspects of the data may influence the choice of the associated regularization parameter. Crucially, whilst such a parameter is often interpreted as being indicative of the \textit{sparsity} of the underlying model, equation (\ref{eq:lambda}) together with the aforementioned discussion demonstrates that this is not necessarily the case. In the remainder of this work, we provide extensive empirical evidence to validate these claims.
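The identity (\ref{eq:lambda}) can also be checked numerically. The following sketch is our own illustration (not from the paper): it fits the Lasso with a plain ISTA solver for the objective $\frac12\|\mathbf{Y}-\mathbf{X}\beta\|^2 + \lambda\|\beta\|_1$, for which the identity holds exactly by the KKT conditions, and then recovers $\lambda$ from $\hat\beta(\lambda)$:

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=5000):
    """ISTA for 0.5*||y - X b||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L      # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
beta_true = np.zeros(10)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(200)

lam = 5.0
b_hat = ista_lasso(X, y, lam)
# dual / KKT identity: lambda = (Y - X b)' X b / ||b||_1
lam_rec = (y - X @ b_hat) @ (X @ b_hat) / np.abs(b_hat).sum()
```

Here `lam_rec` agrees with the `lam` used in the fit; rescaling the residuals inflates the numerator, which is exactly the first of the three effects listed above.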
\section{Experimental results}\label{sec::ExpRes} In this section, we provide an extensive simulation study to demonstrate the effects of the three aforementioned model properties on the choice of the optimal regularization parameter. Based on the observations from Section \ref{sec::PreLim}, we designed a series of experiments where one property of the data was allowed to vary whilst the remaining two were left unchanged. A further concern is that if two or more properties of the data change simultaneously, their effects on the regularization parameter may cancel out; further experiments were designed to study such scenarios. The purpose of the experimental results presented in this section is two-fold. First, we identify the various statistical properties which cause the optimal choice of the regularization parameter to vary. Second, we highlight how changes in such properties interact with each other and catalog their joint effects on the choice of the regularization parameter. \subsection{Synthetic data generation} We focus exclusively on a linear model of the form: $$y_t = X_t^{\top}\beta_t + \varepsilon_t.$$ We define the number of observations as $n$, the number of non-zero parameters as $q = ||\beta||_0 \leq p$, and an \textit{iid} error term $\varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)^{\top}$, such that $\varepsilon_t \sim (0, \sigma_t^2)$. The $p$-dimensional vector of predictor variables $X_t$ was generated from the normal distribution $\mbox{N}_p(0, \Sigma)$, where the elements of the $(p \times p)$ covariance matrix $\Sigma = (\sigma_{ij})_{i,j = 1}^p$ were set to $\sigma_{ij} = \rho^{|i - j|}$, for $i, j = 1, \ldots, p$, with a correlation parameter $\rho$. We generate synthetic data where one of the following properties varies over time (thereby resulting in non-stationarity): \begin{enumerate} \item Time-varying variance of residuals: $\sigma_t^2$ varies over time.
\item Time-varying $\ell_1$ or $\ell_0$ norm of regression coefficients: $q$ varies over time. \item Time-varying correlation within the design matrix: $\rho$ varies over time. \end{enumerate} For each experiment, the total number of observations was set to $n=400$ with a dimensionality of $p=20$. The optimal choice of the regularization parameter (together with the associated regression coefficients) was estimated using three distinct methods. We consider the use of the sliding window method in combination with both the Bayesian information criterion (BIC) and generalized cross-validation (GCV) to select the associated regularization parameter. Finally, the gradient method proposed by \cite{Monti2018sadm}, named Real-time Adaptive Penalization (RAP), is also applied. A burn-in period of 50 observations was employed to obtain initial estimates for the regression coefficients as well as $\lambda$. Each experiment was repeated $100$ times and the mean value of the regularization parameter was studied. \subsubsection{Change of the variance of residuals} We begin by studying the effect of the residual variance on the choice of the regularization parameter, $\lambda$. The regression coefficients were set to $\beta_t = (1, 1, 1, 1, 1, 0, \ldots, 0)^{\top}$, yielding $q = 5$, and the covariance parameter was set to $\rho = 0.5$. The vector of residuals was simulated according to a piece-wise stationary distribution as follows: \begin{equation} \varepsilon_t \sim \left\lbrace \begin{array}{lcl} \mbox{N}(0, \sigma^2_1), & \mbox{for} & t < 200;\\ \mbox{N}(0, \sigma^2_2), & & t \geq 200, \end{array} \right. \label{eq:resvector} \end{equation} resulting in a significant change in the variance of the residuals at the 200th observation. Throughout these experiments, we set $\sigma_1=1$ and allowed $\sigma_2$ to vary over $\sigma_2 \in \{1.1, \ldots, 2\}$.
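One replication of this piece-wise stationary design can be sketched in a few lines. The code below is our own Python illustration (the paper's experiments were run in R, and the function name `make_stream` is hypothetical); it builds the AR(1)-style covariance $\Sigma_{ij}=\rho^{|i-j|}$ and switches the residual standard deviation at $t=200$:

```python
import numpy as np

def make_stream(n=400, p=20, q=5, rho=0.5, sigma1=1.0, sigma2=1.5,
                change_at=200, seed=0):
    """Piece-wise stationary stream: residual std switches at `change_at`."""
    rng = np.random.default_rng(seed)
    # covariance of the predictors: Sigma_ij = rho^|i-j|
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    beta = np.zeros(p)
    beta[:q] = 1.0                               # q non-zero coefficients
    sigma = np.where(np.arange(n) < change_at, sigma1, sigma2)
    y = X @ beta + sigma * rng.standard_normal(n)
    return X, y, beta
```

The remaining experiments are obtained by varying `q` (the $\ell_0$ norm) or `rho` across the change point instead of the residual standard deviation.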
In order to study the effects of changes in the variance of the residuals, we consider the change in the estimated regularization parameter, defined by the ratio of the values of $\lambda$ after ($\lambda_2$) and before ($\lambda_1$) the change point, as a function of the ratio $\sigma_2/ \sigma_1$. Following the discussion from Section \ref{sec::PreLim}, we would expect larger values of this ratio to yield larger changes in the choice of the regularization parameter. \begin{figure} \caption{Relative changes of $\lambda$ in dependence on relative changes of the standard deviation $\sigma$.} \label{fig:deltasigma} \end{figure} Figure \ref{fig:deltasigma} plots the effect of changes in the standard deviation of the residuals on the Lasso parameter $\lambda$. As expected from formula (\ref{eq:lambda}), a linear dependence is visible. When the BIC and GCV are used as selection criteria for the values of $\lambda$, the lines are almost identical. For the RAP algorithm, $\lambda$ changes more slowly, but the effect can be clearly seen. In order to illustrate how the series of values of the Lasso parameter changes over time and how long it takes to adjust to the new settings of the model, we depict the average $\lambda$ over the 100 scenarios in Figure \ref{fig:deltasigmaseries}, where $\sigma_1 = 1$ and $\sigma_2 = 1.5$. Since the BIC and GCV yield very similar results, we omit the GCV in this case and normalize the BIC and RAP values of $\lambda$ to fit into the interval [0, 1]. \begin{figure} \caption{Standardized series of average $\lambda$ over 100 scenarios with a change point at $t = 200$ and $\sigma_1 = 1$ and $\sigma_2 = 1.5$.
} \label{fig:deltasigmaseries} \end{figure} From Figure \ref{fig:deltasigmaseries} it is clear that the values of $\lambda$ adjust to the new model settings over the whole length of the moving window (50 observations in this case) when the BIC is implemented, while for the RAP algorithm the adjustment depends on the size of the fixed forgetting factor $r$. We note there is a clear change in the regularization parameter following $t=200$, indicating the need to adaptively estimate the regularization parameter and demonstrating the drawback of using a fixed and pre-specified value of $\lambda$. \subsubsection{Change of the $\ell_1$ and $\ell_0$ norm of $\beta$} It follows that the choice of the regularization parameter is closely related to the true underlying $\ell_1$ and $\ell_0$ norms of the regression coefficients; the relation to the latter holds because the Lasso constraint is introduced as a convex relaxation of the $\ell_0$ norm. In this set of simulations, we therefore quantify the effects of changes in both the $\ell_0$ and $\ell_1$ norms on the optimal choice of the regularization parameter. In particular, we set $\sigma_1 = \sigma_2 = 1$ and $\rho = 0.5$. As a first example, we consider the following change in the $\ell_1$ norm: \begin{equation} \beta_t = \left\lbrace \begin{array}{lcl} (1, 1, 1, 1, 1, 0, \ldots, 0)^{\top}, & \mbox{for} & t < 200;\\ (1, 0.8, 0.6, 0.4, 0.2, 0, \ldots, 0)^{\top}, & & t \geq 200. \end{array} \right. \label{eq:l1normbeta} \end{equation} The time series of estimated $\lambda$ values is presented in Figure \ref{fig:deltal1series}. \begin{figure} \caption{Standardized series of average $\lambda$ over 100 scenarios with a change point at $t = 200$ and regression coefficients $\beta_t$ defined by (\ref{eq:l1normbeta}).} \label{fig:deltal1series} \end{figure} We note that the change in the $\ell_1$ norm of the model coefficients $\beta$ results in an upward trend in $\lambda$ for the BIC parameter choice, which is visible in the long run.
For a short period after the change, exactly the 50 observations spanned by the moving window, the misspecification of the model drives the size of the residuals, and with them the values of $\lambda$, higher and then lower again, producing a ``bump''-shaped curve. The same holds for the RAP algorithm; however, because of the fixed forgetting factor, the values of $\lambda$ adjust to the new model settings more slowly. In order to study the effect of changes in the $\ell_0$ norm (i.e., the size of the active set) we generated synthetic data whereby: \begin{equation} ||\beta_t||_0 = \left\lbrace \begin{array}{lcl} q_1, & \mbox{for} & t < 200;\\ q_2, & & t \geq 200, \end{array} \right. \label{eq:l0normbeta} \end{equation} with $q_1 = 5$ and $q_2 \in \{ 6, \ldots, 10, 15 \}$. Figure \ref{fig:deltaq} visualizes the relative changes of $\lambda$ as a function of the relative change in the size of the active set, defined as $q_2/q_1$. We note there is a clear decay of the values of $\lambda$ as $q_2/q_1$ increases. This is to be expected, as an increase in the specified ratio implies a larger active set in the latter part of the time series. This figure provides empirical validation of the inverse correlation between the magnitude of the active set and the estimated regularization parameter. \begin{figure} \caption{Relative changes of $\lambda$ in dependence on relative changes of the size of the active set $q$. } \label{fig:deltaq} \end{figure} \subsubsection{Change of covariance parameter $\rho$} \label{Sim_rho} Finally, we study the effect of changes in the covariance structure of the features, $X_t$, on the regularization parameter. We note that whilst it is possible to vary the covariance structure in many ways, we consider a simple model whereby $\Sigma = (\sigma_{ij})_{i,j=1}^p$ and set $\sigma_{ij} = \rho^{|i-j|}$. The benefit of such a model is that it depends only on a single parameter, $\rho$, simplifying the interpretation and visualization of results.
As such, we investigate changes in the covariance parameter $\rho$, while fixing $\sigma=1$ and $q=5$. Formally, piece-wise stationary data was generated such that: \begin{equation} \rho_t= \left\lbrace \begin{array}{lcl} \rho_1, & \mbox{for} & t < 200;\\ \rho_2, & & t \geq 200, \end{array} \right. \label{eq:rhochange} \end{equation} where $\rho_1 = 0.1$ and $\rho_2 \in \{ 0.2, 0.3, \ldots, 0.9\} $. As in the previous experiments, we visualize the relative changes of $\lambda$ with respect to the relative changes of $\rho$ in Figure \ref{fig:deltarho}. The time series of the estimated values of $\lambda$ over the whole sample are depicted in Figure \ref{fig:deltarhoseries}. \begin{figure} \caption{Relative changes of $\lambda$ in dependence on relative changes of the correlation parameter $\rho$. } \label{fig:deltarho} \end{figure} From Figure \ref{fig:deltarho} it is important to note that the changes of $\lambda$ no longer demonstrate a linear dependence on the statistical property of interest. For $\rho_2 = 0.2, \ldots, 0.8$ the values of $\lambda$ tend to rise with a rising covariance of the predictors, and the biggest change occurs for $\rho_2 = 0.5$ in the case of the BIC and GCV. For the RAP method, the values of $\lambda$ decrease for $\rho_2 = 0.2$ and $0.9$, and the biggest change is visible in the case that $\rho$ changes to the value $\rho_2 = 0.6$. A potential explanation for the non-linear nature of the relationship demonstrated in Figure \ref{fig:deltarho} lies in the selection properties of the Lasso. It is widely acknowledged that in the presence of strongly correlated variables (corresponding to large $\rho$ values) the Lasso tends to choose only a single variable from the group of strongly correlated covariates (indeed this phenomenon is the inspiration for the elastic net \citep{zou2005regularization}). As such, as $\rho$ increases, the term $\textbf{X}^{\top}\textbf{X}$ in the numerator of $\lambda$ drives its values higher.
If the value of $\rho$ is very high, we speak of multicollinearity: the denominator of $\lambda$ is affected and becomes larger, which consequently causes the values of $\lambda$ to drop. \begin{figure} \caption{Standardized series of average $\lambda$ over 100 scenarios with a change point at $t = 200$ and $\rho_1 = 0.1$ and $\rho_2 = 0.5$. } \label{fig:deltarhoseries} \end{figure} In Figure \ref{fig:deltarhoseries} the change from $\rho_1 = 0.1$ to $\rho_2 = 0.5$ is depicted. We note there is a change in $\lambda$ despite the fact that the $\ell_1$ and $\ell_0$ norms remain unchanged. \subsubsection{Simultaneous changes of model specifications} While the previous experiments examined the effects of changing a single property of the data, we now consider combinations of specific changes. In particular, the purpose of the remaining experiments is to highlight how simultaneous changes to two properties of the data may result in a \textit{canceling out} of their effects on the regularization parameter. The purpose of this section is, therefore, to highlight the fact that it is possible to have non-stationary data where the three properties discussed in Section \ref{sec::PreLim} vary, and yet the optimal choice of the sparsity parameter is itself constant. We begin by studying simultaneous changes in the $\ell_0$ or $\ell_1$ norm of the regression parameters, $\beta_t$, together with changes in the variance of the residuals, $\sigma^2$. Recall that the optimal choice of regularization parameter was positively correlated with the magnitude of the residuals (see Figure \ref{fig:deltasigma}) whilst being negatively correlated with $q$ (see Figure \ref{fig:deltaq}). Figure \ref{fig:QvsSigma} shows the relative change of $\lambda$ as a~function of both $q_2/q_1$ and $\sigma_2/\sigma_1$. It is important to note the diagonal trend, which indicates that for any increase in $q$, a proportional increase in $\sigma$ directly cancels out the change in the estimated regularization parameter.
This is a natural result, as changes in $\sigma$ influence the numerator, whilst changes in the $\ell_0$ or $\ell_1$ norm affect the denominator in (\ref{eq:lambda}). \begin{figure} \caption{Relative changes of $\lambda$ corresponding to the combination of relative changes of $q$ and $\sigma$.} \label{fig:QvsSigma} \end{figure} Next, we consider the combination of varying the covariance parameter, $\rho$, and the variance of the residuals, parameterized by $\sigma$. Recall from the previous discussion that the covariance parameter, $\rho$, did not have a linear relationship with the regularization parameter, $\lambda$. Such a non-linear relationship can be clearly seen again in Figure \ref{fig:RhovsSigma}. Furthermore, we note that changes in $\sigma$ tend to dominate the changes in $\rho$, with the largest changes in $\lambda$ occurring for large changes in $\sigma$. Finally, we also studied the combination of changes in the $\ell_0$ norm, denoted by $q$, together with changes in the covariance parameter, $\rho$. Note that changes in these parameters are strongly coupled due to the effect of multicollinearity induced by simultaneously increasing the number of non-zero regression coefficients together with their correlations. The results, provided in Figure \ref{fig:QvsRho}, highlight these dependencies. For values of $\rho$ near $\rho = 0.5$, there are some combinations which cancel each other out. For the extreme parts of the heatmap, e.g. $\rho_2 = 0.2$ or $\rho_2 = 0.9$, the pattern is clearly driven by the change in the active set only.
\begin{figure} \caption{Relative changes of $\lambda$ corresponding to the combination of relative changes of $\rho$ and $\sigma$.} \label{fig:RhovsSigma} \end{figure} \begin{figure} \caption{Relative changes of $\lambda$ corresponding to the combination of relative changes of $q$ and $\rho$.} \label{fig:QvsRho} \end{figure} \subsection{Application to financial and neuroimaging data} Until now we have provided extensive empirical evidence based on a variety of simulations, each varying one or more of the statistical properties of the data. In this section, we conclude by presenting two distinct real-world datasets where we observe significant variability in the time-varying regularization parameter. The two examples presented in this section provide a clear illustration that non-stationary data are present in a wide range of applications. We study two high-dimensional real-world datasets from distinct applications: the first consists of stock returns and the second corresponds to a functional MRI (fMRI) dataset taken from an emotion task. The stock return data consist of daily stock returns of the 100 largest financial companies over a period of 11 years from 2007 to 2018. The companies, listed on NASDAQ, were ordered by market capitalization, and the data were downloaded from Yahoo Finance. These data are particularly interesting as they cover the financial crisis of 2008-2009. By analyzing these data, it is hoped that we may be able to understand the statistical properties which directly precede similar financial crises, thereby potentially providing some form of advanced warning. The second dataset we consider corresponds to fMRI data collected as a part of the Human Connectome Project (HCP). This dataset consists of measurements of 15 distinct brain regions taken during an Emotion task, as described in \cite{barch2013function}. Data was analyzed over a subset of 50 subjects.
While traditional neuroimaging studies were premised on the assumption of stationarity, an exciting avenue of neuroscientific research corresponds to understanding the non-stationary properties of the data and how these may potentially correspond to changes induced by distinct tasks \citep{Monti2017a} or changes across subjects \citep{Monti2017}. The modelling procedure employed for both datasets consisted of regressing each component of the multivariate time series on the remaining components. In this way we obtained 100 and 15 sequences of Lasso parameter values, for the financial and neuroimaging data respectively, which were then averaged and normalized to the [0, 1] interval as before. The resulting time series for the US stock market data are depicted in Figure \ref{fig:finance}, and for the fMRI data the graphical output can be seen in Figure \ref{fig:neuro}. \begin{figure} \caption{Standardized series of average $\lambda$ in the US stock returns data, daily observations from January 3, 2007 to August 10, 2018. } \label{fig:finance} \end{figure} From Figure \ref{fig:finance} it is visible that the values of $\lambda$ react to the situation on the market under both algorithms, the standard one with the BIC as selection rule and the RAP. Especially pronounced is the change of the values during the financial crisis of 2008-2009, where the volatility observable on the market was elevated, resulting in increased values of the Lasso parameter as well. Interestingly, both of the considered methods react instantly when a change occurs, but take a different number of observations to adjust back to the standard situation. \begin{figure} \caption{Standardized series of average $\lambda$ in the fMRI dataset. Distinct tasks are indicated by the background color (red indicates a neutral task, blue indicates an emotion task and white denotes the resting period).
} \label{fig:neuro} \end{figure} Figure \ref{fig:neuro} shows the time series of the average regularization parameter over eight distinct subjects performing an emotion related task. The task required participants to perform a series of trials presented in blocks. The trials either required them to decide which of the two faces presented on the bottom of the screen matches the face at the top of the screen, or which of the two shapes presented at the bottom of the screen matches the shape at the top of the screen. The former was considered the emotion task (denoted in blue in Figure \ref{fig:neuro}) and the latter the neutral task (denoted in red in Figure \ref{fig:neuro}). From Figure \ref{fig:neuro} we see clear changes in the estimated regularization parameter induced by changes in the underlying cognitive task, and thus, changes in the connectedness of the brain regions. This finding is in line with the current trend in the study of fMRI data, which is interested in quantifying and understanding the non-stationary properties of such data and how these relate to changes in cognitive state \citep{calhoun2014chronnectome}. \section{Discussion} In this work, we have highlighted and provided extensive empirical evidence for the various statistical properties which affect the optimal choice of the regularization parameter in a penalized linear regression model. Based on the theory of the Lasso, we specifically consider three distinct properties: the variance of the residuals, the $\ell_0$ and $\ell_1$ norms of the regression coefficients, and the covariance structure of the design matrix. Through a series of experiments, we confirm the manner in which each of these properties affects the optimal choice of the regularization parameter. We relate the dependencies between each of the aforementioned statistical properties and the estimated regularization parameter to the theoretical properties presented in \cite{Osborne2000lassodual}.
In particular, we conclude that: \begin{itemize} \item There is a (positive) linear relationship between changes in the variance of the residuals, $\sigma^2$, and the estimated regularization parameter, as clearly demonstrated in Figure~\ref{fig:deltasigma}. \item There is a (negative) linear relationship between changes in the size of the active set (either the $\ell_0$ or $\ell_1$ norm) and the estimated regularization parameter, as shown in Figure~\ref{fig:deltaq}. \item There is a non-linear relationship between changes in the correlation structure of the design matrix and the estimated regularization parameter, as visualized in Figure~\ref{fig:deltarho}. \end{itemize} We further provide a series of experiments where two of the statistical properties are jointly varied in order to demonstrate the possibility of having non-stationary time-series data where the optimal regularization parameter does not change. This is most clearly seen in the case of changes in the active set, $q$, together with changes in the residual variance, $\sigma^2$, shown in Figure \ref{fig:QvsSigma}. Finally, we conclude with two case studies involving high-dimensional time-series data in the context of finance and neuroimaging. Both datasets demonstrate significant temporal variability in the estimated regularization parameter, thereby validating the need for methods through which to iteratively tune such a parameter. In conclusion, the purpose of this letter is to highlight and rigorously catalog the various statistical properties which may lead to changes in the choice of the regularization parameter in $\ell_1$ penalized models. Such models are widely employed, indicating that an appreciation of the relationships between the various statistical properties of the data and the choice of the regularization parameter is important. \end{document}
\begin{document} \date{} \title{\bf Lipschitz optimal transport metric for a wave system modeling nematic liquid crystals} \author[1]{Hong Cai\thanks{Email: [email protected] (Hong Cai)}} \author[2]{Geng Chen\thanks{Email: [email protected] (Geng Chen)}} \author[2]{Yannan Shen\thanks{Email: [email protected] (Yannan Shen)}} \affil[1]{\it{\small School of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao, 266061, P.R. China.}} \affil[2]{\it{\small Department of Mathematics, University of Kansas, Lawrence, KS 66045, USA. }} \renewcommand\Authands{ and } \maketitle \begin{abstract} In this paper, we study the Lipschitz continuous dependence of conservative H\"older continuous weak solutions to a variational wave system derived from a model for nematic liquid crystals. Since the solution of this system in general forms a finite time cusp singularity, the solution flow is not Lipschitz continuous under the Sobolev metric used in the existence and uniqueness theory. We establish a Finsler type optimal transport metric, and show the Lipschitz continuous dependence of the solution on the initial data under this metric. This kind of Finsler type optimal transport metric was first established in [A. Bressan and G. Chen, Arch. Ration. Mech. Anal. 226(3) (2017), 1303-1343] for the scalar variational wave equation. That equation can be used to describe the unit direction $\mathbf{n}$ of mean orientation of nematic liquid crystals, when $\mathbf{n}$ is restricted to a circle. The model considered in this paper describes the propagation of $\mathbf{n}$ without this restriction, i.e. $\mathbf{n}$ takes any value on the unit sphere, so we need to consider a wave system instead of a scalar equation.
\bigbreak \noindent {\bf \normalsize Keywords.} {System of wave equations; Liquid crystal;\, Lipschitz metric;\, Singularity} \bigbreak \end{abstract} \section{Introduction} \setcounter{equation}{0} In this paper, we study the Lipschitz continuous dependence for solutions of the variational wave system \begin{equation} \label{vwl} \partial_{tt}n_i-\partial_x\big(c^2(n_1)\partial_x n_i\big)=\bigl(-|{\mathbf n}_t|^2+\big(2c^2(n_1)-\zeta_i\big)|{\mathbf n}_x|^2\bigr)n_i,\qquad i=1,2,3. \end{equation} The time $t$ and space variable $x$ belong to $\mathop{\mathbb R\kern 0pt}\nolimits^+$ and $\mathop{\mathbb R\kern 0pt}\nolimits$, respectively, and the unit vector ${\mathbf n}=(n_1,n_2, n_3)$ satisfies \begin{equation}\label{n1} |{\mathbf n}|=1. \end{equation} The (positive) wave speed $c$ depends on $n_1$ with \begin{equation}\label{c-def} c^2(n_1)=\alpha+(\gamma-\alpha)n_1^2. \end{equation} The constants are \[ \zeta_1=\gamma>0\quad\hbox{and}\quad\zeta_2=\zeta_3=\alpha>0. \] In this paper, we consider the initial value problem with initial data satisfying \begin{equation}\label{ID} n_i|_{t=0}={n_i}_0\in H^1(\mathbb{R}),\quad (n_i)_t|_{t=0}={n_i}_1\in L^2(\mathbb{R}), \quad i=1, 2, 3. \end{equation} We briefly introduce the origin of system \eqref{vwl} from the modelling of nematic liquid crystals. A liquid crystal is often viewed as an intermediate state between a liquid and a solid. More precisely, a nematic crystal can be described, when we ignore the motion of the fluid, by the dynamics of the so-called director field of unit vectors ${\mathbf n}\in{\mathbb S}^2$ describing the orientation of the rod-like molecules. We consider a regime in which inertia effects dominate viscosity.
The propagation of the orientation waves in the director field is modelled by the least action principle (\cite{AH,[37]}) \begin{equation} \delta \int_{\mathop{\mathbb R\kern 0pt}\nolimits^+} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^3}\Big\{ \frac12 \partial_t{\mathbf n}\cdot\partial_t{\mathbf n} - W({\mathbf n}, \nabla{\mathbf n})\Big\}\,d{\mathbf x}\,dt = 0, \qquad {\mathbf n}\cdot {\mathbf n} = 1. \label{1.2} \end{equation} The potential energy density $W$ is given by the well-known Oseen-Frank energy from the continuum theory of nematic liquid crystals (\cite{GP}, Ch. 3.), \begin{equation} W\left({\mathbf n},\nabla{\mathbf n}\right) = \frac12\alpha(\nabla\cdot{\mathbf n})^2 +\frac12\beta\left({\mathbf n}\cdot(\nabla\times{\mathbf n})\right)^2 +\frac12\gamma\left|{\mathbf n}\times(\nabla\times{\mathbf n})\right|^2, \label{1.1a} \end{equation} where the positive constants $\alpha$, $\beta$, and $\gamma$ are elastic constants of the liquid crystal, corresponding to splay, twist, and bend, respectively. A special case is the one-constant model in which $\alpha=\beta=\gamma$; the function $W$ then reduces to the harmonic map energy density $W=\frac{1}{2}\alpha|\nabla \mathbf{n}|^2.$ The associated variational principle \eqref{1.2} leads to the equation for harmonic wave maps from ($1+3$)-dimensional Minkowski space into the two-sphere, see \cite{CTZ,S,STZ} for example.
The Euler-Lagrange equation associated with \eqref{1.2} and \eqref{1.1a} is \begin{equation}\label{EL} {\mathbf n}_{tt}=\alpha \nabla(\nabla\cdot \mathbf n)-\beta [A \nabla\times \mathbf n+\nabla\times(A \mathbf n)]+\gamma [B\times (\nabla\times \mathbf n)-\nabla\times (B\times \mathbf n)]+\lambda \mathbf n, \end{equation} with $A=\mathbf{n}\cdot (\nabla\times \mathbf n),\ B=\mathbf n\times (\nabla\times \mathbf n).$ The Lagrange multiplier $\lambda(x, t)$ in \eqref{EL} is chosen so that $\mathbf n\cdot\mathbf n = 1$, and is given explicitly in terms of $\mathbf n$ by \begin{equation}\label{Lm} \lambda=-|{\mathbf n}_{t}|^2+\alpha[|\nabla\mathbf n|^2-|\nabla\times \mathbf n|^2]+2[\beta A^2 +\gamma|B|^2]+(\alpha-\gamma)(\nabla\cdot B). \end{equation} When the space dimension is one (1-d), i.e. $x\in \mathop{\mathbb R\kern 0pt}\nolimits$, $W$ in \eqref{1.1a} is given specifically by \begin{equation*} W\left({\mathbf n},\partial_x{\mathbf n}\right) = \frac\alpha 2(\partial_x n_1)^2 +\frac\beta 2\left((\partial_x n_2)^2+(\partial_x n_3)^2\right) +\frac12(\gamma-\beta)n_1^2 |\mathbf{n}_x|^2, \end{equation*} which together with \eqref{EL} and \eqref{Lm} implies that \begin{equation}\label{full} \begin{cases} \partial_{tt}n_1-\partial_{x}[c_1^2(n_1)\partial_x n_1]=\left[-|\mathbf{n}_t|^2+(2c_2^2-\gamma)|\mathbf{n}_x|^2+2(\alpha-\beta) (\partial_x n_1)^2\right]n_1,\\ \partial_{tt}n_2-\partial_{x}[c_2^2(n_1)\partial_x n_2]=\left[-|\mathbf{n}_t|^2+(2c_2^2-\beta)|\mathbf{n}_x|^2+(\beta-\alpha) n_1\partial_{xx}n_1\right]n_2,\\ \partial_{tt}n_3-\partial_{x}[c_2^2(n_1)\partial_x
n_3]=\left[-|\mathbf{n}_t|^2+(2c_2^2-\beta)|\mathbf{n}_x|^2+(\beta-\alpha) n_1\partial_{xx}n_1\right]n_3, \end{cases} \end{equation} with $c_1^2(n_1)=\alpha+(\gamma-\alpha)n_1^2$ and $ c_2^2(n_1)=\beta+(\gamma-\beta)n_1^2.$ In particular, putting $\alpha=\beta$ in \eqref{full}, we obtain our system \eqref{vwl}. The existence and uniqueness of the energy-conservative $H^1$ solution for \eqref{vwl}--\eqref{ID} have already been established in \cite{CZZ} by Chen-Zhang-Zheng, following an earlier work \cite{ZZ10}, and in \cite{CCD} by Cai-Chen-Du, respectively. In general, the solution of \eqref{vwl}--\eqref{ID}, or of other variational wave equations such as \eqref{VW}, is not unique, due to the formation of cusp singularities \cite{BH,BZ,GHZ,ZZ03}. To obtain a unique solution after the formation of a singularity, one needs to impose an additional admissibility condition, such as the energy conservation condition in the weak form. See also the results for dissipative solutions in \cite{BH, ZZ03}, and for \eqref{full} in \cite{ZZ11}. Since the solution of \eqref{vwl}--\eqref{ID} generally forms a cusp singularity in finite time, the solution flow is not Lipschitz continuous under the $H^1$ metric \cite{GHZ}. The goal of this paper is to establish a Finsler type optimal transport metric, and to show the Lipschitz continuous dependence of the conservative solution on initial perturbations under this metric. In what follows, we always assume that the following generic condition is satisfied: \begin{equation}\label{gencon} \alpha\neq \gamma. \end{equation} When $\alpha=\gamma$, the wave speed $c$ is a constant, so the system becomes a one dimensional semi-linear wave equation; its well-posedness can then be established easily by classical methods.
To avoid unnecessary complexity in notations and estimates, we do not address this case in this paper. There is a highly simplified case when $ {\mathbf n} = (\cos u(t,x), \sin u(t,x),0) $ (planar deformation) with $x\in \mathbb R$, where the dependent variable $u\in\mathbb{R}$ measures the angle of the director field to the $x$-direction. In this case, the function $u$ satisfies the scalar variational wave equation \begin{equation}\label{VW} u_{tt} -c(u)(c(u)\,u_x)_x = 0,\end{equation} with $ c^2(u) = \gamma\cos^2u + \alpha\sin^2u. $ See \cite{AH, BZ, CZZ} for more details on the derivations of \eqref{vwl} and \eqref{VW}. The research on global well-posedness of H\"older continuous conservative solutions for variational wave type equations was initiated with \eqref{VW}, for which current results include global existence \cite{BZ,HR}, uniqueness \cite{BCZ}, Lipschitz continuous dependence under a Finsler type transport metric \cite{BC2015}, and generic regularity \cite{BC}. In particular, the construction of the new Lipschitz optimal transport metric for \eqref{vwl} builds on the metric established for \eqref{VW} in \cite{BC2015} by Bressan and the second author. In this paper, we leap from a scalar equation to a system of wave equations. The new metric for system \eqref{vwl} is quite different from the one for the scalar equation, mainly because we need to control the energy transfer between different components of ${\bf n}$ in each characteristic family. We will give more details in section 2. Finally, the recent result on Poiseuille flow of nematic liquid crystals via the full Ericksen-Leslie model in \cite{CHL20} shows that the results on global well-posedness for the variational wave systems \eqref{VW} and \eqref{vwl} have direct applications to the Ericksen-Leslie model, described by a coupled system consisting of a wave system for the director field of unit vectors ${\bf n}$ and the Navier-Stokes equations for the fluid velocity ${\bf u}$.
For results on elliptic and parabolic type Ericksen-Leslie systems, which are proved by very different techniques, we refer the reader to the pioneering paper \cite{lin89}, the survey paper \cite{linwangs14} and the references therein. \subsection{Existing existence and uniqueness results} In \cite{CCD, CZZ}, the authors established the existence and uniqueness of the global conservative solution to the Cauchy problem \eqref{vwl}--\eqref{ID}. We first review the global existence theorem in \cite{CZZ}, where one can also find the definition of weak solution used in this theorem. \begin{Theorem}[Existence \cite{CZZ}] \label{CZZthm} The Cauchy problem \eqref{vwl}--\eqref{ID} has a global weak solution ${\mathbf n}(t, x)=(n_1, n_2, n_3)(t,x)$ defined for all $(t, x)\in [0, \infty)\times {\mathbb R}$ in the following sense: \begin{itemize} \item[{\rm (i)}] In the $t$-$x$ plane, the functions $(n_1, n_2, n_3)$ are locally H\"older continuous with exponent $1/2$. The solution $t\mapsto (n_1, n_2, n_3)(t,\cdot)$ is continuously differentiable as a map with values in $L^p_{\rm loc}$, for all $1\leq p<2$. Moreover, it is Lipschitz continuous with respect to (w.r.t.) the $L^2$ distance, that is, there exists a constant $L$ such that \begin{equation*} \big\|{n_i}(t,\cdot)-{n_i}(s,\cdot)\big\|_{L^2} \leq L\,|t-s|, \quad i=1, 2, 3,\quad \hbox{for all } t,s\in\mathbb R^+. \end{equation*} \item[{\rm (ii)}] The functions $(n_1, n_2, n_3)(t,x)$ take on the initial conditions in \eqref{ID} pointwise, while their temporal derivatives attain the initial data in $L^p_{\rm loc}\,$ for $p\in [1,2)\,$. \item[{\rm (iii)}] The equation \eqref{vwl} holds in the distributional sense for all test functions $\varphi\in C^1_c(\mathbb R^+\times \mathbb R)$. \end{itemize} \end{Theorem} The uniqueness result for conservative solutions in \cite{CCD} can be summarized as follows.
\begin{Theorem}[Uniqueness \cite{CCD} and energy conservation \cite{CZZ}]\label{ECthm} Under the previous assumptions, a unique solution ${\mathbf n}={\mathbf n}(t,x)$ exists which is {\bf conservative} in the following sense: There exist two families of positive Radon measures on the real line, $\{\mu_-^t\}$ and $\{\mu_+^t\}$, depending continuously on $t$ in the weak topology of measures, with the following properties. \begin{itemize} \item[(i)] At every time $t$ one has \begin{equation*} \mu_-^t(\mathbb {R})+\mu_+^t(\mathbb {R})~=~E_0~:=~2 \int_{-\infty}^\infty \Big[|{\bf n}_1|^2(x) + c^2(n_{10}(x)) |{\bf n}_{0,x}(x)|^2\Big]\, dx \,,\end{equation*} where we denote the initial data $$ {\bf n}_0=(n_{10},n_{20}, n_{30})= {\bf n}|_{t=0},\qquad {\bf n}_1=(n_{11},n_{21}, n_{31})= {\bf n}_t|_{t=0}. $$ \item[(ii)] For each $t$, the absolutely continuous parts of $\mu_-^t$ and $\mu_+^t$ w.r.t.~the Lebesgue measure have densities respectively given by $ \bigl|{\bf n}_t + c(n_1) {\bf n}_x\bigr|^2 {\rm ~and~} \bigl|{\bf n}_t - c(n_1) {\bf n}_x\bigr|^2. $ \item[(iii)] For almost every $t\in\mathbb {R}^+$, the singular parts of $\mu^t_-$ and $\mu^t_+$ are concentrated on the set where $n_1=0$ or $\pm 1$, when $\alpha\neq\gamma$. \end{itemize} \end{Theorem} \subsection{Our main result} We now state our main Lipschitz continuous dependence theorem. \begin{Theorem}\label{thm_metric} The energy conservative weak solution to the nonlinear wave system of nematic liquid crystals \eqref{vwl}--\eqref{ID} depends Lipschitz continuously on the initial data, under a Finsler type optimal transport metric defined in Definition \ref{def_weak}.
Namely, let $(\mathbf{n}_0,\mathbf{n}_1)$ and $(\hat{\mathbf{n}}_0,\hat{\mathbf{n}}_1)$ be two initial data as in \eqref{ID}. Then for any time $t\in[0,T]$, there exists a distance functional $d$ such that the corresponding solutions satisfy \[d\big((\mathbf{n},\mathbf{n}_t)(t),(\hat{\mathbf{n}}, \hat{\mathbf{n}}_t)(t)\big)\leq C\cdot d\big((\mathbf{n}_0,\mathbf{n}_1),(\hat{\mathbf{n}}_0, \hat{\mathbf{n}}_1)\big),\] where the constant $C>0$ depends only on $T$ and the initial total energy. \end{Theorem} This paper is divided into five sections. In section 2, we introduce the main ideas used to construct the metric, and the differences between our metric and the metric for the scalar equation. In section 3, we establish the Lipschitz metric for smooth solutions. In section 4, we extend the metric to piecewise smooth generic solutions, using the generic regularity result in \cite{CCD}. Finally, we extend the metric to $H^1$ solutions and prove the main theorem in section 5, where we also compare our Finsler metric with some Sobolev metrics and the Kantorovich-Rubinstein metric. \section{Basic setup and main ideas used to establish the metric}\label{sec_smooth} To describe how to construct the Lipschitz metric, we first consider smooth solutions to \eqref{vwl}--\eqref{ID}. Due to energy concentration when a singularity forms, the solution flow fails to be Lipschitz continuous under the $H^1$ distance, although this distance is the natural choice corresponding to the energy. Instead, we will establish a Finsler type optimal transport geodesic distance between any two solutions. Basically, the optimization is taken over the cost of energy transportation between the two solutions. To keep track of the cost of energy transportation, we are led to construct a geodesic distance.
That is, for two given solution profiles $\mathbf{n}(t)$ and $\mathbf{n}^\epsilon(t)$, we consider all possible smooth deformations/paths $\gamma^t: \theta \mapsto \mathbf{n}^\theta(t)$ for $\theta\in [0,1]$ with $\gamma^t(0) = \mathbf{n}(t)$ and $\gamma^t(1) = \mathbf{n}^\epsilon(t)$, and then measure the length of these paths by integrating the norm of the tangent vector $d\gamma^t/d\theta$. The distance between $\mathbf{n}$ and $\mathbf{n}^\epsilon$ is then given by the optimal path length \begin{equation*} d\left( \mathbf{n}(t), \mathbf{n}^\epsilon(t) \right) = \inf_{\gamma^t}\|\gamma^t\| := \inf_{\gamma^t}\int^1_0 \| \mathbf{v}^\theta(t) \|_{\mathbf{n}^\theta(t)} \ d\theta, \quad \text{where } \mathbf{v}^\theta(t) = {d\gamma^t\over d\theta}. \end{equation*} Here the subscript $\mathbf{n}^\theta(t)$ emphasizes the dependence of the norm on the flow $\mathbf{n}^\theta$. In fact, there might be no sufficiently smooth path between two solutions. We will use the generic regularity result in \cite{CCD} to overcome this problem. The metric will be established in three steps: \begin{itemize} \item[1.] For smooth solutions, we find a norm $\| \mathbf{v}^\theta(t) \|_{\mathbf{n}^\theta(t)}$ measuring the cost of shifting from one conservative solution $(\mathbf{n},\mathbf{n}_t)$ (with energy density $\mu$) to the other one $(\hat{\mathbf{n}},{\hat{\mathbf{n}}}_t)$, for any time $t\leq T$, such that \begin{equation}\label{2.41} \frac{d}{dt}\| \mathbf{v}^\theta(t) \|_{\mathbf{n}^\theta(t)}\leq C_T\cdot \|\mathbf{v}^\theta(t) \|_{\mathbf{n}^\theta(t)}.
\end{equation} Hence, \begin{equation*}\label{d2}\textstyle d\big( \mathbf{n}(t),\hat{\mathbf{n}}(t)\big) ~\leq~C_T\cdot d\big( \mathbf{n}(0), \hat{\mathbf{n}}(0)\big),\end{equation*} where $C_T$ is a constant depending only on the arbitrarily given $T$ and the initial energy, and which remains uniformly bounded as the solution approaches a singularity. \item[2.] Extend the Lipschitz metric in step 1 to piecewise smooth generic solutions. \item[3.] Apply the generic regularity result in \cite{CCD} to prove the desired Lipschitz continuity for any (piecewise smooth) generic solution, then pass to the limit to reach all conservative solutions, using the result in \cite{CCD} that generic solutions are dense in the energy space $(\mathbf{n},\mathbf{n}_t)(t)\in H^1\times L^2$. \end{itemize} Finally, we describe how to define $\| \mathbf{v}^\theta(t) \|_{\mathbf{n}^\theta(t)}$ in Step 1 such that the inequality \eqref{2.41} is uniformly satisfied before blowup. To embed the wave structure in the metric, we consider a ``double transportation problem'', i.e. we study the wave propagation along forward and backward characteristics, respectively, and then find two corresponding cost functions. We introduce \begin{equation} \left\{ \begin{array}{l} \mathbf{R}=(R_1,R_2,R_3) ~:=~{\mathbf n}_t+c{\mathbf n}_x,\\ [4mm] \mathbf{S}=(S_1,S_2,S_3)~:=~{\mathbf n}_t-c{\mathbf n}_x, \end{array}\right. \label{R-S} \end{equation} for the backward and forward characteristic directions, respectively. The corresponding energy densities are ${\bf R}^2$ and ${\bf S}^2$, where we use the following notations \[ \mathbf{R}^2= \mathbf{R}\cdot \mathbf{R},\qquad \mathbf{S}^2= \mathbf{S}\cdot \mathbf{S}.
\] Then, for smooth solutions, equation (\ref{vwl}) is equivalent to the following system for $(\mathbf{R}, \mathbf{S}, \mathbf{n})$ \begin{equation} \left\{ \begin{array}{ll} \partial_t R_i-c\partial_xR_i= \dfrac1{4c^2}\bigl\{(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2)-2(3c^2-\zeta_i) \mathbf{R}\cdot\mathbf{S}\bigr\}n_i+\dfrac{c'(n_1)}{2c(n_1)}(R_i-S_i)R_1, \\ \partial_t S_i+c\partial_xS_i=\dfrac1{4c^2}\bigl\{(c^2-\zeta_i) (\mathbf{R}^2+\mathbf{S}^2)-2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\bigr\}n_i-\dfrac{c'(n_1)}{2c(n_1)}(R_i-S_i)S_1, \\ [4mm] {\mathbf n}_x=\dfrac{\mathbf{R}-\mathbf{S}}{2c(n_1)}\quad \mbox{or} \quad {\mathbf n}_t=\dfrac{\mathbf{R}+\mathbf{S}}2, \end{array}\right. \label{R-S-eqn} \end{equation} for $i=1,2,3$, with $ \zeta_1=\gamma$ and $\zeta_2=\zeta_3=\alpha.$ System (\ref{R-S-eqn}) admits the energy conservation law \begin{equation*}\label{energy1} \frac{1}{4}\partial_t\bigl(\mathbf{R}^2+\mathbf{S}^2\bigr)-\frac{1}{4}\partial_x\bigl(c(n_1)(\mathbf{R}^2-\mathbf{S}^2)\bigr)=0, \end{equation*} and two balance laws for the energy densities in the two directions, respectively, \begin{equation}\label{balance} \left\{ \begin{array}{rcl}( \mathbf{R}^2)_t - (c\mathbf{R}^2)_x & = & {c'(n_1)\over 2c(n_1)}( \mathbf{R}^2S_1 - R_1 \mathbf{S}^2)\, , \\ [3mm] (\mathbf{S}^2)_t + (c\mathbf{S}^2)_x & = & - {c'(n_1)\over 2c(n_1)}( \mathbf{R}^2S_1 -R_1 \mathbf{S}^2)\,. \end{array} \right. \end{equation} Now we describe the difficulties we meet and the new ideas we use in establishing the metric. \paragraph{\bf 1.} The double transportation problem gives us tools, i.e. the equations \eqref{R-S-eqn} and \eqref{balance} for $\bf R$ and $\bf S$, to study the wave propagation in each characteristic family and the wave interactions. This is crucial for finding the cost functions and proving the Lipschitz continuity.
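For completeness, we sketch (for the smooth solutions considered here) how the first balance law in \eqref{balance} follows from \eqref{R-S-eqn}; the second one is symmetric.

```latex
% Multiply the R_i-equation in \eqref{R-S-eqn} by 2R_i and sum over i=1,2,3.
% Since |\mathbf{n}|=1 implies \mathbf{R}\cdot\mathbf{n}=0, and
% \sum_i \zeta_i n_iR_i=(\gamma-\alpha)n_1R_1=c\,c'(n_1)\,R_1 by \eqref{c-def},
% the right-hand side collapses to
%   2\mathbf{R}\cdot(\partial_t\mathbf{R}-c\,\partial_x\mathbf{R})
%     =\frac{c'}{2c}\,R_1\bigl(\mathbf{R}^2-\mathbf{S}^2\bigr).
% Using c_x=c'\partial_x n_1=\frac{c'}{2c}(R_1-S_1), one then finds
\begin{equation*}
(\mathbf{R}^2)_t-(c\,\mathbf{R}^2)_x
 =2\mathbf{R}\cdot\bigl(\partial_t\mathbf{R}-c\,\partial_x\mathbf{R}\bigr)
  -c_x\,\mathbf{R}^2
 =\frac{c'(n_1)}{2c(n_1)}\bigl(\mathbf{R}^2S_1-R_1\mathbf{S}^2\bigr).
\end{equation*}
% Adding the two balance laws cancels the source terms and, after dividing
% by 4, recovers the conservation law stated above.
```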
However, the forward and backward energies might increase through wave interactions (see the cubic nonlinearity in \eqref{balance}), although the total energy is bounded. This happens because energy transfers between different characteristic families during wave interactions. We introduce some interaction potentials, which share a similar philosophy with the Glimm potential for hyperbolic conservation laws. Very roughly speaking, the interaction potential records the possible future increase of energy on a single forward or backward wave. As a wave interaction happens, the interaction potential (the possible future increase of energy) decays, since the current interaction drops out of the list of future interactions. This decay balances the possible increase of forward or backward energy. \paragraph{\bf 2.} The second difficulty comes from the energy transfer within one characteristic direction between different components. The quadratic terms $R_i R_1$ and $S_i S_1$ in the equations for $R_i$ and $S_i$ in \eqref{R-S-eqn} exhibit this kind of phenomenon. This is a fundamental difficulty when one jumps from a scalar wave equation to a wave system. The wave potentials mentioned above can only balance higher order crossing terms such as $S_i R_1$ in \eqref{R-S-eqn} or ${\bf R}^2 S_1 $ in \eqref{balance}. But they have no effect on $R_i R_1$, $i\neq 1$, in \eqref{R-S-eqn}. Briefly speaking, our strategy is to adjust the components in the metric in a very subtle way. This is the most difficult part of this paper, and it makes the metric for \eqref{vwl} quite different from the one for \eqref{VW} in \cite{BC2015}. The most important discovery in this paper is the cancellation between the time derivatives of two terms in the metric for the scalar equation \eqref{VW} ($\dot{I}_2$ and $\dot{I}_5$ in \cite{BC2015}). This cancellation also holds for system \eqref{vwl}.
Although for a scalar variational wave equation one can bound these two time derivatives separately, this is not the case for the wave system \eqref{vwl}, because of the energy transfer between different components in the same characteristic family. After using the new term in the metric, now denoted as $I_4$ in \eqref{norm1} (corresponding to $I_2+I_5$ in \cite{BC2015}), one can prove \eqref{2.41}. In fact, we find that the new term $I_4$ exactly accounts for the change of the base measure with density $R_i$. This is a more appropriate term to use in the metric than the two old terms used in \cite{BC2015}, although each of them also has its own physical meaning. Secondly, the wave speed $c(n_1)$ depends only on $n_1$, and the equations for the $n_i$ have different coefficients $\zeta_i$. As a consequence, our metric needs to be ``inhomogeneous'' in order to reflect the inhomogeneity mentioned above. We only explain the idea for the backward wave $\bf R$; the idea for the forward direction is the same. To obtain precise estimates on the propagation of each $R_i$ and the energy transfer between $R_i$ and $R_j$ with $i\neq j$, we need to adjust the relative shift terms, for example changing $R_i$ to $R_1$ in some relative shift terms, to reflect the fact that $c(n_1)$ depends on $n_1$ but not on $n_2$ and $n_3$. In fact, after we shift a wave, we create some wave interactions manually, so the corresponding increase of energy needs to be accounted for in the metric, by adding some relative shift terms. This is a very crucial and subtle part of the metric. More details will be given later when we construct the metric.
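As a small supplement to the setup above (these identities are not spelled out in the estimates but are used implicitly throughout), we record two elementary consequences of \eqref{c-def} and the generic condition \eqref{gencon}:

```latex
% Differentiating c^2(n_1)=\alpha+(\gamma-\alpha)n_1^2 in n_1 gives
% 2c\,c'(n_1)=2(\gamma-\alpha)n_1, that is,
\begin{equation*}
c'(n_1)=\frac{\gamma-\alpha}{c(n_1)}\,n_1 ,
\end{equation*}
% which vanishes exactly when n_1=0. Moreover, since n_1^2\in[0,1] by |n|=1
% and \alpha,\gamma>0, the wave speed is uniformly bounded and bounded away
% from zero:
\begin{equation*}
0<c_0:=\min\{\sqrt{\alpha},\sqrt{\gamma}\}
\ \leq\ c(n_1)\ \leq\ \max\{\sqrt{\alpha},\sqrt{\gamma}\} .
\end{equation*}
```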
\section{The norm of tangent vectors for smooth solutions} Now, let us consider a smooth solution $(\mathbf{n},\mathbf{R},\mathbf{S})(x)$ to \eqref{vwl}, \eqref{R-S-eqn}, and then take a family of perturbed solutions $(\mathbf{n}^\epsilon,\mathbf{R}^\epsilon,\mathbf{S}^\epsilon)(x)$ of the form \begin{equation}\label{perb} n_i^\epsilon(x)=n_i(x)+\epsilon v_i(x)+o(\epsilon), \quad{\rm and }\quad \begin{cases} R_i^\epsilon(x)=R_i(x)+\epsilon r_i(x)+o(\epsilon),\\ S_i^\epsilon(x)=S_i(x)+\epsilon s_i(x)+o(\epsilon), \end{cases} \end{equation} for $i=1,2,3$ and $\mathbf{n}^\epsilon=(n_1^\epsilon,n_2^\epsilon,n_3^\epsilon)$, $\mathbf{R}^\epsilon=(R_1^\epsilon,R_2^\epsilon,R_3^\epsilon)$, $\mathbf{S}^\epsilon=(S_1^\epsilon,S_2^\epsilon,S_3^\epsilon)$. Let the tangent vectors $\mathbf{r}=(r_1,r_2,r_3)$ and $\mathbf{s}=(s_1,s_2,s_3)$ be given. From \eqref{R-S-eqn} and \eqref{perb}, it follows that the perturbation $\mathbf{v}=(v_1,v_2,v_3)$ is uniquely determined by \begin{equation}\label{vx} \mathbf{v}_x=\frac{\mathbf{r}-\mathbf{s}}{2c(n_1)}-\frac{\mathbf{R}-\mathbf{S}}{2c^2(n_1)}c'(n_1)v_1,\qquad \mathbf{v}(t,0)=\mathbf{0}, \end{equation} and \begin{equation}\label{vt} \mathbf{v}_t=(\mathbf{r}+\mathbf{s})/2.
\end{equation} Moreover, in light of \eqref{vwl} and \eqref{R-S-eqn}, it is straightforward to check that the first order perturbations $\mathbf{v},\mathbf{s},\mathbf{r}$ must satisfy the equations \begin{equation}\label{vtt} \begin{split} &\partial_{tt}v_i-c^2 \partial_{xx}v_{i}=2\big[(c')^2\partial_x n_1\partial_x n_i+cc''\partial_x n_1\partial_x n_i+cc'\partial_{xx} n_i\big]v_1-\big[|\mathbf{n}_t|^2-(2c^2-\zeta_i)|\mathbf{n}_x|^2\big]v_i\\ &\qquad\qquad-2\big[\mathbf{n}_t\cdot\mathbf{v}_t -(2c^2-\zeta_i)\mathbf{n}_x\cdot\mathbf{v}_x\big]n_i+4cc'n_iv_1|\mathbf{n}_x|^2+2cc'(\partial_x n_1\partial_x v_i+\partial_x n_i\partial_x v_1), \end{split}\end{equation} and \begin{equation}\label{rt} \begin{cases} \displaystyle\partial_t r_i-c\partial_x r_i=c'v_1\partial_x R_i+\frac{c'\zeta_i}{2c^3}v_1 n_i\big(\mathbf{R}^2+\mathbf{S}^2-2\mathbf{R}\cdot\mathbf{S}\big)+\frac{cc''-(c')^2}{2c^2}(R_i-S_i)R_1v_1\\ \qquad\qquad\qquad\displaystyle+\frac{v_i}{4c^2}\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2)-2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]+\displaystyle\frac{c'}{2c}\big[(R_i-S_i)r_1+(r_i-s_i)R_1\big]\\ \qquad\qquad\qquad\displaystyle+\frac{n_i}{2c^2}\big[(c^2-\zeta_i)(\mathbf{R}\cdot\mathbf{r}+\mathbf{S}\cdot\mathbf{s}) -(3c^2-\zeta_i)(\mathbf{R}\cdot\mathbf{s}+\mathbf{S}\cdot\mathbf{r})\big],\\ \displaystyle\partial_t s_i+c\partial_x s_i=-c'v_1\partial_x S_i+\frac{c'\zeta_i}{2c^3}v_1 n_i\big(\mathbf{R}^2+\mathbf{S}^2-2\mathbf{R}\cdot\mathbf{S}\big)+\frac{cc''-(c')^2}{2c^2}(R_i-S_i)S_1v_1\\ \qquad\qquad\qquad\displaystyle+\frac{v_i}{4c^2}\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2)-2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]\displaystyle+\frac{c'}{2c}\big[(R_i-S_i)s_1+(r_i-s_i)S_1\big]\\
\qquad\qquad\qquad\displaystyle+\frac{n_i}{2c^2}\big[(c^2-\zeta_i)(\mathbf{R}\cdot\mathbf{r}+\mathbf{S}\cdot\mathbf{s}) -(3c^2-\zeta_i)(\mathbf{R}\cdot\mathbf{s}+\mathbf{S}\cdot\mathbf{r})\big],\\ \end{cases}\end{equation} for $i=1,2,3$ and $\zeta_1=\gamma, \zeta_2=\zeta_3=\alpha.$ To continue, one also needs to introduce quantities $w(t,x)$ and $z(t,x)$ measuring the horizontal shifts, corresponding to the backward and forward directions, respectively, which provide enough freedom for planar transport. Here we require $w(t,x)$ to satisfy \[ \epsilon w(t,x)+o(\epsilon)=x^\epsilon(t)-x(t), \] where $x^\epsilon(t)$ and $x(t)$ are two backward characteristics starting from the initial points $x^\epsilon(0)$ and $x(0)$. Similarly, the function $\epsilon z(t,x)$ measures the difference of two forward characteristics. More precisely, we choose $w, z$ to be the solutions of the following system \begin{equation}\label{wz} \begin{cases} \displaystyle w_t-cw_x=-c'(v_1+w\partial_x n_1 ),\\ \displaystyle z_t+cz_x=c'(v_1+z\partial_x n_1 ),\\ w(0,x)=w_0(x),\qquad z(0,x)=z_0(x).
\end{cases}\end{equation} With the above preparation, we can now define a Finsler norm on the space of tangent vectors $(\mathbf{v},\mathbf{r},\mathbf{s})$ at the flow $(\mathbf{n}, \mathbf{R}, \mathbf{S})$ as \begin{equation}\label{Finsler v} \|(\mathbf{v},\mathbf{r},\mathbf{s})\|_{(\mathbf{n},\mathbf{R},\mathbf{S})}: = \inf_{\mathbf{v}, \mathbf{r^*}, \mathbf{s^*}, w, z} \|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w, z)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})}, \end{equation} where the infimum is taken over the set of vertical displacements $\mathbf{v},\mathbf{r^*}=(r^*_1,r^*_2,r^*_3),\mathbf{s^*}=(s^*_1,s^*_2,s^*_3)$ and horizontal shifts $w,z$ which satisfy equations \eqref{vx}, \eqref{vt}, \eqref{wz} and \begin{equation}\label{rseq} \begin{cases} \displaystyle r^*_i=r_i+w\partial_xR_i+\frac{n_i}{8c^3}\big[(c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](w-z) -\frac{c'}{4c^2}(w-z)R_1S_i,\\ \displaystyle s^*_i=s_i+z\partial_xS_i+\frac{n_i}{8c^3}\big[(c^2-\zeta_i)\mathbf{R}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](w-z) -\frac{c'}{4c^2}(w-z)R_iS_1, \end{cases} \end{equation} for $i=1,2,3$ and $ \zeta_1=\gamma, \zeta_2=\zeta_3=\alpha.$ Next, the norm $\|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w, z)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})}$ is defined as \begin{equation}\label{norm1} \begin{split} &\ \|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w, z)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})}\\ &:=~\kappa_0\int_\mathbb{R} \big[|w|\, \mathcal{V}^-+|z|\, \mathcal{V}^+\big]\,dx+\kappa_1\int_\mathbb{R} \big[|w|(1+\mathbf{R}^2)\, \mathcal{V}^- +|z|(1+\mathbf{S}^2)\, \mathcal{V}^+\big]\,dx\\ &\quad+\kappa_2\sum_{i=1}^3\int_\mathbb{R} \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|\big[(1+\mathbf{R}^2)\, \mathcal{V}^- +(1+\mathbf{S}^2)\, \mathcal{V}^+\big]\,dx\\ &\quad+\kappa_3\int_\mathbb{R} \Big[
\Big|w_x+\frac{c'}{4c^2}(w-z)S_1\Big|\, \mathcal{V}^-+ \Big|z_x+\frac{c'}{4c^2}(w-z)R_1\Big|\, \mathcal{V}^+\Big]\,dx \\ &\quad+\kappa_4\sum_{i=1}^3\int_\mathbb{R} \Big[\Big|r^*_i+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\Big|\, \mathcal{V}^- +\Big|s^*_i+S_i\big(z_x+\frac{c'}{4c^2}(w-z)R_1\big)\Big|\, \mathcal{V}^+\Big]\,dx \\ &\quad+\kappa_5\int_\mathbb{R} \Big[ \Big|2\mathbf{R}\cdot \mathbf{r^*}+\mathbf{R}^2w_x+\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big|\, \mathcal{V}^-\\ &\qquad\qquad\qquad+ \Big|2\mathbf{S}\cdot \mathbf{s^*}+\mathbf{S}^2z_x+\frac{c'}{4c^2}(w-z)\mathbf{S}^2R_1\Big|\, \mathcal{V}^+\Big]\,dx \\ &=: \sum_{j=0}^5\kappa_j\big(\int_\mathbb{R} J_j^-\, \mathcal{V}^-\,dx+\int_\mathbb{R} J_j^+\, \mathcal{V}^+\,dx\big)=: \sum_{j=0}^5\kappa_j I_j, \end{split} \end{equation} where $\kappa_j$, $j=0,1,\cdots,5$, are constants to be determined later, and $I_j, J_j^-, J_j^+$ are the corresponding terms in the above equation. On the other hand, in view of \eqref{balance}, the forward or backward energy might increase during wave interactions, although the total energy is conserved. To balance this possible energy increase, a pair of {\em interaction potentials} $\mathcal{V}^+/\mathcal{V}^-$ for the forward/backward directions needs to be added in \eqref{norm1}: \begin{equation*}\label{W} \mathcal{V}^-:=1+\int_{-\infty}^x \mathbf{S}^2(y)\,dy,\quad \mathcal{V}^+:=1+\int^{+\infty}_x \mathbf{R}^2(y)\,dy.
\end{equation*} Then it follows from \eqref{balance} that \begin{equation}\label{Westimate} \begin{cases} \displaystyle\mathcal{V}^-_t-c\mathcal{V}^-_x= -2c\mathbf{S}^2+\int_{-\infty}^x \big[\frac{c'}{2c}(R_1\mathbf{S}^2-\mathbf{R}^2S_1)\big]\,dy\leq -2c_0\mathbf{S}^2+G(t),\\ \displaystyle\mathcal{V}^+_t+c\mathcal{V}^+_x= -2c \mathbf{R}^2+\int^{+\infty}_x \big[\frac{c'}{2c}(\mathbf{R}^2S_1-R_1\mathbf{S}^2)\big]\,dy\leq -2c_0\mathbf{R}^2+G(t),\\ \end{cases} \end{equation} with $\displaystyle G(t):=\int_{-\infty}^{+\infty} \Big|\frac{c'}{2c}(\mathbf{R}^2S_1-R_1\mathbf{S}^2)\Big|\,dy.$ Moreover, with the aid of \cite{CCD}, we can see that \begin{equation}\label{4.9} \int_0^T G(t)\,dt\leq C_T, \end{equation} for some constant $C_T$ depending only on $T$ and the total energy. Now we briefly explain how to obtain $J_j^-$, $j=0,1,\cdots,5$, in \eqref{norm1}; the terms $J_j^+$ are symmetric, for the forward waves. \paragraph{\bf (1)} $J_1^-$ measures [change in $x$]$\cdot (1+\mathbf{R}^2)$, where \[[\hbox{change in }x]=\lim_{\epsilon\rightarrow 0}\epsilon^{-1}(x^\epsilon-x)=w(x).\] The terms in $I_0$ correspond to the variation of $x$ with base measure of density $1$; they are added for a technical purpose. \paragraph{\bf (2)} $J_2^-$ measures $\displaystyle\sum_{i=1}^3$ [change in $n_i$]$\cdot (1+\mathbf{R}^2)$, where \begin{equation*} \begin{split} [\hbox{change in }n_i]=\lim_{\epsilon\rightarrow 0}\frac{n_i^\epsilon(x^\epsilon) - n_i(x)}{ \epsilon} &= v_i(x) + \partial_x n_i(x) w(x)\\ &= v_i(x) + \frac{R_i(x)-S_i(x)}{2c(n_1(x))}w(x)\\ &= v_i(x) + \frac{R_iw-S_iz}{2c}+\frac{z-w}{2c}S_i. \end{split}\end{equation*} Here the term $\frac{z-w}{2c}S_i$ in the above equation is exactly balanced by the {\bf relative shift} term. We use this term to illustrate how the relative shift term is calculated.
By the third equation of \eqref{R-S-eqn}, we have $ \partial_x n_i=\frac{R_i-S_i}{2c}. $ That is, roughly speaking, \begin{equation}\label{deltau} \Delta n_i\approx\frac{\Delta x}{2c}(R_i-S_i)=\frac{z-w}{2c}(R_i-S_i). \end{equation} Here the $S_i$ term balances $\frac{z-w}{2c}S_i$. We omit the $R_i$ term since it is of lower order. \paragraph{\bf (3)} $J_4^-$ measures $\displaystyle\sum_{i=1}^3$ [change of base measure with density $R_i$]. More precisely, \begin{equation*} \begin{split} &\quad \lim_{\epsilon\rightarrow 0}\frac{ R_i^\epsilon(x^\epsilon)\,dx^\epsilon- R_i(x)\,dx}{\epsilon}\\ &= \lim_{\epsilon\rightarrow 0}\frac{ \Big( R_i^\epsilon(x^\epsilon) -R_i(x) \Big)\,dx^\epsilon + R_i(x)(dx^\epsilon-dx)}{\epsilon} \\ &= \Big(r_i(x) + w(x)\partial_x R_i(x)+R_i(x)w_x(x)\Big)\,dx , \end{split} \end{equation*} which together with the relative shift term gives $J_4^-$. Here we add some subtle adjustments in the relative shift terms to take into account the interactions between forward and backward waves, using \eqref{R-S-eqn}. As mentioned before, this is a new term compared to the metric in \cite{BC2015}. $J_3^-$ measures [change of base measure with density $1$], which is added to close the estimate of the time derivatives for $J_4^-$. This term is, to some extent, a lower order version of $J_4^-$.
\paragraph{\bf (4)} $J_5^-$ measures [change of base measure with density $\mathbf{R}^2$]. Using the identity \begin{equation*} \begin{split} \big(\mathbf{R}^\epsilon(x^\epsilon) \big)^2 & = \mathbf{R}^2(x^\epsilon) + 2\epsilon \mathbf{R}(x^\epsilon)\cdot \mathbf{r}(x^\epsilon) + o(\epsilon) \\ & = \mathbf{R}^2(x) + 2\epsilon w(x) \mathbf{R}(x)\cdot \mathbf{R}_{x}(x) + 2\epsilon \mathbf{R}(x)\cdot \mathbf{r}(x) + o(\epsilon), \end{split} \end{equation*} we obtain that \begin{equation}\label{exp1} \begin{split} \big( \mathbf{R}^\epsilon(x^\epsilon) \big)^2 dx^\epsilon - \mathbf{R}^2(x) dx =\Big( 2\epsilon \mathbf{R}(x)\cdot \mathbf{R}_{x}(x)w(x) + 2\epsilon \mathbf{R}(x) \cdot\mathbf{r}(x) +\epsilon \mathbf{R}^2(x)w_x(x)+o(\epsilon)\Big)\,dx. \end{split} \end{equation} On the other hand, as in \eqref{deltau}, if the mass with density $\mathbf{S}^2$ is transported from $x$ to $x+\epsilon z(x)$, in view of \eqref{balance}, the relative shift between forward and backward waves will contribute \begin{equation}\label{exp2} \frac{c'}{2c}(\mathbf{R}^2S_1-R_1\mathbf{S}^2)\frac{z-w}{2c} \epsilon. \end{equation} Subtracting \eqref{exp2} from \eqref{exp1}, we have \begin{equation}\label{exp3} 2 \mathbf{R}\cdot\mathbf{r} + 2 \mathbf{R}\cdot \mathbf{R}_{x}w + \mathbf{R}^2w_x +\frac{c'}{4c^2}(\mathbf{R}^2S_1-R_1\mathbf{S}^2)(w-z). \end{equation} Using $|\mathbf{n}|=1$, so that $\mathbf{R}\cdot\mathbf{n}=0$, and $c'=\frac{\gamma-\alpha}{c}n_1$, we further obtain \begin{equation*}\label{exp4} 2 \mathbf{R}\cdot\mathbf{r}^* = 2 \mathbf{R}\cdot\mathbf{r} + 2 \mathbf{R}\cdot \mathbf{R}_{x}w -\frac{c'}{4c^2}R_1\mathbf{S}^2(w-z). \end{equation*} This together with \eqref{exp3} gives the term $J_5^-$.
Now we state the main result of this section, which shows that the norm of tangent vectors defined in \eqref{Finsler v} satisfies a Gr\"onwall-type inequality. \begin{Lemma}\label{lem_est} Let $(\mathbf{n},\mathbf{R},\mathbf{S})(t,x)$ be a smooth solution to \eqref{vwl} and \eqref{R-S-eqn} for $t\in[0,T]$, with $T>0$ given. Assume that the first order perturbations $(\mathbf{v},\mathbf{r},\mathbf{s})$ satisfy the corresponding equations \eqref{vtt}--\eqref{rt}. Then it follows that \begin{equation}\label{normest} \|(\mathbf{v},\mathbf{r},\mathbf{s})(t)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(t)}\leq C\|(\mathbf{v},\mathbf{r},\mathbf{s})(0)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(0)}, \end{equation} with the constant $C$ depending only on the initial total energy and $T$. \end{Lemma} \begin{proof} To achieve \eqref{normest}, it suffices to show that \begin{equation}\label{est on w and z} {d \over dt}\|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w,z)(t)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(t)} \leq a(t) \|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w,z)(t)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(t)}, \end{equation} for any $w, z$ and $\mathbf{r^*}, \mathbf{s^*}$ satisfying (\ref{wz}) and (\ref{rseq}), with a locally integrable function $a(t)$. 
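Indeed, once \eqref{est on w and z} is established, Gr\"onwall's inequality yields
\begin{equation*}
\|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w,z)(t)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(t)}\leq \exp\Big(\int_0^t a(s)\,ds\Big)\,\|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w,z)(0)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(0)},\qquad 0\leq t\leq T,
\end{equation*}
so that \eqref{normest} follows with $C=\exp\big(\int_0^T a(s)\,ds\big)$.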
In fact, by elaborate calculations on the time derivatives of all terms in \eqref{norm1}, we have \begin{equation}\label{sum}\left.\begin{array}{l} \displaystyle\frac{dI_k}{dt}\leq C\sum_{\ell\in {\mathcal F}^l_k} \left(\int_\mathbb{R} (1+|\mathbf{S}|)\,J_\ell^-\,{\mathcal V}^- \,dx+\int_\mathbb{R} (1+|\mathbf{R}|)J_\ell^+\,\,{\mathcal V}^+ \,dx \right) \\[2mm] \displaystyle \qquad \qquad+C\sum_{\ell\in {\mathcal F}^h_k} \left(\int_\mathbb{R} (1+ \mathbf{S}^2)\,J_\ell^-\,{\mathcal V}^- \,dx+\int_\mathbb{R} (1+\mathbf{R}^2)\,J_\ell^+\,{\mathcal V}^+\, dx \right)\\[2mm] \displaystyle\qquad\qquad+G(t)I_k-c_0\left(\int_\mathbb{R} \mathbf{S}^2\,J_k^-\,{\mathcal V}^- \,dx+\int_\mathbb{R} \mathbf{R}^2\,J_k^+\,{\mathcal V}^+ \,dx \right). \end{array}\right. \end{equation} The detailed calculations for \eqref{sum} can be found in Appendix \ref{app}. Here $\mathcal {F}^l_k,\mathcal {F}^h_k\subset\{0,1,2,\cdots,5\}$ are suitable sets of indices determined by the estimates \eqref{I0est}, \eqref{I1est}, \eqref{I2est}, \eqref{I3est}, \eqref{I4est} and \eqref{I5est}; a graphical summary of all the a priori estimates is given in Fig. \ref{f:wa36}. For example, by \eqref{I3est}, $\mathcal {F}^l_3=\{2,3,4\}$ and $\mathcal {F}^h_3=\{1\}$. Throughout the paper, $C>0$ is a generic constant depending only on the initial total energy and $T$, which may vary in different estimates. \begin{figure}[htbp] \centering \includegraphics[scale=0.6]{vwsr.png} \caption{ $\dot{I_k}=\frac{d I_k}{dt}$. If $\ell\in {\mathcal F}_k^l$, then $\dot{I_k}$ and $I_\ell$ are connected by a dashed line. If $\ell\in {\mathcal F}_k^h$, then $\dot{I_k}$ and $I_\ell$ are connected by a solid line. The relation $k\rightarrow {\mathcal F}_k^h \subset\{0,1,\cdots, 5\}$ has {\bf{no cycle!}} Choose $\kappa_k$ in a certain order ($\kappa_0\gg\kappa_1\gg\kappa_3\gg\kappa_4\gg \kappa_2, \kappa_5 $) to prove \eqref{est on w and z}. 
} \label{f:wa36} \end{figure} Since there is no cycle in the relation tree ${\mathcal F}_k^h$, for every $\ell\in{\mathcal F}^h_k$ the term $I_\ell$ can be assigned a much larger weight than $I_k$, so that the quadratic terms in \eqref{sum} can be absorbed by the corresponding negative terms in the inequalities for $\frac{dI_\ell}{dt}$. Accordingly, we choose a suitably small constant $\delta>0$ and define the weighted norm by \begin{equation*} \|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w,z)(t)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})(t)} :=I_0+\delta I_1+\delta^4 I_2+\delta^2 I_3+\delta^3 I_4+\delta^4 I_5, \end{equation*} which yields the desired estimate \eqref{est on w and z}. This completes the proof of Lemma \ref{lem_est}. \end{proof} \section{Metric for piecewise smooth solutions} Having constructed a weighted norm on tangent vectors for smooth solutions, our main goal now is to extend this metric to general weak solutions. Due to the strong nonlinearity of the equations, solutions with smooth initial data can lose regularity in finite time. When this happens, the tangent vector ${\bf v}$ may no longer exist, since there may be no regular path between two solutions. Even if the tangent vector does exist, it is not obvious that the estimate in Lemma \ref{lem_est} holds. In this section, we first extend the metric to piecewise smooth solutions. A natural question is whether there is a dense set of piecewise smooth paths of solutions whose weighted lengths can be controlled in time. An analogous theorem, proved by the authors in \cite{CCD}, gives a positive answer to this question. Roughly speaking, we proved there that, for generic smooth initial data, the solution is piecewise smooth and its gradient blows up along finitely many smooth curves in the $t$-$x$ plane. In Subsection \ref{sec:4.1}, we first review this basic construction and the characterization of generic singularities \cite{CCD}. 
\subsection{Generic regularity and smooth paths of solutions}\label{sec:4.1} We define the forward and backward characteristics as follows: \begin{equation*} \begin{cases} \frac{d}{ds}x^\pm(s,t,x)=\pm c(n_1(s,x^\pm(s,t,x))),\\ x^\pm|_{s=t}=x. \end{cases} \end{equation*} Then we define the coordinate transformation $(t,x)\to (X,Y)$, where \begin{equation*} X~:=~\int_0^{x^-(0,t,x)}[1+\mathbf R^2(0,y)]\,dy, \quad \text{and }\quad Y~:=~\int^0_{x^+(0,t,x)}[1+\mathbf S^2(0,y)]\,dy. \end{equation*} This implies \begin{equation}\label{XY} X_t-c(n_1)X_x=0,\quad Y_t+c(n_1)Y_x=0. \end{equation} Furthermore, for any smooth function $f$, by using \eqref{XY}, we obtain \begin{equation}\label{2.15} \begin{cases} f_t+c(n_1)f_x=(X_t+c(n_1)X_x)f_X=2c(n_1)X_x f_X,\\ f_t-c(n_1)f_x=(Y_t-c(n_1)Y_x)f_Y=-2c(n_1)Y_x f_Y. \end{cases} \end{equation} Now we choose new variables to avoid the blowup: \begin{eqnarray*} &&\displaystyle p=\frac{1+|\mathbf R|^2}{X_x},\qquad q=\frac{1+|\mathbf S|^2}{-Y_x}, \nonumber \\ &&\displaystyle\mathbf{L}=(l_1,l_2,l_3)=\frac{\mathbf R}{1+|\mathbf R|^2},\quad \mathbf m=(m_1,m_2,m_3)=\frac{\mathbf S}{1+|\mathbf S|^2}, \nonumber\\ &&\displaystyle h_1=\frac{1}{1+|\mathbf R|^2}, \quad h_2=\frac{1}{1+|\mathbf S|^2}.\nonumber \end{eqnarray*} With the above notations, these variables satisfy the following semilinear system, cf. 
\cite{CZZ}: \begin{equation}\label{2.17} \begin{cases} &\partial_Y l_i=\displaystyle\frac{q}{8c^3(n_1)}[(c^2(n_1)-\zeta_i)(h_1+h_2-2h_1h_2)-2(3c^2(n_1)-\zeta_i)\mathbf L\cdot \mathbf m]n_i\\ &\displaystyle\qquad\qquad+\frac{c'(n_1)}{4c^2(n_1)}l_1q(l_i-m_i),\\ &\partial_X m_i =\displaystyle\frac{p}{8c^3(n_1)}[(c^2(n_1)-\zeta_i)(h_1+h_2-2h_1h_2)-2(3c^2(n_1)-\zeta_i)\mathbf L\cdot \mathbf m]n_i\\ &\displaystyle\qquad\qquad-\frac{c'(n_1)}{4c^2(n_1)}m_1p(l_i-m_i),\\ &\partial_Y\mathbf n =\displaystyle \frac{q}{2c(n_1)}\mathbf m, \qquad (\text {or } \quad\partial_X\mathbf n =\frac{p}{2c(n_1)}\mathbf L ),\\ &\partial_Y h_1 =\displaystyle \frac{c'(n_1)}{4c^2(n_1)}ql_1(h_1- h_2),\qquad \partial_X h_2 = \frac{c'(n_1)}{4c^2(n_1)}pm_1(h_2- h_1),\\ &p_Y =\displaystyle -\frac{c'(n_1)}{4c^2(n_1)}pq(l_1 -m_1),\qquad q_X = \frac{c'(n_1)}{4c^2(n_1)}pq(l_1 -m_1), \end{cases} \end{equation} with $i=1,2,3$, $\zeta_1=\gamma$ and $\zeta_2=\zeta_3=\alpha$. Using \eqref{2.15} with $f=t$ or $f=x$, we obtain the equations \begin{equation}\label{2.18} t_X=\frac{ph_1}{2c(n_1)},\quad t_Y=\frac{qh_2}{2c(n_1)},\quad x_X=\frac{ph_1}{2},\quad x_Y=-\frac{qh_2}{2}. \end{equation} The initial line $t=0$ in the $(t,x)$ plane is transformed into the curve \begin{equation*} \gamma_0=\{(X,Y);~X+Y=0\}\subset\mathbb{R}^2 \end{equation*} in the $(X,Y)$ plane. 
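For the reader's convenience, let us indicate how \eqref{2.18} follows from \eqref{2.15}. Taking $f=t$, so that $f_t=1$ and $f_x=0$, the two identities in \eqref{2.15} give
\begin{equation*}
1=2c(n_1)X_x\,t_X=-2c(n_1)Y_x\,t_Y,
\end{equation*}
while the definitions of $p$, $q$, $h_1$, $h_2$ yield $X_x=\frac{1}{ph_1}$ and $Y_x=-\frac{1}{qh_2}$; this recovers the first two relations in \eqref{2.18}. The choice $f=x$, with $f_t=0$ and $f_x=1$, gives the last two relations in the same way.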
Along the curve $\gamma_0$, parameterized by $x\mapsto(\bar{X}(x),\bar{Y}(x)):=(x,-x)$, we assign the boundary data $(\bar{\mathbf n},\bar{\mathbf L}, \bar{\mathbf m},\bar{h}_1, \bar{h}_2,\bar{p},\bar{q})$ obtained by evaluating the definitions above at the initial data \eqref{ID}, that is, \begin{equation*}\label{2.19} \begin{split} &\bar{\mathbf n}=\mathbf n_0(x), \quad \bar{\mathbf L}=\mathbf R(0,x)\bar{h}_1, \quad\bar{\mathbf m}=\mathbf S(0,x)\bar{h}_2, \\ &\bar{h}_1=\frac{1}{1+|\mathbf R(0,x)|^2}, \quad \bar{h}_2=\frac{1}{1+|\mathbf S(0,x)|^2},\\ &\bar{p}=1+|\mathbf R(0,x)|^2, \quad\bar{q}=1+|\mathbf S(0,x)|^2, \end{split} \end{equation*} where \begin{equation*} \mathbf R(0,x)=\mathbf n_1+c(n_{10}(x))\mathbf n'_0(x), \quad \mathbf S(0,x)=\mathbf n_1-c(n_{10}(x))\mathbf n'_0(x). \end{equation*} The existence and uniqueness of global weak energy-conservative solutions of \eqref{vwl} has been established in \cite{CCD, CZZ}, by transforming the solution $\mathbf n(X,Y)$ of \eqref{2.17} back to $\mathbf n(t, x)$ in the original variables $(t, x)$: \begin{Lemma}[\cite{CCD, CZZ}]\label{Lemma 2.1} Let the generic condition \eqref{gencon} and initial data \eqref{ID} be satisfied. Then there exists a unique solution $(X,Y)\mapsto (\mathbf n, \mathbf L, \mathbf m, h_1, h_2, p, q, x,t)(X,Y)$ with $p,q>0$ to the system \eqref{2.17}--\eqref{2.18} with boundary data assigned along the line $\gamma_0$. Moreover, the set of points \begin{equation}\label{2.20} \big\{(t(X,Y),x(X,Y),\mathbf n(X,Y));~(X,Y)\in \mathbb{R}^2\big\} \end{equation} is the graph of the unique conservative solution $\mathbf n=\mathbf n(t,x)$ to the Cauchy problem \eqref{vwl}--\eqref{ID}. \end{Lemma} To continue, we introduce the following definitions. 
\begin{Definition}\label{def_gensin} A solution $\mathbf n=\mathbf n(x,t)$ of \eqref{vwl} is said to have {\bf generic singularities} for $t\in[0,T]$ if it admits a representation of the form \eqref{2.20}, where {\rm (i)} the functions $(\mathbf n, \mathbf L, \mathbf m, h_1, h_2, p, q, x,t)(X,Y)$ are $\mathcal{C}^\infty$, {\rm (ii)} for $t(X,Y)\in[0,T]$, the following generic conditions hold: \begin{equation*}\label{generic_con} \begin{cases} h_1=0, \ \mathbf L_X=\mathbf{0} \Longrightarrow \mathbf L_Y\neq \mathbf{0},\ \mathbf L_{XX}\neq \mathbf{0},\\ h_2=0, \ \mathbf{m}_Y=\mathbf{0} \Longrightarrow \mathbf{m}_X\neq \mathbf{0},\ \mathbf{m}_{YY}\neq \mathbf{0},\\ h_1=0, \ h_2=0 \Longrightarrow \mathbf{L}_X\neq\mathbf{0}, \ \mathbf{m}_Y\neq \mathbf{0}.\\ \end{cases}\end{equation*} \end{Definition} \begin{Definition}\label{def_repath} A path of initial data $\Gamma^0:\lambda\mapsto (\mathbf{n}_0^\lambda,\mathbf{n}_1^\lambda)$, $\lambda\in[0,1]$, is called a {\bf piecewise regular path} if the following conditions hold. {\rm (i)} There exists a continuous map $(X,Y,\lambda)\mapsto (\mathbf n, \mathbf L, \mathbf m, h_1, h_2, p, q, x,t)$ such that the semilinear system \eqref{2.17}--\eqref{2.18} holds for $\lambda\in[0,1]$, and the function $\mathbf{n}^\lambda(x,t)$ whose graph is \begin{equation*} \text{ Graph }(\mathbf n^\lambda)=\{(x,t,\mathbf n)(X,Y,\lambda);~(X,Y)\in \mathbb{R}^2\} \end{equation*} provides the conservative solution of \eqref{vwl} with initial data $\mathbf n^\lambda(x,0)=\mathbf n^\lambda_0(x),\ \mathbf n_t^\lambda(x,0)=\mathbf n_1^\lambda(x)$. 
{\rm (ii)} There exist finitely many values $0=\lambda_0<\lambda_1<\cdots<\lambda_N=1$ such that the map $(X,Y,\lambda)\mapsto(\mathbf n, \mathbf L, \mathbf m, h_1, h_2, p, q, x,t)$ is $\mathcal{C}^\infty$ for $\lambda\in(\lambda_{i-1},\lambda_i)$, $i=1,\cdots, N$, and the solution $\mathbf n^\lambda=\mathbf n^\lambda(x,t)$ has only generic singularities at time $t=0$. In addition, if for all $\lambda\in[0,1]\backslash\{\lambda_1,\cdots,\lambda_N\}$ the solution $\mathbf n^\lambda$ has only generic singularities for $t\in [0,T]$, then we say that the path of solutions $\Gamma^t: \lambda\mapsto (\mathbf n^\lambda,\mathbf n^\lambda_t)$ is {\bf piecewise regular} for $t\in[0,T]$. \end{Definition} The following result shows that the set of piecewise regular paths is dense. \begin{Corollary}\label{thm_repath} Assume the generic condition \eqref{gencon} holds. For any fixed $T>0$, let $\lambda\mapsto(\mathbf n^\lambda, \mathbf L^\lambda, \mathbf m^\lambda,$ $h_1^\lambda, h_2^\lambda, p^\lambda, q^\lambda, x^\lambda,t^\lambda)$, $\lambda\in[0,1]$, be a smooth path of solutions to the system \eqref{2.17}--\eqref{2.18}. Then there exists a sequence of paths of solutions $\lambda\mapsto(\mathbf n^\lambda_i, \mathbf L^\lambda_i, \mathbf m^\lambda_i,(h_1^\lambda)_i, (h_2^\lambda)_i, p^\lambda_i, q^\lambda_i, x^\lambda_i,$ $t^\lambda_i)$ such that {\rm (i)} For each $i\geq 1$, the path of the corresponding solutions of \eqref{vwl}, $\lambda\mapsto \mathbf{n}_i^\lambda$, is piecewise regular for $t\in[0,T]$ in the sense of Definition \ref{def_repath}. 
{\rm (ii)} For any bounded domain $\Sigma$ in the $(X,Y)$ space, the functions $(\mathbf n^\lambda_i, \mathbf L^\lambda_i, \mathbf m^\lambda_i,(h_1^\lambda)_i, (h_2^\lambda)_i, p^\lambda_i,$ $ q^\lambda_i, x^\lambda_i, t^\lambda_i)$ converge to $(\mathbf n^\lambda, \mathbf L^\lambda, \mathbf m^\lambda, h_1^\lambda, h_2^\lambda, p^\lambda, q^\lambda, x^\lambda,t^\lambda)$ uniformly in $\mathcal{C}^k([0,1]\times\Sigma)$, for every $k\geq 1$, as $i\to \infty$. \end{Corollary} The proof is very similar to the corresponding one in \cite{BC,CCD}, so we omit it and refer the reader to \cite{BC,CCD} for more details. \subsection{Tangent vectors in transformed coordinates} Now we derive an expression for the norm of tangent vectors \eqref{norm1} as a line integral in $X$-$Y$ coordinates. Let $\mathbf{n}(x,t)$ be a reference solution of \eqref{vwl} and let $\mathbf{n}^\varepsilon(x,t)$ be a family of perturbed solutions. In the $(X,Y)$ plane, let $(\mathbf{n}, \mathbf{L}, \mathbf{m}, h_1, h_2, p, q, x,t)$ and $( \mathbf{n}^\varepsilon, \mathbf{L}^\varepsilon, \mathbf{m}^\varepsilon, h_1^\varepsilon, h_2^\varepsilon, p^\varepsilon, q^\varepsilon, x^\varepsilon, t^\varepsilon)$ denote the corresponding smooth solutions of \eqref{2.17}--\eqref{2.18}, and consider perturbed solutions of the form \begin{equation*} ( n^\varepsilon_i, l^\varepsilon_i, m^\varepsilon_i, h_1^\varepsilon, h_2^\varepsilon, p^\varepsilon, q^\varepsilon, x^\varepsilon, t^\varepsilon)=(n_i, l_i, m_i, h_1, h_2, p, q, x,t)+\varepsilon(N_i, L_i, M_i,H_1, H_2, P, Q,\mathcal{X},\mathcal{T})+o(\varepsilon), \end{equation*} with $i=1,2,3$, $\mathbf{N}=(N_1,N_2,N_3)$, $\mathbf{M}=(M_1,M_2,M_3)$, and $\mathcal{L}=(L_1,L_2,L_3)$. 
Here we denote the curve in the $(X,Y)$ plane by \begin{equation*}\label{curve} \Lambda_\tau=\{(X,Y)\,|\,t(X,Y)=\tau\}=\{(X,Y(\tau, X));X\in\mathbb{R}\}=\{(X(\tau, Y),Y);Y\in\mathbb{R}\}\end{equation*} and the perturbed curve by $$ \Lambda_\tau^\varepsilon=\{(X,Y)\,|\,t^\varepsilon(X,Y)=\tau\}=\{(X,Y^\varepsilon(\tau, X));X\in\mathbb{R}\}=\{(X^\varepsilon(\tau, Y),Y);Y\in\mathbb{R}\}.$$ Since the coefficients of system \eqref{2.17}--\eqref{2.18} are smooth, the first order perturbations are well defined for $(X,Y)\in\mathbb{R}^2$ and satisfy a linearized system. In what follows, we express the terms $I_0$--$I_5$ of \eqref{norm1} in terms of $(\mathbf{N}, \mathcal{L}, \mathbf{M}, H_1, H_2, P, Q,\mathcal{X},\mathcal{T})$. First, we observe that $$t^\varepsilon\big(X,Y^\varepsilon(\tau,X)\big) =t^\varepsilon\big(X^\varepsilon(\tau,Y),Y\big)=\tau.$$ By the implicit function theorem, differentiating these identities with respect to $\varepsilon$ and using $t_X=\frac{ph_1}{2c}$, $t_Y=\frac{qh_2}{2c}$ from \eqref{2.18}, at $\varepsilon=0$ it holds that \begin{equation}\label{xvar0} \frac{\partial X^\varepsilon}{\partial\varepsilon}\Big|_{\varepsilon=0} =-\mathcal{T}\frac{2c}{ph_1}, \quad {\rm and}\quad \frac{\partial Y^\varepsilon}{\partial\varepsilon}\Big|_{\varepsilon=0}=-\mathcal{T}\frac{2c}{qh_2}. \end{equation} (1) The change in $x$ is computed by \begin{equation}\label{wXchange} \begin{split} w&=\displaystyle\lim_{\varepsilon\to 0}\frac{x^\varepsilon\big(X,Y^\varepsilon(\tau,X)\big) -x\big(X,Y(\tau,X)\big)}{\varepsilon}\\ &= \mathcal{X}\big(X,Y(\tau,X)\big)+x_Y\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0} =\big(\mathcal{X}+c\mathcal{T}\big)(X,Y(\tau,X)). 
\end{split} \end{equation} Similarly to \eqref{wXchange}, by \eqref{xvar0}, we have \begin{equation}\label{zYchange} \begin{split} z&=\displaystyle\lim_{\varepsilon\to 0}\frac{x^\varepsilon\big(X^\varepsilon(\tau,Y),Y\big) -x\big(X(\tau,Y),Y\big)}{\varepsilon}\\ &= \mathcal{X}\big(X(\tau,Y),Y\big)+x_X\cdot \frac{\partial X^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0} =\big(\mathcal{X}-c\mathcal{T}\big)(X(\tau,Y),Y). \end{split} \end{equation} (2) To see the change in $n_i$, observe that \eqref{2.17} and \eqref{xvar0} imply \begin{equation*} \begin{split} v_i+\partial_x n_i w&=\displaystyle\frac{d}{d\varepsilon}n_i^\varepsilon \big(X,Y^\varepsilon(\tau,X)\big)\Big|_{\varepsilon=0}\\ &= N_i\big(X,Y(\tau,X)\big)+\partial_Yn_i\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0} =\big(N_i-\frac{\mathcal{T} m_i}{h_2}\big)(X,Y(\tau,X)), \end{split} \end{equation*} with $i=1,2,3$, so that \begin{equation}\label{uXchange} v_i+\frac{R_iw-S_iz}{2c}=v_i+\partial_x n_i w+\frac{w-z}{2c}S_i=N_i(X,Y(\tau,X)), \end{equation} where we have used \eqref{R-S-eqn}. (3) Next we estimate the change in the base measure with density $1+\mathbf{R}^2$. By \eqref{2.17}, we obtain \begin{equation}\label{pXchange} \frac{d}{d\varepsilon}p^\varepsilon\big(X,Y^\varepsilon(\tau,X)\big)\Big|_{\varepsilon=0} = P+p_Y\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0}=P+\frac{c'\mathcal{T}p }{2c h_2}(l_1-m_1). 
\end{equation} Applying \eqref{2.17} again, we obtain the change in the base measure with density $\mathbf{R}^2$: \begin{equation}\label{R2change} \begin{split} &\frac{d}{d\varepsilon}\Big(\big(p^\varepsilon(1-h_1^\varepsilon)\big)\big(X,Y^\varepsilon(\tau,X)\big)\Big)\Big|_{\varepsilon=0}\\ &= \big(P+p_Y\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0}\big)(1-h_1)-p\big(H_1+\partial_Yh_1\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0}\big)\\ &=P(1-h_1)-H_1p+\frac{c'\mathcal{T}p}{2c}\Big(\frac{l_1-m_1}{h_2}+\frac{m_1h_1}{h_2}-l_1\Big). \end{split}\end{equation} As an immediate consequence of \eqref{pXchange} and \eqref{R2change}, the change in the base measure with density $1$ is obtained by subtracting \eqref{R2change} from \eqref{pXchange}: \begin{equation}\label{1change} \begin{split} h_1P+H_1p+\frac{c'\mathcal{T}p}{2c}\Big(l_1-\frac{m_1h_1}{h_2}\Big). \end{split}\end{equation} (4) Finally, for the change in the base measure with density $R_i$, it follows from \eqref{2.17} and \eqref{xvar0} that \begin{equation*} \begin{split} &\quad \frac{d}{d\varepsilon}\Big(p^\varepsilon l^\varepsilon_i \big(X,Y^\varepsilon(\tau,X)\big)\Big)\Big|_{\varepsilon=0}\\ &= p\big(L_i+\partial_Yl_i\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0}\big)+l_i\big(P+\partial_Y p\cdot \frac{\partial Y^\varepsilon}{\partial \varepsilon}\Big|_{\varepsilon=0}\big)\\ &=pL_i+l_iP-\frac{\mathcal{T}n_ip}{4c^2 h_2}[(c^2-\zeta_i)(h_1+h_2-2h_1h_2)-2(3c^2-\zeta_i)\mathbf{L}\cdot\mathbf{m}] +\frac{c'\mathcal{T}p}{2ch_2}(l_1m_i-l_im_1), \end{split}\end{equation*} with $i=1,2,3$, $\zeta_1=\gamma$ and $\zeta_2=\zeta_3=\alpha$. 
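For the reader's convenience, the subtraction $\eqref{pXchange}-\eqref{R2change}$ leading to \eqref{1change} can be carried out explicitly:
\begin{equation*}
\Big(P+\frac{c'\mathcal{T}p}{2ch_2}(l_1-m_1)\Big)-\Big(P(1-h_1)-H_1p+\frac{c'\mathcal{T}p}{2c}\Big(\frac{l_1-m_1}{h_2}+\frac{m_1h_1}{h_2}-l_1\Big)\Big)
=h_1P+H_1p+\frac{c'\mathcal{T}p}{2c}\Big(l_1-\frac{m_1h_1}{h_2}\Big).
\end{equation*}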
Notice that $$(1+\mathbf{R}^2)dx=pdX,\quad (1+\mathbf{S}^2)dx=-qdY.$$ Based on the estimates \eqref{wXchange}--\eqref{1change}, we deduce that the weighted norm \eqref{norm1} can be rewritten as a line integral over the curve $\Lambda_\tau$ defined in \eqref{curve}: \begin{equation}\label{normXY} \|(\mathbf{v}, \mathbf{r^*},\mathbf{s^*}, w, z)\|_{(\mathbf{n},\mathbf{R},\mathbf{S})}=\sum_{j=0}^5\kappa_j\int_{\Lambda_\tau} \Big(|J_j^-|\mathcal{V}^-dX+|J_j^+|\mathcal{V}^+dY\Big), \end{equation} where \begin{eqnarray*} J_0^-&=&\big(\mathcal{X}+c\mathcal{T}\big)ph_1,\\ J_1^-&=&\big(\mathcal{X}+c\mathcal{T}\big)p,\\ J_2^-&=&\sum_{i=1}^3N_ip,\\ J_3^-&=&h_1P+pH_1+\frac{c'\mathcal{T}p}{2c }l_1,\\ J_4^-&=&\sum_{i=1}^3 \Big(l_iP+pL_i-\frac{n_i\mathcal{T}p}{4c^2}(c^2-\zeta_i)(1-h_1) \Big),\\ J_5^-&=&(1-h_1) P-pH_1, \end{eqnarray*} and \begin{eqnarray*} J_0^+&=&\big(\mathcal{X}-c\mathcal{T}\big)qh_2,\\ J_1^+&=&\big(\mathcal{X}-c\mathcal{T}\big)q,\\ J_2^+&=&\sum_{i=1}^3N_iq,\\ J_3^+&=&h_2Q+qH_2+\frac{c'\mathcal{T}q}{2c }m_1,\\ J_4^+&=&\sum_{i=1}^3 \Big(m_iQ+qM_i-\frac{n_i\mathcal{T}q}{4c^2}(c^2-\zeta_i)(1-h_2) \Big),\\ J_5^+&=&(1-h_2)Q-qH_2. \end{eqnarray*} Furthermore, it is straightforward to check that the integrands $J_j^\pm$ are all smooth, for $j=0,1,\cdots,5$. \subsection{Length of piecewise regular paths}\label{sub_piecewise} Now we define the weighted length of a piecewise regular path. 
\begin{Definition}\label{def_piece} The length $\|\Gamma^t\|$ of the piecewise regular path $\Gamma^t: \lambda\mapsto \big(\mathbf{n}^\lambda(t),\mathbf{n}^\lambda_t(t)\big)$ is defined as \begin{equation}\label{piecedef} \|\Gamma^t\|=\inf_{\Gamma^t}\int_0^1\Big\{\sum_{j=0}^5\kappa_j\int_{\Lambda_t^\lambda} \Big(|(J_j^-)^\lambda|\mathcal{V}^-dX+|(J_j^+)^\lambda|\mathcal{V}^+dY\Big)\Big\}\,d\lambda, \end{equation} where the infimum is taken over all piecewise smooth relabelings of the $X$-$Y$ coordinates and $\Lambda_\tau^\lambda:=\{(X,Y);t^\lambda(X,Y)=\tau\}$. \end{Definition} We now state the main theorem of this section, which shows that the appearance of generic singularities does not affect the Lipschitz property of this metric. \begin{Theorem}\label{thm_length} Let $T>0$ be given and consider a path of solutions $\lambda\mapsto \big(\mathbf{n}^\lambda(t),\mathbf{n}^\lambda_t(t)\big)$ of \eqref{vwl} which is piecewise regular for $t\in[0,T]$ and whose total energy is bounded by some constant $E>0$. Then there exist constants $\kappa_0,\kappa_1,\cdots,\kappa_5$ in \eqref{piecedef} and $C>0$ such that, for any $0\leq t\leq T$, it holds that \begin{equation*}\label{pieceest} \|\Gamma^t\|\leq C\|\Gamma^0\|, \end{equation*} where the constant $C$ depends only on $T$ and $E$. \end{Theorem} The proof is similar to that in \cite{BC2015}, and we omit it here for brevity. \section{Construction of the geodesic distance for general weak solutions} Our final goal is to construct a geodesic distance under which the general weak solutions obtained in Theorems \ref{CZZthm}--\ref{ECthm} are Lipschitz continuous. The main idea is to extend the metric from the generic piecewise smooth solutions of Theorem \ref{thm_length} to general weak solutions by taking limits of generic solutions. 
To this end, we recall the generic regularity theorem in \cite{CCD}: there exists an open dense set of initial data $\mathcal{D}\subset \big(\mathcal{C}^3(\mathbb{R})\cap H^1(\mathbb{R})\big)\times \big(\mathcal{C}^2(\mathbb{R})\cap L^2(\mathbb{R})\big)$ such that, for $(n_{i0},n_{i1})\in\mathcal{D}$, the conservative solution $\mathbf{n}=(n_1,n_2,n_3)$ of \eqref{vwl} has only generic singularities. The structure of conservative solutions thus provides the ideal tool to construct a distance on the set $$\mathcal{D}^\infty:=\mathcal{C}_0^\infty\cap\mathcal{D},$$ by optimizing over all piecewise regular paths connecting two solutions of \eqref{vwl}. As a consequence, we extend our distance from the space $\mathcal{D}^\infty$ to a larger domain by using the semilinear system \eqref{2.17}--\eqref{2.18} and Theorem \ref{thm_length}. We first introduce some definitions. For future use, we begin by introducing the subset of all data with energy bounded by any fixed constant $E>0$, specifically, \begin{equation*} \Omega:=\{(n_i,n_{it})\in H^1(\mathbb{R})\times L^{2}(\mathbb{R}); ~\mathcal{E}(\mathbf{n},\mathbf{n}_t):=\int_\mathbb{R}[\mathbf{n}_t^2+c(n_1) \mathbf{n}_x^2]\,dx \leq E\}. 
\end{equation*} \begin{Definition}\label{def_piecepath} For solutions with initial data in $\mathcal{D}^\infty\cap~\Omega$, we define the geodesic distance $d\big((\mathbf{n},\mathbf{n}_t),$ $ (\hat{\mathbf{n}},\hat{\mathbf{n}}_t)\big)$ as \begin{equation*} \begin{split} d\big((\mathbf{n},\mathbf{n}_t),(\hat{\mathbf{n}},\hat{\mathbf{n}}_t)\big):=\inf \{ \|\Gamma^t\|: &\ \Gamma^t \text{ is a piecewise regular path}, \ \Gamma^t(0)=(\mathbf{n},\mathbf{n}_t),\\ & \ \Gamma^t(1)=(\hat{\mathbf{n}},\hat{\mathbf{n}}_t), \ \mathcal{E}(\mathbf{n}^\lambda,\mathbf{n}_t^\lambda)\leq E \text{ for all } \lambda\in[0,1]\}, \end{split}\end{equation*} for any time $t$, where the infimum is taken over the weighted lengths of all piecewise regular paths $\lambda\mapsto (\mathbf{n}^\lambda,\mathbf{n}_t^\lambda)$ which connect $(\mathbf{n},\mathbf{n}_t)$ with $(\hat{\mathbf{n}},\hat{\mathbf{n}}_t)$. \end{Definition} The quantity $d(\cdot,\cdot)$ indeed defines a distance because, after a suitable re-parameterization, the concatenation of two piecewise regular paths is still a piecewise regular path. With this definition in hand, the distance for general weak solutions is defined as follows. \begin{Definition}\label{def_weak} Let $(\mathbf{n}_0,\mathbf{n}_1)$ and $(\hat{\mathbf{n}}_0,\hat{\mathbf{n}}_1)$ be two initial data as required in the existence and uniqueness Theorems \ref{CZZthm} and \ref{ECthm}. 
Let $\mathbf{n}$ and $\hat{\mathbf{n}}$ denote the corresponding global weak solutions. Then, for any time $t$, we define \begin{equation*} d\big((\mathbf{n},\mathbf{n}_t),(\hat{\mathbf{n}},\hat{\mathbf{n}}_t)\big):=\lim_{k\rightarrow \infty}d\big((\mathbf{n}^k,\mathbf{n}_t^k),(\hat{\mathbf{n}}^k,\hat{\mathbf{n}}_t^k)\big), \end{equation*} for any two sequences of solutions $(\mathbf{n}^k,\mathbf{n}_t^k)$ and $(\hat{\mathbf{n}}^k,\hat{\mathbf{n}}_t^k)$ with initial data in $\mathcal{D}^\infty \cap \Omega$ satisfying, for $i=1,2,3$, \[ \|(n^k_{i0}-n_{i0}, \hat{n}^k_{i0}-\hat{n}_{i0})\|_{H^1}\rightarrow 0,\quad\hbox{and}\quad \|(n^k_{i1}-n_{i1}, \hat{n}^k_{i1}-\hat{n}_{i1})\|_{L^2}\rightarrow 0. \] \end{Definition} Thanks to the analysis above, we now have all the ingredients for a proof of the main theorem, Theorem \ref{thm_metric}. First, we claim that this metric is well defined. In fact, since solutions with initial data in $\mathcal{D}^\infty\cap \Omega$ depend Lipschitz continuously on their data, the limit in Definition \ref{def_weak} is independent of the choice of sequences. On the other hand, since $\mathcal{D}^\infty\cap \Omega$ is a dense set in the solution space, one can easily extend the Lipschitz metric to general solutions. As a consequence of Theorem \ref{thm_length}, we directly deduce the result of Theorem \ref{thm_metric}. \vskip 0.2cm We end this section with the relations between our distance function $d$ and other distances determined by various norms. 
\begin{Proposition}[Comparison with the Sobolev metric]\label{Prop_sob} For any two finite energy initial data $(\mathbf{n}_0,$ $\mathbf{n}_1)$ and $(\hat{\mathbf{n}}_0, \hat{\mathbf{n}}_1) \in \mathcal D^\infty$, one has \begin{equation*} d\big((\mathbf{n}_0,\mathbf{n}_1),(\hat{\mathbf{n}}_0, \hat{\mathbf{n}}_1)\big)\leq C\sum_{i=1}^3\big(\|n_{i0}-\hat{n}_{i0}\|_{H^1}+\|n_{i0}-\hat{n}_{i0}\|_{W^{1,1}}+ \|n_{i1}-\hat{n}_{i1}\|_{L^1}+\|n_{i1}-\hat{n}_{i1}\|_{L^2}\big), \end{equation*} with a constant $C$ depending only on the initial energy. \end{Proposition} \begin{proof} In order to get an upper bound for this optimal transport metric, a natural choice is to take the shifts $w=z=0$ in \eqref{norm1}; in this way, the norm becomes \begin{equation} \label{norm_rec} \begin{split} &\quad\|(\mathbf{v}^\lambda, (\mathbf{r}^*)^\lambda, (\mathbf{s}^*)^\lambda, w^\lambda, z^\lambda)\|_{(\mathbf{n}^\lambda,\mathbf{R}^\lambda,\mathbf{S}^\lambda)}\\ &=\kappa_2\sum_{i=1}^3\int_\mathbb{R} \big|v_i^\lambda\big|\big[(1+(\mathbf{R}^\lambda)^2)\, (\mathcal{V}^-)^\lambda+(1+(\mathbf{S}^\lambda)^2)\, (\mathcal{V}^+)^\lambda\big]\,dx\\ &\quad+\kappa_3\sum_{i=1}^3\int_\mathbb{R} \big[|r_i^\lambda|\, (\mathcal{V}^-)^\lambda +|s_i^\lambda|\, (\mathcal{V}^+)^\lambda\big]\,dx \\ &\quad+\kappa_6\int_\mathbb{R} \Big[ \big|2\mathbf{R}^\lambda\cdot\mathbf{r}^\lambda\big|\, (\mathcal{V}^-)^\lambda+ \big|2\mathbf{S}^\lambda\cdot\mathbf{s}^\lambda\big|\, (\mathcal{V}^+)^\lambda\Big]\,dx. 
\end{split} \end{equation} For $\lambda\in[0,1]$, consider the path $(\mathbf{n}^\lambda_0,\mathbf{n}^\lambda_1)$ connecting $(\mathbf{n}_0,\mathbf{n}_1)$ and $(\hat{\mathbf{n}}_0, \hat{\mathbf{n}}_1)$ which satisfies \begin{equation*} \mathbf{R}^\lambda=\lambda \mathbf{R}+(1-\lambda)\hat{\mathbf{R}},\qquad \mathbf{S}^\lambda=\lambda \mathbf{S}+(1-\lambda)\hat{\mathbf{S}}. \end{equation*} Indeed, by \eqref{u_rec} below, one can easily verify that the above equations recover a unique path $(\mathbf{n}^\lambda, \mathbf{n}^\lambda_t)$. Moreover, the energy $\int_\mathbb{R} (\mathbf{R}^\lambda)^2+(\mathbf{S}^\lambda)^2\,dx$ is bounded by the energies of $(\mathbf{n}_0,\mathbf{n}_1)$ and $(\hat{\mathbf{n}}_0, \hat{\mathbf{n}}_1)$. To estimate the right hand side of \eqref{norm_rec}, we first observe that \begin{equation}\label{r_rec} \mathbf{r}^\lambda=\frac{d}{d\lambda} \mathbf{R}^\lambda=\mathbf{R}-\hat{\mathbf{R}} ,\quad {\rm and} \quad \mathbf{s}^\lambda=\frac{d}{d\lambda} \mathbf{S}^\lambda=\mathbf{S}-\hat{\mathbf{S}}. \end{equation} Next, from the definition of $\mathbf{R}, \mathbf{S}$ in \eqref{R-S}, it follows that \begin{equation}\label{u_rec} \mathbf{n}^\lambda_x=\frac{\mathbf{R}^\lambda-\mathbf{S}^\lambda}{2c(n^\lambda_1)}. \end{equation} Since the right hand side is Lipschitz in $\mathbf{n}^\lambda$ and $\mathbf{n}^\lambda$ has compact support, one can easily prove the existence and uniqueness of $\mathbf{n}^\lambda(x)$. 
Thus $\mathbf{v}^\lambda=\frac{d}{d\lambda} \mathbf{n}^\lambda$ satisfies \[ \mathbf{v}^\lambda_x=\frac{\mathbf{r}^\lambda-\mathbf{s}^\lambda}{2c(n_1^\lambda)} -\frac{\mathbf{R}^\lambda-\mathbf{S}^\lambda}{2c^2(n_1^\lambda)}c'(n_1^\lambda)v^\lambda_1, \] whence, by virtue of \eqref{r_rec} and after a straightforward manipulation, we arrive at the estimate for $\mathbf{v}^\lambda$: \begin{equation}\label{v_rec} |v_i^\lambda|\leq K\, \sum_{j=1}^3(\| R_j-\hat {R}_j\|_{L^1}+\| S_j-\hat{S}_j\|_{L^1}) \end{equation} for some constant $K$ and $i=1,2,3$. Substituting \eqref{r_rec}--\eqref{v_rec} into \eqref{norm_rec}, we obtain the desired conclusion of Proposition \ref{Prop_sob}. \end{proof} Actually, thanks to our main Theorem \ref{thm_metric}, this proposition also shows that, for any $t\geq0$, \begin{equation*} d\big((\mathbf{n},\mathbf{n}_t)(t),(\hat{\mathbf{n}}, \hat{\mathbf{n}}_t)(t)\big)\leq C\sum_{i=1}^3\big(\|n_{i0}-\hat{n}_{i0}\|_{H^1}+\|n_{i0}-\hat{n}_{i0}\|_{W^{1,1}}+ \|n_{i1}-\hat{n}_{i1}\|_{L^1}+\|n_{i1}-\hat{n}_{i1}\|_{L^2}\big). \end{equation*} \begin{Proposition}[Comparison with the $L^1$ metric] \label{Prop_L1} Let $\mathbf{n}(t)$, $\hat{\mathbf{n}}(t)$ be the conservative solutions obtained in Theorems \ref{CZZthm} and \ref{ECthm} with initial data $n_{i0},\hat{n}_{i0}\in H^1(\mathbb{R})\cap L^1(\mathbb{R})$ and $n_{i1},\hat{n}_{i1}\in L^2(\mathbb{R})$, $i=1,2,3$. Then there exists a constant $C$, depending only on the upper bound for the total energy, such that \begin{equation*}\label{L1} \sum_{i=1}^3\|n_i-\hat{n}_i\|_{L^1}\leq C\cdot d\Big((\mathbf{n},\mathbf{n}_t)(t),(\hat{\mathbf{n}}, \hat{\mathbf{n}}_t)(t)\Big). 
\end{equation*}
\end{Proposition}
\begin{proof}
Suppose that $\Gamma^t:\lambda\mapsto \big(\mathbf{n}^\lambda(t),\mathbf{n}^\lambda_t(t)\big)$ is a regular path connecting $\mathbf{n}(t)$ with $\hat{\mathbf{n}}(t)$. It is obvious that
\begin{equation}\label{L11}
\begin{split}
|v_i|&=\Big|v_i+\frac{R_iw-S_iz}{2c}-\frac{R_iw-S_iz}{2c}\Big|\leq \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|+\Big|\frac{R_iw}{2c}\Big| +\Big|\frac{-S_iz}{2c}\Big|\\
&\leq \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|+\frac{|w|(1+\mathbf{R}^2)}{4c} +\frac{|z|(1+\mathbf{S}^2)}{4c}.
\end{split}\end{equation}
Recalling Definition \ref{def_weak}, we conclude from \eqref{norm1} and \eqref{L11} that
\begin{equation*}
\begin{split}
d\big((\mathbf{n},\mathbf{n}_t)(t),(\hat{\mathbf{n}}, \hat{\mathbf{n}}_t)(t)\big)&\geq C\inf_{\Gamma^t}\int_0^1\sum_{i=1}^3\int_{\mathbb{R}} |v_i^\lambda|\,dx\,d\lambda\\
&=C\inf_{\Gamma^t}\int_0^1\sum_{i=1}^3\int_{\mathbb{R}} \big|\frac{d n_i^\lambda}{d\lambda}\big|\,dx\,d\lambda\\
&\geq C\sum_{i=1}^3\|n_i-\hat{n}_i\|_{L^1}.
\end{split}
\end{equation*}
The proof of Proposition \ref{Prop_L1} is complete.
\end{proof}
\begin{Proposition}[Comparison with the Kantorovich-Rubinstein metric]\label{Prop_KR}
Let $\mathbf{n}(t),$ $\hat{\mathbf{n}}(t)$ be the conservative solutions obtained in Theorem \ref{CZZthm} and Theorem \ref{ECthm} with initial data $n_{i0},\hat{n}_{i0}\in H^1(\mathbb{R})$ and $n_{i1},\hat{n}_{i1}\in L^2(\mathbb{R})$, $i=1,2,3$. Then there exists a constant $C$, depending only on the upper bound of the total energy, such that
\begin{equation}\label{KR}
\sup_{\|f\|_{\mathcal{C}^1}\leq 1}\left|\int f \,d\mu-\int f\,d\hat{\mu}\right|\leq C \cdot d\Big((\mathbf{n},\mathbf{n}_t)(t),(\hat{\mathbf{n}}, \hat{\mathbf{n}}_t)(t)\Big),
\end{equation}
where $\mu,\hat{\mu}$ are the measures with densities $\mathbf{n}_t^2+c^2(n_1) \mathbf{n}_x^2$ and $\hat{\mathbf{n}}_t^2+c^2(\hat{n}_1) \hat{\mathbf{n}}_x^2$ with respect to the Lebesgue measure. The quantity on the left-hand side of \eqref{KR} is usually called a Kantorovich-Rubinstein distance, which is equivalent to a Wasserstein distance by a duality theorem \cite{V}.
\end{Proposition}
\begin{proof}
Let $\Gamma^t:\lambda\mapsto \big(\mathbf{n}^\lambda(t),\mathbf{n}^\lambda_t(t)\big)$ be a regular path connecting $\mathbf{n}(t)$ with $\hat{\mathbf{n}}(t)$. For any function $f$ with $\|f\|_{\mathcal{C}^1}\leq 1$, let $\mu^\lambda$ be the measure with density $(\mathbf{n}^\lambda_t)^2+c^2(n_1^\lambda) (\mathbf{n}^\lambda_x)^2=\frac{1}{2}((\mathbf{R}^\lambda)^2+(\mathbf{S}^\lambda)^2)$ with respect to the Lebesgue measure.
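Before carrying out the computation, we recall the duality theorem invoked in the statement of the proposition (see \cite{V}): for positive measures $\mu$, $\hat{\mu}$ on $\mathbb{R}$ with the same finite total mass,
\begin{equation*}
\sup_{\mathrm{Lip}(f)\leq 1}\left|\int f\,d\mu-\int f\,d\hat{\mu}\right|
= W_1(\mu,\hat{\mu}),
\end{equation*}
where $W_1$ denotes the $1$-Wasserstein distance. Since $\|f\|_{\mathcal{C}^1}\leq 1$ implies $\mathrm{Lip}(f)\leq 1$, the supremum on the left-hand side of \eqref{KR} is dominated by $W_1(\mu,\hat{\mu})$.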
Then a direct computation gives rise to
\begin{equation*}
\begin{split}
&\left|\int_0^1\frac{d}{d\lambda}\int f\,d\mu^\lambda\,d\lambda \right| \leq C\int_0^1\int_{\mathbb{R}} \Big(|w^\lambda|(1+(\mathbf{R}^\lambda)^2)+|z^\lambda|(1+(\mathbf{S}^\lambda)^2)\Big)\,dx\,d\lambda\\
&\quad+\int_0^1\int_{\mathbb{R}}|f|\cdot\Big| 2\mathbf{R}^\lambda\cdot(\mathbf{r}^\lambda+\mathbf{R}^\lambda_xw^\lambda)+(\mathbf{R}^\lambda)^2w_x^\lambda +2\mathbf{S}^\lambda\cdot(\mathbf{s}^\lambda+\mathbf{S}^\lambda_xz^\lambda)+(\mathbf{S}^\lambda)^2z_x^\lambda\Big|\,dx\,d\lambda\\
&\leq C\int_0^1\int_{\mathbb{R}} \Big\{ |w^\lambda|(1+(\mathbf{R}^\lambda)^2)+|z^\lambda|(1+(\mathbf{S}^\lambda)^2)\\
&\quad\qquad\qquad+\big|2\mathbf{R}^\lambda\cdot(\mathbf{r}^*)^\lambda+(\mathbf{R}^\lambda)^2w_x^\lambda +\frac{c'(w^\lambda-z^\lambda)}{4c^2}(\mathbf{R}^\lambda)^2S_1^\lambda\big| \\
&\quad\qquad\qquad+\big|2\mathbf{S}^\lambda(\mathbf{s}^*)^\lambda+(\mathbf{S}^\lambda)^2z^\lambda_x+\frac{c'(w^\lambda-z^\lambda)}{4c^2} (\mathbf{S}^\lambda)^2R_1^\lambda\big|\Big\}\,dx\,d\lambda.
\end{split}\end{equation*}
This yields the desired conclusion of Proposition \ref{Prop_KR} immediately.
\end{proof}
\begin{appendices}
\section{The proof of Lemma \ref{lem_est}} \label{app}
We now give a detailed proof of \eqref{sum}, which is the key estimate in the proof of Lemma \ref{lem_est}. We leave it to the appendix since it is lengthy.
\begin{proof}
The goal of the forthcoming computations is to validate the estimate \eqref{sum}. For the sake of clarity, we divide the argument into six steps.
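Throughout the six steps, the estimates share a common schematic form: differentiating the weighted integral $I_k=\int_\mathbb{R}\big(J_k^-\mathcal{V}^-+J_k^+\mathcal{V}^+\big)\,dx$ in time and using the uniform bounds \eqref{Westimate} on the weights yields, schematically,
\begin{equation*}
\frac{d}{dt}I_k \;\leq\; C\sum_{j}\int_\mathbb{R}\Big((1+|\mathbf{S}|^{p})J_j^-\mathcal{V}^-
+(1+|\mathbf{R}|^{p})J_j^+\mathcal{V}^+\Big)\,dx
\;+\;G(t)\,I_k\;-\;c_0\int_\mathbb{R}\Big(\mathbf{S}^2J_k^-\mathcal{V}^-+\mathbf{R}^2J_k^+\mathcal{V}^+\Big)\,dx,
\end{equation*}
with $p\in\{1,2\}$ and a finite set of indices $j$ depending on the step. When the six estimates are summed, the good (negative) terms produced by the weights absorb the quadratic interaction terms.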
\paragraph{\bf Step 1.} To estimate the time derivative of $I_0$, we first write the first equation of \eqref{wz} in the form
\begin{equation*}\label{wt}
\displaystyle w_t-(cw)_x=-c'(v_1+ \frac{R_1w-S_1z}{2c})+\frac{c'}{c}S_1w -\frac{c'}{2c}(S_1z+R_1w).
\end{equation*}
Now, from the uniform bounds \eqref{Westimate} on the weights, we easily find that
\begin{equation}\label{w11}
\begin{split}
&\frac{d}{dt}\int_\mathbb{R}J_0^-\mathcal{V}^-\,dx = \frac{d}{dt}\int_\mathbb{R}|w|\mathcal{V}^-\,dx \\
&\leq C\int_\mathbb{R} |w|(|R_1|+|S_1|)\mathcal{V}^-\,dx+C\int_\mathbb{R} |z||S_1|\mathcal{V}^+\,dx+ C\int_\mathbb{R} \Big|v_1+\frac{R_1w-S_1z}{2c}\Big|\mathcal{V}^-\,dx\\
&\quad+G(t)\int_\mathbb{R} |w|\mathcal{V}^-\,dx-2c_0\int_\mathbb{R} |w|\mathbf{S}^2\mathcal{V}^-\,dx.
\end{split}
\end{equation}
Similarly to the estimate \eqref{w11}, we can also bound the time derivative of $\int_\mathbb{R}J_0^+\mathcal{V}^+\,dx$. Hence, it holds that
\begin{equation}\label{I0est}
\begin{split}
\frac{d}{dt}I_0 &=\frac{d}{dt}\int_\mathbb{R} \Big(J_0^-\mathcal{V}^-+J_0^+\mathcal{V}^+\Big)\,dx\\
&\leq C\sum_{k=1,2}\int_\mathbb{R}\Big((1+|\mathbf{S}|)J_k^-\mathcal{V}^- +(1+|\mathbf{R}|)J_k^+\mathcal{V}^+\Big)\,dx\\
&\quad+G(t)I_0-2c_0\int_\mathbb{R} \Big(\mathbf{S}^2J_0^-\mathcal{V}^-+\mathbf{R}^2J_0^+\mathcal{V}^+\Big)\,dx.
\end{split}
\end{equation}
\paragraph{\bf Step 2.} Next, we estimate the time derivative of $I_1$. By \eqref{R-S-eqn} and the first equation of \eqref{balance}, we have
\begin{equation}\label{1+R}
(1+\mathbf{R}^2)_t-[c(1+\mathbf{R}^2)]_x=\frac{c'}{2c} (\mathbf{R}^2S_1-R_1\mathbf{S}^2-R_1 +S_1).
\end{equation}
This together with \eqref{wz} yields
\begin{equation}\label{wt1}
\begin{split}
&\big[w(1+\mathbf{R}^2)\big]_t-\big[cw(1+\mathbf{R}^2)\big]_x\\
&=(w_t-cw_x)(1+\mathbf{R}^2) +w[(1+\mathbf{R}^2)_t-\big(c(1+\mathbf{R}^2)\big)_x]\\
&=-c'\big(v_1+ \frac{R_1-S_1}{2c}w\big)(1+\mathbf{R}^2)+\frac{c'w}{2c}\big (\mathbf{R}^2S_1-R_1\mathbf{S}^2-R_1 +S_1\big)\\
&=-c'\big(v_1+ \frac{R_1w-S_1z}{2c}\big)(1+\mathbf{R}^2)+\frac{c'w}{2c}\big (2\mathbf{R}^2S_1-R_1\mathbf{S}^2-R_1 +2S_1\big)-\frac{c'z}{2c}S_1(1+\mathbf{R}^2).
\end{split}
\end{equation}
Combining \eqref{Westimate} and \eqref{wt1}, we obtain
\begin{equation*}
\begin{split}
&\frac{d}{dt}\int_\mathbb{R}J_1^-\mathcal{V}^-\,dx = \frac{d}{dt}\int_\mathbb{R}|w|(1+\mathbf{R}^2)\mathcal{V}^-\,dx \\
&\leq C\int_\mathbb{R} |w|(|R_1\mathbf{S}^2|+|\mathbf{R}^2S_1|+|R_1|+|S_1|)\mathcal{V}^-\,dx\\
&\quad+C\int_\mathbb{R} |z|(|S_1|+|\mathbf{R}^2S_1|)\mathcal{V}^+\,dx+C\int_\mathbb{R} \Big|v_1+\frac{R_1w-S_1z}{2c}\Big|(1+\mathbf{R}^2)\mathcal{V}^-\,dx\\
&\quad+G(t)\int_\mathbb{R} |w|(1+\mathbf{R}^2)\mathcal{V}^-\,dx-2c_0\int_\mathbb{R} |w|(1+\mathbf{R}^2)\mathbf{S}^2\mathcal{V}^-\,dx.
\end{split}
\end{equation*}
In a similar way, we can estimate the remaining term of $I_1$. We thus conclude from Young's inequality that
\begin{equation}\label{I1est}
\begin{split}
\frac{d}{dt}I_1 \leq& C\sum_{k=1,2}\int_\mathbb{R}\Big((1+|\mathbf{S}|)J_k^-\mathcal{V}^- +(1+|\mathbf{R}|)J_k^+\mathcal{V}^+\Big)\,dx+ C\int_\mathbb{R}\Big(\mathbf{S}^2J_0^-\mathcal{V}^-+\mathbf{R}^2J_0^+\mathcal{V}^+\Big)\,dx\\
& +G(t)I_1-c_0\int_\mathbb{R} \Big(\mathbf{S}^2J_1^-\mathcal{V}^- +\mathbf{R}^2J_1^+\mathcal{V}^+\Big)\,dx.
\end{split}
\end{equation}
\paragraph{\bf Step 3.} For the time derivative of $I_2$, we first use \eqref{vx} and \eqref{vt} to derive the equation for the first order perturbation $\mathbf{v}$ as
\begin{equation}\label{vequ}
\mathbf{v}_t-c\mathbf{v}_x=\mathbf{s}+ \frac{c'}{2c}(\mathbf{R}-\mathbf{S})v_1.
\end{equation}
On the other hand, it follows from \eqref{R-S-eqn} and \eqref{wz} that
\begin{equation}\label{Rwt}
\begin{split}
&\Big[\frac{R_iw-S_iz}{2c}\Big]_t -c\Big[\frac{R_iw-S_iz}{2c}\Big]_x\\
&=\frac{w}{2c}(\partial_t R_i-c\partial_xR_i) -\frac{z}{2c}(\partial_t S_i+c\partial_xS_i)+z\partial_xS_i +\frac{R_i}{2c}(w_t-cw_x)\\
&\quad-\frac{S_i}{2c}(z_t+cz_x)+S_iz_x +(R_iw-S_iz)\big[(\frac{1}{2c})_t-c(\frac{1}{2c})_x\big]\\
&=\frac{n_i}{8c^3}\Big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big](w-z)+z\partial_xS_i+S_iz_x\\
&\quad-\frac{c'}{2c}\big(v_1+\frac{R_1w-S_1z}{2c} \big)(R_i+S_i) +\frac{c'}{4c^2}(R_1-S_1)(R_iw-S_iz),
\end{split}
\end{equation}
for $i=1,2,3$ and $\zeta_1=\gamma$, $\zeta_2=\zeta_3=\alpha$. Hence, combining \eqref{1+R}, \eqref{vequ} and \eqref{Rwt}, we obtain
\begin{equation*}\label{Rwt1}
\begin{split}
&\Big[\big(v_i+\frac{R_iw-S_iz}{2c}\big)(1+\mathbf{R}^2)\Big]_t -\Big[c\big(v_i+\frac{R_iw-S_iz}{2c}\big)(1+\mathbf{R}^2)\Big]_x\\
&=\Big[(\partial_tv_i-c\partial_xv_i) +\big(\frac{R_iw-S_iz}{2c}\big)_t -c\big(\frac{R_iw-S_iz}{2c}\big)_x\Big](1+\mathbf{R}^2)\\
&\quad+\big(v_i+\frac{R_iw-S_iz}{2c}\big) \big[(1+\mathbf{R}^2)_t-\big(c(1+\mathbf{R}^2)\big)_x\big]\\
&=(1+\mathbf{R}^2)\Big[s^*_i+S_i(z_x+\frac{c'}{4c^2}(w-z)R_1) -\frac{c'}{c}S_i\big(v_1+\frac{R_1w-S_1z}{2c}\big)\Big] \\
&\quad+\frac{n_i}{8c^3}(c^2-\zeta_i)(w-z)(1+\mathbf{R}^2)\mathbf{S}^2 +\frac{c'}{2c}\big(v_i+\frac{R_iw-S_iz}{2c}\big)(\mathbf{R}^2S_1- R_1\mathbf{S}^2-R_1+S_1).
\end{split}
\end{equation*}
This together with the uniform bounds on the weights \eqref{Westimate} implies that
\begin{equation*}\label{J2est}
\begin{split}
&\frac{d}{dt}\int_\mathbb{R}J_2^-\mathcal{V}^-\,dx = \frac{d}{dt}\sum_{i=1}^3\int_\mathbb{R}\Big|v_i+\frac{R_iw-S_iz}{2c}\Big|(1+\mathbf{R}^2)\mathcal{V}^-\,dx \\
&\leq C\sum_{i=1}^3\int_\mathbb{R} \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|\big[(|R_1|+|S_1|+|S_i|+|\mathbf{R}^2S_1|+|\mathbf{R}^2S_i|)\mathcal{V}^- +|R_1\mathbf{S}^2|\mathcal{V}^+\big]\,dx\\
&\quad+C\int_\mathbb{R} \Big|s^*_i+S_i(z_x+\frac{c'}{4c^2}(w-z)R_1)\Big|(1+\mathbf{R}^2)\mathcal{V}^+\,dx +C\int_\mathbb{R} |w|(1+\mathbf{R}^2)\mathbf{S}^2\mathcal{V}^-\,dx\\
&\quad+C\int_\mathbb{R} |z|(1+\mathbf{R}^2)\mathbf{S}^2\mathcal{V}^+\,dx+G(t)\sum_{i=1}^3\int_\mathbb{R} \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|(1+\mathbf{R}^2)\mathcal{V}^-\,dx\\
&\quad -2c_0\sum_{i=1}^3\int_\mathbb{R} \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|(1+\mathbf{R}^2)\mathbf{S}^2\mathcal{V}^-\,dx.
\end{split}
\end{equation*}
Applying the same argument to the time derivative of $\int_\mathbb{R}J_2^+\mathcal{V}^+\,dx$, we arrive at
\begin{equation}\label{I2est}
\begin{split}
\frac{d}{dt}I_2 \leq& C\int_\mathbb{R}\Big((1+|\mathbf{S}|)J_2^-\mathcal{V}^- +(1+|\mathbf{R}|)J_2^+\mathcal{V}^+\Big)\,dx\\
&+ C\sum_{k=1,4}\int_\mathbb{R}\Big((1+\mathbf{S}^2)J_k^-\mathcal{V}^- +(1+\mathbf{R}^2)J_k^+\mathcal{V}^+\Big)\,dx\\
&+G(t)I_2 -2c_0\int_\mathbb{R} \Big(\mathbf{S}^2J_2^-\mathcal{V}^-+\mathbf{R}^2J_2^+\mathcal{V}^+\Big)\,dx.
\end{split}
\end{equation}
\paragraph{\bf Step 4.} We now turn to the time derivative of $I_3$.
By \eqref{R-S-eqn} and \eqref{wz}, it holds that
\begin{equation}\label{Sest}
\begin{split}
&\Big[\frac{c'}{4c^2}(w-z)S_i\Big]_t -\Big[c\frac{c'}{4c^2}(w-z)S_i\Big]_x\\
&=\frac{c'S_i}{4c^2}(w_t-cw_x) -\frac{c'S_i}{4c^2}(z_t+cz_x)+\frac{c'S_i}{2c}z_x +\frac{c'(w-z)}{4c^2}\big(\partial_tS_i+c\partial_xS_i\big)\\
&\quad -\frac{c'(w-z)}{2c}\partial_x S_i+\frac{cc''-2(c')^2}{4c^3}(\partial_tn_1-c\partial_xn_1)(w-z)S_i-\frac{c'(w-z)}{4c^2}S_ic'\partial_x n_1\\
&=-\frac{(c')^2}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)S_i +\frac{c'n_i}{16c^4}(w-z)\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad +\frac{c'S_i}{2c}z_x-\frac{c'(w-z)}{2c}\partial_x S_i+\frac{c''(w-z)}{4c^2}S_1 S_i-\frac{(c')^2}{8c^3}(w-z)(S_1 S_i+R_iS_1),
\end{split}
\end{equation}
for $i=1,2,3$ and $\zeta_1=\gamma$, $\zeta_2=\zeta_3=\alpha$. Next, differentiating $\eqref{wz}_1$ with respect to $x$, it is clear that
\begin{equation}\label{wxt}
\begin{split}
w_{tx}&-\big(c w_x\big)_x=\frac{(c')^2-cc''}{2c^2}(R_1-S_1)(v_1+\frac{R_1-S_1}{2c}w)\\
&-\frac{c'}{2c}(r_1+w\partial_xR_1+R_1w_x) +\frac{c'}{2c}(s_1+w\partial_xS_1+S_1w_x).
\end{split}\end{equation}
Putting \eqref{rseq}, \eqref{Sest} for $i=1$ and \eqref{wxt} together leads to
\begin{equation}\label{wxtest}
\begin{split}
&\Big[w_x+\frac{c'}{4c^2}(w-z)S_1\Big]_t -\Big[c\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\Big]_x\\
&=w_{xt}-\big(cw_x\big)_x+\Big[\frac{c'}{4c^2}(w-z)S_1\Big]_t -\Big[c\frac{c'}{4c^2}(w-z)S_1\Big]_x\\
&=-\frac{c'}{2c}(r_1+w\partial_xR_1+R_1w_x)+\frac{c'}{2c}(s_1+z\partial_xS_1+S_1z_x) +\frac{c'}{2c}S_1\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\\
&\quad+\frac{(c')^2-cc''}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)(R_1-S_1) -\frac{(c')^2}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)S_1\\
&\quad+\frac{c'n_1}{16c^4}(w-z)\big[(c^2-\gamma)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big]+\frac{2 c c''-3(c')^2}{8c^3}(w-z)R_1S_1\\
&=-\frac{c'}{2c}\big[r^*_1+R_1(w_x+\frac{c'}{4c^2}(w-z)S_1)\big] +\frac{c'}{2c}\big[s^*_1+S_1(z_x+\frac{c'}{4c^2}(w-z)R_1)\big]\\
&\quad+\frac{c'}{2c}S_1\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big) +\frac{(c')^2-cc''}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)(R_1-S_1)\\
&\quad -\frac{(c')^2}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)S_1+\frac{c'n_1}{8c^4}(w-z)\big[(c^2-\gamma)\mathbf{S}^2 -(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad+\frac{2 c c''-3(c')^2}{8c^3}(w-z)R_1S_1,
\end{split}
\end{equation}
from which one can deduce that
\begin{equation}\label{J3est}
\begin{split}
&\frac{d}{dt}\int_\mathbb{R}J_3^-\mathcal{V}^-\,dx = \frac{d}{dt}\int_\mathbb{R}\Big|w_x+\frac{c'}{4c^2}(w-z)S_1\Big|\mathcal{V}^-\,dx \\
&\leq C\int_\mathbb{R} |w|(|R_1S_1|+\mathbf{S}^2+|\mathbf{R}\cdot\mathbf{S}|)\mathcal{V}^-\,dx+C\int_\mathbb{R} |z|(|R_1S_1|+\mathbf{S}^2+|\mathbf{R}\cdot\mathbf{S}|)\mathcal{V}^+\,dx\\
&\quad+C\int_\mathbb{R} \Big|v_1+\frac{R_1w-S_1z}{2c}\Big|(|R_1|+|S_1|)\mathcal{V}^-\,dx+C\int_\mathbb{R} \Big|w_x+\frac{c'}{4c^2}(w-z)S_1\Big||S_1|\mathcal{V}^-\,dx\\
&\quad+C\int_\mathbb{R}
\Big[\Big|r^*_1+R_1(w_x+\frac{c'}{4c^2}(w-z)S_1)\Big|\mathcal{V}^-+ \Big|s^*_1+S_1(z_x+\frac{c'}{4c^2}(w-z)R_1)\Big|\mathcal{V}^+\Big]\,dx\\
&\quad +G(t)\int_\mathbb{R} \Big|w_x+\frac{c'}{4c^2}(w-z)S_1\Big|\mathcal{V}^-\,dx -2c_0\int_\mathbb{R} \Big|w_x+\frac{c'}{4c^2}(w-z)S_1\Big|\mathbf{S}^2\mathcal{V}^-\,dx.
\end{split}
\end{equation}
Hence, arguing exactly as in the proof of \eqref{J3est}, we obtain
\begin{equation}\label{I3est}
\begin{split}
\frac{d}{dt}I_3 \leq& C\sum_{k=2,3,4}\int_\mathbb{R}\Big((1+|\mathbf{S}|)J_k^-\mathcal{V}^- +(1+|\mathbf{R}|)J_k^+\mathcal{V}^+\Big)\,dx\\
&+ C\int_\mathbb{R}\Big((1+\mathbf{S}^2)J_1^-\mathcal{V}^- +(1+\mathbf{R}^2)J_1^+\mathcal{V}^+\Big)\,dx\\
&+G(t)I_3-2c_0\int_\mathbb{R} \Big(\mathbf{S}^2J_3^-\mathcal{V}^-+\mathbf{R}^2J_3^+\mathcal{V}^+\Big)\,dx.
\end{split}
\end{equation}
\paragraph{\bf Step 5.} Bounding the time derivative of $I_4$ is a little more involved. To this end, we first differentiate $\eqref{R-S-eqn}_1$ with respect to $x$ and arrive at
\begin{equation}\label{Rxt}
\begin{split}
\partial_{xt}R_{i}&-\big(c\partial_xR_i\big)_x= \frac{n_i}{2c^2}\big[(c^2-\zeta_i)(\mathbf{R}\cdot\mathbf{R}_x +\mathbf{S}\cdot\mathbf{S}_x)-(3c^2-\zeta_i) (\mathbf{R}_x\cdot\mathbf{S}+\mathbf{R}\cdot\mathbf{S}_x)\big]\\
&+\frac{cc''-(c')^2}{4c^3}(R_1-S_1)(R_i-S_i)R_1 +\frac{c'}{2c}\big[(\partial_xR_i-\partial_xS_i)R_1+(R_i-S_i)\partial_xR_1\big]\\
&+\frac{1}{8c^3}\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2)- 2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](R_i-S_i)\\
&+\frac{c'\lambda_i n_i}{4c^4}(\mathbf{R}^2+\mathbf{S}^2-2\mathbf{R}\cdot\mathbf{S})(R_1-S_1),
\end{split}\end{equation}
for $i=1,2,3$ and $\zeta_1=\gamma$, $\zeta_2=\zeta_3=\alpha$. Then, with the aid of \eqref{rt}, \eqref{wz} and \eqref{Rxt}, we deduce that
\begin{equation}\label{rtest1}
\begin{split}
&\big[r_i+w\partial_xR_i\big]_t-\big[c(r_i+w\partial_xR_i)\big]_x\\
&=\partial_tr_i-c\partial_xr_i-r_ic'\partial_x n_1 +\partial_xR_i(w_t-cw_x) +w[(\partial_xR_{i})_t-(c\partial_xR_i)_x]\\
&=\Big[\frac{c'\lambda_i n_i}{2c^3}(\mathbf{R}^2+\mathbf{S}^2-2\mathbf{R}\cdot\mathbf{S}) +\frac{cc''-(c')^2}{2c^2}(R_i-S_i)R_1\Big](v_1+\frac{R_1-S_1}{2c}w) \\
&\quad+\frac{1}{4c^2}\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](v_i+\frac{R_i-S_i}{2c}w)\\
&\quad+\frac{c'}{2c}\big[(R_i-S_i)(r_1+w\partial_xR_1)-R_1(s_i+w\partial_xS_i) +S_1(r_i+w\partial_xR_i)\big]\\
&\quad+\frac{n_i}{2c^2}(c^2-\zeta_i)\big[\mathbf{R}\cdot(\mathbf{r}+\mathbf{R}_xw) +\mathbf{S}\cdot(\mathbf{s}+\mathbf{S}_xw)\big]\\
&\quad-\frac{n_i}{2c^2}(3c^2-\zeta_i)\big[\mathbf{R}\cdot(\mathbf{s}+\mathbf{S}_xw) +\mathbf{S}\cdot(\mathbf{r}+\mathbf{R}_xw)\big].
\end{split}
\end{equation}
In addition, combining \eqref{R-S-eqn}, \eqref{balance} and \eqref{wz} directly gives
\begin{equation}\label{rtest2}
\begin{split}
&\Big[\frac{n_i}{8c^3}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)(w-z)\Big]_t\\
&\quad-\Big[c\frac{n_i}{8c^3}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)(w-z)\Big]_x\\
&=\frac{\partial_tn_i-c\partial_xn_i}{8c^3}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)(w-z)\\
&\quad+\frac{n_i}{8c^3}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)\big[(w_t-cw_x) -(z_t+cz_x)+2cz_x\big]\\
&\quad+\frac{n_i(c^2-\zeta_i)}{8c^3} \big[(\mathbf{S}^2)_t+\big(c\mathbf{S}^2\big)_x-2\big(c\mathbf{S}^2\big)_x\big](w-z)\\
&\quad-\frac{n_i(3c^2-\zeta_i)}{4c^3} \big[(\mathbf{R}_t-c\mathbf{R}_x)\cdot\mathbf{S} +\mathbf{R}\cdot(\mathbf{S}_t+c\mathbf{S}_x)-2c\mathbf{R}\cdot\mathbf{S}_x -\mathbf{R}\cdot\mathbf{S}c'\partial_x
n_1\big](w-z)\\ &\quad+\frac{n_i}{8}\Big(\mathbf{S}^2\big[\big(\frac{c^2-\zeta_i}{c^3}\big)_t -c\big(\frac{c^2-\zeta_i}{c^3}\big)_x\big] -2\mathbf{R}\cdot\mathbf{S}\big[ \big(\frac{3c^2-\zeta_i}{c^3}\big)_t -c\big(\frac{3c^2-\zeta_i}{c^3}\big)_x\big]\Big)(w-z)\\ &=\frac{S_i}{8c^3}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)(w-z) -\frac{c^2-\zeta_i}{2c^2}n_i(w-z)\mathbf{S}\cdot\mathbf{S}_x\\ &\quad+\frac{3c^2-\zeta_i}{2c^2}n_i(w-z)\mathbf{R}\cdot\mathbf{S}_x +\frac{n_iz_x}{4c^2}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)\\ &\quad-\frac{c'n_i}{4c^3}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)\big(v_1+\frac{R_1w-S_1z}{2c}\big) \\ &\quad -\frac{c'\zeta_i n_i}{16c^4}(w-z)\big(R_1\mathbf{R}^2+2\mathbf{R}^2S_1-4R_1\mathbf{R}\cdot\mathbf{S} +4S_1\mathbf{R}\cdot\mathbf{S} +3R_1\mathbf{S}^2-2S_1\mathbf{S}^2\big)\\ &\quad +\frac{c'n_i}{16c^2}(w-z)\big(3R_1\mathbf{R}^2+8\mathbf{R}^2S_1-12R_1\mathbf{R}\cdot\mathbf{S} -12S_1\mathbf{R}\cdot\mathbf{S} +9R_1\mathbf{S}^2+4S_1\mathbf{S}^2\big). 
\end{split}
\end{equation}
To continue, one can use \eqref{R-S-eqn} and \eqref{Sest} to get
\begin{equation}\label{rtest3}
\begin{split}
&\Big[\frac{c'}{4c^2}(w-z)R_1S_i\Big]_t -\Big[c\frac{c'}{4c^2}(w-z)R_1S_i\Big]_x\\
&=(\partial_t R_1-c\partial_xR_1)\frac{c'}{4c^2}(w-z)S_i+R_1\big[\big(\frac{c'}{4c^2}(w-z)S_i\big)_t -\big(c\frac{c'}{4c^2}(w-z)S_i\big)_x\big]\\
&=-\frac{(c')^2}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)R_1S_i +\frac{c'R_1S_i}{2c}z_x-\frac{c'(w-z)}{2c}R_1\partial_x S_i\\
&\quad+\frac{c'}{16c^4}(w-z)R_1n_i\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad+\frac{c'}{16c^4}(w-z)S_in_1\big[(c^2-\gamma)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad +\frac{c''(w-z)}{4c^2}R_1S_1 S_i-\frac{(c')^2}{8c^3}(w-z)(2R_1S_1 S_i+R_1R_iS_1-R_1^2S_i).
\end{split}
\end{equation}
By virtue of \eqref{rseq} and \eqref{rtest1}--\eqref{rtest3}, the vertical displacement $\mathbf{r}^*$ satisfies the following equation:
\begin{equation}\label{rtest5}
\begin{split}
&(r^*_i)_t-(cr^*_i)_x\\
&=\Big[\frac{c'\lambda_i n_i}{2c^3}(\mathbf{R}^2+\mathbf{S}^2-2\mathbf{R}\cdot\mathbf{S}) +\frac{cc''-(c')^2}{2c^2}(R_i-S_i)R_1\Big](v_1+\frac{R_1w-S_1z}{2c}) \\
&\quad+\Big[\frac{(c')^2}{2c^2}R_1S_i-\frac{c'n_i}{4c^3}\big[(c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]\Big](v_1+\frac{R_1w-S_1z}{2c})\\
&\quad+\frac{1}{4c^2}\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](v_i+\frac{R_iw-S_iz}{2c})\\
&\quad+\frac{n_i}{2c^2}(c^2-\zeta_i)\big[\mathbf{R}\cdot(\mathbf{r}+\mathbf{R}_xw) +\mathbf{S}\cdot(\mathbf{s}+\mathbf{S}_xz)\big]\\
&\quad-\frac{n_i}{2c^2}(3c^2-\zeta_i)\big[\mathbf{R}\cdot(\mathbf{s}+\mathbf{S}_xz) +\mathbf{S}\cdot(\mathbf{r}+\mathbf{R}_xw)\big] \\
&\quad+\frac{c'}{2c}\big[(R_i-S_i)(r_1+w\partial_xR_1)-R_1(s_i+z\partial_xS_i+S_iz_x) +S_1(r_i+w\partial_xR_i)\big]\\
&\quad-\frac{c'}{16c^4}(w-z)R_1n_i\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad-\frac{c'}{16c^4}(w-z)S_in_1\big[(c^2-\gamma)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad+\frac{(c')^2}{8c^3}(w-z)(3R_1R_iS_1-R_1^2S_i) -\frac{c''}{4c^2}(w-z)R_1R_iS_1\\
&\quad-\frac{c^2-\zeta_i}{8c^3}(w-z)\mathbf{R}^2S_i +\frac{n_iz_x}{4c^2}\Big((c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\Big)\\
&\quad -\frac{c'\zeta_i n_i}{16c^4}(w-z)\big(R_1\mathbf{R}^2+6\mathbf{R}^2S_1-4R_1\mathbf{R}\cdot\mathbf{S} -4S_1\mathbf{R}\cdot\mathbf{S} +3R_1\mathbf{S}^2+2S_1\mathbf{S}^2\big)\\
&\quad +\frac{c'n_i}{16c^2}(w-z)\big(3R_1\mathbf{R}^2+8\mathbf{R}^2S_1-12R_1\mathbf{R}\cdot\mathbf{S} -12S_1\mathbf{R}\cdot\mathbf{S} +9R_1\mathbf{S}^2+4S_1\mathbf{S}^2\big).
\end{split}
\end{equation}
Moreover, it follows from \eqref{R-S-eqn} and \eqref{wxtest} that
\begin{equation}\label{Rwxtext}
\begin{split}
&\Big[R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\Big]_t -\Big[cR_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\Big]_x\\
&=(\partial_t R_i-c\partial_x R_i)\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\\
&\quad+R_i\Big[\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)_t -\big(c(w_x+\frac{c'}{4c^2}(w-z)S_1)\big)_x\Big]\\
&=\frac{(c')^2-cc''}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)(R_1-S_1)R_i -\frac{(c')^2}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)R_iS_1\\
&\quad-\frac{c'}{2c}R_i(r_1+w\partial_xR_1)+\frac{c'}{2c}R_i(s_1 +z\partial_xS_1+S_1z_x)\\
&\quad +\frac{c'}{2c}(R_iS_1-R_1S_i)\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big) +\frac{ c c''-(c')^2}{4c^3}(w-z)R_iR_1S_1\\
&\quad+\frac{n_i}{16c^4}\big(4c^2w_x+c'S_1(w-z)\big)\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big]\\
&\quad+\frac{c'n_1}{16c^4}R_i(w-z)\big[(c^2-\gamma)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big].
\end{split}
\end{equation}
Consequently, by \eqref{rtest5} and \eqref{Rwxtext}, it is easy to see that
\begin{equation*}
\begin{split}
&\Big[r_i^*+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\Big]_t -\Big[c\Big(r_i^*+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\Big)\Big]_x\\
&=\frac{(c^2-\zeta_i)n_i}{4c^2}\big[2\mathbf{R}\cdot\mathbf{r}^* +\mathbf{R}^2w_x+\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1+2\mathbf{S}\cdot\mathbf{s}^*+\mathbf{S}^2z_x+\frac{c'}{4c^2}(w-z)R_1\mathbf{S}^2\big]\\
&\quad-\frac{(3c^2-\zeta_i)n_i}{2c^2}\big[\mathbf{R}\cdot\big(\mathbf{s}^* +\mathbf{S}(z_x+\frac{c'}{4c^2}(w-z)R_1)\big) +\mathbf{S}\cdot\big(\mathbf{r}^*+\mathbf{R}(w_x+\frac{c'}{4c^2}(w-z)S_1)\big)\big] \\
&\quad-\frac{c'}{2c}R_1\big[s_i^*+S_i(z_x+\frac{c'}{4c^2}(w-z)R_1)\big] +\frac{c'}{2c}R_i\big[s_1^* +S_1(z_x+\frac{c'}{4c^2}(w-z)R_1)\big]\\
&\quad-\frac{c'}{2c}S_i\big[r^*_1+R_1(w_x+\frac{c'}{4c^2}(w-z)S_1)\big] +\frac{c'}{2c}S_1\big[r^*_i +R_i(w_x+\frac{c'}{4c^2}(w-z)S_1)\big]\\
&\quad+ \frac{(c^2-\zeta_i)n_i}{4c^2}\mathbf{S}^2\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big) +\frac{c'n_1}{16c^4}(c^2-\gamma)(w-z)\big(R_i\mathbf{S}^2-\mathbf{R}^2S_i\big)\\
&\quad+\Big[\frac{c'\lambda_i n_i}{2c^3}(\mathbf{R}^2+\mathbf{S}^2-2\mathbf{R}\cdot\mathbf{S}) +\frac{cc''-2(c')^2}{2c^2}(R_iS_1-R_1S_i)\Big](v_1+\frac{R_1w-S_1z}{2c}) \\
&\quad-\frac{c'n_i}{4c^3}\big[(c^2-\zeta_i)\mathbf{S}^2 -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](v_1+\frac{R_1w-S_1z}{2c})\\
&\quad+\frac{1}{4c^2}\big[(c^2-\zeta_i)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\zeta_i)\mathbf{R}\cdot\mathbf{S}\big](v_i+\frac{R_iw-S_iz}{2c})\\
&\quad -\frac{c^2-\zeta_i}{8c^3}(w-z)\mathbf{R}^2S_i-\frac{5c'\zeta_i n_i}{16c^4}(w-z)\mathbf{R}^2S_1+\frac{c'n_i}{16c^2}(w-z)\big(2R_1\mathbf{S}^2+3\mathbf{R}^2S_1\big),
\end{split}
\end{equation*}
which leads to
\begin{equation*}\label{ii}
\begin{split}
&\frac{d}{dt}\int_\mathbb{R}J_4^-\mathcal{V}^-\,dx = \frac{d}{dt}\sum_{i=1}^3\int_\mathbb{R}\big|r_i^*+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\big|\mathcal{V}^-\,dx \\
&\leq C\sum_{i=1}^3\int_\mathbb{R} \big|r_i^*+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\big||\mathbf{S}|\mathcal{V}^-\,dx \\
&\quad+C\sum_{i=1}^3\int_\mathbb{R} \big|s_i^*+S_i\big(z_x+\frac{c'}{4c^2}(w-z)R_1\big)\big||\mathbf{R}|\mathcal{V}^+\,dx \\
&\quad +C\int_\mathbb{R} \Big|2\mathbf{R}\cdot\mathbf{r}^* +\mathbf{R}^2w_x+\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big|\mathcal{V}^-\,dx+C\int_\mathbb{R} |w|(\mathbf{R}^2|\mathbf{S}|+|\mathbf{R}|\mathbf{S}^2)\mathcal{V}^-\,dx\\
&\quad +C\int_\mathbb{R} \Big|2\mathbf{S}\cdot\mathbf{s}^* +\mathbf{S}^2z_x+\frac{c'}{4c^2}(w-z)R_1\mathbf{S}^2\Big|\mathcal{V}^+\,dx+C\int_\mathbb{R} |z|(\mathbf{R}^2|\mathbf{S}|+|\mathbf{R}|\mathbf{S}^2)\mathcal{V}^+\,dx \\
&\quad +C\sum_{i=1}^3\int_\mathbb{R} \Big|v_i+\frac{R_iw-S_iz}{2c}\Big|(\mathbf{R}^2\mathcal{V}^-+\mathbf{S}^2\mathcal{V}^+ )\,dx+C\int_\mathbb{R} \Big|w_x+\frac{c'}{4c^2}(w-z)S_1\Big|\mathbf{S}^2\mathcal{V}^-\,dx\\
&\quad+G(t) \sum_{i=1}^3\int_\mathbb{R}\big|r_i^*+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\big|\mathcal{V}^-\,dx \\
&\quad-2c_0\sum_{i=1}^3\int_\mathbb{R} \big|r_i^*+R_i\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\big|\mathbf{S}^2\mathcal{V}^-\,dx.
\end{split}
\end{equation*}
By a routine procedure, one can arrive at
\begin{equation}\label{I4est}
\begin{split}
\frac{d}{dt}I_4 \leq& C\sum_{k=2,4,5}\int_\mathbb{R}\Big((1+|\mathbf{S}|)J_k^-\mathcal{V}^- +(1+|\mathbf{R}|)J_k^+\mathcal{V}^+\Big)\,dx\\
&+ C\sum_{k=1,3}\int_\mathbb{R}\Big((1+\mathbf{S}^2)J_k^-\mathcal{V}^- +(1+\mathbf{R}^2)J_k^+\mathcal{V}^+\Big)\,dx\\
&+G(t)I_4-2c_0\int_\mathbb{R} \Big(\mathbf{S}^2J_4^-\mathcal{V}^-+\mathbf{R}^2J_4^+\mathcal{V}^+\Big)\,dx.
\end{split}
\end{equation}
\paragraph{\bf Step 6.} In order to handle the last term of the norm \eqref{norm1}, we first observe that, since $|\mathbf{n}|=1$ and $c'(n_1)=\frac{(\gamma-\alpha)n_1}{c(n_1)}$, we have $\mathbf{R}\cdot\mathbf{n}=0$, and hence
\begin{equation}\label{2Rest1}
\mathbf{R}\cdot\mathbf{r}^*=\mathbf{R}\cdot\mathbf{r}+\mathbf{R}\cdot\mathbf{R}_x w-\frac{c'}{8c^2}R_1\mathbf{S}^2(w-z).
\end{equation}
To estimate the time derivative of $\mathbf{R}\cdot\mathbf{r}^*$, using \eqref{balance}, we first compute
\begin{equation}\label{2Rest2}
\begin{split}
\big[\mathbf{R}\cdot\mathbf{r}\big]_t-\big[c\mathbf{R}\cdot\mathbf{r}\big]_x =&c'v_1\mathbf{R}\cdot\mathbf{R}_x+\frac{cc''-(c')^2}{4c^2}(R_1\mathbf{R}^2-R_1\mathbf{S}^2)v_1 \\
&+\frac{c'}{4c}\big[(\mathbf{R}^2-\mathbf{S}^2)r_1 +2S_1\mathbf{R}\cdot\mathbf{r}-2R_1\mathbf{S}\cdot\mathbf{s}\big].
\end{split}
\end{equation}
Next, by \eqref{R-S-eqn}, \eqref{balance} and \eqref{wz}, we have
\begin{equation}\label{2Rest3}
\begin{split}
&\Big[\frac{c'}{8c^2}(w-z)R_1\mathbf{S}^2\Big]_t -\Big[c\frac{c'}{8c^2}(w-z)R_1\mathbf{S}^2\Big]_x\\
&=\frac{c'}{8c^2}(w_t-cw_x)R_1\mathbf{S}^2-\frac{c'}{8c^2}(z_t+cz_x)R_1\mathbf{S}^2 +\frac{c'}{4c}z_xR_1\mathbf{S}^2+\frac{c'\mathbf{S}^2}{8c^2}(w-z)(\partial_t R_1-c\partial_xR_1)\\
&\quad+\frac{c'R_1}{8c^2}(w-z)\big(\partial_t \mathbf{S}^2+(c\mathbf{S}^2)_x\big)-\frac{c'R_1}{4c^2}(w-z)(c\mathbf{S}^2)_x +R_1\mathbf{S}^2(w-z)\big[\big(\frac{c'}{8c^2}\big)_t -c\big(\frac{c'}{8c^2}\big)_x\big]\\
&=-\frac{(c')^2}{4c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)R_1\mathbf{S}^2 +\frac{c'R_1\mathbf{S}^2}{4c}z_x-\frac{c'(w-z)}{2c}R_1\mathbf{S} \cdot\mathbf{S}_x\\
&\quad+\frac{c'n_1\mathbf{S}^2}{32c^4}(w-z)\big[(c^2-\gamma)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big] \\
&\quad+\frac{cc''-(c')^2}{8c^3}(w-z)R_1S_1\mathbf{S}^2 +\frac{(c')^2}{16c^3}(w-z)R_1(R_1\mathbf{S}^2-S_1\mathbf{R}^2).
\end{split}
\end{equation}
Thanks to \eqref{R-S-eqn}, \eqref{wz}, \eqref{Rxt} and \eqref{2Rest1}--\eqref{2Rest3}, one can get
\begin{equation}\label{2Rest4}
\begin{split}
&\big[\mathbf{R}\cdot\mathbf{r}^*\big]_t-\big[c\mathbf{R}\cdot\mathbf{r}^*\big]_x\\
&=\big[\mathbf{R}\cdot\mathbf{r}\big]_t-\big[c\mathbf{R}\cdot\mathbf{r}\big]_x +\mathbf{R}\cdot\mathbf{R}_x(w_t-cw_x)+w(\mathbf{R}_t-c\mathbf{R}_x)\cdot\mathbf{R}_x\\
&\quad+\mathbf{R}\cdot\big(\mathbf{R}_{xt}-(c\mathbf{R}_x)_x\big)w -\Big[\frac{c'}{8c^2}(w-z)R_1\mathbf{S}^2\Big]_t +\Big[c\frac{c'}{8c^2}(w-z)R_1\mathbf{S}^2\Big]_x\\
&=\frac{cc''-(c')^2}{4c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)(\mathbf{R}^2- \mathbf{S}^2)R_1+\frac{(c')^2}{4c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big)R_1\mathbf{S}^2 \\
&\quad-\frac{c'}{2c}R_1\mathbf{S} \cdot(\mathbf{s}+\mathbf{S}_xz)+\frac{c'}{2c}S_1\mathbf{R} \cdot(\mathbf{r}+\mathbf{R}_xw)+\frac{c'}{4c} (\mathbf{R}^2-\mathbf{S}^2)(r_1+w\partial_x R_1)\\
&\quad-\frac{c'R_1\mathbf{S}^2}{4c}z_x-\frac{c'n_1\mathbf{S}^2}{32c^4}(w-z)\big[(c^2-\gamma)(\mathbf{R}^2+\mathbf{S}^2) -2(3c^2-\gamma)\mathbf{R}\cdot\mathbf{S}\big] \\
&\quad-\frac{cc''-(c')^2}{8c^3}(w-z)R_1S_1\mathbf{R}^2-\frac{(c')^2}{16c^3}(w-z)R_1(R_1\mathbf{S}^2-S_1\mathbf{R}^2).
\end{split}
\end{equation}
Hence, combining \eqref{balance}, \eqref{wxtest} and \eqref{2Rest4}, we can conclude that
\begin{equation*}\label{wRest5}
\begin{split}
&\Big[2\mathbf{R}\cdot\mathbf{r}^*+\mathbf{R}^2w_x +\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big]_t -\Big[c\big(2\mathbf{R}\cdot\mathbf{r}^*+\mathbf{R}^2w_x +\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\big)\Big]_x\\
&=2\big([\mathbf{R}\cdot\mathbf{r}^*]_t-[c\mathbf{R}\cdot\mathbf{r}^*]_x\big) +\mathbf{R}^2\Big(\big[w_x+\frac{c'}{4c^2}(w-z)S_1\big]_t -\big[c\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)\big]_x\Big)\\
&\quad+\big(w_x+\frac{c'}{4c^2}(w-z)S_1\big)[(\mathbf{R}^2)_t-c(\mathbf{R}^2)_x]\\
&=\frac{c'}{2c}S_1\big[2\mathbf{R} \cdot\mathbf{r}^*+\mathbf{R}^2w_x+\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\big] -\frac{c'}{2c}\mathbf{S}^2\big(r_1^*+ R_1w_x+\frac{c'}{4c^2}(w-z)R_1S_1\big)\\
&\quad-\frac{c'}{2c}R_1\big[2\mathbf{S}\cdot\mathbf{s}^*+\mathbf{S}^2z_x +\frac{c'}{4c^2}(w-z)\mathbf{S}^2R_1\big] +\frac{c'}{2c}\mathbf{R}^2\big(s_1^*+S_1z_x+\frac{c'}{4c^2}(w-z)R_1S_1\big)\\
&\quad+\frac{cc''-2(c')^2}{2c^2}\big(v_1+\frac{R_1w-S_1z}{2c}\big) (\mathbf{R}^2S_1-R_1\mathbf{S}^2).
\end{split} \end{equation*} This in turn yields \begin{equation*} \begin{split} &\frac{d}{dt}\int_\mathbb{R}J_5^-\mathcal{V}^-\,dx = \frac{d}{dt}\int_\mathbb{R}\Big|2\mathbf{R}\cdot\mathbf{r}^*+\mathbf{R}^2w_x +\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big|\mathcal{V}^-\,dx \\ &\leq C\int_\mathbb{R} \Big|v_1+\frac{R_1w-S_1z}{2c}\Big|\mathbf{R}^2|S_1|\mathcal{V}^-\,dx +C\int_\mathbb{R} \Big|v_1+\frac{R_1w-S_1z}{2c}\Big||R_1|\mathbf{S}^2\mathcal{V}^+\,dx \\ &\quad+ C\int_\mathbb{R} \big|r_1^*+ R_1w_x+\frac{c'}{4c^2}(w-z)R_1S_1\big|\mathbf{S}^2\mathcal{V}^-\,dx\\ &\quad+C\int_\mathbb{R} \big|s_1^*+S_1z_x+\frac{c'}{4c^2}(w-z)R_1S_1\big|\mathbf{R}^2\mathcal{V}^+\,dx\\ &\quad+C\int_\mathbb{R}\Big|2\mathbf{R}\cdot\mathbf{r}^*+\mathbf{R}^2w_x +\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big||S_1|\mathcal{V}^-\,dx \\ &\quad+C\int_\mathbb{R} \Big|2\mathbf{S}\cdot\mathbf{s}^*+\mathbf{S}^2z_x +\frac{c'}{4c^2}(w-z)\mathbf{S}^2R_1\Big||R_1|\mathcal{V}^+\,dx \\ &\quad+G(t)\int_\mathbb{R} \Big|2\mathbf{R}\cdot\mathbf{r}^*+\mathbf{R}^2w_x +\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big|\mathcal{V}^-\,dx\\ &\quad -2c_0\int_\mathbb{R} \Big|2\mathbf{R}\cdot\mathbf{r}^*+\mathbf{R}^2w_x +\frac{c'}{4c^2}(w-z)\mathbf{R}^2S_1\Big|\mathbf{S}^2\mathcal{V}^-\,dx. \end{split} \end{equation*} Therefore, we get the following estimate: \begin{equation}\label{I5est} \begin{split} \frac{d}{dt}I_5 \leq& C\sum_{k=2,5}\int_\mathbb{R}\Big((1+|S_1|)J_k^-\mathcal{V}^- +(1+|R_1|)J_k^+\mathcal{V}^+\Big)\,dx\\ &+C\int_\mathbb{R}\Big((1+\mathbf{S}^2)J_4^-\mathcal{V}^- +(1+\mathbf{R}^2)J_4^+\mathcal{V}^+\Big)\,dx\\ &+G(t)I_5-2c_0\int_\mathbb{R} \Big(\mathbf{S}^2J_5^-\mathcal{V}^-+\mathbf{R}^2J_5^+\mathcal{V}^+\Big)\,dx. \end{split} \end{equation} Summing up \eqref{I0est}, \eqref{I1est}, \eqref{I2est}, \eqref{I3est}, \eqref{I4est}, \eqref{I5est} and using \eqref{4.9}, we obtain the desired estimate \eqref{sum}.
\end{proof} \end{appendices} \section*{Acknowledgments} The first author is partially supported by the National Natural Science Foundation of China (No. 11801295) and the Shandong Provincial Natural Science Foundation, China (No. ZR2018BA008). The second author is partially supported by NSF grant DMS-2008504. The third author is partially supported by NSF grant DMS-206218. \begin{thebibliography}{99} \bibitem{AH} G. Al\`{i} and J. K. Hunter, Orientation waves in a director field with rotational inertia, {\it Kinet. Relat. Models} {\bf 2}(1) (2009), 1--37. \bibitem{BC} A. Bressan and G. Chen, Generic regularity of conservative solutions to a nonlinear wave equation, {\it Ann. Inst. H.~Poincar\'e Anal. Non Lin\'eaire} {\bf 34}(2) (2017), 335--354. \bibitem{BC2015} A. Bressan and G. Chen, Lipschitz metrics for a class of nonlinear wave equations, {\it Arch. Ration. Mech. Anal.} {\bf 226}(3) (2017), 1303--1343. \bibitem{BCZ} A. Bressan, G. Chen and Q. T. Zhang, Unique conservative solutions to a variational wave equation, {\it Arch. Ration. Mech. Anal.} {\bf 217}(3) (2015), 1069--1101. \bibitem{BH} A.~Bressan and T.~Huang, Representation of dissipative solutions to a nonlinear variational wave equation, {\em Commun. Math. Sci.} {\bf 14}(1) (2016), 31--53. \bibitem{BZ} A.~Bressan and Y. X. Zheng, Conservative solutions to a nonlinear variational wave equation, {\it Comm. Math. Phys.} {\bf 266}(2) (2006), 471--497. \bibitem{CCD} H. Cai, G. Chen and Y. Du, Uniqueness and regularity of conservative solution to a wave system modeling nematic liquid crystal, {\it J. Math. Pures Appl.} {\bf 117} (2018), 185--220. \bibitem{CHL20} G. Chen, T. Huang and W. S. Liu, Poiseuille flow of nematic liquid crystals via the full Ericksen-Leslie model, {\em Arch. Ration. Mech. Anal.} {\bf 236}(2) (2020), 839--891. \bibitem{CZZ} G. Chen, P. Zhang and Y. X.
Zheng, Energy conservative solutions to a nonlinear wave system of nematic liquid crystals, {\it Commun. Pure Appl. Anal.} {\bf 12}(3) (2013), 1445--1468. \bibitem{CTZ} D. Christodoulou and A. S. Tahvildar-Zadeh, On the regularity of spherically symmetric wave maps, {\it Comm. Pure Appl. Math.} {\bf 46}(7) (1993), 1041--1091. \bibitem{GP} P. G. De Gennes and J. Prost, The Physics of Liquid Crystals, 2nd edition, Oxford University Press, Oxford, 1995. \bibitem{GHZ} R.~T.~Glassey, J.~K.~Hunter and Y. X.~Zheng, Singularities of a variational wave equation, {\em J. Differential Equations} {\bf 129}(1) (1996), 49--78. \bibitem{HR} H.~Holden and X.~Raynaud, Global semigroup of conservative solutions of the nonlinear variational wave equation, {\it Arch. Ration. Mech. Anal.} {\bf 201}(3) (2011), 871--964. \bibitem{lin89} F. H. Lin, Nonlinear theory of defects in nematic liquid crystals; phase transition and flow phenomena, {\em Comm. Pure Appl. Math.} {\bf 42}(6) (1989), 789--814. \bibitem{linwangs14} F. H. Lin and C. Y. Wang, Recent developments of analysis for hydrodynamic flow of nematic liquid crystals, {\em Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci.} {\bf 372}(2029) (2014), 20130361, 18 pp. \bibitem{[37]} R.~A.~Saxton, Dynamic instability of the liquid crystal director, in {\it Contemporary Mathematics} Vol.~100: Current Progress in Hyperbolic Systems, pp.~325--330, ed. W.~B.~Lindquist, AMS, Providence, 1989. \bibitem{S} J. Shatah, Weak solutions and development of singularities of the $SU$($2$) $\sigma$-model, {\it Comm. Pure Appl. Math.} {\bf 41}(4) (1988), 459--469. \bibitem{STZ} J. Shatah and A. Tahvildar-Zadeh, Regularity of harmonic maps from the Minkowski space into rotationally symmetric manifolds, {\it Comm. Pure Appl. Math.} {\bf 45}(8) (1992), 947--971. \bibitem{V} C. Villani, Topics in Optimal Transportation, American Mathematical Society, Providence, 2003. \bibitem{ZZ03} P. Zhang and Y. X.
Zheng, Weak solutions to a nonlinear variational wave equation, {\em Arch. Ration. Mech. Anal.} {\bf 166}(4) (2003), 303--319. \bibitem{ZZ10} P. Zhang and Y. X. Zheng, Conservative solutions to a system of variational wave equations of nematic liquid crystals, {\em Arch. Ration. Mech. Anal.} {\bf 195}(3) (2010), 701--727. \bibitem{ZZ11} P. Zhang and Y. X. Zheng, Energy conservative solutions to a one-dimensional full variational wave system, {\em Comm. Pure Appl. Math.} {\bf 65}(5) (2012), 683--726. \end{thebibliography} \end{document}
\begin{document} \selectlanguage{english} \title{On the generalization of the Costas property in the continuum} \author{Konstantinos Drakakis\footnote{The author holds a Diploma in Electrical and Computer Engineering from NTUA, Athens, Greece, and a Ph.D. in Applied and Computational Mathematics from Princeton University, NJ, USA. He was a scholar of the Lilian Boudouris Foundation.}, Scott Rickard \\Electronic and Electrical Engineering\\University College Dublin\\ \& \\ Claude Shannon Institute\footnote{www.shannoninstitute.ie}\\Ireland} \maketitle \abstract{We extend the definition of the Costas property to functions in the continuum, namely on intervals of the reals or the rationals, and argue that such functions can be used in the same applications as discrete Costas arrays. We construct Costas bijections in the real continuum within the class of piecewise continuously differentiable functions, but our attempts to construct a fractal-like Costas bijection there are successful only under slight but necessary deviations from the usual arithmetic laws. Furthermore, we are able, contingent on the validity of Artin's conjecture, to set up a limiting process according to which sequences of Welch Costas arrays converge to smooth Costas bijections over the reals. The situation over the rationals is different: there, we propose an algorithm of great generality and flexibility for the construction of a Costas fractal bijection.
Its success, though, relies heavily on the enumerability of the rationals, and therefore it cannot be generalized over the reals in an obvious way.} \section{Introduction} Costas arrays \cite{C} have been an active topic of research for more than 40 years now; however, since 1984, when 2 algebraic construction methods for Costas arrays were published (the Welch and the Golomb method \cite{G}), still the only ones available today, there has been effectively no progress in the construction of new Costas arrays, apart from brute force searches. Recent research on Costas arrays tends to focus on the discovery of new properties \cite{D2,D3,SVM}, hoping that they will either furnish some lead for a new construction method, or prove that such a method does not exist, and thus overcome the current virtual stalemate in the core problems of the field. In line with this effort, it is likely that research on Costas arrays would benefit from the extension of the definition of the Costas property in the continuum, for 2 reasons: on the one hand, this might open the door to assistance from the entire arsenal of analysis, as was the case with the successful generalization of the factorial in terms of the Gamma function; on the other hand, the recent advances in the subject of the Instantaneous Frequency of a signal \cite{HSLWSZTL} make it possible to design signals with continuously varying frequencies instead of the piecewise constant frequencies of the usual discrete Costas array model, and there might be benefits in doing so. Besides, such objects certainly have intrinsic pure mathematical merit as subjects of study. In this work, we propose a suitable extension of the definition of the Costas property in the continuum (which we take here to mean the real and rational numbers), and we explain how the existing discrete Costas permutations can be used to generate continuum Costas permutations.
Note that, in accordance with common practice in recent literature, we will be using the terms ``Costas permutation'' and ``Costas array'' interchangeably. \section{Basics} We reproduce below the definition of a Costas function/permutation \cite{D}: \begin{dfn}\label{bas} Let $[n]:=\{0,\ldots,n-1\},\ n\in\mathbb{N}$ and consider a bijection $f:[n]\rightarrow [n]$; $f$ is a Costas permutation iff the multiset $\{(i-j,f(i)-f(j)): 0\leq j<i< n\}$ is actually a set, namely all of its elements are distinct. \end{dfn} These permutations are extremely useful because they give rise to binary signals with an optimal autocorrelation pattern: \begin{dfn}\label{dac} Let $f:[n]\rightarrow [n]$, $n\in\mathbb{N}^*$, be a Costas permutation, and let $F:\mathbb{Z}^2\rightarrow [2]$, the corresponding binary signal of $f$, satisfy $F(i,f(i))=1,\ i\in[n]$, and $F=0$ everywhere else. The autocorrelation of $f$ is: \[A_F(u,v)=\sum_{i,j\in\mathbb{Z}} F(u+i,v+j)F(i,j),\ (u,v)\in\mathbb{Z}^2\] \end{dfn} The following result is just a restatement of the Costas property: \begin{thm} Let $f:[n]\rightarrow [n]$, $n\in\mathbb{N}^*$, be a permutation, and let $F$ be its corresponding binary signal; then, $0\leq A_F(u,v)<2,\ \forall (u,v)\in\mathbb{Z}^2-\{(0,0)\}$ iff $f$ has the Costas property. \end{thm} We have already mentioned the Welch construction method for Costas arrays. As we will refer to it several times below, we offer its definition for the sake of completeness: \begin{thm}[Welch construction $W_1(p,g,c)$] Let $p$ be a prime, let $g$ be a primitive root of the finite field $\mathbb{F}(p)$ of $p$ elements, and let $c\in[p-1]$ be a constant; then, the function $f:[p-1]+1\rightarrow [p-1]+1$ where $\displaystyle f(i)=g^{i-1+c}\mod p,\ i\in[p-1]+1$ is a bijection with the Costas property.
\end{thm} \section{Costas bijections in the real continuum} From now on, until Section \ref{crat}, we will be using the term ``continuum'' in the sense of ``real continuum'', unless explicitly stated otherwise. \subsection{Definitions and simple results}\label{defs} In our extension of Definition \ref{bas} in the continuum we will replace $[n]$ by $[0,1]$, but otherwise the definition remains the same: \begin{dfn} Consider a bijection $f:[0,1]\rightarrow [0,1]$; $f$ \emph{is a Costas permutation} iff the multiset $\{(x-y,f(x)-f(y)): 0\leq y<x\leq 1\}$ is actually a set, namely all of its elements are distinct. \end{dfn} \begin{rmk} The choice of the interval $[0,1]$ is by no means restrictive: it can be seen immediately that for any pair $a,b\in\mathbb{R}$, $a<b$, there exists a linear monotonic mapping $h$ mapping $[0,1]$ bijectively onto $[a,b]$, specifically $h(x)=a+x(b-a),\ 0\leq x\leq 1$, and $f$ has the Costas property on $[a,b]$ iff $h^{-1}\circ f\circ h$ has the Costas property on $[0,1]$. \end{rmk} Yet again, we can give an alternative but equivalent definition of the Costas property in terms of autocorrelation: \begin{dfn} Consider a bijection $f:[0,1]\rightarrow [0,1]$, and let $F:\mathbb{R}^2\rightarrow \{0,1,\infty\}$ be its corresponding quasi-binary signal (that is, binary whenever finite), so that $F(x,f(x))=1,\ x\in[0,1]$, and $F=0$ otherwise. The autocorrelation of $f$ is: \[A_f(u,v)=\int_0^1 \int_0^1 \delta(F(x+u,y+v)-F(x,y))dxdy,\ (u,v)\in\mathbb{R}^2\] \end{dfn} \begin{rmk} Notice that this autocorrelation, just like its discrete counterpart in Definition \ref{dac}, takes integer values whenever finite, as it counts the number of zeros in the argument of the Dirac $\delta$-function.
\end{rmk} Once more, then, the following result is just a restatement of the Costas property: \begin{thm} Consider a bijection $f:[0,1]\rightarrow [0,1]$, and let $F$ be its corresponding quasi-binary signal; then, $f$ has the Costas property iff $0\leq A_f(u,v)<2,\ \forall (u,v)\in\mathbb{R}^2-\{(0,0)\}$. \end{thm} \subsection{Applications} Continuum Costas bijections\footnote{We have to resort to the use of the uncommon word ``continuum'' in the role of an adjective here instead of the perhaps more appealing intuitively ``continuous'': the term ``continuum function'' accurately describes a function defined on an interval, or on something non-finite and dense at any rate, whereas the term ``continuous function'' has an already established different meaning in mathematics.} can find applications in the same situations their discrete counterparts do \cite{C}. For example, consider a RADAR system whose operation relies on a usual Costas waveform. In practical terms, this means that the waveform it transmits is of the form: \[w(t)=A\cos\left(2\pi\left(\sum_{k=0}^{n-1} \frac{s(k)+1}{n}f \mathbf{1}_{\left[\frac{k}{n}T,\frac{k+1}{n}T\right)}(t)\right)t\right),\ s\text{ a Costas permutation of order $n$},\ t\in[0,T)\] which is a different way to express that, for $n\in\mathbb{N}^*$, $\displaystyle w(t)=A\cos\left(2\pi \frac{s(k)+1}{n}ft\right),\ t\in\left[\frac{k}{n}T,\frac{k+1}{n}T\right),\ k\in[n]$. Alternatively, we could have used a continuum Costas permutation $s$ on $[0,1]$. Let us consider the waveform: \[w(t)=A\cos\left(2\pi f \int_0^t s(u)du+2\pi f_0 t\right),\ t\in[0,T)\] Bedrosian's theorem \cite{B, HSLWSZTL} on instantaneous frequency asserts that the instantaneous frequency of $w$ is \[ \frac{1}{2\pi}\left(2\pi f \int_0^t s(u)du+2\pi f_0 t\right)'=s(t)f+f_0,\] as long as $\hat{w}(0)=0$; this condition can be satisfied, at least approximately, through an appropriate choice of $f_0$.
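The discrete Costas property of Definition \ref{bas} and the Welch construction are straightforward to verify numerically. The following sketch (the function names are ours, introduced purely for illustration) checks a permutation by collecting its difference vectors, and builds a Welch permutation shifted down by $1$ so that it becomes a permutation of $[p-1]$:

```python
def is_costas(f):
    # Definition: the vectors (i - j, f(i) - f(j)) for j < i must all be distinct.
    n = len(f)
    seen = set()
    for j in range(n):
        for i in range(j + 1, n):
            v = (i - j, f[i] - f[j])
            if v in seen:
                return False
            seen.add(v)
    return True

def welch(p, g, c=0):
    # Welch construction W1(p, g, c): f(i) = g^(i-1+c) mod p for i = 1, ..., p-1.
    # Values are shifted down by 1 so the result is a permutation of {0, ..., p-2}.
    return [pow(g, i - 1 + c, p) - 1 for i in range(1, p)]

print(is_costas(welch(11, 2)))    # True: 2 is a primitive root modulo 11
print(is_costas(list(range(5))))  # False: the identity repeats difference vectors
```

The quadratic-time set lookup is the naive check straight from the definition; it suffices for the small orders considered here.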
\subsection{Link between continuum and discrete Costas permutations} How do the 2 definitions compare? The expression for the discrete waveform is clearly a special case of the continuum expression, and this can be seen if we write $\displaystyle S(t)=\sum_{k=0}^{n-1} \frac{s(k)+1}{n}f \mathbf{1}_{\left[\frac{k}{n}T,\frac{k+1}{n}T\right)}(t)$, where $s$ is a Costas array of order $n$ and $S$ is a continuum permutation (but obviously not Costas). The verification of the Costas property through the autocorrelation in the discrete case is also a subprocess of the verification in the continuum case: we just need to take care that horizontal and vertical displacements of the copies of the functions in the autocorrelation formula are integral multiples of $\displaystyle \frac{T}{n}$ and $\displaystyle \frac{f}{n}$, respectively. As $S$ is a piecewise constant function, one might be tempted to attempt to formulate a definition for (at least a class of) continuum Costas permutations in terms of Costas arrays, as limits of sequences of Costas arrays, just like measurable functions are approximated by sequences of piecewise constant functions: a Costas array $s_n$ of order $n$ can be mapped on a piecewise constant function $S_n$, just as we did above, and, letting $n\rightarrow \infty$, we can hopefully obtain a continuum Costas permutation $S$. This limit would probably be highly discontinuous, of a fractal nature perhaps, as Costas arrays are highly erratic and patternless. The problem with the plan of action suggested above is that we seem to have no good understanding yet of sequences of Costas arrays across different orders that follow a clear pattern, so that we can successfully describe what the limit of such a sequence would look like; a notable exception is the example we give below in Section \ref{lm}. Nevertheless, the idea of seeking continuum Costas permutations among fractals seems, in principle, promising in itself and worth investigating.
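The mapping from a discrete Costas array $s$ to the piecewise constant profile $S$ above can be sketched directly (the helper name is ours; $[0,1,3,2]$ is a Costas permutation of order $4$, and the parameters follow the waveform formula of the previous subsection):

```python
def step_function(s, T=1.0, f=1.0):
    # Map a Costas array s of order n to the piecewise constant profile
    # S(t) = (s(k)+1)/n * f for t in [kT/n, (k+1)T/n).
    n = len(s)
    def S(t):
        k = min(int(n * t / T), n - 1)  # clamp t = T into the last subinterval
        return (s[k] + 1) / n * f
    return S

S = step_function([0, 1, 3, 2])
print([S(t) for t in (0.0, 0.3, 0.6, 0.9)])  # [0.25, 0.5, 1.0, 0.75]
```

As the text notes, $S$ is a continuum permutation of $(0,1]$ in the set-theoretic sense but not a Costas one: within each constant piece, every pair of equal shifts produces the same (zero) difference.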
But first, let us focus on the case of smooth functions. \section{Construction of smooth continuum Costas permutations}\label{smperm} The whole idea of the existence of smooth functions with the Costas property may sound outright irrational at first, and any investigation futile: after all, there can hardly be any object more irregular and discontinuous than Costas arrays. Nonetheless, the continuum is dense in itself, while finite discrete sets are not, and this makes a big difference, as we are about to see: for example, the function $f(x)=x^2$ has no chance of being a permutation on any discrete set other than $[2]\cup\{\infty\}$, while it is a permutation on both $[0,1]$ and $[1,+\infty)$, as it effectively makes some areas of the intervals ``denser'' and some ``sparser'' (consider, for instance, that the images under $f$ of all points in $[0,\sqrt{2}/2]$ get ``crammed'' into the smaller interval $[0,0.5]$). In the continuum we can create Costas permutations by causing ``elastic deformations'', by ``changing the density'' of points in an interval, whereas such techniques are inapplicable on discrete sets. Let us begin by seeking functions with the Costas property that are reasonably smooth; for example, let us confine ourselves to special categories of almost everywhere differentiable bijections.
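The contrast between linear maps and maps such as $x^2$ can already be probed numerically, before any formal proof. The sketch below (a randomized search of ours, heuristic evidence rather than a certificate) looks for quadruples $x$, $y$, $x+d$, $y+d$ that violate the continuum Costas property:

```python
import random

def find_violation(f, trials=10_000, tol=1e-12):
    # Search for y != x and d > 0 with f(x+d) - f(x) == f(y+d) - f(y),
    # which would violate the continuum Costas property on [0, 1].
    rng = random.Random(0)
    for _ in range(trials):
        d = rng.uniform(1e-3, 0.5)
        x = rng.uniform(0.0, 1.0 - d)
        y = rng.uniform(0.0, 1.0 - d)
        if abs(x - y) > 1e-6 and abs((f(x + d) - f(x)) - (f(y + d) - f(y))) < tol:
            return (x, y, d)
    return None

print(find_violation(lambda x: x * x))          # None found: f' = 2x is strictly monotonic
print(find_violation(lambda x: x) is not None)  # True: any linear map violates the property
```

For $f(x)=x^2$ the difference of differences is $2d(x-y)$, which the tolerance cannot mask for the sampled ranges, matching the ``elastic deformation'' intuition above.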
\begin{dfn} Let $f:[0,1]\rightarrow [0,1]$ be a bijection; \begin{itemize} \item $f$ will be \emph{piecewise continuously differentiable} iff there exists $n\in\mathbb{N}^*\cup \{\infty\}$ and a sequence of intervals $\{I_i\}_{i=1}^n$ so that, for each $i=1,\ldots,n$, $f$ is continuously differentiable in $\displaystyle \overset{\circ}{I_i}$ ($n=\infty$ is used as a convention to denote a countable infinity of intervals); \item if, in addition to being piecewise continuously differentiable, for each $i=1,\ldots,n$ $\displaystyle f'|\overset{\circ}{I_i}$ is strictly monotonic, $f$ will be called \emph{piecewise strictly monotonic piecewise continuously differentiable}; \item if, in addition to being piecewise continuously differentiable, $f$ satisfies the property that, for all sequences of points $\{x_i\}_{i=1}^n$ such that $\displaystyle x_i\in \overset{\circ}{I_i},\ i=1,\ldots,n$, it is true that the sequences $\{f'(x_i)\}_{i=1}^n$ are either all strictly increasing or all strictly decreasing, $f$ will be called \emph{overall strictly monotonic piecewise continuously differentiable}; \item $f$ may combine all 3 features above, in which case it will be called \emph{overall and piecewise strictly monotonic piecewise continuously differentiable}. \end{itemize} \end{dfn} \begin{thm} Let $f:[0,1]\rightarrow [0,1]$ be an overall and piecewise strictly monotonic piecewise continuously differentiable bijection. Then, $f$ has the Costas property on $[0,1]$. \end{thm} \begin{proof} Let us choose 4 points in $[0,1]$, say $x$, $y$, $x+d$ and $y+d$, so that $y< x$ and $d\geq 0$; these may actually be 3 points if $x=y+d$. We need to show that \[f(x)-f(x+d)=f(y)-f(y+d)\Rightarrow d=0\] Exactly one of the 2 pairs of intervals $[y,x],[y+d,x+d]$ or $[x,x+d], [y,y+d]$ consists of intervals with disjoint interiors.
Without loss of generality, assume it is the second pair; then the Newton-Leibniz Theorem implies that \[f(x+d)-f(x)=\int_{x}^{x+d}f'(u)du,\qquad f(y+d)-f(y)=\int_{y}^{y+d}f'(u)du\] Now, if $f$ is overall and piecewise strictly monotonic piecewise continuously differentiable, it is always the case that either $\forall u\in(x,x+d),\ \forall v\in(y,y+d):\ f'(u)< f'(v)$ or $\forall u\in(x,x+d),\ \forall v\in(y,y+d):\ f'(u)> f'(v)$, so that $f(x)-f(x+d)\neq f(y)-f(y+d)$ unless $d=0$. \end{proof} \begin{thm} Let $f:[0,1]\rightarrow [0,1]$ be a piecewise continuously differentiable bijection; if $f'$ is not injective, $f$ does not have the Costas property. \end{thm} \begin{proof} We distinguish the following cases: \begin{itemize} \item $f'$ is constant on an interval, say $f'\equiv c\in\mathbb{R}$, or, equivalently, $f$ is linear on that interval: it follows that there exist 4 points $x$, $y$, $x+d$ and $y+d$ with $y< x$ and $d> 0$ so that $\displaystyle \frac{f(x+d)-f(x)}{d}=\frac{f(y+d)-f(y)}{d}=c$, hence the Costas property is violated. \item Assume that $f'$ is never constant on an interval. Then, either there exist $i_1,i_2$ so that $|f'(I_{i_1})\cap f'(I_{i_2})|>0$, namely $f$ fails to be overall strictly monotonic, or there exists an $i$ for which $f'|I_i$ is not monotonic. In either case, there exist 2 points $x_1,x_2\in(0,1)$ so that $x_1<x_2$ and $f'(x_1)=f'(x_2)$. We distinguish 2 subcases: \begin{itemize} \item Neither of the points is an inflection point, that is, both points lie in regions of the domain where $f$ is either convex or concave; these regions are necessarily different, or the derivative could not possibly be equal at these points.
This implies that there exist real numbers $\epsilon_1,\epsilon_2>0$ so that, if 2 parallels are drawn to the tangent at each of the points $x_1$ and $x_2$, at the side of the tangents where the function graph lies, and whose distances from the tangents are less than $\epsilon_1$ and $\epsilon_2$, respectively, they each intersect the function graph at 2 points, say $x_{11}<x_{12}$ and $x_{21}<x_{22}$. Clearly both $x_{11}-x_{12}$ and $x_{21}-x_{22}$ go to 0 as the parallels move closer to the tangents, whence $f(x_{11})-f(x_{12})$ and $f(x_{21})-f(x_{22})$ also go to 0; moreover, if $\epsilon_1$ and $\epsilon_2$ are sufficiently small, $(x_{11},x_{12})\cap (x_{21},x_{22})=\emptyset$, and each of $(x_{11},x_{12}), (x_{21},x_{22})$ falls entirely within one of the intervals $\{I_i\},\ i=1,\ldots,n$. Hence, we can choose a pair of parallels so that $\displaystyle \frac{f(x_{11})-f(x_{12})}{x_{11}-x_{12}}= \frac{f(x_{21})-f(x_{22})}{x_{21}-x_{22}}$ and $x_{11}-x_{12}=x_{21}-x_{22}$. This violates the Costas property. \item At least one of the points is an inflection point, say $x_1$, so there is a $\delta>0$ so that $x\in(x_1-\delta,x_1+\delta)-\{x_1\}\Rightarrow f'(x)<f'(x_1)$ and $(x_1-\delta,x_1+\delta)$ falls within one of the intervals $\{I_i\},\ i=1,\ldots,n$, say $I_k$. As $f'$ is continuous within $I_k$, and is not constant in any interval, there exist $u_1\in(x_1-\delta,x_1)$, $u_2\in(x_1,x_1+\delta)$ so that neither is an inflection point and that $f'(u_1)=f'(u_2)$. We are now back to the case above. \end{itemize} \end{itemize} \end{proof} Note that the derivative of a continuously differentiable bijection must keep the same sign throughout its domain, or else the bijection would have an extremum and would not be a bijection. Further, in the case of a continuously differentiable bijection, overall and piecewise strict monotonicity are identical, hence strict monotonicity implies injectivity.
Therefore, in this special case, the following holds: \begin{cor}\ \begin{itemize} \item Let $f:[0,1]\rightarrow [0,1]$ be a bijection continuously differentiable in $(0,1)$; then, $f$ has the Costas property iff $f'$ is strictly monotonic. \item A continuously differentiable bijection $f:[0,1]\rightarrow [0,1]$ with the Costas property must be strictly monotonic. \end{itemize} \end{cor} \begin{rmk} The issue of the continuity of the derivative of a function is quite esoteric. When a function is differentiable in an open interval, its derivative is not necessarily continuous. However, it is ``almost'' continuous, in the sense that, for any value between 2 values the derivative actually assumes at 2 points, there is a point between the 2 aforementioned points where the derivative assumes the chosen value. This property is known as Darboux continuity in the literature \cite{BC}. Working with piecewise continuously differentiable functions, we ``float over'' this technical point.
\end{rmk} Let us now see some examples of continuously differentiable bijections with the Costas property, as well as some rules to produce new ones from known ones: \begin{cor}\label{exmp} The following continuously differentiable bijections $f:[0,1]\rightarrow [0,1]$ have the Costas property on $[0,1]$: \begin{itemize} \item $f(x)=x^a$, $a\in\mathbb{R}_+$, $a\neq 0,1$; \item $\displaystyle f(x)=\frac{a^x-1}{a-1},\ a\in\mathbb{R}^*_+ -\{1\}$; \item $\displaystyle f(x)=\sin\left(\frac{\pi}{2} x\right)$. \end{itemize} Further, if $f,g:[0,1]\rightarrow [0,1]$ are continuously differentiable bijections and have the Costas property on $[0,1]$, the following functions also do: \begin{itemize} \item $1-f$; \item $af+bg$, $a,b\in\mathbb{R}_+$, $a+b=1$, if $f,g$ are both strictly increasing or both strictly decreasing, and so are $f',g'$; \item $f\circ g$, if $f',g'$ are strictly monotonic of the same type and $g$ is strictly increasing; \item $fg$, if $f,g,f',g'$ are all strictly increasing or all strictly decreasing. \end{itemize} \end{cor} \begin{proof} Observe that $\displaystyle \left(\frac{a^x-1}{a-1}\right)'=\ln(a)\frac{a^x}{a-1}$ is strictly increasing for $a>1$ and strictly decreasing for $a<1$, $(x^a)'=ax^{a-1}>0$ is strictly increasing when $a>1$ and strictly decreasing when $0<a<1$, and $\displaystyle \left(\sin\left(\frac{\pi}{2} x\right)\right)'=\frac{\pi}{2}\cos\left(\frac{\pi}{2} x\right)$ is strictly decreasing. Moreover, all of these functions are bijections, hence they have the Costas property. Further, \begin{itemize} \item $(1-f)'=-f'$ is strictly monotonic iff $f'$ is, although of the opposite type, and $1-f$ is a bijection on $[0,1]$, so it also has the Costas property.
\item $(af+bg)'=af'+bg'$ is strictly monotonic if $f',g'$ are both strictly monotonic of the same type, and $af+bg$ is strictly monotonic too, hence a bijection, if $f,g$ are both strictly monotonic of the same type. \item $f\circ g$ is clearly a bijection if both $f$ and $g$ are, and $(f\circ g)' = (f'\circ g)\,g'$ is strictly increasing (decreasing) if both $f',g'$ are strictly increasing (decreasing) and $g$ is strictly increasing. \item $fg$ is strictly increasing (decreasing), hence a bijection, if $f,g$ are both strictly increasing (decreasing), while $(fg)'=fg'+f'g$ is strictly increasing (decreasing) if $f,g,f',g'$ are all strictly increasing (decreasing). \end{itemize} \end{proof} We have now offered a quite extensive description of the class of piecewise continuously differentiable bijections on $[0,1]$ with the Costas property, and an exact characterization of the continuously differentiable bijections with the Costas property. What about discontinuous bijections, though? By interpreting discontinuity in the most extreme way, we are led back to the idea of fractals. \section{Costas fractals}\label{cofrac} In what follows, we establish a connection between discrete and continuum Costas permutations: we use discrete Costas permutations to build continuum ones through a process of multiscale rearrangement of subintervals of $[0,1]$; in other words, we build a ``Costas fractal''. At this moment, however, we are unable to prove the correctness of our construction below under the usual laws of arithmetic: we will need the equivalent of ``xor'' addition (and subtraction), namely addition without carry, in representations over an arbitrary base.
We will need first of all the slightly stronger definition given below: \begin{dfn} Consider a bijection $f:[n]\rightarrow [n]$; $f$ is a \emph{modulo Costas permutation} iff the multiset $\{(i-j,f(i)-f(j)\mod(n+1)): 0\leq j<i< n\}$ is actually a set, namely all of its elements are distinct. \end{dfn} \begin{rmk} Note that both the Golomb and the Welch constructions actually lead to modulo Costas permutations \cite{D,G}. \end{rmk} \begin{dfn}\label{no} Let the numbers $x,y\in[0,1]$ be expanded over base $n\in\mathbb{N}^*$: $\displaystyle x=\sum_{i=1}^\infty x_i n^{-i}$, $\displaystyle y=\sum_{i=1}^\infty y_i n^{-i}$, where $\forall i\in\mathbb{N}^*, x_i,y_i\in[n]$. Then, we define the ``no carry'' addition and subtraction as: \[x\oplus y=\sum_{i=1}^\infty \frac{(x_i+y_i)\mod n}{n^i},\ x\ominus y=\sum_{i=1}^\infty \frac{(x_i-y_i)\mod n}{n^i}\] \end{dfn} \begin{thm}\label{fr} Let $n\in\mathbb{N}$ and let $f_i:[n]\rightarrow [n],\ i\in\mathbb{N}^*$ be a sequence of (not necessarily distinct) modulo Costas permutations. Define a function $F:[0,1]\rightarrow [0,1]$ by the following formula: \[F\left(\sum_{i=1}^\infty a_i n^{-i}\right)=\sum_{i=1}^\infty f_i(a_i)n^{-i}\] where $\forall i\in\mathbb{N}^*, a_i\in[n]$, and so that there exists no $N\in\mathbb{N}^*: a_i=n-1$ for $i\geq N$, unless $N=1$. Then, $F$ has the Costas property, when subtraction is interpreted as in Definition \ref{no}. \end{thm} \begin{rmk} The explicit exclusion of sequences $\{a_i\}_{i=1}^\infty$ so that $\exists N\in\mathbb{N}^*: a_i=n-1$ for $i\geq N$ is necessary in order to ensure that every number in $[0,1)$ can be expressed over base $n$ in a unique way, otherwise some numbers can have 2 different expansions: a familiar example over base 10 would be that $0.5=0.5000\ldots=0.4999\ldots$.
However, we still need to represent $\displaystyle 1=\sum_{i=1}^\infty \frac{n-1}{n^i}$, hence the exception for $N=1$. \end{rmk} \begin{proof} Select 4 points in $[0,1]$, say $x$, $y$, $x+d$ and $y+d$, so that $y< x$ and $d\geq 0$; notice that these can actually be 3 equidistant points if $y+d=x$. We need to test whether $F(x)\ominus F(x+ d)=F(y)\ominus F(y+ d)$ necessarily implies $d=0$. Let the interval $[0,1]$ be divided into $n$ subintervals, $\displaystyle \left\{I_{1;i}=\left[\frac{i}{n}, \frac{i+1}{n}\right):i\in[n-1]\right\}\bigcup \left\{I_{1;n-1}=\left[\frac{n-1}{n}, 1\right] \right\}$, so that $\forall i\in[n], F(I_{1;i})=I_{1;f_1(i)}$. We distinguish the following cases: \begin{enumerate} \item \label{main} $y+d\neq x$ and the 4 chosen points all lie in different subintervals: then, we can write $\displaystyle F(x)=\frac{s_1}{n}+\epsilon_1$, $\displaystyle F(y)=\frac{s_2}{n}+\epsilon_2$, $\displaystyle F(x+d)=\frac{s_3}{n}+\epsilon_3$, and $\displaystyle F(y+d)=\frac{s_4}{n}+\epsilon_4$, with $s_i\in[n]$, $\displaystyle 0\leq\epsilon_i<\frac{1}{n},\ i=1,2,3,4$. It follows that $\displaystyle F(x)\ominus F(x+d)=\frac{(s_1-s_3)\mod n}{n}+(\epsilon_1 \ominus \epsilon_3)$, and $\displaystyle F(y)\ominus F(y+d)=\frac{(s_2-s_4)\mod n}{n}+(\epsilon_2 \ominus \epsilon_4)$, where, if we assume $d>0$, $(s_1-s_3)\mod n\neq (s_2-s_4) \mod n $, by the modulo Costas property of $f_1$, while $\displaystyle |(\epsilon_1 \ominus \epsilon_3)\ominus (\epsilon_2 \ominus \epsilon_4)|<\frac{1}{n}$. Hence, $F(x)\ominus F(x+d)\neq F(y)\ominus F(y+d)$ and the proof is complete for this case. \item $y+d= x$ and the 3 chosen points all lie in different subintervals: then we can repeat verbatim the previous argument with 3 instead of 4 points.
\item $y+d\neq x$ and one pair of the 4 chosen points lie in the same subinterval, while the remaining pair lie in different subintervals: then, without loss of generality, assume that $x$ and $x+d$ lie in the same subinterval. In terms of the previous argument, $(s_1-s_3)\mod n=0\neq (s_2-s_4)\mod n$ and the proof follows again.
\item $y+d= x$ and the 3 chosen points lie in 2 different subintervals: then, exactly 2 points lie in the same subinterval, and, without loss of generality, assume they are $y$ and $y+d=x$. In terms of the previous argument, $s_4=s_1$, $(s_1-s_3)\mod n\neq (s_2-s_1)\mod n=0$ and the proof follows again.
\item\label{bad} Either $y+d\neq x$ and the 4 chosen points lie pairwise in the same subintervals, or $y+d= x$ and the 3 chosen points all lie in the same subinterval: then, assume, without loss of generality, that $x$ and $x+d$ lie in the same subinterval, and so do $y$ and $y+d$. It follows that $(s_1-s_3)\mod n=0= (s_2-s_4)\mod n$ and the argument fails.
\end{enumerate}
In the last case where the argument fails, we need to refine our subinterval division. We already saw the first level of this division.
At level $k\in\mathbb{N}$, we consider the collection of intervals
\begin{multline*}
\left\{I_{k;i_1,\ldots,i_k}=\left[\sum_{j=1}^k\frac{i_j}{n^j}, \sum_{j=1}^{k-1}\frac{i_j}{n^j}+\frac{i_k+1}{n^k}\right):\ i_j\in[n],\ j=1,\ldots,k,\ \exists j:i_j\neq n-1\right\}\bigcup\\
\left\{I_{k;n-1,\ldots,n-1}=\left[1-\frac{1}{n^k},1\right]\right\}
\end{multline*}
With respect to the newly defined levels of subintervals, there are 2 possibilities:
\begin{itemize}
\item The chosen points fall in a case other than \ref{bad} for the first time in level $k$: then, it must be the case that:
\begin{multline*}
\sum_{j=1}^k \frac{(f_j(x_j+d_j)-f_j(x_j))\mod n}{n^j}=\frac{(f_k(x_k+d_k)-f_k(x_k))\mod n}{n^k}\neq\\
\sum_{j=1}^k \frac{(f_j(y_j+d_j)-f_j(y_j))\mod n}{n^j}=\frac{(f_k(y_k+d_k)-f_k(y_k))\mod n }{n^k}
\end{multline*}
due to the modulo Costas property of $f_k$, whence $F(x)\ominus F(x+d)\neq F(y)\ominus F(y+d)$ for $d>0$.
\item Otherwise, we need to consider the levels beyond level $k$.
\end{itemize}
But the length of the subintervals in level $k$ is $n^{-k}$, which decays to 0 as $k\rightarrow \infty$; therefore, any specific selection of points can remain in case \ref{bad} for a finite number of levels only. This completes the proof.
\end{proof}
It is easy to see where our proof fails under ordinary arithmetic: revisiting case \ref{main}, we would need to show that, under the assumption that $s_1-s_3\neq s_2-s_4$, which holds because $f_1$ is a Costas permutation (we no longer need it to be a modulo Costas permutation), $\displaystyle \frac{s_1-s_3}{n}+(\epsilon_1 - \epsilon_3)\neq \frac{s_2-s_4}{n}+(\epsilon_2 - \epsilon_4)$ holds.
Since $\displaystyle \epsilon_i<\frac{1}{n},\ i=1,2,3,4$, it follows that $\displaystyle |\epsilon_1 - \epsilon_3|,|\epsilon_2 - \epsilon_4|<\frac{1}{n}$ and $\displaystyle |(\epsilon_1- \epsilon_3)- (\epsilon_2 - \epsilon_4)|<\frac{2}{n}$, so that, if $|(s_1-s_3)-(s_2-s_4)|=1$, it may still be the case that $\displaystyle \frac{s_1-s_3}{n}+(\epsilon_1 - \epsilon_3)= \frac{s_2-s_4}{n}+(\epsilon_2 - \epsilon_4)\Leftrightarrow F(x)-F(x+d)=F(y)-F(y+d)$ when $d>0$, and the Costas property fails. The key feature of the arithmetic proposed in Definition \ref{no} that allowed the proof of Theorem \ref{fr} to complete successfully was that if, at any level of interval subdivision, the 4 chosen points were found to lie in distinct subintervals, the defining inequality of the Costas property would be satisfied for the chosen points. There are alternative arithmetics with this property:
\begin{dfn}\label{np}
Let the numbers $x,y\in[0,1]$ be expanded over base $n\in\mathbb{N}^*$: $\displaystyle x=\sum_{i=1}^\infty x_i n^{-i}$, $\displaystyle y=\sum_{i=1}^\infty y_i n^{-i}$, where $\forall i\in\mathbb{N}^*, x_i,y_i\in[n]$. Then, we define the ``contracted'' subtraction as:
\[x\ominus y=\sum_{i=1}^\infty \frac{x_i-y_i}{n^{2i-1}}\]
\end{dfn}
\begin{thm}\label{fs}
Let $n\in\mathbb{N}$ and let $f_i:[n]\rightarrow [n],\ i\in\mathbb{N}^*$ be a sequence of (not necessarily distinct) Costas permutations. Define a function $F:[0,1]\rightarrow [0,1]$ by the following formula:
\[F\left(\sum_{i=1}^\infty a_i n^{-i}\right)=\sum_{i=1}^\infty f_i(a_i)n^{-i}\]
where $\forall i\in\mathbb{N}^*, a_i\in[n]$, and so that there exists no $N\in\mathbb{N}^*: a_i=n-1$ for $i\geq N$, unless $N=1$. Then, $F$ has the Costas property, when subtraction is interpreted as in Definition \ref{np}.
\end{thm}
\begin{proof}
This is a verbatim repetition of the proof of Theorem \ref{fr}.
\end{proof}
Is it likely that Theorem \ref{fr} still holds true for ordinary arithmetic despite the fact that our proof does not carry through? At this time we have no reason to believe that it does. It may still be possible to use discrete Costas permutations to generate a Costas fractal in the continuum, but the actual mechanism should most probably be different.
\section{A limiting process}\label{lm}
Assuming Artin's Conjecture holds true \cite{M}, which would be the case if the Generalized Riemann Hypothesis holds true, for any non-square integer $k\in\mathbb{N}^*$, $k>1$, there exists an infinite sequence of primes, say $\{p_n\}_{n\in\mathbb{N}^*}$, for which $k$ is a primitive root. We can then construct the sequence of Welch Costas permutations corresponding to the primes of the sequence and the primitive root $k$:
\[f_n: [p_n-1]+1\rightarrow [p_n-1]+1,\ f_n(i)=k^{i-1}\mod p_n,\ i\in[p_n-1]+1,\ n\in\mathbb{N}^*\]
The key observation is that $\forall m\in\mathbb{N}^*, \exists N\in\mathbb{N}^*:\ \forall n\geq N,\ \forall i=1,\ldots,m,\ f_n(i)=k^{i-1}$; in particular, $N$ is the smallest integer for which $p_N>k^{m-1}$. In other words, for any fixed number of terms, all functions of the sequence, after skipping a finite number of functions, have these initial terms in common. Define then the \emph{pointwise intermediate limit} of $\{f_n\}_{n\in\mathbb{N}^*}$ to be as follows: for a fixed $i\in\mathbb{N}^*$,
\[f(i)=\lim_{n\to\infty} f_n(i)=k^{i-1},\quad\text{since $f_n(i)=k^{i-1}$ for all $n\geq N$, where $N$ is the smallest integer such that $p_N>k^{i-1}$}\]
Choose now a sequence $\{i_n\}_{n\in\mathbb{N}^*}$ of integers such that $\displaystyle \lim\frac{i_n-1}{p_n}=x$.
We define the limit of $\{f_n\}_{n\in\mathbb{N}^*}$ evaluated on $\{i_n\}_{n\in\mathbb{N}^*}$ to be a continuum function on $[0,1]$ as follows:
\[s(x)=\lim \left(f(i_n)\right)^{\frac{1}{p_n}}=k^x,\ x\in[0,1)\]
We can bring the range of $s$ within $[0,1)$ as well after a linear transformation, and create $\displaystyle S(x)=\frac{k^x-1}{k-1}$; this is the second example function in Corollary \ref{exmp}. To sum up, in the special case of an infinite sequence of Welch Costas permutations generated by a common primitive root $k$, we were able to carry out a limiting process and construct a continuum Costas permutation, using the property that all the members of this sequence (except possibly some of the first ones) have a common beginning. The limit we obtained, however, is a smooth function and not a fractal, as one might expect given the way Welch Costas permutations look.
\section{Costas bijections in the rational continuum}
\label{crat}
The idea of fractals with the Costas property in the (real) continuum was explored above in Section \ref{cofrac}, where we saw that their implementation required special considerations. We return to this issue here, but this time in the context of the rationals $Q=\mathbb{Q}\cap [0,1]$: in many ways the rationals stand midway between the integers and the reals, in the sense that they form a dense set (like the reals) but are still countable (like the integers). We are about to see that these 2 properties allow us to make further progress in the subject. Note that Costas permutations on the rational continuum are a genuinely new problem, and in no way a special case of the constructions in the real continuum; the reason is that the constructions of Section \ref{smperm} do not map the rationals bijectively onto the rationals. For example, $f(x)=x^2$ is not a bijection over $Q$, as $\displaystyle \nexists x\in Q:\ f(x)=\frac{1}{3}$, say.
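The Welch construction of the previous section, and the ``common beginning'' that the limiting process relies on, can be checked numerically. The following is a minimal sketch (the helper names \texttt{welch} and \texttt{is\_modulo\_costas} are ours, not from the text): it builds the Welch Costas permutation $f(i)=k^{i-1}\bmod p$, verifies the modulo Costas property, and confirms that once $p>k^{i-1}$ all permutations of the sequence agree on their $i$-th value.

```python
from itertools import combinations

def welch(p, k):
    # Welch Costas permutation f on {1, ..., p-1}: f(i) = k^(i-1) mod p,
    # where k is a primitive root modulo the prime p.
    return {i: pow(k, i - 1, p) for i in range(1, p)}

def is_modulo_costas(f, m):
    # Modulo Costas property: the pairs (i - j, (f(i) - f(j)) mod m),
    # taken over all i > j in the domain, are pairwise distinct.
    pts = sorted(f)
    diffs = [(i - j, (f[i] - f[j]) % m) for j, i in combinations(pts, 2)]
    return len(diffs) == len(set(diffs))

# 2 is a primitive root of each of these primes, so each gives a Welch
# Costas permutation; the remark in the text asserts that the Welch
# construction yields modulo Costas permutations.
primes = [19, 29, 37, 53]
assert all(is_modulo_costas(welch(p, 2), p) for p in primes)

# The "common beginning": once p > k^(i-1), f(i) = k^(i-1) exactly, so
# all sufficiently late permutations agree on their initial values.
i = 5
assert all(welch(p, 2)[i] == 2 ** (i - 1) for p in primes)
```

Note that the modulus passed to \texttt{is\_modulo\_costas} is $p$, which is $n+1$ for a permutation of $n=p-1$ symbols, matching the definition of a modulo Costas permutation.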
The relevant definitions of the Costas property on rational bijections closely parallel the ones in Section \ref{defs} (regarding the real continuum) and will not be repeated here. \subsection{An existence result} In this section we offer an algorithm of considerable generality for the construction of bijections on $Q$ with the Costas property. Let us begin by reordering the elements of $Q$ as follows: we order firstly by the magnitude of the denominator, and secondly by the magnitude of the numerator (both in an increasing way). Explicitly, first come those rational numbers in $[0,1]$ whose denominator is 1, namely $\displaystyle 0=\frac{0}{1}$ and $\displaystyle 1=\frac{1}{1}$; then, those whose denominator is 2, namely $\displaystyle \frac{1}{2}$; then, those whose denominator is 3, namely $\displaystyle \frac{1}{3}$ and $\displaystyle \frac{2}{3}$ etc. Hence, the sequence looks like this: \[0,1,\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5},\frac{1}{6},\frac{5}{6}\ldots \] Notice that the numerators are always taken to be relatively prime to the denominators in order to avoid duplicate entries. We denote $Q$ equipped with this particular ordering by $Q_X$, and its elements, in the order dictated by the ordering, by $x_0,x_1,x_2,\ldots$. This ordering has the advantage that each rational is preceded by a finite number of rationals only (in set theoretic terminology, it does not contain any transfinite points). Similarly, we denote by $Q_Y$ the set $Q$ equipped with any arbitrary but fixed ordering without transfinite points, and we denote its elements, in the order dictated by its ordering, by $y_0,y_1,y_2,\ldots$. 
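The ordering of $Q_X$ just described is easy to generate mechanically; the sketch below (the function name is ours) enumerates the rationals of $[0,1]$ first by denominator, then by numerator, keeping only reduced fractions so that no value appears twice.

```python
from math import gcd

def q_x_ordering(max_den):
    # Enumerate the rationals of [0,1] in the ordering of Q_X: first by
    # denominator, then by numerator, skipping non-reduced fractions.
    seq = []
    for den in range(1, max_den + 1):
        for num in range(0, den + 1):
            if gcd(num, den) == 1:
                seq.append((num, den))
    return seq

# Matches the sequence in the text: 0, 1, 1/2, 1/3, 2/3, 1/4, 3/4, ...
expected = [(0, 1), (1, 1), (1, 2), (1, 3), (2, 3), (1, 4), (3, 4),
            (1, 5), (2, 5), (3, 5), (4, 5), (1, 6), (5, 6)]
assert q_x_ordering(6) == expected
```

Since every denominator contributes only finitely many fractions, each rational is indeed preceded by finitely many others, which is the property the algorithm below depends on.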
Consider now the following algorithm for the construction of a mapping $f:Q\rightarrow Q$:
\begin{alg}\ \label{algq}
\begin{description}
\item[Initialization] Choose $f(x_0)=y_0$; set $Q'_{Y}\leftarrow Q_Y-\{y_0\}$, $Q'_{X}\leftarrow Q_X-\{x_0\}$, $X\leftarrow \{x_0\}$, $Y\leftarrow \{y_0\}$, and $D\leftarrow \{\}$.
\item[Find $x$ for $y$:] Set $Q_{X,\text{av}}\leftarrow Q'_X$, $x\leftarrow \inf Q_{X,\text{av}}$, $y\leftarrow \inf Q'_{Y}$; while the multiset $\{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$ contains repeated elements (namely, fails to be a set), set $Q_{X,\text{av}}\leftarrow Q_{X,\text{av}}-\{x\}$, $x\leftarrow \inf Q_{X,\text{av}}$, and repeat. Set $f(x)=y$, $D\leftarrow \{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$, $Q'_{Y}\leftarrow Q'_{Y}-\{y\}$, $Q'_{X}\leftarrow Q'_{X}-\{x\}$, $X\leftarrow X\cup \{x\}$, $Y\leftarrow Y\cup \{y\}$.
\item[Find $y$ for $x$:] Set $Q_{Y,\text{av}}\leftarrow Q'_Y$, $y\leftarrow \inf Q_{Y,\text{av}}$, $x\leftarrow \inf Q'_{X}$; while the multiset $\{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$ contains repeated elements (namely, fails to be a set), set $Q_{Y,\text{av}}\leftarrow Q_{Y,\text{av}}-\{y\}$, $y\leftarrow \inf Q_{Y,\text{av}}$, and repeat. Set $f(x)=y$, $D\leftarrow \{\text{sgn}(x'-x)(x'-x, f(x')-y): x'\in X\}\cup D$, $Q'_{Y}\leftarrow Q'_{Y}-\{y\}$, $Q'_{X}\leftarrow Q'_{X}-\{x\}$, $X\leftarrow X\cup \{x\}$, $Y\leftarrow Y\cup \{y\}$.
\end{description}
The algorithm needs to be supplied with a step sequence before execution begins. For the purposes of the correctness proof the exact step sequence is unimportant (this is yet another degree of freedom of the algorithm), as long as the following rules are observed:
\begin{itemize}
\item Initialization is run first and only once;
\item Neither Find $x$ for $y$ nor Find $y$ for $x$ is run infinitely many times in a row.
\end{itemize}
\end{alg}
For example, when $Q_Y=Q_X$ and the steps are run alternatingly, we get $\displaystyle f(0)=0,\ f(1)=1,\ f\left(\frac{1}{2}\right)=\frac{1}{3},\ f\left(\frac{1}{3}\right)=\frac{1}{2},\ f\left(\frac{2}{3}\right)=\frac{2}{3}$ etc.
\begin{thm}\label{thmq}
Algorithm \ref{algq} produces infinitely many bijections $f:Q\rightarrow Q$ with the Costas property.
\end{thm}
\begin{proof}
In order to prove the correctness of Algorithm \ref{algq} above, we need to demonstrate that a) $\forall y\in Q_Y, \exists! x\in Q_X: f(x)=y$, and b) $\forall x\in Q_X, \exists y\in Q_Y: f(x)=y$. To begin with, note that the construction algorithm above guarantees that the constructed $f$ has the Costas property and that every $y\in Q_Y$ appears in the range of $f$ at most once. We only need to show that the algorithm never gets ``stuck'', namely that the two while loops always exit.
\begin{itemize}
\item For a given $x$, is it possible to assign a value to $f(x)$? In other words, if $A\subset Q'_Y$ is the set of all values $f(x)$ can take without violating the Costas property of $f$, is it true that $A\neq \emptyset$? The answer is in the affirmative, as, intuitively, we can see that the Costas property restrictions impose only a finite number of constraints on $f(x)$, while $Q'_Y$ is countably infinite. Rigorously, we have to check 2 conditions:
\begin{itemize}
\item Let $A_1\subset Q'_Y$ be the set of possible values for $f(x)$ for which $\text{sgn}(x'-x)(x'-x,f(x')-f(x))=\text{sgn}(x'-x'')(x'-x'',f(x')-f(x''))$ is never true for $x',x''\in X$. We show that $A_1\neq \emptyset$.
In fact, consider $\displaystyle \frac{1}{p}$, where $p$ is a prime that does not appear as a factor in the denominator of any $f(x'),\ x'\in X$: choosing $\displaystyle f(x)=\frac{1}{p}$, it follows that $\displaystyle \frac{1}{p}-f(x')$ contains $p$ as a factor in the denominator, while $f(x')-f(x'')$ does not, hence they cannot be equal, and therefore $\displaystyle \frac{1}{p}\in A_1\neq \emptyset$, as promised. Clearly, infinitely many choices for $p$ are possible, so $A_1$ actually contains infinitely many elements.
\item Let $A_2\subset A_1$ be the set of possible values for $f(x)$ for which $\text{sgn}(x'-x)(x'-x,f(x')-f(x))=\text{sgn}(x''-x)(x''-x,f(x'')-f(x))$ is never true for $x',x''\in X$. We show that $A_2\neq \emptyset$. In order for one of these equalities to hold, $x$ must be the midpoint of $x'$ and $x''$, while at the same time $f(x)$ must be the midpoint of $f(x')$ and $f(x'')$. Choosing $\displaystyle f(x)=\frac{1}{p}$ where $p$ is as above, and writing $\displaystyle f(x')=\frac{u_1}{v_1},\ f(x'')=\frac{u_2}{v_2}$, we need to investigate whether the following is possible:
\[\frac{1}{2}\left(\frac{u_1}{v_1}+\frac{u_2}{v_2}\right)=\frac{1}{p},\ (u_1,v_1)=(u_2,v_2)=1,\ p\nmid v_1,\ p\nmid v_2.\]
This implies $p(u_1 v_2+u_2 v_1)=2v_1v_2$, and therefore $p|2v_1v_2\Rightarrow p|2\Rightarrow p=2$. Hence, $A_2$ also contains all points of the form $\displaystyle \frac{1}{p}$, where $p$ divides the denominator of no $f(x'),\ x'\in X$ (and there are infinitely many such $p$), except possibly for $\displaystyle \frac{1}{2}$; in any case $A_2\neq \emptyset$.
\end{itemize}
But $A_2=A$, hence $A\neq \emptyset$; therefore, $f(x)$ can assume a value without $f$ losing the Costas property.
\item For a given $y$, is it possible to find $x\in Q'_X$ so that $f(x)=y$?
In other words, if $A\subset Q'_X$ is the set of all values $x$ for which $f(x)$ can be $y$ without violating the Costas property of $f$, is it true that $A\neq \emptyset$? The answer is in the affirmative as well, and the argument is an almost verbatim repetition of the argument above. Rigorously, we have to check 2 conditions:
\begin{itemize}
\item Let $A_1\subset Q'_X$ be the set of possible values for $x$ for which $\text{sgn}(x'-x)(x'-x,f(x')-y)=\text{sgn}(x'-x'')(x'-x'',f(x')-f(x''))$ is never true for $x',x''\in X$. We show that $A_1\neq \emptyset$. In fact, consider $\displaystyle \frac{1}{p}$, where $p$ is a prime that does not appear as a factor in the denominator of any $x'\in X$: choosing $\displaystyle x=\frac{1}{p}$, it follows that $\displaystyle \frac{1}{p}-x'$ contains $p$ as a factor in the denominator, while $x'-x''$ does not, hence they cannot be equal, and therefore $\displaystyle \frac{1}{p}\in A_1\neq \emptyset$, as promised. Clearly, infinitely many choices for $p$ are possible, so $A_1$ actually contains infinitely many elements.
\item Let $A_2\subset A_1$ be the set of possible values for $x$ for which $\text{sgn}(x'-x)(x'-x,f(x')-y)=\text{sgn}(x''-x)(x''-x,f(x'')-y)$ is never true for $x',x''\in X$. We show that $A_2\neq \emptyset$. In order for one of these equalities to hold, $x$ must be the midpoint of $x'$ and $x''$, while at the same time $y$ must be the midpoint of $f(x')$ and $f(x'')$. Choosing $\displaystyle x=\frac{1}{p}$ where $p$ is as above, and writing $\displaystyle x'=\frac{u_1}{v_1},\ x''=\frac{u_2}{v_2}$, we need to investigate whether the following is possible:
\[\frac{1}{2}\left(\frac{u_1}{v_1}+\frac{u_2}{v_2}\right)=\frac{1}{p},\ (u_1,v_1)=(u_2,v_2)=1,\ p\nmid v_1,\ p\nmid v_2.\]
This implies $p(u_1 v_2+u_2 v_1)=2v_1v_2$, and therefore $p|2v_1v_2\Rightarrow p|2\Rightarrow p=2$.
Hence, $A_2$ also contains all points of the form $\displaystyle \frac{1}{p}$, where $p$ divides the denominator of no $x'\in X$ (and there are infinitely many such $p$), except possibly for $\displaystyle \frac{1}{2}$; in any case $A_2\neq \emptyset$.
\end{itemize}
But $A_2=A$, hence $A\neq \emptyset$; therefore, there exists an $x$ such that $f(x)=y$ without $f$ losing the Costas property.
\end{itemize}
This completes the proof.
\end{proof}
\begin{rmk}
Intuitively, the mechanism responsible for the flexibility of the algorithm is the opportunity the countable infinity of the rationals offers for ``double deferment of all difficulties to a future time'': when faced with the difficulty of assigning a value to $f$ at a given point, we always have infinitely many possibilities, out of which some will work; this in turn creates the difficulty of assigning the values we skipped to some point, but, when faced with this difficulty, we again have infinitely many points waiting for an assignment, out of which some again will work; but in choosing one we once more skip some points, and we need to choose values for them, hence the cycle restarts. This interplay is precisely what we cannot achieve with a finite set, hence the contrast between the ease of the Costas construction over the rationals, as opposed to the intractability of the classical construction of Costas arrays.
\end{rmk}
\begin{rmk}
The above proof makes heavy use of the countability of the rationals, and therefore cannot be readily extended to the reals, which lack this property.
\end{rmk}
It may come as a surprise that we can extend the algorithm even further:
\begin{thm}
Algorithm \ref{algq} will produce a bijection $f:Q\rightarrow Q$ with the Costas property even if one of the steps Find $y$ for $x$ or Find $x$ for $y$ is applied infinitely many times in a row.
\end{thm}
\begin{proof}
Let us consider the case where Find $y$ for $x$ is run infinitely many times in a row immediately after Initialization. This causes no loss of generality: the case where Find $x$ for $y$ is run infinitely many times in a row immediately after Initialization is completely dual (observe the duality in the proof of Theorem \ref{thmq}), while the more general situation where finitely many alternations between the 2 steps occur before the algorithm ``locks'' into one can be considered to fall within one of the 2 cases we just mentioned, but with a different, more extensive Initialization. Assume then that we go through $x\in Q_X$ one after another and we try to assign values to $f(x)\in Q_Y$ while retaining the Costas property. The proof of Theorem \ref{thmq} guarantees that we will succeed for all points. What we need to worry about is whether some $y\in Q_Y$ will be left out in the process: in other words, we know that $\forall x\in Q_X,\ \exists y\in Q_Y: f(x)=y$, but we still need to know that $\forall y\in Q_Y,\ \exists! x\in Q_X:\ f(x)=y$. Assume then that at some step of the algorithm we find that $y\in Q'_Y$ has been skipped, and is the smallest element of $Q_Y$ that has been skipped. Will the algorithm ever ``pick it up''? As before, let us denote by $A\subset Q'_X$ the set of all available $x$ for which we can set $f(x)=y$ without violating the Costas property; we need to show that $A\neq \emptyset$.
Because we proceed through $Q_X$ sequentially from the beginning, at the particular step of the algorithm where we find ourselves there exists $x_0\in Q_X:\ X=\{x\in Q_X: x\leq x_0\}$ (remember that $\leq$ refers to the ordering of $Q_X$, \emph{not} the usual ordering!). Consider an $x\in Q'_X$ of the form $\displaystyle \frac{1}{p}$, $p$ prime; call it $\chi$. As in the proof of Theorem \ref{thmq}, we need to show 2 things:
\begin{itemize}
\item $\text{sgn}(x'-\chi)(x'-\chi,f(x')-y)=\text{sgn}(x'-x'')(x'-x'',f(x')-f(x''))$ is never true for $x',x''<\chi$. The additional complication here is that at the current step of the algorithm we know the values of $f$ up to $x_0$, but we endeavor to prove a property that holds for $x<\chi$, i.e., one involving future values! The way to avoid the complication is to apply our favorite argument to the first coordinate only, disregarding entirely what the values of $f$ are: $x'-x''$ cannot contain $p$ as a factor in its denominator, while $\chi-x'$ does, hence they cannot be equal. It follows that $\chi$ will belong to $A$ as long as it satisfies the second condition we are now about to test, and also that $\chi$ can actually be chosen among infinitely many points.
\item $\text{sgn}(x'-\chi)(x'-\chi,f(x')-y)=\text{sgn}(x''-\chi)(x''-\chi,f(x'')-y)$ is never true for $x',x''<\chi$. In order to check this we repeat verbatim the proof of Theorem \ref{thmq}: we assume that $\chi$ is the midpoint of some $x'$ and $x''$, and then show this is impossible, unless perhaps $\displaystyle \chi=\frac{1}{2}$. It follows that $\displaystyle \frac{1}{p}$ satisfies this condition too, with the possible exception of the case $p=2$. But this still leaves infinitely many points of the form $\displaystyle \frac{1}{p}$, $p$ prime, in $A$, hence in particular $A\neq \emptyset$.
\end{itemize}
This completes the proof.
\end{proof}
\subsection{An explicit construction}
Algorithm \ref{algq} is not exactly constructive; we cannot, for example, readily compute what $\displaystyle f\left(\frac{8}{1025}\right)$ is equal to. We propose here a constructive method for building a Costas permutation on the rationals; the catch, however, is that it only works on a subset of $Q$.
\begin{dfn}
We define the set of \emph{prime rationals} $Q_P$ in $[0,1]$ to be the subset of $Q$ with prime denominators; namely $\displaystyle Q_P=\left\{\frac{i}{p}:\ i\in[p-1]+1,\ p\text{ prime}\right\}$.
\end{dfn}
\begin{thm}
For each prime $p$, consider a Welch Costas permutation $f_p:[p-1]+1\rightarrow [p-1]+1$ constructed in $\mathbb{F}(p)$, and consider the set of points $\displaystyle S(p)=\left\{\left(\frac{i}{p},\frac{f_p(i)}{p}\right):\ i\in[p-1]+1\right\}$. The set $\displaystyle S=\bigcup_{p\text{ prime}} S(p)$ is a Costas permutation on $Q_P$.
\end{thm}
\begin{proof}
$S$ is clearly a permutation. We need to show that the distance vectors between all pairs of points are distinct.
\begin{itemize}
\item Choose 4 points in the same $S(p)$: the Costas property of $f_p$ guarantees that the 2 distance vectors they define are distinct.
\item Choose 2 points in $S(p)$ and 2 points in $S(q)$, $q\neq p$: the first distance vector has coordinates that are fractions over $p$, while the second over $q$, hence they cannot be equal.
\item Choose 2 points in $S(p)$, a point in $S(q)$, and a point in $S(r)$, where $p,q,r$ are distinct primes: the first distance vector has coordinates that are fractions over $p$, while the second over $qr$, hence they cannot be equal.
\item Choose a point in $S(p)$, a point in $S(q)$, a point in $S(r)$, and a point in $S(s)$, where $p,q,r,s$ are distinct primes: the first distance vector has coordinates that are fractions over $pq$, while the second over $rs$, hence they cannot be equal.
\end{itemize}
This completes the proof.
\end{proof}
\section{Conclusion}
In this work, we have made 4 main and original contributions to the subject of Costas arrays:
\begin{itemize}
\item We defined the Costas property on a real continuum function in 2 ways, through distance vectors between points and through the autocorrelation, and we showed that the 2 definitions are equivalent. We also showed that real continuum Costas bijections can be used in the same applications as discrete Costas arrays, by designing signals with the appropriate instantaneous frequency, which has been made possible by recent advances in the field. Subsequently, we studied similarly the Costas property on rational continuum functions. Essentially, we have now translated the entire framework of Costas arrays into the continuum.
\item We showed that real continuum Costas bijections exist and we offered some examples; we characterized completely the continuously differentiable Costas bijections in terms of the monotonicity of their derivative, and we also obtained some good results for the case where the bijections are only piecewise continuously differentiable.
\item We investigated whether it is possible to construct fractal bijections with the Costas property, perhaps by employing discrete Costas arrays as building blocks. We answered this in the affirmative under nonstandard arithmetic laws (where addition and subtraction take place without carry, or where the contribution of the least significant digits of the points to their distance is deemphasized) in the real continuum; under ordinary arithmetic we have no reason to believe that the result still holds true.
\item We proposed a very general and flexible algorithm for the construction of Costas permutations over the rationals, which is not, however, entirely constructive. We were also able to formulate a constructive algorithm, but its applicability is limited to a subset of the rationals.
\end{itemize}
Overall, it came as a surprise to us that it was relatively simple to construct smooth continuum functions with the Costas property, whereas all efforts to create a fractal Costas real bijection were unsuccessful (under ordinary arithmetic). Intuitively, given the irregularity of discrete Costas arrays, we would expect the known construction methods for Costas arrays to generalize in a natural way in the real continuum, leading to a fractal; however, a direct recursion, such as our attempt in Section \ref{cofrac}, seems to be inappropriate, unless we change the arithmetic we use. It may still be possible to construct a Costas fractal bijection based on discrete Costas arrays through a different, less obvious mechanism, and we challenge the reader to discover such a mechanism.
\begin{thebibliography}{10}
\bibitem{B} E. Bedrosian. ``A product theorem for Hilbert transforms'', Proceedings of the IEEE, 51 (1963), pp. 868-869
\bibitem{BC} A. M. Bruckner, J. C. Ceder. ``Darboux continuity'', Jahresber. Deutsch. Math. Ver., 67 (1965), pp. 93-117
\bibitem{C} J. P. Costas. ``A study of detection waveforms having nearly ideal range-doppler ambiguity properties'', Proceedings of the IEEE, 72(8) (1984), pp. 996-1009
\bibitem{D} K. Drakakis. ``A review of Costas arrays'', Journal of Applied Mathematics, Volume 2006 (2006), Article ID 26385
\bibitem{D3} K. Drakakis. ``Data mining and Costas arrays'', Turkish Journal of Electrical Engineering \& Computer Sciences, Vol. 1, No. 4 (2007)
\bibitem{D2} K. Drakakis. ``On some properties of Costas arrays generated via finite fields'', IEEE CISS 2006
\bibitem{G} S. Golomb.
``Algebraic constructions for Costas arrays'', Journal of Combinatorial Theory, Series A, 37 (1984), pp. 13-21
\bibitem{HSLWSZTL} N. E. Huang, Z. Shen, S. R. Long, M. C. Wu, E. H. Shih, Q. Zheng, C. C. Tung, and H. H. Liu. ``The empirical mode decomposition method and the Hilbert spectrum for non-stationary time series analysis'', Proc. Roy. Soc. London A, 454 (1998), pp. 903-995
\bibitem{M} K. R. Matthews. ``A generalization of Artin's conjecture for primitive roots'', Acta Arith., 29 (1976), pp. 113-146
\bibitem{SVM} J. Silverman, V. Vickers, and J. Mooney. ``On the number of Costas arrays as a function of array size'', Proceedings of the IEEE, 76 (1988), pp. 851-853
\end{thebibliography}
\end{document}
\begin{document} \title{Interference-induced directional emission from an unpolarized two level emitter into a circulating cavity} \author{Lucas Ostrowski} \author{Scott Parkins} \affiliation{Dodd-Walls Centre for Photonic and Quantum Technologies, New Zealand} \affiliation{Department of Physics, University of Auckland, New Zealand} \author{Morito Shirane} \author{Mark Sadgrove} \affiliation{Department of Physics, Faculty of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan} \begin{abstract} Chiral coupling between quantum emitters and evanescent fields allows directional emission into nanophotonic devices and is now considered to be a vital ingredient for the realization of quantum networks. However, such coupling requires a well defined circular dipole moment for the emitter -- something difficult to achieve for solid state emitters at room temperature due to thermal population of available spin states. Here, we demonstrate that a two level emitter with a randomly polarized dipole moment can be made to emit directionally into a circulating cavity if a separate emitter is chirally coupled to the same cavity, for the case when both emitter-cavity couplings are strong but in the bad-cavity regime. Our analysis of this system first considers a transient scenario, which highlights the physical mechanism giving rise to the directional emission of the two level emitter into the cavity. An alternative setup involving a weak laser field continuously driving the system is also considered, where the directionality (our proposed figure of merit for this scheme) is shown to be significantly more robust against noise processes. The results presented here take the form of approximate analytical expressions backed by complete numerical simulations of the system. 
\end{abstract}
\maketitle
\section{Introduction}
For a number of years it has been recognized that single quantum emitters (QEs) coupled strongly to an optical cavity are excellent candidates for nodes in quantum networks, where matter qubits based on QEs are interfaced with flying qubits~\cite{KimbleQI}. In particular, protocols exist for the direct quantum state transfer of qubits from light to atoms and back using such nodes~\cite{Boozer,Dayan1,Dayan2}. However, one problem with such cavity-based nodes is that the inherent left-right symmetry of the cavity means that photons couple randomly to the left- and right-propagating cavity modes, so that any state transfer protocol is non-deterministic. Although use of a one-sided cavity solves this problem after a fashion, it also inconveniently restricts the topology of the network. Recently, the rise of nanophotonic devices and associated evanescent light-matter coupling schemes has led to an elegant workaround for the above problem, which goes as follows: While the \emph{total} spin angular momentum of the evanescent field is typically zero, \emph{locally} it is elliptically polarized and, crucially, this local elliptical polarization \emph{is coupled to the direction of propagation of the waveguide mode.} This property is variously referred to as spin-orbit coupling of light~\cite{Liberman,Nori1} and spin-momentum locking~\cite{Jacob,Bliokh}. This means that quantum emitters which support a circularly polarized dipole transition (e.g., between magnetic sublevels) with a suitably oriented quantization axis will couple strongly only to the waveguide mode whose local polarization at the position of the emitter matches the transition's polarization, and thus (due to the coupling of local polarization state and mode propagation direction) couple strongly to only one direction.
The research field which studies light-matter interactions with spin-momentum-locked light is known as ``chiral quantum optics''~\cite{ChiralQO}. In the past few years, a number of devices have been proposed and demonstrated using the principles of chiral quantum optics~\cite{Turchette, RauschenbeutelCirc, DayanRefl,RauschIsolator,Shomroni, Sollner}, and chiral techniques are now expected to be a novel tool for the realization of quantum networks~\cite{Pichler,Stannigel}. \begin{figure} \caption{Schematic of the proposed scheme: a two-level quantum dot (QD) and a chirally coupled atom both couple to the counter-clockwise- and clockwise-propagating modes ($a$ and $b$) of a circulating cavity.} \label{fig:Concept} \end{figure} One practical problem when it comes to applying chiral coupling techniques is that room-temperature solid-state quantum emitters typically emit randomly polarized photons~\cite{Abe}. In this case directional emission of photons using the spin-momentum locking effect is not possible. Here, we will demonstrate how it is possible to arrange directional emission from such an unpolarized quantum emitter into a circulating cavity mode if a separate emitter is chirally coupled to the same cavity. Our scheme works as follows: First, assume that a two-level quantum emitter -- which we will refer to as a quantum dot (QD) -- couples equally, with strength $g_q$, to both counter-clockwise- and clockwise-propagating modes (henceforth referred to as modes $a$ and $b$, respectively) of a whispering-gallery-mode (WGM) cavity. Additionally, we assume that the coupling is in the so-called bad-cavity regime, i.e., $g_q\ll\kappa$, with $\kappa$ the cavity field decay rate, but $g_q$ is still assumed to be much larger than the excited state spontaneous emission rate $\gamma_q$ of the QD, so that the cooperativity $C_q=g_q^2/(\kappa\gamma_q)\gg 1$. This ensures that any emission from the quantum dot takes place primarily through the cavity modes.
Now suppose that another quantum emitter -- which we shall refer to as an atom -- is prepared in a (stable) internal state which couples chirally to the same cavity, i.e., it has a large coupling strength $g_{a}$ to mode $a$, but a much smaller coupling strength $g_{b}$ to mode $b$. Later, we will consider the concrete example of a $^{133}$Cs atom, for which the ratio $g_{a}/g_{b}$ can be as large as $\sqrt{45}$. Now, if the atom-cavity coupling is also in the bad-cavity regime and with large cooperativity ($C_a=g_a^2/(\kappa\gamma_a)\gg 1$), we will show that the QD excited state decays principally into just one mode (i.e., one direction) as a result of destructive quantum interference between the QD and atomic dipole fields in one polarization. The direction can be chosen by the atomic internal state, giving all of the advantages of chiral coupling even though the quantum dot is randomly polarized. We note that an alternative scheme also exists where the atom is in the strong coupling regime, i.e., $g_a\gg\kappa,\gamma_a$; here, the mechanism is the strong vacuum Rabi splitting of one cavity mode, which simply drives that mode off resonance from the QD, leading to Purcell enhancement for QD emission just into the other cavity mode. However, the bad cavity condition is less strict than the strong coupling regime, and, indeed, can be achieved even for very lossy resonators such as plasmonic nanostructures. We therefore focus on the scheme in which both quantum dot and atom are coupled to the cavity in the bad cavity regime. At this point, we address one obvious question regarding our work: if we assume the existence of a chirally coupled atom, then why not dispense with the quantum dot and simply use the chirally coupled photons from the atom itself? There are several reasons why our scheme has value in spite of this obvious objection.
First, although for simplicity we consider a single chirally coupled atom in the present work, there is actually no need for a \textit{single} chirally coupled emitter -- our scheme also works with collectively coupled chiral emitters, and indeed there are some advantages in terms of a reduction in the required single emitter coupling strength in this case~\cite{ScottMaarten}. Moreover, it is in general much easier experimentally to couple multiple atoms to a resonator, and it can feasibly be achieved for room-temperature atoms as opposed to laser-cooled ones~\cite{Ritter}. In this case, our scheme has the advantage that coupling to a single emitter is only necessary in the experimentally simpler case of the quantum dot or other solid-state emitter. Second, the chirally coupled emitter(s) can in principle control the direction of emission from a number of quantum dots coupled to the cavity, assuming that the quantum dots are addressed one at a time. Third, it is useful both conceptually, and potentially in applications, to separate the functions of chiral coupling and single photon emission. This is in line with the general principle of hybrid quantum systems, wherein the advantages of individual quantum systems are hybridized by coupling them. In this case, the convenience of single photon emission from a fixed, single quantum dot and the directionality of coupling for cold atoms are combined to create a single system more useful than either of the systems alone. The rest of the paper proceeds as follows: In Section~\ref{SecModel} we introduce our formal model of the system, and define the relevant master equation. In Section~\ref{SecSingleEx} we analyse the single excitation regime, where the QD starts in the excited state, the atom is in its ground state, and both cavity modes are in the vacuum state. We perform a trajectory analysis and derive the directionality of emission in the ideal case.
In Section~\ref{SecWeaklyDriven}, we consider the case where the quantum dot is weakly driven by an external field. We derive the steady state system properties, and calculate the directionality in the steady state limit. The photon statistics of the cavity output fields are also examined in detail. Finally, in Section~\ref{SecDisc}, we discuss our results and offer concluding remarks. \section{Theoretical model} \label{SecModel} \subsection{Master Equation} Referring to Fig.~\ref{fig:Concept}, we define our system as being composed of a quantum dot modelled as a two-level system with ground state $|G\rangle$ and excited state $|E\rangle$, a three-level atom with ground state $|g\rangle$ and excited states $|+\rangle$ and $|-\rangle$, and a circulating cavity with two orthogonal modes $a$ and $b$ (counter-clockwise- and clockwise-propagating, respectively), to which both the quantum dot and the atom couple. To understand the physics behind this scheme, our study begins with a master equation to model the dissipative dynamics of our proposed system. By employing a Born-Markov treatment for the mechanisms of spontaneous emission and cavity decay, we find the following equation of motion for the system density operator $\hat\rho$ ($\hbar=1$): \begin{align}\label{m_full} \begin{split} \frac{d\hat{\rho}}{dt} = & -i\commutator{\hat{H}}{\hat{\rho}(t)}+\kappa\left(\mathcal{D}\left[\hat{a}\right] + \mathcal{D}[\hat{b}]\right)\hat{\rho}(t) \\ & + \frac{\gamma_q}{2}\mathcal{D}\left[\hat{\sigma}_{q-}\right]\hat{\rho}(t) + \frac{\gamma_a}{2}\left(\mathcal{D}\left[\hat{\sigma}_{a-}\right] + \mathcal{D}\left[\hat{\sigma}_{b-}\right]\right)\hat{\rho}(t).
\end{split} \end{align} Here, we have the usual Lindblad superoperator defined as $\mathcal{D}[ \hat{O} ]\bullet \equiv 2\hat{O}\bullet\hat{O}^\dagger - \hat{O}^\dagger\hat{O}\bullet - \bullet\hat{O}^\dagger\hat{O}$, the lowering operators $\hat{\sigma}_{q-}\equiv\ket{G}\bra{E}$, $\hat{\sigma}_{a-}\equiv\ket{g}\bra{+}$, and $\hat{\sigma}_{b-}\equiv\ket{g}\bra{-}$, where the former acts on the Hilbert space of the QD and the latter two act on the Hilbert space of the atom. The bosonic annihilation operators $\hat{a}$ and $\hat{b}$ act on the spaces of the degenerate, counterpropagating cavity modes $a$ and $b$, respectively. Assuming that the quantum dot couples to each cavity mode with strength $g_q$, and that the atom couples to mode $a$ with strength $g_a$ and mode $b$ with strength $g_b$, then in the rotating wave approximation and assuming a Jaynes-Cummings-type interaction between each of the QD and atom with the quantized radiation fields of the cavity, the Hamiltonian may be expressed as \begin{align}\label{H_full} \begin{split} \hat{H} = {} & \Delta_c\left( \hat{a}^\dagger\hat{a} + \hat{b}^\dagger\hat{b} \right) \\& + \Delta_q\hat{\sigma}_{q+}\hat{\sigma}_{q-} + \Delta_a\hat{\sigma}_{a+}\hat{\sigma}_{a-} + \Delta_b\hat{\sigma}_{b+}\hat{\sigma}_{b-} \\ & + \left(g_{q}\hat{\sigma}_{q+}\left(\hat{a} + \hat{b}\right) + g_a\hat{\sigma}_{a+}\hat{a} + g_b\hat{\sigma}_{b+}\hat{b} + \text{H.c.}\right) \\ & + \Omega \hat{\sigma}_{q+} + \Omega^*\hat{\sigma}_{q-}, \end{split} \end{align} where H.c. denotes Hermitian conjugate. This Hamiltonian is written in a frame rotating at a rate $\omega_L$, corresponding to the frequency of the laser driving the QD with strength $\Omega$. 
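For readers who wish to experiment with the model numerically, the master equation (\ref{m_full}) with Hamiltonian (\ref{H_full}) can be assembled directly with NumPy and its structural properties (Hermiticity preservation and trace preservation) verified. The following is a minimal sketch, not the simulation used for any figures; the parameter values and the photon-number cutoff are our own illustrative assumptions.

```python
import numpy as np

def kron_all(*ops):
    """Tensor product over the factors QD x atom x mode a x mode b."""
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# Hilbert space: QD (2) x atom (3: g, +, -) x mode a (N) x mode b (N)
N = 2  # photon-number cutoff per mode (adequate near the single-excitation regime)
Iq, Ia3, Ic = np.eye(2), np.eye(3), np.eye(N)

# Lowering operators on each factor
sq = np.array([[0.0, 1.0], [0.0, 0.0]])      # |G><E|
sa = np.zeros((3, 3)); sa[0, 1] = 1.0        # |g><+|
sb = np.zeros((3, 3)); sb[0, 2] = 1.0        # |g><-|
ann = np.diag(np.sqrt(np.arange(1, N)), 1)   # mode annihilation operator

S_q = kron_all(sq, Ia3, Ic, Ic)
S_a = kron_all(Iq, sa, Ic, Ic)
S_b = kron_all(Iq, sb, Ic, Ic)
A = kron_all(Iq, Ia3, ann, Ic)
B = kron_all(Iq, Ia3, Ic, ann)

# Illustrative parameters in units of kappa = 1 (assumed values)
kappa, g_q, g_a = 1.0, 0.05, 0.25
g_b = g_a/np.sqrt(45)
gam_q = gam_a = 1e-3
Om = 1e-2
D_c = D_q = D_a = D_b = 0.0

# Hamiltonian of Eq. (2), with Hermitian-conjugate terms written out
H = (D_c * (A.conj().T @ A + B.conj().T @ B)
     + D_q * S_q.conj().T @ S_q + D_a * S_a.conj().T @ S_a + D_b * S_b.conj().T @ S_b
     + g_q * S_q.conj().T @ (A + B) + g_a * S_a.conj().T @ A + g_b * S_b.conj().T @ B
     + g_q * (A + B).conj().T @ S_q + g_a * A.conj().T @ S_a + g_b * B.conj().T @ S_b
     + Om * (S_q.conj().T + S_q))

def Dsup(O, rho):
    """Lindblad superoperator as defined in the text: D[O]rho = 2 O rho O+ - O+O rho - rho O+O."""
    return 2 * O @ rho @ O.conj().T - O.conj().T @ O @ rho - rho @ O.conj().T @ O

def rhs(rho):
    """Right-hand side of the master equation."""
    return (-1j * (H @ rho - rho @ H)
            + kappa * (Dsup(A, rho) + Dsup(B, rho))
            + 0.5 * gam_q * Dsup(S_q, rho)
            + 0.5 * gam_a * (Dsup(S_a, rho) + Dsup(S_b, rho)))

# Trace preservation, Tr(d rho/dt) = 0, holds for an arbitrary state
rng = np.random.default_rng(0)
psi = rng.normal(size=H.shape[0]) + 1j * rng.normal(size=H.shape[0])
rho = np.outer(psi, psi.conj()); rho /= np.trace(rho)
print("Tr(d rho/dt) =", abs(np.trace(rhs(rho))))  # vanishes up to rounding
```

Dedicated open-systems packages (e.g., QuTiP) provide the same construction with far less boilerplate; the explicit version above is only meant to make the operator ordering of the master equation concrete.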
The detunings listed in this equation are therefore defined as the difference between $\omega_L$ and the resonance frequency of the cavity ($\omega_c$), or the transition frequencies between the respective ground and excited states of the two- ($\omega_q$) and three-level emitters ($\omega_\pm$): $\Delta_c\equiv\omega_c-\omega_L$, $\Delta_q\equiv\omega_q-\omega_L$, $\Delta_a\equiv\omega_+-\omega_L$, and $\Delta_b\equiv\omega_--\omega_L$. \subsection{Cavity Output Fields} Equations (\ref{m_full}) and (\ref{H_full}) together describe the evolution of a system state in the form of a density operator, $\hat{\rho}(t)$, which is generally mixed. An alternative yet equally useful approach to model the behaviour may be found by working within the Heisenberg picture, where a set of Langevin equations may be derived which describe the temporal evolution of the system operators. The equations of motion for the two cavity annihilation operators are \begin{subequations}\label{Langevin} \begin{align}\label{a(t)} \begin{split} \frac{d\hat{a}}{dt} & = -i\commutator{\hat{a}(t)}{\hat{H}} - \kappa\hat{a}(t) - \sqrt{2\kappa}\, \hat{a}_\text{in}(t) \\ & = -\left( \kappa + i\Delta_c \right)\hat{a}(t) - ig_q\hat{\sigma}_{q-}(t) - ig_a\hat{\sigma}_{a-}(t) \\ & ~~~~ - \sqrt{2\kappa}\, \hat{a}_\text{in}(t) , \end{split} \end{align} \begin{align}\label{b(t)} \begin{split} \frac{d\hat{b}}{dt} & = -i\commutator{\hat{b}(t)}{\hat{H}} - \kappa\hat{b}(t) - \sqrt{2\kappa}\, \hat{b}_\text{in}(t) \\ & = -\left( \kappa + i\Delta_c \right)\hat{b}(t) - ig_q\hat{\sigma}_{q-}(t) - ig_b\hat{\sigma}_{b-}(t) \\ & ~~~~ - \sqrt{2\kappa}\, \hat{b}_\text{in}(t), \end{split} \end{align} \end{subequations} where $\hat{a}_\text{in}(t)$ and $\hat{b}_\text{in}(t)$ are vacuum input field operators. 
A simple formula may then be obtained which relates the system operators to the fields emitted into and out of the cavity by employing the input-output formalism developed in \cite{Gardiner}: \begin{subequations} \begin{align} \hat{a}_\text{out}(t) = \sqrt{2\kappa}\,\hat{a}(t) + \hat{a}_\text{in}(t), \end{align} \begin{align} \hat{b}_\text{out}(t) = \sqrt{2\kappa}\,\hat{b}(t) + \hat{b}_\text{in}(t). \end{align} \end{subequations} Note, however, that in this work we will only be concerned with properties of the output fields that depend on normally-ordered moments of the output field operators, and so the vacuum input field operators will not contribute and can be neglected from here on. \section{Single excitation regime} \label{SecSingleEx} \subsection{Trajectory Analysis} In this section we wish to analyse the transient dynamics of the system starting from an initial state at time $t=0$ with the quantum dot residing in the excited state, the atom in its ground state, and the two cavity modes in the vacuum state. It is assumed that there is no coherent field driving the QD, in which case the Hamiltonian for this system reduces to \begin{align}\label{H_single} \begin{split} \hat{H} = {} & \Delta_q\hat{\sigma}_{q+}\hat{\sigma}_{q-} + \Delta_a\hat{\sigma}_{a+}\hat{\sigma}_{a-} + \Delta_b\hat{\sigma}_{b+}\hat{\sigma}_{b-} \\ & + \left(g_{q}\hat{\sigma}_{q+}\left(\hat{a} + \hat{b}\right) + g_a\hat{\sigma}_{a+}\hat{a} + g_b\hat{\sigma}_{b+}\hat{b} + \text{H.c.}\right), \end{split} \end{align} which is now written in a frame rotating with the cavity resonance frequency, such that $\Delta_q \equiv \omega_q - \omega_c$, $\Delta_a \equiv \omega_+ - \omega_c$ and $\Delta_b \equiv \omega_- - \omega_c$.
This Hamiltonian will only couple states included within the one-quantum manifold, while the action of a jump operator ($\hat{a},\hat{b},\hat{\sigma}_{q-},\hat{\sigma}_{a-},\hat{\sigma}_{b-}$) on any of these states will either be zero, or project the system into the (zero excitation) ground state, $\ket{G_0}$. Therefore, one may effectively implement a trajectory unravelling of the master equation, following the methods outlined in \cite{Howard2}, by decomposing the master equation into the sum of two parts, \begin{align}\label{twoparts} \begin{split} \frac{d\hat{\rho}}{dt} {} & = \mathcal{L}\hat{\rho}(t) \equiv \left(\mathcal{M}+\mathcal{N}\right)\hat{\rho}(t), \end{split} \end{align} with the superoperator $\mathcal{M}$ acting on the one-quantum subspace, \begin{align} \begin{split} \mathcal{M} = & -i\commutator{\hat{H}}{\bullet} - \kappa\commutator{\hat{a}^\dagger\hat{a} + \hat{b}^\dagger\hat{b}}{\bullet}_+ - \frac{\gamma_q}{2}\commutator{\hat{\sigma}_{q+}\hat{\sigma}_{q-}}{\bullet}_+ \\ & - \frac{\gamma_a}{2}\commutator{\hat{\sigma}_{a+}\hat{\sigma}_{a-}}{\bullet}_+ - \frac{\gamma_b}{2}\commutator{\hat{\sigma}_{b+}\hat{\sigma}_{b-}}{\bullet}_+, \end{split} \end{align} where $\commutator{\bullet}{\bullet}_+$ denotes the anticommutator, and the superoperator $\mathcal{N}$ generating transitions to the ground state, \begin{align} \begin{split} \mathcal{N} = & 2\kappa(\hat{a}\bullet\hat{a}^\dagger + \hat{b}\bullet\hat{b}^\dagger) + \gamma_q\hat{\sigma}_{q-}\bullet\hat{\sigma}_{q+} \\ & + \gamma_a\hat{\sigma}_{a-}\bullet\hat{\sigma}_{a+} + \gamma_b\hat{\sigma}_{b-}\bullet\hat{\sigma}_{b+}. 
\end{split} \end{align} For times $t>0$, the system either emits a photon with probability $P(t)$, thus collapsing the state into $\ket{G_0}$, or the system resides in the pure one-quantum state \begin{align}\label{one_quant} \begin{split} \ket{\bar{\psi}(t)} = \big{[} & Q(t)\hat{\sigma}_{q+} + A(t)\hat{\sigma}_{a+} \\ & + B(t)\hat{\sigma}_{b+} + \alpha(t)\hat{a}^\dagger + \beta(t)\hat{b}^\dagger\big{]}\ket{G_0}, \end{split} \end{align} where $Q(t)$ is the probability amplitude for the quantum dot to be excited, $A(t)$ ($B(t)$) is the probability amplitude for the state $\ket{+}$ ($\ket{-}$) of the atom to be excited, and $\alpha(t)$ ($\beta(t)$) is the excitation probability amplitude for the $a$-mode ($b$-mode) of the resonator. The density operator is decomposed in a similar manner, \begin{align} \hat{\rho}(t) = P(t)\ket{G_0}\bra{G_0} + \ket{\bar{\psi}(t)}\bra{\bar{\psi}(t)}. \end{align} Here, $\ket{\bar{\psi}(t)}$ evolves according to a non-unitary Schr\"{o}dinger equation, \begin{align} \frac{d\ket{\bar{\psi}}}{dt} = -i\hat{H}_{NH}\ket{\bar{\psi}}, \end{align} where $\hat H_{NH}$ is the \textit{non-Hermitian Hamiltonian} \begin{align} \begin{split} \hat{H}_{NH} = & \hat{H} -i\kappa(\hat{a}^\dagger\hat{a} + \hat{b}^\dagger\hat{b}) - i\frac{\gamma_q}{2}\hat{\sigma}_{q+}\hat{\sigma}_{q-} \\ & - i\frac{\gamma_a}{2}\hat{\sigma}_{a+}\hat{\sigma}_{a-} - i\frac{\gamma_b}{2}\hat{\sigma}_{b+}\hat{\sigma}_{b-}, \end{split} \end{align} and the squared norm of $\ket{\bar{\psi}(t)}$, $\braket{\bar{\psi}(t)}{\bar{\psi}(t)}$, is equal to $1-P(t)$.
One may then derive the following set of coupled equations for the excited state probability amplitudes: \begin{subequations} \begin{align} & \dot{A} = -\left( \gamma_a/2 + i\Delta_a \right)A(t) - ig_a\alpha(t), \label{A} \\ & \dot{\alpha} = -\kappa\alpha(t) - ig_qQ(t) - ig_aA(t), \label{alpha} \\ & \dot{Q} = -\left( \gamma_q/2 + i\Delta_q \right)Q(t) - ig_q\alpha(t) - ig_q\beta(t), \label{Q} \\ & \dot{\beta} = -\kappa\beta(t) - ig_qQ(t) - ig_bB(t), \label{beta} \\ & \dot{B} = -\left( \gamma_b/2 + i\Delta_b \right)B(t) - ig_b\beta(t). \label{B} \end{align} \end{subequations} \subsection{Adiabatic Elimination of the Cavity Modes} As alluded to previously, the focus of this analysis centers on the scheme in which the QD and atom are coupled to the cavity in the bad-cavity, large-cooperativity regime ($\gamma_{q,a}\ll g_{q,a,b}\ll\kappa$). This permits the cavity mode amplitudes in Eqs.~(\ref{A})-(\ref{B}) to be adiabatically eliminated from the dynamics. This is achieved by setting the time derivatives for $\alpha(t)$ and $\beta(t)$ to zero, which yields a set of equations that relate the cavity mode amplitudes to the excited state amplitudes of the QD and atom, i.e., \begin{subequations} \begin{align}\label{crude2a} \alpha(t) = \frac{-i\left(g_qQ(t) + g_aA(t)\right)}{\kappa}, \end{align} \begin{align}\label{crude2b} \beta(t) = \frac{-i\left(g_qQ(t) + g_bB(t)\right)}{\kappa}. 
\end{align} \end{subequations} Substituting (\ref{crude2a}) and (\ref{crude2b}) into (\ref{A}), (\ref{Q}), and (\ref{B}) gives equations of motion for the QD and atomic excited state probability amplitudes as \begin{subequations} \begin{align}\label{Adot} \dot{A} = -\left( \Gamma_a + i\Delta_a \right)A(t) - g_a'Q(t), \end{align} \begin{align}\label{Qdot} \dot{Q} = -\left( \Gamma_q + i\Delta_q \right)Q(t) - g_a'A(t) - g_b'B(t), \end{align} \begin{align}\label{Bdot} \dot{B} = -\left( \Gamma_b + i\Delta_b \right)B(t) - g_b'Q(t), \end{align} \end{subequations} where we have introduced the cavity-enhanced spontaneous emission rates \begin{align} \begin{split} \Gamma_q \equiv \frac{\gamma_q}{2}\left( 1 + \frac{4g_q^2}{\gamma_q\kappa} \right), \quad \Gamma_{a,b} \equiv \frac{\gamma_{a,b}}{2}&\left( 1 + \frac{2 g_{a,b}^2}{\gamma_{a,b}\kappa} \right), \end{split} \end{align} and the effective coupling strengths between the atom and the quantum dot, \begin{align} g'_{a,b} = \frac{g_qg_{a,b}}{\kappa} . \end{align} These couplings between atom and QD are mediated by the cavity modes $a$ and $b$, respectively. \subsection{Directional Photon Emission: Ideal Case} We now address an idealized case where relatively simple analytic solutions to Eqs.~(\ref{Adot})-(\ref{Bdot}) may be found, which highlight the mechanism that gives rise to directional photon emission within this system. Here, we assume that $g_b$ -- the coupling of the atomic transition to mode $b$ -- is negligible, so that the atom may be modelled as an ideal chiral two-level system. Additionally, all of the transitions within the QD and atom are assumed to be resonant with the cavity frequency ($\Delta_{q,a,b} = 0$), and the coupling of the quantum dot and atom to the cavity modes far exceeds the spontaneous emission rates, so that we may set $\gamma_{q,a}\approx0$.
In this situation Eqs.~(\ref{Adot})-(\ref{Bdot}) reduce to \begin{subequations} \begin{align}\label{Adotideal} \dot{A} = -\tilde\Gamma_aA(t) - \sqrt{\frac{\tilde\Gamma_a\tilde\Gamma_q}{2}}Q(t), \end{align} \begin{align}\label{Qdotideal} \dot{Q} = -\tilde\Gamma_qQ(t) - \sqrt{\frac{\tilde\Gamma_a\tilde\Gamma_q}{2}}A(t), \end{align} \end{subequations} where $\tilde\Gamma_{a} = g_a^2/\kappa$ and $\tilde\Gamma_{q} = 2g_q^2/\kappa$. The solutions to these equations are \begin{subequations} \begin{align}\label{Qapx} \begin{split} Q(t) = & \frac{1}{2}\left(1 + \frac{\tilde\Gamma_a - \tilde\Gamma_q}{\sqrt{\tilde\Gamma_a^2 + \tilde\Gamma_q^2}}\right)e^{\lambda_+t} \\ & + \frac{1}{2}\left(1 - \frac{\tilde\Gamma_a - \tilde\Gamma_q}{\sqrt{\tilde\Gamma_a^2 + \tilde\Gamma_q^2}}\right)e^{\lambda_-t}, \end{split} \end{align} \begin{align}\label{Aapx} A(t) = \sqrt{\frac{\tilde\Gamma_q\tilde\Gamma_a}{2\left(\tilde\Gamma_q^2+\tilde\Gamma_a^2\right)}}\left(e^{\lambda_-t} - e^{\lambda_+t}\right), \end{align} \end{subequations} or, in terms of $g_a$ and $g_q$ \begin{subequations} \begin{align}\label{Qapx2} \begin{split} Q(t) = & \frac{1}{2}\left(1 + \frac{g_a^2 - 2g_q^2}{\sqrt{g_a^4 + 4g_q^4}}\right)e^{\lambda_+t} \\ & + \frac{1}{2}\left(1 - \frac{g_a^2 - 2g_q^2}{\sqrt{g_a^4 + 4g_q^4}}\right)e^{\lambda_-t}, \end{split} \end{align} \begin{align}\label{Aapx2} A(t) = \frac{g_ag_q}{\sqrt{\left(g_a^4+4g_q^4\right)}}\left(e^{\lambda_-t} - e^{\lambda_+t}\right), \end{align} \end{subequations} where \begin{align} \lambda_{\pm} = \frac{-(\tilde\Gamma_a+\tilde\Gamma_q)\pm\sqrt{(\tilde\Gamma_a^2+\tilde\Gamma_q^2)}}{2}. \end{align} By substituting Eqs.~(\ref{Qapx2}) and (\ref{Aapx2}) into Eqs.~(\ref{crude2a}) and (\ref{crude2b}), the cavity mode amplitudes may be expressed as \begin{subequations} \begin{align}\label{modea} \begin{split} \alpha(t) = \frac{-ig_q}{2\kappa}\left( \left[ 1 - \frac{g_a^2+2g_q^2}{\sqrt{g_a^4 + 4g_q^4}}\right]e^{\lambda_+t} \right. \\ \left. 
+ \left[ 1 + \frac{g_a^2+2g_q^2}{\sqrt{g_a^4 + 4g_q^4}}\right]e^{\lambda_-t} \right), \end{split} \end{align} \begin{align}\label{modeb} \begin{split} \beta(t) = \frac{-ig_q}{2\kappa}\left(\left[1 + \frac{g_a^2 - 2g_q^2}{\sqrt{g_a^4 + 4g_q^4}}\right]e^{\lambda_+t} \right. \\ + \left. \left[1 - \frac{g_a^2 - 2g_q^2}{\sqrt{g_a^4 + 4g_q^4}}\right]e^{\lambda_-t}\right). \end{split} \end{align} \end{subequations} The probability of a certain cavity mode emitting a photon may then be calculated using \begin{subequations} \begin{align}\label{Pa} P_a = 2\kappa\int_0^\infty dt'\langle(\hat{a}^\dagger\hat{a})(t')\rangle = 2\kappa\int_0^\infty dt'|\alpha(t')|^2, \end{align} \begin{align}\label{Pb} P_b = 2\kappa\int_0^\infty dt'\langle(\hat{b}^\dagger\hat{b})(t')\rangle = 2\kappa\int_0^\infty dt'|\beta(t')|^2. \end{align} \end{subequations} That is, $P_a$ ($P_b$) is the probability of the system emitting a photon from cavity mode $a$ (mode $b$) at some time during the temporal evolution. We then define the directionality as \begin{align}\label{Direction1} D \equiv \frac{P_b - P_a}{P_b + P_a}, \end{align} which is a measure of the ability of the system to preferentially emit a photon from the desired cavity mode (in this case, mode $b$). Using (\ref{modea}) and (\ref{modeb}), the probabilities $P_a$ and $P_b$ are readily evaluated as \begin{equation} P_a = \frac{\frac{1}{2}\tilde\Gamma_q}{\tilde\Gamma_a+\tilde\Gamma_q} , ~~~ P_b = \frac{\tilde\Gamma_a+\frac{1}{2}\tilde\Gamma_q}{\tilde\Gamma_a+\tilde\Gamma_q} , \end{equation} which give \begin{equation} D = \frac{\tilde\Gamma_a}{\tilde\Gamma_a+\tilde\Gamma_q} . \end{equation} These results clearly demonstrate how the presence of a chiral atom can effectively control the emission of the QD into the cavity modes.
In particular, if the atomic coupling strength $g_a$ is sufficiently large, such that $\tilde\Gamma_a\gg \tilde\Gamma_q$ (i.e., $g_a^2/(2g_q^2)\gg 1$), then $P_b\simeq D \simeq 1$ and the QD emission is predominantly through mode $b$. Note, though, that we still require $\tilde\Gamma_q=2g_q^2/\kappa\gg\gamma_q$ in order for the results of the analysis of this section to hold. Physically, this can be interpreted in terms of destructive quantum interference between QD and atomic dipole fields, i.e., the dipole field associated with the $\ket{g}\leftrightarrow\ket{+}$ atomic transition is $\pi$ out of phase with the $\sigma^+$-polarized component of the QD dipole field, and of the same magnitude, leading to a much-diminished amplitude for the $a$-mode field. This is seen explicitly by noting that, in the limit that we consider above, one has $Q(t)\simeq e^{\lambda_+t}$ and $A(t)\simeq -(g_q/g_a)e^{\lambda_+t}$, so $\alpha (t)=-i(g_qQ(t)+g_aA(t))/\kappa\simeq 0$. Having a large value of the ratio $g_a/g_q$ means that as soon as some excitation of the QD is transferred to the $a$-mode it is rapidly taken up by the $\ket{g}\leftrightarrow\ket{+}$ atomic transition, enabling destructive interference between the QD and atomic fields over the bulk of the time evolution. This behaviour is highlighted in Fig.~\ref{Ideal_dynamics}, where the excited state amplitudes for the QD and the $\ket{g}\leftrightarrow\ket{+}$ atomic transition are plotted as a function of time, along with the photon emission probabilities from the two cavity modes for two values of the ratio $g_a/g_q$ (one small and one large).
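The interference picture above is straightforward to cross-check by integrating the single-excitation amplitude equations of the trajectory analysis directly and evaluating $P_a$, $P_b$, and $D$ numerically. The NumPy sketch below does this for the ideal case ($g_b=0$, $\gamma_{q,a,b}=0$, all detunings zero); the coupling values are illustrative assumptions chosen in the bad-cavity regime, and the result should agree with $D=\tilde\Gamma_a/(\tilde\Gamma_a+\tilde\Gamma_q)$ up to non-adiabatic corrections of order $(g/\kappa)^2$.

```python
import numpy as np

# Illustrative parameters in units of kappa = 1 (bad-cavity regime; assumed values)
kappa = 1.0
g_q, g_a, g_b = 0.05, 0.25, 0.0     # ideal chiral atom: g_b = 0
gam_q = gam_a = gam_b = 0.0         # spontaneous emission neglected
Dq = Da = Db = 0.0                  # all transitions resonant with the cavity

# Amplitude equations for y = (Q, A, B, alpha, beta): dy/dt = M y
M = np.array([
    [-(gam_q/2 + 1j*Dq), 0.0, 0.0, -1j*g_q, -1j*g_q],
    [0.0, -(gam_a/2 + 1j*Da), 0.0, -1j*g_a, 0.0],
    [0.0, 0.0, -(gam_b/2 + 1j*Db), 0.0, -1j*g_b],
    [-1j*g_q, -1j*g_a, 0.0, -kappa, 0.0],
    [-1j*g_q, 0.0, -1j*g_b, 0.0, -kappa],
], dtype=complex)
y0 = np.array([1, 0, 0, 0, 0], dtype=complex)   # QD excited at t = 0

# Exact linear evolution y(t) = V exp(w t) V^{-1} y0, sampled on a time grid
w, V = np.linalg.eig(M)
c = np.linalg.solve(V, y0)
t = np.linspace(0.0, 5000.0, 200001)
Y = (V * c) @ np.exp(np.outer(w, t))            # rows: Q, A, B, alpha, beta

def trap(f):
    # trapezoidal rule on the grid t
    return np.sum(0.5*(f[1:] + f[:-1])*np.diff(t))

P_a = 2*kappa*trap(np.abs(Y[3])**2)
P_b = 2*kappa*trap(np.abs(Y[4])**2)
D = (P_b - P_a)/(P_b + P_a)

# Adiabatic-elimination prediction D = G_a/(G_a + G_q)
G_a, G_q = g_a**2/kappa, 2*g_q**2/kappa
print(P_a + P_b, D, G_a/(G_a + G_q))
```

With spontaneous emission switched off, $P_a+P_b$ integrates to unity (all of the initial excitation leaves through the cavity), which provides a useful sanity check on the time grid and integration horizon.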
\begin{figure} \caption{\textbf{Top row:} Excited state amplitudes for the QD and for the $\ket{g}\leftrightarrow\ket{+}$ atomic transition as functions of time. Also shown are the photon emission probabilities from the two cavity modes, for a small and a large value of the ratio $g_a/g_q$.} \label{Ideal_dynamics} \end{figure} \subsection{Third Atomic Level and Spontaneous Emission} The results obtained above are useful to outline the physical processes within our scheme that cause directional photon emission from the cavity; however, the parameter choice is far removed from what could be achieved within a contemporary experimental setup. In particular, it is unrealistic to neglect the intrinsic process of spontaneous emission from the quantum emitters, as well as the coupling between the $\ket{g}\leftrightarrow\ket{-}$ atomic transition and mode $b$ of the cavity. We therefore shift our attention to a more realistic model, where we now account for these processes and provide a numerical analysis to address their influence on the efficiency of this scheme. To this end, we base our atomic model on a $^{133}\textrm{Cs}$ atom, where the ratio between the coupling strengths of the two transitions to the counter-propagating cavity modes is as large as $g_a/g_b = \sqrt{45}$. For simplicity, the spontaneous emission rate from the excited atomic states is assumed to match that of the QD, i.e., $\gamma_a=\gamma_q \equiv \gamma$. The left-hand plot in Fig.~\ref{3_lev_and_spon} shows a density plot for the directionality, as defined in Eq.~(\ref{Direction1}), against a range of coupling strengths for the QD and atom, when the spontaneous emission rate is set to a value of $\gamma/\kappa = 0.001$. It is observed here that the directionality is optimised when the ratio $g_a/g_q\gg1$, which is consistent with the results obtained for the ideal case where the spontaneous emission rates were neglected. However, the right-hand plot within this figure gives a better representation of how spontaneous emission will influence this scheme. In this plot, the photon emission probability from mode $b$ (Eq.~(\ref{Pb})) is given as a function of the atomic coupling strength and the spontaneous emission rate.
It is clear that by increasing the spontaneous emission rates, more of the excitation within the system will be lost to this scattering process. \begin{figure} \caption{\textbf{Left plot:} Directionality $D$, as defined in Eq.~(\ref{Direction1}), as a function of the QD and atomic coupling strengths, for $\gamma/\kappa = 0.001$. \textbf{Right plot:} Photon emission probability $P_b$ from mode $b$ as a function of the atomic coupling strength and the spontaneous emission rate.} \label{3_lev_and_spon} \end{figure} Additionally, the emission probability is also seen to decrease above values of $g_a/\kappa\simeq 0.12$. This is because, upon allowing for a non-zero coupling to the weaker atomic transition, a new eigenstate of the Hamiltonian (\ref{H_single}) arises, which takes the form \begin{align}\label{cav_drk} \begin{split} \ket{CD} \propto \Big( & g_qg_b\ket{G}\ket{+} - g_ag_b\ket{E}\ket{g} \\ & + g_qg_a\ket{G}\ket{-} \Big)\ket{0}_a\ket{0}_b. \end{split} \end{align} Because the only cavity states that contribute to $\ket{CD}$ are the vacuum states, $\ket{0}_a$ and $\ket{0}_b$, this state will be dark to the cavity modes and will therefore not emit photons into the cavity modes. The overlap between this state and the initial system state $\ket{\psi(t = 0)} = \ket{E}\ket{g}\ket{0}_a\ket{0}_b$ is given by \begin{align}\label{overlap} |\braket{CD}{\psi(t=0)}|^2 = \frac{g_a^2g_b^2}{g_q^2g_b^2 + g_a^2g_b^2 + g_q^2g_a^2}. \end{align} This result shows that by increasing the atomic coupling strengths ($g_a$ and $g_b$) the initial state of the system populates more of the cavity-dark eigenstate. As this eigenstate decays only via spontaneous emission, the fraction of emission that is routed into the cavity modes, and in turn the probability of photon emission into the desired cavity mode, is reduced. A trade-off is therefore identified for the engineering of this system, where an optimal atomic coupling strength should be found which will achieve the highest possible directionality without significantly populating this cavity-dark eigenstate, such that most of the excitation that is initially stored in the QD will be routed into mode $b$ of the cavity.
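Both the dark-state property and the overlap formula above can be verified numerically in the five-dimensional single-excitation subspace. In the sketch below the coupling values are illustrative assumptions, with $g_a/g_b=\sqrt{45}$ as for the $^{133}$Cs model.

```python
import numpy as np

# Illustrative couplings in units of kappa = 1 (assumed values),
# with g_a/g_b = sqrt(45) as for the Cs-based atomic model
g_q, g_a = 0.05, 0.25
g_b = g_a/np.sqrt(45)

# Single-excitation basis: |E,g,0,0>, |G,+,0,0>, |G,-,0,0>, |G,g,1_a,0>, |G,g,0,1_b>
# Hamiltonian restricted to this sector, with all detunings set to zero
H = np.array([
    [0.0, 0.0, 0.0, g_q, g_q],
    [0.0, 0.0, 0.0, g_a, 0.0],
    [0.0, 0.0, 0.0, 0.0, g_b],
    [g_q, g_a, 0.0, 0.0, 0.0],
    [g_q, 0.0, g_b, 0.0, 0.0],
])

# Cavity-dark state |CD> written in the basis above
cd = np.array([-g_a*g_b, g_q*g_b, g_q*g_a, 0.0, 0.0])
cd = cd/np.linalg.norm(cd)

# |CD> is a zero-energy eigenstate: it never feeds the cavity modes
print(np.linalg.norm(H @ cd))

# Its overlap with the initial state |E,g,0,0> reproduces the closed form
ov = cd[0]**2
formula = g_a**2*g_b**2/(g_q**2*g_b**2 + g_a**2*g_b**2 + g_q**2*g_a**2)
print(ov, formula)
```

The amplitudes in the dark state are arranged so that the two cavity-feeding combinations, $g_q Q + g_a A$ and $g_q Q + g_b B$, vanish identically, which is why its only decay channel is spontaneous emission.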
However, it is in fact possible for this effect to be reduced by introducing a finite detuning ($\Delta_b$) of the weaker atomic transition from cavity resonance. In this situation, the eigenstate will no longer be dark to the cavity modes, therefore reducing the amount of light that is lost from the system via spontaneous emission. As an example, consider the case corresponding to the top right-hand corner of the plot of $P_b$ in Fig.~\ref{3_lev_and_spon}, where $g_a/\kappa =0.25$, $g_q/\kappa =0.05$, and $\gamma /\kappa =0.001$. With $\Delta_b=0$, we have $P_b=0.54$ and $D=0.87$, but with $\Delta_b/\kappa =0.1$ we obtain $P_b=0.79$ and $D=0.91$, a significant improvement. Furthermore, note that for the parameters of this example $\tilde\Gamma_q/\gamma_q=5$, which is not a lot larger than 1 (so the assumption that $\gamma_q\simeq 0$ is marginal). Doubling the coupling strengths to $g_a/\kappa =0.5$ and $g_q/\kappa =0.1$, and choosing $\Delta_b/\kappa =0.1$ again, we find further improvement, with $P_b=0.91$ and $D=0.92$. We consider the effect of detuning the weakly coupled atomic transition further in the following section, where we examine the regime in which the QD is driven continuously by a weak coherent field and we evaluate the steady-state behaviour of the system. \section{Weakly driven quantum dot regime} \label{SecWeaklyDriven} The analysis of the transient scenario considered in the previous section is useful for highlighting important dynamical aspects of the system, although the efficiency of this directional emission scheme is significantly limited by noise processes that would be typical in practice. In particular, obtaining high values for the directionality, while also maintaining a high emission probability $P_b$, required the consideration of very low spontaneous emission rates of the emitters -- much lower than what would be feasible experimentally in the near future.
Here we investigate a slightly altered setup in order to study the behaviour of our system at steady state, in which the directional emission of the QD is shown to be more robust against the effects of spontaneous emission from the emitters. In contrast to the previous section, we now focus on the situation where the QD is continuously driven by a weak coherent field, which is to say that the parameter $\Omega$ in Eq.~(\ref{H_full}) is non-zero, yet much smaller than the rates $g_q$ and $\kappa$, such that a linearised set of equations may be obtained for the relevant operator expectation values. \subsection{Steady-State Output Photon Fluxes} Starting from Eqs.~(\ref{m_full}) and (\ref{H_full}), the equation of motion for the expectation value of a system operator $\langle\hat{O}\rangle$ follows from the formula $\frac{d\langle\hat{O}\rangle}{dt} = \textrm{Tr}\left[\hat{O}\frac{d\hat{\rho}}{dt}\right]$. The equations of motion for the relevant system operators follow as \begin{subequations}\label{mean} \begin{align}\label{mean1} \begin{split} \frac{d\langle\hat{\sigma}_{a-}\rangle}{dt} = {} & -\left( i\Delta_a + \frac{\gamma_a}{2} \right)\langle \hat{\sigma}_{a-} \rangle + ig_a\langle \hat{\sigma}_{az} \hat{a} \rangle \\ & + ig_b\langle \hat{\sigma}_{b+}\hat{\sigma}_{a-}\hat{b} \rangle, \end{split} \end{align} \begin{align}\label{mean2} \begin{split} \frac{d\langle\hat{a}\rangle}{dt} = -\left(i\Delta_c + \kappa\right)\langle \hat{a} \rangle - ig_q\langle \hat{\sigma}_{q-} \rangle - ig_a\langle \hat{\sigma}_{a-} \rangle, \end{split} \end{align} \begin{align}\label{mean3} \begin{split} \frac{d\langle\hat{\sigma}_{q-}\rangle}{dt} = {} & -\left(i\Delta_q + \frac{\gamma_q}{2}\right)\langle \hat{\sigma}_{q-} \rangle + ig_q\langle \hat{\sigma}_{qz}\hat{a} \rangle \\ & + ig_q\langle \hat{\sigma}_{qz}\hat{b} \rangle + i\Omega\langle \hat{\sigma}_{qz} \rangle, \end{split} \end{align} \begin{align}\label{mean4} \begin{split}
\frac{d\langle\hat{b}\rangle}{dt} = -\left(i\Delta_c + \kappa\right)\langle \hat{b} \rangle - ig_q\langle \hat{\sigma}_{q-} \rangle - ig_b\langle \hat{\sigma}_{b-}\rangle, \end{split} \end{align} \begin{align}\label{mean5} \begin{split} \frac{d\langle\hat{\sigma}_{b-}\rangle}{dt} = {} & -\left( i\Delta_b + \frac{\gamma_a}{2} \right)\langle \hat{\sigma}_{b-} \rangle + ig_b\langle \hat{\sigma}_{bz} \hat{b} \rangle \\ & + ig_a\langle \hat{\sigma}_{a+}\hat{\sigma}_{b-}\hat{a} \rangle, \end{split} \end{align} \end{subequations} where $\langle\hat{\sigma}_{iz}\rangle = \langle \hat{\sigma}_{i+}\hat{\sigma}_{i-} \rangle - \langle \hat{\sigma}_{i-}\hat{\sigma}_{i+} \rangle$ is the mean inversion of a transition within the QD or atom. These equations are linearised by assuming that the driving strength of the laser, $\Omega$, is sufficiently weak that the quantum dot and atom remain primarily in their ground states. This permits the approximations $\langle\hat{\sigma}_{iz}\rangle \approx -1$, $\langle \hat{\sigma}_{az} \hat{a} \rangle \approx -\langle \hat{a} \rangle$, $\langle \hat{\sigma}_{bz} \hat{b} \rangle \approx -\langle \hat{b} \rangle$ and $\langle \hat{\sigma}_{b+}\hat{\sigma}_{a-}\hat{b} \rangle\approx\langle \hat{\sigma}_{a+}\hat{\sigma}_{b-}\hat{a} \rangle\approx0$.
Within this approximation, Eqs.~(\ref{mean1})-(\ref{mean5}) reduce to \begin{subequations}\label{mean_approx} \begin{align}\label{mean_approx1} \frac{d\langle\hat{\sigma}_{a-}\rangle}{dt} = -\left( i\Delta_a + \frac{\gamma_a}{2} \right)\langle \hat{\sigma}_{a-} \rangle - ig_a\langle \hat{a} \rangle, \end{align} \begin{align}\label{mean_approx2} \frac{d\langle\hat{a}\rangle}{dt} = -\left(i\Delta_c + \kappa\right)\langle \hat{a} \rangle - ig_q\langle \hat{\sigma}_{q-} \rangle - ig_a\langle \hat{\sigma}_{a-} \rangle, \end{align} \begin{align}\label{mean_approx3} \frac{d\langle\hat{\sigma}_{q-}\rangle}{dt} = -\left(i\Delta_q + \frac{\gamma_q}{2}\right)\langle \hat{\sigma}_{q-} \rangle - ig_q\langle \hat{a} \rangle - ig_q\langle \hat{b} \rangle - i\Omega, \end{align} \begin{align}\label{mean_approx4} \frac{d\langle\hat{b}\rangle}{dt} = -\left(i\Delta_c + \kappa\right)\langle \hat{b} \rangle - ig_q\langle \hat{\sigma}_{q-} \rangle - ig_b\langle \hat{\sigma}_{b-}\rangle, \end{align} \begin{align}\label{mean_approx5} \frac{d\langle\hat{\sigma}_{b-}\rangle}{dt} = -\left( i\Delta_b + \frac{\gamma_a}{2} \right)\langle \hat{\sigma}_{b-} \rangle - ig_b\langle \hat{b} \rangle. \end{align} \end{subequations} We are interested in the steady-state behaviour. Therefore we set the time derivatives in Eqs.~(\ref{mean_approx1})-(\ref{mean_approx5}) to zero. 
For $\gamma = \gamma_q = \gamma_a$ and $\Delta_{c,q,a,b} = 0$, a relatively simple set of algebraic equations is found for the following expectation values of the system operators at steady-state: \begin{subequations}\label{ss} \begin{align}\label{ssq} \langle \hat{\sigma}_{q-} \rangle_{ss} = \frac{-2i\Omega}{\gamma\left(1 + 2C_q\left[\frac{1}{1+2C_a} + \frac{1}{1+2C_b}\right]\right)}, \end{align} \begin{align}\label{ssA} \langle \hat{\sigma}_{a-} \rangle_{ss} = \frac{-2g_qg_a}{\kappa\gamma(1+2C_a)}\langle \hat{\sigma}_{q-} \rangle_{ss}, \end{align} \begin{align}\label{ssB} \langle \hat{\sigma}_{b-} \rangle_{ss} = \frac{-2g_qg_b}{\kappa\gamma(1+2C_b)}\langle \hat{\sigma}_{q-} \rangle_{ss}, \end{align} \begin{align}\label{ssa} \langle \hat{a} \rangle_{ss} = \frac{-ig_q}{\kappa(1+2C_a)}\langle \hat{\sigma}_{q-} \rangle_{ss}, \end{align} \begin{align}\label{ssb} \langle \hat{b} \rangle_{ss} = \frac{-ig_q}{\kappa(1+2C_b)}\langle \hat{\sigma}_{q-} \rangle_{ss}. \end{align} \end{subequations} Here we have defined the cooperativities $C_q\equiv g_q^2/(\kappa\gamma)$, $C_a\equiv g_a^2/(\kappa\gamma)$ and $C_b\equiv g_b^2/(\kappa\gamma)$. Eqs.~(\ref{ssq})-(\ref{ssb}) reveal how directional emission is induced in the system at steady-state. It is apparent that both $\langle \hat{\sigma}_{a-} \rangle_{ss}$ and $\langle \hat{\sigma}_{b-} \rangle_{ss}$ are perfectly out of phase with $\langle \hat{\sigma}_{q-} \rangle_{ss}$, which again implies that the radiation from the quantum dot will interfere destructively with the re-radiation from the atom; this is essentially the same effect found to give rise to the directional photon emission addressed in Section \ref{SecSingleEx}. As the atomic coupling to mode $a$ of the cavity is much larger than that for mode $b$, this interference effect will be larger within mode $a$ and emission from this mode will be largely suppressed.
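As a quick numerical sanity check (not part of the original analysis), the closed-form steady-state expressions above can be compared against a direct solution of the linearised equations of motion, obtained by setting the time derivatives to zero and solving the resulting $5\times 5$ linear system. The parameter values below are illustrative assumptions, chosen so that $\kappa > g_{q,a,b}$ and the cooperativities roughly match those quoted later in the text ($C_q=C_a=25$, $C_b\approx0.556$).

```python
import numpy as np

# Assumed illustrative parameters: kappa > g, C_q = C_a = 25, C_b ~ 0.555.
kappa, gamma, Omega = 1.0, 0.01, 1e-3
g_q, g_a, g_b = 0.5, 0.5, 0.0745

# Drift matrix M and drive vector d for x = (<sig_a->, <a>, <sig_q->, <b>, <sig_b->),
# with dx/dt = M x + d, read off from the linearised equations at zero detuning.
M = np.array([
    [-gamma/2, -1j*g_a,  0,        0,        0       ],
    [-1j*g_a,  -kappa,   -1j*g_q,  0,        0       ],
    [0,        -1j*g_q,  -gamma/2, -1j*g_q,  0       ],
    [0,        0,        -1j*g_q,  -kappa,   -1j*g_b ],
    [0,        0,        0,        -1j*g_b,  -gamma/2],
], dtype=complex)
d = np.array([0, 0, -1j*Omega, 0, 0], dtype=complex)
x_ss = -np.linalg.solve(M, d)          # steady state: M x + d = 0

# Closed-form steady-state expressions with cooperativities C_i = g_i^2/(kappa*gamma)
C_q, C_a, C_b = g_q**2/(kappa*gamma), g_a**2/(kappa*gamma), g_b**2/(kappa*gamma)
S = 1 + 2*C_q*(1/(1 + 2*C_a) + 1/(1 + 2*C_b))
sq = -2j*Omega/(gamma*S)
sa = -2*g_q*g_a/(kappa*gamma*(1 + 2*C_a))*sq
sb = -2*g_q*g_b/(kappa*gamma*(1 + 2*C_b))*sq
a  = -1j*g_q/(kappa*(1 + 2*C_a))*sq
b  = -1j*g_q/(kappa*(1 + 2*C_b))*sq

assert np.allclose(x_ss, [sa, a, sq, b, sb])
print(abs(a)**2, abs(b)**2)   # |<b>|^2 dominates when C_a >> C_b
```

The check confirms that, for these parameters, the intracavity amplitude of mode $b$ greatly exceeds that of mode $a$, as the interference argument in the text predicts.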
\begin{figure} \caption{Mean steady-state intracavity photon numbers $\langle n_a\rangle$ and $\langle n_b\rangle$, ratios $R_a$ and $R_b$, directionality $D_{ss}$, and second-order correlation function $g_b^{(2)}(0)$, as functions of the driving strength $\Omega/\kappa$.} \label{varying_driving1} \end{figure} To study this effect and others in more detail analytically, we can look more carefully at the results (\ref{ssq})-(\ref{ssb}) in certain limits of interest. First, let us consider the case of no atom ($C_a=C_b=0$) and $2C_q\gg 1$. Then \begin{align} \langle \hat{\sigma}_{q-} \rangle_{ss}^0 = -\frac{2i\Omega}{\gamma (1+4C_q)} \simeq -\frac{i\Omega}{2\gamma C_q} , \end{align} and \begin{align} \langle \hat{a} \rangle_{ss}^0 = \langle \hat{b} \rangle_{ss}^0 = - \frac{ig_q}{\kappa} \langle \hat{\sigma}_{q-} \rangle_{ss}^0 \simeq - \frac{\Omega}{2g_q} . \end{align} The output photon fluxes from the two cavity modes are then \begin{align} \Phi_{a,ss}^0 &= 2\kappa\langle \hat a^\dag \hat a\rangle_{ss}^0 \simeq 2\kappa |\langle \hat{a} \rangle_{ss}^0|^2 \simeq \frac{\kappa\Omega^2}{2g_q^2} , \\ \Phi_{b,ss}^0 &= \Phi_{a,ss}^0 \simeq \frac{\kappa\Omega^2}{2g_q^2} . \end{align} Now, consider the case in which the atom is coupled with $2C_a\gg 1$ and $C_q,C_a\gg C_b$ (and, again, $2C_q\gg 1$). We find \begin{align} \langle \hat{\sigma}_{q-} \rangle_{ss} \simeq -\frac{i\Omega}{2\gamma C_q} 2(1+2C_b) . \end{align} We note immediately that the QD polarization is a factor of $2(1+2C_b)$ {\em larger} than for the no-atom case. Meanwhile, for the cavity field amplitudes we find \begin{align} \langle \hat{a} \rangle_{ss} \simeq - \frac{\Omega}{g_q} \frac{1+2C_b}{2C_a} \simeq 0 , \end{align} and \begin{align} \langle \hat{b} \rangle_{ss} \simeq - \frac{\Omega}{g_q} . \end{align} The amplitude $\langle \hat{b} \rangle_{ss}$ is a factor of 2 larger than for the no-atom case.
This means that the (coherent) output photon flux from mode $b$ is enhanced by a factor of 4 over its corresponding no-atom value and, further, that the total output photon flux from the system through the cavity modes is, for a given driving strength of the QD, {\em doubled}, i.e., $\Phi_{a,ss}+\Phi_{b,ss}=2(\Phi_{a,ss}^0+\Phi_{b,ss}^0)$. Moreover, the directionality, defined for the continuous driving case by \begin{align}\label{Direction2} D_{ss} \equiv \frac{\Phi_{b,ss} - \Phi_{a,ss}}{\Phi_{b,ss} + \Phi_{a,ss}}, \end{align} is essentially equal to 1 in the limit considered. To back up these approximate analytical calculations, in Fig.~\ref{varying_driving1} we plot the steady-state intracavity photon numbers $\langle n_a\rangle=\langle\hat a^\dag\hat a\rangle$ and $\langle n_b\rangle=\langle\hat b^\dag\hat b\rangle$ (equivalent to the output photon fluxes from the two modes scaled by $2\kappa$), and the directionality $D_{ss}$, computed numerically from the full master equation (\ref{m_full}), as a function of the driving strength $\Omega$ of the QD and for two values of the detuning of the atomic transition $|g\rangle\leftrightarrow |-\rangle$ from the driving laser frequency, $\Delta_b=\omega_--\omega_L$. We also plot the ratios of the output fluxes from each cavity mode in the presence of the atom to the total output flux from both modes in the absence of the atom, i.e., we plot \begin{align}\label{R_ab} R_a = \frac{\Phi_{a,ss}}{\Phi_{a,ss}^0 + \Phi_{b,ss}^0} , ~~~ R_b = \frac{\Phi_{b,ss}}{\Phi_{a,ss}^0 + \Phi_{b,ss}^0} . \end{align} Finally, we plot the 2nd-order intensity correlation function $g_b^{(2)}(0)$, which we will discuss in more detail in the next section. For the parameters of Fig.~\ref{varying_driving1}, we have $C_q=C_a=25$ and $C_b=0.556$. We see that there is good qualitative agreement with the predictions of the approximate analysis given above. 
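The directionality and flux ratios just defined can be evaluated directly from the closed-form steady-state amplitudes. The following sketch works within the coherent-field approximation $\Phi \approx 2\kappa|\langle\cdot\rangle|^2$ and uses assumed parameter values chosen to reproduce the cooperativities quoted in the text ($C_q=C_a=25$, $C_b\approx0.556$).

```python
import numpy as np

# Assumed parameters reproducing C_q = C_a = 25, C_b ~ 0.555 (kappa > g, weak drive).
kappa, gamma, Omega = 1.0, 0.01, 1e-3
g_q, g_a, g_b = 0.5, 0.5, 0.0745
C_q, C_a, C_b = g_q**2/(kappa*gamma), g_a**2/(kappa*gamma), g_b**2/(kappa*gamma)

def fluxes(Ca, Cb):
    """Coherent-field output fluxes (2*kappa*|<a>|^2, 2*kappa*|<b>|^2) from the
    closed-form steady state; Ca = Cb = 0 recovers the no-atom case."""
    S = 1 + 2*C_q*(1/(1 + 2*Ca) + 1/(1 + 2*Cb))
    sq = -2j*Omega/(gamma*S)
    a = -1j*g_q/(kappa*(1 + 2*Ca))*sq
    b = -1j*g_q/(kappa*(1 + 2*Cb))*sq
    return 2*kappa*abs(a)**2, 2*kappa*abs(b)**2

Pa0, Pb0 = fluxes(0.0, 0.0)                 # no atom
Pa, Pb   = fluxes(C_a, C_b)                 # atom present
D_ss = (Pb - Pa)/(Pb + Pa)                  # directionality
R_a, R_b = Pa/(Pa0 + Pb0), Pb/(Pa0 + Pb0)   # ratios to no-atom total flux
print(D_ss, R_a, R_b)
```

For these parameters the directionality comes out close to 1, and $R_a+R_b$ exceeds 1, i.e. the total output flux with the atom exceeds the no-atom total, consistent with the limiting analysis above (exact doubling is reached only as $C_b\to 0$).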
In particular, the output photon flux from mode $b$ ($a$) is substantially enhanced (reduced) with the addition of the atom. Furthermore, this enhancement does indeed take it well beyond the no-atom total output flux over much of the range of driving strengths considered, while good directionality is also demonstrated. These effects are noticeably further enhanced with the addition of a finite detuning, $\Delta_b/\kappa =0.1$, leading also to better quantitative agreement with the predictions of the linear analysis above. We note that for $\Delta_b/\kappa =0$ the numerical solutions of the master equation reveal a significant population of the atomic state $|-\rangle$ for the driving strengths considered. With $\Delta_b/\kappa =0.1$, however, this population becomes negligible. Some insight is offered by modifying the analysis above to allow for finite $\Delta_b$. In particular, doing so one finds, for $2\Delta_b/\gamma_a\gg 1$ (in Fig.~\ref{varying_driving1}, for $\Delta_b/\kappa =0.1$ we have $2\Delta_b/\gamma_a=20$), that $\langle\hat b\rangle_{\rm ss}\simeq -\Omega /g_q$ once again, but \begin{align} \langle\hat\sigma_{b-}\rangle_{ss} = -\frac{ig_b}{i\Delta_b+\gamma_a/2} \langle\hat b\rangle_{\rm ss} \simeq \frac{g_b\Omega}{g_q\Delta_b} \ll \langle\hat\sigma_{b-}\rangle_{ss}^{\Delta_b=0} . \end{align} The effects described above for weak, continuous driving -- enhanced photon flux from mode $b$ and good directionality -- are also more robust with respect to QD and atomic spontaneous emission than the single-photon pulse regime of the previous section. This is demonstrated in Fig.~\ref{varying_driving2}, where the results shown are obtained for the same parameters as in Fig.~\ref{varying_driving1} except that now we set $\gamma_q/\kappa=\gamma_a/\kappa =0.05$. Note that for these values we have $C_q=C_a=5$. 
(Note also that fairly similar results are obtained with still larger spontaneous emission rates, i.e., with $\gamma_q/\kappa=\gamma_a/\kappa =0.1$, for which $C_q=C_a=2.5$.) \begin{figure} \caption{Mean steady-state intracavity photon numbers $\langle n_a\rangle$ and $\langle n_b\rangle$, ratios $R_a$ and $R_b$, directionality $D_{ss}$, and correlation function $g_b^{(2)}(0)$, for the same parameters as in Fig.~\ref{varying_driving1} but with $\gamma_q/\kappa=\gamma_a/\kappa=0.05$.} \label{varying_driving2} \end{figure} \subsection{Photon Statistics and Correlations} \label{SecPhotonStats} \subsubsection{Weak Driving Approximation} We conclude our analysis by briefly addressing the photon statistics of the fields emitted from the cavity. We are therefore interested in evaluating the following autocorrelation functions \begin{subequations} \begin{align}\label{2t1} g_a^{(2)}(t,\tau) = \frac{\langle \hat{a}^\dagger(t)\hat{a}^\dagger(t+\tau)\hat{a}(t+\tau)\hat{a}(t)\rangle}{\langle \hat{a}^\dagger (t)\hat{a}(t)\rangle^2}, \end{align} \begin{align}\label{2t2} g_b^{(2)}(t,\tau) = \frac{\langle \hat{b}^\dagger(t)\hat{b}^\dagger(t+\tau)\hat{b}(t+\tau)\hat{b}(t)\rangle}{\langle \hat{b}^\dagger (t)\hat{b}(t)\rangle^2}, \end{align} \end{subequations} in the steady-state limit $t\rightarrow\infty$. Eq.~(\ref{2t1}) (respectively (\ref{2t2})) then gives the relative change in likelihood for the detection of a photon from mode $a$ ($b$) of the cavity at a time $\tau$ later than an initial photon detection from mode $a$ ($b$) at time $t$. To simplify notation, the parameter $t$ is dropped from Eqs.~(\ref{2t1}) and (\ref{2t2}), where it should be understood that the system resides in its steady state at time $\tau=0$. We again specialise to the bad-cavity regime and assume the driving laser is on resonance with the cavity modes, the quantum dot and both atomic transitions ($\Delta_{c,q,a,b} = 0$). Additionally, the assumption is made that the driving laser is weak enough that the steady state of the system is approximately a pure state.
This allows for a rather straightforward evaluation of Eqs.~(\ref{2t1}) and (\ref{2t2}) at zero time delay ($\tau=0$), which follows from the methods outlined in Section (13.2.3) of \cite{Howard2}. Working within these parameter constraints, it is possible to again adiabatically eliminate the cavity modes from the system dynamics. At the level of the master equation, this may be achieved by tracing over the cavity modes in Eq.~(\ref{m_full}) to obtain a master equation for the reduced density operator $\hat{\rho}_r(t)$ that describes an effective interaction between the QD and atom: \begin{align} \begin{split} \frac{d\hat{\rho}_r}{dt} = & -i\Omega\commutator{\hat{\sigma}_{q+} + \hat{\sigma}_{q-}}{\hat{\rho}_r} + \frac{\gamma_q}{2}\mathcal{D}\left[\hat{\sigma}_{q-}\right]\hat{\rho}_r \\ & + \frac{\gamma_a}{2}\left(\mathcal{D}\left[\hat{\sigma}_{a-}\right] + \mathcal{D}\left[\hat{\sigma}_{b-}\right]\right)\hat{\rho}_r \\ & + \frac{1}{\kappa}\left(\mathcal{D}\left[g_q\hat{\sigma}_{q-} + g_a\hat{\sigma}_{a-}\right] + \mathcal{D}\left[g_q\hat{\sigma}_{q-} + g_b\hat{\sigma}_{b-}\right]\right)\hat{\rho}_r . \end{split} \end{align} The adiabatically eliminated cavity operators (in the Heisenberg picture) are expressed in terms of operators acting within the Hilbert spaces of the QD and atom \begin{subequations} \begin{align} & \hat{a}(t) = \frac{-i}{\kappa}\left( g_q\hat{\sigma}_{q-}(t) + g_a\hat{\sigma}_{a-}(t) \right) + \text{v.f.}, \label{a_elim}\\ & \hat{b}(t) = \frac{-i}{\kappa}\left( g_q\hat{\sigma}_{q-}(t) + g_b\hat{\sigma}_{b-}(t) \right) + \text{v.f.}, \label{b_elim} \end{align} \end{subequations} where v.f. denotes the vacuum field contribution, which has again been included for completeness, but may be neglected for this analysis. 
With a sufficiently weak driving laser strength $\Omega$, the state of the system can be expanded as a pure state, \begin{align}\label{two_quant} \begin{split} \ket{\psi(t)} = & \ket{G,g} + \alpha(t)\ket{E,g} + \beta(t)\ket{G,+} \\ & + \eta(t)\ket{G,-} + \zeta(t)\ket{E,+} + \xi(t)\ket{E,-}, \end{split} \end{align} where we allow for up to two quanta of excitation. Note that the amplitudes $\alpha(t)$ and $\beta(t)$ are not the same as those shown in Eq.~(\ref{one_quant}). The state (\ref{two_quant}) will evolve according to a non-unitary Schr\"{o}dinger equation, \begin{align}\label{non_unit} \frac{d\ket{\psi}}{dt} = -i\hat{\mathcal{H}}\ket{\psi}, \end{align} with the non-Hermitian Hamiltonian \begin{align} \begin{split} \hat{\mathcal{H}} = & \Omega\hat{\sigma}_{q+} - \frac{i\gamma_q}{2}\hat{\sigma}_{q+}\hat{\sigma}_{q-} - \frac{i\gamma_a}{2}\hat{\sigma}_{a+}\hat{\sigma}_{a-} - \frac{i\gamma_a}{2}\hat{\sigma}_{b+}\hat{\sigma}_{b-} \\ & - \frac{i}{\kappa}\left( g_q\hat{\sigma}_{q+} + g_a\hat{\sigma}_{a+} \right)\left( g_q\hat{\sigma}_{q-} + g_a\hat{\sigma}_{a-} \right) \\ & - \frac{i}{\kappa}\left( g_q\hat{\sigma}_{q+} + g_b\hat{\sigma}_{b+} \right)\left( g_q\hat{\sigma}_{q-} + g_b\hat{\sigma}_{b-} \right). \end{split} \end{align} From this, we may derive equations of motion for the complex amplitudes appearing in Eq.~(\ref{two_quant}), which take the forms \begin{subequations} \begin{align} \dot\alpha &= -i\Omega - \Gamma_q\alpha - \frac{g_q}{\kappa} (g_a\beta + g_b\eta ) , \\ \dot\beta &= -\Gamma_a\beta - \frac{g_ag_q}{\kappa}\alpha , \\ \dot\eta &= -\Gamma_b\eta - \frac{g_bg_q}{\kappa}\alpha , \\ \dot\zeta &= -(\Gamma_q+\Gamma_a)\zeta - i\Omega\beta , \\ \dot\xi &= -(\Gamma_q+\Gamma_b)\xi - i\Omega\eta , \end{align} \end{subequations} where $\Gamma_q=\frac{\gamma_q}{2}(1+4C_q)$, $\Gamma_a=\frac{\gamma_a}{2}(1+2C_a)$, and $\Gamma_b=\frac{\gamma_a}{2}(1+2C_b)$.
Solving these equations at steady-state yields \begin{subequations} \begin{align} & \alpha_{ss} = \frac{-i\Omega}{\Gamma_q - \frac{g_q^2}{\kappa^2}\left(\frac{g_a^2}{\Gamma_a} + \frac{g_b^2}{\Gamma_b}\right)}, \\ & \beta_{ss} = -\frac{g_ag_q}{\Gamma_a\kappa}\alpha_{ss}, \\ & \eta_{ss} = -\frac{g_bg_q}{\Gamma_b\kappa}\alpha_{ss}, \\ & \zeta_{ss} = \frac{i\Omega}{\left( \Gamma_q + \Gamma_a \right)}\frac{g_ag_q}{\Gamma_a\kappa}\alpha_{ss}, \\ & \xi_{ss} = \frac{i\Omega}{\left( \Gamma_q + \Gamma_b\right)}\frac{g_bg_q}{\Gamma_b\kappa}\alpha_{ss} . \end{align} \end{subequations} Within this approximation, the autocorrelation functions (\ref{2t1}) and (\ref{2t2}) are given by \begin{subequations} \begin{align} & g_a^{(2)}(\tau) = \frac{\bra{\psi_a(\tau)}\hat a^\dagger \hat a\ket{\psi_a(\tau)}}{\bra{\psi_{ss}}\hat a^\dagger \hat a\ket{\psi_{ss}}}, \\ & g_b^{(2)}(\tau) = \frac{\bra{\psi_b(\tau)}\hat b^\dagger \hat b\ket{\psi_b(\tau)}}{\bra{\psi_{ss}}\hat b^\dagger \hat b\ket{\psi_{ss}}}, \end{align} \end{subequations} where $\ket{\psi_{ss}}$ is the state vector (\ref{two_quant}) at steady-state and $\ket{\psi_{a,b}(\tau)}$ are found by solving Eq.~(\ref{non_unit}) subject to the initial condition \begin{subequations}\label{ab} \begin{align} \label{init1} & \ket{\psi_a(\tau = 0)} = \frac{\hat a\ket{\psi_{ss}}}{\sqrt{\bra{\psi_{ss}}\hat a^\dagger \hat a\ket{\psi_{ss}}}}, \\ & \label{init2} \ket{\psi_b(\tau = 0)} = \frac{\hat b\ket{\psi_{ss}}}{\sqrt{\bra{\psi_{ss}}\hat b^\dagger \hat b\ket{\psi_{ss}}}}, \end{align} \end{subequations} i.e., the steady-state, $\ket{\psi_{ss}}$, conditioned on the emission of a photon from mode $a$ and mode $b$ of the cavity, respectively. Note that in (\ref{ab}), we take $\hat a = -i(g_q\hat\sigma_{q-}+g_a\hat\sigma_{a-})/\kappa$ and $\hat b = -i(g_q\hat\sigma_{q-}+g_b\hat\sigma_{b-})/\kappa$. \subsubsection{Zero Time Delay ($\tau = 0$)} Explicit expressions for the correlation functions are now derived in the limit $\tau=0$. 
Writing out Eqs.~(\ref{init1}) and (\ref{init2}) explicitly, we find \begin{subequations} \begin{align}\label{conditional1} \begin{split} \ket{\psi_a(\tau = 0)} = \frac{1}{\sqrt{n_{a}}}\bigg[ & \left( \frac{g_q}{\kappa}\alpha_{ss} + \frac{g_a}{\kappa}\beta_{ss} \right)\ket{G,g} \\ & + \frac{g_q}{\kappa}\zeta_{ss}\ket{G,+} + \frac{g_q}{\kappa}\xi_{ss}\ket{G,-} \\ & + \frac{g_a}{\kappa}\zeta_{ss}\ket{E,g} \bigg], \end{split} \end{align} \begin{align}\label{conditional2} \begin{split} \ket{\psi_b(\tau = 0)} = \frac{1}{\sqrt{n_{b}}}\bigg[ & \left( \frac{g_q}{\kappa}\alpha_{ss} + \frac{g_b}{\kappa}\eta_{ss} \right)\ket{G,g} \\ & + \frac{g_q}{\kappa}\zeta_{ss}\ket{G,+} + \frac{g_q}{\kappa}\xi_{ss}\ket{G,-} \\ & + \frac{g_b}{\kappa}\xi_{ss}\ket{E,g} \bigg], \end{split} \end{align} \end{subequations} with \begin{subequations} \begin{align} \begin{split} n_{a} \equiv & \frac{g_q^2}{\kappa^2}|\alpha_{ss} +(g_a/g_q)\beta_{ss}|^2 \\ & + \left( \frac{g_q^2 + g_a^2}{\kappa^2} \right)|\zeta_{ss}|^2 + \frac{g_q^2}{\kappa^2}|\xi_{ss}|^2, \end{split} \end{align} and \begin{align} \begin{split} n_{b} \equiv & \frac{g_q^2}{\kappa^2}|\alpha_{ss} + (g_b/g_q)\eta_{ss}|^2 \\ & + \left( \frac{g_q^2 + g_b^2}{\kappa^2} \right)|\xi_{ss}|^2 + \frac{g_q^2}{\kappa^2}|\zeta_{ss}|^2. 
\end{split} \end{align} \end{subequations} From this, we obtain the \textit{autocorrelation functions with zero time delay}, \begin{subequations} \begin{align} & g_a^{(2)}(0) = \frac{g_a^2g_q^2}{\kappa^4}\frac{|\zeta_{ss}|^2}{n_{a}^2}, \label{0time1} \\ & g_b^{(2)}(0) = \frac{g_b^2g_q^2}{\kappa^4}\frac{|\xi_{ss}|^2}{n_{b}^2}, \label{0time2} \end{align} \end{subequations} which, using the results above in the limit $\Omega\rightarrow 0$, reduce to \begin{widetext} \begin{subequations} \begin{align} g_a^{(2)}(0) &= C_a^2(1+2C_a)^2 \frac{\gamma_q^2}{(\Gamma_q+\Gamma_a)^2} \left[ 1 + 2C_q \left( \frac{1}{1+2C_a} + \frac{1}{1+2C_b} \right) \right]^2 , \label{0time1b} \\ g_b^{(2)}(0) &= C_b^2(1+2C_b)^2 \frac{\gamma_q^2}{(\Gamma_q+\Gamma_b)^2} \left[ 1 + 2C_q \left( \frac{1}{1+2C_a} + \frac{1}{1+2C_b} \right) \right]^2 . \label{0time2b} \end{align} \end{subequations} \end{widetext} Within the regime of most interest to us here, i.e., $2C_q\simeq 2C_a\gg 1>C_b$, these results predict extreme bunching of photons emitted from mode $a$ ($g_a^{(2)}(0)\gg 1$) and antibunching of photons emitted from mode $b$ ($g_b^{(2)}(0)< 1$). This extreme difference in the photon statistics of the two modes can be explained by examining Eqs.~(\ref{a_elim}) and (\ref{b_elim}). In particular, upon the event of a photon emission, it is uncertain whether the photon was emitted from the quantum dot or from the atom, meaning that the system resides in an entangled state, i.e., a coherent superposition of the ground state $\ket{G,g}$ and the three excited states $\ket{E,g}$, $\ket{G,+}$ and $\ket{G,-}$. Due to the much larger coupling of the atom to mode $a$, one has $|g_a\zeta_{ss}|\gg|g_b\xi_{ss}|$ and the population of the excited states is larger within the conditional state $\ket{\psi_a(\tau = 0)}$ than for $\ket{\psi_b(\tau = 0)}$. 
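As a numerical illustration, the closed-form zero-delay expressions above can be evaluated for the representative cooperativities quoted in the text ($C_q=C_a=25$, $C_b=0.556$, with $\gamma_q=\gamma_a=\gamma$ assumed); the overall rate scale $\gamma$ cancels in the final results.

```python
# Evaluate the zero-delay correlation formulas for C_q = C_a = 25, C_b = 0.556.
gamma = 1.0                       # overall rate scale; cancels in g2_a, g2_b
C_q, C_a, C_b = 25.0, 25.0, 0.556
Gq = gamma/2*(1 + 4*C_q)          # Gamma_q
Ga = gamma/2*(1 + 2*C_a)          # Gamma_a
Gb = gamma/2*(1 + 2*C_b)          # Gamma_b
S = 1 + 2*C_q*(1/(1 + 2*C_a) + 1/(1 + 2*C_b))

g2_a = C_a**2*(1 + 2*C_a)**2 * gamma**2/(Gq + Ga)**2 * S**2
g2_b = C_b**2*(1 + 2*C_b)**2 * gamma**2/(Gq + Gb)**2 * S**2
print(g2_a, g2_b)   # g2_a >> 1 (strong bunching), g2_b < 1 (antibunching)
```

Consistent with the discussion above, mode $a$ shows extreme bunching (here $g_a^{(2)}(0)$ is several orders of magnitude above 1) while mode $b$ is antibunched.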
Furthermore, for the amplitude of the ground state $|G,g\rangle$ in $\ket{\psi_a(\tau = 0)}$ one finds \begin{align} \frac{g_q}{\kappa}\alpha_{ss} + \frac{g_a}{\kappa}\beta_{ss} = \frac{g_q}{\kappa} \frac{1}{1+2C_a} \alpha_{ss} \simeq 0 \end{align} for $2C_a\gg 1$ and so the excited state amplitudes dominate in $\ket{\psi_a(\tau = 0)}$, leading to strong bunching in mode $a$. In contrast, the amplitude of the ground state $|G,g\rangle$ in $\ket{\psi_b(\tau = 0)}$ is in fact the dominant amplitude in the state, thus giving rise to antibunching in mode $b$. The numerical solutions of the full master equation confirm these bunching and antibunching behaviours of the $a$ and $b$ modes, respectively, in the appropriate parameter regime ($\kappa >g_{q,a,b}$, $C_{q,a}\gg C_b$). Strong antibunching of mode $b$ is illustrated by the plots of $g_b^{(2)}(0)$ versus $\Omega /\kappa$ in Figs.~\ref{varying_driving1} and \ref{varying_driving2}. The corresponding $g_a^{(2)}(0)$ is not plotted as it takes values several orders of magnitude larger than $g_b^{(2)}(0)$. \section{Discussion} \label{SecDisc} We have demonstrated that directional coupling from an unpolarized emitter to a circulating cavity can be induced by coupling another emitter chirally to the same cavity. Our analysis reveals that in both the single excitation regime and the steady state of the driven regime, the mechanism behind the directionality is a quantum interference effect, wherein the field amplitude of one of the modes (and therefore in one direction) is strongly suppressed. For an ideal system, and assuming the coupling asymmetry of atomic cesium, directionalities well in excess of 90$\%$ can be achieved. Furthermore, in the continuous driving case we find that the intensity of the directional emission is also significantly enhanced compared to emission from the QD in the absence of the atom, and that the directional emission is still strongly antibunched. 
We now make several comments related to the scheme presented above. Firstly, an important point to consider is the experimental realizability of the proposal. In recent years, a number of experimental studies have demonstrated large Purcell factors, typically for the case of photonic crystal cavities~\cite{Englund}. Although resonators with circulating geometries have not typically been used for such demonstrations, there is no a priori reason why large Purcell enhancement cannot be achieved in such a configuration, and the large quality factors and relatively small mode volumes necessary have already been demonstrated in a number of cases~\cite{AokiResonator, RauschResonator}. In addition, because the scheme presented here works in the bad cavity regime, it should also be possible to use it with plasmonic resonators which typically exhibit very fast decay times (large $\kappa$) along with large coupling rates. Another important point to note, as mentioned in the introduction, is that there is no need for the scheme to use just a single atom. Indeed, collective coupling of optically pumped atoms to the resonator can alleviate the need for large single atom couplings, due to the collective enhancement factor of $\sqrt{N}$ which is applied to the single atom coupling rate~\cite{ScottMaarten}, and at the level of a single excitation, or for weak continuous driving, the response of an atomic ensemble is the same as for a single atom. Additionally, the application of a magnetic field as shown in Fig.~\ref{fig:Concept} is not strictly necessary to stabilise the atomic spin states, as it was recently shown in \cite{pucher2021atomic} that chiral atomic coupling can also be implemented using tensor light shifts. More speculatively, it may be possible to further simplify the setup by using emitters which have structure-related chirality at room temperature.
Particles such as carbon nanotubes~\cite{Wei,Sato} and transition metal dichalcogenides~\cite{Hu} exhibit circular dipole moments at room temperature, but are not in general single photon emitters. It might be possible, therefore, to replace the atoms in our current work with such nanomaterials. Indeed, research regarding the coupling of these materials is already underway~\cite{Hu,Khas,MarkCNT}. In summary, directional emission of single photons enabled by chiral coupling between quantum emitters and nanophotonic devices is a technique with important applications to future quantum information technologies. However, given that it requires a polarized emitter dipole moment, its use is typically restricted to ultra-cold systems. Nonetheless, as we show here, by using a circulating cavity, directional emission can effectively be transferred from a chirally coupled emitter to a randomly polarized emitter, even when both emitters are coupled to the resonator in the bad-cavity regime. We anticipate that this result may allow the easing of requirements on the types of emitters which can be used in directional emission schemes. \subsection*{Acknowledgments} MS acknowledges funding from JSPS Kakenhi, a Matsuo Foundation grant, and funding from the Quantum Nanophotonic Device project at Tokyo University of Science. \end{document}
\begin{document} \pagestyle{plain} \title{An Introduction to Torsion Subcomplex Reduction} \subjclass[2010]{MSC 11F75: Cohomology of arithmetic groups} \date{\today} \author{Alexander D. Rahm} \address{Laboratoire de math\'ematiques GAATI, Universit\'e de la Polyn\'esie Fran\c{c}aise, BP 6570 -- 98702 Faaa, French Polynesia} \urladdr{http://gaati.org/rahm/} \email{[email protected]} \maketitle \begin{abstract} This survey paper introduces a technique called Torsion Subcomplex Reduction (TSR) for computing torsion in the cohomology of discrete groups acting on suitable cell complexes. TSR enables one to skip machine computations on cell complexes, and to access directly the reduced torsion subcomplexes, which yields results on the cohomology of matrix groups in terms of formulas. TSR has already yielded general formulas for the cohomology of the tetrahedral Coxeter groups as well as, at odd torsion, of SL$_2$ groups over arbitrary number rings. The latter formulas allow one to refine the Quillen conjecture. Furthermore, progress has been made in adapting TSR to Bredon homology computations, in particular for the Bianchi groups, yielding their equivariant $K$-homology and, via the Baum--Connes assembly map, the $K$-theory of their reduced $C^*$-algebras. As a side application, TSR has made it possible to provide dimension formulas for the Chen--Ruan orbifold cohomology of the complexified Bianchi orbifolds, and to prove Ruan's crepant resolution conjecture for all complexified Bianchi orbifolds. \end{abstract} \section{Introduction} This survey paper is based on the habilitation thesis of the author, restricting to the expository parts, which are updated here, and referring to previously published papers for the proofs. The goal is to introduce a technique for computing Farrell--Tate cohomology of arithmetic groups, presented in Section~\ref{techniques}.
This technique can also be applied in the computation of other invariants, as described in Section~\ref{Results}, where further results are stated. \subsection{Background}\label{background} Our objects of study are discrete groups~$\Gamma$ such that~$\Gamma$ admits a torsion-free subgroup of finite index. By a theorem of Serre~\cite{SerreGroupesDiscrets}, all the torsion-free subgroups of finite index in~$\Gamma$ have the same cohomological dimension; this dimension is called the virtual cohomological dimension (abbreviated vcd) of~$\Gamma$. Above the vcd, the (co)homology of a discrete group is determined by its system of finite subgroups. We are going to discuss it in terms of Farrell--Tate cohomology. The Farrell--Tate cohomology $\widehat{\operatorname{H}}^q$ is identical to group cohomology $\Homol^q$ in all degrees $q$ above the vcd, and extends in lower degrees to a cohomology theory of the system of finite subgroups. Details are elaborated in Brown's book~\cite{Brown}*{chapter X}. For instance, for the Coxeter groups, all of which have vanishing virtual cohomological dimension, Farrell--Tate cohomology is identical to group cohomology. In Section~\ref{conjugacy reduction}, we will introduce a method to explicitly determine the Farrell--Tate cohomology: by reducing torsion subcomplexes. Let us note that for the same arithmetic groups, cohomology outside of our setting is of much stronger contemporary interest, and therefore, there has been extensive work on it. To mention just a few fairly recent publications about group cohomology in low cohomological degrees, from which more references can be found: on SL$_N({\mathbb{Z}})$ with rising rank $N$ and modulo small torsion~\cites{sikiri2019voronoi}, on infinite towers of congruence subgroups \cites{AGMY,BergeronSengunVenkatesh}, on arbitrary groups using general purpose algorithms \cites{Ellis}.
\subsection{Overview of the results} This paper introduces the technique of \emph{torsion subcomplex reduction}. It is a technique for the study of discrete groups $\Gamma$, giving easier access to the cohomology of the latter at a fixed prime~$\ell$ and above the virtual cohomological dimension, by extracting the relevant portion of the equivariant spectral sequence and then simplifying it. Instead of having to work with a full cellular complex $X$ with a nice $\Gamma$-action, the technique inputs only an often lower-dimensional subcomplex of $X$, and reduces it to a small number of cells. The author first developed torsion subcomplex reduction for a specific class of arithmetic groups, the Bianchi groups, for which the method yielded all of the homology above the virtual cohomological dimension~\cite{Rahm:homological_torsion}. Some elements of this technique had already been used by Soul\'e for a modular group~\cite{Soule}, and were used by Mislin and Henn as a set of ad hoc tricks. After rediscovering these ad hoc tricks, the author put them into a general framework~\cite{Rahm:formulas}. The advantage of using this framework is that it becomes possible to find general formulas for the dimensions of the Farrell--Tate cohomology, for instance for the entire family of the Bianchi groups. It is convenient to give some examples of where the technique of torsion subcomplex reduction has already produced good results: \begin{itemize} \item The Bianchi groups and their congruence subgroups (cf. Section \ref{The Bianchi groups}); \item The Coxeter groups (cf. Section \ref{The Coxeter groups}); \item The SL$_2$ groups over arbitrary number rings (cf. Section \ref{formulas for the Farrell--Tate cohomology}); \item PSL$_4({\mathbb{Z}})$ and the PGL$_3$ groups over rings of quadratic integers (cf. Section \ref{GL3}); \item The technique has also been adapted to groups with non-trivial centre (cf. Section \ref{non-trivial-centre}).
\end{itemize} This has led to the following applications: \begin{itemize} \item Refining the Quillen conjecture (cf. Section \ref{QC}), \item Computing equivariant $K$-homology (cf. Section \ref{Bredon state}), \item Understanding Chen--Ruan orbifold cohomology (cf. Section \ref{orbifold state}). \end{itemize} \section{The technique of Torsion Subcomplex Reduction} \label{techniques} \subsection{Farrell--Tate cohomology and Steinberg homology}\label{Farrell--Tate cohomology and Steinberg homology} Let $\Gamma$ be a virtual duality group: this means that $\Gamma$ admits a finite index subgroup $\Gamma'$ such that ${\mathbb{Z}}$ admits a finite projective resolution over ${\mathbb{Z}}[\Gamma']$, and there is an integer $n$ such that $\Homol^i(\Gamma; \thinspace {\mathbb{Z}}[\Gamma])= 0$ for $i\neq n$ and $\Homol^n(\Gamma; \thinspace {\mathbb{Z}}[\Gamma])$ is ${\mathbb{Z}}$-torsion-free. Then $\Gamma$ is of finite virtual cohomological dimension vcd$(\Gamma) = n < \infty$ with $n$ the aforementioned integer (where we have to make the smallest choice $n=0$ if $n$ is not unique). Then the ``dualizing module'' is $D := \Homol^n(\Gamma;\thinspace {\mathbb{Z}}[\Gamma])$, and the \textit{Steinberg homology} of $\Gamma$ (with coefficients $M$) is $\Steinberg_i(\Gamma; \thinspace M) := \Homol_i(\Gamma; \thinspace D\otimes M)$.
Recall~\cite{Brown79}*{\S 11.8} that there is an exact sequence tying together group cohomology $\Homol^\bullet$, Steinberg homology $\Steinberg_\bullet$ and Farrell--Tate cohomology $\widehat{\operatorname{H}}^\bullet$ of $\Gamma$: \begin{center} \begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}] \matrix (m) [ matrix of math nodes, row sep=1em, column sep=1.4em, text height=1.99ex, text depth=0.75ex ] { & & & & \Homol^0 & \hdots & \Homol^{n-1} & \Homol^{n} & \Homol^{n+1} & \Homol^{n+2} & \hdots \\ & & & & & & & & \parallel & \parallel & \\ \hdots & \widehat{\operatorname{H}}^{-3} & \widehat{\operatorname{H}}^{-2} & \widehat{\operatorname{H}}^{-1}& \widehat{\operatorname{H}}^{0} & \hdots &\widehat{\operatorname{H}}^{n-1}&\widehat{\operatorname{H}}^{n}&\widehat{\operatorname{H}}^{n+1}&\widehat{\operatorname{H}}^{n+2}& \hdots \\ & \parallel & \parallel & & & & & & & & \\ \hdots &\Steinberg_{n+2}&\Steinberg_{n+1}&\Steinberg_{n}&\Steinberg_{n-1}& \hdots &\Steinberg_{0}\\ }; \path[overlay,->, font=\scriptsize,>=latex] (m-5-4) edge[out=-355,in=-155] (m-1-5) (m-3-4) edge[thick, right hook->] (m-5-4) (m-1-5) edge (m-3-5) (m-3-5) edge (m-5-5) (m-1-7) edge (m-3-7) (m-3-7) edge (m-5-7) (m-5-5) edge[out=-355,in=-155] (m-1-6) (m-5-6) edge[out=-355,in=-155] (m-1-7) (m-5-7) edge[out=-355,in=-155] (m-1-8) (m-1-8) edge[thick, draw,->>] (m-3-8) ; \end{tikzpicture} \end{center} Therefore, Brown describes the Farrell--Tate cohomology of $\Gamma$ to consist of the cohomology functors $\Homol^i$ for $i>n$, the Steinberg homology functors $\Steinberg_i$ for $i>n$, modified $\Homol^n$ and $\Steinberg_n$ functors, and $n$ additional functors $\widehat{\operatorname{H}}^0$, $\hdots$, $\widehat{\operatorname{H}}^{n-1}$, which are a mixture of the functors $\Homol^i$ and $\Steinberg_i$ for $i \leq n$.
\subsection{Reduction of torsion subcomplexes in the classical setting} \label{conjugacy reduction} Let $\ell$ be a prime number. We require every discrete group $\Gamma$ under study to come equipped with what we will call a \textit{polytopal $\Gamma$-cell complex}, that is, a finite-dimensional simplicial complex $X$ with cellular $\Gamma$-action such that each cell stabiliser fixes its cell point-wise. In practice, we relax the simplicial condition to a polytopal one, merging finitely many simplices to a suitable polytope; the simplicial complex can be recovered as a triangulation. We further require that the fixed point set~$X^G$ be acyclic for every non-trivial finite $\ell$-subgroup $G$ of~$\Gamma$. Then, the $\Gamma$-equivariant Farrell--Tate cohomology $\widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)$ of~$X$, for any coefficient module $M$ with trivial $\Gamma$-action, gives us the $\ell$-primary part $\widehat{\operatorname{H}}^*(\Gamma; \thinspace M)_{(\ell)}$ of the Farrell--Tate cohomology of~$\Gamma$, as follows. \begin{proposition}[Brown \cite{Brown}] \label{Brown's proposition} For a $\Gamma$-action on $X$ as specified above, the canonical map $$ \widehat{\operatorname{H}}^*(\Gamma; \thinspace M)_{(\ell)} \to \widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)_{(\ell)} $$ is an isomorphism. \end{proposition} The classical choice \cite{Brown} is to take for $X$ the geometric realization of the partially ordered set of non-trivial finite subgroups (respectively, non-trivial elementary Abelian $\ell$-subgroups) of~$\Gamma$, the latter acting by conjugation. The stabilisers are then the normalizers, which in many discrete groups are infinite. In addition, determining a group presentation for the normalizers often poses great computational challenges.
When we want to compute the module $\widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)_{(\ell)}$ subject to Proposition~\ref{Brown's proposition}, we must at least know the ($\ell$-primary part of the) Farrell--Tate cohomology of these normalizers. The Bianchi groups are an instance where different isomorphism types can occur for this cohomology at different conjugacy classes of elementary Abelian $\ell$-subgroups, both for $\ell=2$ and $\ell=3$. As the only non-trivial elementary Abelian $3$-subgroups in the Bianchi groups are of rank $1$, the orbit space $_\Gamma \backslash X$ consists only of one point for each conjugacy class of type ${\mathbb{Z}}/3$, and a corollary~\cite{Brown} to Proposition~\ref{Brown's proposition} decomposes the $3$-primary part of the Farrell--Tate cohomology of the Bianchi groups into the direct product over their normalizers. However, due to the different possible homological types of the normalizers (in fact, two of them occur), the final result remains unclear and subject to tedious case-by-case computations of the normalizers. In contrast, in the cell complex we are going to construct (specified in Definition~\ref{reduced torsion subcomplex definition} below), the connected components of the orbit space are, for the $3$-torsion in the Bianchi groups, not simple points, but have either the shape $\edgegraph$ or $\circlegraph$. This dichotomy already encodes which normalizer occurs. The starting point for our construction is the following definition. \begin{df} Let $\ell$ be a prime number. The \emph{$\ell$-torsion subcomplex} of a polytopal $\Gamma$-cell complex~$X$ consists of all the cells of $X$ whose stabilisers in~$\Gamma$ contain elements of order $\ell$. \end{df} From now on, we require the cell complex $X$ to admit only finite stabilisers in~$\Gamma$, and we require the action of $\Gamma$ on the coefficient module $M$ to be trivial.
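As a toy illustration of this definition (our addition, assuming the standard tree action): let $\Gamma = \mathrm{PSL}_2({\mathbb{Z}}) \cong {\mathbb{Z}}/2 * {\mathbb{Z}}/3$ act on its Bass--Serre tree $X$, with fundamental domain a single edge whose two vertex stabilisers are ${\mathbb{Z}}/2$ and ${\mathbb{Z}}/3$ and whose edge stabilisers are trivial.

```latex
% Toy example (our addition): torsion subcomplexes for
% $\mathrm{PSL}_2({\mathbb{Z}})$ acting on its Bass--Serre tree $X$.
\begin{itemize}
 \item Only the vertices on the orbit with stabiliser ${\mathbb{Z}}/2$ have
  stabilisers containing $2$-torsion, so the $2$-torsion subcomplex is
  that single vertex orbit, with orbit space a point;
 \item likewise, the $3$-torsion subcomplex is the vertex orbit with
  stabiliser ${\mathbb{Z}}/3$.
\end{itemize}
```

Accordingly, for $\ell \in \{2, 3\}$, the $\ell$-primary part of the Farrell--Tate cohomology of $\mathrm{PSL}_2({\mathbb{Z}})$ is that of the cyclic vertex stabiliser ${\mathbb{Z}}/\ell$.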
Then obviously only cells from the \emph{$\ell$-torsion subcomplex} contribute to $\widehat{\operatorname{H}}^*_\Gamma(X; \thinspace M)_{(\ell)}$. \begin{corollary}[Corollary to Proposition~\ref{Brown's proposition}] \label{Brownian} There is an isomorphism between the $\ell$-primary parts of the Farrell--Tate cohomology of~$\Gamma$ and the $\Gamma$-equivariant Farrell--Tate cohomology of the $\ell$-torsion subcomplex. \end{corollary} We are going to reduce the \emph{$\ell$-torsion subcomplex} to one which still carries the $\Gamma$-equivariant Farrell--Tate cohomology of~$X$, but which can also have considerably fewer orbits of cells. This can be easier to handle in practice, and, for certain classes of groups, leads us to an explicit structural description of the Farrell--Tate cohomology of~$\Gamma$. The pivotal property of this reduced $\ell$-torsion subcomplex will be given in Theorem~\ref{pivotal}. Our reduction process uses the following conditions, which are imposed on a triple $(\sigma, \tau_1, \tau_2)$ of cells in the $\ell$-torsion subcomplex, where $\sigma$ is a cell of dimension $n-1$, lying in the boundary of precisely the two $n$-cells $\tau_1$ and~$\tau_2$, the latter two cells lying on different orbits. \begin{ConditionA} \label{cell condition} The triple $(\sigma, \tau_1, \tau_2)$ is said to satisfy Condition A if no higher-dimensional cells of the $\ell$-torsion subcomplex touch $\sigma$, if the interior of $\tau_1$ and the interior of $\tau_2$ do not contain two points which are on the same orbit, and if the $n$-cell stabilisers admit an isomorphism $\Gamma_{\tau_1} \cong \Gamma_{\tau_2}$. \end{ConditionA} Where this condition is fulfilled in the $\ell$-torsion subcomplex, we merge the cells $\tau_1$ and $\tau_2$ along~$\sigma$, and do so for their entire orbits, if and only if they meet the following additional Condition~B.
By \emph{mod $\ell$ cohomology}, we will refer to group cohomology with ${\mathbb{Z}}/\ell$-coefficients under the trivial action. \begin{isomorphismCondition} With the notation of Condition~$A$, the inclusion $ \Gamma_{\tau_1} \subset \Gamma_\sigma$ induces an isomorphism on mod $\ell$ cohomology. \end{isomorphismCondition} \begin{lemma}[\cite{Rahm:formulas}] \label{A} Let $\widetilde{X_{(\ell)}}$ be the $\Gamma$-complex obtained by orbit-wise merging two $n$-cells of the $\ell$-torsion subcomplex $X_{(\ell)}$ which satisfy Conditions~$A$ and~$B$. Then, $$\widehat{\operatorname{H}}^*_\Gamma(\widetilde{X_{(\ell)}}; \thinspace M)_{(\ell)} \cong \widehat{\operatorname{H}}^*_\Gamma(X_{(\ell)}; \thinspace M)_{(\ell)}.$$ \end{lemma} By a ``terminal $(n-1)$-cell'', we will denote an $(n-1)$-cell $\sigma$ such that \begin{itemize} \item modulo~$\Gamma$, there is precisely one adjacent $n$-cell $\tau$, \item $\tau$ has no further cells on the $\Gamma$-orbit of $\sigma$ in its boundary, \item and no higher-dimensional cells are adjacent to $\sigma$. \end{itemize} By ``cutting off'' the $n$-cell $\tau$, we will mean that we remove $\tau$ together with~$\sigma$ from our cell complex. \begin{df} \label{reduced torsion subcomplex definition} A \emph{reduced $\ell$-torsion subcomplex} associated to a polytopal $\Gamma$-cell complex~$X$ is a cell complex obtained by recursively merging orbit-wise all the pairs of cells satisfying Conditions~$A$ and~$B$, and cutting off $n$-cells that admit a terminal $(n-1)$-cell when Condition~$B$ is satisfied. \end{df} A priori, this process yields a unique reduced $\ell$-torsion subcomplex only up to suitable isomorphisms, so we do not speak of ``the'' reduced $\ell$-torsion subcomplex. The following theorem makes sure that the $\Gamma$-equivariant mod $\ell$ Farrell--Tate cohomology is not affected by this ambiguity.
\begin{theorem}[\cite{Rahm:formulas}] \label{pivotal} There is an isomorphism between the $\ell$-primary part of the Farrell--Tate cohomology of~$\Gamma$ and the $\Gamma$-equivariant Farrell--Tate cohomology of a reduced $\ell$-torsion subcomplex obtained from $X$ as specified above. \end{theorem} In order to have a practical criterion for checking Condition~$B$, we make use of the following stronger condition. Here, we write ${\rm N}_{\Gamma_\sigma}$ for taking the normalizer in ${\Gamma_\sigma}$ and ${\rm Sylow}_\ell$ for picking an arbitrary Sylow $\ell$-subgroup; the latter is well defined because all Sylow $\ell$-subgroups are conjugate. We use Zassenhaus's notion of $\ell$-\emph{normality}: a finite group is $\ell$-normal if the center of one of its Sylow $\ell$-subgroups is the center of every Sylow $\ell$-subgroup in which it is contained. \begin{ConditionBprime} With the notation of Condition $A$, the group $\Gamma_\sigma$ admits a (possibly trivial) normal subgroup $T_\sigma$ with trivial mod~$\ell$ cohomology and with quotient group $G_\sigma$; and the group $\Gamma_{\tau_1}$ admits a (possibly trivial) normal subgroup $T_\tau$ with trivial mod~$\ell$ cohomology and with quotient group $G_\tau$ making the sequences \begin{center} $ 1 \to T_\sigma \to \Gamma_\sigma \to G_\sigma \to 1$ and $ 1 \to T_\tau \to \Gamma_{\tau_1} \to G_\tau \to 1$ \end{center} exact and satisfying one of the following. \begin{enumerate} \item Either $G_\tau \cong G_\sigma$, or \item $G_\sigma$ is $\ell$-normal and $G_\tau \cong {\rm N}_{G_\sigma}({\rm center}({\rm Sylow}_\ell(G_\sigma)))$, or \item both $G_\sigma$ and $G_\tau$ are $\ell$-normal and there is a (possibly trivial) group $T$ with trivial mod~$\ell$ cohomology making the sequence $$1 \to T \to {\rm N}_{G_\sigma}({\rm center}({\rm Sylow}_\ell(G_\sigma))) \to {\rm N}_{G_\tau}({\rm center}({\rm Sylow}_\ell(G_\tau))) \to 1$$ exact.
\end{enumerate} \end{ConditionBprime} \begin{lemma}[\cite{Rahm:formulas}] \label{Implying the isomorphism condition} Condition B' implies Condition B. \end{lemma} \begin{remark} The computer implementation \cite{BuiRahm:scpInHAP} checks Conditions~$B' (1)$ and $B' (2)$ for each pair of cell stabilisers, using a presentation of the latter in terms of matrices, permutation cycles or generators and relators. In the examples below, however, we avoid this case-by-case computation by determining in general the isomorphism types of pairs of cell stabilisers for which the group inclusion induces an isomorphism on mod $\ell$ cohomology. The latter method is preferable, because it allows us to deduce statements that hold for the entire class of groups in question. \end{remark} \subsubsection{Example: A \texorpdfstring{$2$}{2}-torsion subcomplex for SL\texorpdfstring{$_3(\mathbb{Z})$}{(3,\textbf{Z})}} The $2$-torsion subcomplex of the cell complex described by Soul\'e~\cite{Soule}, obtained from the action of SL$_3(\mathbb{Z})$ on its symmetric space, has the following homeomorphic image.
\begin{center} \scalebox{0.9} { \begin{pspicture}(-1.3,-7.44125)(11.894688,7.46125) \pstriangle[linewidth=0.04,dimen=outer](5.8803124,-6.42125)(10.46,7.72) \psline[linewidth=0.04](11.050312,-6.36125)(5.9103127,-3.52125)(5.8903127,1.25875)(0.6903125,1.27875)(0.6903125,-6.40125)(5.9303126,-3.52125)(5.9503126,-3.54125) \psline[linewidth=0.04](11.090313,-6.42125)(11.110312,1.27875)(5.8903127,1.25875)(5.9103127,5.65875)(0.6703125,1.27875)(0.6903125,1.25875) \psline[linewidth=0.04](5.9303126,5.65875)(11.110312,1.27875)(11.130313,1.25875) \usefont{T1}{ptm}{m}{it} \rput(5.9759374,6.26875){stab(M) $\cong {\mathcal{S}}_4$} \usefont{T1}{ptm}{m}{it} \rput(-0.575,1.44875){stab(Q) $\cong {\mathcal{D}}_6$} \usefont{T1}{ptm}{m}{it} \rput(7.217656,1.60875){stab(O) $\cong {\mathcal{S}}_4$} \usefont{T1}{ptm}{m}{it} \uput[0](11.0,1.62875){stab(N) $\cong {\mathcal{D}}_4$} \usefont{T1}{ptm}{m}{it} \uput[0](6.0,-3.4){stab(P) $\cong {\mathcal{S}}_4$} \usefont{T1}{ptm}{m}{it} \rput(0.14765625,-6.21125){N'} \usefont{T1}{ptm}{m}{it} \rput(11.657657,-6.19125){M'} \uput[90](8.5,3.5){${\mathcal{D}}_2$} \uput[0](5.9,3.5){${\mathcal{D}}_3$} \uput[0](5.9,-2.0){${\mathcal{D}}_3$} \uput[0](5.4,-5.0){${\mathcal{D}}_2$} \uput[180](3.3,3.5){${\mathbb{Z}}/2$} \rput(0.14765625,-2.21125){${\mathbb{Z}}/2$} \uput[0](2.6,-2.0){${\mathbb{Z}}/2$} \uput[0](2.6,-4.7){${\mathcal{D}}_4$} \uput[0](8.2,-2.0){${\mathbb{Z}}/2$} \uput[0](8.2,-4.7){${\mathcal{D}}_4$} \uput[0](2.6,1.6){${\mathcal{D}}_2$} \uput[270](8.6,1.2){${\mathbb{Z}}/2$} \psline[linewidth=0.04](8.99,3.5)(8.5,3.5)(8.5,3.0) \psline[linewidth=0.04](9.2,3.3)(8.7,3.3)(8.7,2.8) \psline[linewidth=0.04](10.7,-1.7)(11.1,-2.1)(11.5,-1.7) \psline[linewidth=0.04](10.7,-1.9)(11.1,-2.3)(11.5,-1.9) \psline[linewidth=0.04](5.49875,-6.0)(5.81875,-6.4)(5.55875,-6.8) \psline[linewidth=0.04](5.77875,-6.0)(6.03875,-6.4)(5.79875,-6.8) \end{pspicture} } \end{center} Here, the three edges $NM$, $NM'$ and $N'M'$ have to be identified as indicated by the arrows. 
All of the seven triangles belong, with their interiors, to the $2$-torsion subcomplex, each with stabiliser ${\mathbb{Z}}/2$, except for the one which is marked to have stabiliser ${\mathcal{D}}_2$. Using the methods described in Section~\ref{conjugacy reduction}, we reduce this subcomplex to \begin{center} \scalebox{1} { \begin{pspicture}(-1.9,-0.9)(8.5,0.3) \psdots(-0.0,0.0) \psline(-0.0,0.0)(2.0,0.0) \uput{0.1}[90](-0.0,0.0){ ${\mathcal{S}}_4$} \uput{0.4}[270](-0.1,0.2){ $O$} \psdots(2,0.0) \uput{0.1}[90](1.0,0.0){ ${\mathcal{D}}_2$} \uput{0.1}[90](2.0,0.0){ ${\mathcal{D}}_6$} \uput{0.4}[270](2.0,0.2){ $Q$} \psline(2,0.0)(8,0.0) \uput{0.1}[90](3.0,0.0){ ${\mathbb{Z}}/2$} \uput{0.1}[90](4.0,0.0){ ${\mathcal{S}}_4$} \uput{0.4}[270](4.0,0.2){$M$} \psdots(4,0.0) \uput{0.1}[90](5.0,0.0){ ${\mathcal{D}}_4$} \uput{0.1}[90](6.0,0.0){ ${\mathcal{S}}_4$} \uput{0.2}[270](6.0,0.0){$P$} \psdots(6,0.0) \uput{0.1}[90](7.0,0.0){ ${\mathcal{D}}_4$} \uput{0.1}[90](8.0,0.0){ ${\mathcal{D}}_4$} \uput{0.4}[270](8.0,0.2){$N'$} \psdots(8,0.0) \end{pspicture} } \end{center} and then to \begin{center} \scalebox{1} { \begin{pspicture}(-1.9,-0.2)(8.0,0.3) \uput{0.1}[90](2.0,0.0){ ${\mathcal{S}}_4$} \psdots(2,0.0) \psline(2,0.0)(6,0.0) \uput{0.1}[90](3.0,0.0){ ${\mathbb{Z}}/2$} \uput{0.1}[90](4.0,0.0){ ${\mathcal{S}}_4$} \psdots(4,0.0) \uput{0.1}[90](5.0,0.0){ ${\mathcal{D}}_4$} \uput{0.1}[90](6.0,0.0){ ${\mathcal{S}}_4$} \psdots(6,0.0) \end{pspicture} } \end{center} which is the geometric realization of Soul\'e's diagram of cell stabilisers. This yields the mod $2$ Farrell--Tate cohomology as specified in~\cite{Soule}. \subsubsection{Example: Farrell--Tate cohomology of the Bianchi modular groups} Consider the $\mathrm{SL}_2$ matrix groups over the ring $\mathcal{O}_{-m}$ of integers in the imaginary quadratic number field ${\mathbb{Q}}(\sqrt{-m})$, with $m$ a square-free positive integer.
These groups, as well as their central quotients $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$, are known as \textit{Bianchi (modular) groups}. We recall the following information from~\cite{Rahm:formulas} on the $\ell$-torsion subcomplex of $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$. Let~$\Gamma$ be a finite index subgroup in $\text{PSL}_2(\mathcal{O}_{-m})$. Then any element of~$\Gamma$ fixing a point inside hyperbolic $3$-space~$\mathcal{H}$ acts as a rotation of finite order. By Felix Klein's work, we know conversely that any torsion element~$\alpha$ is elliptic and hence fixes some geodesic line. We call this line \emph{the rotation axis of~$\alpha$}. The rotation axis of every torsion element is conjugate to a line passing through the Bianchi fundamental polyhedron. We obtain the \textit{refined cellular complex} from the action of~$\Gamma$ on~$\mathcal{H}$ as described in~\cite{Rahm:homological_torsion}, namely we subdivide~$\mathcal{H}$ until the stabiliser in~$\Gamma$ of any cell $\sigma$ fixes $\sigma$ point-wise. We achieve this by computing Bianchi's fundamental polyhedron for the action of~$\Gamma$, taking as a preliminary set of 2-cells its facets lying on the Euclidean hemispheres and vertical planes of the upper half-space model for $\mathcal{H}$, and then subdividing along the rotation axes of the elements of~$\Gamma$. It is well known~\cite{SchwermerVogtmann} that if $\gamma$ is an element of finite order $n$ in a Bianchi group, then $n$ must be 1, 2, 3, 4 or 6, because $\gamma$ has eigenvalues $\rho$ and $\overline{\rho}$, with $\rho$ a primitive $n$-th root of unity, and the trace of~$\gamma$ is $\rho + \overline{\rho} \in \mathcal{O}_{-m} \cap {\mathbb{R}} = {\mathbb{Z}}$. When $\ell$ is one of the two occurring prime numbers $2$ and~$3$, the orbit space of this subcomplex is a graph, because the cells of dimension greater \mbox{than 1} are trivially stabilized in the refined cellular complex.
We can see that this graph is finite either from the finiteness of the Bianchi fundamental polyhedron, or from studying conjugacy classes of finite subgroups as in~\cite{Kraemer:Diplom}. As in \cite{RahmFuchs}, we make use of a $2$-dimensional deformation retract $X$ of the refined cellular complex, equivariant with respect to a Bianchi group \mbox{$\Gamma$}. This retract has a cell structure in which each cell stabiliser fixes its cell pointwise. Since $X$ is a deformation retract of $\mathcal{H}$ and hence acyclic, $$\Homol^*_\Gamma(X) \cong \Homol^*_\Gamma(\mathcal{H}) \cong \Homol^*(\Gamma).$$ \begin{table} \begin{center} $ \begin{array}{|c|c|c|c|c|c|} \hline \text{Subgroup type} & {\mathbb{Z}}/2 & {\mathbb{Z}}/3 & {\mathcal{D}}_2 &{\mathcal{D}}_3& {\mathcal{A}}_4 \\ \hline \text{Number of conjugacy classes} & \lambda_{4} & \lambda_6 & \mu_2 & \mu_3 & \mu_T \\ \hline \end{array} $ \end{center} \caption{The non-trivial finite subgroups of $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$ were classified by Klein~\cite{Klein:binaereFormenMathAnn9}. Here, ${\mathbb{Z}}/n$ is the cyclic group of order $n$, the dihedral groups are ${\mathcal{D}}_2$ with four elements and ${\mathcal{D}}_3$ with six elements, and the tetrahedral group is isomorphic to the alternating group ${\mathcal{A}}_4$ on four letters. Formulas for the numbers of conjugacy classes counted by the Greek symbols are given by Kr\"amer~\cite{Kraemer:Diplom}.} \label{table:covering} \end{table} In Theorem~\ref{Grunewald-Poincare series formulas} below, we give a formula expressing precisely how the Farrell--Tate cohomology of a Bianchi group with units $\{\pm 1\}$ (i.e., just excluding the Gaussian and the Eisenstein integers as imaginary quadratic rings, see Section~\ref{Bredon state}) depends on the numbers of conjugacy classes of non-trivial finite subgroups of the occurring five types specified in Table~\ref{table:covering}.
The main step in proving this is to read off the Farrell--Tate cohomology from the quotients of the reduced torsion subcomplexes. Kr\"amer's formulas \cite{Kraemer:Diplom} express the numbers of conjugacy classes of the five types of non-trivial finite subgroups given in Table~\ref{table:covering}. We are going to use the symbols of that table also for the numbers of conjugacy classes in $\Gamma$, where $\Gamma$ is a finite index subgroup in a Bianchi group. Recall that for $\ell = 2$ and $\ell = 3$, we can express the dimensions of the homology of $\Gamma$ with coefficients in the field ${\mathbb{F}_\ell}$ with $\ell$ elements, in degrees above the virtual cohomological dimension of the Bianchi groups -- which is $2$ -- by the Poincar\'e series $$P^\ell_\Gamma(t) := \sum\limits_{q \thinspace > \thinspace 2}^{\infty} \dim_{\mathbb{F}_\ell} \Homol_q \left(\Gamma;\thinspace {\mathbb{F}_\ell} \right)\thinspace t^q,$$ which has been suggested by Grunewald. Further, let $P_{\circlegraph} (t) := \frac{-2t^3}{t-1}$, which equals the Poincar\'e series $P^2_\Gamma(t)$ of those groups $\Gamma$ for which the quotient of the reduced $2$-torsion subcomplex is a circle.
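Expanding this rational function as a power series makes the encoded dimensions explicit:

```latex
\[
 P_{\circlegraph}(t) \;=\; \frac{-2t^3}{t-1} \;=\; \frac{2t^3}{1-t}
 \;=\; 2t^3 + 2t^4 + 2t^5 + \cdots ,
\]
% i.e., $\dim_{\mathbb{F}_2} \Homol_q\left(\Gamma;\thinspace{\mathbb{F}_2}\right) = 2$
% in every degree $q > 2$ for the groups of circle quotient type.
```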
Denote by \begin{itemize} \item $P_{{\mathcal{D}}_2}^*(t) := \frac{-t^3(3t -5)}{2(t-1)^2}$, the Poincar\'e series over $$\dim_{\mathbb{F}_2} \Homol_q \left({\mathcal{D}}_2;\thinspace {\mathbb{F}_2} \right) -\frac{3}{2}\dim_{\mathbb{F}_2} \Homol_q \left({\mathbb{Z}}/2;\thinspace {\mathbb{F}_2} \right)$$ \item and by $P_{{\mathcal{A}}_4}^*(t) := \frac{-t^3(t^3 - 2t^2 + 2t - 3)}{2(t-1)^2 (t^2 + t + 1 ) }$, the Poincar\'e series over $$\dim_{\mathbb{F}_2} \Homol_q \left({\mathcal{A}}_4;\thinspace {\mathbb{F}_2} \right) -\frac{1}{2}\dim_{\mathbb{F}_2} \Homol_q \left({\mathbb{Z}}/2;\thinspace {\mathbb{F}_2} \right).$$ \end{itemize} For the $3$-torsion, let $P_{\edgegraph} (t) := \frac{-t^3(t^2 - t + 2)}{(t-1)(t^2+1)}$, which equals the Poincar\'e series $P^3_\Gamma(t)$ for those Bianchi groups for which the quotient of the reduced $3$-torsion subcomplex is a single edge without identifications. \vbox{ \begin{theorem}[\cite{Rahm:formulas}] \label{Grunewald-Poincare series formulas} For any finite index subgroup $\Gamma$ in a Bianchi group with units $\{\pm 1\}$, the group homology in degrees above its virtual cohomological dimension is given by the Poincar\'e series $$P^2_\Gamma(t) = \left(\lambda_4 -\frac{3\mu_2 -2\mu_T}{2}\right)P_{\circlegraph} (t) +(\mu_2 -\mu_T)P_{{\mathcal{D}}_2}^*(t) +\mu_T P_{{\mathcal{A}}_4}^*(t)$$ and $$P^3_\Gamma(t) = \left(\lambda_6 -\frac{\mu_3}{2}\right)P_{\circlegraph} (t) + \frac{\mu_3}{2}P_{\edgegraph}(t).$$ \end{theorem} } More general results are stated in Section~\ref{formulas for the Farrell--Tate cohomology} below. \subsubsection{Example: Farrell--Tate cohomology of Coxeter (tetrahedral) groups} Recall that a Coxeter group is a group admitting a presentation $$\langle g_1, g_2, ..., g_n \medspace | \medspace (g_i g_j)^{m_{i,j}} = 1 \rangle,$$ where $m_{i,i} = 1$; for $i \neq j$ we have $m_{i,j} \geq 2$; and $m_{i,j} = \infty$ is permitted, meaning that $g_i g_j$ is not of finite order.
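Before turning to the Coxeter groups, we record a quick symbolic sanity check (a sketch of ours, not part of the original computations) of the closed form of the series $P_{{\mathcal{D}}_2}^*$ defined above. It uses that $\dim_{\mathbb{F}_2} \Homol_q({\mathcal{D}}_2;{\mathbb{F}_2}) = q+1$, since the mod $2$ cohomology of ${\mathcal{D}}_2 \cong {\mathbb{Z}}/2 \times {\mathbb{Z}}/2$ is polynomial on two degree-one generators, and that $\dim_{\mathbb{F}_2} \Homol_q({\mathbb{Z}}/2;{\mathbb{F}_2}) = 1$ in every degree:

```python
import sympy as sp

t = sp.symbols('t')

# Closed form of P_{D_2}^*(t) as defined in the text.
P_D2_star = -t**3 * (3*t - 5) / (2 * (t - 1)**2)

# Power-series coefficients up to degree N - 1.
N = 16
poly = sp.series(P_D2_star, t, 0, N).removeO().expand()
coeffs = [poly.coeff(t, q) for q in range(N)]

# Defining coefficient in degree q > 2:
#   dim H_q(D_2; F_2) - (3/2) dim H_q(Z/2; F_2) = (q + 1) - 3/2,
# and no contribution in degrees q <= 2.
for q in range(N):
    expected = sp.Rational(2*q - 1, 2) if q > 2 else 0
    assert coeffs[q] == expected
print("closed form matches defining coefficients up to degree", N - 1)
```

The analogous check for $P_{{\mathcal{A}}_4}^*$ would require the dimensions of the mod $2$ homology of the alternating group on four letters, which we do not reproduce here.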
As the Coxeter groups admit a contractible classifying space for proper actions \cite{Davis}, their Farrell--Tate cohomology yields all of their group cohomology; in this section, we make use of this fact to determine the latter. For facts about Coxeter groups, and especially for the Davis complex, we refer to \cite{Davis}. Recall that the simplest example of a Coxeter group, the dihedral group $\mathcal{D}_n$, is an extension $$ 1 \to {\mathbb{Z}}/n \to \mathcal{D}_n \to {\mathbb{Z}}/2 \to 1.$$ So we can make use of the original application~\cite{Wall} of Wall's lemma to obtain its mod $\ell$ homology for prime numbers $\ell >2$, $$ \Homol_q(\mathcal{D}_n; \thinspace {\mathbb{Z}}/\ell) \cong \begin{cases} {\mathbb{Z}}/\ell, & q = 0, \\ {\mathbb{Z}}/{\rm gcd}(n,\ell), & q \equiv 3 \medspace {\rm or} \medspace 4 \mod 4, \\ 0, & {\rm otherwise}. \end{cases} $$ \begin{theorem}[\cite{Rahm:formulas}] \label{small rank Coxeter groups} Let $\ell > 2$ be a prime number. Let $\Gamma$ be a Coxeter group admitting a Coxeter system with at most four generators, and relator orders not divisible by~$\ell^2$. Let $Z_{(\ell)}$ be the $\ell$-torsion subcomplex of the Davis complex of~$\Gamma$. If $Z_{(\ell)}$ is at most one-dimensional and its orbit space contains no loop or bifurcation, then the mod~$\ell$ homology of~$\Gamma$ is isomorphic to $\left(\Homol_q(\mathcal{D}_\ell; \thinspace {\mathbb{Z}}/\ell)\right)^m$, with $m$ the number of connected components of the orbit space of~$Z_{(\ell)}$. \end{theorem} The conditions of this theorem are for instance fulfilled by the Coxeter tetrahedral groups; the exponent $m$ has been specified for each of them in the tables in~\cite{Rahm:formulas}. In the easier case of Coxeter triangle groups, we can sharpen the statement as follows.
The non-spherical and hence infinite \emph{Coxeter triangle groups} are given by the presentation $$ \langle\, a, b, c \;|\; a^2 = b^2 = c^2 = (ab)^p = (bc)^q = (c a)^r = 1 \,\rangle\, , $$ where $2 \leq p,q,r \in {\mathbb{N}}$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} \le 1$. \begin{proposition}[\cite{Rahm:formulas}] For any prime number $\ell>2$, the {\rm mod} $\ell$ homology of a Coxeter triangle group is given as the direct sum over the {\rm mod} $\ell$ homology of the dihedral groups ${\mathcal D}_p$, ${\mathcal D}_q$ and ${\mathcal D}_r$. \end{proposition} \subsection{The non-central torsion subcomplex} \label{The non-central torsion subcomplex} When the action on the polytopal $\Gamma$-cell complex has trivial kernel, torsion subcomplex reduction allows one to establish general formulas for the Farrell--Tate cohomology of~$\Gamma$ \cite{Rahm:formulas}. In contrast, for instance the action of $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$ on hyperbolic $3$-space has the $2$-torsion group $\{\pm 1\}$ in its kernel; since every cell stabiliser then contains $2$-torsion, the $2$-torsion subcomplex does not ease our calculation in any way. We can remedy this situation by considering the following object, on whose cells we impose a supplementary property. \begin{df} \label{non-central torsion subcomplex} The \emph{non-central $\ell$-torsion subcomplex} of a polytopal $\Gamma$-cell complex $X$ consists of all the cells of $X$ whose stabilisers in~$\Gamma$ contain elements of order $\ell$ that are not in the center of~$\Gamma$. \end{df} We note that this definition yields a correspondence between, on one side, the \textit{non-central} $\ell$-torsion subcomplex for a group action with kernel the center of the group, and on the other side, the $\ell$-torsion subcomplex for its central quotient group.
In~\cite{BerkoveRahm}, this correspondence has been used to identify the \textit{non-central} $\ell$-torsion subcomplex for the action of $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$ on hyperbolic $3$-space as the $\ell$-torsion subcomplex of $\mathrm{PSL}_2\left(\mathcal{O}_{-m}\right)$. However, incorporating the non-central condition for $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$ introduces significant technical obstacles, which were addressed in that paper, establishing the following theorem for any finite index subgroup $\Gamma$ in $\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)$. Denote by $X$ a $\Gamma$-equivariant retract of SL$_2({\mathbb{C}})/$SU$_2$, by $X_s$ the $2$-torsion subcomplex with respect to P$\Gamma$ (the ``non-central'' $2$-torsion subcomplex for $\Gamma$), and by $X_s^\prime$ the part of it with higher $2$-rank. Further, let $v$ denote the number of conjugacy classes of subgroups of higher $2$-rank, and define ${\rm sign}(v) := $\scriptsize$\begin{cases} 0, & v = 0,\\ 1,& v> 0. \end{cases}$\normalsize \\ For $q \in \{1, 2\}$, denote the dimension $\dim_{{\mathbb{F}}_2}\Homol^q(_\Gamma \backslash X ; \thinspace {\mathbb{F}}_2)$ by $\beta^q$. \vbox{ \begin{thm}[\cite{BerkoveRahm}] \label{E2 page} The $E_2$ page of the equivariant spectral sequence with ${\mathbb{F}}_2$-coefficients associated to the action of $\Gamma$ on $X$ is concentrated in the columns $n \in \{0, 1, 2\}$ and has the following form.
\[ \begin{array}{l | cccl} q = 4k+3 & E_2^{0,3}(X_s) & E_2^{1,3}(X_s) \oplus ({\mathbb F}_2)^{a_1} & ({\mathbb F}_2)^{a_2} \\ q = 4k+2 & \Homol^2_\Gamma(X_s^\prime) \oplus ({\mathbb F}_2)^{1 -{\rm sign}(v)} & ({\mathbb F}_2)^{a_3}& \Homol^2(_\Gamma \backslash X) \\ q = 4k+1 & E_2^{0,1}(X_s) & E_2^{1,1}(X_s) \oplus ({\mathbb F}_2)^{a_1} & ({\mathbb F}_2)^{a_2} \\ q = 4k & {\mathbb{F}}_2 & \Homol^1(_\Gamma \backslash X) & \Homol^2(_\Gamma \backslash X) \\ \hline k \in \mathbb{N} \cup \{0\} & n = 0 & n = 1 & n = 2 \end{array} \] where \[\begin{array}{ll} a_1 & = \chi(_\Gamma \backslash X_s) -1 +\beta^1(_\Gamma \backslash X) +c \\ a_2 & = \beta^{2} (_\Gamma \backslash X) +c \\ a_3 & = \beta^{1} (_\Gamma \backslash X) +v -{\rm sign}(v). \end{array} \] \end{thm} } In order to derive the example stated in Section~\ref{non-trivial-centre} below, we combine the latter theorem with the following determination (carried out in~\cite{BerkoveRahm}) of the $d_2$-differentials on the four possible (cf. Table~\ref{table:subcomplexes}) connected component types $\circlegraph$, $\edgegraph$, $\graphFive$ and $\graphTwo$ of the reduced non-central $2$-torsion subcomplex for the full SL$_2$ groups over the imaginary quadratic number rings. \begin{lemma}[\cite{BerkoveRahm}] \label{d_2 lemma} The $d_2$ differential in the equivariant spectral sequence associated to the action of $\mathrm{SL}_2(\mathcal{O}_{-m})$ on hyperbolic space is trivial on components of the non-central $2$-torsion subcomplex quotient \begin{itemize} \item of type $\circlegraph$ in dimensions $q \equiv 1 \bmod 4$ if and only if it is trivial on these components in dimensions $q \equiv 3 \bmod 4$. \item of type $\edgegraph$. \item of types $\graphTwo$ and $\graphFive$ in dimensions $q \equiv 3 \bmod 4$.
\end{itemize} \end{lemma} \begin{table} \begin{center} \caption{Connected component types of reduced torsion subcomplex quotients for the PSL$_2$ Bianchi groups. The exhaustiveness of this table has been established using theorems of Kr\"amer \cite{BerkoveRahm}.} \label{table:subcomplexes} \label{one} \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline & & &&\\ \begin{tabular}{c}$2$--torsion\\subcomplex \\components\end{tabular} & \begin{tabular}{c}counted \\by \end{tabular} & & \begin{tabular}{c}$3$--torsion\\subcomplex \\components\end{tabular} & \begin{tabular}{c}counted \\by \end{tabular} \\ \hline & & &&\\ $\circlegraph \thinspace {\mathbb{Z}}/2$ & $o_2 = \lambda_4 -\lambda_4^* $ & & $\circlegraph \thinspace {\mathbb{Z}}/3$ & $o_3 = \lambda_6 -\lambda_6^* $\\ & & & &\\ ${\mathcal{A}}_4 \edgegraph {\mathcal{A}}_4$ & $\iota_2$ & & ${\mathcal{D}}_3 \edgegraph {\mathcal{D}}_3$ & $\iota_3 = \lambda_6^* $\\ & & &&\\ ${\mathcal{D}}_2 \graphFive \thinspace {\mathcal{D}}_2$ & $\theta$&&&\\ & & &&\\ ${\mathcal{D}}_2 \graphTwo {\mathcal{A}}_4$ & $\rho$ &&&\\ \hline \end{tabular} \normalsize \end{center} \end{table} \section{Applications of the technique and their results} \label{Results} This section states some results obtained with the technique described in Section~\ref{techniques}. \subsection{The Bianchi groups and their congruence subgroups} \label{The Bianchi groups} In the case of the PSL$_2$ groups over rings of imaginary quadratic integers (known as the Bianchi groups), the torsion subcomplex reduction technique has allowed the author to describe the cohomology ring of these groups in terms of elementary number-theoretic quantities~\cite{Rahm:formulas}. The key step has been to extract, using torsion subcomplex reduction, the essential information about the geometric models, and then to detach the cohomological information completely from the model.
Torsion subcomplex reduction combined with an analysis of the equivariant spectral sequence by Ethan Berkove, Grant Lakeland and the author provides new tools for the calculation of the torsion in the cohomology of congruence subgroups in the Bianchi groups~\cite{BLR}. \subsection{The Coxeter groups} \label{The Coxeter groups} Let us recall that the Coxeter groups are generated by reflections, and their homology consists solely of torsion. Thus, torsion subcomplex reduction allows one to obtain all homology groups of all the tetrahedral Coxeter groups at all odd prime numbers, in terms of a general formula~\cite{Rahm:formulas}. \subsection{The SL\texorpdfstring{$_2$}{(2)} groups over arbitrary number rings}\label{formulas for the Farrell--Tate cohomology} Matthias Wendt and the author established a complete description of the Farrell--Tate cohomology with odd torsion coefficients for all groups $\operatorname{SL}_2(\mathcal{O}_{K,S})$, where $\mathcal{O}_{K,S}$ is the ring of $S$-integers in an arbitrary number field $K$, for an arbitrary non-empty finite set $S$ of places of $K$ containing the infinite places~\cite{RahmWendt}, based on an explicit description of conjugacy classes of finite cyclic subgroups and their normalizers in $\operatorname{SL}_2(\mathcal{O}_{K,S})$. The statement uses the following notation. Let $\ell$ be an odd prime number different from the characteristic of $K$. In the situation where $\zeta_\ell+\zeta_\ell^{-1}\in K$, for $\zeta_\ell$ some primitive $\ell$-th root of unity, we will abuse notation and write $\mathcal{O}_{K,S}[\zeta_\ell]$ to mean the ring $\mathcal{O}_{K,S}[T]/(T^2-(\zeta_\ell+\zeta_\ell^{-1})T+1)$. Moreover, we denote the norm maps for class groups and units by $$ \operatorname{Nm}_0: \widetilde{\operatorname{K}_0}(\mathcal{O}_{K,S}[\zeta_\ell])\to \widetilde{\operatorname{K}_0}(\mathcal{O}_{K,S}) \qquad\textrm{ and }\qquad \operatorname{Nm}_1:\mathcal{O}_{K,S}[\zeta_\ell]^\times\to \mathcal{O}_{K,S}^\times.
$$ Denote by $M_{(\ell)}$ the $\ell$-primary part of a module $M$; by $N_G(\Gamma)$ the normalizer of $\Gamma$ in $G$; and by $\widehat{\operatorname{H}}^\bullet$ Farrell--Tate cohomology (cf. Section~\ref{background}). \begin{theorem} [\cite{RahmWendt}] \label{thm:gl2nf} ${}$ \begin{enumerate} \item $\widehat{\operatorname{H}}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)\neq 0$ if and only if \\ $\zeta_\ell+\zeta_\ell^{-1}\in K$ and the Steinitz class $\det_{\mathcal{O}_{K,S}}(\mathcal{O}_{K,S}[\zeta_\ell])$ is contained in the image of the norm map $\operatorname{Nm}_0$. \item Assume the condition in (1) is satisfied. The set $\mathcal{C}_\ell$ of conjugacy classes of order $\ell$ elements in $\operatorname{SL}_2(\mathcal{O}_{K,S})$ sits in an extension $$ 1\to \operatorname{coker}\operatorname{Nm}_1\to \mathcal{C}_\ell\to \ker\operatorname{Nm}_0\to 0. $$ The set $\mathcal{K}_\ell$ of conjugacy classes of order $\ell$ subgroups of $\operatorname{SL}_2(\mathcal{O}_{K,S})$ can be identified with the quotient $\mathcal{K}_\ell=\mathcal{C}_\ell/\operatorname{Gal}(K(\zeta_\ell)/K)$. There is a direct sum decomposition $$ \widehat{\operatorname{H}}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)\cong \bigoplus_{[\Gamma]\in\mathcal{K}_\ell} \widehat{\operatorname{H}}^\bullet(N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma), \mathbb{F}_\ell) $$ which is compatible with the ring structure, i.e., the Farrell--Tate cohomology ring of $\operatorname{SL}_2(\mathcal{O}_{K,S})$ is a direct sum of the sub-rings for the normalizers $N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma)$. 
\item If the class of $\Gamma$ is not $\operatorname{Gal}(K(\zeta_\ell)/K)$-invariant, then $$N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma)\cong \ker\operatorname{Nm}_1.$$ There is a degree $2$ cohomology class $a_2$ and a ring isomorphism $$ \widehat{\operatorname{H}}^\bullet(\ker\operatorname{Nm}_1,\mathbb{Z})_{(\ell)}\cong \mathbb{F}_\ell[a_2,a_2^{-1}]\otimes_{\mathbb{F}_\ell}\bigwedge \left(\ker\operatorname{Nm}_1\right). $$ In particular, this is a free module over the subring $\mathbb{F}_\ell[a_2^2,a_2^{-2}]$. \item If the class of $\Gamma$ is $\operatorname{Gal}(K(\zeta_\ell)/K)$-invariant, then there is an extension $$ 0\to \ker\operatorname{Nm}_1\to N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma)\to \mathbb{Z}/2\to 1. $$ There is a ring isomorphism $$ \widehat{\operatorname{H}}^\bullet(N_{\operatorname{SL}_2(\mathcal{O}_{K,S})}(\Gamma),\mathbb{Z})_{(\ell)}\cong \left(\mathbb{F}_\ell[a_2,a_2^{-1}]\otimes_{\mathbb{F}_\ell}\bigwedge \left(\ker\operatorname{Nm}_1\right)\right)^{\mathbb{Z}/2}, $$ with the $\mathbb{Z}/2$-action given by multiplication with $-1$ on $a_2$ and $\ker\operatorname{Nm}_1$. In particular, this is a free module over the subring $$\mathbb{F}_\ell[a_2^2,a_2^{-2}]\cong \widehat{\operatorname{H}}^\bullet(D_{2\ell},\mathbb{Z})_{(\ell)}.$$ \item The restriction map induced from the inclusion $\operatorname{SL}_2(\mathcal{O}_{K,S})\to \operatorname{SL}_2(\mathbb{C})$ maps the second Chern class $c_2$ to the sum of the elements $a_2^2$ in all the components. \end{enumerate} \end{theorem} Wendt has extended this investigation to the cases of $\operatorname{SL}_2$ over the ring of functions on a smooth affine curve over an algebraically closed field~\cite{sl2parabolic}. \subsection{Farrell--Tate cohomology of higher rank arithmetic groups} \label{GL3} Pertinent progress was also made on the Farrell--Tate cohomology of GL$_3$ over rings of quadratic integers. 
For this purpose, the conjugacy classification of cyclic subgroups was reduced to the classification of modules over group rings over suitable rings of integers which are principal ideal domains, generalizing an old result of Reiner. As an example of the number-theoretic input required for the Farrell--Tate cohomology computations, Bui, Wendt and the author described the homological torsion in PGL$_3$ over principal ideal rings of quadratic integers, accompanied by machine computations in the imaginary quadratic case~\cite{BuiRahmWendt:GL3om}. For machine calculations of Farrell--Tate or Bredon (co)homology, one needs cell complexes where cell stabilizers fix their cells pointwise. Bui, Wendt and the author provided two algorithms computing an efficient subdivision of a complex to achieve this rigidity property~\cite{BuiRahmWendt:Farrell-Tate}. Applying these algorithms to available cell complexes for {PSL}$_4({\mathbb{Z}})$, they computed the Farrell--Tate cohomology for small primes as well as the Bredon homology for the classifying spaces of proper actions with coefficients in the complex representation ring. \subsection{Adaptation of the technique to groups with non-trivial centre} \label{non-trivial-centre} Berkove and the author~\cite{BerkoveRahm} extended the technique of torsion subcomplex reduction, which originally was designed for groups with trivial centre (e.g., PSL$_2$), to groups with non-trivial centre (e.g., SL$_2$). This way, they determined the $2$-torsion in the cohomology of the SL$_2$ groups over imaginary quadratic number rings~$\mathcal{O}_{-m}$ in $\mathbb{Q}(\sqrt{-m})$, based on their action on hyperbolic 3-space~$\mathcal{H}$.
For instance, they get the following result in the case where the quotient of the $2$--torsion subcomplex has the shape $\edgegraph$, which is equivalent to the following three conditions (cf.~\cite{Rahm:formulas}): $m \equiv 3 \bmod 8$, the field $\mathbb{Q}(\sqrt{-m})$ has precisely one finite ramification place over $\mathbb{Q}$, and the ideal class number of the totally real number field $\mathbb{Q}(\sqrt{m})$ is $1$. Under these assumptions, the cohomology ring has the following dimensions: $$ \dim_{{\mathbb{F}}_2}\operatorname{H}^{q}(\mathrm{SL}_2\left(\mathcal{O}_{-m}\right); \thinspace {\mathbb{F}}_2) = \begin{cases} \beta^{1} +\beta^{2} , & q = 4k+5, \\ \beta^{1} +\beta^{2} +2, & q = 4k+4, \\ \beta^{1} + \beta^{2}+3, & q = 4k+3, \\ \beta^{1}+\beta^{2} +1, & q = 4k+2, \\ \beta^{1}, & q = 1, \\ \end{cases} $$ where $\beta^q := \dim_{{\mathbb{F}}_2}\operatorname{H}^q(_{\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)} \backslash \mathcal{H} ; \thinspace {\mathbb{F}}_2)$. Let $ \beta_1 := \dim_{\mathbb{Q}}\operatorname{H}_1(_{\mathrm{SL}_2\left(\mathcal{O}_{-m}\right)} \backslash \mathcal{H} ; \thinspace \mathbb{Q}).$ For all absolute values of the discriminant less than $296$, numerical calculations yield $\beta^2 +1 = \beta^1= \beta_1.$ In this range, the numbers $m$ subject to the above dimension formula and $\beta_1$ are given as follows (the Betti numbers are computed in a previous paper of the author~\cite{Rahm:higher_torsion}). $$\begin{array}{l|ccccccccccccccccccccccccccc} m & 11 & 19 & 43 & 59 & 67 & 83 & 107 & 131 & 139 & 163 & 179 & 211 & 227 & 251 & 283\\ \hline \beta_1 & 1 & 1 & 2 & 4 & 3 & 5 & 6 & 8 & 7 & 7 & 10 & 10 & 12 & 14 & 13\\ \end{array}$$ This result is a consequence of Theorem~\ref{E2 page}, combined with Lemma~\ref{d_2 lemma} above.
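As an illustration of how to read the dimension formula, take $m = 11$: the above table gives $\beta_1 = 1$, and the relation $\beta^2 + 1 = \beta^1 = \beta_1$, which holds in this discriminant range, yields $\beta^1 = 1$ and $\beta^2 = 0$. The dimension formula then specializes, for $k \geq 0$, to
$$ \dim_{{\mathbb{F}}_2}\operatorname{H}^{q}(\mathrm{SL}_2\left(\mathcal{O}_{-11}\right); \thinspace {\mathbb{F}}_2) = \begin{cases} 1, & q = 4k+5, \\ 3, & q = 4k+4, \\ 4, & q = 4k+3, \\ 2, & q = 4k+2, \\ 1, & q = 1. \end{cases} $$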
\subsection{Investigation of the refined Quillen conjecture} \label{QC} The Quillen conjecture on the cohomology of arithmetic groups has spurred a great deal of mathematics (see the pertinent monograph \cite{Knudson:book}). Using Farrell--Tate cohomology computations, Wendt and the author established further positive cases for the Quillen conjecture for $\operatorname{SL}_2$. In detail, the original conjecture of 1971~\cite{Quillen} is as follows for GL$_n$. \begin{conjecture}[Quillen] \label{Quillen-conjecture} Let $\ell$ be a prime number. Let $K$ be a number field with $\zeta_\ell\in K$, and $S$ a finite set of places containing the infinite places and the places over $\ell$. Then the natural inclusion $\mathcal{O}_{K,S}\hookrightarrow \mathbb{C}$ makes $\operatorname{H}^\bullet(\operatorname{GL}_n(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$ a free module over the cohomology ring $\operatorname{H}^\bullet_{\operatorname{cts}}(\operatorname{GL}_n(\mathbb{C}),\mathbb{F}_\ell)$. \end{conjecture} While there are counterexamples to the original version of the conjecture, it holds true in many other cases. From the first counterexamples through the present, the conjecture has kept researchers interested in determining its range of validity~\cite{Anton-mod5}. Positive cases in which the conjecture has been established are $n=\ell=2$ by Mitchell \cite{Mitchell}, $n=3$, $\ell=2$ by Henn \cite{Henn}, and $n=2$, $\ell=3$ by Anton \cite{anton}. On the other hand, cases where the Quillen conjecture is known to be false can all be traced to a remark by Henn, Lannes and Schwartz~\cite{henn:lannes:schwartz}*{remark on p. 
51}, which shows that Quillen's conjecture for $\operatorname{GL}_n(\mathbb{Z}[1/2])$ implies that the restriction map $$ \operatorname{H}^\bullet(\operatorname{GL}_n(\mathbb{Z}[1/2]),\mathbb{F}_2)\to \operatorname{H}^\bullet(\operatorname{T}_n(\mathbb{Z}[1/2]),\mathbb{F}_2) $$ from $\operatorname{GL}_n(\mathbb{Z}[1/2])$ to the subgroup $\operatorname{T}_n(\mathbb{Z}[1/2])$ of diagonal matrices is injective. Non-injectivity of the restriction map has been shown by Dwyer \cite{dwyer} for $n\geq 32$ and $\ell=2$. Dwyer's bound was subsequently improved by Henn and Lannes to $n\geq 14$. At the prime $\ell=3$, Anton~\cite{anton} proved non-injectivity for $n\geq 27$. The contribution of Wendt and the author is a precise determination of the module structure above the virtual cohomological dimension; this has allowed them to relate the Quillen conjecture for $\operatorname{SL}_2$ to statements about Steinberg homology (recall Section~\ref{Farrell--Tate cohomology and Steinberg homology}). This, together with the results of~\cite{sl2parabolic}, has allowed them to find a refined version of the Quillen conjecture, which keeps track of all the types of known counterexamples to the original Quillen conjecture: \begin{conjecture}[Refined Quillen conjecture \cite{qcnote}] \label{refined Quillen-conjecture} Let $K$ be a number field. Fix a prime $\ell$ such that $\zeta_\ell\in K$, and an integer $n<\ell$. Assume that $S$ is a set of places containing the infinite places and the places lying over $\ell$.
If each cohomology class of $\operatorname{GL}_n(\mathcal{O}_{K,S})$ is detected on some finite subgroup, then $\operatorname{H}^\bullet(\operatorname{GL}_n(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$ is a free module over the image of the restriction map $$\operatorname{H}^\bullet_{\operatorname{cts}}(\operatorname{GL}_n(\mathbb{C}),\mathbb{F}_\ell)\to \operatorname{H}^\bullet(\operatorname{GL}_n(\mathcal{O}_{K,S}),\mathbb{F}_\ell).$$ \end{conjecture} We can make the following use of the description of the Farrell--Tate cohomology of SL$_2$ over rings of $S$-integers. \begin{corollary}[Corollary to Theorem \ref{thm:gl2nf}] Let $K$ be a number field, let $S$ be a finite set of places containing the infinite ones, and let $\ell$ be an odd prime. \begin{enumerate} \item The original Quillen conjecture holds for group cohomology $\operatorname{H}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$ above the virtual cohomological dimension. \item The refined Quillen conjecture holds for Farrell--Tate cohomology $\widehat{\operatorname{H}}^\bullet(\operatorname{SL}_2(\mathcal{O}_{K,S}),\mathbb{F}_\ell)$. \end{enumerate} \end{corollary} \subsection*{Verification of the Quillen conjecture in the rank 2 imaginary quadratic case} Bui and the author confirmed a conjecture of Quillen in the case of the mod $2$ cohomology of arithmetic groups ${\rm SL}_2({\mathcal{O}}_{\Q(\sqrt{-m}\thinspace )}[\frac{1}{2}])$, where ${\mathcal{O}}_{\Q(\sqrt{-m}\thinspace )}$ is an imaginary quadratic ring of integers. To make explicit the free module structure on the cohomology ring conjectured by Quillen, they computed the mod $2$ cohomology of ${\rm SL}_2({\mathcal{O}}_{\Q(\sqrt{-m}\thinspace )}[\frac{1}{2}])$ via the amalgamated decomposition of this group~\cite{BuiRahm:Verification}.
\subsection{Application to equivariant \textit{K}-homology} \label{Bredon state} For the Bianchi groups, the torsion subcomplex reduction technique was adapted from group homology to Bredon homology $\Homol^\mathfrak{Fin}_n(\Gamma; \thinspace R_{\mathbb{C}})$ with coefficients in the complex representation rings, and with respect to the family of finite subgroups~\cite{Rahm:equivariant}. This has led the author to the following formulas for this Bredon homology, and, by the Atiyah--Hirzebruch spectral sequence, to the formulas below for the equivariant $K$-homology of the Bianchi groups acting on their classifying space for proper actions. We let a Bianchi group $\Gamma$ act on a $2$-dimensional retract $X$ of hyperbolic $3$-space. Denote by $\Gamma_\sigma$ the stabiliser of a cell $\sigma$, and by $R_{\mathbb{C}}(G)$ the complex representation ring of a group $G$. As $X$ is a model for the classifying space for proper $\Gamma$-actions, the homology of the Bredon chain complex of our $\Gamma$-cell complex $X$ is identical to the Bredon homology $\Homol^\mathfrak{Fin}_p(\Gamma; \thinspace R_{\mathbb{C}})$ of~$\Gamma$~\cite{Sanchez-Garcia}. This Bredon chain complex can be stated as $$\xymatrix{ 0 \to \bigoplus\limits_{\sigma \in \thinspace_\Gamma \backslash X^{(2)}} R_{\mathbb{C}} (\Gamma_\sigma) \ar[r]^{\Psi_2 } & \bigoplus\limits_{\sigma \in \thinspace_\Gamma \backslash X^{(1)}} R_{\mathbb{C}} (\Gamma_\sigma) \ar[r]^{\Psi_1} & \bigoplus\limits_{\sigma \in \thinspace_\Gamma \backslash X^{(0)}} R_{\mathbb{C}} (\Gamma_\sigma) \to 0, } $$ where the blocks of the differential matrices $\Psi_1$ and $\Psi_2$ are obtained by inducing homomorphisms on the involved complex representation rings from the cell stabiliser inclusions. Note that in general, the Bredon chain complex continues in higher dimensions, and its truncation at dimension $2$ results from the dimension of $X$. \begin{theorem} \label{splitting} Let $\Gamma$ be a Bianchi group or any one of its subgroups.
Then the Bredon homology $\Homol^\mathfrak{Fin}_n(\Gamma; \thinspace R_{\mathbb{C}})$ splits as a direct sum of (1) the orbit space homology $\Homol_n(\underbar{\rm B}\Gamma; \thinspace {\mathbb{Z}})$, \begin{enumerate} \item[(2)] a submodule $\Homol_n(\Psi_\bullet^{(2)})$ determined by the reduced $2$-torsion subcomplex of $(\underline{\rm E}\Gamma, \Gamma)$ and \item[(3)] a submodule $\Homol_n(\Psi_\bullet^{(3)})$ determined by the reduced $3$-torsion subcomplex of $(\underline{\rm E}\Gamma, \Gamma)$. \end{enumerate} \end{theorem} These submodules are given as follows. Except for the Gaussian and Eisenstein integers, which can easily be treated ad hoc~\cite{Rahm:noteAuxCRAS}, all the rings of integers of imaginary quadratic number fields have $\{\pm 1\}$ as their only units. In the latter case, we call $\mathrm{PSL}_2(\mathcal{O}_{-m})$ a \textit{Bianchi group with units} $\{\pm 1\}$. \begin{theorem} \label{2} The $2$-torsion part of the Bredon complex of a {Bianchi group $\Gamma$ with units} $\{\pm 1\}$ has homology \begin{center} $\Homol_n(\Psi_\bullet^{(2)}) \cong \begin{cases} {\mathbb{Z}}^{z_2}\oplus ({\mathbb{Z}}/2)^{\frac{d_2}{2}},& n = 0,\\ {\mathbb{Z}}^{o_2},& n = 1,\\ 0,&\text{\rm otherwise}, \end{cases} $ \end{center} where $z_2$ counts the number of conjugacy classes of subgroups of type ${\mathbb{Z}}/2$ in $\Gamma$, $o_2$ counts the conjugacy classes of type ${\mathbb{Z}}/2$ in $\Gamma$ which are not contained in any $2$-dihedral subgroup, and $d_2$ counts the number of $2$-dihedral subgroups, whether or not they are contained in a tetrahedral subgroup of $\Gamma$.
\end{theorem} \vbox{ \begin{theorem} \label{3} The $3$-torsion part of the Bredon complex of a {Bianchi group $\Gamma$ with units} $\{\pm 1\}$ has homology \begin{center} $\Homol_n(\Psi_\bullet^{(3)}) \cong \begin{cases} {\mathbb{Z}}^{2o_3+\iota_3},& n = 0 \medspace \text{\rm or }1,\\ 0,&\text{\rm otherwise}, \end{cases} $ \end{center} where, amongst the subgroups of type ${\mathbb{Z}}/3$ in $\Gamma$, $o_3$ counts the conjugacy classes of those which are not contained in any $3$-dihedral subgroup, and $\iota_3$ counts the conjugacy classes of those which are contained in some $3$-dihedral subgroup in $\Gamma$. \end{theorem} } There are formulas for $o_2, z_2, d_2, o_3$ and $\iota_3$ in terms of elementary number-theoretic quantities~\cite{Kraemer:Diplom}, which are readily computable by machine~\cite{Rahm:formulas}*{appendix}. See Table~\ref{table:subcomplexes} for how they relate to the types of connected components of torsion subcomplexes. In the corollary below, we deduce formulas for the equivariant $K$-homology of the Bianchi groups. Note for this purpose that for a Bianchi group $\Gamma$, there is a model for \underline{E}$\Gamma$ of dimension 2, so $\Homol_2(\underline{\rm B}\Gamma ; \thinspace {\mathbb{Z}}) \cong {\mathbb{Z}}^{\beta_2}$ is torsion-free. Note also that the naive Euler characteristic of the Bianchi groups vanishes (again excluding the two special cases of Gaussian and Eisensteinian integers), that is, for $\beta_i = \dim \Homol_i(\underline{\rm B}\Gamma ; \thinspace \mathbb{Q})$ we have $\beta_0 -\beta_1 +\beta_2 = 0$ and $\beta_0 = 1$. Whenever we have a classifying space for proper $G$-actions of dimension at most $2$, the Atiyah--Hirzebruch spectral sequence from its Bredon homology to the equivariant \mbox{$K$-homology} $K^G_j(\underbar{\rm E}G)$ of a group $G$ degenerates on the $E^2$-page and directly yields the following theorem, which can be found in the book by Mislin and Valette.
Note that by Bott periodicity, only two indices $j = 0$ and $j = 1$ are relevant. \begin{thm}[\cite{MislinValette}] \label{Bredon_to_K-homology} Let $G$ be an arbitrary group such that $\dim \underbar{\rm E}G \leq 2$. Then there is a natural short exact sequence $$0 \to \Homol^\mathfrak{Fin}_0(G; R_{\mathbb{C}}) \to K^G_0(\underbar{\rm E}G) \to \Homol^\mathfrak{Fin}_2(G; R_{\mathbb{C}}) \to 0 $$ and a natural isomorphism $\Homol^\mathfrak{Fin}_1(G; R_{\mathbb{C}}) \cong K^G_1(\underbar{\rm E}G)$. \end{thm} \begin{corollary}[Corollary to theorems \ref{splitting}, \ref{2}, \ref{3} and \ref{Bredon_to_K-homology}] For any {Bianchi group $\Gamma$ with units} $\{\pm 1\}$, the short exact sequence linking Bredon homology and equivariant $K$-homology splits into $$K^\Gamma_0(\underbar{\rm E}\Gamma) \cong {\mathbb{Z}} \oplus {\mathbb{Z}}^{\beta_2} \oplus {\mathbb{Z}}^{z_2} \oplus ({\mathbb{Z}}/2)^{\frac{d_2}{2}} \oplus {\mathbb{Z}}^{2o_3+\iota_3}.$$ Furthermore, $K^\Gamma_1(\underbar{\rm E}\Gamma) \cong \Homol_1(\underline{\rm B}\Gamma; \thinspace {\mathbb{Z}}) \oplus {\mathbb{Z}}^{o_2} \oplus {\mathbb{Z}}^{2o_3+\iota_3}$. \end{corollary} In order to adapt torsion subcomplex reduction to Bredon homology and prove Theorem~\ref{splitting}, we need to perform a ``representation ring splitting''. \textit{Representation ring splitting}. \label{Representation ring splitting} The classification of Felix Klein~\cite{Klein:binaereFormenMathAnn9} of the finite subgroups in $\mathrm{PSL}_2(\mathcal{O})$ is recalled in Table~\ref{table:covering}. We further use the existence of geometric models for the Bianchi groups in which all edge stabilisers are finite cyclic and all cells of dimension $2$ and higher are trivially stabilised. Therefore, the system of finite subgroups of the Bianchi groups admits inclusions only emanating from cyclic groups.
This makes the Bianchi groups and their subgroups subject to the splitting of Bredon homology stated in Theorem~\ref{splitting}. The proof of Theorem~\ref{splitting} is based on the above particularities of the Bianchi groups, and applies the following splitting lemma for the involved representation rings to a Bredon complex for~$(\underline{\rm E}\Gamma, \Gamma)$. \vbox{\begin{lemma}[\cite{Rahm:equivariant}] \label{representation ring splitting} Consider a group $\Gamma$ such that every one of its finite subgroups is either cyclic of order at most~$3$, or of one of the types ${\mathcal{D}}_2, {\mathcal{D}}_3$ or~${\mathcal{A}}_4$. Then there exist bases of the complex representation rings of the finite subgroups of~$\Gamma$ such that simultaneously every morphism of representation rings induced by inclusion of cyclic groups into finite subgroups of~$\Gamma$ splits as a matrix into the following diagonal blocks. \begin{enumerate} \item A block of rank $1$ induced by the trivial and regular representations, \item a block induced by the $2$--torsion subgroups, \item and a block induced by the $3$--torsion subgroups. \end{enumerate} \end{lemma} } As this splitting holds simultaneously for every morphism of representation rings, we have such a splitting for every morphism of formal sums of representation rings, and hence for the differential maps of the Bredon complex for any Bianchi group and any of their subgroups. The bases mentioned in the above lemma are obtained by elementary base transformations from the canonical basis of the complex representation ring of a finite group to a basis whose matrix form has \begin{itemize} \item its first row concentrated in its first entry, for a finite cyclic group (edge stabiliser). The base transformation is carried out by summing over all representations to replace the trivial representation by the regular representation.
\item its first column concentrated in its first entry, for a finite non-cyclic group (vertex stabiliser). The base transformation is carried out by subtracting the trivial representation from each representation, except from itself. \end{itemize} The details are provided in \cite{Rahm:equivariant}. In this setting, the technique has inspired work beyond the range of arithmetic groups, which has led to formulas for the integral Bredon homology and equivariant K-homology of all compact 3-dimensional hyperbolic reflection groups~\cite{LORS}, through a novel criterion for torsion-freeness of equivariant K-homology in a more general framework. \subsection{Chen--Ruan orbifold cohomology of the complexified Bianchi orbifolds} \label{orbifold state} The action of the Bianchi groups $\Gamma$ on real hyperbolic $3$-space $\mathcal{H} = \mathrm{SL}_2({\mathbb{C}})/\mathrm{SU}_2$ induces an action on a complexification $\mathcal{H}_{\mathbb{C}}$ of the latter (of real dimension $6$). For the orbifolds $[\mathcal{H}_{\mathbb{C}} /_\Gamma]$ given by this action, we can compute the Chen--Ruan orbifold cohomology as follows. Let~$\Gamma$ be a discrete group acting \emph{properly}, i.e.\ with finite stabilizers, by diffeomorphisms on a manifold~$X$. For any element $g \in \Gamma$, denote by $C_\Gamma(g)$ the centralizer of $g$ in~$\Gamma$. Denote by $X^g$ the subset of $X$ consisting of the fixed points of $g$. \begin{df} Let $T \subset \Gamma$ be a set of representatives of the conjugacy classes of elements of finite order in~$\Gamma$. Then we set $$ \Homol^*_{orb}([X / _\Gamma]) := \bigoplus_{g \in T} \Homol^* \left( X^g / C_\Gamma(g); \thinspace \mathbb{Q} \right),$$ where $\Homol^* \left( X^g / C_\Gamma(g); \thinspace \mathbb{Q} \right)$ is the ordinary cohomology of the quotient space $X^g / C_\Gamma(g)$.
\end{df} It can be checked that this definition gives the vector space structure of the orbifold cohomology defined by Chen and Ruan~\cite{ChenRuan}, if we forget the grading of the latter. We can verify this fact using arguments analogous to those used by Fantechi and G\"ottsche \cite{FantechiGoettsche} in the case of a finite group~$\Gamma$ acting on~$X$. The additional argument needed when considering some element $g$ in~$\Gamma$ of infinite order is the following. As the action of~$\Gamma$ on $X$ is proper, $g$ does not admit any fixed point in $X$. Thus, $ \Homol^* \left( X^g / C_\Gamma(g); \thinspace \mathbb{Q} \right) = \Homol^* \left( \emptyset; \thinspace \mathbb{Q} \right) = 0. $ \begin{theorem}[\cite{PerroniRahm}] \label{introduced result} Let $\Gamma$ be a finite index subgroup in a Bianchi group (except over the Gaussian or Eisensteinian integers). Denote by $\lambda_{2n}$ the number of conjugacy classes of cyclic subgroups of order ${n}$ in $\Gamma$. Denote by $\lambda_{2n}^*$ the cardinality of the subset of conjugacy classes which are contained in a dihedral subgroup of order $2n$ in~$\Gamma$. Then, \small $$ \Homol^d_{orb}\left([\mathcal{H}_{\mathbb{C}} /_\Gamma] \right) \cong \Homol^d\left(\mathcal{H}/_{\Gamma}; \thinspace \mathbb{Q} \right) \oplus \begin{cases} \mathbb{Q}^{\lambda_4 +2\lambda_6 -\lambda_6^*}, & d=2, \\ \mathbb{Q}^{\lambda_4-\lambda_4^* +2\lambda_6 -\lambda_6^*}, & d=3, \\ 0, & \mathrm{otherwise}. \end{cases}$$ \normalsize \end{theorem} The (co)homology of the quotient space $\mathcal{H} / _\Gamma$ has been computed numerically for a large range of Bianchi groups \cite{Vogtmann}, \cite{Scheutzow}, \cite{Rahm:higher_torsion}; and bounds for its Betti numbers are given in \cite{Kraemer:Thesis}.
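A minimal toy illustration of the above definition of $\Homol^*_{orb}$ (with a finite group, so not a Bianchi group): let $\Gamma = {\mathbb{Z}}/2 = \{1, g\}$ act trivially on a one-point space $X$. Then $T = \{1, g\}$, both fixed-point sets $X^1$ and $X^g$ equal $X$, and both centralizers equal $\Gamma$, so
$$ \Homol^*_{orb}([X / _\Gamma]) = \Homol^* \left( X; \thinspace \mathbb{Q} \right) \oplus \Homol^* \left( X; \thinspace \mathbb{Q} \right) \cong \mathbb{Q}^2, $$
concentrated in degree $0$; the second summand is the contribution of the twisted sector of $g$.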
Kr\"amer \cite{Kraemer:Diplom} has determined number-theoretic formulas for the numbers $\lambda_{2n}$ and $\lambda_{2n}^*$ of conjugacy classes of finite subgroups in the Bianchi groups. Building on this, Perroni and the author established the following result~\cite{PerroniRahm}. \begin{theorem}\label{mainthm} ${}$\\ Let $\mathcal{H}_{\mathbb{C}} /_\Gamma$ be the coarse moduli space of $[\mathcal{H}_{\mathbb{C}} /_\Gamma]$; \\ and let $Y \rightarrow \mathcal{H}_{\mathbb{C}} /_\Gamma$ be a crepant resolution of $\mathcal{H}_{\mathbb{C}} /_\Gamma$.\\ Then there is an isomorphism as graded $\mathbb{C}$-algebras between the Chen--Ruan cohomology ring of $[\mathcal{H}_{\mathbb{C}} /_\Gamma]$ and the singular cohomology ring of $Y$: $$ \left( H_{\rm CR}^*([\mathcal{H}_{\mathbb{C}} / _\Gamma]) , \cup_{\rm CR} \right) \cong \left( H^*(Y) , \cup \right) \, . $$ \end{theorem} The Chen--Ruan orbifold cohomology is conjectured by Ruan to match the quantum corrected classical cohomology ring of a crepant resolution for the orbifold. Perroni and the author proved furthermore that the Gromov--Witten invariants involved in the definition of the quantum corrected cohomology ring of $Y\rightarrow \mathcal{H}_{\mathbb{C}} /_\Gamma$ vanish. Hence, they deduced the following. \begin{corollary}\label{maincor} Ruan's crepant resolution conjecture holds true for the complexified Bianchi orbifolds $[\mathcal{H}_{\mathbb{C}} / _\Gamma]$. \end{corollary} The two theorems below give results on the vector space structure of the Chen--Ruan orbifold cohomology of Bianchi orbifolds. \begin{theorem}[\cite{Rahm:equivariant}] \label{3-torsion quotients} For any element $\gamma$ of order $3$ in a finite index subgroup~$\Gamma$ in a Bianchi group with units~$\{\pm 1\}$, the quotient space $\mathcal{H}^\gamma /_{C_\Gamma(\gamma)}$ of the rotation axis modulo the centralizer of $\gamma$ is homeomorphic to a circle.
\end{theorem} \begin{theorem}[\cite{Rahm:equivariant}] \label{2-torsion quotients} Let $\gamma$ be an element of order $2$ in a Bianchi group~$\Gamma$ with units~$\{\pm 1\}$. Then, the homeomorphism type of the quotient space $\mathcal{H}^\gamma /_{C_\Gamma(\gamma)}$ is \end{theorem} \begin{itemize} \item[$\edgegraph$] \textit{an edge without identifications, if $\langle \gamma \rangle$ is contained in a subgroup of type ${\mathcal{D}}_2$ inside~$\Gamma$, and} \item[$\circlegraph$] \textit{a circle, otherwise.} \end{itemize} Denote by $\lambda_{2\ell}$ the number of conjugacy classes of subgroups of type ${\mathbb{Z}}/\ell$ in a finite index subgroup {$\Gamma$} in a Bianchi group with units $\{\pm 1 \}$. Denote by $\lambda_{2\ell}^*$ the number of conjugacy classes of subgroups of type ${\mathbb{Z}}/\ell$ which are contained in a subgroup of type ${\mathcal{D}}_\ell$ in~$\Gamma$. By \cite{Rahm:equivariant}, there are \mbox{$2\lambda_6 -\lambda_6^*$} conjugacy classes of elements of order $3$. As a result of Theorems~\ref{3-torsion quotients} and~\ref{2-torsion quotients}, the vector space structure of the orbifold cohomology of $[\mathcal{H}_{\mathbb{R}} / _\Gamma]$ is given as $ \Homol^\bullet_{orb}([\mathcal{H}_{\mathbb{R}} / _\Gamma]) \cong \Homol^{\bullet} \left( \mathcal{H}_{\mathbb{R}} / _\Gamma; \thinspace \mathbb{Q} \right) \bigoplus\nolimits^{\lambda_4^*} \Homol^{\bullet} \left( \edgegraph; \thinspace \mathbb{Q} \right) \bigoplus\nolimits^{(\lambda_4 -\lambda_4^*)} \Homol^{\bullet} \left( \circlegraph; \thinspace \mathbb{Q} \right) \bigoplus\nolimits^{(2\lambda_6 -\lambda_6^*)} \Homol^{\bullet} \left(\circlegraph; \thinspace \mathbb{Q} \right).
$ \normalsize \\ Kr\"amer \cite{Kraemer:Diplom} has determined number-theoretic formulas for the numbers $\lambda_{2\ell}$ and $\lambda_{2\ell}^*$ of conjugacy classes of finite subgroups in the full Bianchi groups. Kr\"amer's formulas were evaluated for hundreds of thousands of Bianchi groups \cite{Rahm:formulas}, and these values match those from the orbifold structure computations with \cite{Rahm:BianchiGP} in the cases where the latter are available. When we pass to the complexified orbifold $[\mathcal{H}_{\mathbb{C}} / _\Gamma]$, the real line that is the rotation axis in~$\mathcal{H}_{\mathbb{R}}$ of an element of finite order becomes a complex line. However, the centralizer still acts in the same way by reflections and translations. So the interval $\edgegraph$ as a quotient of the real line yields a strip $\edgegraph \times {\mathbb{R}}$ as a quotient of the complex line, and the circle $\circlegraph$ as a quotient of the real line yields a cylinder $\circlegraph \times {\mathbb{R}}$ as a quotient of the complex line. Therefore, using the degree shifting numbers computed in~\cite{Rahm:equivariant}, we obtain the result of Theorem~\ref{introduced result}. \subsection*{Acknowledgement.} The author was supported by the MELODIA project, grant number ANR-20-CE40-0013, during the revision of this paper. \begin{bibdiv} \begin{biblist} \bib{anton}{article}{ author={Anton, Marian F.}, title={On a conjecture of Quillen at the prime $3$}, journal={J. Pure Appl.
Algebra}, volume={144}, date={1999}, number={1}, pages={1--20}, issn={0022-4049}, review={\MR{1723188 (2000m:19003)}}, doi={10.1016/S0022-4049(98)00050-4}, } \bib{Anton-mod5}{article}{ author={Anton, Marian F.}, title={Homological symbols and the Quillen conjecture}, journal={J. Pure Appl. Algebra}, volume={213}, date={2009}, number={4}, pages={440--453}, issn={0022-4049}, review={\MR{2483829 (2010f:20042)}}, doi={10.1016/j.jpaa.2008.07.011}, } \bib{AGMY}{article}{ author={Ash, A.}, author={Gunnells, P. E.}, author={McConnell, M.}, author={Yasaki, D.}, title={On the growth of torsion in the cohomology of arithmetic groups}, journal={J. Inst. Math. Jussieu}, volume={19}, date={2020}, number={2}, pages={537--569}, issn={1474-7480}, review={\MR{4079152}}, doi={10.1017/s1474748018000117}, } \bib{BergeronSengunVenkatesh}{article}{ author={Bergeron, Nicolas}, author={\c{S}eng\"{u}n, Mehmet Haluk}, author={Venkatesh, Akshay}, title={Torsion homology growth and cycle complexity of arithmetic manifolds}, journal={Duke Math. J.}, volume={165}, date={2016}, number={9}, pages={1629--1693}, issn={0012-7094}, review={\MR{3513571}}, doi={10.1215/00127094-3450429}, } \bib{BerkoveRahm}{article}{ author = {Berkove, Ethan}, author = {Rahm, Alexander~D.} , title={The mod 2 cohomology rings of ${\rm SL}_2$ of the imaginary quadratic integers}, note={With an appendix by Aurel Page}, journal={J. Pure Appl. Algebra}, volume={220}, date={2016}, number={3}, pages={944--975}, issn={0022-4049}, review={\MR{3414403}}, doi={10.1016/j.jpaa.2015.08.002}, } \bib{BLR}{article}{ author = {Berkove, Ethan}, author = {Lakeland, Grant} , author = {Rahm, Alexander~D.} , title = {The mod $2$ cohomology rings of congruence subgroups in the Bianchi groups}, journal ={J. Algebr. 
Comb.}, year = {2019}, pages = {\url{https://doi.org/10.1007/s10801-019-00912-8}}, } \bib{Brown}{book}{ author={Brown, Kenneth S.}, title={Cohomology of groups}, series={Graduate Texts in Mathematics}, volume={87}, note={Corrected reprint of the 1982 original}, publisher={Springer-Verlag}, place={New York}, date={1994}, pages={x+306}, isbn={0-387-90688-6}, review={\MR{1324339 (96a:20072)}}, } \bib{Brown79}{article}{ author={Brown, Kenneth S.}, title={Groups of virtually finite dimension}, conference={ title={Homological group theory}, address={Proc. Sympos., Durham}, date={1977}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={36}, publisher={Cambridge Univ. Press, Cambridge-New York}, }, date={1979}, pages={27--70}, review={\MR{564419}}, } \bib{BuiRahm:Verification}{article}{ author = {Bui Anh Tuan}, author = {Rahm, Alexander D.}, title = {Verification of the Quillen conjecture in the rank 2 imaginary quadratic case}, journal={HHA (Homology, Homotopy and Applications)}, Volume ={22}, year ={2020}, Number={2}, Pages={265--278, \url{http://dx.doi.org/10.4310/HHA.2020.v22.n2.a17}}, } \bib{BuiRahm:scpInHAP}{book}{ author={Bui Anh Tuan}, author = {Rahm, Alexander~D.} , title = {Torsion Subcomplexes package in HAP}, address = {a GAP subpackage, \url{http://hamilton.nuigalway.ie/Hap/doc/chap26.html} }, } \bib{BuiRahmWendt:GL3om}{article}{ TITLE = {{On Farrell--Tate cohomology of GL(3) over rings of quadratic integers}}, AUTHOR = {Bui Anh Tuan}, author = {Rahm, Alexander~D.}, author = {Wendt, Matthias}, NOTE = {Preprint, \url{https://hal.archives-ouvertes.fr/hal-02435963}}, YEAR = {2020}, } \bib{ChenRuan}{article}{ author={Chen, Weimin}, author={Ruan, Yongbin}, title={A new cohomology theory of orbifold}, journal={Comm. Math. 
Phys.}, volume={248}, date={2004}, number={1}, pages={1--31}, issn={0010-3616}, review={\MR{2104605 (2005j:57036)}}, review={Zbl 1063.53091}, } \bib{Davis}{book}{ author={Davis, Michael W.}, title={The geometry and topology of Coxeter groups}, series={London Mathematical Society Monographs Series}, volume={32}, publisher={Princeton University Press}, place={Princeton, NJ}, date={2008}, pages={xvi+584}, isbn={978-0-691-13138-2}, isbn={0-691-13138-4}, review={\MR{2360474 (2008k:20091)}}, } \bib{sikiri2019voronoi}{misc}{ title={Voronoi complexes in higher dimensions, cohomology of $GL_N(Z)$ for $N\geq 8$ and the triviality of $K_8(Z)$}, author={Mathieu Dutour Sikirić and Philippe Elbaz-Vincent and Alexander Kupers and Jacques Martinet}, year={2019}, address={arXiv:1910.11598[math.KT]}, } \bib{dwyer}{article}{ author={Dwyer, William G.}, title={Exotic cohomology for ${\rm GL}_n({\bf Z}[1/2])$}, journal={Proc. Amer. Math. Soc.}, volume={126}, date={1998}, number={7}, pages={2159--2167}, issn={0002-9939}, review={\MR{1443381 (2000a:57092)}}, doi={10.1090/S0002-9939-98-04279-8}, } \bib{Ellis}{book}{ author={Ellis, Graham}, title={An invitation to computational homotopy}, publisher={Oxford University Press, Oxford}, date={2019}, pages={xx+525}, isbn={978-0-19-883298-0}, isbn={978-0-19-883297-3}, review={\MR{3971587}}, doi={10.1093/oso/9780198832973.001.0001}, } \bib{FantechiGoettsche}{article}{ author={Fantechi, Barbara}, author={G{\"o}ttsche, Lothar}, title={Orbifold cohomology for global quotients}, journal={Duke Math. 
J.}, volume={117}, date={2003}, number={2}, pages={197--227}, issn={0012-7094}, review={\MR{1971293 (2004h:14062)}}, review={Zbl 1086.14046}, } \bib{Henn}{article}{ author={Henn, Hans-Werner}, title={The cohomology of ${\rm SL}(3,{\bf Z}[1/2])$}, journal={$K$-Theory}, volume={16}, date={1999}, number={4}, pages={299--359}, issn={0920-3036}, review={\MR{1683179 (2000g:20087)}}, } \bib{henn:lannes:schwartz}{article}{ author={Henn, Hans-Werner}, author={Lannes, Jean}, author={Schwartz, Lionel}, title={Localizations of unstable $A$-modules and equivariant mod $p$ cohomology}, journal={Math. Ann.}, volume={301}, date={1995}, number={1}, pages={23--68}, issn={0025-5831}, review={\MR{1312569 (95k:55036)}}, doi={10.1007/BF01446619}, } \bib{Klein:binaereFormenMathAnn9}{article}{ author={Klein, Felix}, title={Ueber bin\"are {F}ormen mit linearen {T}ransformationen in sich selbst}, date={1875}, ISSN={0025-5831}, journal={Math. Ann.}, volume={9}, number={2}, pages={183\ndash 208}, url={http://dx.doi.org/10.1007/BF01443373}, review={\MR{1509857}}, } \bib{Knudson:book}{book}{ author={Knudson, Kevin P.}, title={Homology of linear groups}, series={Progress in Mathematics}, volume={193}, publisher={Birkh\"auser Verlag, Basel}, date={2001}, pages={xii+192}, isbn={3-7643-6415-7}, review={\MR{1807154 (2001j:20070)}}, doi={10.1007/978-3-0348-8338-2}, } \bib{Kraemer:Diplom}{book}{ author={Kr\"amer, Norbert}, title={Die Konjugationsklassenanzahlen der endlichen Untergruppen in der Norm-Eins-Gruppe von Maxi\-malordnungen in Quaternionenalgebren}, date={Diplomarbeit, Mathematisches Institut, Universit\"at Bonn, 1980. \url{http://tel.archives-ouvertes.fr/tel-00628809/}}, language={German}, } \bib{Kraemer:Thesis}{thesis}{ author = {Kr\"amer, Norbert}, school = {Math.-Naturwiss. Fakult\"{a}t der Rheinischen Friedrich-Wilhelms-Universit\"{a}t Bonn; Bonn. Math. 
Schr.}, title = {Beitr\"{a}ge zur {A}rithmetik imagin\"{a}rquadratischer {Z}ahlk\"{o}rper}, year = {1984}, } \bib{LORS}{article}{ author={Lafont, Jean-Fran\c{c}ois}, author={Ortiz, Ivonne J.}, author={Rahm, Alexander D.}, author={S\'{a}nchez-Garc\'{\i}a, Rub\'{e}n J.}, title={Equivariant $K$-homology for hyperbolic reflection groups}, journal={Q. J. Math.}, volume={69}, date={2018}, number={4}, pages={1475--1505}, issn={0033-5606}, review={\MR{3908707}}, doi={10.1093/qmath/hay030}, } \bib{MislinValette}{collection}{ author={Mislin, Guido}, author={Valette, Alain}, title={Proper group actions and the Baum-Connes conjecture}, series={Advanced Courses in Mathematics. CRM Barcelona}, publisher={Birkh\"auser Verlag}, place={Basel}, date={2003}, pages={viii+131}, isbn={3-7643-0408-1}, review={\MR{2027168 (2005d:19007)}}, review={Zbl 1028.46001}, } \bib{Mitchell}{article}{ author={Mitchell, Stephen A.}, title={On the plus construction for $B{\rm GL}\,{\bf Z}[\frac12]$ at the prime $2$}, journal={Math. Z.}, volume={209}, date={1992}, number={2}, pages={205--222}, issn={0025-5874}, review={\MR{1147814 (93b:55021)}}, doi={10.1007/BF02570830}, } \bib{PerroniRahm}{article}{ author={Perroni, Fabio}, author={Rahm, Alexander D.}, title={On Ruan's cohomological crepant resolution conjecture for the complexified Bianchi orbifolds}, journal={Algebr. Geom. Topol.}, volume={19}, date={2019}, number={6}, pages={2715--2762}, issn={1472-2747}, review={\MR{4023327}}, doi={10.2140/agt.2019.19.2715}, } \bib{Quillen}{article}{ AUTHOR = {Quillen, Daniel}, TITLE = {The spectrum of an equivariant cohomology ring. {I}, {II}}, JOURNAL = {Ann. of Math. (2)}, VOLUME = {94}, YEAR = {1971}, PAGES = {549--572; ibid. 
(2) 94 (1971), 573--602}, ISSN = {0003-486X}, } \bib{Rahm:noteAuxCRAS}{article}{ author={Rahm, Alexander D.}, title={Homology and $K$-theory of the \mbox{Bianchi} groups (Homologie et $K$-th\'eorie des groupes de \mbox{Bianchi})}, date={2011}, journal={Comptes Rendus Math\'ematique de l' Acad\'emie des Sciences - Paris}, volume={349}, number ={11-12}, pages={615\ndash 619}, } \bib{Rahm:BianchiGP}{book}{ author = {Rahm, Alexander~D.} , title = {Bianchi.gp}, address = { Open source program (GNU general public license), validated by the CNRS: \url{http://www.projet-plume.org/fiche/bianchigp} subject to the Certificat de Comp\'etences en Calcul Intensif (C3I) and part of the GP scripts library of Pari/GP Development Center, 2010}, } \bib{Rahm:formulas}{article}{ author={Rahm, Alexander D.}, title={Accessing the cohomology of discrete groups above their virtual cohomological dimension}, journal={J. Algebra}, volume={404}, date={2014}, pages={152--175}, issn={0021-8693}, review={\MR{3177890}}, } \bib{Rahm:homological_torsion}{article}{ author={Rahm, Alexander~D.}, title={The homological torsion of $\rm{PSL}_2$ of the imaginary quadratic integers}, journal={Trans. Amer. Math. Soc.}, volume={365}, date={2013}, number={3}, pages={1603--1635}, review={\MR{3003276}}, } \bib{Rahm:equivariant}{article}{ author = {Rahm, Alexander D.} , title = {On the equivariant $K$-homology of PSL$_2$ of the imaginary quadratic integers}, journal={Annales de l'Institut Fourier}, volume={66}, number={4}, year={2016}, pages={1667--1689, \url{http://dx.doi.org/10.5802/aif.3047} }, } \bib{Rahm:higher_torsion}{article}{ author={Rahm, Alexander~D.}, title={Higher torsion in the Abelianization of the full Bianchi groups}, journal={LMS J. Comput. 
Math.}, volume={16}, date={2013}, pages={344--365}, issn={1461-1570}, review={\MR{3109616}}, } \bib{BuiRahmWendt:Farrell-Tate}{article}{ author={Bui, Anh Tuan}, author={Rahm, Alexander D.}, author={Wendt, Matthias}, title={The Farrell--Tate and Bredon homology for ${\rm PSL}_4(\mathbb{Z})$ via cell subdivisions}, journal={J. Pure Appl. Algebra}, volume={223}, date={2019}, number={7}, pages={2872--2888}, issn={0022-4049}, review={\MR{3912952}}, doi={10.1016/j.jpaa.2018.10.002}, } \bib{RahmFuchs}{article}{ Author = {Alexander D. {Rahm} and Mathias {Fuchs}}, Title = {{The integral homology of $\mathrm{PSL}_2$ of imaginary quadratic integers with non-trivial class group}}, Journal = {{J. Pure Appl. Algebra}}, ISSN = {0022-4049}, Volume = {215}, Number = {6}, Pages = {1443--1472}, Year = {2011}, Publisher = {Elsevier Science B.V. (North-Holland), Amsterdam}, DOI = {10.1016/j.jpaa.2010.09.005}, review = { Zbl 1268.11072} } \bib{RahmWendt}{article}{ author={Rahm, Alexander D.}, author={Wendt, Matthias}, title={On Farrell-Tate cohomology of $\rm SL_2$ over $S$-integers}, Journal={{J. Algebra}}, volume={512}, date={2018}, pages={427--464}, issn={0021-8693}, review={\MR{3841530}}, doi={10.1016/j.jalgebra.2018.06.031}, } \bib{qcnote}{article}{ author={Rahm, Alexander D.}, author={Wendt, Matthias}, title={A refinement of a conjecture of Quillen}, journal={{Comptes Rendus Math\'ematique} de l'Acad\'emie des Sciences}, volume = {353}, number = {9}, pages = {779--784}, year = {2015}, issn = {1631-073X}, doi = {http://dx.doi.org/10.1016/j.crma.2015.03.022}, } \bib{Sanchez-Garcia}{article}{ author={S{\'a}nchez-Garc{\'{\i}}a, Rub{\'e}n}, title={Bredon homology and equivariant $K$-homology of ${\rm SL}(3,{{\mathbb{Z}}})$}, journal={J. Pure Appl. 
Algebra}, volume={212}, date={2008}, number={5}, pages={1046--1059}, issn={0022-4049}, review={\MR{2387584 (2009b:19007)}}, } \bib{Scheutzow}{article}{ author={Scheutzow, Alexander}, title={Computing rational cohomology and Hecke eigenvalues for Bianchi groups}, journal={J. Number Theory}, volume={40}, date={1992}, number={3}, pages={317--328}, issn={0022-314X}, review={\MR{1154042 (93b:11068)}}, doi={10.1016/0022-314X(92)90004-9}, } \bib{SchwermerVogtmann}{article}{ author={Schwermer, Joachim}, author={Vogtmann, Karen}, title={The integral homology of ${\rm SL}_{2}$ and ${\rm PSL}_{2}$ of Euclidean imaginary quadratic integers}, journal={Comment. Math. Helv.}, volume={58}, date={1983}, number={4}, pages={573--598}, issn={0010-2571}, review={\MR{728453 (86d:11046)}}, doi={10.1007/BF02564653}, } \bib{SerreGroupesDiscrets}{article}{ author={Serre, Jean-Pierre}, title={Cohomologie des groupes discrets}, language={French}, conference={ title={Prospects in mathematics}, address={Proc. Sympos., Princeton Univ., Princeton, N.J.}, date={1970}, }, book={ publisher={Princeton Univ. Press, Princeton, N.J.}, }, date={1971}, pages={77--169. Ann. of Math. Studies, No. 70}, review={\MR{0385006}}, } \bib{Soule}{article}{ author={Soul{\'e}, Christophe}, title={The cohomology of ${\rm SL}_{3}({\bf Z})$}, journal={Topology}, volume={17}, date={1978}, number={1}, pages={1--22}, issn={0040-9383}, } \bib{Vogtmann}{article}{ author={Vogtmann, Karen}, title={Rational homology of Bianchi groups}, journal={Math. Ann.}, volume={272}, date={1985}, number={3}, pages={399--419}, ISSN={0025-5831}, review={\MR{799670 (87a:22025)}}, review={Zbl 0545.20031 } } \bib{Wall}{article}{ author={Wall, C. Terence~C.}, title={Resolutions for extensions of groups}, date={1961}, journal={Proc. Cambridge Philos. 
Soc.}, volume={57}, pages={251\ndash 255}, review={\MR{0178046 (31 \#2304)}}, } \bib{sl2parabolic}{article}{ author={Wendt, Matthias~}, title={Homology of {SL}$_2$ over function fields I: parabolic subcomplexes}, journal={J. Reine Angew. Math.}, volume={739}, date={2018}, pages={159--205}, issn={0075-4102}, review={\MR{3808260}}, doi={10.1515/crelle-2015-0047}, } \end{biblist} \end{bibdiv} \end{document}
\begin{document} \title{Experimental Realization of the Deutsch-Jozsa Algorithm with a Six-Qubit Cluster State} \author{Giuseppe Vallone} \homepage{http://quantumoptics.phys.uniroma1.it/} \affiliation{Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Via Panisperna 89/A, Compendio del Viminale, 00184 Roma, Italy} \affiliation{Dipartimento di Fisica, Universit\`{a} Sapienza di Roma, 00185 Roma, Italy} \author{Gaia Donati} \homepage{http://quantumoptics.phys.uniroma1.it/} \affiliation{Dipartimento di Fisica, Universit\`{a} Sapienza di Roma, 00185 Roma, Italy} \author{Natalia Bruno} \homepage{http://quantumoptics.phys.uniroma1.it/} \affiliation{Dipartimento di Fisica, Universit\`{a} Sapienza di Roma, 00185 Roma, Italy} \author{Andrea Chiuri} \homepage{http://quantumoptics.phys.uniroma1.it/} \affiliation{Dipartimento di Fisica, Universit\`{a} Sapienza di Roma, 00185 Roma, Italy} \author{Paolo Mataloni} \homepage{http://quantumoptics.phys.uniroma1.it/} \affiliation{Dipartimento di Fisica, Universit\`{a} Sapienza di Roma, 00185 Roma, Italy} \affiliation{Istituto Nazionale di Ottica Applicata (INOA-CNR), L.go E. Fermi 6, 50125 Florence, Italy} \date{\today} \begin{abstract} We describe the first experimental realization of the Deutsch-Jozsa quantum algorithm to evaluate the properties of a 2-bit boolean function in the framework of one-way quantum computation. For this purpose {a novel two-photon six-qubit cluster state was engineered. Its peculiar topological structure is the basis of the original measurement pattern enabling the realization of the algorithm.} The good agreement of the experimental results with the theoretical predictions, obtained at a $\sim$1~kHz success rate, demonstrates the correct implementation of the algorithm. \end{abstract} \pacs{ 03.67.Ac 03.67.Bg 03.67.Mn } \maketitle {\em Introduction.
--} In the last decade, quantum information processing and, in particular, quantum computation have attracted increasing interest in the scientific community, supported by promising theoretical and experimental results. One of the many ongoing efforts is the construction of quantum hardware, which up to now has been pursued with different experimental techniques \cite{kok07rmp,benh08nap}. It has thus been possible to demonstrate the correct functioning of one- and two-qubit logic gates as well as the successful implementation of quantum algorithms that exhibit the efficiency of a quantum computer with respect to its classical analogue. Among these, the Deutsch-Jozsa (DJ) algorithm is the first example of the speed-up achieved by a computer that exploits quantum mechanics in the evaluation of a global property of an $n$-bit boolean function \cite{deut02pro}. In this Letter we report the realization of the Deutsch-Jozsa algorithm in the framework of the one-way model of quantum computation \cite{raus01prl,brie09nap}, which has already proved successful in the construction of quantum gates such as the controlled-\textsc{NOT} (\textsc{CNOT}) gate \cite{vall08prl,gao09qph, vall09qph} and in the implementation of the Grover \cite{vall08pra, walt05nat,prev07nat, vall08prl} and the Deutsch algorithms \cite{vall08pra,tame07prl}. The latter corresponds to the case $n=1$ and is based on the use of four-qubit cluster photon states. Here we gain access to the case $n = 2$ by taking advantage of a peculiar two-dimensional two-photon six-qubit cluster state generated by a source of multi-qubit cluster states whose performance has already been demonstrated \cite{cecc09prl,vall09qph}. In contrast with the simple case $n = 1$, the DJ algorithm benefits from a computational speed-up that grows exponentially with $n$.
Hence the results presented in this paper are important in that they open the way to the implementation of the DJ algorithm with a still larger number of qubits. Although the DJ algorithm has been implemented before with photons \cite{brai03prl}, ours is the first realization with a 2-bit function in the context of {measurement-based quantum computation}. \begin{figure} \caption{Graphs associated to (a) the cluster state $\ket{{\textsf{E}}}$ and (b) the hyperentangled state $\ket{{\rm HE}_6}$; (c) optical implementation of the \textsf{CZ} gates transforming $\ket{{\rm HE}_6}$ into $\ket{{\textsf{E}}}$.} \label{fig:graphs} \end{figure} {\em Realization pattern for the Deutsch-Jozsa algorithm. --} Let us briefly recall the generalized version of the Deutsch algorithm \cite{deut02pro}: a set of $n$ qubits constitutes the input of a black box, usually known as the oracle, which implements the $n$-bit boolean function $f: \{0,1\}^n \rightarrow \{0,1\}$. The aim of the DJ algorithm is to determine whether the function evaluated by the oracle is constant or balanced; a function $f$ is said to be balanced if it equals $0$ on half of the possible values of $x$ and $1$ on the remaining half. Classically, $2^{n-1} + 1$ queries to the oracle are necessary to solve the problem, while in the quantum setting a single query suffices. Consequently, the larger the number $n$ of qubits involved, the wider the gap between the performance of the quantum computer and that of its classical counterpart. The initial state of the system is $\ket{0} \otimes \ket{0} \otimes \cdots \otimes \ket{0} = \ket{0}^{\otimes n}$ and an ancillary qubit in the state $\ket{1}$ is added to the $n$ input qubits. The overall operation performed by the algorithm is $(\textsf{H}^{\otimes n} \otimes \textsf{H})U_f(\textsf{H}^{\otimes n} \otimes \textsf{H})$, where $\textsf{H}$ is the Hadamard gate; the oracle's unitary operator $U_f$ acts on the states of the computational basis so that $U_f \ket{x}\ket{y} = \ket{x}\ket{y \oplus f(x)}$.
The final state is found to be $\bigl(\frac{1}{2^n} \sum_{x, y = 0}^{2^n - 1} (-1)^{f(x)+ x \cdot y} \ket{y}\bigr) \ket{1}$. Measuring the $n$ input qubits in the computational basis then settles the question: if we obtain the state $\ket{0}^{\otimes n}$ the function $f$ is constant, otherwise it is balanced, as seen from the above expression for the final state of the global system. Moreover, the measurement of the ancillary qubit in the computational basis is expected to always return the $\ket{1}$ state. We now go into the details of the proposed experimental realization of the DJ algorithm for {a function acting on $n = 2$ bits}. In this case the boolean function of interest is such that $f:~\{0,1\}^2\rightarrow\{0,1\}$. The function $f$ can be evaluated at its four arguments $x = 0, 1, 2, 3$, with $x = 2x_1 + x_0$ and $x_0, x_1 = 0, 1$. Among the 16 possible functions of this kind, we focus our attention on the balanced function $f_B$, such that $f_B(0) = f_B(3) = 0$, $f_B(1) = f_B(2) = 1$, and on the constant function $f_C$, for which $f_C(x) = 0$ for every allowed value of $x$. We thus identify the state $\ket{x}_Q = \ket{x_1}_{\overline{1}} \ket{x_0}_{\overline{2}}$ as the input entering the oracle. In this expression the subscripts $\overline{1}$ and $\overline{2}$ refer to logical qubits. As we know, the implementation of the DJ algorithm requires an additional ancillary qubit in the initial state $\ket{y}_A \equiv \ket{y}_{\overline{3}}$\,, where $\overline{3}$ is the logical qubit associated with the ancilla. For the previously defined functions we have $U_{f_C}=\openone$ and $U_{f_B}=\textsc{Cnot}_{\bar1\bar3} \textsc{Cnot}_{\bar2\bar3}$, respectively\footnote{$\textsc{Cnot}_{\bar i\bar j}$ indicates a controlled-NOT gate between logical qubits\ $\bar i$ and $\bar j$.}.
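As a concrete illustration of the query separation just described, the $n=2$ circuit can be simulated with a few lines of linear algebra. This is a minimal state-vector sketch of the abstract algorithm (not of the optical implementation reported here); the function names and basis ordering are our own choices.

```python
import numpy as np

# Deutsch-Jozsa for n = 2, simulated on the state vector |x1 x0 y>
# (qubit x1 most significant, ancilla y last); basis index = 2*x + y.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H3 = np.kron(np.kron(H1, H1), H1)          # Hadamard on all three qubits

def U_f(f):
    """Oracle |x>|y> -> |x>|y XOR f(x)> as an 8x8 permutation matrix."""
    U = np.zeros((8, 8))
    for x in range(4):
        for y in range(2):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

def deutsch_jozsa(f):
    """One oracle query; assumes f is either constant or balanced."""
    psi = np.zeros(8)
    psi[1] = 1.0                           # initial state |00>|1>
    psi = H3 @ U_f(f) @ H3 @ psi           # H's, single oracle call, H's
    p00 = psi[0] ** 2 + psi[1] ** 2        # P(input register reads |00>)
    return "constant" if np.isclose(p00, 1.0) else "balanced"

f_C = lambda x: 0                              # constant function f_C
f_B = lambda x: {0: 0, 1: 1, 2: 1, 3: 0}[x]    # balanced f_B of the text
```

A single application of $U_f$ suffices, whereas any classical strategy needs $2^{n-1}+1 = 3$ evaluations in the worst case.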
\begin{figure} \caption{Balanced function: (a) algorithm realized according to the pattern of single-qubit measurements and (b) equivalent circuit implementing the DJ algorithm for $n=2$. Gray gates represent Pauli errors. } \label{fig:circuitB} \end{figure} \begin{figure} \caption{Constant function: (a) algorithm realized according to the pattern of single-qubit measurements and (b) equivalent circuit implementing the DJ algorithm for $n=2$. Gray gates represent Pauli errors. } \label{fig:circuitC} \end{figure} In the framework of one-way quantum computing, the starting point of any computation is the construction of a multi-qubit cluster state; subsequently, the choice of a sequence of single-qubit measurements determines the program to be executed on the quantum computer. For a review on graph and cluster states and their use for one-way computing see \cite{raus01prl,brie09nap, hein06var, vall08pra}. We begin by identifying the cluster state that allows the realization of the DJ algorithm in the present work: Fig. \ref{fig:graphs}(a) shows the graph corresponding to a two-dimensional six-qubit cluster state, where the numbered vertices stand for physical qubits. These qubits are equally distributed between two photons, labeled $A$ and $B$: qubits $1$, $2$ and $3$ belong to photon $A$ and are linked by two controlled-$\textsf{Z}$ gates, represented by vertical connections on the graph, while qubits $4$, $5$ and $6$ are associated with photon $B$. As usual in the one-way model, it can be useful to think of the distinct horizontal qubits as ``the original [logical] qubit at different times'' \cite{niel04prl}; indeed, we identify the logical qubits $\overline{1}$ and $\overline{2}$ with physical qubits $1$, $4$ and $3$, $6$, respectively. The ancillary qubit $\overline{3}$ is represented by qubits $2$ and $5$.
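The graph-state construction underlying this choice can be checked numerically: starting from $\ket{+}^{\otimes 6}$ and applying one \textsf{CZ} per edge yields a state stabilized by the operators $K_v = X_v \prod_{w \sim v} Z_w$. The edge set below is only our reading of the description in the text (vertical links $1$-$2$, $2$-$3$ and horizontal links $1$-$4$, $2$-$5$, $3$-$6$); Fig. \ref{fig:graphs}(a) remains the authority.

```python
import numpy as np
from functools import reduce

# Edge set of the "E" cluster as read off from the text; Fig. 1(a) is
# the authoritative source for the actual graph.
edges = [(1, 2), (2, 3), (1, 4), (2, 5), (3, 6)]
n = 6

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(single, qubit):
    """Embed a single-qubit operator on `qubit` (1-indexed, qubit 1 is
    the most significant factor) into the 6-qubit Hilbert space."""
    ops = [single if q == qubit else I2 for q in range(1, n + 1)]
    return reduce(np.kron, ops)

def cz(i, j):
    """Controlled-Z between qubits i and j (diagonal, order irrelevant)."""
    diag = np.ones(2 ** n)
    for idx in range(2 ** n):
        bi = (idx >> (n - i)) & 1
        bj = (idx >> (n - j)) & 1
        if bi and bj:
            diag[idx] = -1.0
    return np.diag(diag)

# Graph state: |+>^6 followed by one CZ per edge.
plus = np.ones(2) / np.sqrt(2)
psi = reduce(np.kron, [plus] * n)
for (i, j) in edges:
    psi = cz(i, j) @ psi

def stabilizer(v):
    """K_v = X_v * prod_{w ~ v} Z_w; the graph state is its +1 eigenstate."""
    K = op_on(X, v)
    for (i, j) in edges:
        if i == v:
            K = K @ op_on(Z, j)
        elif j == v:
            K = K @ op_on(Z, i)
    return K
```

Checking $K_v\ket{\psi} = \ket{\psi}$ for all six vertices confirms that the state built this way is indeed the graph state of the chosen edge set.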
The ``\textsf{E} cluster'' just described is our quantum computer; we can show that the choice of the measurement sequence for the two qubits associated with the ancilla leads to the evaluation of both the balanced function $f_B$ and the constant function $f_C$. This implies that, in the \textsf{E} cluster, qubits $2$ and $5$ play the role of the oracle, while the remaining qubits constitute the tools at our disposal to discriminate between a balanced and a constant function. To better understand this feature it is useful to consider the circuit representations associated with the evaluation of the two considered global properties of the 2-bit boolean function $f$. The proposed measurement configurations are the following: 1. \textit{balanced function -} By measuring qubits $1$, $3$ and $5$ in the bases $B_1(0)$, $B_3(0)$ and $B_5(\pi)$ we realize, at the logical level, the two \textsc{Cnot} gates composing the oracle unitary $U_{f_B}$ (see Fig. \ref{fig:circuitB}). Then we proceed with the measurement of the output qubits $4$, $6$ and $2$ in the bases $C_{4}^{\ket{0}}$, $C_{6}^{\ket{0}}$ and $C_{2}^{\ket{0}}$. 2. \textit{constant function -} We measure qubits $1$, $3$ and $2$ in the bases $B_1(0)$, $B_3(0)$ and $C_{2}^{\ket{0}}$. These operations realize, at the logical level, the identity gate $U_{f_C}$ of the oracle function $f_C$ (see Fig. \ref{fig:circuitC}). Then we read the result of the computation on the output qubits $4$, $6$ and $5$ by measuring them in the bases $C_{4}^{\ket{0}}$, $C_{6}^{\ket{0}}$ and $B_5(\pi)$. We define $B_j(\alpha) = \{\ket{\alpha_{+}}_j,\ket{\alpha_{-}}_j\}$ with $\ket{\alpha_{\pm}}_j = \frac{1}{\sqrt{2}} (\ket{0}_j \pm e^{-i\alpha} \ket{1}_j)$, while $C_{j}^{\ket{0}} = \{\ket{0}_j,\ket{1}_j\}$ is the computational basis of the Hilbert space associated with qubit $j$. The above sequences of single-qubit measurements lead us to the circuits shown in Fig.
\ref{fig:circuitB} and \ref{fig:circuitC}; in particular, we can see the elements of the circuit realizing the unitary transformation $U_f$ for the balanced function $f_B$ and the constant function $f_C$, as well as single-qubit Pauli gates. Here and in the following we indicate with $Z$ ($X$) the Pauli matrix $\sigma_z$ ($\sigma_x$). For a given basis $B_j(\alpha)$, we introduce the quantity $s_j$ whose value is $0$ ($1$) if the measurement result is equal to $\ket{\alpha_{+}}_j$ ($\ket{\alpha_{-}}_j$), and similarly for the $C_{j}^{\ket{0}}$ basis. According to the algorithm, we expect the output state $\ket{1\oplus s_1\oplus s_5}_{\bar1} \ket{1\oplus s_3\oplus s_5}_{\bar2}\ket{1\oplus s_5}_{\bar3}$ for the balanced function and $\ket{s_1\oplus s_2}_{\bar1}\ket{s_2\oplus s_3}_{\bar2}\ket{1\oplus s_2}_{\bar3}$ for the constant one. In the previous expressions we take into account the feed-forward corrections of the Pauli errors. {\em Experimental preparation of the cluster state. --} Referring to Fig. \ref{fig:graphs}, the two-dimensional cluster state $\ket{{\textsf{E}}}$ is obtained from a six-qubit hyperentangled state \cite{vall09pra}, $\ket{{\rm HE}_6}$, whose graph is shown in Fig. \ref{fig:graphs}(b) \cite{cecc09prl,vall09pra}. Our experimental setup adopts a source of two-photon states based on a Spontaneous Parametric Down-Conversion (SPDC) process in which the two particles are simultaneously entangled in polarization and in two linear momentum degrees of freedom (DOFs). By a proper interferometric setup \cite{vall09qph} it is possible to measure the two spatial DOFs; these variables, labeled as the ``right/left'' momentum ($r/\ell$) and the ``external/internal'' momentum ($E/I$), are both associated with each of the eight modes on which the two photons are emitted.
A detailed description of the experimental setup, enabling the transformation from the six-qubit hyperentangled state $\ket{{\rm HE}_6}$ into a linear cluster state, can be found in recent papers \cite{cecc09prl, vall09qph}. It is interesting to note that the transition from the one-dimensional linear cluster state to the \textsf{E} cluster considered here is entirely determined by the choice of the controlled-$\sigma_z$ (\textsf{CZ}) operations corresponding to the vertical links in the graph (see Fig. \ref{fig:graphs}(a)). These gates are optically implemented between pairs of qubits belonging to the same photon. The graph associated with the six-qubit cluster state $\ket{{\textsf{E}}}$ exhibits two links, between qubits $1$ and $2$ (encoded in the $E/I$ momentum DOF and in polarization, respectively) and between qubits $2$ and $3$ (with qubit 3 encoded in the $r/\ell$ momentum DOF); hence the corresponding $\textsf{CZ}_{12}$ and $\textsf{CZ}_{23}$ logic gates only involve qubits belonging to photon $A$, as already noticed above. The optical implementation of the two controlled-$\textsf{Z}$ gates is realized by means of two half-wave plates, as shown in Fig. \ref{fig:graphs}(c). In order to give an explicit expression for the six-qubit cluster state produced in the present experiment, we point out that the experimental hyperentangled state, which we label $\ket{\widetilde{{\rm HE}_6}}$, does not coincide with the hyperentangled state $\ket{{\rm HE}_6}$ corresponding to the graph in Fig.
\ref{fig:graphs}(b) and instead satisfies the relation \begin{equation}\label{hyperent} \begin{split} \ket{\widetilde{{\rm HE}_6}} &= \textsf{H}_4 Z_5\textsf{H}_5 X_6\textsf{H}_6 \ket{{\rm HE}_6} \\ &= \frac{1}{\sqrt{2}}(\ket{00}_{14} + \ket{11}_{14}) \otimes \frac{1}{\sqrt{2}}(\ket{00}_{25} - \ket{11}_{25}) \otimes \\ &\quad \otimes \frac{1}{\sqrt{2}}(\ket{01}_{36} + \ket{10}_{36}), \end{split} \end{equation} where $\textsf{H}_j$ is the Hadamard gate on qubit $j$ and $X_j$ ($Z_j$) is the $\sigma_x$ ($\sigma_z$) gate on the corresponding qubit. For the \textsf{E} cluster represented by the graph in Fig. \ref{fig:graphs}(a) we can write \begin{equation}\label{Ecluster} \ket{{\textsf{E}}} = \textsf{CZ}_{12} \textsf{CZ}_{23} \ket{{\rm HE}_6}. \end{equation} Combining Eq. \eqref{Ecluster} with Eq. \eqref{hyperent} we get \begin{equation}\label{Eexp} \begin{split} \ket{\widetilde{\textsf{E}}} &= \textsf{CZ}_{12} \textsf{CZ}_{23} \ket{\widetilde{{\rm HE}_6}}=\textsf{H}_4 Z_5\textsf{H}_5 X_6\textsf{H}_6 \ket{{\textsf{E}}} \\ &= \frac{1}{2} (\ket{EE}\ket{\phi^{-}}_{\pi}\ket{r \ell} + \ket{EE}\ket{\phi^{+}}_{\pi}\ket{\ell r} \: + \\ &\quad + \ket{II}\ket{\phi^{+}}_{\pi}\ket{r \ell} + \ket{II}\ket{\phi^{-}}_{\pi}\ket{\ell r}) \end{split} \end{equation} for the six-qubit two-photon \textsf{E} cluster state generated in the laboratory. In the above expression the states $\ket{\phi^{+}}_{\pi}$ and $\ket{\phi^{-}}_{\pi}$ are the two polarization Bell states. As usual \cite{vall08prl,walt05nat,prev07nat}, we refer to $\ket{{\textsf{E}}}$ and $\ket{\widetilde{\textsf{E}}}$ as the state in the ``cluster'' and ``laboratory'' basis, respectively. As shown in Fig. \ref{fig:graphs}(c), we transform the hyperentangled state $\ket{\widetilde{{\rm HE}_6}}$ into the cluster state $\ket{\widetilde{\textsf{E}}}$ by applying two \textsf{CZ}\ operations. {\em Experimental results.
--} In order to characterize the generated $\ket{\widetilde{\textsf{E}}}$ state, we measured the witness operator $\mathcal W=3 - 2(\prod^3_{k=1}\frac{\widetilde g_{2k} + 1}{2}+ \prod^3_{k=1}\frac{\widetilde g_{2k-1} + 1}{2})$ \cite{toth05pra} (see \cite{cecc09prl} for the definition of the $\widetilde g_i$). We found $\langle \mathcal W\rangle=-0.333\pm0.002$, demonstrating genuine six-qubit entanglement \cite{toth05prl}. Since it is possible to show \cite{toth05pra, vall07prl} that the fidelity $F$ satisfies the relation $F\geq \frac12(1-\langle \mathcal W\rangle)$, a lower bound for the fidelity is easily found: \begin{equation} F\geq0.667\pm0.001. \end{equation} Let us now turn to the DJ algorithm. We performed the sets of single-qubit measurements stated above and found the results presented in Table \ref{table:results}: here we show the probabilities of the outputs of the computation {when no Pauli errors are present (No-FF). This corresponds to considering only the case where $s_1=s_3=s_5=0$ for the balanced function and $s_1=s_2=s_3=0$ for the constant function. We also show the results obtained by considering all possible outputs and applying} the feed-forward (FF) operations correcting the Pauli errors (see also Figs. \ref{fig:circuitB} and \ref{fig:circuitC}). {It is worth noting that, since the output of the computation is read in the $\{\ket0,\ket1\}$ basis, the FF is a {\it relabeling feed-forward}, i.e. ``the earlier measurement determines the meaning of the final readout'' (see the ``Grover's search algorithm'' section of \cite{walt05nat} or the end of Section II in \cite{vall08pra}).} It is also important to notice that the physical qubits constituting the $\ket{\widetilde{\textsf{E}}}$ cluster were actually measured in the appropriate laboratory basis, which differs from the cluster basis when a single-qubit gate acts on the considered qubit; referring to Eq. \eqref{Eexp}, this corresponds to the case of qubits $4$, $5$ and $6$.
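The arithmetic behind the quoted fidelity bound is elementary and worth making explicit; the following sketch, using the measured values quoted above, reproduces the numbers in the text.

```python
# Fidelity bound from the witness: F >= (1 - <W>)/2, with the error on <W>
# propagated linearly.  Values are the measured ones quoted in the text.
w_mean, w_err = -0.333, 0.002

witnessed = w_mean + w_err < 0      # <W> < 0 certifies genuine entanglement
f_bound = (1 - w_mean) / 2          # = 0.6665, quoted as 0.667 in the text
f_bound_err = w_err / 2             # = 0.001
```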
\begin{table}[t] \caption{\label{table:results}Experimental probabilities of the obtained output states for balanced and constant function, with (FF) or without (no-FF) feed-forward. We indicate in bold character the data corresponding to the expected outputs.} \begin{ruledtabular} \begin{tabular}[c]{c|rr|rr|} &\multicolumn{2}{c}{$f_B$: Balanced}&\multicolumn{2}{c}{$f_C:$ Constant}\\ \hline Output & No-FF$(\%)$ & FF$(\%)$& No-FF$(\%)$ & FF$(\%)$\\ \hline $\ket{000}$ & $0.8\pm0.1$ & $0.9\pm0.1$ & $0.7\pm0.1$ & $0.9\pm0.1$ \\ $\ket{001}$ & $2.7\pm0.2$ & $2.6\pm0.1$ & ${\bf77.5\pm0.5}$ & ${\bf75.2\pm0.2}$ \\ $\ket{010}$ & $1.4\pm0.2$ & $1.4\pm0.2$ & $1.2\pm0.1$ & $1.4\pm0.1$ \\ $\ket{011}$ & $15.5\pm0.5$ & $14.1\pm0.1$ & $3.5\pm0.2$ & $3.4\pm0.1$ \\ $\ket{100}$ & $1.3\pm0.2$ & $1.0\pm0.1$ & $1.2\pm0.1$ & $1.0\pm0.1$ \\ $\ket{101}$ & $2.4\pm0.2$ & $3.4\pm0.1$ & $13.5\pm0.4$ & $14.1\pm0.2$ \\ $\ket{110}$ & $0.4\pm0.1$ & $1.4\pm0.1$ & $0.4\pm0.1$ & $1.4\pm0.1$ \\ $\ket{111}$ & ${\bf 75.5\pm0.6}$ & ${\bf75.2\pm0.2}$ & $2.0\pm0.2$ & $2.6\pm0.1$ \\ \end{tabular} \end{ruledtabular} \end{table} The experimental results are in good agreement with the theoretical predictions for both functions. The main discrepancy resides in the output probabilities of the states $\ket{011}$ for $f_B$ and $\ket{101}$ for $f_C$. These states differ from the expected outputs in the value of the logical qubit $\bar1$. {This is mainly due to the imperfect interference visibility associated with the E/I momentum DOF ($V\sim 70\%$). We attribute this to the difficulty of obtaining perfect mode matching in the second interferometer (see \cite{vall09qph} for more details).} {\em Conclusions. --} We have presented an all-optical implementation of the DJ algorithm for $n = 2$ qubits.
For this purpose, by taking advantage of the generation of a six-qubit two-photon hyperentangled state, we created a novel, high-fidelity, two-dimensional six-qubit cluster state that represents the first step towards the realization of the algorithm as a one-way quantum computation. We were then able to evaluate a two-bit balanced function as well as a constant one and to discriminate between them in a single run of the executed program, in contrast to the three runs needed by a classical computer. The correct output is identified at a rate of almost 1 kHz without feed-forward, a result which exceeds by several orders of magnitude what can be achieved with a six-photon cluster state with current optical technology. By using all possible detection outputs and applying the feed-forward corrections we could obtain a rate 8 times larger. Note that our experiment was actually performed with four detectors \cite{vall09qph}. In order to consider all the possible outcomes at the same time we would need 16 detectors. The experimental results demonstrate the correctness of the proposed algorithm implementation and represent the first proof of such a computation with a two-bit function in the framework of the one-way model. \acknowledgements We thank R. Jozsa for useful discussions.
\begin{thebibliography}{20} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL {URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}} \bibitem[{\citenamefont{Kok et~al.}(2007)\citenamefont{Kok, Munro, Nemoto, Ralph, Dowling, and Milburn}}]{kok07rmp} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Kok}}, \bibinfo{author}{\bibfnamefont{W.~J.} \bibnamefont{Munro}}, \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Nemoto}}, \bibinfo{author}{\bibfnamefont{T.~C.} \bibnamefont{Ralph}}, \bibinfo{author}{\bibfnamefont{J.~P.} \bibnamefont{Dowling}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.~J.} \bibnamefont{Milburn}}, \bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{135} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Benhelm et~al.}(2008)\citenamefont{Benhelm, Kirchmair, Roos, and Blatt}}]{benh08nap} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Benhelm}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Kirchmair}}, \bibinfo{author}{\bibfnamefont{C.~F.} \bibnamefont{Roos}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Blatt}}, \bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{463} (\bibinfo{year}{2008}). \bibitem[{\citenamefont{Deutsch and Jozsa}(1992)}]{deut02pro} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Deutsch}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Jozsa}}, in \emph{\bibinfo{booktitle}{Proceedings of the Royal Society of London A}} (\bibinfo{year}{1992}), vol. \bibinfo{volume}{439}, pp. \bibinfo{pages}{553--558}. 
\bibitem[{\citenamefont{Raussendorf and Briegel}(2001)}]{raus01prl} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Raussendorf}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~J.} \bibnamefont{Briegel}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{5188} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Briegel et~al.}(2009)\citenamefont{Briegel, Browne, D\"ur, Raussendorf, and den Nest}}]{brie09nap} \bibinfo{author}{\bibfnamefont{H.~J.} \bibnamefont{Briegel}}, \bibinfo{author}{\bibfnamefont{D.~E.} \bibnamefont{Browne}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{D\"ur}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Raussendorf}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~V.} \bibnamefont{den Nest}}, \bibinfo{journal}{Nature Physics} \textbf{\bibinfo{volume}{5}}, \bibinfo{pages}{19 } (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Vallone et~al.}(2008{\natexlab{a}})\citenamefont{Vallone, Pomarico, {De Martini}, and Mataloni}}]{vall08prl} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Vallone}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Pomarico}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{{De Martini}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Mataloni}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{100}}, \bibinfo{pages}{160502} (\bibinfo{year}{2008}{\natexlab{a}}). 
\bibitem[{\citenamefont{Gao et~al.}(2009)\citenamefont{Gao, Xu, Yao, G\"uhne, Cabello, Lu, Peng, Chen, and Pan}}]{gao09qph} \bibinfo{author}{\bibfnamefont{W.-B.} \bibnamefont{Gao}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Xu}}, \bibinfo{author}{\bibfnamefont{X.-C.} \bibnamefont{Yao}}, \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G\"uhne}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Cabello}}, \bibinfo{author}{\bibfnamefont{C.-Y.} \bibnamefont{Lu}}, \bibinfo{author}{\bibfnamefont{C.-Z.} \bibnamefont{Peng}}, \bibinfo{author}{\bibfnamefont{Z.-B.} \bibnamefont{Chen}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.-W.} \bibnamefont{Pan}} (\bibinfo{year}{2009}), \bibinfo{note}{preprint}, \eprint{{\tt[arXiv:0905.2103]}}. \bibitem[{\citenamefont{Vallone et~al.}(2009{\natexlab{a}})\citenamefont{Vallone, Donati, Ceccarelli, and Mataloni}}]{vall09qph} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Vallone}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Donati}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Ceccarelli}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Mataloni}} (\bibinfo{year}{2009}{\natexlab{a}}), \bibinfo{note}{preprint}, \eprint{{\tt[arXiv:0911.2365]}}. \bibitem[{\citenamefont{Vallone et~al.}(2008{\natexlab{b}})\citenamefont{Vallone, Pomarico, {De Martini}, and Mataloni}}]{vall08pra} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Vallone}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Pomarico}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{{De Martini}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Mataloni}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{042335} (\bibinfo{year}{2008}{\natexlab{b}}). 
\bibitem[{\citenamefont{Walther et~al.}(2005)\citenamefont{Walther, Resch, Rudolph, Schenck, Weinfurter, Vedral, Aspelmeyer, and Zeilinger}}]{walt05nat} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Walther}}, \bibinfo{author}{\bibfnamefont{K.~J.} \bibnamefont{Resch}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Rudolph}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Schenck}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}}, \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vedral}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Aspelmeyer}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Nature (London)} \textbf{\bibinfo{volume}{434}}, \bibinfo{pages}{169} (\bibinfo{year}{2005}). \bibitem[{\citenamefont{Prevedel et~al.}(2007)\citenamefont{Prevedel, Walther, Tiefenbacher, B{\"o}hi, Kaltenbaek, Jennewein, and Zeilinger}}]{prev07nat} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Prevedel}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Walther}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Tiefenbacher}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{B{\"o}hi}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Kaltenbaek}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Jennewein}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Nature (London)} \textbf{\bibinfo{volume}{445}}, \bibinfo{pages}{65} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Tame et~al.}(2007)\citenamefont{Tame, Prevedel, Paternostro, Bohi, Kim, and Zeilinger}}]{tame07prl} \bibinfo{author}{\bibfnamefont{M.~S.} \bibnamefont{Tame}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Prevedel}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Paternostro}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Bohi}}, \bibinfo{author}{\bibfnamefont{M.~S.} \bibnamefont{Kim}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}}, \bibinfo{journal}{Phys. 
Rev. Lett.} \textbf{\bibinfo{volume}{98}}, \bibinfo{pages}{140501} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Ceccarelli et~al.}(2009)\citenamefont{Ceccarelli, Vallone, {De Martini}, Mataloni, and Cabello}}]{cecc09prl} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Ceccarelli}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Vallone}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{{De Martini}}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Mataloni}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Cabello}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{103}}, \bibinfo{pages}{160401} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Brainis et~al.}(2003)\citenamefont{Brainis, Lamoureux, Cerf, Emplit, Haelterman, and Massar}}]{brai03prl} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Brainis}}, \bibinfo{author}{\bibfnamefont{L.-P.} \bibnamefont{Lamoureux}}, \bibinfo{author}{\bibfnamefont{N.~J.} \bibnamefont{Cerf}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Emplit}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Haelterman}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Massar}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{157902} (\bibinfo{year}{2003}). 
\bibitem[{\citenamefont{Hein et~al.}(2006)\citenamefont{Hein, D{\"u}r, Eisert, Raussendorf, den Nest, and Briegel}}]{hein06var} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hein}}, \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{D{\"u}r}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Eisert}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Raussendorf}}, \bibinfo{author}{\bibfnamefont{M.~V.} \bibnamefont{den Nest}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.-J.} \bibnamefont{Briegel}}, in \emph{\bibinfo{booktitle}{Quantum computers, algorithms and chaos}}, edited by \bibinfo{editor}{\bibfnamefont{P.}~\bibnamefont{Zoller}}, \bibinfo{editor}{\bibfnamefont{G.}~\bibnamefont{Casati}}, \bibinfo{editor}{\bibfnamefont{D.}~\bibnamefont{Shepelyansky}}, \bibnamefont{and} \bibinfo{editor}{\bibfnamefont{G.}~\bibnamefont{Benenti}} (\bibinfo{year}{2006}), International School of Physics Enrico Fermi (Varenna, Italy), \eprint{quant-ph/0602096}. \bibitem[{\citenamefont{Nielsen}(2004)}]{niel04prl} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Nielsen}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{93}}, \bibinfo{pages}{040503} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Vallone et~al.}(2009{\natexlab{b}})\citenamefont{Vallone, Ceccarelli, {De Martini}, and Mataloni}}]{vall09pra} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Vallone}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Ceccarelli}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{{De Martini}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Mataloni}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{030301(R)} (\bibinfo{year}{2009}{\natexlab{b}}). \bibitem[{\citenamefont{T\'oth and G{\"u}hne}(2005{\natexlab{a}})}]{toth05pra} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{T\'oth}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G{\"u}hne}}, \bibinfo{journal}{Phys. Rev. 
A.} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{022340} (\bibinfo{year}{2005}{\natexlab{a}}). \bibitem[{\citenamefont{T\'oth and G{\"u}hne}(2005{\natexlab{b}})}]{toth05prl} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{T\'oth}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{G{\"u}hne}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{94}}, \bibinfo{pages}{060501} (\bibinfo{year}{2005}{\natexlab{b}}). \bibitem[{\citenamefont{Vallone et~al.}(2007)\citenamefont{Vallone, Pomarico, Mataloni, {De Martini}, and Berardi}}]{vall07prl} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Vallone}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Pomarico}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Mataloni}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{{De Martini}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Berardi}}, \bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{98}}, \bibinfo{pages}{180502} (\bibinfo{year}{2007}). \end{thebibliography} \end{document}
\begin{document} \renewcommand{\proofname}{Proof} \renewcommand\refname{References} \renewcommand\contentsname{Table of contents} \renewcommand{\abstractname}{Abstract} \title{Isometric affine actions on Banach spaces and spaces with labelled partitions} \footnotetext{The research of the author was partially supported by grant 20-149261 of Swiss SNF.} \begin{abstract} We define the structure of spaces with labelled partitions, which generalizes the structure of spaces with measured walls, and we study the link between actions by automorphisms on spaces with labelled partitions and isometric affine actions on Banach spaces, in particular on $L^p$ spaces. We build natural spaces with labelled partitions for various constructions of groups, namely: direct sums, semi-direct products, wreath products and amalgamated free products. We apply this to prove that the wreath product of a group with property $PL^p$ by a group with the Haagerup property has property $PL^p$, and that the amalgamated free product of groups with property $PL^p$ has property $PL^p$. \end{abstract} \section{Introduction} A locally compact second countable group $G$ has the \emph{Haagerup property} (or is \emph{a-(T)-menable}) if there exists a proper continuous isometric affine action of $G$ on a Hilbert space; this property can be seen as a strong negation of Kazhdan's property (T) (an overview of the Haagerup property can be found in \cite{checowjoljulval}). Groups with the Haagerup property are known to satisfy the Baum-Connes conjecture by a result of Higson and Kasparov in \cite{higkas} (see \cite{julg} for further details). The Haagerup property is stable under taking subgroups, direct products and amalgamated products over finite subgroups, but it is not stable under group extensions in general, even in the case of semi-direct products. However, Cornulier, Stalder and Valette recently proved in \cite{wreath} that it is stable under a particular kind of extension, namely the wreath product.
Their proof uses the connection between the Haagerup property and spaces with measured walls, which we now explain. A \emph{space with walls} is a pair $(X,W)$ where $X$ is a set and $W$ is a family of partitions of $X$ into two pieces, called \emph{walls}, such that any pair of points of $X$ is separated by finitely many walls. This notion was introduced by Haglund and Paulin in \cite{hagpaul} and generalized in a topological setting by Cherix, Martin and Valette in \cite{chemarval} to \emph{spaces with measured walls} (see Definition \ref{labpart_meswalls}). It was gradually realised that the Haagerup property is equivalent to the existence of a proper action on a space with measured walls; more precisely, we have the following theorem: \emph{a locally compact second countable group has the Haagerup property if, and only if, it acts properly by automorphisms on a space with measured walls.} Using results of Robertson and Steger (see \cite{robste}), Cherix, Martin and Valette proved this theorem for discrete groups in \cite{chemarval}, and Chatterji, Drutu and Haglund extended the equivalence to locally compact second countable groups using the notion of median metric spaces in \cite{chadruhag}. The stability of the Haagerup property under wreath products was established in \cite{wreath} by constructing a space with measured walls from the measured walls structures on each factor; moreover, in the same article, Cornulier, Stalder and Valette generalized this result to the permutational wreath product (see Definition \ref{wreath_df}) when the index set $I$ is a quotient by a co-Haagerup subgroup of the shifting group $G$ (see \cite{chiioa} for a counterexample when the pair $(G,I)$ has relative property (T)). This result led to the first examples of Haagerup groups which are not weakly amenable in the sense of \cite{cowhaa}. \\ The notion of the Haagerup property naturally extends to proper isometric affine actions on Banach spaces.
There has been much recent work on isometric actions on Banach spaces: in \cite{haaprz}, Haagerup and Przybyszewska showed that every locally compact second countable group $G$ acts properly by affine isometries on the reflexive Banach space $\bigoplus_{n \in \N}^2 L^{2n}(G,\mu)$, where $\mu$ is the Haar measure; Cornulier, Tessera and Valette introduced in \cite{cortesval} property $(BP_0^V)$, for $V$ a Banach space, as a tool to show that the simple Lie group $G=Sp(n,1)$ acts properly by isometries on $L^p(G)$ for $p > 4n+2$; in \cite{badfurgelmon}, Bader, Furman, Gelander and Monod studied an analog of property (T) in terms of $L^p$ spaces and, more generally, of superreflexive Banach spaces. One of the motivations for this topic is a recent result of Kasparov and Yu in \cite{kasyu}, which asserts that the existence of a coarse embedding of a finitely generated group into a uniformly convex Banach space implies the coarse geometric Novikov conjecture for this group. See \cite{now} for an overview of results and questions about isometric affine actions on Banach spaces. \\ We will focus on specific Banach spaces, namely, $L^p$ spaces. For $p \geq 1$, we say that a locally compact second countable group $G$ has \emph{property $PL^p$} (or is \emph{a-$FL^p$-menable}) if there exists a proper continuous isometric affine action of $G$ on an $L^p$ space. See for instance \cite{chadruhag} for a characterisation of property $PL^p$ for $p\in [1,2]$ in terms of the Haagerup property. An important example is the following theorem due to Yu (see \cite{yuhyp}): \emph{let $\G$ be a discrete Gromov hyperbolic group. Then there exists $p \geq 2$ such that $\G$ has property $PL^p$.} Yu proved this result by giving an explicit proper isometric affine action of $\G$ on $\ell^{p}(\{(x,y)\in\G\times \G \st d(x,y)\leq R\})$ using a construction of Mineyev in \cite{minhyp}; see \cite{bourd1} or \cite{nicahyp} for other proofs of this result in terms of boundaries of $\G$.
A remarkable consequence is that there exist infinite groups with property (T) (and hence, without the Haagerup property) which have property $PL^p$ for some $p >2$. In this paper, we define a generalization of the structure of spaces with measured walls, namely, the structure of spaces with labelled partitions, which provides a flexible framework, in terms of geometry and of stability under various types of group constructions, for isometric affine actions on Banach spaces. In Paragraph \ref{subsec_action}, we establish the following result, which links isometric affine actions on Banach spaces and actions by automorphisms on spaces with labelled partitions: \begin{theor}\label{labpart_affact} Let $G$ be a topological group. \begin{enumerate} \item[1.] If $G$ acts (resp. acts properly) continuously by affine isometries on a Banach space $B$ then there exists a structure $(G,\P,F(\P))$ of space with labelled partitions on $G$ such that $G$ acts (resp. acts properly) continuously by automorphisms on $(G,\P,F(\P))$ via its left-action on itself. Moreover, there exists a linear isometric embedding $F(\P) \hookrightarrow B$. \item[2.] If $G$ acts (resp. acts properly) continuously by automorphisms on a space with labelled partitions $(X,\P,F(\P))$ then there exists a (resp. proper) continuous isometric affine action of $G$ on a Banach space $B$. Moreover, $B$ is a closed subspace of $F(\P)$. \end{enumerate} \end{theor} This theorem can be rephrased in the particular case of $L^p$ spaces as follows: \begin{corr}\label{labpart_aflp} Let $p\geq 1$ with $p \notin 2\Z\smallsetminus \{2\}$ and $G$ be a topological group. $G$ has property $PL^p$ if, and only if, $G$ acts properly continuously by automorphisms on a space with labelled partitions $(X,\P,F(\P))$ where $F(\P)$ is isometrically isomorphic to a closed subspace of an $L^p$ space.
\end{corr} A crucial ingredient in the definition of spaces with labelled partitions is the ``geometric'' understanding of the construction of Mineyev in \cite{minhyp} used by Yu in \cite{yuhyp} to exhibit a proper action of discrete hyperbolic groups on some $\ell^p$ space. Another inspiration for this definition comes from \cite{checowjoljulval} Proposition 7.4.1, where Valette states the following geometric characterisation of the Haagerup property for locally compact groups: $G$ has the Haagerup property if, and only if, there exist a metric space $(X,d)$ on which $G$ acts isometrically and metrically properly, a unitary representation $\pi$ of $G$ on a Hilbert space $\H_{\pi}$, and a continuous map $c: X \times X \rightarrow \H_{\pi}$ such that: \begin{itemize} \item[1.] Chasles' relation: \begin{center}for all $x,y,z \in X$, $c(x,z)=c(x,y)+c(y,z)$;\end{center} \item[2.] $G$-equivariance condition: \begin{center}for all $x,y \in X$, $g\in G$, $c(gx,gy)=\pi(g)c(x,y)$;\end{center} \item[3.] Properness condition: \begin{center}if $d(x,y) \rightarrow +\infty$, then $\|c(x,y)\|_{\H_{\pi}}\rightarrow +\infty$.\end{center} \end{itemize} To emphasize the connection with this result, we use the same notation $c$ (for \emph{c}ocycle) for the separation map $c:X\times X \rightarrow F(\P)$ associated with a set of labelling functions $\P$ (see Definition \ref{labpart_sepmap}). In fact, an immediate consequence of Theorem \ref{labpart_affact}, Statement 1., is that the separation map $c_{\a}$ of the set of labelled partitions associated with a proper isometric affine action $\a$ on a Banach space (see Definition \ref{labpart_alpha_def}) satisfies the conditions 1., 2. and 3. mentioned above. \\ We describe, in Part \ref{subsubsec_defactlab}, the maps that preserve the structure of spaces with labelled partitions in order to define actions by automorphisms on a space with labelled partitions.
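As a toy illustration (ours, not the paper's), the three conditions above can be checked for the simplest possible data: $X=\R$ with $G=\Z$ acting by translations, $\H_{\pi}=\R$ with the trivial representation, and $c(x,y)=x-y$, in which case $\|c(x,y)\|=d(x,y)$ and properness is immediate.

```python
# Toy data (not from the paper): X = R, G = Z acting by translations,
# H_pi = R with the trivial representation pi, and c(x, y) = x - y.

def pi(g, v):
    return v          # trivial unitary representation on H_pi = R

def c(x, y):
    return x - y      # candidate map c : X x X -> H_pi

pts = [-2.0, 0.5, 3.0, 7.5]   # exactly representable floats
for x in pts:
    for y in pts:
        # 1. Chasles' relation: c(x, z) = c(x, y) + c(y, z)
        for z in pts:
            assert c(x, z) == c(x, y) + c(y, z)
        # 2. G-equivariance: c(g + x, g + y) = pi(g) c(x, y)
        for g in range(-3, 4):
            assert c(x + g, y + g) == pi(g, c(x, y))
        # 3. properness: ||c(x, y)|| equals d(x, y) = |x - y| here
        assert abs(c(x, y)) == abs(x - y)
```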
This notion of homomorphisms of spaces with labelled partitions generalizes the notion of homomorphisms of spaces with measured walls (see \cite{chadruhag} Definition 3.5). \\ We discuss constructions of spaces with labelled partitions for the direct sum, semi-direct product, wreath product and amalgamated free product in Sections \ref{sec_dirsum}, \ref{sec_wreath} and \ref{freepartsec}. We apply these constructions to groups with property $PL^p$ and obtain the following stability properties: \begin{theor}\label{const_semidir_wreathflp} Let $H,G$ be countable discrete groups, $L$ be a subgroup of $G$ and $p > 1$, with $p \notin 2\Z\smallsetminus \{2\}$. We denote by $I$ the quotient $G/L$ and $W=\bigoplus_{I}H$. Assume that $G$ is Haagerup, $L$ is co-Haagerup in $G$ and $H$ has property $PL^p$. \\ Then the permutational wreath product $H\wr_I G=W\rtimes G$ has property $PL^p$. \end{theor} \begin{theor}\label{amalgamflp} Let $G,H$ be countable discrete groups, $F$ be a finite group such that $F \hookrightarrow G$ and $F \hookrightarrow H$, and $p > 1$ with $p \notin 2\Z \smallsetminus \{2\}$. Then $G,H$ have property $PL^p$ if, and only if, $G*_F H$ has property $PL^p$. \end{theor} We mention that, in his thesis, Pillon obtains by other methods the stability of property $PL^p$ under amalgamated free products over finite subgroups, and he also computes a lower bound for the equivariant $L^p$-compression of such a product in terms of the equivariant $L^p$-compressions of the factors. More precisely, he exhibits an explicit proper cocycle for a representation of the product built from the induced representations of the factors. \tableofcontents In what follows, all topological groups we consider are assumed to be Hausdorff.
\section{Preliminaries}\label{sec_prelim} \subsection{Metrically proper actions}\label{sub_prelim_proper} A pseudo-metric $d$ on a set $X$ is a symmetric map $d: X\times X \rightarrow \R_+$ which satisfies the triangle inequality and $d(x,x)=0$; unlike a metric, a pseudo-metric need not separate points. \begin{df} Let $G$ be a topological group acting continuously isometrically on a pseudo-metric space $(X,d_X)$. The $G$-action on $X$ is said to be \emph{metrically proper} if, for all (or equivalently, for some) $x_0 \in X$: $$\displaystyle \lim_{g \rightarrow \infty}d_X(g.x_0,x_0)= +\infty ,$$ in other words, if for all $R \geq 0$, the set $\{ g \in G \st d_X(g.x_0,x_0) \leq R \}$ is relatively compact in $G$. \end{df} Let $X$ be a set endowed with a pseudo-metric $d$. We put on $X$ the following equivalence relation: for $x,x' \in X$, $x\sim x'$ if, and only if, $d(x,x')=0$, and we denote by $Y$ the quotient set $X/ \sim$. Then we can define a metric $\tilde{d}$ on $Y$ by setting, for $x,x' \in X$, $\tilde{d}([x],[x'])=d(x,x')$. Moreover, an isometric group action on $(X,d)$ preserves the classes of $\sim$ and thus induces an isometric action on $(Y,\tilde{d})$. \begin{lem} Let $G$ be a topological group acting continuously isometrically on a pseudo-metric space $(X,d)$. The $G$-action on $X$ is \emph{metrically proper} if, and only if, the induced $G$-action on the quotient metric space $(Y,\tilde{d})$ is metrically proper. \end{lem} \subsection{Isometric affine actions}\label{sub_prelim_affact} \begin{df} We say that the action of a topological group $G$ on a topological space $X$ is \emph{strongly continuous} if, for all $x \in X$, the orbit map $g \mapsto gx$ from $G$ to $X$ is continuous. \end{df} Let $G$ be a topological group and let $(B,\|.\|)$ be a Banach space over $\K=\R$ or $\C$.
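The pseudo-metric quotient construction of the previous subsection admits a simple toy illustration (ours, not from the paper): a pseudo-metric on $\R^2$ that ignores the second coordinate, whose quotient $Y$ is $\R$ with its usual metric.

```python
# Toy example (not from the paper): the pseudo-metric on X = R^2 that
# ignores the second coordinate, and the induced metric on Y = X/~.

def d(x, y):
    # pseudo-metric: points on the same vertical line are at distance 0
    return abs(x[0] - y[0])

def cls(x):
    # the class [x] of x is determined by its first coordinate
    return x[0]

def d_tilde(cx, cy):
    # the induced metric d~ on the quotient Y, well defined on classes
    return abs(cx - cy)

x, xp = (1.0, 5.0), (1.0, -3.0)
assert x != xp and d(x, xp) == 0.0             # d is pseudo, not a metric
assert d_tilde(cls(x), cls((4.0, 9.0))) == 3.0  # d~ agrees with d on classes
```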
\begin{df} A \emph{continuous isometric affine} action $\a$ of $G$ on $B$ is a strongly continuous morphism $$\a: G \longrightarrow \text{Isom}(B)\cap \text{Aff}(B).$$ \end{df} Notice that if $B$ is a real Banach space, then, by the Mazur-Ulam Theorem, $$\text{Isom}(B)\cap \text{Aff}(B)=\text{Isom}(B).$$ \begin{prop} A continuous isometric affine action $\a$ of $G$ on $B$ is characterised by a pair $(\pi,b)$ where: \begin{itemize} \item $\pi$ is a strongly continuous isometric representation of $G$ on $B$, \item $b:G\rightarrow B$ is a continuous map satisfying the 1-cocycle relation: for $g,h \in G$, $$ b(gh)=\pi(g)b(h)+b(g). $$ \end{itemize} Moreover, we have, for $g \in G$, $x \in B$: $$ \a(g)x=\pi(g)x+b(g). $$ \end{prop} \begin{df} Let $\a$ be a continuous isometric affine action of $G$ on $B$. We say that $\a$ is \emph{proper} if the action of $G$ on the metric space $(B,d_{\|.\|})$ is metrically proper, where $d_{\|.\|}$ is the canonical metric on $B$ induced by the norm $\|.\|$. \end{df} \begin{prop} A continuous isometric affine action $\a$ of $G$ on $B$ is proper if, and only if, $$\|b(g)\|\underset{g \rightarrow \infty}{\longrightarrow}+\infty.$$ \end{prop} \begin{df} Let $p \geq 1$. We say that $G$ has property $PL^p$ (or is \emph{a-$FL^p$-menable}) if there exists a proper continuous isometric affine action of $G$ on an $L^p$ space. \end{df} \subsection{On isometries of $L^p$-spaces}\label{sub_prelim_lp} In general, for $p\geq 1$, a closed subspace of an $L^p$-space is not an $L^p$-space (except in the special case $p=2$); but, in \cite{hardin}, Hardin showed the following result on the extension of linear isometries defined on closed subspaces of an $L^p$ space (here, we give a reformulation of this result coming from \cite{badfurgelmon}, Corollary 2.20): \begin{theo}\label{hardin_lp} Let $p > 1$ with $p \notin 2\Z\smallsetminus \{2\}$ and $F$ be a closed subspace of $L^p(X,\mu)$. Let $\pi$ be a linear isometric representation of a group $G$ on $F$.
Then there is a linear isometric representation $\a'$ of $G$ on some other space $L^p(X',\mu')$ and a linear $G$-equivariant isometric embedding $F \hookrightarrow L^p(X',\mu')$. \end{theo} An immediate consequence is the following: \begin{cor}\label{hardin_lp_cor} Let $p > 1$ with $p \notin 2\Z\smallsetminus \{2\}$, $F$ be a closed subspace of an $L^p$-space and $G$ be a topological group. If $G$ acts properly by affine isometries on $F$, then $G$ has property $PL^p$. \end{cor} In Section \ref{sec_dirsum}, we embed linearly isometrically into $L^p$ spaces some normed vector spaces isometrically isomorphic to direct sums of $L^p$ spaces, thanks to the following basic result: \begin{df} Let $I$ be a countable index set, $(B_i,\|.\|_{_{B_i}})_{i \in I}$ be a family of Banach spaces and $p \geq 1$. We call \emph{$\ell^p$-direct sum} of the family $(B_i)$ the space: $$ B=\rexp{p}{\bigoplus_{i \in I}} B_i:=\left \lbrace (x_i)_{i \in I} \in \prod_{i \in I}B_i \st \sum_{i \in I}\|x_i\|_{_{B_i}}^p < +\infty \right \rbrace, $$ and we denote, for $x=(x_i) \in B$, $$ \|x\|_p:=\left(\sum_{i \in I}\|x_i\|_{_{B_i}}^p\right)^{\frac{1}{p}}.$$ \end{df} The space $B=\rexp{p}{\bigoplus_{i \in I}} B_i$ endowed with the norm $\|.\|_p$ is a Banach space, and moreover, we have: \begin{prop}\label{lpisomisom} Let $I$ be a countable index set, $p \geq 1$ and $(L^p(X_i,\mu_i))_{i \in I}$ be a family of $L^p$-spaces. Then $\displaystyle \left(\rexp{p}{\bigoplus_{i \in I}} L^p(X_i,\mu_i),\|.\|_p\right)$ is isometrically isomorphic to an $L^p$-space. \end{prop} \section{Spaces with labelled partitions and actions on Banach Spaces}\label{sec_lab} In this section we introduce the structure of \emph{space with labelled partitions} and record for further use a few basic properties. \subsection{Spaces with labelled partitions}\label{subsec_lab} \subsubsection{Definitions}\label{subsubsec_deflab} Let $\K=\R$ or $\C$. \\ Consider a set $X$ and a function $p:X\rightarrow \K$.
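Before developing the labelled-partition machinery, here is a small numerical sketch (illustrative only, all choices ours) of the $\ell^p$-direct-sum norm $\|x\|_p=\left(\sum_i \|x_i\|_{_{B_i}}^p\right)^{1/p}$ defined above, taking a finite family with each $B_i=\R^2$ equipped with its own $\ell^p$ norm:

```python
# Numerical sketch (illustrative) of the l^p-direct-sum norm for a finite
# family, taking each B_i = R^2 equipped with its own l^p norm.

def lp_norm(v, p):
    # l^p norm of a finite tuple of scalars
    return sum(abs(t) ** p for t in v) ** (1 / p)

def direct_sum_norm(xs, p):
    # ||x||_p = (sum_i ||x_i||_{B_i}^p)^(1/p) for x = (x_i)_i
    return sum(lp_norm(x, p) ** p for x in xs) ** (1 / p)

xs = [(3.0, 4.0), (0.0, 2.0)]
# for p = 2 this is just the Euclidean norm of the concatenated vector
assert abs(direct_sum_norm(xs, 2) - (9 + 16 + 4) ** 0.5) < 1e-12
```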
There is a natural partition $P=P(p)$ of $X$ associated with $p$, defined as follows. Consider the equivalence relation $\sim_p$ on $X$ given by: for $x,y \in X$, \begin{center} $x \sim_p y$ if, and only if, $p(x)=p(y)$. \end{center} We define the partition associated with $p$ by $\displaystyle P(p)= \{ \pi_p^{-1}(h) \st h \in X/ \sim_p \}$ where $\pi_p$ is the canonical projection from $X$ to $X/ \sim_p$. \begin{df}\label{labpart} Let $X$ be a set, and $\P=\{ p: X \rightarrow \K \}$ be a family of functions. \begin{itemize} \item We say that $p$ is a \emph{labelling function} on $X$ and the pair $(P,p)$ is called a \emph{labelled partition} of $X$.\\ \item We say that $x,y \in X$ are \emph{separated} by $p \in \P$ if $p(x)\neq p(y)$ and we denote by $\P(x|y)$ the set of all labelling functions separating $x$ and $y$. \end{itemize} \end{df} \begin{rmq}\label{labpart_rmq} The terminology ``$x$ and $y$ are separated by $p$'' comes from the fact that, if we denote by $P$ the partition of $X$ associated with $p$, then $x$ and $y$ are separated by $p$ if, and only if, $x$ and $y$ belong to two different sets of the partition $P$, i.e. $P$ separates $x$ and $y$. \end{rmq} Consider a set $\P$ of labelling functions on $X$, and the $\K$-vector space $\F(\P,\K)$ of all functions from $\P$ to $\K$. Then we have a natural map $c:X\times X \rightarrow \F(\P,\K)$ given by: for $x,y \in X$ and $p \in \P$, $$c(x,y)(p)=p(x)-p(y).$$ Notice that $p$ belongs to $\P(x|y)$ if, and only if, $c(x,y)(p)\neq 0$. \begin{df}\label{labpart_sepmap} Let $X$ be a set and $\P$ be a family of labelling functions. The map $c: X \times X \rightarrow \F(\P,\K)$ such that, for $x,y \in X$ and for $p \in \P$, $c(x,y)(p)=p(x)-p(y)$ is called the \emph{separation map} of $X$ relative to $\P$.
\end{df} We now define the notion of space with labelled partitions: \begin{df}[Space with labelled partitions]\label{labpart_space} $\;$ \\ Let $X$ be a set, $\P$ be a family of labelling functions from $X$ to $\K$ and $(\F(\P),\|.\|)$ be a semi-normed space of $\K$-valued functions on $\P$ such that the quotient vector space $F(\P)$ of $\F(\P)$ by its subspace $\F(\P)_0=\{\k \in \F(\P) \st \|\k\|=0 \}$ is a Banach space. \\ We say that $(X,\P,F(\P))$ is a space with labelled partitions if, for all $x,y \in X$: $$c(x,y): \P \rightarrow \K \text{ belongs to }\F(\P).$$ \end{df} \begin{df}\label{labpart_metric} If $(X,\P,F(\P))$ is a space with labelled partitions, we can endow $X$ with the following pseudo-metric: $d(x,y)=\|c(x,y)\|$ for $x,y \in X$. \\ We call $d$ the \emph{labelled partitions pseudo-metric} on $X$. \end{df} \begin{rmq}\label{labpart_metric_rmq} If $(X,\P,F(\P))$ is a space with labelled partitions, then the separation map $c:X\times X \rightarrow F(\P)$ is continuous where $X\times X$ is endowed with the product topology induced by the topology of $(X,d)$. \end{rmq} \subsubsection{Actions on spaces with labelled partitions}\label{subsubsec_defactlab} Here, we describe the maps that preserve the structure of space with labelled partitions. \begin{df}[homomorphism of spaces with labelled partitions]\label{labpart_homo} Let $(X,\P,F(\P))$, $(X',\P',F'(\P'))$ be spaces with labelled partitions and let $f:X\rightarrow X'$ be a map from $X$ to $X'$. 
\\ We say that $f$ is a \emph{homomorphism of spaces with labelled partitions} if: \\ \begin{enumerate} \item for any $p' \in \P'$, $\Phi_f(p'):=p'\circ f$ belongs to $\P$, \item for all $\k \in F(\P)$, $\k\circ \Phi_f$ belongs to $F'(\P')$ and $$\|\k\circ \Phi_f\|_{_{F'(\P')}}=\|\k\|_{_{F(\P)}}.$$ \end{enumerate} An \emph{automorphism} of the space with labelled partitions $(X,\P,F(\P))$ is a bijective map $f:X\rightarrow X$ such that $f$ and $f^{-1}$ are homomorphisms of spaces with labelled partitions from $(X,\P,F(\P))$ to $(X,\P,F(\P))$. \end{df} \begin{rmq}\label{labpart_homo_rmq} $\;$ \\ - If $f$ is a homomorphism of spaces with labelled partitions, then $f$ is an isometry from $X$ to $X'$, these spaces being endowed with their respective labelled partitions pseudo-metrics; indeed, for $x,y \in X$, $$d_X(x,y)=\|c(x,y)\|_{_{F(\P)}}=\|c(x,y)\circ \Phi_f\|_{_{F'(\P')}}=\|c'(f(x),f(y))\|_{_{F'(\P')}}=d_{X'}(f(x),f(y)),$$ since we have $c(x,y)\circ \Phi_f=c'(f(x),f(y))$. \\ - If $f$ is an automorphism of a space with labelled partitions, the map $\Phi_f$ is a bijection: $(\Phi_f)^{-1}=\Phi_{f^{-1}}$. \\ \end{rmq} \begin{prop}\label{comp_homo} Let $(X,\P,F(\P))$, $(X',\P',F'(\P'))$, $(X'',\P'',F''(\P''))$ be spaces with labelled partitions and $f:X\rightarrow X',\; f':X'\rightarrow X''$ be homomorphisms of spaces with labelled partitions. \\ We denote by $\Phi_f$ the map such that $\Phi_f(p'):=p'\circ f$, for $p' \in \P'$, and by $\Phi_{f'}$ the map such that $\Phi_{f'}(p''):=p''\circ f'$, for $p'' \in \P''$.
\\ Then $f' \circ f$ is a homomorphism of spaces with labelled partitions from $(X,\P,F(\P))$ to $(X'',\P'',F''(\P''))$ and we have, by denoting $\Phi_{f'\circ f}(p''):=p''\circ (f'\circ f)$: $$ \Phi_{f} \circ \Phi_{f'}=\Phi_{f'\circ f}.$$ \end{prop} \begin{proof} For all $p'' \in \P''$, we have: \begin{center} \begin{tabular}{rl} $\Phi_{f'\circ f}(p'')$&$=p''\circ (f'\circ f)$ \\ &$=(p''\circ f')\circ f$ \\ &$=\Phi_{f'}(p'') \circ f$ with $\Phi_{f'}(p'') \in \P'$ by Definition \ref{labpart_homo} \\ &$=\Phi_f(\Phi_{f'}(p''))$ and hence, \\ $\Phi_{f'\circ f}(p'')$&$=\Phi_f\circ \Phi_{f'}(p'') \in \P$ by Definition \ref{labpart_homo}. \end{tabular} \end{center} It follows that $\Phi_{f} \circ \Phi_{f'}=\Phi_{f'\circ f}$. \\ Now, let $\k \in F(\P)$. Since $\k \circ \Phi_{f}$ belongs to $F'(\P')$, $$\k \circ \Phi_{f'\circ f}= (\k \circ \Phi_{f}) \circ \Phi_{f'} \in F''(\P''),$$ and we clearly have, using the previous equality, $$\|\k \circ \Phi_{f'\circ f}\|_{_{F''(\P'')}}= \|\k \circ \Phi_{f}\|_{_{F'(\P')}}=\|\k\|_{_{F(\P)}}.$$ \end{proof} \begin{rmq}\label{comp_homo_rmq} Assume a group $G$ acts by automorphisms on $(X,\P,F(\P))$. For $g \in G$, we denote by $\tau(g): X \rightarrow X$, the map $x \mapsto \tau(g)x=gx$. Then, by Proposition \ref{comp_homo}, we have: $$\Phi_{\tau(g_2)}\circ \Phi_{\tau(g_1)} = \Phi_{\tau(g_1g_2)}.$$ \end{rmq} \begin{df}\label{labpart_propertiesaction} Let $(X,\P,F(\P))$ be a space with labelled partitions and $G$ be a topological group acting by automorphisms on $(X,\P,F(\P))$. \begin{itemize} \item We say that $G$ acts \emph{continuously} on $(X,\P,F(\P))$, if the $G$-action on $(X,d)$ is strongly continuous. \\ \item We say that $G$ acts \emph{properly} on $(X,\P,F(\P))$, if the $G$-action on $(X,d)$ is metrically proper where $d$ is the labelled partitions pseudo-metric on $X$. 
\end{itemize} \end{df} \begin{rmq}\label{labpart_actionproper_rmq} Notice that if a topological Hausdorff group $G$ acts properly continuously by automorphisms on a space $(X,\P,F(\P))$ with labelled partitions, then it is locally compact and $\s$-compact: in fact, let $x_0 \in X$; for $r>0$, $V_r=\overline{\{ g \in G \st d(gx_0,x_0) \leq r \}}$ is a compact neighbourhood of the identity element $e$ in $G$ since the action on $(X,d)$ is strongly continuous and proper, and we have $G=\cup_{n \in\N^*}V_n$. \end{rmq} \begin{prop}\label{labpart_proper_prop} Let $G$ be a topological group. Assume $G$ acts continuously by automorphisms on $(X,\P,F(\P))$. \\ The $G$-action on $(X,\P,F(\P))$ is proper if, and only if, for every (resp. for some) $x_0 \in X$, $\|c(gx_0,x_0)\| \rightarrow \infty$ when $g\rightarrow \infty$. \end{prop} \begin{proof} This follows immediately from the definition of a metrically proper action. \end{proof} \begin{lem}[pull back of space with labelled partitions]\label{pullbackpart} Let $(X,\P_X,F_X(\P_X))$ be a space with labelled partitions, $Y$ be a set and $f:Y\rightarrow X$ be a map. Then there exists a \emph{pull back} structure of space with labelled partitions $(Y,\P_Y,F_Y(\P_Y))$ turning $f$ into a homomorphism. \\ Moreover, if $G$ acts on $Y$ and $G$ acts continuously by automorphisms on $(X,\P_X,F_X(\P_X))$ such that $f$ is $G$-equivariant, then $G$ acts continuously by automorphisms on $(Y,\P_Y,F_Y(\P_Y))$. \end{lem} \begin{proof} We consider the family of labelling functions on $Y$ : $$ \P_Y=\{ p\circ f \st p \in \P_X\}, $$ and let $c_Y$ be the separation map on $Y$ associated with $\P_Y$. \\ Let $T: \Vect\big(c_Y(y,y') \st y,y' \in Y\big) \rightarrow F_X(\P_X)$ be the linear map such that $T(c_Y(y,y'))=c_X(f(y),f(y'))$. The map $T$ is well defined and is injective since, for every $p \in \P_X$, $$c_X(f(y),f(y'))(p)=p\circ f(y)-p\circ f(y')=c_Y(y,y')(p\circ f).
$$ On $\Vect\big(c_Y(y,y') \st y,y' \in Y\big)$, we consider the following norm : \\ for $\k \in \Vect\big(c_Y(y,y') \st y,y' \in Y\big)$, we set $$ \|\k\|_{_{\P_Y}}=\|T(\k)\|_{_{F_X(\P_X)}}. $$ We then set $F_Y(\P_Y)=\overline{\Vect\big(c_Y(y,y') \st y,y' \in Y\big)}^{\|\cdotp\|_{_{\P_Y}}}$. Hence, by construction, $(Y,\P_Y,F_Y(\P_Y))$ is a space with labelled partitions and $f$ is clearly a homomorphism from $(Y,\P_Y,F_Y(\P_Y))$ to $(X,\P_X,F_X(\P_X))$ since, for all $y,y' \in Y$, $$ c_Y(y,y')\circ \Phi_f=c_X(f(y),f(y')), $$ where $\Phi_f(p)=p\circ f$ for $p \in \P_X$. \\ Assume that $G$ acts on $Y$ via $\tau_Y$ and $G$ acts continuously by automorphisms on $(X,\P_X,F_X(\P_X))$ via $\tau_X$, and $f$ is $G$-equivariant. We denote, for $p \in \P_X$ and $g \in G$: \\ - $\Phi_{\tau_X(g)}(p):=p \circ \tau_X(g)$ and, \\ - $\Phi_{\tau_Y(g)}(p\circ f):=(p \circ f) \circ \tau_Y(g)$. \\ Since $f$ is $G$-equivariant and $\P_X$ is stable by $\tau_X$, we have, for all $p \in \P_X$ and all $g \in G$ : $$ (p \circ f) \circ \tau_Y(g)=(p \circ \tau_X(g))\circ f \in \P_Y .$$ Now, for every $\k \in F_Y(\P_Y)$ and every $g \in G$, we have : $$ \|\k\circ \Phi_{\tau_Y(g)}\|_{_{\P_Y}}=\|T(\k)\circ \Phi_{\tau_X(g)} \|_{_{F_X(\P_X)}}=\|T(\k)\|_{_{F_X(\P_X)}}=\|\k\|_{_{\P_Y}}. $$ It follows that $G$ acts by automorphisms on $(Y,\P_Y,F_Y(\P_Y))$. \\ Moreover, we have, for every $y \in Y$ and every $g \in G$, $d_Y(\tau_Y(g)y,y)=d_X(\tau_X(g)f(y),f(y))$, where $d_X$ and $d_Y$ are the labelled partitions pseudo-metrics on $X$ and $Y$ respectively. Hence, for every $y \in Y$, $g \mapsto \tau_Y(g)y$ is continuous from $G$ to $(Y,d_Y)$. \end{proof} \begin{df}\label{pullbackk} Let $(X,\P_X,F_X(\P_X))$ be a space with labelled partitions, $Y$ be a set and $f:Y\rightarrow X$ be a map. The structure of space with labelled partitions $(Y,\P_Y,F_Y(\P_Y))$ given by Lemma \ref{pullbackpart} is called the \emph{pull back by $f$ of the space with labelled partitions $(X,\P_X,F_X(\P_X))$}.
\end{df} \subsection{Examples} \subsubsection{Spaces with measured walls} Our first example of spaces with labelled partitions is given by spaces with measured walls. Here we cite the definition of the structure of space with measured walls from \cite{wreath}. \\ Let $X$ be a set. We endow $2^X$ with the product topology and we consider, for $x \in X$, the clopen subset of $2^X$, $\A_x:=\{ A \subset X \st x \in A\}$. \begin{df}\label{labpart_meswalls} A \emph{measured walls structure} is a pair $(X, \mu)$ where $X$ is a set and $\mu$ is a Borel measure on $2^X$ such that for all $x, y \in X$: $$ d_{\mu}(x,y):=\mu(\A_x\ds \A_y)<+\infty. $$ \end{df} \begin{prop}\label{walls_labpart} Let $(X,\mu)$ be a measured walls structure. Then, for every real number $q \geq 1$, $(X,\P,L^q(\P,\mu))$ is a space with labelled partitions where $\P=\{ \cara_h \st h \in 2^X \}$. \\ Moreover, we have, for $x,y \in X$, $$ \|c(x,y)\|_q^q=d_{\mu}(x,y) .$$ \end{prop} \begin{proof} We denote $\P=\{ \cara_h \st h \in 2^X \}$. Then $\P$ is a family of labelling functions on $X$ and we denote by $c$ the separation map of $X$ associated with $\P$. \\ Let $x,y \in X$. For $h \in 2^X$, we have: $$c(x,y)(\cara_h)=\cara_h(x)-\cara_h(y)=\cara_{\A_x}(h)-\cara_{\A_y}(h).$$ The function $f:2^X \rightarrow \P$ such that, for $h \in 2^X$, $f(h)=\cara_h$ is a bijection, and we endow $\P$ with the direct image topology induced by $f$. Then the map $\mu^*$ such that, for any Borel subset $A$ of $\P$, $\mu^*(A)=\mu(f^{-1}(A))$, is a Borel measure on $\P$. \\ We have $\|c(x,y)\|_q^q=\int_{\P} |c(x,y)(p)|^q d\mu^*(p)=\int_{2^X} |\cara_{\A_x}(h)-\cara_{\A_y}(h)|^q d\mu(h)=\mu(\A_x\ds \A_y)$, and then: $$\|c(x,y)\|_q^q=d_{\mu}(x,y) < +\infty .$$ It follows that, for all $x,y \in X$, $c(x,y)$ belongs to $L^q(\P,\mu)$ and hence, $(X,\P,L^q(\P,\mu))$ is a space with labelled partitions. \end{proof} \begin{center} \includegraphics[width=11cm]{z2murs_labels.eps} \\ Examples of walls in $\Z^2$.
\end{center} \subsubsection{Gromov hyperbolic groups} The following Lemma is a reformulation of a result of Yu (see \cite{yuhyp}, Corollary 3.2) based on a construction of Mineyev in \cite{minhyp}. \\ For a triple $x,y,z$ in a metric space $(X,d)$, we set $(x|y)_z=\frac{1}{2}(d(x,z)+d(y,z)-d(x,y))$. \begin{lem}[Mineyev, Yu]\label{hyp_lem} Let $\G$ be a finitely generated $\d$-hyperbolic group. Then there exists a $\G$-equivariant function $h:\G \times \G \rightarrow \F_c(\G)$ where $\F_c(\G)=\{ f:\G\rightarrow \R \text{ with finite support}\st \|f\|_1=1\}$ such that: \begin{enumerate} \item[1.] for all $a,x \in \G$, $\su h(x,a) \subset B(a,10\d)$, \item[2.] there exist constants $C \geq 0$ and $\eps > 0$ such that, for all $x,x',a \in \G$, $$\| h(x,a)-h(x',a)\|_1\leq C e^{-\eps (x|x')_a}, $$ \item[3.] there exists a constant $K \geq 0$ such that, for all $x,x' \in \G$ with $d(x,x')$ large enough, $$\#\{ a \in \G \st \su h(x,a) \cap \su h(x',a)=\emptyset \}\geq d(x,x')-K. $$ \end{enumerate} \end{lem} \begin{center} \includegraphics[width=11cm]{yu.eps} \\ Support of the labelling function associated with $(a,b)$ with $d(a,b)=10\d$. \end{center} This Lemma gives us a way to build a structure of labelled partitions on Gromov hyperbolic groups: \begin{prop}[Labelled partitions on a $\d$-hyperbolic group]\label{hyp_labpart} Let $\G$ be a finitely generated $\d$-hyperbolic group and we denote $\P=\{(a,b) \in \G \times \G \st d(a,b)\leq 10\d \}$. There exists $q_0\geq 1$ such that, for all $q > q_0$, $(\G,\P,\ell^q(\P))$ is a space with labelled partitions. \end{prop} \begin{rmq} Notice that, stated this way, $\P$ is not a set of labelling functions on $\G$. Implicitly, we do the following identification : $$\{(a,b) \in \G \times \G \st d(a,b)\leq 10\d \} \sim \{x \mapsto h(x,a)(b) \st (a,b) \in \G^2 \text{ with }d(a,b)\leq 10\d \}.$$ In fact, $x \mapsto h(x,a)(b)$ is uniquely determined by the pair $(a,b)$.
\end{rmq} \begin{proof}[Proof of Proposition \ref{hyp_labpart}] We fix a finite generating set of $\G$ and we denote by $d$ the word metric associated with it (and such that $\G$ is Gromov hyperbolic of constant $\d$ with respect to $d$). As $\G$ is uniformly locally finite, there exists a constant $k>0$ such that, for all $r>0$ and $x \in \G$, $\#B(x,r)\leq k^r$ and, moreover, $\#B(x,10\d)\leq k$. \\ Let $\eps$ be as in point 2 of Lemma \ref{hyp_lem} and set $q_0 = \frac{\text{ln}(k)}{\eps}$. Then, for all $q > q_0$, $$ \sum_{n \in \N}k^ne^{-nq\eps} < +\infty. $$ Fix $q > q_0$. Let $h$ be the function given by Lemma \ref{hyp_lem} and notice that, for $x,x',a \in \G$, since $\# \supp(h(x,a)) \leq k$, $$ \|h(x,a)-h(x',a)\|_q \leq 2k^{\frac{1}{q}}\|h(x,a)-h(x',a)\|_1. \quad \quad (*)$$ As said in the previous remark, we can see $\P$ as a set of labelling functions on $\G$ using the function $h$ : we set, for $(a,b) \in \P$ and $x \in \G$, $$(a,b)(x):=h(x,a)(b).$$ We denote by $c$ the separation map associated with $\P$. We have, for $x,x' \in \G$, \begin{center} \begin{tabular}{rcl} $\displaystyle \|c(x,x')\|_{\ell^q(\P)}^q$&$=$&$\displaystyle \sum_{(a,b) \in \P}|h(x,a)(b)-h(x',a)(b)|^q,$ \\ &$=$&$\displaystyle \sum_{a \in \G}\|h(x,a)-h(x',a)\|_q^q$ by point 1 of Lemma \ref{hyp_lem}, \\ &$\leq$&$\displaystyle \sum_{a \in \G}2^q k\|h(x,a)-h(x',a)\|_1^q$ by $(*)$, \\ &$\leq$&$\displaystyle (2C)^qk\sum_{a \in \G} e^{-q\eps (x|x')_a}$ by point 2 of Lemma \ref{hyp_lem}, \\ &$\leq$&$\displaystyle (2C)^qk\sum_{a \in \G} e^{-q\eps (d(x,a)-d(x,x'))}$, \\ &$\leq$&$\displaystyle (2C)^qk\sum_{n \in \N} k^ne^{-q\eps (n-d(x,x'))}$, and hence, since $q>q_0$: \\ $\displaystyle \|c(x,x')\|_{\ell^q(\P)}^q$&$\leq$&$\displaystyle C' e^{q\eps d(x,x')}< +\infty$, where $C'=(2C)^qk\displaystyle\sum_{n \in \N}k^ne^{-nq\eps}$. \\ \end{tabular} \end{center} Thus $c(x,x')$ belongs to $\ell^q(\P)$ for all $x,x' \in \G$. It follows that $(\G,\P,\ell^q(\P))$ is a space with labelled partitions. \end{proof} \begin{prop}\label{hyp_actionprop} Let $\G$ be a finitely generated $\d$-hyperbolic group.
Let $q_0 \geq 1$ be as in Proposition \ref{hyp_labpart} and for $q >q_0$, let $(\G,\P,\ell^q(\P))$ be the space with labelled partitions given by Proposition \ref{hyp_labpart}. Then the action of $\G$ by left-translation on itself induces a proper action of $\G$ by automorphisms on $(\G,\P,\ell^q(\P))$. \end{prop} \begin{proof} We keep the notation used in the proof of Proposition \ref{hyp_labpart}. We first show that $\G$ acts by automorphisms on $(\G,\P,\ell^q(\P))$. Let $\g,x \in \G$ and $(a,b) \in \P$. Since $h$ is $\G$-equivariant, we have: $$\Phi_{\g}((a,b))(x)=(a,b)(\g x)=h(\g x,a)(b)=h(x,\g^{-1}a)(\g^{-1}b)=(\g^{-1}a,\g^{-1}b)(x),$$ and hence, $$\Phi_{\g}((a,b))=(\g^{-1}a,\g^{-1}b) \in \P.$$ Moreover, for $\k \in \ell^q(\P)$, we have, since $d$ is left-invariant: \begin{center} \begin{tabular}{rl} $\|\k \circ \Phi_{\g}\|_{_{\ell^q(\P)}}^q $&$\displaystyle=\sum_{(a,b)\in \P}|\k(\g^{-1}a,\g^{-1}b)|^q$, \\ &$\displaystyle=\sum_{(\g a,\g b)\in \P}|\k(a,b)|^q$, \\ &$\displaystyle=\sum_{(a,b)\in \P}|\k(a,b)|^q$, \\ $\|\k \circ \Phi_{\g}\|_{_{\ell^q(\P)}}^q $&$=\|\k\|_{_{\ell^q(\P)}}^q$. \end{tabular} \end{center} It follows that $\G$ acts by automorphisms on $(\G,\P,\ell^q(\P))$. \\ Now, consider the identity element $e$ of $\G$ and let $\g \in \G$. \\ We denote $A=\{ a \in \G \st \su h(\g,a) \cap \su h(e,a)=\emptyset \}$. Notice that for every $x,a \in \G$, $\|h(x,a)\|_q\geq \frac{1}{k}$ (since $\|h(x,a)\|_1=1$ and $\#\supp(h(x,a))\leq k$). We have, by point 3 of Lemma \ref{hyp_lem}, when $d(\g,e)$ is large enough: \begin{center} \begin{tabular}{rcl} $\|c(\g,e)\|_{_{\ell^q(\P)}}^q $&$=$&$\displaystyle\sum_{a\in \G}\|h(\g,a)-h(e,a)\|_q^q$, \\ &$\geq$&$\displaystyle \sum_{a\in A}\|h(\g,a)-h(e,a)\|_q^q\geq\sum_{a\in A}\frac{2}{k^q}$, since the supports are disjoint for $a \in A$ and $\|h(x,a)\|_q\geq \frac{1}{k}$, \\ $\|c(\g,e)\|_{_{\ell^q(\P)}}^q $&$\geq$&$\displaystyle \frac{2}{k^q}(d(\g,e)-K)$. \end{tabular} \end{center} Hence, when $\g \rightarrow \infty$ in $\G$, we have: $\|c(\g,e)\|_{_{\ell^q(\P)}}^q\geq \frac{2}{k^q}(d(\g,e)-K)\rightarrow +\infty$.
\end{proof} \subsubsection{Labelled partitions on metric spaces} It turns out that any pseudo-metric space $(X,d)$ can be realized as a space with labelled partitions $(X,\P,F(\P))$ with $F(\P)\simeq \ell^{\infty}(X)$ and such that the pseudo-metric of labelled partitions is exactly $d$ : \begin{prop} Let $(X,d)$ be a pseudo-metric space and consider the family of labelling functions on $X$ : $$\P=\{ p_z:x \mapsto d(x,z) \st z \in X\}.$$ Then $(X,\P,\ell^{\infty}(\P))$ is a space with labelled partitions. \\ Moreover, for all $x,y \in X$, $$ d_{\P}(x,y)=d(x,y), $$ where $d_{\P}$ is the pseudo-metric of labelled partitions on $X$. \end{prop} \begin{proof} Let $c$ be the separation map associated with $\P=\{ p_z:x \mapsto d(x,z) \st z \in X\}$. For $x,y \in X$ and $p_z \in \P$, we have, by the triangle inequality : $$ |c(x,y)(p_z)|=|p_z(x)-p_z(y)|=|d(x,z)-d(y,z)|\leq d(x,y),$$ and, in particular, $c(x,y)(p_y)=d(x,y)$; then, $$ \|c(x,y)\|_{\infty}=\sup_{p_z \in \P}|c(x,y)(p_z)|=d(x,y). $$ Hence, $(X,\P,\ell^{\infty}(\P))$ is a space with labelled partitions and $d_{\P}(x,y)=\|c(x,y)\|_{\infty}=d(x,y)$. \end{proof} This result motivates the study of structures of spaces with labelled partitions on a pseudo-metric space $X$ : can we find Banach spaces other than $\ell^{\infty}(X)$ which give a realization of the pseudo-metric on $X$ as a pseudo-metric of labelled partitions ? \\ A first element of an answer is given by the case of the discrete metric on a set. On every set, we can define a structure of labelled partitions which gives the discrete metric on this set: \begin{prop}\label{naivelabpart_prop} Let $X$ be a set and $\P=\{ \di_x \st x \in X\}$ be the family of labelling functions where, for $x \in X$, $\di_x=2^{-\frac{1}{q}}\d_x$. \\ Then, for every $q \geq 1$, $(X,\P,\ell^q(\P))$ is a space with labelled partitions.
\end{prop} \begin{proof} We have, for $x,y,z \in X$ with $x\neq y$: \\ \begin{equation*} c(x,y)(\di_z)=\di_z(x)-\di_z(y)=\begin{cases} 0 \text{ if } z\notin \{x,y\} \\ \pm 2^{-\frac{1}{q}} \text{ otherwise,} \end{cases} \end{equation*} and then, $$ \|c(x,y)\|^q_q=\sum_{z \in X}|c(x,y)(\di_z)|^q=|c(x,y)(\di_x)|^q+|c(x,y)(\di_y)|^q=1. $$ \end{proof} Notice that the labelled partitions pseudo-metric $d$ on $X$ in this case is precisely the discrete metric on $X$, i.e. $d(x,y)=1$ for all $x,y \in X$, $x \neq y$. \begin{df}[Naive $\ell^q$ space with labelled partitions]\label{naivelabpart} Let $X$ be a set and $\P=\{ \di_x \st x \in X\}$. \\ For $q \geq 1$, $(X,\P,\ell^q(\P))$ is called the \emph{naive $\ell^q$ space with labelled partitions of $X$}. \end{df} \begin{rmq}\label{labpart_homo_ex} Let $X$ be a set, $q \geq 1$ and $G$ a group acting on $X$. Then $G$ acts by automorphisms on the naive $\ell^q$ space with labelled partitions of $X$. \\ In fact, if, for $g \in G$, we denote $\tau(g): x \mapsto gx$, we have, for $z \in X$, $$\di_z \circ \tau(g) = \di_{g^{-1}z} \in \P,$$ and, for all $\k \in \ell^q(\P)$, $$\|\k\circ \Phi_{\tau(g)}\|^q_q=\sum_{x\in X}|\k(\di_{g^{-1}x})|^q=\sum_{y\in X}|\k(\di_{y})|^q=\|\k\|_q^q.$$ \end{rmq} \subsubsection{Labelled partitions on Banach spaces} Every Banach space has a natural structure of space with labelled partitions and the metric of labelled partitions of this structure is exactly the metric induced by the norm. \\ Let $f$ be a $\K$-valued function on a set $B$ and $k \in \K$. We denote by $f+k$ the function $x \mapsto f(x)+k$. \begin{df} Let $B$ be a Banach space and $B'$ be its topological dual. The set : $$ \P=\{f+k \st f \in B', \; k \in \K\}$$ is called the \emph{natural family of labelling functions} on $B$. \\ Let $c$ be the separation map on $B$ associated with $\P$. We denote : $$ \d(\P)=\{c(x,x')\st x,x' \in B \}.
$$ \end{df} \begin{rmq} This definition and the fact that the natural family of labelling functions contains the constant functions are motivated by the following : as we shall see in Lemma \ref{actcanbanach}, a $G$-action on a Banach space $B$ by affine isometries induces an action of $G$ on the natural family of labelling functions on $B$. \end{rmq} \begin{prop}\label{banachspacelabpart} Let $(B,\|\cdotp\|)$ be a Banach space and $\P$ be its natural family of labelling functions. Then $\d(\P)$ is isomorphic to $B$ and $(B,\P,\d(\P))$ is a space with labelled partitions where $\d(\P)$ is viewed as an isometric copy of $B$. Moreover, we have, for $x,x' \in B$ : $$ d(x,x')=\|x-x'\|, $$ where $d$ is the pseudo-metric of labelled partitions on $(B,\P,\d(\P))$. \end{prop} \begin{proof} Let $\P:=\{f+k \st f \in B', \; k \in \K\}$ and let $c$ be the separation map on $B$ associated with $\P$. Notice that for all $x,x' \in B$, $c(x-x',0)=c(x,0)-c(x',0)=c(x,x')$. Then the map $T: B \rightarrow \d(\P)$ such that $x \mapsto c(x,0)$ is clearly a surjective linear operator. Now, we have $c(x,0)=0\Leftrightarrow \forall f \in B', \; f(x)=0$, and hence, by the Hahn-Banach Theorem, $T$ is injective. It follows that $T$ is an isomorphism. \\ The quantity $\|c(x,x')\|_{_{\d(\P)}}:=\|x-x'\|$ defines a norm on $\d(\P)$ and hence, $(\d(\P),\|.\|_{_{\d(\P)}})$ is a Banach space as $T$ is an isometric isomorphism. It follows immediately that $(B,\P,\d(\P))$ is a space with labelled partitions. \end{proof} \begin{df} Let $B$ be a Banach space. The space with labelled partitions $(B,\P,\d(\P))$ where $\P=\{f+k \st f \in B', \; k \in \K\}$ and $\d(\P)\simeq B$ is called the \emph{natural structure of labelled partitions on $B$}. \end{df} \begin{lem}\label{actcanbanach} Let $G$ be a topological group. Then a continuous isometric affine action of $G$ on a Banach space $B$ induces a continuous action of $G$ by automorphisms on the natural space with labelled partitions $(B,\P, \d(\P))$ on $B$.
\end{lem} \begin{proof} Let $\a$ be a continuous isometric affine action of $G$ on a Banach space $B$ with linear part $\pi$ and translation part $b$. Let $(B,\P,\d(\P))$ be the natural space with labelled partitions on $B$. \\ Notice that for all $f \in B'$, $f \circ \pi(g) \in B'$ since $\pi$ is an isometric representation. Hence, for all $g \in G$ and $p=f+k \in \P$ : $$p \circ \a(g)=f\circ \a(g)+k=f \circ \pi(g) + (k +f(b(g))) \in \P.$$ We denote, for $g \in G$ and $p \in \P$, $\Phi_g(p)=p \circ \a(g)$. We have, for $g \in G$ and $c(x,x') \in \d(\P)$, \begin{center} \begin{tabular}{rcl} $ \|c(x,x')\circ \Phi_g\|_{_{\d(\P)}}$&$=$&$ \|c(\a(g)x,\a(g)x')\|_{_{\d(\P)}},$ \\ &$=$&$ \|\a(g)x-\a(g)x'\|,$ \\ &$=$&$ \|\pi(g)(x-x')\|,$ \\ &$=$&$ \|x-x'\|,$ \\ $ \|c(x,x')\circ \Phi_g\|_{_{\d(\P)}}$&$=$&$ \|c(x,x')\|_{_{\d(\P)}}.$ \end{tabular} \end{center} It follows that $G$ acts by automorphisms on $(B,\P, \d(\P))$ and this action is clearly continuous since $d(x,x')=\|x-x'\|$ where $d$ is the pseudo-metric of labelled partitions. \end{proof} In the particular case of a real Banach space $B$, we can consider another family of labelling functions on $B$ which is composed of functions valued in $\{0,1\}$; hence, it can be thought of as a family of characteristic functions of half-spaces of the real Banach space $B$ : \begin{dfpr} Let $B$ be a \emph{real} Banach space, $B'$ be its topological dual and $SB'$ its unit sphere (for the operator norm). For $f \in SB'$ and $k \in \R$, we define the function $p_{f,k}: B \rightarrow \{0,1\}$ by, for $x \in B$ : \begin{equation*} p_{f,k}(x)=\begin{cases} 1 \text{ if } f(x)-k >0 \\ 0 \text{ otherwise.} \end{cases} \end{equation*} We set $\P=\{p_{f,k} \st f \in SB',\; k \in \R \}$ and $\F(\P)=\Vect(c(x,y) \st x,y \in B)$ where $c$ is the separation map associated with $\P$.
Then, for $\k \in \F(\P)$, the quantity $$\|\k\|:= \sup_{f \in SB'}\left|\int_{\R}\k(p_{f,k})dk\right|$$ is a semi-norm on $\F(\P)$ and $F(\P)=\F(\P)/\{\k \st \|\k\|=0\}$ is a Banach space isometrically isomorphic to $B$. Moreover, $(B,\P,F(\P))$ is a space with labelled partitions and we have, for all $x,y \in B$ : $$ d_{\P}(x,y)=\|c(x,y)\|=\|x-y\|_B. $$ \end{dfpr} \begin{proof} First, notice that for $x,y \in B$ and $p_{f,k} \in \P$, we have : \begin{equation*} c(x,y)(p_{f,k})=p_{f,k}(x)-p_{f,k}(y)=\begin{cases} \pm 1 \text{ if } f(x)>k\geq f(y) \text{ or } f(y)>k\geq f(x) \\ 0 \text{ otherwise.} \end{cases} \end{equation*} Hence, for $f \in SB'$, \begin{equation*} \int_{\R}c(x,y)(p_{f,k})dk=f(x)-f(y) \end{equation*} (in the first case $c(x,y)(p_{f,k})=1$ on $[f(y),f(x))$, and in the second case $c(x,y)(p_{f,k})=-1$ on $[f(x),f(y))$). It follows that $\|c(x,y)\|=\sup_{f \in SB'}|f(x)-f(y)|=\|x-y\|_B <+\infty$ $(*)$. \\ Now, for $\k=\sum \l_i c(x_i,y_i) \in \F(\P)$, $\|\k\| \leq \sum |\l_i|\;\| c(x_i,y_i)\| < +\infty $, and then, $\|.\|$ is a semi-norm on $\F(\P)$. \\ Let us now consider the quotient $F(\P)=\F(\P)/\sim$ where $\k\sim \k'$ if, and only if, $\|\k-\k'\|=0$. For $\l,\mu \in \R$ and $x,y \in B$, we have $c(\l x +\mu y, 0) \sim \l c(x,0) + \mu c(y,0)$ and $c(x-y,0)\sim c(x,y)$. Thus, $T: B \rightarrow F(\P)$ such that $T(x)=c(x,0)$ is an isomorphism and by $(*)$ it is isometric. Hence, $F(\P)$ is a Banach space isometrically isomorphic to $B$ and $(B,\P,F(\P))$ is a space with labelled partitions.
\end{proof} \subsection{Link with isometric affine actions on Banach spaces}\label{subsec_action} In this section, we aim to prove the two statements of Theorem \ref{labpart_affact}, which gives an analogue of the equivalence between proper actions on spaces with measured walls and the Haagerup property, in terms of proper actions on spaces with labelled partitions and isometric affine actions on Banach spaces; more particularly, in the case of $L^p$ spaces, we use Hardin's result about the extension of isometries defined on closed subspaces of $L^p$ spaces. \\ \begin{noth}[\ref{labpart_affact}] Let $G$ be a topological group. \begin{enumerate} \item[1.] If $G$ acts (resp. acts properly) continuously by affine isometries on a Banach space $B$ then there exists a structure $(G,\P,F(\P))$ of space with labelled partitions on $G$ such that $G$ acts (resp. acts properly) continuously by automorphisms on $(G,\P,F(\P))$ via its left-action on itself. Moreover, there exists a linear isometric embedding $F(\P) \hookrightarrow B$. \item[2.] If $G$ acts (resp. acts properly) continuously by automorphisms on a space with labelled partitions $(X,\P,F(\P))$ then there exists a (resp. proper) continuous isometric affine action of $G$ on a Banach space $B$. Moreover, $B$ is a closed subspace of $F(\P)$. \end{enumerate} \end{noth} \begin{nocor}[\ref{labpart_aflp}] Let $p\geq 1$ with $p \notin 2\Z\smallsetminus \{2\}$ and $G$ be a topological group. $G$ has property $PL^p$ if, and only if, $G$ acts properly continuously by automorphisms on a space with labelled partitions $(X,\P,F(\P))$ where $F(\P)$ is isometrically isomorphic to a closed subspace of an $L^p$ space. \end{nocor} \begin{proof}[Proof of Corollary \ref{labpart_aflp}] The direct implication follows immediately from 1) Theorem \ref{labpart_affact}. \\ Now, assume $G$ acts properly continuously by automorphisms on a space with labelled partitions $(X,\P,F(\P))$ and $T:F(\P)\hookrightarrow L^p(X,\mu)$ is a linear isometric embedding.
\\ By 2) Theorem \ref{labpart_affact}, there is a proper continuous isometric affine action $\a$ of $G$ on a closed subspace $B$ of $F(\P)$ with $\a(g)=\pi(g)+b(g)$. Thus, as $T$ is a linear isometry, $T(B)$ is a closed subspace of $L^p(X,\mu)$ and $\a'$ such that $\a'(g)=T\circ\pi(g)\circ T^{-1}+T(b(g))$ is a continuous isometric affine action of $G$ on $T(B)$. Then, by Corollary \ref{hardin_lp_cor}, $G$ has property $PL^p$. \end{proof} \subsubsection{Labelled partitions associated with an isometric affine action}\label{subsubsec_actionlab} In this part, we introduce the space with labelled partitions associated with a continuous isometric affine action of a topological group $G$ and we give a proof of 1) Theorem \ref{labpart_affact} by defining an action of $G$ by automorphisms on this structure. \\ Given a continuous isometric affine action on a Banach space, we consider the pullback of the natural structure of space with labelled partitions of the Banach space on the group itself : \begin{df}\label{labpart_alpha_def} Let $G$ be a topological group and $\a$ be a continuous isometric affine action of $G$ on a Banach space $(B,\|.\|)$ with translation part $b:G \rightarrow B$. Consider the pullback $(G,\P_{\a},F_{\a}(\P_{\a}))$ by $b$ of the natural space with labelled partitions $(B,\P,\d(\P))$ on $B$, where $\P=\{f+k \st f \in B', \; k \in \K\}$ and $\d(\P)\simeq B$. \\ The triple $(G,\P_{\a},F_{\a}(\P_{\a}))$ is called \emph{the space with labelled partitions associated with $\a$}. \\ More precisely, we have : \\ $\P_{\a} = \{f\circ b+k \st f \in B', \; k \in \K \}$; \\ $F_{\a}(\P_{\a}) \simeq \overline{\Vect(b(G))}^{\|.\|}$; \\ \end{df} \begin{rmq}\label{alpha_rmq} - The linear map $T:F_{\a}(\P_{\a}) \hookrightarrow B$ such that $T: c_{\a}(g,h) \mapsto b(g)-b(h)$ is an isometric embedding, where $c_{\a}$ is the separation map on $G$ associated with $\P_{\a}$. \\ - If the continuous isometric affine action $\a$ is linear, i.e.
$b(G)=\{ 0 \}$, then the space $(G,\P_{\a},F_{\a}(\P_{\a}))$ with labelled partitions associated with $\a$ is degenerate in the sense that the quotient metric space associated with $(G,d)$ contains a single point, $\P_{\a}$ contains only the constant functions from $G$ to $\K$ and $F_{\a}(\P_{\a})=\{0\}$. \end{rmq} \begin{prop}\label{labpart_actiontolabpart} Let $G$ be a topological group and $(G,\P,F(\P))$ be the space with labelled partitions associated with a continuous isometric affine action of $G$ on a Banach space $B$. \\ Then the action of $G$ on itself by left-translation induces a continuous action of $G$ by automorphisms on $(G,\P,F(\P))$. \end{prop} \begin{proof} Let $\a$ be a continuous isometric affine action of $G$ on a Banach space $B$ with translation part $b: G \rightarrow B$. By Lemma \ref{actcanbanach}, $G$ acts continuously by automorphisms on the natural space with labelled partitions $(B,\P,\d(\P))$ on $B$. Moreover, the map $b$ is $G$-equivariant since we have, for $g,h \in G$, $b(gh)=\a(g)b(h)$. By Lemma \ref{pullbackpart}, it follows that the $G$-action on itself by left-translation induces a continuous action by automorphisms on $(G,\P,F(\P))$. \end{proof} \begin{proof}[Proof of 1) Theorem \ref{labpart_affact}] Assume $\a$ is a continuous isometric affine action of $G$ on a Banach space $(B,\|.\|)$ with translation part $b$. \\ By Proposition \ref{labpart_actiontolabpart}, the $G$-action by left-translation on itself induces a continuous action by automorphisms on the space with labelled partitions associated with $\a$, $(G,\P_{\a},F_{\a}(\P_{\a}))$. \\ Moreover, assume $\a$ is proper. Then, by Remark \ref{alpha_rmq}, we have : $$ d_{\a}(g,e)=\|b(g)\| \underset{g \rightarrow \infty}{\longrightarrow} +\infty,$$ and hence, the $G$-action by automorphisms on $(G,\P_{\a},F_{\a}(\P_{\a}))$ is proper.
\end{proof} \subsubsection{From actions on a space with labelled partitions to isometric affine actions}\label{subsubsec_labaction} We prove here statement 2) of Theorem \ref{labpart_affact} by giving a (non-canonical) way to build a proper continuous isometric affine action on a Banach space given a proper continuous action by automorphisms on a space with labelled partitions. \begin{lem}\label{labpart_lptoaction_lem} Let $G$ be a topological group, $(X,\P,F(\P))$ be a space with labelled partitions and denote $E=\Vect(c(x,y) \st x,y \in X)$ where $c$ is the separation map associated with $\P$.\\ If $G$ acts continuously by automorphisms on $(X,\P,F(\P))$, then, for all $x,y \in X$, $(g,h) \mapsto c(gx,hy)$ is continuous from $G\times G$ to $E$. \end{lem} \begin{proof} Consider on the subspace $E$ of $F(\P)$ the topology given by the norm $\|.\|$ of $F(\P)$. If $X \times X$ is endowed with the product topology of $(X,d)$, as said in Remark \ref{labpart_metric_rmq}, $c: X\times X \rightarrow E$ is continuous and, since the $G$-action on $X$ is strongly continuous, for all $x,y \in X$, $(g,h) \mapsto (gx,hy)$ is continuous. Then, by composition, for all $x,y \in X$, $(g,h) \mapsto c(gx,hy)$ is continuous. \end{proof} \begin{prop}\label{labpart_labparttoaction} Let $G$ be a topological group acting continuously by automorphisms on a space with labelled partitions $(X,\P,F(\P))$. Then there exists a continuous isometric affine action of $G$ on a Banach subspace $B$ of $F(\P)$.
\\ More precisely, $B=\overline{\Vect(c(x,y) \st x,y \in X)}^{\|.\|}$ where $c$ is the separation map associated with $\P$ and $\|.\|$ is the norm of $F(\P)$, and moreover, the linear part $\pi$ and the translation part $b$ of the affine action are given by, for a fixed $x_0 \in X$: $$\pi(g)\k=\k \circ\Phi_{\tau(g)} \text{ for } g \in G \text{ and } \k \in B;$$ and $$b(g)=c(gx_0,x_0) \text{ for } g \in G.$$ \end{prop} \begin{proof} Let $\tau$ be the $G$-action on $X$.\\ By Definition \ref{labpart_homo} and Remark \ref{comp_homo_rmq}, the map $\Phi_{\tau(g)}:\P \rightarrow \P$ such that $\Phi_{\tau(g)}(p)=p\circ \tau(g)$ induces a linear representation $\pi$ of $G$ on $F(\P)$ given by, for $\k \in F(\P)$ and $g\in G$: $$\pi(g)\k=\k\circ\Phi_{\tau(g)}.$$ By the second requirement of Definition \ref{labpart_homo}, we have $\|\pi(g)\k\|=\|\k\|$. Thus, $\pi$ is an isometric linear representation of $G$ on $F(\P)$. \\ Consider $E=\Vect(c(x,y)\st x,y \in X)$. Then the Banach subspace $B=\overline{E}^{\|.\|}$ of $F(\P)$ is stable under $\pi$ since $\pi(g)(c(x,y))=c(gx,gy)$ for $x,y \in X$, $g \in G$. Let us show that the representation $\pi$ of $G$ on $B$ is strongly continuous. Let $\k=\sum_{i=1}^{n}\l_i c(x_i,y_i) \in E$. We have, for $g \in G$, $$\pi(g)\k=\k\circ \Phi_{\tau(g)}=\sum_{i=1}^{n}\l_i c(gx_i,gy_i) \in E,$$ and, by Lemma \ref{labpart_lptoaction_lem}, for every $i$, $g \mapsto c(gx_i,gy_i)$ is continuous. \\ Hence, $g \mapsto \sum_{i=1}^{n}\l_i c(gx_i,gy_i)=\pi(g)\k$ is continuous. Finally, by density, for all $\k \in B$, $g \mapsto \pi(g)\k$ is continuous from $G$ to $B$. \\ Now, let us define the translation part of the action. Fix $x_0 \in X$ and set, for all $g \in G$, $b(g)=c(gx_0,x_0) \in E$. We claim $b$ is a continuous 1-cocycle relative to $\pi$; indeed, we have, for $g \in G$, $x,y \in X$, $c(gx,gy)=c(x,y)\circ \Phi_{\tau(g)}=\pi(g)c(x,y)$ and then, for $g,h \in G$, $$ b(gh)=c(ghx_0,x_0)=c(ghx_0,gx_0)+c(gx_0,x_0)=\pi(g)b(h)+b(g). 
$$ The continuity of $b$ follows immediately from Lemma \ref{labpart_lptoaction_lem}. \\ Hence, the morphism $\a: G \rightarrow \text{Isom}(B)\cap \text{Aff}(B)$ defined by, for all $g \in G$, $\k \in B$, $\a(g)\k=\pi(g)\k+b(g)$ is a continuous isometric affine action of $G$ on $B$. \end{proof} \begin{rmq}\label{labpart_labparttoaction_rmq} In the case where $G$ is discrete, we do not have to find a subspace of $F(\P)$ on which the representation is strongly continuous; then we have the following statement: \\ If a discrete group $G$ acts by automorphisms on $(X,\P,F(\P))$, then there exists an isometric affine action of $G$ on $F(\P)$. \end{rmq} \begin{proof}[Proof of 2) of Theorem \ref{labpart_affact}] Assume $G$ acts \emph{properly} continuously on a space with labelled partitions $(X,\P,F(\P))$. \\ Consider the action $\a$ on the Banach subspace $B=\overline{E}^{\|.\|}$ given by Proposition \ref{labpart_labparttoaction}, where $E=\Vect(c(x,y)\st x,y \in X)$ and $\a(g)\k=\pi(g)\k+b(g)$, for $g \in G$, $\k \in B$. \\ Then we have, if we denote by $d$ the pseudo-metric of labelled partitions on $X$: $$\|b(g)\|=\|c(gx_0,x_0)\|_{\P}=d(gx_0,x_0)\underset{g\rightarrow \infty}{\longrightarrow}\infty$$ since the action of $G$ on $(X,\P,F(\P))$ is proper, and hence, $\a$ is a proper continuous isometric affine action of $G$ on $B$. \end{proof} \section{Labelled partitions on a direct sum}\label{sec_dirsum} In this section, we define a space with labelled partitions on the direct sum of a countable family of spaces with labelled partitions and we build on it a proper action given by proper actions on each factor. \subsection{Natural space with labelled partitions on a direct sum}\label{subsec_dsum} Given a family of spaces with labelled partitions, we give a natural construction of a space with labelled partitions on the direct sum of this family. A similar construction in the case of spaces with measured walls can be found in \cite{chemarval}.
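Before giving the general definition, let us point out the guiding example we have in mind (an elementary illustration, not taken from \cite{chemarval}): the restricted direct sum of copies of $\Z$ relative to the zero sequence.
\begin{exemple}
For $X_i=\Z$ for all $i \in \N$ and $x_0=(0)_{i \in \N}$, the direct sum relative to $x_0$ defined below is the set of finitely supported integer sequences
$$ \lexp{x_0}{\bigoplus_{i \in \N}}\Z = \left\lbrace (x_i)_{i\in \N} \in \Z^{\N} \st x_i \neq 0 \text{ for finitely many } i \in \N \right\rbrace, $$
and the support of such a sequence is its set of non-zero coordinates.
\end{exemple}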
\begin{df}\label{const_dirsum_df} Let $I$ be an index set, $(X_i)_{i\in I}$ be a family of non-empty sets and fix $x_0=(x_i^0)_{i \in I} \in \prod_{i \in I} X_i$. \\ The direct sum of the family $(X_i)_{i\in I}$ relative to $x_0$ is defined by: $$ \lexp{x_0}{\bigoplus_{i \in I}}X_i:= \left \lbrace(x_i)_{i\in I} \in \prod_{i\in I}X_i \st x_i \neq x_i^0 \text{ for finitely many }i \in I \right\rbrace .$$ \\ For $i \in I$, we denote by $\pi^{X}_{X_i}:X \rightarrow X_i$ the canonical projection from the direct sum $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$ to the factor $X_i$. \\ For $x=(x_i)_{i \in I} \in \lexp{x_0}{\bigoplus_{i \in I}}X_i$, the \emph{support} of $x$ is the finite subset of $I$: $$\supp (x) = \{i \in I \st x_i \neq x_i^0 \}.$$ \end{df} \begin{df}\label{dirsum_labpart_df} Let $I$ be an index set, $ \left((X_i,\P_i,F_i(\P_i))\right)_{i\in I}$ be a family of spaces with labelled partitions and fix $x_0=(x_i^0)_{i \in I} \in \prod_{i \in I} X_i$. We denote $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$. \\ Let $i \in I$. For $p_i \in \P_i$, we define the labelling function $p_i^{\oplus_i}: X \rightarrow \K$ by: $$ p_i^{\oplus_i}= p_i\circ \pi^{X}_{X_i},$$ i.e., for $x=(x_i)_{i \in I} \in X$, $ p_i^{\oplus_i}(x)=p_i(x_i)$. \\ We denote $\P_i^{\oplus_i}=\{p_i^{\oplus_i} \st p_i \in \P_i \}$, and we call the set $$ \P_X=\bigcup_{i \in I} \P_i^{\oplus_i} $$ the \emph{natural family of labelling functions on $X$} (associated with the family $( \P_i )_{i \in I}$). \end{df} Let $X_1,X_2$ be non-empty sets and $\P_1,\P_2$ be families of labelling functions on, respectively, $X_1$ and $X_2$. \\ In terms of partitions, if $P_1$ is the partition of $X_1$ associated with $p_1 \in \P_1$, the partition $P_1^{\oplus_1}$ of $X_1 \times X_2$ associated with $p_1^{\oplus_1}$ is: $$ P_1^{\oplus_1}=\{ h \times X_2 \st h \in P_1 \}, $$ and similarly, for $p_2 \in \P_2$, we have: $$ P_2^{\oplus_2}=\{ X_1 \times k \st k \in P_2 \}.
$$ \begin{center} \includegraphics{partitions_direct_product.eps} \\ Partitions for the direct product \end{center} \begin{df}\label{dirsum_banach_df} Let $I$ be a countable index set, $ \left((X_i,\P_i,F_i(\P_i))\right)_{i\in I}$ be a family of spaces with labelled partitions and fix $x_0=(x_i^0)_{i \in I} \in \prod_{i \in I} X_i$. We denote $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$. \\ Let $i \in I$. For $\k_i \in F_i(\P_i)$, we denote $\k_i^{\oplus_i}: \P_X \rightarrow \K$ the function: \begin{center}\begin{equation*} \k_i^{\oplus_i}(p)=\begin{cases} \k_i(p_i)\text{ if }p=p_i^{\oplus_i} \in \P_i^{\oplus_i} \\ 0\;\;\;\;\:\:\text{ if }p=p_j^{\oplus_j} \in \P_j^{\oplus_j} \text{ with } i\neq j \end{cases} \end{equation*} \end{center} Let $q \geq 1$. We denote $F_q(\P_X)$ the closure of $$ E_q(\P_X):=\left\lbrace \sum_{i\in I} \k_i^{\oplus_i} \st \k_i \in F_i(\P_i)\text{ with }\k_i \neq 0 \text{ for a finite number of }i \in I \right\rbrace,$$ endowed with the norm $\|.\|_{q}$ defined by, for $\k=\sum_{i\in I} \k_i^{\oplus_i}$: $$ \|\k\|_{q}:=\left(\sum_{i \in I}\|\k_i\|_{_{F_i(\P_i)}}^q\right)^{\frac{1}{q}}.$$ The vector space $F_q(\P_X)$ is called the \emph{$q$-space of functions on $\P_X$ of $X$}. \end{df} \begin{prop}\label{dirsum_banach} Let $I$ be a countable index set and $ \left((X_i,\P_i,F_i(\P_i))\right)_{i\in I}$ be a family of spaces with labelled partitions and fix $x_0=(x_i^0)_{i \in I} \in \prod_{i \in I} X_i$. We denote $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$. \\ Then $(F_q(\P_X),\|.\|_{q})$ is isometrically isomorphic to $(\bigoplus^q_{i\in I}F_i(\P_i),\|.\|_q)$. In particular, $F_q(\P_X)$ is a Banach space. \end{prop} \subsection{Action on the natural space with labelled partitions of the direct sum}\label{subsec_actdsum} Let $I$ be an index set and $(H_i)_{i \in I}$ be a family of groups. We denote $e_W=(e_{H_i})_{i \in I}$ where, for $i \in I$, $e_{H_i}$ is the identity element of $H_i$. 
\\ We simply denote $\displaystyle \bigoplus_{i \in I}H_i$ the group $W=\displaystyle \lexp{e_W}{\bigoplus_{i \in I}}H_i$ whose identity element is $e_W$. \begin{prop}\label{dirsum_action} $\;$ \\ Let $I$ be a countable set and $(H_i)_{i \in I}$ be a family of groups such that, for each $i \in I$, $H_i$ acts by automorphisms on a space with labelled partitions $(X_i,\P_i,F_i(\P_i))$. We denote $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$ and $W=\bigoplus_{i \in I}H_i$. \\ Let $q \geq 1$. Then $W$ acts by automorphisms on the natural space with labelled partitions on the direct sum $(X,\P_X,F_q(\P_X))$ via the natural action of $W$ on $X$. \end{prop} \begin{proof} We denote by $\tau$ the $W$-action on $X$ and for $w \in W$, $p \in \P_X$, $\Phi_{\tau(w)}(p):=p\circ \tau(w)$ and, for $i \in I$, we denote by $\tau_i$ the $H_i$-action on $X$ and for $h_i \in H_i$, $p_i \in \P_i$, $\Phi_{\tau_i(h_i)}(p_i):=p_i\circ \tau_i(h_i)$. \\ Let $p \in \P_X=\bigcup_{i \in I}\P_i^{\oplus_i}$ and $w=(h_i)_{i \in I} \in W$. Then there exists $i \in I$ and $p_i \in \P_i$ such that $p=p_i^{\oplus_i}$, and we have: $$ \Phi_{\tau(w)}(p_i^{\oplus_i})=(\Phi_{\tau_i(h_i)}(p_i))^{\oplus_i} \in \P_i^{\oplus_i}\subset \P_X, $$ since $\Phi_{\tau_i(h_i)}(p_i)$ belongs to $\P_i$. \\ For $\k=\sum_{i \in I}\k_i^{\oplus_i} \in E_q(\P_X)$, we have: \begin{center} \begin{tabular}{rl}$\k\circ \Phi_{\tau(w)}(p)$&$=\k(p_i^{\oplus_i}\circ \tau(w))$ \\ &$=\k((p_i\circ \tau_i(h_i))^{\oplus_i})$ \\ &$=\k_i^{\oplus_i}((p_i\circ \tau_i(h_i))^{\oplus_i})$ \\ &$=\k_i(p_i\circ \tau_i(h_i))$ \\ &$=\k_i\circ \Phi_{\tau_i(h_i)}(p_i)$ \\ $\k\circ \Phi_{\tau(w)}(p)$&$=(\k_i\circ \Phi_{\tau_i(h_i)})^{\oplus_i}(p_i^{\oplus_i}),$ \\ \end{tabular} \end{center} And hence, $$ \k\circ \Phi_{\tau(w)}=\sum_{i \in I}(\k_i\circ \Phi_{\tau_i(h_i)})^{\oplus_i} \in F_q(\P_X). $$ By completeness of $F_q(\P_X)$, for all $\k \in F_q(\P_X)$, $\k\circ \Phi_{\tau(w)} \in F_q(\P_X)$. 
\\ Moreover, for $\k=\sum_{i \in I}\k_i^{\oplus_i} \in E_q(\P_X)$, we have: $$ \|\k\circ \Phi_{\tau(w)}\|_{q}^q=\sum_{i \in I}\|\k_i\circ \Phi_{\tau_i(h_i)}\|_{_{F_i(\P_i)}}^q=\sum_{i \in I}\|\k_i\|_{_{F_i(\P_i)}}^q= \|\k\|_{q}^q, $$ since, for all $i \in I$, $\|\k_i\circ \Phi_{\tau_i(h_i)}\|_{_{F_i(\P_i)}}=\|\k_i\|_{_{F_i(\P_i)}}$. \\ Thus, by density of $E_q(\P_X)$ in $F_q(\P_X)$, for all $\k \in F_q(\P_X)$, $\|\k\circ \Phi_{\tau(w)}\|_{q}=\|\k\|_{q}.$ \\ It follows that $W$ acts by automorphisms on $(X,\P_X,F_q(\P_X))$. \\ \end{proof} When $I$ is finite, $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$ is simply the direct product of the $X_i$'s and does not depend on $x_0$. In this case, proper continuous actions on each factor $(X_i,\P_i,F_i(\P_i))$ induce a proper continuous action on the natural space with labelled partitions of the direct sum $(X,\P_X,F_q(\P_X))$: \begin{prop}\label{dirsum_finite} Let $n \in \N^*$. For $i \in I=\{1,...,n\}$, let $H_i$ be a topological group acting \emph{properly continuously} on a space with labelled partitions $(X_i,\P_i,F_i(\P_i))$; we denote $\displaystyle X=X_1 \times ... \times X_n$ and $W=H_1 \times ... \times H_n$. \\ Let $q \geq 1$. Then $W$ acts \emph{properly continuously} by automorphisms on the natural space with labelled partitions of the direct product $(X,\P_X,F_q(\P_X))$ via the natural action of $W$ on $X$. \end{prop} \begin{proof} We consider the group $W$ endowed with the product topology of the $H_i$'s. We denote by $c$ the separation map associated with $\P_X$ and, for $i \in I$, $c_i$ the separation map associated with $\P_i$. By Proposition \ref{dirsum_action}, $W$ acts by automorphisms on $(X,\P_X,F_q(\P_X))$. Let us show that this action is proper. For $x=(x_1,...,x_n) \in X$ and for $w=(h_1,...,h_n) \in W$, we have: $$ \|c(wx,x)\|_{q}^q=\sum_{i=1}^n\|c_i(h_ix_i,x_i)\|_{_{F_i(\P_i)}}^q.$$ Thus, if $\|c(wx,x)\|_{q}\leq R$ for some $R \geq 0$, then for $i=1,...,n$, $\|c_i(h_ix_i,x_i)\|_{_{F_i(\P_i)}}\leq R$.
\\ Hence, for every $R \geq 0$: $\{w=(h_i)\in W \st \|c(wx,x)\|_{q} \leq R \}$ is a subset of $\prod_{i=1}^n \{h_i \in H_i \st \|c_i(h_ix_i,x_i)\|_{_{F_i(\P_i)}}\leq R \}$ which is a relatively compact set in $W$ since each $H_i$ acts properly on $(X_i,\P_i,F_i(\P_i))$. It follows that $W$ acts properly on $(X,\P_X,F_q(\P_X))$. \\ It remains to prove that the $W$-action on $(X,d)$ is strongly continuous. Remark that $d=(\sum_{i=1}^nd_i^q)^{\frac{1}{q}}$; hence the topology of $(X,d)$ is equivalent to the product topology of the $X_i$'s on $X$. \\ Let $x=(x_i)_{i \in I} \in X$. We denote by $\tau_x:W\rightarrow X$ the function $w \mapsto wx$. For all $i \in I$, $\pi_{X_i}^X\circ \tau_x:w\mapsto h_ix_i$ is continuous since $h_i \mapsto h_ix_i$ is continuous; hence it follows that $\tau_x$ is continuous. \end{proof} If $I$ is countably infinite, even if each $H_i$-action on $(X_i,\P_i,F_i(\P_i))$ is proper, $W$ does not act properly on the natural space with labelled partitions on the direct sum $(X,\P_X,F_q(\P_X))$ in general. In fact, let $C$ be a positive real constant, and assume there exists, in each $H_i$, an element $h_i$ such that $\|c_i(h_ix_i^0,x_i^0)\|_{_{F_i(\P_i)}} \leq C$. For $j \in I$, the element $\d_j(h_j)$ of $W$ such that $\pi^{W}_{H_i}(\d_j(h_j))=e_{H_i}$ if $i \neq j$ and $\pi^{W}_{H_j}(\d_j(h_j))=h_j$ leaves every finite set of $W$ when $j$ leaves every finite set of $I$, but: $$\|c(\d_j(h_j)x_0,x_0)\|_{_{F_q(\P_X)}}=\|c_j(h_jx_j^0,x_j^0)\|_{_{F_j(\P_j)}} \leq C.$$ Hence, $W$ does not act properly on $(X,\P_X,F_q(\P_X))$. \\ To make $W$ act properly on a space with labelled partitions in the case where $W$ is endowed with the discrete topology, we have to define a structure of labelled partitions on $W$ such that the labelled partitions metric between $e_W$ and $w$ goes to infinity when the support of $w$ leaves every finite set in $I$.
To build this structure, we scale every labelling function of the naive $\ell^q$ space with labelled partitions on each factor $H_i$ by a weight depending on $i$ which grows as $i$ leaves every finite set in $I$. \\ \begin{nt}\label{const_suppinf} Let $I$ be a countable index set and $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$ be a direct sum of sets $X_i$'s. \\ We say that, for $x \in X$, \emph{$supp(x)$ leaves every finite set in $I$} or \emph{$supp(x)\rightarrow \infty$ in $I$} if there exists $j \in supp(x)$ which leaves every finite set in $I$. \end{nt} \begin{df}\label{phinaivelabpart} Let $X$ be a set and $w$ be a non-negative real. \\ We set, for $x \in X$: $$\lexp{(w)}{\di}_{x}:=2^{-\frac{1}{q}}w\d_{x}:X\rightarrow \K,$$ where $\d_x:X\rightarrow \{0,1\}$ is the Dirac function at $x$, and we call the set $$\lexp{(w)}{\di}:=\{\lexp{(w)}{\di}_{x} \st x \in X \},$$ the \emph{$w$-weighted naive family of labelling functions} on $X$. \end{df} \begin{prop} Let $X$ be a set and $w$ be a non-negative real. \\ Let $q \geq 1$. Then the triple $(X,\lexp{(w)}{\di},\ell^q(\lexp{(w)}{\di}))$ is a space with labelled partitions. \\ Moreover, if a group $H$ acts on $X$, then $H$ acts by automorphisms on $(X,\lexp{(w)}{\di},\ell^q(\lexp{(w)}{\di}))$. \end{prop} \begin{proof} It is a straightforward generalization of Proposition \ref{naivelabpart_prop} and Remark \ref{labpart_homo_ex}. \end{proof} Subsequently, for a countably infinite set $I$, we consider a function $\phi: I \rightarrow \R_+$ such that $\phi(i)\underset{i\rightarrow \infty}{\longrightarrow}+\infty$ (such a function always exists when $I$ is countably infinite: for instance, take any bijective enumeration function $\phi$ from $I$ to $\N$). \begin{lem}\label{propdirsum_lem} Let $I$ be a countably infinite set and $(H_i)_{i \in I}$ be a family of countable discrete groups, and denote by $W$ the group $\bigoplus_{i \in I}H_i$ endowed with the discrete topology.
Consider, on each $H_i$, the $\phi(i)$-weighted naive family of labelling functions $\lexp{(\phi(i))}{\di}$ and denote by $\lexp{(\phi)}{\di}=\bigcup_{i \in I}\lexp{(\phi(i))}{\di}^{\oplus_i}$ the natural set of labelling functions associated with $(\lexp{(\phi(i))}{\di})_{i \in I}$. \\ Let $q \geq 1$. Then, $W$ acts by automorphisms on the natural space with labelled partitions on the direct sum $(W,\lexp{(\phi)}{\di},F_q(\lexp{(\phi)}{\di}))$. \\ Moreover, we have: $$ \|c_{\phi}(w,e_W)\|_{_{F_q(\lexp{(\phi)}{\di})}}\rightarrow +\infty \text{ when } \supp(w) \rightarrow \infty \text{ in } I, $$ where $c_{\phi}$ is the separation map associated with $\lexp{(\phi)}{\di}$. \end{lem} \begin{proof} By Proposition \ref{dirsum_action}, $W$ acts by automorphisms on $(W,\lexp{(\phi)}{\di},F_q(\lexp{(\phi)}{\di}))$ and we have, for $w=(h_i),w'=(h'_i) \in W$: \begin{center} \begin{tabular}{rl} $\|c_{\phi}(w,w')\|_{_{F_q(\lexp{(\phi)}{\di})}}^q$&$\displaystyle =\sum_{i\in I}\|c_{\phi(i)}(h_i,h'_i)\|_q^q$ \\ &$\displaystyle =\sum_{i\in \supp(w^{-1}w')}\phi(i)^q$. \\ \end{tabular} \end{center} Let $w \in W$ be such that $\supp(w) \rightarrow \infty$ in $I$. Then there exists $j \in \supp(w)$ such that $j\rightarrow \infty$ in $I$ and hence: $$ \|c_{\phi}(w,e_W)\|_{_{F_q(\lexp{(\phi)}{\di})}}^q=\sum_{i\in \supp(w)}\phi(i)^q \geq \phi(j)^q \rightarrow +\infty. $$ \end{proof} \begin{prop}\label{const_dirsumproper} Let $I$ be a countably infinite set and $(H_i)_{i \in I}$ be a family of countable discrete groups such that, for each $i \in I$, $H_i$ acts \emph{properly} by automorphisms on a space with labelled partitions $(X_i,\P_i,F_i(\P_i))$. We denote $X=\lexp{x_0}{\bigoplus_{i \in I}}X_i$ and $W=\bigoplus_{i \in I}H_i$ endowed with the discrete topology. \\ Let $q \geq 1$. Then there exists a structure of space with labelled partitions $(Y,\P_Y,F_q(\P_Y))$ on which $W$ acts \emph{properly} by automorphisms.
\\ More precisely, $(Y,\P_Y,F_q(\P_Y))$ is the natural space with labelled partitions on the direct product $Y=X\times W$ where: \begin{itemize} \item on $X$, we consider the natural space with labelled partitions on the direct sum of the family $((X_i,\P_i,F_i(\P_i)))_{i \in I}$; \item on $W$, we consider the natural space with labelled partitions on the direct sum of the family $((H_i,\lexp{(\phi(i))}{\di},\ell^q(\lexp{(\phi(i))}{\di})))_{i \in I}$ where for $i \in I$, $\lexp{(\phi(i))}{\di}$ is the $\phi(i)$-weighted naive family of labelling functions on $H_i$. \end{itemize} \end{prop} \begin{proof} By Proposition \ref{dirsum_action} and Lemma \ref{propdirsum_lem}, $W$ acts by automorphisms on both $(X,\P_X,F_q(\P_X))$ and $(W,\lexp{(\phi)}{\di},F_q(\lexp{(\phi)}{\di}))$. We set $Y=X \times W$ and consider the natural space with labelled partitions $(Y,\P_Y,F_q(\P_Y))$ on the direct product where: $$ \P_Y= \P_X^{\oplus_1}\cup \lexp{(\phi)}{\di}^{\oplus_2}, $$ and $$ F_q(\P_Y) \simeq F_q(\P_X) \oplus F_q(\lexp{(\phi)}{\di}).$$ Then, by Proposition \ref{dirsum_action}, $W\times W$ acts by automorphisms on $(Y,\P_Y,F_q(\P_Y))$ via the action $(w_1,w_2).(x,w)=(w_1.x,w_2w)$. Hence, $W$ acts by automorphisms on $(Y,\P_Y,F_q(\P_Y))$, where $W$ is viewed as the diagonal subgroup $\{(w,w)\st w \in W\} < W\times W$. \\ It remains to prove that the $W$-action on $(Y,\P_Y,F_q(\P_Y))$ is proper. We have, for $w=(h_i) \in W$: \begin{center} \begin{tabular}{rl} $\|c_{\P_Y}(w.(x_0,e_W),(x_0,e_W))\|_{_{F_q(\P_Y)}}^q$&$\displaystyle =\|c_{\P_X}(w.x_0,x_0)\|_{_{F_q(\P_X)}}^q+\|c_{\phi}(w,e_W)\|_q^q$ \\ &$\displaystyle =\sum_{i\in \supp(w)}\|c_i(h_ix^0_i,x^0_i)\|_{_{F_i(\P_i)}}^q+\sum_{i\in \supp(w)}\phi(i)^q$. \\ \end{tabular} \end{center} Hence, for $R \geq 0$, $\|c_{\P_Y}(w.(x_0,e_W),(x_0,e_W))\|_{_{F_q(\P_Y)}} \leq R$ implies that $\|c_i(h_ix^0_i,x^0_i)\|_{_{F_i(\P_i)}} \leq R$ and $\phi(i) \leq R$ for all $i \in \supp(w)$.
Thus, for all $R \geq 0$, $\{w \st \|c_{\P_Y}(w.(x_0,e_W),(x_0,e_W))\|_{_{F_q(\P_Y)}} \leq R \}$ is a subset of $$ \left\lbrace w=(h_i) \st \supp(w) \subset \{j \st \phi(j) \leq R\} \text{ and } \|c_i(h_ix^0_i,x^0_i)\|_{_{F_i(\P_i)}} \leq R \text{ for all } i \in \supp(w) \right\rbrace, $$ which is a finite set as $\{j \st \phi(j) \leq R\}$ is finite and, by properness of the $H_i$-actions, each $ \{ h_i \in H_i \st \|c_i(h_ix^0_i,x^0_i)\|_{_{F_i(\P_i)}}\leq R\}$ is finite. \\ It follows that $W$ acts properly on $(Y,\P_Y,F_q(\P_Y))$. \end{proof} \subsection{Action of a semi-direct product on a space with labelled partitions}\label{subsec_semidir} \begin{df}[compatible action]\label{const_semidir_comp} Let $G_1,G_2$ be groups and $\rho:G_2\rightarrow Aut(G_1)$ be a morphism of groups. \\ Consider a set $X$ on which $G_1$ and $G_2$ act. We say that the $G_2$-action is \emph{compatible} with the $G_1$-action with respect to $\rho$ if, for $g_1 \in G_1$, $g_2 \in G_2$, we have, for all $x \in X$: $$ g_2g_1g_2^{-1}x=\rho(g_2)(g_1)x. $$ \end{df} \begin{exemple}\label{const_semidir_comp_ex} If $\rho:G_2\rightarrow Aut(G_1)$ is a morphism, then the action $\rho$ of $G_2$ on $G_1$ is compatible with the action of $G_1$ on itself by translation with respect to $\rho$. \end{exemple} \begin{prop}\label{const_semidir_labpart} Let $(X_1,\P_1,F_1(\P_1))$, $(X_2,\P_2,F_2(\P_2))$ be spaces with labelled partitions and $G_1,G_2$ be topological groups acting continuously by automorphisms on, respectively,\linebreak $(X_1,\P_1,F_1(\P_1))$ and $(X_2,\P_2,F_2(\P_2))$ via $\tau_1$ and $\tau_2$. \\ Let $ \rho:G_2\rightarrow Aut(G_1)$ be a morphism of groups such that $(g_1,g_2) \mapsto \rho(g_2)g_1$ is continuous for the product topology on $G_1 \times G_2$. \\ Assume that there exists a continuous action by automorphisms of $G_1\rtimes_{\rho}G_2$ on $X_1$ which extends the $G_1$-action.
\\ Then the semi-direct product $G_1\rtimes_{\rho}G_2$ acts continuously by automorphisms on the natural structure of labelled partitions $(X_1\times X_2, \P,F_q(\P))$ on the direct product $X_1\times X_2$. \\ Moreover, if, for $i=1,2$, $G_i$ acts properly on $(X_i,\P_i,F_i(\P_i))$, then $G_1\rtimes_{\rho}G_2$ acts properly on $(X_1\times X_2, \P,F_q(\P))$. \end{prop} \begin{proof} Let us denote by $\tau_1$ the $G_1$-action on $X_1$, by $\tau_2$ the $G_2$-action on $X_2$ and by $\tilde{\rho}$ the $G_2$-action on $G_1$ defined by the restriction to $G_2$ of the $G_1\rtimes_{\rho}G_2$-action on $X_1$. Then $\tilde{\rho}$ is compatible with $\tau_1$ with respect to $\rho$. \\ We denote by $\tau$ the action of $G=G_1\rtimes_{\rho}G_2$ on $X=X_1\times X_2$ defined by: $$\tau(g_1,g_2)(x_1,x_2)=(\tau_1(g_1)(\tilde{\rho}(g_2)x_1),\tau_2(g_2)x_2).$$ We show that, via this action, $G$ acts by automorphisms on the direct product of spaces with labelled partitions $(X, \P,F_q(\P))$ where $\P=\P_1^{\oplus_1} \cup \P_2^{\oplus_2}$ and $F_q(\P)\simeq F_1(\P_1)\oplus F_2(\P_2)$ endowed with the $q$-norm of the direct sum for $q \geq 1$. \\ Let $p\in \P$ and $g=(g_1,g_2) \in G$. If $p=p_1^{\oplus_1} \in \P_1^{\oplus_1}$, then, for all $x=(x_1,x_2) \in X$, we have: \begin{center} \begin{tabular}{rl}$\Phi_{\tau(g)}(p)(x)$&$=p(\tau(g)x)$ \\ &$=p_1^{\oplus_1}(\tau_1(g_1)(\tilde{\rho}(g_2)x_1),\tau_2(g_2)x_2)$ \\ &$=p_1(\tau_1(g_1)(\tilde{\rho}(g_2)x_1))$ \\ &$=p_1\circ \tau_1(g_1)\circ \tilde{\rho}(g_2)(x_1)$ \\ $\Phi_{\tau(g)}(p)(x)$&$=(p_1\circ \tau_1(g_1)\circ \tilde{\rho}(g_2))^{\oplus_1}(x_1,x_2).$ \\ \end{tabular} \end{center} Since $G_1$ acts by automorphisms on $(X_1,\P_1,F_1(\P_1))$ via $\tau_1$, we have $p_1\circ \tau_1(g_1) \in \P_1$, and since $G_2$ acts by automorphisms on $(X_1,\P_1,F_1(\P_1))$ via $\tilde{\rho}$, we get $p_1\circ \tau_1(g_1) \circ \tilde{\rho}(g_2) \in \P_1$.
\\ Hence, $\Phi_{\tau(g)}(p)=(p_1\circ \tau_1(g_1)\circ \tilde{\rho}(g_2))^{\oplus_1}$ belongs to $\P$. \\ For $p=p_2^{\oplus_2} \in \P_2^{\oplus_2}$, we have $\Phi_{\tau(g)}(p)=(p_2\circ \tau_2(g_2))^{\oplus_2}$ which belongs to $\P$ since $G_2$ acts by automorphisms on $(X_2,\P_2,F_2(\P_2))$ via $\tau_2$. \\ Then, for all $g \in G$ and all $p \in \P$, $$ \Phi_{\tau(g)}(p)=p \circ \tau(g) \in \P. $$ Let us fix some notation. We denote, for $g_1 \in G_1$, $g_2 \in G_2$: \\ - $\Phi^{^{(1)}}_{\tau_1(g_1)}: \P_1 \rightarrow \P_1$ the map $\Phi^{^{(1)}}_{\tau_1(g_1)}(p_1)=p_1\circ \tau_1(g_1)$; \\ - $\Phi^{^{(\tilde{\rho})}}_{\tilde{\rho}(g_2)}: \P_1 \rightarrow \P_1$ the map $\Phi^{^{(\tilde{\rho})}}_{\tilde{\rho}(g_2)}(p_1)=p_1\circ \tilde{\rho}(g_2)$; \\ - $\Phi^{^{(2)}}_{\tau_2(g_2)}: \P_2 \rightarrow \P_2$ the map $\Phi^{^{(2)}}_{\tau_2(g_2)}(p_2)=p_2\circ \tau_2(g_2)$. \\ Let $\k$ be in $F_q(\P)$ and $g=(g_1,g_2) \in G$. We have, for all $p_1 \in \P_1$ and all $p_2 \in \P_2$: $$\k \circ \Phi_{\tau(g)}(p_1^{\oplus_1})=(\k_1\circ\Phi^{^{(\tilde{\rho})}}_{\tilde{\rho}(g_2)} \circ \Phi^{^{(1)}}_{\tau_1(g_1)})^{\oplus_1}(p_1^{\oplus_1}),$$ and $$\k \circ \Phi_{\tau(g)}(p_2^{\oplus_2})=(\k_2\circ \Phi^{^{(2)}}_{\tau_2(g_2)})^{\oplus_2}(p_2^{\oplus_2}). $$ Hence, $\k \circ \Phi_{\tau(g)} = (\k_1\circ \Phi^{^{(\tilde{\rho})}}_{\tilde{\rho}(g_2)} \circ \Phi^{^{(1)}}_{\tau_1(g_1)})^{\oplus_1} + (\k_2\circ \Phi^{^{(2)}}_{\tau_2(g_2)})^{\oplus_2}$ and we have: \\ \begin{tabular}{rl}$\|\k \circ \Phi_{\tau(g)} \|_{q}^q$&$=\|\k_1\circ \Phi^{^{(\tilde{\rho})}}_{\tilde{\rho}(g_2)} \circ \Phi^{^{(1)}}_{\tau_1(g_1)}\|_{_{F_1(\P_1)}}^q+\|\k_2\circ \Phi^{^{(2)}}_{\tau_2(g_2)}\|_{_{F_2(\P_2)}}^q$ \\ &$=\|\k_1\|_{_{F_1(\P_1)}}^q+\|\k_2\|_{_{F_2(\P_2)}}^q$ \\ $\|\k \circ \Phi_{\tau(g)} \|_{q}$&$=\|\k\|_{q}$ \\ \end{tabular} \\ It follows that $G_1\rtimes_{\rho}G_2$ acts by automorphisms on the space with labelled partitions $(X_1\times X_2,\P,F_q(\P))$.
\\ It remains to check that this action by automorphisms is continuous, i.e. for all $x \in X$, $g \mapsto \tau(g)x$ is continuous. \\ As a set, $G_1\rtimes_{\rho}G_2$ is simply $G_1\times G_2$ and, since $(g_1,g_2) \mapsto \rho(g_2)g_1$ is continuous, the product topology on $G_1\times G_2$ is compatible with the group structure of $G_1\rtimes_{\rho}G_2$ (see \cite{boutop}, III.18 Proposition 20). \\ Moreover, $\tau_1$, $\tau_2$ and $\tilde{\rho}$ are strongly continuous; hence, for all $(x_1,x_2) \in X$, the map $(g_1,g_2) \mapsto (\tau_1(g_1)(\tilde{\rho}(g_2)x_1),\tau_2(g_2)x_2)$ is continuous from $G_1 \times G_2$ endowed with the product topology to $(X,d)$ where $d$ is the labelled partitions pseudo-metric. \\ Hence, $G_1\rtimes_{\rho}G_2$ acts continuously by automorphisms on $(X,\P,F_q(\P))$. \\ Assume now that, for $i=1,2$, $G_i$ acts properly on $(X_i,\P_i,F_i(\P_i))$ via $\tau_i$, and denote by $c_i$ the separation map associated with $\P_i$. \\ Fix $x_0=(x_1,x_2) \in X_1\times X_2$. \\ The following equality holds for every $g=(g_1,g_2) \in G_1\rtimes_{\rho}G_2$: $$\|c(\tau(g)x_0,x_0)\|_{q}^q=\|c_1(\tau_1(g_1)(\tilde{\rho}(g_2)x_1),x_1)\|_{_{F_1(\P_1)}}^q+\|c_2(\tau_2(g_2)x_2,x_2)\|_{_{F_2(\P_2)}}^q .$$ Since $G_1\rtimes_{\rho}G_2$ is endowed with the product topology of $G_1$ and $G_2$, $g=(g_1,g_2) \rightarrow \infty$ in $G_1\rtimes_{\rho}G_2$ if, and only if, $g_1 \rightarrow \infty$ in $G_1$ or $g_2 \rightarrow \infty$ in $G_2$. Hence, we have two cases: \\ First case: $g_1 \rightarrow \infty$ in $G_1$ and $g_2$ belongs to a compact subset $K_2$ of $G_2$.
\\ By continuity of $g'_2 \mapsto \|c_1(\tilde{\rho}(g'_2)x_1,x_1)\|_{_{F_1(\P_1)}}$, there exists $C(K_2) \geq 0$ such that, for every $g'_2 \in K_2$, $\|c_1(\tilde{\rho}(g'_2)x_1,x_1)\|_{_{F_1(\P_1)}}\leq C(K_2)$, and, hence, \begin{center} \begin{tabular}{rl} $\|c_1(\tau_1(g_1)\tilde{\rho}(g_2)x_1,\tilde{\rho}(g_2)x_1)\|_{_{F_1(\P_1)}}$&$ \leq \|c_1(\tau_1(g_1)\tilde{\rho}(g_2)x_1,x_1)\|_{_{F_1(\P_1)}}+\|c_1(\tilde{\rho}(g_2)x_1,x_1)\|_{_{F_1(\P_1)}}$ \\ &$\leq \|c_1(\tau_1(g_1)\tilde{\rho}(g_2)x_1,x_1)\|_{_{F_1(\P_1)}} +C(K_2)$. \end{tabular} \end{center} But, since $G_1$ acts properly on $(X_1,\P_1,F_1(\P_1))$, $\|c_1(\tau_1(g_1)\tilde{\rho}(g_2)x_1,\tilde{\rho}(g_2)x_1)\|_{_{F_1(\P_1)}}\underset{g_1\rightarrow \infty}{\longrightarrow} +\infty$, and then, $$ \|c_1(\tau_1(g_1)\tilde{\rho}(g_2)x_1,x_1)\|_{_{F_1(\P_1)}}\underset{g_1\rightarrow \infty}{\longrightarrow} +\infty. $$ It follows that $\|c(\tau(g)x_0,x_0)\|_{q} \underset{g_1\rightarrow \infty}{\longrightarrow} +\infty$. \\ Second case: $g_2 \rightarrow \infty$ in $G_2$. \\ We have $\|c_2(\tau_2(g_2)x_2,x_2)\|_{_{F_2(\P_2)}}\underset{g_2\rightarrow \infty}{\longrightarrow} +\infty$ and then $\|c(\tau(g)x_0,x_0)\|_{q} \rightarrow +\infty$. \\ Finally, as required, we have $$\|c(\tau(g)x_0,x_0)\|_{q} \underset{g\rightarrow \infty}{\longrightarrow} +\infty,$$ and then, $G_1\rtimes_{\rho} G_2$ acts properly by automorphisms on $(X,\P,F_q(\P))$. \end{proof} \section{Wreath products and property $PL^p$}\label{sec_wreath} Using Proposition \ref{const_semidir_labpart}, we simplify a part of the proof of Theorem 6.2 in \cite{wreath}, where Cornulier, Stalder and Valette establish the stability of the Haagerup property under wreath products, and we generalize it in the following way: the wreath product of a group with property $PL^p$ by a Haagerup group has property $PL^p$. \begin{noth}[\ref{const_semidir_wreathflp}] Let $H,G$ be countable discrete groups, $L$ be a subgroup of $G$ and $p > 1$, with $p \notin 2\Z\smallsetminus \{2\}$.
We denote by $I$ the quotient $G/L$ and $W=\bigoplus_{I}H$. Assume that $G$ is Haagerup, $L$ is co-Haagerup in $G$ and $H$ has property $PL^p$. \\ Then the permutational wreath product $H\wr_I G=W\rtimes G$ has property $PL^p$. \end{noth} \subsection{Permutational wreath product} We first introduce the notion of permutational wreath product: \begin{df}\label{wreath_df} Let $H,G$ be countable groups, $I$ be a $G$-set and $W=\bigoplus_{i \in I}H$. The \emph{permutational wreath product} $H\wr_I G$ is the group: $$ H\wr_I G:=W\rtimes_{\rho} G, $$ where $G$ acts by shift on $W$ via $\rho$ i.e. $\rho(g): (h_i)_{i \in I} \mapsto (h_{g^{-1}i})_{i \in I}$, for $g \in G$. \\ When $I=G$, $H\wr_G G$ is simply called the \emph{wreath product} and is denoted $H\wr G$. \end{df} \subsection{Property $PL^p$ for the permutational wreath product} To prove Theorem \ref{const_semidir_wreathflp}, we need the following structure of space with measured walls relative to the wreath product built in \cite{wreath}, Theorem 4.2 (see \cite{wreath} $\S$ 6.1 for examples of co-Haagerup subgroups): \begin{df} Let $G$ be a group and $L$ be a subgroup of $G$. We say that $L$ is \emph{co-Haagerup} in $G$ if there exists a proper $G$-invariant conditionally negative definite kernel on $G/L$. \end{df} \begin{theo}[Cornulier, Stalder, Valette]\label{const_semidir_theoCSV} Let $H,G$ be countable discrete groups and let $L$ be a subgroup of $G$. We denote by $I$ the quotient $G/L$ and $W=\bigoplus_{I}H$. \\ Suppose that $G$ is Haagerup and that $L$ is co-Haagerup in $G$. \\ Then there exists a structure $(W\times I,\mu)$ of space with measured walls on $W\times I$, with wall pseudo-metric denoted by $d_{\mu}$, on which $W\rtimes G$ acts by automorphisms and which satisfies, for any $x_0=(w_0, i_0) \in W \times I$ and for all $g \in G$: \\ $$ d_{\mu}((w,g)x_0,x_0)\rightarrow +\infty \text{ when }w \in W \text{ is such that } \supp (w) \rightarrow \infty \text{ in } I.
$$ \end{theo} \begin{rmq}\label{rmk_CSV} The property ``$ d_{\mu}((w,g)x_0,x_0)\rightarrow +\infty$ when $w \in W$ is such that $\supp (w) \rightarrow \infty$ in $I$'' can be reformulated as follows: \\ For all $R \geq 0$, there exists a finite set $J_R \subset I$ such that, for $(w,g) \in H\wr_I G$, $$d_{\mu}((w,g)x_0,x_0) \leq R \text{ implies } \supp(w) \subset J_R.$$ \end{rmq} \begin{lem}\label{const_semidir_wreathflp_lem} Let $H,G$ be countable discrete groups, $L$ be a subgroup of $G$ and $q \geq 1$. We denote by $I$ the quotient $G/L$ and $W=\bigoplus_{I}H$. Suppose that $G$ is Haagerup, $L$ is co-Haagerup in $G$ and $H$ has property $PL^q$. \\ Then $W$ and $G$ act by automorphisms on a space $(X,\P,F(\P))$ with labelled partitions such that: \begin{itemize} \item the $W$-action is proper, \\ \item the $G$-action is compatible with the $W$-action, \\ \item the Banach space $F(\P)$ is isometrically isomorphic to a Banach subspace of an $L^q$ space. \end{itemize} \end{lem} \begin{proof} Consider the $W\rtimes G$-action on the space with measured walls $(W\times I,\mu)$ given by Theorem \ref{const_semidir_theoCSV}. Then, by Proposition \ref{walls_labpart}, $W\rtimes G$ acts by automorphisms on the space with labelled partitions $(W\times I,\P_{\mu},L^q(\P_{\mu},\mu))$. Let $y_0=(e_W,i_0) \in W \times I$. The separation map $c_{\mu}$ associated with $\P_{\mu}$ satisfies: $$\|c_{\mu}((w,g)y_0,y_0)\|_q^q=d_{\mu}((w,g)y_0,y_0).$$ Now, consider the structure of space with labelled partitions on $H$ given by its proper isometric affine action on a space $L^q(E,\nu)$. By Proposition \ref{dirsum_action}, $W$ acts by automorphisms on the natural structure of space with labelled partitions $(W,\P_W,F_q(\P_W))$ of the direct sum of spaces with labelled partitions on $H$. Moreover, $G$ acts by automorphisms on $(W,\P_W,F_q(\P_W))$ by shift via its action on $I$.
\\ We denote $X=(W\times I) \times W$ and consider the space with labelled partitions $(X,\P,F(\P))$ given by the direct product of the spaces with labelled partitions $(W\times I,\P_{\mu},L^q(\P_{\mu},\mu))$ and $(W,\P_W,F_q(\P_W))$. Then we have actions by automorphisms $\tau_W$ of $W$ and $\tau_G$ of $G$ on $X$ given by, for $x=(w_1,i,w_2) \in X$, $w \in W$ and $g \in G$: $$ \tau_W(w)x=(ww_1,i,ww_2) \text{ and } \tau_G(g)x=(\rho(g)w_1,gi,\rho(g)w_2). $$ The action $\tau_G$ is clearly compatible with $\tau_W$ since $W\rtimes_{\rho} G$ acts naturally on $W$ and on $W\times I$. \\ The Banach space $F(\P)$ is isometrically isomorphic to the $q$-direct sum $L^q(\P_{\mu},\mu)\oplus F_q(\P_W) $, hence $F(\P)$ is isometrically isomorphic to a Banach subspace of $ L^q(\P_{\mu},\mu) \oplus (\bigoplus_I^q L^q(E,\nu))$. It follows that $F(\P)$ is isometrically isomorphic to a Banach subspace of an $L^q$ space. We denote $x_0=(e_W,i_0,e_W) \in X$. We have, for $w=(h_i)_{i \in I} \in W$: \\ \begin{tabular}{rl} $\|c(\tau_W(w)x_0,x_0)\|_{_{F(\P)}}^q$&$=\|c_{\mu}((w,i_0),(e_W,i_0))\|_q^q+\|c_{\P_W}(w,e_W)\|_{_{F_{q}(\P_{W})}}^q$ \\ &$\displaystyle =d_{\mu}((w,i_0),(e_W,i_0))+\sum_{i \in \supp(w)}\|c_{\P_H}(h_i,e_H)\|_{_{F_H(\P_H)}}^q$ \\ \end{tabular} \\ Hence, $W$ acts properly by automorphisms on $(X,\P,F(\P))$: indeed, for $R \geq 0$ and $w=(h_i) \in W$, $\|c(\tau_W(w)x_0,x_0)\|_{_{F(\P)}} \leq R$ implies $d_{\mu}((w,i_0),(e_W,i_0)) \leq R^q$ and $\|c_{\P_H}(h_i,e_H)\|_{_{F_H(\P_H)}} \leq R$ for all $i \in \supp(w)$. It follows that, for $R \geq 0$ and $J_{R^q} \subset I$ as in Remark \ref{rmk_CSV}, $\{ w \in W \st \|c(\tau_W(w)x_0,x_0)\|_{_{F(\P)}} \leq R \}$ is a subset of: $$ \left\lbrace w=(h_i) \st \supp(w) \subset J_{R^q} \text{ and } \|c_{\P_H}(h_i,e_H)\|_{_{F_H(\P_H)}} \leq R \text{ for all } i \in \supp(w) \right\rbrace, $$ which is a finite set as $J_{R^q}$ is a finite set and the $H$-action is proper.
\end{proof} \begin{proof}[Proof of Theorem \ref{const_semidir_wreathflp}] By Lemma \ref{const_semidir_wreathflp_lem}, $W$ and $G$ act by automorphisms on a space $(X,\P,F(\P))$ with labelled partitions such that the $W$-action is proper, and the $G$-action is compatible with the $W$-action with respect to $\rho$. Moreover, since $G$ is Haagerup, $G$ acts properly by automorphisms on a space $(Y,\P',F'(\P'))$ with labelled partitions where $F'(\P')$ is isometrically isomorphic to an $L^q$ space. \\ Hence, by Theorem \ref{const_semidir_labpart}, $H\wr_I G=W\rtimes_{\rho} G$ acts properly by automorphisms on a space $(Z,\P_Z,F_Z(\P_Z))$ where $F_Z(\P_Z)$ is isometrically isomorphic to $F(\P) \oplus F'(\P')$ endowed with the $q$-norm of the direct sum. It follows that $F_Z(\P_Z)$ is isometrically isomorphic to a Banach subspace of an $L^q$ space. \\ Thus, by Corollary \ref{labpart_aflp}, $H\wr_I G$ has property $PL^q$. \end{proof} \section{Amalgamated free product}\label{freepartsec} In this section, we develop tools around the notion of tree of spaces in order to build a structure of space with labelled partitions on which an amalgamated free product acts by automorphisms, given actions of the factors on some spaces with labelled partitions. \\ \subsection{Labelled partitions on a tree of spaces}\label{free_tree_sec} A tree is a pair of sets $T=(V,\E)$, where $V$ is the set of vertices and $\E$ is the set of edges, together with an injective map $\E \rightarrow \{ \{v,w\} \st v\neq w \in V \}$, such that every two vertices are connected by a unique edge path, that is, a path without backtracking. \\ The set of vertices $V$ can be endowed with a natural metric $d_T$ : the distance between two vertices is the number of edges in the edge path joining them. Moreover, we say that a vertex $u$ is between $v$ and $w$ in $V$ if $u$ is an endpoint of some edge which belongs to the edge path between $v$ and $w$.
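For concreteness, the following elementary illustration (a minimal example, using only the definitions above) makes the metric $d_T$ and the betweenness relation explicit :
\begin{exemple}
Let $T=(V,\E)$ be the segment with $V=\{v_0,v_1,v_2\}$ and $\E=\{e_1,e_2\}$, where $e_1 \mapsto \{v_0,v_1\}$ and $e_2 \mapsto \{v_1,v_2\}$. The edge path joining $v_0$ to $v_2$ is $(e_1,e_2)$, so that $d_T(v_0,v_2)=2$. Moreover, $v_1$ is between $v_0$ and $v_2$, as it is an endpoint of $e_1$; notice that, with the definition above, the vertices $v_0$ and $v_2$ are themselves between $v_0$ and $v_2$.
\end{exemple}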
\begin{df}\label{free_tree_df} Let $T=(V,\E)$ be a tree, $\left(X_v\right)_{v \in V}$ and $\left(X_e\right)_{e \in \E}$ be collections of non-empty sets such that there exist, for all $e=\{v,w\} \in \E$ : $$ \s_{e,v}:X_e \longhookrightarrow X_{v}\text{ and }\s_{e,w}:X_e \longhookrightarrow X_{w}. $$ The triple $\big(T,\left(X_v\right)_{v \in V},\left(X_e\right)_{e \in \E} \big)$ is called a \emph{tree of spaces}. We define the \emph{total space} $X$ associated with this tree of spaces as the disjoint union of the $X_v$'s : $$ X=\bigsqcup_{v \in V}X_v. $$ \end{df} \begin{center} \includegraphics[width=16cm]{arbresTXX.eps} \\ A tree $T$ and, in green, the total space $X$ of a tree of spaces of base $T$. \end{center} \begin{rmq} Some authors consider another definition for the total space $X$ of a tree of spaces (see, for instance, \cite{tu}) which keeps track of the adjacency in the base tree, namely, given an orientation of the edges : $$X=\left(\bigsqcup_{v \in V}X_v \sqcup \bigsqcup_{e \in \E}(X_e \times [0,1])\right)/\sim ,$$ where the identification $\sim$ is given by, for $e=(v,w) \in \E$, $X_e\times \{0\}\sim \s_{e,v}(X_e)$ and $X_e\times \{1\}\sim \s_{e,w}(X_e).$ This corresponds to the cylinders in the previous figure. \\ In the present paper, we do not consider this additional data in the total space for the sake of simplicity. \end{rmq} \begin{df} Let $v \in V$ and $x,y \in X$ with $x \in X_w$ and $y \in X_u$. We say that $X_v$ is between $x$ and $y$ if the vertex $v$ is between $w$ and $u$ in $T$, i.e. $v$ belongs to the vertex path joining $w$ to $u$. \end{df} \begin{rmq} In the case where the $X_v$'s are metric spaces, the total space $X$ can be naturally endowed with a metric which extends the metric of each $X_v$ and the tree metric (see \cite{guedar}).
This metric on $X$ can also be obtained by the labelled partitions metric from the constructions we define in Definition \ref{free_natural} and Definition \ref{free_tree} when each $X_v$ is endowed with a structure of space with labelled partitions (in the case where edge sets are single points). \end{rmq} An automorphism of a tree is a bijection $f$ of the vertex set such that $f(v)$ and $f(w)$ are connected by an edge if and only if $v$ and $w$ are connected by an edge. From this notion, we describe what an automorphism of a tree of spaces is : \begin{df}[Automorphisms of tree of spaces]\label{autom_tree} Let $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(X_e\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. We denote by $\rho : X \rightarrow V$ the natural projection given by $x \in X_v \mapsto v$. \\ We say that a bijection $\vp: X \rightarrow X$ is an automorphism of $\mathcal{X}$ if : \begin{enumerate} \item There exists $\widetilde{\vp}:V\rightarrow V$ such that $\widetilde{\vp}$ is an automorphism of $T$ and : $$ \widetilde{\vp}\circ \rho = \rho \circ \vp. $$ \item The restriction $\vp_{|_{\s_{e,v}(X_e)}}$ induces a bijection from $\s_{e,v}(X_e)$ to $\s_{\widetilde{\vp}(e),\widetilde{\vp}(v)}(X_{\widetilde{\vp}(e)})$. \end{enumerate} \end{df} \begin{rmq}\label{free_autom_rmq} Let $\vp$ be an automorphism of $\mathcal{X}$. \begin{enumerate} \item The restriction $\vp_{|_{X_v}}$ is a bijection from $X_v$ to $X_{\widetilde{\vp}(v)}$. \\ \item The map $\widehat{\vp}_{e,v}:= \s_{\widetilde{\vp}(e),\widetilde{\vp}(v)}^{-1} \circ \vp \circ \s_{e,v}$ is a bijection from $X_e$ to $X_{\widetilde{\vp}(e)}$. \\ \item The map $\vp^{-1}$ is an automorphism of $\mathcal{X}$ and we have $\widetilde{\vp^{-1}}=\widetilde{\vp}^{-1}$.
\end{enumerate} \end{rmq} Subsequently, we consider a tree of spaces where the edge sets are reduced to single points : \\ Let $\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. \begin{center} \includegraphics[width=13cm]{arbrespbu.eps} \\ The total space of a tree of spaces whose edge sets are singletons. \end{center} \begin{df}[Projection on vertex sets]\label{free_proj} Let $v \in V$. The map $\pi_v : X \rightarrow X_v$ is defined by, for $x \in X_w$ with $w \in V$ : $$ \pi_{v}(x)=\left \{ \begin{array}{ll} x &\text{ if }w=v, \\ \s_{e,v}(\bu_e) &\text{ if } w\neq v, \\ \end{array} \right .$$ where $e$ is the first edge in the edge path in $T$ from $v$ to $w$. \end{df} \begin{center} \includegraphics[width=11cm]{arbreproj.eps} \\ The projection $\pi_v$ on $X_v$. \end{center} \begin{lem}\label{free_proj_lem} Let $x,y \in X$ and $v \in V$. If $\pi_v(x) \neq \pi_v(y)$ then $X_v$ is between $x$ and $y$. In particular, the set $\{v \in V \st \pi_v(x) \neq \pi_v(y) \}$ is finite. \end{lem} \begin{proof} Let $x \in X_w$ and $y \in X_u$ with $w,u \in V$. If $v$ is not between $w$ and $u$ in $T$, then $v \neq w$, $v \neq u$ and moreover the edge path joining $v$ to $w$ and the edge path joining $v$ to $u$ in $T$ coincide at least on the first edge $e$. Hence, by definition : $$\pi_{v}(x)=\s_{e,v}(\bu_e)=\pi_{v}(y).$$ \end{proof} \begin{lem}\label{free_act_proj} Let $\vp$ be an automorphism of a tree of spaces $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$. Then for $v \in V$, we have : $$ \vp \circ \pi_v=\pi_{\widetilde{\vp}(v)} \circ \vp. $$ \end{lem} \begin{proof} Let us show that, for all $x$ in the total space $X$ of $\mathcal{X}$, $\vp(\pi_v(x))=\pi_{\widetilde{\vp}(v)}(\vp(x))$. If $x \in X_v$, the identity is clear since $\vp(x)$ belongs to $X_{\widetilde{\vp}(v)}$.
Now, assume that $x \in X_w$ with $w \neq v$ and denote by $e$ the first edge in the edge path joining $v$ to $w$ in $T$. As $\widetilde{\vp}$ is an automorphism of $T$, $\widetilde{\vp}(e)$ is the first edge in the edge path joining $\widetilde{\vp}(v)$ to $\widetilde{\vp}(w)$. Hence, since $\vp(x) \in X_{\widetilde{\vp}(w)}$, we have, by Remark \ref{free_autom_rmq}, 2. : $$\pi_{\widetilde{\vp}(v)}(\vp(x))=\s_{\widetilde{\vp}(e),\widetilde{\vp}(v)}(\bu_{\widetilde{\vp}(e)})=\vp(\s_{e,v}(\bu_{e}))=\vp(\pi_v(x)). $$ \end{proof} \begin{lem}\label{free_act_phibet} Let $\vp$ be an automorphism of a tree of spaces $\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$. For $x,y \in X$, we have : $$ \{v \in V \st \pi_v(\vp(x)) \neq \pi_v(\vp(y)) \}=\widetilde{\vp}\left( \{v \in V \st \pi_v(x) \neq \pi_v(y) \} \right).$$ \end{lem} \begin{proof} Let $x,y \in X$. By Lemma \ref{free_act_proj}, we have, for $v \in V$ : \begin{center} \begin{tabular}{rl} $\pi_v(\vp(x)) \neq \pi_v(\vp(y))$&$\Leftrightarrow \vp (\pi_{\widetilde{\vp}^{-1}(v)}(x)) \neq \vp(\pi_{\widetilde{\vp}^{-1}(v)}(y))$ \\ $\;$&$\Leftrightarrow \pi_{\widetilde{\vp}^{-1}(v)}(x) \neq \pi_{\widetilde{\vp}^{-1}(v)}(y), \;$ since $\vp$ is a bijection. \\ \end{tabular} \end{center} Thus, \begin{center} \begin{tabular}{rl} $\{v \in V \st \pi_v(\vp(x)) \neq \pi_v(\vp(y)) \}$&$=\{v \in V \st \pi_{\widetilde{\vp}^{-1}(v)}(x) \neq \pi_{\widetilde{\vp}^{-1}(v)}(y) \}$ \\ $\;$&$=\{\widetilde{\vp}(v) \st v \in V \text{ and } \pi_v(x) \neq \pi_v(y) \}=\widetilde{\vp}\left( \{v \in V \st \pi_v(x) \neq \pi_v(y) \} \right).$ \\ \end{tabular} \end{center} \end{proof} \begin{exemple}\label{free_example} We consider an amalgamated free product $\G=G*_C H$, together with the natural action on its Bass-Serre tree $T=(V,\E)$ where : $$V= \G/G \sqcup \G/H \text{ and } \E=\G/C,$$ and the endpoints maps are given by the inclusions of $C$ left-cosets into $G$ and $H$ left-cosets.
\\ For our purpose, we construct the following tree of spaces of base $T$ : \\ - For $v=\g G \in V$, we consider $X_v = \g G/C=\{\g g C \st g\in G\}$ and for $v=\g H$, we set $X_v = \g H/C$. \\ - For $e=\g C \in \E$, we consider the singleton $X_e=\{\g C\}$. The structural maps $\s_{\g C,\g G}$ and $\s_{\g C,\g H}$ are the trivial maps $\g C \mapsto \g C \in \g G/C$ and $\g C \mapsto \g C \in \g H/C$. \\ These data give rise to a tree of spaces $\mathcal{X}=(T,\{X_v\},\{X_e\})$ on which $\G$ acts by automorphisms of tree of spaces. \\ In fact, every $\g \in \G$ defines a bijection of the total space $X$ by : $$\g'g C \in X_{\g'G} \mapsto \g\g'g C \in X_{\g\g'G}$$ and, $$\g'h C \in X_{\g'H} \mapsto \g\g'h C \in X_{\g\g'H}.$$ Moreover, the map $\widetilde{\g}:V \rightarrow V$ is exactly the map $v \mapsto \g v$ given by the action of $\G$ on $T$ and we have : $$\s_{\g\g' C,\g\g' G}(\bu_{\g\g' C})= \g\g' C = \g \s_{\g'C,\g'G}(\bu_{\g'C}).$$ Thus $\s_{\g\g' C,\g\g' G}^{-1}\circ \g \circ \s_{\g'C,\g'G}$ is a bijection and similarly, $\s_{\g\g' C,\g\g' H}^{-1}\circ \g \circ \s_{\g'C,\g'H}$ is a bijection, for all $\g' \in \G$. \end{exemple} \begin{df}\label{free_example_df} Let $\G=G*_C H$ be an amalgamated free product and $T$ be its Bass-Serre tree. We call \emph{tree of $C$-cosets spaces associated with $\G$}, the tree of spaces $(T,\{X_{v}\},\{X_e\})$ defined in Example \ref{free_example}. \end{df} \begin{center} \includegraphics[width=11cm]{Cspaces.eps} \\ The tree of $C$-cosets spaces associated with $G*_C H$. \end{center} \subsubsection{Labelled partitions induced by the vertex sets} \begin{df} Let $\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. Assume that each vertex set $X_v$ is endowed with a structure of space with labelled partitions $(X_v,\P_v,F_v(\P_v))$. Let $v \in V$.
We set, for $p_v \in \P_v$, the following labelling function on $X$ : $$ p_v^{\oplus_v}=p_v\circ \pi_{v}, $$ and we denote $ \P_v^{\oplus_v}=\{p_v^{\oplus_v}\st p_v \in \P_v \}$. \\ The set : $$ \P_X=\bigcup_{v \in V} \P_v^{\oplus_v} $$ is called the \emph{family of labelling functions induced by the vertex sets}. \end{df} \begin{center} \includegraphics[width=14cm]{arbrespart.eps} \\ Partition of $X$ induced by a partition of $X_v$ via the projection $\pi_v$. \end{center} \begin{df} Let $\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. Assume that each vertex set $X_v$ is endowed with a structure of space with labelled partitions $(X_v,\P_v,F_v(\P_v))$. \\ Let $v\in V$. For $\k \in F_v(\P_v)$, we denote by $\k^{\oplus_v}: \P_{X} \rightarrow \K$ the function : \begin{center}\begin{equation*} \k^{\oplus_v}(p)=\begin{cases} \k(p_v) & \text{ if }p=p_v^{\oplus_v} \in \P_v^{\oplus_v}, \\ 0 & \text{ otherwise.} \end{cases} \end{equation*} \end{center} Let $q \geq 1$. We set : $$ E_q(\P_{X}):=\left\lbrace \sum_{v \in V} \k_{v}^{\oplus_v} \st \k_{v} \in F_v(\P_v)\text{ with }\k_{v} = 0 \text{ for all but finitely many vertices }v \right\rbrace,$$ endowed with the norm $\|.\|_{q}$ defined by, for $\k=\sum_{v} \k_{v}^{\oplus_v}$: $$ \|\k\|_{q}:=\left(\sum_{v}\|\k_{v}\|_{_{F_v(\P_v)}}^q\right)^{\frac{1}{q}}.$$ The Banach space $F_q(\P_{X}):=\overline{E_q(\P_{X})}^{\|.\|_{q}}$ is called the \emph{$q$-space of functions on $\P_{X}$ of $X$}. \end{df} \begin{prop}[Labelled partitions structure on $X$ induced by the vertex sets]\label{const_labfree_prop} $\;$ \\ Let $\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. Assume that each vertex set $X_v$ is endowed with a structure of space with labelled partitions $(X_v,\P_v,F_v(\P_v))$. Consider $X$ together with its family $\P_{X}$ of labelling functions induced by the vertex sets.
\\ Let $q \geq 1$ and $F_q(\P_{X})$ be the $q$-space of functions on $\P_{X}$ of $X$. Then, the triple $(X,\P_{X},F_q(\P_{X}))$ is a space with labelled partitions. Moreover, we have, for $x,y \in X$ : $$ \|c(x,y)\|_{_{F_q(\P_{X})}}^q=\sum_{v \in V}\|c_v(\pi_v(x),\pi_v(y))\|_{_{F_v(\P_v)}}^q. $$ \end{prop} \begin{proof} We denote by $c_v$ the separation map of $X_v$ associated with $\P_v$ and by $c_{X}$ the separation map associated with $\P_{X}$. \\ Let $x,y \in X$ and $v \in V$. For $p_v^{\oplus_v} \in \P_v^{\oplus_v}$, we have: $$c_{X}(x,y)(p_v^{\oplus_v})=p_v(\pi_{v}(x))-p_v(\pi_{v}(y))=c_v(\pi_{v}(x),\pi_{v}(y))(p_v).$$ It follows that $c_{X}(x,y)=\sum_{v} c_v(\pi_{v}(x),\pi_{v}(y))^{\oplus_v}$ which is a finite sum since $\pi_{v}(x)=\pi_{v}(y)$ for all but finitely many $v$'s by Lemma \ref{free_proj_lem}. Thus, $c_{X}(x,y)$ belongs to $F_q(\P_{X})$ and hence, $(X,\P_{X},F_q(\P_{X}))$ is a space with labelled partitions. The norm identity then follows directly from the definition of the norm $\|.\|_{q}$ on $E_q(\P_{X})$. \\ \end{proof} \begin{df}\label{free_natural} Let $\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. Assume that each vertex set $X_v$ is endowed with a structure of space with labelled partitions $(X_v,\P_v,F_v(\P_v))$. Consider $X$ together with its family $\P_{X}$ of labelling functions induced by the vertex sets and let $F_q(\P_{X})$ be the $q$-space of functions on $\P_{X}$ of $X$. \\ The triple $(X,\P_{X},F_q(\P_{X}))$ is called the \emph{space with labelled partitions on $X$ induced by the vertex sets}. \end{df} \begin{prop}\label{free_autom_part} Let $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. Assume that each vertex set $X_v$ is endowed with a structure of space with labelled partitions $(X_v,\P_v,F_v(\P_v))$ and let $(X,\P_{X},F_q(\P_{X}))$ be the space with labelled partitions on $X$ induced by the vertex sets. \\ Let $\vp$ be an automorphism of $\mathcal{X}$.
If, for all $v \in V$, the map $\vp_{|_{X_v}}:X_v \rightarrow X_{\widetilde{\vp}(v)}$ is a homomorphism of spaces with labelled partitions, then $\vp$ is an automorphism of space with labelled partitions of $(X,\P_{X},F_q(\P_{X}))$. \end{prop} \begin{proof} Let $p=p_v^{\oplus_v} \in \P_X$ with $v \in V$ and $p_v \in \P_v$. Then we have, by Lemma \ref{free_act_proj} : $$ \phi_{\vp}(p)=p_v \circ \pi_v \circ \vp = p_v \circ \vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}} \circ \pi_{\widetilde{\vp}^{-1}(v)}=(p_v \circ \vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}})^{\oplus_{\widetilde{\vp}^{-1}(v)}} \in \P_X,$$ since $\vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}}:X_{\widetilde{\vp}^{-1}(v)} \rightarrow X_v$ is a homomorphism of space with labelled partitions. \\ Now, let $\k = \sum_{v \in V} \k_{v}^{\oplus_v} \in E_q(\P_X)$ where $\k_v \in F_v(\P_v)$ for all $v \in V$. We have, for $p=p_v^{\oplus_v} \in \P_X$ : \begin{center} \begin{tabular}{rl} $\k \circ \phi_{\vp}(p) $&$=\k((p_v \circ \vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}})^{\oplus_{\widetilde{\vp}^{-1}(v)}})$, \\ $\;$&$=\k_{\widetilde{\vp}^{-1}(v)}(p_v \circ \vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}}), $ \\ $\k \circ \phi_{\vp}(p) $&$=(\k_{\widetilde{\vp}^{-1}(v)}\circ \phi_{\vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}}})^{\oplus_v}(p).$ \end{tabular} \end{center} Thus, $$ \k \circ \phi_{\vp} = \sum_{v \in V} (\k_{\widetilde{\vp}^{-1}(v)}\circ \phi_{\vp_{|_{X_{\widetilde{\vp}^{-1}(v)}}}})^{\oplus_v}. $$ Hence, by the change of variable $v \mapsto \widetilde{\vp}(v)$, we obtain : $$ \k \circ \phi_{\vp} = \sum_{v \in V} (\k_{v}\circ \phi_{\vp_{|_{X_{v}}}})^{\oplus_{\widetilde{\vp}(v)}} \in E_q(\P_X), $$ since $\k_{v}\circ \phi_{\vp_{|_{X_{v}}}}$ belongs to $F_{\widetilde{\vp}(v)}(\P_{\widetilde{\vp}(v)})$ and $\k_{v}=0$ for all but finitely many $v$'s. \\ By completeness of $F_q(\P_X)$, we have, for all $\k \in F_q(\P_X)$, $\k \circ \phi_{\vp} \in F_q(\P_X)$.
\\ Moreover, since, for all $v \in V $, $\|\k_v \circ \phi_{\vp_{|_{X_{v}}}}\|_{_{F_{\widetilde{\vp}(v)}(\P_{\widetilde{\vp}(v)})}}=\|\k_v\|_{_{F_{v}(\P_{v})}}$, it follows that : \begin{center} \begin{tabular}{rl} $\|\k \circ \phi_{\vp}\|_{_{q}}^q $&$=\sum_{v \in V} \|\k_{v}\circ \phi_{\vp_{|_{X_v}}}\|_{_{F_{\widetilde{\vp}(v)}(\P_{\widetilde{\vp}(v)})}}^q$, \\ $\;$&$=\sum_{v \in V}\|\k_v\|_{_{F_{v}(\P_{v})}}^q, $ \\ $\|\k \circ \phi_{\vp}\|_{_{q}}^q $&$=\|\k\|_{_{q}}^q.$ \end{tabular} \end{center} Then, $\vp$ is an automorphism of $(X,\P_{X},F_q(\P_{X}))$. \end{proof} \subsubsection{Labelled partitions induced by the tree structure} We detail here the space with labelled partitions induced by the natural wall structure on the set of vertices of a tree $T$, and we consider the pullback of this structure on a tree of spaces of base $T$ via the projection $\rho : X \rightarrow V$, namely, for $x \in X_v \subset X$, $\rho(x)=v$. Here we denote by $[v,w]_{\E}$ the set of edges in the edge path between two vertices $v$ and $w$. \\ Let $q \geq 1$ and let $T=(V,\E)$ be a tree. Let $e$ be an edge with endpoints $v$ and $w$ in $V$. ``Removing'' this edge from $T$ gives rise to two complementary connected components of vertices, namely $h_{e,v}=\{u \in V \st d_T(v,u)< d_T(w,u)\}$ and $h_{e,w}=\{u \in V \st d_T(w,u)< d_T(v,u)\}$. \begin{center} \includegraphics[width=10cm]{arbrewall.eps} \\ Wall structure on a tree induced by the edge set. \end{center} We then consider the following family of labelling functions on $V$ : $$ \P = \{ 2^{-\frac{1}{q}}\;\cara_{h_{e,v}} \st e \in \E\text{ and } v \text{ is an endpoint of }e\},$$ where $\cara_{h_{e,v}}$ is the characteristic function of the set $h_{e,v}$.
\\ Notice that, for $v,w \in V$ : \begin{center}\begin{equation*} |\cara_{h_{e,u}}(v)-\cara_{h_{e,u}}(w)|=\begin{cases} 1\text{ if }e \in [v,w]_{\E} \\ 0\text{ otherwise.} \end{cases} \end{equation*} \end{center} Hence, we have, for $v,w \in V$ : \begin{center} \begin{tabular}{rl} $\|c(v,w)\|_{\ell^q(\P)}^q$&$=\sum_{p \in \P}|p(v)-p(w)|^q,$ \\ $ \;$ &$=\frac{1}{2}\sum_{e \in \E} \sum_{u \in e} |\cara_{h_{e,u}}(v)-\cara_{h_{e,u}}(w)|^q,$ \\ $ \;$ &$=\frac{1}{2} \sum_{e \in [v,w]_{\E}} \;2\;, $ \\ $\|c(v,w)\|_{\ell^q(\P)}^q$ &$=\#[v,w]_{\E}=d_T(v,w)$ \\ \end{tabular} \end{center} Hence, $(V,\P,\ell^q(\P))$ is a space with labelled partitions. \\ We can now consider the pullback (see Definition \ref{pullbackk}) of this space with labelled partitions via the projection $\rho: X \rightarrow V$ : \begin{df}\label{free_tree} Let $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(X_e\right)_{e \in \E} \big)$ be a tree of spaces and $X$ be its total space. The space with labelled partitions on $X$ defined as the pullback of $(V,\P,\ell^q(\P))$ via $\rho :X \rightarrow V$ is called the \emph{structure of labelled partitions of $\mathcal{X}$ induced by the tree structure}. \end{df} \begin{prop}\label{free_tree_prop} Let $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(X_e\right)_{e \in \E} \big)$ be a tree of spaces, $X$ be its total space and $\vp: X \rightarrow X$ be an automorphism of $\mathcal{X}$. \\ Then $\vp$ is an automorphism of the space with labelled partitions of $\mathcal{X}$ induced by the tree structure. \end{prop} \begin{proof} Let $\vp$ be an automorphism of $\mathcal{X}$. Since $\widetilde{\vp}$ is an automorphism of $T$, one can easily show that $\widetilde{\vp}$ is an automorphism of space with labelled partitions of $(V,\P,\ell^q(\P))$. Hence, as $\widetilde{\vp} \circ \rho = \rho \circ \vp$, by Lemma \ref{pullbackpart}, $\vp$ is an automorphism of the space with labelled partitions of $\mathcal{X}$ induced by the tree structure.
\end{proof} \subsection{Amalgamated free product and property $PL^p$}\label{free_amalg} Let $\G=G*_C H$ be an amalgamated free product and let us consider the tree of $C$-cosets spaces $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ on which $\G$ acts by automorphisms (see Definition \ref{free_example_df}). \\ Recall that $X_v=\g G/C$ if $v=\g G$, $X_v=\g H/C$ if $v=\g H$ and $\bu_{\g C} = \g C$. \\ Let us consider systems of representatives $\rep(G/C)$ and $\rep(H/C)$ of $G/C$ and $H/C$ respectively, each containing the unit of the group as the representative of the class $C$. Every element in $\G$ can be expressed as a reduced word in terms of these systems of representatives (see, for instance, \cite{scowal}) : \begin{dfpr}[reduced word]\label{normalform} Let $G,H$ be groups and $C$ be a common subgroup. An element $\g$ of $G*_C H$ can be uniquely written in the following way : $$ \g = g_1h_1...g_nh_n c, $$ where : \begin{itemize} \item[-] for $i=1,...,n$, $g_i \in \rep(G/C)$ and $h_i \in \rep(H/C)$; \item[-] for $i > 1$, $g_i\neq e_G $ and for $i < n$, $h_i \neq e_H$; \item[-] $c \in C$. \end{itemize} Such an expression of $\g$ is called a \emph{reduced word} (relatively to $\rep(G/C)$ and $\rep(H/C)$). \\ Let us denote : \\ \indent $ R_{\G/G}=\{ g_1h_1...g_nh_n \st g_1h_1...g_nh_n \text{ is a reduced word with } h_n \neq e_H \},$ and \\ \indent $ R_{\G/H}=\{ g_1h_1...g_n \st g_1h_1...g_n \text{ is a reduced word} \}$ (notice that $g_n\neq e_G$ by definition). Then $R_{\G/G}$ is a system of representatives of $\G/G$ in $\G$ and $R_{\G/H}$ is a system of representatives of $\G/H$ in $\G$. \end{dfpr} The maps we define below will allow us to endow the vertex sets of the tree of $C$-cosets spaces of $\G$ with the pullback structure of space with labelled partitions coming from $G/C$ and $H/C$. \begin{nt}\label{pullmaps} Let $\g \in R_{\G/G}$ and $\g' \in R_{\G/H}$.
We set : \begin{itemize} \item[-] $f_{\g G}: X_{\g G}\rightarrow G/C$ such that $\g g C \mapsto g C$ and, \\ \item[-] $f_{\g' H}: X_{\g' H}\rightarrow H/C$ such that $\g' h C \mapsto h C$. \end{itemize} \end{nt} These maps satisfy the following equivariance formulas : \begin{lem}\label{free_formula_coset} Let $\g \in \G$. We have : \\ - Let $\g_1, \g_2 \in R_{\G/G}$. Assume there exists $g \in G$ such that $\g \g_1 = \g_2 g$. Then, for all $x \in X_{\g_1 G}$ : $$ f_{\g_2 G}(\g x)= g f_{\g_1 G}(x).$$ - Let $\g_1, \g_2 \in R_{\G/H}$. Assume there exists $h \in H$ such that $\g \g_1 = \g_2 h$. Then, for all $x \in X_{\g_1 H}$ : $$ f_{\g_2 H}(\g x)= h f_{\g_1 H}(x).$$ \end{lem} \begin{proof} Let $\g \in \G$ and $\g_1,\g_2 \in R_{\G/G}$ such that there exists $g \in G$ such that $\g \g_1 = \g_2 g$. For $x \in X_{\g_1 G}$, there exists $g_x \in G$ such that $x= \g_1g_x C$. Then we have : $$f_{\g_2 G}(\g x) = f_{\g_2 G}(\g_2 g g_xC)=g g_x C=g f_{\g_1 G}(x).$$ A similar argument holds for the second statement. \end{proof} \begin{theo}\label{free_act_theo} Let $G,H$ be groups and $C$ be a common subgroup. Assume that $G$ acts by automorphisms on $(G/C,\P,F(\P))$ and $H$ acts by automorphisms on $(H/C,\P',F'(\P'))$. \\ Let $\mathcal{X}$ be the tree of $C$-cosets spaces associated with $\G=G*_C H$ and $X$ be the total space of $\mathcal{X}$. Then $\G$ acts by automorphisms on $(X,\P_X,F_X(\P_X))$ such that $F_X(\P_X)$ is isometrically isomorphic to a closed subspace of $\rexp{q}{\bigoplus} F(\P)\oplus \rexp{q}{\bigoplus} F'(\P') \oplus \ell^q$. \\ Moreover, considering $C \in X_G \subset X$, we have, for $\g = g_1h_1...g_nh_n c \in G*_C H$ : $$ \|c_X(\g C,C)\|^q=\sum_{k=1}^n (\|c_G(g_k C,C)\|^q_{_{F(\P)}}+\|c_H(h_kC,C)\|^q_{_{F'(\P')}})+d_T(\g G,G). $$ In particular, if $G \curvearrowright (G/C,\P,F(\P))$ and $H \curvearrowright (H/C,\P',F'(\P'))$ are proper, then $\G \curvearrowright (X,\P_X,F_X(\P_X))$ is proper.
\end{theo} \begin{proof} Let $\mathcal{X}=\big(T,\left(X_v\right)_{v \in V},\left(\{\bu_e\}\right)_{e \in \E} \big)$ be the tree of $C$-cosets spaces associated with $\G=G*_C H$ and let $R_{\G/G}$ and $R_{\G/H}$ be the systems of representatives of $\G/G$ and $\G/H$ respectively defined in Definition-Proposition \ref{normalform}. \\ Notice that $V=\{\g G \st \g \in R_{\G/G}\} \sqcup \{\g H \st \g \in R_{\G/H}\}$. \\ For $ \g G, \g' H \in V$ with $\g \in R_{\G/G}$ and $\g' \in R_{\G/H}$, we endow the vertex sets $X_{\g G}$ and $X_{\g' H}$ with pullback structures of spaces with labelled partitions $(X_{\g G},\P_{\g G},F_{\g G}(\P_{\g G}))$ and $(X_{\g' H},\P_{\g' H},F_{\g' H}(\P_{\g' H}))$ given, respectively, by the maps introduced in Notation \ref{pullmaps} :\\ - $f_{\g G}: X_{\g G}\rightarrow G/C$ such that $\g g C \mapsto g C$ and, \\ - $f_{\g' H}: X_{\g' H}\rightarrow H/C$ such that $\g' h C \mapsto h C$. \\ Now, from these structures, we consider the space with labelled partitions $(X,\P_{V},F_q(\P_{V}))$ induced by the vertex sets given by Definition \ref{free_natural} and we denote by $c_V$ the associated separation map. \\ We must prove that $\G$ acts by automorphisms of space with labelled partitions on $(X,\P_{V},F_q(\P_{V}))$. We showed in Example \ref{free_example} that $\G$ acts by automorphisms of tree of spaces on $\mathcal{X}$. Hence, by Proposition \ref{free_autom_part}, it suffices to show that, for all $\g \in \G$, the map $\g_{|_{X_v}}:X_v \rightarrow X_{\g v}$ is a homomorphism of space with labelled partitions, for every $v \in V$: \\ Let $\g \in \G$. Let $v = \g_1 G \in V$ with $\g_1 \in R_{\G/G}$ and let $\g_2 \in R_{\G/G}$ such that $\g \g_1 = \g_2 g$ for some $g \in G$ i.e. $\g_2$ is the representative in $R_{\G/G}$ of the coset $\g \g_1 G$. Notice that $\g v = \g_2 G$.
\\ A labelling function of $\P_{\g v}$ is of the form $p \circ f_{\g_2 G}$ for some $p \in \P$, and we have, by Lemma \ref{free_formula_coset}, for all $x \in X_v$, $p(f_{\g_2 G}(\g x))=p(g f_{\g_1 G}(x)).$ \\ Let us set the following maps : \\ - $\phi_{f_{\g_1 G}}: \P \rightarrow \P_v$ such that $p \mapsto p \circ f_{\g_1 G}$; \\ - $\phi_{f_{\g_2 G}}: \P \rightarrow \P_{\g v}$ such that $p \mapsto p \circ f_{\g_2 G}$; \\ - $\phi_{g}: \P \rightarrow \P$ such that $p \mapsto \{x \in G/C \mapsto p(gx)\}$ and \\ - $\phi_{\g}:\P_{\g v} \rightarrow \{\K\text{-valued functions on }X_v\}$ such that $p_{\g v} \mapsto p_{\g v} \circ \g_{|_{X_v}}$; \\ Thus, by the previous equality, we have : $$ \phi_{\g}(p\circ f_{\g_2 G})=\phi_g(p) \circ f_{\g_1 G} \in \P_{v}, \; (\ast) $$ since $\phi_g(p)$ belongs to $\P$. \\ Now, let $\k \in F_v(\P_v)$. By using the definitions of pullback structures, we have : \begin{center} \begin{tabular}{rl} $\|\k \circ \phi_{\g}\|_{_{F_{\g v}(\P_{\g v})}}$&$= \|\k \circ \phi_{\g}\circ \phi_{f_{\g_2 G}}\|_{_{F(\P)}}$, \\ $\;$&$= \|\k \circ \phi_{f_{\g_1 G}} \circ \phi_{g}\|_{_{F(\P)}}$, by $(\ast)$, \\ $\;$&$= \|\k \circ \phi_{f_{\g_1 G}}\|_{_{F(\P)}}$, since $G$ acts by automorphisms on $(G/C,\P,F(\P))$, \\ $\|\k \circ \phi_{\g}\|_{_{F_{\g v}(\P_{\g v})}}$&$= \|\k\|_{_{F_v(\P_v)}}$. \\ \end{tabular} \end{center} Hence, $\g_{|_{X_v}}:X_v \rightarrow X_{\g v}$ is a homomorphism of space with labelled partitions and a similar argument holds for vertices $v$ of the form $v = \g_1 H$ with $\g_1 \in R_{\G/H}$. As said before, it follows that $\G$ acts by automorphisms on $(X,\P_{V},F_q(\P_{V}))$. \\ Let $\g = g_1h_1...g_nh_n c \in G*_C H$ be a reduced word and consider the element $C \in X_G$. By Proposition \ref{const_labfree_prop}, we have : $$ \|c_V(\g C, C)\|_{_{F_q(\P_{V})}}^q=\sum_{v \in V}\|c_v(\pi_v(\g C),\pi_v(C))\|_{_{F_v(\P_v)}}^q. $$ By Lemma \ref{free_proj_lem}, this sum is a finite sum over a subset of $\{v \in V \st v \text{ is between }G \text{ and }\g G\}$.
But the vertices between $G$ and $\g G$ are the following : $$ G,\;\;g_1H,\;\;\cdots\;,\; g_1h_1...h_{k-1}G,\;\;g_1h_1...h_{k-1}g_k H,\;\;g_1h_1...g_kh_k G,\;\;\cdots\;,\;g_1h_1...g_n H,\;\;\g G ,$$ and notice that : $$ g_1h_1...h_{k-1}g_k C \text{ is the edge between }g_1h_1...h_{k-1}G\text{ and }g_1h_1...h_{k-1}g_k H,$$ and $$ g_1h_1...g_kh_k C \text{ is the edge between }g_1h_1...h_{k-1}g_kH\text{ and }g_1h_1...g_kh_k G.$$ It follows that, for $v=g_1h_1...h_{k-1}g_kH$, $$ \pi_v(C)=g_1h_1...h_{k-1}g_k C \text{ and } \pi_v(\g C)=g_1h_1...g_kh_k C.$$ Then, by denoting $\g_k=g_1h_1...h_{k-1}g_k$, we have : \begin{center} \begin{tabular}{rl} $\|c_v(\pi_v(\g C),\pi_v(C))\|_{_{F_v(\P_v)}}$&$=\|c_v(\g_k h_k C,\g_k C)\|_{_{F_v(\P_v)}}$ \\ $\;$&$=\|c_H(f_{\g_kH}(\g_k h_k C),f_{\g_kH}(\g_k C))\|_{_{F'(\P')}}$ (notice that $\g_k \in R_{\G/H}$) \\ $\;$&$=\|c_H( h_k C, C)\|_{_{F'(\P')}}$ \\ \end{tabular} \end{center} Now, for $v=g_1h_1...g_{k-1}h_{k-1}G$, $$ \pi_v(C)=g_1h_1...g_{k-1}h_{k-1} C \text{ and } \pi_v(\g C)=g_1h_1...h_{k-1}g_{k} C.$$ Hence, similarly, we have $\|c_v(\pi_v(\g C),\pi_v(C))\|_{_{F_v(\P_v)}}=\|c_G(g_k C, C)\|_{_{F(\P)}}.$ \\ Thus : $$ \|c_V(\g C, C)\|_{_{F_q(\P_{V})}}^q=\sum_{k=1}^n\left(\|c_G(g_k C, C)\|_{_{F(\P)}}^q+\|c_H(h_k C, C)\|_{_{F'(\P')}}^q\right). $$ Let us now consider the structure of space with labelled partitions $(X,\P_T,\ell^q(\P_T))$ induced by the tree structure given by Definition \ref{free_tree}. By Proposition \ref{free_tree_prop}, $\G$ acts by automorphisms on $(X,\P_T,\ell^q(\P_T))$ and we have, for $\g = g_1h_1...g_nh_n c$ a reduced word, $$ \|c_T(\g C, C)\|_{_{\ell^q(\P_{T})}}^q = d_T(\g G, G)= k, $$ where $2n-2 \leq k \leq 2n$ (depending on whether $g_1$ and $h_n$ are trivial).
\\ Finally, we endow $X$ with a structure of labelled partitions $(X,\P_X,F_X(\P_X))$ given by the pullback of the product structure $(X\times X, \P_V^{\oplus} \sqcup \P_T^{\oplus}, F_q(\P_V)\oplus \ell^q(\P_T))$ via the $\G$-equivariant map $x \mapsto (x,x)$. Hence, we have, for $\g = g_1h_1...g_nh_n c$ and $C \in X_G$ : $$\|c_X(\g C,C)\|^q=\sum_{k=1}^n \left(\|c_G(g_k C,C)\|^q_{_{F(\P)}}+\|c_H(h_kC,C)\|^q_{_{F'(\P')}}\right)+d_T(\g G,G),$$ where $c_X$ is the separation map associated with $\P_X$. \\ \end{proof} \begin{cor}\label{free_act_cor} Let $G,H$ be groups, $F$ be a common \emph{finite} subgroup and $q \geq 1$. Assume that $G$ acts \emph{properly} by automorphisms on $(G,\P_G,F_G(\P_G))$ and $H$ acts \emph{properly} by automorphisms on $(H,\P_H,F_H(\P_H))$. \\ Then there exists a space with labelled partitions $(X,\P_X,F_X(\P_X))$ on which $G*_F H$ acts properly by automorphisms, and moreover $F_X(\P_X)$ is isometrically isomorphic to a closed subspace of $\rexp{q}{\bigoplus} F_G(\P_G)\oplus \rexp{q}{\bigoplus} F_H(\P_H) \oplus \ell^q$. \end{cor} Before we prove this corollary, we need the following lemma : \begin{lem}\label{finite_labpart} Let $G$ be a group and $F$ be a finite subgroup of $G$. Assume that $G$ is endowed with a structure of space with labelled partitions $(G,\P,F(\P))$ on which it acts by automorphisms via left-translation. \\ Then there exists a structure of space with labelled partitions $(G/F, \P', F'(\P'))$ on which $G$ acts by automorphisms via its natural action on the quotient $G/F$ and where $F'(\P')$ is isometrically isomorphic to a closed subspace of $F(\P)$. Moreover, there exists $K\geq 0$ such that, for all $g,g' \in G$ : $$ \|c(g,g')\|_{_{F(\P)}}+K\geq \|c'(gF,g'F)\|_{_{F'(\P')}} \geq \|c(g,g')\|_{_{F(\P)}}-K, $$ where $c,c'$ are the respective separation maps of $(G,\P,F(\P))$ and $(G/F, \P', F'(\P'))$. \\ In particular, $G\curvearrowright(G,\P,F(\P))$ is proper if, and only if, $G\curvearrowright(G/F, \P', F'(\P'))$ is proper.
\end{lem} \begin{proof} For $p \in \P$, we define the labelling function $p':G/F \rightarrow \K$, by, for $g \in G$ : $$ p'(gF):=\frac{1}{\#F}\sum_{f \in F}p(gf). $$ Notice that $p'$ is well-defined since $\sum_{f \in F}p(g'f)=\sum_{f \in F}p(gf)$ for every $g' \in gF$. \\ We consider the family of labelling functions $\P'=\{p' \st p\in \P \}$ and we denote by $c'$ its associated separation function. For $p \in \P$, $g,g' \in G$, we have : $$ c'(gF,g'F)(p')= \frac{1}{\#F}\sum_{f \in F}c(gf,g'f)(p).$$ Then, if we set $E'(\P'):=\Vect(c'(gF,g'F)\st g,g' \in G)$, the linear operator $T:E'(\P') \rightarrow F(\P)$ such that : $$T :c'(gF,g'F)\mapsto \frac{1}{\#F}\sum_{f \in F}c(gf,g'f),$$ is injective. Hence we can consider the Banach space $F'(\P')$ defined as the closure of $E'(\P')$ endowed with the norm $\|\k\|_{_{F'(\P')}}:=\|T(\k)\|_{_{F(\P)}}$. As $c'(gF,g'F)$ belongs to $F'(\P')$ for all $g,g' \in G$, it follows that $(G/F,\P',F'(\P'))$ is a space with labelled partitions, and it is clear that $G$ acts on it by automorphisms via its natural action on $G/F$. \\ Now, let us consider $\eta=\frac{1}{\#F}\sum_{f \in F}c(f,e) \in F(\P)$. Then we have, for $g \in G$ : \begin{center} \begin{tabular}{rl}$\|c'(gF,F)\|_{_{F'(\P')}}$&$=\|\frac{1}{\#F}\sum_{f \in F}c(gf,f)\|_{_{F(\P)}}$ \\ &$=\|c(g,e)+\eta \circ \phi_g - \eta\|_{_{F(\P)}}.$ \\ \end{tabular} \end{center} As $ \k \mapsto \k \circ \phi_g$ is an isometry of $F(\P)$, we have, by the triangle inequality, $\|\eta \circ \phi_g - \eta\|_{_{F(\P)}} \leq 2\|\eta\|_{_{F(\P)}}$. Hence, again by the triangle inequality : \\ $$\|c(g,e)\|_{_{F(\P)}}+K \geq \|c'(gF,F)\|_{_{F'(\P')}} \geq \|c(g,e)\|_{_{F(\P)}}-K,$$ where $K=2\|\eta\|_{_{F(\P)}}$. \end{proof} \begin{proof}[Proof of Corollary \ref{free_act_cor}] Assume that $G$ acts properly by automorphisms on $(G,\P_G,F_G(\P_G))$ and $H$ acts properly by automorphisms on $(H,\P_H,F_H(\P_H))$.
Then, by Lemma \ref{finite_labpart}, there exist spaces with labelled partitions $(G/F,\P,F(\P))$ and $(H/F,\P',F'(\P'))$ on which $G$ and $H$ respectively act properly via their natural actions on quotients. Thus we can apply Theorem \ref{free_act_theo} : $G*_F H$ acts by automorphisms on a space with labelled partitions $(X,\P_X,F_q(\P_X))$ where $X$ is the total space of the tree of $F$-cosets spaces associated with $G*_F H$ and $F_q(\P_X)\lesssim \rexp{q}{\bigoplus} F(\P)\oplus \rexp{q}{\bigoplus} F'(\P') \oplus \ell^q$. Moreover, we have, for $\g=g_1h_1...g_nh_n f \in G*_F H$ : $$\|c_X(\g F,F)\|^q=\sum_{k=1}^n \|c_G(g_k F,F)\|^q_{_{F(\P)}}+\|c_H(h_kF,F)\|^q_{_{F'(\P')}}+d_T(\g G,G).$$ For $ R \geq 0$, $\|c_X(\g F,F)\| \leq R$ implies that $2n-2 \leq d_T(\g G,G) \leq R^q$, $\|c_G(g_k F,F)\|_{_{F(\P)}} \leq R$ and $\|c_H(h_k F,F)\|_{_{F'(\P')}} \leq R$. \\ Hence, for all $R \geq 0$, $\{ \g=g_1h_1...g_nh_n f \st \|c_X(\g F,F)\| \leq R \}$ is a subset of : $$ \left\lbrace \g=g_1h_1...g_nh_n f \st n \leq 2(R^q +2) \text{ and } \|c_G(g_k F,F)\|_{_{F(\P)}} \leq R, \; \|c_H(h_k F,F)\|_{_{F'(\P')}} \leq R\right\rbrace, $$ which is a finite set as $F$ is finite, $n$ is bounded and $G \curvearrowright (G/F,\P,F(\P))$, $H \curvearrowright (H/F,\P',F'(\P'))$ are proper. \\ Thus, $\G \curvearrowright (X,\P_X,F_q(\P_X))$ is proper. \end{proof} \begin{proof}[Proof of Theorem \ref{amalgamflp}] The necessary condition is clear as $G,H$ are subgroups of $G*_F H$. \\ Now, assume $G,H$ have property $PL^p$. It follows from Corollary \ref{labpart_aflp} that there exist structures of spaces with labelled partitions $(G,\P,F(\P))$ and $(H,\P',F'(\P'))$ on which $G$ and $H$, respectively, act properly by automorphisms via left-translations.
Moreover $F(\P)$ is isometrically isomorphic to a closed subspace of an $L^p$ space, and so is $F'(\P')$.\\ Thus, as $F$ is finite, by Corollary \ref{free_act_cor}, there exists a space with labelled partitions $(X,\P_X,F_X(\P_X))$ on which $G*_F H$ acts properly by automorphisms where : $$ F_X(\P_X) \lesssim \rexp{p}{\bigoplus} F(\P)\oplus \rexp{p}{\bigoplus} F'(\P') \oplus \ell^p.$$ Hence, $ F_X(\P_X)$ is isometrically isomorphic to a closed subspace of an $L^p$ space by Proposition \ref{lpisomisom}. By Corollary \ref{labpart_aflp}, it follows that $G*_F H$ has property $PL^p$. \end{proof} \nocite{*} \end{document}
\begin{document} \title{\textbf{Consciousness and Endogenous State Reduction: Two Experiments}} \begin{abstract} There is a tradition in science that regards consciousness as merely epiphenomenal. Accordingly, physical systems can create and influence consciousness, but consciousness can have no influence on physical systems. Indeed, the current understanding of quantum mechanics provides no way for consciousness to alter the wave function of a quantum mechanical state. Furthermore, there is nothing in molecular biology that would suggest that the human body is anything more than an automaton that operates on the basis of purely physical and chemical interactive forces. However, I believe that the epiphenomenal view is fundamentally flawed, and I suggest the following experiments as a way of demonstrating the existence of an influence of consciousness on material systems. The first uses Positron Emission Tomography (PET) with a human subject, and the second uses autoradiography with rats. Detailed arguments for my position can be found in three papers that have been published in recent years. A brief summary of the arguments is initially given below, where it is claimed that `pain' consciousness might be correlated in a certain way with the relative binding of opiates to receptors in a subject's brain. \end{abstract} \section*{1. Introduction} In recent papers\cite{RM1}\cite{RM2}\cite{RM3}, I accept the evolutionary argument of William James to the effect that psychological states must have evolved along with the biological states\cite{WJ}. For these two very different kinds of things to have evolved in parallel with one another (i.e., for one to have anything to do with the other), James says that there must have been an interaction between the two. Otherwise, wrong psychological constructions would not have been selected against, and therefore, would have improperly survived the evolutionary struggle.
If that had been the case, then any resemblance between the subjective imagery of our species and the world about us would be completely fortuitous. This argument is given more fully in ref.\ 2. It says that the psycho-physical parallelism of von Neumann must have come about through a natural process in which psychological and physical states were mutually engaged. Otherwise, one would have to believe either in Leibniz's claim of a (miraculous) \emph{pre-established harmony} between these two things, or in Bishop Berkeley's denial that there exists a psycho-physical parallelism in the first place. According to von Neumann's interpretation of quantum mechanical measurement, the collapse of the wave function requires the presence of a subjective (i.e., conscious) observer\cite{vN}. I make use of this idea together with the notion of an \emph{inside observer} (initially defined in ref.\ 1) to show how conscious states arising within a physical system might conceivably influence the probability amplitude of quantum mechanical choices made by the system. An inside observer is defined to be a \emph{state of conscious awareness} that emerges on one component of an endogenous quantum mechanical superposition of (physiological) states. These states can be macroscopic in the formalism of quantum mechanics; although, in deference to environmental entanglement and decoherence, one might call them ``mixtures'' instead of ``superpositions''. I do not use this language because I am not concerned here with coherence or interference between macrostate components.\footnote{Environmentally entangled macrostates really are superpositions. Joos and Zeh say, ``... the interference terms still exist, but they are not \emph{there}.''\cite{JZ} This paradoxical statement means that the system's phases exist between global states that include non-local correlations connecting a macrostate with its environment.
These phases are not accessible to a local observer; and in consequence, the superposition appears locally to be a mixture.\cite{Gea} But however one regards such a macrostate ensemble, as a global superposition or as a local mixture, each component has a probability amplitude in a fully quantum mechanical system. Therefore, according to von Neumann, a conscious observation is necessary for one of these states to become a concrete reality.} When different inside observers finally do emerge on different components of a physiological superposition, I propose that there will be a shift in the relative probability amplitudes of the components that favors certain conscious states over others. The principles that govern this leaning toward especially favored states are described below. I assume that the above shift preserves normalization.\footnote{I do not say that this shift among probability amplitudes is \emph{caused} by the appearance of consciousness, inasmuch as the underlying influence is revealed only as an empirical relationship. The existence of various conscious states in the endogenous superposition, \emph{plus} my hypothetical change in relative probability amplitudes, \emph{plus} an accompanying collapse of the wave function may all result from a `common' unknown cause\cite{RM4}.} This possibility provides us with an opening in quantum mechanics that may admit the evolutionary influence envisioned by James. What is needed is a model of some primitive species at the time of its first use of consciousness, together with an identification of the kind of conscious experience that can influence the creature's evolution by using the above `inside observer' mechanism. \section*{2. My Hypothesis} I believe that the first conscious experiences that appeared in any evolving species must have been very straightforward, like simple \emph{pleasure or pain}. 
Elemental perceptions involving sight, sound, or touch serve no behavioral purpose in themselves, for they have no intrinsic motivational weight or direction. These perceptions must be `interpreted' in order to have significance, and that is too much to expect of the first glimmer of consciousness. In addition, emotions such as fear, anger, and love have no intrinsic meaning apart from an existing subjective construction of the world toward which they are directed. This means that an awareness of either one of these emotions requires a greater sophistication than is required for simple experiences like pleasure or pain. The latter have a direct `unsophisticated' motivational power that stands apart from any concept of the external world.\footnote{This excludes `emotional' pain. It refers only to `physical' pain (e.g., a flesh wound or a broken bone).} I therefore develop an evolutionary model that uses `pain' as a creature's first conscious experience, where pain consciousness is said to have an influence on the quantum mechanical choices that are made within the creature. This is done in detail in ref.\ 2, and refined in ref.\ 3. My hypothesis requires that if an endogenous quantum mechanical superposition develops within a conscious creature in which a more painful component competes with a less painful one, then, other things being equal, \emph{the less painful component will have an enhanced probability of surviving a collapse of the state function}. This is the interactive mechanism whereby conscious states are claimed to influence matter. The model assumes that the more painful experience is associated with a life-threatening behavior in a way that is best explained in ref.\ 3, pp. 1953-4.
An endogenous quantum mechanical superposition develops within the experimental subjects because the \emph{ligands} that attach to receptors in the brain have quantum mechanical wave functions that spread rapidly in space due to the Heisenberg uncertainty principle.\footnote{The term ligand refers to any molecule, endogenous or exogenous, that attaches to a receptor.} The spreading takes place as ligands are swept along in blood and cerebrospinal fluid on their way to the receptor. This means that there is an intrinsic probability governing the number of ligands that become attached to the opiate receptors in a pain responsive region of the brain, thereby posing a quantum mechanical choice between a more painful experience and a less painful experience. `\emph{More pain' and `less pain' are eigenvalues of the endogenous state reduction}.\footnote{Another way of saying this is that there is a `more pain' observer, and a `less pain' observer, who are competing ``inside observers'' associated with different components of the endogenous quantum mechanical superposition. The reduction is one that makes a definite choice between these contenders.} In this situation, my hypothesis requires an increase in the probability of the less painful eigenstate. One would therefore expect to find a surviving endogenous eigenstate to contain more opiate-like molecules in a part of the brain that mediates pain, than would be expected on the basis of biochemical considerations alone. Most measurements of ligand bonding in humans appear to have been made using subpharmacological doses, so it is clear that the above hypothesis needs to be tested in vivo using doses that are large enough to be felt. That is the purpose of these experiments. Ligands are called \emph{agonists} if they produce cellular effects within the receptor to which they are attached.
Morphine, fentanyl, and carfentanil are examples of opiate receptor agonists, inasmuch as they cause cellular disturbances that we recognize as analgesia and/or euphoria. Other molecules produce no pharmacological effects when they are attached to receptors. These are called \emph{antagonists}. Naloxone and diprenorphine are examples of opiate receptor antagonists. When a mixture of an agonist and an antagonist is injected into a subject, the two molecules will compete with one another for attachment to the available receptors. Such a mixture (in sufficient dose) will produce pharmacological effects, owing to the attached agonist molecules. The competition between the two substances is expressed quantum mechanically by their appearing in different ratios on different components of the endogenous superposition. Competing components of the superposition will therefore support competing inside observers who experience different degrees of pain. It is my claim that the probability amplitudes of these components will be skewed in favor of the observer experiencing less pain.\footnote{My hypothesis chooses less pain to be favored over more pain for reasons that are best understood in terms of the evolutionary model developed in refs. 2 and 3.} No attempt is made here to determine which agonists and antagonists are best suited for the experiments. Carfentanil and diprenorphine may be acceptable for the experiment with rats; but the toxicity of carfentanil makes it unsuitable for use with humans in pharmacological doses. Perhaps fentanyl and naloxone would be the best combination for humans. However, there may be no ideal choice of ligands at this time. Background radiation may swamp our anticipated results because of ligand binding that is not sufficiently specific to the targeted receptors, and/or because of the existence of too many free unbound ligands. Research is ongoing to find ligands with greater affinity and specificity.
Therefore, although the following experiments may not now give unambiguous results, they can be thought of as idealized experiments to be performed when the technology is sufficiently improved. \section*{3. Preliminary to the Experiments} Prior to one of the experiments, the subject who is exposed to painful trauma should be given a prescribed mixture of an agonist and an antagonist to determine the size of the dose that brings the subject to the threshold of analgesia. It is assumed that a \emph{threshold dose} is small enough that opiate receptors remain unsaturated throughout the brain, and in fact, that the number of bound receptors in each region remains linear with the number of ligands that are available to the receptors. \section*{4. The First Experiment} Four PET scans are proposed that will allow a comparison to be made between the binding of an agonist and an antagonist in the brain of a human suffering from rheumatoid arthritis (or other chronic pain), or one who is subjected to cutaneous applications of a pain producing heat (or other inflicted pain). Each scan begins with an intravenous injection of agonist and antagonist in ratio $R$. This ratio is chosen to insure that the number of agonist molecules that become attached to the receptors is roughly equal to the number of antagonist molecules that become attached to the receptors. Only one of these substances is labeled radioactively during a single scan. A scan might reasonably begin 30 minutes after injection and last for 45 minutes. In the first scan, the subject is given a threshold dose of hot agonist and cold antagonist. After the scan, the receptor count/pixel given by $C_A$ is recorded in each ROI (neurological region of interest). In the second scan, the subject is given a threshold injection of cold agonist, and hot antagonist, which has a net weight equal to that of the first injection. After that scan, the receptor count/pixel, given in this case by $C_{AA}$, is recorded in each ROI. 
The total receptor count/pixel is then $C = C_A + C_{AA}$, and the ratio is \begin{equation} r = C_A/C_{AA} \end{equation} in each ROI. If the test subject is required to endure cutaneous heat, then this will be applied from the time of injection to the end of the scan. The first row in fig.\ 1 represents the population of molecules that occupy opiate receptors in some ROI in the first scan. The agonist is shown to be radioactive in the first row (as indicated by the wings), where the number of endogenous ligands (e.g., endorphins or other peptides) that are competing for site receptors is indefinite. The second row of fig.\ 1 represents the population of molecules that occupy the opiate receptors in the second scan, where the antagonist molecules are now radioactive. Combining the first and second scans allows one to measure $C_A$ and $C_{AA}$ in each ROI, and so to find $C$ and $r$. The third and fourth scans are identical with the first and second, except that the doses in this case are subpharmacological. This guarantees that consciousness will have nothing to do with the result. The third and fourth scans therefore provide values of $r$ for each ROI that are determined by all known biochemical influences as well as any purely methodological influences. It is my assumption that \begin{eqnarray} r&\mbox{(pain responsive ROIs in scans one and two)}\\ >r&\mbox{(pain responsive ROIs in scans three and four)}\nonumber \end{eqnarray} and that $r$ will be the same in regions that are not pain responsive. The only thing that distinguishes these two values of $r$ in eq.\ 2 is the size of the dose, and biochemically speaking, that should not have anything to do with the result.
That's because the only biochemical influence on the observed value of $r$ is the competition between the agonist and the antagonist, \emph{and dose should not affect this balance for small doses at steady state equilibrium.}\footnote{Small dose means: on the linear portion of the binding vs.\ concentration curve, far from saturation\cite{MT}. For ligands of normal potency, this should present no difficulty for threshold doses.} The inequality in eq.\ 2 is therefore a test of my hypothesis concerning the influence of pain consciousness. It suggests that there are non-biophysical influences operating in the pain responsive regions of the brain that result in more agonist molecules being bound to these regions than would otherwise be expected. Presumably, this is because the conscious observer (i.e., the PET subject) becomes associated with a collapse of an endogenous wave function that gives preferential weight to less painful eigenstates. In order to isolate the observer in this experiment, monitors carrying raw data from the scanner should be covered during the time of the scan. \section*{5. Distribution in r} The ratio $r = C_A/C_{AA}$ is a variable of the total endogenous quantum mechanical state. $C = C_A + C_{AA}$ is another variable, but it is of no interest here. The pulse appearing on the left in fig.\ 2 represents the distribution of eigenstates of $r$ in a region of the brain that is not responsive to pain. These eigenstate amplitudes will be the same as those predicted by quantum physiology. In regions that are responsive to the pain, the eigenstates on the right-hand slope of the left-hand pulse will have proportionally more agonist molecules than those that are on the left-hand slope, inasmuch as they represent states having a greater ratio $r$. So eigenstates on the left-hand slope are more painful than those on the right-hand slope.
Therefore, according to my hypothesis, the eigenstates on the right-hand slope will grow in amplitude relative to those on the left. When this process comes to equilibrium, the entire pulse will have displaced to the right as shown in fig.\ 2 where its components will be richer in the analgesic agonist. The measured value of $r$ will therefore be greater in these regions. The quantum mechanical state function for the subject's entire body is a function of many variables including $r_A, r_B, r_C, r_D,$ . . . etc., where these represent the ratio $r$ in regions $A, B, C, D,$ . . . etc. The total physical state can therefore be written in the form $\Psi(r_A, r_B, r_C, r_D,$ . . . etc.). Presumably the conscious state of the subject is determined by $\Psi$ in its entirety. The distribution of $r$ in regions that are not pain responsive will be determined by the quantum mechanics alone. However, displacements in the functional dependence of $r$ in all of the regions that are pain responsive will decrease the pain consciousness of the organism as a whole. \section*{6. Discussion} In pain responsive ROIs, endogenous opioid agonists will generally be secreted as part of an attempt by the body to alleviate the pain. As a result, the total number $C$ of exogenous ligands will be decreased in these regions as has been shown in other studies\cite{AJ}. However, this decrease will not matter to the experiment because it is the ratio $r$ of the two injected ligands that is important. That ratio is governed only by the competition between the two, and is independent of other secretions. This is one reason why the experiment is based on the unitless ratio $r$. A concern is that at pharmacological doses, this effect will result in a more homogeneous radioactive response over all regions of the brain, which would tend to mask differences in $r$.
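As a purely numerical illustration of the inequality in eq.\ 2 (the counts below are invented for the sake of the arithmetic, and carry no prediction about the actual effect size), a pain responsive ROI behaving as hypothesized might yield $$ r\,\mbox{(threshold)}=\frac{C_A}{C_{AA}}=\frac{1200}{1000}=1.2 \;>\; r\,\mbox{(subpharmacological)}=\frac{C_A}{C_{AA}}=\frac{1100}{1000}=1.1, $$ while a region that is not pain responsive would return the same ratio in both pairs of scans.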
The experiment will be valid, even if the agonist is specific to, say $\mu$-receptors, and the antagonist is non-specific, as is the case with carfentanil and diprenorphine. The ratio $r$ should certainly be affected by variations of specificity, but it would remain the same between the first two scans and the second two scans in this experiment. Whatever the value of $r$ for subpharmacological doses, it should not change (biochemically speaking) as the dose is increased to threshold, even if the ligands engage different populations of receptors. A concern is that differences in $r$ will be masked by excessive non-specificity. It will also not matter if the size of the dose is slightly different for different subjects. A different dose will change the total count $C$ in each ROI, but the ratio $r$ in each region should not thereby be affected. What is important is that the dose be one that puts the injected agonist and antagonist molecules into one-on-one competition with one another at the analgesic threshold of the subject. Since it is impossible at this point to estimate the magnitude of the hypothetical displacement of the pulse in fig.\ 2, it is impossible to estimate the number of data points that would be necessary to get good statistics. Resolution is part of what must be decided by the experiment. \section*{7. The Second Experiment} There is another approach to this problem that involves in vivo autoradiography with rats experiencing pain. Instead of four PET scans, there are four injections (in four different rats) of a mixture of agonist and antagonist that follow the same protocol as before. Each of the four rats is sacrificed. The brain of each is then sliced and exposed to film to reveal the concentration of labeled ligands in different parts of the brain. As in the PET case, the first two doses will allow a determination of $r = C_A/C_{AA}$ in each ROI at threshold levels.
The second two doses will allow $r$ to be determined in each ROI at subpharmacological doses, which insures that consciousness will not be a factor. My claim is that eq.\ 2 should also apply in this case, giving evidence of the influence of consciousness on the outcome. The second (autoradiographic) experiment might be cheaper and easier than the first (PET) experiment. But there is one great disadvantage. A negative result might mean that rats are automatons that have no conscious life. It is only in a truly in vivo experiment (i.e., one in which the subject is alive and known to be fully conscious when the data is taken) that the hypothesis can be fully tested. This requires a PET based experiment that uses human subjects. On the other hand, a positive result for the second (autoradiographic) experiment would be useful because it would suggest that rats are conscious, and that my hypothesis is correct. \end{document}
\begin{document} \title[]{The hyperbolic Yang--Mills equation for \\ connections in an arbitrary topological class} \author{Sung-Jin Oh} \address{Department of Mathematics, UC Berkeley, Berkeley, CA 94720 and KIAS, Seoul, Korea 02455} \email{[email protected]} \author{Daniel Tataru} \address{Department of Mathematics, UC Berkeley, Berkeley, CA 94720} \email{[email protected]} \begin{abstract} This is the third part of a four-paper sequence, which establishes the Threshold Conjecture and the Soliton-Bubbling vs.~Scattering Dichotomy for the energy critical hyperbolic Yang--Mills equation in the $(4+1)$-dimensional Minkowski space-time. This paper provides basic tools for considering the dynamics of the hyperbolic Yang--Mills equation in an arbitrary topological class at an optimal regularity. We generalize the standard notion of a topological class of connections on $\BigR^{d}$, defined via a pullback to the one-point compactification $\BigS^{d} = \BigR^{d} \cup \set{\infty}$, to rough connections with curvature in the critical space $L^{\frac{d}{2}}(\BigR^{d})$. Moreover, we provide excision and extension techniques for the Yang--Mills constraint (or Gauss) equation, which allow us to efficiently localize Yang--Mills initial data sets. Combined with the results in the previous paper \cite{OTYM2}, we obtain local well-posedness of the hyperbolic Yang--Mills equation on $\BigR^{1+d}$ $(d \geq 4)$ in an arbitrary topological class at optimal regularity in the temporal gauge (where finite speed of propagation holds). In addition, in the energy subcritical case $d = 3$, our techniques provide an alternative proof of the classical finite energy global well-posedness theorem of Klainerman--Machedon \cite{KlMa2}, while also removing the smallness assumption in the temporal-gauge local well-posedness theorem of Tao \cite{TaoYM}. Although this paper is a part of a larger sequence, the materials presented in this paper may be of independent and general interest.
For this reason, we have organized the paper so that it may be read separately from the sequence. \end{abstract} \maketitle \tableofcontents \section{Introduction} The subject of this paper is the $(d+1)$-dimensional hyperbolic Yang--Mills equation with compact noncommutative structure group. Our goal is two-fold: \begin{itemize} \item To describe, topologically and analytically, the Yang--Mills initial data sets at the optimal $L^{2}$-Sobolev regularity; \item To provide a good local theory for solutions at the optimal $L^{2}$-Sobolev regularity. \end{itemize} In each case, we consider two model base spaces: either a ball $B_{R} = \set{x \in \BigR^{d} : \abs{x} < R}$ or the whole space $\BigR^{d}$ for the first goal, and (suitable time restrictions of) their respective domains of dependence $\mathcal D(B_{R}) = \set{(t, x) \in \BigR^{1+d} : \abs{t} + \abs{x} < R}$ and $\mathcal D(\BigR^{d}) = \BigR^{1+d}$ for the second goal. The main results of this paper may be classified into three classes: \begin{enumerate} \item {\it Good global gauge and topological class of rough connections.} Motivated by the optimal regularity theory for the hyperbolic Yang--Mills equation, we consider locally-defined connections on a subset of $\BigR^{d}$ with $L^{\frac{d}{2}}$-curvature. Patching together the local gauges, we show that we can always produce good global gauges in the two model base spaces above (Theorems~\ref{thm:goodrep-ball} and \ref{thm:goodrep}). Moreover, in the whole space case, we use the asymptotics of the good global gauge potential to extend the notion of topological classes of connections to the rough setting (Definition~\ref{def:top-class}). \item {\it Initial data surgery.} We provide techniques for excising and extending Yang--Mills initial data sets, which are subject to the nonlinear Yang--Mills constraint (or Gauss) equation (Theorems~\ref{thm:ext-id} and \ref{thm:excise}).
These are based on a sharp solvability result for the covariant divergence equation $\bfD^{\ell} e_{\ell} = h$ which preserves the physical space support property (Theorem~\ref{thm:gauss-0}). \item {\it Large data local theory.} Using the ideas of initial data surgery and patching solutions, we show how to extend a small data well-posedness result in the temporal gauge to arbitrarily large data; the key is that causality (or finite speed of propagation) holds in the temporal gauge. Combined with the optimal regularity temporal gauge small data global well-posedness theorem proved in \cite{OTYM2}, we prove local well-posedness of the hyperbolic Yang--Mills equation in the temporal gauge for arbitrary critical Sobolev initial data in $d \geq 4$ (Theorem~\ref{thm:local-temp}). In $d = 3$, we obtain a generalization of a low regularity result of Tao \cite{TaoYM}, as well as an alternative proof of the classical result of Klainerman--Machedon \cite{KlMa2}. \end{enumerate} In addition, in the last section we provide a review of the theory of the harmonic Yang--Mills equation on $\BigR^{4}$ using the topological framework developed in this paper. A particular emphasis is given to the recent sharp energy lower bound for non-instanton solutions due to Gursky--Kelleher--Streets \cite{GKS}, which clarifies the threshold energy for the energy critical hyperbolic Yang--Mills equation (and the Yang--Mills heat flow); namely, it is twice the ground state energy. \begin{remark} \label{rem:series} When restricted to the energy critical dimension $d = 4$, the results in this paper constitute the third part of a four-paper sequence, whose principal aim is to prove the Threshold Theorem for the energy critical hyperbolic Yang--Mills equation. The four installments of the series are concerned with \begin{enumerate} \item the \emph{caloric gauge} for the hyperbolic Yang--Mills equation, \cite{OTYM1}. \item large data \emph{energy dispersed} caloric gauge solutions, \cite{OTYM2}.
\item \emph{topological classes} of connections and large data local well-posedness, present article. \item \emph{soliton bubbling} vs. scattering dichotomy for large data solutions, \cite{OTYM3}. \end{enumerate} A short overview of the whole sequence is provided in the survey paper \cite{OTYM0}. The present paper is mostly independent of the other papers in the series; the only exception is the small data well-posedness result for the hyperbolic Yang--Mills equation from \cite{OTYM2} ($d \geq 4$), which is used here as a black-box. \end{remark} This paper is structured as follows. In the remainder of the introduction, we present the basic definitions and main results of this paper. For the notation and conventions that are not explained in the course of exposition, we refer the reader to Section~\ref{sec:notation}. In Sections~\ref{sec:rough-conn}--\ref{sec:threshold}, we elaborate and provide proofs of the results stated in the introduction. \subsection{Connections on a vector bundle with structure group \texorpdfstring{$\bfG$}{G}} \label{subsec:vb} Here we give a quick review of the basic theory of connections on vector bundles, and at the same time fix some notation and conventions. For a textbook treatment of these materials, we recommend \cite{MR1393940, MR1393941, MR0440554}. Let $\bfG$ be a compact Lie group with Lie algebra $\frkg$. We denote the adjoint action of $\bfG$ on $\frkg$ by $Ad(O) A = O A O^{-1}$, and the corresponding action of $\frkg$ by $ad(A) B = [A, B]$. We endow $\frkg$ with an inner product $\brk{\cdot, \cdot}$ which is $Ad$-invariant (or bi-invariant), i.e., \begin{align*} \brk{A, B} = \brk{Ad(O) A, Ad(O) B} \qquad A, B \in \frkg, \ O \in \bfG. \end{align*} Such an $Ad$-invariant inner product always exists if $\bfG$ is compact. Indeed, from any inner product $\brk{\cdot, \cdot}'$, we may construct an $Ad$-invariant inner product by applying $Ad(O)$ to each input and averaging in $O \in \bfG$.
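This averaging construction can be written out explicitly; the following display is a standard sketch (with $\mu$ denoting the normalized Haar measure on $\bfG$, a notation not used elsewhere in this paper): \begin{align*} \brk{A, B} := \int_{\bfG} \brk{Ad(O) A, Ad(O) B}' \, d\mu(O), \qquad A, B \in \frkg. \end{align*} The $Ad$-invariance then follows from the right-invariance of $\mu$: for $P \in \bfG$, $\brk{Ad(P) A, Ad(P) B} = \int_{\bfG} \brk{Ad(OP) A, Ad(OP) B}' \, d\mu(O) = \brk{A, B}$.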
The main objects we consider are connections $\bfD$ on a vector bundle on some smooth base manifold $X$ with structure group $\bfG$. Here we recall the standard local definition of a vector bundle in the smooth and continuous cases, which will be most useful later: \begin{definition} \label{def:vb} A $C^{\infty}$ [resp. $C^{0}$] vector bundle $\eta$ on a smooth manifold $X$ with fibers modeled on a vector space $V$ consists of the following objects: \begin{itemize} \item An open cover $\set{U_{\alpha}}$ of $X$; \item For each pair $U_{\alpha}, U_{\beta}$, a $C^{\infty}$ [resp. $C^{0}$] \emph{transition map} $O_{(\alpha \beta)} : U_{\alpha} \cap U_{\beta} \to Aut(V)$, which satisfy the following \emph{cocycle properties}: \begin{enumerate} \item $O_{(\alpha \alpha)} = I \quad \hbox{ on } U_{\alpha} (= U_{\alpha} \cap U_{\alpha})$, \item $O_{(\alpha \gamma)} = O_{(\alpha \beta)} O_{(\beta \gamma)} \quad \hbox{ on } U_{\alpha} \cap U_{\beta} \cap U_{\gamma}$. \end{enumerate} \end{itemize} Suppose that a Lie group $\bfG$ acts on $V$, in the sense that there exists a smooth representation $\rho : \bfG \to Aut(V)$. We say that \emph{$\eta$ has structure group $\bfG$} if the transition functions may be lifted to $C^{\infty}$ [resp. $C^{0}$] $\bfG$-valued cocycles, i.e., \begin{equation*} O_{(\alpha \beta)} = \rho \circ \tilde{O}_{(\alpha \beta)} \quad \hbox{ for some } \tilde{O}_{(\alpha \beta)} : U_{\alpha} \cap U_{\beta} \to \bfG \end{equation*} so that $\set{\tilde{O}_{(\alpha \beta)}}$ satisfy the cocycle properties. For simplicity, throughout the paper we omit the representation $\rho$ and denote the lifted cocycles $\tilde{O}_{(\alpha \beta)}$ by $O_{(\alpha \beta)}$.
\end{definition} In the local formulation, vector bundles with structure group ${\bf G}$ defined by the data sets $\set{U_{\alpha}, O_{(\alpha \beta)}}$ and $\set{U'_{\alpha'}, O'_{(\alpha' \beta')}}$ are \emph{isomorphic} if and only if there exist a common refinement $\set{V_{\gamma}}$ of $\set{U_{\alpha}}$ and $\set{U'_{\alpha'}}$, so that $V_{\gamma} \subseteq U_{\alpha(\gamma)} \cap U_{\alpha'(\gamma)}$, and $C^{\infty}$ [resp. $C^{0}$] functions $P_{(\gamma)} : V_{\gamma} \to \bfG$ such that \begin{equation*} P_{(\gamma)} O_{(\alpha(\gamma) \alpha(\delta))} = O'_{(\alpha'(\gamma) \alpha'(\delta))} P_{(\delta)} \quad \hbox{ on } V_{\gamma} \cap V_{\delta}. \end{equation*} By the \emph{topological} or \emph{isomorphism class} of a vector bundle $\eta$, we mean the class of all vector bundles isomorphic to $\eta$. The open cover $\set{U_{\alpha}}$ in Definition~\ref{def:vb} provides subsets on which $\eta$ is isomorphic to the trivial bundle $U_{\alpha} \times V$, and the transition maps $\set{O_{(\alpha \beta)}}$ describe how these local trivial bundles are patched together. We call an isomorphism $\eta \restriction_{U_{\alpha}} \to U_{\alpha} \times V$ a \emph{local gauge} (or local trivialization), and refer to $O_{(\alpha \beta)}$, viewed as an isomorphism between two such trivial bundles, as a \emph{local gauge transformation}. Moreover, we use the term \emph{global gauge} for a global isomorphism $\eta \to X \times V$ (if it exists), and \emph{global gauge transformation} for a $\bfG$-valued function on $X$, viewed as an isomorphism between such trivial bundles. Let $\eta$ be a $C^{\infty}$ vector bundle with structure group $\bfG$, defined by the data $\set{U_{\alpha}, O_{(\alpha \beta)}}$.
A \emph{section $s$ of $\eta$} consists of local data $s_{(\alpha)}$ (the local expression for $s$ in the local gauge on $U_{\alpha}$), which are smooth functions $s_{(\alpha)} : U_{\alpha} \to V$ satisfying the compatibility condition \begin{equation*} s_{(\alpha)} = O_{(\alpha \beta)} s_{(\beta)} \quad \hbox{ on } U_{\alpha} \cap U_{\beta}. \end{equation*} A \emph{connection $\bfD$} on $\eta$ consists of local data $\mathrm{d} + A_{(\alpha)}$, where each $A_{(\alpha)}$ is a smooth $\frkg$-valued 1-form on $U_{\alpha}$ satisfying the compatibility condition \begin{equation*} A_{(\alpha)} = Ad(O_{(\alpha \beta)})A_{(\beta)} - \partial O_{(\alpha \beta)} O^{-1}_{(\alpha \beta)} \quad \hbox{ on } U_{\alpha} \cap U_{\beta}. \end{equation*} We call $A_{(\alpha)}$ a \emph{gauge potential} for $\bfD$ in the local gauge on $U_{\alpha}$. Observe that $\bfD$ defines a first order differential operator on the space of smooth sections of $\eta$, in the sense that $\bfD (f s) = \mathrm{d} f \, s + f \bfD s$ for any function $f$ and any section $s$. The space of all connections is denoted by $\mathcal A(\eta)$. As is well-known, $\mathcal A(\eta)$ has the structure of an affine space, in the sense that the difference of two connections $\bfD$ and $\bfD'$ is a 1-form taking values in the adjoint bundle $ad(\eta)$ (defined with the same data as $\eta$, but where $V = \frkg$ and $O_{(\alpha \beta)}$ acts on $V$ on the left by the adjoint action).
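The compatibility condition for the local data $A_{(\alpha)}$ is precisely what makes $\bfD$ well-defined on sections. Indeed, on $U_{\alpha} \cap U_{\beta}$, writing $O = O_{(\alpha \beta)}$ for brevity,
\begin{align*}
(\mathrm{d} + A_{(\alpha)}) s_{(\alpha)}
&= \mathrm{d} (O s_{(\beta)}) + A_{(\alpha)} O s_{(\beta)} \\
&= O \Big( \mathrm{d} s_{(\beta)} + Ad(O^{-1}) \big( A_{(\alpha)} + \mathrm{d} O \, O^{-1} \big) s_{(\beta)} \Big),
\end{align*}
so $(\bfD s)_{(\alpha)} = O_{(\alpha \beta)} (\bfD s)_{(\beta)}$ (i.e., $\bfD s$ again transforms like a section) exactly when $Ad(O^{-1}) ( A_{(\alpha)} + \mathrm{d} O \, O^{-1} ) = A_{(\beta)}$, which is a rearrangement of the stated condition.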
The curvature $2$-form of $\bfD$ is defined by the relation \begin{equation*} F[\bfD](X, Y) \cdot s = \bfD_{X} \bfD_{Y} s - \bfD_{Y} \bfD_{X} s - \bfD_{[X, Y]} s. \end{equation*} Locally, it takes the form \begin{equation*} F_{(\alpha)} = \mathrm{d} A_{(\alpha)} + \frac{1}{2} [A_{(\alpha)} \wedge A_{(\alpha)}] \quad \hbox{ on } U_{\alpha}, \end{equation*} and different local data are related to each other by \begin{equation*} F_{(\alpha)} = Ad(O_{(\alpha \beta)}) F_{(\beta)} \quad \hbox{ on } U_{\alpha} \cap U_{\beta}. \end{equation*} In other words, $F$ is an $ad(\eta)$-valued 2-form on $X$. Finally, we introduce the notion of the associated \emph{principal ${\bf G}$-bundle}, which is the bundle with data $\set{U_{\alpha}, O_{(\alpha \beta)}}$ and with the fibers modeled on the group ${\bf G}$, where the transition functions $O_{(\alpha \beta)}$ act on ${\bf G}$ by right multiplication. From the local viewpoint, it is simply a way to encapsulate the data $\set{U_{\alpha}, O_{(\alpha \beta)}}$ without reference to any vector space $V$. Principal bundles may serve as an alternative starting point for developing the theory of vector bundles (cf. Kobayashi--Nomizu \cite{MR1393940,MR1393941}). \subsection{Global gauges and topological classes of \texorpdfstring{$C^{\infty}$}{C-infty} connections} \label{subsec:smth} In the following few subsections, we specialize to the cases $X = B_{R}$ (a ball of radius $R$ in $\BigR^{d}$) or $X = \BigR^{d}$. Eventually, we aim to give a suitable definition of connections at the optimal regularity, and introduce the notion of topological classes of such connections. Before we embark on these goals, we first review the simple case of a $C^{\infty}$ connection with a compactly supported curvature. We start with the case $X = B_{R}$.
Since $B_{R}$ is contractible, all $C^{\infty}$ vector bundles over $B_{R}$ are trivial; more precisely, a global gauge (or trivialization) of $\eta$ on $B_{R}$ can be constructed by parallel transportation with respect to $\bfD$ along each ray starting from the center $x_{0}$ of $B_{R}$. We obtain a representative $A$ of $\bfD$ on $B_{R}$ such that \begin{equation} \label{eq:goodrep-smth-ball} A \in C^{\infty}(B_{R}; \frkg). \end{equation} Moreover, $(x - x_{0})^{j}A_{j} = 0$ by the parallel transport condition. Next, we consider the case $X = \BigR^{d}$. Since $\BigR^{d}$ is contractible, too, all $C^{\infty}$ vector bundles over $\BigR^{d}$ are trivial. However, when the vector bundle is endowed with a connection with compactly supported curvature, we may define its topological class by viewing it as a bundle on the compactification $\BigR^{d} \cup \set{\infty}$, which is homeomorphic to $\BigS^{d} = \set{X \in \BigR^{d+1} : \abs{X} = 1}$. More precisely, consider the stereographic projection \begin{equation} \label{eq:st-proj} \boldsymbol{\Sigma} : \BigS^{d} \to \BigR^{d}, \quad (X^{1}, \ldots, X^{d+1}) \mapsto \left( \frac{X^{1}}{1 - X^{d+1}}, \ldots, \frac{X^{d}}{1 - X^{d+1}} \right). \end{equation} Note that the pullback of $(\eta, \bfD)$ along $\boldsymbol{\Sigma}$, which we denote by $(\boldsymbol{\Sigma}^{\ast} \eta, \boldsymbol{\Sigma}^{\ast} \bfD)$, obeys $F[\boldsymbol{\Sigma}^{\ast} \bfD] = 0$ on $U'_{\infty} = \set{X \in \BigS^{d} : 0 < X^{d+1} < 1} = \boldsymbol{\Sigma}^{-1}(\BigR^{d} \setminus B_{1})$. Since $U_{\infty}'$ is simply connected, the pullback bundle $\boldsymbol{\Sigma}^{\ast} \eta$ is isomorphic to the trivial bundle $U'_{\infty} \times V$ \cite[Corollary~9.2]{MR1393940}, and this isomorphism may be easily extended to $U_{\infty} = \set{X \in \BigS^{d} : X^{d+1} > 0}$. Therefore, $\boldsymbol{\Sigma}^{\ast} \eta$ extends to a smooth vector bundle on $\BigS^{d}$.
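For the reader's convenience, we record the elementary computation behind the identification $U'_{\infty} = \boldsymbol{\Sigma}^{-1}(\BigR^{d} \setminus B_{1})$: for $X \in \BigS^{d}$ with $X^{d+1} < 1$,
\begin{equation*}
\abs{\boldsymbol{\Sigma}(X)}^{2}
= \frac{(X^{1})^{2} + \cdots + (X^{d})^{2}}{(1 - X^{d+1})^{2}}
= \frac{1 - (X^{d+1})^{2}}{(1 - X^{d+1})^{2}}
= \frac{1 + X^{d+1}}{1 - X^{d+1}},
\end{equation*}
which exceeds $1$ if and only if $X^{d+1} > 0$.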
The \emph{topological class} of $(\eta, \bfD)$ may be defined to be that of the extended bundle on $\BigS^{d}$. Since $\BigS^{d}$ is covered by two contractible open sets, namely $U_{0} = \BigS^{d} \setminus \set{(0, \ldots, 0, 1)}$ and $U_{\infty} = \BigS^{d} \setminus \set{(0, \ldots, 0, -1)}$, the topological class of the bundle on $\BigS^{d}$ is determined by the transition map between them. At the level of $\eta$, it is the transition map $O$ between $\BigR^{d}$, on which there exists a local representative $\bfD = \mathrm{d} + A$ with $A(0) = 0$ and $x^{j} A_{j} = 0$ (parallel transport along radial rays from $0$), and $\BigR^{d} \setminus B_{1}$, on which $\bfD = \mathrm{d}$. On $\BigR^{d} \setminus B_{1}$, we have \begin{equation*} A = - \partial_{x} O O^{-1}. \end{equation*} Moreover, since $x^{j} A_{j} = 0$, it follows that $x^{j} \partial_{j} O = 0$ on $\BigR^{d} \setminus B_{1}$, i.e., $O(x) = O(\frac{x}{\abs{x}})$ for $\abs{x} \geq 1$. Defining $O_{(\infty)} : \BigR^{d} \setminus \set{0} \to \bfG$, $O_{(\infty)}(x) = O(\frac{x}{\abs{x}})$, and introducing a smooth function $\chi$ such that $1-\chi$ is compactly supported, we arrive at: \begin{theorem} \label{thm:goodrep-smth} Let $\bfD$ be a $C^{\infty}$ connection on a $C^{\infty}$ vector bundle $\eta$ on $\BigR^{d}$, whose curvature is compactly supported. Then there exists a global gauge for $\eta$ in which the global gauge potential $A = \bfD - \mathrm{d}$ admits a decomposition of the form \begin{equation} \label{eq:goodrep-smth} A = - \chi O_{(\infty); x} + B \end{equation} where $O_{(\infty)}(x)$ is a smooth $0$-homogeneous map into $\bfG$ and $B \in C^{\infty}_{c}(\BigR^{d}; \frkg)$. \end{theorem} It is not difficult to see that $O_{(\infty)}$, which we call a \emph{gauge at infinity for $A$}, is defined uniquely up to homotopy (cf. Proposition~\ref{prop:goodrep-homotopy-0}).
The \emph{homotopy class $[O_{(\infty)}]$}, which is defined intrinsically without reference to the pullback procedure, \emph{determines the topological class}\footnote{Strictly speaking, $O_{(\infty)}$ in Theorem~\ref{thm:goodrep-smth} directly determines only the smooth isomorphism class, which in turn determines the topological (i.e., $C^{0}$) isomorphism class by a density argument.} \emph{of the extended pullback bundle on $\BigS^{d}$}. Hence, any topological invariant of the extended pullback bundle depends only on $[O_{(\infty)}]$. \emph{Characteristic classes} are important invariants of a vector (or principal) $\bfG$-bundle. On $\BigS^{d}$, by the Chern--Weil theory \cite[Chapter~XII]{MR1393941}, these may be defined in terms of a connection $\bfD$ as follows. Given any symmetric $Ad$-invariant $k$-linear function $f$ on $\frkg$, we call the $2k$-form \begin{equation*} f(F[\bfD], \ldots, F[\bfD]) = f(F_{j_{1} j_{2}}, \ldots, F_{j_{2k-1} j_{2k}}) \mathrm{d} x^{j_{1}} \wedge \mathrm{d} x^{j_{2}} \wedge \cdots \wedge \mathrm{d} x^{j_{2k}} \end{equation*} the \emph{characteristic class} associated to $f$. This $2k$-form is closed and is independent, up to an exact form, of the choice of the connection $\bfD$ on the bundle; hence it defines a cohomology class in $H^{2k}(\BigS^{d})$, which depends only on the isomorphism class of the bundle. Moreover, when $d = 2k$, the integral \begin{equation*} \boldsymbol{\chi}_{f} = \int_{\BigS^{d}} f(F[\bfD], \ldots, F[\bfD]), \end{equation*} called the \emph{characteristic number}, is also an invariant of the bundle. Now, as an application of Theorem~\ref{thm:goodrep-smth}, consider a $C^{\infty}$ connection $\bfD$ on $\BigR^{d}$ with compactly supported curvature. Then $\boldsymbol{\chi}_{f}$ of the pullback bundle equals \begin{equation} \label{eq:ch-no-smth} \boldsymbol{\chi}_{f} = \int_{\BigR^{d}} f(F[\bfD], \ldots, F[\bfD]), \end{equation} and depends only on $[O_{(\infty)}]$ in Theorem~\ref{thm:goodrep-smth}.
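We remark that, consistently with its topological nature, the characteristic number \eqref{eq:ch-no-smth} is invariant under the scaling $A \mapsto A_{(\lambda)} = \lambda A(\lambda \cdot)$. Indeed, $F[A_{(\lambda)}](x) = \lambda^{2} F[A](\lambda x)$, so by the $k$-linearity of $f$ and the change of variables $y = \lambda x$ (recalling $d = 2k$),
\begin{equation*}
\int_{\BigR^{d}} f(F[A_{(\lambda)}], \ldots, F[A_{(\lambda)}])
= \lambda^{2k - d} \int_{\BigR^{d}} f(F[A], \ldots, F[A])
= \boldsymbol{\chi}_{f}.
\end{equation*}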
An important special case of the above theory is when $d = 4$ and $\bfG = SU(2)$, and we take $f(A, B) = \frac{1}{8 \pi^{2}} \mathrm{tr}\,(AB)$. The corresponding characteristic number, given by the integral formula \begin{equation*} c_{2} = \frac{1}{8 \pi^{2}} \int_{\BigR^{4}} \mathrm{tr}\,(F \wedge F), \end{equation*} is called the \emph{second Chern number}. It is always an integer, and it classifies the topological classes of $SU(2)$-bundles. For more on characteristic classes, we refer the reader to \cite{MR0440554}. \subsection{Global gauges for rough \texorpdfstring{ $\bfG$}{G}-bundles} \label{subsec:rough-conn} We are now ready to describe our first set of results. Motivated by the desire to study the hyperbolic Yang--Mills equation (cf. Section~\ref{subsec:ym}) at the optimal scaling-invariant regularity, our aim here is to sharpen \eqref{eq:goodrep-smth-ball} and \eqref{eq:goodrep-smth} in two ways: \begin{enumerate} \item To obtain quantitative bounds for $A$ in a ``good global gauge'' in terms of $F$; \item To relax the condition on $F$ to the scaling-invariant condition $F \in L^{\frac{d}{2}}(X)$. \end{enumerate} In what follows, we restrict to $d \geq 3$ (which, for instance, avoids the case $L^{\frac{d}{2}} = L^{1}$). To set up the scene, we start with the definition of connections with $L^{\frac{d}{2}}_{loc}$ curvature. Let $X$ be an open subset of $\BigR^{d}$. For $k \in \BigR$ and $p \in [1, \infty]$, we introduce \begin{align} \mathcal G^{k, p}_{loc}(X) = & \set{O \in W^{k, p}_{loc}(X; \BigR^{N \times N}) : O(x) \in {\bf G} \hbox{ for a.e. } x \in X}. \end{align} The relevant regularity class is $\mathcal G^{2, \frac{d}{2}}_{loc}$, which turns out to be closed under multiplication and inversion (see Lemmas~\ref{lem:mult}, \ref{lem:inv} and \ref{lem:d-inv} below).
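Let us briefly motivate this choice of regularity. A gauge transformation acts on gauge potentials by $A \mapsto Ad(O) A - \mathrm{d} O \, O^{-1}$ (cf. Section~\ref{subsec:vb}); for this map to preserve the class $W^{1, \frac{d}{2}}_{loc}$ of potentials considered below, one needs $\mathrm{d} O \in W^{1, \frac{d}{2}}_{loc}$, i.e., $O \in \mathcal G^{2, \frac{d}{2}}_{loc}$. Moreover, this regularity is scaling-invariant: since $\nrm{g(\lambda \cdot)}_{L^{p}(\BigR^{d})} = \lambda^{-\frac{d}{p}} \nrm{g}_{L^{p}(\BigR^{d})}$,
\begin{equation*}
\nrm{\nabla^{2} [O(\lambda \cdot)]}_{L^{\frac{d}{2}}(\BigR^{d})}
= \lambda^{2} \cdot \lambda^{-2} \nrm{\nabla^{2} O}_{L^{\frac{d}{2}}(\BigR^{d})}
= \nrm{\nabla^{2} O}_{L^{\frac{d}{2}}(\BigR^{d})},
\end{equation*}
in agreement with the scaling-invariant curvature condition $F \in L^{\frac{d}{2}}$.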
In parallel to Section~\ref{subsec:vb}, we define \emph{a $\mathcal G^{2, \frac{d}{2}}_{loc}$ (principal) $\bfG$-bundle on $X \subseteq \BigR^{d}$} by the data: \begin{itemize} \item An open cover $\set{U_{\alpha}}$ of $X$; \item A transition function $O_{(\alpha \beta)} \in \mathcal G_{loc}^{2, \frac{d}{2}}(U_{\alpha} \cap U_{\beta})$ for every $\alpha, \beta$, obeying the \emph{cocycle conditions}: \begin{enumerate} \item $O_{(\alpha \alpha)} = id$ on each $U_{\alpha}$; \item $O_{(\alpha \beta)} \cdot O_{(\beta \gamma)} = O_{(\alpha \gamma)}$ on each $U_{\alpha} \cap U_{\beta} \cap U_{\gamma}$. \end{enumerate} \end{itemize} An open cover $\set{V_{\gamma}}$ is a \emph{refinement} of $\set{U_{\alpha}}$ if there exists a function $\alpha = \alpha(\gamma)$ such that $V_{\gamma} \subseteq U_{\alpha(\gamma)}$. We say that two data sets $\set{U_{\alpha}, O_{(\alpha \beta)}}$ and $\set{U'_{\alpha'}, O'_{(\alpha' \beta')}}$ define an \emph{equivalent $\mathcal G^{2, \frac{d}{2}}_{loc}$ bundle} if there exist a common refinement $\set{V_{\gamma}}$ of the two open covers and $P_{(\gamma)} \in \mathcal G^{2, \frac{d}{2}}_{loc}(V_{\gamma})$ such that \begin{equation*} P_{(\delta)} \cdot O_{(\alpha(\delta) \alpha(\gamma))} = O'_{(\alpha'(\delta) \alpha'(\gamma))} \cdot P_{(\gamma)} \quad \hbox{ on } V_{\gamma} \cap V_{\delta}. \end{equation*} A \emph{$W^{1, \frac{d}{2}}_{loc}$ connection} $\bfD$ on the bundle defined by $\set{U_{\alpha}, O_{(\alpha \beta)}}$ is given by the local data: \begin{itemize} \item A 1-form $A_{(\alpha)} \in W^{1, \frac{d}{2}}_{loc}(U_{\alpha}; \frkg)$ for each $\alpha$, called the \emph{local representative} of $\bfD$ on $U_{\alpha}$, satisfying the compatibility condition \begin{equation*} A_{(\alpha)} = Ad(O_{(\alpha \beta)}) A_{(\beta)} - O_{(\alpha \beta); x} \quad \hbox{ on each } U_{\alpha} \cap U_{\beta}.
\end{equation*} \end{itemize} Given a $W^{1, \frac{d}{2}}_{loc}$ connection $\bfD$, we define its \emph{curvature 2-form} $F = F[\bfD]$ by the local data: \begin{equation*} F_{(\alpha)} = \mathrm{d} A_{(\alpha)} + \frac{1}{2} [A_{(\alpha)} \wedge A_{(\alpha)}] \quad \hbox{ on each } U_{\alpha}. \end{equation*} We denote by $\mathcal A^{1, \frac{d}{2}}_{loc}(X)$ the space of all $W^{1, \frac{d}{2}}_{loc}$ connections on all $\mathcal G^{2, \frac{d}{2}}_{loc}$ bundles on $X$. By the compatibility property of $F_{(\alpha)}$ (algebraically the same as in the smooth case), note that \begin{equation*} \abs{F} = \abs{F_{(\alpha)}} = \sqrt{\brk{F_{(\alpha)}, F_{(\alpha)}}} \quad \hbox{ on each } U_{\alpha} \end{equation*} is a well-defined element of $L^{\frac{d}{2}}_{loc}(X)$. Consider the case $X = B_{R}$. In order to state quantitative bounds for the gauge potential in a ``good gauge'', we introduce the \emph{inner ($L^{\frac{d}{2}}$-)concentration scale} with threshold $\epsilon_{\ast}$ of a connection $\bfD$, defined as follows: \begin{equation*} \underline{r}_{c}^{\epsilon_{\ast}}[\bfD] = \sup \set{r > 0 : \nrm{F[\bfD]}_{L^{\frac{d}{2}}(B_{r}(x) \cap X)} \leq \epsilon_{\ast} \ \hbox{ for all } x \in X}. \end{equation*} \begin{theorem}[Good gauge on a ball] \label{thm:goodrep-ball} Let $\bfD \in \mathcal A^{1, \frac{d}{2}}_{loc}(B_{R})$ satisfy $F[\bfD] \in L^{\frac{d}{2}}(B_{R})$ and $\underline{r}_{c}^{\epsilon_{\ast}}[\bfD] \geq r$, for some $r > 0$ and a sufficiently small $\epsilon_{\ast} > 0$. Then there exists a global gauge in which the gauge potential $A$ for $\bfD$ satisfies \begin{equation} \label{eq:goodrep-ball-est} \nrm{A}_{\dot{W}^{1, \frac{d}{2}}(B_{R})} \lesssim_{\epsilon_{\ast}, \frac{R}{r}} 1. \end{equation} If, in addition, $\bfD^{(n)} F \in L^{p}(B_{R})$ for some nonnegative integer $n$ and $p \in (1, \infty)$ such that $p \geq \frac{d}{n+2}$, then $A \in W^{n+1, p}(B_{R})$.
\end{theorem} Theorem~\ref{thm:goodrep-ball} tells us that given any connection on a ball with $L^{\frac{d}{2}}$ curvature, there exists a good gauge in which the a priori bound \eqref{eq:goodrep-ball-est} holds. When $\nrm{F[\bfD]}_{L^{\frac{d}{2}}(B_{R})}$ is sufficiently small (with the threshold depending on $d$), Theorem~\ref{thm:goodrep-ball} is the classical result of Uhlenbeck \cite{MR648356}. The general case is proved by appropriately patching up local applications of Uhlenbeck's lemma. Next, we consider the case $X = \BigR^{d}$. To proceed, we need an additional concept. We define the \emph{outer ($L^{\frac{d}{2}}$-)concentration radius} with threshold $\epsilon_{\ast}$ of a connection $\bfD$ to be \begin{equation*} \underline{R}_{c}^{\epsilon_{\ast}}[\bfD] = \inf \set{r > 0 : \nrm{F[\bfD]}_{L^{\frac{d}{2}}(\BigR^{d} \setminus B_{r}(x))} \leq \epsilon_{\ast} \ \hbox{ for some } x \in \BigR^{d}}. \end{equation*} Let a smooth function $\chi$ with $1 - \chi \in C^{\infty}_{c}(\BigR^{d})$ be fixed. \begin{theorem}[Good global gauge on $\BigR^{d}$] \label{thm:goodrep} Let $\bfD \in \mathcal A^{1, \frac{d}{2}}_{loc}(\BigR^{d})$ satisfy $F[\bfD] \in L^{\frac{d}{2}}(\BigR^{d})$, as well as $\underline{r}_{c}^{\epsilon_{\ast}}[\bfD] \geq r$ and $\underline{R}_{c}^{\epsilon_\ast}[\bfD] \leq R$ for some $0 < r \leq R$ and a universal small constant $\epsilon_{\ast} > 0$. Then there exists a global gauge on $\BigR^{d}$, in which the gauge potential $A \in \dot{W}^{1, \frac{d}{2}}_{loc}(\BigR^{d})$ for $\bfD$ admits a decomposition of the form \begin{equation} \label{eq:goodrep-0} A = - \chi(\cdot / R) O_{(\infty); x} + B \end{equation} where $O_{(\infty)}(x)$ is a smooth $0$-homogeneous map into $\bfG$ and $B \in \dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)$.
Moreover, \begin{equation} \label{eq:goodrep-bnd} \nrm{B}_{\dot{W}^{1, \frac{d}{2}}} \lesssim_{\epsilon_{\ast}, \frac{R}{r}} 1, \qquad \nrm{O_{(\infty)}}_{C^{N}(\BigS^{d-1})} \lesssim_{\epsilon_{\ast}, \frac{R}{r}, N} 1 \quad \hbox{ for all } N \geq 0. \end{equation} If, in addition, $\bfD^{(n)} F \in L^{p}(B_{R})$ for some nonnegative integer $n$ and $p \in (1, \infty)$ such that $p \geq \frac{d}{n+2}$, then $B \in \dot{W}^{n+1, p}(\BigR^{d})$. \end{theorem} Thanks to Theorems~\ref{thm:goodrep-ball} and \ref{thm:goodrep}, we may identify any connection $\bfD \in \mathcal A^{1, \frac{d}{2}}(X)$ with a gauge potential $A \in W^{1, \frac{d}{2}}_{loc}(X)$ in a good global gauge. In the rest of the introduction, we adopt the convention of referring to a connection $\bfD$ on $B_{R}$ or $\BigR^{d}$ by its global gauge potential $A$. \subsection{Topological classes of rough connections} \label{subsec:top-class} Given a $W^{1, \frac{d}{2}}_{loc}$ connection $A$ on $\BigR^{d}$, we call a pair $(O_{(\infty)}, B)$ of a smooth $0$-homogeneous map into $\bfG$ and an element of $\dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)$ a \emph{good representative of $A$} if $A = - \chi O_{(\infty); x} + B$ for some $\chi$ with $1-\chi \in C^{\infty}_{c}(\BigR^{d})$. We furthermore call $O_{(\infty)}$ a \emph{gauge (transformation) at infinity for $A$}. Theorem~\ref{thm:goodrep} ensures that a good representative always exists provided that $F[A] \in L^{\frac{d}{2}}$. Recall that when the curvature is smooth and compactly supported, the topological class of $A$ is classified by the homotopy class of its gauge at infinity $O_{(\infty)}$. We extend the definition of the topological class to rough connections on $\BigR^{d}$ with $L^{\frac{d}{2}}$ curvature using this classification.
We need the following preliminary results: \begin{proposition} \label{prop:goodrep-homotopy-0} Let $A \in \mathcal A_{loc}^{1, \frac{d}{2}}(\BigR^{d})$ satisfy $F[A] \in L^{\frac{d}{2}}(\BigR^{d})$, and let $(O_{(\infty)}, B)$ be a good representative of $A$. \begin{enumerate} \item If $(O'_{(\infty)}, B')$ is another good representative of $A$, then $O_{(\infty)}$ is homotopic to $O'_{(\infty)}$. \item Conversely, given any smooth $O'_{(\infty)} : \BigS^{d-1} \to \bfG$ homotopic to $O_{(\infty)}$, there exists another good representative $(O'_{(\infty)}, B')$ of $A$. \end{enumerate} \end{proposition} \begin{remark} \label{rem:chi-indep-0} For completeness, we make the trivial observation that the homotopy class of $O_{(\infty)}$ is independent of the choice of $\chi$, too. \end{remark} Theorem~\ref{thm:goodrep}, Proposition~\ref{prop:goodrep-homotopy-0} and Remark~\ref{rem:chi-indep-0} lead to the following: \begin{definition} \label{def:top-class} Given a connection $A$ with $L^{\frac{d}{2}}$ curvature, we define the \emph{topological class} $[A]$ of $A$ to be the homotopy class of $O_{(\infty)} : \BigS^{d-1} \to \bfG$ of a good representative (i.e., of a gauge at infinity for $A$). If the topological class of $A'$ is $[A]$, then we write $A' \in [A]$. \end{definition} Observe that the addition of a 1-form $B$ in $\dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)$ does not change the topological class of $A$, i.e., \begin{equation*} A + B \in [A]. \end{equation*} In particular, by mollifying and cutting off $B$, we can easily find approximations of $A$ by smooth connections with compactly supported curvature in the same topological class with respect to the distance $d_{\dot{W}^{1, \frac{d}{2}}}(A, A') = \nrm{A - A'}_{\dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)}$. Moreover, good representatives of two connections with the same $O_{(\infty)}$ are path-connected with respect to $d_{\dot{W}^{1, \frac{d}{2}}}$.
By Proposition~\ref{prop:goodrep-homotopy-0}, it follows that each topological class is path-connected with respect to $d_{\dot{W}^{1, \frac{d}{2}}}$ up to global gauge transformations in $\mathcal G^{2, \frac{d}{2}}_{loc}(\BigR^{d})$. Observe also that the topological class is determined by the part of the connection where the $L^{\frac{d}{2}}$ norm of $F$ is concentrated. More precisely, we have: \begin{proposition} \label{prop:top-class-outer} Let $A, A' \in \mathcal A^{1, \frac{d}{2}}_{loc}(\BigR^{d})$ satisfy $F[A], F[A'] \in L^{\frac{d}{2}}(\BigR^{d})$. Assume moreover that $A$ and $A'$ are close in $L^{d}(B_{5R})$ and have small $L^{\frac{d}{2}}$ curvature outside $B_{R}$, i.e., \begin{equation*} \nrm{A - A'}_{L^{d}(B_{5R})} \leq \epsilon_{\ast}, \quad \nrm{F[A]}_{L^{\frac{d}{2}}(\BigR^{d} \setminus B_{R})} \leq \epsilon_{\ast}, \quad \nrm{F[A']}_{L^{\frac{d}{2}}(\BigR^{d} \setminus B_{R})} \leq \epsilon_{\ast}, \end{equation*} where $\epsilon_{\ast} > 0$ is a sufficiently small universal constant. Then $[A] = [A']$. \end{proposition} We now discuss some simple consequences of the above results. Given a connection $A$ with $L^{\frac{d}{2}}$ curvature, let $A^{n}$ be an approximation of $A$ in $d_{\dot{W}^{1, \frac{d}{2}}}$, such that each $A^{n}$ is smooth and $F[A^{n}]$ is compactly supported. For any symmetric $Ad$-invariant $k$-linear function $f$ on $\frkg$, the associated characteristic classes of the pullback bundles $(\boldsymbol{\Sigma}^{\ast} \eta, \boldsymbol{\Sigma}^{\ast} A^{n})$ are independent of $n$ (for sufficiently large $n$), as well as of the approximating sequence. Moreover, when $d = 2k$, the characteristic numbers obey \begin{equation*} \boldsymbol{\chi}_{f} = \int_{\BigR^{d}} f(F[A^{n}], \ldots, F[A^{n}]) \to \int_{\BigR^{d}} f(F[A], \ldots, F[A]) \end{equation*} by continuity of the integral with respect to $\nrm{A - A'}_{\dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)}$.
Hence we recover the following result of Uhlenbeck \cite{MR815194}: \begin{corollary}\label{cor:ch-class} The characteristic numbers $\boldsymbol{\chi}_{f}$, defined as in \eqref{eq:ch-no-smth}, depend only on $[A]$. In particular, they vanish for $[0]$. \end{corollary} As another corollary of Theorem~\ref{thm:goodrep}, we obtain a characterization of the topologically trivial class (i.e., the topological class of the trivial connection $A = 0$): \begin{corollary} \label{cor:top-triv} The space of topologically trivial connections with finite $L^{\frac{d}{2}}$ curvature corresponds exactly to \begin{equation*} \mathcal A^{1, \frac{d}{2}}_{0}(\BigR^{d}) = \set{\bfD = \mathrm{d} + A : A \in \dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)}. \end{equation*} All characteristic numbers associated to a connection $A$ in $\mathcal A^{1, \frac{d}{2}}_{0}(\BigR^{d})$ vanish. \end{corollary} \begin{remark} The preceding corollary implies that given any connection $A$ in the topologically trivial class, there exists a global representative $\tilde{A}$ in the space $\dot{W}^{1, \frac{d}{2}}(\BigR^{d}; \frkg)$. Note, however, that no quantitative bound on $\nrm{\tilde{A}}_{\dot{W}^{1, \frac{d}{2}}}$ is claimed; such a bound would rely on quantitative bounds on a homotopy of $O_{(\infty)}$ to the identity in terms of scaling-invariant bounds on $O_{(\infty)}$. \end{remark} \subsection{Hyperbolic Yang--Mills equation} \label{subsec:ym} The remainder of the introduction concerns the hyperbolic Yang--Mills equation. The purpose of this subsection is to provide a brief introduction to this equation. Let $\BigR^{1+d}$ denote the $(d+1)$-dimensional Minkowski space, which is equipped with the Minkowski metric ${\bf m}_{\mu \nu} = \mathrm{diag}(-1, +1, \ldots, +1)$ in the rectangular coordinates $(x^{0}, x^{1}, \ldots, x^{d})$. We will often write $t = x^{0}$ to emphasize the role of $x^{0}$ as (a choice of) a time function.
Throughout this paper, we will use the usual convention of raising and lowering indices using the Minkowski metric, as well as summing over repeated upper and lower indices. Consider a connection $\bfD$ on a vector bundle on $\BigR^{1+d}$ with structure group $\bfG$. By topological triviality of $\BigR^{d}$ (or Theorem~\ref{thm:goodrep} at low regularity), $\bfD$ at each $t$ may be identified with a global gauge potential $A$. The \emph{hyperbolic Yang--Mills equation} on $\BigR^{1+d}$ for $A$ is the Euler--Lagrange equation associated with the formal Lagrangian action functional \begin{equation*} \mathcal L(A) = \frac{1}{2} \int_{\BigR^{1+d}} \brk{F_{\alpha \beta}, F^{\alpha \beta}} \, \mathrm{d} x \mathrm{d} t, \end{equation*} which takes the form \begin{equation} \label{eq:ym} \bfD^{\alpha} F_{\alpha \beta} = 0. \end{equation} Clearly, \eqref{eq:ym} is invariant under (smooth) gauge transformations. This equation possesses a conserved energy, given by \begin{equation*} \mathcal E_{\set{t} \times \BigR^{d}}(A) = \int_{\set{t} \times \BigR^{d}} \sum_{\alpha < \beta} \abs{F_{\alpha \beta}}^{2} \, \mathrm{d} x. \end{equation*} Furthermore, \eqref{eq:ym} is invariant under the scaling \begin{equation*} A(t, x) \mapsto \lambda A(\lambda t, \lambda x) \qquad (\lambda > 0). \end{equation*} The scaling-invariant $L^{2}$-Sobolev norm is $\nrm{A(t, \cdot)}_{\dot{H}^{\frac{d-2}{2}}}$. In particular, \eqref{eq:ym} is \emph{energy critical} when $d = 4$, in the sense that the conserved energy (which scales like $\nrm{A(t, \cdot)}_{\dot{H}^{1}}$) is invariant under the scaling. We are interested in the initial value problem for \eqref{eq:ym} at the scaling-invariant $L^{2}$-Sobolev regularity. For this purpose we first formulate a gauge-covariant notion of initial data sets.
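Let us record the dimensional analysis behind the energy criticality assertion. Under the scaling $A \mapsto A_{(\lambda)} = \lambda A(\lambda t, \lambda x)$, the curvature transforms as $F[A_{(\lambda)}](t, x) = \lambda^{2} F[A](\lambda t, \lambda x)$, whence (writing $\abs{F}^{2} = \sum_{\alpha < \beta} \abs{F_{\alpha \beta}}^{2}$)
\begin{equation*}
\mathcal E_{\set{t} \times \BigR^{d}}(A_{(\lambda)})
= \int_{\BigR^{d}} \lambda^{4} \abs{F[A]}^{2}(\lambda t, \lambda x) \, \mathrm{d} x
= \lambda^{4 - d} \, \mathcal E_{\set{\lambda t} \times \BigR^{d}}(A),
\end{equation*}
so the energy is scaling-invariant exactly when $d = 4$.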
We say that a pair $(a, e)$ of a gauge potential $a$ and a $\frkg$-valued 1-form $e$ on $\BigR^{d}$ is an initial data set for a solution $A$ to \eqref{eq:ym} if \begin{equation*} (A_{j}, F_{0j}) \restriction_{\set{t = 0}} = (a_{j}, e_{j}). \end{equation*} Here and throughout this paper, the roman letters stand for the spatial coordinates $x^{1}, \ldots, x^{d}$. Note that \eqref{eq:ym} with $\beta = 0$ imposes the condition that \begin{equation} \label{eq:YMconstraint} \bfD^{j} e_{j} = \partial^{j} e_{j} + [a^{j}, e_{j}] = 0. \end{equation} This equation is the \emph{Gauss} (or the \emph{constraint}) \emph{equation} for \eqref{eq:ym}. It turns out that \eqref{eq:YMconstraint} characterizes precisely those pairs $(a, e)$ which can arise as an initial data set. Thus we make the following definition: \begin{definition} \label{def:ym-id} An $\mathcal H^{\sigma}(\mathcal O)$ (resp. $\dot{\mathcal H}^{\sigma}(\mathcal O)$ or $\mathcal H^{\sigma}_{loc}(\mathcal O)$) \emph{initial data set} for the Yang--Mills equation is a pair $(a, e) \in H^{\sigma} \times H^{\sigma-1}(\mathcal O)$ (resp. $\dot{H}^{\sigma} \times \dot{H}^{\sigma-1}(\mathcal O)$ or $H^{\sigma}_{loc} \times H^{\sigma-1}_{loc}(\mathcal O)$) that satisfies the constraint equation \eqref{eq:YMconstraint}. \end{definition} Due to invariance under gauge transformations, \eqref{eq:ym} is not even formally well-posed when viewed as a PDE for $A$. In order to analyze \eqref{eq:ym} at the level of $A$, this invariance must be removed by fixing a representative (or a gauge). A simple and useful way is to require that \begin{equation} \label{eq:temporal} A_{0} = 0. \end{equation} The gauge thus chosen is called \emph{temporal}. In this gauge, \eqref{eq:ym} becomes a coupled system of wave and transport equations for the curl and divergence of $A$, respectively, and local well-posedness for regular data easily follows.
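Schematically, with $A_{0} = 0$ we have $F_{0j} = \partial_{t} A_{j}$, and the components $\beta = 0$ and $\beta = k$ of \eqref{eq:ym} take the form
\begin{align*}
\partial_{t} (\partial^{j} A_{j}) &= - [A^{j}, \partial_{t} A_{j}], \\
\Box A_{k} &= \partial_{k} (\partial^{j} A_{j}) - \partial^{j} [A_{j}, A_{k}] - [A^{j}, F_{jk}],
\end{align*}
where $\Box = -\partial_{t}^{2} + \Delta$: the first equation is a transport-type equation in $t$ for the divergence $\partial^{j} A_{j}$, while the second is a nonlinear wave equation for $A_{k}$, coupled to the divergence through the first term on the right-hand side.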
Moreover, in the regular case it is also easy to verify the finite speed of propagation property, in the sense that $A$ vanishes on the domain of dependence of the zero-set of the data. The aforementioned coupled wave-transport system in the temporal gauge becomes difficult to analyze in the low regularity setting. Nonetheless, in \cite{OTYM2}, global well-posedness of \eqref{eq:ym} under \eqref{eq:temporal} was proved for small data at the optimal $L^{2}$-Sobolev regularity (for dimensions $d \geq 4$), by first working in a gauge with a more favorable structure (the caloric gauge), and then estimating the gauge transformation to the temporal gauge. At this point, one may imagine upgrading the small data result to large data local well-posedness by the following procedure: \begin{enumerate} \item Constructing local-in-spacetime solutions by applying the small data result to suitable localizations of the initial data; \item Patching the local-in-spacetime solutions together by finite speed of propagation. \end{enumerate} Though this strategy eventually works (see Section~\ref{subsec:local} below), it is not trivial. The primary reason is that the Gauss equation \eqref{eq:YMconstraint} is nonlocal, and thus initial data sets cannot be freely cut off. The next subsection is devoted to resolving this issue. \subsection{Excision and extension of Yang--Mills initial data} \label{subsec:excise} In this subsection we present the second set of results of this paper, which eventually lead to a useful excision-and-extension technique for Yang--Mills initial data. The first and main result is the solvability of the inhomogeneous Gauss equation \begin{equation} \label{eq:gauss-inhom} (\bfD^{(a)})^{\ell} e_{\ell} = h \end{equation} with good physical space support properties. \begin{theorem} \label{thm:gauss-0} Let $d \geq 4$ and $a \in \dot{H}^{\frac{d-2}{2}}(\BigR^{d})$.
Given any convex open set $K$, there exists a solution operator $T_{a}$ for \eqref{eq:gauss-inhom} satisfying the following conditions: \begin{enumerate} \item (Boundedness) We have \begin{equation} \label{eq:gauss-0-bnd} \nrm{T_{a}[h]}_{\dot{H}^{\frac{d-4}{2}}} \lesssim_{\nrm{a}_{\dot{H}^{\frac{d-2}{2}}}, L(K)} \nrm{h}_{\dot{H}^{\frac{d-6}{2}}}, \end{equation} where $L(K)$ is a scaling-invariant quantity (i.e., $L(\lambda K)$ is independent of $\lambda > 0$) defined in \eqref{eq:lip-K}. \item (Exterior support property) If $h$ is supported outside the set \begin{equation*} \lambda K = \set{x_{K} + \lambda (x - x_{K}) : x \in K}, \end{equation*} where $x_{K}$ is the barycenter of $K$, for some $\lambda > 0$, then so is $T_{a}[h]$. \item (Higher regularity) If $h$ and $a$ are smooth, so is $T_{a}[h]$. \end{enumerate} \end{theorem} \begin{remark} \label{rem:gauss-low-d} In $d \leq 3$, our proof does not apply at the critical regularity $e \in \dot{H}^{\frac{d-4}{2}}$, since the possible error of \eqref{eq:YMconstraint} belongs only to the ill-behaved space $\dot{H}^{-\frac{3}{2}}$. However, under an extra smallness assumption for $\nrm{a}_{\dot{H}^{\frac{d-2}{2}}}$, the conclusion of Theorem~\ref{thm:gauss-0} holds for $h \in \dot{H}^{\sigma-1}$ and $e \in \dot{H}^{\sigma}$ for the subcritical regularities $\sigma > 1- \frac{d}{2}$; see Proposition~\ref{prop:gauss-small} below. \end{remark} As a consequence of Theorem~\ref{thm:gauss-0}, we have the following extension result for the Yang--Mills initial data sets. \begin{theorem} \label{thm:ext-id} For $d \geq 4$, let $K$ be a convex domain in $\BigR^{d}$, and let $(a, e)$ be an $\mathcal H^{\frac{d-2}{2}}$ Yang--Mills initial data set on $2K \setminus \overline{K}$.
Then there exists an $\mathcal H^{\frac{d-2}{2}}$ Yang--Mills initial data set $(\bar{a}, \bar{e})$ on $\BigR^{d} \setminus \overline{K}$ that coincides with $(a, e)$ on $2K \setminus \overline{K}$ and obeys \begin{align} \nrm{\bar{a}}_{\dot{H}^{\frac{d-2}{2}}(\BigR^{d} \setminus \overline{K})} & \lesssim_{L(K)} \nrm{a}_{\dot{H}^{\frac{d-2}{2}}(2K \setminus \overline{K})}, \label{eq:ext-id-a} \\ \nrm{\bar{e}}_{\dot{H}^{\frac{d-4}{2}}(\BigR^{d} \setminus \overline{K})} & \lesssim_{\nrm{a}_{\dot{H}^{\frac{d-2}{2}}(2K \setminus \overline{K})}, L(K)} \nrm{e}_{\dot{H}^{\frac{d-4}{2}}(2K \setminus \overline{K})}. \label{eq:ext-id-e} \end{align} It can be arranged so that the association $(a, e) \mapsto (\bar{a}, \bar{e})$ is equivariant under constant gauge transformations, i.e., $(Ad(O) a, Ad(O) e) \mapsto (Ad(O) \bar{a}, Ad(O) \bar{e})$ for each $O \in \bfG$. Moreover, if $(a, e)$ is smooth, then so is $(\bar{a}, \bar{e})$. \end{theorem} At this point, it is useful to introduce a suitable generalization of local energy for initial data sets at the optimal $L^{2}$-Sobolev regularity. For $d \geq 4$ even, we make a gauge-invariant definition \begin{equation*} \mathcal E^{\frac{d-2}{2}}_{U}(a, e) = \nrm{(\bfD^{(a)})^{(\frac{d-4}{2})} (F[a], e)}_{L^{2}(U)}^{2} + \nrm{(F[a], e)}_{L^{\frac{d}{2}}(U)}^{2}. \end{equation*} Note that this is equivalent to the energy when $d = 4$. For $d \geq 4$ odd, there is a nuisance that the optimal $L^{2}$-Sobolev regularity involves a fractional derivative. Here, we take an easy way out, and make a gauge-dependent definition in this case: \begin{equation*} \mathcal E^{\frac{d-2}{2}}_{U}(a, e) = \nrm{(a, e)}_{\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}(U)}^{2}. \end{equation*} Let $\epsilon_{\ast} > 0$.
For $X = B_{R}$ or $\BigR^{d}$, we define the notion of the (inner) critical $L^{2}$-Sobolev concentration scale with threshold $\epsilon_{\ast}$ as follows: \begin{align} r_{c}^{\epsilon_{\ast}} =& r_{c}^{\epsilon_{\ast}}[a, e] = \sup \set{r > 0: \mathcal E^{\frac{d-2}{2}}_{X \cap B_{r}(x)}(a, e) \leq \epsilon_{\ast}^{2} \hbox{ for all } x \in X}. \label{eq:conc-scale-id} \end{align} When $d = 4$, we call $r_{c}^{\epsilon_{\ast}}$ the \emph{energy concentration scale} with threshold $\epsilon_{\ast}$. Combining Theorem~\ref{thm:ext-id} with Uhlenbeck's lemma, we also obtain the following excision-and-extension result. \begin{theorem} \label{thm:excise} Let $(a, e)$ be an $\mathcal H^{\frac{d-2}{2}}_{loc}$ Yang--Mills initial data set on $X = B_{R}$ (resp. $X = \BigR^{d}$) with critical $L^{2}$-Sobolev concentration scale (with threshold $\epsilon_{\ast}$) at least $r_{c}$. Consider a ball $B_{r}(x)$ with radius $r < 10 r_{c}$ and $x \in X$. For $\epsilon_{\ast} > 0$ sufficiently small (as a universal constant), the following statements hold. \begin{enumerate} \item To $(a, e)$, we associate $(\tilde{a}, \tilde{e}, O) \in \mathcal H^{\frac{d-2}{2}}(\BigR^{d}) \times \mathcal G^{\frac{d}{2}}(B_{r}(x) \cap X)$ such that $(\tilde{a}, \tilde{e})$ is gauge equivalent to $(a, e)$ on $B_{r}(x) \cap X$, i.e., \begin{equation*} (\tilde{a}, \tilde{e}) = (Ad(O) a - O_{;x}, Ad(O) e) \quad \hbox{ in } B_{r}(x) \cap X. \end{equation*} Moreover, $(\tilde{a}, \tilde{e})$ and $O$ obey the bounds \begin{align} \nrm{(\tilde{a}, \tilde{e})}_{\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}}^{2} + r^{-(d-2)} \nrm{\tilde{a}}_{L^{2}}^{2} + r^{-(d-4)} \nrm{\tilde{e}}_{L^{2}}^{2} \lesssim & \mathcal E^{\frac{d-2}{2}}_{B_{r}(x) \cap X}(a, e), \label{eq:excise-a} \\ \nrm{O_{;x}}_{\dot{H}^{\frac{d-2}{2}}(B_{r}(x) \cap X)} \lesssim & \nrm{a}_{\dot{H}^{\frac{d-2}{2}}(B_{r}(x) \cap X)}.
\label{eq:excise-O} \end{align} When $d$ is odd, $O$ is a constant gauge transformation. If $(a, e)$ is smooth, then so are $(\tilde{a}, \tilde{e})$ and $O$. \item Let $\set{(a^{n}, e^{n})}$ be a sequence of $\mathcal H^{\frac{d-2}{2}}$ Yang--Mills initial data sets on $B_{r}(x) \cap X$ such that $(a^{n}, e^{n}) \to (a, e)$ in $H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}}(B_{r}(x) \cap X)$. Let $(\tilde{a}^{n}, \tilde{e}^{n}, O^{n})$ be given\footnote{Note that the hypothesis on the critical $L^{2}$-Sobolev concentration scale is satisfied for large enough $n$.} by (1) from $(a^{n}, e^{n})$. Then after passing to a subsequence and suitably conjugating each $(\tilde{a}^{n}, \tilde{e}^{n}, O^{n})$ with a constant gauge transformation, we have \begin{align*} (\tilde{a}^{n}, \tilde{e}^{n}) \to (\tilde{a}, \tilde{e}) \hbox{ in } H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}}(\BigR^{d}), \qquad O^{n} \to O \hbox{ in } H^{\frac{d}{2}}(B_{r}(x) \cap X). \end{align*} \end{enumerate} \end{theorem} \begin{remark} Theorems~\ref{thm:ext-id} and \ref{thm:excise} have a similar flavor to the so-called \emph{initial data gluing} procedure in general relativity \cite{ChDe, Co, CoSch}, which is a method to remove an error in the constraint equation while keeping physical space localization properties. See \cite{OT1} for an adaptation of this procedure for the Maxwell--Klein--Gordon constraint equation at the critical regularity, which played a similar role as Theorems~\ref{thm:ext-id} and \ref{thm:excise} in the present paper. We also note that an initial data extension theorem, analogous to Theorem~\ref{thm:ext-id}, was recently proved for the vacuum Einstein equation at the $L^{2}$-curvature regularity \cite{Czi1, Czi2}. \end{remark} As is evident from (2), it is natural to view the association $(a, e) \mapsto (\tilde{a}, \tilde{e}, O)$ in (1) as defined up to a constant gauge transformation.
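As a quick sanity check on the normalizations above (a standard observation, not needed later): under the critical scaling of the Yang--Mills equation,
\begin{equation*}
(a, e) \mapsto (a_{\lambda}, e_{\lambda}) = \big( \lambda^{-1} a(\cdot / \lambda), \lambda^{-2} e(\cdot / \lambda) \big), \qquad \lambda > 0,
\end{equation*}
the norm $\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}(\BigR^{d})$ is invariant, and on $X = \BigR^{d}$ the concentration scale transforms as $r_{c}^{\epsilon_{\ast}}[a_{\lambda}, e_{\lambda}] = \lambda \, r_{c}^{\epsilon_{\ast}}[a, e]$; this is the sense in which $r_{c}^{\epsilon_{\ast}}$ measures the length scale at which the data concentrate.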
\subsection{Local theory in an arbitrary topological class} \label{subsec:local} We present the third set of results of this paper, which concern the local theory of \eqref{eq:ym} for arbitrary $\mathcal H^{\frac{d-2}{2}}_{loc}$ initial data sets. The main local well-posedness results in the temporal gauge (Theorems~\ref{thm:local-temp} and \ref{thm:local-temp-sub}) are proved as consequences of the finite speed of propagation property of \eqref{eq:ym}, the results in Section~\ref{subsec:excise} and small data well-posedness results \cite{OTYM2, TaoYM}. We start with a (rather general) basic definition of a solution. \begin{definition} \label{def:ym-sol-rough} \begin{enumerate} \item An \emph{$\mathcal H^{\frac{d-2}{2}}_{loc}$ connection} in an open set $\mathcal O \subseteq \BigR^{1+d}$ is a connection $\bfD = \mathrm{d} + A$ satisfying \begin{equation*} (A, \partial_{t} A) \in C_{t} H^{\frac{d-2}{2}}_{loc} \times C_{t} H^{\frac{d-4}{2}}_{loc}(\mathcal O). \end{equation*} \item An \emph{$\mathcal H^{\frac{d-2}{2}}_{loc}$ solution to the hyperbolic Yang--Mills equation \eqref{eq:ym}} in $\mathcal O$ is an $\mathcal H^{\frac{d-2}{2}}_{loc}$ connection $\bfD = \mathrm{d} + A$ in $\mathcal O$ which is the limit of regular solutions in the topology $C_{t} H^{\frac{d-2}{2}}_{loc} \times C_{t} H^{\frac{d-4}{2}}_{loc}(\mathcal O)$. \end{enumerate} \end{definition} It is straightforward to see that the set of $\mathcal H^{\frac{d-2}{2}}_{loc}$ solutions is closed with respect to the $C_{t} H^{\frac{d-2}{2}}_{loc} \times C_{t} H^{\frac{d-4}{2}}_{loc}$ topology. Next, we formulate the notion of gauge covariance of $\mathcal H^{\frac{d-2}{2}}_{loc}$ connections, as follows: \begin{definition} \label{def:ym-sol-rough-gt} \begin{enumerate} \item A \emph{regular gauge transformation} in an open set $\mathcal O \subseteq \BigR^{1+d}$ is a map $O : \mathcal O \to \bfG$ with the regularity properties $O_{;t, x} \in C_{t} H^{N}_{loc}$.
\item An \emph{admissible gauge transformation} in $\mathcal O$ is a map $O : \mathcal O \to \bfG$ with the regularity properties $O_{;t, x} \in C_{t} H^{\frac{d-2}{2}}_{loc}$. \item We say that two $\mathcal H^{\frac{d-2}{2}}$ connections $A^{(1)}$ and $A^{(2)}$ in $\mathcal O$ are gauge equivalent if there exists an admissible gauge transformation $O$ in $\mathcal O$ such that $A^{(2)}_{j} = Ad(O) A^{(1)}_{j} - O_{;j}$. \end{enumerate} \end{definition} Any admissible gauge transformation may be approximated by regular gauge transformations in $C_{t} H^{\frac{d}{2}}_{loc}$ (the proof is a straightforward variant of Lemma~\ref{lem:part-approx} below, and is left to the reader). As a consequence, if $A$ and $A'$ are gauge equivalent $\mathcal H^{\frac{d-2}{2}}$ connections in $\mathcal O$, then $A$ is an $\mathcal H^{\frac{d-2}{2}}$ solution to \eqref{eq:ym} if and only if $A'$ is. Moreover, the class of gauge-equivalent connections is closed: \begin{proposition} \label{prop:closed-class} The class $[A]$ of gauge-equivalent $\mathcal H^{\frac{d-2}{2}}$ connections is closed in the topology $C_{t} H^{\frac{d-2}{2}}_{loc} \times C_{t} H^{\frac{d-4}{2}}_{loc}(\mathcal O)$. \end{proposition} With the basic notion of a solution in our hands, we are ready to discuss the local theory of \eqref{eq:ym} for $\mathcal H^{\frac{d-2}{2}}_{loc}$ initial data sets. Given a subset $X$ of $\BigR^{d}$ and a time interval $I$, denote by $\mathcal D_{I}(X)$ the future domain of dependence of $X$, intersected with $I \times \BigR^{d}$: \begin{equation*} \mathcal D_{I}(X) = \set{(t, x) \in [0, \infty) \times \BigR^{d} : B_{t}(x) \subseteq X} \cap I \times \BigR^{d}.
\end{equation*} In \cite{OTYM2}, global well-posedness of \eqref{eq:ym} in the temporal gauge for small $\dot{\mathcal H}^{\frac{d-2}{2}}$ data on $\BigR^{d}$ was proved for dimensions\footnote{The exposition of \cite{OTYM2} is focused on the case $d = 4$, but the proof extends in a straightforward manner to $d \geq 4$.} $d \geq 4$ (see Theorem~\ref{thm:small-temp} below). Combined with the excision-and-extension result in Section~\ref{subsec:excise} and the finite speed of propagation property in the temporal gauge, we obtain: \begin{theorem}[Local well-posedness at optimal regularity, $d \geq 4$] \label{thm:local-temp} For $d \geq 4$, there exists a dimensional constant $\epsilon_{\ast} > 0$ such that the Yang--Mills equation in the temporal gauge is locally well-posed on the time interval of length $r_{c}^{\epsilon_\ast} = r_{c}^{\epsilon_\ast} [a, e]$ for initial data $(a, e) \in \mathcal H^{\frac{d-2}{2}}_{loc}(X)$ for $X = B_{R}$ or $\BigR^{d}$. More precisely, the following statements hold. \begin{enumerate} \item (Regular data) Let $(a, e) $ be a smooth Yang--Mills initial data set on $X$. Then there exists a unique smooth solution $A_{t,x}$ to the Yang--Mills equation in the temporal gauge on $\mathcal D_{[0, r_{c})}(X)$ such that $(A_{j}, F_{0j}) \restriction_{\set{t = 0}} = (a_{j}, e_{j})$. \item (Rough data) Let $\mathcal H^{\frac{d-2}{2}}_{loc, \, r_{c}}(X)$ be the class of $\mathcal H^{\frac{d-2}{2}}_{loc}(X)$ Yang--Mills initial data sets with concentration scale $\geq r_{c}$, topologized with the norm \begin{equation*} \nrm{(a, e)}_{\mathcal H^{\frac{d-2}{2}}_{loc, \, r_{c}}(X)} = \sup_{x \in X} \nrm{(a, e)}_{\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}} (B_{r_{c}}(x) \cap X)}.
\end{equation*} Then the data-to-solution map admits a continuous extension \begin{equation} \label{eq:local-temp-cont} \mathcal H^{\frac{d-2}{2}}_{loc, \, r_{c}}(X) \ni (a, e) \mapsto (A_{x}, \partial_{t} A_{x}) \in C_{t} \mathcal H^{\frac{d-2}{2}}_{loc, \, r_{c}}(\mathcal D_{[0, r_{c})}(X)). \end{equation} \item (A-priori bound) The solution defined as above obeys the a-priori bound \begin{equation} \label{eq:local-temp-bnd} \nrm{(A, \partial_{t} A)}_{L^{\infty} (H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}}) (\mathcal D_{[0, r_{c})}(B_{R'}(x)))} \lesssim \nrm{(a, e)}_{H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}}(B_{R'}(x))} \end{equation} for any $B_{R'}(x) \subseteq X$. \end{enumerate} \end{theorem} The temporal gauge solution given by Theorem~\ref{thm:local-temp} represents any $\mathcal H^{\frac{d-2}{2}}_{loc}$ solution in the sense of Definition~\ref{def:ym-sol-rough}. \begin{theorem} \label{thm:equiv-temp} Any $\mathcal H^{\frac{d-2}{2}}_{loc}$ solution to the hyperbolic Yang--Mills equation in $\mathcal D_{I}(X)$ (where $X = B_{R}$ or $\BigR^{d}$) can be put into the temporal gauge. \end{theorem} When $X = \BigR^{d}$, we say that $A$ is an $\mathcal H^{\frac{d-2}{2}}$ solution to the hyperbolic Yang--Mills equation in $I \times \BigR^{d}$ if it is an $\mathcal H^{\frac{d-2}{2}}_{loc}$ solution, and moreover satisfies the following condition for every $t \in I$: \begin{equation} \label{eq:tail} \mathcal E^{\frac{d-2}{2}}_{\BigR^{d}}(A_{x}(t) , F_{0x}(t)) < \infty. \end{equation} By Uhlenbeck's lemma and Theorem~\ref{thm:local-temp}.(3), \eqref{eq:tail} holds for every $t \in I$ if it holds for the data $(a, e)$ at some $t \in I$. For such a solution, the topological class of $A_{x}(t)$ is preserved under the hyperbolic Yang--Mills evolution. \begin{proposition} \label{prop:top-class-ym} Let $A$ be an $\mathcal H^{\frac{d-2}{2}}$ solution to \eqref{eq:ym} in $I \times \BigR^{d}$. Then $[A_{x}(t)]$ is constant in $t$.
\end{proposition} The temporal gauge is convenient in order to deal with causality, but it lacks good dispersive bounds, in contrast to the caloric gauge \cite{OTYM2} (cf. also the small data result in the Coulomb gauge in \cite{KT}). In a different global gauge, the caloric gauge regularity may be patched up, as the following sample result demonstrates: \begin{theorem} \label{thm:imp-reg} Let $A$ be an $\mathcal H^{\frac{d-2}{2}}_{loc}$ solution to \eqref{eq:ym} in $\mathcal D_{[0, r_{c})}(B_{R})$, whose initial data set has critical $L^{2}$-Sobolev concentration scale $\geq r_{c}$ with sufficiently small $\epsilon_{\ast} > 0$. In a suitable global gauge in $D = [0, r_{c}) \times B_{R - 4 r_{c}}$, the solution obeys \begin{equation} \label{eq:imp-reg} \nrm{\nabla A_{x}}_{L^{\infty} \dot{H}^{\frac{d-4}{2}}(D)} + \nrm{\Box A_{x}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}(D)} + \nrm{\nabla A_{0}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-3}{2}}(D)} \lesssim_{\epsilon_{\ast}, \frac{R}{r_{c}}} 1. \end{equation} \end{theorem} \begin{remark} The restriction to $[0, r_{c}) \times B_{R - 4 r_{c}}$ instead of $\mathcal D_{[0, r_{c})}(B_{R})$ is enforced merely to avoid technical issues near the boundary, and may be removed if desired. We do not pursue this improvement, since Theorem~\ref{thm:imp-reg} suffices for our application in \cite{OTYM3}. \end{remark} Finally, we discuss the application of our techniques to the case $d = 3$. For $X = B_{R}$ or $\BigR^{3}$, we topologize the space $\mathcal H^{\sigma}_{loc}(X)$ with the norm \begin{equation*} \nrm{(a, e)}_{\mathcal H^{\sigma}_{loc}(X)} = \sup_{x \in X} \nrm{(a, e)}_{H^{\sigma} \times H^{\sigma-1}(B_{1}(x) \cap X)}. \end{equation*} From the small data local well-posedness result of Tao \cite{TaoYM}, we obtain the following large data result: \begin{theorem}[Local well-posedness in the temporal gauge, $d = 3$] \label{thm:local-temp-sub} Let $\sigma > \frac{3}{4}$.
The Yang--Mills equation in the temporal gauge is locally well-posed for initial data $(a, e) \in \mathcal H^{\sigma}_{loc}(\BigR^{3})$ on a time interval of length $\geq T(\nrm{(a, e)}_{\mathcal H^{\sigma}_{loc}})$. \end{theorem} Moreover, the techniques of this paper lead to an alternative proof of the classical result of Klainerman--Machedon \cite{KlMa2}: \begin{theorem} \label{thm:KM} The Yang--Mills equation in the temporal gauge is globally well-posed for initial data $(a, e) \in \mathcal H^{1}_{loc}(\BigR^{3})$. \end{theorem} An advantage of the present approach is that the delicate issue of boundary values on spacetime cones (i.e., the domains of dependence of balls) is avoided by the robust excision-and-extension procedure. We note that yet another proof of Theorem~\ref{thm:KM}, relying on a global gauge defined by the Yang--Mills heat flow (a subcritical version of the \emph{caloric gauge} we use in the present series \cite{OTYM1, OTYM2, OTYM3}), was given by the first author \cite{Oh1, Oh2}. \subsection{Topological classes, instantons and harmonic Yang--Mills connections on \texorpdfstring{$\BigR^{4}$}{R4}} \label{subsec:threshold} In this subsection, we restrict to the energy critical dimension $d = 4$, and discuss the relationship between the topological class of a connection $a$ on $\BigR^{4}$ and its \emph{static energy} \begin{equation} \label{eq:ym-har-en} \calE_{e}(a) = \mathcal E_{\BigR^{4}}(a, 0) = \frac{1}{2} \int_{\BigR^{4}} \brk{F_{jk}[a], F^{jk}[a]} \, \mathrm{d} x. \end{equation} Recall that each topological class $[a]$ of finite energy connections forms a path-connected component in the $\dot{H}^{1}$ distance up to gauge transformations (Section~\ref{subsec:top-class}).
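We note in passing (a standard observation, recorded here for orientation) that in the energy critical dimension the static energy is invariant under scaling:
\begin{equation*}
\calE_{e}\big( \lambda^{-1} a(\cdot/\lambda) \big) = \calE_{e}(a) \qquad \hbox{for all } \lambda > 0,
\end{equation*}
since $F[\lambda^{-1} a(\cdot/\lambda)] = \lambda^{-2} F[a](\cdot/\lambda)$ and the factor $\lambda^{-4}$ from $\abs{F}^{2}$ cancels against the volume factor $\lambda^{4}$. In particular, any minimizer comes with a full one-parameter family of rescaled minimizers, one source of noncompactness in this problem.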
We may therefore look for an absolute minimizer of $\calE_{e}(a)$ in each topological class; such a connection is called an \emph{instanton}\footnote{Usually, one also distinguishes between an instanton and an anti-instanton, depending on whether the curvature is self- or anti-self-dual. Here, we make no such distinction.}. More generally, we refer to a critical point of \eqref{eq:ym-har-en} as a \emph{harmonic Yang--Mills connection}. Such connections are clearly static solutions to both the Yang--Mills heat flow and the hyperbolic Yang--Mills equation, and hence obstructions to convergence of solutions to the trivial connection (as well as scattering). Moreover, these connections may also arise as ``bubbles'' near the singularity of a dynamic solution. Therefore, knowledge of the energies of the harmonic Yang--Mills connections is necessary for determining the precise threshold energy in the Threshold Theorem, both for the Yang--Mills heat flow \cite{OTYM1} and for the hyperbolic Yang--Mills equation \cite{OTYM3}. We open our discussion with the important special case $\bfG = SU(2)$. The corresponding Lie algebra $\frkg = su(2)$ consists of $2 \times 2$ complex anti-hermitian matrices with zero trace. We furthermore assume that the $Ad$-invariant inner product on $\frkg$ takes the form \begin{equation*} \brk{A, B} = - \mathrm{tr}\, (A B). \end{equation*} In fact, as all $Ad$-invariant inner products on $\frkg$ are positive multiples of each other, there is no loss of generality. In this case, the topological classes of finite energy connections are classified by the second Chern number $c_{2}$, which takes the explicit form (via the Chern--Weil theory) \begin{equation} \label{eq:chern-no} c_{2} = \frac{1}{8 \pi^{2}} \int_{\BigR^{4}} \mathrm{tr}\, (F[a] \wedge F[a]).
\end{equation} For any finite energy connection $a$, the second Chern number $c_{2}$ is an integer; in fact, it equals the degree of the $0$-homogeneous map $O$ (defined using the homeomorphism $SU(2) \simeq \BigS^{3}$) in Theorem~\ref{thm:goodrep}. A simple algebraic manipulation using the Hodge star operator\footnote{To define $\star$, we use the standard inner product on $2$-forms such that $\set{\mathrm{d} x^{j} \wedge \mathrm{d} x^{k} : j < k}$ is an orthonormal basis.} $\star$ shows that \begin{align*} \brk{F_{jk}[a], F^{jk}[a]} =& - \star 2 \mathrm{tr}\,(F \wedge \star F) \\ = & - \star \mathrm{tr}\,((F \pm \star F) \wedge \star (F \pm \star F)) \pm 2 \star \mathrm{tr}\, (F \wedge F) \\ = & \frac{1}{2} \brk{F \pm \star F, F \pm \star F} \pm 2 \star \mathrm{tr}\, (F \wedge F) . \end{align*} Note that the first term on the last line is nonnegative. Integrating over $\BigR^{4}$, we obtain the \emph{Bogomol'nyi bound} \begin{equation} \label{eq:bog} \calE_{e}(a) \geq 8 \pi^{2} \abs{c_{2}}. \end{equation} The equality holds (in which case, $a$ is an instanton) if and only if $F = \mp \star F$, where $\pm$ is the sign of $c_{2}$. We call such a connection \emph{anti-self-dual} or \emph{self-dual}, respectively. There is a beautiful theory due to Atiyah--Drinfeld--Hitchin--Manin \cite{ADHM}, which gives an explicit construction of all anti-self-dual (resp. self-dual) connections with $c_{2} > 0$ (resp. $c_{2} < 0$). In particular, we have: \begin{theorem}[\cite{ADHM}]\label{thm:instanton-SU2} For any $\kappa \in \BigZ$, there exists an instanton with $c_{2} = - \kappa$ and energy $8 \pi^{2} \abs{\kappa}$. \end{theorem} However, the instantons do not tell the full story. It is known that there also exist nontrivial harmonic Yang--Mills connections which are neither self-dual nor anti-self-dual \cite{SSU, Bor, SaSe, Parker}.
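For concreteness, we recall (without proof, and modulo sign and orientation conventions, which we do not track) the explicit form of a first instanton, the BPST solution: in a gauge in which the connection is pure gauge at infinity,
\begin{equation*}
a_{j}(x) = \frac{\abs{x}^{2}}{\abs{x}^{2} + \rho^{2}} \, (O^{-1} \partial_{j} O)(x), \qquad O(x) = \frac{1}{\abs{x}} \Big( x^{4} \, \mathrm{Id} + i \sum_{m=1}^{3} x^{m} \sigma_{m} \Big) \in SU(2),
\end{equation*}
where $\sigma_{m}$ are the Pauli matrices and $\rho > 0$ is a free scale parameter. Its curvature is (anti-)self-dual, $\abs{c_{2}} = 1$, and $\calE_{e}(a) = 8 \pi^{2}$, so \eqref{eq:bog} is saturated.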
Nevertheless, by the recent result of Gursky--Kelleher--Streets \cite{GKS}, they must have energy at least $16 \pi^{2}$ more than the Bogomol'nyi bound\footnote{Note that \cite[Corollary~1.2]{GKS} is stated on $\BigS^{4}$, but the same conclusion holds on $\BigR^{4}$ by conformal invariance of the harmonic Yang--Mills equation and $\calE_{e}$. Moreover, to compare the results, recall that $\calE_{e}(a) = \frac{1}{2} \nrm{F[a]}_{L^{2}}^{2}$.}: \begin{theorem}[{\cite[Corollary~1.2]{GKS}}] \label{thm:GKS-SU2} Any harmonic Yang--Mills connection on $\BigR^{4}$ either has energy equal to $8 \pi^{2} \abs{c_{2}}$, or has energy at least $8 \pi^{2} \abs{c_{2}} + 16 \pi^{2}$. \end{theorem} In conclusion, we see that: \emph{Any nontrivial harmonic $SU(2)$ Yang--Mills connection either has energy at least $16 \pi^{2}$, or it is an instanton with $c_{2} = \pm 1$ (a \emph{first instanton}) with energy $8 \pi^{2}$.} We also refer to the first instanton as the \emph{ground state} (as it has the lowest nontrivial energy), and refer to its energy as the \emph{ground state energy} $E_{GS}$. We now turn to the general case when $\bfG$ is a compact Lie group, for which our goal is to establish a similar conclusion. Consider $f_{2} (\cdot, \cdot) = - \brk{\cdot, \cdot}$, which is a symmetric $Ad$-invariant bilinear function, and the corresponding characteristic class (cf. Section~\ref{subsec:smth}): \begin{equation} \label{eq:ch-cl} - \brk{F[a] \wedge F[a]} = - \brk{F_{i j}[a], F_{k \ell}[a]} \, \mathrm{d} x^{i} \wedge \mathrm{d} x^{j} \wedge \mathrm{d} x^{k} \wedge \mathrm{d} x^{\ell}. \end{equation} The characteristic number \begin{equation} \label{eq:ch-no} \boldsymbol{\chi} = \int_{\BigR^{4}} - \brk{F[a] \wedge F[a]} \end{equation} is determined by the topological class $[a]$, by Corollary~\ref{cor:ch-class}. Moreover, the same algebra as in \eqref{eq:bog} leads to: \begin{lemma} \label{lem:bog-G} Let $\bfG$ be a compact Lie group.
For any finite energy connection $a$ on a $\bfG$-bundle on $\BigR^{4}$, we have the pointwise bound \begin{equation} \label{eq:bog-G} \frac{1}{2} \brk{F_{jk}[a], F^{jk}[a]} \geq \abs{\brk{F[a] \wedge F[a]}}, \end{equation} and the corresponding integrated bound \begin{equation*} \calE_{e}(a) \geq \abs{\boldsymbol{\chi}}. \end{equation*} \end{lemma} Note that when $\bfG$ is commutative, the harmonic Yang--Mills connections are nothing other than the harmonic $2$-forms; thus no nontrivial finite energy harmonic Yang--Mills connections exist. In the noncommutative case, we prove: \begin{theorem} \label{thm:thr} Let $\bfG$ be a noncommutative compact Lie group. Let \begin{equation*} E_{GS} = \inf \set{\calE_{e}(a) : \hbox{$a$ is a nontrivial harmonic Yang--Mills connection on a $\bfG$-bundle on $\BigR^{4}$}}. \end{equation*} Then the following statements hold. \begin{enumerate} \item There exists a nontrivial harmonic Yang--Mills connection $Q$ such that $\calE_{e}(Q) = E_{GS} < \infty$. \item Let $a$ be any nontrivial harmonic Yang--Mills connection. Then either $\calE_{e}(a) \geq 2 E_{GS}$, or \begin{equation*} \abs{\boldsymbol{\chi}} = \calE_{e}(a) \geq E_{GS}. \end{equation*} \end{enumerate} \end{theorem} We call $E_{GS}$ the \emph{ground state energy}, and a harmonic Yang--Mills connection $Q$ attaining this energy a \emph{ground state}. The proof of Theorem~\ref{thm:thr} combines well-known results concerning the structure of a compact Lie group and the preceding analysis in the case $\bfG = SU(2)$; it is provided in Section~\ref{sec:threshold}. \addtocontents{toc}{\protect\setcounter{tocdepth}{-1}} \subsection*{Acknowledgments} S.-J. Oh was supported by the Miller Research Fellowship from the Miller Institute, UC Berkeley and the TJ Park Science Fellowship from the POSCO TJ Park Foundation. D.
Tataru was partially supported by the NSF grant DMS-1266182 as well as by a Simons Investigator grant from the Simons Foundation. \addtocontents{toc}{\protect\setcounter{tocdepth}{2}} \section{Notation and conventions} \label{sec:notation} Here we collect some notation and conventions used in this paper. \begin{itemize} \item We employ the usual asymptotic notation $A \lesssim B$ to denote $A \leq C B$ for some implicit constant $C > 0$. The dependence of $C$ on various parameters is specified by subscripts. \item Throughout the paper, we omit the dependence of constants on the dimension $d$. In particular, by a universal constant, we mean a constant that depends only on $d$. \item We call a bounded open subset $U$ of $\BigR^{d}$ a \emph{domain}. For $\lambda > 0$, $\lambda U$ is defined to be the rescaling of $U$ by $\lambda$ centered at the barycenter of $U$. For any $r > 0$ and $x \in \BigR^{d}$, $B_{r}(x)$ is the ball of radius $r$ centered at $x$. When $x$ is omitted, the center is taken to be the origin $0$. \item We use the notation $\partial$ (without sub- or superscripts) for the spatial gradient $\partial = (\partial_{1}, \partial_{2}, \ldots, \partial_{d})$, and $\nabla$ for the spacetime gradient $\nabla = (\partial_{0}, \partial_{1}, \ldots, \partial_{d})$. We write $\partial^{(n)}$ (resp. $\nabla^{(n)}$) for the collection of $n$-th order spatial (resp. spacetime) derivatives, and $\partial^{(\leq n)}$ (resp. $\nabla^{(\leq n)}$) for those up to order $n$. \item The $n$-th homogeneous $L^{p}$-Sobolev space for functions from $\BigR^{d}$ into a normed vector space $V$ is denoted by $\dot{W}^{n, p}(\BigR^{d}; V)$. In the special case $p = 2$, we write \[ \dot{H}^{n}(\BigR^{d}; V) = \dot{W}^{n, 2}(\BigR^{d}; V). \] The inhomogeneous counterparts are denoted by $W^{n, p}(\BigR^{d}; V)$ and $H^{n}(\BigR^{d}; V)$, respectively. The Lebesgue spaces (i.e., when $n = 0$) are denoted by $L^{p}(\BigR^{d}; V)$.
\item The mixed spacetime norm $L^{q}_{t} \dot{W}^{n, r}_{x}$ of functions on $\BigR^{1+d}$ is often abbreviated as $L^{q} \dot{W}^{n, r}$. \item Given a function space $X$ (on either $\BigR^{d}$ or $\BigR^{1+d}$), we define the space $\ell^{p} X$ by \begin{equation*} \nrm{u}_{\ell^{p}X}^{p} = \sum_{k} \nrm{P_{k} u}^{p}_{X} \end{equation*} (with the usual modification for $p = \infty$), where $P_{k}$ $(k \in \BigZ)$ are the usual Littlewood--Paley projections to dyadic frequency annuli. \item Generally, a function space on an open subset $U \subseteq \BigR^{d}$ is defined by restriction, i.e., $\nrm{u}_{X(U)} = \inf \set{\nrm{\tilde{u}}_{X} : \tilde{u} \in X, \ \tilde{u} \restriction_{U} = u}$. A similar convention applies for a function space on an open subset $\mathcal O \subseteq \BigR^{1+d}$. According to this convention, the restriction of the homogeneous Sobolev norm $\dot{W}^{n, p}$ for $n \in \BigN$, $1 < p < \frac{d}{n}$ for a locally Lipschitz domain $U$ is characterized by \begin{equation*} \nrm{u}_{\dot{W}^{n, p}(U)} \simeq_{U} \nrm{\partial^{(n)} u}_{L^{p}(U)} + \nrm{u}_{L^{p^{\ast}}(U)}, \quad \hbox{ where } \frac{d}{p^{\ast}} = \frac{d}{p} - n. \end{equation*} Note, importantly, that the implicit constant is invariant under scaling. To distinguish this norm from the usual homogeneous Sobolev semi-norm, we introduce the notation $\mathring{W}^{n, p}(U)$ for a nonnegative integer $n$ and $p \in [1, \infty]$, and define $\nrm{u}_{\mathring{W}^{n, p}(U)} = \nrm{\partial^{(n)} u}_{L^{p}(U)}$. \item The local function space $X_{loc}(U)$ is defined as \begin{equation*} X_{loc}(U) = \bigcap_{B_{r}(x): \overline{B_{r}(x)} \subseteq U} X(B_{r}(x)). \end{equation*} \end{itemize} \section{Connections with \texorpdfstring{$L^{\frac{d}{2}}$}{Ld/2}-curvature} \label{sec:rough-conn} In this section, we prove the good global gauge theorems, Theorems~\ref{thm:goodrep-ball} and \ref{thm:goodrep}. Throughout the section, we let $d \geq 3$.
\subsection{\texorpdfstring{$\bfG$}{G}-valued functions at critical regularity} \label{subsec:rough-gt} We start by collecting some basic analytic facts concerning $\bfG$-valued functions at regularity $W^{k, \frac{d}{k}}$. In what follows, we assume that $\bfG$ is a group of orthogonal matrices in $\BigR^{N \times N}$, equipped with the usual inner product $\brk{A, B} = \mathrm{tr}\, A B^{\dagger}$. Recall the standard fact that any compact Lie group $\bfG$ may be realized as such a matrix group, and the inner product on $\frkg = T_{Id} \bfG$ is equivalent to the one induced from $\BigR^{N \times N}$. Let $U \subseteq \BigR^{d}$ be an open set, $k \in \BigR$ and $p \in [1, \infty]$. In Section~\ref{subsec:rough-conn}, we introduced \begin{align*} \mathcal G^{k, p}(U) = & \set{O \in W^{k, p}(U; \BigR^{N \times N}) : O (x) \in {\bf G} \hbox{ for a.e. } x \in U}. \end{align*} Since $\bfG$ is compact, any $O \in \mathcal G^{k, p}(U)$ belongs to $L^{\infty}(U)$. When $U$ is a domain with locally Lipschitz boundary, an element $O \in \mathcal G^{k, p}(U)$ may be extended\footnote{We emphasize, however, that $\tilde{O}(x) \not \in {\bf G}$ for $x \not \in U$ in general.} to $\tilde{O} \in W^{k, p} \cap L^{\infty}(\BigR^{d})$; see \cite[\S VI.3]{MR0290095}. For a general irregular open set $U$, we instead use \begin{align*} \mathcal G^{k, p}_{loc}(U) = & \set{O \in W^{k, p}_{loc}(U; \BigR^{N \times N}) : O (x) \in {\bf G} \hbox{ for a.e. } x \in U}, \end{align*} for which the following extension property holds: For any ball $B \subseteq U$, there exists ${}^{(B)}\tilde{O} \in W^{k, p} \cap L^{\infty}(\BigR^{d})$ such that ${}^{(B)}\tilde{O}(x) = O(x)$ for a.e. $x \in B$.
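We remark that the $L^{\infty}$ bound above genuinely relies on the compactness of $\bfG$; it does not follow from the Sobolev regularity alone, as the critical embedding $W^{k, \frac{d}{k}} \hookrightarrow L^{\infty}$ fails. A classical example (for $k = 1$, with $d \geq 2$) is
\begin{equation*}
u(x) = \chi(x) \log \log \frac{e^{2}}{\abs{x}} \in W^{1, d}(\BigR^{d}) \setminus L^{\infty}(\BigR^{d}),
\end{equation*}
where $\chi$ is a smooth cutoff equal to $1$ near $0$ and supported in the unit ball: near the origin $\abs{\partial u(x)} \lesssim \abs{x}^{-1} \big( \log \frac{e^{2}}{\abs{x}} \big)^{-1}$, which lies in $L^{d}$ since $\int_{0}^{1} r^{-1} \big( \log \frac{e^{2}}{r} \big)^{-d} \, \mathrm{d} r < \infty$, while $u$ is unbounded.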
In view of the applications to the hyperbolic Yang--Mills equation at the critical regularity, we consider the scale-invariant case $p = \frac{d}{k} > 1$, which is subtle due to the fact that $W^{k, \frac{d}{k}} \not \hookrightarrow L^{\infty}$, and thus $W^{k, \frac{d}{k}}$ is not an algebra. Nevertheless, as we will see, basic operations needed to define a $\bfG$-bundle are still well-defined. To avoid technical issues, we focus on the case when $k$ is a positive integer. Of special importance is the case $k = 2$, which corresponds to local gauge transformations in a bundle admitting a connection with $L^{\frac{d}{2}}$ curvature. As a quick consequence of the extension properties mentioned above, we have the following multiplication lemma. \begin{lemma} \label{lem:mult} Let $k$ be a positive integer, and let $U \subseteq \BigR^{d}$ be an open set. Then the pointwise multiplication map \begin{equation*} \mathcal G^{k, \frac{d}{k}}_{loc}(U) \times \mathcal G^{k, \frac{d}{k}}_{loc}(U) \ni (O_{1}, O_{2}) \mapsto O_{1} \cdot O_{2} \in \mathcal G^{k, \frac{d}{k}}_{loc}(U) \end{equation*} is continuous. If $U$ is a domain with a locally Lipschitz boundary, then the same conclusion holds for the space $\mathcal G^{k, \frac{d}{k}}(U)$. \end{lemma} Although multiplication is continuous, we remark that it utterly fails to be any more regular. This is in sharp contrast with the subcritical case $\mathcal G^{k, p}$ with $p > \frac{d}{k}$, in which multiplication is smooth. \begin{proof} It suffices to consider the case when $U$ is a domain with a locally Lipschitz boundary (the other case follows by taking $U$ to be balls). Let $O_{1}, O_{2} \in \mathcal G^{k, \frac{d}{k}}(U)$, and consider their usual extensions outside $U$. Note that $O_{1} \cdot O_{2}$ is an $L^{1}_{loc}$ function with values in ${\bf G}$ for a.e.
$x \in U$, and belongs to $W^{k, \frac{d}{k}}(U)$ by the whole space estimate \begin{equation*} \nrm{O_{1} \cdot O_{2}}_{W^{k, \frac{d}{k}}} \lesssim \nrm{O_{1}}_{L^{\infty}} \nrm{O_{2}}_{W^{k, \frac{d}{k}}} + \nrm{O_{1}}_{W^{k, \frac{d}{k}}} \nrm{O_{2}}_{L^{\infty}}. \end{equation*} To prove continuity, consider sequences $O^{n}_{1} \to O_{1}$ and $O^{n}_{2} \to O_{2}$ in $\mathcal G^{k, \frac{d}{k}}(U)$. We extend $O^{n}_{1}$ and $O^{n}_{2}$ to the whole space using the same extension operator as before, which ensures $O^{n}_{1} \to O_{1}$ and $O^{n}_{2} \to O_{2}$ in $W^{k, \frac{d}{k}}(\BigR^{d}; \BigR^{N \times N})$. By the Leibniz rule and the Sobolev inequality, for any multi-index $\alpha$ of order $k$, we may show that \begin{equation*} \partial^{\alpha} (O^{n}_{1} \cdot O^{n}_{2}) - (\partial^{\alpha} O^{n}_{1}) \cdot O^{n}_{2} - O^{n}_{1} \cdot \partial^{\alpha} O^{n}_{2} \to \partial^{\alpha} (O_{1} \cdot O_{2}) - (\partial^{\alpha} O_{1}) \cdot O_{2} - O_{1} \cdot \partial^{\alpha} O_{2} \hbox{ in } L^{\frac{d}{k}}. \end{equation*} By symmetry, it only remains to prove that $(\partial^{\alpha} O^{n}_{1}) \cdot O^{n}_{2} \to (\partial^{\alpha} O_{1}) \cdot O_{2}$ in $L^{\frac{d}{k}}$. Since $O^{n}_{2}$ is uniformly bounded, the problem is further reduced to showing that \begin{equation*} \nrm{\partial^{\alpha} O_{1} \cdot (O^{n}_{2} - O_{2})}_{L^{\frac{d}{k}}} \to 0. \end{equation*} If this limit failed, there would exist $\epsilon > 0$ and a subsequence along which the left-hand side stays at least $\epsilon$, so that no further subsequence converges to zero. However, $O^{n}_{2} \to O_{2}$ in $W^{k, \frac{d}{k}}$ implies a.e. convergence along a further subsequence, along which the above limit holds by the dominated convergence theorem (the integrand is dominated by a constant multiple of $\abs{\partial^{\alpha} O_{1}}^{\frac{d}{k}}$, by the uniform boundedness of $O^{n}_{2}$); this is a contradiction.
\qedhere \end{proof} It is well-known that if $U$ is an open set with piecewise smooth boundary, then any $O \in \mathcal G^{2, \frac{d}{2}}(U)$ can be approximated by a sequence $O^{n} \in C^{\infty}(U; \bfG)$ in the $W^{2, \frac{d}{2}}(U; \BigR^{N \times N})$-topology \cite{MR710054}. We state here a technical refinement which allows us to localize the region where we perform the approximation (essentially from \cite{MR815194}). This version will be helpful for handling the extension problem to a $\bfG$-valued map (not $\BigR^{N \times N}$-valued). \begin{lemma} \label{lem:part-approx} Let $k$ be a positive integer. Let $U \subseteq \BigR^{d}$ be a domain with locally Lipschitz boundary, and let $O \in \mathcal G^{k, \frac{d}{k}}(U)$. If $V, W$ are (possibly empty) open sets in $\BigR^{d}$ such that $\overline{V} \cup \overline{W} \subseteq \overline{U}$ and $\overline{V} \cap \overline{W} = \emptyset$, then for every $\epsilon > 0$ there exists $O' \in \mathcal G^{k, \frac{d}{k}}(U)$ such that $O' \restriction_{V} = O \restriction_{V}$, $O' \in C^{\infty}(W; \bfG)$ and $\nrm{O' - O}_{W^{k, \frac{d}{k}}(U; \BigR^{N \times N})} < \epsilon$. \end{lemma} We recover the usual approximation result by setting $V = \emptyset$ and $W = U$. As a consequence, for a general open set $U$, any $O \in \mathcal G^{k, \frac{d}{k}}_{loc}(U)$ can be approximated by $O^{n} \in C^{\infty}(B; \bfG)$ in the $W^{k, \frac{d}{k}}(B; \BigR^{N \times N})$-topology for any open ball $B \subseteq U$. \begin{proof} We may assume that $W \neq \emptyset$, as otherwise we may set $O' = O$. By standard Sobolev extension, there exists $\tilde{O} \in W^{k, \frac{d}{k}}(\BigR^{d}; \BigR^{N \times N})$ such that $\tilde{O} \restriction_{U} = O$. We introduce $\delta > 0$ to be fixed later, and let $h : U \to [0, 1]$ be a smooth function such that $h = 0$ on $V$ and $h = 1$ on $W$ (smooth Urysohn's lemma).
Fix a smooth function $\zeta$ supported in the unit ball satisfying $\int \zeta = 1$. We define $\tilde{O}^{\delta} : \BigR^{d} \to \BigR^{N \times N}$ by inhomogeneous mollification: \begin{equation*} \tilde{O}^{\delta} (x) = \int \zeta(y) \tilde{O}(x - \delta h(x) y) \, \mathrm{d} y. \end{equation*} It is straightforward to verify that $\nrm{\tilde{O}^{\delta}-\tilde{O}}_{W^{k, \frac{d}{k}}(U)} \to 0$ as $\delta \to 0$, and also that $\tilde{O}^{\delta}$ is smooth on $W$. However, $\tilde{O}^{\delta} (x) \not \in \bfG$ in general. To rectify this, we proceed as in \cite{MR710054}. Let $\tilde{\bfG} \subseteq \BigR^{N \times N}$ be a tubular neighborhood of $\bfG$, on which the nearest-point projection $\pi_{\bfG} : \tilde{\bfG} \to \bfG$ is well-defined as a smooth map. For $x \in U$, we wish to ensure that $\tilde{O}^{\delta}(x) \in \tilde{\bfG}$ for $\delta$ sufficiently small. Since $O(y) \in \bfG$ for a.e. $y \in U$, we have \begin{align*} d(\tilde{O}^{\delta}(x), \bfG)^{d} \leq \frac{1}{\abs{U \cap B_{\delta h(x)}(x)}} \int_{U \cap B_{\delta h(x)}(x)} \abs{\tilde{O}^{\delta}(x) - O(y)}^{d} \ \mathrm{d} y . \end{align*} By boundedness and the locally Lipschitz condition, $\abs{U \cap B_{r}(x)} \gtrsim_{U, d} r^{d}$ for every $x \in \overline{U}$ and sufficiently small $r > 0$. Moreover, by the Poincar\'e inequality $\nrm{f}_{L^{d}(B_{r}(x))} \lesssim_{\zeta} r \nrm{\partial f}_{L^{d}(B_{r}(x))}$ for $f$ satisfying $\int \zeta(y) f(x + r y) \, \mathrm{d} y = 0$, we have \begin{equation*} d(\tilde{O}^{\delta}(x), \bfG)^{d} \lesssim_{U, \zeta, d} \int_{B_{\delta h(x)}(x)} \abs{\partial \tilde{O}(y)}^{d} \ \mathrm{d} y. \end{equation*} By compactness of $\overline{U}$, the RHS goes to $0$ uniformly in $x$ as $\delta \to 0$, so that $\tilde{O}^{\delta}(x) \in \tilde{\bfG}$ for every $x \in U$ once $\delta$ is sufficiently small. Define $O' = \pi_{\bfG} \circ \tilde{O}^{\delta} \restriction_{U}$.
It is now straightforward to show that $O'$ obeys the desired properties once we fix $\delta > 0$ small enough (depending on $\epsilon$). \end{proof} As a consequence of the approximation property, we now show that pointwise inversion is well-defined as a continuous map $\mathcal G^{k, \frac{d}{k}}_{loc}(U) \to \mathcal G^{k, \frac{d}{k}}_{loc}(U)$. \begin{lemma} \label{lem:inv} Let $k$ be a positive integer, and let $U \subseteq \BigR^{d}$ be an open set. Then the pointwise inversion map \begin{equation*} \mathcal G^{k, \frac{d}{k}}_{loc}(U) \ni O \mapsto O^{-1} \in \mathcal G^{k, \frac{d}{k}}_{loc}(U) \end{equation*} is continuous. Moreover, the usual differentiation rule $\partial_{x} O^{-1} = - O^{-1} \partial_{x} O O^{-1}$ holds for $O \in \mathcal G^{k, \frac{d}{k}}_{loc}(U)$. If $U$ is a domain with a locally Lipschitz boundary, then the same conclusion holds for the space $\mathcal G^{k, \frac{d}{k}}(U)$. \end{lemma} \begin{proof} As before, we only consider the case when $U$ is a domain with a locally Lipschitz boundary. For simplicity, we only treat the case $k = 1$; higher values of $k$ are handled similarly. Given $O \in \mathcal G^{1, d}(U)$, let $O^{n} \to O$ be a smooth approximation sequence in $\mathcal G^{1, d}(U)$ given by Lemma~\ref{lem:part-approx} (with $V = \emptyset$ and $W = U$). By passing to a subsequence, we may assume that $O^{n} \to O$ a.e. in $U$ as well; hence $(O^{n})^{-1} \to O^{-1}$ a.e. in $U$. Moreover, by the usual differentiation formula in the smooth case, \begin{equation*} \partial_{x} (O^{n})^{-1} = - (O^{n})^{-1} \partial_{x} O^{n} (O^{n})^{-1}. \end{equation*} By the dominated convergence theorem, $\partial_{x} (O^{n})^{-1}$ is Cauchy in $L^{d}(U; \BigR^{N \times N})$, so that $O^{-1} \in \mathcal G^{1, d}(U)$. Moreover, the formula \begin{equation*} \partial_{x} O^{-1} = - O^{-1} \partial_{x} O O^{-1} \end{equation*} is justified for $O \in \mathcal G^{1, d}(U)$.
By a similar argument, using the dominated convergence theorem applied to an arbitrary sequence $O^{n} \to O$ in $\mathcal G^{1, d}(U)$, the continuity property also follows. \end{proof} Next, from the approximation property and Lemma~\ref{lem:inv}, it follows that the usual operations involving $\mathcal G^{k, \frac{d}{k}}_{loc}(U)$ and $W^{k', \frac{d}{k}}_{loc}(U; \frkg)$ are continuous. \begin{lemma} \label{lem:d-inv} Let $k$ be a positive integer, and let $U \subseteq \BigR^{d}$ be an open set. \begin{enumerate} \item The operations $O \mapsto O_{;x} = \partial_{x} O O^{-1}$ and $O \mapsto O^{-1}_{;x} = - O^{-1} \partial_{x} O$ are continuous as mappings $\mathcal G^{k, \frac{d}{k}}_{loc}(U) \to W^{k-1, \frac{d}{k}}_{loc}(U; \frkg)$. \item For any integer $0 \leq k' \leq k$, the operation $(O, B) \mapsto Ad(O) B = O B O^{-1}$ is continuous as a mapping $\mathcal G^{k, \frac{d}{k}}_{loc}(U) \times W^{k', \frac{d}{k}}_{loc}(U; \frkg) \to W^{k', \frac{d}{k}}_{loc}(U; \frkg)$. \item If $O, O_{1}, O_{2} \in \mathcal G^{k, \frac{d}{k}}_{loc}(U)$ and $B \in W^{k', \frac{d}{k}}_{loc}(U; \frkg)$, then the following Leibniz rules hold: \begin{align*} (O_{1} O_{2})_{;x} =& O_{1;x} + Ad(O_{1}) O_{2;x}, \\ \partial_{x} (Ad(O) B) =& Ad(O) \partial_{x} B + [O_{;x}, Ad(O) B]. \end{align*} \end{enumerate} If $U$ has a locally Lipschitz boundary, then the same conclusion holds for the spaces $\mathcal G^{k, \frac{d}{k}}(U)$ and $W^{k', \frac{d}{k}}(U; \frkg)$. \end{lemma} As before, the fact that these operations map into the right space is justified by using a smooth approximating sequence (Lemma~\ref{lem:part-approx}), and then their continuity properties are proved in a similar manner. We omit the proof. We end with an auxiliary lemma concerning the construction of a $\bfG$-valued function on an annulus with a prescribed normal derivative on the outer boundary.
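Before stating it, we record for the reader's convenience the formal computation behind the first Leibniz rule in Lemma~\ref{lem:d-inv}; it is valid pointwise for smooth $O_{1}, O_{2}$ and extends to $\mathcal G^{k, \frac{d}{k}}_{loc}$ by the approximation lemma:

```latex
\begin{align*}
(O_{1} O_{2})_{;x}
&= \partial_{x} (O_{1} O_{2}) \, (O_{1} O_{2})^{-1}
 = \partial_{x} O_{1} \, O_{2} O_{2}^{-1} O_{1}^{-1}
 + O_{1} \big( \partial_{x} O_{2} \, O_{2}^{-1} \big) O_{1}^{-1} \\
&= O_{1;x} + Ad(O_{1}) O_{2;x}.
\end{align*}
```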
\begin{lemma} \label{lem:Ar=0} Let $B$ be a $\frkg$-valued function in $H^{\frac{d-3}{2}}(\BigS^{d-1})$. There exists $O \in L^{\infty} \cap H^{\frac{d}{2}}(B_{1})$, which depends continuously on $B$, such that \begin{equation*} (O, O_{;r}) \restriction_{\set{r = 1}} = (Id, B), \end{equation*} where $O_{;r} = \frac{x^{j}}{\abs{x}} O_{;j}$. A similar construction can be done in the exterior region $\BigR^{d} \setminus B_{1}$. \end{lemma} \begin{proof} We first work on the annulus $B_{1} \setminus \overline{\frac{1}{2} B_{1}}$, which we view as the product space $(\frac{1}{2}, 1)_{r} \times \BigS^{d-1}_{\Theta}$ (note that the corresponding Lebesgue and Sobolev spaces are equivalent). We define $\varphi(r, \Theta)$ to be the Poisson semigroup $\varphi(r, \Theta) = e^{\sqrt{-\Delta_{\Theta}} (r-1)} B$, and define \begin{equation*} \Psi(r, \Theta) = (r - 1) \varphi(r, \Theta). \end{equation*} By the properties of the Poisson semigroup, observe that \begin{equation*} \Psi(r, \Theta) = (r-1) B(\Theta) + o_{r \to 1}(r-1) \quad \hbox{ in } H^{\frac{d-3}{2}}(\BigS^{d-1}). \end{equation*} Moreover, $\Psi(r, \Theta) \in L^{\infty} \cap H^{\frac{d}{2}}((\frac{1}{2}, 1) \times \BigS^{d-1})$ and \begin{equation*} \nrm{\Psi(r, \cdot)}_{L^{\infty}(\BigS^{d-1})} = o_{r \to 1}(1), \end{equation*} where the rate depends only on the right tail of the $H^{\frac{d-3}{2}}$ frequency envelope of $B$. Let \begin{equation*} O(r, \Theta) = \exp (\chi \Psi(r, \Theta)), \end{equation*} where $\chi = \chi(r)$ is a smooth radial function such that $\chi = 0$ in $\set{r < \frac{2}{3}}$ and $\chi = 1$ in $\set{r > \frac{5}{6}}$. Since $L^{\infty} \cap H^{\frac{d}{2}}$ is an algebra, and since $O = Id$ in $\set{r < \frac{2}{3}}$, it may be checked that $O \in L^{\infty} \cap H^{\frac{d}{2}}(B_{1})$.
Moreover, $\partial_{r} O(r, \Theta) O^{-1}(r, \Theta) \restriction_{\set{r=1}} = \partial_{r} \Psi(r, \Theta) \restriction_{\set{r = 1}} = B(\Theta)$, as desired. \qedhere \end{proof} \subsection{Patching procedures} \label{subsec:patching} Here we describe procedures for patching together local gauges to a global gauge, which is one of the main ingredients of the proof of the good global gauge theorems. We consider three scenarios: \begin{enumerate} \item Local gauges given on small (round) cubes $Q_{\alpha}$ covering a large (round) cube $Q_{R}$; \item Local gauges given on small balls $B_{\alpha}$ covering a large ball $B_{R}$; \item Local gauges given on concentric balls $B_{R_{n}}$ covering $X = B_{R}$ or $\BigR^{d}$. \end{enumerate} In all three scenarios, the patching procedure depends only on the trivial topology and differentiable structure of the base. \subsubsection*{Scenario~(1): Large cubes covered by smaller cubes} We first consider a covering consisting of (round) cubes, which admits simple intersection properties. Let $Q_{R}$ be a smooth domain in $\BigR^{d}$, and consider a covering $\set{Q_{\alpha}}_{\alpha \in \bfGamma}$ of $Q_{R}$ by smooth domains $Q_{\alpha}$ indexed by a subset $\bfGamma$ of the lattice $\BigZ^{d}$. We equip $\BigZ^{d}$ with two norms: $\abs{\alpha}_{\infty}= \sup_{k} \abs{\alpha_{k}}$ and $\abs{\alpha}_{1}= \sum_{k} \abs{\alpha_{k}}$. We say that two indices are adjacent if $\abs{\alpha - \alpha'}_{\infty} \leq 1$. If $\abs{\alpha - \alpha'}_{1} \leq 1$, we say that $\alpha$ and $\alpha'$ are \emph{face-adjacent}; if $\abs{\alpha - \alpha'}_{\infty} = 1$ but $\abs{\alpha - \alpha'}_{1} > 1$, then we say that $\alpha$ and $\alpha'$ are \emph{corner-adjacent}.
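These adjacency notions are purely combinatorial. As an illustration (ours, not part of the construction), they can be encoded and sanity-checked on $\BigZ^{d}$ as follows:

```python
def adjacent(a, b):
    # |a - b|_infty <= 1
    return max(abs(x - y) for x, y in zip(a, b)) <= 1

def face_adjacent(a, b):
    # exactly one coordinate differs, and it differs by 1
    return sum(abs(x - y) for x, y in zip(a, b)) == 1

def corner_adjacent(a, b):
    # |a - b|_infty = 1, but more than one coordinate differs
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return max(diffs) == 1 and sum(diffs) > 1

# In d = 2: (0,0) and (1,0) are face-adjacent, (0,0) and (1,1) are
# corner-adjacent, and (0,0) and (2,0) are not adjacent at all.
```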
We say that the covering $\set{Q_{\alpha}}_{\alpha \in \bfGamma}$ is \emph{good} if the following properties hold: \begin{enumerate}[label=(\alph*)] \item The index set $\bfGamma$ is of the form $\bfGamma = \set{\alpha \in \BigZ^{d} : \abs{\alpha}_{\infty} < R_{\bfGamma}}$ for some $R_{\bfGamma} > 0$. \item For each $\alpha$, there exists a sequence of shrinking domains $Q_{\alpha} = Q^{(0)}_{\alpha} \supset Q^{(1)}_{\alpha} \supset \cdots$, such that, for each $n \geq 0$, \begin{equation*} Q_{R} \subseteq \bigcup_{\alpha \in \bfGamma} Q^{(n)}_{\alpha}, \qquad \overline{Q^{(n+1)}_{\alpha}} \cap Q_{R} \subseteq Q^{(n)}_{\alpha} \cap Q_{R}. \end{equation*} \item Two domains $Q^{(n)}_{\alpha}$ and $Q^{(n')}_{\alpha'}$ intersect if and only if their indices are adjacent. \item Consider any $\alpha \in \bfGamma$ and a subfamily $\bfGamma' \subseteq \bfGamma$ of indices adjacent to $\alpha$ with the property that (i) the face-adjacent indices in $\bfGamma'$ are adjacent to each other and (ii) each corner-adjacent index in $\bfGamma'$ is adjacent to some face-adjacent index in $\bfGamma'$. Then for each $n \geq 1$ there exists a diffeomorphism $\Phi^{(n)}_{\bfGamma'}$ from $Q^{(n)}_{\alpha}$ into $\tilde{F}^{(n)}_{\bfGamma'} = \left(\cup_{\alpha' \in \bfGamma'} Q^{(n-1)}_{\alpha'}\right) \cap Q^{(n)}_{\alpha}$, which equals the identity in $F^{(n)}_{\bfGamma'} = \left(\cup_{\alpha' \in \bfGamma'} Q^{(n)}_{\alpha'}\right) \cap Q^{(n)}_{\alpha}$. \end{enumerate} Given any cube $Q_{R}$ of sidelength $R > 1$, we construct a good covering of $Q_{R}$ by round cubes (i.e., with rounded edges, so that they are smooth) with roughly unit sidelength (more precisely, between $1/2$ and $4$) as follows. Rescaling by a factor $\simeq 1$ (say between $1/2$ and $2$), we may assume that $R$ is an integer. Partition $Q_{R}$ into unit cubes $\tilde{Q}_{\alpha}$ with integer vertices, indexed in an obvious manner by $\bfGamma \subseteq \BigZ^{d}$ as in (a).
Rounding off the edges (uniformly in $\alpha$), we may replace each $\tilde{Q}_{\alpha}$ by a round cube, such that $\set{1.1 \tilde{Q}_{\alpha}}$ still covers $Q_{R}$. Fix a sequence $2 > \lambda^{(0)} > \lambda^{(1)} > \cdots > 1.1$, and define $Q^{(n)}_{\alpha}$ to be the enlargement $\lambda^{(n)}\tilde{Q}_{\alpha}$. It is then straightforward to verify that (b)--(d) hold for $\set{Q^{(n)}_{\alpha}}$. \begin{remark}\label{rem:Q-cover} We make the simple but crucial observation that the preceding construction of a good covering may be \emph{fixed} depending only on the size $R$ of the large cube. Also, $Q_{R}$ may be taken to be a round cube as well; it does not affect the properties (a)--(d), as long as the edges are rounded off at a scale much smaller than $1$. \end{remark} Let $\set{Q_{\alpha}}_{\alpha \in \bfGamma}$ be a good covering of $Q_{R}$, and suppose that a local data set $\set{Q_{\alpha}, O_{(\alpha \beta)}}$ for a $\bfG$-bundle (with arbitrary regularity) is given. Our goal is to patch the local gauges up to form a global gauge on $Q_{R}$. More concretely, we find a gauge transformation $P_{(\alpha)}$ on each $Q^{(N)}_{\alpha}$, where $N = \# \bfGamma$, such that \begin{equation*} P_{(\beta)} = P_{(\alpha)} \cdot O_{(\alpha \beta)} \quad \hbox{ in } Q^{(N)}_{\alpha} \cap Q^{(N)}_{\beta}. \end{equation*} To start the construction, we endow $\bfGamma$ with the lexicographic ordering (i.e., $\alpha < \alpha'$ if $\alpha_{i} < \alpha'_{i}$, where $i$ is the first index where the components differ); we denote by $[\alpha]$ the ordinality of $\alpha$ in this ordering (thus $1 \leq [\alpha] \leq N$). The simple key observation is that such an ordering ensures that each $\alpha$ and $\bfGamma' = \set{\alpha' : \alpha' \hbox{ is adjacent to } \alpha, \, \alpha' < \alpha}$ satisfy the condition of (d).
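This key observation can also be checked mechanically. The following brute-force script (our illustration, not part of the proof) verifies, for small $d$ and $R_{\bfGamma}$, that for every $\alpha$ the set $\bfGamma'$ of lexicographically smaller adjacent indices satisfies conditions (i) and (ii) of property (d):

```python
from itertools import product

def _diffs(a, b):
    return [abs(x - y) for x, y in zip(a, b)]

def adjacent(a, b):
    return max(_diffs(a, b)) <= 1

def face_adjacent(a, b):
    return sum(_diffs(a, b)) == 1

def corner_adjacent(a, b):
    d = _diffs(a, b)
    return max(d) == 1 and sum(d) > 1

def lex_predecessors_ok(dim, R):
    """For every alpha in Gamma = {|alpha|_infty < R}, check that
    Gamma' = {alpha' adjacent to alpha, alpha' < alpha (lexicographic)}
    satisfies: (i) its face-adjacent members are pairwise adjacent, and
    (ii) each corner-adjacent member is adjacent to a face-adjacent one."""
    Gamma = list(product(range(-R + 1, R), repeat=dim))
    for a in Gamma:
        # Python tuple comparison is exactly the lexicographic ordering.
        Gp = [b for b in Gamma if b < a and adjacent(a, b)]
        faces = [b for b in Gp if face_adjacent(a, b)]
        corners = [b for b in Gp if corner_adjacent(a, b)]
        if not all(adjacent(f, g) for f in faces for g in faces):
            return False
        if not all(any(adjacent(c, f) for f in faces) for c in corners):
            return False
    return True
```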
We proceed inductively on $[\alpha]$, and construct $P_{(\alpha)}$ on $Q^{([\alpha])}_{\alpha}$ such that \begin{equation*} P_{(\beta)} = P_{(\alpha)} \cdot O_{(\alpha \beta)} \quad \hbox{ in } Q_{\alpha}^{([\beta])} \cap Q_{\beta}^{([\beta])}, \hbox{ for } \alpha \leq \beta. \end{equation*} For the first element $[\alpha] = 1$, we simply take $P_{(\alpha)} = Id$ on $Q^{(1)}_{\alpha}$. Now assume that $P_{(\alpha')}$ has been constructed on $Q^{([\alpha'])}_{\alpha'}$ for $\alpha' < \alpha$, where $[\alpha] = n > 1$. Define $\tilde{P}_{(\alpha)}$ in $\tilde{F}^{(n)}_{\set{\alpha' < \alpha}} = \left(\cup_{\alpha' < \alpha} Q^{(n-1)}_{\alpha'} \right) \cap Q^{(n)}_{\alpha}$ by \begin{equation} \label{eq:patching-tP} \tilde{P}_{(\alpha)} = P_{(\alpha')} \cdot O_{(\alpha' \alpha)} \quad \hbox{ on } Q^{(n-1)}_{\alpha'} \cap Q^{(n)}_{\alpha} \hbox{ for each } \alpha' < \alpha. \end{equation} By construction, these expressions match on the intersections. Applying (d) in the definition of a good covering, we find a diffeomorphism $\Phi^{(n)}_{\set{\alpha' < \alpha}}$ from $Q^{(n)}_{\alpha}$ into $\tilde{F}^{(n)}_{\set{\alpha' < \alpha}}$, which equals the identity in $F^{(n)}_{\set{\alpha' < \alpha}} = \left(\cup_{\alpha' < \alpha} Q^{(n)}_{\alpha'} \right) \cap Q^{(n)}_{\alpha}$. We simply define $P_{(\alpha)}$ in $Q^{(n)}_{\alpha}$ by the pullback \begin{equation} \label{eq:patching-P} P_{(\alpha)} = \tilde{P}_{(\alpha)} \circ \Phi^{(n)}_{\set{\alpha' < \alpha}}. \end{equation} Next, suppose that local data for a connection $\set{A_{(\alpha)}}$ are also given.
Then the gauge potential $A$ in the global gauge constructed above is described in terms of $A_{(\alpha)}$ and $P_{(\alpha)}$ as follows: Given a partition of unity $\chi_{(\alpha)}$ subordinate to $\set{Q^{(N)}_{\alpha}}$, we have \begin{equation} \label{eq:patching-A} A = \sum \chi_{(\alpha)} \left( Ad(P_{(\alpha)}) A_{(\alpha)} - P_{(\alpha); x} \right). \end{equation} The advantage of this patching procedure is that it relies only on the properties (a)--(d) of the good covering $\set{Q_{\alpha}}_{\alpha \in \bfGamma}$, and is universal in the data $\set{O_{(\alpha \beta)}}$ or $\set{A_{(\alpha)}}$. Moreover, it is straightforward to infer properties of $P_{(\alpha)}$ and $A$ from those of $\set{O_{(\alpha \beta)}}$ and $\set{A_{(\alpha)}}$. Indeed, in the above construction, observe that $\set{P_{(\alpha)}}$ is constructed from $\set{O_{(\alpha \beta)}}$ using only the operations of (i) pointwise multiplication, (ii) pullback by a diffeomorphism, (iii) restriction to a smooth subdomain and (iv) patching up local expressions which are consistent on the intersections. Any property of $\set{O_{(\alpha \beta)}}$ invariant under these operations transfers to $P_{(\alpha)}$. In particular, for any $k \geq 1$ and $p \geq \frac{d}{k}$, \begin{equation*} O_{(\alpha \beta)} \in \mathcal G^{k, p}_{loc}(Q_{\alpha} \cap Q_{\beta}) \ \forall \alpha, \beta \Rightarrow P_{(\alpha)} \in \mathcal G^{k, p}_{loc}(Q^{(N)}_{\alpha}) \ \forall \alpha.
\end{equation*} Regarding bounds for $A$, it is useful to introduce the following definition: \begin{definition} \label{def:patching-adm} We say that a norm $Y$ for functions on $\BigR^{d}$ is \emph{(patching-)admissible} if: \begin{itemize} \item $Y$ is invariant under pullback by any diffeomorphism; \item $Y$ is invariant under any smooth cutoff; \item If $A \in Y$ and $O_{;x} \in Y$, then $Ad(O) A \in Y$ with $\nrm{Ad(O) A}_{Y} \lesssim_{\nrm{A}_{Y}, \nrm{O_{;x}}_{Y}} 1$. \end{itemize} \end{definition} From the preceding observation regarding the construction of $P_{(\alpha)}$, as well as the explicit formula \eqref{eq:patching-A}, we see that: \begin{equation*} O_{(\alpha \beta); x} \in Y(Q_{\alpha} \cap Q_{\beta}) \ \forall \alpha, \beta \hbox{ and } A_{(\alpha)} \in Y(Q_{\alpha})\ \forall \alpha \Rightarrow \nrm{A}_{Y(Q_{R})} \lesssim 1, \end{equation*} where the implicit constant depends only on the good covering (which, in turn, may be fixed depending only on $R$; cf. Remark~\ref{rem:Q-cover}), $\sup_{\alpha} \nrm{A_{(\alpha)}}_{Y(Q_{\alpha})}$ and $\sup_{\alpha, \beta} \nrm{O_{(\alpha \beta); x}}_{Y(Q_{\alpha} \cap Q_{\beta})}$. \subsubsection*{Scenario~(2): Large ball covered by small balls} Here, we wish to patch up local data for a $\bfG$-bundle and a connection given on small balls centered inside $B_{R}$; this is the case we encounter in our applications. The idea is to reduce to Scenario~(1) by a suitable diffeomorphism. Consider a covering $\set{B_{\alpha} \cap B_{R}}$ of $B_{R}$ by finitely many balls. Let $\Phi$ be a bi-Lipschitz isomorphism from the cube $Q_{\lambda_{0} R}$ to $B_{R}$, where $\lambda_{0} \in (0, \infty)$ is to be fixed below. Let $\set{Q_{\alpha}}_{\alpha \in \bfGamma}$ be a good covering of $Q_{\lambda_{0} R}$ as in Scenario~(1). We wish to ensure that the image of each $Q_{\alpha}$ under $\Phi$ is contained in a unit ball. Indeed, observe that, by scaling-invariance, the Lipschitz constant of $\Phi$ is independent of $R$, but decreases in $\lambda_{0}$. Hence, for any $\delta > 0$, by choosing $\lambda_{0}$ sufficiently large (independent of $R$) we may ensure that \begin{equation} \label{eq:cube-ball} \Phi(Q_{\alpha}) \subseteq B_{\delta}(x) \quad \hbox{ for some } x \in B_{R}. \end{equation} By Lebesgue's covering lemma, this ensures that $\Phi(Q_{\alpha})$ is contained in some ball $B_{\alpha}$ in the covering. Finally, by rounding off the edges of $Q_{\lambda_{0} R}$, we may replace $Q_{\lambda_{0} R}$ by a round cube, and $\Phi$ by a diffeomorphism with uniform bounds. Note that this can be done while not disturbing the Lipschitz constant much (and thus \eqref{eq:cube-ball} still holds), while the uniform bounds on higher derivatives may depend on $R$. \begin{remark}\label{rem:B-cover} In the above procedure, note that $\lambda_{0}$ depends only on the Lebesgue constant $\delta > 0$ of the covering $\set{B_{\alpha}}$. In particular, if the $B_{\alpha}$'s are unit balls which are uniformly separated, so that the Lebesgue constant is $\simeq 1$, then $\lambda_{0}$ may be fixed independent of $R$. The remaining components of the construction may be \emph{fixed} depending only on the radius $R$ (recall also Remark~\ref{rem:Q-cover}). \end{remark} We now apply the patching procedure in Scenario~(1) to the pulled-back data $\set{Q_{\alpha}, O_{(\alpha \beta)} \circ \Phi, \Phi^{\ast} A_{(\alpha)}}$, which are well-defined since each $\Phi(Q_{\alpha})$ is contained in some ball $B_{\alpha}$ in the covering. Then we return to $B_{R}$ via $\Phi^{-1}$.
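For concreteness, the pulled-back data appearing in the last step are given by the usual coordinate formulas (recalled here for the reader's convenience):

```latex
\begin{equation*}
(O_{(\alpha \beta)} \circ \Phi)(x) = O_{(\alpha \beta)}(\Phi(x)),
\qquad
(\Phi^{\ast} A_{(\alpha)})_{j}(x) = \partial_{j} \Phi^{l}(x) \, (A_{(\alpha)})_{l}(\Phi(x)),
\end{equation*}
```

so that the compatibility relations between the local data are preserved under the pullback.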
As a result, we obtain a refinement $\set{B'_{\alpha} = \Phi(Q^{(N)}_{\alpha})}$ of the covering $\set{B_{\alpha}}$ (the index sets are different, but we abuse the notation and denote both by $\alpha$), as well as a gauge transform $P_{(\alpha')}$ in each $B'_{\alpha'}$, such that \begin{equation} \label{eq:patching-P-ball} P_{(\alpha)} = P_{(\alpha')} \cdot O_{(\alpha' \alpha)} \quad \hbox{ on } B'_{\alpha'} \cap B'_{\alpha}. \end{equation} Moreover, given a partition of unity $\chi_{(\alpha)}$ subordinate to $\set{B'_{\alpha}}$, the global gauge potential $A$ takes the form \begin{equation} \label{eq:patching-A-ball} A = \sum \chi_{(\alpha)} \left(Ad(P_{(\alpha)}) A_{(\alpha)} - P_{(\alpha); x} \right). \end{equation} Finally, we obtain the following result: \begin{proposition} \label{prop:patching-ball} Let $R \geq 1$, and consider a covering $\set{B_{\alpha} \cap B_{R}}$ of $B_{R}$ by uniformly separated unit balls $B_{\alpha}$ centered inside $B_{R}$. Any $\bfG$-bundle with $O_{(\alpha \beta)} \in \mathcal G^{k, \frac{d}{k}}(B_{\alpha} \cap B_{\beta} \cap B_{R})$ admits a global gauge. Moreover, given any local data $\set{A_{(\alpha)}}$ for a connection on this $\bfG$-bundle satisfying $A_{(\alpha)} \in W^{k-1, \frac{d}{k}}(B_{\alpha} \cap B_{R})$, the global gauge potential satisfies $A \in W^{k-1, \frac{d}{k}}(B_{R})$. More precisely, if \begin{equation*} \sup_{\alpha} \nrm{A_{(\alpha)}}_{W^{k-1, \frac{d}{k}}(B_{\alpha} \cap B_{R})} \leq M, \quad \sup_{\alpha, \beta} \nrm{O_{(\alpha \beta); x}}_{W^{k-1, \frac{d}{k}}(B_{\alpha} \cap B_{\beta} \cap B_{R})} \leq M, \end{equation*} for some $M > 0$, then \begin{equation*} \nrm{A}_{W^{k-1, \frac{d}{k}}(B_{R})} \lesssim_{R, M} 1.
\end{equation*} \end{proposition} \subsubsection*{Scenario~(3): $X = B_{R}$ or $\BigR^{d}$ covered by concentric balls} Finally, we consider the case when local data for a $\bfG$-bundle and a connection are given on all concentric balls $\set{B_{R_{n}}}_{n= 1, 2, \ldots}$ with $R_{n} \nearrow R$ or $\infty$. Add a smaller ball $B_{R_{0}} \subset B_{R_{1}}$ to the covering. For $n \geq 2$, let $\Phi_{n}$ be a diffeomorphism from $B_{R_{n}}$ into $B_{R_{n-1}}$, which equals the identity on $B_{R_{n-2}}$. Define $P_{(n)}$ on $B_{R_{n}}$ inductively by $P_{(1)} = Id$ and \begin{equation*} P_{(n)} = (P_{(n-1)} \cdot O_{((n-1) n)}) \circ \Phi_{n}. \end{equation*} We then restrict the data and $P_{(n)}$ from $B_{R_{n}}$ to $B_{R_{n-1}}$. It follows by construction that, for $n < m$, \begin{equation*} P_{(m)} = P_{(n)} \cdot O_{(n m)} \quad \hbox{ in } B_{R_{n-1}}. \end{equation*} Given some local data $\set{A_{(n)}}$ for a connection, the global gauge potential is given by \begin{equation*} A = Ad(P_{(n)}) A_{(n)} - P_{(n); x} \quad \hbox{ in } B_{R_{n-1}}. \end{equation*} These expressions are consistent in the intersection (i.e., the smaller ball). Again, observe that $P_{(n)}$ is constructed by the same operations (i)--(iv) as in Scenario~(1). As a consequence of this patching procedure, as well as Proposition~\ref{prop:patching-ball}, we obtain the following soft result, which is a starting point for the good global gauge theorems. \begin{proposition} \label{prop:ball-triv} Any $\bfG$-bundle with regularity $\mathcal G^{k, \frac{d}{k}}_{loc}$ on $X = B_{R}$ or $\BigR^{d}$ admits a global gauge. Moreover, for any connection $\bfD \in \mathcal A^{k-1, \frac{d}{k}}_{loc}(X)$ on this $\bfG$-bundle, the global gauge potential satisfies $A \in W^{k-1, \frac{d}{k}}_{loc}(X)$.
\end{proposition} \begin{proof} Let $\set{U_{\alpha}, O_{(\alpha \beta)}}$ be the local data for a $\bfG$-bundle with regularity $\mathcal G^{k, \frac{d}{k}}_{loc}$ on $X = B_{R}$ or $\BigR^{d}$, and consider a smaller ball $B_{R'}$ such that $\overline{B_{R'}} \subseteq X$. By Lebesgue's covering lemma, there exists a refinement of $\set{U_{\alpha}}$ by finitely many balls $\set{B_{\delta}(x_{i}) \cap B_{R'}}$, $x_{i} \in B_{R'}$, of the same radius $\delta > 0$. By Proposition~\ref{prop:patching-ball} (applied after rescaling the balls to unit size), we obtain a global gauge on $B_{R'}$. Since $R'$ is arbitrary, Scenario~(3) applies to a sequence of global gauges on $B_{R'}$ with $R' \nearrow R$ or $\infty$, and we obtain a global gauge on $X$. Existence of a corresponding global gauge potential for any $\mathcal A^{k-1, \frac{d}{k}}_{loc}(X)$ connection is a quick corollary. \qedhere \end{proof} \subsection{Uhlenbeck lemmas and elliptic regularity} Thanks to Proposition~\ref{prop:ball-triv}, we know that any $\mathcal A^{1, \frac{d}{2}}_{loc}(X)$ connection admits a global gauge potential in $W^{1, \frac{d}{2}}_{loc}(X)$. This is a natural setting for Uhlenbeck's lemma, which finds good local gauges under a gauge-invariant smallness assumption. These good local gauges furnish another main ingredient of the proof of the good global gauge theorems. We start with the case of a ball $B_{1}$. \begin{theorem} [Uhlenbeck's lemma on a ball] \label{thm:uhlenbeck-ball} Consider $\bfD \in \mathcal A^{1, \frac{d}{2}}_{loc}(B_{1})$ of the form $\bfD = \mathrm{d} + A$ with $A \in W^{1, \frac{d}{2}}(B_{1}; \frkg)$, which satisfies \begin{equation} \label{eq:uhlenbeck-ball-hyp} \nrm{F[A]}_{L^{\frac{d}{2}}(B_{1})} < \epsilon_{0}.
\end{equation} \begin{enumerate} \item There exists $O \in \mathcal G^{2, \frac{d}{2}}(B_{1})$, unique up to multiplication by a constant element of $\bfG$, such that $\tilde{A} = Ad(O) A - O_{;x} \in W^{1, \frac{d}{2}}(B_{1}; \frkg)$ obeys \begin{equation*} \partial^{\ell} \tilde{A}_{\ell} = 0 \hbox{ in } B_{1}, \qquad x^{\ell} \tilde{A}_{\ell} = 0 \hbox{ on } \partial B_{1} \end{equation*} and \begin{equation*} \nrm{\tilde{A}}_{W^{1, \frac{d}{2}}(B_{1})} \lesssim \nrm{F[A]}_{L^{\frac{d}{2}}(B_{1})}. \end{equation*} \item Let $A^{n}$ be a sequence of connections such that $A^{n} \to A$ in $W^{1, \frac{d}{2}}(B_{1}; \frkg)$. Let $(\tilde{A}^{n}, O^{n})$ be given by (1) from $A^{n}$. Then passing to a subsequence and suitably conjugating each $(\tilde{A}^{n}, O^{n})$ with a constant gauge transformation, we have \begin{equation*} \tilde{A}^{n} \to \tilde{A} \hbox{ in } W^{1, \frac{d}{2}}(B_{1}), \qquad O^{n} \to O \hbox{ in } W^{2, \frac{d}{2}}(B_{1}). \end{equation*} \end{enumerate} \end{theorem} \begin{proof} For a proof of the existence claim in (1), see \cite[Theorem~1.3]{MR648356}. For uniqueness, observe that the gauge transformation $\tilde{O} \in \mathcal G^{2, \frac{d}{2}}(B_{1})$ between two possible Coulomb representatives $\tilde{A}$ and $\tilde{A}'$ satisfies the a priori bound $\nrm{\tilde{O}_{;x}}_{W^{1, \frac{d}{2}}(B_{1})} \lesssim \epsilon_{0}$, and also solves the div-curl system \begin{equation*} \partial^{\ell} \tilde{O}_{;\ell} = Ad(\tilde{O}) [\tilde{O}_{;\ell}, (\tilde{A}')^{\ell}], \qquad \partial_{j} \tilde{O}_{;k} - \partial_{k} \tilde{O}_{;j} = - [\tilde{O}_{;j}, \tilde{O}_{;k}], \end{equation*} with the boundary condition $x^{\ell} \tilde{O}_{;\ell} = 0$ on $\partial B_{1}$. It follows that $\tilde{O}_{;x} = 0$, i.e., $\tilde{O}$ is constant.
To prove $(2)$, observe first that the $W^{2, \frac{d}{2}}(B_{1})$ norm of $O^{n}$ is uniformly bounded, thanks to the formula $O^{n}_{;x} = Ad(O^{n}) A^{n} - \tilde{A}^{n}$. Thus, after passing to a subsequence, $O^{n} \rightharpoonup O'$ and $\tilde{A}^{n} \rightharpoonup \tilde{A}'$ in $W^{2, \frac{d}{2}}(B_{1})$ and $W^{1, \frac{d}{2}}(B_{1})$, respectively. This weak convergence is enough to justify \begin{equation*} \tilde{A}' = Ad(O') A - O'_{;x} \hbox{ in } B_{1}, \quad \partial^{\ell} \tilde{A}'_{\ell} = 0 \hbox{ in } B_{1}, \quad x^{\ell} \tilde{A}'_{\ell} = 0 \hbox{ on } \partial B_{1}. \end{equation*} Hence, by the uniqueness statement in (1), $(\tilde{A}', O')$ coincides with $(\tilde{A}, O)$ up to a constant gauge transformation $O_{0}$. Applying $O_{0}$ to the sequence $(\tilde{A}^{n}, O^{n})$, we may ensure that $O^{n} \rightharpoonup O$ and $\tilde{A}^{n} \rightharpoonup \tilde{A}$ in $W^{2, \frac{d}{2}}(B_{1})$ and $W^{1, \frac{d}{2}}(B_{1})$, respectively. To upgrade the weak convergence to strong convergence, we use the div-curl system for $\tilde{A}^{n}$. First, by the strong $W^{1, \frac{d}{2}}$ convergence $A^{n} \to A$ and the weak $W^{2, \frac{d}{2}}$ convergence $O^{n} \rightharpoonup O$, it follows that \begin{equation*} F[\tilde{A}^{n}] = Ad(O^{n}) F[A^{n}] \to Ad(O) F[A] = F[\tilde{A}] \quad \hbox{ in } L^{\frac{d}{2}}(B_{1}). \end{equation*} Then by the div-curl system \begin{equation*} \partial^{\ell} \tilde{A}^{n}_{\ell} = 0, \qquad \partial_{j} \tilde{A}^{n}_{k} - \partial_{k} \tilde{A}^{n}_{j} = F[\tilde{A}^{n}]_{jk}, \end{equation*} the weak $W^{1, \frac{d}{2}}$ convergence $\tilde{A}^{n} \rightharpoonup \tilde{A}$ is improved to strong convergence. Finally, by the formula $O_{;x} = Ad(O) A - \tilde{A}$, the weak $W^{2, \frac{d}{2}}$ convergence $O^{n} \rightharpoonup O$ is also improved to strong convergence.
\qedhere \end{proof} Theorem~\ref{thm:uhlenbeck-ball} was extended in \cite{MR815194} to a ``removal of singularity'' result for connections defined only on a punctured ball. Let $B'_{r} = \set{x \in \BigR^{d} : 0 < \abs{x} < r}$. \begin{theorem} [Uhlenbeck's lemma on a punctured ball] \label{thm:uhlenbeck-pball} Consider $\bfD \in \mathcal A^{1, \frac{d}{2}}_{loc}(B_{1+\delta}')$ for some $\delta > 0$, which admits a representative $\bfD = \mathrm{d} + A$ with $A \in W^{1, \frac{d}{2}}_{loc}(B'_{1+\delta}; \frkg)$ and satisfies \begin{equation*} \nrm{F[A]}_{L^{\frac{d}{2}}(B_{1}')} \leq \epsilon_{0}'. \end{equation*} Then there exists $O \in \mathcal G^{2, \frac{d}{2}}_{loc}(B_{1}')$ such that $\tilde{A} = Ad(O) A - O_{;x}$ obeys \begin{equation*} \partial^{\ell} \tilde{A}_{\ell} = 0 \quad \hbox{ in } B_{1}', \end{equation*} and \begin{equation*} \nrm{\tilde{A}}_{W^{1, \frac{d}{2}}(B_{1}')} \lesssim \nrm{F[A]}_{L^{\frac{d}{2}}(B_{1}')}. \end{equation*} \end{theorem} As a consequence, we see that $\tilde{A}$ is the restriction of a $\mathcal A^{1, \frac{d}{2}}_{loc}$ connection on the full ball $B_{1+\delta}$. For a proof, we refer the reader to \cite{MR815194}. If $F$ satisfies higher (covariant) regularity bounds, then so does $\tilde{A}$ in the above theorems. This statement is most naturally formulated as an elliptic regularity result for the nonlinear div-curl system satisfied by $\tilde{A}$ with $\partial^{\ell} \tilde{A}_{\ell} = 0$. In what follows, we omit the tilde for simplicity, and we focus on quantitative bounds in scaling-invariant spaces. We start with a simple interior regularity result. \begin{lemma} \label{lem:div-curl-A-intr} Let $A \in W^{1, \frac{d}{2}}(B)$ be a solution to the nonlinear div-curl system \begin{equation} \label{eq:div-curl-A} \begin{aligned} \partial_{j} A_{k} - \partial_{k} A_{j} =& F_{jk} - [A_{j}, A_{k}], \\ \partial^{\ell} A_{\ell} =& 0.
\end{aligned} \end{equation} If $\bfD^{(m)} F \in L^{\frac{d}{m+2}}(B)$ with $\frac{d}{m+2} > 1$, then $\partial^{(m+1)} A \in L^{\frac{d}{m+2}}(\lambda B)$ for any $0 \leq \lambda < 1$, with a bound depending only on $m$, $\nrm{\bfD^{(m)} F}_{L^{\frac{d}{m+2}}(B)}$, $\nrm{A}_{L^{d}(B)}$ and $\lambda$. \end{lemma} \begin{proof} Since it is a straightforward interior elliptic regularity argument, we only sketch the proof. We proceed by a simple induction on $m$; the key point is that $\partial^{(m)} F_{jk}$ and $\partial^{(m)} [A_{j}, A_{k}]$ in $L^{\frac{d}{m+2}}$ are controlled by $\bfD^{(m)} F$ in $L^{\frac{d}{m+2}}$ and the inductive bounds for $\partial^{(m'+1)} A$ in $L^{\frac{d}{m'+2}}$ ($0 \leq m' \leq m$). \end{proof} When Theorem~\ref{thm:uhlenbeck-ball} is applied to a unit ball $B_{1}(x)$ centered near the boundary $\partial B_{R}$ of a larger ball, it is of interest to control regularity of $A$ up to the boundary $\partial B_{R}$. For this purpose, consider normalized angular derivatives $\mbox{$\not \hskip-.25em \rd$} = \set{\frac{1}{\abs{x}} (x_{j} \partial_{k} - x_{k} \partial_{j})}$ about the origin (at which $B_{R}$ is centered), and the corresponding covariant angular derivatives $\mbox{$\not \!\! \covD$} = \set{\frac{1}{\abs{x}} (x_{j} \bfD_{k} - x_{k} \bfD_{j})}$. In any unit ball away from the origin, we show that higher angular regularity of $F$ implies the corresponding regularity of $A$ in the Coulomb gauge. \begin{lemma} \label{lem:div-curl-A-tang} Let $B$ be a unit ball in $\BigR^{d}$ such that $B \cap B_{1}(0) = \emptyset$, and let $A \in W^{1, \frac{d}{2}}(B)$ be a solution to the nonlinear div-curl system \eqref{eq:div-curl-A}. If $\mbox{$\not \!\! \covD$}^{(m)} F \in L^{\frac{d}{m+2}}(B)$ with $\frac{d}{m+2} > 1$, then $\partial \mbox{$\not \hskip-.25em \rd$}^{(m)} A \in L^{\frac{d}{m+2}}(\lambda B)$ for any $0 \leq \lambda < 1$, with a bound depending only on $m$, $\nrm{\mbox{$\not \!\! \covD$}^{(m)} F}_{L^{\frac{d}{m+2}}(B)}$, $\nrm{A}_{W^{1, \frac{d}{2}}(B)}$ and $\lambda$. \end{lemma} \begin{proof} This lemma is most simply proved by commuting with the Lie derivatives with respect to the normalized rotation vector fields $\overline{\Omega}_{jk} = \frac{1}{d(0, B)} \Omega_{jk}$; these generate isometries and thus commute exactly with the div-curl system. Moreover, their lengths are comparable to $1$ (independent of $B$), so that $\abs{\mathcal L^{(\leq m)}_{\overline{\Omega}} A} \simeq_{m} \abs{\mbox{$\not \hskip-.25em \rd$}^{(\leq m)} A}$. As before, when $p = \frac{d}{m+2} > 1$, the statement follows (with explicit bounds) by an induction on $m$. By the trace theorem and the (angular) Sobolev inequality, observe that \begin{equation*} \nrm{u}_{L^{\infty}_{r} L^{\frac{d-1}{m+1}}_{\Theta}(B)} \lesssim \nrm{u}_{L^{\frac{d}{m+2}}(B)} + \nrm{\partial u}_{L^{\frac{d}{m+2}}(B)}. \end{equation*} Using this inequality and H\"older, we may control $\mathcal L_{\overline{\Omega}}^{(m)} F$ and $\overline{\Omega}^{(m)} [A_{j}, A_{k}]$ in $L^{\frac{d}{m+2}}$ by $\mbox{$\not \!\! \covD$}^{(\leq m)} F$ in $L^{\frac{d}{m+2}}$ and the inductive bounds for $\partial \mbox{$\not \hskip-.25em \rd$}^{(\leq m)} A$ in $L^{\frac{d}{m+2}}$. Then we may proceed as in the proof of Lemma~\ref{lem:div-curl-A-intr}. \end{proof} \begin{remark} \label{rem:uhlenbeck-cont} As in Theorem~\ref{thm:uhlenbeck-ball}(2), an argument similar to Lemma~\ref{lem:div-curl-A-intr} (resp. Lemma~\ref{lem:div-curl-A-tang}) for the div-curl system for $\tilde{A}$ leads to strong convergence of $\partial^{(\leq m+1)} \tilde{A}^{n}$ and $\partial^{(\leq m+2)} O^{n}$ in $L^{\frac{d}{m+2}}(\lambda B)$ (resp. $\partial^{(\leq m+1)} \tilde{A}^{n}$ and $\partial^{(\leq 2)} \mbox{$\not \hskip-.25em \rd$}^{(\leq m)} O^{n}$ in $L^{\frac{d}{m+2}}(\lambda B \cap B_{R})$), provided that $A^{n} \to A$ in $W^{m, \frac{d}{m+1}}$. We omit the straightforward proof.
\end{remark} Next, we record a simple interior regularity result for the div-curl system of $O$. \begin{lemma} \label{lem:div-curl-O} Let $O \in W^{2, \frac{d}{2}}(B)$ be a solution to the div-curl system \begin{equation} \label{eq:div-curl-O} \begin{aligned} \partial_{j} O_{;k} - \partial_{k} O_{;j} =& [O_{;j}, O_{;k}], \\ \partial^{\ell} O_{; \ell} = & H. \end{aligned} \end{equation} If $H \in \ell^{1} L^{\frac{d}{2}}(B)$, then $O_{;x} \in \ell^{1} W^{1, \frac{d}{2}}(\lambda B)$ for any $0 \leq \lambda < 1$, with the bound \begin{equation*} \nrm{O_{;x}}_{\ell^{1} \dot{W}^{1, \frac{d}{2}}(\lambda B)} \lesssim_{\lambda} \nrm{H}_{\ell^{1} L^{\frac{d}{2}}(B)} + \nrm{O_{;x}}_{W^{1, \frac{d}{2}}(B)}^{2}. \end{equation*} Moreover, if $(O', H') \in W^{2, \frac{d}{2}}(B) \times \ell^{1} L^{\frac{d}{2}}(B)$ is another solution to \eqref{eq:div-curl-O}, then \begin{equation*} \nrm{O_{;x} - O'_{;x}}_{\ell^{1} \dot{W}^{1, \frac{d}{2}} (\lambda B)} \lesssim_{\lambda} \nrm{H-H'}_{\ell^{1} L^{\frac{d}{2}}(B)} +( \nrm{O_{;x}}_{W^{1, \frac{d}{2}}(B)} + \nrm{O'_{;x}}_{W^{1, \frac{d}{2}}(B)}) \nrm{O_{;x} - O'_{;x}}_{W^{1, \frac{d}{2}}(B)}. \end{equation*} \end{lemma} The key point is that $[O_{;j}, O_{;k}]$ in $\ell^{1} L^{\frac{d}{2}}(B)$ can be estimated by $O_{;j}, O_{;k}$ in $W^{1, \frac{d}{2}}(B)$. We omit the obvious proof. The $\ell^{1} \dot{W}^{1, \frac{d}{2}}$ bound on $O_{;x}$ is useful as it implies continuity of $O$. More precisely, we have the following: \begin{lemma} \label{lem:ptwise-O} If $O_{;x} \in \ell^{1} W^{1, \frac{d}{2}}(B)$, then $O$ is continuous on $B$. \end{lemma} \begin{proof} Without loss of generality, let $x_{1}$ be farther away from $\partial B$ than $x_{2}$. As in the proof of Morrey's inequality, we have \begin{equation*} d(O(x_{1}), O(x_{2})) \lesssim \int_{B(x_{1}, 2r)} \frac{\abs{O_{;x}}}{\abs{x - x_{1}}^{d-1}} + \frac{\abs{O_{;x}}}{\abs{x - x_{2}}^{d-1}} \, \mathrm{d} x.
\end{equation*} The last integral, in which $r = \abs{x_{1} - x_{2}}$, may be estimated in terms of the Besov norm of the extension of $O_{;x}$, and vanishes as $x_{1} \to x_{2}$. \qedhere \end{proof} \subsection{Good global gauge theorem on the ball} The goal of this subsection is to prove Theorem~\ref{thm:goodrep-ball}. The overall proof is divided into two steps: \begin{itemize} \item First, we prove the quantitative statements under the assumption that $\bfD$ admits a global gauge potential $A \in \dot{W}^{1, \frac{d}{2}}(B_{R})$. \item Next, using softer arguments, we remove the global gauge assumption. \end{itemize} In the first step, the idea is to produce local gauges on balls $B_{1}(x)$ centered inside $B_{R}$ using Uhlenbeck's lemma, and then patch them up to a global gauge on $B_{R}$. To handle balls near the boundary, the following simple extension procedure is helpful. \begin{lemma} \label{lem:ext-simple} Let $A \in W^{1, \frac{d}{2}}(B_{R})$ with $A_{r} = 0$ on $\partial B_{R}$. Extend $A$ outside $B_{R}$ by \begin{equation*} \bar{A}_{r} \left( \frac{R^{2}}{r}, \Theta \right)= - A_{r}(r, \Theta), \quad \bar{A}_{\Theta} \left( \frac{R^{2}}{r}, \Theta \right)= A_{\Theta} (r, \Theta). \end{equation*} Then the extension obeys \begin{equation} \label{eq:ext-simple} F[\bar{A}] \left(\frac{R^{2}}{r}, \Theta\right) = F[A](r, \Theta) \quad \hbox{ for } r < R. \end{equation} \end{lemma} The proof is an easy algebraic computation, which we omit. We now carry out the first step. \begin{proposition} \label{prop:goodrep-ball-key} Theorem~\ref{thm:goodrep-ball} holds under the additional assumption that $A \in \dot{W}^{1, \frac{d}{2}}(B_{R})$. \end{proposition} \begin{proof} By rescaling, we may set $r = 1$, i.e., $\underline{r}_{c}^{\epsilon_\ast}[A] \geq 1$.
Then we need to show that \eqref{eq:goodrep-ball-est} holds with an implicit constant depending only on $\epsilon_{\ast}$ and $R$, provided that $\epsilon_{\ast}$ is sufficiently small compared to a universal constant. If $R \lesssim 1$, then the conclusion of Theorem~\ref{thm:goodrep-ball} follows by Uhlenbeck's lemma, so we may assume that $R > 10$ (say). Applying Lemma~\ref{lem:Ar=0}, we may assume, without loss of generality, that $A_{r} = 0$. Then we extend $A$ outside $B_{R}$ via Lemma~\ref{lem:ext-simple}. By \eqref{eq:ext-simple}, it follows that the extended connection still has concentration radius $\gtrsim 1$ in $B_{R+10}$. Choosing $\epsilon_{\ast}$ sufficiently small, we may ensure that Uhlenbeck's lemma applies to the extended connection on balls of radius $2$ centered in $B_{R}$. Consider a covering $\set{B_{\alpha}}$ of $B_{R}$ by uniformly separated unit balls centered in $B_{R}$, and apply Uhlenbeck's lemma on each $2 B_{\alpha}$ to obtain local data $A_{(\alpha)} \in W^{1, \frac{d}{2}}(2 B_{\alpha})$ and $O_{(\alpha \beta)} \in \mathcal G^{2, \frac{d}{2}}(2 B_{\alpha} \cap 2 B_{\beta})$. By Lemma~\ref{lem:div-curl-A-intr}, we see that $A_{(\alpha)}$ enjoys higher regularity properties in each interior ball $B_{\alpha}$ (i.e., $2 B_{\alpha} \cap \partial B_{R} = \emptyset$). For a boundary ball $B_{\alpha}$, i.e., $2 B_{\alpha} \cap \partial B_{R} \neq \emptyset$, we first obtain higher angular regularity of $A$ in $B_{\alpha} \cap B_{R}$ by Lemma~\ref{lem:div-curl-A-tang}, and then also regularity in the radial direction by the equations \begin{equation} \label{eq:div-curl-rad} \partial_{r} A_{r} = - \mathrm{div}_{\Theta} A_{\Theta}, \qquad \partial_{r} A_{\Theta} = \partial_{\Theta} A_{r} + [A_{r}, A_{\Theta}] + F_{r \Theta}, \end{equation} as well as radial covariant derivative bounds on $F_{r \Theta}$.
Finally, observe that the desired higher regularity of $O_{(\alpha \beta)}$ in $B_{\alpha} \cap B_{\beta} \cap B_{R}$ follows from the equation $O_{(\alpha \beta); x} = Ad(O_{(\alpha \beta)}) A_{(\beta)} - A_{(\alpha)}$ and the bounds for $A_{(\alpha)}, A_{(\beta)}$. As a result, on the covering $\set{B_{\alpha} \cap B_{R}}$, we obtain local data $O_{(\alpha \beta)} \in W^{k, \frac{d}{k}}(B_{\alpha} \cap B_{\beta} \cap B_{R})$ and $A_{(\alpha)} \in W^{k, \frac{d}{k}}(B_{\alpha} \cap B_{R})$, provided that $\bfD^{(k)} F \in L^{\frac{d}{k}}$ (with $k \geq 1$, $\frac{d}{k} > 1$). We are in a position to apply Proposition~\ref{prop:patching-ball}, from which the conclusion of Theorem~\ref{thm:goodrep-ball} follows. \end{proof} Finally, we remove the global gauge assumption, and thereby complete the proof of Theorem~\ref{thm:goodrep-ball}. \begin{proof}[Completion of proof of Theorem~\ref{thm:goodrep-ball}] Consider a sequence $R_{n} \nearrow R$. Apply Proposition~\ref{prop:goodrep-ball-key} to each $A \restriction_{B_{R_{n}}}$, which gives rise to $\tilde{A}^{(n)}$ and $O^{(n)}$ such that \begin{align*} O^{(n)}_{;j} =& Ad(O^{(n)}) A_{j} - \tilde{A}^{(n)}_{j}, \\ \partial_{k} O^{(n)}_{;j} =& [O^{(n)}_{;k}, Ad(O^{(n)}) A_{j}] + Ad(O^{(n)}) \partial_{k} A_{j} - \partial_{k} \tilde{A}^{(n)}_{j}. \end{align*} It follows that $O^{(n)}_{;x}$ is uniformly bounded in $W^{1, \frac{d}{2}}$ on each fixed $B_{R'}$. Therefore, after passing to a subsequence, there exists $O \in W^{2, \frac{d}{2}}_{loc}(B_{R}; \BigR^{N \times N})$ such that $O^{(n)} \rightharpoonup O$ in $W^{2, \frac{d}{2}}(B_{R'}; \BigR^{N \times N})$ for every $0 < R' < R$ and $O^{(n)} \to O$ a.e. on $B_{R}$. Hence, $O \in \mathcal G^{2, \frac{d}{2}}_{loc}(B_{R})$ and moreover \begin{equation*} \tilde{A}_{j} = Ad(O) A_{j} - O_{;j} \end{equation*} is the weak limit of $\tilde{A}^{(n)}$ in $W^{1, \frac{d}{2}}_{loc}$.
Since the $\dot{W}^{1, \frac{d}{2}}(B_{R_{n}})$ norm of $\tilde{A}^{(n)}$ is uniformly bounded in $n$, it follows that $\nrm{\tilde{A}}_{\dot{W}^{1, \frac{d}{2}}(B_{R})} \lesssim_{\nrm{F}_{L^{\frac{d}{2}}}} 1$. \end{proof} \subsection{Good global gauge theorem on the whole space} Next, we establish Theorem~\ref{thm:goodrep}. \begin{proof}[Proof of Theorem~\ref{thm:goodrep}] By rescaling, we set $\underline{R}_{c} = 1$. Throughout this proof, we work with global gauge potentials in $W^{1, \frac{d}{2}}_{loc}(\BigR^{d})$ for $\bfD$, which exist thanks to Proposition~\ref{prop:ball-triv}. The first main task is to find a good gauge in a suitable exterior domain. By hypothesis, and our normalization $\underline{R}_{c} = 1$, we have $\nrm{F[A]}_{L^{\frac{d}{2}}(\BigR^{d} \setminus \overline{B})} < \epsilon_{\ast}$. Consider the inversion map \begin{equation*} \iota : x \mapsto y = \frac{x}{\abs{x}^{2}}, \end{equation*} which clearly satisfies $\iota \circ \iota = id$. Under $\iota$, the exterior region $\BigR^{d} \setminus \overline{B}$ is the image of the punctured unit ball $B'$, and vice versa. The map $\iota$ is a conformal isometry, such that \begin{equation*} (\iota^{\ast} \delta)^{ij} = \abs{x}^{4} \delta^{ij}, \quad \iota^{\ast}(\mathrm{d} y^{1} \wedge \cdots \wedge \mathrm{d} y^{d}) = \frac{(-1)^{d}}{\abs{x}^{2d}} \mathrm{d} x^{1} \wedge \cdots \wedge \mathrm{d} x^{d}. \end{equation*} In particular, if $T$ is a covariant $2$-tensor on $\iota(U) \subseteq \BigR^{d}$, then \begin{equation*} \int_{\iota(U)} \Big(\sum_{i, j} \abs{T_{y^{i} y^{j}}}^{2} \Big)^{\frac{d}{4}}(y) \, \mathrm{d} y = \int_{U} \Big(\sum_{i, j} \abs{\iota^{\ast} T_{x^{i} x^{j}}}^{2} \Big)^{\frac{d}{4}}(x) \, \mathrm{d} x.
\end{equation*} Choosing $\epsilon_{\ast} < \epsilon_{0}'$, we have $\nrm{\iota^{\ast} F}_{L^{\frac{d}{2}}(B')} = \nrm{F}_{L^{\frac{d}{2}}(\BigR^{d} \setminus \overline{B})} < \epsilon_{0}'$, and we may apply Theorem~\ref{thm:uhlenbeck-pball} to find a local gauge in which the gauge potential satisfies $\tilde{A}_{(\infty)} \in \dot{W}^{1, \frac{d}{2}}(B)$. Since $\iota^{\ast} \iota^{\ast} \bfD = \bfD$, the pullback $A_{(\infty)} = \iota^{\ast} \tilde{A}_{(\infty)}$ is a local gauge potential for $\bfD$ on $\BigR^{d} \setminus \overline{B}$. Since $\partial (\iota^{\ast} \tilde{A}_{(\infty)}) = \iota^{\ast} (\partial \tilde{A}_{(\infty)})$ and $\nrm{\iota^{\ast} \partial \tilde{A}_{(\infty)}}_{L^{\frac{d}{2}}(\BigR^{d} \setminus \overline{B})} = \nrm{\partial \tilde{A}_{(\infty)}}_{L^{\frac{d}{2}}(B')}$, it follows that $A_{(\infty)} \in \dot{W}^{1, \frac{d}{2}}(\BigR^{d} \setminus \overline{B})$ and \begin{equation} \label{eq:goodrep-extr} \nrm{A_{(\infty)}}_{L^{d} \cap \dot{W}^{1, \frac{d}{2}}(\BigR^{d} \setminus \overline{B})} \lesssim \epsilon_{\ast}. \end{equation} On the other hand, by Theorem~\ref{thm:goodrep-ball} applied to $5B$, we obtain a local gauge potential $A_{(0)}$ such that \begin{equation} \label{eq:goodrep-intr} \nrm{A_{(0)}}_{L^{d} \cap \dot{W}^{1, \frac{d}{2}}(5B)} \lesssim_{\epsilon_{\ast}, \underline{r}_{c}^{-1}} 1. \end{equation} By construction there exists $O \in \mathcal G^{2, \frac{d}{2}}_{loc}(5B \setminus \overline{B})$ such that \begin{equation*} A_{(0)} = Ad(O) A_{(\infty)} - O_{;x} \quad \hbox{ in } 5B \setminus \overline{B}. \end{equation*} By this relation, \eqref{eq:goodrep-extr} and \eqref{eq:goodrep-intr}, on $5B \setminus \overline{B}$ we have \begin{equation*} \nrm{O_{;x}}_{L^{d} \cap \dot{W}^{1, \frac{d}{2}}(5B \setminus \overline{B})} \lesssim_{\epsilon_{\ast}, \underline{r}_{c}^{-1}} 1.
\end{equation*} Using the partial approximation lemma (Lemma~\ref{lem:part-approx}) and performing a $0$-homogeneous extension outside a suitable sphere, it is straightforward to construct a gauge transform $\tilde{O}_{(\infty)}$ on $\BigR^{d} \setminus \overline{B}$ satisfying the following properties: \begin{itemize} \item $\tilde{O}_{(\infty)} = O$ in $2B \setminus \overline{B}$; \item $\tilde{O}_{(\infty)}(r \Theta) = \tilde{O}_{(\infty)}(4 \Theta)$ for $\Theta \in \BigS^{d-1}$ and $r \geq 4$; \item $\displaystyle{\nrm{\tilde{O}_{(\infty)}}_{L^{d} \cap \dot{W}^{1, \frac{d}{2}}(5B \setminus \overline{B})} \lesssim_{\epsilon_{\ast}, \underline{r}_{c}^{-1}} 1} $; \item $\tilde{O}_{(\infty)} $ is $C^{\infty}$ in $5B \setminus \overline{3B}$ with $\nrm{\tilde{O}_{(\infty)}}_{C^{N}(5B \setminus \overline{3B})} \lesssim_{\epsilon_{\ast}, \underline{r}_{c}^{-1}, N} 1$ for all $N \geq 0$. \end{itemize} Using $\tilde{O}_{(\infty)}$ to patch up the local gauges in $2B$ and $\BigR^{d} \setminus \overline{B}$, we obtain the global gauge potential \begin{equation*} A_{x} = \left\{ \begin{array}{cl} A_{(0) x} & \hbox{ on } 2B \\ Ad(\tilde{O}_{(\infty)}) A_{(\infty) x} - \tilde{O}_{(\infty); x} & \hbox{ on } \BigR^{d} \setminus \overline{B} \end{array} \right. \end{equation*} Let $O_{(\infty)}$ be the smooth $0$-homogeneous map on $\BigR^{d} \setminus \set{0}$ defined by $O_{(\infty)}(r \Theta) = \tilde{O}_{(\infty)}(4 \Theta)$, and define $B_{x} = A_{x} + \chi O_{(\infty); x}$. By \eqref{eq:goodrep-extr}, \eqref{eq:goodrep-intr} and the preceding bounds for $\tilde{O}_{(\infty)}$, the desired bounds \eqref{eq:goodrep-bnd} follow. \qedhere \end{proof} \subsection{Topological classes of rough connections} Here, we verify the results stated in Section~\ref{subsec:top-class}.
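As an orienting example for this subsection (the classical smooth situation, recalled here for the reader's convenience): the classes in question are homotopy classes of continuous maps $\BigS^{d-1} \to \bfG$, so that, for instance, for $d = 4$ and $\bfG = SU(2) \cong \BigS^{3}$,
\begin{equation*}
[O_{(\infty)}] \in \pi_{3}(SU(2)) \cong \mathbb{Z},
\end{equation*}
which matches the usual integer-valued topological charge of a connection on $\BigR^{4}$.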
Our first goal is to prove homotopy equivalence of the maps $O_{(\infty)}$ arising in different good representations of the same connection (Proposition~\ref{prop:goodrep-homotopy-0}). We need a few lemmas. \begin{lemma} \label{lem:homotopy} Let $O \in \mathcal G^{2,\frac{d}{2}}(A)$, where $A = \set{x \in \BigR^{d} : R_{1} < \abs{x} < R_{2}}$ is an annulus. For almost every $R \in (R_{1}, R_{2})$, the restriction $O \restriction_{\partial B_{R}}$ is continuous, and these restrictions are all homotopic to each other. \end{lemma} By this lemma, we may define $[O]$ to be the homotopy class (as continuous maps $\BigS^{d-1} \to \bfG$) of the restriction of $O$ to $\partial B_{R}$ for almost every $R$. We refer to such $R$'s as \emph{generic radii}. \begin{proof} Since the boundary of $A$ is smooth, we may approximate $O$ by $O^{n} \in C^{\infty}(A; \bfG)$ in the $W^{2, \frac{d}{2}}(A; \BigR^{N \times N})$-topology \cite{MR710054, MR815194}. After passing to a subsequence, for almost every $R \in (R_{1}, R_{2})$, we have \begin{equation*} O^{n} \restriction_{\partial B_{R}} \to O \restriction_{\partial B_{R}} \quad \hbox{ in } W^{2, \frac{d}{2}}(\partial B_{R}; \BigR^{N \times N}). \end{equation*} The lemma now follows from the observation that $W^{2, \frac{d}{2}}(\partial B_{R}; \BigR^{N \times N}) \hookrightarrow C^{0}(\partial B_{R}; \BigR^{N \times N})$, due to the Sobolev embedding on spheres. \end{proof} \begin{lemma} \label{lem:homotopy-0} Let $\delta > 0$ and let $O \in \mathcal G^{2,\frac{d}{2}}(\tilde{A})$, where $\tilde{A} = \set{x \in \BigR^{d} : R_{1} -\delta < \abs{x} < R_{2}}$. Then there exists an extension $\tilde{O} \in \mathcal G^{2, \frac{d}{2}}(B_{R_{2}})$ such that $\tilde{O} \restriction_{A} = O \restriction_{A}$ if and only if $[O] = [id]$. \end{lemma} In this lemma, $[O]$ is defined by viewing $O$ as defined on either the annulus $\tilde{A}$ or $A$; both give the same answer by Lemma~\ref{lem:homotopy}.
Our proof is qualitative, in that we make no claim regarding the size of $\tilde{O} \in \mathcal G^{2, \frac{d}{2}}(B_{R_{2}})$. \begin{proof} We first prove the ``only if'' part. By Lemma~\ref{lem:part-approx} (with $V = \emptyset$ and $U = W = B_{R_{2}}$), there exists an approximating sequence $O^{n} \in C^{\infty}(B_{R_{2}}; \bfG)$, which approaches $O$ in the $W^{2, \frac{d}{2}}(B_{R_{2}}; \BigR^{N \times N})$-topology. Recalling the proof of Lemma~\ref{lem:homotopy}, we see that $[O]$ is the homotopy class of $O^{n} \restriction_{\partial B_{R}}$ for any $\partial B_{R} \subseteq B_{R_{2}}$, provided that $n$ is sufficiently large. Now, the whole map $O^{n} : B_{R_{2}} \to \bfG$ provides a homotopy from $O^{n} \restriction_{\partial B_{R_{2}}}$ to the constant map $O^{n} \restriction_{\set{0}}$, which in turn is homotopic to the identity map. Next, we prove the ``if'' part. First, by Lemma~\ref{lem:part-approx}, there exists $O' \in \mathcal G^{2, \frac{d}{2}}(R_{1} - \frac{4}{3} \delta < \abs{x} < R_{2})$ such that $O' \in C^{\infty}(R_{1} - \frac{4}{3} \delta < \abs{x} < R_{1} - \frac{1}{4} \delta; \bfG)$ and $O' \restriction_{A} = O \restriction_{A}$. By Lemma~\ref{lem:homotopy}, $[O'] = [O] = [id]$. Working in the smooth category, we may find $\tilde{O} \in \mathcal G^{2, \frac{d}{2}}(B_{R_{2}})$ such that $\tilde{O} \restriction_{A} = O' \restriction_{A}$ while $\tilde{O} \in C^{\infty}(\abs{x} < R_{1} - \frac{1}{2} \delta)$. \qedhere \end{proof} \begin{lemma} \label{lem:homotopy-infty} Let $O \in \mathcal G^{2, \frac{d}{2}}(\BigR^{d} \setminus \overline{B})$. Then $[O] = [id]$. \end{lemma} In this lemma, $[O]$ is defined by viewing $O$ as defined on an annulus $A \subseteq \BigR^{d} \setminus B$. \begin{proof} Without loss of generality, let $U = \BigR^{d} \setminus \overline{B}_{1}$. We also observe that it suffices to prove $[O] = [const]$.
As before, by Lemma~\ref{lem:part-approx} (more precisely, a slight variant for the exterior domain), there exists an approximating sequence $O^{n} \in C^{\infty}(U; \bfG)$, which approaches $O$ in the $W^{2, \frac{d}{2}}(U; \BigR^{N \times N})$-topology; as a consequence, $[O]$ is the homotopy class of $O^{n} \restriction_{\partial B_{R}}$ for any $\partial B_{R} \subseteq U$, provided that $n$ is sufficiently large. By Sobolev embedding, note that \begin{equation*} \int_{U} \abs{O^{n}_{;x}}^{d} \, \mathrm{d} x < \infty \quad \hbox{ for all } n. \end{equation*} In the polar coordinates $(r, \Theta) \in (0, \infty) \times \BigS^{d-1}$, it follows that \begin{equation*} \int_{1}^{\infty} \int_{\BigS^{d-1}} \abs{\partial_{\Theta} O^{n} (r, \Theta)}^{d} \, \mathrm{d} V_{\BigS^{d-1}}(\Theta) \, \frac{\mathrm{d} r}{r} < \infty \quad \hbox{ for all } n, \end{equation*} which implies that $\liminf_{r \to \infty} \nrm{\partial_{\Theta} O^{n} (r, \Theta)}_{L^{d}(\BigS^{d-1})} = 0$. In particular, for suitable large $r$, the restriction $O^{n} \restriction_{\partial B_{r}}$ has small angular derivative, hence small oscillation, and is therefore homotopic to a constant map. The desired conclusion $[O] = [const]$ now follows. \end{proof} We are ready to prove Proposition~\ref{prop:goodrep-homotopy-0}. \begin{proof}[Proof of Proposition~\ref{prop:goodrep-homotopy-0}] By suitably replacing $\chi$, we may assume that $1-\chi$ vanishes outside the unit ball $B$. \pfstep{Proof of (1)} By equivalence of $(O_{(\infty)}, B_{x})$ and $(O'_{(\infty)}, B_{x}')$, there exists $O \in \mathcal G_{loc}^{2, \frac{d}{2}}(\BigR^{d})$ such that \begin{equation*} - O_{(\infty);x } + B_{x} = - Ad(O) O_{(\infty); x}' - O_{;x} + Ad(O) B_{x}'. \end{equation*} From a simple computation, it follows that \begin{equation*} (O_{(\infty)}^{-1} O O'_{(\infty)})_{;x} = Ad(O_{(\infty)}^{-1} O) B_{x}' - Ad(O_{(\infty)}^{-1}) B_{x}, \end{equation*} which implies that $O_{(\infty)}^{-1} O O'_{(\infty)} \in \mathcal G^{1, d}(\BigR^{d} \setminus \overline{B})$.
Applying Lemmas~\ref{lem:homotopy-0} and ~\ref{lem:homotopy-infty} to $O$ and $O_{(\infty)}^{-1} O O'_{(\infty)}$, respectively, it follows that \begin{equation*} [id] = [O] = [O_{(\infty)}^{-1} O O'_{(\infty)}]. \end{equation*} Therefore, $[O_{(\infty)}] = [O'_{(\infty)}]$, as desired. \pfstep{Proof of (2)} Since $[O_{(\infty)}' O_{(\infty)}^{-1}] = [id]$, by Lemma~\ref{lem:homotopy-0} there exists a gauge transform $P \in \mathcal G^{2, \frac{d}{2}}(2B)$ such that $P = O_{(\infty)}' O_{(\infty)}^{-1}$ in $2B \setminus \overline{B}$. Extend $P$ as a $0$-homogeneous map outside $2B$; abusing notation, we denote the extension again by $P$ (thus, $P = O_{(\infty)}' O_{(\infty)}^{-1}$ in $\BigR^{d} \setminus \overline{B}$). Apply the gauge transform $P$ to $A_{x} = - \chi O_{(\infty); x} + B_{x}$, and define $B'_{x}$ by the decomposition $Ad(P) A_{x} - P_{;x} = - \chi O'_{(\infty); x} + B'_{x}$. From $P \in \mathcal G^{2, \frac{d}{2}}(2B)$, it follows that $B'_{x} \in L^{d} \cap \dot{W}^{1, \frac{d}{2}}(2B)$. Moreover, outside $2B$, \begin{equation*} B'_{x} = Ad(P) B_{x}. \end{equation*} Observe that the $0$-homogeneity of $P$ is sufficient to ensure $Ad(P)B_{x} \in L^{d} \cap \dot{W}^{1, \frac{d}{2}}(\BigR^{d} \setminus \overline{2B})$. Hence $(O'_{(\infty); x}, B'_{x})$ is also a good representation, as desired. \qedhere \end{proof} Finally, we prove Proposition~\ref{prop:top-class-outer}. \begin{proof}[Proof of Proposition~\ref{prop:top-class-outer}] By scaling, we may set $R = 1$. Arguing as in the proof of Theorem~\ref{thm:goodrep}, we find local gauge potentials $A_{(\infty)}$ and $A'_{(\infty)}$ in $\BigR^{d} \setminus \overline{B}$ satisfying \eqref{eq:goodrep-extr}.
By construction, there exist $O, O' \in \mathcal G^{2, \frac{d}{2}}(5B \setminus \overline{B})$ such that \begin{equation*} A = Ad(O) A_{(\infty)} - O_{;x}, \quad A' = Ad(O') A'_{(\infty)} - O'_{;x} \quad \hbox{ in } 5B \setminus \overline{B}. \end{equation*} From the proof of Theorem~\ref{thm:goodrep}, as well as Definition~\ref{def:top-class}, note that the topological classes $[A]$ and $[A']$ are determined by the homotopy classes $[O]$ and $[O']$, respectively, as defined in Lemma~\ref{lem:homotopy}. In particular, it suffices to prove that $O \restriction \partial B_{r}$ and $O' \restriction \partial B_{r}$ are homotopic to each other for a generic $1 < r < 5$, in the sense of Lemma~\ref{lem:homotopy}. Since $\nrm{A - A'}_{L^{d}(5B)} \leq \epsilon_{\ast}$, the difference $O_{;x} - O'_{;x}$ obeys the bound \begin{equation*} \nrm{O_{;x} - O'_{;x}}_{L^{d}(5B \setminus \overline{B})} \lesssim \epsilon_{\ast}, \end{equation*} which holds independently of possible additional constant gauge transformations for $O$ or $O'$. By the pigeonhole principle, the following bound holds for some generic $1 < r < 5$: \begin{equation*} \nrm{O_{;x} - O'_{;x}}_{L^{d}(\partial B_{r})} \lesssim \epsilon_{\ast}. \end{equation*} After a suitable constant gauge transformation (which does not change the homotopy class), it follows that $O$ and $O'$ are close in $C^{\frac{1}{d}}(\partial B_{r})$, and therefore belong to the same homotopy class. \qedhere \end{proof} \section{Excision, gluing and extension of Yang--Mills initial data sets} \label{sec:excise} In this section, we provide proofs of the results stated in Section~\ref{subsec:excise} concerning Yang--Mills initial data sets. \subsection{Solvability results for the inhomogeneous Gauss equation} In this subsection, we address the question of solvability for the divergence equation \begin{equation}\label{ext-div} (\bfD^{(a)})^{\ell} e_{\ell} = h \end{equation} in the exterior of a convex domain.
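For orientation, we note a model computation (included here as a point of comparison): on all of $\BigR^{d}$ with $a = 0$, the divergence equation may always be solved by the classical formula
\begin{equation*}
e_{\ell} = \partial_{\ell} \Delta^{-1} h, \qquad \partial^{\ell} e_{\ell} = \Delta \Delta^{-1} h = h.
\end{equation*}
However, this choice spreads the support of $e$ over the whole space, which is unsuitable for excision and gluing; the solution operators constructed below are designed to, in addition, preserve supports in the exterior of the convex domain.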
To quantify the constants, we need to quantify the geometry of a convex domain. Let $K$ be a convex domain with barycenter $x_{K}$. By convexity, for each $\Theta \in \BigS^{d-1}$, there exists a unique intersection $f_{K}(\Theta)$ of $\partial K$ and the ray in the direction $\Theta$ emanating from $x_{K}$. Define the \emph{radius} of $K$ by $R(K) = \sup_{x, y \in K} \abs{x-y}$, and the \emph{Lipschitz constant} of $K$ by \begin{align} L(K) =& \sup_{\Theta, \Theta' \in \BigS^{d-1}} \frac{\abs{f_{K}(\Theta) - f_{K}(\Theta')}}{R(K) \abs{\Theta - \Theta'}} \label{eq:lip-K}. \end{align} Clearly, $R(K)$ is $1$-homogeneous and $L(K)$ is scaling-invariant, in the sense that $R(\lambda K) = \lambda R(K)$ and $L(\lambda K) = L(K)$ for $\lambda > 0$. We begin with a general solvability result for the usual divergence equation (i.e., $a = 0$). \begin{proposition} \label{prop:gauss-zero} For any convex domain $K$, there exists a solution operator $T_{0}$ for the equation $\partial^{\ell} e_{\ell} = h$ with the following properties: \begin{enumerate} \item (Boundedness) For $1 < p < \infty$ and $1-\frac{d}{p} < \sigma < 1 + \frac{d}{p}$, \begin{equation} \label{eq:gauss-zero} \nrm{T_{0} h}_{\dot{W}^{\sigma, p}} \lesssim_{L(K), \sigma, p} \nrm{h}_{\dot{W}^{\sigma-1, p}}. \end{equation} \item (Exterior support) If $h = 0$ in $\lambda K$, then $T_{0} h = 0$ in $\lambda K$. \item (Higher regularity) If $h$ is smooth, so is $T_{0} h$. \end{enumerate} \end{proposition} \begin{proof} In the case where $K$ is a ball, this was considered in our prior work \cite{OT1}, where $T_{0}$ is constructed as a pseudodifferential operator of order $-1$. Here we will use a slightly different but closely related solution operator. First, we claim that given a unit vector $\omega \in \BigS^{d-1}$, we can construct an exact solution operator $T_\omega$ with smooth homogeneous symbol of order $-1$, and kernel supported in a small conic neighborhood of $\omega$.
Our starting point is the simple observation that the following operator solves the divergence equation (say for $h \in C^{\infty}_{c}(\BigR^{d})$): \begin{equation*} \tilde{T}_{e_{1}} h(x) = \int_{-\infty}^{x^{1}} e_{1} h(y^{1}, x^{2}, \ldots, x^{d}) \, \mathrm{d} y^{1}, \end{equation*} where $e_{1}$ is the unit vector $(1, 0, \ldots, 0)$. This operator is translation-invariant with kernel \begin{equation*} e_{1} 1_{(0, \infty)}(x^{1}) \delta_{0}(x^{2}) \cdots \delta_{0}(x^{d}), \end{equation*} which is supported on the ray $\set{r e_{1} : r > 0}$. By rotation, for any unit vector $\omega \in \BigS^{d-1}$, we obtain an analogous translation-invariant solution operator $\tilde{T}_{\omega}$ whose kernel is supported on the ray $\set{r \omega : r > 0}$. Moreover, given a smooth function $\tilde{\chi}_{\omega}(\omega')$ on $\BigS^{d-1}$ supported on a neighborhood $\hat{C}_{\omega} \subseteq \BigS^{d-1}$ of $\omega$, the smooth average \begin{equation*} T_{\omega} h = \int \tilde{T}_{\omega'} (h) \tilde{\chi}_{\omega}(\omega') \, \mathrm{d} \omega' \end{equation*} is a translation-invariant solution operator, whose kernel is smooth outside the origin, homogeneous of degree $-d+1$ and supported in the conic neighborhood $C_{\omega} = \set{x \in \BigR^{d}: \frac{x}{\abs{x}} \in \hat{C}_{\omega}}$, as desired. We now turn to the issue of ensuring the exterior support property. If one were to work with the operators $\tilde{T}_{\omega}$, then it would be easy to produce such a solution operator $T$: we simply decompose the input into each angle $\omega$ and apply $\tilde{T}_{\omega}$, i.e., $T = \int_{\BigS^{d-1}} \tilde{T}_{\omega} \delta_{\omega}(\omega') \, \mathrm{d} \omega'$. Then (formally) the exterior support property holds for any convex set $K$.
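The computation underlying all of these operators is elementary: for $h \in C^{\infty}_{c}(\BigR^{d})$, only the first component of $\tilde{T}_{e_{1}} h$ is nonzero, and
\begin{equation*}
\partial^{\ell} (\tilde{T}_{e_{1}} h)_{\ell}(x) = \partial_{1} \int_{-\infty}^{x^{1}} h(y^{1}, x^{2}, \ldots, x^{d}) \, \mathrm{d} y^{1} = h(x)
\end{equation*}
by the fundamental theorem of calculus; the rotated operators $\tilde{T}_{\omega}$ and their smooth averages $T_{\omega}$ inherit this exact solution property by linearity.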
To use the operators $T_{\omega}$ with ``fattened'' kernel, we use a uniform conical partition of unity in the physical space $1 = \sum \chi_\omega$ (centered at the origin) and define our solution operator $T_{0}$ to be \[ T_{0} = \sum T_\omega \chi_\omega. \] Making the angular support of each $\chi_{\omega}$ sufficiently narrow (which, of course, increases the number of partitions) depending on $L(K)$, we may ensure the exterior support property of $T_{0}$. Multiplication by each $\chi_{\omega}$ is bounded on $\dot{W}^{\sigma-1, p}$ thanks to Hardy's inequality, which holds since $\abs{\sigma-1} < \frac{d}{p}$; hence \eqref{eq:gauss-zero} follows. The higher regularity property follows by differentiation. \qedhere \end{proof} Next, we generalize Proposition~\ref{prop:gauss-zero} to the inhomogeneous covariant Gauss equation \eqref{ext-div} when $\nrm{a}_{\dot{H}^{\frac{d-2}{2}}}$ is small by a perturbative argument. \begin{proposition} \label{prop:gauss-small} Let $\bfD = \mathrm{d} + a \in \mathcal A^{\frac{d-2}{2}, 2}(\BigR^{d})$ satisfy $\nrm{a}_{\dot{H}^{\frac{d-2}{2}}} \leq \epsilon_{\ast}$. For any convex domain $K$, there exists a solution operator $T_{a}$ for the equation $\bfD^{\ell} e_{\ell} = h$ with the following properties: \begin{enumerate} \item (Boundedness) For $2 \leq p < \infty$ and $1-\frac{d}{p} < \sigma < \frac{d}{2}$, \begin{equation} \label{eq:gauss-small} \nrm{T_{a} h}_{\dot{W}^{\sigma, p}} \lesssim_{L(K), \sigma, p} \nrm{h}_{\dot{W}^{\sigma-1, p}}. \end{equation} \item (Exterior support) If $h = 0$ in $\lambda K$, then $T_{a} h = 0$ in $\lambda K$. \item (Higher regularity) If $a$ and $h$ are smooth, so is $T_{a} h$. \end{enumerate} \end{proposition} \begin{proof} We proceed in two steps. \pfstep{Step~1: Definition of $T_{a}$} To define $T_{a}$, we solve the fixed point problem \[ e = T( h - [a^{\ell},e_{\ell}]). \] Let us abbreviate $[a^{\ell}, e_{\ell}] = ad(a) e$.
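Schematically, the fixed point is given by a Neumann series once $\| T \, ad(a) \|_{\dot{W}^{\sigma, p} \to \dot{W}^{\sigma, p}} < 1$ (the bound established next); note that each term of the series vanishes in $\lambda K$ when $h$ does, since $T$ has the exterior support property and $ad(a)$ acts pointwise:

```latex
e = T_{a} h := \sum_{k \geq 0} \bigl( - T \, ad(a) \bigr)^{k} T h,
\qquad
\nrm{T_{a} h}_{\dot{W}^{\sigma, p}}
  \leq \frac{\| T \|_{\dot{W}^{\sigma-1, p} \to \dot{W}^{\sigma, p}}}
            {1 - \| T \, ad(a) \|_{\dot{W}^{\sigma, p} \to \dot{W}^{\sigma, p}}}
       \, \nrm{h}_{\dot{W}^{\sigma-1, p}}.
```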
Under the conditions on $p$ and $\sigma$, multiplication by $a$ takes $\dot{W}^{\sigma, p}$ into $\dot{W}^{\sigma-1, p}$ (this may be proved by the usual Littlewood--Paley trichotomy), so that we can estimate \[ \| T ad(a) \|_{\dot{W}^{\sigma, p} \to \dot{W}^{\sigma, p}} \lesssim \|a\|_{\dot{H}^{\frac{d-2}{2}}}. \] Therefore, for $\nrm{a}_{\dot{H}^{\frac{d-2}{2}}}$ sufficiently small, we find $T_{a}$, which clearly satisfies the boundedness and exterior support properties. \pfstep{Step~2: Higher regularity} Here we assume that $\partial^{(m)} a \in \dot{H}^{\frac{d-2}{2}}$ and $\partial^{(m)} h \in \dot{W}^{\sigma-1, p}$ for $0 \leq m \leq n$, and prove that $\partial^{(n)} e \in \dot{W}^{\sigma, p}$. We consider the case $n=1$; higher values of $n$ are dealt with in a similar manner. Differentiating our fixed point problem, we get \begin{equation} \label{eq:T-diff} \partial e = T( \partial h - [a^{\ell},\partial e_{\ell}]) - T( [\partial a^{\ell},e_{\ell}]) + [\partial, T] ( h - [a^{\ell},e_{\ell}]), \end{equation} where we can estimate \[ \| T( [\partial a,e]) + [T,\partial] ( h - [a,e])\|_{\dot{W}^{\sigma, p}} \lesssim \| e\|_{\dot{W}^{\sigma, p}} + \|h\|_{\dot{W}^{\sigma, p}} \] with an implicit constant depending on the $\dot{H}^{\frac{d-2}{2}}$ norms of $\partial a$ and $a$. Then we have a fixed point problem for $\partial e$, which is solved in $\dot{W}^{\sigma, p}$ to obtain the bound \[ \| \partial e\|_{\dot{W}^{\sigma, p}} \lesssim \| h \|_{\dot{W}^{\sigma, p} \cap \dot{W}^{\sigma-1, p}}. \] One minor issue here is that we do not a-priori know that $\partial e \in \dot{W}^{\sigma, p}$. But this can be easily circumvented by replacing the gradient with the appropriate divided difference. \qedhere \end{proof} Finally, we prove Theorem~\ref{thm:gauss-0}, where the smallness assumption for $a$ is removed. For simplicity, we restrict to the critical space $h \in \dot{H}^{\frac{d-6}{2}}$ where $d \geq 4$, which suffices for our main applications.
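Before turning to the proof, we note that the divided-difference device in Step~2 above can be displayed explicitly; this is a sketch, with $\delta_{y}$ a notation introduced here only for illustration:

```latex
% Normalized difference operator (Leibniz rule for differences shifts one factor):
\delta_{y} e(x) := \frac{e(x+y) - e(x)}{\abs{y}},
\qquad
\delta_{y} e
  = T\bigl( \delta_{y} h - [a^{\ell}, \delta_{y} e_{\ell}]
      - [\delta_{y} a^{\ell}, e_{\ell}(\cdot + y)] \bigr)
    + [\delta_{y}, T]\bigl( h - [a^{\ell}, e_{\ell}] \bigr).
```

The resulting fixed point problem for $\delta_{y} e$ admits bounds uniform in $y$, and $\partial e \in \dot{W}^{\sigma, p}$ is recovered in the limit $\abs{y} \to 0$.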
\begin{proof}[Proof of Theorem~\ref{thm:gauss-0}] We start from the case when $h$ is not differentiated (i.e., $h \in L^{\frac{d}{2}}$), and gradually move up to higher regularity spaces. In the proof, we omit the dependence of constants on $L(K)$. \pfstep{Step~1: Construction of $T_{a}: \dot{W}^{-1, p} \to L^{p}$ $(1 < p < d)$} We compensate for the lack of smallness of $a$ by adding a weight $w = 2^{-\phi}$, where $\phi$ is a smooth bounded increasing radial function. The goal is to ensure that \[ \| T ad(a) \|_{L^{p}_{w} \to L^p_w} \ll 1. \] We denote \[ A_k = \{ x \in \BigR^{d}: k \leq \phi(x) \leq k+1\} . \] Then for $j \geq k$, by H\"older's inequality, the embedding $L^{q} \hookrightarrow \dot{W}^{-1, p}$ (where $q^{-1} = p^{-1} + d^{-1}$) and Proposition~\ref{prop:gauss-zero}, we have \[ \| 1_{A_j} T ad(a) 1_{A_k} \|_{L^{p}_{w} \to L^{p}_w} \lesssim 2^{k-j} \|a\|_{L^{d}(A_k)} . \] On the other hand, the LHS vanishes when $j < k$ by the exterior support property. After summation, we obtain \[ \| T ad(a) \|_{L^{p}_{w} \to L^{p}_w} \lesssim \sup_k \| a\|_{L^{d}(A_k)}. \] Thus to ensure the desired smallness, it suffices to choose $w$ so that the RHS is small, which is easily done. \pfstep{Step~2: Boundedness into $\dot{H}^{\frac{d-4}{2}}$} Let $n$ be the least integer greater than or equal to $\frac{d-4}{2}$. The strategy is to commute with $\partial$ up to order $n$ (as in Step~2 in the proof of Proposition~\ref{prop:gauss-small}), and inductively prove boundedness of $T_{a}: \dot{W}^{m-1, \frac{d}{m+2}} \to \dot{W}^{m, \frac{d}{m+2}}$ for $m=1, \ldots, n$; this would directly imply \eqref{eq:gauss-0-bnd} for even $d$, and after interpolation for odd $d$. For simplicity, as in Step~2 of the proof of Proposition~\ref{prop:gauss-small}, we only consider the case $n=1$; the general case is dealt with by induction in a similar manner.
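The summation in Step~1 above is a Schur-type bound; here is a sketch, writing $F_{k} = \nrm{1_{A_{k}} f}_{L^{p}_{w}}$ and $G_{j} = \nrm{1_{A_{j}} T \, ad(a) f}_{L^{p}_{w}}$ and using the finite overlap of the sets $A_{k}$:

```latex
% Triangular block structure with geometric off-diagonal decay:
G_{j} \lesssim \Bigl( \sup_{k} \nrm{a}_{L^{d}(A_{k})} \Bigr)
         \sum_{k \leq j} 2^{k-j} F_{k},
\qquad
\sup_{k} \sum_{j \geq k} 2^{k-j} \leq 2,
% so Schur's test on \ell^{p} gives
\nrm{T \, ad(a) f}_{L^{p}_{w}} \simeq \nrm{G}_{\ell^{p}}
  \lesssim \Bigl( \sup_{k} \nrm{a}_{L^{d}(A_{k})} \Bigr) \nrm{F}_{\ell^{p}}
  \simeq \Bigl( \sup_{k} \nrm{a}_{L^{d}(A_{k})} \Bigr) \nrm{f}_{L^{p}_{w}}.
```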
Our starting point is \eqref{eq:T-diff}: \begin{equation} \label{eq:T-diff'} \partial e = T( \partial h - [a^{\ell},\partial e_{\ell}]) - T( [\partial a^{\ell},e_{\ell}]) + [\partial, T] ( h - [a^{\ell},e_{\ell}]). \end{equation} The strategy is to use $\nrm{e}_{L^{\frac{d}{2}}}$, which is already under control, to estimate the last two terms, and use an iteration argument in $L^{p}_{w}$ as in Step~1 with $p = \frac{d}{3}$ to estimate\footnote{As in Step~2 of Proposition~\ref{prop:gauss-small}, to be rigorous one should work with divided differences, but the argument is essentially the same.} $\partial e$. By Proposition~\ref{prop:gauss-zero}, Sobolev and H\"older, we have \begin{equation*} \nrm{T([\partial a^{\ell}, e_{\ell}])}_{L^{\frac{d}{3}}} \lesssim \nrm{\partial a}_{L^{\frac{d}{2}}} \nrm{e}_{L^{\frac{d}{2}}} \lesssim \nrm{a}_{\dot{H}^{\frac{d-2}{2}}} \nrm{e}_{L^{\frac{d}{2}}}. \end{equation*} On the other hand, note that $[T, \partial] = \sum_{\omega} T_{\omega} \partial \chi_{\omega}$ (cf. proof of Proposition~\ref{prop:gauss-zero}), where $\chi_{\omega}$ is $0$-homogeneous. Thus by $T_{\omega} : \dot{W}^{-1, p} \to L^{p}$, Hardy, Sobolev and H\"older, \begin{equation*} \nrm{[\partial, T](h - [a^{\ell}, e_{\ell}])}_{L^{\frac{d}{3}}} \lesssim \nrm{h - [a^{\ell}, e_{\ell}]}_{L^{\frac{d}{3}}} \lesssim \nrm{h}_{L^{\frac{d}{3}}} + \nrm{a}_{\dot{H}^{\frac{d-2}{2}}}\nrm{e}_{L^{\frac{d}{2}}}. \end{equation*} By Step~1 with $p = \frac{d}{2}$, we have $\nrm{e}_{L^{\frac{d}{2}}} \lesssim \nrm{h}_{\dot{W}^{-1, \frac{d}{2}}} \lesssim \nrm{h}_{L^{\frac{d}{3}}}$. Then finding the fixed point $\partial e$ of \eqref{eq:T-diff'} as in Step~1, the desired estimate $\nrm{\partial e}_{L^{\frac{d}{3}}} \lesssim_{\nrm{a}_{\dot{H}^{\frac{d-2}{2}}}} \nrm{h}_{L^{\frac{d}{3}}}$ follows. \pfstep{Step~3: Higher regularity} This step is analogous to Step~2 of Proposition~\ref{prop:gauss-small}, where the iteration is done in $L^{p}_{w}$.
\qedhere \end{proof} \subsection{Initial data surgery} Now we explore consequences of the previous result in terms of excising and extending initial data sets. The aim of this subsection is to prove Theorems~\ref{thm:ext-id} and \ref{thm:excise}. Before we turn to the proofs, a few remarks about Sobolev extension are in order. For any domain $K$ with locally Lipschitz boundary, Stein's extension theorem \cite[\S VI.3]{MR0290095} says that there exists a universal linear extension operator $\mathfrak{E}$ for all Sobolev spaces $W^{\sigma, p}(K) \to W^{\sigma, p}(\BigR^{d})$. When $K$ is convex with $R(K) = 1$ (which we may ensure by scaling), it can be checked that the constant depends only on $\sigma, p$ and the Lipschitz constant $L(K)$. In particular, we have \begin{equation} \label{eq:ext-K} \nrm{\mathfrak{E} u}_{\dot{W}^{\sigma, p}} \lesssim_{L(K), p, \sigma} \nrm{u}_{\dot{W}^{\sigma, p}(K)} \quad \hbox{ where } \sigma \geq 0, \ q, p \in (1, \infty), \ \frac{d}{p} - \sigma = \frac{d}{q}. \end{equation} The same bound holds for general $R(K)$ by scaling-invariance of both sides. Similarly, for an annular region $4K \setminus \overline{K}$ (with general $R(K)$), there exists a universal linear extension operator $\mathfrak{E}$ such that \begin{equation} \label{eq:ext-ann-K} \nrm{\mathfrak{E} u}_{\dot{W}^{\sigma, p}} \lesssim_{L(K), p, \sigma} \nrm{u}_{\dot{W}^{\sigma, p}(4 K \setminus \overline{K})} \quad \hbox{ where } \sigma \geq 0, \ q, p \in (1, \infty), \ \frac{d}{p} - \sigma = \frac{d}{q}. \end{equation} Now we prove Theorem~\ref{thm:ext-id}, concerning truncation of Yang--Mills initial data sets. \begin{proof}[Proof of Theorem~\ref{thm:ext-id}] Let $(a, e)$ be the given $\mathcal H^{\frac{d-2}{2}}$ Yang--Mills initial data set on $2K \setminus \overline{K}$.
In this proof, we use the shorthands \begin{equation*} A = \nrm{a}_{\dot{H}^{\frac{d-2}{2}}(2K \setminus \overline{K})}, \quad E = \nrm{e}_{\dot{H}^{\frac{d-4}{2}}(2K \setminus \overline{K})}. \end{equation*} First, we use the universal extension $\mathfrak{E}$ to extend $a, e$ to $\bar{a}, \bar{e}'$ on $\BigR^{d}$, respectively. Clearly, the restriction of $\bar{a}$ satisfies \eqref{eq:ext-id-a}. On the other hand, $\bar{e}'$ obeys a favorable bound, but violates the Gauss equation outside $2K \setminus \overline{K}$. More precisely, \begin{equation*} (\bfD^{(\bar{a})})^{\ell} \bar{e}'_{\ell} = h \end{equation*} where $h = 0$ in $2K \setminus \overline{K}$ since $(\bar{a}, \bar{e}') = (a, e)$ there. Let $\chi_{out}$ be a smooth cutoff which equals zero in $K$ and $1$ outside $2K$, and let $h_{out} = \chi_{out} h$. Note that \begin{equation*} \nrm{h_{out}}_{\dot{H}^{\frac{d-6}{2}}} \lesssim \nrm{\partial \bar{e}'}_{\dot{H}^{\frac{d-6}{2}}}+\nrm{\bar{a}}_{\dot{H}^{\frac{d-2}{2}}}\nrm{\bar{e}'}_{\dot{H}^{\frac{d-4}{2}}} \lesssim_{A} E. \end{equation*} Hence, by Theorem~\ref{thm:gauss-0}, we find $d_{\ell}$ such that $(\bfD^{(\bar{a})})^{\ell} d_{\ell} = - h_{out}$, $d = 0$ in $2K$ and $\nrm{d}_{\dot{H}^{\frac{d-4}{2}}} \lesssim_{A} E$. The desired $\bar{e}$ is then given by the restriction of $\bar{e}' + d$ to $\BigR^{d} \setminus \overline{K}$. To conclude the proof, note that the higher regularity and local Lipschitz properties are obvious by construction. Finally, equivariance under constant gauge transformations can be ensured by fixing a particular construction, conjugating by elements of $\bfG$, and then averaging over $\bfG$. \qedhere \end{proof} Combined with Uhlenbeck's lemma (Theorem~\ref{thm:uhlenbeck-ball}), we may now prove the final excision-and-extension result (Theorem~\ref{thm:excise}). \begin{proof}[Proof of Theorem~\ref{thm:excise}] We only treat the case when $d \geq 4$ is even and $X = B_{R}$.
The other cases are simpler and thus are left to the reader (when $d$ is odd, Uhlenbeck's lemma is not needed, and when $X = \BigR^{d}$, the extension procedure is unnecessary). \pfstep{Step~1: Application of Uhlenbeck's lemma} As in the proof of Proposition~\ref{prop:goodrep-ball-key}, we first set $a_{r} = 0$ by Lemma~\ref{lem:Ar=0}, and extend $a$ outside $B_{R}$ by Lemma~\ref{lem:ext-simple}. Then the $L^{d}$-concentration radius of $a$ does not vary much, and Uhlenbeck's lemma (Theorem~\ref{thm:uhlenbeck-ball}) is applicable on any ball $B_{2 r}(x)$ with $r < 10 r_{c}$ and $x \in B_{R}$. We claim that \begin{equation*} \nrm{\tilde{a}}_{\dot{H}^{\frac{d-2}{2}}(B_{r}(x) \cap X)} \lesssim \nrm{\bfD^{(\frac{d-4}{2})} F[a]}_{L^{2}(B_{r}(x) \cap X)} + \nrm{F[a]}_{L^{\frac{d}{2}}(B_{r}(x) \cap X)}. \end{equation*} For interior balls (i.e., $B_{2r}(x) \cap \partial B_{R} = \emptyset$), this bound follows directly from Lemma~\ref{lem:div-curl-A-intr}. For boundary balls (i.e., $B_{2r}(x) \cap \partial B_{R} \neq \emptyset$), we obtain angular regularity (with respect to the center of $B_{R}$) by Lemma~\ref{lem:div-curl-A-tang}, then radial regularity by \eqref{eq:div-curl-rad}. We note that the implicit constant is controlled thanks to the smallness of $\epsilon_{\ast}$. Next, by the formula $O_{;x} = Ad(O) a - \tilde{a}$, we obtain \eqref{eq:excise-O}. Then it also follows that \begin{equation*} \nrm{\tilde{e}}_{\dot{H}^{\frac{d-4}{2}}(B_{r}(x) \cap X)} \lesssim \nrm{\bfD^{(\frac{d-4}{2})} e}_{L^{2}(B_{r}(x) \cap X)} + \nrm{e}_{L^{\frac{d}{2}}(B_{r}(x) \cap X)}. \end{equation*} \pfstep{Step~2: Application of Theorem~\ref{thm:ext-id}} We apply Theorem~\ref{thm:ext-id} to $(\tilde{a}, \tilde{e})$ and obtain an extended Yang--Mills initial data set outside the convex domain $K = B_{r}(x) \cap B_{R}$, which we still denote by $(\tilde{a}, \tilde{e})$.
We note that for domains of the form $K = B_{r}(x) \cap B_{R}$, we have the universal bound $R(K) \simeq r$ and $L(K) \simeq 1$. Therefore, by \eqref{eq:ext-id-a}, \eqref{eq:ext-id-e}, and the preceding bounds for $(\tilde{a}, \tilde{e})$ on $K$, we obtain \eqref{eq:excise-a}. \pfstep{Step~3: Completion of proof} It remains to prove Theorem~\ref{thm:excise}.(2). We begin by clarifying the ambiguity of the construction so far. In Step~1, the triple $(\tilde{a}, \tilde{e}, O) \restriction_{K}$ is determined up to a constant gauge transformation, as in Uhlenbeck's lemma (Theorem~\ref{thm:uhlenbeck-ball}). Since Theorem~\ref{thm:ext-id} is equivariant under such operations, the corresponding extensions $(\tilde{a}, \tilde{e})$ are also constant gauge transformations of each other. As a result, in order to prove (2), it suffices to show that we can enforce strong convergence of $(\tilde{a}^{n}, \tilde{e}^{n}, O^{n})$ to $(\tilde{a}, \tilde{e}, O)$ in $H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}} \times H^{\frac{d}{2}}(K)$, after passing to a subsequence and conjugating the sequence with a constant gauge transformation. Proceeding as in Theorem~\ref{thm:uhlenbeck-ball}.(2), we may first ensure convergence of a suitable subsequence up to a constant gauge transformation in $W^{1, \frac{d}{2}} \times L^{\frac{d}{2}} \times W^{2, \frac{d}{2}}(K)$. Then by Remark~\ref{rem:uhlenbeck-cont}, strong convergence in the desired topology (of the same sequence) may be proved. We omit the straightforward details. \qedhere \end{proof} \section{The local theory for the hyperbolic Yang--Mills equation} \label{sec:local} In this section, we consider the local-in-time theory for the hyperbolic Yang--Mills equation for data in an arbitrary topological class. \subsection{Gauge equivalence classes of connections} We start by verifying that the gauge equivalence class of $\mathcal H^{\frac{d-2}{2}}$ connections is closed, as asserted in Section~\ref{subsec:local}.
\begin{proposition}\label{p:closed-class} Let $A$ be an $\mathcal H^{\frac{d-2}{2}}_{loc}$ connection in $\mathcal O \subseteq \BigR^{1+d}$. Then $[A]$ is closed in the corresponding topology. \end{proposition} \begin{proof} Suppose that $O^{(n)}$ is a sequence of admissible gauge transformations so that the gauge equivalent connections $A^{(n)}$ given by \begin{equation} \label{eq:closed-An} A^{(n)} = Ad(O^{(n)}) A - O^{(n)}_{; t,x} \end{equation} converge to an $\mathcal H^{\frac{d-2}{2}}_{loc}$ connection $B$. Then we need to show that $A$ and $B$ are gauge equivalent. We first consider the corresponding gauge transformations $O^{(n)}$. By the relation \eqref{eq:closed-An}, it follows that these are uniformly bounded on compact sets. Hence, by compact Sobolev embeddings we obtain a limiting gauge transformation $O$, so that on a subsequence we have \begin{enumerate}[label=(\roman*)] \item $O$ satisfies the bound \[ \nabla O \in L^\infty H^{\frac{d-2}{2}}_{loc}. \] \item Convergence in weaker topologies: \[ \nabla O^{(n)} \to \nabla O \qquad \text{ in $L^p W^{\frac{d-2}{2},2-}_{loc}$,\ \ $p < \infty$.} \] \item Pointwise a.e.\ convergence: \[ O^{(n)}(t,x) \to O(t,x) \ \ \text{a.e.}, \qquad \nabla O^{(n)} (t,x) \to \nabla O(t,x) \ \ \text{a.e.} \] \end{enumerate} These properties allow us to pass to the limit and obtain \[ B = O A O^{-1} - O_{;t,x}, \] as well as the similar relation for the curvatures. It remains to improve the first property (i) above to continuity in time. This cannot come from weak convergence; instead, it is a consequence of the corresponding continuity property for $A$ and $B$. We start from property (ii), which guarantees that $O(t,x)$ is continuous in $t$ for almost every $x$. Since $A,B \in C_t L^{d}_{loc}$, so is $Ad(O) A$ and thus $\nabla O$. We now differentiate and repeat the process for $\partial \nabla O$ in $L^{\frac{d}{2}}$, and so on.
\end{proof} \subsection{Local theory at optimal regularity for dimensions \texorpdfstring{$d \geq 4$}{d>3}} We begin by recalling the temporal gauge small data global well-posedness result proved\footnote{In \cite{OTYM2}, this theorem is stated and proved in the most difficult case $d = 4$. Nevertheless, its proof may be extended to $d > 4$.} in \cite{OTYM2}. \begin{theorem}[{\cite[Theorem~1.17]{OTYM2}}] \label{thm:small-temp} If the $\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}$ norm of the initial data set $(a, e)$ is smaller than some universal constant $\epsilon_{\ast}$, then the corresponding solution $(A_{t,x}, \partial_{t} A_{t,x})$ in the temporal gauge $A_{0} = 0$ exists globally in $C_{t}(\BigR; \dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}})$, and obeys the a-priori bound \begin{equation*} \nrm{\nabla A_{x}}_{L^{\infty} \dot{H}^{\frac{d-4}{2}}} \lesssim \nrm{(a, e)}_{\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}}. \end{equation*} The solution is unique among the local-in-time limits of smooth solutions, and it depends continuously on the data $(a, e) \in \dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}$. \end{theorem} We now derive Theorem~\ref{thm:local-temp} from Theorems~\ref{thm:excise} and \ref{thm:small-temp}. \begin{proof}[Proof of Theorem~\ref{thm:local-temp}] The idea is to construct the local-in-spacetime solutions using Theorems~\ref{thm:excise} and \ref{thm:small-temp}, and then patch them together using finite speed of propagation (i.e., local-in-spacetime uniqueness) in the temporal gauge. \pfstep{Step~1: Construction of local-in-spacetime solutions} Consider a ball $B_{r}(x)$ with $r < 10 r_{c}$ and $x \in X$; we introduce the abbreviation $K = B_{r}(x) \cap X$.
Let $(\tilde{a}, \tilde{e})$ and $O$ be the global Yang--Mills initial data and the gauge transformation associated with $(a, e)$ by Theorem~\ref{thm:excise}.(1); recall that $(a, e)$ is gauge equivalent to $(\tilde{a}, \tilde{e})$ via $O$ on $K$. Choosing $\epsilon_{\ast}$ sufficiently small, Theorem~\ref{thm:small-temp} produces a unique $C_{t} \mathcal H^{\frac{d-2}{2}}$ temporal-gauge solution $\tilde{A}$ corresponding to $(\tilde{a}, \tilde{e})$. We define $A$ on $\mathcal D(K)$ by \begin{equation*} A_{\mu}(t,x) = Ad(O^{-1}(x)) \tilde{A}_{\mu}(t,x) - O^{-1}_{; \mu}(x). \end{equation*} Note that $(\tilde{a}, \tilde{e}, O)$ in Theorem~\ref{thm:excise}.(1) is determined up to a constant gauge transformation, but any choice leads to the same solution $A$. By \eqref{eq:excise-a}, \eqref{eq:excise-O} and Theorem~\ref{thm:small-temp}, it follows that \begin{equation} \label{eq:local-temp-A} \nrm{\nabla A_{x}}_{L^{\infty} \dot{H}^{\frac{d-4}{2}}(\mathcal D(K))} \lesssim \nrm{(a, e)}_{\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}(K)}. \end{equation} \pfstep{Step~2: Continuous dependence and uniqueness} We claim that the mapping \begin{equation*} \mathcal H^{\frac{d-2}{2}}(K) \ni (a, e) \mapsto (A, \partial_{t} A) \in C_{t} (H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}}) (\mathcal D(K)) \end{equation*} is continuous. Indeed, for the purpose of contradiction, suppose that there is a sequence of $\mathcal H^{\frac{d-2}{2}}$ Yang--Mills initial data sets on $K$ such that $(a^{n}, e^{n}) \to (a, e)$, while $(A^{n}, \partial_{t} A^{n}) \not \to (A, \partial_{t} A)$ in $C_{t} (H^{\frac{d-2}{2}} \times H^{\frac{d-4}{2}})(\mathcal D(K))$. By passing to a subsequence, we may assume that no further subsequence of $(A^{n}, \partial_{t} A^{n})$ converges to $(A, \partial_{t} A)$ in the same topology.
However, by Theorem~\ref{thm:excise}.(2) and the continuity statement in Theorem~\ref{thm:small-temp}, there exists a subsequence for which $(\tilde{A}^{n}, \partial_{t} \tilde{A}^{n}) \to (\tilde{A}, \partial_{t} \tilde{A})$ in $C_{t} (\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}})$. By the convergence $O^{n} \to O$ and $(O^{n})^{-1} \to O^{-1}$ in $\mathcal G^{\frac{d}{2}, 2}(K)$, it follows that $(A^{n}, \partial_{t} A^{n}) \to (A, \partial_{t} A)$ in the above topology, which is a contradiction. From continuous dependence and persistence of regularity in Theorems~\ref{thm:excise} and \ref{thm:small-temp}, it follows that $(A, \partial_{t}A)$ defined in Step~1 is approximated by smooth (temporal gauge) solutions, i.e., it is a solution to \eqref{eq:ym} in the sense of Definition~\ref{def:ym-sol-rough}. Therefore, uniqueness of the solution on $\mathcal D(K)$ in the sense of Definition~\ref{def:ym-sol-rough} in the temporal gauge follows. \pfstep{Step~3: Conclusion of the proof} Consider now a family of balls $\set{B_{2r_{c}}(x)}_{x \in X}$, and the corresponding family of temporal gauge solutions in each $\mathcal D(B_{2r_{c}}(x) \cap X)$. By the local-in-spacetime uniqueness that we just proved, these solutions coincide on the intersections, and therefore define a unique temporal gauge solution (in the sense of Definition~\ref{def:ym-sol-rough}) in $\mathcal D_{[0, r_{c})}(X) \subseteq \cup_{x \in X} \mathcal D(B_{2r_{c}}(x) \cap X)$. Properties (1) and (2) claimed in Theorem~\ref{thm:local-temp} follow from the construction. For the a-priori bound in (3), we repeat the above steps for the data restricted to uniformly spaced balls $B$ of radius $2 r_{c}$ that cover $B_{R'}(x)$. By local-in-spacetime uniqueness, the result coincides with $A$ in $\mathcal D_{[0, r_{c})} (B_{R'}(x))$. Moreover, \eqref{eq:local-temp-bnd} follows by summing up the a-priori bounds in Theorem~\ref{thm:small-temp} for the local-in-spacetime solutions.
\qedhere \end{proof} Next, we also show that all $\mathcal H^{\frac{d-2}{2}}_{loc}$ solutions (in the sense of Definition~\ref{def:ym-sol-rough}) are gauge equivalent to the corresponding temporal gauge solutions. \begin{proof}[Proof of Theorem~\ref{thm:equiv-temp}] Let $A^{(n)}$ be a sequence of smooth solutions which converge to $A$ in the norm $C_{t} (H^{\frac{d-2}{2}}_{loc} \times H^{\frac{d-4}{2}}_{loc})$. Let $\tilde{A}^{(n)}$, respectively $\tilde{A}$, be the corresponding temporal gauge solutions. We know that $\tilde{A}^{(n)}$ and $A^{(n)}$ are gauge equivalent; denote by $O^{(n)}$ the corresponding gauge transformations. We know that in the $H^1$ topology \[ Ad(O^{(n)}) \tilde{A}^{(n)} - O^{(n)}_{;t,x} = A^{(n)} \to A, \] but also that \[ \tilde{A}^{(n)} \to \tilde{A}. \] Thus, the gauge transformations $O^{(n)}$ satisfy uniform bounds locally. Then it follows that (up to a subsequence) \[ Ad(O^{(n)}) \tilde{A} - O^{(n)}_{;t,x} \to A . \] But now we can use Proposition~\ref{p:closed-class} to conclude that $\tilde{A}$ and $A$ are gauge equivalent. \end{proof} Continuity of $A_{x}(t)$ in $\mathcal H^{\frac{d-2}{2}}_{loc, r_{c}}$, as stated in Theorem~\ref{thm:local-temp}, is in general insufficient to conclude invariance of the topological class. However, combined with finite speed of propagation and Proposition~\ref{prop:top-class-outer}, we may nevertheless prove that the topological class of $A_{x}(t)$ is conserved under the hyperbolic Yang--Mills evolution. \begin{proof}[Proof of Proposition~\ref{prop:top-class-ym}] Thanks to Theorem~\ref{thm:equiv-temp}, it suffices to consider a temporal gauge solution $A_{x}(t)$. By a standard continuous induction in $t$ (as well as time reversibility of \eqref{eq:ym}), it suffices to show that $[A_{x}(t)] = [A_{x}(0)]$ for all $t > 0$ sufficiently close to $0$.
Since $\mathcal E_{\BigR^{d}}^{\frac{d-2}{2}}(a, e) < \infty$, there exists $R > 0$ such that $\mathcal E_{\BigR^{d} \setminus \overline{B_{R}}}^{\frac{d-2}{2}}(a, e) \ll \epsilon_{\ast}$. By Uhlenbeck's lemma (when $d$ is even) and the local-in-spacetime a-priori estimate \eqref{eq:local-temp-bnd}, it follows that \begin{equation*} \sup_{t \in [0, r_{c})}\mathcal E_{\BigR^{d} \setminus \overline{B_{R + t}}}^{\frac{d-2}{2}} (A_{x}(t), \partial_{t} A_{x}(t)) \lesssim \mathcal E_{\BigR^{d} \setminus \overline{B_{R}}}^{\frac{d-2}{2}}(a, e) \ll \epsilon_{\ast}. \end{equation*} In particular, choosing $R$ large enough, we may ensure that \begin{equation*} \nrm{F[A_{x}(t)]}_{L^{\frac{d}{2}}(\BigR^{d} \setminus \overline{B_{2R}})} < \epsilon_{\ast}, \end{equation*} where $\epsilon_{\ast}$ is as in Proposition~\ref{prop:top-class-outer}. For $t > 0$ sufficiently close to $0$, by the continuity property \eqref{eq:local-temp-cont}, we may also ensure that \begin{equation*} \nrm{A_{x}(t) - A_{x}(0)}_{L^{d}(B_{2R})} < \epsilon_{\ast}. \end{equation*} By Proposition~\ref{prop:top-class-outer}, it follows that $[A_{x}(t)] = [A_{x}(0)]$. \qedhere \end{proof} Finally, we turn to the proof of Theorem~\ref{thm:imp-reg}. The main ingredient is the caloric gauge small data well-posedness theorem from \cite{OTYM2}: \begin{theorem}[{\cite[Corollary~1.13]{OTYM2}}] \label{thm:small-cal} Let $(a, e)$ be a Yang--Mills initial data set with the property that its $\dot{H}^{\frac{d-2}{2}} \times \dot{H}^{\frac{d-4}{2}}$ norm is smaller than some universal constant $\epsilon^{2}_{\ast}$. Then there exists a gauge transformation $O \in \dot{H}^{\frac{d}{2}}(\BigR^{d}; \bfG)$ of $(a, e)$ to a caloric gauge data set $(\tilde{a}, \tilde{e})$, which is unique up to a constant gauge transformation.
Moreover, the corresponding caloric gauge solution $(\tilde{A}_{t,x}, \partial_{t} \tilde{A}_{t,x})$ exists globally in time, and obeys the a-priori bound \begin{equation} \label{eq:small-cal-S} \nrm{\tilde{A}_{x}}_{S^{\frac{d-2}{2}}} \lesssim \nrm{(a, e)}_{\dot{H}^{\frac{d-2}{2}}\times\dot{H}^{\frac{d-4}{2}}}. \end{equation} \end{theorem} We refer the reader to \cite{OTYM1, OTYM2} for the precise definition of the caloric gauge and the $S^{\frac{d-2}{2}}$ norm. For our purposes, all we need to know is that \begin{equation} \label{eq:small-cal-H} \nrm{\nabla \tilde{A}_{x}}_{L^{\infty} \dot{H}^{\frac{d-4}{2}}} + \nrm{\tilde{A}_{x}}_{L^{2} L^{2d}} \lesssim \nrm{\tilde{A}_{x}}_{S^{\frac{d-2}{2}}} \end{equation} and that the a-priori bound of the $S^{\frac{d-2}{2}}$ norm implies the following additional control of the solution $\tilde{A}_{t,x}$ \cite[Theorem~5.1]{OTYM2}: \begin{equation} \label{eq:small-cal-ell} \nrm{\Box \tilde{A}_{x}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}} + \nrm{\partial^{\ell} \tilde{A}_{\ell}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-3}{2}}} + \nrm{\nabla \tilde{A}_{0}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-3}{2}}} \lesssim_{\nrm{\tilde{A}_{x}}_{S^{\frac{d-2}{2}}}} \nrm{\tilde{A}_{x}}_{S^{\frac{d-2}{2}}}^{2}. \end{equation} Combined with the initial data surgery technique (Theorem~\ref{thm:excise}) and the patching procedure in Section~\ref{subsec:patching}, we may now prove Theorem~\ref{thm:imp-reg}. \begin{proof}[Proof of Theorem~\ref{thm:imp-reg}] On the one hand, we have a global $\mathcal H^{\frac{d-2}{2}}_{loc}$ solution $A$ in $\mathcal D_{[0, r_{c})}(B_{R})$ by Theorem~\ref{thm:local-temp}. On the other hand, we can cover $[0, r_{c}) \times B_{R - 4 r_{c}}$ with cylinders $[0, r_{c}) \times B_{r_{c}}(x_{\alpha})$, each of which is contained in a truncated cone $\mathcal D_{[0, r_{c})}(B_{4 r_{c}}(x_{\alpha}))$ whose base is contained in $B_{R}$, i.e., $B_{4 r_{c}}(x_{\alpha}) \subseteq B_{R}$.
In each $\mathcal D_{[0, r_{c})}(B_{4 r_{c}}(x_{\alpha}))$, by Theorem~\ref{thm:small-cal}, we have a gauge-equivalent caloric solution $\tilde{A}_{(\alpha)}$ satisfying \begin{equation} \label{eq:imp-reg-tA} \nrm{\nabla \tilde{A}_{(\alpha) x}}_{L^{\infty} \dot{H}^{\frac{d-4}{2}}} + \nrm{\Box \tilde{A}_{(\alpha) x}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}} + \nrm{\partial^{\ell} \tilde{A}_{(\alpha) \ell}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-3}{2}}} + \nrm{\nabla \tilde{A}_{(\alpha) 0}}_{\ell^{1} L^{2} \dot{H}^{\frac{d-3}{2}}} \lesssim \epsilon_{\ast}. \end{equation} In the remainder of the proof, we restrict each solution $\tilde{A}_{(\alpha)}$ to the cylinder $[0, r_{c}) \times B_{r_{c}}(x_{\alpha})$. We need to compute the regularity of the gauge transformation $O_{(\alpha \beta)}$ between two such solutions $\tilde{A}_{(\alpha)}$ and $\tilde{A}_{(\beta)}$. We build up the regularity of $O_{(\alpha \beta)}$ in several stages, based on the formula \begin{equation*} \tilde{A}_{(\alpha)} = Ad(O_{(\alpha \beta)}) \tilde{A}_{(\beta)} - O_{(\alpha \beta); t,x} \quad \hbox{ in } [0, r_{c}) \times (B_{r_{c}}(x_{\alpha}) \cap B_{r_{c}}(x_{\beta})). \end{equation*} In what follows, all norms are over $[0, r_{c}) \times (B_{r_{c}}(x_{\alpha}) \cap B_{r_{c}}(x_{\beta}))$, and we omit the subscripts $(\alpha)$, $(\beta)$ and $(\alpha \beta)$. \begin{enumerate}[label=(\roman*)] \item {\it $L^p$ regularity.} It immediately follows that \[ O_{;x}, O_{;t} \in L^\infty L^{d} \cap L^2 L^{2d}. \] Reiterating this, we also obtain \[ O_{;x},O_{;t} \in L^\infty \dot H^{\frac{d-2}{2}}. \] \item{\it $\ell^1$ Besov structure for $O_{;x}$.} Here we obtain \[ O_{;x} \in \ell^1 (L^\infty \dot H^\frac{d-2}{2} \cap L^2 \dot H^{\frac{d-1}{2}}), \] which follows from the div-curl system\footnote{In order to appeal to interior regularity, we may in fact start with local data on slightly larger balls $B_{2 r_{c}}(x_{\alpha})$, then shrink their radii to $r_{c}$ at this stage. We omit this minor technical detail.} for $O_{;x}$ (cf. Lemma~\ref{lem:div-curl-O}). \item {\it $\ell^1$ Besov structure for $O_{;t}$.} Next, we obtain \[ O_{;t} \in \ell^1 (L^\infty \dot H^\frac{d-2}{2} \cap L^2 \dot H^{\frac{d-1}{2}}), \] which is obtained by differentiating the $O_{;t}$ relation in $x$. Differentiating instead in $t$, we also obtain \[ \partial_{t} O_{;t} \in \ell^1 (L^\infty \dot H^\frac{d-4}{2} \cap L^2 \dot H^{\frac{d-3}{2}}). \] \item {\it $\Box O_{;x} \in \ell^1 L^2 \dot H^{\frac{d-5}{2}}$.} This requires a similar bound for $[O_{;\alpha}, \partial^\alpha \tilde{A}]$ and for $[\partial^\alpha O_{;\alpha},\tilde{A}]$. Both of them follow from the previous bounds. \end{enumerate} To summarize, we have the regularity properties \begin{equation} \label{eq:imp-reg-O} O_{;x} \in \ell^1 L^2 \dot H^{\frac{d-1}{2}} , \qquad \partial_{t}^{2} O_{;x} \in \ell^1 L^2 \dot H^{\frac{d-5}{2}}, \qquad \nabla O_{;t} \in \ell^1 L^2 \dot H^{\frac{d-3}{2}}, \end{equation} where $\partial_{t}^{2} O_{;x} \in L^{2} \dot{H}^{\frac{d-5}{2}}$ follows by combining (ii) and (iii). These in particular imply that each $O$ is continuous, and is close to a constant in $L^\infty$. Hence, the operations of pointwise multiplication, inversion, adjoint action on $\mathfrak{g}$, etc.\ are all well-behaved for $O$ (in contrast to the general situation in Section~\ref{subsec:rough-gt}). The next step is to patch up the local gauges. Taking only the balls $B_{r_{c}}(x_{\alpha})$ which cover $B_{R-4 r_{c}}$ and which are uniformly separated, Scenario~(2) in Section~\ref{subsec:patching} is applicable to each fixed time $\set{t} \times B_{R-4 r_{c}}$.
Note that the diffeomorphisms and the smooth cutoffs involved in the patching procedure in Scenario~(1) in Section~\ref{subsec:patching} all depend trivially on $t$. It follows that on each $[0, r_{c}) \times B'_{\alpha}$, the gauge transformations $P_{(\alpha)}$ obey \begin{equation} \label{eq:imp-reg-P} P_{;x} \in \ell^1 L^2 \dot H^{\frac{d-1}{2}} , \qquad \partial_{t}^{2} P_{;x} \in \ell^1 L^2 \dot H^{\frac{d-5}{2}}, \qquad \nabla P_{;t} \in \ell^1 L^2 \dot H^{\frac{d-3}{2}}, \end{equation} where the bound depends only on $R / r_{c}$ and $\epsilon_{\ast}$. It remains to verify the bound \eqref{eq:imp-reg} for the global gauge potential $A$, which is a consequence of \eqref{eq:imp-reg-tA}, \eqref{eq:imp-reg-O} and the formula \eqref{eq:patching-A-ball} (it is easily extended to the $0$-th component). Here, we only sketch the proof of $\Box A_{x} \in \ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}$, which is the trickiest, and leave the remaining cases to the reader. Recalling the formula \eqref{eq:patching-A-ball}, we have \begin{equation*} \Box A_{x} = \sum \chi_{\alpha} \left( Ad(P_{(\alpha)}) \Box \tilde{A}_{(\alpha) x} - \Box P_{;x} + h.o.t.\right). \end{equation*} The higher order terms, whose precise expression is omitted, are estimated by \eqref{eq:imp-reg-P} and \eqref{eq:imp-reg-tA}. Moreover, $\Box P_{;x} = - \partial_{t}^{2} P_{;x} + \Delta P_{;x} \in \ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}([0, r_{c}) \times B'_{\alpha})$ by \eqref{eq:imp-reg-P}. Thanks to \eqref{eq:imp-reg-P}, $Ad(P_{(\alpha)})$ may be easily removed in $\ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}([0,r_{c}) \times B'_{\alpha})$. Finally, $\Box \tilde{A}_{(\alpha) x} \in \ell^{1} L^{2} \dot{H}^{\frac{d-5}{2}}([0, r_{c}) \times B'_{\alpha})$ by \eqref{eq:imp-reg-tA}. \qedhere \end{proof} \subsection{Local theory in dimension \texorpdfstring{$d = 3$}{d=3}} Here we sketch the proofs of Theorems~\ref{thm:local-temp-sub} and \ref{thm:KM}.
The key result is the following subcritical initial data surgery result (cf. Theorems~\ref{thm:ext-id} and \ref{thm:excise}): \begin{theorem} \label{thm:excise-3} Let $\frac{1}{2} < \sigma < \frac{5}{2}$, and let $(a, e)$ be an $\mathcal H^{\sigma}$ Yang--Mills initial data set on a convex domain $K$ in $\BigR^{3}$ satisfying \begin{equation} \label{eq:excise-3-hyp} \nrm{a}_{\dot{H}^{\frac{1}{2}}(K)} \leq \epsilon. \end{equation} If $\epsilon > 0$ is sufficiently small depending on $L(K)$, then there exists an $\mathcal H^{\sigma}$ Yang--Mills initial data set $(\bar{a}, \bar{e})$ in $\BigR^{3}$ that coincides with $(a, e)$ on $K$ and obeys \begin{align} \nrm{\bar{a}}_{\dot{H}^{\sigma} \cap R(K)^{-\sigma} L^{2}} + \nrm{\bar{e}}_{\dot{H}^{\sigma-1} + R(K)^{\sigma-1} L^{2}} \lesssim_{L(K)} \nrm{a}_{\dot{H}^{\sigma} \cap R(K)^{-\sigma} L^{2} (K)} + \nrm{e}_{\dot{H}^{\sigma-1} + R(K)^{\sigma-1} L^{2}(K)}. \label{eq:excise-3} \end{align} It can be arranged so that the association $(a, e) \mapsto (\bar{a}, \bar{e})$ is equivariant under constant gauge transformations, and so that $(a, e) \mapsto (\bar{a}, \bar{e})$ is locally Lipschitz continuous. Moreover, if $(a, e)$ is smooth, then so is $(\bar{a}, \bar{e})$. \end{theorem} \begin{proof} By rescaling, we set $R(K) = 1$ so that $\dot{H}^{\sigma} \cap R(K)^{-\sigma} L^{2} \simeq H^{\sigma}$ and $\dot{H}^{\sigma-1} + R(K)^{\sigma-1} L^{2} \simeq H^{\sigma-1}$. As in the proof of Theorem~\ref{thm:ext-id}, we apply the universal extension operator $\mathfrak{E}$ to $(a, e)$ to first obtain $(\bar{a}, \bar{e}') \in H^{\sigma} \times H^{\sigma-1}(\BigR^{3})$. Then the error for the Gauss equation $h = (\bfD^{(\bar{a})})^{\ell} \bar{e}'$ is supported outside $K$ and obeys $\nrm{h}_{H^{\sigma-2}} \lesssim_{\nrm{\bar{a}}_{\dot{H}^{\frac{1}{2}}}} \nrm{e}_{H^{\sigma-1}(K)}$.
Since \begin{equation*} \nrm{\bar{a}}_{\dot{H}^{\frac{1}{2}}} \lesssim_{L(K)} \nrm{a}_{\dot{H}^{\frac{1}{2}}(K)} \leq \epsilon, \end{equation*} Proposition~\ref{prop:gauss-small} is applicable if $\epsilon > 0$ is chosen sufficiently small. Thus $d = - T_{\bar{a}} h$ satisfies \begin{equation*} (\bfD^{(\bar{a})})^{\ell} d_{\ell} = - h, \qquad \nrm{d}_{H^{\sigma-1}} \lesssim \nrm{h}_{H^{\sigma-2}} \lesssim \nrm{\bar{e}'}_{H^{\sigma-1}}, \end{equation*} and vanishes in $K$. It follows that $(\bar{a}, \bar{e} = \bar{e}' + d)$ is a Yang--Mills initial data set obeying the desired bound \eqref{eq:excise-3}. The higher regularity and local Lipschitz properties are obvious by construction. Finally, equivariance under constant gauge transformations can be ensured by fixing a particular construction, conjugating by elements of $\bfG$, and then averaging. \end{proof} Next, we recall Tao's small data local well-posedness result in the temporal gauge. \begin{theorem} [{\cite{TaoYM}}]\label{thm:small-temp-sub} Let $\sigma > \frac{3}{4}$. If the $\mathcal H^{\sigma}$ norm of the initial data set $(a, e)$ is sufficiently small, then the corresponding solution $(A_{t,x}, \partial_{t} A_{t,x})$ in the temporal gauge $A_{0} = 0$ exists in $C_{t}((-1, 1); H^{\sigma} \times H^{\sigma-1})$, and obeys the a priori bound \begin{equation*} \nrm{(A_{x}, \partial_{t} A_{x})}_{L^{\infty} (H^{\sigma} \times H^{\sigma-1})} \lesssim \nrm{(a, e)}_{H^{\sigma} \times H^{\sigma-1}}. \end{equation*} The solution is unique among the local-in-time limits of smooth solutions, and it depends in a locally Lipschitz manner on the data $(a, e) \in H^{\sigma} \times H^{\sigma-1}$. \end{theorem} Now we are ready to prove Theorem~\ref{thm:local-temp-sub}.
\begin{proof}[Sketch of Proof of Theorem~\ref{thm:local-temp-sub}] As in the proof of Theorem~\ref{thm:local-temp}, the idea is to patch together the small local-in-spacetime solutions constructed using Theorems~\ref{thm:excise-3} and \ref{thm:small-temp-sub} in the temporal gauge. It suffices to consider $\frac{3}{4} < \sigma < \frac{5}{2}$. Observe that, by subcriticality, the $\mathcal H^{\sigma}_{loc}$ norm obeys the following one-sided scaling property: \begin{equation*} \nrm{(a^{(\lambda)}, e^{(\lambda)})}_{\mathcal H^{\sigma}_{loc}} \lesssim \lambda^{\sigma - \frac{1}{2}} \nrm{(a, e)}_{\mathcal H^{\sigma}_{loc}} \quad \hbox{ for } \lambda \leq 1. \end{equation*} Here $(a^{(\lambda)}, e^{(\lambda)})(x) = (\lambda a, \lambda^{2} e)(\lambda x)$ is the invariant scaling. Choosing \begin{equation*} \lambda \simeq \left( \epsilon_{\ast} \nrm{(a, e)}_{\mathcal H^{\sigma}_{loc}}^{-1} \right)^{\frac{2}{\sigma-1}} , \end{equation*} we may ensure that $\nrm{(a^{(\lambda)}, e^{(\lambda)})}_{\mathcal H^{\sigma}_{loc}} \ll \epsilon_{\ast}$. Choosing $\epsilon_{\ast} > 0$ sufficiently small, we may apply Theorem~\ref{thm:excise-3} to each $(a^{(\lambda)}, e^{(\lambda)}) \restriction_{B_{2}(x)}$ to find an extension $(\bar{a}^{(\lambda)}, \bar{e}^{(\lambda)})$, and then apply Theorem~\ref{thm:small-temp-sub} to this global-in-space small data to obtain a temporal gauge solution $A^{(\lambda)}$ on the time interval $(-1, 1)$. Proceeding as in the proof of Theorem~\ref{thm:local-temp}, we obtain a well-posed temporal gauge solution for $(a^{(\lambda)}, e^{(\lambda)})$ on $(-1, 1)$. By rescaling back, the theorem follows with an explicit lower bound $T \gtrsim \nrm{(a, e)}_{\mathcal H^{\sigma}_{loc}}^{-\frac{2}{\sigma-1}}$. \end{proof} Finally, Theorem~\ref{thm:KM} is an easy corollary of Uhlenbeck's lemma (at subcritical regularity) and Theorem~\ref{thm:local-temp-sub}.
\begin{proof}[Sketch of Proof of Theorem~\ref{thm:KM}] By conservation of energy, it suffices to prove that the temporal gauge solution given by Theorem~\ref{thm:local-temp-sub} exists on an interval of length $T(\nrm{(F[a], e)}_{L^{2}_{loc}})$, where $\nrm{\cdot}_{L^{2}_{loc}} = \sup_{x \in \BigR^{3}} \nrm{\cdot}_{L^{2}(B_{1}(x))}$. As before, we have the one-sided scaling property \begin{equation*} \nrm{(F[a^{(\lambda)}], e^{(\lambda)})}_{L^{2}_{loc}} \lesssim \lambda^{\frac{1}{2}} \nrm{(F[a], e)}_{L^{2}_{loc}} \quad \hbox{ for } \lambda \leq 1. \end{equation*} Choosing $\lambda \simeq \epsilon_{\ast} \nrm{(F[a], e)}_{L^{2}_{loc}}^{-2}$, we may ensure that the LHS is $\lesssim \epsilon_{\ast}$. In what follows, we work with the rescaled data $(a^{(\lambda)}, e^{(\lambda)})$; we omit the superscript $(\lambda)$ for simplicity. For the rescaled data, we wish to show that the corresponding temporal gauge solution given by Theorem~\ref{thm:local-temp-sub} exists on the unit time interval $[0, 1)$. Fix a unit ball $B = B_{1}(x_{0})$. Applying Uhlenbeck's lemma \cite[Theorem~1.3]{MR648356} (which is possible if we take $\epsilon_{\ast}$ sufficiently small), we find $O \in \mathcal G^{2, 2}(2B)$ such that \begin{equation*} \nrm{O}_{H^{2}(B)} \lesssim \nrm{a}_{H^{1}(2B)}, \end{equation*} and $(\tilde{a}, \tilde{e}) = (Ad(O) a - O_{;x}, Ad(O) e)$ obeys \begin{equation*} \nrm{(\tilde{a}, \tilde{e})}_{H^{1} \times L^{2}(2B)} \lesssim \nrm{(F[a], e)}_{L^{2}(2B)} \lesssim \epsilon_{\ast}. \end{equation*} By Theorem~\ref{thm:small-temp-sub} (taking $\epsilon_{\ast}$ even smaller if necessary), we find a temporal gauge solution $\tilde{A}$ with data $(\tilde{a}, \tilde{e})$ on $(-1, 1)$. Applying the $H^{2}(2B)$ gauge transformation $O^{-1}$, we obtain a temporal gauge solution $A = Ad(O^{-1}) \tilde{A} + O^{-1} O_{;t, x}$ in $\mathcal D_{[0, 1)}(2B)$.
It can be easily verified that this solution is the limit of smooth temporal gauge solutions; hence it coincides with the solution given by Theorem~\ref{thm:local-temp-sub} in $\mathcal D_{[0, 1)}(2B)$. Since this procedure can be applied to any unit ball $B \subseteq \BigR^{3}$, it follows that the temporal gauge solution exists on the time interval $[0, 1)$, as desired. \qedhere \end{proof} \section{{Harmonic Yang--Mills connections with compact structure group}} \label{sec:threshold} The goal of this section is to prove Theorem~\ref{thm:thr}. We proceed in two steps, in increasing generality. \pfstep{Step~1: $\bfG$ is simple, compact and simply connected} Assume that $\bfG$ is compact and simply connected, and also that $\frkg$ is \emph{simple}, i.e., it is nonabelian ($[\frkg, \frkg] \neq 0$) and there is no nonzero proper ideal. As we will see, this case turns out to be completely analogous to the model case $\bfG = SU(2)$. We need some algebraic preliminaries on compact simple Lie algebras over $\BigR$. We only sketch the part of the theory that is needed for us; for a more comprehensive treatment, see \cite[Chapters~II and IV]{Knapp}. A maximal abelian subalgebra $\mathfrak{h}$ of $\frkg$ is called a \emph{Cartan subalgebra}. Given such a $\mathfrak{h}$, consider $\set{ad(H) : \frkg \to \frkg}_{H \in \mathfrak{h}}$, which is a family of commuting anti-self-adjoint operators. Thus, viewed as linear operators on the complexification $\frkg_{\BigC} = \frkg \otimes_{\BigR} \BigC$, they are simultaneously diagonalizable with purely imaginary (or zero) eigenvalues. A nonzero linear functional $\alpha \in \mathfrak{h}^{\ast}$ is called a \emph{root}\footnote{A more standard definition (used in \cite{Knapp}) is to define roots as $\alpha \in \mathfrak{h}_{\BigC}^{\ast}$ such that $\cap_{H \in \mathfrak{h}_{\BigC}} ker(ad(H) - \alpha(H)) \neq \set{0}$. 
This differs from our definition by a factor of $i$.} if the simultaneous eigenspace (called the \emph{root space}) \begin{equation*} \frkg_{\BigC, \alpha} = \set{A \in \frkg_{\BigC} : ad(H) A = i \alpha(H) A, \ \forall H \in \mathfrak{h}} \end{equation*} is nonzero. We write $\Delta$ for the set of all roots. By the preceding discussion, we see that \begin{equation*} \frkg_{\BigC} = \mathfrak{h}_{\BigC} \oplus \bigoplus_{\alpha \in \Delta} \frkg_{\BigC, \alpha} \end{equation*} as vector spaces. In particular, $\Delta \neq \emptyset$; in fact, it spans $\mathfrak{h}^{\ast}$. It is a fundamental result of Cartan that all Cartan subalgebras are related to each other by an $Ad(O)$-action; thus $\Delta$ is independent of the choice of $\mathfrak{h}$. To each $\alpha \in \Delta$, we use the inner product $\brk{\cdot, \cdot}$ to associate $H_{\alpha} \in \mathfrak{h}$ such that \begin{equation*} \alpha(H) = \brk{H_{\alpha}, H}, \qquad H \in \mathfrak{h}, \end{equation*} and define the induced inner product on $\Delta$ by $\brk{\alpha, \beta} = \brk{H_{\alpha}, H_{\beta}}$. The roots with the largest norm are called the \emph{highest roots}. Clearly, if $\alpha \in \Delta$, then $- \alpha \in \Delta$ with $\frkg_{\BigC, -\alpha} = \overline{\frkg_{\BigC, \alpha}}$. For any $E_{\alpha} \in \frkg_{\BigC, \alpha}$, by definition, \begin{equation*} [H_{\alpha}, E_{\alpha}] = i \alpha(H_{\alpha}) E_{\alpha} = i \brk{\alpha, \alpha} E_{\alpha}, \qquad [H_{\alpha}, \overline{E_{\alpha}}] = - i \alpha(H_{\alpha}) \overline{E_{\alpha}} = - i \brk{\alpha, \alpha} \overline{E_{\alpha}}.
\end{equation*} Moreover, $\dim_{\BigC} \frkg_{\BigC, \alpha} = 1$ and for any $E_{\alpha} \in \frkg_{\BigC ,\alpha}$, we have \begin{equation*} \brk{E_{\alpha}, E_{\alpha}}= 0, \qquad [E_{\alpha}, \overline{E_{\alpha}}] = i \brk{E_{\alpha}, \overline{E_{\alpha}}} H_{\alpha}, \end{equation*} where $\brk{\cdot, \cdot}$ is extended to $\frkg_{\BigC}$ in a $\BigC$-bilinear fashion. For the proofs of the last properties, see \cite[Section~II.4]{Knapp}. Every root generates an embedding of $su(2)$ into $\frkg$. More precisely, given a root $\alpha \in \Delta$, normalize $E_{\alpha}$ so that \begin{equation*} \brk{E_{\alpha}, \overline{E_{\alpha}}} = \frac{2}{\brk{\alpha, \alpha}}, \end{equation*} and consider ${\bf i}_{\alpha}, {\bf j}_{\alpha}, {\bf k}_{\alpha} \in \frkg$ defined by \begin{equation*} {\bf i}_{\alpha} = (E_{\alpha} + \overline{E_{\alpha}}), \quad {\bf j}_{\alpha} = i (E_{\alpha} - \overline{E_{\alpha}}), \quad {\bf k}_{\alpha} = \frac{2}{\brk{\alpha, \alpha}} H_{\alpha}. \end{equation*} Then it is straightforward to verify that $\set{{\bf i}_{\alpha}, {\bf j}_{\alpha}, {\bf k}_{\alpha}}$ generate an $su(2)$-subalgebra, i.e., \begin{equation} \label{eq:lie-alg-su2} [{\bf i}_{\alpha}, {\bf j}_{\alpha}] = 2 {\bf k}_{\alpha}, \qquad [{\bf j}_{\alpha}, {\bf k}_{\alpha}] = 2 {\bf i}_{\alpha}, \qquad [{\bf k}_{\alpha}, {\bf i}_{\alpha}] = 2 {\bf j}_{\alpha}. \end{equation} Indeed, \eqref{eq:lie-alg-su2} are precisely the Lie bracket relations satisfied by the following standard basis of $su(2)$: \begin{equation*} {\bf i} = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right), \qquad {\bf j} = \left(\begin{array}{cc} 0 & i \\ i & 0 \end{array}\right), \qquad {\bf k} = \left(\begin{array}{cc} i & 0 \\ 0 & -i \end{array}\right).
\end{equation*} Note also that ${\bf i}_{\alpha}, {\bf j}_{\alpha}, {\bf k}_{\alpha}$ obey \begin{equation} \label{eq:lie-norm-alp} \abs{{\bf i}_{\alpha}}^{2} = \abs{{\bf j}_{\alpha}}^{2} = \abs{{\bf k}_{\alpha}}^{2} = \frac{4}{\brk{\alpha, \alpha}}. \end{equation} By simplicity, all symmetric $Ad$-invariant bilinear forms on $\frkg$ (of which $\brk{\cdot, \cdot}$ is an example) are constant multiples of each other \cite[Corollary~4.9]{Knapp}. Multiplying $\brk{\cdot, \cdot}$ by a suitable constant, which does not change the conclusion of Theorem~\ref{thm:thr}, we may assume that: \begin{equation} \label{eq:lie-norm} \hbox{The highest roots in $\frkg$ have $\brk{\alpha, \alpha} = 2$.} \end{equation} When $\bfG = SU(n)$, this amounts to taking $\brk{A, B} = -\mathrm{tr}\,(AB)$. We now recall the following well-known result of Bott \cite{MR0087035} concerning the third homotopy group $\pi_{3}(\bfG)$ of $\bfG$: \begin{theorem} \label{thm:bott} Let $\bfG$ be a simple, compact, simply connected Lie group. Then $\pi_{3}(\bfG) \simeq \BigZ$. Any Lie group homomorphism $\varphi : SU(2) \to \bfG$, induced by the Lie algebra homomorphism \begin{equation*} \mathrm{d} \varphi : su(2) \to \frkg, \ ({\bf i}, {\bf j}, {\bf k}) \mapsto ({\bf i}_{\alpha}, {\bf j}_{\alpha}, {\bf k}_{\alpha}) \end{equation*} for a highest root $\alpha$ in $\frkg$, induces an isomorphism $\pi_{3}(SU(2)) \to \pi_{3}(\bfG)$. \end{theorem} The identification $\pi_{3}(\bfG) \simeq \BigZ$ is due to Bott \cite{MR0087035}. For the proof that such a $\varphi$ induces an isomorphism, see Atiyah--Hitchin--Singer \cite[Section~8]{AHS}. By our normalization \eqref{eq:lie-norm}, $\mathrm{d} \varphi$ is isometric. Our goal now is to prove an analogue of Theorem~\ref{thm:instanton-SU2} concerning topological classes, characteristic numbers and instantons.
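The relations \eqref{eq:lie-alg-su2} and \eqref{eq:lie-norm-alp}, together with the $SU(2)$ normalization $\brk{A, B} = -\mathrm{tr}\,(AB)$, can be verified numerically. The following Python sketch (an illustration for the reader, not part of the proof; all names are ours) checks them for the standard basis above, including the sharp commutator bound $\abs{[A,B]} \leq \sqrt{2}\abs{A}\abs{B}$ established later in this section:

```python
import numpy as np

# Standard su(2) basis from the text, as 2x2 complex matrices.
I2 = np.array([[0, 1], [-1, 0]], dtype=complex)
J2 = np.array([[0, 1j], [1j, 0]], dtype=complex)
K2 = np.array([[1j, 0], [0, -1j]], dtype=complex)

def bracket(A, B):
    """Lie bracket [A, B] = AB - BA."""
    return A @ B - B @ A

def inner(A, B):
    """Ad-invariant inner product <A, B> = -tr(AB), real on su(2)."""
    return -np.trace(A @ B).real

norm = lambda A: np.sqrt(inner(A, A))

# Bracket relations (eq:lie-alg-su2): [i,j] = 2k, [j,k] = 2i, [k,i] = 2j.
assert np.allclose(bracket(I2, J2), 2 * K2)
assert np.allclose(bracket(J2, K2), 2 * I2)
assert np.allclose(bracket(K2, I2), 2 * J2)

# Norms (eq:lie-norm-alp): |i|^2 = |j|^2 = |k|^2 = 2 = 4/<alpha,alpha>
# under the normalization <alpha, alpha> = 2.
for A in (I2, J2, K2):
    assert np.isclose(inner(A, A), 2.0)

# Sharp commutator bound |[A,B]| <= sqrt(2)|A||B| on random elements,
# with equality attained for A = k, B = i (where [k,i] = 2j).
rng = np.random.default_rng(0)
for _ in range(100):
    A = sum(c * M for c, M in zip(rng.normal(size=3), (I2, J2, K2)))
    B = sum(c * M for c, M in zip(rng.normal(size=3), (I2, J2, K2)))
    assert norm(bracket(A, B)) <= np.sqrt(2) * norm(A) * norm(B) + 1e-9
assert np.isclose(norm(bracket(K2, I2)), np.sqrt(2) * norm(K2) * norm(I2))
```

The equality case matches the lemma below: ${\bf k}$ and ${\bf i}$ are two members of the distinguished triple for a highest root.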
Let $a$ be a $\mathcal A^{1, 2}_{loc}$ connection on $\BigR^{4}$ with finite energy, and let $O_{(\infty)}$ be a gauge at infinity for $a$ (which exists thanks to Theorem~\ref{thm:goodrep}). By Theorem~\ref{thm:bott}, $[O_{(\infty)}] = - \kappa [\varphi]$ for some $\kappa \in \BigZ$. We claim that: \begin{claim} We have $\boldsymbol{\chi} = - 8 \pi^{2} \kappa$. Moreover, there exists an instanton for each $\kappa$ with energy $8 \pi^{2} \abs{\kappa}$. \end{claim} To prove the claim, note that each self-dual (resp. anti-self-dual) $SU(2)$-connection $\tilde{a}_{\kappa}$ with second Chern number $c_{2} = -\kappa$ where $\kappa > 0$ (resp. $\kappa < 0$) induces a self-dual (resp. anti-self-dual) $\bfG$-connection $a_{\kappa} = \mathrm{d} \varphi(\tilde{a}_{\kappa})$ by the Lie algebra homomorphism $\mathrm{d} \varphi : su(2) \to \frkg$. Since $\mathrm{d} \varphi$ preserves the normalized $Ad$-invariant inner product, which equals $- \mathrm{tr}\,(AB)$ on $su(2)$, we have \begin{align*} \boldsymbol{\chi} =& \int_{\BigR^{4}} - \brk{\mathrm{d} \varphi(F[\tilde{a}_{\kappa}]), \mathrm{d} \varphi(F[\tilde{a}_{\kappa}])} = \int_{\BigR^{4}} \mathrm{tr}\, (F[\tilde{a}_{\kappa}] \wedge F[\tilde{a}_{\kappa}]) = 8 \pi^{2} c_{2}, \\ \calE_{e}(a_{\kappa}) =& \frac{1}{2} \int_{\BigR^{4}} \brk{\mathrm{d} \varphi(F_{jk}[\tilde{a}_{\kappa}]), \mathrm{d} \varphi(F^{jk}[\tilde{a}_{\kappa}]) } = \frac{1}{2} \int_{\BigR^{4}} - \mathrm{tr}\, (F_{jk}[\tilde{a}_{\kappa}] F^{jk}[\tilde{a}_{\kappa}]) = 8 \pi^{2} \abs{c_{2}}. \end{align*} Moreover, by a standard computation, the degree of a gauge at infinity $\tilde{O}_{\kappa (\infty)}$ for $\tilde{a}_{\kappa}$, viewed as a map $\BigS^{3} \to SU(2) \simeq \BigS^{3}$, is equal to $c_{2} = -\kappa$ (with the appropriate choices of the orientations).
Correspondingly, $O_{\kappa (\infty)} = \varphi \circ \tilde{O}_{\kappa (\infty)}$ is a gauge at infinity for $a_{\kappa}$, and since $\varphi$ induces the isomorphism $\pi_{3}(SU(2)) \to \pi_{3}(\bfG)$, we have $[O_{\kappa(\infty)}] = - \kappa [\varphi]$. Since $\boldsymbol{\chi}$ depends only on the topological class, the claim follows. Next, analogous to Theorem~\ref{thm:GKS-SU2}, we claim that: \begin{claim} Let $a$ be a finite energy harmonic Yang--Mills connection, which is not an instanton. Then \begin{equation*} \calE_{e}(a) \geq \abs{\boldsymbol{\chi}} + 16 \pi^{2}. \end{equation*} \end{claim} In essence, this is \cite[Corollary~1.2]{GKS}. However, to ensure that we obtain the sharp bound, we need to verify that the proof goes through for our choice of $\brk{\cdot, \cdot}$, without relying on an embedding $\frkg \subset so(n)$ to normalize $\brk{\cdot, \cdot}$ as in \cite{GKS}. For this purpose, we have the following replacement of \cite[Lemma~2.1]{GKS}: \begin{lemma} \label{lem:GKS-inner} Under our normalization \eqref{eq:lie-norm}, we have \begin{equation*} \abs{[A, B]} \leq \sqrt{2} \abs{A} \abs{B} \qquad \hbox{ for any } A, B \in \frkg, \end{equation*} with equality if and only if, up to an $Ad(O)$-action, $A$ and $B$ are proportional to two of $\set{{\bf i}_{\alpha}, {\bf j}_{\alpha}, {\bf k}_{\alpha}}$ for some highest root $\alpha$. \end{lemma} \begin{proof} Consider a maximal abelian subalgebra $\mathfrak{h}$ containing $A$. The eigenvalues of $ad(A)$ are $\set{0, i \alpha(A)}_{\alpha \in \Delta}$. By \eqref{eq:lie-norm}, $\abs{\alpha} \leq \sqrt{2}$. Thus, \begin{equation*} \abs{[A, B]} = \abs{ad(A) B} \leq \sup_{\alpha \in \Delta} \abs{\alpha(A)} \abs{B} \leq \sup_{\alpha \in \Delta} \abs{\alpha} \abs{A} \abs{B} \leq \sqrt{2} \abs{A} \abs{B}.
\end{equation*} In order for the equalities to hold, $\alpha$ must be a highest root, $A = \abs{A} H_{\alpha} = \abs{A} {\bf k}_{\alpha}$, and $B \in span({\bf i}_{\alpha}, {\bf j}_{\alpha})$. Since $Ad(\exp(s {\bf k}_{\alpha}))$ simply rotates the plane $span({\bf i}_{\alpha}, {\bf j}_{\alpha})$, and leaves ${\bf k}_{\alpha}$ invariant, we see that $Ad(\exp(s {\bf k}_{\alpha})) B$ is parallel to ${\bf i}_{\alpha}$ for an appropriate choice of $s \in \BigR$. Finally, the converse is easy to verify. \qedhere \end{proof} The proof in \cite{GKS} now goes through for a $\bfG$-bundle with normalization \eqref{eq:lie-norm} with the parameters $\gamma_{0} = \sqrt{2}$ and $\gamma_{1} = \frac{2}{\sqrt{3}} \gamma_{0} = \frac{4}{\sqrt{6}}$. The $SU(2)$-instanton with $\kappa = 1$, which we constructed above, saturates the inequalities in \cite{GKS}, exactly as in \cite[Remark~2.7 and Section~3.2]{GKS}. \pfstep{Step~2: $\bfG$ is a general nonabelian compact Lie group} Finally, we consider a general nonabelian compact Lie group $\bfG$ and prove Theorem~\ref{thm:thr}. Observe that the $Ad$-invariant inner product on $\frkg$ can be used to define the orthogonal complement $\mathfrak{h}^{\perp}$ of an ideal $\mathfrak{h} \subseteq \frkg$, which is also an ideal. Thus $\frkg$ admits the direct-sum splitting \begin{equation*} \frkg = \tilde{\frkg}_{1} \oplus \cdots \oplus \tilde{\frkg}_{\tilde{n}} \end{equation*} into Lie algebra ideals, where each summand has no proper nonzero ideal. In fact, each summand is either $1$-dimensional, and thus abelian, or simple. Since $\bfG$ is assumed to be nonabelian, at least one summand is simple. Thus, we arrive at the decomposition \begin{equation*} \frkg = \frkg_{1} \oplus \cdots \oplus \frkg_{n} \oplus \mathfrak{a}, \end{equation*} where $n \geq 1$, each $\frkg_{i}$ is simple, and $\mathfrak{a}$ is abelian.
As a result, the universal cover $\tilde{\bfG}$ of $\bfG$ splits into \begin{equation*} \tilde{\bfG} = \prod_{i} \bfG_{i} \times \BigR^{r}, \end{equation*} where $\bfG_{i}$ is the simply connected Lie group corresponding to $\frkg_{i}$, and $r = \dim \mathfrak{a}$. Denote by $\boldsymbol{\pi}_{i}$ the projection $\tilde{\bfG} \to \bfG_{i}$, and by $\mathrm{d} \boldsymbol{\pi}_{i}$ the corresponding projection $\frkg \to \frkg_{i}$, with the convention $\bfG_{n+1} = \BigR^{r}$, $\frkg_{n+1} = \mathfrak{a}$. As we are working with global gauge potentials on $\BigR^{4}$, the splitting allows us to decompose any $a$ into components $\mathrm{d} \boldsymbol{\pi}_{i} (a)$, which are completely decoupled from each other. We have the splitting \begin{align} \boldsymbol{\chi} = & \int_{\BigR^{4}} - \brk{F[a], F[a]} = \sum_{i} \int_{\BigR^{4}} - \brk{\mathrm{d} \boldsymbol{\pi}_{i} (F[a]), \mathrm{d} \boldsymbol{\pi}_{i} (F[a])} = \sum_{i} \boldsymbol{\chi}(\mathrm{d} \boldsymbol{\pi}_{i}(a)), \label{eq:cpt-ch} \\ \calE_{e}(a) = & \frac{1}{2} \int_{\BigR^{4}} \brk{F_{jk}[a], F^{jk}[a]} = \sum_{i} \frac{1}{2} \int_{\BigR^{4}} \brk{\mathrm{d} \boldsymbol{\pi}_{i} (F_{jk}[a]), \mathrm{d} \boldsymbol{\pi}_{i} (F^{jk}[a])} = \sum_{i} \calE_{e}(\mathrm{d} \boldsymbol{\pi}_{i}(a)). \label{eq:cpt-en} \end{align} Moreover, $a$ is a harmonic Yang--Mills connection if and only if each $\mathrm{d} \boldsymbol{\pi}_{i}(a)$ is. In this case, $\mathrm{d} \boldsymbol{\pi}_{n+1}(a) = 0$, since no nontrivial finite energy harmonic $2$-form exists on $\BigR^{4}$. For each compact simple $\bfG_{i}$, let $E_{i}$ be the energy of a first instanton; from Step~1, we know that $E_{i} = \frac{16}{\brk{\alpha, \alpha}} \pi^{2}$, where $\alpha$ is a highest root in $\frkg_{i}$. Reordering the factors if necessary, we may arrange that $E_{1} \leq E_{2} \leq \ldots \leq E_{n}$. In particular, $E_{1}$ coincides with the infimum in Theorem~\ref{thm:thr}, and part (1) follows.
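The splittings \eqref{eq:cpt-ch} and \eqref{eq:cpt-en} rest on the pointwise orthogonality of the ideals $\frkg_{i}$ with respect to $\brk{\cdot, \cdot}$. A minimal numerical illustration (ours, not part of the argument), with $\frkg = su(2) \oplus su(2)$ embedded block-diagonally and $\brk{A, B} = -\mathrm{tr}\,(AB)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2():
    """Random element of su(2) in the standard basis i, j, k."""
    c = rng.normal(size=3)
    return (c[0] * np.array([[0, 1], [-1, 0]], dtype=complex)
            + c[1] * np.array([[0, 1j], [1j, 0]], dtype=complex)
            + c[2] * np.array([[1j, 0], [0, -1j]], dtype=complex))

def inner(A, B):
    """Ad-invariant inner product <A, B> = -tr(AB)."""
    return -np.trace(A @ B).real

Z = np.zeros((2, 2), dtype=complex)
A1, A2, B1, B2 = (random_su2() for _ in range(4))
# Block-diagonal embedding of g = su(2) (+) su(2); the projections d(pi_i)
# simply pick out the diagonal blocks.
A = np.block([[A1, Z], [Z, A2]])
B = np.block([[B1, Z], [Z, B2]])

# <A, B> = <d(pi_1)A, d(pi_1)B> + <d(pi_2)A, d(pi_2)B>: the pointwise identity
# underlying the splitting of the characteristic number and of the energy.
assert np.isclose(inner(A, B), inner(A1, B1) + inner(A2, B2))
```

Integrating this pointwise identity over $\BigR^{4}$ gives exactly the decompositions of $\boldsymbol{\chi}$ and $\calE_{e}$ above.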
To prove part (2), note that if $a$ is a finite energy harmonic Yang--Mills connection with energy $< 2 E_{1} \leq 2 E_{i}$, then by Step~1, each $\mathrm{d} \boldsymbol{\pi}_{i}(a)$ is either zero or a first instanton. By \eqref{eq:cpt-en}, we then see that exactly one of the $\mathrm{d} \boldsymbol{\pi}_{i}(a)$ is nonzero. Thus $\abs{\boldsymbol{\chi}} = \abs{\boldsymbol{\chi}(\mathrm{d} \boldsymbol{\pi}_{i}(a))} = \calE_{e}(\mathrm{d} \boldsymbol{\pi}_{i}(a)) = \calE_{e}(a)$, as desired. \end{document}
\begin{document} \abovedisplayskip=6pt plus 1pt minus 1pt \belowdisplayskip=6pt plus 1pt minus 1pt \thispagestyle{empty} \vspace*{-1.0truecm} \noindent \vskip 10mm \begin{center}{\large Positive quandle homology and its applications in knot theory} \end{center} \vskip 5mm \begin{center}{Zhiyun Cheng \quad Hongzhu Gao\\ {\small School of Mathematical Sciences, Beijing Normal University \\Laboratory of Mathematics and Complex Systems, Ministry of Education, Beijing 100875, China \\(email: [email protected] \quad [email protected])}}\end{center} \vskip 1mm \noindent{\small {\small\bf Abstract} Algebraic homology and cohomology theories for quandles have been studied extensively in recent years. With a given quandle 2-cocycle (3-cocycle) one can define a state-sum invariant for knotted curves (surfaces). In this paper we introduce another version of quandle (co)homology theory, called positive quandle (co)homology. Some properties of positive quandle (co)homology groups are given and some applications of positive quandle cohomology in knot theory are discussed. \ \ \baselineskip 12pt \noindent{\small\bf Keywords} quandle homology; positive quandle homology; cocycle knot invariant\ \ \noindent{\small\bf MR(2010) Subject Classification} 57M25, 57M27, 57Q45\ \ {\rm }} \vskip 1mm \baselineskip 12pt \section{Introduction} In knot theory, by considering representations from the knot group onto the dihedral group of order $2n$ one obtains a family of elementary knot invariants, known as Fox $n$-colorings \cite{Fox1961}. A quandle, a set with a self-distributive operation satisfying axioms analogous to the Reidemeister moves, was first proposed by D. Joyce \cite{Joy1982} and S. V. Matveev \cite{Mat1984} independently. With a given quandle $X$ one can define the quandle coloring invariant by counting the quandle homomorphisms from the fundamental quandle of a knot to $X$. For the fundamental quandle and its presentations the reader is referred to \cite{Joy1982} and \cite{Fen1992}.
Equivalently speaking, one can label each arc of a knot diagram by an element of a fixed quandle, subject to certain constraints, and the quandle coloring invariant counts the number of such labellings. It is natural to ask how to improve this integer-valued knot invariant. Since the quandle coloring invariant equals the number of different proper colorings, it is natural to associate to each colored knot diagram a weight function which does not depend on the choice of the knot diagram. In this way, instead of several colored knot diagrams one obtains several weight functions, and the number of these weight functions is exactly the quandle coloring invariant. In \cite{Car2003} J.S. Carter et al. associate a Boltzmann weight to each crossing and then consider the signed product of the Boltzmann weights over all crossing points. In fact, based on R. Fenn, C. Rourke and B. Sanderson's framework of rack and quandle homology \cite{Fen1995,Fen1996}, J.S. Carter et al. describe a homology theory for quandles such that each 2-cocycle and 3-cocycle can be used to define a family of invariants of knots and knotted surfaces respectively. Many applications of quandle cocycle invariants have been investigated in the past decade. For example, with a suitable choice of 3-cocycle from the dihedral quandle $R_3$, one can prove the chirality of the trefoil \cite{Kam2002}. For knotted surfaces, by using cocycle invariants it was proved that the 2-twist spun trefoil is non-invertible and has triple point number 4 \cite{Car2003,Sat2004}. In this paper we introduce another quandle homology and cohomology theory, called positive quandle homology and positive quandle cohomology. The definition of positive quandle (co)homology is similar to that of the original quandle (co)homology. It is not surprising that positive quandle homology shares many common properties with quandle homology, which will be discussed in Section 4.
The most interesting part of this new quandle (co)homology theory is that it can also be used to define cocycle invariants for knots and knotted surfaces. Most properties of quandle homology and quandle cocycle invariants have their corresponding versions in positive quandle homology theory. This phenomenon suggests that quandle homology theory and positive quandle homology theory are parallel to each other, and in some special cases (Proposition 3.3) they coincide. The rest of this paper is arranged as follows: In Section 2, a brief review of quandle structures and the quandle coloring invariant is given. Some applications of the quandle coloring invariant in knot theory will also be discussed. In Section 3, we give the definition of positive quandle homology and cohomology. The relation between positive quandle (co)homology and quandle (co)homology will also be studied. Section 4 is devoted to the calculation of positive quandle homology and cohomology. We will calculate the positive quandle homology for some simple quandles. In Section 5, we show how to use positive quandle 2-cocycles and 3-cocycles to define invariants for knots and knotted surfaces respectively. We end this paper with two examples which study the trivially colored crossing points of a knot diagram, from which the motivation of this study arises. \section{Quandle and quandle coloring invariants} First we briefly review the definition of a quandle. \begin{definition} A quandle $(X, \ast)$ is a set $X$ with a binary operation $(a, b)\rightarrow a\ast b$ satisfying the following axioms: \begin{enumerate} \item For any $a\in X$, $a\ast a=a$. \item For any $b, c\in X$, there exists a unique $a\in X$ such that $a\ast b=c$. \item For any $a, b, c\in X$, $(a\ast b)\ast c=(a\ast c)\ast(b\ast c)$. \end{enumerate} \end{definition} Usually we simply denote a quandle $(X, \ast)$ by $X$.
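For a finite set, the three axioms can be checked mechanically. The following Python sketch (the function names are ours, for illustration only) verifies them for a candidate binary operation; the dihedral and trivial quandles used below are among the standard examples recalled next:

```python
def is_quandle(X, op):
    """Check the three quandle axioms for a binary operation op on a finite set X."""
    X = list(X)
    # Axiom 1 (idempotency): a * a = a for all a.
    if any(op(a, a) != a for a in X):
        return False
    # Axiom 2: for all b, c there is a unique a with a * b = c,
    # i.e. a -> a * b is a bijection of X for every b.
    for b in X:
        if sorted(op(a, b) for a in X) != sorted(X):
            return False
    # Axiom 3 (right self-distributivity): (a*b)*c = (a*c)*(b*c).
    return all(op(op(a, b), c) == op(op(a, c), op(b, c))
               for a in X for b in X for c in X)

# Dihedral quandle R_n: i * j = 2j - i (mod n).
dihedral = lambda n: (range(n), lambda i, j: (2 * j - i) % n)
assert is_quandle(*dihedral(3)) and is_quandle(*dihedral(5))

# Trivial quandle T_n: a * b = a.
assert is_quandle(range(4), lambda a, b: a)

# A non-example: i * j = i + j (mod 3) fails idempotency (1 * 1 = 2).
assert not is_quandle(range(3), lambda i, j: (i + j) % 3)
```

The brute-force axiom check costs $O(|X|^3)$ operations, which is harmless for the small quandles considered in this paper.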
If a non-empty set $X$ with a binary operation $(a, b)\rightarrow a\ast b$ satisfies only the second and the third axioms, then we call it a \emph{rack}. In particular, if a quandle $X$ satisfies the stronger form of the second axiom "for any $b, c\in X$, $(c\ast b)\ast b=c$", i.e. the unique element is $a=c\ast b$, we call such a quandle an \emph{involutory quandle} \cite{Joy1982} or \emph{kei} \cite{Tak1942}. The relation below follows directly from the definitions above: \begin{center} $\{$keis$\}\subset\{$quandles$\}\subset\{$racks$\}$. \end{center} In the second axiom we usually denote the element $a$ by $a=c\ast^{-1}b$. It is not difficult to observe that $(X, \ast^{-1})$ also defines a quandle structure, which is usually called the \emph{dual quandle} of $(X, \ast)$. We denote the dual of $X$ by $X^*$. Note that a quandle is an involutory quandle if and only if $\ast=\ast^{-1}$. Next we list some of the most common examples of quandles; see \cite{Fen1992,Ho2005,Joy1982,Ven2012} for more. \begin{itemize} \item Trivial quandle of order $n$: $T_n=\{a_1, \cdots, a_n\}$ and $a_i\ast a_j=a_i$. \item Dihedral quandle of order $n$: $R_n=\{0, \cdots, n-1\}$ and $i\ast j=2j-i$ $($mod $n)$. \item Conjugation quandle: a conjugacy class $X$ of a group $G$ with $a\ast b=b^{-1}ab$. \item Alexander quandle: a $Z[t, t^{-1}]$-module $M$ with $a\ast b=ta+(1-t)b$. \end{itemize} From now on, all quandles mentioned throughout this paper are assumed to be finite. With a given finite quandle $X$, we can define an associated integer-valued knot invariant Col$_X(K)$, i.e. the quandle coloring invariant. Let $K$ be a knot diagram. We will often abuse notation, letting $K$ refer both to a knot diagram and to the knot itself; the intended meaning will be clear from the context. A \emph{coloring} of $K$ by a given quandle $X$ is a map from the set of arcs of $K$ to the elements of $X$.
We say a coloring is \emph{proper} if at each crossing the images of the map satisfy the relation given in Figure 1. \begin{center} \includegraphics{figure1.eps} \centerline{\small Figure 1: The proper coloring rule\quad} \end{center} Now we define the \emph{quandle coloring invariant} Col$_X(K)$ to be the number of proper colorings of $K$ by the quandle $X$. Since $X$ is finite, this definition makes sense. Although the definition of Col$_X(K)$ is given in terms of a knot diagram, the integer Col$_X(K)$ is in fact independent of the choice of diagram: the three axioms in the definition of a quandle correspond to the three Reidemeister moves. In particular Col$_X(K)\geq n$ if $X$ contains $n$ elements, since there always exist $n$ trivial colorings. When $X=R_n$, we have Col$_{R_n}(K)=$Col$_n(K)$, the number of distinct Fox $n$-colorings of $K$ \cite{Fox1961}. It is well-known that Col$_n(K)$ equals the number of distinct representations from the knot group $\pi_1(R^3\backslash K)$ to the dihedral group of order $2n$. As a generalization of Fox $n$-coloring, Col$_X(K)$ is equal to the number of quandle homomorphisms from the fundamental quandle of $K$ to $X$. Here the fundamental quandle of $K$ is defined by assigning generators to arcs and certain relations to crossings, which is quite similar to the presentation of the knot group. See \cite{Joy1982} and \cite{Mat1984} for more details. Before ending this section we list some properties of the quandle coloring invariant. \begin{itemize} \item Col$_X(K)$=Col$_X(\overline{K^*})$. Here $\overline{K^*}$ denotes the mirror image of $K$ with the reversed orientation. This follows from the fact that the fundamental quandles of $K$ and $\overline{K^*}$ are isomorphic \cite{Joy1982,Mat1984}. \item log$_{|X|}($Col$_X(K))\leq b(K)$ and log$_{|X|}($Col$_X(K))\leq u(K)+1$ \cite{Prz1998}.
Here $|X|$ denotes the order of $X$, and $b(K)$ and $u(K)$ denote the bridge number and the unknotting number respectively. The readers are referred to \cite{Cla2013} for some recent progress on the applications of quandle coloring invariants. \item Col$_X(K)$ is not a Vassiliev invariant. This can be proved with an idea similar to that of \cite{Eis1999}, in which M. Eisermann proved that Col$_n(K)$ is not a Vassiliev invariant. Briefly speaking, in \cite{Eis1999} it was proved that if a Vassiliev invariant $F$ is bounded on any given vertical twist sequence, then $F$ is constant. On the other hand, for any fixed vertical twist sequence the braid index is bounded by some integer, say $b$. It is not difficult to show that the fundamental quandle of each knot of this vertical twist sequence can be generated by at most $b$ elements. Assume $X$ contains $n$ elements; then we deduce that Col$_X(K)\leq n^b$. Since Col$_X(K)$ is not constant (note that the choice of the quandle $X$ is arbitrary), it is not a Vassiliev invariant. \end{itemize} \section{Homology and cohomology theory for quandles} Rack (co)homology theory, which parallels group (co)homology theory, was first defined in \cite{Fen1996}. As a modification of the rack (co)homology, quandle (co)homology was proposed by J.S. Carter, D. Jelsovsky, S. Kamada, L. Langford and M. Saito in \cite{Car2003}. As an application, they defined state-sum invariants for knots and knotted surfaces by using quandle cocycles. Some calculations of quandle homology groups and the associated state-sum invariants can be found in \cite{Car2001J,Car2001A,Moc2003,Nie2009}; see \cite{Car2012} for a good survey. First we take a short review of the construction of the quandle (co)homology groups; then we will give the definition of the positive quandle (co)homology groups. Assume $X$ is a finite quandle. Let $C^R_n(X)$ denote the free abelian group generated by $n$-tuples $(a_1, \cdots, a_n)$, where $a_i\in X$.
In order to make $C^R_n(X)$ into a chain complex, let us consider the following two homomorphisms from $C^R_n(X)$ to $C^R_{n-1}(X)$, here $\overline{a_i}$ denotes the omission of the element $a_i$. \begin{flalign*} & d_1(a_1, \cdots, a_n)=\sum\limits_{i=1}^n(-1)^i(a_1, \cdots, \overline{a_i}, \cdots, a_n) \quad (n\geq2)\\ & d_2(a_1, \cdots, a_n)=\sum\limits_{i=1}^n(-1)^i(a_1\ast a_i, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n) \quad (n\geq2)\\ & d_i(a_1, \cdots, a_n)=0 \quad (n\leq1, i=1, 2) \end{flalign*} For the two homomorphisms $d_1$, $d_2$ defined above, we have the following lemma. \begin{lemma} $d_1^2=d_2^2=d_1d_2+d_2d_1=0$. \end{lemma} \begin{proof} One computes \begin{flalign*} &d_1^2(a_1, \cdots, a_n)&\\ =&d_1(\sum\limits_{i=1}^n(-1)^i(a_1, \cdots, \overline{a_i}, \cdots, a_n))\\ =&\sum\limits_{i=1}^n(-1)^i(\sum\limits_{j<i}(-1)^j(a_1, \cdots, \overline{a_j}, \cdots, \overline{a_i}, \cdots, a_n)+\sum\limits_{j>i}(-1)^{j-1}(a_1, \cdots, \overline{a_i}, \cdots, \overline{a_j}, \cdots, a_n))\\ =&0 \end{flalign*} \begin{flalign*} &d_2^2(a_1, \cdots, a_n)&\\ =&d_2(\sum\limits_{i=1}^n(-1)^i(a_1\ast a_i, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n))\\ =&\sum\limits_{i=1}^n(-1)^i(\sum\limits_{j<i}(-1)^j((a_1\ast a_i)\ast (a_j\ast a_i), \cdots, (a_{j-1}\ast a_i)\ast (a_j\ast a_i), {a_{j+1}\ast a_i}, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n)\\ +&\sum\limits_{j>i}(-1)^{j-1}((a_1\ast a_i)\ast a_j, \cdots, (a_{i-1}\ast a_i)\ast a_j, a_{i+1}\ast a_j, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, a_n))\\ =&0 \end{flalign*} \begin{flalign*} &d_1d_2(a_1, \cdots, a_n)+d_2d_1(a_1, \cdots, a_n)&\\ =&d_1(\sum\limits_{i=1}^n(-1)^i(a_1\ast a_i, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n))+d_2(\sum\limits_{i=1}^n(-1)^i(a_1, \cdots, \overline{a_i}, \cdots, a_n))\\ =&\sum\limits_{i=1}^n\sum\limits_{j<i}(-1)^{i+j}(a_1\ast a_i, \cdots, \overline{a_j\ast a_i}, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n)\\ +&\sum\limits_{i=1}^n\sum\limits_{j>i}(-1)^{i+j-1}(a_1\ast a_i, 
\cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, \overline{a_j}, \cdots, a_n)\\ +&\sum\limits_{i=1}^n\sum\limits_{j<i}(-1)^{i+j}(a_1\ast a_j, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, \overline{a_i}, \cdots, a_n)\\ +&\sum\limits_{i=1}^n\sum\limits_{j>i}(-1)^{i+j-1}(a_1\ast a_j, \cdots, \overline{a_i\ast a_j}, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, a_n)\\ =&0 \end{flalign*} \end{proof} Lemma 3.1 suggests that we investigate the following four chain complexes: $\{C^R_n(X), d_1\}$, $\{C^R_n(X), d_2\}$, $\{C^R_n(X), d_1+d_2\}$ and $\{C^R_n(X), d_1-d_2\}$. We remark that $\{C^R_n(X), d_1\}$ is acyclic. In a recent work of A. Inoue and Y. Kabaya \cite{Ino2013}, $\{C^R_n(X), d_1\}$ was regarded as a right $\mathbf{Z}[G_X]$-module, here $G_X$ denotes the associated group of $X$, i.e. $G_X$ is generated by the elements of $X$ and satisfies the relation $a\ast b=b^{-1}ab$. With this viewpoint they defined the simplicial quandle homology to be the homology group of the chain complex $\{C^R_n(X)\bigotimes_{\mathbf{Z}[G_X]}\mathbf{Z}, d_1\}$. The readers are referred to \cite{Ino2013} for more details. Assume $X$ is a fixed finite quandle. Let $C^D_n(X)$ $(n\geq2)$ denote the free abelian group generated by $n$-tuples $(a_1, \cdots, a_n)$ with $a_i=a_{i+1}$ for some $1\leq i\leq n-1$, and $C^D_n(X)=0$ if $n\leq1$. The following lemma tells us $\{C^D_n(X), d_1\pm d_2\}$ is a sub-complex of $\{C^R_n(X), d_1\pm d_2\}$. \begin{lemma} $\{C^D_n(X), d_i\}$ is a sub-complex of $\{C^R_n(X), d_i\}$ $(i=1, 2)$. \end{lemma} \begin{proof} Choose an $n$-tuple $(a_1, \cdots, a_i, a_{i+1}, \cdots, a_n)\in C^D_n(X)$, where $a_i=a_{i+1}$.
One computes \begin{flalign*} &d_1(a_1, \cdots, a_i, a_{i+1}, \cdots, a_n)&\\ =&\sum\limits_{j<i}(-1)^j(a_1, \cdots, \overline{a_j}, \cdots, a_i, a_{i+1}, \cdots, a_n)+\sum\limits_{j>i+1}(-1)^j(a_1, \cdots, a_i, a_{i+1}, \cdots, \overline{a_j}, \cdots, a_n)\\ +&(-1)^i(a_1, \cdots, \overline{a_i}, a_{i+1}, \cdots, a_n)+(-1)^{i+1}(a_1, \cdots, a_i, \overline{a_{i+1}}, \cdots, a_n)\\ =&\sum\limits_{j<i}(-1)^j(a_1, \cdots, \overline{a_j}, \cdots, a_i, a_{i+1}, \cdots, a_n)+\sum\limits_{j>i+1}(-1)^j(a_1, \cdots, a_i, a_{i+1}, \cdots, \overline{a_j}, \cdots, a_n)\\ \in &C^D_{n-1}(X) \end{flalign*} \begin{flalign*} &d_2(a_1, \cdots, a_i, a_{i+1}, \cdots, a_n)&\\ =&\sum\limits_{j<i}(-1)^j(a_1\ast a_j, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, a_i, a_{i+1}, \cdots, a_n)\\ +&\sum\limits_{j>i+1}(-1)^j(a_1\ast a_j, \cdots, a_i\ast a_j, a_{i+1}\ast a_j, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, a_n)\\ +&(-1)^i(a_1\ast a_i, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n)+(-1)^{i+1}(a_1\ast a_{i+1}, \cdots, a_i\ast a_{i+1}, a_{i+2}, \cdots, a_n)\\ =&\sum\limits_{j<i}(-1)^j(a_1\ast a_j, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, a_i, a_{i+1}, \cdots, a_n)\\ +&\sum\limits_{j>i+1}(-1)^j(a_1\ast a_j, \cdots, a_i\ast a_j, a_{i+1}\ast a_j, \cdots, a_{j-1}\ast a_j, a_{j+1}, \cdots, a_n)\\ \in &C^D_{n-1}(X) \end{flalign*} \end{proof} Define $C_n^Q(X)=C_n^R(X)/C_n^D(X)$; then we have two chain complexes $\{C_n^Q(X), d_1\pm d_2\}$, where $d_1\pm d_2$ denote the induced homomorphisms. For simplicity, we use $\partial^+$ and $\partial^-$ to denote $d_1+d_2$ and $d_1-d_2$ respectively, and use $C_\ast^{W\pm}(X)$ to denote $\{C_\ast^{W}(X), \partial_\ast^\pm\}$ $(W\in\{R, D, Q\})$.
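The identities of Lemma 3.1, and hence $(\partial^{\pm})^2=(d_1\pm d_2)^2=0$, can also be confirmed by brute force on a concrete quandle. The sketch below (our illustration, not part of the paper) encodes chains as dictionaries from tuples to integer coefficients and checks the identities on all $4$-tuples over the dihedral quandle $R_3$:

```python
from itertools import product

def star(a, b):
    # dihedral quandle R_3: a * b = 2b - a (mod 3)
    return (2 * b - a) % 3

def d1_terms(t):
    # d_1(a_1,...,a_n) = sum_i (-1)^i (a_1,...,omit a_i,...,a_n); zero if n <= 1
    if len(t) >= 2:
        for i in range(len(t)):
            yield t[:i] + t[i + 1:], (-1) ** (i + 1)

def d2_terms(t):
    # d_2(a_1,...,a_n) = sum_i (-1)^i (a_1*a_i,...,a_{i-1}*a_i, a_{i+1},...,a_n)
    if len(t) >= 2:
        for i in range(len(t)):
            head = tuple(star(a, t[i]) for a in t[:i])
            yield head + t[i + 1:], (-1) ** (i + 1)

def compose(f, g, t):
    # coefficient dictionary of f(g(t))
    out = {}
    for s, c in g(t):
        for r, e in f(s):
            out[r] = out.get(r, 0) + c * e
    return out

def is_zero(chain):
    return all(v == 0 for v in chain.values())

# Check d1^2 = d2^2 = d1 d2 + d2 d1 = 0 on all 4-tuples over R_3;
# together these give (d1 + d2)^2 = (d1 - d2)^2 = 0, i.e. Lemma 3.1.
for t in product(range(3), repeat=4):
    assert is_zero(compose(d1_terms, d1_terms, t))
    assert is_zero(compose(d2_terms, d2_terms, t))
    mixed = compose(d1_terms, d2_terms, t)
    for r, e in compose(d2_terms, d1_terms, t).items():
        mixed[r] = mixed.get(r, 0) + e
    assert is_zero(mixed)
```

The sign $(-1)^{i+1}$ in the code corresponds to $(-1)^i$ in the paper, whose indices start at $1$ rather than $0$.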
For an abelian group $G$, define the chain complex $C_\ast^{W\pm}(X; G)$ and the cochain complex $C_{W\pm}^{\ast}(X; G)$ as below $(W\in\{R, D, Q\})$ \begin{itemize} \item $C_\ast^{W\pm}(X; G)=C_\ast^{W\pm}(X)\bigotimes G$, \quad $\partial_\ast^\pm=\partial_\ast^\pm\bigotimes$ id; \item $C_{W\pm}^{\ast}(X; G)=$Hom$(C_\ast^{W\pm}(X), G)$, \quad $\delta_\pm^\ast=$Hom$(\partial_\ast^\pm$, id). \end{itemize} The \emph{positive quandle (co)homology groups} of $X$ with coefficient $G$ are defined to be the (co)homology groups of the (co)chain complex $C_\ast^{Q+}(X; G)$ $(C_{Q+}^{\ast}(X; G))$, and the \emph{negative quandle (co)homology groups} of a quandle $X$ with coefficient $G$ are defined to be the (co)homology groups of the (co)chain complex $C_\ast^{Q-}(X; G)$ $(C_{Q-}^{\ast}(X; G))$. In other words, \begin{center} $H_n^{Q\pm}(X; G)=H_n(C_\ast^{Q\pm}(X; G))$ and $H^n_{Q\pm}(X; G)=H^n(C_{Q\pm}^{\ast}(X; G))$. \end{center} Similarly we can define the \emph{$\pm$ rack (co)homology groups} and \emph{$\pm$ degeneration (co)homology groups} as below, \begin{center} $H_n^{R\pm}(X; G)=H_n(C_\ast^{R\pm}(X; G))$ and $H^n_{R\pm}(X; G)=H^n(C_{R\pm}^{\ast}(X; G))$,\\ $H_n^{D\pm}(X; G)=H_n(C_\ast^{D\pm}(X; G))$ and $H^n_{D\pm}(X; G)=H^n(C_{D\pm}^{\ast}(X; G))$. \end{center} The reader may have recognized that the negative quandle (co)homology groups are nothing but the quandle (co)homology groups introduced by J.S. Carter et al. in \cite{Car2003}. Therefore we will still use the name quandle (co)homology instead of negative quandle (co)homology, and write $H_n^Q(X; G)$ $(H^n_Q(X; G))$ instead of $H_n^{Q-}(X; G)$ $(H^n_{Q-}(X; G))$. In the rest of this paper we will focus on the positive quandle homology groups $H_\ast^{Q+}(X; G)$ and cohomology groups $H^\ast_{Q+}(X; G)$. In particular, when $G=\mathbf{Z}_2$, the following result is obvious. \begin{proposition} $H_n^{Q+}(X; \mathbf{Z}_2)\cong H_n^Q(X; \mathbf{Z}_2)$ and $H^n_{Q+}(X; \mathbf{Z}_2)\cong H^n_Q(X; \mathbf{Z}_2)$.
\end{proposition} At the end of this section we list the positive quandle 2-cocycle condition and the positive quandle 3-cocycle condition below. Later it will be shown that they are related to the third Reidemeister move of knots and the tetrahedral move of knotted surfaces. The readers are invited to compare these with the quandle 2-cocycle condition and quandle 3-cocycle condition given in \cite{Car2003}. \begin{itemize} \item A positive quandle 2-cocycle $\phi$ satisfies the condition \begin{center} $-\phi(b, c)-\phi(b, c)+\phi(a, c)+\phi(a\ast b, c)-\phi(a, b)-\phi(a\ast c, b\ast c)=0$. \end{center} \item A positive quandle 3-cocycle $\theta$ satisfies the condition \begin{center} $-\theta(b, c, d)-\theta(b, c, d)+\theta(a, c, d)+\theta(a\ast b, c, d)$\\ $-\theta(a, b, d)-\theta(a\ast c, b\ast c, d)+\theta(a, b, c)+\theta(a\ast d, b\ast d, c\ast d)=0$. \end{center} \end{itemize} \section{Computing positive quandle homology and cohomology} This section is devoted to the calculation of positive quandle homology and cohomology for some simple examples. Before doing so, we discuss some basic properties of positive quandle homology and cohomology. Most of these results have their corresponding versions in quandle homology and quandle cohomology. First, it was pointed out that since $\{C_n^{Q}(X)\}$ is a chain complex of free abelian groups, there is a universal coefficient theorem for quandle homology and quandle cohomology \cite{Car2001J}. For the same reason, there also exists a universal coefficient theorem for positive quandle homology and cohomology.
\begin{theorem}[Universal Coefficient Theorem] For a given quandle $X$, there is a pair of split exact sequences \begin{center} $0\rightarrow H^{Q+}_n(X; \mathbf{Z})\bigotimes G\rightarrow H_n^{Q+}(X; G)\rightarrow \emph{Tor}(H_{n-1}^{Q+}(X; \mathbf{Z}), G)\rightarrow0$, \end{center} \begin{center} $0\rightarrow \emph{Ext}(H_{n-1}^{Q+}(X; \mathbf{Z}), G)\rightarrow H_{Q+}^n(X; G)\rightarrow \emph{Hom}(H_n^{Q+}(X; \mathbf{Z}), G)\rightarrow0$. \end{center} \end{theorem} The universal coefficient theorem tells us that it suffices to study the positive quandle homology and cohomology groups with integer coefficients. As usual we will omit the coefficient group $G$ if $G=\mathbf{Z}$. The following lemma works out in detail the computation for the simplest nontrivial quandle $R_3$. \begin{lemma} $H_{Q+}^2(R_3)\cong \mathbf{Z}_3.$ \end{lemma} \begin{proof} Recall that $R_3=\{0, 1, 2\}$ with quandle operation $i\ast j=2j-i$ $($mod 3$)$. Choose a positive quandle 2-cocycle $\phi\in Z^2_{Q+}(R_3)$. We assume that $\phi=\sum\limits_{i, j\in \{0, 1, 2\}}c_{(i, j)}\chi_{(i, j)}$, here $\chi_{(i, j)}$ denotes the characteristic function \begin{center} $\chi_{(i, j)}(k, l)= \begin{cases} 1,& \text{if}\ (i, j)=(k, l);\\ 0,& \text{if}\ (i, j)\neq(k, l). \end{cases}$ \end{center} Recall that $\phi(i, i)=0$, i.e. $c_{(i, i)}=0$. Next we need to investigate the positive quandle 2-cocycle conditions \begin{center} $-\phi(j, k)-\phi(j, k)+\phi(i, k)+\phi(i\ast j, k)-\phi(i, j)-\phi(i\ast k, j\ast k)=0$ \end{center} for all triples $(i, j, k)$ from $\{0, 1, 2\}$. In total there are 12 equations in the $c_{(i, j)}$.
\begin{center} $\begin{cases} -2c_{(1, 0)}+c_{(2, 0)}-c_{(0, 1)}-c_{(0, 2)}=0 \\ -2c_{(2, 0)}+c_{(1, 0)}-c_{(0, 2)}-c_{(0, 1)}=0 \\ -2c_{(0, 1)}+c_{(2, 1)}-c_{(1, 0)}-c_{(1, 2)}=0 \\ -2c_{(2, 1)}+c_{(0, 1)}-c_{(1, 2)}-c_{(1, 0)}=0 \\ -2c_{(0, 2)}+c_{(1, 2)}-c_{(2, 0)}-c_{(2, 1)}=0 \\ -2c_{(1, 2)}+c_{(0, 2)}-c_{(2, 1)}-c_{(2, 0)}=0 \\ -2c_{(1, 2)}+c_{(0, 2)}-c_{(0, 1)}-c_{(1, 0)}=0 \\ -2c_{(2, 1)}+c_{(0, 1)}-c_{(0, 2)}-c_{(2, 0)}=0 \\ -2c_{(0, 2)}+c_{(1, 2)}-c_{(1, 0)}-c_{(0, 1)}=0 \\ -2c_{(2, 0)}+c_{(1, 0)}-c_{(1, 2)}-c_{(2, 1)}=0 \\ -2c_{(0, 1)}+c_{(2, 1)}-c_{(2, 0)}-c_{(0, 2)}=0 \\ -2c_{(1, 0)}+c_{(2, 0)}-c_{(2, 1)}-c_{(1, 2)}=0 \end{cases}$ \end{center} After simplifying the equations above we obtain \begin{center} $\begin{cases} c_{(0, 1)}=z\\ c_{(1, 0)}=-y-z\\ c_{(0, 2)}=y\\ c_{(2, 0)}=-y-z\\ c_{(1, 2)}=y\\ c_{(2, 1)}=z \end{cases}$ \end{center} Here we put $c_{(1, 2)}=y$ and $c_{(2, 1)}=z$. Hence the positive quandle 2-cocycle \begin{center} $\phi=y(\chi_{(0, 2)}+\chi_{(1, 2)}-\chi_{(1, 0)}-\chi_{(2, 0)})+z(\chi_{(0, 1)}+\chi_{(2, 1)}-\chi_{(1, 0)}-\chi_{(2, 0)})$. \end{center} On the other hand, we have \begin{center} $\begin{cases} \delta\chi_0=(\chi_{(0, 2)}+\chi_{(1, 2)}-\chi_{(1, 0)}-\chi_{(2, 0)})+(\chi_{(0, 1)}+\chi_{(2, 1)}-\chi_{(1, 0)}-\chi_{(2, 0)})\\ \delta\chi_1=(\chi_{(1, 0)}+\chi_{(2, 0)}-\chi_{(0, 1)}-\chi_{(2, 1)})+(\chi_{(0, 2)}+\chi_{(1, 2)}-\chi_{(0, 1)}-\chi_{(2, 1)})\\ \delta\chi_2=(\chi_{(0, 1)}+\chi_{(2, 1)}-\chi_{(0, 2)}-\chi_{(1, 2)})+(\chi_{(1, 0)}+\chi_{(2, 0)}-\chi_{(0, 2)}-\chi_{(1, 2)}). \end{cases}$ \end{center} Since \begin{center} $\phi=y(\delta\chi_0)+(z-y)(\chi_{(0, 1)}+\chi_{(2, 1)}-\chi_{(1, 0)}-\chi_{(2, 0)})$, \end{center} then \begin{center} $H_{Q+}^2(R_3)\cong\{\chi_{(0, 1)}+\chi_{(2, 1)}-\chi_{(1, 0)}-\chi_{(2, 0)}\mid \delta\chi_0, \delta\chi_1\}$ \end{center} From $\delta\chi_0=\delta\chi_1=0$ one can easily deduce that $3(\chi_{(0, 1)}+\chi_{(2, 1)}-\chi_{(1, 0)}-\chi_{(2, 0)})=0$. 
It follows that $H_{Q+}^2(R_3)\cong \mathbf{Z}_3.$ \end{proof} We remark that, in contrast, the second quandle cohomology group of $R_3$ is trivial: $H_Q^2(R_3; \mathbf{Z})\cong 0$ \cite{Car2003}. According to the definition $C_n^Q(X)=C_n^R(X)/C_n^D(X)$, there is a short exact sequence \begin{center} $0\rightarrow C_\ast^D(X)\rightarrow C_\ast^R(X)\rightarrow C_\ast^Q(X)\rightarrow0$ \end{center} of chain complexes; it follows that there is a long exact sequence of homology groups \begin{center} $\cdots\rightarrow H_n^D(X)\rightarrow H_n^R(X)\rightarrow H_n^Q(X)\rightarrow H_{n-1}^D(X)\rightarrow\cdots$ \end{center} In \cite{Car2001J}, it was conjectured that the short exact sequence of chain complexes above is split. Later R.A. Litherland and S. Nelson gave an affirmative answer to this conjecture in \cite{Lit2003}. The following theorem says that the splitting map defined by R.A. Litherland and S. Nelson still works in positive quandle homology theory. \begin{theorem} For a given quandle $X$, there exists a short exact sequence \begin{center} $0\rightarrow H_n^{D+}(X)\rightarrow H_n^{R+}(X)\rightarrow H_n^{Q+}(X)\rightarrow0$. \end{center} \end{theorem} \begin{proof} According to the definition of the positive homology groups there exists a short exact sequence \begin{center} $0\rightarrow C_\ast^{D+}(X)\xrightarrow{u_\ast} C_\ast^{R+}(X)\xrightarrow{v_\ast} C_\ast^{Q+}(X)\rightarrow0$. \end{center} It suffices to find a chain map $w_n: C_n^{R+}(X)\rightarrow C_n^{D+}(X)$ such that $w_n\circ u_n=id$. Here we use the splitting map $w_n(c)=c-\alpha_n(c)$ introduced by R.A. Litherland and S. Nelson in \cite{Lit2003}, where $c\in C_n^{R+}(X)$ and $\alpha_n$ is defined by $\alpha_n(a_1, \cdots, a_n)=(a_1, a_2-a_1, \cdots, a_n-a_{n-1})$ on $n$-tuples, extended linearly to $C_n^{R+}(X)$. The following two relations, which can also be found in \cite{Lit2003}, will be used frequently during the proof. Note that the notation we use here is a bit different from that in \cite{Lit2003}.
\begin{itemize} \item $\partial^+(a_1, \cdots, a_{n+1})=(\partial^+(a_1, \cdots, a_n), a_{n+1})+(-1)^{n+1}((a_1, \cdots, a_n)+(a_1, \cdots, a_n)\ast a_{n+1})$, here the notation $(a_1, \cdots, a_n)\ast a_{n+1}$ denotes $(a_1\ast a_{n+1}, \cdots, a_{n}\ast a_{n+1})$. \item $\alpha_{n+1}(a_1, \cdots, a_{n+1})=(\alpha_{n}(a_1, \cdots, a_n), a_{n+1})-(\alpha_n(a_1, \cdots, a_n), a_n)$. Generally, we write \begin{center} $\alpha_{n+1}(c, a_{n+1})=(\alpha_n(c), a_{n+1})-(\alpha_n(c), l(c))$, \end{center} here $c\in C_n^{R+}(X)$ and $l(c)\in C_1^{R+}(X)$. In particular $l(a_1, \cdots, a_n)=a_n$. \end{itemize} First we show that $c-\alpha_n(c)\in C_n^{D+}(X)$ and $w_n\circ u_n=id$. In order to prove $c-\alpha_n(c)\in C_n^{D+}(X)$ it is sufficient to consider the case $c=(a_1, \cdots, a_n)\in C_n^{R+}(X)$. Note that $a_1-\alpha_1(a_1)=a_1-a_1=0\in C_1^{D+}(X)$ and $(a_1, a_2)-\alpha_2(a_1, a_2)=(a_1, a_2)-(a_1, a_2) +(a_1, a_1)=(a_1, a_1)\in C_2^{D+}(X)$. Suppose $c-\alpha_n(c)\in C_n^{D+}(X)$ for some $n$, and consider \begin{flalign*} &(a_1, \cdots, a_{n+1})-\alpha_{n+1}(a_1, \cdots, a_{n+1})&\\ =&(a_1, \cdots, a_{n+1})-(\alpha_n(a_1, \cdots, a_n), a_{n+1})+(\alpha_n(a_1, \cdots, a_n), a_n)\\ =&(a_1, \cdots, a_{n+1})-(\alpha_n(a_1, \cdots, a_n), a_{n+1})-(a_1, \cdots, a_n, a_n)+(\alpha_n(a_1, \cdots, a_n), a_n)+(a_1, \cdots, a_n, a_n)\\ =&((a_1, \cdots, a_n)-\alpha_n(a_1, \cdots, a_n), a_{n+1})-((a_1, \cdots, a_n)-\alpha_n(a_1, \cdots, a_n), a_n)+(a_1, \cdots, a_n, a_n)\\ \in& C_{n+1}^{D+}(X). \end{flalign*} In order to show that $w_n\circ u_n=id$, choose $c=(a_1, \cdots, a_i, a_{i+1}, \cdots, a_n)\in C_n^{D+}(X)$, where $a_i=a_{i+1}$; it suffices to prove that $\alpha_n(c)=0$. In fact \begin{center} $\alpha_n(c)=(a_1, a_2-a_1, \cdots, a_{i+1}-a_i, \cdots, a_n-a_{n-1})=0$. \end{center} Next we show that $w_n: C_n^{R+}(X)\rightarrow C_n^{D+}(X)$ is a chain map.
We need the two equalities below $(n\geq2)$: \begin{flalign*} \alpha_n(d_1(a_1, \cdots, a_n), a_n)=&\alpha_n(-(a_2, \cdots, a_n, a_n)+\cdots+(-1)^n(a_1, \cdots, a_n))&\\ =&(-1)^n\alpha_n(a_1, \cdots, a_n) \end{flalign*} \begin{flalign*} \alpha_n(d_2(a_1, \cdots, a_n), a_n)=&\alpha_n(\sum\limits_{i=1}^n(-1)^i(a_1\ast a_i, \cdots, a_{i-1}\ast a_i, a_{i+1}, \cdots, a_n, a_n))&\\ =&(-1)^n\alpha_n((a_1, \cdots, a_n)\ast a_n) \end{flalign*} Now we show that $\partial_{n+1}^+\alpha_{n+1}-\alpha_n\partial_{n+1}^+=0$. First note that \begin{center} $\partial_2^+\alpha_2(a_1, a_2)=-(a_2)-(a_2)+(a_1)+(a_1\ast a_2)=\alpha_1\partial_2^+(a_1, a_2)$. \end{center} Assume $\partial_{n+1}^+\alpha_{n+1}-\alpha_n\partial_{n+1}^+=0$ holds for some $n\geq 2$, one computes \begin{flalign*} &\partial_{n+1}^+\alpha_{n+1}(a_1, \cdots, a_{n+1})-\alpha_n\partial_{n+1}^+(a_1, \cdots, a_{n+1})&\\ =&\partial_{n+1}^+((\alpha_n(a_1, \cdots, a_n), a_{n+1})-(\alpha_n(a_1, \cdots, a_n), a_n))\\ &-\alpha_n((\partial_n^+(a_1, \cdots, a_n), a_{n+1})+(-1)^{n+1}(a_1, \cdots, a_n)+(-1)^{n+1}(a_1, \cdots, a_n)\ast a_{n+1})\\ =&(\partial_n^+\alpha_n(a_1, \cdots, a_n), a_{n+1})+(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)+(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\ast a_{n+1}\\ &-(\partial_n^+\alpha_n(a_1, \cdots, a_n), a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\ast a_n\\ &-(\alpha_{n-1}\partial_n^+(a_1, \cdots, a_n), a_{n+1})-(\alpha_{n-1}\partial_n^+(a_1, \cdots, a_n), l(\partial_n^+(a_1, \cdots, a_n)))\\ &-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)-(-1)^{n+1}\alpha_n((a_1, \cdots, a_n)\ast a_{n+1})\\ =&-(\alpha_{n-1}\partial_n^+(a_1, \cdots, a_n), a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\\ &-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\ast a_n-(\alpha_{n-1}\partial_n^+(a_1, \cdots, a_n), l(\partial_n^+(a_1, \cdots, a_n)))\\ =&-\alpha_n(\partial_n^+(a_1, \cdots, a_n), a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\ast a_n\\ =&-\alpha_n((d_1+d_2)(a_1, \cdots, a_n), 
a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\ast a_n\\ =&-(-1)^n\alpha_n(a_1, \cdots, a_n)-(-1)^n\alpha_n(a_1, \cdots, a_n)\ast a_n\\ &-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)-(-1)^{n+1}\alpha_n(a_1, \cdots, a_n)\ast a_n\\ =&0 \end{flalign*} \end{proof} Now we investigate $H_1^{Q+}(X)$ and $H_2^{Q+}(X)$ for a general quandle $X$. Similar results for quandle homology groups can be found in \cite{Car2001J} and \cite{Kam2002}. Assume $X=\{a_1, \cdots, a_n\}$; according to the definitions of $d_1$ and $d_2$ we have $Z_1^{Q+}(X)=C_1^{Q+}(X)=C_1^{R+}(X)$, i.e. the free abelian group generated by $\{a_1, \cdots, a_n\}$. Since $\partial_2^+(a, b)=-b-b+a+a\ast b$, we conclude that \begin{center} $H_1^{Q+}(X)\cong\{a_1, \cdots, a_n\mid a_i\ast a_j=2a_j-a_i\}$. \end{center} \begin{proposition} $H_1^{Q+}(T_n)\cong\mathbf{Z}\bigoplus(\bigoplus\limits_{n-1}\mathbf{Z}_2)$ and $H_1^{Q+}(R_n)\cong\mathbf{Z}\bigoplus \mathbf{Z}_n$. \end{proposition} \begin{proof} According to the analysis above, we have \begin{center} $H_1^{Q+}(T_n)\cong\{a_1, \cdots, a_n\mid 2a_i=2a_j\}\cong\{a_1, a_2-a_1, \cdots, a_n-a_1\mid 2(a_i-a_1)=0\}\cong\mathbf{Z}\bigoplus(\bigoplus\limits_{n-1}\mathbf{Z}_2)$. \end{center} For the dihedral quandle $R_n=\{a_0, \cdots, a_{n-1}\}$ with quandle operations $a_i\ast a_j=a_{2j-i\ (mod\ n)}$, we have \begin{center} $H_1^{Q+}(R_n)\cong\{a_0, \cdots, a_{n-1}\mid a_{2j-i\ (mod\ n)}=2a_j-a_i\}\cong\{a_0, a_1-a_0\mid n(a_1-a_0)=0\}\cong\mathbf{Z}\bigoplus \mathbf{Z}_n$. \end{center} \end{proof} Next we study the second positive degeneration homology $H_2^{D+}(X)$. Given a quandle $X$ and $a, b\in X$, we define $a\sim b$ if there exist elements $a_1, \cdots, a_n$ of $X$ such that $b=(\cdots ((a\ast^{\varepsilon_1} a_1)\ast^{\varepsilon_2}a_2)\cdots )\ast^{\varepsilon_n} a_n$, where $\varepsilon_i\in \{\pm1\}$. The \emph{orbits} of $X$ are defined to be the set of equivalence classes of $X$ by $\sim$.
We denote it by Orb$(X)$, and as usual the number of elements in Orb$(X)$ is denoted by $|$Orb$(X)|$. Since $\partial^+(a, a)=-a-a+a+a=0$, and \begin{flalign*} &\partial^+(a, a, b)=-2(a, b)+(a, b)+(a, b)-(a, a)-(a\ast b, a\ast b)=-(a, a)-(a\ast b, a\ast b),\\ &\partial^+(a, b, b)=-2(b, b)+(a, b)+(a\ast b, b)-(a, b)-(a\ast b, b)=-2(b, b). \end{flalign*} Combining with Theorem 4.3, it follows that \begin{proposition} $H_2^{D+}(X)\cong\bigoplus\limits_{|Orb(X)|}\mathbf{Z}_2$ and $H_2^{R+}(X)\cong H_2^{Q+}(X)\bigoplus(\bigoplus\limits_{|Orb(X)|}\mathbf{Z}_2)$. \end{proposition} At the end of this section let us turn to the trivial quandle $T_n$. In quandle homology theory, the boundary operators of $T_n$ are trivial, therefore $H_n^Q(T_n)\cong C_n^Q(T_n)$. However in positive quandle homology theory, the boundary operators are not trivial in general. In fact we have the following proposition. \begin{proposition} $H_i^{Q+}(T_n)\cong \begin{cases} \mathbf{Z}\bigoplus(\bigoplus\limits_{n-1}\mathbf{Z}_2),& \ i=1;\\ \bigoplus\limits_{(n-1)^i}\mathbf{Z}_2,& \ i\geq2, \end{cases}$ and $H^i_{Q+}(T_n)\cong \begin{cases} \mathbf{Z},& \ i=1;\\ \bigoplus\limits_{(n-1)^{i-1}}\mathbf{Z}_2,& \ i\geq2. \end{cases}$ \end{proposition} \begin{proof} It suffices to compute $H_i^{Q+}(T_n)$, since $H_{Q+}^i(T_n)$ can be deduced from the universal coefficient theorem. For the case $i=1$, the result follows from Proposition 4.4. Now we show that $H_2^{Q+}(T_n)\cong\bigoplus\limits_{(n-1)^2}\mathbf{Z}_2$; recall that $T_n=\{a_1, \cdots, a_n\}$ with quandle operations $a_i\ast a_j=a_i$. Notice that $\partial_2^+(a_i, a_j)=-2a_j+a_i+a_i\ast a_j=2(a_i-a_j)$, therefore any element $\psi\in Z_2^{Q+}(T_n)$ can be written as $\psi=\sum\limits_{i=1}^nc_i\psi_i$, where $\psi_i=(a_{i_1}, a_{i_2})+\cdots+(a_{i_{k-1}}, a_{i_k})+(a_{i_k}, a_{i_1})$.
It follows that $Z_2^{Q+}(T_n)$ can be generated by \begin{center} $\{(a_i, a_j)+(a_j, a_i), (a_1, a_i)+(a_i, a_j)+(a_j, a_1)\}$ \quad $(1\leq i<j\leq n)$, \end{center} which is equivalent to \begin{center} $\{(a_1, a_i)+(a_i, a_j)+(a_j, a_1)\}$ \quad $(2\leq i\leq j\leq n)$. \end{center} On the other hand, since \begin{center} $\partial^+(a_i, a_j, a_k)=2(-(a_j, a_k)+(a_i, a_k)-(a_i, a_j))$ and $\partial^+(a_i, a_j, a_i)=2(-(a_j, a_i)-(a_i, a_j)),$ \end{center} we have \begin{flalign*} H_2^{Q+}(T_n)\cong&\{(a_1, a_i)+(a_i, a_j)+(a_j, a_1)\mid 2((a_i, a_j)+(a_j, a_i)), 2((a_i, a_j)+(a_j, a_k)-(a_i, a_k))\}&\\ \cong&\{(a_1, a_i)+(a_i, a_j)+(a_j, a_1)\mid 2((a_1, a_i)+(a_i, a_j)+(a_j, a_1))\}\\ \cong&\bigoplus\limits_{(n-1)^2}\mathbf{Z}_2 \end{flalign*} Similarly, since $\partial_i^+=2d_1$ for $C_i^Q(T_n)$, it is not difficult to observe that $($here $2\leq j_k\leq n)$ \begin{flalign*} H_i^{Q+}(T_n)\cong&\{\frac{1}{2}(\partial_{i+1}^+(a_1, a_{j_1}, \cdots, a_{j_i}))\mid \partial_{i+1}^+(a_1, a_{j_1}, \cdots, a_{j_i})\}&\\ \cong&\bigoplus\limits_{(n-1)^i}\mathbf{Z}_2 \end{flalign*} \end{proof} \section{Knot invariants derived from positive quandle cocycles} \subsection{Positive quandle cocycle invariants for knots} One of the most important applications of quandle cohomology groups is that one can define knot invariants via quandle 2-cocycles and knotted surface invariants via quandle 3-cocycles. In this section we will show that positive quandle 2-cocycles can also be used to define knot invariants, in a manner similar to the quandle cocycle invariants introduced in \cite{Car2003}. Let $K$ be an oriented knot diagram and $X$ a finite quandle. Assume $G$ is an abelian group and $\phi\in H_{Q+}^2(X; G)$ is a positive quandle 2-cocycle. It is well-known that all regions of $R^2-K$ can be colored with white and black in checkerboard fashion such that the unbounded region gets the white color.
For each crossing point $\tau$ we can associate a sign $\epsilon(\tau)$ as in the figure below. \begin{center} \includegraphics{figure2.eps} \centerline{\small Figure 2: The signs of crossings\quad} \end{center} Let $\rho$ be a proper coloring of $K$ by $X$, i.e. a homomorphism from the fundamental quandle of $K$ to $X$. In other words, each arc of the diagram is labelled with an element of $X$. For each crossing point $\tau$, assume the over-arc and under-arcs at $\tau$ are colored by $b$ and $a, a\ast b$ respectively; see Figure 1. We consider a weight, which is an element of $G$, given by \begin{center} $W_{\phi}(\tau, \rho)=\phi(a, b)^{\epsilon(\tau)}$, \end{center} where $\epsilon(\tau)=\pm1$ according to Figure 2. Then we define the \emph{positive quandle 2-cocycle invariant} of $K$ to be \begin{center} $\Phi_{\phi}(K)=\sum\limits_{\rho}\prod\limits_{\tau}W_{\phi}(\tau, \rho)\in \mathbf{Z}G$, \end{center} where $\rho$ runs over all proper colorings of $K$ by $X$ and $\tau$ runs over all crossing points of the diagram. Note that if the sign of the crossing $\epsilon(\tau)$ is replaced by the writhe of $\tau$, one obtains the state-sum $($associated with a quandle 2-cocycle $\phi)$ knot invariants defined by J.S. Carter et al. in \cite{Car2003}. \begin{theorem} The positive quandle 2-cocycle invariant $\Phi_{\phi}(K)$ is preserved under Reidemeister moves. If a pair of positive quandle 2-cocycles $\phi_1$ and $\phi_2$ are cohomologous, then $\Phi_{\phi_1}(K)=\Phi_{\phi_2}(K)$. In particular if $\phi$ is a coboundary, we have $\Phi_{\phi}(K)=\sum\limits_{Col_X(K)}1$. \end{theorem} \begin{proof} First we prove that $\Phi_{\phi}(K)$ is invariant under Reidemeister moves. In \cite{Pol2010}, M. Polyak proved that all the classical Reidemeister moves can be realized by a generating set of four Reidemeister moves: $\{\Omega_{1a}, \Omega_{1b}, \Omega_{2a}, \Omega_{3a}\}$; see Figure 3.
Hence it suffices to show that $\Phi_{\phi}(K)$ is invariant under $\Omega_{1a}, \Omega_{1b}, \Omega_{2a}$ and $\Omega_{3a}$. \begin{center} \includegraphics{figure3.eps} \centerline{\small Figure 3: Reidemeister moves\quad} \end{center} \begin{itemize} \item $\Omega_{1a}$ and $\Omega_{1b}$: the weight assigned to the crossing point in $\Omega_{1a}$ or $\Omega_{1b}$ is of the form $\phi(a, a)^{\pm1}$; according to the definition of a positive quandle cocycle we have $\phi(a, a)^{\pm1}=1$. \item $\Omega_{2a}$: assume the two arcs on the left side are colored by $a, b$ respectively; then the product of the weights of the two crossing points on the right side is $\phi(b, a)\phi(b, a)^{-1}=1$. \item $\Omega_{3a}$: without loss of generality, we assume the top region on both sides is colored white. Under this assumption the signs of the crossings are shown in the figure below. \begin{center} \includegraphics{figure4.eps} \centerline{\small Figure 4: Proper colorings under $\Omega_{3a}$\quad} \end{center} In order to show that $\Phi_{\phi}(K)$ is invariant under $\Omega_{3a}$, it is sufficient to prove that \begin{center} $\phi(x, y)^{-1}\phi(z, y)\phi((z\ast y)\ast^{-1}(x\ast y), x\ast y)^{-1}=\phi(z\ast^{-1}x, y)^{-1}\phi(z\ast^{-1}x, x)\phi(x, y)$. \end{center} Note that $(z\ast y)\ast^{-1}(x\ast y)=(z\ast^{-1}x)\ast y$. Put $(a, b, c)=(z\ast^{-1}x, x, y)$ and compare the equation with the positive quandle 2-cocycle condition (note that the equation is written in multiplicative notation here); the result follows. \end{itemize} In order to finish the proof it suffices to show that $\Phi_{\phi}(K)=\sum\limits_{Col_X(K)}1$ if $\phi$ is a coboundary. Assume $\phi=\delta_+^1\varphi$ for some $\varphi\in C_{Q+}^1(X; G)$; then \begin{center} $\phi(a, b)=\delta_+^1\varphi(a, b)=\varphi(\partial_2^{+}(a, b))=\varphi(-2(b)+(a)+(a\ast b))=\varphi(b)^{-2}\varphi(a)\varphi(a\ast b)\in G$.
\end{center} First let us consider the simplest case: assume the knot diagram is alternating, so that all crossings have the same sign. Without loss of generality all the crossings are assumed to be positive. In this case, for a given arc $\lambda$ of the knot diagram, there is exactly one crossing at which $\lambda$ is the over-arc. On the other hand, this arc is the under-arc at two crossings. For a fixed proper coloring $\rho$, suppose the labelled element of $\lambda$ is $a\in X$; then the contribution of $\lambda$ to $\prod\limits_\tau W_\phi(\tau, \rho)$ comes from the three crossing points in which $\lambda$ is involved, which equals $\varphi(a)^{-2}\varphi(a)\varphi(a)=1$. It follows that $\prod\limits_\tau W_\phi(\tau, \rho)=1$, hence $\Phi_{\phi}(K)=\sum\limits_{Col_X(K)}1$. The proof of the non-alternating case is analogous to the alternating case. In fact it suffices to notice that if an arc $\lambda$ is the over-arc at several crossings, then the signs of these crossings are alternating. It is not difficult to see that the contribution of $\lambda$ to $\prod\limits_\tau W_\phi(\tau, \rho)$ is still trivial. The proof is finished. \end{proof} Recall that in quandle cohomology theory $H_Q^2(R_3)=0$, which means that the quandle 2-cocycle invariant associated with $R_3$ cannot offer any more information than the Fox 3-colorings. In fact it was pointed out in \cite{Car2003} that all knots have trivial quandle 2-cocycle invariants with any dihedral quandle $R_n$ and any quandle 2-cocycle. We remark that although quandle 2-cocycle invariants of $R_n$ are trivial, some quandle 3-cocycle of $H_Q^3(R_3; \mathbf{Z}_3)$ can be used to distinguish the trefoil from its mirror image \cite{Rou2000}. \begin{proposition} All knots have trivial positive quandle 2-cocycle invariants with any dihedral quandle $R_n$, associated with any positive quandle 2-cocycle $\phi\in H_{Q+}^2(R_n)$.
\end{proposition} \begin{proof} If $n$ is even, according to the coloring rule at each crossing point, all the elements assigned in a colored knot diagram have the same parity. If all assigned elements are even, then by replacing each assigned element $i$ with $\frac{i}{2}$ we obtain a proper coloring with $R_{\frac{n}{2}}$. Consider the element $\phi'$ of $H_{Q+}^2(R_{\frac{n}{2}})$ defined by $\phi'(i, j)=\phi(2i, 2j)$; then $\Phi_{\phi'}(K)$ with $R_{\frac{n}{2}}$ is nontrivial if $\Phi_{\phi}(K)$ with $R_n$ is nontrivial. If all assigned elements are odd, then one obtains a proper coloring with $R_{\frac{n}{2}}$ by replacing each labelled element $i$ with $\frac{i-1}{2}$. Similarly, if $\Phi_{\phi}(K)$ with $R_n$ is nontrivial then $\Phi_{\phi''}(K)$ with $R_{\frac{n}{2}}$ is also nontrivial, where $\phi''(i, j)=\phi(2i+1, 2j+1)$. Therefore it is sufficient to consider the case of odd $n$. If $n$ is odd, it suffices to prove that the free part of $H_{Q+}^2(R_n)$ vanishes. This follows from a general fact: $\Phi_{\phi}(K)$ is trivial if $\phi$ has finite order in $H_{Q+}^2(X)$. In fact, assume $k\phi=0\in H_{Q+}^2(X)$; then $\prod\limits_\tau W_{k\phi}(\tau, \rho)=0$, in other words $\prod\limits_\tau k\phi(a, b)^{\epsilon(\tau)}=k(\prod\limits_\tau\phi(a, b)^{\epsilon(\tau)})=0$. Since we are working with the coefficient group $\mathbf{Z}$, it follows that $\prod\limits_\tau\phi(a, b)^{\epsilon(\tau)}=0$. Now suppose the free part of $H_{Q+}^2(R_n)$ were nonzero; then the free part of $H^{Q+}_2(R_n)$ would be nonzero as well. Replacing the coefficients $\mathbf{Z}$ by $\mathbf{Z}_2$, one concludes that $H^{Q+}_2(R_n; \mathbf{Z}_2)$ contains $\mathbf{Z}_2$ as a summand. By Proposition 3.3, $H_2^Q(R_n; \mathbf{Z}_2)$ would then contain $\mathbf{Z}_2$ as a direct summand. However, since $H_2^Q(R_n; \mathbf{Z})=0$ \cite{Car2001J} and $H_1^Q(R_n; \mathbf{Z})=\mathbf{Z}$, the universal coefficient theorem tells us that $H_2^Q(R_n; \mathbf{Z}_2)=0$, a contradiction. The proof is finished.
\end{proof} Now we give a non-trivial example of a positive quandle 2-cocycle invariant. With the matrix of a finite quandle introduced in \cite{Ho2005}, the quandle $S_4$ contains four elements $\{0, 1, 2, 3\}$ with quandle operations \begin{center} $\begin{bmatrix} 0 & 2 & 3 & 1 \\ 3 & 1 & 0 & 2 \\ 1 & 3 & 2 & 0 \\ 2 & 0 & 1 & 3 \\ \end{bmatrix}$, \end{center} where the entry in row $i$ column $j$ denotes $(i-1)\ast(j-1)$ $(1\leq i, j\leq4)$. Choose a positive quandle 2-cocycle \begin{center} $\phi=\chi_{(0, 1)}+\chi_{(1, 0)}+\chi_{(2, 0)}+\chi_{(0, 2)}+\chi_{(1, 2)}+\chi_{(2, 1)}\in H_{Q+}^2(S_4; \mathbf{Z}_2)$; \end{center} it was proved in \cite{Car2003} that $\Phi_{\phi}(3_1)=\Phi_{\phi}(4_1)=\sum\limits_40+\sum\limits_{12}1$. We end this subsection with some remarks on the positive quandle 2-cocycle invariants with trivial quandles. First note that for $T_n$ and for any knot diagram there exist exactly $n$ trivial proper colorings. By the definition of the $\pm$ quandle homology groups we cannot obtain any new information from the $\pm$ quandle cocycle invariants. However, it was pointed out in \cite{Car2003} that for any $\phi\in H_Q^2(T_n)$ and any link $L$, the quandle 2-cocycle invariant $\Phi_\phi(L)$ is a function of pairwise linking numbers. For example $\phi=\chi_{(a_1, a_2)}\in H_Q^2(T_2)$ can be used to distinguish the Hopf link from the trivial link. Since $H_{Q+}^2(T_2)\cong \mathbf{Z}_2$ with generator $\phi=\chi_{(a_1, a_2)}-\chi_{(a_2, a_1)}$, one finds that $\Phi_\phi(L)$ is trivial for any link $L$. In order to obtain some information from the link, we can work with coefficients in $\mathbf{Z}_2$. In this way we can obtain the parity information of the pairwise linking numbers. For example, a link $L=K_1\cup \cdots \cup K_m$ is a proper link, i.e. $\sum\limits_{j\neq i}lk(K_i, K_j)=0$ $($mod 2$)$ for any $1\leq i\leq m$, if and only if $\sum\limits_{\rho_{1, m-1}}\prod\limits_{\tau}W_{\phi=\chi_{(a_1, a_2)}}(\tau, \rho_{1, m-1})=\sum\limits_m0$.
Here $\mathbf{Z}_2=\{0, 1\}$ and $\rho_{1, m-1}$ denotes the set of proper colorings which assign one component with $a_1$ and the others with $a_2$. This result mainly follows from the fact that $H_{Q+}^2(X; \mathbf{Z}_2)\cong H_Q^2(X; \mathbf{Z}_2)$. From this viewpoint, for $T_n$, it seems that the positive quandle 2-cocycle invariant is a sort of $\mathbf{Z}_2$-version of the quandle 2-cocycle invariant. Later, in the final section, we will show that this is not the case. \subsection{Positive quandle cocycle invariants for knotted surfaces} In this subsection, with a given positive quandle 3-cocycle we will define a state-sum invariant for knotted surfaces in $R^4$. First we briefly review the background of knotted surfaces in $R^4$; the readers are referred to \cite{Car1998} and \cite{Car2004} for more details. By a \emph{knotted surface} we mean an embedding $f$ of a closed oriented surface $F$ into $R^4$. Sometimes we also call the image $f(F)$ a knotted surface and denote it by $F$ for convenience. In particular when $F=S^2$ we call it a \emph{2-knot}. Two knotted surfaces are \emph{equivalent} if there exists an orientation preserving automorphism of $R^4$ which takes one knotted surface to the other. As with knot diagrams in knot theory, we usually study knotted surfaces via knotted surface diagrams. Let $p: R^4\rightarrow R^3$ be the orthogonal projection from $R^4$ onto $R^3$; we may deform $f(F)$ slightly such that $p\circ f(F)$ is in general position, and then $p\circ f(F)$ is called a \emph{knotted surface diagram}. Note that a knotted surface diagram is not just an immersed surface in $R^3$. First, there exist double points, triple points and branch points in $p\circ f(F)$. However, it is well-known that $f(F)$ can be isotoped into a new position such that the projection contains no branch points \cite{Car1992,Gil1982}.
Second, just as a knot diagram can be regarded as a 4-valent planar graph with over-under information at each vertex, a knotted surface diagram also contains the information of the over-sheet and under-sheet along the double curves. In other words, a knotted surface diagram is obtained from the projection by removing small open neighborhoods of the under-sheets along double curves. Similar to the definition of the knot invariant Col$_X(K)$, we can define an integer-valued knotted surface invariant with a given quandle $X$. The main idea is to use the elements of $X$ to color the regions of the broken surface diagram according to some rules at double curves. See the figure below, where $\overrightarrow{n}$ denotes the normal vector of the knotted surface diagram. \begin{center} \includegraphics{figure5.eps} \centerline{\small Figure 5: Coloring rules at a double curve\quad} \end{center} It is not difficult to check that the rule above is well-defined at each triple point \cite{Car2003}. Recall that different knotted surface diagrams represent the same knotted surface if and only if one of them can be obtained from the other by a finite sequence of Roseman moves \cite{Ros1998}. Similar to the proper coloring of knot diagrams, the number of colorings satisfying the condition above is invariant under the Roseman moves, hence is a knotted surface invariant; we denote it by Col$_X(F)$. The main idea of defining a knotted surface invariant with a positive quandle 3-cocycle is analogous to the definition of the quandle 3-cocycle invariant proposed in \cite{Car2003}. As a generalization of the counting invariant Col$_X(F)$, we need to assign an invariant to each colored knotted surface diagram and then take the sum of them. The role of a triple point in a knotted surface diagram is analogous to that of a crossing point in a knot diagram. Therefore this invariant can be obtained by assigning a weight to each triple point of the colored diagram.
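The well-definedness of the coloring rule at triple points reduces to the quandle axioms (idempotency, bijectivity of the right translations, and right self-distributivity). As a quick machine check, here is a small Python sketch (our own illustration, not part of the paper) verifying these axioms for the dihedral quandles $R_n$ and for the quandle $S_4$ given in the previous subsection.

```python
from itertools import product

# Dihedral quandle R_n: i * j = 2j - i (mod n).
def dihedral(n):
    return [[(2 * j - i) % n for j in range(n)] for i in range(n)]

# The quandle S_4 from the matrix given earlier (entry [i][j] = i * j).
S4 = [[0, 2, 3, 1],
      [3, 1, 0, 2],
      [1, 3, 2, 0],
      [2, 0, 1, 3]]

def is_quandle(op):
    n = len(op)
    idem = all(op[a][a] == a for a in range(n))                      # a * a = a
    bij = all(sorted(op[a][b] for a in range(n)) == list(range(n))
              for b in range(n))                                     # (.) * b bijective
    dist = all(op[op[a][b]][c] == op[op[a][c]][op[b][c]]
               for a, b, c in product(range(n), repeat=3))           # (a*b)*c = (a*c)*(b*c)
    return idem and bij and dist

print(is_quandle(dihedral(3)), is_quandle(S4))  # True True
```

Right self-distributivity is exactly the compatibility condition at a triple point, so any operation table passing this check yields consistent coloring rules along double curves.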
Let $F$ be a knotted surface diagram and $X$ a finite quandle. Assume $G$ is an abelian group and $\theta\in H_{Q+}^3(X; G)$ is a positive quandle 3-cocycle. Consider the shadow of the diagram $F$, which is the immersed surface in $R^3$ obtained without removing neighborhoods along double curves. The shadow separates $R^3$ into several regions. It is not difficult to observe that we can color these regions white and black in 3-dimensional checkerboard fashion, i.e. adjacent regions are colored with different colors. We remark that the assumption that the surface is orientable is essentially used here. As before, we assume that the unique unbounded region is colored white. For each triple point $\tau$ we can associate a sign $\epsilon(\tau)$ according to the figure below $($W=white, B=black$)$. \begin{center} \includegraphics{figure6.eps} \centerline{\small Figure 6: Signs of triple points\quad} \end{center} Let $\rho$ denote a coloring of $F$ by $X$. Assume $\tau$ is a triple point of $F$, and that the bottom, middle and top sheets around the octant from which all normal vectors point outwards are colored by $a, b, c$ respectively; see the figure above. Note that the sign of the triple point used here does not depend on the orientation of the surface. We associate a weight at the triple point $\tau$ as \begin{center} $W_{\theta}(\tau, \rho)=\theta(a, b, c)^{\epsilon(\tau)}\in G$. \end{center} Now we can define the \emph{positive quandle 3-cocycle invariant} of a knotted surface $F$ associated with $\theta$ to be \begin{center} $\Theta_{\theta}(F)=\sum\limits_{\rho}\prod\limits_{\tau}W_{\theta}(\tau, \rho)\in \mathbf{Z}G$, \end{center} where $\rho$ runs over all colorings of $F$ by $X$ and $\tau$ runs over all triple points of the diagram. We remark that the sign of a triple point has another definition. Consider the normal vectors of the top, middle and bottom sheets: if the orientation in this order matches the orientation of $R^3$, we say this triple point is positive.
Otherwise it is negative. Replacing $\epsilon(\tau)$ with the sign of the triple point defined in this way, one obtains the state-sum invariants introduced in \cite{Car2003}. \begin{theorem} The positive quandle 3-cocycle invariant $\Theta_{\theta}(F)$ is preserved under Roseman moves. If a pair of positive quandle 3-cocycles $\theta_1$ and $\theta_2$ are cohomologous, then $\Theta_{\theta_1}(F)=\Theta_{\theta_2}(F)$. In particular if $\theta$ is a coboundary, we have $\Theta_{\theta}(F)=\sum\limits_{Col_X(F)}1$. \end{theorem} \begin{proof} We summarize the proof. There are only three types of Roseman moves that involve triple points, see \cite{Car2003}. The first one creates or cancels a pair of triple points with opposite signs, and the second one moves a branch point through a sheet. The contributions of the two triple points in the first case cancel out, and the contribution of the triple point in the second case is trivial according to the definition of the positive quandle cohomology groups. Thus it suffices to prove that $\Theta_{\theta}(F)$ is invariant under the tetrahedral move. See the figures below. \begin{center} \includegraphics{figure7.eps} \centerline{\small Figure 7: Left hand side of tetrahedral move\quad} \end{center} \begin{center} \includegraphics{figure8.eps} \centerline{\small Figure 8: Right hand side of tetrahedral move\quad} \end{center} Here we use the movie description of knotted surfaces, see \cite{Car1998} for more details. For example, figure 7 contains five slices of a knotted surface according to a fixed height function; each slice consists of four sheets which are cross sections of four planes, and a pair of adjacent slices depicts a triple point. Figures 7 and 8 correspond to the left hand side and the right hand side of the tetrahedral move. Without loss of generality, suppose the leftmost region of each slice is colored white, and the other regions are colored in checkerboard fashion.
The left hand side of the tetrahedral move contributes $\theta(a, b, c)\theta(a\ast c, b\ast c, d)^{-1}\theta(a, c, d)\theta(b, c, d)^{-1}$ to $\Theta_{\theta}(F)$, and the right hand side has the contribution $\theta(b, c, d)\theta(a\ast b, c, d)^{-1}\theta(a, b, d)\theta(a\ast d, b\ast d, c\ast d)^{-1}$. In order to prove that $\Theta_{\theta}(F)$ is invariant under the tetrahedral move, it suffices to show that \begin{center} $\theta(a, b, c)\theta(a\ast c, b\ast c, d)^{-1}\theta(a, c, d)\theta(b, c, d)^{-1}\theta(b, c, d)^{-1}\theta(a\ast b, c, d)\theta(a, b, d)^{-1}\theta(a\ast d, b\ast d, c\ast d)=1$. \end{center} Comparing the equation above with the positive quandle 3-cocycle condition (note that the equation is written in multiplicative notation here), we find that the condition $\theta\in H_{Q+}^3(X; G)$ guarantees the invariance of $\Theta_{\theta}(F)$. Here we only treat one case of the tetrahedral move; for the other possible tetrahedral moves the invariance of $\Theta_{\theta}(F)$ can be proved in the same way. Next we show that $\Theta_{\theta}(F)=\sum\limits_{Col_X(F)}1$ if $\theta$ is a coboundary. As we mentioned before, we can choose a knotted surface diagram whose shadow contains no branch points. Its double point set is a 6-valent graph and each vertex corresponds to a triple point. Fix a coloring $\rho$. According to the assumption that $\theta$ is a coboundary, i.e. $\theta=\delta_+^2\phi$ for some $\phi\in H_{Q+}^2(X; G)$, we have \begin{center} $\theta(a, b, c)=\delta_+^2\phi(a, b, c)=\phi(\partial_3^+(a, b, c))=\phi(b, c)^{-2}\phi(a, c)\phi(a\ast b, c)\phi(a, b)^{-1}\phi(a\ast c, b\ast c)^{-1}\in G$. \end{center} Consider the triple point $\tau$ on the left side of figure 6, which has weight $W_\theta(\tau, \rho)=\theta(a, b, c)=\phi(b, c)^{-2}\phi(a, c)\phi(a\ast b, c)\phi(a, b)^{-1}\phi(a\ast c, b\ast c)^{-1}$.
There are six edges adjacent to the triple point $\tau$: two of them come from the intersection of the top sheet and the middle sheet, two of them come from the intersection of the middle sheet and the bottom sheet, and the rest come from the intersection of the bottom sheet and the top sheet. We use $tm_1(\tau), tm_2(\tau), mb_1(\tau), mb_2(\tau), bt_1(\tau), bt_2(\tau)$ to denote these edges, where $tm_i(\tau)$ $(i=1, 2)$ denote the two edges belonging to the intersection of the top sheet and the middle sheet, $mb_i(\tau)$ $(i=1, 2)$ denote the two edges belonging to the intersection of the middle sheet and the bottom sheet, and $bt_i(\tau)$ $(i=1, 2)$ denote the two edges belonging to the intersection of the bottom sheet and the top sheet. The order of the two edges belonging to the intersection of two sheets matches the orientation of the normal vector of the third sheet. Then the contribution of $\tau$ to $\Theta_{\theta}(F)$ can be separated into six parts: $\phi(b, c)^{-1}, \phi(b, c)^{-1}, \phi(a, b)^{-1}, \phi(a\ast c, b\ast c)^{-1}, \phi(a, c), \phi(a\ast b, c)$. We assign these six parts to $tm_1(\tau), tm_2(\tau), mb_1(\tau), mb_2(\tau), bt_1(\tau), bt_2(\tau)$ respectively. Therefore the contribution of $\tau$ can be regarded as the product of the contributions of the six edges adjacent to $\tau$. We remark that the contribution of each edge can be read directly from figure 5: the double line in figure 5 has contribution $\phi(a, b)^{\pm1}$, where the sign is decided by the position of the two sheets. The sign is positive if the two sheets are the top sheet and the bottom sheet; for the other cases the sign is negative. If the sign of the triple point is negative, then all the contributions are inverted. In order to show that $\Theta_{\theta}(F)$ is trivial, it is sufficient to prove that each edge receives opposite contributions from its two endpoints.
We continue our discussion in two cases: the two endpoints have the same sign or different signs. \begin{center} \includegraphics{figure9.eps} \centerline{\small Figure 9: Two possibilities of adjacent triple points with the same sign\quad} \end{center} \begin{itemize} \item $\epsilon(\tau_1)=+1$ and $\epsilon(\tau_2)=+1$: there are two possibilities in this case. First consider the left side of figure 9. There are two triple points $\tau_1$ and $\tau_2$ with the same sign; without loss of generality we assume the sign is positive. The frame with color $c$ denotes the top sheet of $\tau_1$ and $\tau_2$, and the straight lines are cross sections between the middle sheet or bottom sheet and the top sheet. Since $W_{\theta}(\tau_1, \rho)=\theta(a, b, c)$ and $W_{\theta}(\tau_2, \rho)=\theta(d, a\ast b, c)$, the contribution from $\tau_1$ to the edge with color $a\ast b$ is $\phi(a\ast b, c)$ and that from $\tau_2$ is $\phi(a\ast b, c)^{-1}$. The negative sign comes from the fact that for the triple point $\tau_2$, the edge with color $a\ast b$ belongs to the intersection of the top sheet and the middle sheet. Hence the contributions from $\tau_1$ and $\tau_2$ to the edge between them cancel out. Considering the 6-valent graph consisting of the double point set, it follows that the product of the contributions from all vertices to $\Theta_{\theta}(F)$ is trivial. For the right side of figure 9, we still have $\epsilon(\tau_1)=+1$ and $\epsilon(\tau_2)=+1$. Note that in this case the sheet with color $d$ is the top sheet of the triple point $\tau_2$. We have $W_{\theta}(\tau_1, \rho)=\theta(a, b, c)$ and $W_{\theta}(\tau_2, \rho)=\theta(a\ast b, c, d)$. Therefore the contribution from $\tau_1$ to the edge with color $a\ast b$ is $\phi(a\ast b, c)$ and the contribution from $\tau_2$ to the edge with color $a\ast b$ is $\phi(a\ast b, c)^{-1}$, since the edge with color $a\ast b$ belongs to the intersection of the middle sheet and the bottom sheet of $\tau_2$.
Therefore the contributions from $\tau_1$ and $\tau_2$ to the edge between them still cancel out. \end{itemize} \begin{center} \includegraphics{figure10.eps} \centerline{\small Figure 10: Adjacent triple points with different signs\quad} \end{center} \begin{itemize} \item $\epsilon(\tau_1)=+1$ and $\epsilon(\tau_2)=-1$: see figure 10. We can read from the figure that $W_{\theta}(\tau_1, \rho)=\theta(a, b, c)$ and $W_{\theta}(\tau_2, \rho)=\theta(a\ast b, d, c)^{-1}$. As before the contribution from $\tau_1$ to the edge with color $a\ast b$ is $\phi(a\ast b, c)$. Meanwhile, due to $\epsilon(\tau_2)=-1$, the contribution from $\tau_2$ to the edge with color $a\ast b$ equals $\phi(a\ast b, c)^{-1}$. Hence in this case we still have $\prod\limits_{\tau}W_{\theta}(\tau, \rho)=1$. The proof is finished. \end{itemize} \end{proof} \textbf{Remark} In quandle cohomology theory, a quandle 3-cocycle $\theta$ can also be used to define a state-sum invariant for knots via shadow colorings. Given a knot diagram $K$ and a quandle $X$, a \emph{shadow coloring} of $K$ by $X$ is a function from the set of arcs of $K$ and the regions separated by the shadow of $K$ to the quandle $X$, satisfying the coloring condition depicted below. \begin{center} \includegraphics{figure11.eps} \centerline{\small Figure 11: Shadow coloring at a crossing\quad} \end{center} It is not difficult to observe that shadow colorings are completely determined by the proper colorings of the arcs together with the color of one fixed region. Hence the number of shadow colorings does not offer any new information beyond Col$_X(K)$. Given a quandle 3-cocycle $\theta\in H_Q^3(X; G)$ one can associate a weight $W_{\theta}(\tau, \widetilde{\rho})=\theta(c, a, b)^{w(\tau)}$ with the crossing point in figure 11, where $w(\tau)$ denotes the writhe of the crossing and $\widetilde{\rho}$ denotes a shadow coloring.
Then the element of $\mathbf{Z}G$, $\Psi_{\theta}(K)=\sum\limits_{\widetilde{\rho}}\prod\limits_{\tau}W_{\theta}(\tau, \widetilde{\rho})$, defines a knot invariant, where $\widetilde{\rho}$ runs over all shadow colorings and $\tau$ runs over all crossing points. It was pointed out in \cite{Rou2000} that this state-sum invariant can be used to detect the chirality of the trefoil knot. An interesting question is how to define a knot invariant with a given positive quandle 3-cocycle. \section{On trivially colored crossing points} We end this paper with two elementary examples which concern trivially colored crossing points. Given a knot diagram $K$ and a quandle $X$, choose a crossing point $\tau$ of the knot diagram. We say $\tau$ is a \emph{trivially colored crossing point} if for any proper coloring of $K$ by $X$, the over-arc and the two under-arcs of $\tau$ are labelled with the same color. For example the crossing point involved in the first Reidemeister move is a trivially colored crossing point for any given quandle. As another instance, consider the crossing $\tau$ of the knot diagram below. If we take $X=R_3$, then the crossing $\tau$ is a trivially colored crossing point. \begin{center} \includegraphics{figure12.eps} \centerline{\small Figure 12: A trivially colored crossing point\quad} \end{center} There are two reasons for us to study trivially colored crossing points. The first motivation comes from the Kauffman-Harary conjecture. L. Kauffman and F. Harary \cite{Har1999} conjectured that the minimum number of distinct colors that are needed to produce a non-trivial Fox $n$-coloring of a reduced alternating knot diagram $K$ with prime determinant $n$ equals the crossing number of $K$. In other words, for any non-trivial Fox $n$-coloring of $K$, different arcs are assigned different colors. In 2009 this conjecture was settled by T.W. Mattman and P. Solis in \cite{Mat2009}.
It means that for a given reduced alternating diagram with prime determinant $n$ and the quandle $R_n$, no crossing point of the knot diagram is trivially colored. However, this conjecture does not hold if we drop the condition that the determinant is prime. For example, consider the standard diagram of the connected sum of two reduced alternating knot diagrams which have prime determinants $m$ and $n$ respectively, and choose the quandle $R_{mn}$. Now there exists no Fox $mn$-coloring such that different arcs have different colors, but for each crossing point there exists a proper coloring such that this crossing point is nontrivially colored. It is possible to extend the range of knots in the Kauffman-Harary conjecture by replacing the heterogeneity of the coloring with the nonexistence of trivially colored crossing points. The second motivation for investigating trivially colored crossing points arises from the $\pm$ quandle 2-cocycle invariants. Recall the definition of the $\pm$ quandle cohomology groups: in order for the 2-cocycle invariant to be preserved under the first Reidemeister move we put $\phi(a, a)=1$. In this way the first Reidemeister move has no effect on the 2-cocycle invariant, but the disadvantage is that the information of trivially colored crossing points is also lost. For instance, if a crossing point $\tau$ of a knot diagram $K$ is a trivially colored crossing point $($associated with $X)$, then $W_{\phi}(\tau, \rho)=1$ for any 2-cocycle $\phi$ and proper coloring $\rho$; hence it makes no contribution to the cocycle invariant. The first example we want to discuss is the Borromean link. The Borromean link is a nontrivial 3-component link with trivial proper sublinks. That the Borromean link is nontrivial follows from the fact that one component of the Borromean link represents a commutator of the fundamental group of the complement of the other two components \cite{Rol1976}.
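Trivially colored crossing points are easy to experiment with by computer. As a small illustration (our own; the diagram data below encodes an assumed standard trefoil diagram, not the paper's figure 12), the following Python sketch enumerates the Fox 3-colorings of the trefoil, i.e. the proper colorings by $R_3$, and tests each crossing for being trivially colored; consistently with the Kauffman-Harary property for $3_1$ (determinant 3), no crossing is trivially colored.

```python
from itertools import product

# Fox 3-colorings are colorings by the dihedral quandle R_3: i * j = 2j - i (mod 3).
# Assumed diagram data for a standard trefoil: arcs 0,1,2 and three crossings,
# each recorded as (under-arc in, over-arc, under-arc out).
crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def fox_colorings(n, num_arcs, crossings):
    return [col for col in product(range(n), repeat=num_arcs)
            if all((2 * col[over] - col[uin]) % n == col[uout]
                   for uin, over, uout in crossings)]

cols = fox_colorings(3, 3, crossings)
print(len(cols))  # 9 = 3 trivial + 6 nontrivial Fox 3-colorings

# A crossing is trivially colored iff every proper coloring assigns the
# over-arc and the under-arcs one common color.
trivially_colored = [all(col[uin] == col[over] for col in cols)
                     for uin, over, uout in crossings]
print(trivially_colored)  # [False, False, False]
```

The same enumeration applied to a diagram such as the one in figure 12 would detect its trivially colored crossing directly, at the cost of listing all proper colorings.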
Let $X=T_n$. As we mentioned before, the quandle 2-cocycle invariant of a link is a function of the pairwise linking numbers \cite{Car2003}. Since the pairwise linking numbers of the Borromean link are all trivial, it follows that the quandle 2-cocycle invariant cannot distinguish the Borromean link from the trivial link. However, we can use a refinement of the positive quandle 2-cocycle invariant to show that the Borromean link is nontrivial. \begin{center} \includegraphics{figure13.eps} \centerline{\small Figure 13: The Borromean link\quad} \end{center} Let $K_1, K_2, K_3$ denote the three components of the Borromean link and $\tau_i$ $(1\leq i\leq6)$ denote its crossing points; see the figure above. According to the definition of $\epsilon(\tau_i)$ we used in Section 5, we have $\epsilon(\tau_i)=+1$ $(1\leq i\leq6)$. Take $\phi=\chi_{(a_1, a_2)}+\chi_{(a_2, a_1)}\in H_{Q+}^2(T_2; \mathbf{Z}_4)$ and consider the element \begin{center} $\widetilde{\Phi}_{\phi}(BL)=\sum\limits_{\rho}(t_1^{W_{\phi}(\tau_1, \rho)+W_{\phi}(\tau_2, \rho)}t_2^{W_{\phi}(\tau_3, \rho)+W_{\phi}(\tau_4, \rho)}t_3^{W_{\phi}(\tau_5, \rho)+W_{\phi}(\tau_6, \rho)})\in \mathbf{Z}[t_1, t_2, t_3]/(t_1^4=t_2^4=t_3^4=1)$, \end{center} where $W_{\phi}(\tau_i, \rho)$ is the weight associated to the crossing $\tau_i$ and $\rho$ runs over all proper colorings of the diagram in figure 13 by $T_2$. In general, for a diagram of a 3-component link $L=K_1\cup K_2\cup K_3$, we define \begin{center} $\widetilde{\Phi}_{\phi}(L)=\sum\limits_{\rho}(t_1^{\sum\limits_{\tau\in K_1\cap K_2}W_{\phi}(\tau, \rho)}t_2^{\sum\limits_{\tau\in K_2\cap K_3}W_{\phi}(\tau, \rho)}t_3^{\sum\limits_{\tau\in K_3\cap K_1}W_{\phi}(\tau, \rho)})\in \mathbf{Z}[t_1, t_2, t_3]/(t_1^4=t_2^4=t_3^4=1)$, \end{center} where $K_i\cap K_j$ denotes the set of crossing points between $K_i$ and $K_j$ and $\rho$ runs over all proper colorings of the diagram by $T_2$. \begin{proposition} $\widetilde{\Phi}_{\phi}(L)$ is invariant under Reidemeister moves.
\end{proposition} \begin{proof} The result mainly follows from the fact that $\phi=\chi_{(a_1, a_2)}+\chi_{(a_2, a_1)}\in H_{Q+}^2(T_2; \mathbf{Z}_4)$. \end{proof} Direct calculation shows that $\widetilde{\Phi}_{\phi}(BL)=2+2t_1^2t_2^2+2t_2^2t_3^2+2t_3^2t_1^2$ and $\widetilde{\Phi}_{\phi}(TL)=8$, where $BL$ denotes the Borromean link and $TL$ denotes the 3-component trivial link. Therefore $\widetilde{\Phi}_{\phi}(L)$ can be used to distinguish the Borromean link from the trivial link. Further, since we are working with $T_2$, it follows that $\widetilde{\Phi}_{\phi}(L)$ is invariant under self-crossing changes. Hence the result above shows that the Borromean link is not link-homotopic to the 3-component trivial link. Essentially, the reason why $\widetilde{\Phi}_{\phi}(L)$ can tell the difference between the Borromean link and the trivial link is that the Borromean link is alternating. The writhe of a crossing between two components does not depend on the position of the third component, hence if the linking number of two components is zero then the third component has no effect on the quandle 2-cocycle invariant $($associated with $T_n)$. However, the sign $\epsilon(\tau)$ we used here contains some information about the position of the third component. This is the reason why the positive quandle 2-cocycle invariant can be used to distinguish the Borromean link from the trivial link. We remark that although for any quandle 2-cocycle of $T_n$ the state-sum invariant cannot distinguish the Borromean link from the trivial link, in \cite{Ino2012} A. Inoue used a 2-cocycle of a quasi-trivial quandle to show that the Borromean link is not link-homotopic to the 3-component trivial link. Note that the link-homotopy invariants defined by A. Inoue in \cite{Ino2012} take the same value on the Borromean link and the 3-component trivial link if we work with trivial quandles. The second example concerns the Fox 3-coloring.
As we mentioned before, the diagram of the knot $7_4$ in figure 12 contains a trivially colored crossing point if we consider Fox 3-colorings. A natural question is which kinds of knot diagrams contain a trivially colored crossing point $($associated with $R_3)$. For example, if the determinant of the knot is not divisible by 3 then there exists no nontrivial Fox 3-coloring, hence each crossing point is a trivially colored crossing point. We end this paper with a simple sufficient condition for this question, which shows that the knot diagram in figure 12 contains a trivially colored crossing point without needing to list all the proper colorings. \begin{proposition} Let $K$ be a knot diagram and consider Fox 3-colorings. If $\sum\limits_{\tau}\epsilon(\tau)$ is not divisible by 3, then $K$ contains at least one trivially colored crossing point. \end{proposition} \begin{proof} Recall that $R_3=\{0, 1, 2\}$ with quandle operation $i\ast j=2j-i$ $($mod 3$)$. Consider the coboundary \begin{center} $\phi=\chi_{(0, 1)}+\chi_{(1, 0)}+\chi_{(1, 2)}+\chi_{(2, 1)}+\chi_{(2, 0)}+\chi_{(0, 2)}\in H_{Q+}^2(R_3; \mathbf{Z}_3)$. \end{center} Since $\phi=\delta\chi_0$, it follows that $\Phi_{\phi}(K)=\sum\limits_{Col_3(K)}0$ $($here we write $\mathbf{Z}_3=\{0, 1, 2\})$. On the other hand, for each nontrivially colored crossing point $\tau$, the contribution of $\tau$ to $\Phi_{\phi}(K)$ is $\epsilon(\tau)$. Therefore if $K$ contained no trivially colored crossing points we would have $\sum\limits_{\tau}\epsilon(\tau)=0$ $($mod 3$)$. The result follows. \end{proof} \end{document}
\begin{document} \title{First-order primal-dual algorithm with Correction} \titlerunning{First-order primal-dual algorithm} \author{Xiaokai Chang$^{1,2}$ \and Sanyang Liu$^2$ } \institute{\ding{41} Xiaokai Chang \at [email protected] \and 1 \ \ School of Science, Lanzhou University of Technology, Lanzhou, Gansu, P. R. China. \\ 2 \ \ School of Mathematics and Statistics, Xidian University, Xi'an, Shaanxi, P. R. China.} \date{Received: date / Accepted: date} \begin{abstract} This paper is devoted to the design of an efficient primal-dual algorithm (PDA) for solving convex optimization problems with known saddle-point structure. We present a new PDA with a larger acceptable range of parameters and a correction step, which results in larger step sizes. The step sizes are predicted by using local information of the linear operator and corrected by a linesearch to satisfy a very weak condition, even weaker than the boundedness of the generated sequence. The convergence and the ergodic convergence rate are established for the general case, and for the case when one of the prox-functions is strongly convex. Numerical experiments illustrate the improvements in efficiency resulting from the larger step sizes and the larger acceptable range of parameters. \end{abstract} \keywords{Saddle-point problem \and primal-dual algorithm \and correction \and larger step size \and convergence rate} \subclass{49M29 \and 65K10 \and 65Y20 \and 90C25 } \section{Introduction} \label{sec_introduction} Let $X$, $Y$ be two finite-dimensional real vector spaces equipped with an inner product $\langle\cdot,\cdot\rangle$ and its corresponding norm $\|\cdot\| =\sqrt{\langle\cdot,\cdot\rangle}$.
We focus on the following primal problem \begin{eqnarray}\label{primal} \min_{x\in X} f(Kx)+g(x), \end{eqnarray} where \begin{itemize} \item $K : X\rightarrow Y$ is a bounded linear operator, with operator norm $L = \|K\|$; \item $f: Y\rightarrow(-\infty, +\infty]$ and $g :X \rightarrow(-\infty, +\infty]$ are proper lower semicontinuous convex functions. \end{itemize} Let $f^*$ denote the Legendre-Fenchel conjugate of the function $f$, and $K^*$ the adjoint of the operator $K$; then $f^*$ is a proper, convex, lower-semicontinuous (l.s.c.) function. The dual problem of (\ref{primal}) reads as: \begin{eqnarray}\label{dual} \min_{y\in Y} f^*(y)+g^*(-K^*y). \end{eqnarray} Actually, problem (\ref{primal}) together with its dual (\ref{dual}) is equivalent to the following convex-concave saddle point problem \begin{eqnarray}\label{primal_problem} \min_{x\in X}\max_{y\in Y} ~~g(x)+\langle Kx,y\rangle-f^*(y). \end{eqnarray} By introducing an auxiliary variable $z$, problem (\ref{primal}) can be written as a two-block separable convex optimization problem: \begin{eqnarray}\label{two-block} \min \;\; && f(z)+g(x)\nonumber\\ s.t. \;\;&& Kx-z=0, \\ && x \in X, ~~z\in Y.\nonumber \end{eqnarray} The convex-concave saddle point problem (\ref{primal_problem}) and its primal problem in the forms (\ref{primal}) and (\ref{two-block}) arise in many disciplines, including mechanics, signal and image processing, and economics \cite{app1,app2,app3,app4,statistical_learning,11.,FB-Tseng}. Saddle point problems are ubiquitous in optimization, as they provide a very convenient way to represent many nonsmooth problems, which in turn often allows one to improve the complexity rate from ${\mathcal O}(1/\sqrt{N})$ to ${\mathcal O}(1/N)$. However, the saddle point problem (\ref{primal_problem}) is a typical example where the two simplest iterative methods, the forward-backward method and the Arrow-Hurwicz method \cite{AH}, do not work.
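As a concrete instance of the saddle-point reformulation above: for $f=\|\cdot\|_1$, the conjugate $f^*$ is the indicator function of the unit $\ell_\infty$-ball, so the inner maximization over $y$ recovers $f(Kx)$. The following toy numerical check (our own illustration, not from the paper) makes this explicit.

```python
import numpy as np

# For f = ||.||_1 the Fenchel conjugate f* is the indicator of the unit
# l_inf ball, so  max_{||y||_inf <= 1} <Kx, y> - f*(y) = ||Kx||_1,
# and the saddle-point form recovers the primal objective f(Kx) + g(x).
rng = np.random.default_rng(0)
K = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

y_star = np.sign(K @ x)              # a maximizer on the l_inf ball
inner_max = y_star @ (K @ x)
print(np.isclose(inner_max, np.linalg.norm(K @ x, 1)))  # True
```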
Many efficient methods have been proposed for solving problem (\ref{primal_problem}), for instance, the alternating direction method of multipliers (ADMM) \cite{statistical_learning,PC-ADMM,He_ADMM-based,ADMM}, extrapolational gradient methods \cite{6.,extragradient,extragradient-type,13.}, primal-dual algorithms (PDA) \cite{PC-PDA,CP_PDA,He_PDA,zhang_PDA,inertial-FBF} and their accelerated and generalized versions \cite{Nesterov2004,Acc_PDA,M_PDA}. Here, we concentrate on the simplest first-order PDA and its acceleration. The iterative scheme of the original PDA \cite{CP_PDA,PC-PDA,M_PDA} with fixed step sizes reads as
\begin{eqnarray}\label{pda_basic}
\left. \begin{array}{l}
y_{n+1}={\rm Prox}_{\sigma f^*}(y_n+\sigma K z_{n}),\\
x_{n+1}={\rm Prox}_{\tau g}(x_n-\tau K^*y_{n+1}),\\
z_{n+1}=x_{n+1}+\delta (x_{n+1}-x_n),
\end{array}\right\}
\end{eqnarray}
where $\delta$ is called an extrapolation parameter, and $\tau>0$ and $\sigma> 0$ are regarded as step sizes. When $\delta= 0$, the primal-dual procedure (\ref{pda_basic}) reduces to the Arrow-Hurwicz algorithm \cite{AH-PDA}, which has been highlighted in \cite{zhu_TV} for TV image restoration problems. In \cite{CP_PDA}, it was shown that the primal-dual procedure (\ref{pda_basic}) is closely related to many existing methods, including the extrapolational gradient method \cite{Popov}, the Douglas-Rachford splitting method \cite{DR-splitting}, and the ADMM. The convergence of (\ref{pda_basic}) was proved in \cite{CP_PDA} under the assumptions $\delta=1$ and $\tau\sigma L^2 < 1$. Of course, determining the step sizes then requires knowing the operator norm $L$ of $K$, namely, one needs to compute the maximal eigenvalue $\lambda_{\max}(K^*K)$. For a simple and sparse $K$ this may not matter; for a large-scale dense matrix $K$, however, computing the norm is much more expensive.
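For concreteness, iteration (\ref{pda_basic}) with $\delta=1$ and fixed steps satisfying $\tau\sigma L^2<1$ can be sketched in a few lines of Python on a small LASSO instance, i.e., $f(z)=\frac12\|z-b\|^2$ and $g=\lambda_{\rm reg}\|\cdot\|_1$ (the test problem and names such as `pda` and `lam_reg` are our own illustration, not from the paper):

```python
import numpy as np

def prox_l1(v, t):
    # soft-thresholding = Prox_{t ||.||_1}
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pda(K, b, lam_reg, n_iter=5000):
    """Basic PDA (delta = 1, fixed steps with tau*sigma*L^2 < 1) for
    min_x 0.5 ||Kx - b||^2 + lam_reg ||x||_1."""
    L = np.linalg.norm(K, 2)            # operator norm lambda_max(K^*K)^{1/2}
    tau = sigma = 0.99 / L              # so tau * sigma * L^2 = 0.9801 < 1
    x = np.zeros(K.shape[1]); y = np.zeros(K.shape[0]); z = x.copy()
    for _ in range(n_iter):
        # Prox_{sigma f*} for f = 0.5||. - b||^2: v -> (v - sigma b)/(1 + sigma)
        y = (y + sigma * (K @ z - b)) / (1.0 + sigma)
        x_new = prox_l1(x - tau * (K.T @ y), tau * lam_reg)
        z = 2.0 * x_new - x             # extrapolation step with delta = 1
        x = x_new
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 10)); b = rng.standard_normal(20)
lam_reg = 0.1 * np.max(np.abs(K.T @ b))
x = pda(K, b, lam_reg)
# LASSO optimality: K^T(Kx - b) lies in -lam_reg * subdiff ||x||_1,
# hence every component is at most lam_reg in absolute value
assert np.max(np.abs(K.T @ (K @ x - b))) <= lam_reg + 1e-3
```

The final assertion checks the first-order optimality condition of the LASSO up to a small tolerance, confirming convergence under the classical step-size rule.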
Moreover, the eigenvalues of $K^*K$ can be quite different, so step sizes governed by the maximal eigenvalue will be very conservative. As a first-order algorithm, PDA suffers from slow convergence, especially on poorly conditioned problems, where it may take thousands of iterations and still struggle to reach just four digits of accuracy \cite{Acc_PDA}. As a remedy for the slow convergence, diagonal \cite{PC-PDA} and non-diagonal \cite{Acc_PDA} preconditioning were proposed; numerical efficiency can be improved in some cases, but there is still no strong evidence that such preconditioning improves, or at least does not worsen, the speed of convergence of PDA. Since larger step sizes often yield faster convergence, a linesearch was introduced in \cite{M_PDA} to gain a speed improvement. It is known that a linesearch requires an extra proximal operator, extra evaluations of $K$, or even both in every linesearch iteration; this becomes computationally expensive in situations where the condition to be satisfied is strong and the proximal operator is hard to compute and somewhat expensive. The linesearch in \cite{M_PDA} finds a step size $\tau_n$ satisfying
\begin{eqnarray}\label{ineq_0}
\sqrt{\beta}\tau_n\|K^*y_{n+1}-K^*y_n\|\leq\alpha\|y_{n+1}-y_n\|
\end{eqnarray}
with $\beta>0$ and $\alpha\in(0,1)$. The important parameter $\alpha$, which governs the step size, is restricted to $]0, 1[$ to guarantee convergence, and this hampers larger step sizes.

\textbf{Contributions.} Our purpose here is to propose an efficient PDA with a prediction-correction procedure (PDA-C for short) to estimate step sizes, rather than using the norm of $K$. The main contributions can be summarized as follows:
\begin{itemize}
\item We extend the range of $\delta$ to $]\frac{\sqrt{5}-1}{2},+\infty[$ and obtain a larger value $\alpha<\frac{1}{\sqrt{\delta}}$.
Namely, $\delta$ can be less than 1 and $\alpha$ can be close to $\sqrt{\phi}\approx1.272$ (where $\phi=\frac{\sqrt{5}+1}{2}$ is the golden ratio), which extends the range of step sizes.
\item The step sizes are predicted at low computational cost, with the aid of an inverse local Lipschitz constant of $K^*$, and then corrected when $\delta<1$ to satisfy a very weak condition, $\|x_{n+1}-x_n\|<+\infty$. For all the test problems in Section \ref{sec_experiments}, this condition is so weak that the linesearch in the Correction step either does not start at all or runs only a few times before the termination conditions are met.
\item We prove that PDA-C converges with ergodic rate ${\mathcal O}(1/N)$ for the primal-dual gap, and introduce an accelerated version of PDA-C under the assumption that the primal or dual problem is strongly convex. The theoretical rate of convergence can then be improved to an ergodic rate ${\mathcal O}(1/N^2)$ for the primal-dual gap.
\end{itemize}
The paper is organized as follows. In Section \ref{sec_preliminarries}, we provide some useful facts and notation. Section \ref{sec_PDAU} is devoted to our basic PDA with correction and updated step sizes; we prove convergence and establish the ergodic convergence rate for the primal-dual gap. In Section \ref{sec_Acceleration} we propose an accelerated version of PDA-C under the assumption that the primal or dual problem is strongly convex. Implementation details and numerical experiments, on LASSO, min-max matrix games and nonnegative least squares, are provided in Section \ref{sec_experiments}. We conclude the paper in the final section.

\section{Preliminaries}
\label{sec_preliminarries}
We state the following notation and facts on well-known properties of the proximal operator and Young's inequality. Some of these properties can be found in textbooks such as \cite{Bauschke2011Convex}.
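The facts recalled in this section lend themselves to quick numerical spot checks. As an illustration (our own sketch, not part of the paper), the variational characterization of the proximal operator can be tested for $g=\|\cdot\|_1$, whose prox is coordinatewise soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.7
x = rng.standard_normal(8)
# p = Prox_{lam ||.||_1}(x): componentwise soft-thresholding
p = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# characterization: <p - x, y - p> >= lam (||p||_1 - ||y||_1) for all y
for _ in range(1000):
    y = 3.0 * rng.standard_normal(8)
    lhs = (p - x) @ (y - p)
    rhs = lam * (np.linalg.norm(p, 1) - np.linalg.norm(y, 1))
    assert lhs >= rhs - 1e-9
```

The inequality holds because $x-p\in\lambda\partial\|p\|_1$ at the soft-thresholded point, which is exactly the subgradient form of the prox characterization.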
Let $g :X \rightarrow(-\infty, +\infty]$ be a proper lower semicontinuous convex function. The proximal operator ${\rm Prox}_{\lambda g}:X\rightarrow X$ is defined as ${\rm Prox}_{\lambda g}(x) = (I+\lambda \partial g)^{-1}(x)$ for $\lambda>0$, $x\in X$, and explicitly as
\begin{eqnarray*}
{\rm Prox}_{\lambda g}(x) = \argmin_{y\in X}\left\{ g(y)+\frac{1}{2\lambda}\|x-y\|^2\right\},\quad\forall\, x\in X, \lambda>0.
\end{eqnarray*}
\begin{fact}\cite{Bauschke2011Convex}\label{fact_proj}
Let $g :X\rightarrow (-\infty, +\infty]$ be a convex function. Then for any $\lambda>0$ and $x\in X$, $p = {\rm Prox}_{\lambda g}(x)$ if and only if
\begin{eqnarray*}
\langle p-x, y-p\rangle\geq \lambda [g(p)-g(y)],~~ \forall y\in X.
\end{eqnarray*}
\end{fact}
\begin{fact}\label{fact_ab}
Let $\{a_n\}_{n\in{\mathbb N}}$, $\{b_n\}_{n\in{\mathbb N}}$ be two nonnegative real sequences, and suppose there exists $N>0$ such that
\begin{eqnarray*}
a_{n+1} \leq a_n-b_n,~~\forall n>N.
\end{eqnarray*}
Then $\{a_n\}_{n\in{\mathbb N}}$ is convergent and $\lim_{n\rightarrow \infty} b_n = 0$.
\end{fact}
\begin{fact}\label{fact_Yang} (Young's inequality)
For any $a, b\geq0$ and $\varepsilon> 0$, we have
$$ ab\leq \frac{a^2}{2\varepsilon} + \frac{\varepsilon b^2}{2}. $$
\end{fact}
The following identity (cosine rule) appears in many convergence analyses, and we will use it repeatedly. For any $x, y, z\in {\mathbb R}^n$,
\begin{eqnarray}\label{id}
\langle x-y, x-z\rangle= \frac{1}{2}\|x-y\|^2 +\frac{1}{2}\|x-z\|^2- \frac{1}{2}\|y-z\|^2.
\end{eqnarray}
We assume that the solution set of problem (\ref{primal_problem}) is nonempty and denote it by ${\mathcal S}$. Let $(\bar{x}, \bar{y})$ be a saddle point of problem (\ref{primal_problem}), i.e.
$(\bar{x}, \bar{y})\in{\mathcal S}$; it therefore satisfies
\begin{eqnarray*}
K\bar{x} \in \partial f^*(\bar{y}), ~~-(K^*\bar{y})\in \partial g(\bar{x}),
\end{eqnarray*}
where $\partial f^*$ and $\partial g$ are the subdifferentials of the convex functions $f^*$ and $g$. For more details on the theory of saddle points, see \cite{Bauschke2011Convex}. Throughout the paper we assume that $f$ and $g$ are simple, in the sense that their resolvent operators have a closed-form representation. By the definition of a saddle point, for any $(\bar{x}, \bar{y})\in{\mathcal S}$ we have
\begin{eqnarray}
P_{\bar{x}, \bar{y}}(x) := g(x)-g(\bar{x}) + \langle K^* \bar{y}, x-\bar{x}\rangle \geq0,~\forall x\in X, \label{P}\\
D_{\bar{x}, \bar{y}}(y) := f^*(y)-f^*(\bar{y}) - \langle K \bar{x}, y-\bar{y}\rangle \geq 0,~\forall y\in Y. \label{D}
\end{eqnarray}
The primal-dual gap can be expressed as $G_{\bar{x}, \bar{y}}(x,y)= P_{\bar{x}, \bar{y}}(x)+D_{\bar{x}, \bar{y}}(y)$. When it is clear which saddle point is considered, we omit the subscripts in $P$, $D$ and $G$. It is also important to highlight that the functions $P(\cdot)$ and $D(\cdot)$ are convex for fixed $(\bar{x}, \bar{y})\in{\mathcal S}$.

\section{Primal-Dual Algorithm with Correction}
\label{sec_PDAU}
In this section, we state our primal-dual algorithm with correction and explore its convergence. The step sizes are predicted with the aid of an inverse local Lipschitz constant of $K^*$ and corrected by a linesearch.
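The prediction part of this procedure can be sketched in isolation (our own Python illustration; the name `predict_step` is ours): the next step size is the minimum of the previous one and a local inverse Lipschitz estimate of $K^*$, which is always bounded below by $\alpha/(\sqrt{\beta}\,L)$:

```python
import numpy as np

def predict_step(K, y_new, y_old, lam_prev, alpha, beta):
    # lambda-update: local inverse Lipschitz estimate of K^*,
    # never larger than the previous step size; unchanged if y did not move
    d = y_new - y_old
    Kd = K.T @ d
    if np.linalg.norm(Kd) == 0.0:
        return lam_prev
    return min(alpha * np.linalg.norm(d) / (np.sqrt(beta) * np.linalg.norm(Kd)),
               lam_prev)

rng = np.random.default_rng(2)
K = rng.standard_normal((15, 7))
L = np.linalg.norm(K, 2)
alpha, beta, lam_prev = 1.1, 1.0, 10.0
for _ in range(100):
    lam = predict_step(K, rng.standard_normal(15), rng.standard_normal(15),
                       lam_prev, alpha, beta)
    # the prediction never falls below min(alpha/(sqrt(beta) L), lam_prev)
    assert lam >= min(alpha / (np.sqrt(beta) * L), lam_prev) - 1e-12
```

Because $\|K^*d\|\leq L\|d\|$, the local ratio is at least $\alpha/(\sqrt{\beta}L)$, which is the lower bound used later in the step-size analysis.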
\vskip5mm \hrule\vskip2mm
\begin{algo}[PDA-C for solving (\ref{primal_problem})]\label{algo1} {~}\vskip 1pt
{\rm
\begin{description}
\item[{\em Step 0.}] Take $\delta\in]\frac{\sqrt{5}-1}{2},+\infty[$, $\varrho\in]0,1[$ and $1<\nu\leq\mu$. Choose $x_0\in X$, $y_0\in Y$, $\lambda_0=\lambda_1>0$, $\beta>0$ and $\alpha \in ]0, \frac{1}{\sqrt{\delta}}[$. Set $n=0$ and
$$\zeta_0=\max\{\|x_0-{\rm Prox}_{\lambda_0 g}(x_0-\lambda_0 K^*y_0)\|,~~\|y_{0}-{\rm Prox}_{\lambda_{0} f^*}(y_0+\beta\lambda_{0} K x_{0})\|\}.$$
\item[{\em Step 1.}] 1.a. Compute
\begin{eqnarray}
x_{n+1}&=&{\rm Prox}_{\lambda_n g}(x_n-\lambda_n K^*y_n),\label{x_updating}
\end{eqnarray}
1.b. \textbf{Correct when $\delta<1$}: compute $\zeta_{n+1}=\|x_{n+1}-x_n\|$ and check whether
\begin{eqnarray}\label{zeta}
\zeta_{n+1}\leq \min\{\mu\zeta_0, ~~\nu\zeta_{n}\};
\end{eqnarray}
if this does not hold, set $\lambda_n\leftarrow \varrho \lambda_n$, $\lambda_{n+1}\leftarrow \min\{\lambda_n, \lambda_{n+1}\}$ and return to Step 1.a.
\item[{\em Step 2.}] Compute
\begin{eqnarray}
z_{n+1}&=&x_{n+1}+\delta (x_{n+1}-x_n),\nonumber \\
y_{n+1}&=&{\rm Prox}_{\lambda_{n+1} f^*}(y_n+\beta\lambda_{n+1} K z_{n+1}),\label{y_updating}
\end{eqnarray}
and update
\begin{eqnarray}\label{lambda}
\lambda_{n+2}&=& \left\{\begin{array}{cl} \min~\left\{{\frac{\alpha\|y_{n+1}-y_{n}\|} {\sqrt{\beta}\|K^*y_{n+1}-K^*y_{n}\|},~~ \lambda_{n+1} }\right\}, & \mbox{if}\ \ K^*y_{n+1}-K^*y_{n}\neq0, \\ \lambda_{n+1},& \mbox{otherwise}; \end{array} \right.
\end{eqnarray}
\item[{\em Step 3.}] Set $n\leftarrow n + 1$ and return to Step 1.\\
\end{description}
}
\end{algo}
\vskip1mm\hrule\vskip5mm
\begin{rem}
For brevity in establishing convergence, different step sizes $\lambda_n$ and $\lambda_{n+1}$ are used in (\ref{x_updating}) and (\ref{y_updating}), respectively. We therefore use two step sizes to compute $x_{n+1}$ and $y_{n+1}$, and then obtain the next step size $\lambda_{n+2}$, during each iteration. Furthermore, if $\delta=1$, then $\alpha<1$; in this sense the step sizes agree with those introduced in \cite{CP_PDA,M_PDA}.
\end{rem}
\begin{rem}
Note that the primal and dual variables play symmetric roles in problem (\ref{primal_problem}) and in PDA-C; we thus choose the variable with the simpler proximal operator for the correction. In practice, many functions have a linear (or affine) proximal operator, for instance $\langle a,x\rangle$, $\frac{1}{2}\|x-a\|^2$ and the indicator function $l_C(x)$ with $C=\{y:\langle a,y\rangle=b\}$ or $C={\mathbb B}(c, r)$, a closed ball with center $c$ and radius $r> 0$. For these functions, the linesearch becomes extremely simple: it does not require any additional matrix-vector multiplications.
\end{rem}
The aim of the Correction step is to bound $\{\|x_{n+1}-x_{n}\|\}$ when $\delta<1$, since the convergence analysis requires $\|x_{n+1}-x_{n}\|<+\infty$. From (\ref{zeta}), we have $\zeta_n\leq \mu\zeta_0$ for all $n\geq 1$, as well as $\zeta_{n+1}\leq\nu \zeta_{n}$, which bounds more tightly since $\|x_{n+1}-x_n\|\rightarrow 0$. The following lemma shows that the correction procedure described in Algorithm \ref{algo1} is well defined.
\begin{lem}\label{lem_cor}
The correction procedure always terminates, i.e., $\{\lambda_n\}$ is well defined when $\delta\in]\frac{\sqrt{5}-1}{2}, 1[$.
\end{lem}
\proof Denote
\begin{eqnarray*}
A:=\partial g ~~\mbox{and}~~x_{n+1}(\lambda) := \mbox{prox}_{\lambda g}(x_n-\lambda K^*y_n).
\end{eqnarray*}
From \cite[Theorem 23.47]{Bauschke2011Convex}, we have that $\mbox{prox}_{\lambda g}[x_{n+1}(0)]\rightarrow P_{\overline{\dom A}}[x_{n+1}(0)]$ as $\lambda\rightarrow 0$ ($\overline{\dom A}$ denotes the closure of $\dom A$), which together with the nonexpansivity of $\mbox{prox}_{\lambda g}$ yields
\begin{eqnarray*}
&&\|x_{n+1}(\lambda)-P_{\overline{\dom A}}[x_{n+1}(0)]\|\\
&\leq& \|x_{n+1}(\lambda)-\mbox{prox}_{\lambda g}[x_{n+1}(0)]\|+\|\mbox{prox}_{\lambda g}[x_{n+1}(0)] -P_{\overline{\dom A}}[x_{n+1}(0)]\|\\
&\leq&\lambda\|K^*y_n\|+\|\mbox{prox}_{\lambda g}[x_{n+1}(0)] -P_{\overline{\dom A}}[x_{n+1}(0)]\|.
\end{eqnarray*}
By taking the limit as $\lambda\rightarrow 0$, we deduce that $x_{n+1}(\lambda)\rightarrow P_{\overline{\dom A}}[x_{n+1}(0)]$. Since $x_{n+1}(0)=x_n$, we observe that $P_{\overline{\dom A}}[x_{n+1}(0)]=x_n$. Suppose, by contradiction, that the correction procedure in Algorithm \ref{algo1} fails to terminate at the $n$-th iteration. Then, for all $\lambda= \varrho^i \lambda_n$ with $i=0,1,\cdots$, we have $\|x_{n+1}(\lambda)-x_n\| >\min\{\mu\zeta_0, \nu\zeta_{n}\}$. Since $\varrho^i\rightarrow 0$ as $i\rightarrow\infty$, we have $\lambda\rightarrow 0$, which gives the contradiction $0\geq\min\{\mu\zeta_0, \nu\zeta_{n}\}$ and completes the proof.
\endproof
\begin{lem}\label{lem_bound}
Let $\{\lambda_n\}_{n\in{\mathbb N}}$ be the sequence generated by PDA-C. Then $\{\lambda_n\}_{n\in{\mathbb N}}$ is bounded, $\lim\limits_{n\rightarrow\infty}\lambda_n>0$ and $\lim\limits_{n\rightarrow\infty}\frac{\lambda_n}{\lambda_{n-1}}=1$.
\end{lem}
\proof First, $\{\lambda_n\}_{n\in{\mathbb N}}$ is bounded above.
Note that $K^*$ is an $L$-Lipschitz continuous mapping with $L = \|K\|$; hence
\begin{eqnarray*}
\frac{\alpha\|y_{n+1}-y_{n}\|}{\|K^*y_{n+1}-K^*y_{n}\|}\geq\frac{\alpha\|y_{n+1}-y_{n}\|}{L\|y_{n+1}-y_{n}\|}=\frac{\alpha}{L}
\end{eqnarray*}
whenever $K^*y_{n+1}-K^*y_{n}\neq0$. This implies that the predicted steps $\{\lambda_n\}_{n\in{\mathbb N}}$ have the lower bound $\tau=\min\{\frac{\alpha}{\sqrt{\beta}L},\lambda_0\}$, so $\lambda_n\geq\tau$ when $\delta\geq1$. If $\delta<1$, then $\{\lambda_n\}$ is well defined by Lemma \ref{lem_cor}, and it has a lower bound $\tau=\min\{{\frac{\varrho^{i_0}\alpha}{\sqrt{\beta}L},\lambda_0}\}$ for some $i_0\geq0$. Since the sequence $\{\lambda_n\}$ is monotonically decreasing and bounded below, we have $\lambda=\lim\limits_{n\rightarrow\infty}\lambda_n>0$ and, consequently, $\lim\limits_{n\rightarrow\infty}\frac{\lambda_n}{\lambda_{n-1}}=1$.
\endproof
The properties of the generated step sizes established in Lemma \ref{lem_bound} are vital for the convergence of PDA-C. In the sequel, we use these properties to examine the convergence in detail.
\subsection{Convergence Analysis}
\label{sec_Convergence}
This section is devoted to the convergence theorem for PDA-C. First, we give the following lemmas, valid for any $\delta\in]\frac{\sqrt{5}-1}{2},+\infty[$, which play a crucial role in the proof of the main theorem.
\begin{lem}\label{lem1}
Let $(\bar{x}, \bar{y})\in {\mathcal S}$, and let $\{(x_n,y_n)\}_{n\in{\mathbb N}}$ be a sequence generated by PDA-C.
Define
\begin{eqnarray}\label{eta}
\eta_{n}:=(1+\delta)P(x_n)-\delta P(x_{n-1})+D(y_n);
\end{eqnarray}
then we have
\begin{eqnarray*}
\lambda_n\eta_{n} &\leq&\langle x_{n+1}-x_{n}, \bar{x}-x_{n+1}\rangle+\left\langle \frac{1}{\beta}(y_{n}-y_{n-1}), \bar{y}-y_{n}\right\rangle+\left\langle \frac{\lambda_n(z_{n}-x_{n})}{\delta\lambda_{n-1}}, x_{n+1}- z_n\right\rangle \\
&&+ \lambda_n\langle K^*y_{n}-K^*y_{n-1}, z_n-x_{n+1}\rangle.
\end{eqnarray*}
\end{lem}
\proof By (\ref{x_updating}), (\ref{y_updating}) and Fact \ref{fact_proj}, we have
\begin{eqnarray}
\langle x_{n+1}-x_{n} + \lambda_{n}K^*y_{n}, ~~ \bar{x}-x_{n+1}\rangle\geq \lambda_{n}[g(x_{n+1})-g(\bar{x})],\label{temp_01}\\
\left\langle \frac{1}{\beta}(y_{n}-y_{n-1}) - \lambda_{n}K z_{n}, ~~\bar{y}-y_{n}\right\rangle\geq \lambda_{n}[f^*(y_{n})-f^*(\bar{y})].\label{temp_001}
\end{eqnarray}
Similarly to (\ref{temp_01}), for any $x\in X$ we have
\begin{eqnarray*}
\langle x_{n}-x_{n-1} + \lambda_{n-1}K^*y_{n-1}, ~~ x-x_{n}\rangle\geq \lambda_{n-1}[g(x_{n})-g(x)].
\varepsilonen Substituting $x= x_{n+1}$ and $x= x_{n-1}$ in the inequality above, we obtain \begin{eqnarray} \langlengle x_{n}-x_{n-1} + \langlembda_{n-1}K^*y_{n-1},~~ x_{n+1}-x_{n}\ranglengle\geq \langlembda_{n-1}[g(x_{n})-g(x_{n+1})],\langlebel{tem1}\\ \langlengle x_{n}-x_{n-1} + \langlembda_{n-1}K^*y_{n-1}, ~~ x_{n-1}-x_{n}\ranglengle\geq \langlembda_{n-1}[g(x_{n})-g(x_{n-1})].\langlebel{tem2} \varepsilone Multiplying (\ref{tem2}) by $\deltaelta$ and then adding it to (\ref{tem1}) yields \begin{eqnarray}\langlebel{tem3} \langlengle x_{n}-x_{n-1} + \langlembda_{n-1}K^*y_{n-1},~~ x_{n+1}-z_{n}\ranglengle\geq \langlembda_{n-1}[(1+\deltaelta)g(x_{n})-g(x_{n+1})-\deltaelta g(x_{n-1})], \varepsilone where we use $z_{n}=x_{n}+\deltaelta (x_{n}-x_{n-1})$. Multiplying (\ref{tem3}) by $\frac{\langlembda_{n}}{\langlembda_{n-1}}$ and using $z_{n}=x_{n}+\deltaelta (x_{n}-x_{n-1})$ again, we get \begin{eqnarray}\langlebel{temp_02} \left\langlengle \frac{\langlembda_n(z_{n}-x_{n})}{\deltaelta\langlembda_{n-1}} + \langlembda_{n}K^*y_{n-1}, ~~x_{n+1}-z_{n}\right\ranglengle\geq \langlembda_{n}[(1+\deltaelta)g(x_{n})-g(x_{n+1})-\deltaelta g(x_{n-1})]. 
\end{eqnarray}
Noting that
\begin{eqnarray}\label{temp_03}
\langle K^*y_{n}-K^*\bar{y}, ~~z_n-\bar{x}\rangle=\langle Kz_n-K\bar{x},~~y_{n}-\bar{y}\rangle,
\end{eqnarray}
adding (\ref{temp_01}) and (\ref{temp_001}) to (\ref{temp_02}) gives
\begin{eqnarray}
&&\langle x_{n+1}-x_{n}, \bar{x}-x_{n+1}\rangle+\left\langle \frac{1}{\beta}(y_{n}-y_{n-1}), \bar{y}-y_{n}\right\rangle+\left\langle \frac{\lambda_n(z_{n}-x_{n})}{\delta\lambda_{n-1}}, x_{n+1}- z_n\right\rangle \nonumber\\
&&+ \lambda_n\langle K^*y_{n}-K^*y_{n-1},~~ z_n-x_{n+1}\rangle-\lambda_n \langle K^*\bar{y}, ~~ z_n-\bar{x}\rangle +\lambda_{n} \langle K\bar{x}, ~~ y_{n}-\bar{y}\rangle\nonumber\\
&\geq&\lambda_{n}[f^*(y_{n})-f^*(\bar{y})]+\lambda_{n}[(1+\delta)g(x_{n})-g(\bar{x})-\delta g(x_{n-1})].\label{temp_04}
\end{eqnarray}
Recalling the definitions of $P$ and $D$ in (\ref{P}) and (\ref{D}), and using
\begin{eqnarray*}
&&(1+\delta)g(x_{n})-g(\bar{x})-\delta g(x_{n-1})+\langle K^*\bar{y}, z_n-\bar{x}\rangle\\
&=&(1+\delta)[g(x_n)-g(\bar{x}) + \langle K^* \bar{y}, x_n-\bar{x}\rangle]-\delta[g(x_{n-1})-g(\bar{x}) + \langle K^* \bar{y}, x_{n-1}-\bar{x}\rangle]\\
&=&(1+\delta)P(x_n)-\delta P(x_{n-1}),
\end{eqnarray*}
we can rewrite (\ref{temp_04}) as
\begin{eqnarray*}
&&\langle x_{n+1}-x_{n}, \bar{x}-x_{n+1}\rangle+\left\langle \frac{1}{\beta}(y_{n}-y_{n-1}), \bar{y}-y_{n}\right\rangle+\left\langle \frac{\lambda_n(z_{n}-x_{n})}{\delta\lambda_{n-1}}, x_{n+1}- z_n\right\rangle \\
&&+ \lambda_n\langle K^*y_{n}-K^*y_{n-1}, z_n-x_{n+1}\rangle\\
&\geq&\lambda_n[D(y_n)+(1+\delta)P(x_n)-\delta P(x_{n-1})].
\end{eqnarray*}
The proof is completed by the definition of $\eta_{n}$ in (\ref{eta}).
\endproof
\begin{lem}\label{lem2}
Let $(\bar{x}, \bar{y})\in{\mathcal S}$, and let $\{(x_n,y_n)\}_{n\in{\mathbb N}}$ be a sequence generated by PDA-C. For any $\varepsilon>0$, define
\begin{eqnarray}
a_n:&=&\|x_n-\bar{x}\|^2+\frac{1}{\beta}\|y_{n-1}-\bar{y}\|^2+ 2 \lambda_{n-1}(1+\delta)P(x_{n-1}),\label{a_n}\\
b_n:&=&\left(\frac{\lambda_{n}}{\delta \lambda_{n-1}}-\frac{\alpha\varepsilon\lambda_{n}}{\lambda_{n+1}}\right)\|x_{n+1}-z_n\|^2+ \left(1-\frac{\lambda_n}{\delta \lambda_{n-1}} \right)\|x_{n+1}-x_n\|^2+\frac{\delta\lambda_n}{ \lambda_{n-1}} \|x_{n}-x_{n-1}\|^2\nonumber\\
&&+\frac{1}{\beta}\left(1- \frac{\alpha\lambda_{n}}{\varepsilon\lambda_{n+1}}\right)\|y_{n}-y_{n-1}\|^2. \label{b_n}
\end{eqnarray}
Then
\begin{eqnarray*}
a_{n+1}\leq a_{n}-b_n.
\end{eqnarray*}
\end{lem}
\proof By Lemma \ref{lem1}, (\ref{id}) and the Cauchy-Schwarz inequality, we obtain
\begin{eqnarray}\label{ineq1}
&&\|x_{n+1}-\bar{x}\|^2+\frac{1}{\beta}\|y_{n}-\bar{y}\|^2+ 2 \lambda_n(1+\delta)P(x_n)\nonumber\\
&\leq& \|x_n-\bar{x}\|^2+\frac{1}{\beta}\|y_{n-1}-\bar{y}\|^2 +2\lambda_{n}\delta P(x_{n-1})-2\lambda_n D(y_n)\nonumber\\
&&-\frac{\lambda_{n}}{\delta \lambda_{n-1}}[\|z_n-x_n\|^2+\|x_{n+1}-z_n\|^2]+ \left(\frac{\lambda_n}{\delta \lambda_{n-1}}-1 \right)\|x_{n+1}-x_n\|^2-\frac{1}{\beta}\|y_{n}-y_{n-1}\|^2 \nonumber\\
&&+ 2\lambda_n\|K^*y_{n}-K^*y_{n-1}\|\|x_{n+1}-z_n\|\nonumber\\
&\leq&\|x_n-\bar{x}\|^2+\frac{1}{\beta}\|y_{n-1}-\bar{y}\|^2 +2\lambda_{n}\delta P(x_{n-1})-2\lambda_n D(y_n)\nonumber\\
&&-\frac{\lambda_{n}}{\delta \lambda_{n-1}}[\|z_n-x_n\|^2+\|x_{n+1}-z_n\|^2]+ \left(\frac{\lambda_n}{\delta \lambda_{n-1}}-1 \right)\|x_{n+1}-x_n\|^2 \nonumber\\
&&+ \frac{2\alpha}{\sqrt{\beta}}\frac{\lambda_{n}}{\lambda_{n+1}}\|y_{n}-y_{n-1}\|\|x_{n+1}-z_n\|- \frac{1}{\beta}\|y_{n}-y_{n-1}\|^2.
\end{eqnarray}
Using Fact \ref{fact_Yang} with any $\varepsilon>0$, we have
\begin{eqnarray}\label{ineq2}
\frac{2}{\sqrt{\beta}}\|y_n-y_{n-1}\|\|z_n-x_{n+1}\| &\leq&\frac{1}{\varepsilon \beta}\|y_n-y_{n-1}\|^2+\varepsilon\|x_{n+1}-z_n\|^2.
\end{eqnarray}
Combining (\ref{ineq1}) with (\ref{ineq2}) and using $z_n-x_n=\delta(x_{n}-x_{n-1})$, we have
\begin{eqnarray}\label{ineq5}
&&\|x_{n+1}-\bar{x}\|^2+\frac{1}{\beta}\|y_{n}-\bar{y}\|^2+ 2 \lambda_n(1+\delta)P(x_n)\nonumber\\
&\leq& \|x_n-\bar{x}\|^2+\frac{1}{\beta}\|y_{n-1}-\bar{y}\|^2 +2\lambda_{n}\delta P(x_{n-1})-2\lambda_n D(y_n)\nonumber\\
&&-\left(\frac{\lambda_{n}}{\delta \lambda_{n-1}}-\frac{\alpha\varepsilon\lambda_{n}}{\lambda_{n+1}}\right)\|x_{n+1}-z_n\|^2- \left(1-\frac{\lambda_n}{\delta \lambda_{n-1}} \right)\|x_{n+1}-x_n\|^2-\frac{\delta\lambda_n}{ \lambda_{n-1}} \|x_{n}-x_{n-1}\|^2\nonumber\\
&&-\frac{1}{\beta}\left(1- \frac{\alpha\lambda_{n}}{\varepsilon\lambda_{n+1}}\right)\|y_{n}-y_{n-1}\|^2.
\end{eqnarray}
Since $(\bar{x}, \bar{y})$ is a saddle point, we have $D(y_{n})\geq0$ and $P(x_{n-1})\geq0$. Together with $\delta>0$ and $0<\delta\lambda_{n}\leq(1+\delta)\lambda_{n-1}$, the proof is completed by the definitions of $a_n$ and $b_n$.
\endproof
Since $\lim\limits_{n\rightarrow\infty}\frac{\lambda_n}{\lambda_{n-1}}=1$, we have $\lim\limits_{n\rightarrow\infty}(1-\frac{\lambda_n}{\delta \lambda_{n-1}})=1-\frac{1}{\delta}\geq0$ for any $\delta\geq1$. But for $\delta\in]\frac{\sqrt{5}-1}{2},1[$, we have $1-\frac{1}{\delta}<0$. Hence the convergence of Algorithm \ref{algo1} with $\delta<1$ differs from that with $\delta\geq1$, and it cannot be established by methods similar to those in \cite{yang,M_PDA,Proximal-extrapolated}.
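The sign conditions that drive the analysis can be spot-checked numerically over the admissible parameter range $\delta\in]\frac{\sqrt{5}-1}{2},+\infty[$ with $\alpha<1/\sqrt{\delta}$ (a sketch of ours, not from the paper):

```python
import numpy as np

golden_lower = (np.sqrt(5) - 1) / 2    # admissible range is delta > (sqrt(5)-1)/2

for delta in np.linspace(golden_lower + 1e-3, 5.0, 200):
    alpha = 0.999 / np.sqrt(delta)     # any alpha < 1/sqrt(delta)
    # the three limiting coefficients (with epsilon = 1/sqrt(delta)) are positive
    assert 1.0 / delta - alpha / np.sqrt(delta) > 0
    assert 1.0 - alpha * np.sqrt(delta) > 0
    assert 1.0 - 1.0 / delta + delta > 0   # equivalent to delta^2 + delta - 1 > 0
```

The third assertion is exactly the inequality $\delta^2+\delta-1>0$, which characterizes the lower endpoint $(\sqrt{5}-1)/2$ of the admissible range.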
By summing and combining the terms $\left(1-\frac{\lambda_n}{\delta \lambda_{n-1}} \right)\|x_{n+1}-x_n\|^2$ and $\frac{\delta\lambda_n}{ \lambda_{n-1}} \|x_{n}-x_{n-1}\|^2$, convergence can also be established when $\delta<1$ using the Correction step, as the following theorem shows.
\begin{thm}\label{thm1}
Let $(\bar{x}, \bar{y})\in{\mathcal S}$ and let $\{(x_n,y_n)\}_{n\in{\mathbb N}}$ be a sequence generated by PDA-C. Then it is a bounded sequence in $X\times Y$ and all its cluster points are solutions of (\ref{primal_problem}). Moreover, if $g|\dom g$ is continuous, then the whole sequence $\{(x_{n}, y_n)\}$ converges to a solution of (\ref{primal_problem}).
\end{thm}
\proof By Lemma \ref{lem_bound}, we have $\lim\limits_{n\rightarrow\infty}\frac{\lambda_n}{\lambda_{n+1}}=1$ and $\lim\limits_{n\rightarrow\infty}\frac{\lambda_n}{\lambda_{n-1}}=1$. Setting $\varepsilon=\frac{1}{\sqrt{\delta}}$ in Lemma \ref{lem2}, since $\alpha<\frac{1}{\sqrt{\delta}}$ and $\delta\in]\frac{\sqrt{5}-1}{2},+\infty[$, we deduce
\begin{eqnarray}\label{ineq_6}
\left. \begin{array}{r}
\lim\limits_{n\rightarrow\infty}\left(\frac{\lambda_{n}}{\delta \lambda_{n-1}}-\frac{\alpha\varepsilon\lambda_{n}}{\lambda_{n+1}}\right) =\frac{1}{\delta}-\frac{\alpha}{\sqrt{\delta}}>0,\\
\lim\limits_{n\rightarrow\infty}\left(1- \frac{\alpha\lambda_{n}}{\varepsilon\lambda_{n+1}}\right)=1- \alpha \sqrt{\delta}>0,\\
\lim\limits_{n\rightarrow\infty}\left(1-\frac{\lambda_n}{\delta\lambda_{n-1}} +\frac{\delta\lambda_{n+1}}{ \lambda_{n}}\right) =1-\frac{1}{\delta}+\delta >0.
\end{array} \right\}
\end{eqnarray}
Hence there exists an integer $N>2$ such that for any $n>N$,
\begin{eqnarray}\label{ineq_all}
\left.
\begin{array}{r}
\frac{\lambda_{n}}{\delta \lambda_{n-1}}-\frac{\alpha\varepsilon\lambda_{n}}{\lambda_{n+1}}>0,\\
1- \frac{\alpha\lambda_{n}}{\varepsilon\lambda_{n+1}}>0,\\
1-\frac{1}{\delta}+\frac{\delta\lambda_{n+1}}{ \lambda_{n}}>0,
\end{array} \right\}
\end{eqnarray}
which implies $b_n\geq0$ for $n>N$ when $\delta\geq1$. Hence, by $a_n\geq0$, Lemma \ref{lem2} and Fact \ref{fact_ab}, $\{a_n\}_{n\in{\mathbb N}}$ is convergent and $\lim\limits_{n\rightarrow\infty} b_n = 0$, so that
\begin{eqnarray}\label{limit}
\lim\limits_{n\rightarrow\infty}\|x_{n+1}-z_n\|=0, ~~\lim\limits_{n\rightarrow\infty}\|x_{n+1}-x_n\|=0 ~~\mbox{and}~~ \lim\limits_{n\rightarrow\infty}\|y_n-y_{n-1}\|=0.
\end{eqnarray}
Now, let us consider the case $\delta<1$. By the definition of $b_n$ in (\ref{b_n}), for any $M>N+1$, we have
\begin{eqnarray*}
&&a_{M+1}-a_{N+1}=\sum\limits_{n=N+1}^{M}(a_{n+1}-a_{n})\\
&\leq& -\sum\limits_{n=N+1}^{M}\left(\frac{\lambda_{n}}{\delta \lambda_{n-1}}-\frac{\alpha\varepsilon\lambda_{n}}{\lambda_{n+1}}\right)\|x_{n+1} -z_n\|^2-\sum\limits_{n=N+1}^{M}\frac{1}{\beta}\left(1- \frac{\alpha\lambda_{n}}{\varepsilon\lambda_{n+1}}\right)\|y_{n}-y_{n-1}\|^2\nonumber\\
&&-\sum\limits_{n=N+2}^{M}\left(1-\frac{\lambda_n}{\delta \lambda_{n-1}} +\frac{\delta\lambda_{n+1}}{ \lambda_{n}} \right)\|x_{n+1}-x_n\|^2\nonumber\\
&&-\frac{\delta\lambda_{N+1}}{ \lambda_{N}} \|x_{N+1}-x_{N}\|^2 +\bar\xi_M\nonumber\\
&\leq&\bar\xi_M,
\end{eqnarray*}
where $\bar\xi_M=\left(\frac{\lambda_{M+1}}{\delta \lambda_{M}}-1\right)\|x_{M+2}-x_{M+1}\|^2<+\infty$ by Lemma \ref{lem_bound} and the Correction step. Together with $a_{n}\geq0$, this implies that $\{a_{n}\}_{n\in{\mathbb N}}$ is bounded and
\begin{eqnarray*}
\left.
\begin{array}{r}
0\leq\sum\limits_{n=N+1}^{\infty}\left(\frac{\lambda_{n}}{\delta \lambda_{n-1}}-\frac{\alpha\varepsilon\lambda_{n}}{\lambda_{n+1}}\right)\|x_{n+1} -z_n\|^2<+\infty,\\
0\leq\sum\limits_{n=N+1}^{\infty}\frac{1}{\beta}\left(1- \frac{\alpha\lambda_{n}}{\varepsilon\lambda_{n+1}}\right)\|y_{n}-y_{n-1}\|^2<+\infty,\\
0\leq\sum\limits_{n=N+2}^{\infty}\left(1-\frac{\lambda_n}{\delta \lambda_{n-1}} +\frac{\delta\lambda_{n+1}}{ \lambda_{n}} \right)\|x_{n+1}-x_n\|^2<+\infty,
\end{array}\right\}
\end{eqnarray*}
so (\ref{limit}) is valid as well. Since $\|x_n-\bar{x}\|^2\leq a_n$, the sequence $\{x_{n}\}_{n\in{\mathbb N}}$ is bounded for any $\delta\in]\frac{\sqrt{5}-1}{2},+\infty[$; similarly, $\frac{1}{\beta}\|y_{n-1}-\bar{y}\|^2\leq a_n$ shows that $\{ y_n\}_{n\in{\mathbb N}}$ is bounded. By $\|x_{n+1}-x_n\|=\frac{1}{\delta}\|x_{n+1}-z_{n+1}\|$, we also obtain $\lim\limits_{n\rightarrow\infty}\|x_{n}-z_n\|=0$. Let $\{(x_{n_k+1},y_{n_k})\}_{k\in{\mathbb N}}$ be a subsequence converging to some cluster point $(x^*,y^*)$; then $z_{n_k}\rightarrow x^*$. Applying Fact \ref{fact_proj}, we deduce that
\begin{eqnarray}\label{ineq3}
\left.\begin{array}{l}
\langle x_{n_k+1}-x_{n_k} + \lambda_{n_k}K^*y_{n_k}, ~~ x-x_{n_k+1}\rangle\geq \lambda_{n_k}[g(x_{n_k+1})-g(x)],\\
\left\langle \frac{1}{\beta}(y_{n_k}-y_{n_k-1}) - \lambda_{n_k}K z_{n_k}, ~~y-y_{n_k}\right\rangle\geq \lambda_{n_k}[f^*(y_{n_k})-f^*(y)],
\end{array} \right\}
\end{eqnarray}
for any $(x,y)\in X\times Y$, which implies that $(x^*,y^*)$ is a saddle point of problem (\ref{primal_problem}), by passing to the limit and using the fact that $\lambda_n$ is bounded away from 0. We take $(\bar{x},\bar{y})=(x^*,y^*)$ in the definition of $a_n$ and denote the result by $a_n^*$.
Noting that $\{\lambda_n\}_{n\in{\mathbb N}}$ is bounded and that $P(\cdot)$ is continuous when $g|\dom g$ is continuous, we obtain $P(x_{n_k})\rightarrow0$ and
\begin{eqnarray*}
\lim\limits_{n\rightarrow\infty}a_{n+1}^* &=&\lim\limits_{k\rightarrow\infty}a_{n_{k}+1}^* \\
&=&\lim\limits_{k\rightarrow\infty}\left(\|x_{n_{k}+1}-x^*\|^2+\frac{1} {\beta}\|y_{n_{k}}-y^*\|^2+ 2\lambda_{n_{k}}(1+\delta)P(x_{n_{k}})\right) = 0,
\end{eqnarray*}
which means $x_{n+1}\rightarrow x^*$ and $y_{n}\rightarrow y^*$. This completes the proof.
\endproof
\begin{rem}
From \cite{M_PDA}, the condition that $g|\dom g$ be continuous is not restrictive: it holds when $\dom g$ is an open set (this includes all finite-valued functions) or when $g=\delta_C$ for any closed convex set $C$. Moreover, it holds for any separable lower semicontinuous convex function by \cite[Corollary 9.15]{Bauschke2011Convex}.
\end{rem}
\subsection{Ergodic Convergence Rate}
\label{sec_Rate}
In this section, we investigate the convergence rate of the ergodic sequence $\{(X_j, Y_j)\}_{j\in{\mathbb N}}$ defined in (\ref{definition_XY}). The case $\delta\geq1$ can be handled by a technique similar to that in \cite{M_PDA}; we thus focus on the case $\delta\in]\frac{\sqrt{5}-1}{2},1[$.
\begin{thm}\label{thm2}
Let $\{(x_n,y_n)\}_{n\in{\mathbb N}}$ be a sequence generated by PDA-C with $\delta\in]\frac{\sqrt{5}-1}{2},1[$, and let $(\bar{x}, \bar{y})\in{\mathcal S}$.
For any $n_1>N$ and $j>n_1$, we define \begin{eqnarray}\langlebel{definition_XY} s_j=\sigmaum_{l=n_1}^j\langlembda_l,~~X_j=\frac{\langlembda_{n_1}\deltaelta x_{n_1-1} + \sigmaum_{l=n_1}^j\langlembda_l z_l}{\langlembda_{n_1}\deltaelta + s_j},~~Y_j=\frac{\sigmaum_{l=n_1}^j\langlembda_l y_l}{s_j}, \varepsilone then there exists a sufficient large $J$, when $j>J$ we have \begin{eqnarray}n G(X_j,Y_j) = P(X_j)+D(Y_j)\leq \frac{1}{2s_j}\left[\|x_{n_1}-\begin{array}r{x}\|^2+\frac{1}{\begin{eqnarray}ta}\|y_{n_1-1}-\begin{array}r{y}\|^2 + 2\deltaelta \langlembda_{n_1}P(x_{n_1-1})\right]. \varepsilonen \varepsilonnd{thm} \proof First of all, combining the definition of $\varepsilonta_{n}$ in (\ref{eta}) with the inequality (\ref{ineq5}) yields \begin{eqnarray}\langlebel{ineq51} 2\langlembda_n\varepsilonta_{n} \leq\|x_n-\begin{array}r{x}\|^2-\|x_{n+1}-\begin{array}r{x}\|^2+\frac{1}{\begin{eqnarray}ta}\|y_{n-1}-\begin{array}r{y}\|^2- \frac{1}{\begin{eqnarray}ta}\|y_{n}-\begin{array}r{y}\|^2-b_n, \varepsilone where $b_n$ is defined in (\ref{b_n}). Recalling (\ref{ineq_all}) and summing from $l= n_1$ ($n_1>N$) to $j>n_1$, we get \begin{eqnarray}n &&\|x_{n_1}-\begin{array}r{x}\|^2+\frac{1}{\begin{eqnarray}ta}\|y_{n_1-1}-\begin{array}r{y}\|^2-\|x_{j+1}-\begin{array}r{x}\|^2- \frac{1}{\begin{eqnarray}ta}\|y_{j}-\begin{array}r{y}\|^2\\ &&+\left(\frac{\langlembda_j}{\deltaelta \langlembda_{j-1}}- 1\right)\|x_{j+1}-x_j\|^2-\frac{\deltaelta\langlembda_{n_1}}{ \langlembda_{n_1-1}} \|x_{n_1}-x_{n_1-1}\|^2\\ &\geq&2\sigmaum_{l=n_1}^j\langlembda_l\varepsilonta_{l}. \varepsilonen Since $\|x_{n+1}-x_n\|\rightarrow0$ as $ n\rightarrow+\infty$, there exists a sufficiently large $J$ such that for any $j>J$, it holds $\|x_{j+1}-x_j\|\leq\|x_{n_1}-x_{n_1-1}\|\neq0$ (Here we assume $\|x_{n_1}-x_{n_1-1}\|\neq0$). 
Then \begin{eqnarray*} &&\left(\frac{\lambda_j}{\delta \lambda_{j-1}}- 1\right)\|x_{j+1}-x_j\|^2-\frac{\delta\lambda_{n_1}}{ \lambda_{n_1-1}} \|x_{n_1}-x_{n_1-1}\|^2\\ &\leq&\left(\frac{\lambda_j}{\delta \lambda_{j-1}}- 1-\frac{\delta\lambda_{n_1}}{ \lambda_{n_1-1}}\right)\|x_{j+1}-x_j\|^2\\ &\leq&\left(\frac{1}{\delta}- 1-\frac{\delta\lambda_{n_1}}{ \lambda_{n_1-1}}\right)\|x_{j+1}-x_j\|^2\leq0 \end{eqnarray*} by the third inequality of (\ref{ineq_all}) and $\lambda_j\leq\lambda_{j-1}$. Note that \begin{eqnarray*} \sum_{l=n_1}^j\lambda_l\eta_{l}= \lambda_j(1+\delta)P(x_j)+\sum_{l=n_1+1}^j [(1+\delta)\lambda_{l-1}-\delta\lambda_l]P(x_{l-1})-\delta \lambda_{n_1}P(x_{n_1-1})+ \sum_{l=n_1}^j\lambda_lD(y_{l}). \end{eqnarray*} By the convexity of $P(\cdot)$, we observe \begin{eqnarray*} &&\lambda_j(1+\delta)P(x_j)+\sum_{l=n_1+1}^j [(1+\delta)\lambda_{l-1}-\delta\lambda_l]P(x_{l-1})\\ &\geq& (\lambda_{n_1}\delta + s_j)P\left(\frac{\lambda_{n_1}(1 +\delta)x_{n_1} + \sum_{l=n_1+1}^j\lambda_l z_l}{\lambda_{n_1}\delta + s_j} \right)\\ &=& (\lambda_{n_1}\delta + s_j)P\left(\frac{\lambda_{n_1}\delta x_{n_1-1} + \sum_{l=n_1}^j\lambda_l z_l}{\lambda_{n_1}\delta + s_j} \right) \geq s_jP(X_j), \end{eqnarray*} where $s_j=\sum_{l=n_1}^j\lambda_l$. Similarly, \begin{eqnarray*} \sum_{l=n_1}^j\lambda_lD(y_l)\geq s_j D\left(\frac{\sum_{l=n_1}^j\lambda_l y_l}{s_j}\right) = s_j D(Y_j). \end{eqnarray*} Hence, we conclude \begin{eqnarray*} \sum_{l=n_1}^j\lambda_l\eta_{l}\geq s_j[P(X_j) + D(Y_j)]- \lambda_{n_1}\delta P(x_{n_1-1}), \end{eqnarray*} and \begin{eqnarray*} G(X_j,Y_j) = P(X_j)+D(Y_j)\leq \frac{1}{2s_j}\left[\|x_{n_1}-\bar{x}\|^2+\frac{1}{\beta}\|y_{n_1-1}-\bar{y}\|^2 + 2\delta \lambda_{n_1}P(x_{n_1-1})\right], \end{eqnarray*} which finishes the proof. \endproof Notice that $\{\lambda_n\}_{n\in{\mathbb N}}$ has a lower bound $\tau>0$ by the proof of Lemma \ref{lem_bound}. Fixing $n_1>N$, we get $s_j\geq (j-n_1+1)\tau$. This implies $s_j\rightarrow +\infty$ as $j\rightarrow+\infty$, and PDA-C retains the same ${\mathcal O}(1/j)$ ergodic rate of convergence for $j>n_1>N$. \subsection{Heuristics on Nonmonotonic Step Sizes} \label{sec_improved} In Algorithm \ref{algo1}, the step size $\{\lambda_n\}_{n\in{\mathbb N}}$ is updated, but in a nonincreasing way, which might be disadvantageous if the algorithm starts in a region with large curvature of $K^*$. To break away from overdependence on the first few step sizes, we choose $\widehat{\lambda}>0$, $n_0>1$ and a sequence $\{\phi_n\}_{n\in{\mathbb N}}$ with $\phi_n\in[1,\frac{1+\delta} {\delta}]$ and $\phi_n=1$ when $n>n_0$, and update the step sizes using the following scheme: \begin{eqnarray}\label{lambda-non} \lambda_{n+2}&=& \left\{\begin{array}{cl} \min~\left\{{\frac{\alpha\|y_{n+1}-y_{n}\|} {\sqrt{\beta}\|K^*y_{n+1}-K^*y_{n}\|},~~ \phi_{n}\lambda_{n+1} },~~\widehat{\lambda}\right\} , & \mbox{if}\ \ K^*y_{n+1}-K^*y_{n}\neq0, \\ \lambda_{n+1},& \mbox{otherwise}. \end{array} \right. \end{eqnarray} Correspondingly, we use $\lambda_{n+1}\leftarrow \min\{\phi_n\lambda_n, \lambda_{n+1}\}$ in the correction step to ensure $\lambda_{n+1}\leq\phi_n\lambda_n$.
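As an illustration, one update of the rule (\ref{lambda-non}) can be sketched in Python as follows (a minimal sketch; the function and argument names are ours, not part of Algorithm \ref{algo1}):

```python
import numpy as np

def next_step_size(lam_next, y_new, y_old, K, alpha, beta, phi_n, lam_hat):
    """One application of rule (lambda-non): compute lambda_{n+2} from
    lambda_{n+1} (= lam_next), y_{n+1} (= y_new) and y_n (= y_old)."""
    d = K.T @ (y_new - y_old)                      # K^*(y_{n+1} - y_n)
    nd = np.linalg.norm(d)
    if nd == 0.0:                                  # second branch of the rule
        return lam_next
    ratio = alpha * np.linalg.norm(y_new - y_old) / (np.sqrt(beta) * nd)
    return min(ratio, phi_n * lam_next, lam_hat)   # first branch
```

With $\phi_n>1$ the returned value may exceed the previous step size, which is exactly the nonmonotonic behavior discussed above; once $\phi_n=1$ the rule is again nonincreasing up to the cap $\widehat{\lambda}$.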
The role of the multiplier $\phi_n$ is to allow the step sizes to increase, so that they can be updated non-monotonically, unlike the updating strategies presented in \cite{yang}. The constant $\widehat{\lambda}$ in Algorithm \ref{algo1} is introduced only to ensure that $\{\lambda_n\}$ is bounded above. Hence, it makes sense to choose $\widehat{\lambda}$ quite large. In this case, the step sizes can be generated non-monotonically when $n<n_0$, while remaining bounded by Lemma \ref{lem_bound}. Consequently, since $\phi_n=1$ when $n>n_0$ for the given $n_0$, the sequence $\{\lambda_{n}\}_{n>n_0}$ is monotonically decreasing. This means that $\{\lambda_{n}\}$ is convergent, \begin{eqnarray*} \lim_{n\rightarrow\infty}\lambda_{n}>0,~~ \lim_{n\rightarrow\infty}\frac{\lambda_{n}} {\lambda_{n-1}}=1, \end{eqnarray*} and $\left(\frac{\lambda_{n}}{\delta \lambda_{n-1}}-1\right)\|x_{n+1}-x_{n}\|^2<+\infty$. By $\phi_n\in[1,\frac{1+\delta} {\delta}]$ and $\lambda_{n+1}\leq\phi_n\lambda_{n}$, we can deduce $\delta\lambda_{n+1}\leq(1+\delta)\lambda_{n}$. Then Lemmas \ref{lem1} and \ref{lem2} remain valid. Under these conditions, it is not difficult to prove the convergence of Algorithm \ref{algo1} using (\ref{lambda-non}), but its convergence rate is unknown. \section{Accelerated PDA when $\delta\geq1$} \label{sec_Acceleration} In this section, we consider an accelerated version of the primal-dual algorithm when $\delta\geq1$, since the nonnegativity of $\{b_n\}_{n>N}$ cannot be ensured in the case $\delta<1$. In many cases, the speed of convergence of PDA crucially depends on the ratio $\beta$ between the primal and dual steps. It is shown in \cite{FIST,CP_PDA,M_PDA,Nesterov1983,Nesterov2008} that when $g$ or $f^*$ is strongly convex, one can modify PDA and derive ${\mathcal O}(1/N^2)$ convergence by adaptively varying $\beta$.
We show that the same holds for PDA-C when a special strategy is used to update $\lambda_n$. Due to the symmetry between the primal and dual variables in problem (\ref{primal_problem}) and in our method PDA-C, for simplicity we only treat the case where $g$ is strongly convex; the case where $f^*$ is strongly convex is completely analogous. Assume that $g$ is $\gamma$-strongly convex, i.e., \begin{eqnarray*} g(x_2)-g(x_1)\geq \langle u, x_2-x_1\rangle + \frac{\gamma}{2}\|x_2-x_1\|^2,~~x_1, x_2\in X, u\in \partial g(x_1), \end{eqnarray*} and that the parameter $\gamma$ is known. Exploiting the strong convexity of $g$, we introduce the following accelerated PDA-C (APDA-C). \vskip5mm \hrule\vskip2mm \begin{algo} [Accelerated PDA-C for solving (\ref{primal_problem}) when $g$ is $\gamma$-strongly convex.]\label{algo2} {~}\vskip 1pt {\rm \begin{description} \item[{\em Step 0.}] Take $\delta\in[1,+\infty[$, choose $x_0, y_0\in X,$ $\lambda_0=\lambda_1>0$, $\beta_0>0$ and $\alpha \in ]0, \frac{1}{\sqrt{\delta}}[ $. Set $n=0$. \item[{\em Step 1.}] Compute \begin{eqnarray} x_{n+1}&=&{\rm Prox}_{\lambda_n g}(x_n-\lambda_n K^*y_n),\nonumber\\ z_{n+1}&=&x_{n+1}+\delta (x_{n+1}-x_n),\nonumber \\ \beta_{n+1}&=&\beta_{n}(1+\gamma \lambda_{n+1}),\label{beta_updating}\\ y_{n+1}&=&{\rm Prox}_{\lambda_{n+1} f^*}(y_n+\beta_{n+1}\lambda_{n+1} K z_{n+1}).\nonumber \end{eqnarray} \item[{\em Step 2.}] Update \begin{eqnarray}\label{acc_lambda} \lambda_{n+2}= \left\{\begin{array}{cl} \min~\left\{{\frac{\alpha\|y_{n+1}-y_n\|} {\sqrt{\beta_{n+1}}\|K^*y_{n+1}-K^*y_n\|},~~ \frac{\sqrt{\beta_{n}}}{\sqrt{\beta_{n+1}}}\lambda_{n+1} } \right\} , & \mbox{if}\ \ K^*y_{n+1}-K^*y_n\neq0, \\ \frac{\sqrt{\beta_{n}}}{\sqrt{\beta_{n+1}}}\lambda_{n+1},& \mbox{otherwise}. \end{array} \right. \end{eqnarray} \item[{\em Step 3.}] Set $n\leftarrow n + 1$ and return to Step 1.\\ \end{description} } \end{algo} \vskip1mm\hrule\vskip5mm The main difference of the accelerated variant APDA-C from the basic PDA-C is that we now update $\beta_{n+1}$ by $\beta_{n+1}=\beta_{n}(1+\gamma \lambda_{n+1})$ in every iteration and obtain $\lambda_{n+2}$ from the special strategy (\ref{acc_lambda}); this results in the unboundedness of $\{\beta_{n}\}_{n\in{\mathbb N}}$ and in $\lambda_n\rightarrow0$ as $n\rightarrow+\infty$. Even so, the desired properties can be established for the sequences $\{\beta_{n}\}_{n\in{\mathbb N}}$ and $\{\lambda_{n}\}_{n\in{\mathbb N}}$, as shown in the following lemma. Also notice that the accelerated algorithm above coincides with PDA-C when the strong convexity parameter $\gamma= 0$.
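For concreteness, one iteration of APDA-C might be implemented as below (a sketch under our own naming; `prox_g(v, t)` and `prox_fs(v, t)` stand for ${\rm Prox}_{t g}(v)$ and ${\rm Prox}_{t f^*}(v)$ and must be supplied by the user):

```python
import numpy as np

def apdac_iter(x, y, lam, lam_next, beta, K, prox_g, prox_fs, delta, gamma, alpha):
    """One iteration of APDA-C (Steps 1 and 2 of Algorithm 2).
    Returns (x_{n+1}, y_{n+1}, lambda_{n+1}, lambda_{n+2}, beta_{n+1})."""
    x_new = prox_g(x - lam * (K.T @ y), lam)            # primal prox step
    z = x_new + delta * (x_new - x)                     # extrapolation
    beta_new = beta * (1.0 + gamma * lam_next)          # rule (beta_updating)
    y_new = prox_fs(y + beta_new * lam_next * (K @ z), lam_next)
    d = K.T @ (y_new - y)
    if np.linalg.norm(d) != 0.0:                        # rule (acc_lambda)
        lam_next2 = min(alpha * np.linalg.norm(y_new - y)
                        / (np.sqrt(beta_new) * np.linalg.norm(d)),
                        np.sqrt(beta / beta_new) * lam_next)
    else:
        lam_next2 = np.sqrt(beta / beta_new) * lam_next
    return x_new, y_new, lam_next, lam_next2, beta_new
```

The caller drives the loop, feeding back the returned step sizes and $\beta_{n+1}$ at each iteration.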
\begin{lem}\label{acc_beta} Let $\{\lambda_n\}_{n\in{\mathbb N}}$ and $\{\beta_n\}_{n\in{\mathbb N}}$ be the sequences generated by APDA-C. Then\\ (i) $0<\lambda_{n}\leq\frac{1+\delta}{\delta}\lambda_{n-1}$, $\lim\limits_{n\rightarrow\infty}\lambda_n=0$ and $\lim\limits_{n\rightarrow\infty} \frac{\lambda_n}{\lambda_{n-1}}=1$;\\ (ii) there exists $C>0$ such that $\beta_n\geq C n^2$ for all $n>0$. \end{lem} \proof (i) The bound $0<\lambda_{n}\leq\frac{1+\delta}{\delta}\lambda_{n-1}$ is clear since $\frac{\sqrt{\beta_{n}}}{\sqrt{\beta_{n+1}}}<1$ and $\delta\geq1$. Let $\sigma_n:=\sqrt{\beta_{n-1}} \lambda_n$; using (\ref{acc_lambda}) yields \begin{eqnarray*} \sigma_{n+2}= \left\{\begin{array}{cl} \min~\left\{{\frac{\alpha\|y_{n+1}-y_n\|} {\|K^*y_{n+1}-K^*y_n\|},~~ \sigma_{n+1} }\right\} , & \mbox{if}\ \ K^*y_{n+1}-K^*y_n\neq0, \\ \sigma_{n+1},& \mbox{otherwise}. \end{array} \right. \end{eqnarray*} By techniques similar to those in Lemma \ref{lem_bound}, $\{\sigma_n\}_{n\geq1}$ is bounded and has the lower bound $\min\{{\frac{\alpha}{L },\sigma_1 }\}$, so its limit $\sigma=\lim\limits_{n\rightarrow\infty}\sigma_n$ exists and $\sigma>0$. Suppose that $\lambda_{n}\nrightarrow0$ as $n\rightarrow+\infty$. Then $\beta_n\rightarrow+\infty$, since $\beta_{n+1}=\beta_{n}(1+\gamma \lambda_{n+1})$ and $\lambda_{n}>0$, so $\sigma_n\rightarrow+\infty$, which is a contradiction. Thus, we have $\lambda_{n}\rightarrow0$.
Consequently, we deduce \begin{eqnarray*} \lim\limits_{n\rightarrow\infty}\frac{\beta_{n+1}}{\beta_{n}}= \lim\limits_{n\rightarrow\infty}(1+\gamma \lambda_{n+1})=1, \end{eqnarray*} and \begin{eqnarray*} \lim\limits_{n\rightarrow\infty}\frac{\lambda_{n+1}}{\lambda_{n}}= \lim\limits_{n\rightarrow\infty}\frac{\sigma_{n+1}}{\sigma_{n}} \frac{\sqrt{\beta_{n-1}}}{\sqrt{\beta_{n}}}=1. \end{eqnarray*} (ii) By (\ref{acc_lambda}), for each $n$ we have either \begin{eqnarray*} \lambda_{n+2}= \frac{\alpha\|y_{n+1}-y_n\|} {\sqrt{\beta_{n+1}}\|K^*y_{n+1}-K^*y_n\|}\geq \frac{\alpha} {\sqrt{\beta_{n+1}}L} \end{eqnarray*} or \begin{eqnarray*} \lambda_{n+2}\geq \frac{\sqrt{\beta_{n}}}{\sqrt{\beta_{n+1}}}\lambda_{n+1}\geq \frac{\sqrt{\beta_{n}}}{\sqrt{\beta_{n+1}}} \frac{\sqrt{\beta_{n-1}}}{\sqrt{\beta_{n}}}\cdots \frac{\sqrt{\beta_{n_0}}}{\sqrt{\beta_{n_0+1}}}\frac{\alpha} {\sqrt{\beta_{n_0}}L}=\frac{\alpha} {\sqrt{\beta_{n+1}}L} \end{eqnarray*} for some $n_0<n$ such that $\lambda_{n_0+2}= \frac{\alpha\|y_{n_0+1}-y_{n_0}\|} {\sqrt{\beta_{n_0+1}}\|K^*y_{n_0+1}-K^*y_{n_0}\|}$. Then, we have \begin{eqnarray}\label{beta_ineq} \beta_{n+1}=\beta_{n}(1+\gamma \lambda_{n+1})\geq \beta_{n}\left(1+\gamma \frac{\alpha}{\sqrt{\beta_{n}}L}\right) =\beta_{n}+\gamma \frac{\alpha\sqrt{\beta_{n}}}{L}. \end{eqnarray} By induction, there exists $C>0$ such that $\beta_{n}\geq Cn^2$ for all $n>0$.
\endproof From Lemma \ref{acc_beta} (i), Lemmas \ref{lem1} and \ref{lem2} remain valid with $\beta_n$ in place of $\beta$, but Theorem \ref{thm1} no longer applies, since $\lambda_n\rightarrow0$. In the sequel, we show that our accelerated method APDA-C yields essentially the rate ${\mathcal O}(1/j^2)$ of convergence for the primal-dual gap, although we are unable to prove the convergence of $\{y_n\}_{n\in{\mathbb N}}$. Instead of (\ref{temp_01}), one can now use the stronger inequality \begin{eqnarray}\label{stronger_temp1} \langle x_{n+1}-x_{n} + \lambda_{n}K^*y_{n}, ~~ \bar{x}-x_{n+1}\rangle\geq \lambda_{n}[g(x_{n+1})-g(\bar{x})+ \frac{\gamma}{2}\|x_{n+1}-\bar{x}\|^2]. \end{eqnarray} In turn, using (\ref{stronger_temp1}) and the definition of $\eta_n$ yields a stronger version of (\ref{ineq51}) (again with $\beta_n$ in place of $\beta$): \begin{eqnarray}\label{stronger_ineq51} &&(1+\gamma\lambda_{n+1})\|x_{n+1}-\bar{x}\|^2+\frac{\beta_{n+1}}{\beta_{n}} \frac{1}{\beta_{n+1}}\|y_{n}-\bar{y}\|^2+ 2 \lambda_{n}\eta_{n}\nonumber\\ &\leq& \|x_n-\bar{x}\|^2+ \frac{1}{\beta_{n}}\|y_{n-1}-\bar{y}\|^2 -b_n, \end{eqnarray} where we used $\frac{\beta_{n+1}}{\beta_{n}}=1+\gamma \lambda_{n+1}$ in APDA-C. For brevity, let \begin{eqnarray*} A_{n}:=\|x_n-\bar{x}\|^2+ \frac{1}{\beta_{n}}\|y_{n-1}-\bar{y}\|^2; \end{eqnarray*} then (\ref{stronger_ineq51}) gives \begin{eqnarray*} \beta_{n+1}A_{n+1}+2 \beta_{n}\lambda_{n}\eta_{n} \leq \beta_{n}A_{n}-\beta_{n}b_n. \end{eqnarray*} Recalling (\ref{ineq_all}), for any $n>N$, we have \begin{eqnarray*} \beta_{n+1}A_{n+1}+2 \beta_{n}\lambda_{n}\eta_{n}\leq \beta_{n}A_{n}. \end{eqnarray*} Fixing $n>N$ and summing from $l= n$ to $j>n$, we get \begin{eqnarray*} \beta_{n}A_{n}-\beta_{j+1}A_{j+1} \geq2\sum_{l=n}^j\beta_{l}\lambda_l\eta_{l}. \end{eqnarray*} Defining \begin{eqnarray*} s_j=\sum_{l=n}^j\beta_l\lambda_l,~~X_j=\frac{\beta_n\lambda_n\delta x_{n-1} + \sum_{l=n}^j\beta_l\lambda_l z_l}{\beta_n\lambda_n\delta + s_j},~~Y_j=\frac{\sum_{l=n}^j\beta_l\lambda_l y_l}{s_j} \end{eqnarray*} for any $j>n$, and proceeding as in the proof of Theorem \ref{thm2}, we observe \begin{eqnarray*} \beta_{j+1}A_{j+1}+2s_j G(X_j,Y_j) \leq \beta_{n}A_n+ 2\delta \beta_{n}\lambda_nP(x_{n-1}). \end{eqnarray*} From this we deduce that the sequence $\{y_n\}_{n\in{\mathbb N}}$ is bounded and \begin{eqnarray*} G(X_j,Y_j) &\leq& \frac{1}{2s_j} [\beta_{n}A_n+ 2\delta\beta_{n} \lambda_nP(x_{n-1})],\\ \|x_{j+1}-\bar{x}\|^2&\leq& \frac{1}{\beta_{j+1}}\left[\beta_{n}A_n+2 \delta \beta_{n}\lambda_nP(x_{n-1})\right]. \end{eqnarray*} Then, by Lemma \ref{acc_beta}, for some constant $C_1 > 0$ we have \begin{eqnarray*} \|x_{j+1}-\bar{x}\|^2\leq \frac{C_1}{(j+1)^2}. \end{eqnarray*} From (\ref{beta_ineq}), we get $\beta_{n+1}-\beta_{n}\geq \gamma \frac{\alpha\sqrt{\beta_{n}}}{L}$.
Since \begin{eqnarray*} \beta_n\lambda_{n}=\beta_n\lambda_{n+1}\frac{\lambda_{n}}{\lambda_{n+1}} =\frac{\lambda_{n}}{\lambda_{n+1}} \frac{\beta_{n+1}-\beta_{n}}{\gamma}\geq \frac{\lambda_{n}}{\lambda_{n+1}} \frac{\alpha\sqrt{\beta_{n}}}{L}\geq \frac{\lambda_{n}}{\lambda_{n+1}} \frac{\alpha\sqrt{C}n}{L}, \end{eqnarray*} we obtain $s_j=\sum_{l=n}^j\beta_l\lambda_l={\mathcal O}(j^2)$ using $\lim\limits_{n\rightarrow\infty}\frac{\lambda_{n}}{\lambda_{n+1}}=1$. This means that for some constant $C_2 > 0$, \begin{eqnarray*} G(X_j, Y_j)\leq \frac{C_2}{j^2}. \end{eqnarray*} Finally, we have shown the following result. \begin{thm} Let $\{(x_n,y_n)\}_{n\in{\mathbb N}}$ be a sequence generated by APDA-C. Then $\|x_j-\bar{x}\|={\mathcal O}(1/j)$ and $G(X_j, Y_j) = {\mathcal O}(1/j^2)$. \end{thm} \section{Numerical Experiments} \label{sec_experiments} We present numerical results to demonstrate the computational performance of PDA-C (Algorithm \ref{algo1} using (\ref{lambda-non}) to update the step sizes) and its accelerated version (Algorithm \ref{algo2})\footnote{All codes are available at http://www.escience.cn/people/changxiaokai/Codes.html} for solving some minimization problems with a saddle-point structure.
The following state-of-the-art algorithms are compared to investigate the computational efficiency: \begin{itemize} \item Tseng's forward-backward-forward splitting method used as in \cite[Section 4]{Proximal-extrapolated} (denoted by ``FBF"), with $\beta= 0.7, \theta= 0.99$; \item Proximal gradient method (denoted by ``PGM"), with fixed step $\frac{1}{\|K\|^2}$; \item Proximal extrapolated gradient methods \cite[Algorithm 2]{Proximal-extrapolated} (denoted by ``PEGM"), with linesearch and $\alpha= 0.41, \sigma= 0.7$; \item Primal-dual algorithm with linesearch \cite{M_PDA} (denoted by ``PDA-L"), with $\mu = 0.7$, $\alpha= 0.99$, and $\tau_0=\frac{\sqrt{\min\{n,m\}}}{\|K\|_F}$; \item FISTA \cite{FIST,Nesterov1983} with standard linesearch (denoted by ``FISTA"), with $\beta= 0.7, \lambda_0= 1$. \end{itemize} We denote by $seed$ the seed of the random number generator, used to regenerate the data in Python 3.8. All experiments are performed on an Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz PC with 8GB of RAM running a 64-bit Windows operating system. There are many choices of the sequence $\{\phi_n\}_{n\in{\mathbb N}}$, but in the early iterations a large range of $\lambda_n$ is beneficial for selecting a proper step size; we thus use \begin{eqnarray}\label{phi0} \phi_n=\left\{ \begin{array}{rl}\frac{1+\delta}{\delta},~&~\mbox{if}~n\leq\hat{n};\\ \frac{1+\delta+n-\hat{n}}{\delta+n-\hat{n}},~&~\mbox{if}~\hat{n}<n\leq n_0;\\ 1,~&~\mbox{if}~n>n_0, \end{array} \right. \end{eqnarray} for given $\hat{n}, n_0\in{\mathbb N}$. For PDA-C, we set $\varrho=0.7$, $\alpha=1.27$ and $\delta=0.62$ unless otherwise stated. For APDA-C, we set $\alpha=0.99$ and $\delta=1$.
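The schedule (\ref{phi0}) is straightforward to implement; a minimal Python version (names ours, reading the three cases as $n\leq\hat{n}$, $\hat{n}<n\leq n_0$ and $n>n_0$) is:

```python
def phi(n, delta, n_hat, n0):
    """Multiplier phi_n of schedule (phi0): a large constant early phase,
    a phase decaying toward 1, and phi_n = 1 once n exceeds n0."""
    if n <= n_hat:
        return (1.0 + delta) / delta
    if n <= n0:
        return (1.0 + delta + n - n_hat) / (delta + n - n_hat)
    return 1.0
```

For the default $\delta=0.62$ the early-phase multiplier is $\frac{1.62}{0.62}\approx2.6$, after which $\phi_n$ decreases toward $1$.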
\begin{prob}[LASSO]\label{pro_1} We consider the problem $$ \min_x \phi(x):=\frac 1 2 ||Kx-b||^2 + \mu ||x||_1, $$ where $K\in \mathbb R^{m\times n}$ is a data matrix, $b\in \mathbb R^m$ is a given observation, and $x\in \mathbb R^n$ is an unknown signal. \end{prob} We can rewrite the problem above in a primal-dual form as follows: \begin{eqnarray}\label{pro_11} \min_{x\in \mathbb R^n} \max_{y\in \mathbb R^m} g(x)+\langle Kx,y\rangle-f^*(y), \end{eqnarray} where $f(p) = \frac 1 2 ||p-b||^2$, $f^*(y) = \frac 1 2 ||y||^2 + (b,y) = \frac 1 2 ||y+b||^2 -\frac{1}{2}||b||^2$ and $g(x) = \mu ||x||_1$. We set $seed=1$ and generate a random $w\in {\mathbb R}^n$ in which $s$ random coordinates are drawn from ${\mathcal N}(0,1)$ and the rest are zeros. Then we generate $\nu\in {\mathbb R}^m$ with entries drawn from ${\mathcal N}(0, 0.1)$ and set $b = Kw +\nu$. The matrix $K\in {\mathbb R}^{m\times n}$ is constructed in one of the following ways: \begin{itemize} \item[1.] $n = 1000$, $m = 200$, $s = 10$, $\mu= 0.1$. All entries of $K$ are generated independently from ${\mathcal N}(0, 1)$. The $s$ entries of $w$ are drawn from the uniform distribution in $[-10, 10]$. \item[2.] $n = 2000$, $m = 1000$, $s = 100$, $\mu= 0.1$. All entries of $K$ are generated independently from ${\mathcal N}(0, 1)$. The $s$ entries of $w$ are drawn from ${\mathcal N}(0, 1)$. \end{itemize} To the primal-dual form (\ref{pro_11}) of Problem \ref{pro_1} we apply the primal-dual methods, and to the problem in its primal form we apply PGM and FISTA. For the latter we set $h(x) = f(Kx)$ and get $\nabla h(x) = K^*(Kx-b)$. The parameter values are set as in \cite{M_PDA}; we restate them here for the reader's convenience. For PGM and FISTA a fixed step size $\alpha= \frac{1}{\|K\|^2}$ is used. For PDA (\ref{pda_basic}) we use $\sigma= \frac{1}{20\|K\|}, ~\tau=\frac{20}{\|K\|}$.
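Both proximal maps needed here have closed forms: componentwise soft-thresholding for $g(x)=\mu||x||_1$, and an affine map for $f^*(y)=\frac12||y+b||^2-\frac12||b||^2$. A minimal sketch (function names ours):

```python
import numpy as np

def prox_g(v, t, mu):
    """Prox of t*mu*||.||_1: componentwise soft-thresholding at level t*mu."""
    return np.sign(v) * np.maximum(np.abs(v) - t * mu, 0.0)

def prox_fs(v, t, b):
    """Prox of t*f* with f*(y) = 0.5*||y+b||^2 - 0.5*||b||^2; setting the
    gradient of 0.5*||y-v||^2 + (t/2)*||y+b||^2 to zero gives y = (v-t*b)/(1+t)."""
    return (v - t * b) / (1.0 + t)
```

With $t$ the current step size, these two maps are the only problem-specific ingredients of the primal-dual iteration for (\ref{pro_11}).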
For PDA-L and PDA-C we set $\beta= \frac{1}{400}$, and for PDA-C we set $\hat{n}=5000$. The initial points for all methods are $x_0 = (0,\cdots, 0)$ and $y_0=Kx_0-b$. \begin{figure}[htp] \centering \subfigure[$\phi(x)-\phi^*$]{ \includegraphics[width=0.45\textwidth]{LR1.pdf}} \subfigure[$\lambda_n$ (or $\tau_k$)]{ \includegraphics[width=0.45\textwidth]{LR1_step.pdf}} \caption{Comparison of $\phi(x)-\phi^*$ and $\lambda_n$ (or $\tau_k$) for solving Problem \ref{pro_1} generated in the first way. } \label{Fig 1} \end{figure} \begin{figure}[htp] \centering \subfigure[$\phi(x)-\phi^*$]{ \includegraphics[width=0.45\textwidth]{LR2.pdf}} \subfigure[$\lambda_n$ (or $\tau_k$)]{ \includegraphics[width=0.45\textwidth]{LR2_step.pdf}} \caption{Comparison of $\phi(x)-\phi^*$ and $\lambda_n$ (or $\tau_k$) for solving Problem \ref{pro_1} generated in the second way. } \label{Fig 2} \end{figure} To illustrate how the values $\phi(x_n)-\phi^*$ (with $\phi^*=\phi(\bar{x})$ for $(\bar{x},\bar{y})\in{\mathcal S}$) and $\lambda_n$ for PDA-C (or $\tau_k$ for PDA-L) change over the iterations, we give two convergence plots with the maximum number of iterations set at 30,000. From the results shown in Figs. \ref{Fig 1} and \ref{Fig 2}, the primal-dual methods show better performance on the instances of Problem \ref{pro_1}, though they require tuning the parameter $\beta$. For the tested problems, PDA-C performs the correction step fewer than 20 times, so PDA-C with $\delta=0.62$ is more efficient than PDA-L. The advantage of PDA-C is a larger interval of possible step sizes $\lambda_n$, see Fig. \ref{Fig 1} (b) and Fig. \ref{Fig 2} (b), which results from the smaller choice of $\delta$ and the larger value of $\alpha$.
\begin{prob}[Min-max matrix game]\label{pro_3} The second problem is the following min-max matrix game \begin{eqnarray}\label{mm_pro} \min_{x \in \Delta_n}\max_{y\in \Delta_m} \langle Kx, y\rangle, \end{eqnarray} where $x\in \mathbb R^n$, $y\in \mathbb R^m$, $K\in \mathbb R^{m\times n}$, and $\Delta_m$, $\Delta_n$ denote the standard unit simplices in $\mathbb R^m$ and $\mathbb R^n$, respectively. \end{prob} The variational inequality formulation of (\ref{mm_pro}) is: $$\langle F(z^*),z-z^*\rangle + G(z) - G(z^*) \geq 0 \quad \forall z \in Z,$$ where $$ Z = \mathbb R^n\times \mathbb R^m,\quad z=\binom{x}{y},\quad F = \begin{bmatrix} 0 & K^*\\ -K & 0\end{bmatrix}, \quad G(z) = \delta_{\Delta_n}(x) + \delta_{\Delta_m}(y). $$ For the comparison we use the primal-dual gap (PD gap) as in \cite{M_PDA}, which can be easily computed for a feasible pair $(x, y)$ and is defined as \begin{eqnarray*} G(x, y) := \max_i(Kx)_i - \min_j(K^*y)_j. \end{eqnarray*} We use an auxiliary point (see \cite{modified-FB}) to compute the primal-dual gap for Tseng's method FBF, as its iterates may be infeasible. The initial point in all cases is chosen as $x_0 =\frac{1}{n}(1,\cdots,1)$ and $y_0 =\frac{1}{m}(1,\cdots,1)$. We use the algorithm from \cite{projections} to compute the projection onto the unit simplex. For PDA we set $\tau= \sigma= \frac{1}{\|K\|}$, which we compute in advance. The input data for FBF and PEGM with linesearch are taken to be the same as in \cite{Proximal-extrapolated}. For PDA-L and PDA-C we set $\beta= 1$ (the same as $\tau=\sigma$ in PDA), and for PDA-C we set $\hat{n}=40,000$. Since PDA-C with $\delta=1$ performs better than with smaller $\delta$ on the min-max matrix game, we test PDA-C with $\delta=1$. We consider four differently generated samples of the matrix $K\in {\mathbb R}^{m\times n}$ with $seed=100$ as in \cite{M_PDA}: \begin{itemize} \item[1.] $m = 100$, $n = 100$. All entries of $K$ are generated independently from the uniform distribution in $[-1, 1]$; \item[2.] $m = 100$, $n = 100$. All entries of $K$ are generated independently from the normal distribution ${\mathcal N} (0,1)$; \item[3.] $m = 500$, $n = 100$. All entries of $K$ are generated independently from the normal distribution ${\mathcal N} (0,10)$; \item[4.] $m = 100$, $n = 200$. All entries of $K$ are generated independently from the uniform distribution in $[0, 1]$. \end{itemize} \begin{figure}[htp] \centering \subfigure[Example 1.]{ \includegraphics[width=0.45\textwidth]{MMM1.pdf}} \subfigure[Example 2.]{ \includegraphics[width=0.45\textwidth]{MMM2.pdf}} \subfigure[Example 3.]{ \includegraphics[width=0.45\textwidth]{MMM3.pdf}} \subfigure[Example 4.]{ \includegraphics[width=0.45\textwidth]{MMM4.pdf}} \caption{Comparison of the PD gap for solving Problem \ref{pro_3} within 100,000 iterations. } \label{Fig 5} \end{figure} For every case we report the computing time (Time) measured in seconds and show the primal-dual gap versus the computing time. The results are presented in Fig. \ref{Fig 5}. The execution time of PDA is the lowest, PDA-C is slightly slower than PDA, and PDA-L is about twice as expensive as PDA. From Fig. \ref{Fig 5}, PDA-L and PDA-C show better performance than PDA on the instances of Problem \ref{pro_3}. Furthermore, we notice that PDA-C can be better than PDA-L. \begin{prob}[Nonnegative least squares problem]\label{pro_2} We are interested in the problem \begin{eqnarray*} \min_{x\geq0} \phi(x):=\frac 1 2 ||Kx-b||^2, \end{eqnarray*} or in a primal-dual form \begin{eqnarray}\label{pro_22} \min_{x\in \mathbb R^n} \max_{y\in \mathbb R^m} g(x)+\langle Kx,y\rangle-f^*(y), \end{eqnarray} where $f(p) = \frac 1 2 ||p-b||^2$, $f^*(y) = \frac 1 2 ||y||^2 + (b,y) = \frac 1 2 ||y+b||^2 -\frac{1}{2}||b||^2$, $g(x) = \delta_{\mathbb R^n_+}(x)$. \end{prob} We consider two real data examples from the Matrix Market library\footnote{https://math.nist.gov/MatrixMarket/data/Harwell-Boeing/lsq/lsq.html.} One is ``WELL1033": a sparse matrix with $m = 1033, n = 320$; the other is ``ILLC1033": a sparse matrix with $m = 1033, n = 320$. For all cases the entries of the vector $b\in {\mathbb R}^m$ are generated independently from ${\mathcal N} (0, 1)$. To apply FISTA, we define $h(x) = f(Kx) = \frac{1}{2} ||Kx-b||^2$ for Problem \ref{pro_2}; then $\nabla h(x) = K^*(Kx-b)$. Since $f^*$ is strongly convex, in addition to PDA, PDA-L, and FISTA, we include in our comparison APDA, APDA-L and APDA-C. We apply APDA-L to the primal-dual form (\ref{pro_22}) and APDA-C to the symmetric form of (\ref{pro_22}), namely, \begin{eqnarray*} f^*(y) = \delta_{\mathbb R^n_+}(y),~~~g(x) =\frac 1 2 ||x+b||^2 -\frac{1}{2}||b||^2. \end{eqnarray*} We take the strong convexity parameter $\gamma=1/2$. For PDA, APDA, and FISTA we compute $\|K\|$ and set $\tau= \sigma = \frac{1}{\|K\|}$, $\alpha=\frac{1}{\|K\|^2}$. For PDA-L and PDA-C we set $\beta= 1$ (the same as $\tau=\sigma$ in PDA), and for PDA-C we set $\hat{n}=5000$. Since $\lambda_n\rightarrow0$ by Lemma \ref{acc_beta}, we set $\phi_n\equiv1$ for APDA-C in this section. The initial points are $x_0 = (0,\cdots, 0)$ and $y_0 = Kx_0-b =-b$. \begin{figure}[htp] \centering \subfigure[$\phi(x)-\phi^*$]{ \includegraphics[width=0.45\textwidth]{NLS1.pdf}} \subfigure[$\lambda_n$ (or $\beta_{n-1}\lambda_n$, $\tau_k$)]{ \includegraphics[width=0.45\textwidth]{NLS_step1.pdf}} \caption{Comparison of $\phi(x)-\phi^*$ and $\lambda_n$ (or $\beta_{n-1}\lambda_n$, $\tau_k$) for solving Problem \ref{pro_2} with ``WELL1033". } \label{Fig 3} \end{figure} \begin{figure}[htp] \centering \subfigure[$\phi(x)-\phi^*$]{ \includegraphics[width=0.45\textwidth]{NLS2.pdf}} \subfigure[$\lambda_n$ (or $\beta_{n-1}\lambda_n$, $\tau_k$)]{ \includegraphics[width=0.45\textwidth]{NLS_step2.pdf}} \caption{Comparison of $\phi(x)-\phi^*$ and $\lambda_n$ (or $\beta_{n-1}\lambda_n$, $\tau_k$) for solving Problem \ref{pro_2} with ``ILLC1033". } \label{Fig 4} \end{figure} We plot how $\phi(x_n)-\phi^*$ and $\lambda_n$ from PDA-C (or $\beta_{n-1}\lambda_n$ from APDA-C, $\tau_k$ from PDA-L) change over the iterations. From the results shown in Figs. \ref{Fig 3} and \ref{Fig 4}, we observe that PDA-C with $\delta=0.62$ and PDA-L show better performance for the case ``WELL1033", while APDA-C and APDA-L do so for the case ``ILLC1033". It is interesting to highlight that non-accelerated methods can sometimes be better than their accelerated variants on well-conditioned problems. \section{Conclusions} \label{sec_conclusion} In this work, we have presented a primal-dual algorithm with correction and explored its acceleration. Firstly, the proposed PDA-C allows us to avoid the evaluation of the operator norm. Secondly, we have presented a prediction-correction strategy to estimate step sizes, which may result in larger step sizes. In practice, the correction step is performed infrequently, as only a weak condition needs to be satisfied. Finally, we have proved convergence and established convergence rates for PDA-C and its accelerated version. Notice that only a very weak condition needs to be checked in the correction, and for some problems the correction step is never triggered before the termination conditions are reached. Does the proposed PDA converge without correction? For $\delta\in]0,\frac{\sqrt{5}-1}{2}]$, is there any (larger) $\alpha>0$ such that the proposed PDA is convergent?
We leave these as interesting topics for future research.
\begin{acknowledgements}
The research of Xiaokai Chang was supported by the Hongliu Foundation of First-class Disciplines of Lanzhou University of Technology. The project was supported by the National Natural Science Foundation of China under Grant 61877046.
\end{acknowledgements}
\end{document}
\begin{document} \title{Corner effects on the perturbation of an electric potential \thanks{\footnotesize This work is supported by the Korean Ministry of Science, ICT and Future Planning through NRF grant No. 2016R1A2B4014530 (to M.L.) and by the Swedish Research Council under contract 621-2014-5159 (to J.H.).}} \author{ Doo Sung Choi\thanks{\footnotesize Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea ([email protected]).} \and Johan Helsing\thanks{\footnotesize Centre for Mathematical Sciences, Lund University, 221 00 Lund, Sweden ([email protected]).} \and Mikyoung Lim\thanks{\footnotesize Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea ([email protected]). Corresponding author.}} \date{\today} \maketitle \begin{abstract} We consider the perturbation of an electric potential due to an insulating inclusion with corners. This perturbation is known to admit a multipole expansion whose coefficients are linear combinations of generalized polarization tensors. We define new geometric factors of a simple planar domain in terms of a conformal mapping associated with the domain. The geometric factors share properties of the generalized polarization tensors and are the Fourier series coefficients of a generalized external angle of the inclusion boundary. {Since the generalized external angle contains Dirac delta singularities at corner points, we can derive a criterion for the existence of corner points on the inclusion boundary in terms of the geometric factors.} We illustrate and validate our results with numerical examples computed to a high degree of precision using integral equation techniques, the Nystr\"om discretization, and recursively compressed inverse preconditioning.
\end{abstract} \noindent {\footnotesize {\bf AMS subject classifications.} {35R30; 30C35; 35J57} } \noindent {\footnotesize {\bf Key words.} {Generalized Polarization Tensors; Planar domain with corners; Riemann mapping; Schwarz-Christoffel Transformation; RCIP method} } \section{Introduction} Let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^2$ containing the origin and with Lipschitz boundary. We suppose that the exterior of $\overline{\Omega}$ has unit conductivity and that $\Omega$ is insulated. Let $h$ be a harmonic function, and consider the following conductivity problem: \begin{equation} \label{condeqn} \begin{cases} \displaystyle\Delta u =0 \quad&\mbox{in } \mathbb{R}^2\setminus \overline{\Omega},\\ \displaystyle \frac{\partial u}{\partial \nu}=0 &\mbox{on } \partial \Omega,\\ \displaystyle u({\bf x})- h({\bf x}) =O(|{\bf x}|^{-1}) &\mbox{as } |{\bf x}|\to \infty. \end{cases} \end{equation} The background potential $h$ here is perturbed by $u$ due to the presence of the inclusion $\Omega$. One can easily express the perturbation $u-h$ using a boundary integral equation formulation of~(\ref{condeqn}) involving the Neumann-Poincar\'e (NP) operator. Furthermore, the boundary integral equation framework admits a multipole expansion of $u-h$ whose coefficients are linear combinations of the generalized polarization tensors (GPTs), which can also be expressed in terms of boundary integrals. The GPTs are a sequence of real-valued tensors that are associated with $\Omega$ and they generalize the classical polarization tensors \cite{polya1951isoperimetric}. They can be obtained from multistatic measurements, where a high signal-to-noise ratio is needed to acquire high-order terms \cite{ammari2014target}. They have been used as building blocks when solving imaging problems for inclusions with smooth boundaries \cite{AGKLY14,AKbook}. The GPTs contain sufficient geometric information to determine $\Omega$ uniquely \cite{AKbook}.
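For orientation, here is a worked example (our own, under the stated assumptions): when $\Omega$ is the disk of radius $\rho$ centered at the origin and $h({\bf x})=x_1=\Re\{z\}$, the solution of \eqref{condeqn} is explicit,

```latex
u({\bf x}) \;=\; \Re\Big\{ z + \frac{\rho^{2}}{z} \Big\}
          \;=\; \Big( r + \frac{\rho^{2}}{r} \Big)\cos\theta ,
\qquad
\frac{\partial u}{\partial \nu}\Big|_{r=\rho}
   \;=\; \Big( 1 - \frac{\rho^{2}}{r^{2}} \Big)\cos\theta\,\Big|_{r=\rho} \;=\; 0 ,
```

and $u-h=\Re\{\rho^2/z\}=O(|{\bf x}|^{-1})$, so all three conditions in \eqref{condeqn} hold; the perturbation is a pure dipole.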
One can suitably approximate the conductivity, location, and shape of an inclusion, or of several inclusions, using the first few leading terms of the GPTs by adopting an optimization framework \cite{AGKLY14,AKLZ12}. An inclusion with corners generally induces strong scattering close to corner points (vertices). The understanding and application of such corner effects is a subject of great interest. Detection methods for inclusions from boundary measurements have been developed in \cite{ikehata1999enclosing}. The gradient blow-up of the electric potential for a bow-tie structure, which consists of two closely located domains with corners, was investigated in \cite{kang2017optimal}. It has been shown that the spectral features of the NP operator of a domain with corners are significantly different from those of a smooth domain \cite{Carleman-book-16,HKL17,HMM-NJP-11,HP-ACHA-13,PP-arXiv}. It is worth mentioning that the spectrum of the NP operator has recently drawn significant attention in relation to plasmonic resonances \cite{ammari2013spectral,KLY, YL17}. This paper analyzes the effects of inclusion corners on the perturbation of an electric potential. {We present new geometric factors that clearly reveal the boundary information of an inclusion, including the existence of corner points.} They satisfy mutually equivalent relations with the GPTs, so that one can compute them from the GPTs and vice versa. The geometric factors form an infinite sequence of complex numbers. {The main result of this paper is that the geometric factors are actually the Fourier series coefficients of a generalized external angle function (for a precise statement, see Theorem \ref{thm:mainsigma}).} The generalized external angle function has Dirac delta singularities at corner points. As a consequence, we can determine when corner points exist: the geometric factor sequence converges to zero when the inclusion has a smooth boundary.
However, it does not converge to zero, but oscillates, if there is any corner point on the boundary of the inclusion. {In practice, only a finite number of leading terms of the GPTs can be obtained due to the limitation of the signal-to-noise ratio in the measurements. If sufficiently many terms of the GPTs are given, then the partial Fourier series sum of the generalized external angle function computed with the GPTs shows isolated high peaks at corner points, as illustrated in Section \ref{sec:experi}. } Our derivation is motivated by the recent result \cite{KLL14}, where explicit relationships between the exterior Riemann mapping coefficients and the GPTs were derived. In addition, we consider the internal conformal mapping, with which the geometric factors are defined and which is obtained by reflecting the exterior Riemann mapping function across the unit circle. We prove the Fourier relation between the geometric factors and the generalized external angle by applying the Carath\'eodory mapping theorem on the uniform convergence of conformal mappings (see Appendix \ref{sec:cara}). One can numerically compute the GPTs for an arbitrary domain with corners to a high degree of precision using integral equation techniques, the Nystr\"om discretization, and recursively compressed inverse preconditioning (RCIP) \cite{HelsOjal08}. The geometric factors can then be obtained from their relations with the GPTs. We present some numerical examples to validate and visualize our results. {The rest of the paper is organized as follows: In Section \ref{sec:GPTsRiemann} we derive explicit connections between the GPTs and the coefficients of the Riemann mappings and define the geometric factors. In Section \ref{sec:curvilinear} we express the geometric factors for curvilinear polygons in terms of vertices and external angles. 
Section \ref{sec:genLip} establishes equivalent relations between the geometric factors and the GPTs for arbitrary Lipschitz domains and investigates corner effects. Section \ref{sec:experi} presents numerical examples. We prove several recursive relations in Section \ref{sec:proof}, and we conclude with some discussion. } \section{Generalized polarization tensors and Riemann mappings} \label{sec:GPTsRiemann} \subsection{Multipole Expansion} {We identify ${\bf x}=(x_1,x_2)$ in $\mathbb{R}^2$ with $z=x_1+{\rm{i}}x_2\in\mathbb{C}$ for notational convenience. Let $h({\bf x})=\mbox{Re}\{H(z)\}$ with \begin{displaymath} H(z) = \alpha_0 +\sum_{n=1}^\infty \alpha_n z^n,\quad \alpha_n=a_n^c+{\rm i}a_n^s. \end{displaymath} Then, it is shown in \cite{ammari2007polarization, ammari2013enhancement} that the solution $u$ to \eqnref{condeqn} satisfies $u({\bf x})=\mbox{Re}\{U(z)\}$, where $U$ is a complex analytic function in $\mathbb{C}\setminus\overline{\Omega}$ such that \begin{equation}\label{eqn:multipole} U(z)=\alpha_0 +\sum_{n=1}^\infty \left[a_n^c\left(z^n-\sum_{m=1}^\infty \frac{\gamma_{mn}^1+\gamma_{mn}^2}{z^m}\right)+{\rm i}a_n^s\left(z^n-\sum_{m=1}^\infty\frac{\gamma_{mn}^1 -\gamma_{mn}^2}{z^m}\right)\right] \end{equation} for $|z|$ sufficiently large with} \begin{equation} \label{GPT12} \begin{cases} \displaystyle \gamma_{kn}^1=\frac{1}{4\pi k} \left[M_{kn}^{cc}-M_{kn}^{ss}+{{\rm{i}}}(M_{kn}^{cs}+M_{kn}^{sc})\right], \\[2mm] \displaystyle\gamma_{kn}^2=\frac{1}{4\pi k} \left[M_{kn}^{cc}+M_{kn}^{ss}-{{\rm{i}}}(M_{kn}^{cs}-M_{kn}^{sc})\right],\quad k,n\in\mathbb{N}. \end{cases} \end{equation} Here the quantities $\{M_{kn}^{\alpha\beta}\}_{k,n\in\mathbb{N}}$ ($\alpha, \beta\in \{c,s\}$) are the so-called (contracted) GPTs.
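A quick consistency check of \eqref{eqn:multipole} and \eqref{GPT12} (our worked example): for the insulated disk of radius $\rho$, one verifies directly that the perturbed potentials for $H(z)=z$ and $H(z)={\rm i}z$ are $\Re\{z+\rho^2/z\}$ and $\Re\{{\rm i}(z-\rho^2/z)\}$, respectively. Matching \eqref{eqn:multipole} term by term gives

```latex
\gamma_{11}^{1}+\gamma_{11}^{2} \;=\; -\rho^{2},
\qquad
\gamma_{11}^{1}-\gamma_{11}^{2} \;=\; \rho^{2},
\qquad\text{hence}\qquad
\gamma_{11}^{1}=0,\quad \gamma_{11}^{2}=-\rho^{2},
```

and inverting \eqref{GPT12} then yields $M_{11}^{cc}=M_{11}^{ss}=-2\pi\rho^{2}$ and $M_{11}^{cs}=M_{11}^{sc}=0$ for the disk.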
The zero Neumann condition on $\partial \Omega$ in \eqnref{condeqn} implies that \begin{equation}\label{Uconst}\Im U\,=\,\mbox{constant on }\partial\Omega.\end{equation} The GPTs are defined in terms of boundary integrals as follows: In polar coordinates we set \begin{displaymath} P_n^c({\bf x}) = r^n \cos n\theta\,, \qquad P_n^s({\bf x}) = r^n \sin n\theta\,, \end{displaymath} and define \begin{equation} \label{gpt} M^{\alpha\beta}_{kn} := \int_{\partial \Omega} P_k^\beta({\bf x}) (-\frac{1}{2} I - \mathcal{K}^*_{\partial\Omega})^{-1}[\nu \cdot \nabla P_n^\alpha ]({\bf x}) \, d\sigma({\bf x}) \end{equation} for $k,n\in\mathbb{N}$ and $\alpha,\beta\in \{c,s\}$. The operator $\mathcal{K}^*_{\partial\Omega}$ is the Neumann-Poincar\'e (NP) operator \begin{equation} \label{introkd2} \mathcal{K}^*_{\partial\Omega} [\varphi] ({\bf x}) = \frac{1}{2\pi}\, {\rm p.v.}\int_{\partial\Omega} \frac{\langle {\bf x} -{\bf y}, \nu_{\bf x} \rangle}{|{\bf x}-{\bf y}|^2} \varphi({\bf y})\,d\sigma({\bf y})\;, \quad {\bf x} \in \partial\Omega, \end{equation} where $\nu_{\bf x}$ is the outward unit normal vector to $\partial\Omega$ and ${\rm p.v.}$ denotes the Cauchy principal value. It was shown in \cite{escauriaza1992regularity,verchota1984layer} that $\lambda I- \mathcal{K}_{\partial\Omega}^*$ is invertible on $L^2_0(\partial \Omega)$ for $|\lambda|\geq1/2$. See \cite{ammari2007polarization} for more properties of the NP operator. From \eqnref{GPT12}, one can obtain the $M_{kn}^{\alpha\beta}$ from the $\gamma_{kn}^j$ and vice versa. In this sense we will refer to the $\gamma_{kn}^j$ as GPTs as well in this paper. It is worth mentioning that the contracted GPTs have been used in making a near-cloaking structure \cite{ammari2013enhancement, ammari2013enhancementmaxwell} and that they can be used as shape descriptors \cite{AGKLY14}. More applications of the GPTs can be found in \cite{ammari2013mathematical} and references therein.
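The definition \eqref{gpt} can be checked numerically with a basic Nystr\"om discretization. The following is a minimal sketch of our own (it is not the RCIP scheme used later for corner domains); the radius $\rho$ and node count $N$ are illustrative assumptions, and the circle is chosen because the exact value $M_{11}^{cc}=-2\pi\rho^2$ is available: on a circle the NP kernel is constant, so $\mathcal{K}^*_{\partial\Omega}$ annihilates mean-zero densities.

```python
import numpy as np

rho, N = 1.3, 256                       # disk radius (example value), quadrature nodes
t = 2 * np.pi * np.arange(N) / N
x = rho * np.stack([np.cos(t), np.sin(t)], axis=1)   # boundary nodes
nu = x / rho                                         # outward unit normals
w = 2 * np.pi * rho / N                              # trapezoidal arclength weight

# Nystrom matrix of K*: kernel <x - y, nu_x> / (2 pi |x - y|^2); on a circle the
# diagonal (y -> x) limit equals curvature / (4 pi) = 1 / (4 pi rho)
diff = x[:, None, :] - x[None, :, :]
r2 = np.sum(diff**2, axis=2)
num = np.sum(diff * nu[:, None, :], axis=2)
with np.errstate(invalid="ignore", divide="ignore"):
    Kmat = num / (2 * np.pi * r2)
np.fill_diagonal(Kmat, 1.0 / (4 * np.pi * rho))
Kmat *= w

# M_11^{cc}: data nu . grad P_1^c = cos(theta); P_1^c = rho cos(theta) on the circle
data = np.cos(t)
phi = np.linalg.solve(-0.5 * np.eye(N) - Kmat, data)
M11cc = np.sum(rho * np.cos(t) * phi) * w

print(M11cc, -2 * np.pi * rho**2)       # the two values agree
```

For a domain with corners the kernel is no longer smooth at the vertices, which is exactly where RCIP enters.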
\subsection{Two Riemann mapping functions} \label{section:mappings} Since $\Omega$ is simply connected, thanks to the Riemann mapping theorem there exists a unique exterior Riemann mapping $\Phi : \mathbb{C}\setminus \overline{\mathbb{D}} \to \mathbb{C}\setminus \overline{\Omega}$ of the form \begin{equation} \label{Phi} \Phi[\Omega](\zeta)=C\left( \mu_{-1}\zeta+\mu_0+\frac{\mu_1}{\zeta}+\frac{\mu_2}{\zeta^2}+\cdots\right), \end{equation} where $\mathbb{D}$ denotes the unit disc centered at the origin, $C>0$ is a constant, and we set $\mu_{-1}=1$. We may simply write $\Phi(\zeta)$ when the domain is clear from the context. For $k\ge 1$, the coefficients $\mu_k$ are invariant under translation and scaling of $\Omega$. In \cite{KLL14}, an explicit relation was derived between the exterior Riemann mapping coefficients $\mu_k$ and the GPTs associated with $\Omega$. This means that one can compute the $\mu_k$ from the GPTs. In this paper, we additionally consider the internal conformal mapping $S$ that is obtained by reflecting the exterior Riemann mapping function across the unit circle. Recall that we assume $0\in\Omega$. We will then derive formulas for the GPTs using both the coefficients of $\Phi$ and those of $S$. By ${\Omega^r}$ we denote the reflection of $\Omega$ across the unit circle, {\it i.e.}, \begin{equation} {\Omega^r}:=\Big\{\frac{1}{\zeta}\;\Big|\; \zeta\in \mathbb{C}\setminus\overline{\Omega}\Big\}\cup\{0\}. \end{equation} Note that $(\Omega^r)^r=\Omega$. We also define $S[\Omega^r]:\mathbb{D}\rightarrow\Omega^r$ by \begin{equation}\label{RG:conformal} S[\Omega^r](w):= \begin{cases} \displaystyle\frac{1}{\Phi[\Omega](\frac{1}{w})}\quad&\mbox{for }w\in \mathbb{D}\setminus\{0\},\\ \displaystyle0\quad&\mbox{for }w=0. \end{cases} \end{equation} We may simply write $S(w)$ when the domain is clear from the context.
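The reflection \eqref{RG:conformal} can be checked numerically; in the sketch below (ours) the coefficients $\mu_0,\mu_1,\mu_2$ are made up, $C=1$, higher coefficients are zero, and the Taylor coefficients of $S$ are read off by sampling on a small circle and taking an FFT (a discrete Cauchy integral).

```python
import numpy as np

# Truncated exterior map Phi(z) = z + mu0 + mu1/z + mu2/z^2  (C = 1, mu_{-1} = 1;
# the coefficient values are illustrative only)
mu0, mu1, mu2 = 0.2, 0.1 - 0.05j, 0.03

def Phi(z):
    return z + mu0 + mu1 / z + mu2 / z**2

# S(w) = 1/Phi(1/w) is analytic at w = 0; sample on |w| = r and recover the
# Taylor coefficients b_k via b_k ~ (1/N) sum_j S(r e^{i t_j}) e^{-i k t_j} / r^k
N, r = 256, 0.3
w = r * np.exp(2j * np.pi * np.arange(N) / N)
coef = np.fft.fft(1.0 / Phi(1.0 / w)) / N / r ** np.arange(N)

b2, b3 = coef[2], coef[3]
print(abs(coef[1] - 1.0))               # b_1 = 1, as it must be
```

The Cauchy-product relations of the lemma that follows, e.g. $\mu_0+b_2=0$ and $\mu_1+b_3+b_2\mu_0=0$, then hold to machine precision.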
Then, ${\Omega^r}$ is a simply connected domain containing $0$ and $S:\mathbb{D}\rightarrow {\Omega^r}$ is the interior Riemann mapping corresponding to $\Omega^r$ satisfying $S(0)=0$ and $S'(0)>0$. It is obvious from \eqnref{Phi} that $S$ admits the series expansion \begin{equation} \label{Sseries0} S[\Omega^r](w)=\frac{1}{C}\left(b_1w+b_2w^2+\cdots\right)\quad\mbox{in } \mathbb{D} \end{equation} with some complex numbers $b_k$. Note that $b_1=1$ because $\mu_{-1}=1$. \begin{lemma}\label{eqn:mub} The coefficients of $\Phi[\Omega]$ have the following equivalent relation with those of $S[\Omega^r]$: \begin{equation} \label{CauchyProduct} \mu_{k-1} +b_{k+1}+\sum_{j=2}^k b_j \mu_{k-j}=0\,,\quad k\ge 1\,. \end{equation} \end{lemma} \noindent {\sl Proof}. \ For $0<|w|<1$, we have \begin{align*} 1&=S[\Omega^r](w)\cdot \Phi[\Omega](1/w)=\left(\sum_{j=1}^\infty b_j w^j\right)\left(\sum_{k=-1}^\infty \mu_k w^k\right)=1+\sum_{k=1}^\infty \left(\sum_{j=1}^{k+1}b_j \mu_{k-j}\right)w^k. \end{align*} Since $b_1=\mu_{-1}=1$, this proves the lemma.\qed \subsection{Generalized polarization tensors and Riemann mapping coefficients} \label{sec:GPTandcoeff} { Let $V_1$ be the analytic function in $\mathbb{C}\setminus\overline{\Omega}$ such that $\Re\{V_1(z)\}$ is the solution to \eqnref{condeqn} with $h(\mathbf{x})=\Re\{z^n\}$. Similarly, we define $V_2$ to be the analytic function in $\mathbb{C}\setminus\overline{\Omega}$ such that $\Re\{{\rm i}V_2(z)\}$ is the solution with $h(\mathbf{x})=\Re\{{\rm i}z^n\}$. From \eqnref{eqn:multipole} we have \begin{align*} \left(V_1\circ\Phi\right)(\zeta)&=\Phi(\zeta)^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1+\gamma_{mn}^2 }{\Phi(\zeta)^m},\\ \left(V_2\circ\Phi\right)(\zeta)&=\Phi(\zeta)^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1-\gamma_{mn}^2 }{\Phi(\zeta)^m} \end{align*} for sufficiently large $|\zeta|$.
As discussed in \eqnref{Uconst}, the zero Neumann condition in \eqnref{condeqn} implies that $\Im \{V_1\circ\Phi(\zeta)\}$ and $\Im\{{\rm i}V_2\circ\Phi(\zeta)\}$ are constant for $|\zeta|=1$ and, thus, \begin{equation}\label{extension1} {\rm i}\,\Im\{V_1\circ\Phi\}+\Im\{{\rm i}V_2\circ\Phi\}\,=\,\mbox{constant for }|\zeta|=1. \end{equation} Note that \begin{align} {\rm i}\,\Im\{V_1\circ\Phi\}+\Im\{{\rm i}V_2\circ\Phi\} &=\frac{1}{2}\left(V_1\circ\Phi+V_2\circ\Phi\right)(\zeta)- \frac{1}{2}\overline{\left(V_1\circ\Phi-V_2\circ\Phi\right)(\zeta)}\notag\\\label{extension2} &=\Phi(\zeta)^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1 }{\Phi(\zeta)^m} +\overline{\sum_{m=1}^\infty \frac{ \gamma_{mn}^2 }{\Phi(\zeta)^m}}, \end{align} where the second equality holds for sufficiently large $|\zeta|$. This yields the following lemma, which plays an essential role in deriving relations between the GPTs and the $\mu_k$ in \cite{KLL14}. \begin{lemma} \label{lemma:basic}(\cite{KLL14}) The function $V_1\circ\Phi + V_2\circ\Phi$ has an entire extension. \end{lemma} } To state our result, we define two multi-index sequences $\{\mu_{n,k}\}$ and $\{b_{n,k}\}$ ($n,k\in\mathbb{N}$, $k\geq n$) such that the following formal expansions hold: \begin{equation}\notag \sum_{k=n}^{\infty} \mu_{n,k}{x^k}=\left(\sum_{k=1}^\infty \mu_{k-2}x^k \right)^n,\quad \sum_{k=n}^{\infty}{b_{n,k}}{x^k}=\left(\sum_{k=1}^\infty b_k x^k \right)^n. \end{equation} In other words, \begin{align*} \mu_{n,k} &=\sum_{s_1+s_2+\cdots+s_k =n, \atop s_1+2s_2+\cdots+ks_k=k} \frac{n!}{s_1!s_2!\cdots s_k!} {\mu_{-1}}^{s_1}{\mu_0}^{s_2}\cdots {\mu_{k-2}}^{s_k},\\ b_{n,k} &=\sum_{s_1+s_2+\cdots+s_k =n, \atop s_1+2s_2+\cdots+ks_k=k} \frac{n!}{s_1!s_2!\cdots s_k!} {b_1}^{s_1}{b_2}^{s_2}\cdots {b_{k}}^{s_k}. \end{align*} Here, $s_1,\ldots,s_k$ are non-negative integers. In particular, we have \begin{equation} \label{bkk} b_{1,k} = b_{k}\,,\quad \mu_{1,k}=\mu_{k-2}\,, \quad \mu_{k,k}=\mu_{-1}^k=1\,,\quad b_{k,k}=b_1^k=1\,.
\end{equation} We can deduce the following proposition using $\mu_{n,k}$ and $b_{n,k}$. See Section \ref{proof:theorem1} for a detailed proof. Lemma \ref{lemma:basic} plays an essential role in the derivation. \begin{prop}\label{theorem1} The GPTs associated with $\Omega$ satisfy recurrence formulas involving the coefficients of $\Phi[\Omega]$ and $S[\Omega^r]$. For each $k,n\in \mathbb{N}$, we have \begin{align} \label{Gamma1} \displaystyle\gamma_{kn}^1&= C^{k+n}\left( \mu_{n,2n+k}-\sum_{m=1}^{k-1}\frac{\gamma_{mn}^1}{C^{m+n}} b_{m,k}\right),\\[1.5mm] \displaystyle\gamma_{kn}^2 &= \begin{cases} \displaystyle- C^{k+n}\left(\bar{\mu}_{n,2n-k} +\sum_{m=1}^{k-1}\frac{\gamma_{mn}^2}{C^{m +n}} b_{m,k}\right), \qquad &k\le n\,,\\[1.5mm] \displaystyle-C^{k+n}\sum_{m=1}^{k-1} \frac{\gamma_{mn}^2 }{C^{m+n}} b_{m,k}\,, \qquad &k\ge n+1\,. \label{Gamma2} \end{cases} \end{align} \end{prop} We can also express $b_k$ and $\mu_k$ in terms of the GPTs as follows. See Section \ref{sec:proof:others} for a proof. \begin{prop}\label{GPTtob} We have \begin{align} \label{eqn:C} C &= \sqrt{-\gamma_{11}^2}\,,\\ b_k& = \sum_{m=2}^{k} \frac{\gamma_{m1}^2}{C^{m+1}} b_{m,k}\,, \quad k\geq 2\,. \label{eqn:bk1} \end{align} \end{prop} For example, with $k=2,3$ in \eqnref{eqn:bk1} we deduce, using $b_{2,3}=2b_2$, that $$ b_2=\gamma_{21}^2\left(-\gamma_{11}^2\right)^{-\frac{3}{2}},\quad b_3 =2(\gamma_{21}^2)^2\left(-\gamma_{11}^2\right)^{-3} +{\gamma_{31}^2}\left(-\gamma_{11}^2\right)^{-2}\,. $$ Now, with $k=1,2$ in \eqnref{CauchyProduct} we have $ \mu_0=-b_2$ and $ \mu_1 = -b_3-b_2\mu_0$. \begin{remark} \label{remark:bk2} We can show, as proved in Section \ref{sec:proof:others}, that $b_k$ also satisfies \begin{equation} \gamma_{11}^1 b_k = {C^{2}} \left(\mu_{k} - \sum_{m=2}^{k} \frac{\gamma_{m1}^1}{C^{m+1}} b_{m,k}\right), \quad k\geq 2\,.
\label{eqn:bk2} \end{equation} Through this relation as well as \eqnref{eqn:bk1}, we can see that there are domain-independent relationships among the GPTs. Additional examples are provided in \cite{KLL14}. \end{remark} \subsection{Geometric factor $\sigma_k$ and equivalent relations} \begin{definition} We define a new sequence of geometric factors $\{\sigma_k\}_{k=1}^\infty$ for $\Omega$ as \begin{equation} \label{sigma} \sigma_k[\Omega] := k(k+1)b_{k+1} - \sum_{j=1}^{k-1} (j+1)b_{j+1} {\sigma_{k-j}}\,,\qquad k\geq 1\,. \end{equation} \end{definition} We can write this definition compactly as $ \displaystyle\{\sigma_k\}_{k=1}^\infty=\mathcal{P}\big(\{b_k\}_{k=2}^\infty\big) $ or \begin{equation} \label{eqn:P} \displaystyle \sigma_k=\sum_{i_1+2i_2+\cdots+ki_k=k}P_{k, i_1,\ldots,i_k} b_{2}^{i_1}\cdots b_{k+1}^{i_k}\,, \end{equation} with some integer coefficients $P_{k,i_1,\ldots,i_k}$. Here, $i_1,\ldots,i_k$ are non-negative integers. The definition \eqnref{sigma} implies that $\mathcal{P}$ is invertible. Thanks to the relations between the GPTs and the Riemann mapping coefficients in the previous section, we can see that there are mutually equivalent connections among the GPTs $\gamma_{kn}^j$, the exterior Riemann mapping coefficients $\mu_k$, the interior Riemann mapping coefficients $b_k$, and the geometric factors $\sigma_k$.
For instance, we have \begin{alignat*}{3} &b_2=\frac{1}{2}\sigma_1, \quad && b_3=\frac{1}{6}\left(\sigma_2 + \sigma_1^2\right),\quad && b_4=\frac{1}{24}\left(2\sigma_3+3\sigma_1\sigma_2+\sigma_1^3\right)\\ &\mu_0 = -\frac{1}{2}\sigma_1\,, \quad &&\mu_1=\frac{1}{12}\left(- 2\sigma_2 +\sigma_1^2\right)\,, \quad&&\mu_2 = \frac{1}{24}\left(-2\sigma_3+\sigma_1\sigma_2\right)\,, \end{alignat*} and \begin{alignat*}{3} &\gamma^1_{11}= - \frac{C^2}{6}\sigma_2 + \frac{C^2}{12}\sigma_1^2,\quad \gamma_{12}^1=-\frac{C^3}{6}\sigma_3+\frac{C^3}{4}\sigma_1\sigma_2- \frac{C^3}{12}\sigma_1^3,\quad \gamma_{21}^1=-\frac{C^3}{12}\sigma_3+\frac{C^3}{8}\sigma_1\sigma_2- \frac{C^3}{24}\sigma_1^3\,,\\ &\gamma_{11}^2=-C\bar{C}\,,\qquad \gamma_{12}^2= C\bar{C}^2\sigma_1\,,\qquad \gamma_{21}^2=\frac{C^2\bar{C}}{2} \sigma_1\,. \end{alignat*} \section{Geometric factor for a curvilinear polygon}\label{sec:curvilinear} In this section we restrict $\Omega$ to be a curvilinear polygon, in other words $\Omega=P^r$ with a simple polygon $P$, and deduce explicit connections between the geometric factors and the corner geometry of $\Omega$. Figure \ref{figure:triangle}(a,b) illustrates a curvilinear polygon and its reflection across the unit circle. More specifically, we let $P$ be a simply connected region bounded by a polygon whose vertices are $A_1,\ldots,A_n$ (ordered consecutively) with $n\geq 3$ and with external angles $\beta_1\pi,\ldots,\beta_n\pi$. We assume $0\in P$. Note that $-1<\beta_j<1$ and $\sum_{j=1}^n \beta_j =2$. It is well known that the interior conformal mapping of a simple polygon can be expressed as the {\it Schwarz-Christoffel integral}.
For the polygon $P$ described above, we have \begin{equation}\notag S[P](z) = C_1 \int_{0}^{z} {\prod_{j=1}^n \left(w - a_j \right)^{-\beta_j}}dw + C_2\,, \end{equation} where $C_1$ and $C_2$ are complex constants, and $a_1,\ldots,a_n$ are $n$ distinct pre-vertices on $\partial \mathbb{D}$ satisfying $S(a_j)=A_j$ for each $j=1,\ldots,n$. One can find a detailed explanation of the Schwarz-Christoffel integral in many textbooks, for example in \cite{SS}. Assuming $S(0) = 0$ and $S'(0)=1/C>0$, we have a slightly different formulation: \begin{equation} S[P](z)=\frac{1}{C}\int_{0}^{z} \pi(w)\,dw\,,\qquad \pi(z)={\prod_{j=1}^n \left(1 - \frac{z}{a_j}\right)^{-\beta_j}}\,. \end{equation} Here, the branch cut is chosen such that $\pi(0)=1$. The coefficients in the expansion \eqnref{Sseries0} satisfy $C\cdot S'(z)=1+\sum_{k=1}^\infty b_{k+1} (k+1) z^{k}$. From the fact that $\pi(z)=C\cdot S'(z)$ we deduce \begin{equation}\label{piz} b_{k+1}=\frac{\pi^{(k)}(0)}{(k+1)!}\,.\end{equation} \begin{lemma} \label{lemma:sigma} For the polygon $P$ described above, we have for each $k\in\mathbb{N}$ \begin{equation}\label{def:sigma}\sigma_k[P^r]=\sum_{j=1}^n \beta_j a_j^{-k}.\end{equation} \end{lemma} \noindent {\sl Proof}. \ Set $\widetilde{\sigma}_k=\sum_{j=1}^n \beta_j a_j^{-k}$ and consider the function \begin{equation}\notag F_k(w)=(k-1)!\sum_{j=1}^n \beta_j{a_j}^{-k} \left(1-\frac{w}{a_j} \right)^{-k},\quad k\geq 1\,.\end{equation} One can easily see that $\pi'=\pi F_1$ and ${F_k}'= F_{k+1}$. Applying the Leibniz rule, we have \begin{equation*} \pi^{(k)} = (\pi F_1)^{(k-1)} = \sum _{j=0}^{k-1}{k-1 \choose j}{\pi}^{(j)}{F_1}^{(k-1-j)} = \sum _{j=0}^{k-1}{k-1 \choose j}{\pi}^{(j)}{F_{k-j}}. \end{equation*} Note that $F_k(0)=(k-1)!\,\widetilde{\sigma}_k$.
Evaluating the above equation at $0$, we obtain \begin{equation} \pi^{(k)}(0) = (k-1)!\,\widetilde{\sigma}_k + \sum_{j=1}^{k-1} \frac{(k-1)!}{j!}\pi^{(j)}(0){\widetilde{\sigma}_{k-j}}.\label{eqn:pisigma} \end{equation} The lemma then follows as a direct consequence of \eqnref{piz} and \eqnref{sigma}.\qed \section{Analysis of corner effects}\label{sec:genLip} We now consider an arbitrary planar domain with corners. We assume that $\Omega$ is a simply connected domain bounded by a piecewise regular analytic curve with a finite number of corners. Figures \ref{fig:asymm}(a,b) illustrate an example of such a domain and its reflection across the unit circle. We characterize the corner effects in the geometric factors (and, as a result, in the GPTs and the Riemann mapping coefficients) by approximating $\Omega^r$ with simple polygons. \subsection{Generalized external angle of $\partial {\Omega^r}$ } Let $S[\Omega^r]:\mathbb{D}\rightarrow \Omega^r$ be given as in Section \ref{section:mappings}. Since $\partial \Omega$ is a piecewise regular analytic curve, so is $\partial {\Omega^r}$. Since $\partial {\Omega^r}$ is a Jordan curve, the interior Riemann mapping $S$ extends to a bijective continuous function from $\overline{\mathbb{D}}$ to $\overline{{\Omega^r}}$. Let $\zeta$ be an arbitrary point on $\partial \Omega^r$ which is not a corner point. Then the inverse mapping $S^{-1}:\Omega^r \rightarrow\mathbb{D}$ extends holomorphically across $\zeta$ (see \cite{cho1996regularity}) and $(S^{-1})'(\zeta)\neq 0$ (see \cite[Theorem 18, p. 217, and Theorem 20, p. 226]{clebsch1931mathematische}). From the inverse function theorem, $S$ is smooth near $S^{-1}(\zeta)$. We assume that the curvature of $\partial \Omega^r$ is uniformly bounded except at the corner points. Denote by $\alpha(t)=(x(t),y(t))$, $t\in[0,1]$, the piecewise analytic parametrization of $\partial \Omega^r$ induced by the Riemann mapping $S[\Omega^r]$.
In other words, \begin{displaymath} \alpha(t)=S[\Omega^r](e^{2\pi t{\rm i}})\,,\qquad t\in[0,1]\,, \end{displaymath} and there are finitely many points $0\leq t_1<\cdots<t_M<1$ such that $\alpha(t_l)$, $l=1,\ldots,M$, are the corner points of $\partial\Omega^r$. For notational convenience, we set $t_{M+1}=t_1+1$ and regard $\alpha$ as a periodic function with period $1$. Owing to the assumption $C>0$ in the series expansions for $S$, $\alpha$ has positive orientation. We define the external angle $\beta_l \pi$ at each corner $\alpha(t_l)$ as the signed angle between the two vectors $\alpha'(t_l-)$ and $\alpha'(t_l+)$, {\it i.e.}, $\beta_l\in(-1,1)$ and \begin{equation}\label{def:beta}\beta_l \pi = \arg\left(\alpha'(t_l+)\right)-\arg\left(\alpha'(t_l-)\right)\end{equation} modulo $2\pi$. We then generalize the concept of external angle to arbitrary boundary points as follows. The generalized external angle is essential in understanding corner effects in perturbations of an electric potential. \begin{definition} Define a function $\Theta : [0,1] \to \mathbb{R}$ as \begin{equation}\label{def:theta}\Theta(t): = \frac{1}{\pi} k_{\rm{g}}(t) |\alpha'(t)|+ \sum_{l=1}^M \beta_l \delta(t-t_l)\,, \quad t\in[0,1]\,,\end{equation} where $k_{\rm{g}}$ denotes the (geodesic) curvature of $\alpha$ in $\mathbb{R}^2$, i.e., $$k_{\rm{g}}(t)=\frac{x'(t)y''(t)-x''(t)y'(t)}{|\alpha'(t)|^3}\,.$$ We call $\pi\Theta(t)$ the generalized external angle of $\partial {\Omega^r}$ at $\alpha(t)$. We may indicate the associated domain by writing $\Theta[\partial {\Omega^r}]$ if necessary. \end{definition} Recall that the internal angle at the corner $\alpha(t_l)$ is $\pi(1-\beta_l)$. From \cite{warschawski1955theorem} we have $S^{-1}(z)\sim (z-\alpha(t_l))^\frac{1}{1-\beta_l}$ for $z\in\Omega^r$ near $\alpha(t_l)$ and, thus, $\alpha(t)-\alpha(t_l)\sim (t-t_l)^{1-\beta_l}$.
Here, $f\sim g$ near $z_0$ means $\lim_{z\rightarrow z_0}\frac{f(z)}{g(z)}\in\mathbb{C}\setminus\{0\}$. Therefore, $ \frac{1}{\pi} k_{\rm{g}}(t) |\alpha'(t)|$ is integrable. From the Gauss-Bonnet formula we have \begin{equation} \int_{0}^{1} \Big[k_{\rm{g}}(t) |\alpha'(t) | + \sum_{l=1}^M \beta_l \pi \delta(t-t_l)\Big] dt = \int_{\partial \Omega^r} k_{\rm{g}}(s)ds + \sum_{l=1}^M \beta_l \pi =2\pi\,. \label{gauss} \end{equation} \subsection{Approximation of ${\Omega^r}$ by a sequence of polygons} Fix $n\in\mathbb{N}$ and consider the equally spaced nodes $\big\{\frac{j}{n}\,|\,1\leq j\leq n\big\}$ on $[0,1]$. For each $l=1,\ldots,M$, let $j_l$ be the index such that $\alpha(\frac{j_l}{n})$ is the point of $\big\{\alpha(\frac{j}{n})\,|\,1\leq j\leq n\big\}$ nearest to the corner point $\alpha(t_l)$. We set \begin{align*} p_{n,j} &=\displaystyle\begin{cases} \alpha(t_l),\quad &\mbox{if }j=j_l\mbox{ for some }l=1,\ldots,M,\\ \alpha(\frac{j}{n}),\quad&\mbox{otherwise}, \end{cases}\\ e_{n,j} &=\displaystyle \begin{cases} e^{2\pi t_l{\rm i}},\quad &\mbox{if }j=j_l\mbox{ for some }l=1,\ldots,M,\\ e^{2\pi\frac{ j}{n}{\rm i}},\quad&\mbox{otherwise}. \end{cases} \end{align*} We denote by $P_n$ the polygon defined by the chain of edges $[p_{n,j}, p_{n,j+1}]$, $j=1,\ldots,n$, with $p_{n,n+1}=p_{n,1}$. We let $\beta_{n,j}\pi$ be the external angle of $P_n$ at each $p_{n,j}$. Recall that $\beta_{n,j}\in(-1,1)$. \begin{figure} \caption{The polygon $P_n$ (in red) approximates ${\Omega^r}$.} \end{figure} For sufficiently large $n$, $P_n$ is an $n$-sided simple polygon, and one can show that the sequence of polygons $P_n$ converges in the sense of Carath\'eodory to its kernel $\Omega^r$. See Appendix A for the definition of this notion of convergence. Figure \ref{approx} shows how $P_n$ approximates $\Omega^r$ as $n$ increases. We now derive the asymptotics of the external angles.
\begin{lemma} \label{lemma:betadecay} Let $n$ be large enough so that $P_n$ is a simple polygon. For $p_{n,j}$ located away from the corner points, we have \begin{equation}\label{betanj} \beta_{n,j}= \displaystyle\frac{1}{n\pi}{k_{\rm{g}}\left(\frac{j}{n}\right)}\left|\alpha'\left(\frac{j}{n}\right)\right|+o\left(\frac{1}{n}\right). \end{equation} \end{lemma} \noindent {\sl Proof}. \ We set $h=\frac{1}{n}$ and $t=\frac{j}{n}$. Note that the external angle $\beta_{n,j}\pi$ is the relative angle of $\alpha(t+h)-\alpha(t)$ with respect to the direction of $\overrightarrow{\alpha(t-h)\alpha(t)}$. Let us denote $$\Delta^{\pm}_{h} \alpha (t) = \pm \left(\alpha(t\pm h)-\alpha(t)\right).$$ Then the angle $\beta_{n,j}$ satisfies $$\left|\beta_{n,j}\right|= \frac{1}{\pi} \arccos \left(\frac{\langle \Delta^+_{h}\alpha(t),\ \Delta^-_{h}\alpha(t) \rangle}{|\Delta^+_{h}\alpha(t) | \cdot|\Delta^-_{h}\alpha(t) |}\right).$$ Let us suppose that $p_{n,j}=\alpha(\frac{j}{n})$ is located away from the corner points. By Taylor's theorem, we have $$ \Delta^\pm_{h}\alpha(t) = \alpha'(t) h\pm\frac{\alpha''(t)}{2!}h^2+\frac{\alpha'''(t)}{3!}h^3+o(h^3) $$ and, therefore, \begin{align*} \frac{\langle \Delta^+_{h}\alpha(t),\ \Delta^-_{h}\alpha(t) \rangle}{|\Delta^+_{h}\alpha(t) | \cdot| \Delta^-_{h}\alpha(t) |}&=1-\frac{h^2}{2}\left( \frac{x'(t)y''(t) - x''(t)y'(t)}{[x'(t)]^2 + [y'(t)]^2} \right)^2+o(h^2)\\ &= 1-\frac{h^2}{2}k_{\rm{g}}(t)^2 | \alpha'(t) |^2 +o(h^2). \end{align*} Therefore, we get $$ \left|\beta_{n,j}\right| = \frac{h}{\pi} |k_{\rm{g}}(t)||\alpha'(t)|+o(h) = \displaystyle\frac{1}{n\pi}{\left|k_{\rm{g}}\left(\frac{j}{n}\right)\right|}\left|\alpha'\left(\frac{j}{n}\right)\right|+o\left(\frac{1}{n}\right).
$$ Since $|\beta_{n,j}|$ is small (as $t$ is not a corner point), the sign of $\beta_{n,j}$ coincides with that of the cross product $(\Delta_h^-\alpha(t))\times(\Delta_h^+\alpha(t))$. Since $(\Delta_h^-\alpha(t))\times(\Delta_h^+\alpha(t))$ has the same sign as $k_{\rm{g}}(\frac{j}{n})$, the lemma follows. \qed Since $\partial\Omega^r$ is a piecewise analytic curve, a similar analysis as in the previous lemma shows that for any $\epsilon>0$, there exists $\delta=\delta(\epsilon)>0$ such that \begin{equation}\label{nearcorner1} \sum_{p_{n,j}\in B_\delta}|\beta_{n,j}|<\epsilon \end{equation} with $B_\delta=\cup_{l=1}^M\left \{z\in\partial P_n:0<|z-\alpha(t_l)|<\delta\right\}$. Since $\alpha(t)-\alpha(t_l)\sim(t-t_l)^{1-\beta_l}$, we have \begin{equation}\label{nearcorner2} \left|\frac{j}{n}-t_l\right|=O(\delta^{\frac{1}{1-\beta_l}})\quad\mbox{for }p_{n,j}\in B_\delta. \end{equation} \noindent{\textbf{Riemann mapping functions.}} We set $\Omega_n =(P_n)^r$ and, thus, $(\Omega_n)^r = P_n$. The Riemann mapping functions $\Phi[\Omega_n]$ and $S[P_n]$ admit the series expansions \begin{align*} \Phi[\Omega_n](z)&=C_n\left(\mu_{n,-1}z+\mu_{n,0}+\frac{\mu_{n,1}}{z} +\frac{\mu_{n,2}}{z^2}+\cdots\right), \quad z\in\mathbb{C}\setminus \mathbb{D}\,,\\ S[P_n](z)&=\frac{1}{C_n}\left(b_{n,1}z+b_{n,2} z^2 + b_{n,3}z^3+\cdots\right),\quad z\in\mathbb{D}\,, \end{align*} with $\mu_{n,-1}=b_{n,1}=1$ and some constants $C_n>0$. Since $P_n$ converges in the sense of Carath\'eodory to its kernel ${\Omega^r}$, Carath\'eodory's mapping theorem (see Appendix A for the statement and references) yields the uniform convergence of $S[P_n]$ to $S[\Omega^r]$ and of $\Phi[\Omega_n]$ to $\Phi[\Omega]$ as $n\rightarrow\infty$. Therefore, for each $k$ we have \begin{displaymath} \mu_{n,k}\rightarrow\mu_k\,,\quad b_{n,k}\rightarrow b_k\,,\quad C_n\rightarrow C\,,\quad\mbox{as }n\rightarrow\infty\,.
\end{displaymath} Here, $\mu_k,\,b_k,\, C$ are the coefficients corresponding to $\Omega$. From \eqnref{eqn:P} we have, for each $k$, \begin{equation}\label{eqn:sigmaconv} \sigma_{k}[\Omega_n]\rightarrow\sigma_k[\Omega]\,,\quad\mbox{as }n\rightarrow\infty\,. \end{equation} Since the boundary of $\Omega^r$ is a Jordan curve, the corresponding Riemann mapping $S[\Omega^r]$ extends to a bijective continuous function from $\overline{\mathbb{D}}$ onto $\overline{\Omega^r}$, as explained before. By Carath\'eodory's mapping theorem, $S[P_n]^{-1}$ converges uniformly to $S[\Omega^r]^{-1}$ on any compact subset of $\Omega^r$ as $n \to \infty$. Moreover, it was proved in \cite[pp. 75--79, vol. 2]{markushevich1977theory} that $S[P_n]^{-1}$ converges uniformly to $S[\Omega^r]^{-1}$ on $\overline{\Omega^r}$ if $P_n$ decreases to $\Omega^r$. In that proof, the equicontinuity of $\{S[P_n]^{-1}\}$ plays an essential role. By slightly modifying the proof of \cite[Theorem 2.26, vol.~3]{markushevich1977theory}, we obtain the following: \begin{lemma}\label{lemma:equi} $\left\{S[P_n]^{-1}|_{\overline{P_n\cap \Omega^r}}\right\}_{n\in \mathbb{N}}$ is equicontinuous. \end{lemma} \noindent {\sl Proof}. \ We set $f_n = S[P_n]^{-1}$ and $f=S[\Omega^r]^{-1}$. If the lemma is not true, then there exist $\epsilon_0>0$, a sequence $\{n_k\}$ with $n_1<n_2<\dots$, and two sequences $\{ z_k' \}$, $\{z_k''\}$ such that $z_k',z_k''\in P_{n_k}\cap \Omega^r$ and $$|f_{n_k}(z_k')-f_{n_k}(z_k'')|\geq \epsilon_0\mbox{ for each }k,\quad\mbox{while }\lim_{k\rightarrow \infty}(z_k'-z_k'')=0.$$ Due to the uniform convergence of $f_n$ to $f$ on compact subsets of $\Omega^r$, we have (by taking a subsequence of $\{n_k\}$ if necessary) \begin{equation}\notag \lim_{k\rightarrow\infty} z_k'=\lim_{k\rightarrow\infty}z_k''=\xi\in\partial \Omega^r \end{equation} and \begin{equation} \label{eqn:fnk} f_{n_k}(z_k')\rightarrow w'\,,\quad f_{n_k}(z_k'')\rightarrow w''\,,\quad |w'|=|w''|=1\,, \quad w'\neq w''\,.
\end{equation} We can take a sequence $\rho_k$ such that $z_k',z_k''\in \left\{z:|z-\xi|<\rho_k\right\}$ and $\rho_k\rightarrow 0$ as $k\rightarrow\infty$. From the construction of $P_n$, there exists $R>0$ such that $\left\{z:|z-\xi|=\rho \right\}\cap P_{n_k}$ is an arc with center $\xi$ for any $\rho\in (\rho_k,R)$. From \eqnref{eqn:fnk} we may assume \begin{equation} \label{eqn:fnk2} |f_{n_k}(z_k')-f_{n_k}(z_k'')|>0\,,\quad |f_{n_k}(z_k')|>0.5\,, \quad |f_{n_k}(z_k'')|>0.5\,. \end{equation} Let $l_k'$, $l_k''$ be the line segments joining $0$ to $f_{n_k}(z_k')$ and to $f_{n_k}(z_k'')$, respectively, and consider the two curves $f_{n_k}^{-1}(l_k')$ and $f_{n_k}^{-1}(l_k'')$. For each $\rho\in(\rho_k, R)$, there is an arc $\Lambda_{k,\rho}$ contained in $P_{n_k}$ with center $\xi$ such that one boundary point, say $z_{k,\rho}'$, is in $f_{n_k}^{-1}(l_k')$ and the other, say $z_{k,\rho}''$, is in $f_{n_k}^{-1}(l_k'')$. From $f(0)=0$ and the continuity of $f^{-1}$, for given $\epsilon>0$ we have $\{|z|<\delta\}\subset f\left(\{|z|<\epsilon\}\right)$ for some $\delta=\delta(\epsilon)>0$. Since $|\xi|=1$, we can take $R$ sufficiently small so that $f(\Lambda_{k,\rho})$ stays away from $0$.
From the uniform convergence of $f_n^{-1}$ to $f^{-1}$ near $0$, there is a $d>0$ independent of $k$ such that \begin{equation}\label{eqn:inf}\inf_{\rho\in(\rho_k,R)}\bigl\{|f_{n_k}(z_{k,\rho}')|, |f_{n_k}(z_{k,\rho}'')|\bigr\}> d\,.\end{equation} Therefore, since $f_{n_k}(z_{k,\rho}')\in l_k'$ and $f_{n_k}(z_{k,\rho}'')\in l_k''$, there is a $\tilde{d}>0$ independent of $k$ such that $$0<\tilde{d}<\left|f_{n_k}(z_{k,\rho}')-f_{n_k}(z_{k,\rho}'')\right|\quad\mbox{for all }\rho\in(\rho_k, R).$$ We now compute $$0<\tilde{d}<\left|\int^{z_{k,\rho}''}_{z_{k,\rho}'}f'_{n_k}(z)dz\right|\leq\int_{\Lambda_{k,\rho}}\left|f'_{n_k}(\xi+\rho e^{{\rm i}\theta})\right|\rho\, d\theta.$$ By the Cauchy-Schwarz inequality, $${\tilde{d}}^2\leq 2\pi \int_{\Lambda_{k,\rho}}\left|f'_{n_k} (\xi+\rho e^{{\rm i}\theta})\right|^2\rho^2 d\theta\quad\mbox{for all } \rho\in(\rho_k, R)\,.$$ Dividing both sides by $\rho$ and integrating, we finally have \begin{align*} \tilde{d}^2\ln \frac{R}{\rho_k}& \leq 2\pi\int_{\rho_k}^R \int_{\Lambda_{k,\rho}} \left|f'_{n_k}(\xi+\rho e^{{\rm i}\theta})\right|^2\rho\, d\theta\, d\rho\\ &\leq 2\pi\int_{B_R(\xi)\cap P_{n_k}} \left|f'_{n_k}(\xi+\rho e^{{\rm i}\theta})\right|^2\rho\, d\theta\, d\rho \leq 2\pi\cdot\mbox{area}\left(f_{n_k}(P_{n_k})\right) \leq 2\pi\cdot\mbox{area}(\mathbb{D}). \end{align*} The left-hand side tends to infinity as $\rho_k\rightarrow 0$, while the right-hand side is bounded independently of $k$; this is a contradiction.\qed \begin{cor}\label{lem:SnS} We have $$\sup_{j=1,\ldots,n} \left| \left(S[P_n]^{-1}\circ S[\Omega^r]\right)(e_{n,j}) -e_{n,j}\right|\rightarrow 0\quad\mbox{as }n\to \infty\,.$$\end{cor} \noindent {\sl Proof}. \ We set $f_n = S[P_n]^{-1}$ and $f=S[\Omega^r]^{-1}$ as in the proof of the previous lemma.
For $z=p_{n,j}$, which lies in $\partial (P_n\cap \Omega^r)$, we decompose $$ \bigl| f_n(z) - f(z) \bigr| \le \bigl| f_n(z) - f_n(\zeta) \bigr| + \bigl| f_n(\zeta) - f(\zeta) \bigr| + \bigl| f(\zeta) - f(z) \bigr|$$ with $\zeta\in P_n\cap \Omega^r$ located close to $z$. From Lemma \ref{lemma:equi}, the uniform convergence of $f_n$ to $f$ on compact subsets of $\Omega^r$, and the uniform continuity of $f$ on $\overline{\Omega^r}$, we easily derive that $\bigl| f_n(z) -f(z)\bigr|\rightarrow 0$ uniformly for $j=1,\dots,n$ as $n\to \infty$. Since $p_{n,j}=S[\Omega^r](e_{n,j})$, this finishes the proof. \qed \subsection{Corner effects} Since $P_n$ is a polygon, $S[P_n]$ admits the representation \begin{equation} \notag S[P_n](z) = \frac{1}{C_n} \int_{0}^{z} {\prod_{j=1}^n \left(1 - \frac{w}{a_{n,j}}\right)^{-\beta_{n,j}}}dw \end{equation} with a positive constant $C_n$ and pre-vertices $a_{n,j}=S[P_n]^{-1}(p_{n,j})=\left(S[P_n]^{-1}\circ S[\Omega^r]\right)(e_{n,j})$. We denote the geometric factors of $\Omega_n$ by $\sigma_{n,k}$, in other words, $\sigma_{n,k}=\sigma_k[\Omega_n]$. Then it follows from Lemma \ref{lemma:sigma} that \begin{equation} \label{sigma_nk} \sigma_{n,k}=\sum_{j=1}^n \beta_{n,j} a_{n,j}^{-k},\quad k\in\mathbb{N}\,. \end{equation} To evaluate $\sigma_{n,k}$ we need to know $a_{n,j}=\left(S[P_n]^{-1}\circ S[\Omega^r]\right)(e_{n,j})$, the pre-vertices of $P_n$. However, the problem of finding the pre-vertices of a given polygon, the \textit{Schwarz-Christoffel parameter problem}, is challenging to solve for arbitrary polygons. There are numerical algorithms for finding pre-vertices of special polygons, for instance \cite{DT}. Because of this difficulty we instead consider $$ \widetilde{\sigma}_{n,k}:=\sum_{j=1}^n \beta_{n,j} e_{n,j}^{-k}.$$ \begin{lemma} \label{eqn:twosigma} We have $$\sigma_k[\Omega]=\lim_{n\rightarrow\infty}\widetilde{\sigma}_{n,k}\,, \quad k\in\mathbb{N}\,.$$ \end{lemma} \noindent {\sl Proof}.
\ Note that $$ \widetilde{\sigma}_{n,k} =\sum_{1\leq j\leq n,\atop j\neq j_1,\ldots,j_M}\beta_{n,j} e_{n,j}^{-k}+\sum_{j=j_1,\ldots,j_M}\beta_{n,j} e_{n,j}^{-k}\, $$ and that $|a^{-k}-b^{-k}|=|b^k-a^k|/|a^kb^k|\leq k|a-b|$ for $a,b\in\partial\mathbb{D}$. Therefore, we have \begin{align*}|\sigma_{n,k}-\widetilde{\sigma}_{n,k}| &\leq \sum_{j=1}^n k|\beta_{n,j}||a_{n,j}-e_{n,j}|\\ &=\sum_{1\leq j\leq n,\atop j\neq j_1,\ldots,j_M}k|\beta_{n,j}| |a_{n,j}-e_{n,j}|+\sum_{j=j_1,\ldots,j_M}k|\beta_{n,j}||a_{n,j}-e_{n,j}|\,. \end{align*} Fix $\epsilon>0$ and choose $\delta>0$ such that \eqnref{nearcorner1} holds. Using Lemma \ref{lemma:betadecay} and the fact that $\beta_{n,j}\in(-1,1)$, we obtain \begin{equation}\notag \left|\sigma_{n,k}-\widetilde{\sigma}_{n,k}\right| \leq k \sup_{j=1,\ldots,n} \left|a_{n,j} -e_{n,j}\right| \left(\sum_{ p_{n,j}\notin B_\delta \atop j\neq j_1,\ldots,j_M}\frac{1}{n\pi} \left|k_{\rm{g}}\left(\frac{j}{n}\right)\alpha'\left(\frac{j}{n}\right)\right| +\sum_{ p_{n,j}\in B_\delta }|\beta_{n,j}|+M\right).\end{equation} Recall that $\frac{1}{\pi} k_{\rm{g}}(t) |\alpha'(t)|$ is integrable. From Corollary \ref{lem:SnS} and \eqnref{nearcorner1}, it follows for each $k$ that \begin{equation} \left|\sigma_{n,k}-\widetilde{\sigma}_{n,k}\right|\rightarrow 0\quad\mbox{as } n\rightarrow \infty\,.\notag \end{equation} This proves the lemma, thanks to \eqnref{eqn:sigmaconv}.\qed The following theorem shows that the limit is given by the Fourier coefficients of $\Theta$. \begin{theorem}\label{thm:mainsigma} Assume that $\Omega$ is a simply connected domain bounded by a piecewise regular analytic curve with a finite number of corners. Then we have \begin{equation} \label{thmeqn1} \sigma_{k}[\Omega]=\widehat{\Theta}(k)\,,\quad k\in\mathbb{N}\,, \end{equation} where $\widehat{\Theta}(k)$ denotes the $k$-th Fourier coefficient of $\Theta[\partial\Omega^r]$, that is, $\widehat{\Theta}(k)=\int_{0}^{1} \Theta[\partial\Omega^r] (t) e^{-2\pi{\rm i}kt} dt$.
\end{theorem} \noindent {\sl Proof}. \ Let $\epsilon>0$ and choose $\delta>0$ such that \eqnref{nearcorner1} holds. Using Lemma \ref{lemma:betadecay} and \eqnref{nearcorner2}, we have \begin{align*} \lim_{n \to \infty} \widetilde{\sigma}_{n,k} & = \lim_{n \to \infty} \sum_{j=1}^n \beta_{n,j}e_{n,j}^{-k} \nonumber\\ &=\lim_{n\rightarrow\infty}\sum_{p_{n,j}\notin B_\delta \atop j\neq j_1,\ldots,j_M} \frac{1}{n\pi}k_{\rm{g}}\left(\frac{j}{n}\right) \left|\alpha'\left(\frac{j}{n}\right)\right|e_{n,j}^{-k}+O(\epsilon) +\sum_{l=1}^M \beta_l e_{n,j_l}^{-k}\\ &=\int_{A_\delta}\frac{1}{\pi}k_{\rm{g}}(t)|\alpha'(t)|e^{-2\pi{\rm i}kt}dt +\sum_{l=1}^M \beta_l e^{-2\pi{\rm i}kt_l }+O(\epsilon), \end{align*} where $A_\delta\subset[0,1]$ is the set of parameters corresponding to boundary points outside $B_\delta$; by \eqnref{nearcorner2}, its complement consists of intervals of length $O(\delta^{\frac{1}{1-\beta_l}})$ around the $t_l$. Since $\epsilon$ can be arbitrarily small, we have $$\lim_{n \to \infty} \widetilde{\sigma}_{n,k} = \int_{0}^{1} \Theta \left(t\right) e^{-2\pi{\rm i}kt} dt =\widehat{\Theta}(k).$$ The theorem then follows from Lemma \ref{eqn:twosigma}. \qed Since $\frac{1}{\pi} k_{\rm{g}}(t) |\alpha'(t)|$ is integrable, its Fourier coefficients decay to zero by the Riemann-Lebesgue lemma. As a direct consequence of Theorem \ref{thm:mainsigma} and the definition of $\Theta$ we have the following criterion for the existence of corner points: \begin{cor} \label{lemma:sigmadecay} If $\partial \Omega$ has at least one corner point, then $\sigma_k[\Omega] = O(1)$ but does not decay to zero; instead, $\sigma_k[\Omega]$ oscillates within a bounded set in $\mathbb{C}$ as $k$ goes to infinity. Otherwise, if $\partial \Omega$ is a regular analytic curve, then $\sigma_k = O(r^k)$ for some constant $r\in (0,1)$. \end{cor} From \eqnref{gauss}, the constant Fourier coefficient of $\Theta$ is $2$. Hence, the Fourier series of $\Theta : [0,1] \to \mathbb{R}$ is $$ \Theta[\partial\Omega^r](t) = 2 + 2\sum_{k=1}^\infty \Re\{\sigma_k\} \cos(2 \pi kt) + 2\sum_{k=1}^\infty \Im\{\sigma_k\} \sin(2 \pi kt)\,.
$$ \begin{cor}\label{radialsymm} $\Omega$ is $n$-point radially symmetric if and only if $\sigma_k = 0$ for every $k \not\equiv 0 \pmod{n}$. \end{cor} \noindent {\sl Proof}. \ If $\Omega$ is $n$-point radially symmetric, so is $\Omega^r$. This means that $\Theta[\partial\Omega^r](t) = \Theta[\partial\Omega^r](t+\frac{1}{n})$ for all $t$, which is equivalent to $\Re\{\sigma_k\}=\Im\{\sigma_k\}=0$ for every $k \not\equiv 0 \pmod{n}$. \qed \subsection{Imaging from a finite number of components of the GPTs}\label{sec:finiteGPTs} The GPTs can be obtained from multistatic measurements \cite{ammari2014target}. Infinitely many components of the GPTs are needed to recover the full sequence of geometric factors. However, one can accurately acquire only a finite number of components of the GPTs from far-field measurements and, as a consequence, only a finite number of geometric factors. Using the equivalence relations between the Riemann mapping coefficients, the GPTs, and the geometric factors, one can determine $$C,\quad \{b_k\}_{k\leq N},\quad\{\sigma_k\}_{k\leq N-1},\quad\{\mu_k\}_{k\leq N-2}$$ from $$\left\{\gamma^2_{k1}\right\}_{k\leq N},\quad N\geq 2.$$ As a consequence, we have $\Phi_{N-2}[\Omega]$ and $\Theta_{N-1}[\partial\Omega^r]$, where $\Phi_m[\Omega]$ and $\Theta_m[\partial \Omega^r]$, $m\geq1$, are the truncations of $\Phi[\Omega]$ and of the Fourier series of $\Theta[\partial \Omega^r]$ at the $m$-th order, {\it i.e.}, \begin{align} \Phi_m[\Omega](\zeta)&=C\sum_{k=-1}^m \mu_k\zeta^{-k},\quad \zeta=\frac{1}{e^{2\pi t{\rm{i}}}},\label{eqn:PhiN}\\ \label{eqn:thetaN} \Theta_m[\partial\Omega^r](t) &= 2 + 2\sum_{k=1}^m \Re\{\sigma_k\} \cos(2 \pi kt) + 2\sum_{k=1}^m \Im\{\sigma_k\}\sin(2\pi kt) \end{align} for $t\in[0,1]$. From \eqnref{def:theta}, $\Theta_m[\partial\Omega^r](t)$ has an isolated peak at $t=t_0$ for large $m$ if $S[\Omega^r](e^{2\pi t_0{\rm{i}}})$ is a corner point of $\partial\Omega^r$.
For such $t_0$, $\Phi[\Omega](e^{-2\pi t_0{\rm i}})$ is a corner point of $\partial \Omega$. \section{Numerical results}\label{sec:experi} \subsection{Description of the numerical method} \label{sec:RCIP} The chief difficulty in the numerical evaluation of the right-hand side of~(\ref{gpt}) is the discretization and solution of a Fredholm second-kind integral equation on the piecewise smooth boundary $\partial\Omega$. For this, we use {\it Nystr\"om discretization}~\cite[Chapter~4.1]{Atki97} based on 16-point composite Gauss--Legendre quadrature and a computational mesh that is dyadically refined toward the corner vertices of $\partial\Omega$. The resulting linear system for the values of the unknown layer density at the discretization points is compressed using a lossless technique called {\it recursively compressed inverse preconditioning} (RCIP)~\cite{HelsOjal08} and solved using a standard direct method. We have implemented our scheme in {\sc Matlab}. The execution time for the numerical examples in this paper is typically a few seconds. The RCIP technique serves two purposes. First, it greatly accelerates the solution process when boundary singularities are present. In fact, the combination of Nystr\"om discretization and RCIP acceleration enables the solution of Fredholm second-kind integral equations on piecewise smooth boundaries at approximately the same speed at which they can be solved on smooth boundaries using Nystr\"om discretization alone. Second, RCIP stabilizes the solution process to the extent that integral equations modeling well-conditioned boundary value problems for elliptic partial differential equations in piecewise smooth domains can often be solved with almost machine precision.
See~\cite{HKL17,HelsKarl16,HMM-NJP-11,HP-ACHA-13} for examples where RCIP-accelerated Nystr\"om discretization has been used to compute, very accurately, polarizabilities and resonances of various arrangements of dielectric objects with sharp corners and edges. See also the recently revised compendium~\cite{Tutorial-arXiv} for a comprehensive review of the RCIP technique and an ample reference list. It should be mentioned that RCIP compresses the integral equation around one corner of $\partial\Omega$ at a time. The compression requires, for each corner, a local boundary parameterization $z_{\rm loc}(t)$. If the original parameterization $z(t)$ has a corner vertex at $t=t_i$, then this local parameterization is defined by \begin{equation} z_{\rm loc}(t):= z(t+t_i)-z(t_i)\,. \label{eq:locparam} \end{equation} This means that mesh refinement occurs at $t\approx 0$ and that $z_{\rm loc}(0)$ lies at the origin. For high achievable accuracy in the solution, implementing $z_{\rm loc}(t)$ directly from the definition~(\ref{eq:locparam}) is usually inadvisable because of numerical cancellation. Rather, $z_{\rm loc}(t)$ should be available in a form that allows for evaluation with high relative accuracy also for small arguments $t$. In the present work we obtain local parameterizations that are accurate for small arguments using series expansion techniques. \subsection{Symmetric domain} \label{example:shape1} In this example, we consider a symmetric curvilinear triangle $\Omega$, namely the reflection of an equilateral triangle $P$ across the unit circle. Note that $P=\Omega^r$. See Figure \ref{figure:triangle}(a,b) for the shapes of $\Omega$ and $P$. The interior Riemann mapping function corresponding to the triangle $P$ is \begin{equation*} S[\Omega^r](z) = \frac{1}{C} \int_{0}^{z} \prod_{j=1}^3 \left(1 - \frac{w}{a_j}\right)^{-\frac{2}{3}} dw\,, \end{equation*} with $C=1$ and $a_j = \exp(\frac{2\pi j}{3} {\rm i})$ for $j=1,2,3$.
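Lemma \ref{lemma:sigma} makes this example easy to check numerically. The following minimal Python sketch (an independent sanity check, not the {\sc Matlab} code used for the experiments) evaluates $\sigma_k=\sum_j \beta_j a_j^{-k}$ with the pre-vertices and angles given above:

```python
import cmath

# Pre-vertices a_j = exp(2*pi*i*j/3) and exterior-angle fractions
# beta_j = 2/3 of the equilateral triangle P, as given above.
a = [cmath.exp(2j * cmath.pi * m / 3) for m in (1, 2, 3)]
beta = [2.0 / 3.0] * 3

def sigma(k):
    """Geometric factor sigma_k = sum_j beta_j * a_j^(-k) (Lemma on sigma_k)."""
    return sum(b * aj ** (-k) for b, aj in zip(beta, a))

# sigma_k vanishes unless 3 divides k, in which case it equals 2
for k in range(1, 10):
    expected = 2.0 if k % 3 == 0 else 0.0
    assert abs(sigma(k) - expected) < 1e-9
```

The assertions reproduce, up to rounding, the periodic values $0,0,2,0,0,2,\ldots$ derived analytically for this domain.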
Since $P$ is a polygon, the boundary curvature is zero except at the corner points. Hence, we have $\Theta[\partial \Omega^r](t) = \sum_{j=1}^3 \frac{2}{3}\delta(t-\frac{j}{3})$, and $\sigma_k[\Omega]$ takes the periodic values \begin{equation} \sigma_k\left[\Omega\right]= \begin{cases} \displaystyle 0,\quad&\mbox{if }k\not\equiv0 \quad (\mbox{mod }3)\,,\\[1mm] \displaystyle 2,\quad&\mbox{if }k\equiv0 \quad (\mbox{mod }3)\,. \end{cases}\label{sigmak:example1} \end{equation} This is consistent with Corollary \ref{lemma:sigmadecay} and the existence of three corners on $\partial\Omega$. Since $\Omega$ is $3$-point radially symmetric, $\sigma_k[\Omega] = 0$ for every $k \not \equiv 0 \pmod 3$, as shown in Corollary \ref{radialsymm}. Figure \ref{figure:triangle}(c) shows the graph of the geometric factors and Figure \ref{figure:triangle}(d) the graph of $\Theta_{21}[\partial\Omega^r](t)$. $\Theta_{21}[\partial\Omega^r](t)$ exhibits three isolated peaks at the $t$-values corresponding to corner points, which are marked by red vertical dashed lines. The peaks correspond to the Dirac delta singularities in $\Theta$. We now solve \eqnref{gpt} numerically using the RCIP-accelerated Nystr\"om scheme described in Section \ref{sec:RCIP} and acquire the GPTs from \eqnref{GPT12}. Using the computed GPTs, we then calculate $\sigma_k$ via~(\ref{eqn:bk1}) and~(\ref{sigma}). Table \ref{table:sigma_ex1} displays the first 20 computed values of $\sigma_k$. The acquired non-zero values agree with the analytic values in \eqnref{sigmak:example1} to between 10 and 14 digits. The zero values agree even better. \begin{figure} \caption{Symmetric cornered domain. (a) and (b) illustrate a symmetric curvilinear triangle $\Omega$ and its reflection, the equilateral triangle $P=\Omega^r$, across the unit circle. (c) shows the geometric factors $\sigma_k\left[\Omega\right]$ and (d) the truncation of $\Theta[\partial\Omega^r]$.} \end{figure} \begin{table}[p] {\begin{tabular}{|c|c|c|c|} \hline $k$ & $\sigma_k$ & $k$ & $\sigma_k$ \\ \hline $1$ & $0$ & $11$ & $-1\times10^{-15}$\\ \hline $2$ & $-3\times10^{-17}$ & $12$ & $2.00000000001$\\ \hline $3$ & $1.999999999999994$ & $13$ & $1\times10^{-15}$\\ \hline $4$ & $1\times10^{-33}$ & $14$ & $1\times10^{-16}$\\ \hline $5$ & $-3\times10^{-16}$ & $15$ & $2.0000000001$\\ \hline $6$ & $1.99999999999997$ & $16$ & $8\times10^{-16}$\\ \hline $7$ & $2\times10^{-32}$ & $17$ & $-3\times10^{-15}$\\ \hline $8$ & $-3\times10^{-16}$ & $18$ & $2.000000001$\\ \hline $9$ & $2.0000000000003$ & $19$ & $-7\times10^{-16}$\\ \hline $10$ & $4\times10^{-16}$ & $20$ & $-7\times10^{-15}$\\ \hline \end{tabular}} \caption{The first 20 geometric factors $\sigma_k$ of the symmetric domain in Figure \ref{figure:triangle}(a), obtained from numerically computed GPTs. The values agree well with the analytic expression \eqnref{sigmak:example1}.} \label{table:sigma_ex1} \end{table} \subsection{Non-symmetric domain} \label{example:shape2} In this example, $\Omega$ is the non-symmetric domain with corners shown in Figure \ref{fig:asymm}(a) (see Appendix \ref{appen:para} for the parametrization). As in Section~\ref{example:shape1}, the GPTs are numerically computed and the geometric factors $\sigma_k$ are calculated from the GPTs via (\ref{GPT12}), (\ref{gpt}), (\ref{eqn:bk1}), and~(\ref{sigma}). Table~\ref{table:sigma_ex2} displays the first 20 geometric factors. Note that $\sigma_k\left[\Omega\right]$ shows an oscillatory behavior as $k$ increases, as also shown in Figure \ref{fig:asymm}(c). The graph of $\Theta_{28}[\partial \Omega^r](t)$ shows three isolated peaks at the locations of the corner points, which are marked by red vertical lines. Again, the peaks correspond to the Dirac delta singularities in $\Theta$.
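The quantity $\widetilde{\sigma}_{n,k}$ is simple to evaluate once the external angles of an approximating polygon are known. The following Python sketch (an illustration only; the experiments in this paper are done in {\sc Matlab}) computes the signed external angles $\beta_{n,j}\pi$ from the polygon vertices, as in the proof of Lemma \ref{lemma:betadecay}, and evaluates $\widetilde{\sigma}_{n,k}=\sum_j\beta_{n,j}e_{n,j}^{-k}$ for a square, for which, by symmetry, we assume the pre-vertices coincide with the vertex directions:

```python
import cmath
import math

def external_angles(vertices):
    """Signed external angles (as fractions beta_j of pi) of a positively
    oriented simple polygon given by a list of complex vertices."""
    n = len(vertices)
    betas = []
    for idx in range(n):
        prev, cur, nxt = vertices[idx - 1], vertices[idx], vertices[(idx + 1) % n]
        # signed turning angle from (cur - prev) to (nxt - cur), in (-pi, pi]
        betas.append(cmath.phase((nxt - cur) / (cur - prev)) / math.pi)
    return betas

def sigma_tilde(vertices, prevertices, k):
    """tilde sigma_{n,k} = sum_j beta_{n,j} * e_{n,j}^(-k)."""
    return sum(b * e ** (-k)
               for b, e in zip(external_angles(vertices), prevertices))

# Square with vertices at the 4th roots of unity: beta_j = 1/2 at each vertex,
# and sigma_k = 2 if 4 divides k, sigma_k = 0 otherwise.
verts = [cmath.exp(2j * math.pi * m / 4) for m in range(4)]
assert all(abs(b - 0.5) < 1e-9 for b in external_angles(verts))
assert abs(sigma_tilde(verts, verts, 4) - 2) < 1e-9
assert abs(sigma_tilde(verts, verts, 3)) < 1e-9
```

The external angles sum to $2$, in agreement with \eqnref{gauss}, and the computed values match the periodic pattern expected for a $4$-point radially symmetric cornered domain.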
\begin{figure} \caption{Non-symmetric cornered domain. (a) and (b) illustrate a cap-shaped non-symmetric domain $\Omega$ and its reflection $\Omega^r$ across the unit circle. (c) shows the oscillatory behavior of the geometric factors $\sigma_k[\Omega]$ as $k$ increases, which is consistent with Corollary \ref{lemma:sigmadecay}. (d) shows the truncation of $\Theta[\partial \Omega^r]$.} \end{figure} \begin{table}[p] {\begin{tabular}{|c|c|c|c|} \hline $k$ & $\sigma_k$ & $k$ & $\sigma_k$ \\ \hline $1$ & $ 0.336144826114240 - 0.076400757440234{\rm i}$ & $11$ & $ 0.186721819078 - 0.595616065541{\rm i}$\\ \hline $2$ & $-1.75172536453942 - 0.64675188584893{\rm i}$ & $12$ & $ 0.023499384397 + 0.939740685579{\rm i}$\\ \hline $3$ & $ 0.03406793409600 + 1.78388113685821{\rm i}$ & $13$ & $ 0.49141111608 - 0.42493152167{\rm i}$\\ \hline $4$ & $ 0.82403911365013 - 0.50742133234639{\rm i}$ & $14$ & $ 0.14107795747 - 0.23706422509{\rm i}$\\ \hline $5$ & $ 0.49942961065065 + 0.12520990108117{\rm i}$ & $15$ & $-0.11870768404 + 0.08096200618{\rm i}$\\ \hline $6$ & $-0.1083287142652 - 0.8609605708526{\rm i}$ & $16$ & $ 0.0444798323 - 0.8639904912{\rm i}$\\ \hline $7$ & $-0.3918884064658 - 0.1538531930618{\rm i}$ & $17$ & $-0.6290121604 + 0.8209980128{\rm i}$\\ \hline $8$ & $-0.147595479311 + 0.493831447944{\rm i}$ & $18$ & $ 0.1981029905 - 0.6878495747{\rm i}$\\ \hline $9$ & $-0.072598481664 - 0.719868325959{\rm i}$ & $19$ & $-0.4512400133 + 0.9581338937{\rm i}$\\ \hline $10$ & $-0.330200317533 + 1.052309941949{\rm i}$ & $20$ & $ 0.407788339 - 0.162013757{\rm i}$\\ \hline \end{tabular}} \caption{The first 20 geometric factors $\sigma_k$ of the non-symmetric domain in Figure \ref{fig:asymm}(a), obtained from numerically computed GPTs.} \label{table:sigma_ex2} \end{table} In Figure \ref{fig:truncation}, we consider the problem of imaging $\Omega$ from a finite number of components of the GPTs.
As discussed in Section \ref{sec:finiteGPTs}, one can obtain the truncated series $\Phi_{N-2}[\Omega](\zeta)$ from $\left\{\gamma^2_{k1}: k\leq N\right\}$. In this example, we reconstruct $\Omega$ using $N=6$ and $N=29$. The image of the unit circle under $\Phi_{N-2}[\Omega]$ approximates the shape of $\partial \Omega$ even for small $N$. For $N=29$, the graph of $\Theta_{N-1}[\partial\Omega^r](t)$ shows isolated peaks at the $t$-values corresponding to corner points, while it does not for $N=6$. \begin{figure} \caption{Imaging $\Omega$ from a finite number of components of the GPTs. $\Omega$ is given as in Figure \ref{fig:asymm}.} \end{figure} \subsection{Smooth domain} In Figure \ref{figure:smoothtriangle} we consider a smooth domain, denoted by $\widetilde{\Omega}$, with $3$-point radial symmetry. Note that $\widetilde{\Omega}$ has a shape similar to that of $\Omega$ in Figure \ref{figure:triangle}. In Figure \ref{fig:asymmetricsmooth} we consider another smooth domain, again denoted by $\widetilde{\Omega}$, which has a shape similar to that of $\Omega$ in Figure \ref{fig:asymm}. In both cases the domain $\widetilde{\Omega}$ is defined by $\partial\widetilde{\Omega}=\{P(z):|z|=1\}$ for a polynomial $P(z)$, so that the corresponding geometric factors can be calculated analytically. In contrast to the examples with cornered domains, the geometric factors of $\widetilde{\Omega}$ decay exponentially. The partial sums of the Fourier series of $\Theta$ take relatively large values at the $t$-values corresponding to boundary points of $\widetilde{\Omega}^r$ with large curvature. Since $\widetilde{\Omega}$ in Figure \ref{figure:smoothtriangle} is $3$-point radially symmetric, we have $\sigma_k[\widetilde{\Omega}] = 0$ for every $k \not \equiv 0 \pmod 3$.
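The peak-based corner detection used in the cornered examples above can be emulated directly from the geometric factors. The following Python sketch (an illustration only, using the analytic $\sigma_k$ of the triangle example \eqnref{sigmak:example1} rather than computed GPTs) evaluates the truncated series $\Theta_m$ of \eqnref{eqn:thetaN} on a grid and locates its largest peak:

```python
import math

def theta_m(t, sigmas):
    """Truncated Fourier series of the generalized external angle:
    Theta_m(t) = 2 + 2*sum Re(sigma_k) cos(2 pi k t)
                   + 2*sum Im(sigma_k) sin(2 pi k t),  k = 1..m."""
    s = 2.0
    for k, sk in enumerate(sigmas, start=1):
        s += 2 * sk.real * math.cos(2 * math.pi * k * t)
        s += 2 * sk.imag * math.sin(2 * math.pi * k * t)
    return s

# Analytic geometric factors of the triangle example: 2 if 3 divides k, else 0.
m = 21
sigmas = [complex(2.0 if k % 3 == 0 else 0.0) for k in range(1, m + 1)]

# Sample Theta_m on a grid and locate its largest peak; for this domain the
# peaks sit at the corner parameters t = 0 (= 1), 1/3 and 2/3.
grid = [i / 300.0 for i in range(300)]
peak_t = max(grid, key=lambda t: theta_m(t, sigmas))
assert min(abs(peak_t - c) for c in (0.0, 1/3, 2/3, 1.0)) < 1e-2
```

For a smooth domain the same computation yields no sharp peaks, since the exponentially decaying $\sigma_k$ produce a smooth partial sum.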
\begin{figure} \caption{Symmetric smooth domain. (a) and (b) illustrate a smooth domain $\widetilde{\Omega}$ with $3$-point radial symmetry and its reflection $\widetilde{\Omega}^r$ across the unit circle. (c) shows the geometric factors $\sigma_k[\widetilde{\Omega}]$ and (d) the truncation of $\Theta[\partial\widetilde{\Omega}^r]$.} \end{figure} \begin{figure} \caption{Non-symmetric smooth domain. (a) and (b) illustrate a non-symmetric smooth domain $\widetilde{\Omega}$ and its reflection $\widetilde{\Omega}^r$ across the unit circle. (c) shows the graph of $\sigma_k[\widetilde{\Omega}]$ and (d) the truncation of $\Theta[\partial\widetilde{\Omega}^r]$.} \end{figure} \section{Proofs} \label{sec:proof} \subsection{Proof of Proposition \ref{theorem1}}\label{proof:theorem1} By \eqnref{Phi}, \eqnref{RG:conformal}, and \eqnref{Sseries0} and the definitions of $\mu_{n,k}$ and $b_{n,k}$, we have \begin{align} \quad & \Phi(\zeta)^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1 }{\Phi(\zeta)^m} = \Phi(\zeta)^n - \sum_{m=1}^\infty \gamma_{mn}^1 S\bigl(\frac{1}{\zeta}\bigr)^m \nonumber \\ &= C^n\zeta^{2n}\bigl(\sum_{k=1}^\infty \frac{\mu_{k-2}}{\zeta^k} \bigr)^n- \sum_{m=1}^{\infty} \gamma_{mn}^1 \frac{1}{C^m} \bigl( \sum_{k=1}^\infty \frac{b_{k}}{\zeta^k} \bigr)^m \nonumber \\ &= C^n \sum_{k=n}^{\infty} \frac{\mu_{n,k}}{\zeta^{k-2n}}- \sum_{k=1}^{\infty}\bigl(\sum_{m=1}^{k}\frac{1}{C^m} \gamma_{mn}^1 b_{m,k} \bigr)\frac{1}{\zeta^k} \nonumber \\ &= C^n \sum_{k=0}^{n} \mu_{n,2n-k}\zeta^{k}+\sum_{k=1}^{\infty} \bigl( C^n \mu_{n,2n+k}-\sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^1 b_{m,k} \bigr)\frac{1}{\zeta^k}. \label{rel} \end{align} Note that $$ \Phi(\zeta)^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1 }{\Phi(\zeta)^m}=\frac{1}{2}\left(V_1\circ\Phi+V_2\circ\Phi\right)(\zeta)$$ with $V_1$ and $V_2$ defined in Section \ref{sec:GPTandcoeff}. From Lemma \ref{lemma:basic}, the right-hand side of the above equation has an entire extension.
The main idea of the proof of Lemma \ref{lemma:basic} is the equality
\begin{align}
-\frac{1}{2}{\left(V_1\circ\Phi+V_2\circ\Phi\right)(\zeta)}\label{eqn:V1V2}
&=\mbox{const.}-\frac{1}{2}\overline{\left(V_1\circ\Phi-V_2\circ\Phi\right)(\zeta)}\\
&=\mbox{const.}-\frac{1}{2}\overline{\left(V_1\circ\Phi-V_2\circ\Phi\right)\bigl(\frac{1}{\bar{\zeta}}\bigr)}\notag
\end{align}
on $|\zeta|=1$ due to \eqnref{extension1} and \eqnref{extension2}. Since the last term in the above equation is analytic for $|\zeta|<1$, the function $\frac{1}{2}{\left(V_1\circ\Phi+V_2\circ\Phi\right)(\zeta)}$ has an entire extension. Therefore, the principal parts of \eqnref{rel} must vanish:
\begin{equation}\label{gamma1_all}
C^n \mu_{n,2n+k}-\sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^1 b_{m,k}=0.
\end{equation}
Rearranging \eqnref{gamma1_all}, and since $b_{k,k}= 1$, we get $ \gamma_{kn}^1 = C^{k+n} \bigl(\mu_{n,2n+k}-\sum_{m=1}^{k-1} \frac{1}{C^{m+n}} \gamma_{mn}^1 b_{m,k} \bigr)$ for each $n,k\in \mathbb{N}$. This proves \eqnref{Gamma1}.
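The triangular structure of \eqnref{Gamma1} makes it straightforward to evaluate numerically: $\mu_{n,k}$ and $b_{m,k}$ are the coefficients of powers of the underlying series and can be obtained by repeated convolution. The following Python sketch uses made-up values of $C$, $\mu_j$ and $b_j$ (with $b_1=1$, consistent with $b_{k,k}=1$), computes $\gamma^1_{kn}$ for $n=2$, and verifies that the principal parts in \eqnref{gamma1_all} vanish:

```python
import numpy as np

def power_coeffs(c, m, K):
    """Coefficients up to order K of (sum_{j>=1} c[j-1] x^j)^m, via repeated convolution."""
    base = np.zeros(K + 1)
    ncoef = min(len(c), K)
    base[1:1 + ncoef] = c[:ncoef]
    out = np.array([1.0])
    for _ in range(m):
        out = np.convolve(out, base)[:K + 1]
    return out                      # out[k] = coefficient of x^k

# made-up illustrative data: C, the mu_j and the b_j (b_1 = 1, so b_{k,k} = 1)
C = 1.3
mu = [1.0, 0.0, 0.2, -0.1, 0.05]    # mu_{-1}, mu_0, mu_1, ...
b = [1.0, 0.15, -0.08, 0.04]        # b_1, b_2, b_3, ...
n, K = 2, 4

mu_n = power_coeffs(mu, n, 2*n + K)                    # mu_{n,k} for k <= 2n+K
gamma = {}
for k in range(1, K + 1):
    bmk = [power_coeffs(b, m, k)[k] for m in range(1, k + 1)]   # b_{m,k}
    tail = sum(gamma[m] * bmk[m - 1] / C**(m + n) for m in range(1, k))
    gamma[k] = C**(k + n) * (mu_n[2*n + k] - tail)     # the recursion for gamma^1_{kn}

# the principal parts vanish: C^n mu_{n,2n+k} = sum_m gamma^1_{mn} b_{m,k} / C^m
for k in range(1, K + 1):
    bmk = [power_coeffs(b, m, k)[k] for m in range(1, k + 1)]
    rhs = sum(gamma[m] * bmk[m - 1] / C**m for m in range(1, k + 1))
    assert abs(C**n * mu_n[2*n + k] - rhs) < 1e-9
```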
From \eqnref{eqn:V1V2}, for each $k\in\mathbb{N}\cup\{0\}$ and sufficiently large $R>0$ we have
\begin{align}
&\int_{|\zeta|=1}-\frac{1}{2}{\left(V_1\circ\Phi-V_2\circ\Phi\right)(\zeta)}\,\zeta^k d\zeta = \int_{|\zeta|=1}\overline{-\frac{1}{2}{\left(V_1\circ\Phi+V_2\circ\Phi\right)(\zeta)}}\,\zeta^k d\zeta\notag\\
&=\int_{|\zeta|=1}\overline{-\frac{1}{2}{\left(V_1\circ\Phi+V_2\circ\Phi\right)(\bar{\zeta}^{-1})}}\,\zeta^k d\zeta=\int_{|\zeta|=\frac{1}{R}}\overline{-\frac{1}{2}{\left(V_1\circ\Phi+V_2\circ\Phi\right)(\bar{\zeta}^{-1})}}\,\zeta^k d\zeta\notag\\
&=-\int_{|\zeta|=\frac{1}{R}}\overline{\Phi(\bar{\zeta}^{-1})^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1 }{\Phi(\bar{\zeta}^{-1})^m}}\,\zeta^k d\zeta, \label{eqn:formal1}
\end{align}
and the left-hand side equals
\begin{equation}\label{eqn:formal2}
\int_{|\zeta|=1}-\frac{1}{2}{\left(V_1\circ\Phi-V_2\circ\Phi\right)(\zeta)}\,\zeta^k d\zeta= \int_{|\zeta|=R}\left(\sum_{m=1}^\infty \frac{\gamma_{mn}^2}{\Phi(\zeta)^m}\right)\zeta^kd\zeta.
\end{equation}
We compute
\begin{align}
\overline{\Phi(\bar{\zeta}^{-1})^n - \sum_{m=1}^\infty \frac{ \gamma_{mn}^1 }{\Phi(\bar{\zeta}^{-1})^m}} &=\overline{\Phi(\bar{\zeta}^{-1})^n} - \overline{\sum_{m=1}^\infty { \gamma_{mn}^1 }{S(\bar{\zeta})^m}}\notag\\
&={C}^n\bigl(\sum_{k=-1}^\infty \overline{\mu_k}\zeta^k \bigr)^n-\sum_{m=1}^{\infty} \overline{\gamma_{mn}^1}\frac{1}{{C}^m} \bigl(\sum_{k=1}^\infty \overline{b_k}\zeta^k \bigr)^m \quad \mbox{on }|\zeta|=\frac{1}{R},\label{eqn:formal3}
\end{align}
and
\begin{equation}\label{eqn:formal4}
\sum_{m=1}^\infty \frac{\gamma_{mn}^2}{\Phi(\zeta)^m}=\sum_{m=1}^\infty \gamma_{mn}^2 S\bigl(\frac{1}{\zeta}\bigr)^m=\sum_{m=1}^{\infty} \gamma_{mn}^2 \frac{1}{C^m} \bigl(\sum_{k=1}^\infty \frac{b_{k}}{\zeta^k}\bigr)^m\quad \mbox{on }|\zeta|=R.
\end{equation}
By applying a multinomial expansion similar to that in \eqnref{rel}, we can formally expand the two sums in \eqnref{eqn:formal3} and \eqnref{eqn:formal4} that contain principal parts:}
\begin{align}
&\sum_{m=1}^{\infty} \gamma_{mn}^2\frac{1}{C^m} \bigl(\sum_{k=1}^\infty \frac{b_{k}}{\zeta^k}\bigr)^m+{C}^n\bigl(\sum_{k=-1}^\infty \overline{\mu_k}\zeta^k \bigr)^n \nonumber \\
=&\sum_{k=1}^{\infty} \bigl(\sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^2 b_{m,k} \bigr)\frac{1}{\zeta^k}+ {C}^n \sum_{k=n}^{\infty}\overline{\mu_{n,k}}\zeta^{k-2n} \nonumber \\
=& {C}^n \sum_{k=0}^{\infty}\overline{\mu_{n,2n+k}}\zeta^{k} + \sum_{k=1}^{n} \bigl({C}^n \overline{\mu_{n,2n-k}}+\sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^2 b_{m,k}\bigr)\frac{1}{\zeta^{k}} + \sum_{k=n+1}^{\infty} \bigl(\sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^2 b_{m,k}\bigr)\frac{1}{\zeta^{k}}.
\notag
\end{align}
In view of \eqnref{eqn:formal1} and \eqnref{eqn:formal2}, the principal parts of the above equation must vanish:
\begin{equation}\notag
\begin{cases}
{C}^n \overline{\mu_{n,2n-k}} + \sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^2 b_{m,k} = 0, &\mbox{for }1\le k\le n, \\
\sum_{m=1}^{k} \frac{1}{C^m} \gamma_{mn}^2 b_{m,k} = 0, &\mbox{for }k\ge n+1.
\end{cases}
\end{equation}
Rearranging the above equation, and since $b_{k,k}= 1$, we get
\begin{equation}\notag
\gamma_{kn}^2 =
\begin{cases}
- C^{k+n}\bigl( \overline{\mu_{n,2n-k}} + \sum_{m=1}^{k-1}\frac{1}{C^{m+n}}\gamma_{mn}^2 b_{m,k}\bigr), &\mbox{for }1\le k\le n, \\
- C^{k+n} \sum_{m=1}^{k-1} \frac{1}{C^{m+n}} \gamma_{mn}^2 b_{m,k}, &\mbox{for }k\ge n+1.
\end{cases}
\end{equation}
This proves \eqnref{Gamma2}. \qed
\subsection{Proofs of Proposition \ref{GPTtob} and Remark \ref{remark:bk2}} \label{sec:proof:others}
\noindent{\textbf{Proof of Proposition \ref{GPTtob}}}
From \eqnref{Gamma2} with $n=k=1$ and \eqnref{bkk}, we have $\gamma_{11}^2 = -C^2 \overline{\mu_{1,1}} = -C^2.$ This implies \eqnref{eqn:C}. Applying again \eqnref{Gamma2} for $n=1$ and $k\ge2$, we have $ \gamma_{k1}^2 = -C^{k+1} \sum_{m=1}^{k-1} \frac{\gamma_{m1}^2}{C^{m+1}} b_{m,k}, $ which is equivalent to $\sum_{m=1}^{k} \frac{\gamma_{m1}^2}{C^{m+1}} b_{m,k} = 0$. Hence, for $k\geq2$, we have
\begin{equation}
0 = \sum_{m=1}^{k} \frac{\gamma_{m1}^2}{C^{m+1}} b_{m,k} = \frac{\gamma_{11}^2}{C^{2}} b_{1,k} + \sum_{m=2}^{k} \frac{\gamma_{m1}^2}{C^{m+1}} b_{m,k} = -b_k + \sum_{m=2}^{k} \frac{\gamma_{m1}^2}{C^{m+1}} b_{m,k}\,.\nonumber
\end{equation}
This proves \eqnref{eqn:bk1}. \qed
\vskip .3cm
\noindent{\textbf{Proof of Remark \ref{remark:bk2}}}
From \eqnref{Gamma1}, we have
\begin{align*}
\gamma_{k1}^1= C^{k+1} \bigl(\mu_{1,2+k}-\sum_{m=1}^{k-1} \frac{\gamma_{m1}^1}{C^{m+1}} b_{m,k} \bigr), \quad \mbox{for } k\ge 2.
\end{align*}
It follows that $\mu_{1,2+k}=\sum_{m=1}^{k-1}\frac{\gamma_{m1}^1}{C^{m+1}}b_{m,k}+\frac{\gamma_{k1}^1}{C^{k+1}}=\sum_{m=1}^{k} \frac{\gamma_{m1}^1}{C^{m+1}} b_{m,k}$ because $b_{k,k}=1$. Hence, for $k\ge 2$, we have
\begin{equation}
\mu_{k} = \mu_{1,2+k}=\sum_{m=1}^{k} \frac{\gamma_{m1}^1}{C^{m+1}} b_{m,k}= \frac{\gamma_{11}^1}{C^{2}} b_k + \sum_{m=2}^{k} \frac{\gamma_{m1}^1}{C^{m+1}} b_{m,k}\,,\nonumber
\end{equation}
where $C$ is as in \eqnref{eqn:C}. This proves \eqnref{eqn:bk2}. \qed
\section{Conclusions}
We have analyzed the effects of corners of an insulating inclusion $\Omega$ on the perturbation of an electric potential. Along the way, we derived explicit connections between the generalized polarization tensors and the coefficients of the interior Riemann mapping function. We defined a sequence of geometric factors using these mapping coefficients. Mutually equivalent relations were then deduced between the GPTs, the Riemann mapping coefficients, and the geometric factors. We finally characterized the corner effect: the sequence of geometric factors is the sequence of Fourier coefficients of the generalized external angle function of $\Omega^r$, the reflection of $\Omega$ across the unit circle, where the generalized external angle function contains Dirac delta singularities at the corner points. Based on this corner effect, we established a criterion for the existence of corner points on the inclusion boundary in terms of the geometric factors. We assumed that the inclusion is insulated. It would be of interest to find geometric factors for inclusions with arbitrary conductivity that reveal the presence of corners.
\begin{appendices}
\section{Carath\'eodory's mapping theorem}\label{sec:cara}
For the relation between the convergence of domains and the convergence of the corresponding conformal mappings, we recall some material from \cite{markushevich1977theory}.
\begin{definition}
Let $\{ \Omega_n \}_{n\in \mathbb{N}}$ be a sequence of simply connected, uniformly bounded domains in $\mathbb{C}$, each containing a fixed disk centered at $z_0$. The {\it kernel} of $\{ \Omega_n \}_{n\in \mathbb{N}}$ is defined as the largest open domain $\Omega_{z_0}$ containing $z_0$ such that every compact subset $K \subset \Omega_{z_0}$ belongs to $\Omega_n$ for all $n \ge N$ with some $N \in \mathbb{N}$ depending on $K$.
\end{definition}
\begin{definition}[Kernel convergence in the sense of Carath\'eodory]
Let $\Omega_{z_0}$ be the kernel of $\{ \Omega_n \}_{n\in \mathbb{N}}$ relative to the point $z_0$. If every subsequence of $\{ \Omega_n \}_{n\in \mathbb{N}}$ has the same kernel $\Omega_{z_0}$, then $\Omega_n$ is said to converge to $\Omega_{z_0}$. Otherwise, $\Omega_n$ is said to diverge.
\end{definition}
Having defined convergence in the sense of Carath\'eodory, we now state how it relates to the convergence of the corresponding conformal mappings.
\begin{theorem}[Carath\'eodory's mapping theorem]
For each $n\in \mathbb{N}$, let $f_n:\Omega_n \to \mathbb{D}$ be a conformal mapping that satisfies
\begin{equation}
f_n(z_0) = 0, \quad f'_n(z_0)>0. \nonumber
\end{equation}
Similarly, let $f:\Omega_{z_0} \to \mathbb{D}$ be a conformal mapping that satisfies
\begin{equation}
f(z_0) = 0, \quad f'(z_0)>0. \nonumber
\end{equation}
If $\Omega_n$ converges to $\Omega_{z_0}$, then $f_n$ converges uniformly to $f$ inside $\Omega_{z_0}$ (which means by definition that $f_n$ converges uniformly on any compact subset of $\Omega_{z_0}$), and $f_n^{-1}$ converges uniformly to $f^{-1}$ inside $\mathbb{D}$. Conversely, if $f_n$ converges uniformly to $f$ inside $\Omega_{z_0}$, or if $f_n^{-1}$ converges uniformly to $f^{-1}$ inside $\mathbb{D}$, then $\Omega_n$ converges to $\Omega_{z_0}$.
\end{theorem}
\section{Parametrization of the non-symmetric domain in Section \ref{example:shape2}}\label{appen:para}
The boundary of the non-symmetric domain $\Omega$ in Section \ref{example:shape2} can be parametrized as follows:
\begin{equation}\notag
\gamma(t)=
\begin{cases}
\left(-\frac{1}{2}\sin\left(4\pi c t \right) - \frac{\sqrt{2}\pi}{4} , \quad - \frac{1}{2} + \frac{1}{2}\cos\left(4\pi c t \right)\right),\quad t\in[0,t_1], \\[2mm]
\left(2\pi c(t-t_2) - \sqrt{2}\arcsin\left( \frac{\sqrt{2}}{2}\cos(2\pi a) \right) , \quad -\frac{1}{2} \right),\quad t\in[t_1,t_2], \\[2mm]
\left(-\sqrt{2}\arcsin\left( \frac{\sqrt{2}}{2}\cos\left(2\pi c(t-t_2)+2\pi a\right) \right) , \quad -\operatorname{arcsinh} \left(\sin \left(2\pi c(t-t_2)+2\pi a\right)\right) \right), \quad t\in[t_2,1],
\end{cases}
\end{equation}
with $a = \frac{1}{2} - \frac{1}{2\pi}\arcsin\left(\sinh\left(\frac{1}{2}\right)\right)$, $b = a-\frac{1}{4\pi}-\frac{\sqrt{2}}{8}+\frac{\sqrt{2}}{2\pi}\arcsin\left(\frac{\sqrt{2}}{2}\cos(2\pi a)\right)$, $c = \frac{9}{8}-b$, $t_1 = \frac{1}{8c} \approx 0.1122$, and $t_2 = t_1 + \left( \frac{a-b}{c} \right) \approx 0.4731.$
\end{appendices}
\end{document}
\begin{document}
\title{Entanglement induced transparency and applications}
\author{Stefano Olivares}
\email{[email protected]}
\affiliation{CNISM UdR Milano Universit\`a, I-20133 Milano, Italy}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy}
\author{Matteo G. A. Paris}
\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`a degli Studi di Milano, I-20133 Milano, Italy}
\affiliation{CNISM UdR Milano Universit\`a, I-20133 Milano, Italy}
\affiliation{ISI Foundation, I-10133 Torino, Italy}
\date{\today}
\begin{abstract}
We point out a symmetry exhibited by pairs of entangled states and discuss its possible applications in quantum information. More specifically, we consider quadripartite systems prepared in bipartite product states of the form $|\Psi\rangle = |\psi\rangle_{12} \otimes |\psi\rangle_{34}$ and let the uncorrelated subsystems $14$ and $23$ interact by a given unitary $U_{14}\otimes U_{23}$: we show that the entanglement between the noninteracting subsystems $12$ and $34$ may induce transparency, i.e., it makes $|\Psi\rangle$ an eigenstate of the unitary. We investigate the occurrence of this phenomenon both in continuous variable and qubit systems, and discuss its possible applications to bath engineering, double swapping and remote inversion.
\end{abstract}
\maketitle
\section{Introduction}\label{s:intro}
Entanglement is a relevant resource for quantum information processing and considerable efforts in this field have been devoted to investigating its generation, characterization, manipulation and storage \cite{eres,edet,eman}. It is a general fact that the interaction with external systems may lead to entanglement degradation and, in turn, much attention has been paid to designing and implementing schemes suitable to preserve and restore entanglement \cite{BB1,BB2}.
Among other approaches, the analysis of symmetries is a powerful tool to investigate the separability problem and the dynamics of entanglement, as well as to identify systems of interest for quantum information processing, such as decoherence-free subspaces \cite{DFS97} and cluster states \cite{cl1}. In this paper we follow the above intuition and exploit a specific symmetry exhibited by pairs of entangled states for applications to bath engineering, double swapping and remote inversion.
\par
The basic idea is to consider quadripartite systems prepared in bipartite product states of the form $|\Psi\rangle = |\psi\rangle_{12} \otimes |\psi\rangle_{34}$ and let the uncorrelated systems $14$ and $23$ interact by unitaries of the form $U_{14}\otimes U_{23}$. As we will see, there are conditions leading to {\em entanglement induced transparency}, namely conditions in which the entanglement of the systems $12$ and $34$ preserves the initial state and its properties during the evolution. In other words, the overall state $|\Psi\rangle$ becomes an eigenstate of $U_{14} \otimes U_{23}$. In the following we investigate the occurrence of this phenomenon both in continuous variable and qubit systems, and discuss possible applications in quantum information processing.
\par
The paper is structured as follows. In Section \ref{s:phantom} we describe entanglement induced transparency in continuous variable systems and investigate in detail the peculiar role of twin beams, {\em i.e.}, maximally entangled states at fixed energy. We also give the phase-space analysis of the effect to discuss its robustness to perturbations and the evolution of the entanglement of formation. In Section \ref{s:BE} we address the application of entanglement induced transparency to bath engineering.
In Section \ref{s:RI}, we consider the discrete variable counterpart of entanglement induced transparency and describe its application to the remote inversion of an operation acting on two qubits belonging to two different entangled states. Section \ref{s:remarks} closes the paper with some concluding remarks.
\section{Entanglement induced transparency in continuous variable systems}\label{s:phantom}
Let us first consider continuous variable systems to illustrate the effect of entanglement induced transparency (EntIT). In particular we show the crucial role played by twin-beam (TWB) entanglement. We address a system composed of four modes, with field operators $a_k$, $k=1,\ldots,4$, and consider quadripartite (pure) states of the form
\begin{equation}
\ket{\psi_{\rm in}} = \sum_{n,m}\psi_{n,m} \ket{n}_1 \otimes \ket{m}_2 \otimes \sum_{h,k} \omega_{h,k} \ket{h}_3 \otimes \ket{k}_4\:.
\end{equation}
Then we make the modes 14 and 23 interact through bilinear Hamiltonians, such as those describing the interaction of modes in a beam splitter. We denote by $U_{14}(\phi)$ and $U_{23}(\varphi)$ the corresponding unitary operations, which lead to the following Heisenberg evolution
\begin{subequations} \label{genBS}
\begin{align}
U_{hk}^{\dag}(\alpha) a_k U_{hk}(\alpha) &= a_h\cos\alpha+a_k\sin\alpha \\
U_{hk}^{\dag}(\alpha) a_h U_{hk}(\alpha) &= - a_h\sin\alpha+a_k\cos\alpha
\end{align}
\end{subequations}
for the field modes. This corresponds to having beam splitters with transmissivities $T_{14} = \cos^2 \phi$ and $T_{23} = \cos^2 \varphi$.
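At the level of the mode operators, the pair of relations (\ref{genBS}) is a real orthogonal mixing of $(a_h,a_k)$, which is why the transformed modes automatically satisfy the same canonical commutation relations. A quick numerical check (Python; the angle is arbitrary):

```python
import numpy as np

alpha = 0.63  # arbitrary mixing angle
# (a_h, a_k) -> R @ (a_h, a_k), read off from the Heisenberg evolution (genBS)
R = np.array([[-np.sin(alpha), np.cos(alpha)],
              [ np.cos(alpha), np.sin(alpha)]])
# real orthogonal mixing preserves [a_i, a_j^dag] = delta_ij
assert np.allclose(R @ R.T, np.eye(2))
```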
Upon expanding the Fock number states $\ket{n}_k = (n!)^{-1/2} (a_k^\dag)^n\ket{0}$ and using the mode transformation (\ref{genBS}) one obtains:
\begin{align}
\ket{\psi_{\rm out}}&= U_{14}(\phi)\otimes U_{23}(\varphi)\ket{\psi_{\rm in}}\\
&= \sum_{n,m,h,k} \frac{\psi_{n,m}\,\omega_{h,k}}{\sqrt{n!\, m!\, h!\, k!}} \nonumber\\
\times& \sum_{s=0}^{n} \sum_{t=0}^{m} \sum_{r=0}^{h} \sum_{u=0}^{k} (-1)^{r+u} {n\choose s} {m\choose t} {h\choose r} {k\choose u} \nonumber\\
\times& \sqrt{(n-s+u)!\, (m-t+r)!\, (h-r+t)!\, (k-u+s)!} \nonumber\\
\times& (\cos\phi)^{n+k-s-u} (\sin\phi)^{s+u} (\cos\varphi)^{m+h-t-r} (\sin\varphi)^{t+r} \nonumber\\
\times& \ket{n-s+u}_1 \ket{m-t+r}_2 \ket{h-r+t}_3 \ket{k-u+s}_4 \label{cumber}
\end{align}
where, for the sake of simplicity, we omitted the tensor product symbol in the last line. The conditions for transparency $\ket{\psi_{\rm out}}=\ket{\psi_{\rm in}}$ can be obtained by imposing $\braket{nmhk}{\psi_{\rm in}}=\braket{nmhk}{\psi_{\rm out}}$ for all the basis elements $\ket{nmhk}\equiv |n\rangle_1 |m\rangle_2 |h\rangle_3 |k\rangle_4$. It turns out that the whole set of equations is subsumed by a single condition
\begin{align}
\psi_{1,1}\,\omega_{1,1} =& \cos(2\phi)\cos(2\varphi) \psi_{1,1}\,\omega_{1,1}\nonumber\\
&+\frac{\cos(2\phi)\sin(2\varphi)}{\sqrt{2}} (\psi_{1,0}\,\omega_{2,1} - \psi_{1,2}\,\omega_{0,1})\nonumber\\
&+\frac{\sin(2\phi)\cos(2\varphi)}{\sqrt{2}} (\psi_{0,1}\,\omega_{1,2} - \psi_{2,1}\,\omega_{1,0})\nonumber\\
&+\frac{\sin(2\phi)\sin(2\varphi)}{2}\nonumber\\
\times& (\psi_{2,2}\,\omega_{0,0} - \psi_{0,2}\,\omega_{0,2} - \psi_{2,0}\,\omega_{2,0} + \psi_{0,0}\,\omega_{2,2}). \label{cond1111}
\end{align}
If $\psi_{n,m}$ and $\omega_{h,k}$ are generic, then the only solution of (\ref{cond1111}) is $\phi=\varphi=0$, i.e., the transmissivities of both BSs must be equal to 1 (trivial solution). For completeness, we note that $\phi=\varphi=\pi/2$ is also a solution, but, in this case, the two modes are just exchanged.
In particular, these results can be applied to separable states, in which $\psi_{n,m}$ and $\omega_{h,k}$ can be written as products of two factors, respectively. In other words, classical correlations are not enough to obtain the transparency.
\par
Let us now consider the case of photon-number entangled states, namely states with $\psi_{n,m} \propto \delta_{nm}\, \psi_n$ and $\omega_{h,k} = \delta_{hk}\, \omega_k$, $\delta_{nm}$ being the Kronecker delta. TWBs belong to this class, as well as the so-called pair-coherent states \cite{aga86,aga05}, which find applications in quantum communication \cite{us1,us2}. In this case Eq.~(\ref{cond1111}) reduces to:
\begin{align}
\psi_1\, \omega_1 =& \cos(2\phi)\cos(2\varphi)\,\psi_1\, \omega_1 \nonumber\\
&+ 2 \cos(\phi)\sin(\phi)\cos(\varphi)\sin(\varphi)\nonumber\\
&\times(\psi_2\,\omega_0 + \psi_0\, \omega_2)\:, \label{cond:PNES}
\end{align}
which, in general, admits only the trivial solution $\phi=\varphi=0$. However, if one chooses $\psi_n = \omega_n = \lambda^n$, corresponding to a pair of TWB states, Eq.~(\ref{cond:PNES}) simplifies to $\cos[2(\phi-\varphi)]=1$, i.e., one has the transparency effect by choosing $\phi=\varphi$. We can conclude that only TWB states may give rise to the EntIT due to their peculiar analytical expression and, in turn, the transparency is induced by the TWB entanglement. It is worth noting that when EntIT occurs the quadripartite input state is an eigenstate of the unitary transformation $U_{14}(\phi)\otimes U_{23}(\phi)$. Notice that EntIT takes place for {\em any} value of the beam splitter transmissivities provided they are equal.
\par
In the following, we focus on the case of a pair of TWBs, $\dket{r}$ and $\dket{s}$, for modes 12 and 34, and mix modes 14 and 23 in two {\em balanced} BSs (Fig.~\ref{f:scheme}).
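The TWB reduction of Eq.~(\ref{cond:PNES}) can be checked numerically. The following Python snippet (with made-up values of $\lambda$ and of the angles) confirms that, for $\psi_n=\omega_n=\lambda^n$, the condition holds for $\varphi=\phi$ and fails for generic angle pairs:

```python
import numpy as np

lam = 0.6                      # made-up TWB parameter
psi = lambda n: lam**n         # psi_n = omega_n = lambda^n

def cond_holds(phi, varphi):
    """Check Eq. (cond:PNES) for TWB coefficients; true iff cos[2(phi-varphi)] = 1."""
    lhs = psi(1) * psi(1)
    rhs = (np.cos(2*phi) * np.cos(2*varphi) * psi(1) * psi(1)
           + 2*np.cos(phi)*np.sin(phi)*np.cos(varphi)*np.sin(varphi)
             * (psi(2)*psi(0) + psi(0)*psi(2)))
    return bool(np.isclose(lhs, rhs))

assert cond_holds(0.37, 0.37)          # phi = varphi: transparency
assert not cond_holds(0.37, 0.90)      # generic angles: condition fails
```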
The initial state $\ket{\psi_{\rm in}}$ is given by two TWB states with (real) parameters $r$ and $s$, namely:
\begin{equation}\label{in}
\ket{\psi_{\rm in}} = S_{12}(r)\otimes S_{34}(s) \ket{0} = \dket{r}\otimes\dket{s},
\end{equation}
where $S_{hk}(\zeta) = \exp(\zeta a_h^{\dag}a_k^{\dag} - h.c.)$ is the two-mode squeezing operator acting on modes $h$ and $k$, respectively, and $a_l$ is the annihilation operator of mode $l$.
\begin{figure}
\caption{Scheme of the setup: the modes 14 and 23 of the two TWB states $\dket{r}$ (modes 12) and $\dket{s}$ (modes 34) are mixed at two balanced beam splitters.}
\label{f:scheme}
\end{figure}
Thanks to the transformations (\ref{genBS}), the state emerging from the BSs can be written as:
\begin{align}
\ket{\psi_{\rm out}} =& U_{14}\otimes U_{23} \ket{\psi_{\rm in}} \\
=&\exp\Bigg\{\frac12\Big[ (r+s)(a_1^{\dag}a_2^{\dag}+a_3^{\dag}a_4^{\dag})\nonumber\\
& + (r-s)(a_1^{\dag}a_3^{\dag}+a_2^{\dag}a_4^{\dag}) - h.c. \Big]\Bigg\} \ket{0}.\label{out}
\end{align}
From Eq.~(\ref{out}) we see that:
\begin{align}
&\mbox{if } s=r \quad &\Rightarrow \quad &\ket{\psi_{\rm out}} = \ket{\psi_{\rm in}}; \label{phantom}\\
&\mbox{if } s=-r \quad &\Rightarrow \quad &\ket{\psi_{\rm out}} = S_{13}(r) S_{24}(r)\ket{0}. \label{swap}
\end{align}
In other words, in the case (\ref{phantom}) we have the EntIT effect, i.e., $\ket{\psi_{\rm in}} \to \ket{\psi_{\rm in}}$; in the case (\ref{swap}) we obtain $S_{12}(r)\otimes S_{34}(r)\ket{0} \to S_{13}(r)\otimes S_{24}(r)\ket{0}$, i.e., entanglement is ``swapped'' from modes 12 and 34 to modes 13 and 24 (note that modes 1 and 3, as well as modes 2 and 4, did not directly interact with each other). In the last case no measurement is required to obtain (double) entanglement swapping, although to this aim one needs two {\em identical} states as inputs (double swapping).
More generally, if the BSs have transmissivities $T_{14} = \cos^2 \phi$ and $T_{23} = \cos^2 \varphi$, then the outgoing state reads:
\begin{align}
\ket{\psi_{\rm out}}=& \exp\big[ (r \cos\phi \cos\varphi + s \sin\phi\sin\varphi)a_1^{\dag}a_2^{\dag}\nonumber\\
&+(r \sin\phi \sin\varphi + s \cos\phi\cos\varphi)a_3^{\dag}a_4^{\dag}\nonumber\\
&+(r \cos\phi \sin\varphi - s \sin\phi\cos\varphi)a_1^{\dag}a_3^{\dag}\nonumber\\
&+(r \sin\phi \cos\varphi - s \cos\phi\sin\varphi)a_2^{\dag}a_4^{\dag}-h.c. \big]\ket{0},\label{out:2}
\end{align}
which reduces to (\ref{out}) if $\phi=\varphi=\pi/4$. Investigating Eq.~(\ref{out:2}), we see that if $\varphi=\phi$ we have:
\begin{align}
\ket{\psi_{\rm out}}=& \exp\big[ (r \cos^2\phi + s \sin^2\phi)a_1^{\dag}a_2^{\dag}\nonumber\\
&+(r \sin^2\phi + s \cos^2\phi)a_3^{\dag}a_4^{\dag}\nonumber\\
&+(r-s) \cos\phi \sin\phi (a_1^{\dag}a_3^{\dag}+a_2^{\dag}a_4^{\dag})-h.c. \big]\ket{0},\label{out:3}
\end{align}
and one can always obtain the EntIT for $r=s$, whereas there is no way to obtain {\em perfect} swapping.
\subsection{Phase-space analysis and robustness of EntIT} \label{s:PSanalysis}
In this section we characterize the state (\ref{out:3}) in phase space in order to address the distribution of two-mode entanglement among all the possible partitions and the robustness of the EntIT effect. In order to simplify the formalism, we will use the following notation for the input/output modes $h,k,\ldots$: when we write ``$\square_{hk\ldots}$'' we refer to input modes, whereas ``$\square^{(hk\ldots)}$'' refers to output modes. Since the involved states are Gaussian and the evolution preserves this character, in order to characterize the output states we consider the evolution of the covariance matrix (CM) \cite{gsb}.
The CM associated with the two-mode squeezed vacuum state $\dket{\psi_{hk}(r)} = S_{hk}(r)\ket{0}$ of modes $h$ and $k$ reads:
\begin{equation}
{\boldsymbol \Sigma}_{hk}(r) = \frac12\left(
\begin{array}{c|c}
\cosh 2r\, {\mathbbm 1}_2 & \sinh 2r\, {\boldsymbol \sigma}_3 \\ \hline
\sinh 2r\, {\boldsymbol \sigma}_3 & \cosh 2r\, {\mathbbm 1}_2
\end{array}
\right),
\end{equation}
where $ {\mathbbm 1}_2$ is the $2\times 2$ identity matrix and ${\boldsymbol \sigma}_3 = \mbox{Diag}(1,-1)$ is a Pauli matrix. Thus, the four-mode covariance matrix of the state (\ref{in}) is given by:
\begin{equation}
{\boldsymbol \Sigma}_{1234} = {\boldsymbol \Sigma}_{12}(r)\oplus{\boldsymbol \Sigma}_{34}(s) = \left(
\begin{array}{c|c}
{\boldsymbol \Sigma}_{12}(r) & {\boldsymbol 0} \\ \hline
{\boldsymbol 0} & {\boldsymbol \Sigma}_{34}(s)
\end{array}
\right).
\end{equation}
The CM after the evolution through the BSs can be obtained as follows:
\begin{equation}
{\boldsymbol \Sigma}^{(1234)}(\phi,\varphi) = {\boldsymbol S}^{T}(\phi,\varphi) {\boldsymbol \Sigma}_{1234} {\boldsymbol S}(\phi,\varphi)
\end{equation}
where:
\begin{align}
{\boldsymbol S}(\phi,\varphi) = \left(
\begin{array}{c|c|c|c}
\cos\phi\,{\mathbbm 1}_2 & {\boldsymbol 0} & {\boldsymbol 0} & \sin\phi\,{\mathbbm 1}_2 \\ \hline
{\boldsymbol 0} &\cos\varphi\,{\mathbbm 1}_2 & \sin\varphi\,{\mathbbm 1}_2 & {\boldsymbol 0} \\ \hline
{\boldsymbol 0} &-\sin\varphi\,{\mathbbm 1}_2 & \cos\varphi\,{\mathbbm 1}_2 & {\boldsymbol 0} \\ \hline
-\sin\phi\,{\mathbbm 1}_2 & {\boldsymbol 0} & {\boldsymbol 0} & \cos\phi\,{\mathbbm 1}_2 \\
\end{array}
\right),
\end{align}
is the symplectic transformation associated with the mode transformation (\ref{genBS}).
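Both the transparency (\ref{phantom}) and the double swapping (\ref{swap}) can be verified directly at the covariance-matrix level with the symplectic matrix above. A minimal Python sketch (the value of $r$ is arbitrary; we use the symmetric sign convention for the off-diagonal blocks of the TWB CM):

```python
import numpy as np

def twb_cm(r):
    """CM of the two-mode squeezed vacuum S(r)|0> (vacuum variance 1/2)."""
    c, s = np.cosh(2*r), np.sinh(2*r)
    sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
    return 0.5 * np.block([[c*I2, s*sz], [s*sz, c*I2]])

def S_bs(phi, varphi):
    """Symplectic matrix of U_{14}(phi) x U_{23}(varphi)."""
    I2, Z = np.eye(2), np.zeros((2, 2))
    c1, s1 = np.cos(phi)*I2, np.sin(phi)*I2
    c2, s2 = np.cos(varphi)*I2, np.sin(varphi)*I2
    return np.block([[c1, Z, Z, s1],
                     [Z, c2, s2, Z],
                     [Z, -s2, c2, Z],
                     [-s1, Z, Z, c1]])

r = 0.7
Z4 = np.zeros((4, 4))
S = S_bs(np.pi/4, np.pi/4)                       # balanced BSs

# s = r: EntIT, the state is left unchanged
Sigma_in = np.block([[twb_cm(r), Z4], [Z4, twb_cm(r)]])
Sigma_out = S.T @ Sigma_in @ S
assert np.allclose(Sigma_out, Sigma_in)

# s = -r: double swapping, correlations move from modes 12/34 to 13/24
Sigma_in2 = np.block([[twb_cm(r), Z4], [Z4, twb_cm(-r)]])
Sigma_out2 = S.T @ Sigma_in2 @ S
assert np.allclose(Sigma_out2[0:2, 2:4], 0)      # modes 1-2 now uncorrelated
assert np.allclose(Sigma_out2[0:2, 4:6],         # modes 1-3 now TWB-correlated
                   0.5*np.sinh(2*r)*np.diag([1.0, -1.0]))
```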
Now, if we use the following $2\times 2$ block matrix decomposition of an $8\times 8$ matrix:
\begin{equation}
{\boldsymbol A} = \left(
\begin{array}{c|c|c|c}
{\boldsymbol A}_{11} & {\boldsymbol A}_{12} & {\boldsymbol A}_{13} & {\boldsymbol A}_{14} \\ \hline
{\boldsymbol A}_{21} & {\boldsymbol A}_{22} & {\boldsymbol A}_{23} & {\boldsymbol A}_{24} \\ \hline
{\boldsymbol A}_{31} & {\boldsymbol A}_{32} & {\boldsymbol A}_{33} & {\boldsymbol A}_{34} \\ \hline
{\boldsymbol A}_{41} & {\boldsymbol A}_{42} & {\boldsymbol A}_{43} & {\boldsymbol A}_{44}
\end{array}\right),
\end{equation}
and introduce the notation:
\begin{equation}
[[{\boldsymbol A}]]_{hk} \equiv \left(
\begin{array}{c|c}
{\boldsymbol A}_{hh} & {\boldsymbol A}_{hk} \\ \hline
{\boldsymbol A}_{kh} & {\boldsymbol A}_{kk} \\
\end{array}
\right),
\end{equation}
then the CM ${\boldsymbol \Sigma^{(hk)}}$ associated with the reduced state $\varrho^{(hk)} = \mbox{Tr}_{n,m}[\varrho^{(1234)}]$, with $n\ne m\ne h\ne k$, $\varrho^{(1234)}$ being the density matrix of the state (\ref{out:2}), is given by:
\begin{equation}\label{reduced}
{\boldsymbol \Sigma^{(hk)}} = [[{\boldsymbol \Sigma^{(1234)}}]]_{hk}.
\end{equation}
Starting from ${\boldsymbol \Sigma^{(hk)}}$ one can easily evaluate the purity, the separability and the entanglement of formation of the reduced states $\varrho^{(hk)}$. As an example, we plot in Fig.~\ref{f:km} the minimum symplectic eigenvalue $\tilde\kappa_{-}$ of the partial transpose of the reduced density matrices $\varrho^{(12)}$, $\varrho^{(13)}$ as a function of $x\in [-1,1]$, where we fixed $r$ and put $x=s/r$. For the reduced density matrices $\varrho^{(34)}$ and $\varrho^{(24)}$ one has the same results. Recalling that a bipartite Gaussian state is separable iff $\tilde\kappa_{-} \ge 1/2$, from Fig.~\ref{f:km} we can see the swapping of entanglement; notice that there is an interval of values of $x$ for which all four partitions are nonseparable.
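For a single TWB the quantity $\tilde\kappa_{-}$ can be computed from the invariants of the partially transposed CM. The Python sketch below does so (the value of $r$ is arbitrary) and checks it against the closed form $\tilde\kappa_{-}=e^{-2r}/2$, which lies below the separability threshold $1/2$ for every $r>0$; the closed form is a standard property of the TWB, not a result quoted from the text:

```python
import numpy as np

def kappa_minus(sigma):
    """Minimum symplectic eigenvalue of the partial transpose of a two-mode CM
    sigma = [[A, C], [C^T, B]] (2x2 blocks); partial transposition flips sign of det C."""
    A, B, C = sigma[:2, :2], sigma[2:, 2:], sigma[:2, 2:]
    delta = np.linalg.det(A) + np.linalg.det(B) - 2*np.linalg.det(C)
    return np.sqrt((delta - np.sqrt(delta**2 - 4*np.linalg.det(sigma))) / 2)

r = 0.5   # arbitrary TWB parameter
c, s = np.cosh(2*r), np.sinh(2*r)
sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
sigma_twb = 0.5 * np.block([[c*I2, s*sz], [s*sz, c*I2]])
k = kappa_minus(sigma_twb)
assert k < 0.5                                # below the separability threshold: entangled
assert np.isclose(k, 0.5*np.exp(-2*r))        # closed form for the TWB
```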
\begin{figure}
\caption{Minimum symplectic eigenvalue $\tilde\kappa_{-}$ of the partial transpose of the reduced states $\varrho^{(12)}$ and $\varrho^{(13)}$ as a function of $x=s/r$.}
\label{f:km}
\end{figure}
In the present case, due to the symmetry of the reduced states, their entanglement of formation is given by \cite{EOFM}
\begin{equation}
E_f=\left( \chi + \frac12 \right) \ln \left( \chi + \frac12 \right) - \left( \chi - \frac12 \right) \ln \left( \chi - \frac12 \right),
\end{equation}
where $\chi = (\tilde\kappa_{-}^2+1/4)/(2 \tilde\kappa_{-})$. We have seen that EntIT is achieved by requiring $s=r$ and $\varphi=\phi$. In order to evaluate the robustness of the effect, we address the fidelity between $\varrho_{12}$ and $\varrho^{(12)}$, the input and the output state of modes 1 and 2, respectively. Since they are both Gaussian states, the fidelity is given by:
\begin{equation}
F=\left\{\hbox{Det}\left[\boldsymbol{\Sigma}_{12}+ \boldsymbol{\Sigma}^{(12)}\right]\right\}^{-1/2}.
\end{equation}
The analytic expression of $F$ is quite cumbersome; however, we report two relevant series expansions with respect to the TWB parameters ($r$ and $s$) and the BS transmissivities ($T_{14} = \cos^2 \phi$ and $T_{23} = \cos^2 \varphi$). The first expansion concerns the TWB parameters in the case $\varphi = \phi$ and reads:
\begin{equation}\label{sq:exp}
F \approx 1 - \frac12 \left[ 3 + \sin^2 \phi \, \cos(2\phi) \right] (s-r)^2;
\end{equation}
the second one addresses the BS transmissivities:
\begin{equation}\label{BS:exp}
F \approx f + f^2\, \sin(2\phi)\, \cos^2(\phi)\, \sinh^2(r-s)\, (\phi-\varphi),
\end{equation}
where $f\equiv f(r,s,\phi)$ is given by:
\begin{equation}
f = \frac{1}{1+[1 + \cos^2(\phi)] \sin^2(\phi) \sinh^2(r-s)}.
\end{equation}
Eq.~(\ref{sq:exp}) shows the robustness of the effect with respect to fluctuations of $s$, whereas, from Eq.~(\ref{BS:exp}), we conclude that the fluctuations of the BS transmissivities may be quite relevant.
\section{Application to bath engineering}\label{s:BE}
It is well known that correlated noise may enhance the capacity of a quantum channel \cite{CM02,KB04}.
Moreover, in recent years bosonic quantum channels with memory have attracted growing interest \cite{DK05,NJC05,CL09}. These channels are characterized by Gaussian-distributed thermal noise and correlations between the environmental modes. Even if only classical correlations have been addressed so far, it has been demonstrated that uncorrelated phase-sensitive environments, i.e., uncorrelated noisy channels with squeezed fluctuations, can be exploited for the preservation of macroscopic quantum coherence \cite{TABK88,PT94,NL98} or for improving the teleportation of squeezed states \cite{SO03}. A new insight into the properties of this kind of channels, when also nonclassical correlations are present, may be given by applying our analysis to a simple case of bath engineering, where entangled bath oscillators let properties such as entanglement and purity of an input state survive longer than uncorrelated ones do.
\begin{figure}
\caption{Entanglement of formation $E_f$ and purity $\mu$ of the output state $\varrho^{(12)}$ as functions of the TWB parameter $s$.}
\label{f:EfP}
\end{figure}
Let us now assume that the BSs in Fig.~\ref{f:scheme} describe linear losses (with the vacuum as input for both the ports 3 and 4, i.e., $s=0$). In this case, it is useful to define $\phi = \arccos\sqrt{1-\Gamma}$, where $T=\cos^2\phi$ is the BS transmissivity (we assume that both BSs have the same transmissivity, i.e., $\varphi=\phi$): if $\Gamma=0$ there are no losses; if $\Gamma=1$ the state is completely lost. We will refer to $\Gamma$ as the loss parameter. As a matter of fact, losses degrade the properties of the outgoing state $\varrho^{(12)}$ of modes 1 and 2; however, we can use the results of the previous section to engineer the state $\varrho_{34}$ so as to recover the degraded state (see Fig.~\ref{f:scheme}). If $\varrho_{12}$ is initially in a TWB state with TWB parameter $r$, then, by choosing as $\varrho_{34}$ a TWB with parameter $s=r$ (EntIT configuration), we have $\varrho^{(12)} = \varrho_{12}$: the state is totally recovered.
Nevertheless, partial recovery is achieved also when $s<r$ (of course, if $s>r$ the outgoing state $\varrho^{(12)}$ has properties more related to $\varrho_{34}$ than to $\varrho_{12}$). This is shown in Fig.~\ref{f:EfP}, where we plot the entanglement of formation $E_f$ and the purity $\mu = \left\{16\, \hbox{Det}[{\boldsymbol \Sigma}^{(12)}]\right\}^{-1/2}$ of $\varrho^{(12)}$ as functions of $s$ for different values of the other parameters involved.
\section{Invariance of two two-qubit systems via ``remote inversion''}\label{s:RI}
\begin{figure}
\caption{Scheme of the qubit setup: the qubits 14 and 23 of the two-qubit states $\dket{\Psi}_{12}$ and $\dket{\Phi}_{34}$ undergo the unitary evolutions $U_{14}$ and $U_{23}$.}
\label{f:QB_scheme}
\end{figure}
Let us now consider the qubit counterpart of the continuous variable setup investigated above. In this scenario, sketched in Fig.~\ref{f:QB_scheme}, the qubits 14 and 23 of two two-qubit states $\dket{\Psi}_{12}$ and $\dket{\Phi}_{34}$ undergo the generic unitary evolutions $U_{14}$ and $U_{23}$, respectively, where:
\begin{align}
U_{14}({\boldsymbol \theta}) =& \exp\left( -i \sum_{k=0}^{3}\theta_k\, \boldsymbol \sigma_k \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_k \right) \\
=&\, G_0({\boldsymbol \theta})\, \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \nonumber\\
&- \sum_{k=1}^{3} G_k({\boldsymbol \theta})\, \boldsymbol \sigma_k \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_k,\\
U_{23}({\boldsymbol \phi}) =& \exp\left( -i \sum_{k=0}^{3}\phi_k\, \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_k \otimes \boldsymbol \sigma_k \otimes \boldsymbol \sigma_0 \right)\\
=&\, G_0({\boldsymbol \phi})\, \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_0 \nonumber\\
&- \sum_{k=1}^{3} G_k({\boldsymbol \phi})\, \boldsymbol \sigma_0 \otimes \boldsymbol \sigma_k \otimes \boldsymbol \sigma_k \otimes \boldsymbol \sigma_0,
\end{align}
with ${\boldsymbol \theta} = (\theta_0,\theta_1,\theta_2,\theta_3)$, ${\boldsymbol
\phi} = (\phi_0,\phi_1,\phi_2,\phi_3)$, where $\boldsymbol \sigma_k$, $k=1,2,3$, are the Pauli matrices, $\boldsymbol \sigma_0 = {\mathbbm 1}$, and: \begin{align} G_k({\boldsymbol x}) =&\, e^{-i(x_0-x_1-x_2-x_3)}\,\nonumber\\ &\times \left[ 1+ \frac12 \sum_{n,m=1}^3g_k(n,m)\,e^{2i(x_n+x_m)} \right], \end{align} ${\boldsymbol x}=(x_0,x_1,x_2,x_3)$, with: \begin{equation} g_k(n,m) = \left\{ \begin{array}{ll} 0 & \mbox{if }n=m \\ +1 & \mbox{if }k\ne n,m \\ -1 & \mbox{elsewhere} \end{array} \right. . \end{equation} We are interested in finding the conditions on ${\boldsymbol \theta}$ and ${\boldsymbol \phi}$ that leave the input states unchanged. We restrict our analysis by assuming that $\dket{\Psi}_{12}$ and $\dket{\Phi}_{34}$ are initially in the same state. Let us now consider as input states $\dket{\Psi}_{12} = \frac{1}{\sqrt{2}}\left( \ket{00}_{12} + \ket{11}_{12} \right)$ and $\dket{\Phi}_{34} = \frac{1}{\sqrt{2}}\left( \ket{00}_{34} + \ket{11}_{34} \right)$, where $\ket{xy}_{kh} = \ket{x}_k\otimes \ket{y}_h$.
The four-qubit initial state is then given by: \begin{align} | \psi \rangle_{1234} &= \dket{\Psi}_{12} \otimes \dket{\Phi}_{34} \\ &= \frac12\Big( \ket{00}_{14}\ket{00}_{23} + \ket{01}_{14}\ket{01}_{23} \nonumber\\ &\hspace{0.5cm} + \ket{10}_{14}\ket{10}_{23} + \ket{11}_{14}\ket{11}_{23}\Big) \label{1234:k}\\ &= \frac12 \Big( \left|\left.\mbox{$\frac{\boldsymbol \sigma_0}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_0}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} + \left|\left.\mbox{$\frac{\boldsymbol \sigma_1}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_1}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} \nonumber \\ &\hspace{0.5cm} + \left|\left.\mbox{$i\frac{\boldsymbol \sigma_2}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$i\frac{\boldsymbol \sigma_2}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} + \left|\left.\mbox{$\frac{\boldsymbol \sigma_3}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_3}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} \Big) \label{1234:m} \end{align} where we re-arranged the tensor product factors in order to highlight the bipartite couples (14 and 23, respectively) involved in the transformations; from (\ref{1234:k}) to (\ref{1234:m}) we used the matrix notation for bipartite states \cite{LoPresti:00}.
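The passage from (\ref{1234:k}) to (\ref{1234:m}) can be verified numerically. In the sketch below (illustrative code, not part of the original paper), \texttt{dket} implements the matrix notation $\dket{M} = \sum_{n,m} M_{nm}\ket{n}\ket{m}$, and the qubit reordering $(1,4,2,3)\to(1,2,3,4)$ is done by permuting tensor axes.

```python
import numpy as np

# Pauli matrices; using i*sigma_2 keeps all entries real, as in Eq. (1234:m)
s0 = np.eye(2)
s1 = np.array([[0., 1.], [1., 0.]])
is2 = np.array([[0., 1.], [-1., 0.]])   # i * sigma_2
s3 = np.array([[1., 0.], [0., -1.]])

def dket(M):
    # matrix notation for bipartite states: |M>> = sum_nm M_nm |n>|m>
    return M.reshape(-1)

bell = dket(s0 / np.sqrt(2))            # (|00> + |11>)/sqrt(2)
psi_1234 = np.kron(bell, bell)          # qubit order (1, 2, 3, 4)

# right-hand side of Eq. (1234:m), written in qubit order (1, 4, 2, 3)
rhs_1423 = 0.5 * sum(np.kron(dket(M / np.sqrt(2)), dket(M / np.sqrt(2)))
                     for M in (s0, s1, is2, s3))

# permute tensor axes (q1, q4, q2, q3) -> (q1, q2, q3, q4) and compare
rhs = rhs_1423.reshape(2, 2, 2, 2).transpose(0, 2, 3, 1).reshape(-1)
print(np.allclose(psi_1234, rhs))  # True
```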
Now, after some algebra based on the properties of the Pauli matrices, one can easily find the following relations: \begin{subequations} \label{1234:op} \begin{align} &U({\boldsymbol \theta},{\boldsymbol \phi}) \left|\left.\mbox{$\frac{\boldsymbol \sigma_0}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_0}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} = \nonumber\\ &\left[G_0({\boldsymbol \theta})-G_1({\boldsymbol \theta}) +G_2({\boldsymbol \theta})-G_3({\boldsymbol \theta})\right] \\ &\times \left[G_0({\boldsymbol \phi})-G_1({\boldsymbol \phi}) +G_2({\boldsymbol \phi})-G_3({\boldsymbol \phi})\right] \left|\left.\mbox{$\frac{\boldsymbol \sigma_0}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_0}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23}\nonumber \\ &U({\boldsymbol \theta},{\boldsymbol \phi}) \left|\left.\mbox{$\frac{\boldsymbol \sigma_1}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_1}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} = \nonumber\\ &\left[G_0({\boldsymbol \theta})-G_1({\boldsymbol \theta}) -G_2({\boldsymbol \theta})+G_3({\boldsymbol \theta})\right] \\ &\times\left[G_0({\boldsymbol \phi})-G_1({\boldsymbol \phi}) -G_2({\boldsymbol \phi})+G_3({\boldsymbol \phi})\right] \left|\left.\mbox{$\frac{\boldsymbol \sigma_1}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_1}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23}\nonumber \\ &U({\boldsymbol \theta},{\boldsymbol \phi}) \left|\left.\mbox{$i\frac{\boldsymbol \sigma_2}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$i\frac{\boldsymbol \sigma_2}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} = \nonumber\\ &\left[G_0({\boldsymbol \theta})+G_1({\boldsymbol \theta}) +G_2({\boldsymbol \theta})+G_3({\boldsymbol \theta})\right] \\ &\times\left[G_0({\boldsymbol \phi})+G_1({\boldsymbol \phi}) +G_2({\boldsymbol
\phi})+G_3({\boldsymbol \phi})\right] \left|\left.\mbox{$i\frac{\boldsymbol \sigma_2}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$i\frac{\boldsymbol \sigma_2}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23}\nonumber \\ &U({\boldsymbol \theta},{\boldsymbol \phi}) \left|\left.\mbox{$\frac{\boldsymbol \sigma_3}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_3}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23} = \nonumber\\ &\left[G_0({\boldsymbol \theta})+G_1({\boldsymbol \theta}) -G_2({\boldsymbol \theta})-G_3({\boldsymbol \theta})\right] \\ &\times\left[G_0({\boldsymbol \phi})+G_1({\boldsymbol \phi}) -G_2({\boldsymbol \phi})-G_3({\boldsymbol \phi})\right] \left|\left.\mbox{$\frac{\boldsymbol \sigma_3}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{14} \left|\left.\mbox{$\frac{\boldsymbol \sigma_3}{\sqrt 2}$} \right\rangle\!\!\right\rangle_{23}\nonumber \end{align} \end{subequations} where we defined $U({\boldsymbol \theta},{\boldsymbol \phi}) \equiv U_{14}({\boldsymbol \theta}) U_{23}({\boldsymbol \phi})$ and used the property $A\otimes B\dket{C} = \dket{ACB^T}$ of the matrix representation of bipartite states. Hence, the input states are left unchanged if the condition ${\boldsymbol \phi} = -{\boldsymbol \theta}$ is met: in this case all the numerical coefficients appearing at the right-hand sides of Eqs.~(\ref{1234:op}) are equal to 1. The same result holds if we consider as input states $\dket{\Psi}_{12}$ and $\dket{\Phi}_{34}$ one of the other three Bell states. For a generic two-qubit state the previous condition is not enough to leave it unchanged, and further conditions on ${\boldsymbol \theta}$ are required (see Appendix~\ref{A:zoology} for a complete zoology). \par We can also look at the invariance obtained for Bell states as follows.
The action of a unitary transformation, acting on two qubits belonging to different couples of qubits initially in the same Bell state, can be canceled out by applying the inverse transformation to the remaining couple of qubits (choosing ${\boldsymbol \phi} = -{\boldsymbol \theta}$ formally corresponds to the inverse of the first transformation, acting, of course, on a different system). For this effect (invariance), Bell states play a crucial role: if we consider starting states other than Bell states, inverting the operation is not enough to achieve the invariance and further conditions are needed, i.e., differently from the Bell state case, not all classes of unitaries lead to invariance up to ``remote inversion''. \section{Conclusions}\label{s:remarks} In this paper we have investigated in some detail a useful symmetry exhibited by pairs of entangled states, which induces operation transparency, {\em i.e.}, the preservation of the state under the action of a specific class of unitaries. \par In continuous variable systems entanglement induced transparency occurs when two TWBs with the same energy are left unchanged after the evolution through two equal beam splitters. We have shown that entanglement is crucial for the effect and we have studied its occurrence and robustness. Besides, we have shown how EntIT may be useful to engineer baths with nonclassical correlations in order to preserve the transmission of entanglement and the purity of TWBs during propagation. Related to the EntIT effect is the double swapping: entanglement is swapped between the modes of the TWBs by simply changing the phase of the initial bipartite states before the interaction at two balanced BSs, without any measurement.
\par The investigation of the discrete-variable counterpart of the EntIT has led us to the ``remote inversion'' effect, i.e., the action of a unitary transformation, acting on two qubits belonging to different couples of qubits initially in the same Bell state, can be canceled out by applying the inverse transformation to the remaining couple of qubits. This effect may be used to remotely control quantum operations over a quantum network. \acknowledgments Discussions with M.~Genoni, P.~Giorda, S.~Mancini and A.~R.~Rossi are acknowledged. This work has been partially supported by the CNR-CNISM convention. \appendix \section{Zoology for qubit invariance via ``remote inversion''}\label{A:zoology} Here we assume that both states $\dket{\Psi}_{12}$ and $\dket{\Phi}_{34}$ are equal to the same state: \begin{equation} \dket{\psi} = \sum_{h,k=0,1} a_{hk} \ket{hk}, \end{equation} with $\sum_{h,k} a_{hk}^2 = 1$ (without loss of generality we choose only real coefficients). We have seen in Section \ref{s:RI} that if $\dket{\psi}$ is one of the four Bell states, then the invariance is achieved for ${\boldsymbol \phi} = -{\boldsymbol \theta}$. In all the other cases the following further conditions are needed (of course, together with ${\boldsymbol \phi} = -{\boldsymbol \theta}$). \begin{itemize} \item $a_{00}=1$ or $a_{11}=1$ $\Rightarrow$ $\theta_1=\theta_2$. \item $a_{01}=1$ or $a_{10}=1$ $\Rightarrow$ $\theta_1=-\theta_2$. \item $a_{00},a_{11}\ne 0$, $a_{00}\ne a_{11}$, and $a_{01},a_{10}= 0$ $\Rightarrow$ $\theta_1=\theta_2$. \item $a_{01},a_{10}\ne 0$, $a_{01}\ne a_{10}$, and $a_{00},a_{11}= 0$ $\Rightarrow$ $\theta_1=-\theta_2$. \item $a_{hk} \ne 0, \forall h,k$: \begin{itemize} \item $a_{00} = a_{11}$ and $a_{01} = a_{10}$ $\Rightarrow$ $\theta_2=\theta_3$. \item $a_{00} = -a_{11}$ and $a_{01} = -a_{10}$ $\Rightarrow$ $\theta_2=-\theta_3$. \item $a_{00} = -a_{11}$ and $a_{01} = a_{10}$ $\Rightarrow$ $\theta_1=\theta_3$.
\item $a_{00} = a_{11}$ and $a_{01} = -a_{10}$ $\Rightarrow$ $\theta_1=-\theta_3$. \end{itemize} \item In all the other cases one should put ${\boldsymbol \phi} = {\boldsymbol \theta} = 0$, i.e., the states are left unchanged only if no operation is performed. \end{itemize} \end{document}
\begin{document} \title{ILP formulations for the two-stage stochastic Steiner tree problem} \author{ Bernd Zey } \institute{ Department of Computer Science, TU Dortmund, Germany\\ \email{[email protected]} } \maketitle \begin{abstract} We give an overview of new and existing cut- and flow-based ILP formulations for the two-stage stochastic Steiner tree problem and compare the strength of the LP relaxations. \end{abstract} \section{Introduction} The {\em Steiner tree problem (STP)} is a classical network design problem: For an undirected graph $G=(V,E)$ with edge costs $c_e \in \bbR^{\geq 0}, \forall e\in E$, and a set of {\em terminals} $\emptyset \not=T\subseteq V$ it asks for a minimum-cost edge set $E'\subseteq E$ such that $G[E']$ connects $T$. The decision version of the STP is \cNP-complete \cite{Karp1972}, even in the case of edge weights 1 and 2 \cite{BernPlassmann1989} or when the graph is planar \cite{GareyJohnson1977}. It is solvable in polynomial time if the graph is series-parallel (partial 2-tree) \cite{WaldColbourn1983} and it is in \cFPT\ with the parameter being the treewidth $k$ (partial $k$-trees) \cite{ChimaniMutzelZeySTP-FPT} or the number of terminals \cite{DreyfusWagner1972}. Moreover, the STP is approximable within a constant factor and the currently best ratio is $\ln(4) + \eps < 1.39$ \cite{ByrkaEtAlJournal}. Furthermore, ILP formulations and their polytopes have been studied intensively in the 1990s, see, e.g., \cite{ChopraRao1994,ChopraRao1994b, ChopraTsai2001, GoemansMyung1993, KochMartinSTP, PolzinDaneshmand2001}. The two-stage stochastic Steiner tree problem is a natural extension of the STP to a two-stage stochastic combinatorial optimization problem; for an introduction to stochastic programming see, e.g., \cite{BirgeLouveaux1997,ShapiroEtAlBookStochasticProgramming,KallWallaceBook}. In the first stage, today, it is possible to buy some ``profitable'' edges while the terminal set and the edge costs are subject to uncertainty.
However, all possible outcomes are known and given by a set of scenarios. In the second stage, in the future, one of the given scenarios is realized and additional edges have to be installed in order to connect the now known set of terminals. The objective is to make a decision about the edges to be purchased in the first stage and in each scenario such that the terminal set of each scenario is connected and the expected cost of the overall solution is minimized. Formally, the stochastic Steiner tree problem (SSTP) is defined as follows: We are given an undirected graph $G=(V,E)$, first-stage edge costs $c_e^0 \in\bbR^{\ge0}, \forall e\in E$, and a set of $K \ge 1$ scenarios with $\cK:=\{1,\ldots, K\}$. Each scenario $k\in\cK$ is defined by its probability $p^k \in (0; 1]$, second-stage edge costs $c_e^k \in\bbR^{\ge0}, \forall e\in E$, and a set of terminals $\emptyset\not=T^k\subseteq V$. The probabilities satisfy $\sum_{k\in\cK}p^k = 1$. A feasible solution consists of $K+1$ edge sets $E^0, \dots, E^K \subseteq E$ such that $G[E^0 \cup E^k]$ connects $T^k, \forall k\in\cK$. The objective is to minimize the expected cost $\sum_{e\in E^0} c_e^0 + \sum_{k\in\cK} p^k \sum_{e\in E^k} c_e^k$. The \emph{expected cost of an edge} $e\in E$ is defined as $c_e^* := \sum_{k\in \cK} p^k c_e^k$. W.l.o.g.\ one can assume that $c_e^0 < c_e^*, \forall e\in E$; otherwise, such an edge would never be purchased in the first stage since it can be installed in every scenario at the same or cheaper cost. On the other hand, it is also valid to assume $c_e^0 > \min_{k\in\cK}\{p^k c_e^k\}, \forall e\in E$; otherwise, such an edge would never be installed in any scenario, since purchasing it in the first stage is always at least as cheap. Notice that for the SSTP the optimum first-stage solution $E^0$ does not have to be connected.
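To make the definition concrete, the following brute-force sketch (exponential-time, for illustration only; the toy instance is invented and not from the paper) enumerates all first-stage edge sets and, for each scenario, the cheapest completion, returning the minimum expected cost.

```python
import itertools

def powerset(E):
    return itertools.chain.from_iterable(
        itertools.combinations(E, k) for k in range(len(E) + 1))

def connects(n, edges, terminals):
    # Does the edge set connect all terminals? (simple DFS)
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    start = next(iter(terminals))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return set(terminals) <= seen

def solve_sstp(n, E, c0, scenarios):
    # scenarios: list of (p^k, c^k, T^k); returns (expected cost, edge sets)
    best, best_sets = float("inf"), None
    for E0 in powerset(E):
        total, sets = sum(c0[e] for e in E0), [set(E0)]
        for p, c, T in scenarios:
            sbest, spick = float("inf"), None
            for Ek in powerset(E):
                if connects(n, set(E0) | set(Ek), T):
                    cost_k = sum(c[e] for e in Ek)
                    if cost_k < sbest:
                        sbest, spick = cost_k, set(Ek)
            total += p * sbest
            sets.append(spick)
        if total < best:
            best, best_sets = total, sets
    return best, best_sets

# Toy instance: triangle, two equiprobable scenarios with expensive
# second-stage costs, so both cheap first-stage edges are bought.
E = [(0, 1), (0, 2), (1, 2)]
c0 = {e: 1.0 for e in E}
scenarios = [(0.5, {e: 3.0 for e in E}, {0, 1}),
             (0.5, {e: 3.0 for e in E}, {0, 2})]
cost, (E0, E1, E2) = solve_sstp(3, E, c0, scenarios)
print(cost, sorted(E0))  # 2.0 [(0, 1), (0, 2)]
```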
In particular, it is easy to construct instances with the optimum first stage solution being a forest, cf.\ Figure \ref{figure:rsstp:wlog:assumption} and Figure \ref{figure:sstp:examples:disconnected:first:stage:directed} in Section \ref{section:sstp:ip:formulations:directed}. However, fragmented solutions might be unreasonable in practical settings. For example, if new cables or pipes are installed in a city one would prefer starting at one point and connecting adjacent streets first, rather than digging in several parts of the city simultaneously. This leads to the \emph{rooted stochastic Steiner tree problem} (rSSTP) which is defined similarly to the SSTP. It additionally has a root node $r\in V$ which is a terminal in each scenario, i.e., $r\in T^k, \forall k \in \cK$. Then, a feasible solution again consists of $K+1$ edge sets $E^0, \dots, E^K \subseteq E$ such that $G[E^0 \cup E^k]$ connects $T^k, \forall k\in\cK$, but it is required that $G[E^0]$ is a tree containing $r$. As for the SSTP the objective is to minimize the expected cost. Notice that the assumption $c_e^0 < c_e^*, \forall e\in E$, as for the SSTP, is not valid for the rSSTP due to the necessary first stage tree. This is shown by Figure \ref{figure:rsstp:wlog:assumption}; here, edge $e_2$ would be disabled in the first stage, which prohibits the optimum solution. By swapping first- and second-stage edge costs this example shows that this holds for the assumption $c_e^0 > \min_{k\in\cK}\{p^k c_e^k\}$ as well. \begin{figure} \caption{ A simple example for the SSTP where the optimum first stage solution is disconnected. There exists only one scenario (connect terminals $r$ and $3$) and edge costs for the first stage and the scenario, respectively, are written above the edges. The optimum solution selects edges $\{r,1\}, \dots$ \label{figure:rsstp:wlog:assumption}} \end{figure} \paragraph{Organization.} We start in Section \ref{section:sstp:complexity:related:work} with an overview of the related work.
Section \ref{section:ILP:formulations} introduces known and new ILP formulations based on undirected cuts and flows (Section \ref{section:undirected:formulations}), stronger semi-directed formulations that use orientation properties (Section \ref{section:sstp:ip:formulations:semi:directed}), and directed formulations for the rooted SSTP (Section \ref{section:sstp:ip:formulations:directed}). In the last part, in Section \ref{section:sstp:ip:formulations:strength}, all described ILP formulations are compared by considering the strength of their LP relaxations. \section{Related work} \label{section:sstp:complexity:related:work} \paragraph{Approximations.} Although the STP allows constant-factor approximations, the stochastic problems are harder to approximate. \cite{RaviSinha-ua-StochasticShortestPath} showed that the group Steiner tree problem, which is $\Omega(\log^{2-\eps}n)$-hard to approximate, can be reduced to the stochastic shortest path problem (a special case of the (r)SSTP). Nevertheless, in the literature stochastic versions of the STP have mostly been investigated from the viewpoint of approximation algorithms. Due to the inapproximability results, restricted versions have been considered to obtain approximation algorithms, e.g., by introducing a fixed and/or uniform inflation factor or a global terminal (a vertex being a terminal in all scenarios). Moreover, different models of scenario representations are used. Here, we concentrate on the finite/polynomial scenario model where the random variables of the stochastic problems are assumed to have finite support. Other publications consider the black box/oracle model. For an overview of these concepts see, e.g., \cite{ShmoysSwamy2006Approx2StageStochOpt}. \cite{GuptaEtAl2007} consider the SSTP with $K$ inflation factors and a global terminal and present a 40-approximation.
\cite{GuptaKumar2009StochasticSteinerForest} consider the problem with a uniform fixed inflation factor but without a global terminal and describe a constant-factor approximation. For the black box/oracle model there exist several approximation algorithms which are based on the idea of scenario sampling. \cite{ImmorlicaEtAlApproxStochCombOpt} present an $\bigO{\log n}$-approximation algorithm for a problem which is restricted by a uniform inflation factor. \cite{GuptaEtAl2004BoostedSampling,GuptaEtAl2012SamplingApproximationCostSharing} introduce the concept of boosted sampling and consider the problem with a global terminal and a uniform inflation factor; their approximation algorithm has a ratio of $3.55$. A similar problem is considered by \cite{ShmoysSwamy2006Approx2StageStochOpt} who present a 4-approximation. \cite{GuptaPal2005StochasticSteinerTreeWithoutRoot} approximate a problem without a global terminal. This problem has a fixed uniform inflation factor and the presented algorithm has a ratio of $12.6$. \cite{GuptaEtAl-SSTP-NonUniformInflation} consider non-uniform inflation factors, but there are only two cost functions for the first-stage edges and one for the second-stage edges. The problem is shown to be at least $\Omega(\log\log n)$-hard and an approximation algorithm with a polylogarithmic approximation ratio is given. \paragraph{Related publications.} Among others, the approach by \cite{GuptaEtAl2007} is based on a primal-dual scheme where an undirected cut- and flow-based formulation is used. \cite{BomzeEtAl2010} describe a stronger semi-directed cut-based formulation for the SSTP, apply a Benders decomposition/two-stage branch\&cut approach, and present an experimental study. \cite{HeuristicSSTP-Dimacs2014} describe a heuristic for the SSTP which is compared to the exact approach experimentally.
\cite{LjubicMutzelZeySSNDPpaper,LjubicMutzelZeySSNDParticle} extend the SSTP to stochastic survivable network design problems and introduce undirected and semi-directed cut-based formulations. Last but not least, fixed-parameter tractable algorithms are described for the stochastic problems with the overall number of terminals as parameter \cite{KurzMutzelZey2013} and, on partial 2-trees, with the number of scenarios as parameter \cite{BoeklerZeyMutzel2012}. \section{ILP formulations} \label{section:ILP:formulations} We start by introducing undirected cut- and flow-based formulations for the SSTP in Section \ref{section:undirected:formulations}. Afterwards we consider semi-directed models in Section \ref{section:sstp:ip:formulations:semi:directed}. The rooted version can be modeled by stronger directed formulations which are described in Section \ref{section:sstp:ip:formulations:directed}. Finally, Section \ref{section:additional:constraints} deals with additional constraints for the described models. \paragraph{Notations and definitions.} We always use the upper index 0 to indicate the first stage and indices $1, \ldots, K$ for the $K$ scenarios, e.g., $x^0$ is the vector of undirected edge variables of the first stage and $y^1$ and $z^k$ are directed arc variables of the first and the $k$th scenario, respectively. To shorten the notation we use the superscript $1\dots K$ to abbreviate $K$ combined scenario vectors: the vector $x^{1\dots K}$ is the transposed concatenation of the vectors $x^1, \dots, x^K$, i.e., $x^{1\dots K} = ((x^1)^\tp, \dots, (x^K)^\tp)^\tp$. We use $0\dots K$ analogously. Moreover, if, e.g., $x^0$ and $y^{1\dots K}$ are variable vectors we abbreviate the vector $((x^0)^\tp, (y^{1\dots K})^\tp)^\tp$ by $(x^0, y^{1\dots K})$.
For an undirected weighted graph $G=(V,E)$ with edge costs $c_e, \forall e \in E$, the {\em bidirection} of $G$ is the directed graph $\bar G = (V, A)$ with the arc set $A:=\bigcup_{\{i,j\}\in E} \{(i,j), (j,i)\}$ and arc costs $c_{ij} = c_{ji} = c_e, \forall e=\{i,j\}\in E$. We use the common abbreviations for undirected and directed cuts for a vertex set $\emptyset\not= S \subset V$: $\delta(S) = \{e\in E \mid |e\cap S| = 1\}$ and $\delta^-(S) = \{(i,j)\in A\mid i\not\in S, j\in S\}$. Moreover, if $x$ is a variable vector for undirected edges and $z$ for directed arcs we use $x(E') = \sum_{e\in E'} x_e$ and $z(A') = \sum_{a\in A'} z_a$. In the semi-directed formulations each scenario $k\in\cK$ has a designated root vertex $r^k\in T^k$. Then, let $T^k_r := T^k\wo\{r^k\}$ and $V_r^k := V\wo\{r^k\}$. Moreover, let $t_r^* := \sum_{k\in\cK}|T_r^k|$. In the directed formulations with root node $r$ we have $V_r := V\wo\{r\}$ and $T_r^k := T^k\wo\{r\}, \forall k\in\cK$. \subsection{Undirected formulations} \label{section:undirected:formulations} \paragraph{Undirected cut formulation.} The following IP is a formulation based on undirected cuts and was frequently considered in the literature, e.g., by \cite{GuptaEtAl2007}. It is the classical expansion of the undirected cut formulation for the STP, see, e.g., \cite{KochMartinSTP,PolzinDaneshmand2001}. Binary decision variables for the first-stage edges are denoted by $x_e^0, \forall e\in E$, and for the edges of the $k$th scenario by $x_e^k, \forall e\in E, \forall k\in \cK$. The objective is to minimize the expected cost, which is the cost of the selected first-stage edges plus the cost of the second-stage edges weighted by the scenario probabilities. \begin{align} (\text{SSTP}_\text{uc})\, \min\, &\sum_{e\in E}c_e^0 x_e^0 + \sum_{k\in\cK} p^k \sum_{e\in E}c_e^k x_e^k \nonumber \\ \text{s.t.
} (x^0 + x^k)(\delta(S)) &\ge 1 \hspace{25pt}\forall k\in\cK, \forall S\subseteq V\colon \emptyset\not=T^k\cap S\not=T^k \label{SSTP:ucut:undirected:cuts}\\ x^0 &\in \{0, 1\}^{|E|} &&\\ x^{1 \dots K} &\in \{0, 1\}^{|E|\cdot K} && \end{align} Constraints \eqref{SSTP:ucut:undirected:cuts} are undirected cuts ensuring the connectivity of each scenario terminal set. Thereby, first-stage and second-stage edges can be used to satisfy a cut $S\subseteq V$; we use the notation $(x^0 + x^k)(\delta(S)) = \sum_{e\in\delta(S)} (x_e^0 + x_e^k)$. \paragraph{Undirected flow formulation.} Here we present a model similar to the one introduced by \cite{GuptaEtAl2007}, modified such that there is a flow only in the second stage. Thereby, the flow can be routed over selected first-stage or second-stage edges. We again use variables $x^0$ and $x^k, \forall k\in\cK$, for modeling the solution edges. Moreover, the bidirection with arc set $A$ is considered and a flow $f$ is computed in each scenario $k\in\cK$ from a designated root node $r^k\in T^k$ to each terminal. We use variables $f_{ij}^{k,t}$ for each scenario $k\in\cK$, arc $(i,j)\in A$, and terminal $t\in T^k_r$. The undirected flow model for the SSTP then reads as follows: \begin{align} (\text{SSTP}_\text{uf})\, \min\, &\sum_{e\in E}c_e^0 x_e^0 + \sum_{k\in\cK} p^k \sum_{e\in E}c_e^k x_e^k \nonumber\\ \text{s.t.
} x_e^0 + x_e^k &\ge f_{ij}^{k,t}, \nonumber\\ x_e^0 + x_e^k &\ge f_{ji}^{k,t} \hspace{20pt} \forall k \in\cK, \forall e=\{i,j\}\in E, \forall t\in T^k_r \label{sstp:uflow:capacity:constraint:scenario}\\ \sum_{(h,i) \in A} f_{hi}^{k,t} - \sum_{(i, j) \in A} f_{ij}^{k,t} &= \left.\begin{cases} -1, & \text{if } i = r^k\\ 1, & \text{if } i = t\\ 0, & \text{otherwise} \end{cases} \right\} \begin{array}{l} \forall k\in\cK, \forall t\in T^k_r,\\ \forall i \in V \end{array} \label{sstp:uflow:flow:conservation} \\ f &\in [0,1]^{|A|\cdot t_r^*} \label{sstp:uflow:nonnegativity:f}\\ x^0 &\in \{0, 1\}^{|E|} \label{sstp:uflow:integrality:x0}\\ x^{1 \dots K} &\in \{0, 1\}^{|E|\cdot K} \label{sstp:uflow:integrality:xk} \end{align} In this model there has to be one unit of flow in each scenario from the root to each terminal. This is enforced by the flow conservation constraints \eqref{sstp:uflow:flow:conservation}; the root has one unit of net outgoing flow (first case), each terminal one unit of net ingoing flow (second case), and for all other vertices the ingoing flow equals the outgoing flow. Edges which are used for routing the flow are selected as solution edges by the capacity constraints \eqref{sstp:uflow:capacity:constraint:scenario}, either as first-stage or as second-stage edges. It is easy to see that the formulation $(\text{SSTP}_\text{uf})$ is valid and that it is equivalent to the one introduced by \cite{GuptaEtAl2007}. As for the deterministic STP, it is not surprising that the cut-based formulation is equivalent to the flow formulation, cf.\ Section \ref{section:sstp:ip:formulations:strength}. However, there exist stronger formulations based on orientation properties. \subsection{Semi-directed formulations} \label{section:sstp:ip:formulations:semi:directed} \paragraph{Semi-directed cut formulations.} In the following we introduce three semi-directed cut-based formulations for the SSTP.
All models are based on the application of orientation properties, as in the directed cut formulation for the STP. However, the edge variables $x^0$ for the first stage remain undirected in all semi-directed formulations. As will be discussed at the beginning of Section \ref{section:sstp:ip:formulations:directed}, using a directed first stage is difficult and no stronger formulation is known. On the other hand, it is possible to consider the bidirected input graph $\bar G=(V,A)$ in the second stage. In the first semi-directed model we use arc variables $z^k_a, \forall a\in A, \forall k\in\cK$. We search for a first-stage edge set $E^0$ and second-stage arc sets $A^1, \dots, A^K$ such that $E^0 \cup A^k$ contains a semi-directed path from a designated terminal $r^k\in T^k$ to each terminal in $T_r^k$, for all scenarios $k\in\cK$. In other words, $A^0 \cup A^k$ has to contain a feasible arborescence for all scenarios $k\in\cK$, with $A^0 := \bigcup_{\{i,j\}\in E^0} \{(i,j), (j,i)\}$. To shorten the notation we write $(x^0+z^k)(\delta^-(S)) := x^0(\delta(S)) + z^k(\delta^-(S)) = \sum_{(i,j)\in\delta^-(S)} (x^0_{\{i,j\}} + z_{ij}^k)$ for semi-directed cuts. \begin{align} (\text{SSTP}_\text{sdc1})\, \min\, &\sum_{e\in E} c_e^0 x_e^0 + \sum_{k\in\cK} p^k \sum_{e=\{i,j\}\in E} c_e^k (z_{ij}^k + z_{ji}^k) \nonumber \\ \text{s.t. } (x^0 + z^k)(\delta^-(S)) &\ge 1 \hspace{25pt}\forall k\in\cK, \forall S\subseteq V_r^k\colon S\cap T_r^k \not=\emptyset \label{SSTP:sdcut1:semi:directed:cuts}\\ x^0 &\in \{0, 1\}^{|E|} \\ z^{1\dots K} &\in \{0, 1\}^{|A|\cdot K} \end{align} This first formulation uses semi-directed cuts, i.e., each cut \eqref{SSTP:sdcut1:semi:directed:cuts} for scenario $k\in\cK$ can be fulfilled by first-stage edges or by second-stage arcs from this scenario. \begin{lemm} Formulation $(\text{SSTP}_\text{sdc1})$ models the stochastic Steiner tree problem correctly.
\end{lemm} \begin{proof} Let $\tilde E^0, \tilde E^1, \dots, \tilde E^K$ be an optimum solution for the stochastic Steiner tree problem. Since this solution connects all terminals in all scenarios, we can easily find 0/1-values for $x^0$ and $z^k, \forall k\in\cK$, respectively, by using exactly the edges $\tilde E^0, \dots, \tilde E^K$ such that there is a semi-directed path from $r^k$ to each terminal in $T_r^k, \forall k\in\cK$. On the other hand, due to constraints \eqref{SSTP:sdcut1:semi:directed:cuts} an optimum solution $(\tilde x^0, \tilde z^{1\dots K})$ to $(\text{SSTP}_\text{sdc1})$ connects the designated root node $r^k$ with semi-directed paths to each terminal in $T_r^k$, for all scenarios $k\in\cK$. Hence, using the selected undirected first-stage edges plus the undirected counterparts of the second-stage arcs gives a feasible solution to the SSTP with the same objective value. \qed \end{proof} In formulation $(\text{SSTP}_\text{sdc1})$ a selected first-stage edge fulfills all related semi-directed cuts. Hence, in the extreme case when all terminals are connected via first-stage edges this model is not stronger than the undirected model. This drawback is overcome by the second semi-directed formulation \cite{BomzeEtAl2010}. It is based on additional capacity constraints which enforce that selected first-stage edges have to be incorporated into the second-stage solution: Each selected first-stage edge has to be oriented such that a feasible arborescence is established in each scenario. Due to this change, the cut constraints are now purely directed and contain only second-stage arc variables $y^{1\dots K}$. Because of the different meaning of the second-stage arc variables we use the identifier $y^{1\dots K}$ instead of $z^{1\dots K}$ as in $(\text{SSTP}_\text{sdc1})$.
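The strength difference between undirected and semi-directed cuts can already be seen on a toy instance. The sketch below is illustrative only (it assumes SciPy is available; the instance is invented here and prices the first stage out with a large $c^0$, so the single-scenario SSTP reduces to an STP): it enumerates all cut constraints of $(\text{SSTP}_\text{uc})$ and $(\text{SSTP}_\text{sdc1})$ on a triangle with three terminals and compares the LP bounds.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

V = [0, 1, 2]                       # triangle, all nodes are terminals
E = [(0, 1), (0, 2), (1, 2)]
A = E + [(j, i) for (i, j) in E]    # bidirection
root, c0, c1 = 0, 10.0, 1.0         # one scenario, p^1 = 1

def lp_value(obj, rows):
    # minimize obj.x subject to row.x >= 1 for every cut row, 0 <= x <= 1
    res = linprog(obj, A_ub=-np.array(rows), b_ub=-np.ones(len(rows)),
                  bounds=(0, 1))
    return res.fun

# (SSTP_uc): variables (x^0_e, x^1_e); cuts over all proper subsets of V
rows = []
for k in (1, 2):
    for S in itertools.combinations(V, k):
        d = [1.0 if (u in S) != (v in S) else 0.0 for (u, v) in E]
        rows.append(d + d)          # x^0 and x^1 both count on the cut
uc = lp_value([c0] * 3 + [c1] * 3, rows)

# (SSTP_sdc1): variables (x^0_e, z^1_a); subsets avoiding the root
rows = []
for k in (1, 2):
    for S in itertools.combinations([v for v in V if v != root], k):
        dE = [1.0 if (u in S) != (v in S) else 0.0 for (u, v) in E]
        dA = [1.0 if (u not in S and v in S) else 0.0 for (u, v) in A]
        rows.append(dE + dA)
sdc1 = lp_value([c0] * 3 + [c1] * 6, rows)

print(round(uc, 6), round(sdc1, 6))  # 1.5 2.0
```

The undirected LP bound of 1.5 (all edges at value 1/2) versus the semi-directed bound of 2.0 (the IP optimum) is the classical triangle gap of the deterministic STP, inherited here by the stochastic formulations.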
The second semi-directed cut formulation for the SSTP reads as follows: \begin{align} (\text{SSTP}_\text{sdc2})\, \min\, &\sum_{e\in E} c_e^0 x_e^0 + \sum_{k\in\cK} p^k \sum_{e=\{i,j\}\in E} c_e^k (y_{ij}^k + y_{ji}^k - x_e^0) \nonumber\\ \text{s.t. } y^k(\delta^-(S)) &\ge 1 \hphantom{x_e^0}\hspace{25pt}\forall k\in\cK, \forall S\subseteq V_r^k\colon S\cap T_r^k \not=\emptyset \label{SSTP:sdcut2:directed:cuts}\\ y_{ij}^k + y_{ji}^k &\ge x_e^0 \hphantom{1}\hspace{25pt}\forall k\in\cK, \forall e=\{i,j\}\in E \label{SSTP:sdcut2:capacity:first:second:stage}\\ x^0 &\in \{0, 1\}^{|E|} \label{SSTP:sdcut2:x0:integrality} \\ y^{1\dots K} &\in \{0, 1\}^{|A|\cdot K} \label{SSTP:sdcut2:yk:integrality} \end{align} This formulation is basically a union of $K$ directed Steiner tree formulations joined by the first stage through the capacity constraints \eqref{SSTP:sdcut2:capacity:first:second:stage}. Compared to the previous cut-based formulations, the objective function contains a corrective term subtracting the additional cost that results from these constraints. \begin{lemm}[\cite{BomzeEtAl2010}] Formulation $(\text{SSTP}_\text{sdc2})$ models the stochastic Steiner tree problem correctly. \end{lemm} \begin{proof} An optimum solution $\tilde E^0, \tilde E^1, \dots, \tilde E^K$ to the SSTP can easily be translated into a feasible solution for model $(\text{SSTP}_\text{sdc2})$ by using the edge set $\tilde E^0 \cup \tilde E^k$ for finding a feasible arborescence in each scenario $k\in\cK$; then let variables $x^0$ represent $\tilde E^0$ and set the arc variables $y^k$ according to the arborescences, $\forall k\in\cK$. Conversely, due to the correctness of the directed cut formulation for the deterministic STP, an optimum solution $(\tilde x^0, \tilde y^{1\dots K})$ to $(\text{SSTP}_\text{sdc2})$ contains an $r^k$-rooted arborescence in each scenario $k\in\cK$.
Hence, $\tilde E^0, \tilde E^1, \dots, \tilde E^K$, with $\tilde E^0 := \{e\in E\mid \tilde x_e^0 = 1\}$ and $\forall k\in\cK\colon \tilde E^k := \{e=\{i,j\}\in E\mid \tilde y_{ij}^k = 1 \vee \tilde y_{ji}^k = 1\}$, is a feasible solution with the same objective value. \qed \end{proof} Let $(\text{SSTP}_\text{sdc2}^{\text{rel}:x^0})$ denote formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ with the integrality constraint \eqref{SSTP:sdcut2:x0:integrality} being relaxed to $x^0 \in [0,1]^{|E|}$. \begin{lemm}[\cite{BomzeEtAl2010}] \label{sstp:sdcut2:integrality} The optimum solution to $(\text{SSTP}_\text{sdc2}^{\text{rel}:x^0})$ is integer. \end{lemm} \begin{proof} Assume there exists an optimum solution $(\tilde x^0, \tilde y^{1\dots K})$ to $(\text{SSTP}_\text{sdc2}^{\text{rel}:x^0})$ that is non-integer. Let variable $\tilde x_e^0$ corresponding to edge $e = \{i,j\} \in E$ be fractional, i.e., $0 < \tilde x_e^0 < 1$. The term in the objective function corresponding to edge $e$ is: \begin{align} & c_e^0 \tilde x_e^0 + \sum_{k\in\cK} p^k c_e^k (\tilde y_{ij}^k + \tilde y_{ji}^k - \tilde x_e^0) \nonumber\\ =\, &c_e^0 \tilde x_e^0 - \sum_{k\in\cK} p^k c_e^k \tilde x_e^0 + \sum_{k\in\cK} p^k c_e^k (\tilde y_{ij}^k + \tilde y_{ji}^k) \nonumber \\ =\, & (c_e^0 - c_e^*) \tilde x_e^0 + \sum_{k\in\cK} p^k c_e^k (\tilde y_{ij}^k + \tilde y_{ji}^k)\nonumber \end{align} with $c_e^* := \sum_{k\in\cK} p^k c_e^k$. In case $c_e^0 < c_e^*$ set $\tilde x_e^0 := 1$ and if $c_e^0 > c_e^*$ set $\tilde x_e^0 := 0$. In both cases the resulting solution is still feasible: Constraint \eqref{SSTP:sdcut2:capacity:first:second:stage}, together with $\tilde x_e^0 > 0$ and the integrality of $y^{1\dots K}$, ensures that $\tilde y_{ij}^k + \tilde y_{ji}^k \ge 1$ for all scenarios $k\in\cK$; hence, \eqref{SSTP:sdcut2:capacity:first:second:stage} remains satisfied. Moreover, the objective value improves, which is a contradiction.
In case $c_e^0 = c_e^*$ variable $x_e^0$ has coefficient 0 in the objective function and can be fixed to $\tilde x_e^0 := 0$. \qed \end{proof} Let us briefly revisit formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{uc}$]{SSTP}$ based on undirected cuts. Notice that by adding similar capacity constraints $x_e^k \geq x_e^0, \forall k\in \cK, \forall e\in E$, the undirected cuts \eqref{SSTP:ucut:undirected:cuts} contain only second-stage variables, as in model $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{sdc2}$]{SSTP}$. Moreover, it is possible to relax the first-stage variables to $x^0\in[0,1]^{|E|}$ without violating overall integrality; the proof is very similar to the one of Lemma \ref{sstp:sdcut2:integrality}. On the other hand, these modifications do not influence the strength of the LP relaxation, and the resulting formulation is as strong as $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{uc}$]{SSTP}$. We close the discussion on semi-directed cut-based formulations by rewriting the objective function of $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$. Moving the first-stage variables into the first sum gives the following formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{sdc2}^*$]{SSTP}$ (\cite{BomzeEtAl2010}): \begin{align} \@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{sdc2}^*$]{SSTP} \min \sum_{e\in E} (c_e^0 - c_e^*) x_e^0 + &\sum_{k\in\cK} p^k \sum_{e=\{i,j\}\in E} c_e^k (y_{ij}^k + y_{ji}^k) \nonumber\\ \text{s.t. } (x^0,y^{1\ldots K}) \text{ satisfies }&\text{\eqref{SSTP:sdcut2:directed:cuts}--\eqref{SSTP:sdcut2:yk:integrality}} \nonumber \end{align} Obviously, $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{sdc2}^*$]{SSTP}$ and $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ are identical. However, when the model is decomposed by Benders' decomposition, the modified objective function does matter, cf.\ \cite{BomzeEtAl2010}.
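The equivalence of the two objective functions and the rounding step used in the integrality proof can be checked numerically. The following plain-Python sketch uses randomly generated costs and hypothetical aggregated edge variables (one value per edge standing for $y_{ij}^k + y_{ji}^k$); it illustrates only the objective algebra, not an implementation of the full model:

```python
import random

random.seed(0)

# Small random instance: first-stage costs c0[e], scenario costs c[k][e],
# scenario probabilities p[k]; y[k][e] aggregates y_ij^k + y_ji^k per edge.
E, K = 6, 3
p = [1.0 / K] * K
c0 = [random.uniform(1, 5) for _ in range(E)]
c = [[random.uniform(1, 5) for _ in range(E)] for _ in range(K)]
# expected second-stage cost c*_e = sum_k p^k c_e^k
cstar = [sum(p[k] * c[k][e] for k in range(K)) for e in range(E)]

def obj_sdc2(x0, y):
    # objective of (SSTP_sdc2): first-stage cost plus corrected scenario cost
    return sum(c0[e] * x0[e] for e in range(E)) + sum(
        p[k] * sum(c[k][e] * (y[k][e] - x0[e]) for e in range(E))
        for k in range(K))

def obj_sdc2_star(x0, y):
    # rewritten objective of (SSTP_sdc2*): coefficient (c0 - c*) on x0
    return sum((c0[e] - cstar[e]) * x0[e] for e in range(E)) + sum(
        p[k] * sum(c[k][e] * y[k][e] for e in range(E)) for k in range(K))

x0 = [random.random() for _ in range(E)]  # fractional first stage
y = [[1.0] * E for _ in range(K)]         # capacity + integrality force y >= 1

# the two objectives coincide on every solution
assert abs(obj_sdc2(x0, y) - obj_sdc2_star(x0, y)) < 1e-9

# rounding step of the lemma: push x0[e] to 1 if c0 < c*, else to 0;
# this never increases the (linear-in-x0) objective for fixed y
x0_rounded = [1.0 if c0[e] < cstar[e] else 0.0 for e in range(E)]
assert obj_sdc2(x0_rounded, y) <= obj_sdc2(x0, y) + 1e-9
```

The sketch mirrors the proof: for fixed integer $y$, the objective is linear in each $x_e^0$ with coefficient $c_e^0 - c_e^*$, so rounding according to the sign of this coefficient can only improve the value.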
Then, the master problem of formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{sdc2}^*$]{SSTP}$ has negative coefficients (since $c_e^* > c_e^0$) whereas the coefficients in the master problem of $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ are non-negative. Moreover, this change affects the primal and dual subproblems and, in particular, the generated optimality cuts. \paragraph{Semi-directed flow formulation.} The flow formulation can be strengthened as in the deterministic setting. One simply has to enforce that a selected undirected edge cannot be used for routing the flow of a single commodity in both directions at the same time. Therefore, directed arc variables $y^k, \forall k\in \cK$, are used and constraints \eqref{sstp:uflow:capacity:constraint:scenario} are replaced by the stronger constraints \eqref{sstp:sdflow:capacity}. To highlight the connection to formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$, we use the same capacity constraints \eqref{sstp:sdflow:capacity:first:second:stage}. \begin{align} \@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdf]{SSTP}\, \min\, &\sum_{e\in E}c_e^0 x_e^0 + \sum_{k\in\cK} p^k \sum_{e=\{i,j\}\in E}c_e^k (y_{ij}^k + y_{ji}^k - x_e^0)\nonumber \\ \text{s.t.
} f \text{ satisfies } &\text{\eqref{sstp:uflow:flow:conservation}}\nonumber\\ y_{ij}^k &\ge f_{ij}^{k,t} \hphantom{x_e^0}\hspace{25pt} \forall k \in\cK, \forall (i,j)\in A, \forall t\in T^k_r \label{sstp:sdflow:capacity} \\ y_{ij}^k + y_{ji}^k &\ge x_e^0 \hphantom{f_{ij}^{k,t}}\hspace{25pt} \forall k \in\cK, \forall e=\{i,j\}\in E \label{sstp:sdflow:capacity:first:second:stage} \\ f &\in [0,1]^{|A|\cdot t_r^*} \label{sstp:sdflow:nonnegativity:f}\\ x^0 &\in \{0, 1\}^{|E|} \label{sstp:sdflow:integrality:x0}\\ y^{1\dots K} &\in \{0, 1\}^{|A|\cdot K} && \label{sstp:sdflow:yk:integrality} \end{align} Formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdf]{SSTP}$ is equivalent to $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$: instead of satisfying directed cuts, one has to find a feasible flow in each scenario; moreover, the scenarios are linked to the first stage by the capacity constraints \eqref{sstp:sdflow:capacity:first:second:stage}. \begin{observation} Formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdf]{SSTP}$ models the stochastic Steiner tree problem correctly. \end{observation} \subsection{Directed formulations} \label{section:sstp:ip:formulations:directed} Formulating the SSTP with a directed first stage causes difficulties when first-stage solutions are disconnected. Consider Figure \ref{figure:sstp:examples:disconnected:first:stage:directed}, which depicts such an example. Here, the optimum first-stage solution is disconnected, as shown in Figure \ref{figure:sstp:examples:disconnected:first:stage:directed} (a). The optimum arborescences of the two scenarios are given in (b). In particular, edge $e_4$ is used in direction $(3,4)$ in the first and in direction $(4,3)$ in the second scenario. Hence, fixing an orientation already in the first stage excludes an optimum scenario solution---or at least makes the corresponding solution more expensive.
\begin{figure} \caption{(a) An SSTP instance with two equally probable scenarios with identical terminal set $\{1,3,4\}$ and a disconnected optimum first-stage solution; (b) the optimum arborescences of the two scenarios.} \label{figure:sstp:examples:disconnected:first:stage:directed} \end{figure} \paragraph{Directed cut formulations for the rSSTP.} While we are not aware of a fully directed and stronger cut-based formulation for the SSTP, the rooted version of the SSTP permits a model with directed cuts only. For the following formulations we again consider the weighted bidirection $\bar G=(V,A)$ of the input graph. The first formulation is called $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$; afterwards, we introduce two more formulations, $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$ and $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$dc2^*$]{rSSTP}$, similar to the semi-directed case. We use directed arc variables $z^0$ and $z^k$ for the first and second stage in scenario $k\in\cK$, respectively. Constraints \eqref{rSSTP:dcut1:scenario:cuts} are directed cuts ensuring a feasible arborescence in each scenario, consisting of first- and second-stage arcs. Moreover, the additional directed cuts \eqref{rSSTP:dcut1:cuts:first:stage} are used to enforce the required first-stage tree. \begin{align} \@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}\, \min\, &\sum_{a\in A} c_a^0 z_a^0 + \sum_{k\in\cK} p^k \sum_{a\in A} c_a^k z_a^k \nonumber\\ \text{s.t. } z^0(\delta^-(S)) &\ge z^0(\delta^-(v)) \hspace{21pt}\forall \emptyset\not=S\subseteq V_r, \forall v\in S \label{rSSTP:dcut1:cuts:first:stage} \\ (z^0 + z^k)(\delta^-(S)) &\ge 1 \hphantom{z(\delta^-(v))}\hspace{20pt}\forall k\in\cK, \forall S\subseteq V_r\colon S\cap T_r^k \not=\emptyset \label{rSSTP:dcut1:scenario:cuts}\\ z^0 &\in \{0, 1\}^{|A|} \\ z^{1\dots K} &\in \{0, 1\}^{|A|\cdot K} \label{SSTP:dcut1:yk:integrality} \end{align} \begin{lemm} Formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ models the rooted stochastic Steiner tree problem correctly.
\end{lemm} \begin{proof} Let $\tilde E^0, \tilde E^1, \dots, \tilde E^K$ describe an optimum rSSTP solution. Since $\tilde E^0$ induces a tree, its edges can be oriented from the root $r$ outwards. Then, it is clear that for each scenario $k\in\cK$ the edge set $\tilde E^k$ can be oriented such that $\tilde E^0\cup \tilde E^k$ contains an arborescence with directed paths from $r$ to each terminal. This orienting procedure gives a solution to $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$. On the other hand, an optimum solution to $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ guarantees that every terminal is reachable by a directed path from the root node due to constraints \eqref{rSSTP:dcut1:scenario:cuts}. Moreover, constraints \eqref{rSSTP:dcut1:cuts:first:stage} together with the objective function ensure that the first stage is a tree rooted at $r$. Hence, the related undirected edges yield a feasible solution to the rSSTP. \qed \end{proof} The same idea that led to the semi-directed formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ can be used for another directed formulation of the rSSTP. The variable identifier for the first-stage arcs is $z^0$ and the arc variables for the $K$ scenarios are $y^{1\dots K}$. Again, we use the identifier $y$ to emphasize the different meaning: scenario arcs already contain the selected first-stage arcs. \begin{align} \@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}\, \min\, &\sum_{a\in A} c_a^0 z_a^0 + \sum_{k\in\cK} p^k \sum_{a\in A} c_a^k (y_a^k - z_a^0) \nonumber\\ \text{s.t.
} z^0(\delta^-(S)) &\ge z^0(\delta^-(v)) \hphantom{1z_{ij^0}}\forall \emptyset\not=S\subseteq V_r, \forall v\in S \label{rSSTP:dcut2:cuts:first:stage} \\ y^k(\delta^-(S)) &\ge 1 \hphantom{z_{ij^0}z^0(\delta^-(v))}\forall k\in\cK, \forall S\subseteq V_r\colon S\cap T_r^k \not=\emptyset \label{rSSTP:dcut2:scenario:cuts}\\ y_{ij}^k &\ge z_{ij}^0 \hphantom{z_0z^0(\delta^-(v))}\forall k\in\cK, \forall (i,j)\in A \label{rSSTP:dcut2:capacity:first:second:stage} \\ z^0 &\in \{0, 1\}^{|A|} \label{rSSTP:dcut2:z0:integrality}\\ y^{1\dots K} &\in \{0, 1\}^{|A|\cdot K} \label{rSSTP:dcut2:yk:integrality} \end{align} Constraints \eqref{rSSTP:dcut2:cuts:first:stage} are identical to constraints \eqref{rSSTP:dcut1:cuts:first:stage} in $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ and model the first-stage tree. Capacity constraints \eqref{rSSTP:dcut2:capacity:first:second:stage} enforce the selection of used first-stage arcs in each scenario. Again, the objective function contains a corrective term for the additional cost. The directed cuts \eqref{rSSTP:dcut2:scenario:cuts} in the scenarios then contain only the variables $y$. \begin{observation} Formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$ models the rooted stochastic Steiner tree problem correctly. \end{observation} The objective function of model $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$ can be rewritten analogously to the semi-directed formulation. We call the resulting formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{dc2}^*$]{rSSTP}$; it is equivalent to $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$, but the change in the objective function matters when a decomposition is applied. \begin{align} \@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[$\text{dc2}^*$]{rSSTP}\, \min\, &\sum_{a\in A} (c_a^0 - c_a^*) z_a^0 + \sum_{k\in\cK} p^k \sum_{a\in A} c_a^k y_a^k \nonumber \\ \text{s.t.
} (z^0, y^{1\dots K}) &\text{ satisfies \eqref{rSSTP:dcut2:cuts:first:stage}--\eqref{rSSTP:dcut2:yk:integrality}} \nonumber \end{align} If $c_a^0 < c_a^* := \sum_{k\in\cK} p^k c_a^k$ holds for all arcs $a\in A$, we can again relax the integrality restrictions on the first-stage variables without losing overall integrality. Let $(\text{rSSTP}_\text{dc2}^{\text{rel}:z0})$ denote formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$ with the integrality constraint \eqref{rSSTP:dcut2:z0:integrality} being relaxed to $z^0 \in [0,1]^{|A|}$. \begin{theorem} \label{sstp:dcut2:integrality} If $c_a^0 < c_a^*$ holds for all $a\in A$, the optimum solution to $(\text{rSSTP}_\text{dc2}^{\text{rel}:z0})$ is integer. \end{theorem} \begin{proof} Let $(\tilde z^0, \tilde y^{1\dots K})$ denote an optimum solution to $(\text{rSSTP}_\text{dc2}^{\text{rel}:z0})$ that is non-integer. Now consider an arc $\alpha\in A$ with $0< \tilde z_{\alpha}^0 < 1$ defined as follows. If there exists a fractional arc $(r,j)$ we set $\alpha := (r,j)$. Otherwise, we set $\alpha:=(i,j)$ such that the directed path $P$ from the root $r$ to vertex $i$ consists only of selected arcs, i.e., $\tilde z_a^0 = 1, \forall a\in P$. Notice that arc $\alpha$ is well-defined due to constraints \eqref{rSSTP:dcut2:cuts:first:stage}. We consider three main cases. In each case we construct a feasible solution $(\hat z^0, \hat y^{1\dots K})$ with a better objective value than $(\tilde z^0, \tilde y^{1\dots K})$. We always start from the solution $(\hat z^0, \hat y^{1\dots K})$ with $\hat z^0 := \tilde z^0, \hat y^{1\dots K} := \tilde y^{1\dots K}$ and describe the necessary modifications. {\em Case 1:} $\alpha = (i, r)$. Since $\alpha$ is an ingoing arc of the root $r$, it is not contained in any directed cut. Hence, setting $\hat z_\alpha^0 := 0$ and $\hat y_\alpha^k := 0, \forall k\in\cK$, gives a better solution. {\em Case 2:} $\alpha=(r, j)$. In this case set $\hat z_\alpha^0 := 1$.
First, notice that the objective value improves: by the capacity constraints \eqref{rSSTP:dcut2:capacity:first:second:stage} and the integrality of $y^{1\dots K}$ it holds $\tilde y_\alpha^k = 1$, so the term in the objective function with respect to arc $\alpha$ is $ c_\alpha^0 \tilde z_\alpha^0 + \sum_{k\in\cK} p^k c_\alpha^k (\tilde y_\alpha^k - \tilde z_\alpha^0) = c_\alpha^0 \tilde z_\alpha^0 + \sum_{k\in\cK} p^k c_\alpha^k (1-\tilde z_\alpha^0) = (c_\alpha^0 - c_\alpha^*) \tilde z_\alpha^0 + c_\alpha^* $; since $c_\alpha^0 < c_\alpha^*$, setting $\hat z_\alpha^0 := 1$ decreases this term. Second, we argue that the solution $(\hat z^0, \hat y^{1\dots K})$ is feasible. Since $\hat y^{1\dots K}=\tilde y^{1\dots K}$ we do not need to consider constraints \eqref{rSSTP:dcut2:scenario:cuts}. Constraints \eqref{rSSTP:dcut2:cuts:first:stage} are only critical for vertex $j$ since for all other vertices the right-hand side does not change and the left-hand side does not decrease. For vertex $j$, notice that $z_\alpha^0$ is contained in both the left-hand and the right-hand side of any such constraint; hence, the constraints are still satisfied. Constraint \eqref{rSSTP:dcut2:capacity:first:second:stage} is also only of interest for arc $\alpha$; since $\tilde z_\alpha^0 > 0$ it holds $\hat y_\alpha^k = \tilde y_\alpha^k = 1$ and the constraint remains satisfied. {\em Case 3:} $\alpha=(i, j)$ with $i\not=r, j\not=r$. Let $\cL := \{\ell\in V \mid (\ell, j)\in A, \ell \not= i, \tilde z_{\ell j}^0 > 0 \}$, i.e., $\cL$ is the set of vertices $\ell\not=i$ with a (fractionally) selected arc $(\ell, j)$. {\em Case 3.1:} $\cL = \emptyset$. Hence, arc $\alpha$ is the only ingoing arc of $j$ with $\tilde z_{\cdot,j}^0 > 0$. In this case we set $\hat z_\alpha^0 := 1$. The arguments are similar to Case 2. Again, the objective value improves and constraints \eqref{rSSTP:dcut2:scenario:cuts} and \eqref{rSSTP:dcut2:capacity:first:second:stage} are still satisfied.
Constraints \eqref{rSSTP:dcut2:cuts:first:stage} are again only critical for vertex $j$ and are satisfied due to the properties of arc $\alpha$: Recall that we set $\alpha$ such that the directed path $P$ from $r$ to $i$ consists of arcs $a$ with $\tilde z_a^0 = 1, \forall a\in P$. Hence, any cut $S$ with $j\in S, r\not\in S$ crosses $P\cup\{\alpha\}$, where each arc $a\in P\cup\{\alpha\}$ has $\hat z_a^0 = 1$, so that $\hat z^0(\delta^-(S)) \ge 1 = \hat z^0(\delta^-(j))$. {\em Case 3.2:} $\cL \not= \emptyset$. Since $\cL \not= \emptyset$ there exists at least one arc $(\ell, j)$ with $\tilde z^0_{\ell j} > 0, \ell \not=i$. Hence, due to capacity constraints \eqref{rSSTP:dcut2:capacity:first:second:stage} and the integrality of $y^{1\dots K}$ it holds $\tilde y^k(\delta^-(j)) \ge 1 + |\cL| \ge 2$ in any scenario $k\in\cK$. Since the directed cuts only require a right-hand side of 1, it is obvious that this solution is not optimal. Now, set $\hat z_{\alpha}^0 := 1$, $\hat z_{\ell j}^0 := 0, \forall \ell\in\cL$, and $\hat y_{\ell j}^k := 0, \forall \ell\in\cL, \forall k\in\cK$. First, we argue that this solution has a better objective value; afterwards, we discuss its feasibility. As discussed in Case 2, increasing $\hat z_{\alpha}^0$ leads to a decrease of the objective value. Moreover, deleting arcs from the solution by setting $\hat z_{\ell j}^0 := 0, \forall \ell\in\cL$, and $\hat y_{\ell j}^k := 0, \forall \ell\in\cL, \forall k\in\cK$, improves the objective, too. Hence, the newly constructed solution has a better objective value. To show the feasibility of this solution we consider the constraints one by one. Capacity constraints \eqref{rSSTP:dcut2:capacity:first:second:stage} are satisfied by construction. The directed cuts in the scenarios \eqref{rSSTP:dcut2:scenario:cuts} are satisfied for every valid cut $S\ni j$ since $S$ crosses the path $P$ or arc $\alpha$, where each arc $a\in P\cup\{\alpha\}$ has a value $\hat y_a^k = 1, \forall k\in\cK$, such that $\hat y^k(\delta^-(S))\geq 1$.
All other valid cuts $S\not\ni j$ are still satisfied since the arc variables crossing the cuts are not modified. Last but not least, we have to consider constraints \eqref{rSSTP:dcut2:cuts:first:stage}; here, the arguments are very similar. Consider any valid cut $S$ for constraint \eqref{rSSTP:dcut2:cuts:first:stage}. If $j\not\in S$ the constraint is still satisfied since the related arc variables are unchanged. In case $j\in S$ the cut $S$ crosses $P\cup \{\alpha\}$ such that (i) $\hat z^0(\delta^-(S))\geq 1$. Since arc costs are non-negative and the right-hand side of the directed cuts is 1, any optimum solution satisfies (ii) $z^0(\delta^-(v)) \leq 1, \forall v\in V$. We modified $z^0$ such that (iii) $\hat z^0(\delta^-(j)) = 1$. Combining (i)--(iii) shows that constraints \eqref{rSSTP:dcut2:cuts:first:stage} are satisfied. \qed \end{proof} \begin{figure} \caption{ Instance for the STP where the directed cut formulation has an integrality gap of $10/9$. All edge costs are 1 and terminals are drawn as rectangles. (a) shows the optimum fractional solution (dashed arcs are set to 0.5) whereas (b) depicts an optimum integer solution. This graph can be used to construct an rSSTP-instance where the optimum solution to $(\text{rSSTP}_\text{dc1}^{\text{rel}:z0})$ is fractional.} \label{figure:sstp:rsstp:first:stage:integer} \end{figure} Let us briefly revisit the first directed cut formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ and show that $(\text{rSSTP}_\text{dc1}^{\text{rel}:z0})$ does not have the latter property; let $(\text{rSSTP}_\text{dc1}^{\text{rel}:z0})$ denote formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ with relaxed first-stage variables $z^0\in[0,1]^{|A|}$. An example is given by Figure \ref{figure:sstp:rsstp:first:stage:integer} (a).
The corresponding undirected graph is a classical instance for the deterministic STP (cf.\ e.g., \cite{PolzinDaneshmand2009ApporachesSTP}) where the directed cut formulation has an integrality gap---here it is $10/9$. Now, consider an rSSTP-instance on that graph that contains one scenario with the four terminals $\{1, 4, 5, 6\}$ and with vertex 1 being the root $r$. Moreover, let each edge cost 1 in the first stage and 2 in the scenario, so that $(\text{rSSTP}_\text{dc1}^{\text{rel}:z0})$ connects all terminals already in the first stage. Figure \ref{figure:sstp:rsstp:first:stage:integer} (a) gives the optimum solution with cost $4.5$ for model $(\text{rSSTP}_\text{dc1}^{\text{rel}:z0})$ where each dashed arc $a$ is set to $z^0_a := 0.5$; moreover, $z^{1\dots K} := \mathbf{0}$ is integer. Figure \ref{figure:sstp:rsstp:first:stage:integer} (b) depicts the first stage of an optimum solution with cost 5 for the described rSSTP-instance, which is also the optimum solution to $(\text{rSSTP}_\text{dc2}^{\text{rel}:z0})$. \paragraph{Directed flow formulations for the rSSTP.} We close the discussion on formulations for the rooted stochastic Steiner tree problem by introducing a polynomially sized model. This formulation is again flow-based. Compared to the previously introduced flow formulations it requires additional node variables $w_v^0 \in\{0,1\}, \forall v\in V_r$, and additional first-stage flow variables $f_{ij}^{0,v}, \forall v\in V_r, \forall (i,j)\in A$, for ensuring the first-stage tree. The description of the formulation is split into several parts for better readability. First, we introduce the variables. The solution is represented by arc variables $z^0$ for the first stage and $y^{1\dots K}$ for the $K$ scenarios. We again use capacity constraints to ensure that each first-stage arc is also used in each scenario. Hence, we use the same identifiers $y^{1\dots K}$ for the second stage.
As for the semi-directed flow formulations we have flow variables $f_{ij}^{k,t}$ for each scenario $k\in\cK$, terminal $t\in T_r^k$, and arc $(i,j)\in A$. Moreover, we use the already mentioned flow variables $f_{ij}^{0,v}$ and binary node variables $w_v^0$ for the first stage. \begin{align} f^0 &\in [0,1]^{|V_r|\cdot|A|} \label{rsstp:dflow:f0:in:01} \\ f^{1\dots K} &\in [0,1]^{|A|\cdot t_r^*} \\ z^0 &\in \{0, 1\}^{|A|}\\ w^0 &\in\{0,1\}^{|V_r|} \label{rsstp:dflow:w0:integrality}\\ y^{1\dots K} &\in \{0, 1\}^{|A|\cdot K} \label{rsstp:dflow:integrality:yk} \end{align} The constraints which contain first-stage variables are given as follows. Here, $w_v^0=1$ indicates that vertex $v$ is contained in the first-stage tree. In this case a flow of one unit needs to be sent from the root to this vertex. This is ensured by the classical flow conservation constraints \eqref{rsstp:dflow:first:stage:low:conservation}, here with right-hand sides $w_v^0$ and $-w_v^0$, respectively. Constraints \eqref{rsstp:dflow:first:stage:variables} ensure the correct assignment of the node variables. \begin{align} z_{ij}^0 &\ge f_{ij}^{0,v} \hphantom{z^0(\delta^-(v)) } \forall v\in V_r, \forall (i,j)\in A \label{rsstp:dflow:first:stage:capacity:arcs:flow} \\ w_v^0 &\ge z^0(\delta^-(v)) \hphantom{f_{ij}^{0,v} } \forall v\in V_r \label{rsstp:dflow:first:stage:variables}\\ \sum_{(h,i) \in A} f_{hi}^{0,v} - \sum_{(i,j) \in A} f_{ij}^{0,v} &= \left.\begin{cases} w_v^0, & \text{if } i = r\\ -w_v^0, & \text{if } i = v\\ 0, & \text{otherwise} \end{cases} \right\} \, \forall v\in V_r, \forall i \in V \label{rsstp:dflow:first:stage:low:conservation} \end{align} Again, we use capacity constraints \eqref{rsstp:dflow:capacity:constraint:first:second:stage} to ensure that each first-stage arc is also used in each scenario. These constraints link the first and second stage, and they are the only constraints using both first- and second-stage variables.
\begin{align} y_{ij}^k &\ge z_{ij}^0 \qquad \forall k \in\cK, \forall (i,j)\in A \label{rsstp:dflow:capacity:constraint:first:second:stage} \end{align} The remaining constraints are identical to the constraints in the semi-directed flow formulation. They ensure that all arcs used for routing flow are also paid for in the objective function and that the constructed flow is valid. \begin{align} y_{ij}^k &\ge f_{ij}^{k,t} \qquad \forall k \in\cK, \forall (i,j)\in A, \forall t\in T^k_r \label{rsstp:dflow:capacity:constraint:scenario}\\ \sum_{(h,i) \in A} f_{hi}^{k,t} - \sum_{(i, j) \in A} f_{ij}^{k,t} &= \left.\begin{cases} 1, & \text{if } i = r\\ -1, & \text{if } i = t\\ 0, & \text{otherwise} \end{cases} \right\} \begin{array}{l} \forall k\in\cK, \forall t\in T^k_r,\\ \forall i \in V \end{array} \label{rsstp:dflow:flow:conservation} \end{align} Finally, the directed flow-based formulation reads as follows: \begin{align} \@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[df]{rSSTP}\, \min\, \sum_{a\in A}c_{a}^0 z_{a}^0 +& \sum_{k\in\cK} p^k \sum_{a\in A}c_{a}^k (y_{a}^k - z_{a}^0) \nonumber \\ \text{s.t. } (z^0, y^{1\dots K}, w^0, f) \text{ satisfies }&\text{\eqref{rsstp:dflow:f0:in:01}--\eqref{rsstp:dflow:flow:conservation}}\nonumber \end{align} \begin{observation} Formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[df]{rSSTP}$ models the rooted stochastic Steiner tree problem correctly. \end{observation} \subsection{Additional constraints} \label{section:additional:constraints} It is possible to expand the formulations for the (r)SSTP by further inequalities that are valid for the deterministic STP, as described, e.g., by \cite{KochMartinSTP} and \cite{PolzinDaneshmand2001}. Although the following constraints do not strengthen the models, they are all valid for any scenario $k\in\cK$.
Here, we use variables $y^k$, but the constraints can be used for $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ and $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ as well. \begin{alignat}{2} y^k_{ij} + y^k_{ji} &\le 1 \quad\quad&&\forall e=\{i,j\}\in E \label{SSTP:sdcut:SEC2}\\ y^k(\delta^-(r^k)) &= 0 \quad\quad&&\label{SSTP:indegree:root:0}\\ y^k(\delta^+(r^k)) &\ge 1 && \label{SSTP:sdcut:outdegree:root} \\ y^k(\delta^-(v)) &= 1 &&\forall v\in T^k_r \label{SSTP:sdcut:indegree:terminal} \\ y^k(\delta^-(v)) &\le 1 &&\forall v\in V_r^k\without T_r^k \label{SSTP:indegree:nonterminal:leq:1} \end{alignat} By straightforward modifications, constraints \eqref{SSTP:sdcut:SEC2}, \eqref{SSTP:indegree:root:0}, and \eqref{SSTP:indegree:nonterminal:leq:1} are also valid for the first stage of the rSSTP models. \paragraph{Flow-balance constraints.} These constraints are derived from the flow-conservation condition and relate the in- and outdegree of non-terminal vertices. E.g., \cite{PolzinDaneshmand2001} showed that constraints \eqref{flow:balance:constraints} strengthen the directed cut- and flow-based formulations of the STP. \begin{align} z(\delta^+(v)) \ge z(\delta^-(v)) \hspace{25pt}\forall v \in V\wo T \label{flow:balance:constraints} \end{align} However, these constraints are not valid for the stochastic models. Since first-stage solutions might contain parts that are irrelevant w.r.t.\ one particular scenario $k$, i.e., parts that can be pruned without violating the feasibility of the solution in scenario $k$, these constraints would enforce the selection of unnecessary arcs. Notice that this holds for both the semi-directed and directed formulations (in the first and the second stage). \section{Strength of the formulations} \label{section:sstp:ip:formulations:strength} This section provides a comparison of the introduced formulations from a polyhedral point of view.
In the first part we consider the undirected (Section \ref{section:sstp:ip:formulations:strength:undirected}) and semi-directed formulations (Section \ref{section:sstp:ip:formulations:strength:semi:directed}) for the SSTP, and Section \ref{sstp:ip:formulations:strength:directed} focuses on the directed models for the rooted version. \subsection{Undirected formulations for the SSTP} \label{section:sstp:ip:formulations:strength:undirected} We start by comparing the undirected formulations based on cuts and flows, respectively. The related polytopes of the relaxed formulations are denoted by \begin{align} \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uc]{SSTP} &= \left\{x^{0\dots K} \in [0,1]^{|E|\cdot(K+1)} \midresize x^{0\dots K} \text{ satisfies \eqref{SSTP:ucut:undirected:cuts}} \right\} \nonumber\\ \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uf]{SSTP} &= \left\{(x^{0\dots K}, f) \in [0,1]^{|E|\cdot(K+1)}\times [0,1]^{|A|\cdot t_r^*} \right|\nonumber\\ &\hphantom{= \left\{x^{0\dots K} \in [0,1]^{|E|\cdot(K+1)} \right|}\left.\,(x^{0\dots K}, f) \text{ satisfies \eqref{sstp:uflow:capacity:constraint:scenario}, \eqref{sstp:uflow:flow:conservation}} \right\}. \nonumber \end{align} In order to compare the formulations we project the variables of the flow formulation onto the space of undirected edge variables, i.e., \[ \projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uf]{SSTP}} = \left\{x^{0\dots K} \midresize \exists f\colon (x^{0\dots K}, f) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uf]{SSTP}\right\}. \nonumber \] As for the undirected cut-based and flow-based formulations of the deterministic STP, the two formulations for the SSTP are equally strong. \begin{lemm} $\projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uf]{SSTP}} = \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uc]{SSTP}$.
\end{lemm} \begin{proof} This lemma follows directly from the classical max-flow/min-cut theorem, applied to each scenario. If there is a flow of one unit from the root node to each terminal, then every cut separating the terminal from the root node is satisfied. On the other hand, if every undirected cut is satisfied, it is easy to find a feasible flow from the root node to every terminal using exactly those edges. In both models either first- or second-stage edges can be used. \qed \end{proof} \subsection{Semi-directed formulations for the SSTP} \label{section:sstp:ip:formulations:strength:semi:directed} Before comparing the formulations we expand the semi-directed cut formulations by \emph{subtour elimination constraints of size two (SEC2)} in the second stage; constraints \eqref{sstp:sdcut1:additional:sec2} are added to $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ and \eqref{sstp:sdcut2:additional:sec2} to $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$, respectively: \begin{align} z_{ij}^k + z_{ji}^k &\le 1\hspace{25pt} \forall k\in\cK, \forall (i,j)\in A \label{sstp:sdcut1:additional:sec2} \\ y_{ij}^k + y_{ji}^k &\le 1\hspace{25pt} \forall k\in\cK, \forall (i,j)\in A \label{sstp:sdcut2:additional:sec2} \end{align} We introduce the additional constraints to make the comparison of the polytopes easier. Although these constraints cut off parts of the polytopes of the LP relaxations, they are not binding, i.e., any optimum solution satisfies the SEC2s anyway.
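The max-flow/min-cut correspondence used in the proof above is also the standard way to separate the exponentially many cut constraints within a branch-and-cut framework. The following plain-Python sketch (an Edmonds--Karp style routine with a hypothetical capacity dictionary, not tied to the formulations' notation) computes a maximum flow and the source side of a minimum cut; if, for some scenario and terminal, the flow value under capacities $\tilde x_e^0 + \tilde x_e^k$ is below 1, the returned vertex set induces a violated cut:

```python
from collections import deque

def max_flow_min_cut(n, capacity, s, t):
    """Edmonds-Karp max flow on vertices 0..n-1.

    capacity: dict {(u, v): cap}. Returns (flow value, source side S
    of a minimum s-t cut, read off from the final residual graph)."""
    cap = dict(capacity)
    adj = [[] for _ in range(n)]
    for (u, v) in capacity:
        adj[u].append(v)
        adj[v].append(u)          # residual arcs
        cap.setdefault((v, u), 0)
    flow = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if parent[v] == -1 and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break                 # t unreachable: flow is maximum
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for (u, v) in path:       # augment along the path
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
    S = {v for v in range(n) if parent[v] != -1}
    return flow, S

# Separation example: fractional capacities on a path r=0 -> 1 -> t=2.
value, S = max_flow_min_cut(3, {(0, 1): 0.5, (1, 2): 0.3}, 0, 2)
assert value < 1  # the cut between S and its complement is violated
```

In a cutting-plane loop, one would run this check for each scenario and each terminal of that scenario, add the cut induced by $S$ whenever the value is below 1, and re-solve the LP relaxation.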
Then, the polytopes of the relaxed cut formulations are denoted by \begin{align} \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP} &= \left\{(x^0, z^{1\dots K}) \in [0,1]^{|E|} \times [0,1]^{|A|\cdot K} \right| \nonumber\\ &\hspace{60pt}\left.(x^0, z^{1\dots K}) \text{ satisfies \eqref{SSTP:sdcut1:semi:directed:cuts}, \eqref{sstp:sdcut1:additional:sec2}} \right\} \nonumber\\ \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc2]{SSTP} &= \left\{(x^0, y^{1\dots K}) \in [0,1]^{|E|} \times [0,1]^{|A|\cdot K} \right| \nonumber\\ &\hspace{60pt}\left.(x^0, y^{1\dots K}) \text{ satisfies \eqref{SSTP:sdcut2:directed:cuts}, \eqref{SSTP:sdcut2:capacity:first:second:stage}, \eqref{sstp:sdcut2:additional:sec2}} \right\} \nonumber \end{align} Again, we consider the projections onto the space of undirected edge variables $x^{0\dots K}$: \begin{align} \projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}} &= \left\{x^{0\dots K} \right| \exists z^{1\dots K}\colon (x^0, z^{1\dots K}) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}, \nonumber\\ &\hphantom{= \left\{x^{0\dots K} \right|} \left.\, x_e^k = z_{ij}^k + z_{ji}^k, \forall k\in\cK, \forall e=\{i,j\}\in E \right\} \nonumber\\ \projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc2]{SSTP}} &= \left\{x^{0\dots K} \right| \exists y^{1\dots K}\colon (x^0, y^{1\dots K}) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc2]{SSTP}, \nonumber\\ &\hphantom{= \left\{x^{0\dots K} \right| } \left.\, x_e^k = y_{ij}^k + y_{ji}^k - x_e^0, \forall k\in\cK, \forall e=\{i,j\}\in E \right\} \nonumber \end{align} \begin{figure} \caption{ Hierarchy of undirected and semi-directed formulations for the SSTP. The dashed line and the additional clusters specify that formulations are equivalent. An arrow indicates that the target cluster contains stronger formulations than the formulations in the source cluster. 
} \label{figure:sstp:semi:directed:formulations:strength} \end{figure} We start by comparing the undirected and the first semi-directed cut formulation. Not surprisingly, the additional directed parts of the formulation make it stronger. \begin{theorem} \label{lemma:sstp:u:vs:sd1} $\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[uc]{SSTP} \supsetneq \projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}}$, i.e., the semi-directed cut-based formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ is stronger than the undirected formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[uc]{SSTP}$. \end{theorem} \begin{proof} Let $(\tilde x^0, \tilde z^{1\dots K}) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}$ and set $\hat x^0 := \tilde x^0, \hat x_e^k := \tilde z_{ij}^k + \tilde z_{ji}^k, \forall k\in\cK, \forall e=\{i,j\} \in E$. We obtain a solution $\hat x^{0\dots K}$ for $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[uc]{SSTP}$; we now establish its validity. The bounds on the first-stage variables $\hat x^0$ are obviously satisfied. Moreover, clearly $\hat x_e^k \ge 0$, and due to constraints \eqref{sstp:sdcut1:additional:sec2}: $\hat x_e^k = \tilde z_{ij}^k + \tilde z_{ji}^k \le 1$. Hence, $\hat x^{0\dots K} \in [0, 1]^{|E|\cdot (K+1)}$. We now show that the undirected cuts \eqref{SSTP:ucut:undirected:cuts} are also satisfied by $\hat x^{0\dots K}$. Let $S\subseteq V$ represent a feasible cut set in scenario $k\in\cK$, i.e., $\emptyset \not=S\cap T^k\not= T^k$. Since cuts in $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ are semi-directed and ingoing we assume w.l.o.g.\ that $r^k\not\in S$. Otherwise one can simply consider the complementary set $V\wo S$, since $\delta(S) = \delta(V\wo S)$ and $r^k\not\in (V\wo S)$. 
\begin{align} (\hat x^0 + \hat x^k)(\delta(S)) &=\sum_{e\in\delta(S)} \hat x_e^0 + \hat x_e^k \nonumber\\ &=\sum_{e\in\delta(S)} \tilde x_e^0 + \sum_{\{i,j\}\in\delta(S)} \tilde z_{ij}^k + \tilde z_{ji}^k \nonumber\\ &\ge\tilde x^0(\delta(S)) + \tilde z^k(\delta^-(S)) \ge 1\nonumber \end{align} The last inequality holds since $(\tilde x^0, \tilde z^{1\dots K})$ satisfies constraint \eqref{SSTP:sdcut1:semi:directed:cuts} for cut set $S$. Intuitively, the strict inequality of the formulations results from the directed arcs in the scenarios and the strength of the directed cut formulation for the deterministic STP. Figure \ref{figure:sstp:u:vs:sd1} gives a small example with this property where everything is purchased in the second stage and the relaxed semi-directed model gives a better lower bound. \qed \end{proof} \begin{figure} \caption{ Example where the LP relaxation of the semi-directed formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ gives a better lower bound than the LP relaxation of the undirected formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[uc]{SSTP}$. } \label{figure:sstp:u:vs:sd1} \end{figure} The following theorem shows that formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ is stronger than formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$. \begin{theorem} \label{lemma:sstp:sd1:vs:sd2} $\projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}} \supsetneq \projection{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc2]{SSTP}}$. \end{theorem} \begin{proof} Let $(\tilde x^0, \tilde y^{1\dots K}) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc2]{SSTP}$ and set $\hat x_e^0 := \tilde x_e^0, \forall e\in E$, and $\hat x_e^k := \tilde y_{ij}^k + \tilde y_{ji}^k - \tilde x_e^0, \forall k\in \cK, \forall e=\{i,j\}\in E$. 
We argue that $\hat x^{0\dots K} \in \projectionS{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}}$ by showing that there exists a variable assignment $\hat z^{1\dots K} \in [0,1]^{K\cdot |A|}$ such that $(\hat x^0, \hat z^{1\dots K}) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}$. This solution is obtained by transforming $(\tilde x^0, \tilde y^{1\dots K})$ into a feasible $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$-solution. To this end, we use the parameter $\alpha_{ij}^k \in [0,1], \forall k \in\cK, \forall (i,j)\in A$: \[ \alpha_{ij}^k := \begin{cases} \frac{\tilde y_{ij}^k}{\tilde y_{ij}^k + \tilde y_{ji}^k} & \text{if}\ \tilde y_{ij}^k + \tilde y_{ji}^k > 0 \\ 0 & \text{otherwise}. \end{cases} \] This parameter allows us to split up the first-stage values among the two corresponding directed arcs, independently for each scenario. With $\alpha$ at hand the directed arc variables are set to $\hat z_{ij}^k := \tilde y_{ij}^k - \alpha_{ij}^k \tilde x_{e}^0, \forall k \in\cK, \forall (i,j)\in A$, with $e =\{i,j\} \in E$. First, we show that this is a valid projection. Notice that $\forall e=\{i,j\}\in E, \forall k\in\cK$: $\alpha_{ij}^k + \alpha_{ji}^k \in\{0,1\}$; this value is 1 if $\tilde y_{ij}^k + \tilde y_{ji}^k > 0$ and 0 otherwise. Now, consider edge $e = \{i,j\}\in E$ in scenario $k\in\cK$ with $\tilde y_{ij}^k + \tilde y_{ji}^k > 0$. Then, $\hat z_{ij}^k + \hat z_{ji}^k = \tilde y_{ij}^k - \alpha_{ij}^k \tilde x_e^0 + \tilde y_{ji}^k - \alpha_{ji}^k \tilde x_e^0 = \tilde y_{ij}^k + \tilde y_{ji}^k - \tilde x_e^0$. In case $\alpha_{ij}^k = \alpha_{ji}^k = 0$ we have $\tilde y_{ij}^k + \tilde y_{ji}^k = 0$, and the capacity constraints \eqref{SSTP:sdcut2:capacity:first:second:stage}, i.e., $y_{ij}^k + y_{ji}^k \ge x_e^0$, imply $\tilde x_e^0 = 0$. Hence, it always holds that $\hat z_{ij}^k + \hat z_{ji}^k = \tilde y_{ij}^k + \tilde y_{ji}^k - \tilde x_e^0, \forall k\in\cK, \forall e=\{i,j\}\in E$. 
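For illustration (a numerical sanity check we add here; it is not part of the formal argument), consider an edge $e=\{i,j\}$ in some scenario $k$ with $\tilde x_e^0 = 0.4$, $\tilde y_{ij}^k = 0.5$ and $\tilde y_{ji}^k = 0.3$; the capacity constraint $\tilde y_{ij}^k + \tilde y_{ji}^k \ge \tilde x_e^0$ holds. Then $\alpha_{ij}^k = 0.5/0.8 = 5/8$, $\alpha_{ji}^k = 3/8$, and
\[ \hat z_{ij}^k = 0.5 - \tfrac{5}{8}\cdot 0.4 = 0.25, \qquad \hat z_{ji}^k = 0.3 - \tfrac{3}{8}\cdot 0.4 = 0.15, \]
so both values are non-negative and indeed $\hat z_{ij}^k + \hat z_{ji}^k = 0.4 = \tilde y_{ij}^k + \tilde y_{ji}^k - \tilde x_e^0$.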
Now we are able to prove $\hat x^{0\dots K} \in \projectionS{x^{0\dots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc1]{SSTP}}$. Due to the preceding discussion it is clear that the subtour elimination constraints \eqref{sstp:sdcut1:additional:sec2} are satisfied. Moreover, it obviously holds $\hat x_e^0 \in [0,1], \forall e\in E$. Next, we consider the bounds for the directed arc variables $\hat z_{ij}^k, \forall k\in\cK, \forall (i,j)\in A$. $\hat z_{ij}^k \le 1$ holds since $\hat z_{ij}^k \le \tilde y_{ij}^k \le 1$. Non-negativity can be seen by considering two cases. (i) If $\alpha_{ij}^k > 0$: \[ \hat z_{ij}^k = \tilde y_{ij}^k - \alpha_{ij}^k \tilde x_e^0 = \tilde y^k_{ij} - \tilde x_e^0 \frac{\tilde y^k_{ij}}{\tilde y^k_{ij} + \tilde y^k_{ji}} = \tilde y^k_{ij} \left( 1- \overbrace{\frac{\tilde x_e^0}{\tilde y^k_{ij} + \tilde y^k_{ji}} }^{\le1}\right) \ge 0. \] Inequality $\frac{\tilde x_e^0}{\tilde y^k_{ij} + \tilde y^k_{ji}} \le1$ is true due to capacity constraints \eqref{SSTP:sdcut2:capacity:first:second:stage}. (ii) If $\alpha_{ij}^k = 0$ the non-negativity follows directly since $\hat z^k_{ij} = \tilde y^k_{ij} \ge 0$. It remains to show that a valid cut $S\subseteq V_r$ in scenario $k\in\cK$ is satisfied by $(\hat x^0, \hat z^{1\dots K})$: \begin{align} (\hat x^0 + \hat z^k)(\delta^-(S)) = \sum_{(i,j)\in \delta^-(S)} \hat x_{\{i,j\}}^0 + \hat z_{ij}^k &= \sum_{(i,j)\in \delta^-(S)} \tilde x_{\{i,j\}}^0 + \tilde y_{ij}^k - \alpha_{ij}^k \tilde x_{\{i,j\}}^0 \nonumber\\ &= \sum_{(i,j)\in \delta^-(S)} (1-\alpha_{ij}^k) \tilde x_{\{i,j\}}^0 + \tilde y_{ij}^k \nonumber\\ &\ge \sum_{(i,j)\in \delta^-(S)} \tilde y_{ij}^k \ge 1 \nonumber \end{align} The last inequality is true due to the validity of solution $\tilde y^k$ for scenario $k$ and constraints \eqref{SSTP:sdcut2:directed:cuts}. This completes the ``$\supseteq$''-part of the proof. 
An example showing the strict inequality can be constructed by exploiting the different \emph{meaning} of the first-stage variables. In formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ a first-stage edge $e=\{i,j\}$ contributes its value to cuts in both directions, i.e., $\delta^-(S)$ and $\delta^+(S)$. In contrast, a feasible solution for formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ has to find an orientation for this edge and distribute its value to the related arcs. Loosely speaking, the same edge contributes less in the second semi-directed formulation. Hence, the same example from Figure \ref{figure:sstp:u:vs:sd1} can be utilized to show the strict inequality; one simply sets the costs of all first-stage edges to 1 and of all scenario edges to 10. There is still one scenario with all three vertices being terminals. Then, formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$ selects all three edges at $0.5$ in the first stage, satisfying all cuts in the scenario. On the other hand, this solution is not valid for $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$ and there is none with overall cost $1.5$. \qed \end{proof} To complete the hierarchy of SSTP formulations given in Figure \ref{figure:sstp:semi:directed:formulations:strength} it remains to show the equivalence of the semi-directed flow and cut-based formulations. To give the formal proof we denote the polytope of the relaxed flow formulation and the projection onto the same variable space as follows. 
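To make the separating example concrete (our own computation, under the cost assignment just described), let the triangle have vertex set $\{r^1, u, v\}$ with all three vertices being terminals in the single scenario. The fractional solution $\tilde x_e^0 = 0.5$ for all $e\in E$ and $\tilde z^1 = 0$ is feasible for the LP relaxation of $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc1]{SSTP}$: every valid cut set $S$ with $r^1\notin S$ satisfies
\[ \tilde x^0(\delta(S)) + \tilde z^1(\delta^-(S)) = 2\cdot 0.5 = 1, \]
since in a triangle every cut $\delta(S)$ contains exactly two edges. The cost of this solution is $3\cdot 0.5\cdot 1 = 1.5$, while no solution of the same cost is feasible for $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[sdc2]{SSTP}$, which has to orient the first-stage values.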
\begin{align} \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdf]{SSTP} &= \left\{(x^0, y^{1\dots K}, f) \in [0,1]^{|E|} \times [0,1]^{|A|\cdot K} \times [0,1]^{|A|\cdot t_r^*} \right| \nonumber\\ &\hphantom{= \left\{x^{0\dots K} \right|\ } \left.\, (x^0, y^{1\dots K}, f) \text{ satisfies \eqref{sstp:uflow:flow:conservation}, \eqref{sstp:sdflow:capacity}, \eqref{sstp:sdflow:capacity:first:second:stage}} \right\} \nonumber\\ \projection{(x^0, y^{1\dots K})}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdf]{SSTP}} &= \left\{(x^0, y^{1\dots K})\ \middle|\, \exists f\colon (x^0, y^{1\dots K}, f) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdf]{SSTP}\right\} \nonumber \end{align} The stronger semi-directed cut and flow formulations are equivalent. This result is mainly a consequence of the relationship of the deterministic STP formulations. \begin{lemm} $\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdc2]{SSTP} = \projection{(x^0, y^{1\dots K})}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[sdf]{SSTP}}$. \end{lemm} \begin{proof} Restricting the models to one particular scenario, i.e., for one $k\in\cK$: $y^k$ or $(y^k, f^k)$, respectively, results in the related cut- and flow-based formulations for the deterministic STP. Since the formulations for the deterministic STP are equivalent and the remaining parts of the stochastic models are identical the lemma follows. 
\qed \end{proof} \subsection{Directed formulations for the rSSTP} \label{sstp:ip:formulations:strength:directed} To make the comparison of the polytopes easier we add the following constraints to the directed cut formulations: $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ is expanded by both constraints and $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$ only by the second type of constraints \eqref{rsstp:dcuts:z0S:leq1}: \begin{align} z_{ij}^0 + z_{ij}^k&\le 1\hspace{25pt} \forall k\in\cK, \forall (i,j)\in A \label{rsstp:dcut1:sec2:ij:both:stages} \\ z^0(\delta^-(v)) &\le 1\hspace{25pt} \forall v\in V_r \label{rsstp:dcuts:z0S:leq1} \end{align} As with the SEC2's added to the semi-directed formulations, constraints \eqref{rsstp:dcut1:sec2:ij:both:stages} are obviously redundant, since the right-hand side of the directed cuts is 1. The same holds for \eqref{rsstp:dcuts:z0S:leq1}. The polytopes of the relaxed formulations are denoted as follows. \begin{align} \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc1]{rSSTP} &= \left\{z^{0\dots K} \in [0,1]^{|A|\cdot(K+1)} \midresize z^{0\dots K} \text{ satisfies \eqref{rSSTP:dcut1:cuts:first:stage}, \eqref{rSSTP:dcut1:scenario:cuts}, \eqref{rsstp:dcut1:sec2:ij:both:stages}, \eqref{rsstp:dcuts:z0S:leq1}} \right\} \nonumber \\ \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP} &= \left\{(z^0, y^{1\dots K}) \in [0,1]^{|A|\cdot(K+1)} \midresize (z^0, y^{1\dots K}) \text{ satisfies \eqref{rSSTP:dcut2:cuts:first:stage}--\eqref{rSSTP:dcut2:capacity:first:second:stage}, \eqref{rsstp:dcuts:z0S:leq1}} \right\} \nonumber \end{align} We use a projection for the second formulation to compare both models: \begin{align} \projection{z^{0\ldots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP}} = \left\{(z^0, z^{1\dots K}) \right| &\exists y^{1\dots K}\colon (z^0, y^{1\dots K}) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP}, \nonumber\\ &\left.z_{ij}^k = y_{ij}^k - z_{ij}^0, \forall k\in\cK, 
\forall (i,j)\in A \right\} \nonumber \end{align} Both directed cut-based formulations are equivalent: \begin{theorem} $\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc1]{rSSTP} = \projectionS{z^{0\ldots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP}}$. \end{theorem} \begin{proof} ``$\subseteq$'': Let $\tilde z^{0\dots K} \in\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc1]{rSSTP}$. We show that $(\hat z^0, \hat y^{1\dots K}) \in\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP}$ with $\hat z^0 := \tilde z^0, \hat y^k := \tilde z^k + \tilde z^0, \forall k\in\cK$. First, we consider the variable bounds. Since $\hat z^0 = \tilde z^0$ we have $\hat z^0 \in[0,1]^{|A|}$. Moreover, $\hat y^k$ is obviously non-negative and due to constraints \eqref{rsstp:dcut1:sec2:ij:both:stages} at most 1: $\hat y_{ij}^k = \tilde z_{ij}^k + \tilde z_{ij}^0 \le 1, \forall (i,j)\in A, \forall k\in\cK$. Second, the directed cuts in the first stage, i.e., constraints \eqref{rSSTP:dcut2:cuts:first:stage}, and constraints \eqref{rsstp:dcuts:z0S:leq1}, are identical in both formulations and hence, they are satisfied. This is also true for the capacity constraints \eqref{rSSTP:dcut2:capacity:first:second:stage} since $\hat y_{ij}^k = \tilde z_{ij}^k + \tilde z_{ij}^0 \ge \hat z_{ij}^0, \forall (i,j)\in A, \forall k\in\cK$. Third, consider a valid cut set $S\subseteq V_r$ in scenario $k\in\cK$. Since $\tilde z^{0\dots K}$ is a valid solution for $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc1]{rSSTP}$ it satisfies the directed cuts \eqref{rSSTP:dcut1:scenario:cuts} and leads to $\hat y^k(\delta^-(S)) = (\tilde z^k + \tilde z^0)(\delta^-(S)) \ge 1$. Hence, the directed cuts \eqref{rSSTP:dcut2:scenario:cuts} are satisfied by $\hat y^{1\dots K}$. ``$\supseteq$'': The opposite direction is similar. Let $(\tilde z^0, \tilde y^{1\dots K})\in \projectionS{z^{0\ldots K}}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP}}$. 
We set $\hat z^0 := \tilde z^0, \hat z^k := \tilde y^k - \tilde z^0, \forall k\in\cK$, and show that $\hat z^{0\dots K}\in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc1]{rSSTP}$. Again, directed cuts in the first stage are obviously satisfied and the variable bounds trivially hold for the first-stage variables. For the second-stage variables we have $\hat z^k \le \tilde y^k \le \boldsymbol{1}$ and $\hat z^k = \tilde y^k - \tilde z^0 \ge \boldsymbol{0}, \forall k\in\cK$, due to constraints \eqref{rSSTP:dcut2:capacity:first:second:stage}. The added constraints \eqref{rsstp:dcut1:sec2:ij:both:stages} are satisfied since $\hat z_{ij}^0 + \hat z_{ij}^k = \tilde z_{ij}^0 + \tilde y_{ij}^k - \tilde z_{ij}^0 = \tilde y_{ij}^k \le 1, \forall (i,j)\in A, \forall k\in\cK$, and last but not least, a valid cut set $S\subseteq V_r$ in scenario $k\in\cK$ is satisfied since $(\hat z^0 + \hat z^k)(\delta^-(S)) = (\tilde z^0 + \tilde y^k - \tilde z^0)(\delta^-(S)) = \tilde y^k(\delta^-(S)) \ge 1$. \qed \end{proof} We close the discussion by comparing the directed flow formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[df]{rSSTP}$ to the second directed cut formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[dc2]{rSSTP}$. \begin{align} \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[df]{rSSTP} = &\left\{(z^0, y^{1\dots K}, w^0, f) \in [0,1]^{|A|\cdot(K+1)} \times [0,1]^{|V_r|} \times [0,1]^{|A|(|V_r| + t_r^*)} \right| \nonumber\\ &\left. 
\hphantom{(z^0, y^{1\dots K}, w^0, f) \in [0,1]^{|A|\cdot(K+1)}} (z^0, y^{1\dots K}, w^0, f) \text{ satisfies \eqref{rsstp:dflow:first:stage:capacity:arcs:flow}--\eqref{rsstp:dflow:flow:conservation}} \right\} \nonumber \end{align} \begin{align} \projection{(z^0, y^{1\dots K})}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[df]{rSSTP}} &= \left\{(z^0, y^{1\dots K})\ \middle|\, \exists (w^0, f)\colon (z^0, y^{1\dots K}, w^0, f) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[df]{rSSTP} \right\} \nonumber \end{align} \begin{theorem} $\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP} = \projection{(z^0, y^{1\dots K})}{\@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[df]{rSSTP}}$ \end{theorem} \begin{proof} ``$\subseteq$'': Let $(\tilde z^0, \tilde y^{1\ldots K})\in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[dc2]{rSSTP}$. We use $(\hat z^0, \hat y^{1\ldots K}) := (\tilde z^0, \tilde y^{1\ldots K})$ to construct a solution $(\hat z^0, \hat y^{1\dots K}, \hat w^0, \hat f) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[df]{rSSTP}$. First, constraints \eqref{rsstp:dflow:capacity:constraint:first:second:stage} are contained in both models and hence satisfied for $(\hat z^0, \hat y^{1\ldots K})$. Second, since $w^0$ gives the connected vertices in the first stage we set $\hat w_v^0 := \tilde z^0(\delta^-(v)), \forall v\in V_r$; due to \eqref{rsstp:dcuts:z0S:leq1} we have $\tilde z^0(\delta^-(v))\in[0,1]$. Hence, bounds on $\hat w^0$ are satisfied and moreover, \eqref{rsstp:dflow:first:stage:variables} is satisfied with equality. Third, the remaining part of formulation $\@ifnextchar[{\@modelWithTwo}{\@modelWithOne}[df]{rSSTP}$ concerns the construction of the flow: we set the flow variables $\hat f$ such that in the first stage a flow of value $\hat w_v^0$ is sent from the root to every vertex $v$ and in every scenario a flow of value 1 from the root to every terminal. 
The feasibility and correctness follow again from ``max flow = min cut''. ``$\supseteq$'': Let $(\tilde z^0, \tilde y^{1\dots K}, \tilde w^0, \tilde f) \in \@ifnextchar[{\@polytopeWithTwo}{\@polytopeWithOne}[df]{rSSTP}$. Again, set $(\hat z^0, \hat y^{1\ldots K}) := (\tilde z^0, \tilde y^{1\ldots K})$. First, \eqref{rsstp:dcuts:z0S:leq1} is satisfied for all vertices $v\in V_r$ since $\hat z^0(\delta^-(v)) \le \tilde w_v^0$ due to \eqref{rsstp:dflow:first:stage:variables}. Second, due to \eqref{rsstp:dflow:first:stage:capacity:arcs:flow}--\eqref{rsstp:dflow:first:stage:flow:conservation} there is a flow of value $\tilde w^0_v$ in the first stage from the root to a vertex $v\in V_r$ with $\tilde w^0_v > 0$ and moreover, the arcs used for routing flow are selected by $\tilde z^0$ through \eqref{rsstp:dflow:first:stage:capacity:arcs:flow}. Hence, again due to ``max flow = min cut'', the directed cuts \eqref{rSSTP:dcut2:cuts:first:stage} are satisfied by $\hat z^0$ for all $v\in V_r$. The same holds for the directed cuts in the scenarios \eqref{rSSTP:dcut2:scenario:cuts} and variables $\hat y^k, \forall k\in\cK$. Last but not least, \eqref{rSSTP:dcut2:capacity:first:second:stage} is satisfied since the constraints are contained in both models. \qed \end{proof} \end{document}
\begin{document} \author{Dmitri V. Alekseevsky and Liana David} \title{A note about invariant SKT structures and generalized K\"ahler structures on flag manifolds} \maketitle {\bf Abstract:} We prove that any invariant strong K\"ahler structure with torsion (SKT structure) on a flag manifold $M = G/K$ of a semisimple compact Lie group $G$ is K\"ahler. As an application we describe invariant generalized K\"ahler structures on $M$.\\ {\it 2010 Mathematics Subject Classification:} 53D18. \section{Introduction} A Hermitian manifold $(M,g,J)$ admits a unique connection $\nabla^B$ (called the Bismut connection) which preserves the metric $g$ and the complex structure $J$ and has a skew-symmetric torsion tensor $c:= g(\cdot, T^{B}(\cdot,\cdot))$, where $T^{B}$ is the torsion of $\nabla^B$. The 3-form $c$ can be expressed in terms of the K\"ahler form $\omega = g\circ J$ by $$ c = Jd\omega := d\omega(J\cdot, J \cdot,J \cdot).$$ The manifold $(M,g,J)$ is called strong K\"ahler with torsion (SKT) if the torsion 3-form $c$ is closed, or, equivalently, $\partial \bar \partial \omega =0$. SKT manifolds are a natural generalization of K\"ahler manifolds and many results from K\"ahler geometry can be generalized to SKT geometry, see e.g. \cite{enrietti-fino,fernandes-fino,fino-tomassini, fino-tomassini1}.\\ SKT geometry is also closely related to generalized K\"ahler geometry, which was recently introduced by N. Hitchin \cite{hitchin} and appeared before in physics as the geometry of the target space of $N=(2,2)$ supersymmetric nonlinear sigma models, see e.g. 
\cite{GHR,LRvUZ}.\\ A generalized K\"ahler structure on a manifold $M$ is a pair $(\mathcal{J}_1, \mathcal{J}_2)$ of commuting generalized complex structures such that the symmetric bilinear form $ -(\mathcal{J}_1 \circ \mathcal{J}_2 \cdot, \cdot)$ is positive definite, where $(\cdot , \cdot )$ is the standard scalar product of neutral signature of the generalized tangent bundle $\mathbb{T}M= TM\oplus T^{*}M.$ (For the definition and basic facts about generalized complex structures, see e.g. \cite{thesis}). It was shown by M. Gualtieri \cite{thesis,Gual} that a generalized K\"ahler structure on a manifold can be described in classical terms as a bi-Hermitian structure $(g,J_+,J_-,b)$ in the sense of \cite{GHR}, i.e. a pair $(g, J_+)$, $(g, J_-)$ of SKT structures with common metric $g$ and a 2-form $b$ (called in the physical literature the $b$-field) such that \begin{equation}\label{cond1} db = J_+d\omega_{+} = - J_{-}d\omega_{-}, \end{equation} where $\omega_{\pm} = g \circ J_{\pm}$ are K\"{a}hler forms. Let $G$ be a semisimple compact Lie group and $M= G/K$ a flag manifold, i.e. an adjoint orbit of $G$. In this note we describe invariant SKT structures and invariant generalized K\"ahler structures on $M$, as follows. \begin{thm}\label{pmain} Any invariant SKT structure $(g,J)$ on a flag manifold $M= G/ K$ is K\"ahler, i.e. the K\"ahler form $\omega = g\circ J$ is closed. \end{thm} The description of invariant K\"ahler structures on flag manifolds is well known, see e.g. \cite{dmitri,dmitri'} and Section \ref{hermitian} below. \begin{cor}\label{mainh} Let $(g,J_{+}, J_{-},b)$ be an invariant bi-Hermitian structure in the sense of \cite{GHR} on a flag manifold $M = G/ K$ (which defines a generalized K\"{a}hler structure $(\mathcal{J}_1, \mathcal{J}_2)$ via Gualtieri's correspondence). Then $g$ is an invariant K\"ahler metric, $J_+,J_-$ are two parallel invariant complex structures and $b$ is any closed invariant 2-form. 
If the group $G$ is simple, then $J_{+}= J_{-}$ or $J_{+}= - J_{-}.$ \end{cor} The note is organized as follows. In Section \ref{hermitian} we fix our conventions and we recall the basic facts on the geometry of flag manifolds and, in particular, the description of invariant Hermitian and K\"{a}hler structures \cite{dmitri,dmitri'}. With these preliminaries, Theorem \ref{pmain} and Corollary \ref{mainh} will be proved in Section \ref{genkahler}.\\ {\bf Acknowledgements.} D.V.A. thanks the University of Hamburg for hospitality and financial support. L.D. acknowledges financial support from a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0362. \section{Preliminary material}\label{hermitian} {\bf Basic facts about flag manifolds.} A flag manifold of a semisimple compact Lie group $G$ is an adjoint orbit $M = \mathrm{Ad}_G (h_0) \simeq G/K$ of an element $h_0 $ of the Lie algebra of $G$. We denote by $\mathfrak{g}$, $\mathfrak{k}$ the complex Lie algebras associated with the groups $G$, $K$ respectively, and we fix a Cartan subalgebra $\mathfrak{h}$ of $\mathfrak{k}$. We denote by $R, R_0$ the root systems of $\mathfrak{g}$, $\mathfrak{k}$ with respect to $\mathfrak{h}$ and we set $R':= R \setminus R_0$. We write the root space decomposition of $\mathfrak{g}$ as $$ \mathfrak{g} = \mathfrak{k}+ \mathfrak{m}=( \mathfrak{h} + \sum_{\alpha \in R_0}\mathfrak{g}_\alpha) + \sum_{\alpha \in R'}\mathfrak{g}_\alpha $$ and we identify the vector space $\mathfrak{m}= \sum_{\alpha \in R'}\mathfrak{g}_\alpha $ with the complexification of the tangent space $T_{h_0}M $. Let $E_{\alpha}\in \mathfrak{g}_{\alpha}$ be root vectors of a Weyl basis. 
Thus, $$ \langle E_{\alpha}, E_{-\alpha}\rangle =1,\quad\forall\alpha \in R $$ (where $\langle X, Y\rangle :=\mathrm{tr}\left(\mathrm{ad}_{X}\circ \mathrm{ad}_{Y}\right)$ denotes the Killing form of $\mathfrak{g}$) and \begin{equation}\label{weyl} N_{-\alpha ,-\beta} = - N_{\alpha \beta},\quad\forall \alpha ,\beta \in R \end{equation} where $N_{\alpha\beta}$ are the structure constants defined by \begin{equation}\label{nab} [E_{\alpha}, E_{\beta}]= N_{\alpha\beta} E_{\alpha +\beta},\quad\forall \alpha ,\beta \in R. \end{equation} The Lie algebra of $G$ is the fixed point set $\mathfrak{g}^\tau$ of the compact anti-involution $\tau$, which preserves the Cartan subalgebra $\mathfrak{h}$ and sends $E_{\alpha}$ to $- E_{-\alpha}$, for any $\alpha\in R.$ It is given by $$ \mathfrak{g}^\tau = i\mathfrak{h}_{\mathbb{R}} + \sum_{\alpha \in R}\mathrm{span} \{ E_{\alpha}-E_{-\alpha}, i(E_{\alpha}+ E_{-\alpha})\}$$ where $\mathfrak{h}_{\mathbb{R}} = \mathrm{span}\{ H_\alpha:=[E_{\alpha}, E_{-\alpha}],\alpha \in R\}$ is a real form of $\mathfrak{h}$. Note that $$ \beta(H_\alpha) = \beta([E_\alpha, E_{-\alpha}]) = \langle\beta, \alpha\rangle $$ where $\langle \cdot , \cdot \rangle$ denotes also the scalar product on $\mathfrak{h}^*$ induced by the Killing form. \subsection{Invariant complex structures on $M = G/K$} We fix a system $\Pi_0$ of simple roots of $R_0$ and we extend it to a system $\Pi = \Pi_0 \cup \Pi'$ of simple roots of $R$. We denote by $R_0^+, R^+ = R_0^+ \cup R'_+$ the corresponding systems of positive roots. A decomposition $$ \mathfrak{m} = \mathfrak{m}^+ + \mathfrak{m}^- = \sum_{\alpha \in R'_+}\mathfrak{g}_\alpha +\sum_{\alpha \in R'_+}\mathfrak{g}_{-\alpha} $$ defines an $\mathrm{Ad}_K$-invariant complex structure $J$ on $T_{h_0}M = \mathfrak{m}^\tau $, such that $$ J|_{\mathfrak{m}^{\pm}} = \pm i \mathrm{Id}. 
$$ We extend it to an invariant complex structure on $M$, also denoted by $J$. We will refer to $R'_{+}$ and $\Pi'$ as the set of positive roots, respectively the set of simple roots, of $J$. It is known that any invariant complex structure on $M$ can be obtained by this construction \cite{dmitri,wang}. \subsection{$T$-roots and isotropy decomposition} Let $\mathfrak{z} = i \mathfrak{t} \subset \mathfrak{h} $ be the center of the stability subalgebra $\mathfrak{k}^\tau$. The restrictions of the roots from $R' \subset \mathfrak{h}^*$ to the subspace $\mathfrak{t}$ are called $T$-roots. Denote by $$ \kappa : R' \to R_T, \, \alpha \mapsto \alpha|_{\mathfrak{t}}$$ the natural projection onto the set $R_T$ of $T$-roots. Note that $\alpha|_{\mathfrak{t}}=0$ for any $\alpha\in R_{0}.$ Any $T$-root $\xi$ defines an $\mathrm{Ad}_K$-invariant subspace $$ \mathfrak{m}_\xi := \sum_{\alpha \in R', \kappa(\alpha) =\xi} \mathfrak{g}_\alpha $$ of the complexified tangent space $\mathfrak{m}$ and $$ \mathfrak{m} = \sum_{\xi \in R_T} \mathfrak{m}_\xi $$ is a direct sum decomposition into non-equivalent irreducible $\mathrm{Ad}_K$-submodules. \subsection{Invariant metrics and Hermitian structures} We denote by $\omega_{\alpha}\in \mathfrak{g}^{*}$ the $1$-forms dual to $E_\alpha, \,\alpha \in R $, i.e. \begin{equation}\label{label-g} \omega_\alpha(E_\beta) = \delta_{\alpha \beta},\quad\omega_{\alpha}\vert_{\mathfrak{h}}=0. 
\end{equation} Any invariant Riemannian metric on $M$ is defined by an $\mathrm{Ad}_K$-invariant Euclidean metric $g$ on $\mathfrak{m}^\tau$, whose complex linear extension has the form \begin{equation}\label{adaus} g = - \frac{1}{2}\sum_{\alpha \in R'} g_\alpha \omega_{\alpha}\vee \omega_{-\alpha} \end{equation} where $\omega_{\alpha}\vee \omega_{-\alpha}= \omega_{\alpha}\otimes\omega_{-\alpha}+ \omega_{-\alpha} \otimes\omega_{\alpha}$ is the symmetric product and $g_\xi$, $\xi \in R_T$, is a system of positive constants associated to the $T$-roots, with $g_{\xi}= g_{-\xi}$ for any $\xi \in R_{T}$ and $g_\alpha := g_{\kappa(\alpha)}$. Note that the restriction of $g$ to $\mathfrak{m}_\xi$ is proportional to the restriction of the Killing form, with coefficient of proportionality $-g_\xi$.\\ Any such metric $g$ is Hermitian with respect to any invariant complex structure $J$ and the corresponding K\"ahler form is given by \begin{equation}\label{omega} \omega = - i \sum_{\alpha \in R'_+} g_\alpha \omega_{\alpha}\wedge \omega_{-\alpha} \end{equation} where $R'_{+}$ is the set of positive roots of $J$ and in our conventions $\omega_{\alpha}\wedge\omega_{-\alpha} = \omega_{\alpha}\otimes \omega_{-\alpha} -\omega_{-\alpha}\otimes\omega_{\alpha}.$ \subsection{Invariant K\"ahler structures} Any invariant symplectic form $\omega$ on $M$ compatible with an invariant complex structure $J$ as above (i.e. such that $g:= - \omega\circ J$ is positive definite) is associated to a 1-form $ \sigma \in \mathfrak{t}^* $ such that $\langle\sigma, \alpha_i \rangle >0$ for any $\alpha_i \in \Pi'$ (the set of simple roots of $J$). 
As a form on $\mathfrak{m}$, it is given by $$ \omega= \omega_{\sigma}:= -i \sum_{\alpha \in R'_+} \langle \sigma, \alpha\rangle \omega_{\alpha} \wedge \omega_{-\alpha} .$$ The associated K\"ahler metric $g$ has the coefficients $g_{\alpha}=g_{\kappa(\alpha)} = \langle\sigma, \alpha \rangle$, which, obviously, satisfy the following linearity property: \begin{equation} \label{linearitycondition} g_{\alpha + \beta} = g_{\alpha} + g_{\beta},\,\, \forall \alpha, \beta, \alpha + \beta \in R_+' \end{equation} In particular, if $\Pi' = \{ \alpha_1, \cdots , \alpha_m \}$ and $$R' \ni \alpha \equiv k_1 \alpha_1 + \cdots + k_m \alpha_m \, (\mathrm{mod} R_0) $$ then $$ g_\alpha = k_1 g_{\alpha_1} + \cdots + k_m g_{\alpha_m}. $$ To summarize, we get: \begin{prop}\label{linearityProp} \cite{dmitri} An invariant Hermitian structure $(g,J)$ on $M$ is K\"ahler if and only if the coefficients $g_{\alpha}$ associated to $g$ by (\ref{adaus}) satisfy the linearity property: if $\alpha, \beta, \alpha + \beta \in R'_+$, then $g_{\alpha + \beta} = g_\alpha + g_{\beta}$. Here $R'_{+}$ is the set of positive roots of $J$. \end{prop} \subsection{The formula for the exterior derivative} An invariant $k$-form on $M=G/K$ can be considered as an $\mathrm{Ad}_K$-invariant $k$-form $\omega$ on the Lie algebra $\mathfrak{g}$ such that $i_{\mathfrak{k}}(\omega )=0$. We recall the standard Koszul formula for the exterior differential $d\omega$: \begin{equation}\label{exteriord} ( d\omega )(X_{0}, \cdots , X_{k})=\sum_{i<j}(-1)^{i+j}\omega ( [X_{i}, X_{j}],X_{0},\cdots , \widehat{X_{i}}, \cdots , \widehat{X_{j}}, \cdots , X_{k}), \end{equation} for any $X_{i}\in \mathfrak{m} \subset \mathfrak{g}$. In (\ref{exteriord}) the hat means that the term is omitted. \section{Proof of our main results}\label{genkahler} We now prove Theorem \ref{pmain} and Corollary \ref{mainh}. 
We preserve the notations from the previous sections. Let $(g,J)$ be an invariant Hermitian structure on a flag manifold $M=G/K$. Let $g_{\alpha}= g_{\kappa(\alpha )}$ be the positive numbers associated to $g$ and $R'_{+}$, $\Pi'$ the set of positive (respectively, simple) roots of $J$, as before. Let $\omega = g \circ J$ be the K\"ahler form. To prove the theorem, we have to check that if the form $J d\omega$ is closed, then the $g_{\alpha}$ satisfy the linearity property (\ref{linearitycondition}). We define the sign $\epsilon_\alpha$ of a root $\alpha \in R'= R'_+ \cup(-R'_+)$ by $\epsilon_{\alpha} = \pm 1 $ if $\alpha \in \pm R'_{+}$. Note that $\epsilon_\alpha$ depends only on $\kappa(\alpha)$. Now we calculate $d\omega$ and $J d\omega $ on basis vectors, as follows: \begin{lem} \begin{enumerate} \item[i)] \begin{equation} \label{domega} d \omega (E_{\alpha}, E_\beta, E_\gamma) =0 \,\, {\mathrm{ if}} \,\, \alpha + \beta + \gamma \neq 0 \end{equation} and \begin{equation} \label{domega'} d \omega (E_{\alpha}, E_\beta, E_{-(\alpha + \beta)}) = -i N_{\alpha \beta} (\epsilon_\alpha g_\alpha + \epsilon_{\beta}g_\beta -\epsilon_{\alpha + \beta}g_{\alpha + \beta} ). \end{equation} \item[ii)] \begin{equation}\label{add-d} (J d \omega )(E_{\alpha}, E_\beta, E_{-(\alpha + \beta)})= N_{\alpha \beta} (\epsilon_\beta \epsilon_{\alpha + \beta} g_\alpha + \epsilon_{\alpha}\epsilon_{\alpha + \beta}g_\beta - \epsilon_{\alpha} \epsilon_{\beta}g_{\alpha + \beta} ). \end{equation} \end{enumerate} \end{lem} \begin{proof} Relation (\ref{domega}) follows from (\ref{omega}) and (\ref{exteriord}). 
Relation (\ref{domega'}) follows from (\ref{omega}), (\ref{exteriord}) and the following property of $N_{\alpha\beta}$ (see Chapter 5 of \cite{helgason}): \\ if $\alpha , \beta ,\gamma\in R$ are such that $\alpha +\beta +\gamma =0$, then \begin{equation}\label{suma} N_{\alpha\beta} = N_{\beta\gamma} = N_{\gamma\alpha}. \end{equation} Relation (\ref{add-d}) follows from (\ref{domega'}) and $JE_{\alpha}= i\epsilon_{\alpha}E_{\alpha}$ for any $\alpha\in R'.$ \end{proof} \begin{lem} Suppose that $(g, J)$ is an SKT structure, i.e. $d\left( J d\omega\right) =0$. Then \begin{equation}\label{kahler1} N_{\alpha\beta}^{2}\left( g_{\alpha +\beta}-g_{\alpha} - g_{\beta}\right)+ \epsilon_{\alpha -\beta}N_{\alpha,-\beta}^{2} \left( \epsilon_{\alpha -\beta} g_{\alpha-\beta} - g_{\alpha} + g_{\beta}\right) =0 \end{equation} for any $\alpha, \beta \in R_+'$, where we set $\epsilon_{\alpha -\beta} =0$ if $\alpha - \beta \notin R'$. \end{lem} \begin{proof} By a direct computation, we find \begin{align*} -\frac{1}{2}d\left( Jd\omega \right) (E_{\alpha}, E_{\beta}, E_{-\alpha}, E_{-\beta})&= N_{\alpha\beta}^{2}\left( g_{\alpha +\beta}- g_{\alpha} - g_{\beta}\right)\\ &+\epsilon_{\alpha -\beta}N_{\alpha,-\beta}^{2} \left( \epsilon_{\alpha -\beta} g_{\alpha-\beta} -g_{\alpha} + g_{\beta}\right). \end{align*} This relation implies our claim. \end{proof} For any root $$ R^{\prime}_{+}\ni\alpha \equiv k_1 \alpha_1 + \cdots + k_{m} \alpha_m \, ( \mathrm{mod}\, R^{+}_{0}), \,\, \alpha_i \in \Pi', $$ we define the length of $\alpha$ as $\ell(\alpha) = \sum_{i=1}^{m} k_i$.
Note that $\ell (\alpha )$ depends only on the projection $\kappa(\alpha)$ of $\alpha$ onto $\mathfrak{t}^{*}.$\\ {\bf Proof of Theorem \ref{pmain}.} By Proposition \ref{linearityProp} we have to check that \begin{equation}\label{condk} g_{\alpha +\beta} = g_{\alpha} +g_{\beta}, \end{equation} for any $\alpha ,\beta \in R'_{+}$ such that $\alpha +\beta\in R'_+$. We use induction on the length of $\gamma = \alpha + \beta \in R'_+$. Suppose first that $\gamma = \alpha + \beta \in R'_{+}$ has length two. Then $\alpha - \beta \notin R'$, hence $\epsilon_{\alpha -\beta}=0$. Identity (\ref{kahler1}) implies (\ref{condk}).\\ Suppose now that (\ref{condk}) holds for all $\gamma = \alpha+\beta\in R'_{+}$ with $\ell(\gamma )\leq k$. Let $\gamma\in R'_{+}$ with $\ell(\gamma) = k+1$ and suppose that $\gamma = \alpha + \beta$, where $\alpha, \beta \in R'_+$. We have to show that \begin{equation}\label{condk0} g_{\gamma } = g_{\alpha} +g_{\beta}. \end{equation} If $\alpha -\beta \notin R'$, our previous argument shows that (\ref{condk0}) holds. Suppose now that $\alpha -\beta\in R'$. Without loss of generality, we may assume that $\alpha - \beta \in R'_{+}.$ Then $\alpha= (\alpha - \beta) + \beta$ is a decomposition of the root $\alpha $ into a sum of two roots from $R'_{+}.$ Since $\alpha$ has length $\leq k$, our inductive assumption implies that $g_\alpha = g_{\alpha - \beta} + g_\beta$. Thus the second term of the identity (\ref{kahler1}) vanishes and we obtain (\ref{condk0}).
This concludes the proof of Theorem \ref{pmain}.\\ {\bf Proof of Corollary \ref{mainh}.} Let $(g, J_{+}, J_{-}, b)$ be a $G$-invariant bi-Hermitian structure in the sense of \cite{GHR} on a flag manifold $M=G/K$. Then, by Theorem \ref{pmain}, $(g,J_{\pm})$ are two K\"ahler structures and hence the $b$-field $b$ is closed. The complex structures $J_{\pm}$ are parallel with respect to the Levi-Civita connection. If the group $G$ is simple, the K\"ahler metric $g$ is irreducible. The endomorphism $A = J_{+}\circ J_{-}$ is symmetric with respect to $g$ and parallel. An easy argument which uses the irreducibility of $g$ shows that $J_{+}= J_{-}$ or $J_{+}= - J_{-}.$ This concludes the proof of Corollary \ref{mainh}. DMITRI V. ALEKSEEVSKY: Edinburgh University, King's Buildings, JCMB, Mayfield Road, Edinburgh, EH9 3JZ, UK; [email protected]\\ LIANA DAVID: Institute of Mathematics ``Simion Stoilow'' of the Romanian Academy, Calea Grivitei nr. 21, Sector 1, Bucharest, Romania; [email protected] \end{document}
\begin{document} \title{On problems in the calculus of variations \\ in increasingly elongated domains} \author{Herv\'e Le Dret\\ Sorbonne Universit\'es, UPMC Univ Paris 06, CNRS,\\Laboratoire Jacques-Louis Lions, Bo\^\i te courrier 187,\\ 75252 Paris Cedex 05, France. Email: herve.le\[email protected]\and Amira Mokrane\\ Laboratoire d'\'equations aux d\'eriv\'ees partielles\\ non lin\'eaires et histoire des math\'ematiques, ENS, B.P. 92,\\ Vieux Kouba, 16050 Alger, Alg\'erie\\ and USTHB, Facult\'e des math\'ematiques, D\'epartement d'analyse,\\ Laboratoire d'analyse math\'ematique et num\'erique\\ des \'equations aux d\'eriv\'ees partielles, Bab Ezzouar, Alger, Alg\'erie.\\ Email: mokrane\[email protected]} \maketitle \begin{abstract}\noindent We consider minimization problems in the calculus of variations set in a sequence of domains the size of which tends to infinity in certain directions and such that the data only depend on the coordinates in the directions that remain constant. We study the asymptotic behavior of minimizers in various situations and show that they converge in an appropriate sense toward minimizers of a related energy functional in the constant directions.\\ \textbf{MSC 2010:} 35J25, 35J35, 35J62, 35J92, 49J45, 74K99.\\ \textbf{Keywords:} Calculus of variations, domains becoming unbounded, asymptotic behavior, exponential rate of convergence. \end{abstract} \section{Introduction} In this article, we revisit the ``$\ell\to+\infty$'' problem in the context of the calculus of variations. This class of problems was introduced by Chipot and Rougirel in 2000, \cite{[CR1]}, see also the monograph by Chipot \cite{[C2]}, and has since given rise to many works by several authors dealing with various elliptic and parabolic problems. A prototypical $\ell\to+\infty$ problem is the following. Let $\omega={]-}1,1[$, $\ell>0$ be a real number and $\Omega_{\ell}\subset{\mathbb R}^2$ the rectangle ${]-}\ell,\ell[\times\omega$.
We denote by $x_1$ the first variable in ${]-}\ell,\ell[$ and $x_2$ the second variable in $\omega$. Any function $f\in L^2(\omega)$ in the second variable gives rise to a function in two variables still denoted $f$ by setting $f(x_1,x_2)=f(x_2)$. We thus consider the two boundary value problems: find $u_{\ell}$, a function in $(x_1,x_2)$, such that $$\begin{cases} -\Delta u_{\ell}=f & \mbox{in } \Omega_{\ell},\\ u_{\ell}=0 & \mbox{on } \partial\Omega_{\ell}, \end{cases} $$ and find $u_{\infty}$, a function in $x_2$, such that $$\begin{cases} -\frac{d^2u_{\infty}}{dx_2^2}=f & \mbox{in } \omega,\\ u_{\infty}=0 & \mbox{on }\partial\omega= \{-1,1\}. \end{cases} $$ Now the function $u_\infty$ can also be considered as a function in two variables that is independent of $x_1$. In this case, it can be shown that, for any $\ell_0>0$, one has $$u_{\ell}\rightarrow u_{\infty} \mbox{ in } H^1(\Omega_{\ell_0}) \hbox{ when }\ell\to+\infty,$$ hence the name of the problem. In other words, when the data do not depend on the elongated dimension, the solution of the above boundary value problem converges, in some sense and at finite distance, to the solution of the corresponding boundary value problem posed in the non-elongated dimension when the elongation tends to infinity. The majority of works on the $\ell\to+\infty$ problem make use of the boundary value problem itself, \emph{i.e.}, the PDE plus boundary condition. One exception to this rule is given by the recent papers \cite{Chipot-Savitska,Chipot Mosjic Roy}, in which the authors consider instead a sequence of problems in the calculus of variations posed on elongated domains, see also \cite{[C2new]}. This is the approach we adopt here as well. Our main motivation for this is that certain models, such as nonlinear hyperelasticity, are naturally posed as problems in the calculus of variations for which no Euler-Lagrange equation, \emph{i.e.}, no underlying PDE even in a weak form, is available, see \cite{Ball}.
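Returning to the prototypical problem above: for the particular choice $f\equiv 1$ (our own illustrative example, not taken from the paper), the limit problem $-u_\infty''=1$ on $\omega={]-}1,1[$ with $u_\infty(\pm1)=0$ has the explicit solution $u_\infty(x_2)=(1-x_2^2)/2$, which can be checked numerically:

```python
# Sketch (ours): verify that u(x2) = (1 - x2^2)/2 solves -u'' = 1 on (-1,1)
# with u(+-1) = 0, using central finite differences.
def u(x):
    return (1.0 - x * x) / 2.0

h = 1e-4
for x in [-0.9, -0.3, 0.0, 0.5, 0.8]:
    u_xx = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h)   # central difference
    assert abs(-u_xx - 1.0) < 1e-6                        # -u'' = f = 1
assert u(-1.0) == 0.0 and u(1.0) == 0.0                   # Dirichlet conditions
```
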
Moreover, questions surrounding the Saint-Venant principle in elasticity, see \cite{Mielke,Toupin}, are typically set in elongated domains, albeit in one direction only. Consequently, it makes sense to attempt dealing with some $\ell\to+\infty$ problems by using only energy minimization properties and no Euler-Lagrange equation whatsoever. We are however quite far from achieving the goal of treating nonlinear elasticity, since the approach that we develop below relies heavily on convexity, whereas convexity is not an appropriate hypothesis for nonlinear elasticity. We are nonetheless able to encompass a wide range of nonlinear energies, including the $p$-Lapla\-cian, with some technical restrictions on the number of elongated dimensions with respect to the exponent $p$. Our hypotheses are weaker and our results are sometimes stronger than those of \cite{Chipot Mosjic Roy}. The techniques are somewhat different too, with an emphasis here on weak convergence and weak lower semicontinuity techniques, and reliance on such classical techniques as the De Giorgi slicing method, which do not depend on convexity. As a general rule, we try to make as little use of convexity as we can at any given point. Let us describe our results a little more precisely. We consider bounded open subsets $\Omega_\ell$ of ${\mathbb R}^n$ which are Cartesian products of the form $\ell\omega'\times\omega''$, with $\omega'\subset {\mathbb R}^r$ and $\omega''\subset{\mathbb R}^{n-r}$, with $1\le r\le n-1$. We let $x=(x',x'')$ with $x'\in {\mathbb R}^r$ being the elongated variable and $x''\in {\mathbb R}^{n-r}$ the non-elongated variable. Likewise, for a scalar-valued function $v\colon\Omega_\ell\to{\mathbb R}$, we decompose the gradient $\nabla v=(\nabla'v,\nabla''v)$ with obvious notation.
We consider an energy density $F\colon{\mathbb R}^n\to{\mathbb R}$ and a function $f$ on $\omega''$, and introduce the minimization problem of finding $u_\ell\in W^{1,p}_0(\Omega_\ell)$ such that $J_\ell(u_\ell)=\inf_{v\in W^{1,p}_0(\Omega_\ell)}J_\ell(v)$ where $$ J_\ell(v)=\int_{\Omega_\ell}\bigl(F(\nabla v(x))-f(x'')v(x)\bigr)\,dx.$$ We assume that $F$ has $p$-growth, $p$-coerciveness and is convex. In particular, no assumption of strict convexity or uniform strict convexity is made on $F$. We then introduce $F''\colon {\mathbb R}^{n-r}\to{\mathbb R}$ by letting $F''(\xi'')=F(0,\xi'')$, again with obvious notation. Of course, $F''$ is convex, has $p$-growth and $p$-coerciveness, and the minimization problem of finding $u_\infty\in W^{1,p}_0(\omega'')$ such that $J_\infty(u_\infty)=\inf_{v\in W^{1,p}_0(\omega'')}J_\infty(v)$ where $$ J_\infty(v)=\int_{\omega''}\bigl(F''(\nabla'' v(x''))-f(x'')v(x'')\bigr)\,dx'',$$ admits solutions. It turns out that, under additional hypotheses, this problem is the ``$\ell\to+\infty$'' limit of the family of minimization problems under consideration. These hypotheses include appropriate growth and coerciveness hypotheses on the function $G\colon {\mathbb R}^n\to{\mathbb R}$, $G(\xi)=F(\xi)-F''(\xi'')$, of the form $$ \forall \xi\in{\mathbb R}^{n}, \lambda(|\xi'|^p+k|\xi''|^{p-k}|\xi'|^k)\leq G(\xi)\leq\Lambda(|\xi'|^{p}+k|\xi''|^{p-k}|\xi'|^k), $$ for some $0<\lambda\le\Lambda$ and $0\le k<p$. Depending on the case, there is no additional hypothesis (for $k=0$), or a hypothesis of strict convexity of $F''$, or a hypothesis of uniform strict convexity of $F''$ (for $k>0$). The results are an ``$\ell\to+\infty$'' convergence in the weak sense for $k=0$ when $r<p$, sharpened to the strong sense when $F''$ is furthermore assumed to be strictly convex, and a strong ``$\ell\to+\infty$'' convergence for $k>0$ when $r\leq kp/(p-k)$.
In the case of the $p$-Laplacian, $p>2$, we thus obtain strong ``$\ell\to+\infty$'' convergence when $r<2p/(p-2)$, see also \cite{Xie}. In addition, in the case $k=0$, if we assume that $F''$ is uniformly strictly convex, we obtain strong convergence at an exponential rate without any restriction on $r$. This includes the known behavior of the $2$-Laplacian in the ``$\ell\to+\infty$'' context. We conclude the article with a few comments and perspectives on the vectorial case, in connection with nonlinear elasticity in particular. \section{Statement of the problem} We consider two bounded open sets $\omega'\subset{\mathbb R}^r$, with $0\in\omega'$ and $\omega'$ starshaped with respect to $0$, and $\omega''\subset{\mathbb R}^{n-r}$, with $n>r\ge 1$. Let $\ell >0$ and set \begin{equation}\label{omgl} \omega'_\ell=\ell\omega'\text{ and }\Omega_\ell=\omega'_\ell\times\omega''\subset{\mathbb R}^n. \end{equation} Points $x$ in $\Omega_\ell$ will be denoted by $x=(x',x'')$ with $x'=(x_1,x_2,\ldots,x_r)\in\omega'_\ell$ and $x''=(x_{r+1},\ldots,x_n)\in \omega''$. Likewise, vectors $\xi$ in ${\mathbb R}^n$ will be decomposed as $\xi=(\xi',\xi'')$, with $\xi'\in {\mathbb R}^r$ and $\xi''\in {\mathbb R}^{n-r}$. Note that because of the starshaped assumption, we have $\Omega_\ell\subset\Omega_{\ell'}$ as soon as $\ell\le\ell'$, and we are thus dealing with a ``growing'' family of open sets. We make an additional regularity hypothesis on $\omega'$, which is as follows. Define first the gauge function of $\omega'$ as $$g(x')=\inf\{t\in{\mathbb R}_+^*; x'/t\in \omega'\}.$$ Since $\omega'$ is starshaped and bounded, this is well defined, $\omega'_\ell=\{x'; g(x')<\ell\}$, and there exist $0<R_1<R_{2}$ such that $R_1|x'|\le g(x')\le R_2|x'|$ for all $x'\in{\mathbb R}^r$. Now we assume that $\omega'$ is such that $g$ is a Lipschitz function with Lipschitz constant $K$. By Rademacher's theorem, this implies that $g$ is almost everywhere differentiable, with $|\nabla'g(x')|\le K$ a.e.
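As a concrete illustration of the gauge function (our own example, not in the paper): for the square $\omega'={]-}1,1[^2$ one finds $g(x')=\max(|x_1|,|x_2|)$, which is Lipschitz with $K=1$ and satisfies $|x'|/\sqrt2\le g(x')\le |x'|$, i.e. $R_1=1/\sqrt2$, $R_2=1$. A numerical sanity check:

```python
import math, random

def gauge(x):
    # gauge of the open square (-1,1)^2: g(x') = inf{t > 0 : x'/t in omega'},
    # which reduces to the max-norm
    return max(abs(x[0]), abs(x[1]))

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    r = math.hypot(x[0], x[1])
    # R1 |x'| <= g(x') <= R2 |x'| with R1 = 1/sqrt(2), R2 = 1
    assert r / math.sqrt(2) - 1e-12 <= gauge(x) <= r + 1e-12
    # Lipschitz with constant K = 1 (w.r.t. the Euclidean distance)
    assert abs(gauge(x) - gauge(y)) <= math.hypot(x[0] - y[0], x[1] - y[1]) + 1e-12
```
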
Moreover, it is known that $g$ then belongs to $W^{1,\infty}_{\rm loc}({\mathbb R}^r)$ and that its almost everywhere derivatives equal its distributional derivatives. This is true for example if $\omega'$ is convex. This regularity hypothesis is for convenience only: we use $g$ to build cut-off functions inside the domains, and not up to the boundary. It should be quite clear that our results can be rewritten in order to accommodate arbitrary open sets $\omega'$. We are interested in a sequence of problems in the calculus of variations ${\cal P}_\ell$ of the form \begin{equation}\label{pl} J_\ell(u_\ell)=\inf_{v\in W^{1,p}_0(\Omega_\ell)}J_\ell(v), \end{equation} with $u_\ell\in W^{1,p}_0(\Omega_\ell)$ and \begin{equation}\label{funjl} J_\ell(v)=\int_{\Omega_\ell}\bigl[F(\nabla v(x))-f''(x'')v(x)\bigr]\,dx, \end{equation} where $f''\in L^{p'}(\omega'')$, $\frac1p+\frac1{p'}=1$, is a given function. Observe that the force term of this problem only depends on the ``non-elongated'' variable $x''$, so that it is reasonable to expect that $u_\ell$ behaves mostly as a function of $x''$ in the limit $\ell\to+\infty$, in a sense made precise below. We could also consider more general semilinear force terms of the form $h(x'',v)$ satisfying appropriate growth and convexity assumptions, but we stick here with a linear term for simplicity. We assume that the energy density $F\colon{\mathbb R}^n\to{\mathbb R}$ is convex. We let \begin{equation}\label{structure de F} \begin{array}{rcl} F''\colon {\mathbb R}^{n-r}&\to&{\mathbb R}\\ \xi''&\mapsto&F(0,\xi'') \end{array} \quad\hbox{and}\quad \begin{array}{rcl} G\colon {\mathbb R}^n&\to&{\mathbb R}\\ \xi&\mapsto&F(\xi)-F''(\xi'') \end{array} \end{equation} so that \begin{equation}\label{f1f2} F(\xi',\xi'')=F''(\xi'')+G(\xi',\xi''), \end{equation} and $F''$ is convex.
These functions are assumed to satisfy the following coerciveness and growth hypotheses \begin{align} &\forall \xi\in{\mathbb R}^{n}, \lambda|\xi|^p\leq F(\xi)\leq\Lambda(|\xi|^p+1),\label{growth1}\\ &\forall \xi\in{\mathbb R}^{n}, \lambda(|\xi'|^p+k|\xi''|^{p-k}|\xi'|^k)\leq G(\xi)\leq\Lambda(|\xi'|^{p}+k|\xi''|^{p-k}|\xi'|^k),\label{growth3} \end{align} for some $0<\lambda\le\Lambda$, $p>1$ and $0\le k< p$.\footnote{Note that $k=p$ yields the same hypothesis as $k=0$.} Here, for $\xi\in{\mathbb R}^d$, $|\xi|$ denotes the canonical Euclidean norm of $\xi$ in ${\mathbb R}^d$. Clearly, condition \eqref{growth1} implies the similar condition \begin{equation} \forall \xi''\in{\mathbb R}^{n-r}, \lambda|\xi''|^p\leq F''(\xi'')\leq\Lambda(|\xi''|^p+1),\label{growth2} \end{equation} for $F''$. Energy densities of the form above include that associated with the $p$-Laplacian for $p\ge 2$. Indeed, in this case, $F(\xi)=\frac1p|\xi|^p=\frac1p(|\xi'|^2+|\xi''|^2)^{p/2}$ and we can take $k=2$ for $p>2$, or $k=0$ for $p=2$. Another simple energy density that is covered by our analysis is $F(\xi)=\frac1p (|\xi'|^p+|\xi''|^p)$ or more generally energies of the form $F(\xi)=F'(\xi')+F''(\xi'')$, with appropriate hypotheses on $F'$ and $F''$. Here, assuming without loss of generality that $F'(0)=0$, we have $G(\xi',\xi'')=F'(\xi')$ and we can take $k=0$. In addition to the above growth and coerciveness hypotheses, which obviously imply that problem ${\cal P}_\ell$ has at least one solution $u_\ell$, we assume that $F''$ is uniformly strictly convex for $k>0$, in the sense that there exists a constant $\beta>0$ such that for all $\xi''$, $\zeta''\in {\mathbb R}^{n-r}$ and all $\theta,\mu\in [0,1]$ with $\theta+\mu=1$, we have \begin{equation}\label{uniformite stricte} F''(\theta\xi''+\mu\zeta'')\le \theta F''(\xi'')+\mu F''(\zeta'')-k\beta\theta\mu(\theta^{p-1}+\mu^{p-1})|\xi''-\zeta''|^p. \end{equation} see for instance \cite{[A],Evans,[J-N]}. 
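For the model case $p=4$, $k=2$ of the $p$-Laplacian (a special case we add purely for illustration), the algebra behind \eqref{growth3} is fully explicit: $G(\xi)=\frac14\bigl((|\xi'|^2+|\xi''|^2)^2-|\xi''|^4\bigr)=\frac14\bigl(|\xi'|^4+2|\xi''|^2|\xi'|^2\bigr)$, so \eqref{growth3} holds with $\lambda=\Lambda=\frac14$. A quick numerical confirmation of this identity:

```python
import random

def G(a, b, p=4):
    # G(xi) = F(xi) - F''(xi'') for F(xi) = |xi|^p / p, with a = |xi'|, b = |xi''|
    return ((a * a + b * b) ** (p / 2) - b ** p) / p

def bound(a, b, p=4, k=2):
    # the comparison quantity |xi'|^p + k |xi''|^(p-k) |xi'|^k from (growth3)
    return a ** p + k * b ** (p - k) * a ** k

random.seed(1)
for _ in range(1000):
    a, b = random.uniform(0, 3), random.uniform(0, 3)
    # for p = 4, k = 2 the two sides agree exactly with lambda = Lambda = 1/4
    assert abs(G(a, b) - 0.25 * bound(a, b)) < 1e-9 * (1 + bound(a, b))
```
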
The $p$-Laplacian for $p> 2$, $k=2$, satisfies hypothesis \eqref{uniformite stricte} (the $2$-Laplacian satisfies the alternate hypothesis \eqref{uniformite stricte k=0} that will be used later on in Section 5). Note that when $k=0$, the hypothesis becomes vacuous, and there is actually no requirement of even strict convexity, let alone uniform strict convexity, of $F''$ in this case. We now introduce our candidate limit problem ${\cal P}_\infty$ as that of finding $u_\infty\in W^{1,p}_0(\omega'')$ such that \begin{equation}\label{pinfty} J_\infty(u_\infty)=\inf_{v\in W^{1,p}_0(\omega'')}J_\infty(v), \end{equation} with \begin{equation}\label{funjinfty}{} J_{\infty}(v)=\int_{\omega''}\bigl[F''(\nabla''v(x''))-f''(x'')v(x'')\bigr]\,dx''. \end{equation} It is also clear that problem ${\cal P}_\infty$ has at least one solution $u_\infty$. Here and in the sequel, we use the following notational device $$\nabla'=(\partial_1,\ldots,\partial_r),\quad\nabla''=(\partial_{r+1},\ldots,\partial_n),$$ which we apply indifferently to functions defined either on $\Omega_\ell$ or on $\omega''$. For brevity, we refer to $\nabla'$ as the ``horizontal'' part of the gradient and to $\nabla''$ as the ``vertical'' part of the gradient. We want to study the asymptotic behavior of $u_\ell$ when $\ell\to+\infty$ and compare it with a minimizer $u_\infty$ of the $(n-r)$-dimensional vertical problem ${\cal P}_\infty$. Actually, our goal is to show that the former converges to the latter in a sense that will be explained later on. \section{Preliminary estimates} We first give several estimates that we will use in the proofs of our convergence results. The first estimate follows immediately from Poincar\'e's inequality.
\begin{lemma}\label{lem1} There exists a constant $c_1=c_1(\omega'')$ independent of $\ell$ such that for all $v\in W^{1,p}(\Omega_{\ell})$ whose trace vanishes on $\omega'_\ell\times\partial\omega''$, we have \begin{equation}\label{poincare} \|v\|_{L^p(\Omega_{\ell})}\leq c_1 \|\nabla'' v\|_{L^p(\Omega_{\ell};{\mathbb R}^{n-r})}. \end{equation} \end{lemma} Let us now give a first, coarse estimate of $u_\ell$. \begin{lemma}\label{lem2} There exists a constant $c_2$ independent of $\ell$, such that \begin{equation}\label{estmtul} \int_{\Omega_{\ell}}|\nabla u_{\ell}|^p\,dx\le c_2 \ell^r. \end{equation} \end{lemma} \noindent{\bf Proof.}\enspace{} Let us take $v=0$ as a test-function in problem \eqref{pl}. It follows that $$\int_{\Omega_{\ell}} F(\nabla u_{\ell}(x))\,dx\le\int_{\Omega_{\ell}} f''(x'')u_{\ell}(x)\,dx+A\ell^r,$$ where $A=F(0){\cal L}^r(\omega'){\cal L}^{n-r}(\omega'')$ does not depend on $\ell$ (${\cal L}^d$ denotes the $d$-dimensional Lebesgue measure). By H\"older's inequality and the coerciveness assumption \eqref{growth1}, it follows that \begin{align*} \int_{\Omega_{\ell}} |\nabla u_{\ell}(x)|^p\,dx &\le \frac{1}{\lambda}\Bigl(\int_{\Omega_{\ell}}|f''(x'')|^{p'}\,dx\Bigr)^{1/p'} \Bigl(\int_{\Omega_{\ell}}|u_{\ell}(x)|^p\,dx \Bigr)^{1/p} +\frac {A}{\lambda}\ell^r\\ &\le \frac{B}{\lambda} \ell^{r/p'} \|\nabla'' u_{\ell}\|_{L^p(\Omega_{\ell};{\mathbb R}^{n-r})}+\frac {A}{\lambda}\ell^r, \end{align*} with $B=c_1\|f''\|_{L^{p'}(\omega'')}{\cal L}^r(\omega')^{1/p'}$, which does not depend on $\ell$. Consequently, we obtain an estimate of the form \begin{equation}\label{estimation grossiere} \|\nabla u_{\ell}\|^p_{L^p(\Omega_{\ell};{\mathbb R}^{n})} \le C \ell^{r/p'} \|\nabla u_{\ell}\|_{L^p(\Omega_{\ell};{\mathbb R}^{n})}+D\ell^r, \end{equation} where $C$ and $D$ are constants that do not depend on $\ell$. Let us set $X=\ell^{-r/p}\|\nabla u_{\ell}\|_{L^p}$.
Estimate \eqref{estimation grossiere} now reads $$X^p\le CX+D,$$ so that there exists $c_2$ depending only on $C$ and $D$ such that $X\le c_2^{1/p}$, which completes the proof. $\square$\par\medbreak We now recall an elementary estimate similar to what can be found in \cite{[G2]} for instance. \begin{lemma}\label{lem3} Let $h(t)$ be a nonnegative bounded function defined on an interval $[\tau_0,\tau_1]$, $\tau_0\geq 0$. Suppose that for $\tau_0\leq t< s\leq \tau_1$, we have $$h(t)\leq \theta h(s)+C(s-t)^{-\nu_1}+D(s-t)^{-\nu_2},$$ where $C,D,\nu_1,\nu_2,\theta$ are nonnegative constants with $0\leq \theta< 1$. Then, for all $\tau_0\le t<s\le\tau_1$, we have $$h(t)\le c(C(s-t)^{-\nu_1}+D(s-t)^{-\nu_2}),$$ where $c$ is a constant that only depends on $\nu_1$, $\nu_2$ and $\theta$. \end{lemma} \noindent{\bf Proof.}\enspace{} If we have two sequences of nonnegative numbers $a_i$ and $b_i$ such that $a_i\le \theta a_{i+1}+b_{i+1}$, it follows by induction that $a_0\le \theta^i a_i+\sum_{j=0}^{i-1}\theta^jb_{j+1}$. We apply this remark to the sequences $a_i=h(t_i)$ and $b_{i+1}=C(t_{i+1}-t_i)^{-\nu_1}+D(t_{i+1}-t_i)^{-\nu_2}$, where $t_i=t+(1-\sigma^i)(s-t)$, with $0<\sigma<1$ to be chosen later on, is an increasing sequence in $[\tau_0,\tau_1]$ such that $t_0=t$. This yields the estimate $$h(t)\le \theta^ih(t_i)+\frac{C}{(s-t)^{\nu_1}}(1-\sigma)^{-\nu_1}\sum_{j=0}^{i-1}\biggl(\frac\theta{\sigma^{\nu_1}}\biggr)^j+\frac{D}{(s-t)^{\nu_2}}(1-\sigma)^{-\nu_2}\sum_{j=0}^{i-1}\biggl(\frac\theta{\sigma^{\nu_2}}\biggr)^j .$$ We now choose $\sigma<1$ in such a way that $\frac\theta{\sigma^{\nu_1}}<1$ and $\frac\theta{\sigma^{\nu_2}}<1$, and conclude by letting $i\to+\infty$, remembering that $h(t_i)$ is bounded. $\square$\par\medbreak Next, we estimate the horizontal part of the gradient of $u_\ell$ in $L^p(\Omega_{\ell_0})$ in terms of $\ell$, $\ell_0$, $u_\ell$ and a minimizer $u_\infty$ of the vertical problem ${\cal P}_\infty$.
\begin{theorem}\label{estimation de base} There exists a constant $c_3$ independent of all the other quantities such that, for all $0<t<s\le\ell$ and all minimizers $u_\infty$ of the vertical problem, we have \begin{multline}\label{l'estimation en question usc} \|\nabla'u_\ell\|^p_{L^p(\Omega_{t};{\mathbb R}^r)}+k\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_t;{\mathbb R}^{n-r})}\\\le \frac{\delta c_3}{(s-t)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_s\setminus\Omega_t;{\mathbb R}^{n-r})}\\+ \frac{c_3k}{(s-t)^{kp/(p-k)}}\left\{\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t;{\mathbb R}^{n-r})}+(1-\delta)\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_s\setminus\Omega_t;{\mathbb R}^{n-r})}\right\}. \end{multline} where $\delta=1$ if $0\le k\le p/2$, $\delta=0$ otherwise. \end{theorem} \noindent{\bf Proof.}\enspace{} We first define a family of cut-off functions as follows. For all $0<t<s\le \ell$, we set $$\rho_{s,t}(x')=\frac1{s-t}\min\{(s-g(x'))_+,s-t\}.$$ By the definition of the gauge function, we see that $\rho_{s,t}\equiv 0$ on $\omega'_\ell\setminus\omega'_s$, $\rho_{s,t}\equiv 1$ on $\omega'_t$ and $0\le \rho_{s,t}\le 1$. By our regularity assumption on $\omega'$, $\rho_{s,t}$ is Lipschitz and such that $$\nabla'\rho_{s,t}(x')=-\frac1{s-t}\nabla'g(x'){\bf 1}_{\omega'_s\setminus\omega'_t}(x'),$$ so that we can estimate \begin{equation}\label{gradient cutoff} |\nabla'\rho_{s,t}(x')|\le\frac K{s-t}{\bf 1}_{\omega'_s\setminus\omega'_t}(x'). \end{equation} We pick a number $0<\alpha<1$ and then set \begin{equation}\label{test1} v_1(x)=(1-\alpha\rho_{s,t}(x'))u_{\ell}(x)+\alpha\rho_{s,t}(x')u_{\infty}(x''), \end{equation} and \begin{equation}\label{test2} v_2(x)=(1-\alpha\rho_{s,t}(x'))u_{\infty}(x'')+\alpha\rho_{s,t}(x')u_{\ell}(x). 
\end{equation} Clearly, $v_1$ belongs to $W^{1,p}_0(\Omega_\ell)$ and is thus a suitable test-function for problem ${\cal P}_\ell$, hence \begin{equation}\label{test v1} \int_{\Omega_\ell}\bigl[F(\nabla u_\ell(x))-f''(x'')u_\ell(x)\bigr]\,dx\le\int_{\Omega_\ell}\bigl[F(\nabla v_1(x))-f''(x'')v_1(x)\bigr]\,dx. \end{equation} Next we note that, owing to the embedding $W^{1,p}_0(\Omega_\ell)\hookrightarrow L^p(\omega'_\ell; W^{1,p}_0(\omega''))$, $v_2$ is a suitable test-function for problem ${\cal P}_\infty$ for almost all $x'$, hence \begin{multline}\label{test v2} \int_{\omega''}\bigl[F''(\nabla''u_\infty(x''))-f''(x'')u_\infty(x'')\bigr]\,dx''\\ \le\int_{\omega''}\bigl[F''(\nabla''v_2(x',x''))-f''(x'')v_2(x',x'')\bigr]\,dx''. \end{multline} Integrating estimate \eqref{test v2} over $\omega'_\ell$, we obtain \begin{multline}\label{test v2 bis} \int_{\Omega_\ell}\bigl[F''(\nabla''u_\infty(x''))-f''(x'')u_\infty(x'')\bigr]\,dx\\ \le\int_{\Omega_\ell}\bigl[F''(\nabla''v_2(x))-f''(x'')v_2(x)\bigr]\,dx. \end{multline} We add estimates \eqref{test v1} and \eqref{test v2 bis} together and note that all the terms involving $f''$ cancel out since $v_1+v_2=u_\ell+u_\infty$. Therefore, \begin{equation}\label{addition tests} \int_{\Omega_\ell}\bigl[F(\nabla u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx \le \int_{\Omega_\ell}\bigl[F(\nabla v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx.
\end{equation} We observe that $v_1=u_\ell$ and $v_2=u_\infty$ on $\Omega_\ell\setminus\Omega_s$, so that estimate \eqref{addition tests} boils down to \begin{align} \int_{\Omega_s}\bigl[F(\nabla u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx &\le \int_{\Omega_s\setminus\Omega_t}\bigl[F(\nabla v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx\nonumber\\ &\qquad+\int_{\Omega_t}\bigl[F(\nabla v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx.\label{addition tests 2} \end{align} The left-hand side of \eqref{addition tests 2} can be rewritten as \begin{multline}\label{addition tests gauche reecrite} \int_{\Omega_s}\bigl[F(\nabla u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx\\=\int_{\Omega_s}\bigl[G(\nabla u_\ell(x))+F''(\nabla''u_\ell(x))+F''(\nabla''u_\infty(x''))\bigr]\,dx. \end{multline} Let $I_1$ and $I_2$ be the first and second integrals in the right-hand side of \eqref{addition tests 2}. To estimate $I_1$, we just use the convexity of $F''$, since the vertical gradients of $v_1$ and $v_2$ are convex combinations of the vertical gradients of $u_\ell$ and $u_\infty$, \begin{align} I_1&=\int_{\Omega_s\setminus\Omega_t}\bigl[G(\nabla v_1(x))+F''(\nabla'' v_1(x))+F''(\nabla''v_2(x))\bigr]\,dx\nonumber\\ &\le\int_{\Omega_s\setminus\Omega_t}\bigl[G(\nabla v_1(x))+F''(\nabla'' u_\ell(x))+F''(\nabla''u_\infty (x))\bigr]\,dx.\label{estimation I1} \end{align} To estimate $I_2$, we note that $v_1=(1-\alpha)u_\ell+\alpha u_\infty$ and $v_2=\alpha u_\ell+ (1-\alpha)u_\infty$ on $\Omega_t$, thus owing to the convexity of $F$ and the uniform convexity \eqref{uniformite stricte} of $F''$, \begin{align} I_2&\le \int_{\Omega_t}\bigl[(1-\alpha)F(\nabla u_\ell)+\alpha F(\nabla u_\infty)+(1-\alpha)F''(\nabla''u_\infty)+\alpha F''(\nabla''u_\ell)\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -k\gamma|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx\nonumber\\ &= \int_{\Omega_t}\bigl[(1-\alpha)G(\nabla u_\ell)+F''(\nabla''u_\ell)+F''(\nabla'' u_\infty) 
-k\gamma|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx,\label{estimation I2} \end{align} for some $\gamma>0$. Putting estimates \eqref{addition tests 2}, \eqref{estimation I1}, \eqref{estimation I2} and equation \eqref{addition tests gauche reecrite} together, we obtain $$\int_{\Omega_s\setminus\Omega_t}G(\nabla u_\ell)\,dx+\int_{\Omega_t}\bigl[\alpha G(\nabla u_\ell)+k\gamma|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx \le\int_{\Omega_s\setminus\Omega_t}G(\nabla v_1(x))\,dx,$$ which, upon using the coerciveness hypothesis \eqref{growth3}, yields \begin{multline}\label{premiere estimation serieuse} a\int_{\Omega_t}\bigl[(|\nabla'u_\ell|^p+k|\nabla''u_\ell|^{p-k}|\nabla'u_\ell|^k)+k|\nabla''(u_\infty-u_\ell)|^p\bigr]\,dx \\\le\int_{\Omega_s\setminus\Omega_t}G(\nabla v_1(x))\,dx, \end{multline} where $a>0$ is a small generic constant that only depends on the other constants involved. We now focus on estimating the right-hand side of \eqref{premiere estimation serieuse}. We have \begin{equation}\label{gradients tests} \left\{\begin{aligned} \nabla'v_1&=(1-\alpha\rho_{s,t})\nabla'u_{\ell}+\alpha\nabla'\rho_{s,t}(u_{\infty}-u_{\ell}),\\ \nabla''v_1&=(1-\alpha\rho_{s,t})\nabla''u_{\ell}+\alpha\rho_{s,t}\nabla''u_{\infty}. \end{aligned}\right. \end{equation} Based on \eqref{gradients tests} and the definition of $\rho_{s,t}$, we have the following estimates for any exponent $q$: \begin{equation}\label{gradients tests estimes} \left\{\begin{aligned} |\nabla'v_1|^q&\le2^{q-1}|\nabla'u_{\ell}|^q+2^{q-1}\frac{K^q}{(s-t)^q}|u_{\infty}-u_{\ell}|^q,\\ |\nabla''v_1|^{q}&\le2^{q-1}|\nabla''u_{\ell}|^{q}+2^{q-1}|\nabla''(u_{\infty}-u_{\ell})|^{q}. \end{aligned}\right. \end{equation} We will use exponents $q=p$ and $q=k$ for the first line and $q=p-k$ for the second line. 
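The estimates \eqref{gradients tests estimes} rest on the elementary convexity bound $|x+y|^q\le 2^{q-1}(|x|^q+|y|^q)$, valid for $q\ge1$. A quick numerical check of this bound (ours, not part of the argument):

```python
import random

# Convexity of t -> t^q for q >= 1 gives ((x+y)/2)^q <= (x^q + y^q)/2,
# i.e. (x+y)^q <= 2^(q-1) (x^q + y^q) for nonnegative x, y.
random.seed(5)
for _ in range(1000):
    q = random.uniform(1.0, 6.0)
    x, y = random.uniform(0, 5), random.uniform(0, 5)
    assert (x + y) ** q <= 2 ** (q - 1) * (x ** q + y ** q) + 1e-9
```
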
Due to the growth hypothesis \eqref{growth3}, we have \begin{multline}\label{estimation de Gv1} G(\nabla v_1)\le A\biggl(|\nabla'u_{\ell}|^p+\frac{1}{(s-t)^p}|u_{\infty}-u_{\ell}|^p\\+k\bigl(|\nabla''u_{\ell}|^{p-k}+|\nabla''(u_{\infty}-u_{\ell})|^{p-k}\bigr)\Bigl(|\nabla'u_{\ell}|^k+\frac{1}{(s-t)^k}|u_{\infty}-u_{\ell}|^k\Bigr)\biggr), \end{multline} where $A$ is a large generic constant that only depends on the other constants involved. For $k\ge 1$, three of the four product terms that appear need to be estimated. For this purpose, we will use Young's inequality in the following form $$a^kb^{p-k}\le \frac{k}{p}a^p+\frac{p-k}{p}b^{p}$$ for $a,b\ge 0$ (recall that $p> k$). We thus obtain \begin{multline}\label{estimation de Gv1 2} G(\nabla v_1)\le A\biggl(|\nabla'u_{\ell}|^p+\frac{1}{(s-t)^p}|u_{\infty}-u_{\ell}|^p\\ +k\Bigl(|\nabla''u_{\ell}|^{p-k}|\nabla'u_{\ell}|^k+|u_{\infty}-u_{\ell}|^p+\frac{1}{(s-t)^{kp/(p-k)}}|\nabla''u_{\ell}|^{p} +|\nabla''(u_{\infty}-u_{\ell})|^{p}\Bigr)\biggr), \end{multline} where $A$ is another generic constant. We integrate this inequality over $\Omega_s\setminus\Omega_t$ and use Poincar\'e's inequality in the vertical variables to obtain \begin{multline}\label{estimation du membre de droite} \int_{\Omega_s\setminus\Omega_t}G(\nabla v_1)\,dx\le A\int_{\Omega_s\setminus\Omega_t}\bigl(|\nabla'u_{\ell}|^p+k|\nabla''u_{\ell}|^{p-k}|\nabla'u_{\ell}|^k+k|\nabla''(u_{\infty}-u_{\ell})|^{p}\bigr)\,dx\\ +\frac{A}{(s-t)^p}\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)} +\frac{Ak}{(s-t)^{kp/(p-k)}}\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)}, \end{multline} with $A$ yet another generic constant. We now consider two different cases. 
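The form of Young's inequality invoked above is simply the weighted arithmetic-geometric mean inequality with weights $k/p$ and $(p-k)/p$; a quick numerical sanity check (ours, not part of the argument):

```python
import random

# Weighted AM-GM: a^k b^(p-k) = (a^p)^(k/p) (b^p)^((p-k)/p)
#               <= (k/p) a^p + ((p-k)/p) b^p   for a, b >= 0 and 0 < k < p.
random.seed(3)
for _ in range(1000):
    p = random.uniform(1.5, 6.0)
    k = random.uniform(0.1, p - 0.1)
    a, b = random.uniform(0, 4), random.uniform(0, 4)
    assert a ** k * b ** (p - k) <= (k / p) * a ** p + ((p - k) / p) * b ** p + 1e-9
```
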
First, for $0\le k\le p/2$, let us set $$h(t)=\int_{\Omega_{t}}\bigl(|\nabla' u_{\ell}|^p+k|\nabla'' u_{\ell}|^{p-k}|\nabla' u_{\ell}|^k+k|\nabla''(u_{\infty}-u_{\ell})|^{p}\bigr)\,dx.$$ Inequalities \eqref{premiere estimation serieuse} and \eqref{estimation du membre de droite} may be rewritten as \begin{multline}\label{inegalite de g} h(t)\le \theta h(s) +\frac{1}{(s-t)^{p}}\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)}\\ +\frac{k}{(s-t)^{kp/(p-k)}}\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)} , \end{multline} with $\theta=\frac A{A+a}\in {]}0,1[$. Let $t\le t_1<s_1\le s$. We invoke Lemma \ref{lem3}, with $\nu_1=p$, $C=\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)}$, $\nu_2=kp/(p-k)$, $D= k\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)}$, to conclude that \begin{equation}\label{inegalite de g corrigee} h(t_1)\le c(C(s_1-t_1)^{-\nu_1}+D(s_1-t_1)^{-\nu_2}). \end{equation} The result follows in this case by letting $t_1\to t$ and $s_1\to s$, since the constant $c$ only depends on $\nu_1$, $\nu_2$ and $\theta$, and $h$ is continuous (recall that $\delta=1$). The second case is when $p/2<k<p$. Estimate \eqref{estimation du membre de droite} still holds true, but we now use Young's inequality once more in the form $$a^wb^{p-w}\le \frac{w}{p}a^p+\frac{p-w}{p}b^{p}$$ with $w=\frac{p(2k-p)}k$ to deduce that $$\frac1{(s-t)^p}=1^w\biggl(\frac1{(s-t)^{p/(p-w)}}\biggr)^{p-w}\le \frac{2k-p}k+\frac{p-k}k\frac1{(s-t)^{kp/(p-k)}},$$ so that we can actually write \begin{equation}\label{inegalite de g bis} h(t)\le \theta h(s) +\frac{k}{(s-t)^{kp/(p-k)}}\bigl(\|\nabla''(u_{\infty}-u_{\ell})\|^p_{L^p(\Omega_s\setminus\Omega_t)} +\|\nabla''u_{\ell}\|^p_{L^p(\Omega_s\setminus\Omega_t)}\bigr) , \end{equation} with the same function $h$, but with another value for $\theta$, which we do not write here. We conclude as before with Lemma \ref{lem3}, taking the first constant $C=0$ for instance. 
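Let us spell out the elementary exponent computation behind this last application of Young's inequality. Since $w=\frac{p(2k-p)}k$ and $p/2<k<p$, we have $0<w<p$ and $$p-w=p-\frac{p(2k-p)}k=\frac{p(p-k)}k,\qquad\text{hence}\qquad\frac{p}{p-w}=\frac{k}{p-k},$$ so that, taking $a=1$ and $b=(s-t)^{-p/(p-w)}$ in Young's inequality, we obtain $b^{p-w}=(s-t)^{-p}$ and $b^{p}=(s-t)^{-p^2/(p-w)}=(s-t)^{-kp/(p-k)}$, as announced. 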
$\square$\par\medbreak The following is an immediate consequence of the previous estimate. \begin{corollary}\label{Caccio global} We have, for all $\ell\ge \ell_0$, \begin{multline}\label{Caccio usc} \|\nabla'u_\ell\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^r)}+k\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^{n-r})}\\\le \frac{\delta c_3}{(\ell-\ell_0)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}\\+\frac{c_3k}{(\ell-\ell_0)^{kp/(p-k)}}\left\{\|\nabla''u_{\ell}\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}+(1-\delta)\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}\right\}, \end{multline} where $\delta=1$ if $0\le k\le p/2$, $\delta=0$ otherwise. \end{corollary} \noindent{\bf Proof.}\enspace{} Indeed, we take $s=\ell$, $t=\ell_0$ and notice that $\Omega_\ell\setminus\Omega_{\ell_0}\subset\Omega_\ell$. $\square$\par\medbreak Let us remark that when $k=0$, no strict convexity assumption at all is made on $F''$, \emph{i.e.}, $F''$ may well fail to be strictly convex, and the previous result boils down to $$ \|\nabla'u_\ell\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^r)}\le \frac{c_3}{(\ell-\ell_0)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_\ell;{\mathbb R}^{n-r})}. $$ However, when $k>0$, we make crucial use of the uniform strict convexity to derive the estimate. Let us close this section with an estimate similar to that obtained in Lemma~\ref{lem2}. Recall that $u_\ell$ is a minimizer on $\Omega_\ell$, whereas the following estimate is on $\Omega_{\ell_0}$. {See \cite{Chipot-Savitska} for a very similar argument.} \begin{lemma}\label{estimation sur lzero} There exist constants $\bar\ell$ and $c_4$, independent of $\ell$, such that for all $\bar \ell\le\ell_0\le \ell$, \begin{equation}\label{estmtul zero} \int_{\Omega_{\ell_0}}|\nabla u_{\ell}|^p\,dx\le c_4 \ell_0^r. \end{equation} \end{lemma} \noindent{\bf Proof.}\enspace{} Let $1\le t\le \ell-1$ and set $\rho_t=\rho_{t+1,t}$. 
We take $v_{t,\ell}=(1-\rho_t)u_{\ell}$ as a test-function in problem~\eqref{pl}. This test-function is equal to $u_\ell$ ``far away'' and is $0$ in $\Omega_t$. We obtain \begin{align*} \int_{\Omega_{\ell}} F(\nabla u_{\ell})\,dx&\le \int_{\Omega_{\ell}}\bigl( F(\nabla v_{t,\ell})-f''(v_{t,\ell}-u_\ell)\bigr)\,dx\\ &=\int_{\Omega_t}F(0)\,dx+\int_{\Omega_t}f''u_\ell\,dx+ \int_{\Omega_{t+1}\setminus\Omega_t}\bigl[ F(\nabla v_{t,\ell})+f''\rho_tu_\ell\bigr]\,dx\\ &\qquad\qquad\qquad +\int_{\Omega_{\ell}\setminus\Omega_{t+1}} F(\nabla u_{\ell})\,dx. \end{align*} Therefore, we see that $$ \int_{\Omega_{t+1}} F(\nabla u_{\ell})\,dx\le At^r+\int_{\Omega_{t+1}}\nu_tf''u_\ell\,dx+\int_{\Omega_{t+1}\setminus\Omega_t} F(\nabla v_{t,\ell})\,dx, $$ with $A=F(0){\cal L}^r(\omega'){\cal L}^{n-r}(\omega'')$ and $\nu_t=\mathbf{1}_{\Omega_t}+\rho_t\mathbf{1}_{\Omega_{t+1}\setminus\Omega_t}$. By the coerciveness and growth hypotheses \eqref{growth1}, we infer that $$ \lambda\int_{\Omega_{t+1}} |\nabla u_{\ell}|^p\,dx\le Bt^r+\int_{\Omega_{t+1}}|f''u_\ell|\,dx+\Lambda\int_{\Omega_{t+1}\setminus\Omega_t} |\nabla v_{t,\ell}|^p\,dx, $$ for some constant $B$, since $0\le\nu_t\le 1$ and the Lebesgue measure of $\Omega_{t+1}\setminus\Omega_t$ is of the order of $t^{r-1}$. 
In $\Omega_{t+1}\setminus\Omega_t$, we have $$|\nabla v_{t,\ell}|^p=|(1-\rho_t)\nabla u_{\ell}-u_\ell\nabla\rho_t|^p\le 2^{p-1}\bigl(|\nabla u_\ell|^p+K^p|u_\ell|^p\bigr).$$ Clearly, estimate \eqref{poincare} is also valid on $\Omega_{t+1}\setminus\Omega_t$, thus, $$ \int_{\Omega_{t+1}\setminus\Omega_t}|u_\ell|^p\,dx \le c_1^p\int_{\Omega_{t+1}\setminus\Omega_t}|\nabla''u_\ell|^p\,dx\le c_1^p\int_{\Omega_{t+1}\setminus\Omega_t}|\nabla u_\ell|^p\,dx, $$ so that $$\int_{\Omega_{t+1}\setminus\Omega_t} |\nabla v_{t,\ell}|^p\,dx\le 2^{p-1}(1+c_1^pK^p)\int_{\Omega_{t+1}\setminus\Omega_t} |\nabla u_{\ell}|^p\,dx.$$ Furthermore, \begin{align*} \int_{\Omega_{t+1}}|f''u_\ell|\,dx&\le\frac{\varepsilon}{p}\int_{\Omega_{t+1}}|u_\ell|^p\,dx +\frac{(t+1)^r}{\varepsilon^{p'/p}p'}{\cal L}^r(\omega')\|f''\|_{L^{p'}(\omega'')}^{p'}\\ &\le\frac{\varepsilon c_1^p}{p}\int_{\Omega_{t+1}}|\nabla u_\ell|^p\,dx +\frac{C}{\varepsilon^{p'/p}}t^r, \end{align*} with $\varepsilon>0$ to be chosen afterwards. Let us set $$h(t)=\int_{\Omega_t}|\nabla u_\ell|^p\,dx.$$ Putting all the above estimates together, it follows that \begin{equation} \lambda' h(t+1)\le E\bigl(h(t+1)-h(t)\bigr)+Dt^r,\label{une estimation de plus} \end{equation} with $\lambda'=\lambda-\frac{\varepsilon c_1^p}{p}$, $D=B+\frac{C}{\varepsilon^{p'/p}}$ and $E=2^{p-1}\Lambda(1+c_1^pK^p)$. We now pick $\varepsilon$ in such a way that $\lambda'>0$. Inequality \eqref{une estimation de plus} may be rewritten as \begin{equation}\label{encore une recurrence a venir} h(t)\le \theta h(t+1)+Ht^r, \end{equation} where $\theta=1-\frac{\lambda'}{E}\in {]}0,1[$ and $H=\frac D{E}$ depend neither on $t$ nor on $\ell$. Iterating inequality \eqref{encore une recurrence a venir}, we see that for $n=\lfloor\ell-t\rfloor$, we have \begin{equation}\label{apres recurrence} h(t)\le \theta^nh(t+n)+H\sum_{m=0}^{n-1}(t+m)^r\theta^m. \end{equation} Let us now set $t=\ell_0$. 
We have $h(\ell_0+\lfloor\ell-\ell_0\rfloor)\le h(\ell)\le c_2\ell^r$ by Lemma \ref{lem2}. Hence $$\theta^{\lfloor\ell-\ell_0\rfloor}h(\ell_0+\lfloor\ell-\ell_0\rfloor)\le c_2 \theta^{\lfloor\ell-\ell_0\rfloor}\ell^r\le c_2 \theta^{\ell-\ell_0-1}\ell^r.$$ Now, for $\ell_0\ge-\frac{r}{\ln\theta}$, the right-hand side is a decreasing function of $\ell$, hence maximal at $\ell=\ell_0$. Therefore, $$\theta^{\lfloor\ell-\ell_0\rfloor}h(\ell_0+\lfloor\ell-\ell_0\rfloor)\le \frac{c_2}{\theta} \ell_0^r,$$ for $\ell\ge\ell_0\ge-\frac{r}{\ln\theta}$. Moreover, for $\ell_0\ge 1$, \begin{multline*} \sum_{m=0}^{\lfloor\ell-\ell_0\rfloor-1}(\ell_0+m)^r\theta^m=\ell_0^r\sum_{m=0}^{\lfloor\ell-\ell_0\rfloor-1}\Bigl(1+\frac m{\ell_0}\Bigr)^r\theta^m\\ \le \ell_0^r\sum_{m=0}^{\lfloor\ell-\ell_0\rfloor-1}(1+m)^r\theta^m\le \frac{\sum_{m=1}^{+\infty}m^r\theta^m}{\theta}\ell_0^r, \end{multline*} which completes the proof with $\bar \ell=\max\bigl(1,-\frac{r}{\ln\theta}\bigr)$. $\square$\par\medbreak We now turn to the convergence results. As a consequence of Lemma \ref{estimation sur lzero}, we have, without any restriction on $r$ with respect to $p$ and $k$, \begin{theorem}\label{sous suite} There exists a subsequence $\ell\to+\infty$ and a function $u^*\in W^{1,p}_{\rm loc}(\Omega_\infty)$ such that, for all $\ell_0$, \begin{equation}\label{conv faible} u_{\ell|\Omega_{\ell_0}}\rightharpoonup u^*_{|\Omega_{\ell_0}}\text{ weakly in }W^{1,p}(\Omega_{\ell_0}). \end{equation} Moreover, $u^*=0$ on $\partial\Omega_\infty$. \end{theorem} Note that the weak convergence above implies that $u_{\ell}\rightharpoonup u^*$ weakly in $W^{1,p}_{\rm loc}(\Omega_\infty)$. We will sometimes omit the restriction notation in the sequel when unnecessary. \noindent{\bf Proof.}\enspace{} By estimates \eqref{poincare} and \eqref{estmtul zero}, for all $n\in{\mathbb N}^*$, $u_\ell$ is bounded in $W^{1,p}(\Omega_n)$. 
Using the diagonal procedure, we thus construct a sequence $\ell_n$ such that for all $m$, $u_{\ell_n|\Omega_m}\rightharpoonup u^*_m$ weakly in $W^{1,p}(\Omega_m)$, with $u^*_m=0$ on $\omega'_m\times\partial\omega''$. Now, since $\Omega_m\subset\Omega_{m'}$ as soon as $m\le m'$, it follows that $u^*_m=u^*_{m'|\Omega_m}$, so that we have constructed a single limit function $u^*$ in the desired class. Furthermore, for all $\ell_0$, if we choose an integer $m\ge \ell_0$, we see that convergence~\eqref{conv faible} holds true. $\square$\par\medbreak In the sequel, we will always consider a weakly convergent subsequence $u_\ell$ in the sense of Theorem \ref{sous suite}. \section{Identification of the limit when $\ell\to+\infty$} In this section, we do not make any further use of assumption \eqref{uniformite stricte} of uniform strict convexity of $F''$, other than the fact that we used it to establish Theorem \ref{estimation de base}.\footnote{Keep in mind that this hypothesis is void for $k=0$ anyway.} The results will only hold for values of $r$ small enough depending on $p$. We let $\Omega_\infty={\mathbb R}^r\times\omega''$. Let us first show that the asymptotic behavior of $u_\ell$ is independent of the elongated dimension if $r$ is small enough. \begin{theorem}\label{limite allongee} Assume that $r<p$ if $k=0$, or that $r<kp/(p-k)$ if $0<k<p$. Then we have $\nabla'u^*=0$ and $u^*$ may be identified with a function in the $x''$ variable only, still denoted $u^*$, which belongs to $W^{1,p}_0(\omega'')$. \end{theorem} \noindent{\bf Proof.}\enspace{} By estimates \eqref{estmtul} and \eqref{Caccio usc} and the triangle inequality, it follows that \begin{equation}\label{gradient' tend vers zero} \|\nabla'u_\ell\|^p_{L^p(\Omega_{\ell_0};{\mathbb R}^r)}\le C\biggl(\frac{\delta}{(\ell-\ell_0)^p}+\frac{k}{(\ell-\ell_0)^{kp/(p-k)}}\biggr)\ell^{r}\to 0 \end{equation} when $\ell\to+\infty$ with $\ell_0$ fixed. 
Indeed, when $0<k\le p/2$, we actually have $\frac{kp}{p-k}\le p$ and since $\ell\to+\infty$, the first term in the right hand side of estimate \eqref{gradient' tend vers zero} is bounded from above by the second term. Now $\nabla'u_\ell\rightharpoonup\nabla'u^*$ weakly in $L^{p}_{\rm loc}(\Omega_\infty)$, hence we see that $\nabla'u^*=0$, which concludes the proof of the Theorem. $\square$\par\medbreak In order to get a feeling of what Theorem \ref{limite allongee} says, let us look at a few examples. For the Laplacian, we have $p=2$ and we can take $k=0$, which restricts this result to $r=1$ (see Section 5 for a more general result with additional hypotheses, that applies in this case). For the $p$-Laplacian, $p>2$, we can take $k=2$ and the result is restricted to $r<2p/(p-2)$. {This restriction for the $p$-Laplacian can already be found in \cite{Xie}}. Note that $r=1$ and $r=2$ are allowed for any value of $p$. This is not optimal in this particular case, since it is known that $\ell\to+\infty$ convergence holds without restriction on the dimension with respect to $p$, see \cite{Chipot-Xie}. Let us now identify the limit function. We first need another estimate. \begin{lemma}\label{estimation sur une tranche} There exists a constant $c_5$ such that for all $t\le s$, \begin{equation}\label{estime tranche} \limsup_{\ell\to+\infty}\int_{\Omega_s\setminus\Omega_t}|\nabla u_{\ell}|^p\,dx\le c_5 (s^r-t^r). \end{equation} \end{lemma} \noindent{\bf Proof.}\enspace{} We may assume that $t>0$, since the case $t=0$ is already covered by Lemma \ref{estimation sur lzero}. We use here De Giorgi's classical slicing trick. Let $n$ be an integer large enough so that $0\le t-\frac1n<s+\frac1n\le\ell$. 
For each integer $m$, $1\le m\le n$, we consider the cut-off function $$\chi_{m,n}(x')=\rho_{s+\frac{m}{n^2},s+\frac{m-1}{n^2}}(x')\Bigl(1-\rho_{t-\frac{m-1}{n^2},t-\frac{m}{n^2}}(x')\Bigr).$$ This cut-off function takes its values in $[0,1]$, it is $0$ whenever $g(x')\ge s+\frac{m}{n^2}$ or $g(x')\le t-\frac{m}{n^2}$, it is $1$ for $t-\frac{m-1}{n^2}\le g(x')\le s+\frac{m-1}{n^2}$, and $|\nabla\chi_{m,n}|\le Kn^2$. Let us call $S_{m,n}$ the slice where $0<\chi_{m,n}(x')<1$. We observe that $$\bigcup_{m=1}^n \overline{S_{m,n}}=\overline{\Omega_{s+\frac1n}\setminus\Omega_s}\bigcup \overline{\Omega_{t}\setminus\Omega_{t-\frac1n}} \subset \Omega_{s+1},$$ and that $S_{m,n}\cap S_{m',n}=\emptyset$ when $m\neq m'$. Let us consider the test-function $v_{\ell,m,n}=(1-\chi_{m,n})u_\ell+\chi_{m,n}u^*$. The minimization problem yields the estimate \begin{align*} \int_{\Omega_\ell}F(\nabla u_\ell)\,dx&\le\int_{\Omega_\ell}F(\nabla v_{\ell,m,n})\,dx-\int_{\Omega_\ell}f''\chi_{m,n}(u^*-u_\ell)\,dx\\ &=\int_{\Omega_\ell}F(\nabla v_{\ell,m,n})\,dx-\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx. \end{align*} Taking into account the specific form of the cut-off function, this implies that \begin{align} \nonumber\int_{\Omega_s\setminus\Omega_t}F(\nabla u_\ell)\,dx&\le \int_{\Omega_{s+\frac{m}{n^2}}\setminus\Omega_{t-\frac{m}{n^2}}}F(\nabla u_\ell)\,dx\\ \nonumber&\le\int_{\Omega_{s+\frac{m}{n^2}}\setminus\Omega_{t-\frac{m}{n^2}}}F(\nabla v_{\ell,m,n})\,dx-\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx\\ \nonumber&\le\int_{S_{m,n}}F(\nabla v_{\ell,m,n})\,dx+\int_{\Omega_{s+\frac{m-1}{n^2}}\setminus\Omega_{t-\frac{m-1}{n^2}}}F(\nabla u^*)\,dx\\ &\qquad\qquad-\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx.\label{slicing1} \end{align} Let us estimate each term in the right-hand side separately. 
First of all, we have \begin{equation}\label{slicing2} \Bigl|\int_{\Omega_{s+1}}f''\chi_{m,n}(u^*-u_\ell)\,dx\Bigr|\le A^{1/p'}(s+1)^{r/p'}\|f''\|_{L^{p'}(\omega'')}\|u^*-u_\ell\|_{L^p(\Omega_{s+1})}, \end{equation} with $A=\mathcal{L}^r(\omega')$. Secondly, we see that \begin{equation}\label{slicing3} \biggl|\int_{\Omega_{s+\frac{m-1}{n^2}}\setminus\Omega_{t-\frac{m-1}{n^2}}}F(\nabla u^*)\,dx\biggr|\le A\Bigl(\Bigl(s+\frac1n\Bigr)^r- \Bigl(t-\frac1n\Bigr)^r\Bigr)\|F''(\nabla'' u^*)\|_{L^1(\omega'')}. \end{equation} We now come to the slicing argument stricto sensu. By the growth estimate \eqref{growth1}, we have \begin{multline}\label{slicing4} \int_{S_{m,n}}F(\nabla v_{\ell,m,n})\,dx\le2^{p-1}\Lambda\Bigl(\int_{S_{m,n}}\bigl(|\nabla u_\ell|^p+|\nabla u^*|^p+1\bigr)\,dx\\+K^pn^{2p} \int_{S_{m,n}}|u^*- u_\ell|^p\,dx\Bigr). \end{multline} The only term that causes a difficulty is the last term coming from $\nabla\chi_{m,n}$. We now plug estimates \eqref{slicing2}, \eqref{slicing3} and \eqref{slicing4} into the right-hand side of estimate \eqref{slicing1}, sum for $m=1$ to $n$ and divide the result by $n$. Observing that the sum of integrals over the slices $S_{m,n}$ gives rise to integrals over the union of all slices, which is included in $\Omega_{s+1}$, this yields \begin{align} \int_{\Omega_s\setminus\Omega_t}F(\nabla u_\ell)\,dx&\le A^{1/p'}(s+1)^{r/p'}\|f''\|_{L^{p'}(\omega'')}\|u^*-u_\ell\|_{L^p(\Omega_{s+1})}\\ &\qquad +A\Bigl(\Bigl(s+\frac1n\Bigr)^r- \Bigl(t-\frac1n\Bigr)^r\Bigr)\|F''(\nabla'' u^*)\|_{L^1(\omega'')}\\ &+\frac{2^p\Lambda c_4}n(s+1)^r+2^{p-1}\Lambda K^p n^{2p-1}\|u^*-u_\ell\|^p_{L^p(\Omega_{s+1})}. \end{align} We first let $\ell\to+\infty$. 
Due to the Rellich-Kondra\v sov theorem, $\|u^*-u_\ell\|_{L^p(\Omega_{s+1})}\to 0$ and it follows from the coerciveness estimate that \begin{multline*} \limsup_{\ell\to+\infty}\int_{\Omega_s\setminus\Omega_t}|\nabla u_{\ell}|^p\,dx\le \frac A\lambda\Bigl(\Bigl(s+\frac1n\Bigr)^r- \Bigl(t-\frac1n\Bigr)^r\Bigr)\|F''(\nabla'' u^*)\|_{L^1(\omega'')}\\+\frac{2^p\Lambda c_4}{n\lambda}(s+1)^r. \end{multline*} We finally let $n\to+\infty$ to obtain the result with $c_5=\frac A\lambda\|F''(\nabla'' u^*)\|_{L^1(\omega'')}$. $\square$\par\medbreak We are now in a position to prove the main result of this section. \begin{theorem}\label{resultat principal faible} The function $u^*$ is a minimizer of problem ${\cal P}_\infty$. \end{theorem} \noindent{\bf Proof.}\enspace{} Let $z\in W^{1,p}_0(\omega'')$ be arbitrary. We use the test function $v_\ell=(1-\rho_{t})u_\ell+\rho_{t}z$, with $\rho_t=\rho_{t+1,t}$, so that $v_\ell=u_\ell$ on $\Omega_\ell\setminus\Omega_{t+1}$ and $v_\ell=z$ on $\Omega_t$. We thus have \begin{equation}\label{on va finir par y arriver} \int_{\Omega_{t+1}}[F(\nabla u_\ell)-f''u_\ell]\,dx\le \int_{\Omega_{t+1}\setminus\Omega_t}[F(\nabla v_\ell)-f''v_\ell]\,dx +\int_{\Omega_{t}}[F(\nabla z)-f''z]\,dx. \end{equation} It follows from Lemma \ref{estimation sur une tranche} that $$\limsup_{\ell\to+\infty}\Bigl|\int_{\Omega_{t+1}\setminus\Omega_t}[F(\nabla v_\ell)-f''v_\ell]\,dx\Bigr|\le C(t+1)^{r-1}$$ for some constant $C$ independent of $\ell$ and $t$. The left-hand side of estimate \eqref{on va finir par y arriver} is weakly lower-semicontinuous, hence, letting $\ell\to+\infty$, we obtain \begin{align*} (t+1)^r\mathcal{L}^r(\omega') \int_{\omega''}[F(\nabla u^*)-f''u^*]\,dx''&\le C(t+1)^{r-1}\\ &\qquad+t^r\mathcal{L}^r(\omega') \int_{\omega''}[F(\nabla z)-f''z]\,dx'' \end{align*} and the result follows from letting $t\to+\infty$, since $F(\nabla u^*)=F''(\nabla'' u^*)$ and $F(\nabla z)=F''(\nabla'' z)$. 
$\square$\par\medbreak We now apply a classical trick to obtain strong convergence when $F''$ is strictly convex. Of course, when $k>0$, this is already the case by assumption \eqref{uniformite stricte}. Strict convexity is only a new assumption if $k=0$. In this case, the solution $u_\infty$ of the limit problem is unique and this uniqueness implies the weak convergence of the whole family $u_\ell$. \begin{theorem}\label{resultat principal fort} Assume that $F''$ is strictly convex. Then $u^*=u_\infty$ and $u_{\ell}\to u_\infty$ strongly in $W^{1,p}(\Omega_{\ell_0})$ for all $\ell_0$. \end{theorem} We recall the following two lemmas that can be found \emph{e.g.} in \cite{Ball-Marsden}. \begin{lemma}\label{BM1} Let $F\colon {\mathbb R}^M\to {\mathbb R}$ be strictly convex. Let $\mu\in{]}0,1[$ and $a_j,a\in {\mathbb R}^M$ such that $$ \mu F(a_j)+(1-\mu)F(a)-F(\mu a_j+(1-\mu) a)\to 0\text{ as }j\to+\infty. $$ Then $a_j\to a$. \end{lemma} The second lemma is a slight variation on Fatou's lemma. \begin{lemma}\label{BM2} Let $F_j,F,H_j,H\in L^1(\Omega)$ with $F_j\ge H_j\ge 0$ for all $j$, $F_j\to F$ and $H_j\to H$ a.e., and $\int_\Omega F_j\,dx\to\int_\Omega F\,dx$. Then $$ \int_\Omega H_j\,dx\to\int_\Omega H\,dx. $$ \end{lemma} \noindent{\bf Proof of Theorem \ref{resultat principal fort}.} We already know that $\nabla' u_\ell\to 0=\nabla' u^*$ strongly in $L^p(\Omega_t)$ by estimate \eqref{gradient' tend vers zero}. We thus just have to prove the strong convergence of $\nabla'' u_\ell$. We use a similar slicing as before, with the test-functions $\rho_{t+\frac m{n^2},t+\frac {m-1}{n^2}}u_\ell$ for $n$ large enough, $1\le m\le n$. 
Skipping the details, this slicing implies that $$\limsup_{\ell\to+\infty}\int_{\Omega_t}F(\nabla u_\ell)\,dx\le \int_{\Omega_t}F(\nabla u^*)\,dx.$$ On the other hand, for almost all $x'$, the function $u_{x',\ell}\colon x''\mapsto u_\ell(x',x'')$ is an admissible test-function for the limit problem, so that $$\int_{\omega''}[F''(\nabla''u^*)-f''u^*]\,dx''\le \int_{\omega''}[F''(\nabla''u_{x',\ell})-f''u_{x',\ell}]\,dx''.$$ We integrate this inequality with respect to $x'\in t\omega'$ and obtain $$\int_{\Omega_t}[F''(\nabla''u^*)-f''u^*]\,dx\le \int_{\Omega_t}[F''(\nabla''u_\ell)-f''u_\ell]\,dx.$$ We now let $\ell\to+\infty$, which yields $$\int_{\Omega_t}F''(\nabla''u^*)\,dx\le \liminf_{\ell\to+\infty}\int_{\Omega_t}F''(\nabla''u_\ell)\,dx.$$ By hypothesis \eqref{growth3}, $G\ge 0$, which implies that $F''(\xi'')\le F(\xi',\xi'')$ for any $\xi'$. It follows that \begin{equation}\label{func-conv} \int_{\Omega_t}F''(\nabla''u_\ell)\,dx\to \int_{\Omega_t}F''(\nabla''u^*)\,dx \end{equation} when $\ell\to+\infty$, since $F''(\nabla''u^*)=F(\nabla u^*)$. Let us pick $\mu\in{]}0,1[$ and set $$g_\ell=\mu F''(\nabla'' u_\ell)+(1-\mu)F''(\nabla'' u^*)-F''(\mu \nabla'' u_\ell+(1-\mu)\nabla'' u^*). $$ By weak lower semicontinuity, it is clear that $$\liminf_{\ell\to+\infty}\int_{\Omega_t}F''(\mu \nabla'' u_\ell+(1-\mu)\nabla'' u^*)\,dx \ge\int_{\Omega_t}F''(\nabla'' u^*)\,dx.$$ Therefore $$0\le \limsup_{\ell\to+\infty}\int_{\Omega_t}g_\ell\,dx\le \int_{\Omega_t}F''(\nabla'' u^*)\,dx-\int_{\Omega_t}F''(\nabla'' u^*)\,dx=0,$$ so that $g_\ell\to 0$ a.e.\ (up to a subsequence). We then apply Lemma \ref{BM1} to deduce that $\nabla''u_\ell\to\nabla''u^*$ a.e.\ up to that same subsequence. We now let $$H_\ell=|\nabla''u_\ell-\nabla''u^*|^p\le 2^{p-1}(F''(\nabla'' u_\ell)+|\nabla''u^*|^p)=F_\ell,$$ and invoke Lemma \ref{BM2} and \eqref{func-conv} to obtain the result for $\ell_0=t$. To conclude for all $\ell_0$, we use the diagonal process. 
$\square$\par\medbreak \section{Convergence rates} In the previous section, we obtained convergence results without taking advantage of the term involving $k$ in the left-hand side of estimate \eqref{Caccio usc}. This makes them valid in particular for $k=0$ without strict or uniform strict convexity. It should however be clear that for $k>0$, the term in question can be used to obtain a much shorter convergence proof with a convergence rate, which we do not detail here. More precisely, \begin{theorem}\label{limite allongee usc} Under the previous hypotheses with $0<k<p$ and $r<kp/(p-k)$, we have $$ \|u_{\ell}-u_{\infty}\|^p_{W^{1,p}(\Omega_{\ell_0})}\leq C\ell^{r-\frac{kp}{p-k}}.$$ \end{theorem} The proof is a direct consequence of Corollary \ref{Caccio global} and Lemma \ref{lem2}. In any case, the estimates do not seem to allow a convergence proof without any restriction on $r$ with respect to $p$ in all generality, whereas it is known in some cases, for instance in the case of the Laplacian, that convergence holds true for all values of $r$. In order to partially overcome these shortcomings, we assume now that $k=0$ and that $F''$ is uniformly strictly convex in the sense that \begin{equation}\label{uniformite stricte k=0} F''(\theta\xi''+\mu\zeta'')\le \theta F''(\xi'')+\mu F''(\zeta'')-\beta\theta\mu(\theta^{p-1}+\mu^{p-1})|\xi''-\zeta''|^p, \end{equation} for some $\beta>0$ and all $\theta,\mu\ge0$ with $\theta+\mu=1$. Note that this is equivalent to allowing $k=p$ in hypotheses \eqref{growth3} and \eqref{uniformite stricte}. In some sense, $\frac{kp}{p-k}$ is then infinite and it is to be expected that there should be no restriction on the allowed dimensions $r$, together with faster than polynomial convergence. This is what we now proceed to show. 
Under assumption \eqref{uniformite stricte k=0}, it is fairly clear that we still have an estimate similar to that of Theorem \ref{estimation de base}, namely, \begin{equation}\label{Caccio ter} \|\nabla'u_\ell\|^p_{L^p(\Omega_{t})}+\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_t)}\le \frac{C}{(s-t)^p}\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_s\setminus\Omega_t)}. \end{equation} Let us thus prove that not only does convergence hold without restrictions on the elongated dimension $r$, but that it also occurs at an exponential rate. The extra control actually makes things much easier. \begin{theorem}\label{th4} Assume hypotheses \eqref{growth1}-\eqref{growth3} with $k=0$, together with \eqref{uniformite stricte k=0}. Then, for all $r< n$ and all $\ell_0$, there exist constants $C$ and $\alpha>0$ independent of $\ell$ such that $$\|\nabla(u_{\ell}- u_{\infty})\|_{L^p(\Omega_{\ell_0})}\leq C e^{-\alpha\ell}.$$ \end{theorem} \noindent{\bf Proof.}\enspace{} We take $s=t+1$ in estimate \eqref{Caccio ter}, which yields \begin{align*} \|\nabla'u_\ell\|^p_{L^p(\Omega_{t})}+\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_t)}&\le C\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_{t+1}\setminus\Omega_{t})}\\ &\leq C\|\nabla'u_\ell\|^p_{L^p(\Omega_{t+1}\setminus\Omega_{t})}\\ &\qquad\qquad{}+ C\|\nabla''(u_\ell-u_\infty)\|^p_{L^p(\Omega_{t+1}\setminus\Omega_{t})}. \end{align*} Setting $$g(t)=\|\nabla'u_\ell\|_{L^p(\Omega_t)}^p+\|\nabla''(u_\ell-u_\infty)\|_{L^p(\Omega_t)}^p,$$ we have just shown that $$ g(t)\le C(g(t+1)-g(t)), $$ or in other words \begin{equation}\label{la, ca change encore-new5} g(t)\le \theta g(t+1) \end{equation} with $\theta=\frac{C}{1+C}\in{}]0,1[$. We iterate inequality \eqref{la, ca change encore-new5} using the sequence $t_n=n+\ell_0$, $n=0,\ldots,\lfloor \ell-\ell_0\rfloor$. 
Obviously $$g(\ell_0)=g(t_0)\leq \theta^ng(t_n)$$ for all such $n$, and in particular for the last one, $$g(\ell_0)\leq\theta^{\lfloor \ell-\ell_0\rfloor}g(t_{\lfloor \ell-\ell_0\rfloor})\le \theta^{\ell-\ell_0-1}g(\ell)\leq C\theta^{-\ell_0-1}e^{\ell\ln \theta}\ell^r,$$ with $\ln\theta<0$. Now, for all $r$, we can pick $\alpha>0$ such that $\ln\theta<-p\alpha<0$ and $e^{\ell\ln \theta}\ell^r\le e^{-p\alpha \ell}$ for $\ell$ large enough, which completes the proof since $\nabla' u_\infty=0$. $\square$\par\medbreak Theorem \ref{th4} applies to energies of the form $F(\xi)=F'(\xi')+F''(\xi'')$, for instance. We recover in particular the known result for the case of the 2-Laplacian. {See also the monograph \cite{[C2new]} for exponential estimates in this context.} \section{Extension to the vectorial case} We have written everything so far in the context of a scalar problem, \emph{i.e.}, the functions $u_\ell$ are scalar-valued. All previous developments only made use of the minimization problem, under various convexity assumptions. Now clearly, absolutely nothing is changed if we consider instead vector-valued problems in the calculus of variations, with functions $u_\ell$ taking their values in some ${\mathbb R}^N$, if the energies are supposed to satisfy the same growth, coercivity and convexity assumptions as before, and the same convergence results hold true. Unfortunately, in the vectorial case of the calculus of variations, the relevant condition that guarantees lower-semicontinuity of the energy functional is not convexity, but much weaker conditions such as quasiconvexity, or in the case of energies that can take the value $+\infty$, as is the case in nonlinear elasticity, polyconvexity, see \cite{[D]}. Indeed, convexity is not suitable in nonlinear elasticity for well-known modeling reasons. This explains why we have striven to use as little convexity as possible (in some sense) at any given point in the sequence of arguments. 
This comment should however be mitigated by the fact that some of our uses of convexity will also work with rank-1-convexity, which is a reasonable assumption in the vectorial case. There are also notions of strict uniform quasiconvexity that may apply, see \cite{Evans}. The fact that the Euler-Lagrange equation is not available in nonlinear elasticity is also an incentive to try and only use the minimization problem. Now, it is at this point unclear to us how to attack the elongation problem in such nonconvex vectorial cases, since we still rely heavily on (strict uniform) convexity at crucial points of the proofs. Moreover, the Dirichlet boundary condition considered here is not necessarily the most interesting one in the context of nonlinear elasticity, in particular if we have the Saint Venant principle in mind. Even the potential limit problem is not so clear. In another dimension reduction context, when considering a body whose thickness goes to zero, and with different boundary conditions, it can be seen that quasiconvexity is not preserved through an ``algebraic'' formula of the kind found here, and that a relaxation step is necessary, see for instance \cite{[L-R]}. Physically, this is due to the possibility of crumpling such a thin body. A similar phenomenon may quite possibly happen here, but maybe not in the same fashion. To the best of our knowledge, the nonconvex vectorial case remains open. \end{document}
\begin{document} \begin{abstract} A graph~$G$ is said to be \emph{ubiquitous} if every graph~$\Gamma$ that contains arbitrarily many disjoint $G$-minors automatically contains infinitely many disjoint $G$-minors. The well-known \emph{Ubiquity conjecture} of Andreae says that every locally finite graph is ubiquitous. In this paper we show that locally finite graphs admitting a certain type of tree-decomposition, which we call an \emph{extensive tree-decomposition}, are ubiquitous. In particular this includes all locally finite graphs of finite tree-width, and also all locally finite graphs with finitely many ends, all of which have finite degree. It remains an open question whether every locally finite graph admits an extensive tree-decomposition. \end{abstract} \maketitle \section{Introduction} Given a graph~$G$ and some relation~$\vartriangleleft$ between graphs, we say that~$G$ is \emph{$\vartriangleleft$-ubiquitous} if whenever~$\Gamma$ is a graph such that ${nG \vartriangleleft \Gamma}$ for all~${n \in \mathbb{N}}$, then ${\aleph_0 G \vartriangleleft \Gamma}$, where ${\alpha G}$ is the disjoint union of~$\alpha$ many copies of~$G$. A classic result of Halin~\cite[Satz~1]{H65} says that the ray, i.e.~a one-way infinite path, is $\subseteq$-ubiquitous, where~$\subseteq$ is the subgraph relation. That is, any graph which contains arbitrarily large collections of vertex-disjoint rays must contain an infinite collection of vertex-disjoint rays. Later, Halin showed that the double ray, i.e.~a two-way infinite path, is also $\subseteq$-ubiquitous~\cite{H70}. However, not all graphs are $\subseteq$-ubiquitous, and in fact even trees can fail to be $\subseteq$-ubiquitous (see for example~\cite{W76}). The question of ubiquity for classes of graphs has also been considered for other graph relations. 
In particular, whilst there are still reasonably simple examples of graphs which are not $\leqslant$-ubiquitous (see~\cite{L76,A77}), where~$\leqslant$ is the topological minor relation, it was shown by Andreae that all rayless countable graphs~\cite{A80} and all locally finite trees~\cite{A79} are $\leqslant$-ubiquitous. The latter result was recently extended to the class of all trees by the present authors~\cite{BEEGHPTI}. In~\cite{A02} Andreae initiated the study of ubiquity of graphs with respect to the minor relation~$\preccurlyeq$. He constructed a graph which is not $\preccurlyeq$-ubiquitous; however, the construction relies on the existence of a counterexample to the well-quasi-ordering of infinite graphs under the minor relation, for which only examples of uncountable size are known~\cite{komjath1995note,Pitz2020,T88}. In particular, the question of whether there exists a countable graph which is not $\preccurlyeq$-ubiquitous remains open. Andreae conjectured that at least all \emph{locally finite} graphs, those with all degrees finite, should be $\preccurlyeq$-ubiquitous. \begin{ubqconjecture} Every locally finite connected graph is $\preccurlyeq$-ubiquitous. \end{ubqconjecture} In~\cite{A13} Andreae established the following pair of results, demonstrating that his conjecture holds for wide classes of locally finite graphs. Recall that a \emph{block} of a graph is a maximal $2$-connected subgraph, and that a graph has \emph{finite tree-width} if there is an integer~$k$ such that the graph has a tree-decomposition of width~$k$. \begin{theorem}[Andreae, {\cite[Corollary 1]{A13}}] \label{t:And1} Let~$G$ be a locally finite, connected graph with finitely many ends such that every block of~$G$ is finite. Then~$G$ is $\preccurlyeq$-ubiquitous. \end{theorem} \begin{theorem}[Andreae, {\cite[Corollary 2]{A13}}] \label{t:And2} Let~$G$ be a locally finite, connected graph of finite tree-width such that every block of~$G$ is finite. 
Then~$G$ is $\preccurlyeq$-ubiquitous. \end{theorem} Note, in particular, that if~$G$ is such a graph, then the degree of every end in~$G$ must be one.\footnote{Precise definitions of the ends of a graph and their degree can be found in Section~\ref{s:prelim}.} The main result of this paper is a far-reaching extension of Andreae's results, removing the assumption of finite blocks. \begin{theorem} \label{c:locfin} Let~$G$ be a locally finite, connected graph with finitely many ends such that every end of~$G$ has finite degree. Then~$G$ is $\preccurlyeq$-ubiquitous. \end{theorem} \begin{theorem} \label{c:finend} Every locally finite, connected graph of finite tree-width is $\preccurlyeq$-ubiquitous. \end{theorem} The reader may have noticed that these results are of a similar flavour: they all assert that locally finite graphs which are built by pasting finite graphs in a tree-like fashion are ubiquitous -- with differing requirements on the size of the finite graphs, how far they are allowed to overlap, and the structure of the underlying decomposition trees. And indeed, behind all the above results there are unifying but more technical theorems, the strongest of which is the true main result of this paper: \begin{theorem}[Extensive tree-decompositions and ubiquity] \label{t:mainintro} Every locally finite connected graph admitting an extensive tree-decomposition is $\preccurlyeq$-ubiquitous. \end{theorem} The precise definition of an extensive tree-decomposition is somewhat involved and will be given in detail in Section~\ref{s:extensive} up to Theorem~\ref{t:nice}. Roughly, however, it implies that we can find many self-minors of the graph at spots whose precise positions are governed by the decomposition tree. We hope that the proof sketch in Section~\ref{s:sketch} is a good source for additional intuition before the reader delves into the technical details. To summarise, we are facing two main tasks in this paper.
One is to prove our main ubiquity result, Theorem~\ref{t:mainintro}. This will occupy the second part of this paper, Sections~\ref{s:nonpebbly} to~\ref{sec:countable-subtrees}. And as our other task, we also need to prove that the graphs in Theorems~\ref{c:locfin} and~\ref{c:finend} do indeed possess such extensive tree-decompositions. This analysis occupies Sections~\ref{s:extensive} and~\ref{s:getnice}. The proof uses in an essential way certain results about the well-quasi-ordering of graphs under the minor relation, including Thomas's result~\cite{T89} that for all $k \in \mathbb{N}$, the class of graphs of tree-width at most~$k$ is well-quasi-ordered under the minor relation. In fact, the class of locally finite graphs having an extensive tree-decomposition is certainly larger than the classes covered by Theorems~\ref{c:locfin} and~\ref{c:finend}; for example, it is easy to see that the infinite grid $\mathbb{N} \times \mathbb{N}$ has such an extensive tree-decomposition. It remains an open question whether \emph{every} locally finite graph has an extensive tree-decomposition. A more precise discussion of how this problem relates to the theory of well-quasi- and better-quasi-orderings of finite graphs will be given in Section~\ref{s:WQO}. But first, in Section~\ref{s:sketch} we will give a sketch of the key ideas in the proof, at the end of which we will provide a more detailed overview of the structure and the different sections of this paper.
\section{Proof sketch} \label{s:sketch} To give a flavour of the main ideas in this paper, let us begin by considering the case of a locally finite connected graph~$G$ with a single end~$\omega$, where~$\omega$ has finite degree~${d \in \mathbb{N}}$ (this means that there is a family ${(A_i \colon 1 \leqslant i \leqslant d)}$ of~$d$ disjoint rays in~$\omega$, but no family of more than~$d$ such rays). Our construction will exploit the fact that graphs of this kind have a very particular structure. More precisely, there is a tree-decomposition ${(S, (V_s)_{s \in V(S)})}$ of~$G$, where ${S = s_0s_1s_2\ldots}$ is a ray and such that, if we denote~$V_{s_n}$ by~$V_n$ and~${G[\bigcup_{l \geqslant n} V_l]}$ by~$G_n$ for each~$n$, the following holds: \begin{enumerate} \item \label{item:sketch-1} each~$V_n$ is finite; \item \label{item:sketch-2} $|V_i \cap V_{j}|=d$ if $|i-j|=1$, and $|V_i \cap V_{j}|=0$ if ${|i-j| \geqslant 2}$; \item \label{item:sketch-3} all the~$A_i$ begin in~$V_0$; \item \label{item:sketch-4} for each~${m \geqslant 1}$ there are infinitely many~${n > m}$ such that~$G_m$ is a minor of~$G_n$, in such a way that for any edge~$e$ of~$G_m$ and any~${i \leqslant d}$, the edge~$e$ is contained in~$A_i$ if and only if the edge representing it in this minor is. \end{enumerate} Property~\ref{item:sketch-4} seems rather strong -- it is a first glimpse of the strength of extensive tree-decompositions alluded to in Theorem~\ref{t:mainintro}. The reason it can always be achieved has to do with the well-quasi-ordering of finite graphs. For details of how this works, see Section~\ref{s:getnice}. The sceptical reader who does not yet see how to achieve this may consider the argument in this section as showing ubiquity simply for graphs~$G$ with a decomposition of the above kind.
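As a concrete (and entirely illustrative) sanity check, not taken from the paper itself: the one-ended ladder graph has end degree ${d = 2}$ and admits a ray-decomposition of exactly the above kind, with~$V_n$ consisting of the vertices of the $n$-th and ${(n+1)}$-st rung. The following Python sketch verifies properties~\ref{item:sketch-1}--\ref{item:sketch-3} on a finite truncation; the vertex names and the truncation length are of course assumptions made for the illustration.

```python
# Illustration (not from the paper): the one-ended ladder graph has a
# ray-decomposition satisfying properties (1)-(3), with end degree d = 2.
# Part V_n consists of the vertices of the n-th and (n+1)-st rung.

N = 10  # truncation length; the actual decomposition is infinite
parts = [{('a', n), ('b', n), ('a', n + 1), ('b', n + 1)} for n in range(N)]

# Property (1): every part is finite (trivially true for explicit sets).
assert all(isinstance(len(V), int) for V in parts)

# Property (2): consecutive parts share exactly d = 2 vertices,
# parts at distance >= 2 are disjoint (a part meets itself in 4 vertices).
d = 2
for i in range(N):
    for j in range(N):
        expected = d if abs(i - j) == 1 else (4 if i == j else 0)
        assert len(parts[i] & parts[j]) == expected

# Property (3): the two disjoint rays A_1 = a_0 a_1 ... and A_2 = b_0 b_1 ...
# witnessing the end degree both begin in V_0.
A = [[('a', n) for n in range(N)], [('b', n) for n in range(N)]]
assert all(R[0] in parts[0] for R in A)
print("ladder ray-decomposition satisfies (1)-(3) on the truncation")
```

Property~\ref{item:sketch-4} is not checked here: it is a statement about infinitely many self-minors and is precisely the part that requires the well-quasi-ordering machinery of Section~\ref{s:getnice}.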
Now we suppose that we are given some graph~$\Gamma$ such that~${nG \preccurlyeq \Gamma}$ for each~$n$, and we wish to show that ${\aleph_0G \preccurlyeq \Gamma}$. Consider a $G$-minor~$H$ in~$\Gamma$. Any ray~$R$ of~$G$ can be expanded to a ray~${H(R)}$ in the copy~$H$ of~$G$ in~$\Gamma$, and since~$G$ only has one end, all rays~${H(R)}$ go to the same end~$\epsilon_H$ of~$\Gamma$; we shall say that~$H$ \emph{goes to} the end~$\epsilon_H$. Techniques from an earlier paper~\cite{BEEGHPTI} show that we may assume that there is some end $\epsilon$ of $\Gamma$ such that all $G$-minors in $\Gamma$ go to $\epsilon$, otherwise it can be shown that $\aleph_0 G \preccurlyeq \Gamma$. From any $G$-minor~$H$ we obtain rays~${H(A_i)}$ corresponding to our marked rays~$A_i$ in~$G$, which by the above all go to $\epsilon$. We will call this family of rays the \emph{bundle} of rays given by~$H$. Our aim now is to build up an ${\aleph_0G}$-minor of~$\Gamma$ recursively. At stage~$n$ we hope to construct~$n$ disjoint~${G[\bigcup_{m \leqslant n}V_m]}$-minors~${H^n_1, H^n_2, \ldots, H^n_n}$, such that for each such~$H^n_m$ there is a family ${(R^n_{m,i} \colon i \leqslant d)}$ of disjoint rays in~$\epsilon$, where the path in~$H^n_m$ corresponding to the initial segment of the ray~$A_i$ in~${G[\bigcup_{m \leqslant n}V_m]}$ is an initial segment of~$R^n_{m,i}$, but these rays are otherwise disjoint from the various~$H^n_l$ and from each other, see Figure~\ref{fig:bundles}. We aim to do this in such a way that each~$H^n_m$ extends all previous~$H^l_m$ for~${l \leqslant n}$, so that at the end of our construction we can obtain infinitely many disjoint $G$-minors as~${(\bigcup_{n \geqslant m}H^n_m \colon m \in \mathbb{N})}$. The rays chosen at later stages need not bear any relation to those chosen at earlier stages; we just need them to exist so that there is some hope of continuing the construction.
We will again refer to the families~${(R^n_{m,i} \colon i \leqslant d)}$ of rays starting at the various~$H^n_m$ as the \emph{bundles} of rays from those~$H^n_m$. \begin{figure} \caption{Stage~$n$ of the construction with disjoint ${G[\bigcup_{m \leqslant n} V_m]}$-minors~$H^n_m$ and their bundles of rays.} \label{fig:bundles} \end{figure} The rough idea for getting from the~$n$\textsuperscript{th} to the~${n+1}$\textsuperscript{st} stage of this construction is now as follows: we choose a very large family~$\mathcal{H}$ of disjoint $G$-minors in~$\Gamma$. We discard all those which meet any previous~$H^n_m$ and we consider the family of rays corresponding to the~$A_i$ in the remaining minors. Then it is possible to find a collection of paths transitioning from the~$R^n_{m,i}$ from stage~$n$ onto these new rays. Precisely what we need is captured in the following definition, which also introduces some helpful terminology for dealing with such transitions: \begin{definition}[Linkage of families of rays] \label{d:linkage} Let ${\mathcal{R} = (R_i \colon i\in I)}$ and ${\mathcal{S} = (S_j \colon j \in J)}$ be families of disjoint rays, where the initial vertex of each~$R_i$ is denoted~$x_i$. A family of paths ${\mathcal{P} = (P_i \colon i \in I)}$ is a \emph{linkage} from~$\mathcal{R}$ to~$\mathcal{S}$ if there is an injective function~${\sigma \colon I \rightarrow J}$ such that \begin{itemize} \item each~$P_i$ goes from a vertex~${x'_i \in R_i}$ to a vertex~${y_{\sigma(i)} \in S_{\sigma(i)}}$; \item the family ${\mathcal{T} = (x_iR_ix'_iP_iy_{\sigma(i)}S_{\sigma(i)} \colon i\in I)}$ is a collection of disjoint rays.\footnote{Where we use the notation as in~\cite{D16}, see also Definition~\ref{def_concat}.} We write ${\mathcal{R}\circ_\mathcal{P}\mathcal{S}}$ for the family~$\mathcal{T}$, as well as~${R_i \circ_{\mathcal{P}} \mathcal{S}}$ for the ray in~$\mathcal{T}$ with initial vertex~$x_i$.
\end{itemize} We say that~${\mathcal T}$ is obtained by \emph{transitioning} from~${\mathcal R}$ to~${\mathcal S}$ along the linkage. We say the linkage~$\mathcal{P}$ \emph{induces} the mapping~$\sigma$. We further say that~$\mathcal{P}$ \emph{links}~$\mathcal{R}$ to~$\mathcal{S}$. Given a set~$X$ we say that the linkage is \emph{after}~$X$ if ${X \cap V(R_i) \subseteq V(x_iR_ix'_i)}$ for all~${i \in I}$ and no other vertex in~$X$ is used by the members of~$\mathcal{T}$. \end{definition} Thus, our aim is to find a linkage from the~${R^n_{m,i}}$ to the new rays after all the~$H^n_m$. That this is possible is guaranteed by the following lemma from~\cite{BEEGHPTI}: \begin{lemma}[Weak linking lemma {\cite[Lemma~4.3]{BEEGHPTI}}] \label{l:weaklink} Let~$\Gamma$ be a graph and~${\omega \in \Omega(\Gamma)}$. Then, for any families~${{\mathcal R} = (R_1, \ldots, R_n)}$ and~${{\mathcal S} = (S_1,\ldots, S_n)}$ of vertex disjoint rays in~$\omega$ and any finite set~$X$ of vertices, there is a linkage from~$\mathcal{R}$ to~$\mathcal{S}$ after~$X$. \end{lemma} The aim is now to use property~\ref{item:sketch-4} of our tree-decomposition of~$G$ to find minor-copies of~${G[V_{n+1}]}$ sufficiently far along the new rays that we can stick them onto our~$H^n_m$ to obtain suitable~${H^{n+1}_m}$. There are two difficulties at this point in this argument. The first is that, as well as extending the existing~$H^n_m$ to~$H^{n+1}_m$, we also need to introduce an~$H^{n+1}_{n+1}$. To achieve this, we ensure that one of the $G$-minors in~$\mathcal{H}$ is disjoint from all the paths in the linkage, so that we may take an initial segment of it as~$H^{n+1}_{n+1}$. This is possible because of a slight strengthening of the linking lemma above; see~\cite[Lemma~4.4]{BEEGHPTI} or Lemma~\ref{l:link} for a precise statement.
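The two conditions of Definition~\ref{d:linkage} are finitary enough to be checked mechanically on finite truncations. The following Python sketch is purely illustrative (the graph, rays and paths are hypothetical toy data, not objects from the paper): rays are vertex lists, and a candidate linkage passes if $\sigma$ is injective, each~$P_i$ runs from~$R_i$ to~$S_{\sigma(i)}$, and the concatenated walks $x_iR_ix'_iP_iy_{\sigma(i)}S_{\sigma(i)}$ are pairwise vertex-disjoint.

```python
# Hypothetical finite-truncation check of Definition (Linkage of families of
# rays): rays and paths are vertex lists, sigma is a dict I -> J.

def is_linkage(R, S, P, sigma):
    if len(set(sigma.values())) != len(sigma):   # sigma must be injective
        return False
    walks = []
    for i, path in P.items():
        Ri, Sj = R[i], S[sigma[i]]
        if path[0] not in Ri or path[-1] not in Sj:
            return False                         # P_i must run from R_i to S_sigma(i)
        # concatenation x_i R_i x'_i  P_i  y S_sigma(i), as a vertex sequence
        walk = (Ri[:Ri.index(path[0]) + 1]       # initial segment of R_i
                + path[1:]                       # transition path
                + Sj[Sj.index(path[-1]) + 1:])   # tail of S_sigma(i)
        walks.append(walk)
    used = [v for w in walks for v in w]
    return len(used) == len(set(used))           # pairwise disjoint rays

# Two truncated rays, two target rays, and one-edge transition paths.
R = {0: ['r0', 'r1', 'r2'], 1: ['q0', 'q1', 'q2']}
S = {0: ['s0', 's1', 's2'], 1: ['t0', 't1', 't2']}
P = {0: ['r1', 's1'], 1: ['q1', 't1']}
print(is_linkage(R, S, P, {0: 0, 1: 1}))  # True
```

The "after~$X$" condition would be one further check: every vertex of~$X$ met by a walk must lie on the initial segment $x_iR_ix'_i$.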
A more serious difficulty is that in order to stick the new copy of~$V_{n+1}$ onto~$H^n_m$ we need the following property: \begin{equation}\tag{$*$} \label{property} \parbox{12cm}{For each of the bundles corresponding to an~$H^n_m$, the rays in the bundle are linked to the rays in the bundle coming from some~${H \in \mathcal{H}}$. This happens in such a way that each~$R^n_{m,i}$ is linked to~$H(A_i)$.} \end{equation} Thus we need a great deal of control over which rays get linked to which. We can keep track of which rays are linked to which as follows: \begin{definition}[Transition function] \label{d:trans-func} Let~${\mathcal{R} = (R_i \colon i\in I)}$ and~${\mathcal{S} = (S_j \colon j \in J)}$ be families of disjoint rays. We say that a function~${\sigma \colon I \rightarrow J}$ is a \emph{transition function} from~${\mathcal R}$ to~${\mathcal S}$ if for any finite set~$X$ of vertices there is a linkage from~${\mathcal R}$ to~${\mathcal S}$ after~$X$ that induces~$\sigma$. \end{definition} So our aim is to find a transition function assigning new rays to the~$R^n_{m,i}$ so as to achieve~\eqref{property}. One reason for expecting this to be possible is that the new rays all go to the same end, and so they are joined up by many paths. We might hope to be able to use these paths to move between the rays, allowing us some control over which rays are linked to which. The structure of possible jumps is captured by a graph whose vertex set is the set of rays: \begin{definition}[Ray graph] \label{def_raygraph} Given a finite family of disjoint rays~${\mathcal{R} = (R_i \colon i \in I)}$ in a graph~$\Gamma$ the \emph{ray graph}, ${\RG_{\Gamma}(\mathcal{R}) = \RG_{\Gamma}(R_i \colon i \in I)}$ is the graph with vertex set~$I$ and with an edge between~$i$ and~$j$ if there is an infinite collection of vertex disjoint paths from~$R_i$ to~$R_j$ which meet no other~$R_k$.
When the host graph~$\Gamma$ is clear from the context we will simply write~${\RG(\mathcal{R})}$ for~${\RG_{\Gamma}(\mathcal{R})}$. \end{definition} Unfortunately, the collection of possible transition functions can be rather limited. Consider, for example, the case of families of disjoint rays in the grid. Any such family has a natural cyclic order, and any transition function must preserve this cyclic order. This paucity of transition functions is reflected in the sparsity of the ray graphs, which are all just cycles. However, in a previous paper~\cite{BEEGHPTII} we analysed the possibilities for how the ray graphs and transition functions associated to a given thick\footnote{An end is \emph{thick} if it contains infinitely many disjoint rays.} end may look. We found that there are just three possibilities. The easiest case is that in which the rays to the end are very joined up, in the sense that any injective function between two families of rays is a transition function. This case was already dealt with in~\cite{BEEGHPTII}, where it was shown that in any graph with such an end we can find a~$K_{\aleph_0}$ minor. The second possibility is that which we saw above for the grid: all ray graphs are cycles, and all transition functions between them preserve the cyclic order. The third possibility is that all ray graphs consist of a path together with a bounded number of further `junk' vertices, where these junk vertices hang at the ends of the paths (formally: all interior vertices on this \emph{central path} in the ray graph have degree~$2$). In this case, the transition functions must preserve the linear order along the paths. The second and third cases can be dealt with using similar ideas, so we will focus on the third one here.
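To make the sparsity phenomenon concrete, here is a small illustrative computation, under assumptions chosen for the example rather than taken from the paper: in the half-grid ${\mathbb{N} \times \mathbb{N}}$, take the rays to be a selection of vertical columns. Two columns are joined in the ray graph exactly when no other chosen column lies strictly between them, since then the horizontal row-segments give infinitely many disjoint connecting paths avoiding the other rays; otherwise every connecting path must cross an intermediate column. The resulting ray graph is a path, as in the third case above.

```python
# Illustrative ray graph computation (assumption: rays are vertical columns
# of the half-grid).  Columns a < b are joined in the ray graph iff no other
# chosen column lies strictly between them: in that case each grid row gives
# one connecting segment, so there are infinitely many disjoint paths
# avoiding all other rays.

def grid_ray_graph(columns):
    cols = sorted(columns)
    edges = set()
    for a in cols:
        for b in cols:
            if a < b and not any(a < c < b for c in cols):
                edges.add((a, b))   # one row-segment per row
    return edges

# Five disjoint vertical rays: the ray graph is the path 0-2-3-7-9.
print(sorted(grid_ray_graph([0, 2, 3, 7, 9])))
# [(0, 2), (2, 3), (3, 7), (7, 9)]
```

Transition functions between such families must preserve the order along this path, which is exactly the rigidity exploited in the construction.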
Since we are assuming that all the $G$-minors in $\Gamma$ go to $\epsilon$, given a large enough collection of $G$-minors $\mathcal{H}$, almost all of the rays from the bundles of the $H \in \mathcal{H}$ lie on the central path of the ray graph of this family of rays, and so in particular by a Ramsey type argument there must be a large collection of $H \in \mathcal{H}$ such that for each $H$, the rays~${H(A_i)}$ appear in the same order along the central path. Since there are only finitely many possible orders, there is some consistent way to order the~$A_i$ such that for every~$n$ we can find~$n$ disjoint $G$-minors~$H$ such that there is some ray graph in which, for each~$H$, the rays~${H(A_i)}$ appear in this order along the central path, which we can assume, without loss of generality, is from $H(A_1)$ to $H(A_d)$. This will allow us to recursively maintain a similar property for the rays from the bundles of the $H_m^n$. More precisely, we can guarantee that there is a slightly larger family~$\mathcal{R}$ of disjoint rays, consisting of the~$R^n_{m,i}$ and some extra `junk' rays, such that all of the~$R^n_{m,i}$ lie on the central path of $\RG ({\mathcal R})$, and for each~$n$ and~$m$ the~$R^n_{m,i}$ appear on this path consecutively in order from~$R^n_{m,1}$ to~$R^n_{m,d}$. Then, our extra assumption on the structure of the end $\epsilon$ ensures that given a linkage from ${\mathcal R}$ to the bundles from $H \in \mathcal{H}$ which induces a transition function, we can reroute our linkage, using the edges of $\RG({\mathcal R})$, so that~\eqref{property} holds. There is one last subtle difficulty which we have to address, once more relating to the fact that we want to introduce a new~$H^{n+1}_{n+1}$ together with its private bundle of rays corresponding to its copies of the~$A_i$, disjoint from all the other~$H^{n+1}_m$ and their bundles.
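The "finitely many possible orders" step above is a plain pigeonhole argument: each minor~$H$ determines one of at most~$d!$ orders for its rays along the central path, so any family of size~${n \cdot d!}$ contains~$n$ minors whose rays appear in the same order. The following sketch (with hypothetical helper names, purely for illustration) makes this count explicit.

```python
# Pigeonhole sketch: each G-minor H yields the order (a tuple) in which its
# rays H(A_1),...,H(A_d) appear along the central path of the ray graph.
# There are only d! possible orders, so among n*d! minors at least n share one.

from collections import defaultdict
from math import factorial

def large_monotone_subfamily(orders, n):
    """Return indices of >= n minors whose rays appear in the same order,
    assuming the family has size at least n * d! (hypothetical helper)."""
    d = len(orders[0])
    assert len(orders) >= n * factorial(d)
    buckets = defaultdict(list)
    for h, order in enumerate(orders):
        buckets[order].append(h)
    best = max(buckets.values(), key=len)
    assert len(best) >= n        # guaranteed by pigeonhole
    return best

# 12 minors with d = 2 (two possible orders) always contain 6 agreeing ones.
orders = [(1, 2), (2, 1)] * 6
print(len(large_monotone_subfamily(orders, 6)))  # 6
```

In the actual proof the agreeing subfamily is then large enough to supply the~$n$ disjoint minors needed at stage~$n$.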
Our strengthening of the weak linking lemma allows us to find a linkage which avoids one of the $G$-minors in~$\mathcal{H}$, but this linkage may not have property~\eqref{property}. We can, as before, modify it to one satisfying~\eqref{property} by rerouting the linkage, but this new linkage may then have to intersect some of the rays in the bundle of~$H^{n+1}_{n+1}$, if these rays from~$H^{n+1}_{n+1}$ lie between rays linked to a bundle of some~$H^n_m$, see Figure~\ref{fig:only_rounting}. \begin{figure} \caption{Extending the $H^n_m$ by routing onto a set of disjoint $G$-minors might cause problems with introducing a new $H^{n+1}_{n+1}$.} \label{fig:only_rounting} \end{figure} However, we can get around this by instead rerouting the rays in~$\mathcal{R}$ \emph{before} the linkage, so as to rearrange which bundles make use of (the tails of) which rays. Of course, we cannot know before we choose our linkage how we will need to reroute the rays in~$\mathcal{R}$, but we do know that the structure of $\epsilon$ restricts the possible reroutings we might need to do. Hence, we can avoid this issue by first taking a large, but finite, set of paths between the rays in $\mathcal{R}$ which is rich enough to allow us to reroute the rays in $\mathcal{R}$ in every way which is possible in $\Gamma$. Since the rays in $\mathcal{R}$ also go to $\epsilon$, the structure of $\epsilon$ will guarantee that this includes all of the possible reroutings we might need to do. We call such a collection a \emph{transition box}. Only after building our transition box do we choose the linkage from~${\mathcal R}$ to the rays from~$\mathcal{H}$, and we make sure that this linkage is after the transition box. Then, when we later see how the rays in ${\mathcal R}$ should be arranged in order that the rays from the bundle of~$H^{n+1}_{n+1}$ do not appear between rays linked to a bundle of some~$H^n_m$, we can go back and perform a suitable rerouting within the transition box, see Figure~\ref{l_fig_intro}.
\begin{figure} \caption{The transitioning strategy between the old and new bundles.} \label{l_fig_intro} \end{figure} This completes the sketch of the proof that locally finite graphs with a single end of finite degree are ubiquitous. Our results in this paper are for a more general class of graphs, but one which is chosen to ensure that arguments of the kind outlined above will work for them. Hence we still need a tree-decomposition with properties similar to \ref{item:sketch-1}--\ref{item:sketch-4} from our ray-decomposition above. Tree-decompositions with these properties are called \emph{extensive}, and the details can be found in Section~\ref{s:extensive}. However, certain aspects of the sketch above must be modified to allow for the fact that we are now dealing with graphs~$G$ with multiple, indeed possibly infinitely many, ends. For any end~$\delta$ of~$G$ and any $G$-minor~$H$ of~$\Gamma$, all rays~${H(R)}$ with~$R$ in~$\delta$ belong to the same end~$H(\delta)$ of~$\Gamma$. If~$\delta$ and~$\delta'$ are different ends in~$G$, then~$H(\delta)$ and~$H(\delta')$ may well be different ends in~$\Gamma$ as well. So there is no hope of finding a single end~$\epsilon$ of~$\Gamma$ to which all rays in all $G$-minors converge. Nevertheless, we can still find an end~$\epsilon$ of~$\Gamma$ towards which the $G$-minors are \emph{concentrated}, in the sense that for any finite vertex set~$X$ there are arbitrarily large families of disjoint $G$-minors in the component of~${\Gamma-X}$ in which all rays of~$\epsilon$ have tails. See Section~\ref{s:tribes} for details. In that section we introduce the term \emph{tribe} for a collection of arbitrarily large families of disjoint $G$-minors. The recursive construction will work pretty much as before, in that at each step~$n$ we will again have embedded $G^n$-minors for some large finite part~$G^n$ of~$G$, together with a number of rays in~$\epsilon$ corresponding to some designated rays going to certain ends~$\delta$ of~$G$.
In order for this to work, we need some consistency about which ends~$H(\delta)$ of~$\Gamma$ are equal to~$\epsilon$ and which are not. It is clear that for any finite set~$\Delta$ of ends of~$G$ there is some subset~$\Delta'$ such that there is a tribe of $G$-minors~$H$ converging to~$\epsilon$ with the property that the set of~$\delta$ in~$\Delta$ with~${H(\delta) = \epsilon}$ is~$\Delta'$. This is because there are only finitely many options for this set. But if~$G$ has infinitely many ends, there is no reason why we should be able to do this for all ends of~$G$ at once. Our solution is to keep track of only finitely many ends of~$G$ at any stage in the construction, and to maintain at each stage a tribe concentrated towards~$\epsilon$ which is consistent for all these finitely many ends. Thus in our construction consistency of questions such as which ends~$\delta$ of~$G$ are mapped to~$\epsilon$, or of the proper linear order in the ray graph of the families of canonical rays to those ends, is achieved dynamically during the construction, rather than being fixed in advance. The ideas behind this dynamic process have already been used successfully in our earlier paper~\cite{BEEGHPTI}, where they appear in slightly simpler circumstances. The paper is then structured as follows. In Section~\ref{s:prelim} we give precise definitions of some of the basic concepts we will be using, and prove some of their fundamental properties. In Section~\ref{s:extensive} we introduce extensive tree-decompositions and in Section~\ref{s:getnice} we show that many locally finite graphs admit such decompositions. In Section~\ref{s:nonpebbly} we analyse the possible collections of ray graphs and transition functions between them which can occur in a thick end. In Section~\ref{s:tribes} we introduce the notion of tribes and of their concentration towards an end and begin building some tools for the main recursive construction, which is given in Section~\ref{sec:countable-subtrees}.
We conclude with a discussion of the future outlook in Section~\ref{s:WQO}. \section{Preliminaries} \label{s:prelim} In this paper we will denote by~$\mathbb{N}$ the set of positive integers and by~$\mathbb{N}_0$ the set of non-negative integers. In our graph theoretic notation we generally follow the textbook of Diestel~\cite{D16}. For a graph~${G = (V,E)}$ and~${W \subseteq V}$ we write~${G[W]}$ for the induced subgraph of~$G$ on~$W$. For two vertices~${v,w}$ of a connected graph~$G$, we write~$\dist(v,w)$ for the edge-length of a shortest $v$--$w$ path. A path ${P = v_0v_1\ldots v_n}$ in a graph~$G$ is called a \emph{bare path} if~${d_G(v_i) = 2}$ for all inner vertices~$v_i$, ${0 < i < n}$. \subsection{Rays and ends} \begin{definition}[Rays, double rays and initial vertices of rays] A one-way infinite path is called a \emph{ray} and a two-way infinite path is called a \emph{double ray}. For a ray~$R$, let~$\init(R)$ denote the \emph{initial vertex} of~$R$, that is the unique vertex of degree~$1$ in~$R$. For a family~$\mathcal{R}$ of rays, let~${\init(\mathcal{R})}$ denote the set of initial vertices of the rays in~$\mathcal{R}$. \end{definition} \begin{definition}[Tail of a ray] Given a ray~$R$ in a graph~$G$ and a finite set~${X \subseteq V(G)}$, the \emph{tail of~$R$ after~$X$}, written~${T(R,X)}$, is the unique infinite component of~$R$ in~${G - X}$. \end{definition} \begin{definition}[Concatenation of paths and rays] \label{def_concat} For a path or ray~$P$ and vertices ${v,w \in V(P)}$, let~${vPw}$ denote the subpath of~$P$ with endvertices~$v$ and~$w$, and~${\mathring{v}P\mathring{w}}$ the subpath strictly between~$v$ and~$w$. If~$P$ is a ray, let~$Pv$ denote the finite subpath of~$P$ between the initial vertex of~$P$ and~$v$, and let~$vP$ denote the subray (or \emph{tail}) of~$P$ with initial vertex~$v$.
Similarly, we write~${P\mathring{v}}$ and~${\mathring{v}P}$ for the corresponding path/ray without the vertex~$v$. For a ray ${R = r_0 r_1 \ldots}$, let~$R^{-}$ denote the tail~${r_1 R}$ of~$R$ starting at~$r_1$. Given a family~$\mathcal{R}$ of rays, let~$\mathcal{R}^{-}$ denote the family~${(R^{-} \colon R \in \mathcal{R})}$. Given two paths or rays~$P$ and~$Q$, which intersect in a single vertex only, which is an endvertex in both~$P$ and~$Q$, we write~$PQ$ for the \emph{concatenation of~$P$ and~$Q$}, that is the path, ray or double ray~${P \cup Q}$. Moreover, if we concatenate paths of the form~$vPw$ and~$wQx$, then we omit writing~$w$ twice and denote the concatenation by~${vPwQx}$. \end{definition} \begin{definition}[{Ends of a graph, cf.~\cite[Chapter~8]{D16}}] An \emph{end} of an infinite graph~$\Gamma$ is an equivalence class of rays, where two rays~$R$ and~$S$ of~$\Gamma$ are \emph{equivalent} if and only if there are infinitely many vertex disjoint paths between~$R$ and~$S$ in~$\Gamma$. We denote by~${\Omega(\Gamma)}$ the set of ends of~$\Gamma$. We say that a ray~${R \subseteq \Gamma}$ \emph{converges} (or \emph{tends}) to an end~$\epsilon$ of~$\Gamma$ if~$R$ is contained in~$\epsilon$. In this case, we call~$R$ an \emph{$\epsilon$-ray}. Given an end~${\epsilon \in \Omega(\Gamma)}$ and a finite set~${X \subseteq V(\Gamma)}$ there is a unique component of~${\Gamma - X}$ which contains a tail of every ray in~$\epsilon$, which we denote by~${C(X,\epsilon)}$. Given two ends $\epsilon, \epsilon' \in \Omega(\Gamma)$, we say a finite set $X \subseteq V(\Gamma)$ \emph{separates} $\epsilon$ and $\epsilon'$ if $C(X,\epsilon) \neq C(X,\epsilon')$. For an end~${\epsilon \in \Omega(\Gamma)}$, we define the \emph{degree} of~$\epsilon$ in~$\Gamma$, denoted by~${\deg(\epsilon)}$, as the supremum in ${\mathbb{N} \cup \{ \infty \}}$ of the set ${ \{ |\mathcal{R}| \; \colon \; \mathcal{R} \textnormal{ is a set of disjoint } \epsilon \textnormal{-rays} \} }$.
Note that this supremum is in fact an attained maximum, i.e.~for each end~$\epsilon$ of~$\Gamma$ there is a set~$\mathcal{R}$ of vertex-disjoint $\epsilon$-rays with~${|\mathcal{R}| = \deg(\epsilon)}$, as proved by Halin~\cite[Satz~1]{H65}. An end with finite degree is called \emph{thin}, otherwise the end is called \emph{thick}. \end{definition} \subsection{Inflated copies of graphs} \begin{definition}[Inflated graph, branch set] Given a graph~$G$, we say that a pair~${(H,\varphi)}$ is an \emph{inflated copy of~$G$}, or an~$IG$, if~$H$ is a graph and~${\varphi \colon V(H) \rightarrow V(G)}$ is a map such that: \begin{itemize} \item For every~${v \in V(G)}$ the \emph{branch set}~${\varphi^{-1}(v)}$ induces a non-empty, connected subgraph of~$H$; \item There is an edge in~$H$ between~${\varphi^{-1}(v)}$ and~${\varphi^{-1}(w)}$ if and only if~${vw \in E(G)}$ and this edge, if it exists, is unique. \end{itemize} \end{definition} When there is no danger of confusion, we will simply say that~$H$ is an~$IG$ instead of saying that~${(H,\varphi)}$ is an~$IG$, and denote by~${H(v) = \varphi^{-1}(v)}$ the branch set of~$v$. \begin{definition}[Minor] A graph~$G$ is a \emph{minor} of another graph~$\Gamma$, written~${G \preccurlyeq \Gamma}$, if there is some subgraph~${H \subseteq \Gamma}$ such that~$H$ is an inflated copy of~$G$. In this case, we also say that $H$ is a \emph{$G$-minor} in $\Gamma$. \end{definition} \begin{definition}[Extension of inflated copies] Suppose~${G \subseteq G'}$ as subgraphs, and that~$H$ is an~$IG$ and~$H'$ is an~$IG'$. We say that~$H'$ \emph{extends}~$H$ (or that~$H'$ is an \emph{extension} of~$H$) if~${H \subseteq H'}$ as subgraphs and~${H(v) \subseteq H'(v)}$ for all~${v \in V(G)}$. Note that, since~${H \subseteq H'}$, for every edge~${vw \in E(G)}$ the unique edge between the branch sets~$H'(v)$ and~$H'(w)$ is also the unique edge between~$H(v)$ and~$H(w)$.
If~$H'$ is an extension of~$H$ and~${X \subseteq V(G)}$ is such that~${H'(x) = H(x)}$ for every~${x \in X}$, then we say~$H'$ is an extension of~$H$ \emph{fixing~$X$}. \end{definition} \begin{definition}[Tidiness] Let~${(H,\varphi)}$ be an~$IG$. We call~${(H,\varphi)}$ \emph{tidy} if \begin{itemize} \item $H[\varphi^{-1}(v)]$ is a tree for all~${v \in V(G)}$; \item $H[\varphi^{-1}(v)]$ is finite if~$d_G(v)$ is finite. \end{itemize} \end{definition} Note that every~$H$ which is an~$IG$ contains a subgraph~$H'$ such that~${(H',\varphi \restriction V(H'))}$ is a tidy~$IG$, although this choice may not be unique. In this paper we will always assume, without loss of generality, that each~$IG$ is tidy. \begin{definition}[Restriction] Let~$G$ be a graph, ${M \subseteq G}$ a subgraph of~$G$, and let~${(H,\varphi)}$ be an~$IG$. The \emph{restriction of~$H$ to~$M$}, denoted by~$H(M)$, is the~$IM$ given by~${(H(M),\varphi')}$, where~${\varphi'^{-1}(v) = \varphi^{-1}(v)}$ for all~${v \in V(M)}$, and~$H(M)$ consists of the union of the subgraphs of~$H$ induced on each branch set~${\varphi^{-1}(v)}$ for each~${v \in V(M)}$, together with the edge between~${\varphi^{-1}(u)}$ and~${\varphi^{-1}(v)}$ in~$H$ for each~${uv \in E(M)}$. \end{definition} Suppose~$R$ is a ray in some graph~$G$. If~$H$ is a tidy~$IG$ in a graph~$\Gamma$, then in the restriction~$H(R)$ all rays which do not have a tail contained in some branch set will share a tail. Later in the paper, we will want to make this correspondence between rays in~$G$ and~$\Gamma$ more explicit, with use of the following definition: \begin{definition}[Pullback] Let~$G$ be a graph, ${R \subseteq G}$ a ray, and let~${(H, \varphi)}$ be a tidy~$IG$. The \emph{pullback of~$R$ to~$H$} is the subgraph~${H^{\downarrow}(R) \subseteq H(R)}$, where~${H^{\downarrow}(R)}$ is subgraph minimal such that~${(H^{\downarrow}(R), \varphi \restriction V(H^{\downarrow}(R)))}$ is an~$IR$.
\end{definition} Note that, since~$H$ is tidy, ${H^{\downarrow}(R)}$ is well defined. It can be shown that, in fact, $H^{\downarrow}(R)$ is also a ray. \begin{lemma}[{\cite[Lemma~2.11]{BEEGHPTII}}] \label{l:pullbackray} Let~$G$ be a graph and let~$H$ be a tidy~$IG$. If~${R \subseteq G}$ is a ray, then the pullback~${H^{\downarrow}(R)}$ is also a ray. \end{lemma} \begin{definition} \label{d:rayfamily} Let~$G$ be a graph, ${\mathcal R}$ be a family of disjoint rays in~$G$, and let~$H$ be a tidy~$IG$. We will write~$H^{\downarrow}({\mathcal R})$ for the family ${(H^{\downarrow}(R) \colon R \in {\mathcal R})}$. \end{definition} It is easy to check that if two rays~$R$ and~$S$ in~$G$ are equivalent, then also~$H^{\downarrow}(R)$ and~$H^{\downarrow}(S)$ are rays (Lemma~\ref{l:pullbackray}) which are equivalent in~$H$, and hence also equivalent in~$\Gamma$. \begin{definition} \label{def:H(omega)} For an end~$\omega$ of~$G$ and~${H \subseteq \Gamma}$ a tidy~$IG$, we denote by~$H(\omega)$ the unique end of~$\Gamma$ containing all rays~${H^{\downarrow}(R)}$ for~${R \in \omega}$. \end{definition} \subsection{Transitional linkages and the strong linking lemma} The next definition is based on definitions already stated in Section~\ref{s:sketch} (cf.~Definition~\ref{d:linkage}, Definition~\ref{d:trans-func} and Definition~\ref{def_raygraph}). \begin{definition} We say a linkage between two families of rays is \emph{transitional} if the function which it induces between the corresponding ray graphs is a transition function. \end{definition} \begin{lemma} \label{l:trans-link} Let~$\Gamma$ be a graph and let~${\epsilon \in \Omega(\Gamma)}$. Then, for any finite families ${{\mathcal R} = (R_i \colon i \in I)}$ and ${{\mathcal S} = (S_j \colon j \in J)}$ of disjoint $\epsilon$-rays in~$\Gamma$, there is a finite set~$X$ such that every linkage from~${\mathcal R}$ to~${\mathcal S}$ after~$X$ is transitional.
\end{lemma} \begin{proof} By definition, for every function~${\sigma \colon I \rightarrow J}$ which is not a transition function from~${\mathcal R}$ to~${\mathcal S}$ there is a finite set~${X_\sigma \subseteq V(\Gamma)}$ such that there is no linkage from~${\mathcal R}$ to~${\mathcal S}$ after~$X_\sigma$ which induces~$\sigma$. If we let~$\Phi$ be the set of all such~$\sigma$ which are not transition functions, then the set~${X := \bigcup_{\sigma \in \Phi} X_\sigma}$ satisfies the conclusion of the lemma. \end{proof} In addition to Lemma~\ref{l:weaklink}, we will also need the following stronger linking lemma, which is a slight modification of~\cite[Lemma~4.4]{BEEGHPTI}: \begin{lemma}[Strong linking lemma] \label{l:link} Let~$\Gamma$ be a graph and let~${\epsilon \in \Omega(\Gamma)}$. Let~$X$ be a finite set of vertices and let~${{\mathcal R} = (R_i \colon i \in I)}$ be a finite family of vertex disjoint $\epsilon$-rays. Let~${x_i = \init(R_i)}$ and let~${x'_i = \init(T(R_i,X))}$. Then there is a finite number~${N = N({\mathcal R},X)}$ with the following property: For every collection ${(H_j \colon j\in[N])}$ of vertex disjoint subgraphs of~$\Gamma$, all disjoint from~$X$ and each including a specified ray~$S_j$ in~$\epsilon$, there is an~${\ell \in [N]}$ and a transitional linkage ${{\mathcal P} = (P_i \colon i \in I)}$ from~${\mathcal R}$ to ${(S_j \colon j \in [N])}$, with transition function~$\sigma$, which is after~$X$ and such that the family \[ \mathcal{T} = \left(x_iR_ix'_iP_iy_{\sigma(i)}S_{\sigma(i)} \colon i\in I\right) \] avoids~$H_{\ell}$, where~$y_{\sigma(i)}$ denotes the endvertex of~$P_i$ on~$S_{\sigma(i)}$. \end{lemma} \begin{proof} Let~${Y \subseteq V(\Gamma)}$ be a finite set as in Lemma~\ref{l:trans-link}. We apply the strong linking lemma established in~\cite[Lemma~4.4]{BEEGHPTI} to the set~${X \cup Y}$ to obtain this version of the strong linking lemma.
\end{proof} \begin{lemmadef} \label{l:transition-box} Let~$\Gamma$ be a graph, ${\epsilon \in \Omega(\Gamma)}$, ${X \subseteq V(\Gamma)}$ be finite, and let ${\mathcal{R} = ( R_i \colon i \in I)}$, ${\mathcal{S} = ( S_j \colon j \in J)}$ be two finite families of disjoint $\epsilon$-rays with~${|I| \leqslant |J|}$. Then there is a finite subgraph~${Y}$ such that, for any transition function~$\sigma$ from~$\mathcal{R}$ to~$\mathcal{S}$, there is a linkage~$\mathcal{P}_\sigma$ from~$\mathcal{R}$ to~$\mathcal{S}$ inducing~$\sigma$, with~${\bigcup \mathcal{P}_\sigma \subseteq Y}$, which is after~$X$. We call such a graph~$Y$ a \emph{transition box between~$\mathcal{R}$ and~$\mathcal{S}$ (after~$X$)}. \end{lemmadef} \begin{proof} Let~${\sigma \colon I \rightarrow J}$ be a transition function from~$\mathcal{R}$ to~$\mathcal{S}$. By definition, there is a linkage~$\mathcal{P}_\sigma$ from~$\mathcal{R}$ to~$\mathcal{S}$ after~$X$ which induces~$\sigma$. Since there are only finitely many functions from~$I$ to~$J$, the set~$\Phi$ of all transition functions from~$\mathcal{R}$ to~$\mathcal{S}$ is finite, and so~${Y = \bigcup_{\sigma \in \Phi} \bigcup \mathcal{P}_\sigma}$ is a finite subgraph. Then~$Y$ is a transition box between~$\mathcal{R}$ and~$\mathcal{S}$ (after~$X$). \end{proof} \begin{remarkdef} \label{rem:trans-link-combine} Let~$\Gamma$ be a graph and~${\epsilon \in \Omega(\Gamma)}$. Let ${\mathcal R}_1$, ${\mathcal R}_2$, ${\mathcal R}_3$ be finite families of disjoint $\epsilon$-rays, let~$\mathcal{P}_1$ be a transitional linkage from~${\mathcal R}_1$ to~${\mathcal R}_2$, and let~$\mathcal{P}_2$ be a transitional linkage from~${\mathcal R}_2$ to~${\mathcal R}_3$ after~${V(\bigcup \mathcal{P}_1)}$.
Then \begin{enumerate} \item $\mathcal{P}_2$ is also a transitional linkage from ${({\mathcal R}_1 \circ_{\mathcal{P}_1} {\mathcal R}_2)}$ to~${\mathcal R}_3$;\footnote{Formally, it is only the subset of $\mathcal{P}_2$ starting at the endpoints of $\mathcal{P}_1$ which is a linkage from ${({\mathcal R}_1 \circ_{\mathcal{P}_1} {\mathcal R}_2)}$ to~${\mathcal R}_3$. Here and later in the paper, we will use such abuses of notation, when the appropriate subset of the path family is clear from context.} \item The linkage from~${\mathcal R}_1$ to~${\mathcal R}_3$ yielding the rays ${({\mathcal R}_1 \circ_{\mathcal{P}_1} {\mathcal R}_2) \circ_{\mathcal{P}_2} {\mathcal R}_3}$, which we call the \emph{concatenation~${\mathcal{P}_1 + \mathcal{P}_2}$ of~$\mathcal{P}_1$ and~$\mathcal{P}_2$}, is transitional. \end{enumerate} \end{remarkdef} The following lemmas are simple exercises. \begin{lemma} \label{rem:tail-raygraph} Let~$\Gamma$ be a graph and ${( R_i \colon i \in I)}$ be a finite family of equivalent disjoint rays. Then the ray graph~${\RG( R_i \colon i \in I)}$ is connected. Also, if~$R'_i$ is a tail of~$R_i$ for each~${i \in I}$, then we have that~${{\RG( R_i \colon i \in I)} = {\RG( R'_i \colon i \in I)}}$. \qed \end{lemma} \begin{lemma}[{\cite[Lemma 3.4]{BEEGHPTII}}] \label{l:rayinducedsubgraph} Let~$\Gamma$ be a graph, ${\Gamma' \subseteq \Gamma}$, ${\mathcal{R} = (R_i \colon i \in I)}$ be a finite family of disjoint rays in~$\Gamma'$, and let~${\mathcal{S} = (S_j \colon j \in J)}$ be a finite family of disjoint rays in~${\Gamma - V(\Gamma')}$, where~$I$ and~$J$ are disjoint. Then~${\RG_{\Gamma'}(\mathcal{R})}$ is a subgraph of~${\RG_{\Gamma}(\mathcal{R} \cup \mathcal{S})\big[I \big]}$. \qed \end{lemma} \subsection{Separations and tree-decompositions of graphs} \begin{definition} \label{def_separation} Let~${G = (V,E)}$ be a graph.
A \emph{separation} of~$G$ is a pair~${(A,B)}$ of subsets of vertices such that~${A \cup B = V}$ and such that there is no edge between~${B \setminus A}$ and~${A \setminus B}$. Given a separation~${(A,B)}$, we write~$\overline{G[B]}$ for the graph obtained by deleting all edges in the \emph{separator}~${A \cap B}$ from~${G[B]}$. Two separations~${(A,B)}$ and~${(C,D)}$ are \emph{nested} if one of the following conditions holds: \begin{align*} &{A \subseteq C} \textnormal{ and } {D \subseteq B}, \qquad \textnormal { or } \qquad {B \subseteq C} \textnormal{ and } {D \subseteq A}, \qquad \textnormal { or } \qquad \\ &{A \subseteq D} \textnormal{ and } {C \subseteq B} , \qquad \textnormal { or } \qquad {B \subseteq D} \textnormal{ and } {C \subseteq A}. \end{align*} \end{definition} \begin{definition} Let~$T$ be a tree with a root~${v \in V(T)}$. Given nodes~${x, y \in V(T)}$, let us denote by~$xTy$ the unique path in~$T$ between~$x$ and~$y$, by~$T_x$ the component of~${T - E(vTx)}$ containing~$x$, and by~$\overline{T_x}$ the tree~${T - T_x}$. Given an edge~${e = tt' \in E(T)}$, we say that~$t$ is the \emph{lower vertex} of~$e$, denoted by~$e^-$, if~${t \in vTt'}$. In this case, $t'$ is the \emph{higher vertex} of~$e$, denoted by~$e^+$. If~$S$ is a subtree of a tree~$T$, let us write~${\partial(S) = E(S,T \setminus S)}$ for the edge cut between~$S$ and its complement in~$T$. We say that~$S$ is an \emph{initial subtree} of~$T$ if~$S$ contains~$v$. In this case, we consider~$S$ to be rooted in~$v$ as well. \end{definition} A reader unfamiliar with tree-decompositions may also consult~\cite[Chapter 12.3]{D16}.
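The following simple examples may help to illustrate nestedness. Let~$P$ be the path~$v_1v_2v_3v_4$. Then ${(\{v_1,v_2\},\{v_2,v_3,v_4\})}$ and ${(\{v_1,v_2,v_3\},\{v_3,v_4\})}$ are separations of~$P$, with separators~$\{v_2\}$ and~$\{v_3\}$ respectively, and they are nested, since ${\{v_1,v_2\} \subseteq \{v_1,v_2,v_3\}}$ and ${\{v_3,v_4\} \subseteq \{v_2,v_3,v_4\}}$. By contrast, if~$C$ is the cycle~$v_1v_2v_3v_4v_1$, then ${(\{v_1,v_2,v_3\},\{v_3,v_4,v_1\})}$ and ${(\{v_2,v_3,v_4\},\{v_4,v_1,v_2\})}$ are separations of~$C$, with separators~$\{v_1,v_3\}$ and~$\{v_2,v_4\}$, for which none of the four inclusion conditions above holds; these two separations are therefore not nested.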
\begin{definition}[Tree-decomposition] Given a graph~${G = (V,E)}$, a \emph{tree-decomposition} of~$G$ is a pair ${(T,\mathcal{V})}$ consisting of a rooted tree~$T$, together with a family of subsets of vertices ${\mathcal{V} = (V_t \colon t \in V(T))}$, such that: \begin{itemize} \item ${V(G) = \bigcup \mathcal{V}}$; \item For every edge~${e \in E(G)}$ there is a~${t \in V(T)}$ such that~$e$ lies in~${G[V_t]}$; \item ${V_{t_1} \cap V_{t_3} \subseteq V_{t_2}}$ whenever~${t_2 \in V(t_1 T t_3)}$. \end{itemize} The vertex sets~$V_t$ for~${t \in V(T)}$ are called the \emph{parts} of the tree-decomposition~${(T,\mathcal{V})}$. \end{definition} \begin{definition}[Tree-width] Suppose~${(T,\mathcal{V})}$ is a tree-decomposition of a graph~$G$. The \emph{width} of~${(T,\mathcal{V})}$ is the number ${\sup\, \{ |V_t|-1 \colon t \in V(T) \} \in \mathbb{N}_0 \cup \{\infty\}}$. The \emph{tree-width} of a graph~$G$ is the least width of any tree-decomposition of~$G$. \end{definition} \section{Extensive tree-decompositions and self minors} \label{s:extensive} The purpose of this section is to explain the extensive tree-decompositions mentioned in the proof sketch. Some ideas motivating this definition are already present in Andreae's proof that locally finite trees are ubiquitous under the topological minor relation~\cite[Lemma~2]{A79}. \subsection{Extensive tree-decompositions} \begin{definition}[Separations induced by tree-decompositions] Given a tree-decomposition ${(T,\mathcal{V})}$ of a graph~$G$, and an edge~${e \in E(T)}$, let \begin{itemize} \item ${A(e) := \bigcup \{ V_{t'} \colon t' \notin V(T_{e^+}) \}}$; \item ${B(e) := \bigcup \{ V_{t'} \colon t' \in V(T_{e^+}) \}}$; \item ${S(e) := A(e) \cap B(e) = V_{e^-} \cap V_{e^+}}$. \end{itemize} Then~${\left( A(e),B(e) \right)}$ is a separation of~$G$ (cf.~\cite[Chapter 12.3.1]{D16}).
We call~${B(e)}$ the \emph{bough of~${(T,\mathcal{V})}$ rooted in~$e$} and~${S(e)}$ the \emph{separator of~${B(e)}$}. When writing $\overline{G[B(e)]}$ it is implicitly understood that this refers to the separation~${\left( A(e),B(e) \right)}$ (cf.~Definition~\ref{def_separation}). \end{definition} \begin{definition} \label{defn_treestuff} Let~${(T,\mathcal{V})}$ be a tree-decomposition of a graph~$G$. For a subtree~${S \subseteq T}$, let us write \[ G(S) = G\left[\bigcup_{t\in V(S)} V_t\right] \] and, if~$H$ is an~$IG$, we write~${H(S) = H(G(S))}$ for the restriction of~$H$ to~${G(S)}$. \end{definition} \begin{definition}[Self-similar bough] \label{d:selfsimilarbough} Let~${(T, \mathcal{V})}$ be a tree-decomposition of a graph~$G$. Given~${e \in E(T)}$, the bough~${B(e)}$ is called \emph{self-similar} (towards an end~$\omega$ of~$G$) if there is a family ${\mathcal{R}_e = ( R_{e,s} \colon s \in S(e))}$ of disjoint $\omega$-rays in~$G$ such that for all~${n \in \mathbb{N}}$ there is an edge~${e' \in E(T_{e^+})}$ with~${\dist(e^-,e'^-) \geqslant n}$ such that \begin{itemize} \item for each~${s \in S(e)}$, the ray~$R_{e,s}$ starts in~$s$ and meets~${S(e')}$; \item there is a subgraph~${W \subseteq G[B(e')]}$ which is an inflated copy of~$\overline{G[B(e)]}$; \item for each~${s \in S(e)}$, we have~${V(R_{e,s}) \cap S(e') \subseteq W(s)}$. \end{itemize} Such a~$W$ is called a \emph{witness for the self-similarity of~${B(e)}$ (towards an end~$\omega$ of~$G$) of distance at least~$n$}. \end{definition} \begin{definition}[Extensive tree-decomposition] \label{d:extensive} A tree-decomposition~${( T, \mathcal{V} )}$ of~$G$ is \emph{extensive} if \begin{itemize} \item $T$ is a locally finite, rooted tree; \item each part of~${(T, \mathcal{V})}$ is finite; \item every vertex of~$G$ appears in only finitely many parts of~$\mathcal{V}$; \item for each~${e \in E(T)}$, the bough~${B(e)}$ is self-similar towards some end~$\omega_e$ of~$G$.
\end{itemize} \end{definition} \begin{remark} If ${( T, \mathcal{V} )}$ is extensive then, for each edge~${e \in E(T)}$ and every~${n \in \mathbb{N}}$, there is an edge~${e' \in E(T_{e^+})}$ with~${\dist(e^-,e'^-) \geqslant n}$ such that~${G[B(e')]}$ contains a witness for the self-similarity of~${B(e)}$. Since~$T$ is locally finite, there is some ray~$R_e$ in~$T$ such that there are infinitely many such~$e'$ on~$R_e$. \end{remark} The following is the main result of this paper. \begin{theorem} \label{t:nice} Every locally finite connected graph admitting an extensive tree-decomposition is $\preccurlyeq$-ubiquitous. \end{theorem} \subsection{Self minors in extensive tree-decompositions} The existence of an extensive tree-decomposition of a graph~$G$ will imply the existence of many self-minors of~$G$, which will be essential to our proof. Throughout this subsection, let~$G$ denote a locally finite, connected graph with an extensive tree-decomposition~${(T, \mathcal{V})}$. \begin{definition} \label{d:amalgamation} Let~${(A,B)}$ be a separation of~$G$ with~${A \cap B = \{v_1,v_2,\ldots,v_n \}}$. Suppose ${H_1, H_2}$ are subgraphs of a graph~$\Gamma$, where $H_1$ is an inflated copy of~${G[A]}$, $H_2$ is an inflated copy of~$\overline{G[B]}$, and for all vertices~${x \in A}$ and~${y \in B}$, we have~${H_1(x) \cap H_2(y) \neq \emptyset}$ only if~${x = y = v_i}$ for some~$i$. Suppose further that~$\mathcal{P}$ is a family of disjoint paths ${(P_i \colon i \in [n])}$ in~$\Gamma$ such that each~$P_i$ is a path from~${H_1(v_i)}$ to~${H_2(v_i)}$, which is otherwise disjoint from~${H_1 \cup H_2}$. Note that~$P_i$ may be a single vertex if~${H_1(v_i) \cap H_2(v_i) \neq \emptyset}$.
We write~${H_1 \oplus_{{\mathcal P}} H_2}$ for the~$IG$ given by~${(H,\phi)}$, where~${H = H_1 \cup H_2 \cup \bigcup_{i\in [n]} P_i}$ and \[ H(v) := \begin{cases} H_1(v_i) \cup V(P_i) \cup H_2(v_i) & \text{ if } v = v_i \in A \cap B,\\ H_1(v) & \text{ if } v \in A \setminus B,\\ H_2(v) & \text{ if } v \in B \setminus A. \end{cases} \] We note that this may produce a non-tidy~$IG$, in which case in practice (in order to maintain our assumption that each~$IG$ we consider is tidy) we will always delete some edges inside the branch sets to make it tidy. We will often use this construction when the family $\mathcal{P}$ consists of certain segments of a family of disjoint rays ${\mathcal R}$. If ${\mathcal R}$ is such that each~$R_i$ has its first vertex in~${H_1(v_i)}$ and is otherwise disjoint from~$H_1$, and such that every~$R_i$ meets~$H_2$, and does so first in some vertex~${x_i \in H_2(v_i)}$, then we write \[ H_1 \oplus_{\mathcal R} H_2 = H_1 \oplus_{(R_ix_i \colon i \in [n])} H_2. \] \end{definition} \begin{definition}[Push-out] \label{d:pushout} A self minor ${G' \subseteq G}$ (meaning~$G'$ is an~$IG$) is called a \emph{push-out of~$G$ along~$e$ to depth~$n$} for some~${e \in E(T)}$ if there is an edge~${e' \in E(T_{e^+})}$ such that ${\dist(e^-,e'^-) \geqslant n}$ and a subgraph ${W \subseteq G[B(e')]}$, which is an inflated copy of~$\overline{G[B(e)]}$, such that ${G' = G[A(e)] \oplus_{{\mathcal R}_e} W}$. Similarly, if~$H$ is an~$IG$, then a subgraph~$H'$ of~$H$ is a \emph{push-out of~$H$ along~$e$ to depth~$n$} for some ${e \in E(T)}$ if there is an edge~${e' \in E(T_{e^+})}$ such that~${\dist(e^-,e'^-) \geqslant n}$ and a subgraph~${W \subseteq H(G[B(e')])}$, which is an inflated copy of~$\overline{G[B(e)]}$, such that \[ H' = H(G[A(e)]) \oplus_{H^{\downarrow}({\mathcal R}_e)} W. \] Note that if~$G'$ is a push-out of~$G$ along~$e$ to depth~$n$, then~${H(G')}$ has a subgraph which is a push-out of~$H$ along~$e$ to depth~$n$.
\end{definition} \begin{lemma} \label{lem:pushout2} For each~${e \in E(T)}$, each~${n \in \mathbb{N}}$, and each witness~$W$ of the self-similarity of~${B(e)}$ of distance at least~$n$ there is a corresponding push-out ${G_W := G[A(e)] \oplus_{\mathcal{R}_e} W}$ of~$G$ along~$e$ to depth~$n$. \end{lemma} \begin{proof} Let~${e' \in E(T_{e^+})}$ be the edge in Definition~\ref{d:selfsimilarbough} such that~${W \subseteq G[B(e')]}$. By Definition~\ref{d:selfsimilarbough}, each ray~$R_{e,s}$ meets~${S(e')}$ and ${V(R_{e,s}) \cap S(e') \subseteq W(s)}$. Hence, the initial segment of~$R_{e,s}$ up to the first point in~$W$ only meets~${G[A(e)] \cup W}$ in~${\{s\} \cup W(s)}$. Now, if ${s' \in S(e) \cap W(s)}$ for some~${s' \neq s}$, then~${s' \in S(e')}$, and so~${V(R_{e,s'}) \cap S(e') \not\subseteq W(s')}$, contradicting Definition~\ref{d:selfsimilarbough}. Since~${G[A(e)]}$ is an~${IG[A(e)]}$ and~$W$ is an inflated copy of~$\overline{G[B(e)]}$, by Definitions~\ref{d:amalgamation} and~\ref{d:pushout} ${G[A(e)] \oplus_{{\mathcal R}_e} W}$ is well-defined and is indeed a push-out of~$G$ along~$e$ to depth~$n$. \end{proof} The existence of push-outs of~$G$ along~$e$ to arbitrary depths is in some sense the essence of extensive tree-decompositions, and lies at the heart of our inductive construction in Section~\ref{sec:countable-subtrees}. \section{Existence of extensive tree-decompositions} \label{s:getnice} The purpose of this section is to examine two classes of locally finite connected graphs that have extensive tree-decompositions: firstly, the class of graphs with finitely many ends, all of which are thin, and secondly the class of graphs of finite tree-width. In both cases we will show the existence of extensive tree-decompositions using some results about the \emph{well-quasi-ordering} of certain classes of graphs.
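Some standard examples may help to delimit these two classes. Both contain, for instance, the double ray (two thin ends of degree~$1$, tree-width~$1$), but neither class contains the other: the infinite binary tree is locally finite with tree-width~$1$ but has uncountably many ends, while the graph obtained from disjoint complete graphs~$K_n$, ${n \in \mathbb{N}}$, by joining each~$K_n$ to~$K_{n+1}$ with a single edge is one-ended with a thin end of degree~$1$ but has unbounded tree-width. The infinite grid~${\mathbb{Z} \times \mathbb{Z}}$, whose unique end is thick and whose tree-width is infinite, lies in neither class.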
A \emph{quasi-order} is a reflexive and transitive binary relation, such as the minor relation between graphs. A quasi-order~$\preccurlyeq$ on a set~$X$ is a \emph{well-quasi-order} if for every infinite sequence~${(x_i)_{i \in \mathbb{N}}}$ with~${x_i \in X}$ for every~${i \in \mathbb{N}}$ there exist~${i, j \in \mathbb{N}}$ with~${i < j}$ such that~${x_i \preccurlyeq x_j}$. The following two consequences will be useful. \begin{remark} \label{r:increasingsubsequence} A simple Ramsey type argument shows that if~$\preccurlyeq$ is a well-quasi-order on~$X$, then every infinite sequence~${(x_i)_{i \in \mathbb{N}}}$ with~${x_i \in X}$ for every~${i \in \mathbb{N}}$ contains an increasing infinite subsequence~${x_{i_1},x_{i_2}, \ldots \in X}$. That is, there is an increasing infinite sequence~${i_1<i_2<\ldots}$ such that~${x_{i_j} \preccurlyeq x_{i_k}}$ for all~${j < k}$. Also, it is simple to show that if~$\preccurlyeq$ is a well-quasi-order on~$X$, then for every infinite sequence~${(x_i)_{i \in \mathbb{N}}}$ with~${x_i \in X}$ for every~${i \in \mathbb{N}}$ there is an~${i_0 \in \mathbb{N}}$ such that for every~${i \geqslant i_0}$ there are infinitely many~${j \in \mathbb{N}}$ with~${x_i \preccurlyeq x_j}$. \end{remark} A famous result of Robertson and Seymour~\cite{RS04}, proved over a series of 20 papers, shows that finite graphs are well-quasi-ordered under the minor relation. Thomas~\cite{T89} showed that for any~${k \in \mathbb{N}}$ the class of graphs with tree-width at most~$k$ and arbitrary cardinality is well-quasi-ordered by the minor relation. We will use slight strengthenings of both of these results, Lemma~\ref{lem:labelled-wqo} and Lemma~\ref{lem:labelled-wqo-btw}, to show that our two classes of graphs admit extensive tree-decompositions.
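As a simple illustration of these notions, note that the finite cycles $C_3, C_4, C_5, \ldots$ form an infinite antichain under the subgraph relation, since no cycle is a subgraph of a longer one, and so the subgraph relation is not a well-quasi-order on the class of finite graphs. Under the minor relation, however, ${C_m \preccurlyeq C_n}$ whenever~${3 \leqslant m \leqslant n}$, since~$C_m$ can be obtained from~$C_n$ by contracting ${n-m}$ edges; the fact that the minor relation well-quasi-orders all finite graphs is the content of the Robertson--Seymour theorem cited above.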
In Section~\ref{s:WQO} we will discuss in more detail the connection between our proof and well-quasi-orderings, and indicate how stronger well-quasi-ordering results could be used to prove the ubiquity of larger classes of graphs. \subsection{Finitely many thin ends} We will consider the following strengthening of the minor relation. \begin{definition} Given~${\ell \in \mathbb{N}}$, an $\ell$-\emph{pointed graph} is a graph~$G$ together with a function~${\pi \colon [\ell] \to V(G)}$, called a \emph{point function}. For $\ell$-pointed graphs~${(G_1, \pi_1)}$ and~${(G_2, \pi_2)}$, we say~${(G_1, \pi_1) \preccurlyeq_p (G_2, \pi_2)}$ if~${G_1 \preccurlyeq G_2}$ and this can be arranged in such a way that~${\pi_2(i)}$ is contained in the branch set of~${\pi_1(i)}$ for every~${i \in [\ell]}$. \end{definition} \begin{lemma} \label{lem:labelled-wqo} For~${\ell \in \mathbb{N}}$ the set of $\ell$-pointed finite graphs is well-quasi-ordered under the relation~$\preccurlyeq_p$. \end{lemma} \begin{proof} This follows from a stronger statement of Robertson and Seymour in~\cite[1.7]{RS10}. \end{proof} We will also need the following structural characterisation of locally finite one-ended graphs with a thin end, due to Halin. \begin{lemma}[{\cite[Satz~$3'$]{H65}}]\label{lem:oneended-td} Every one-ended, locally finite connected graph~$G$ with a thin end of degree~${k \in \mathbb{N}}$ has a tree-decomposition~${(R,\mathcal{V})}$ of~$G$ such that ${R = t_0t_1t_2\dots}$ is a ray, and for every~${i \in \mathbb{N}_0}$: \begin{itemize} \item ${|V_{t_i}|}$ is finite; \item ${|S(t_{i}t_{i+1})| = k}$; \item ${S(t_{i}t_{i+1}) \cap S(t_{i+1}t_{i+2}) = \emptyset}$. \end{itemize} \end{lemma} \begin{remark} \label{rem:oneended-td} Note that in the above lemma, for a given finite set~${X \subseteq V(G)}$, by taking the union over parts corresponding to an initial segment of the ray of the decomposition, one may always assume that~${X \subseteq V_{t_0}}$.
Moreover, note that since~${S(t_{i}t_{i+1}) \cap S(t_{i+1}t_{i+2}) = \emptyset}$, it follows that every vertex of~$G$ is contained in at most two parts of the tree-decomposition. \end{remark} \begin{lemma} \label{lem:one-ended-nice} Every one-ended, locally finite connected graph~$G$ with a thin end has an extensive tree-decomposition~${( R, \mathcal{V} )}$ where~${R = t_0t_1t_2\ldots}$ is a ray rooted in its initial vertex. \end{lemma} \begin{proof} Let ${k \in \mathbb{N}}$ be the degree of the thin end of~$G$ and let~${{\mathcal R} = (R_j \colon j \in [k])}$ be a maximal family of disjoint rays in~$G$. Let~${(R',\mathcal{W})}$ be the tree-decomposition of~$G$ given by Lemma~\ref{lem:oneended-td} where~${R' = t_0't_1'\ldots}$. By Remark~\ref{rem:oneended-td} (and considering tails of rays if necessary), we may assume that each ray in~${\mathcal R}$ starts in~${S(t'_0t'_1)}$. Note that each ray in~${\mathcal R}$ meets the separator~${S(t'_{i-1}t'_i)}$ for each~${i \in \mathbb{N}}$. Since~${\mathcal R}$ is a family of~$k$ disjoint rays and~${|S(t'_{i-1}t'_i)| = k}$ for each~${i \in \mathbb{N}}$, each vertex in~${S(t'_{i-1}t'_i)}$ is contained in a unique ray in~${\mathcal R}$. Let~${\ell = 2k}$ and consider a sequence~${(G_i,\pi_i)_{i\in \mathbb{N}}}$ of $\ell$-pointed finite graphs defined by ${G_i := G[W_{t'_i}]}$ and \[ \pi_i \colon [\ell] \to V(G_i) , \; j \mapsto \begin{cases} \text{ the unique vertex in } S(t'_{i-1}t'_i) \cap V(R_j) & \text{ for } 1 \leqslant j \leqslant k , \\ \text{ the unique vertex in } S(t'_it'_{i+1}) \cap V(R_{j-k}) & \text{ for } k < j \leqslant 2k = \ell. \end{cases} \] By Lemma~\ref{lem:labelled-wqo} and Remark~\ref{r:increasingsubsequence} there is an~${n_0 \in \mathbb{N}}$ such that for every~${n \geqslant n_0}$ there are infinitely many~${m > n}$ with~${(G_n,\pi_n) \preccurlyeq_p (G_m,\pi_m)}$. Let~${V_{t_0} := \bigcup_{i=0}^{n_0} W_{t'_i}}$ and~${V_{t_i} := W_{t'_{n_0 + i}}}$ for all~${i \in \mathbb{N}}$. 
We claim that ${(R, (V_{t_i} \colon i\in \mathbb{N}_0))}$ is the desired extensive tree-decomposition of~$G$, where~${R = t_0t_1t_2\ldots}$ is a ray with root~$t_0$. The ray~$R$ is a locally finite tree and all the parts are finite. Moreover, every vertex of~$G$ is contained in at most two parts by Remark~\ref{rem:oneended-td}. It remains to show that for every~${i \in \mathbb{N}}$, the bough~${B(t_{i-1}t_i)}$ is self-similar towards the end of~$G$. Let~${e = t_{i-1}t_i}$ for some~${i \in \mathbb{N}}$. For each~${s \in S(e)}$, we let~${p(s) \in [k]}$ be such that~${s \in R_{p(s)}}$ and set~${R_{e,s} = sR_{p(s)}}$. We wish to show there is a witness~$W$ for the self-similarity of~${B(e)}$ of distance at least~$n$ for each~${n \in \mathbb{N}}$. Note that~${B(e) = \bigcup_{j \geqslant 0} V(G_{n_0+i+j})}$. By the choice of~$n_0$ in Remark~\ref{r:increasingsubsequence}, there exists an~${m > i + n}$ such that~${(G_{n_0+i},\pi_{n_0+i}) \preccurlyeq_p (G_{n_0+m},\pi_{n_0+m})}$. Let~${e' = t_{m-1}t_{m}}$. We will show that there exists a~${W \subseteq G[B(e')]}$ witnessing the self-similarity of~${B(e)}$ towards the end of~$G$. Recursively, for each~${j \geqslant 0}$ we can find~${m = m_0 < m_1 < m_2 < \cdots}$ with \[ (G_{n_0+i+j},\pi_{n_0+i+j}) \preccurlyeq_p (G_{n_0+m_j},\pi_{n_0+m_j}). \] In particular, there are subgraphs~${H_{m_j} \subseteq G_{n_0+m_j}}$ which are inflated copies of~$G_{n_0+i+j}$, all compatible with the point functions, and so \[ {S(t'_{n_0+m_j -1}t'_{n_0+m_j}) \cup S(t'_{n_0+m_j}t'_{n_0+m_j+1}) \subseteq H_{m_j}} \] for each~${j \geqslant 0}$. Hence, for every~${j \in \mathbb{N}}$ and~${p \in [k]}$ there is a unique $H_{m_{j-1}}$--$H_{m_{j}}$ subpath~$P_{p,j}$ of~$R_p$. We claim that \[ W' := \bigcup_{j \geqslant 0 } H_{m_j} \cup \bigcup_{j \in \mathbb{N} }\bigcup_{p \in [k]} P_{p,j} \] is a subgraph of~${G[B(e')]}$ that is an~${IG[B(e)]}$. Hence, the desired~$W$ can be obtained as a subgraph of~$W'$.
To prove this claim it is sufficient to check that for each~${j \in \mathbb{N}}$ and each~${s \in S(t_{j-1}t_{j})}$, the branch sets of~$s$ in~$H_{m_{j-1}}$ and in~$H_{m_{j}}$ are connected by~$P_{p(s),j}$. Indeed, by construction, every~$P_{p,j}$ is a path from~${\pi_{n_0+m_{j-1}}(k+p)}$ to~${\pi_{n_0+m_{j}}(p)}$. And, since the~$H_{m_j}$ are pointed minors of~$G_{n_0+m_j}$, it follows that~${\pi_{n_0+m_{j-1}}(k+p(s)) \in H_{m_{j-1}}(s)}$ and~${\pi_{n_0+m_{j}}(p(s)) \in H_{m_{j}}(s)}$, as desired. Finally, since~${(G_{n_0+i},\pi_{n_0+i})~\preccurlyeq_p~(G_{n_0 + m},\pi_{n_0 + m})}$ as witnessed by~$H_{m_0}$, the branch set of each~${s \in S(t_{i-1}t_i)}$ must indeed include~${V(R_{e,s}) \cap S(e')}$. \end{proof} \begin{lemma} \label{lem:finendsisext} If~$G$ is a locally finite connected graph with finitely many ends, each of which is thin, then~$G$ has an extensive tree-decomposition. \end{lemma} \begin{proof} Let~${\Omega(G) = \{\omega_1,\ldots ,\omega_n\}}$ be the set of the ends of~$G$. Let~${X \subseteq V(G)}$ be a finite set of vertices which separates the ends of~$G$, i.e.~so that all~${C_i = C(X,\omega_i)}$ are pairwise disjoint. Without loss of generality, we may assume that~${V(G) = X \cup \bigcup_{i \in [n]} C_i}$. Let ${G_i := G[C_i \cup X]}$. Then each~$G_i$ is a locally finite connected one-ended graph with a thin end~$\omega_i$, and hence by Lemma~\ref{lem:one-ended-nice} each of the~$G_i$ admits an extensive tree-decomposition~${(R^i,\mathcal{V}^i)}$, where~$R^i$ is rooted in its initial vertex~$r^i$. Without loss of generality, ${X \subseteq V^i_{r^i}}$ for each~${i \in [n]}$. Let~$T$ be the tree formed by identifying the family of rays ${(R^i \colon i \in [n])}$ at their roots, let~$r$ be this identified vertex, which we consider to be the root of~$T$, and let~${(T,\mathcal{V})}$ be the tree-decomposition whose root part is~${\bigcup_{i \in [n]} V^i_{r^i}}$, and which otherwise agrees with the~${(R^i,\mathcal{V}^i)}$.
It is a simple check that~${(T,\mathcal{V})}$ is an extensive tree-decomposition of~$G$. \end{proof} \subsection{Finite tree-width} \begin{definition} A rooted tree-decomposition~${(T,\mathcal{V})}$ of~$G$ is \emph{lean} if for any~${k \in \mathbb{N}}$, any nodes~${t_1,t_2 \in V(T)}$, and any~${X_1 \subseteq V_{t_1}, X_2 \subseteq V_{t_2}}$ such that~${|X_1|,|X_2| \geqslant k}$ there are either~$k$ disjoint paths in~$G$ between~$X_1$ and~$X_2$, or there is a vertex~$t$ on the path in~$T$ between~$t_1$ and~$t_2$ such that~${|V_t| < k}$. \end{definition} \begin{remark} \label{rem:lean-tw} K{\v{r}}{\'\i}{\v{z}} and Thomas~\cite{KT91} showed that if~$G$ has tree-width at most~$m$ for some~${m \in \mathbb{N}}$, then~$G$ has a lean tree-decomposition of width at most~$m$. \end{remark} \begin{lemma} \label{lem:bough-connected} Let~$G$ be a locally finite connected graph and let~${(T,\mathcal{V})}$ be a lean tree-decomposition of~$G$ of width at most~$m$. Then there exists a lean tree-decomposition of~$G$ of width at most~$m$ such that every bough is connected and the decomposition tree is locally finite. Moreover, we may assume that every vertex appears in only finitely many parts. \end{lemma} \begin{proof} We begin by defining the underlying tree~$T'$ of this decomposition. The root of~$T'$ will be the root~$r$ of~$T$, and the other vertices will be pairs~${(e, C)}$ where~$e$ is an edge of~$T$ and~$C$ is a component of~${G - S(e)}$ meeting (or equivalently, included in)~${B(e)}$. There is an edge from~$r$ to~${(e, C)}$ whenever~${e^- = r}$, and from~${(e, C)}$ to~${(f, D)}$ whenever~${f^- = e^+}$ and~${D \subseteq C}$. For future reference, we define a graph homomorphism~$\pi$ from~$T'$ to~$T$ by setting~${\pi(r) = r}$ and~${\pi(e, C) = e^+}$. Next, we set~${V'_r := V_r}$ and \[ V'_{(e,C)} := V_{e^+} \cap (V(C) \cup N(V(C))), \] where~$N(V(C))$ is the neighbourhood of~$V(C)$.
Moreover, we let~$\mathcal{V}'$ denote the family of all~$V'_p$ for all nodes~$p$ of~$T'$. To see that~$T'$ is locally finite, note that for any child ${(e,C)}$ of~$p$ the set~$C$ is also a component of~${G \setminus V_{\pi(p)}}$ and that no two distinct children yield the same component; if~${(e, C)}$ and~${(f,C)}$ were distinct children of~$p$, then we would have~${V(C) \subseteq B(f) \subseteq A(e)}$ and so~${V(C) \subseteq A(e) \cap B(e) = S(e)}$, which is impossible. We now analyse, for a given vertex~$v$ of~$G$, which of the sets~$V'_p$ contain~$v$. Since~${(T, \mathcal{V})}$ is a tree-decomposition, $T$ induces a subtree on the set of nodes~$t$ of~$T$ with~${v \in V_t}$, and so this set has a minimal element~$t_v$ in the tree order. We set~${p_v := r}$ if~${t_v = r}$ and otherwise set~${p_v := (e, C)}$, where~$e$ is the unique edge of~$T$ with~${e^+ = t_v}$ and~$C$ is the unique component of~${G - S(e)}$ containing~$v$. This guarantees that~${v \in V'_{p_v}}$. For any other node~$p$ of~$T'$ with~${v \in V'_p}$, we have~${p \neq r}$ and so~$p$ has the form~${(e,C)}$. Since~${v \in V_{e^+}}$ and~${p \neq p_v}$, it follows that~$e^-$ lies on the path from~$t_v$ to~$e^+$ and so~${v \in V_{e^-}}$, from which~${v \in N(V(C))}$ follows. Thus, some neighbour~$w$ of~$v$ lies in~$C$. Then~${w \in B(e) \setminus S(e) = B(e) \setminus A(e)}$ and so~$t_w$ lies in~$T_{e^+}$. That is,~$p$ lies on the path from~$p_v$ to~$p_w$. Conversely, for any~${p = (e,C)}$ on this path we have~${w \in V(C)}$ and so~${v \in N(V(C)) \subseteq S(e) \subseteq V_{e^+}}$, so that~${v \in V'_p}$. What we have shown is that~$v$ is in~$V'_p$ precisely when~${p = p_v}$ or there is some neighbour~$w$ of~$v$ in~$G$ such that~$p$ lies on the path in~$T'$ from~$p_v$ to~${p_w \in V(T'_{p_v})}$.
Using this information, it is easy to deduce that~${(T', \mathcal{V}')}$ is a tree-decomposition: A vertex~$v$ is in~$V'_{p_v}$ and an edge~$vw$ with~$p_v$ no higher (in the tree order) than~$p_w$ in~$T'$ is also in~$V'_{p_v}$. The third condition in the definition of tree-decompositions follows from the fact that~$T'$ induces a subtree on the set of all nodes~$p$ with~${v \in V'_p}$. These sets are also all finite, since~$G$ is locally finite. Next we examine the boughs of this decomposition. Let~${f \in E(T')}$ with~${f^+ = (e, C)}$. Our aim is to show that~${B(f) = V(C) \cup N(V(C))}$. For any~${(e', C') \in V(T'_{f^+})}$, we have ${V'_{(e', C')} \subseteq V(C') \cup N(V(C')) \subseteq V(C) \cup N(V(C))}$, so that~${B(f) \subseteq V(C) \cup N(V(C))}$. For ${v \in V(C)}$, we have~${p_v \in V(T_{f^+})}$ and so~${v \in B(f)}$ and for~${v \in N(V(C))}$, there is a neighbour~$w$ of~$v$ such that~$f^+$ lies on the path from~$p_v$ to~$p_w$, yielding once more that~${v \in B(f)}$. This completes the proof that~${B(f) = V(C) \cup N(V(C))}$, and in particular~${B(f)}$ is connected. Since~$G$ is locally finite, for each~$e$, there are only finitely many components of~${G - V_{e^-}}$, so that~$T'$ is also locally finite. The final thing to show is that this decomposition is lean. So, suppose we have~${X_1 \subseteq V'_{p_1}}$ and~${X_2 \subseteq V'_{p_2}}$ with ${|X_1|, |X_2| \geqslant k}$. Then also~${X_1 \subseteq V_{\pi(p_1)}}$ and~${X_2 \subseteq V_{\pi(p_2)}}$, so that if there are no~$k$ disjoint paths from~$X_1$ to~$X_2$ in~$G$, then there is some~$t$ on the path from~$\pi(p_1)$ to~$\pi(p_2)$ in~$T$ with~${|V_t| < k}$. But then there is some~$p$ on the path from~$p_1$ to~$p_2$ in~$T'$ with~${\pi(p) = t}$ and, since~${V'_p \subseteq V_t}$, we have~${|V'_p| < k}$.
\end{proof} \begin{lemma} \label{lem:labelled-wqo-btw} For all~${k,\ell \in \mathbb{N}}$, the class of $\ell$-pointed graphs with tree-width at most~$k$ is well-quasi-ordered under the relation~$\preccurlyeq_p$. \end{lemma} \begin{proof} This is a consequence of a result of Thomas~\cite{T89}. \end{proof} \begin{lemma} \label{lem:boundwidthisext} Every locally finite connected graph of finite tree-width has an extensive tree-decomposition. \end{lemma} \begin{proof} Let~$G$ be a locally finite connected graph of tree-width~${m \in \mathbb{N}}$. By Remark~\ref{rem:lean-tw}, $G$ has a lean tree-decomposition of width at most~$m$ and so, by Lemma~\ref{lem:bough-connected}, there is a lean tree-decomposition~${(T,\mathcal{V})}$ of~$G$ of width at most~$m$ in which every bough is connected, every vertex is contained in only finitely many parts, and such that~$T$ is a locally finite tree with root~$r$. Let~$\epsilon$ be an end of~$T$ and let~$R$ be the unique $\epsilon$-ray starting at the root of~$T$. Let ${d_\epsilon = \liminf_{e \in R} |S(e)|}$ and fix a tail ${t^\epsilon_0t^\epsilon_1\ldots}$ of~$R$ such that ${|S(t^\epsilon_{i-1}t^\epsilon_i)| \geqslant d_\epsilon}$ for all~${i \in \mathbb{N}}$. Note that ${|S(t^\epsilon_{i_k-1}t^\epsilon_{i_k})| = d_\epsilon}$ for an infinite sequence~${i_1<i_2<\cdots}$ of indices. Since~${(T,\mathcal{V})}$ is lean, there are~$d_\epsilon$ disjoint paths between~${S(t^\epsilon_{i_k-1}t^\epsilon_{i_k})}$ and~${S(t^\epsilon_{i_{k+1}-1}t^\epsilon_{i_{k+1}})}$ for every~${k \in \mathbb{N}}$. Moreover, since each ${S(t^\epsilon_{i_k-1}t^\epsilon_{i_k})}$ is a separator of size~$d_\epsilon$, these paths are all internally disjoint. Hence, since every vertex appears in only finitely many parts, by concatenating these paths we get a family of~$d_\epsilon$ many disjoint rays in~$G$. Fix one such family of rays~${( R^\epsilon_j \colon j \in [d_\epsilon] )}$.
We claim that there is an end~$\omega$ of~$G$ such that~${R^\epsilon_j \in \omega}$ for all~${j \in [d_\epsilon]}$. Indeed, if not, then there is a finite vertex set~$X$ separating some pair of rays~$R$ and~$R'$ from the family. However, since each vertex appears in only finitely many parts, there is some~${k \in \mathbb{N}}$ such that~${X \cap V_t = \emptyset}$ for all~${t \in V(T_{t^\epsilon_{i_k-1}})}$. By construction, $R$ and~$R'$ have tails in~${B(t^\epsilon_{i_{k}-1}t^\epsilon_{i_{k}})}$, which is connected and disjoint from~$X$, contradicting the fact that~$X$ separates~$R$ and~$R'$. For every~${k \in \mathbb{N}}$, we define a point function ${\pi^\epsilon_{i_k} \colon [d_\epsilon] \to S(t^\epsilon_{i_k-1}t^\epsilon_{i_k})}$ by letting~$\pi^\epsilon_{i_k}(j)$ be the unique vertex in~${V(R^\epsilon_j) \cap S(t^\epsilon_{i_k-1}t^\epsilon_{i_k})}$. By Lemma~\ref{lem:labelled-wqo-btw} and Remark~\ref{r:increasingsubsequence}, the sequence ${(G[B(t^\epsilon_{i_k-1}t^\epsilon_{i_k})], \pi^\epsilon_{i_k})_{k\in\mathbb{N}}}$ has an increasing subsequence ${(G[B(t^\epsilon_{i-1}t^\epsilon_i)], \pi^\epsilon_i)_{i\in I_\epsilon}}$, i.e.~there exists an~${I_{\epsilon} \subseteq \{ i_k \colon k \in \mathbb{N} \}}$ such that for any~${k,j \in I_\epsilon}$ with~${k < j}$, we have \[ (G[B(t^\epsilon_{k-1}t^\epsilon_k)], \pi^\epsilon_k) \preccurlyeq_p (G[B(t^\epsilon_{j-1}t^\epsilon_j)], \pi^\epsilon_j). \] Let us define ${F_\epsilon = \{ t^\epsilon_{k-1}t^\epsilon_{k} \colon k \in I_\epsilon \} \subseteq E(T)}$. Consider~${T^- = T - \bigcup_{\epsilon\in\Omega(T)} F_\epsilon}$, and let us write~${\mathcal{C}(T^-)}$ for the components of~$T^-$. We claim that every component~${C \in \mathcal{C}(T^-)}$ is a locally finite rayless tree, and hence finite. Indeed, if~$C$ contains a ray~${R \subseteq T}$, then~$R$ is in an end~$\epsilon$ of~$T$ and hence~${F_\epsilon \cap R \neq \emptyset}$, a contradiction. Consequently, each set~${\bigcup_{t\in C} V_t}$ is finite.
Let us define a tree-decomposition~${(T', \mathcal{V}')}$ of~$G$ with ${T' = T / \mathcal{C}(T^-)}$, that is, where we contract each component $C \in \mathcal{C}(T^-)$ to a single vertex and where~${V'_{t'} = \bigcup_{t\in t'} V_t }$. We claim this is an extensive tree-decomposition. Clearly $T'$ is a locally finite tree, each part of~${(T', \mathcal{V}')}$ is finite, and every vertex of~$G$ is contained in only finitely many parts of the tree-decomposition. Given~${e \in E(T')}$, there is some~${\epsilon \in \Omega(T)}$ such that~${e \in F_\epsilon}$. Consider the family of rays ${(R_{e,j} \colon j \in [d_\epsilon] )}$ given by~${R_{e,j} = R^\epsilon_j \cap B(e)}$. Let~$\omega_e$ be the end of~$G$ in which the rays~$R_{e,j}$ lie. There is some~${k \in \mathbb{N}}$ such that~${e = t^\epsilon_{k-1}t^\epsilon_{k}}$. Given~${n \in \mathbb{N}}$, let~${k' \in I_\epsilon}$ be such that there are at least~$n$ indices~${\ell \in I_\epsilon}$ with~${k < \ell < k'}$, and let~${e' = t^\epsilon_{k'-1}t^\epsilon_{k'}}$. Note that,~${e' \in F_\epsilon}$ and hence~${e' \in E(T')}$. Furthermore, by construction~$e'^-$ has distance at least~$n$ from~$e^-$ in~$T'$. Then, since~${G[B(e)] = G[B(t^\epsilon_{k-1}t^\epsilon_{k})]}$ and~${G[B(e')] = G[B(t^\epsilon_{k'-1}t^\epsilon_{k'})]}$, it follows that ${(G[B(e)], \pi^\epsilon_{k}) \preccurlyeq_p (G[B(e')], \pi^\epsilon_{k'})}$, and so suitable subgraphs witness the self-similarity of~${B(e)}$ towards~$\omega_e$ with the rays ${(R_{e,j} \colon j \in [d_\epsilon])}$, as in Lemma~\ref{lem:one-ended-nice}. \end{proof} \begin{remark} If for every~${\ell \in \mathbb{N}}$ the class of $\ell$-pointed locally finite graphs without thick ends is well-quasi-ordered under~$\preccurlyeq_p$, then every locally finite graph without thick ends has an extensive tree-decomposition. This follows by a simple adaptation of the proof above.
\end{remark} \subsection{Sporadic examples} We note that, whilst Lemmas~\ref{lem:finendsisext} and~\ref{lem:boundwidthisext} show that a large class of locally finite graphs have extensive tree-decompositions, for many other graphs it is possible to construct an extensive tree-decomposition `by hand'. In particular, the fact that no graph in these classes has a thick end is an artefact of the method of proof, rather than a necessary condition for the existence of such a tree-decomposition, as is demonstrated by the following examples: \begin{remark} The grid~${\mathbb{Z} \times \mathbb{Z}}$ has an extensive tree-decomposition, which can be seen in Figure~\ref{f:grid}. More explicitly, we can take a ray decomposition of the grid given by a sequence of increasing diamond shaped regions around the origin. It is easy to check that every bough is self-similar towards the end of the grid. A similar argument shows that the half-grid has an extensive tree-decomposition. However, we note that both of these graphs were already shown to be ubiquitous in~\cite{BEEGHPTII}. \end{remark} \begin{figure} \caption{In the grid the boughs are self-similar.} \label{f:grid} \end{figure} In fact, we do not know of any construction of a locally finite connected graph which does not admit an extensive tree-decomposition. \begin{question} \label{qst:loc_fin_admit ex_td} Do all locally finite connected graphs admit an extensive tree-decomposi{-}tion? \end{question} \section{The structure of non-pebbly ends} \label{s:nonpebbly} We will need a structural understanding of how the arbitrarily large families of~$IG$s (for some fixed graph~$G$) can be arranged inside some host graph~$\Gamma$. In particular, we are interested in how the rays of these minors occupy a given end~$\epsilon$ of~$\Gamma$.
In~\cite{BEEGHPTII}, by considering a pebble pushing game played on ray graphs, we established a distinction between \emph{pebbly} and \emph{non-pebbly} ends. Furthermore, we showed that each non-pebbly end is either \emph{grid-like} or \emph{half-grid-like}. \begin{theorem}[{\cite[Theorem 1.2]{BEEGHPTII}}] \label{t:trichotomy} Let~$\Gamma$ be a graph and let~$\epsilon$ be a thick end of~$\Gamma$. Then~$\epsilon$ is either pebbly, half-grid-like or grid-like. \end{theorem} The precise technical definition of such ends is not relevant here; in what follows we will simply need the following results from~\cite{BEEGHPTII}. \begin{corollary}[{\cite[Corollary 5.3]{BEEGHPTII}}] \label{c:pebblyubiq} Let~$\Gamma$ be a graph with a pebbly end~$\epsilon$ and let~$G$ be a countable graph. Then~${\aleph_0 G \preccurlyeq \Gamma}$. \end{corollary} \begin{lemma}[{\cite[Lemma 7.1 and Corollary 7.3]{BEEGHPTII}}] \label{l:gridstructure} Let~$\Gamma$ be a graph with a grid-like end~$\epsilon$. Then there exists an~${N \in \mathbb{N}}$ such that the ray graph for any family~${(R_i \colon i \in I)}$ of disjoint $\epsilon$-rays in~$\Gamma$ with~${|I| \geqslant N+2}$ is a cycle. Furthermore, there is a choice of a cyclic orientation, which we call the \emph{correct orientation}, of each such ray graph such that any transition function between two families of at least~${N + 3}$ disjoint $\epsilon$-rays preserves the correct orientation. \end{lemma} \begin{lemma}[{\cite[Lemma 7.6, Corollary 7.7 and Corollary 7.9]{BEEGHPTII}}] \label{l:halfgridstructure} Let~$\Gamma$ be a graph with a half-grid-like end~$\epsilon$.
Then there exists an~${N \in \mathbb{N}}$ such that the ray graph~$K$ for any family~${(R_i \colon i \in I)}$ of disjoint $\epsilon$-rays in~$\Gamma$ with~${|I| \geqslant N+2}$ contains a bare path with at least~${|I|-N}$ vertices, which we call the \emph{central path} of~$K$, such that the following statements are true: \begin{enumerate} \item For any ${i \in I}$, if ${K - i}$ has precisely two components, each of size at least~${N+1}$, then~$i$ is an inner vertex of the central path of~$K$. \item There is a choice of an orientation, which we call the \emph{correct orientation}, of the central path of each such ray graph such that any transition function between two families of at least~${N + 3}$ disjoint $\epsilon$-rays sends vertices of the central path to vertices of the central path and preserves the correct orientation. \end{enumerate} \end{lemma} By Corollary~\ref{c:pebblyubiq}, if we wish to show that a countable graph~$G$ is $\preccurlyeq$-ubiquitous we can restrict our attention to host graphs~$\Gamma$ where each end is non-pebbly. In that case, by Lemmas~\ref{l:gridstructure} and~\ref{l:halfgridstructure}, for any end~$\epsilon$ of~$\Gamma$ the possible ray graphs, and the possible transition functions between two families of rays, are severely restricted. Later on in our proof we will be able to restrict our attention to a single end~$\epsilon$ of~$\Gamma$ and the proof will split into two cases according to whether~$\epsilon$ is half-grid-like or grid-like. However, the two cases are very similar, with the grid-like case being significantly simpler. Therefore, in what follows we will prove only the results necessary for the case where~$\epsilon$ is half-grid-like, and then later, in Section~\ref{s:gridlike}, we will briefly sketch the differences for the grid-like case.
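To keep a concrete picture in mind, the following elementary example, which we include only for orientation, spells out this structure in the half-grid itself.

\begin{remark}
In the half-grid ${H = \mathbb{Z} \times \mathbb{N}}$, the columns ${R_i = \{ (i,n) \colon n \in \mathbb{N} \}}$ form disjoint rays in the unique end of~$H$. Deleting a column~$R_j$ with~${i_1 < j < i_2}$ separates~$R_{i_1}$ from~$R_{i_2}$, whereas any two consecutive chosen columns are joined by infinitely many disjoint horizontal paths avoiding all other chosen columns. Hence, for~${i_1 < i_2 < \cdots < i_k}$, the ray graph ${\RG( R_{i_1}, \dots, R_{i_k} )}$ is the bare path ${i_1 i_2 \cdots i_k}$, so in this example the whole ray graph may be taken as the central path, with the left-to-right order as the correct orientation.
\end{remark}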
\subsection{Core rays in the half-grid-like case} \label{s:core} By Lemma~\ref{l:halfgridstructure}, in a half-grid-like end~$\epsilon$ every ray graph consists, apart from possibly some bounded number of rays on either end, of a bare path, which comes with a correct orientation that must be preserved by transition functions. However, in the half-grid itself even more can be seen to be true. There is a natural partial order defined on the set of all rays in the half-grid, where two rays are comparable if they have disjoint tails, and a ray~$R$ is less than a ray~$S$ if the tail of~$R$ lies `to the left' of the tail of~$S$ in the half-grid. Then it can be seen that the correct orientations of the central path of any disjoint family of rays can be chosen to agree with this global partial order. In a general half-grid-like end~$\epsilon$ a similar thing will be true, but only for a subset of the rays in the end, which we call the core rays. Let us fix for the rest of this section a graph~$\Gamma$ and a half-grid-like end~$\epsilon$ of~$\Gamma$. By Lemma~\ref{l:halfgridstructure}, there is some~${N \in \mathbb{N}}$ such that all but at most $N$ vertices of the ray graph of any large enough family of disjoint $\epsilon$-rays lie on the \emph{central path}. \begin{definition}[Core rays] \label{d:coreray} Let~$R$ be an $\epsilon$-ray. We say~$R$ is a \emph{core ray (of~$\epsilon$)} if there is a finite family~${\mathcal{R} = ( R_i \colon i \in I)}$ of disjoint $\epsilon$-rays with~${R = R_c}$ for some~${c \in I}$ such that~${\RG(\mathcal{R}) - c}$ has precisely two components, each of size at least~${N+1}$. \end{definition} Note that, by Lemma~\ref{l:halfgridstructure}, such a ray $R_c$ is an inner vertex of the central path of $\RG(\mathcal{R})$. In order to define our partial order on the core rays, we will need to consider what it means for a ray to lie `between' two other rays.
\begin{definition} Given three $\epsilon$-rays $R,S,T$ such that $R,S,T$ have disjoint tails, we say that~$S$ \emph{separates~$R$ from~$T$} if the tails of~$R$ and~$T$ disjoint from~$S$ belong to different ends of~${\Gamma-V(S)}$. \end{definition} \begin{lemma} \label{l:raygraph_separates} Let ${\mathcal{R} = (R_i \colon i \in I)}$ be a finite family of disjoint $\epsilon$-rays and let~${i_1, i_2, j \in I}$. Then~$i_1$ and~$i_2$ belong to different components of~${\RG(\mathcal{R})-j}$ if and only if~$R_j$ separates~$R_{i_1}$ from~$R_{i_2}$. \end{lemma} \begin{proof} Suppose that~$R_{i_1}$ and~$R_{i_2}$ belong to the same end of~${\Gamma-V(R_j)}$, and let~$\mathcal{R}'$ consist of those rays in~${\mathcal{R} \setminus \{R_j\}}$ which belong to this end. Then $\mathcal{R}'$ is a disjoint family of rays in the same end of~${\Gamma-V(R_j)}$ and so by Lemma~\ref{rem:tail-raygraph} the ray graph~${\RG_{{\Gamma-V(R_j)}}(\mathcal{R}')}$ is connected. Moreover, ${\RG_{{\Gamma-V(R_j)}}(\mathcal{R}')}$ is a subgraph of~${\RG_{\Gamma}(\mathcal{R})}$, and so~$i_1$ and~$i_2$ belong to the same component of~${\RG(\mathcal{R})-j}$. Conversely, suppose~$i_1$ and~$i_2$ belong to the same component of~${\RG(\mathcal{R})-j}$. Then, for any two adjacent vertices~$k$ and~$\ell$ in~${\RG(\mathcal{R})-j}$ the rays~$R_k$ and~$R_\ell$ are equivalent in~${\Gamma - V(R_j)}$, and hence~$R_{i_1}$ and~$R_{i_2}$ belong to a common end of~${\Gamma - V(R_j)}$. It follows that~$R_j$ does not separate~$R_{i_1}$ from~$R_{i_2}$. \end{proof} \begin{lemma} \label{l:ray_tripple_seps} If $R,S,T$ are $\epsilon$-rays and~$S$ separates~$R$ from~$T$, then~$T$ does not separate~$R$ from~$S$ and~$R$ does not separate~$S$ from~$T$. \end{lemma} \begin{proof} As~$R$ and~$T$ both belong to~$\epsilon$, there are infinitely many disjoint paths between them. As~$S$ separates~$R$ from~$T$, we know that~$S$ must meet infinitely many of these paths.
Hence, there are infinitely many disjoint paths from~$S$ to~$R$, all disjoint from~$T$. Similarly, there are infinitely many disjoint paths from~$S$ to~$T$, all disjoint from~$R$. Hence~$T$ does not separate~$R$ from~$S$ and~$R$ does not separate~$S$ from~$T$. \end{proof} \begin{lemma} \label{l:centraltwoend} Let~$R$ be a core ray of~$\epsilon$. Then in~${\Gamma - V(R)}$ the end~$\epsilon$ splits into precisely two different ends. (That is, there are two ends~$\epsilon'$ and~$\epsilon''$ of~${\Gamma - V(R)}$ such that every $\epsilon$-ray in~$\Gamma$ which is disjoint from~$R$ is in~$\epsilon'$ or~$\epsilon''$ in~${\Gamma - V(R)}$.) \end{lemma} \begin{proof} Let ${\mathcal{R} = ( R_i \colon i \in I)}$ be a finite family of disjoint $\epsilon$-rays witnessing that~${R = R_c}$ for some~${c \in I}$ is a core ray. Then there are precisely two ends~$\epsilon'$ and~$\epsilon''$ in~${\Gamma-V(R)}$ that contain rays in~$\mathcal{R}$, since the connected components of~${\RG(\mathcal{R})-c}$ are sets of rays that are equivalent in~${\Gamma-V(R)}$, and, by Lemma~\ref{l:raygraph_separates}, the two connected components do not contain rays belonging to the same end of~${\Gamma-V(R)}$. Suppose there is a third end in~${\Gamma-V(R)}$ that contains an $\epsilon$-ray~$S$. We first claim that there is a tail of~$S$ which is disjoint from~${\bigcup \mathcal{R}}$. Indeed, clearly~$S$ is disjoint from~$R$, and if~$S$ met~${\bigcup \mathcal{R}}$ infinitely often, then it would meet some~${R_i \in \mathcal{R}}$ infinitely often, and hence lie in the same end of~${\Gamma-V(R)}$ as~$R_i$. So let~$S'$ be a tail of~$S$ which is disjoint from~${\bigcup \mathcal{R}}$. Let us consider the family~${\mathcal{R}' := \mathcal{R} \cup \{S'\}}$, where the ray~$S'$ is indexed by some additional index~$s$. Since~$S'$ is an $\epsilon$-ray, the ray graph~$\RG(\mathcal{R'})$ is connected.
Furthermore, since the identity on~$I$ is clearly a transition function from~$\mathcal{R}$ to~$\mathcal{R}'$, by Lemma~\ref{l:halfgridstructure}, $c$ is an inner vertex of the central path of~$\RG(\mathcal{R}')$, and hence has degree two. We claim that~$s$ is adjacent to some~${i \neq c}$ in~$\RG(\mathcal{R}')$. Indeed, if not, then~$s$ must be a leaf of~$\RG(\mathcal{R}')$ adjacent to~$c$. In that case, there must be some neighbour~$i$ of~$c$ in~$\RG(\mathcal{R})$ which is not adjacent to~$c$ in~$\RG(\mathcal{R}')$, and then~$s$ must be adjacent to~$i$ in~$\RG(\mathcal{R}')$, contradicting the fact that~$s$ is a leaf. Hence~$s$ is adjacent to some~${i \neq c}$ in~$\RG(\mathcal{R}')$, and so~$s$ clearly lies in the same end of~${\Gamma-V(R)}$ as~$R_i$, and hence in either~$\epsilon'$ or~$\epsilon''$, a contradiction. \end{proof} Hence, every core ray~$R$ splits~$\epsilon$ into two ends. We would like to use this partition to define our partial order on core rays; the core rays in one end will be less than~$R$ and the core rays in the other end will be greater than~$R$. However, if we want this partial order to agree with the correct orientation of the central path for any disjoint family of rays in~$\epsilon$, then every family of rays~${(R_i \colon i \in I)}$ in whose ray graph~${R=R_c}$ is a vertex of the central path determines which end of~${\Gamma-V(R)}$ is less than~$R$ and which is greater than~$R$, and we must make sure that this choice is consistent. So, given a finite family of disjoint $\epsilon$-rays~${{\mathcal R}=(R_i \colon i \in I)}$ in whose ray graph~${R=R_c}$ is a vertex of the central path, we denote by~${\top(R,{\mathcal R})}$ the end of~${\Gamma-V(R)}$ containing the rays~$R_i$ satisfying~${i<c}$, where~$<$ refers to the correct orientation of the vertices of the central path, and by~${\bot(R,{\mathcal R})}$ the end containing the rays~$R_i$ satisfying~${i>c}$. We will show that the labelling with~$\top$ and~$\bot$ is in fact independent of the choice of the family~${\mathcal R}$.
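Before proving this, the following elementary observation, which we include only for orientation, shows what the labelling looks like in the half-grid itself.

\begin{remark}
In the half-grid ${H = \mathbb{Z} \times \mathbb{N}}$ with columns ${R_i = \{ (i,n) \colon n \in \mathbb{N} \}}$, deleting~${V(R_0)}$ splits the unique end~$\epsilon$ of~$H$ into exactly two ends, one containing the columns~$R_i$ with~${i < 0}$ and the other containing those with~${i > 0}$. For a fixed choice of the correct orientation, every finite family~${\mathcal R}$ of columns in whose ray graph~$R_0$ is a vertex of the central path therefore yields the same labelling: ${\top(R_0, {\mathcal R})}$ is the end containing ${R_{-1}, R_{-2}, \dots}$ and ${\bot(R_0, {\mathcal R})}$ the end containing ${R_1, R_2, \dots}$. This is the consistency that Lemma~\ref{l:unique_above} below establishes in general.
\end{remark}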
\begin{definition} Given two (possibly infinite) vertex sets~$X$ and~$Y$ in~$\Gamma$, we say that an end~$\epsilon$ of~${\Gamma-X}$ is a \emph{sub-end} of an end~$\epsilon'$ of~${\Gamma-Y}$ if every ray in~$\epsilon$ has a tail in~$\epsilon'$. \end{definition} \begin{lemma} \label{l:end_distribution} Let~$R$ and~$S$ be disjoint core rays of~$\epsilon$. Let us suppose that~$\epsilon$ splits in~${\Gamma-V(S)}$ into~$\epsilon'_S$ and~$\epsilon''_S$ and in~${\Gamma-V(R)}$ into~$\epsilon'_R$ and~$\epsilon''_R$. If~$R$ belongs to~$\epsilon'_S$ and~$S$ belongs to~$\epsilon'_R$, then~$\epsilon''_S$ is a sub-end of~$\epsilon'_R$ and~$\epsilon''_R$ is a sub-end of~$\epsilon'_S$. \end{lemma} \begin{proof} Let~$T$ be a ray in~$\epsilon''_S$. As~$R$ belongs to a different end of~${\Gamma-V(S)}$ than~$T$, there is a tail~$T'$ of~$T$ which is disjoint from~$R$. As~$S$ separates~$R$ from~$T'$, we know, by Lemma~\ref{l:ray_tripple_seps}, that~$R$ does not separate~$S$ from~$T'$, so~$T'$ belongs to~$\epsilon'_R$. Hence~$\epsilon_S''$ is a sub-end of~$\epsilon_R'$. Proving that~$\epsilon''_R$ is a sub-end of~$\epsilon'_S$ works analogously. \end{proof} \begin{lemmadef} \label{l:unique_above} Let ${\mathcal{R}_1 = ( R_i \colon i \in I_1 )}$ and ${\mathcal{R}_2 = ( R_i \colon i \in I_2 )}$ be two finite families of disjoint $\epsilon$-rays, such that for some~${c \in I_1 \cap I_2}$ the ray~$R_c$ lies on the central path of both $\RG(\mathcal{R}_1)$ and $\RG(\mathcal{R}_2)$. Then~${\top(R_c,{\mathcal R}_1) = \top(R_c,{\mathcal R}_2)}$ and~${\bot(R_c,{\mathcal R}_1) = \bot(R_c,{\mathcal R}_2)}$.
We therefore write~${\top(\epsilon,R_c)}$ for the end~${\top(R_c,{\mathcal R}_1)}$ and~$\bot(\epsilon,R_c)$ accordingly, i.e.\ ${\top(\epsilon, R_c)}$ is the end of~${\Gamma - V(R_c)}$ containing rays that appear on the central path of some ray graph before~$R_c$ according to the correct orientation and~${\bot(\epsilon, R_c)}$ is the end of~${\Gamma - V(R_c)}$ containing rays that appear on the central path of some ray graph after~$R_c$ according to the correct orientation. Note that~${\top(\epsilon,R_c) \cap \bot(\epsilon,R_c) = \emptyset}$. \end{lemmadef} \begin{proof} Let~$\epsilon'$ and~$\epsilon''$ be the two ends of~${\Gamma - V(R_c)}$ and let~${\mathcal R}_1'$ and~${\mathcal R}_2'$ be the set of rays in~${\mathcal R}_1$ and~${\mathcal R}_2$ respectively that belong to~$\epsilon'$, and similarly~${\mathcal R}_1''$ and~${\mathcal R}_2''$ be the set of rays in~${\mathcal R}_1$ and~${\mathcal R}_2$ respectively that belong to $\epsilon''$. Let~${\mathcal S}'$ be the larger of~${\mathcal R}_1'$ and~${\mathcal R}_2'$, and similarly~${\mathcal S}''$ the larger of~${\mathcal R}_1''$ and~${\mathcal R}_2''$. Let us consider the family of rays~${{\mathcal S} := {\mathcal S}' \cup \{R_c\} \cup {\mathcal S}''}$. Since the rays in~${\mathcal S}'$ and~${\mathcal S}''$ belong to different ends of~${\Gamma - V(R_c)}$, we may, after replacing some of the rays with tails, assume that~${\mathcal S}$ is a family of disjoint rays. We claim that there is a transition function~$\sigma_1$ from~${\mathcal R}_1$ to~${\mathcal S}$ which maps~$R_c$ to itself, ${\mathcal R}_1'$ to~${\mathcal S}'$, and~${\mathcal R}_1''$ to~${\mathcal S}''$. Indeed, let us take a finite separator~$X$ which separates~$\epsilon'$ and~$\epsilon''$ in~${\Gamma - V(R_c)}$. By Lemma~\ref{l:trans-link}, there is a finite set~$Y$ such that any linkage after~$Y$ from~${\mathcal R}_1$ to~${\mathcal S}$ is transitional. 
Then, since the rays in~${\mathcal R}_1'$ and~${\mathcal S}'$ belong to the same end of~${\Gamma - V(R_c)}$ and~${|{\mathcal R}_1'| \leqslant |{\mathcal S}'|}$, there is a linkage after~${X \cup Y}$ from~${\mathcal R}_1'$ to~${\mathcal S}'$ in~${\Gamma - V(R_c)}$, and similarly there is a linkage after~${X \cup Y}$ from~${\mathcal R}_1''$ to~${\mathcal S}''$ in~${\Gamma - V(R_c)}$. If we combine these two linkages with a trivial linkage from~$R_c$ to itself after~${X \cup Y}$, we obtain a transitional linkage which induces an appropriate transition function. The same argument shows that there is a transition function~$\sigma_2 $ from~${\mathcal R}_2$ to~${\mathcal S}$ which maps~$R_c$ to itself, ${\mathcal R}_2'$ to~${\mathcal S}'$, and~${\mathcal R}_2''$ to~${\mathcal S}''$. By Lemma~\ref{l:halfgridstructure}, both transition functions map vertices of the central path to vertices of the central path and preserve the correct orientation. In particular, $R_c$ lies on the central path of $\RG(\mathcal{S})$. Moreover, both~$\sigma_1$ and~$\sigma_2$ map $\epsilon'$-rays to $\epsilon'$-rays and $\epsilon''$-rays to $\epsilon''$-rays. Therefore, if ${\epsilon' = \top(R_c,{\mathcal S})}$, then~$\sigma_1$ shows that ${\epsilon' = \top(R_c,{\mathcal R}_1)}$ and~$\sigma_2$ shows that ${\epsilon' = \top(R_c,{\mathcal R}_2)}$, and similarly if~${\epsilon' = \bot(R_c,{\mathcal S})}$. \end{proof} \begin{lemmadef} \label{def:core-order} Let~${\core(\epsilon)}$ denote the set of core rays in~$\epsilon$. We define a partial order~$\leqslant_\epsilon$ on~${\core(\epsilon)}$ by \begin{align*} R \leqslant_\epsilon S &\text{ if and only if either } R = S, \\ &\text{ or } R \text{ and } S \text{ have disjoint tails $xR$ and $yS$ and } xR \in \top(\epsilon, yS) \end{align*} for~${R, S \in \core(\epsilon)}$. \end{lemmadef} \begin{proof} We must show that~$\leqslant_\epsilon$ is indeed a partial order. 
For the anti-symmetry, let us suppose that~$R$ and~$S$ are disjoint rays in~${\core(\epsilon)}$ such that~${R \leqslant_\epsilon S}$ and~${S \leqslant_\epsilon R}$, so that~${R \in \top(\epsilon, S)}$ and~${S \in \top(\epsilon, R)}$. Let~$\mathcal{R}_S$ be a family of rays witnessing that~$S$ is a core ray and~$\mathcal{R}_R$ a family witnessing that~$R$ is a core ray. By Lemma~\ref{l:end_distribution}, ${\bot(\epsilon,S)}$ is a sub-end of~${\top(\epsilon,R)}$ and~${\bot(\epsilon,R)}$ is a sub-end of~${\top(\epsilon,S)}$. Let~$\mathcal{R}_{\bot(S)}$ be the subset of~$\mathcal{R}_S$ consisting of those rays which belong to~${\bot(\epsilon, S)}$. Let~$\mathcal{R}_{\bot(R)}$ be defined accordingly. By replacing rays with tails, we may assume that all rays in~${\mathcal{R} := \mathcal{R}_{\bot(S)} \cup \mathcal{R}_{\bot(R)} \cup \{R\} \cup \{S\}}$ are pairwise disjoint. Note that, by the comment after Definition~\ref{d:coreray}, both $R$ and~$S$ are inner vertices of the central path of~${\RG(\mathcal{R})}$. Thus, either~${S \in \bot(\epsilon,R)}$ or~${R \in \bot(\epsilon,S)}$, contradicting Lemma~\ref{l:unique_above}. For the transitivity, let us suppose that $R,S,T$ are rays in~${\core(\epsilon)}$ such that~${R \leqslant_\epsilon S}$ and~${S \leqslant_\epsilon T}$. We may assume that~$R$ and~$S$, and~$S$ and~$T$ are disjoint. As~$\leqslant_\epsilon$ is anti-symmetric, we have~${T \not \leqslant_\epsilon S}$, hence~${T \in \bot(\epsilon,S)}$. Thus, $R$ and~$T$ belong to different ends of~${\Gamma-V(S)}$, and we may assume that they are also disjoint. As~$S$ therefore separates~$R$ from~$T$, by Lemma~\ref{l:ray_tripple_seps}, we know that~$T$ does not separate~$S$ from~$R$. Thus, $R$ and~$S$ belong to the same end of~${\Gamma-V(T)}$. Hence~${R \in \top(\epsilon,T)}$. \end{proof} \begin{remark} \label{rem:core-remarks} Let~${R, S \in \core(\epsilon)}$ and let~$\mathcal{R}$ be a finite family of disjoint $\epsilon$-rays.
\begin{enumerate} \item \label{rem:core-tail} Any ray which shares a tail with $R$ is also a core ray of~$\epsilon$. \item \label{rem:core-comparable} If~$R$ and~$S$ are disjoint, then~$R$ and~$S$ are comparable under~$\leqslant_\epsilon$. \item \label{rem:core-correct} If~$R$ and~$S$ are on the central path of~${\RG({\mathcal R})}$, then~${R \leqslant_\epsilon S}$ if and only if~$R$ appears before~$S$ in the correct orientation of the central path of~$\RG({\mathcal R})$. \item \label{rem:non-core-bounded} The maximum number of disjoint rays in ${\epsilon \setminus \core(\epsilon)}$ is bounded by~${2N+2}$. \end{enumerate} \end{remark} \begin{lemma} \label{lem:core-exchange} Let~${R, S \in \core(\epsilon)}$ and let~${Z \subseteq V(\Gamma)}$ be a finite set such that~${\top(\epsilon, S)}$ and~${\bot(\epsilon, S)}$ are separated by~$Z$ in~${\Gamma - V(S)}$. Let~${H \subseteq \Gamma - Z}$ be a connected subgraph which is disjoint from~$S$ and contains~$R$, and let~${T \subseteq H}$ be some core $\epsilon$-ray. Then~$S$ is in the same relative $\leqslant_\epsilon$-order to~$T$ as to~$R$. \end{lemma} \begin{proof} Assume~${S \leqslant_\epsilon R}$, so that~${R \in \top(\epsilon, S)}$. Since~$H$ is connected, we obtain that ${T \in \top(\epsilon, S)}$ as well and hence~${S \leqslant_\epsilon T}$. The other case is analogous. \end{proof} By Remark~\ref{rem:core-remarks}\ref{rem:core-correct}, the order~$\leqslant_\epsilon$ agrees with the correct order on the central path, which is preserved by transition functions by Lemma~\ref{l:halfgridstructure}; hence the order~$\leqslant_\epsilon$ is also preserved by transition functions, as long as they map core rays to core rays. In order to guarantee that this holds, before linking a family of core rays~$\mathcal{R}$ we will first enlarge it slightly by adding some `buffer' rays. \begin{lemmadef} \label{def:central-core} Let~$\mathcal{R} = (R_i \colon i \in I)$ be a finite family of disjoint core $\epsilon$-rays.
Then there exists a finite family~$\overline{\mathcal{R}} \supset \mathcal{R}$ of disjoint $\epsilon$-rays such that \begin{itemize} \item For each $i\in I$, the graph $\RG(\overline{\mathcal{R}})-i$ has precisely two components, each of size at least $N+1$; \item Each $i\in I$ is an inner vertex of the central path of~$\RG(\overline{\mathcal{R}})$; \item $|\overline{\mathcal{R}}| = |\mathcal{R}|+2N+2$. \end{itemize} Even though such a family is not unique, we denote by~${\overline{\mathcal{R}}}$ an arbitrary such family. \end{lemmadef} \begin{proof} By Remark~{\ref{rem:core-remarks}\ref{rem:core-comparable}}, the rays in~$\mathcal{R}$ are linearly ordered by~$\leqslant_\epsilon$. Let~$R$ denote the $\leqslant_\epsilon$-greatest and~$S$ denote the $\leqslant_\epsilon$-smallest element of~$\mathcal{R}$. As in the proof of Lemma~\ref{def:core-order}, let~${\mathcal S}_R$ and~${\mathcal S}_S$ be families of disjoint rays witnessing that~$R$ and~$S$ are core rays, and let~${\mathcal S}_{\bot(R)}$ be the subset of rays of~${\mathcal S}_R$ belonging to~${\bot(\epsilon,R)}$ and~${\mathcal S}_{\top(S)}$ be the subset of rays of~${\mathcal S}_S$ belonging to~${\top(\epsilon,S)}$. Note that, by definition both ${\mathcal S}_{\bot(R)}$ and ${\mathcal S}_{\top(S)}$ contain at least $N+1$ rays, and we may in fact assume without loss of generality that they both contain exactly $N+1$ rays. Now~${\mathcal{S}_{\bot(R)} \subseteq \bot(\epsilon, R)}$ and~${R' \in \top(\epsilon, R)}$ for every~${R' \in \mathcal{R} \setminus \{R\}}$, and each ray in~$\mathcal{S}_{\bot(R)}$ has a tail disjoint from~${\bigcup \mathcal{R}}$. Analogously, ${\mathcal{S}_{\top(S)} \subseteq \top(\epsilon, S)}$ and~${R' \in \bot(\epsilon, S)}$ for every~${R' \in \mathcal{R} \setminus \{S\}}$, and each ray in~$\mathcal{S}_{\top(S)}$ has a tail disjoint from~${\bigcup \mathcal{R}}$.
Now, ${\mathcal{S}_{\top(S)} \subseteq \top(\epsilon,R)}$ and ${\mathcal{S}_{\bot(R)} \subseteq \bot(\epsilon,S)}$ by Lemma~\ref{l:end_distribution}, yielding that tails of rays in~$\mathcal{S}_{\top(S)}$ are necessarily disjoint from tails of rays in~$\mathcal{S}_{\bot(R)}$. Let~$\overline{\mathcal{R}}$ be the union of $\mathcal{R}$ with appropriate tails of each ray in $\mathcal{S}_{\bot(R)} \cup \mathcal{S}_{\top(S)}$. Note that $|\overline{\mathcal{R}}| = |\mathcal{R}|+2N+2$. For any ray $R_i \in \mathcal{R}$, we first note that $S \leqslant_\epsilon R_i$ and so $S \in \top(\epsilon,R_i)$ and $R_i \in \bot(\epsilon, S)$. Then, since $\mathcal{S}_{\top(S)} \subseteq \top(\epsilon, S)$ it follows from Lemma~\ref{l:end_distribution} that $\mathcal{S}_{\top(S)} \subseteq \top(\epsilon, R_i)$, and hence one of the components of~${\RG(\overline{\mathcal{R}}) - i}$ has size at least~${N+1}$. A similar argument shows that a second component has size at least~${N+1}$, and, since~$R_i$ is a core ray, by Lemma~\ref{l:centraltwoend} there are no other components of~${\RG(\overline{\mathcal{R}}) - i}$. Finally, by the comment after Definition~\ref{d:coreray}, it follows that $R_i$ is an inner vertex of the central path of this ray graph. \end{proof} \begin{lemma}[{\cite[Lemma 7.10]{BEEGHPTII}}] \label{l:transitionseparate} Let~$\mathcal{R}$ and~$\mathcal{T}$ be families of disjoint rays, each of size at least~${N+3}$, and let~$\sigma$ be a transition function from~${\mathcal R}$ to~${\mathcal T}$. Let~${x \in \RG(\mathcal{R})}$ be an inner vertex of the central path. If~${v_1,v_2\in \RG(\mathcal{R})}$ lie in different components of~${\RG(\mathcal{R})-x}$, then~$\sigma(v_1)$ and~$\sigma(v_2)$ lie in different components of~${\RG({\mathcal T})-\sigma(x)}$. Moreover, $\sigma(x)$ is an inner vertex of the central path of~$\RG({\mathcal T})$.
\end{lemma} \begin{definition} Let~$\mathcal{R}$, $\mathcal{S}$ be finite families of disjoint $\epsilon$-rays and let~$\mathcal{R}'$ be a subfamily of~$\mathcal{R}$ consisting of core rays. A linkage~$\mathcal{P}$ between~$\mathcal{R}$ and~$\mathcal{S}$ is \emph{preserving on~$\mathcal{R}'$} if~$\mathcal{P}$ links~$\mathcal{R}'$ to core rays and preserves the order~$\leqslant_\epsilon$. \end{definition} \begin{lemma}\label{l:core-preserving} Let~${\mathcal{R} = (R_i \colon i \in I)}$ be a finite family of disjoint core $\epsilon$-rays and let ${\mathcal{S} = (S_j \colon j \in J)}$ be a finite family of disjoint $\epsilon$-rays. Let ${\overline{\mathcal{R}}=(R_i \colon i \in \overline{I})}$ be as in Lemma~\ref{def:central-core} and let~$\mathcal{P}$ be a linkage from~$\overline{\mathcal{R}}$ to~$\mathcal{S}$. If~$\mathcal{P}$ is transitional, then it is preserving on~$\mathcal{R}$. \end{lemma} \begin{proof} We first note that, by Lemma~\ref{l:halfgridstructure}, if~$\mathcal{P}$ links the rays in~$\mathcal{R}$ to core rays, then it will be preserving. So, let~${\sigma: \overline{I} \rightarrow J}$ be the transition function induced by~$\mathcal{P}$. For each~${i \in I}$, since~$i$ is an inner vertex of the central path of~${\RG(\overline{\mathcal{R}})}$, by Lemma~\ref{l:transitionseparate}, $\sigma(i)$ is an inner vertex of the central path of~${\RG(\mathcal{S})}$. Since the central path is a bare path, it follows that~${\RG(\mathcal{S}) - \sigma(i)}$ has precisely two components. Furthermore, by Lemma~\ref{def:central-core}, the graph~${\RG(\overline{\mathcal{R}}) - i}$ has precisely two components, each of size at least~${N+1}$, and so by Lemma~\ref{l:transitionseparate} the two components of~${\RG(\mathcal{S}) - \sigma(i)}$ each have size at least~${N+1}$. Hence, the family~$\mathcal{S}$ witnesses that~$S_{\sigma(i)}$ is a core ray. 
\end{proof} \begin{definition} If~$\mathcal{P}$ is a linkage from~$\mathcal{R}$ to~$\mathcal{S}$, then a \emph{sub-linkage} of~${\mathcal P}$ is just a subset of~${\mathcal P}$, considered as a linkage from the corresponding subset of~$\mathcal{R}$ to~$\mathcal{S}$. \end{definition} \begin{remark} \label{rem:transitional-sub} A sub-linkage of a transitional linkage is transitional. \end{remark} The following remarks are a direct consequence of the definitions and Lemma~\ref{l:halfgridstructure}. \begin{remark} \label{rem:core-preserving} Let $\mathcal{R}$ be a finite family of disjoint core $\epsilon$-rays and let $\mathcal{S}$ and $\mathcal{T}$ be finite families of disjoint $\epsilon$-rays. Let $\overline{\mathcal{R}}$ be as in Lemma~\ref{def:central-core} and let~$\mathcal{P}_1$ and~$\mathcal{P}_2$ be linkages from~$\overline{\mathcal{R}}$ to~$\mathcal{S}$ and from~${(\overline{\mathcal{R}} \circ_{\mathcal{P}_1} \mathcal{S})}$ to~$\mathcal{T}$ respectively. \begin{enumerate} \item \label{rem:core-preserving-sub} If~$\mathcal{P}_1$ is preserving on~$\mathcal{R}$, then any~${\mathcal{P}_1' \subseteq \mathcal{P}_1}$ as a linkage between the respective subfamilies is preserving on the respective subfamily of~$\mathcal{R}$. \item \label{rem:core-preserving-concat} If~$\mathcal{P}_1$ is preserving on~$\mathcal{R}$ and~$\mathcal{P}_2$ is preserving on~${\mathcal{R} \circ_{\mathcal{P}_1} \mathcal{S}}$, then the concatenation~${\mathcal{P}_1 + \mathcal{P}_2}$ is preserving on~$\mathcal{R}$. \end{enumerate} \end{remark} \begin{lemma} \label{lem:core-preserving2} Let~$\mathcal{R}$ and~$\mathcal{S}$ be finite families of disjoint core rays of~$\epsilon$ and let~${\mathcal{S}' \subseteq \mathcal{S}}$ be a subfamily of~$\mathcal{S}$ with~${|\mathcal{R}| = |\mathcal{S}'|}$. Then there is a transitional linkage from~$\overline{\mathcal{R}}$ to~$\overline{\mathcal{S}}$ which is preserving on~$\mathcal{R}$ and links the rays in~$\mathcal{R}$ to rays in~$\mathcal{S}'$.
\end{lemma} \begin{proof} Consider~${\mathcal{T} := (\overline{\mathcal{S}} \setminus \mathcal{S}) \cup \mathcal{S}' \subseteq \overline{\mathcal{S}}}$. It is apparent that the family~$\mathcal{T}$ satisfies the conclusions of Lemma~\ref{def:central-core} for~$\mathcal{S}'$. Let~$\sigma$ be some transition function between~$\overline{\mathcal{R}}$ and~$\mathcal{T}$ and let~$\mathcal{P}$ be a linkage inducing this transition function. By Lemma~\ref{l:core-preserving} this linkage is preserving on~$\mathcal{R}$. Note that, since~$\sigma$ is a transition function from~$\overline{\mathcal{R}}$ to~$\mathcal{T}$, it is also a transition function from~$\overline{\mathcal{R}}$ to~$\overline{\mathcal{S}}$, and so~$\mathcal{P}$ is also a preserving, transitional linkage from~$\overline{\mathcal{R}}$ to~$\overline{\mathcal{S}}$. We claim further that~$\mathcal{P}$ links the rays in~$\mathcal{R}$ to the rays in~$\mathcal{S}'$. Indeed, since $|\overline{\mathcal{R}}| = |\mathcal{T}| = |\mathcal{R}| + 2N+2$, we may suppose for a contradiction that there is some $R_i \in \mathcal{R}$ such that $S_{\sigma(i)} \not\in \mathcal{S}'$. Note that, since $i$ is an inner vertex of the central path of $\RG(\overline{\mathcal{R}})$, by Lemma~\ref{l:transitionseparate} $\sigma(i)$ is an inner vertex of the central path of $\RG(\mathcal{T})$, and so in particular $\RG(\mathcal{T}) - \sigma(i)$ has precisely two components. Since for each~${S_j \in \mathcal{S'}}$, $j$ lies on the central path of~$\RG(\mathcal{T})$, if ${S_{\sigma(i)} \not\in \mathcal{S'}}$ then it is clear that~${\RG(\mathcal{T}) - \sigma(i)}$ contains one component of size at least~${|\mathcal{S}'| + N+1 = |\mathcal{R}|+N+1}$. However, since~$i$ is an inner vertex of the central path of~$\RG(\overline{\mathcal{R}})$, by Lemma~\ref{l:transitionseparate} and Lemma~\ref{l:core-preserving} there must be two components of~${\RG(\mathcal{T}) - \sigma(i)}$ of size at least~${N+1}$, a contradiction.
\end{proof} \section{\texorpdfstring{$G$-tribes and concentration of $G$-tribes towards an end}{G-tribes and concentration of G-tribes towards an end}} \label{s:tribes} To show that a given graph~$G$ is $\preccurlyeq$-ubiquitous, we shall assume that~${n G \preccurlyeq \Gamma}$ for every~${n \in \mathbb{N}}$ and need to show that this implies~${\aleph_0 G \preccurlyeq \Gamma}$. To this end we use the following notation for such collections of~$nG$ in~$\Gamma$, which we introduced in~\cite{BEEGHPTI} and~\cite{BEEGHPTII}. \begin{definition}[{$G$-\text{tribe}}s] Let~$G$ and~$\Gamma$ be graphs. \begin{itemize} \item A \emph{{$G$-\text{tribe}}\ in~$\Gamma$ (with respect to the minor relation)} is a family~$\mathcal{F}$ of finite collections~$F$ of disjoint subgraphs~$H$ of~$\Gamma$, such that each \emph{member}~$H$ of~${\mathcal F}$ is an~$IG$. \item A {$G$-\text{tribe}}\ $\mathcal{F}$ in~$\Gamma$ is called \emph{thick} if for each~${n \in \mathbb{N}}$, there is a \emph{layer}~${F \in \mathcal{F}}$ with~${|F| \geqslant n}$; otherwise, it is called \emph{thin}. \item A $G$-tribe~${\mathcal F}$ is \emph{connected} if every member~$H$ of~${\mathcal F}$ is connected. Note that this is the case precisely if~$G$ is connected. \item A {$G$-\text{tribe}}\ $\mathcal{F}'$ in~$\Gamma$ is a \emph{{$G$-\text{subtribe}}}\footnote{When~$G$ is clear from the context we will often refer to a $G$-subtribe as simply a subtribe.} of a {$G$-\text{tribe}}\ $\mathcal{F}$ in~$\Gamma$, denoted by~${{\mathcal F}' \preccurlyeq {\mathcal F}}$, if there is an injection~${\Psi \colon {\mathcal F}' \to {\mathcal F}}$ such that for each~${F' \in \mathcal{F}'}$, there is an injection ${\varphi_{F'} \colon F' \to \Psi(F')}$ with~${V(H') \subseteq V(\varphi_{F'}(H'))}$ for every~${H' \in F'}$.
The {$G$-\text{subtribe}}~${\mathcal F}'$ is called \emph{flat}, denoted by~${{\mathcal F}' \subseteq {\mathcal F}}$, if there is such an injection~$\Psi$ satisfying~${F' \subseteq \Psi(F')}$. \item A thick {$G$-\text{tribe}}\ $\mathcal{F}$ in~$\Gamma$ is \emph{concentrated at an end~$\epsilon$} of~$\Gamma$ if for every finite vertex set~$X$ of~$\Gamma$, the {$G$-\text{tribe}}\ ${{\mathcal F}_X = \{F_X \colon F \in {\mathcal F} \}}$ consisting of the layers \[ {F_X = \{H \in F \colon H \not\subseteq C(X,\epsilon)\} \subseteq F} \] is a thin subtribe of~${\mathcal F}$. \end{itemize} \end{definition} We note that, if~$G$ is connected, every thick $G$-tribe~${\mathcal F}$ contains a thick subtribe~${\mathcal F}'$ such that every~${H \in \bigcup{\mathcal F}'}$ is a tidy~$IG$. We will use the following lemmas from~\cite{BEEGHPTI}. \begin{lemma}[Removing a thin subtribe, {\cite[Lemma 5.2]{BEEGHPTI}}] \label{l:removethin} Let~${\mathcal F}$ be a thick $G$-tribe in~$\Gamma$ and let~${\mathcal F}'$ be a thin subtribe of~${\mathcal F}$, witnessed by~${\Psi \colon {\mathcal F}'\to {\mathcal F}}$ and~${(\varphi_{F'} \colon F' \in \mathcal{F}')}$. For~${F \in {\mathcal F}}$, if~${F \in \Psi({\mathcal F}')}$, let~${\Psi^{-1}(F) = \{F'_F\}}$ and set~${\hat{F} = \varphi_{F'_F}(F'_F)}$. If~${F \notin \Psi({\mathcal F}')}$, set~${\hat{F} = \emptyset}$. Then \[ {\mathcal F}'' := \{F\setminus \hat{F}\colon F\in {\mathcal F}\} \] is a thick flat $G$-subtribe of~${\mathcal F}$. \end{lemma} \begin{lemma}[Pigeon hole principle for thick $G$-tribes, {\cite[Lemma 5.3]{BEEGHPTI}}] \label{Lem_finitechoice} Let~${k \in \mathbb{N}}$ and let~${c \colon \bigcup \mathcal{F} \to [k]}$ be a $k$-colouring of the members of some thick $G$-tribe~$\mathcal{F}$ in~$\Gamma$. Then there is a monochromatic, thick, flat $G$-subtribe~$\mathcal{F}'$ of~$\mathcal{F}$.
\end{lemma} \begin{lemma}[{\cite[Lemma 5.4]{BEEGHPTI}}] \label{l:concentrated} Let~$G$ be a connected graph and~$\Gamma$ a graph containing a thick connected $G$-tribe~$\mathcal{F}$. Then either ${\aleph_0 G \preccurlyeq \Gamma}$, or there is a thick flat subtribe~${\mathcal F}'$ of~${\mathcal F}$ and an end~$\epsilon$ of~$\Gamma$ such that~${\mathcal F}'$ is concentrated at~$\epsilon$. \end{lemma} \begin{lemma}[{\cite[Lemma 5.5]{BEEGHPTI}}] \label{lem_subtribesinheritconcentration} Let~$G$ be a connected graph and~$\Gamma$ a graph containing a thick connected $G$-tribe~$\mathcal{F}$ concentrated at an end~$\epsilon$ of~$\Gamma$. Then the following assertions hold: \begin{enumerate} \item For every finite set~$X$, the component~${C(X,\epsilon)}$ contains a thick flat $G$-subtribe of~${\mathcal F}$. \item Every thick subtribe~${\mathcal F}'$ of~${\mathcal F}$ is concentrated at~$\epsilon$. \end{enumerate} \end{lemma} The following lemma from~\cite{BEEGHPTII} shows that we can restrict ourselves to thick $G$-tribes which are concentrated at thick ends. \begin{lemma}[{\cite[Lemma 8.7]{BEEGHPTII}}] \label{l:concentratedatthin} Let~$G$ be a connected graph and~$\Gamma$ a graph containing a thick $G$-tribe ${\mathcal F}$ concentrated at an end~${\epsilon \in \Omega(\Gamma)}$ which is thin. Then~${\aleph_0 G \preccurlyeq \Gamma}$. \end{lemma} Given an extensive tree-decomposition ${(T,\mathcal{V})}$ of~$G$, broadly our strategy will be to obtain a family of disjoint~$IG$'s by choosing a sequence of initial subtrees ${T_0 \subseteq T_1 \subseteq \ldots}$ such that~${\bigcup T_i = T}$ and constructing inductively a family of finitely many~${IG(T_{k+1})}$'s which extend the~${IG(T_{k})}$'s built previously (cf.~Definition~\ref{defn_treestuff}).
The extensiveness of the tree-decomposition will ensure that, at each stage, there will be some edges in~${\partial(T_i) = E(T_i, T \setminus T_i)}$, each of which has in~$G$ a family of rays~$\mathcal{R}_e$ along which the graph $G$ displays self-similarity. In order to extend our~$IG(T_k)$ at each step, we will want to assume that the~$IG$s in our thick $G$-tribe~$\mathcal{F}$ lie in a `uniform' manner in the graph~$\Gamma$ in terms of these rays~$\mathcal{R}_e$. More specifically, for each edge~${e \in \partial(T_i)}$, the rays~$\mathcal{R}_e$ provided by the extensive tree-decomposition in Definition~\ref{d:extensive} tend to a common end~$\omega_e$ in~$G$, and for each~${H \in \bigcup \mathcal{F}}$, the corresponding rays in~$H$ converge to an end~${H(\omega_e) \in \Omega(\Gamma)}$ (cf.~Definition~\ref{def:H(omega)}), which might either be the end~$\epsilon$ of~$\Gamma$ at which~$\mathcal{F}$ is concentrated, or another end of~$\Gamma$. We would like our {$G$-\text{tribe}}~$\mathcal{F}$ to make a consistent choice across all members $H$ of $\mathcal{F}$ of whether~$H(\omega_e)$ is~$\epsilon$, for each~${e \in \partial(T_i)}$. Furthermore, if~${H(\omega_e) = \epsilon}$ for every~${H \in \bigcup \mathcal{F}}$, then this imposes some structure on the end~$\omega_e$ of~$G$. More precisely, by~\cite[Lemma~10.1]{BEEGHPTII}, we may assume that~${\RG_H(H^{\downarrow}(\mathcal{R}_e))}$ is a path for each member~$H$ of the $G$-tribe~$\mathcal{F}$, or else we immediately find that $\aleph_0 G \preccurlyeq \Gamma$ and are done. By moving to a thick subtribe, we may assume that every $\epsilon$-ray in $H$ is a core ray for every~${H \in \bigcup \mathcal{F}}$, in which case~$\leqslant_\epsilon$ imposes a linear order on every family of rays~$H^{\downarrow}(\mathcal{R}_e)$, which induces one of the two distinct orientations of the path~${\RG_H(H^{\downarrow}(\mathcal{R}_e))}$. We will also want our tribe~$\mathcal{F}$ to induce this orientation in a consistent manner.
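For a schematic illustration of this orientation issue, suppose that ${H^{\downarrow}(\mathcal{R}_e) = (R_1, R_2, R_3)}$ and that the ray graph ${\RG_H(H^{\downarrow}(\mathcal{R}_e))}$ is the path ${R_1 R_2 R_3}$. If all three rays are core rays, then the restriction of~$\leqslant_\epsilon$ to this family is either ${R_1 \leqslant_\epsilon R_2 \leqslant_\epsilon R_3}$ or ${R_3 \leqslant_\epsilon R_2 \leqslant_\epsilon R_1}$, one for each of the two orientations of the path. Two members~$H$ and~$H'$ of the tribe then orient~$\mathcal{R}_e$ consistently precisely when the orders induced on~${S(e)}$ via~${H^{\downarrow}(\mathcal{R}_e)}$ and via~${H'^{\downarrow}(\mathcal{R}_e)}$ coincide.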
Let us make the preceding discussion precise with the following definitions: \begin{definition} \label{d:tribes} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition~${(T,\mathcal {V})}$ and~$S$ be an initial subtree of~$T$. Let~${H \subseteq \Gamma}$ be a tidy~$IG$, $\mathcal{H}$ be a set of tidy~$IG$s in~$\Gamma$, and~$\epsilon$ an end of~$\Gamma$. \begin{itemize} \item Given an end~$\omega$ of~$G$, we say that~$\omega$ \emph{converges to~$\epsilon$ according to~$H$} if $H(\omega) = \epsilon$ (cf.~Definition~\ref{def:H(omega)}). The end~$\omega$ \emph{converges to~$\epsilon$ according to~$\mathcal{H}$} if it converges to~$\epsilon$ according to every element of~$\mathcal{H}$. We say that~$\omega$ is \emph{cut from~$\epsilon$ according to~$H$} if~$H(\omega) \neq \epsilon$. The end~$\omega$ is \emph{cut from~$\epsilon$ according to~$\mathcal{H}$} if it is cut from~$\epsilon$ according to every element of~$\mathcal{H}$. Finally, we say that~$\mathcal{H}$ \emph{determines whether~$\omega$ converges to~$\epsilon$} if either~$\omega$ converges to~$\epsilon$ according to~$\mathcal{H}$ or~$\omega$ is cut from~$\epsilon$ according to~$\mathcal{H}$. \item Given~${E \subseteq E(T)}$, we say~$\mathcal{H}$ \emph{weakly agrees about~$E$} if for each~${e \in E}$, the set~$\mathcal{H}$ determines whether~$\omega_e$ (cf.~Definition~\ref{d:extensive}) converges to~$\epsilon$.
If~$\mathcal{H}$ weakly agrees about~${\partial(S)}$ we let \begin{align*} \partial_\epsilon (S) &:= \{e \in \partial (S) \colon \omega_e \text{ converges to~$\epsilon$ according to } \mathcal{H} \}\,, \\ \partial_{\neg \epsilon} (S) &:= \{e \in \partial (S) \colon \omega_e \text{ is cut from~$\epsilon$ according to } \mathcal{H} \} \, , \end{align*} and write \begin{align*} S^{\neg \epsilon} &\text{ for the component of the forest } T - \partial_\epsilon (S) \text{ containing the root of } T\,, \\ S^{\epsilon} &\text{ for the component of the forest } T - \partial_{\neg \epsilon} (S) \text{ containing the root of } T\,. \end{align*} Note that~${S = S^{\neg \epsilon} \cap S^{\epsilon}}$. \item We say that~$\mathcal{H}$ is \emph{well-separated from~$\epsilon$ at~$S$} if~$\mathcal{H}$ weakly agrees about~${\partial(S)}$ and~${H(S^{\neg \epsilon})}$ can be separated from~$\epsilon$ in~$\Gamma$ for all elements~${H \in \mathcal{H}}$, i.e.~for every~$H$ there is a finite~${X \subseteq V(\Gamma)}$ such that~${H(S^{\neg \epsilon}) \cap C_\Gamma(X,\epsilon) = \emptyset}$. \end{itemize} In the case that~$\epsilon$ is half-grid-like, we say that~$\mathcal{H}$ \emph{strongly agrees} about~$\partial(S)$ if \begin{itemize} \item it weakly agrees about~${\partial(S)}$; \item for each~${H \in \mathcal{H}}$, every $\epsilon$-ray~${R \subseteq H}$ is in~${\core(\epsilon)}$; \item for every~${e \in \partial_\epsilon(S)}$, there is a linear order~$\leqslant_{\mathcal{H},e}$ on~${S(e)}$ (cf.~Definition~\ref{d:extensive}), such that the order induced on~${H^{\downarrow}(\mathcal{R}_e)}$ by~$\leqslant_{\mathcal{H},e}$ agrees with~$\leqslant_\epsilon$ on~${H^{\downarrow}(\mathcal{R}_e)}$ for all~${H \in \mathcal{H}}$.
\end{itemize} If~${\mathcal F}$ is a thick $G$-tribe concentrated at an end~$\epsilon$, we use these terms in the following way: \begin{itemize} \item Given~${E \subseteq E(T)}$, we say that~${\mathcal F}$ \emph{weakly agrees about~$E$} if~${\bigcup {\mathcal F}}$ weakly agrees about~$E$ w.r.t.~$\epsilon$. \item We say that~${\mathcal F}$ is \emph{well-separated from~$\epsilon$ at~$S$} if~${\bigcup {\mathcal F}}$ is. \item We say that~${\mathcal F}$ \emph{strongly agrees} about~${\partial(S)}$ if~${\bigcup {\mathcal F}}$ does. \end{itemize} \end{definition} For ease of presentation, when a $G$-tribe $\mathcal{F}$ strongly agrees about $\partial(S)$ we will write $\leqslant_{\mathcal{F},e}$ for $\leqslant_{\bigcup \mathcal{F},e}$. \begin{remark} \label{r:hereditaryprop} The properties of weakly agreeing about~$E$, being well-separated from~$\epsilon$, and strongly agreeing about~$\partial(S)$ are all preserved under taking subsets, and hence under taking flat subtribes. \end{remark} Note that by the pigeon hole principle for thick $G$-tribes, given a finite edge set~${E \subseteq E(T)}$, any thick $G$-tribe~${\mathcal F}$ concentrated at~$\epsilon$ has a thick (flat) subtribe which weakly agrees about~$E$. The next few lemmas show that, with some slight modification, we may restrict to a further subtribe which strongly agrees about~$E$ and is also well-separated from~$\epsilon$. \begin{definition}[{\cite[Lemma 3.5]{BEEGHPTII}}] \label{d:linear} Let~$\omega$ be an end of a graph~$G$. We say~$\omega$ is \emph{linear} if~${\RG(\mathcal{R})}$ is a path for every finite family~$\mathcal{R}$ of disjoint $\omega$-rays. \end{definition} \begin{lemma}[{\cite[Lemma 10.1]{BEEGHPTII}}] \label{l:linearsubtribe} Let~$\epsilon$ be a non-pebbly end of~$\Gamma$ and let~${\mathcal F}$ be a thick $G$-tribe, such that for every~${H \in \bigcup {\mathcal F}}$, there is an end~${\omega_H \in \Omega(G)}$ such that~${H(\omega_H) = \epsilon}$.
Then there is a thick flat subtribe~${\mathcal F}'$ of~${\mathcal F}$ such that~$\omega_H$ is linear for every~${H \in \bigcup {\mathcal F}'}$. \end{lemma} \begin{corollary} \label{c:linearend} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition ${(T,\mathcal {V})}$, $S$ an initial subtree of~$T$, and let~${\mathcal F}$ be a thick $G$-tribe which is concentrated at a non-pebbly end~$\epsilon$ of a graph~$\Gamma$ and weakly agrees about~${\partial(S)}$. Then~$\omega_e$ is linear for every~${e \in \partial_\epsilon(S)}$. \end{corollary} \begin{proof} For any~${e \in \partial_\epsilon(S)}$ apply Lemma~\ref{l:linearsubtribe} to~${\mathcal F}$ with~${\omega_H = \omega_e}$ for each~${H \in \bigcup {\mathcal F}}$. \end{proof} \begin{lemma}\label{lem:stronglyagreesubtribe} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition ${(T, \mathcal{V})}$ and let~$S$ be an initial subtree of~$T$ with~${\partial(S)}$ finite. Let~${\mathcal F}$ be a thick $G$-tribe in a graph~$\Gamma$, which weakly agrees about~${\partial(S) \subseteq E(T)}$, concentrated at a half-grid-like end~$\epsilon$ of~$\Gamma$. Then~${\mathcal F}$ has a thick flat subtribe~${\mathcal F}'$ so that~${\mathcal F}'$ strongly agrees about~${\partial(S)}$. \end{lemma} \begin{proof} Since~$\epsilon$ is half-grid-like, there is some~${N \in \mathbb{N}}$ as in Lemma~\ref{l:halfgridstructure}. Then, by Remark~\ref{rem:core-remarks}\ref{rem:non-core-bounded}, given any family of~$m$ disjoint $\epsilon$-rays, at least~${m-2N-2}$ of them are core rays. Thus, since all members of a layer~$F$ of~${\mathcal F}$ are disjoint, at least~${|F|-2N-2}$ members of~$F$ do not contain any $\epsilon$-ray which is not core. Thus, there is a thick flat subtribe~${\mathcal F}^*$ of~${\mathcal F}$ such that all $\epsilon$-rays in members of~${\mathcal F}^*$ are core.
Given a member~$H$ of~${\mathcal F}^*$ and~${e \in \partial_\epsilon(S)}$, we consider the order~$\leqslant_{H,e}$ induced on~${S(e)}$ by the order~$\leqslant_\epsilon$ on~${H^{\downarrow}(\mathcal{R}_e)}$. Let~$O_e$ be the set of potential orders on~${S(e)}$, which is finite since~${S(e)}$ is finite\footnote{Note that there are in fact at most two orders of~${S(e)}$ induced by one of the members of~${\mathcal F}^*$ since~$\omega_e$ is linear by Corollary~\ref{c:linearend}.}. Consider the colouring~${c \colon \bigcup {\mathcal F}^* \to \prod_{e \in \partial_\epsilon(S)} O_e}$ where we map every~$H$ to the product of the orders~$\leqslant_{H,e}$ it induces. By the pigeon hole principle for thick $G$-tribes, Lemma~\ref{Lem_finitechoice}, there is a monochromatic, thick, flat $G$-subtribe~${\mathcal F}'$ of~${\mathcal F}^*$. We can now set $\leqslant_{{\mathcal F}',e} := \leqslant_{H,e}$ for some~${H \in \bigcup {\mathcal F}'}$. Then, by Remark~\ref{r:hereditaryprop}, this order~$\leqslant_{{\mathcal F}',e}$ witnesses that~${\mathcal F}'$ is a thick flat subtribe of~${\mathcal F}$ which strongly agrees about~${\partial(S)}$. \end{proof} \begin{lemma} \label{l:push-away} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition~${(T, \mathcal{V})}$. Let~${H \subseteq \Gamma}$ be a tidy~$IG$ and~$\epsilon$ an end of~$\Gamma$. Let~$e$ be an edge of~$T$ such that~${H(\omega_e) \neq \epsilon}$. Then there is a finite set~${X \subseteq V(\Gamma)}$ such that for every finite~${X' \supseteq X}$, there exists a push-out ${H_e = H(G[A(e)]) \oplus_{H^\downarrow({\mathcal R}_e)} W_e}$ of~$H$ along~$e$ to some depth~${n \in \mathbb{N}}$ so that ${C_\Gamma(X', H(\omega_e)) \neq C_\Gamma(X', \epsilon)}$ and ${W_e \subseteq C_\Gamma(X', H(\omega_e))}$. \end{lemma} \begin{proof} Let~${X \subseteq V(\Gamma)}$ be a finite vertex set such that~${C_\Gamma(X, H(\omega_e)) \neq C_\Gamma(X, \epsilon)}$.
Then ${C_\Gamma(X', H(\omega_e)) \neq C_\Gamma(X', \epsilon)}$ holds for any finite vertex set~${X' \supseteq X}$. Furthermore, since~$X'$ is finite, there are only finitely many~${v \in V(G)}$ whose branch sets~${H(v)}$ meet~$X'$. By extensiveness, every vertex of~$G$ is contained in only finitely many parts of the tree-decomposition, and so there exists an $n \in \mathbb{N}$ such that whenever $e' \in E(T_{e^+})$ is such that ${\dist(e^-,e'^-)\geqslant n}$, then \[ H(G[B(e')]) \cap X' = \emptyset, \; \text{and so} \; H(G[B(e')]) \subseteq C_\Gamma(X', H(\omega_e)). \] Since ${(T,\mathcal{V})}$ is an extensive tree-decomposition, there is a witness~$W$ of the self-similarity of~${B(e)}$ at distance at least~$n$. Then by Definition~\ref{d:pushout} and Lemma~\ref{lem:pushout2}, there is a push-out ${H_e = H(G[A(e)]) \oplus_{H^\downarrow({\mathcal R}_e)} H(W)}$ of~$H$ along~$e$ to depth~$n$. Let~${W_e = H(W)}$; then by Definition~\ref{d:pushout}, $V(W_e) \subseteq V(H(G[B(e')]))\subseteq C_\Gamma(X', H(\omega_e))$. \end{proof} \begin{lemma} \label{l:induction-start} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition ${(T,\mathcal {V})}$ with root~$r \in T$. Let~$\Gamma$ be a graph and~${\mathcal F}$ a thick $G$-tribe concentrated at a half-grid-like end~$\epsilon$ of~$\Gamma$. Then there is a thick subtribe~${\mathcal F}'$ of~${\mathcal F}$ such that \begin{enumerate}[label=(\arabic*)] \item \label{item:ind-start-1} ${\mathcal F}'$ is concentrated at~$\epsilon$. \item \label{item:ind-start-2} ${\mathcal F}'$ strongly agrees about~${\partial (\{r\})}$. \item \label{item:ind-start-3} ${\mathcal F}'$ is well-separated from~$\epsilon$ at~${\{r\}}$. \end{enumerate} \end{lemma} \begin{proof} Since~$T$ is locally finite, ${d(r)}$ is also finite, and, by choosing a thick flat subtribe of~${\mathcal F}$, we may assume that~${\mathcal F}$ weakly agrees about~${\partial(\{r\})}$.
Moreover, by Lemma~\ref{lem:stronglyagreesubtribe}, we may even assume that~${\mathcal F}$ strongly agrees about~${\partial(\{r\})}$. Using Lemma~\ref{lem_subtribesinheritconcentration}(2), this ${\mathcal F}$ then satisfies \ref{item:ind-start-1} and \ref{item:ind-start-2}. So, it remains to arrange for \ref{item:ind-start-3}: For every member~$H$ of~${\mathcal F}$, and for every~${e \in \partial_{\neg \epsilon}(\{r\})}$, there exists, by Lemma~\ref{l:push-away}, a finite set~${X_e \subseteq V(\Gamma)}$, such that for every finite vertex set~${X' \supseteq X_e}$ there is a push-out ${H_e = H(G[A(e)]) \oplus_{H^\downarrow({\mathcal R}_e)} W_e}$ of~$H$ along~$e$, so that ${C_\Gamma(X', H(\omega_e)) \neq C_\Gamma(X', \epsilon)}$ and ${W_e \subseteq C_\Gamma(X', H(\omega_e))}$. Let~$X$ be the union of all these~$X_e$ together with~${H(\{r\})}$. For each~${e \in \partial_{\neg \epsilon}(\{r\})}$, let~$H_e$ be the push-out whose existence is guaranteed by the above with respect to this set~$X$. Let us define an~$IG$ \[ H' := \bigcup_{e \in \partial_{\neg \epsilon} (\{r\})} \mkern-18mu H_e \left( \{r\}^{\epsilon} \cup T_{e^+} \right). \] It is straightforward, although somewhat lengthy, to check that this is indeed an~$IG$ and so we will not do this in detail. Briefly, this can be deduced from multiple applications of Definition~\ref{d:amalgamation}, and, since each ${H_e(G[A(e)])}$ extends~${H(G[A(e)])}$ fixing~${A(e) \setminus S(e)}$, all that we need to check is that the extra vertices added to the branch sets of vertices in~${S(e)}$ are distinct for each edge~$e$. However, this follows from Definition~\ref{d:pushout}, since these vertices come from~${\bigcup H^\downarrow({\mathcal R}_e)}$ and the rays~$R_{e,s}$ and~$R_{e',s'}$ are disjoint except in their initial vertex when~${s = s'}$. Let~${\mathcal F}'$ be the tribe given by~${\{F' \colon F \in {\mathcal F} \}}$, where~${F' = \{ H' \colon H \in F\}}$ for each~${F \in {\mathcal F}}$.
We claim that~${\mathcal F}'$ satisfies the conclusion of the lemma. Firstly, by Lemma~\ref{lem_subtribesinheritconcentration}(2), ${\mathcal F}'$ is concentrated at~$\epsilon$, i.e.~\ref{item:ind-start-1} holds. Next, we claim that~$\mathcal{F}'$ strongly agrees about~${\partial(\{r\})}$. Indeed, by construction for each~${e \in \partial_{\neg \epsilon}(\{r\})}$ we have ${W_e \subseteq C_\Gamma(X, H(\omega_e))}$, and hence~$\omega_e$ is cut from~$\epsilon$ according to~$H'$. Furthermore, by construction ${H(\{r\}^{\epsilon}) \setminus X = H'(\{r\}^{\epsilon}) \setminus X}$ and so~$\omega_e$ converges to~$\epsilon$ according to~$H'$ for every~${e \in \partial_{\epsilon}(\{r\})}$. In fact, ${H^{\downarrow}(\mathcal{R}_e) = H'^{\downarrow}(\mathcal{R}_e)}$ for every~${e \in \partial_{\epsilon}(\{r\})}$. Finally, since ${H' \subseteq H}$, and~${\mathcal F}$ strongly agrees about~${\partial(\{r\})}$, it follows that every $\epsilon$-ray in~$H'$ is in $\core(\epsilon)$, and so \ref{item:ind-start-2} holds. It remains to show that~${\mathcal F}'$ is well-separated from~$\epsilon$ at~${\{r\}}$. However, $H'(\{r\}^{\neg \epsilon}) \setminus \bigcup_{e \in \partial_{\neg \epsilon}(\{r\})} W_e$ is finite, and each $W_e$ is separated from $\epsilon$ by $X$. Hence, there is some finite set $Y$ separating $H'(\{r\}^{\neg \epsilon})$ from $\epsilon$, and so \ref{item:ind-start-3} holds. \end{proof} \begin{lemma}[Well-separated push-out] \label{l:separatedpushout} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition~${(T, \mathcal{V})}$. Let~${H \subseteq \Gamma}$ be a tidy~$IG$ and~$\epsilon$ an end of~$\Gamma$. Let~$S$ be a finite initial subtree of~$T$, such that~${\{H\}}$ is well-separated from~$\epsilon$ at~$S$, and let~${f \in \partial_\epsilon(S)}$.
Then there exists a push-out~$H'$ of~$H$ along~$f$ to depth~$0$ (see Definition~\ref{d:pushout}) such that~${\{H'\}}$ is well-separated from~$\epsilon$ at~${\tilde S := S + f \subseteq T}$. \end{lemma} \begin{proof} Let ${X' \subseteq V(\Gamma)}$ be a finite set with~${H(S^{\neg\epsilon}) \cap C_\Gamma(X', \epsilon) = \emptyset}$. If ${\partial_{\neg \epsilon} (\tilde S) \setminus \partial(S) = \emptyset}$, then~${H' = H}$ satisfies the conclusion of the lemma, hence we may assume that~${\partial_{\neg \epsilon} (\tilde S) \setminus \partial(S)}$ is non-empty. By applying Lemma~\ref{l:push-away} to every~${e \in \partial_{\neg \epsilon} (\tilde S) \setminus \partial(S)}$, we obtain a finite set~${X \supseteq X'}$ and a family ${(H_e \colon e \in \partial_{\neg \epsilon} (\tilde S) \setminus \partial(S))}$ where each~${H_e = H(G[A(e)]) \oplus_{H^\downarrow({\mathcal R}_e)} W_e}$ is a push-out of~$H$ along~$e$ such that~${W_e \subseteq C_{\Gamma}(X,H(\omega_e)) \neq C_{\Gamma}(X,\epsilon)}$. Let \[ H' := \mkern-24mu \bigcup_{e \in \partial_{\neg \epsilon} (\tilde S)\setminus \partial(S)} \mkern-24mu H_e \left( S^{\epsilon} \cup T_{e^+} \right). \] As before, it is straightforward to check that~$H'$ is an~$IG$, and that~$H'$ is a push-out of~$H$ along~$f$ to depth~$0$. We claim that~$H'$ is well-separated from~$\epsilon$ at~${\tilde S}$. Since $X'$ separates $H(S^{\neg \epsilon})$ from $\epsilon$, and $\partial_{\neg \epsilon} (\tilde S)\setminus \partial(S)$ is finite, it will be sufficient to show that for each $e \in \partial_{\neg \epsilon} (\tilde S)\setminus \partial(S)$, there is a finite set $X_e$ which separates $H'(G[B(e)])$ from $\epsilon$ in $\Gamma$. However, by construction $X$ separates $W_e$ from $\epsilon$, and $H'(G[B(e)]) \setminus W_e$ is finite, and so the claim follows. \end{proof} The following lemma contains a large part of the work needed for our inductive construction.
The idea behind the statement is the following: At step~$n$ in our construction, we will have a thick $G$-tribe~${\mathcal F}_n$ which agrees about~${\partial(T_n)}$, where~$T_n$ is an initial subtree of the decomposition tree~$T$ with finite~${\partial(T_n)}$, which will allow us to extend our~${IG(T_n)}$'s to~${IG(T_{n+1})}$'s, where~$T_{n+1}$ is a larger initial subtree of~$T$, again with finite~${\partial(T_{n+1})}$. In order to perform the next stage of our construction, we will need to `refine'~${\mathcal F}_n$ to a thick $G$-tribe~${\mathcal F}_{n+1}$ which agrees about~${\partial(T_{n+1})}$. This would be a relatively simple application of the pigeon hole principle for $G$-tribes, Lemma~\ref{Lem_finitechoice}, except that, in our construction, we cannot extend by a member of~${\mathcal F}_{n+1}$ naively. Indeed, suppose we wish to use an~$IG$, say~$H$, to extend an~$IG(T_{n})$ to an~$IG(T_{n+1})$. There is some subgraph, ${H(T_{n+1}\setminus T_n)}$, of~$H$ which is an~${IG(T_{n+1}\setminus T_n)}$; however, in order to use this to extend the~${IG(T_{n})}$ we first have to link the branch sets of the boundary vertices to this subgraph, and there may be no way to do so without using other vertices of~${H(T_{n+1}\setminus T_n)}$. For this reason, we will ensure the existence of an `intermediate $G$-tribe'~${\mathcal F}^*$, which has the property that for each member~$H$ of~${\mathcal F}^*$, there are push-outs of~$H$ at arbitrary depth which are members of~${\mathcal F}_{n+1}$. This allows us to first link our~${IG(T_{n})}$ to some~${H \in {\mathcal F}^*}$ and then choose a push-out~${H' \in {\mathcal F}_{n+1}}$ of~$H$, such that~${H'(T_{n+1}\setminus T_n)}$ avoids the vertices we used in our linkage.
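In summary, one extension step will informally take the following shape: first, link the boundary rays of the current ${IG(T_n)}$'s to some member~$H$ of the intermediate tribe~${\mathcal F}^*$; second, since this linkage uses only finitely many vertices, choose a push-out~${H' \in {\mathcal F}_{n+1}}$ of~$H$ to a depth large enough that ${H'(T_{n+1}\setminus T_n)}$ avoids all of these vertices; finally, extend the ${IG(T_n)}$'s through the linkage into ${H'(T_{n+1}\setminus T_n)}$ to obtain ${IG(T_{n+1})}$'s.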
\begin{lemma}[$G$-tribe refinement lemma] \label{lem:refinement-lemma} Let~$G$ be a connected locally finite graph with an extensive tree-decomposition~${(T,\mathcal {V})}$, let~$S$ be an initial subtree of~$T$ with~${\partial(S)}$ finite, and let~${\mathcal F}$ be a thick $G$-tribe of a graph~$\Gamma$ such that \begin{enumerate}[label=(\arabic*)] \item \label{itemconcentratedF} ${\mathcal F}$ is concentrated at a half-grid-like end~$\epsilon$; \item \label{itemagreeF} ${\mathcal F}$ strongly agrees about~${\partial (S)}$; \item \label{itemSeparationF} ${\mathcal F}$ is well-separated from~$\epsilon$ at~$S$. \end{enumerate} Suppose~${f \in \partial_\epsilon(S)}$ and let~${\tilde S := S + f \subseteq T}$. Then there is a thick flat subtribe~${\mathcal F}^*$ of~${\mathcal F}$ and a thick $G$-tribe~${\mathcal F}'$ in~$\Gamma$ with the following properties: \begin{enumerate}[label=(\roman*)] \item \label{itemconcentrated} ${\mathcal F}'$ is concentrated at~$\epsilon$. \item \label{itemagree} ${\mathcal F}'$ strongly agrees about~${\partial (\tilde{S})}$. \item \label{itemSeparation} ${\mathcal F}'$ is well-separated from~$\epsilon$ at~$\tilde{S}$. \item \label{itemconsistentagree} ${{\mathcal F}' \cup {\mathcal F}}$ strongly agrees about~${\partial (S) \setminus \{f\}}$. \item \label{itemnested} $S^{\neg \epsilon}$ w.r.t.~${\mathcal F}$ is a subtree of~${\tilde{S}^{\neg \epsilon}}$ w.r.t.~${\mathcal F}'$. \item \label{consistentpushingalong} For every~${F \in {\mathcal F}^*}$ and every~${m \in \mathbb{N}}$, there is an~${F' \in {\mathcal F}'}$ such that for all~${H \in F}$, there is an~${H' \in F'}$ which is a push-out of~$H$ to depth~$m$ along~$f$. \end{enumerate} \end{lemma} \begin{proof} For every member~$H$ of~${\mathcal F}$, consider a sequence~${(H^{(i)} \colon i \in \mathbb{N})}$, where~${H^{(i)}}$ is a push-out of~$H$ along~$f$ to depth at least~$i$.
After choosing a subsequence of~${(H^{(i)} \colon i \in \mathbb{N})}$ and relabelling (monotonically), we may assume that for each~$H$, the set ${\{H^{(i)} \colon i \in \mathbb{N}\}}$ weakly agrees about~${\partial(\tilde S)}$, i.e.~for every~${e \in \partial(\tilde S)}$ either~${{H^{(i)}}^{\downarrow}(R) \in \epsilon}$ for every~${R \in \omega_e}$ and all~$i$ or~${{H^{(i)}}^{\downarrow}(R) \notin \epsilon}$ for every~${R \in \omega_e}$ and all~$i$. Note that a monotone relabelling preserves the property of~${H^{(i)}}$ being a push-out of~$H$ along~$f$ to depth at least~$i$. This uniform behaviour of~${(H^{(i)} \colon i \in \mathbb{N})}$ on~${\partial(\tilde S)}$ for each member~$H$ of~${\mathcal F}$ gives rise to a finite colouring~${c \colon \bigcup {\mathcal F} \to 2^{\partial(\tilde S)}}$. By Lemma~\ref{Lem_finitechoice}, we may choose a thick flat subtribe~${{\mathcal F}_1 \subseteq {\mathcal F}}$ such that~$c$ is constant on~${\bigcup {\mathcal F}_1}$. Recall that, by Corollary~\ref{c:linearend}, for every~${e \in \partial_\epsilon(\tilde S)}$ (w.r.t.~${\mathcal F}_1$), the ray graph~${\RG_G({\mathcal R}_e)}$ is a path. We pick an arbitrary orientation of this path and denote by~$\leqslant_e$ the corresponding linear order on~${\mathcal R}_e$. Note that, since ${\mathcal F}_1$ is a flat subtribe of $\mathcal{F}$ which strongly agrees about $\partial(S)$, every $\epsilon$-ray in every member~${H \in \bigcup{\mathcal F}_1}$ is core. Let us define, for each member~${H \in \bigcup{\mathcal F}_1}$, \begin{align*} d_H &\colon \{H^{(i)} \colon i \in \mathbb{N} \} \to \{-1,1\}^{\partial_\epsilon(\tilde S)}, \\ \intertext{where} d_H(H^{(i)})_e &= \begin{cases} 1 & \text{if $\leqslant_{\epsilon}$ agrees with~$\leqslant_e$}, \\ -1 & \text{if $\leqslant_{\epsilon}$ agrees with the reverse order $\geqslant_e$ of $\leqslant_e$}.
\end{cases} \end{align*} Since~$d_H$ has finite range, we may assume by Lemma~\ref{Lem_finitechoice}, after choosing a subsequence and relabelling, that~$d_H$ is constant on ${\{H^{(i)}\colon i\in \mathbb{N}\}}$ and that~$H^{(i)}$ is still a push-out of~$H$ along~$f$ to depth at least~$i$. Now, consider ${d \colon \bigcup{\mathcal F}_1 \to \{-1,1\}^{\partial_\epsilon(\tilde S)}}$, with ${d(H) = d_H(H^{(1)})}$ (${ = d_H(H^{(i)})}$ for all~${i \in \mathbb{N}}$). Again, we may choose a thick flat subtribe ${{\mathcal F}_2 \subseteq {\mathcal F}_1}$ such that~$d$ is constant on~${\bigcup {\mathcal F}_2}$. Since~$\mathcal{F}$ is well-separated from~$\epsilon$ at~$S$, we get that ${\{ H^{(i)} \colon H \in \bigcup \mathcal{F},\ i \in \mathbb{N} \}}$ is well-separated from~$\epsilon$ at~$S$. So, we can now apply Lemma~\ref{l:separatedpushout} to each~$H^{(i)}$ to obtain~$H'^{(i)}$, yielding a collection which is well-separated from~$\epsilon$ at~$\tilde{S}$. Note that~$H'^{(i)}$ is still a push-out of~$H$ along~$f$ to depth at least~$i$. Now, let~${{\mathcal F}^* = {\mathcal F}_2}$ and~${{\mathcal F}' = \{ \{H'^{(i)} \colon H \in F\} \colon i \in \mathbb{N}, F\in {\mathcal F}^* \}}$. Let us verify that these satisfy~\ref{itemconcentrated}--\ref{consistentpushingalong}. ${\mathcal F}^*$ is concentrated at~$\epsilon$ because it is a thick flat subtribe of~${\mathcal F}$ by Lemma~\ref{lem_subtribesinheritconcentration}. Comparing layer by layer, since all members of~${\mathcal F}'$ are push-outs of members of~${\mathcal F}^*$ along~$f$, the tribe~${\mathcal F}'$ is also concentrated at~$\epsilon$, satisfying~\ref{itemconcentrated}. Property~\ref{itemagree} is satisfied: Since~$c$ and~$d$ are constant on~${\bigcup{\mathcal F}_2}$ the collection of the~$H^{(i)}$ (for~${H \in \bigcup{\mathcal F}_2}$) strongly agrees about~${\partial(\tilde S)}$, since we have chosen an appropriate subsequence in which~${d_H(H^{(i)})}$ is constant. The~$H'^{(i)}$ are constructed such that this property is preserved.
Property~\ref{itemSeparation} is immediate from the choice of~$H'^{(i)}$. Properties~\ref{itemconsistentagree} and~\ref{itemnested} follow from~\ref{itemagreeF} and the fact that every member of~${\mathcal F}'$ is a push-out of a member of~${\mathcal F}$ along~$f$. Property~\ref{consistentpushingalong} is immediate from the construction of~${\mathcal F}'$\!. \end{proof} \section{The inductive argument} \label{sec:countable-subtrees} In this section we prove Theorem~\ref{t:nice}, our main result. Given a locally finite connected graph $G$ which admits an extensive tree-decomposition~${(T,\mathcal{V})}$ and a graph~$\Gamma$ which contains a thick $G$-tribe~$\mathcal{F}$, our aim is to construct an infinite family~${(Q_i \colon i \in \mathbb{N})}$ of disjoint $G$-minors in~$\Gamma$ inductively. Our work so far will allow us to make certain assumptions about~${\mathcal F}$. For example, by Lemma~\ref{l:concentrated}, we may assume that~${\mathcal F}$ is concentrated at some end~$\epsilon$ of~$\Gamma$, which, by Lemma~\ref{l:concentratedatthin}, we may assume is a thick end, and, by Lemma~\ref{c:pebblyubiq}, we may assume is not pebbly. Hence, by Theorem~\ref{t:trichotomy}, we may assume that~$\epsilon$ is either half-grid-like or grid-like. At this point our proof will split into two different cases, depending on the nature of~$\epsilon$. As we mentioned before, the two cases are very similar, with the grid-like case being significantly simpler. Therefore, we will first prove Theorem~\ref{t:nice} in the case where~$\epsilon$ is half-grid-like, and then in Section~\ref{s:gridlike} we will briefly sketch the differences for the grid-like case.
So, to briefly recap, in the following section we will be working under the standing assumptions that there is a thick $G$-tribe~$\mathcal{F}$ in~$\Gamma$, and an end~$\epsilon$ of $\Gamma$ such that \begin{itemize} \item[--] $\mathcal{F}$ is concentrated at $\epsilon$; \item[--] $\epsilon$ is thick; \item[--] $\epsilon$ is half-grid-like. \end{itemize} \subsection{The half-grid-like case} \label{sec:half-grid-like} As explained in Section~\ref{s:sketch}, our strategy will be to take some sequence of initial subtrees ${S_1 \subseteq S_2 \subseteq S_3 \subseteq \ldots}$ of~$T$ such that~${\bigcup_{i \in \mathbb{N}} S_i = T}$, and to inductively build a collection of~$n$ inflated copies of~${G(S_n)}$, at each stage extending the previous copies. However, in order to ensure that we can continue the construction at each stage, we will require the existence of additional structure. Let us pick an enumeration ${\{ t_i \colon i \geqslant 0 \}}$ of~${V(T)}$ such that~$t_0$ is the root of~$T$ and ${T_n := T[\{ t_i \colon 0\leqslant i \leqslant n \}]}$ is connected for every~${n \in \mathbb{N}}$. We will not take the~$S_n$ above to be the subtrees~$T_n$, but instead the subtrees~$T_n^{\neg \epsilon}$ with respect to some tribe~${\mathcal F}_n$ that weakly agrees about~${\partial(T_n)}$. This will ensure that every edge in the boundary~${\partial(S_n)}$ will be in~${\partial_\epsilon(T_n)}$. For every edge~${e \in E(T)}$, let us fix a family~${\mathcal{R}_e = ( R_{e,s} \colon s \in S(e) )}$ of disjoint rays witnessing the self-similarity of the bough~${B(e)}$ towards an end~$\omega_e$ of~$G$, where~${\init(R_{e,s}) = s}$. By taking ${S_n = T_n^{\neg \epsilon}}$, we guarantee that for each edge ${e \in \partial(S_n)}$, each ${s \in S(e)}$, and every~${H \in \bigcup {\mathcal F}_n}$, the ray~${H^{\downarrow}(R_{e,s})}$ is an $\epsilon$-ray.
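Such an enumeration always exists: any breadth-first (or depth-first) order from the root works, since every newly enumerated vertex is adjacent to an earlier one, so each prefix induces a connected subtree. A minimal illustration (the adjacency-dictionary representation is an assumption of this sketch, not part of the text):

```python
from collections import deque

def enumerate_with_connected_prefixes(tree, root):
    """Breadth-first enumeration t_0, t_1, ... of the vertices of a
    tree (given as an adjacency dict): every vertex after the root is
    adjacent to an earlier one, so each prefix induces a connected
    subtree T_n."""
    order, seen, queue = [], {root}, deque([root])
    while queue:
        t = queue.popleft()
        order.append(t)
        for u in tree[t]:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return order
```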
Furthermore, since~${\partial(T_n)}$ is finite, we may assume by Lemma~\ref{lem:stronglyagreesubtribe} that~${\mathcal F}_n$ strongly agrees about~${\partial(T_n)}$. We can now describe the additional structure that we require for the induction hypothesis. At each stage of our construction we will have built some inflated copies of~${G(S_n)}$, which we wish to extend in the next stage. However, $S_n$ will not in general be a finite subtree, and so we will need some control over where these copies lie in~$\Gamma$ to ensure we have not `used up' all of~$\Gamma$. The control we will want is that there is a finite set of vertices~$X$, which we call a \emph{bounder}, that separates all that we have built so far from the end~$\epsilon$. This will guarantee, since~${\mathcal F}$ is concentrated at~$\epsilon$, that we can find arbitrarily large layers of~${\mathcal F}$ which are disjoint from what we have built so far. Furthermore, in order to extend these copies in the next step, we will need to be able to link the boundary of our inflated copies of~${G(S_n)}$ to this large layer of~${\mathcal F}$. To this end, we will also want to keep track of some structure which allows us to do this, which we call an \emph{extender}. Let us make the preceding discussion precise. \begin{definition}[Bounder, extender] Let~${\mathcal F}$ be a thick $G$-tribe, which is concentrated at~$\epsilon$ and strongly agrees about~${\partial(S)}$ for some initial subtree~$S$ of~$T$, and let~${k \in \mathbb{N}}$. Let ${\mathcal{Q} = (Q_i \colon i \in [k])}$ be a family of disjoint inflated copies of~${G(S^{\neg \epsilon})}$ in~$\Gamma$ (note, $S^{\neg \epsilon}$ depends on~${\mathcal F}$). \begin{itemize} \item A \emph{bounder} for~$\mathcal{Q}$ is a finite set~$X$ of vertices in~$\Gamma$ separating each~$Q_i$ in~$\mathcal{Q}$ from~$\epsilon$, i.e.~such that \[ C(X,\epsilon) \cap \bigcup_{i=1}^k Q_i = \emptyset.
\] \item For ${A \subseteq E(T)}$, let~${I(A,k)}$ denote the set~${\{ (e,s,i) \colon e \in A, s \in S(e), i \in [k] \}}$. \item An \emph{extender} for~$\mathcal{Q}$ is a family ${\mathcal{E} = ( E_{e,s,i} \colon (e,s,i) \in I(\partial_\epsilon(S),k))}$ of $\epsilon$-rays in~$\Gamma$ such that the graphs in~${\mathcal{E}^{-} \cup \mathcal{Q}}$ are pairwise disjoint and such that~${\init(E_{e,s,i}) \in Q_i(s)}$ for every~${(e,s,i) \in I(\partial_\epsilon(S),k)}$ (using the notation as in Definition~\ref{def_concat}). \item Given an extender~${\mathcal E}$, an edge~${e \in \partial_{\epsilon}(S)}$, and~${i \in [k]}$, we let \[ {\mathcal E}_{e,i} := ( E_{e,s,i} \colon s \in S(e)). \] \end{itemize} \end{definition} Recall that, since~$\epsilon$ is half-grid-like, there is a partial order~$\leqslant_\epsilon$ defined on the core rays of~$\epsilon$, see Lemma~\ref{def:core-order}. Furthermore, if~${\mathcal F}$ strongly agrees about~${\partial(S)}$ then, as in Definition~\ref{d:tribes}, for each~${e \in \partial_\epsilon(S)}$, there is a linear order~$\leqslant_{{\mathcal F},e}$ on~${S(e)}$. \begin{definition}[Extension scheme] Under the conditions above, we call a tuple $(X,{\mathcal E})$ an \emph{extension scheme} for $\mathcal{Q}$ if the following holds: \begin{enumerate}[label=(ES\arabic*)] \item\label{item:ES-defs} $X$ is a bounder for~$\mathcal{Q}$ and~${\mathcal E}$ is an extender for~$\mathcal{Q}$; \item\label{item:ES-core} $\mathcal{E}$ is a family of core rays; \item\label{item:ES-correct} the order~$\leqslant_\epsilon$ on~${\mathcal E}_{e,i}$ (and thus on~${\mathcal E}_{e,i}^-$) agrees with the order induced by~$\leqslant_{\mathcal{F},e}$ on~${\mathcal E}_{e,i}^-$ for all~${e \in \partial_\epsilon(S)}$ and~${i \in [k]}$; \item\label{item:ES-interval} the sets~$\mathcal{E}_{e,i}^-$ are intervals with respect to~$\leqslant_\epsilon$ on~${\mathcal{E}^-}$ for all~${e \in \partial_\epsilon(S)}$ and~${i \in [k]}$.
\end{enumerate} \end{definition} We will in fact split our inductive construction into two types of extensions, which we will do on odd and even steps respectively. In an even step~${n = 2k}$, starting with a $G$-tribe~$\mathcal{F}_k$, $k$ disjoint inflated copies ${(Q_{i}^n \colon i \in [k])}$ of~${G(T_k^{\neg \epsilon})}$, and an appropriate extension scheme, we will construct~$Q_{k+1}^{n+1}$, a further disjoint inflated copy of~${G(T_k^{\neg \epsilon})}$, and an appropriate extension scheme for everything we built so far. In an odd step~${n = 2k - 1}$ (for~${k \geqslant 1}$), starting with the same $G$-tribe~$\mathcal{F}_{k-1}$ from the previous step, $k$ disjoint inflated copies of~${G(T_{k-1}^{\neg \epsilon})}$, and an appropriate extension scheme, we will refine to a new $G$-tribe~$\mathcal{F}_{k}$, which strongly agrees about~${\partial(T_k)}$, extend each copy~$Q_i^n$ of~${G(T_{k-1}^{\neg \epsilon})}$ to a copy~$Q_i^{n+1}$ of~${G(T_{k}^{\neg \epsilon})}$ for~${i \in [k]}$, and construct an appropriate extension scheme for everything we built so far. So, we will assume inductively that for some ${n \in \mathbb{N}_0}$, with~${\rho := \lfloor n/2 \rfloor}$ and~${\sigma := \lceil n/2 \rceil}$ we have: \begin{enumerate}[label=(I\arabic*)] \item \label{item:induction1} a thick $G$-tribe~$\mathcal{F}_{\rho}$ in~$\Gamma$ which \begin{itemize} \item is concentrated at~$\epsilon$; \item strongly agrees about~${\partial(T_\rho)}$; \item is well-separated from~$\epsilon$ at~$T_\rho$; \item is such that whenever~${k < l \leqslant \rho}$, the tree~$T_k^{\neg\epsilon}$ with respect to~$\mathcal{F}_k$ is a subtree of~$T_l^{\neg\epsilon}$ with respect to~$\mathcal{F}_l$.
\end{itemize} \item \label{item:induction2} a family ${\mathcal{Q}_n = ( Q_i^n \colon i \in [\sigma] )}$ of~$\sigma$ pairwise disjoint inflated copies of~${G(T^{\neg \epsilon}_{\rho})}$ (where $T^{\neg \epsilon}_{\rho}$ is considered with respect to~$\mathcal{F}_\rho$) in~$\Gamma$;\\ if~${n \geqslant 1}$, we additionally require that~$Q^n_i$ extends~$Q^{n-1}_i$ for all~${i \leqslant \sigma-1}$; \item \label{item:induction3} an extension scheme~${(X_n,{\mathcal E}_n)}$ for~$\mathcal{Q}_n$; \item \label{item:induction4} if~$n$ is even and~${\partial_\epsilon(T_\rho) \neq \emptyset}$, we require that there is a set~$\mathcal{J}_\rho$ of disjoint core $\epsilon$-rays, disjoint from~$\mathcal{E}_n$, with ${|\mathcal{J}_\rho| \geqslant (|\partial_\epsilon(T_\rho)|+1) \cdot |\mathcal{E}_n|}$. \end{enumerate} Suppose we have inductively constructed~$\mathcal{Q}_n$ for all~${n \in \mathbb{N}}$. Let us define~${H_i := \bigcup_{n \geqslant 2i-1} Q^n_i}$. Since $T_k^{\neg\epsilon}$ with respect to~$\mathcal{F}_k$ is a subtree of~$T_l^{\neg\epsilon}$ with respect to~$\mathcal{F}_l$ for all~${k<l}$, we have that ${\bigcup_{n \in \mathbb{N}} T^{\neg \epsilon}_n = T}$ (where we consider~$T^{\neg \epsilon}_n$ w.r.t.~$\mathcal{F}_n$), and due to the extension property~\ref{item:induction2}, the collection ${(H_i \colon i \in \mathbb{N})}$ is an infinite family of disjoint $G$-minors, as required. So let us start the construction. To see that our assumptions can be fulfilled for the case~${n = 0}$, we first note that since~$T_0$ consists of the single vertex~$t_0$, by Lemma~\ref{l:induction-start} there is a thick subtribe~${\mathcal F}_0$ of~${\mathcal F}$ which satisfies~\ref{item:induction1}. Let us further take~${\mathcal{Q}_0 = \mathcal{E}_0 = X_0 = \mathcal{J}_0 = \emptyset}$. The following notation will be useful throughout the construction.
Given~${e \in E(T)}$ and some inflated copy~$H$ of~$G$, recall that~${H^{\downarrow}(\mathcal{R}_e)}$ denotes the family~${(H^{\downarrow}(R_{e,s}) \colon s \in S(e))}$. Given a $G$-tribe~$\mathcal{F}$, a layer~${F \in \mathcal{F}}$ and a family of disjoint rays~${\mathcal R}$ in~$G$, we will write ${F^{\downarrow}(\mathcal{R}) = ( H^{\downarrow}(R) \colon H \in F, R \in {\mathcal R})}$. \noindent\textbf{Construction part 1: ${n=2k}$ is even.} \noindent\textbf{Case 1: ${\partial_\epsilon (T_k) = \emptyset}$.} In this case, ${T^{\neg \epsilon}_k = T}$ and so picking any member~${H \in \bigcup{\mathcal F}_k}$ with ${H \subseteq C(X_n,\epsilon)}$ and setting ${Q_{k+1}^{n+1} = H(T^{\neg \epsilon}_k)}$ gives us a further inflated copy of~${G(T^{\neg \epsilon}_k)}$ disjoint from all the previous ones. We set~${Q^{n+1}_i = Q^{n}_i}$ for all~${i \in [k]}$ and~${\mathcal{Q}_{n+1} = ( Q^{n+1}_i \colon i \in [k+1])}$. Since~$\mathcal{F}_k$ is well-separated from~$\epsilon$ at~$T_k$, there is a suitable bounder~${X_{n+1}\supseteq X_n}$ for~$\mathcal{Q}_{n+1}$. Then ${(X_{n+1}, \emptyset)}$ is an extension scheme for~$\mathcal{Q}_{n+1}$ while~$\mathcal{F}_k$ remains unchanged. \noindent \textbf{Case 2: ${\partial_\epsilon (T_k) \neq \emptyset}$.} (See Figure~\ref{f:adding}.) Consider the family ${\mathcal{R}^{-} := \bigcup \{ \mathcal{R}_e^{-} \colon e \in \partial_\epsilon(T_k)\}}$. Moreover, set~${\mathcal{C} := \mathcal{E}_n^- \cup \mathcal{J}_k}$ and consider~$\overline{\mathcal{C}}$ as in Definition~\ref{def:central-core}. Let ${Y \subseteq C(X_n,\epsilon)}$ be a finite subgraph, which is a transition box between~$\overline{\mathcal{E}_n^-}$ and~$\overline{\mathcal{C}}$ after~$X_n$ as in Lemma~\ref{l:transition-box}. Let~$\mathcal{F}'$ be a flat thick $G$-subtribe of~$\mathcal{F}_k$, such that each member of~$\mathcal{F}'$ is contained in~${C(X_n \cup V(Y), \epsilon)}$, which exists by Lemma~\ref{lem_subtribesinheritconcentration}, since both~$X_n$ and~$V(Y)$ are finite.
Let ${F \in \mathcal{F}'}$ be large enough such that we may apply Lemma~\ref{l:link} to find a transitional linkage~$\mathcal{P}$, such that~${\bigcup \mathcal{P} \subseteq C(X_n \cup V(Y), \epsilon)}$, from~${\overline{\mathcal{C}}}$ to~${F^{\downarrow}({\mathcal R}^{-})}$ after~${X_n \cup V(Y)}$ avoiding some member~${H \in F}$. Note that, since~$X_n$ is a bounder and ${\bigcup \mathcal{P} \subseteq C(X_n \cup V(Y), \epsilon)}$, we get that each element of~$\mathcal{P}$ is disjoint from every member of~$\mathcal{Q}_n$ and from~$Y$. Let \[ Q^{n+1}_{k+1} := H(T^{\neg \epsilon}_k). \] Note that $Q^{n+1}_{k+1}$ is an inflated copy of~${G(T^{\neg \epsilon}_k)}$. Moreover, let ${Q^{n+1}_{i} := Q^{n}_{i}}$ for all~${i \in [k]}$ and~${\mathcal{Q}_{n+1} := ( Q_{i}^{n+1} \colon i \in [k+1] )}$, yielding property~\ref{item:induction2}. Since~$\mathcal{F}_k$ is well-separated from~$\epsilon$ at~$T_k$, and~${H \in \bigcup {\mathcal F}_k}$, there is a finite set~${X_{n+1} \subseteq V(\Gamma)}$ containing ${X_{n} \cup V(Y)}$, such that~${C(X_{n+1},\epsilon) \cap Q^{n+1}_{k+1} = \emptyset}$. This set~$X_{n+1}$ is a bounder for~$\mathcal{Q}_{n+1}$. Since~$\mathcal{P}$ is transitional, Lemma~\ref{l:core-preserving} implies that the linkage is preserving on~$\mathcal{C}$. Since all rays in~${F^{\downarrow}(\mathcal{R}^-)}$ are core rays, we have that~$\le_{\epsilon}$ is a linear order on~${F^{\downarrow}(\mathcal{R}^-)}$. Moreover, for each~${e \in \partial_\epsilon(T_k)}$, the rays in~${H^{\downarrow}(\mathcal{R}^-_{e})}$ correspond to an interval in this order. Thus, deleting these intervals from~${F^{\downarrow}(\mathcal{R}^-)}$ leaves behind at most~${|\partial_\epsilon(T_k)|+1}$ intervals in~${F^{\downarrow}(\mathcal{R}^-)}$ (with respect to~$\le_{\epsilon}$) which do not contain any rays in~${H^{\downarrow}(\mathcal{R}^-)}$.
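The interval count just used is elementary: deleting $m$ pairwise disjoint intervals from a linear order splits what survives into at most $m+1$ maximal intervals. A minimal illustration with integers standing in for the ordered rays (this model is ours, purely for illustration):

```python
def remaining_intervals(total, removed):
    """Delete the pairwise disjoint inclusive intervals `removed` from
    the linear order 0, 1, ..., total-1 and return the maximal runs of
    surviving elements; there are at most len(removed) + 1 of them."""
    deleted = {x for a, b in removed for x in range(a, b + 1)}
    runs, current = [], []
    for x in range(total):
        if x in deleted:
            if current:
                runs.append(current)
                current = []
        else:
            current.append(x)
    if current:
        runs.append(current)
    return runs
```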
Since~${|\mathcal{J}_k| \geqslant (|\partial_\epsilon(T_k)|+1) \cdot |\mathcal{E}_n|}$, by the pigeonhole principle there is one such interval in~${F^{\downarrow}(\mathcal{R}^-)}$ such that \begin{itemize} \item[--] it does not contain rays in~${H^{\downarrow}(\mathcal{R})}$; \item[--] some subset ${\mathcal{P}' \subseteq \mathcal{P}}$ of size~$|\mathcal{E}_n^-|$ links a corresponding subset~$\mathcal{A}$ of~$\mathcal{C}$ to a set of rays~$\mathcal{B}$ in that interval. \end{itemize} By Lemmas~\ref{l:transition-box}, \ref{lem:core-preserving2} and~\ref{l:core-preserving}, and Remark~\ref{rem:core-preserving}\ref{rem:core-preserving-sub}, there is a linkage~$\mathcal{P}''$ from~${\overline{\mathcal{E}_n^{-}}}$ to~${\mathcal{A}}$ contained in~$Y$ which is preserving on~$\mathcal{E}_n^{-}$. For~${e \in \partial_\epsilon(T_k)}$ and~${s \in S(e)}$, define \[ E^{n+1}_{e,s,k+1} = H^{\downarrow}(R_{e,s}) \text{ for the corresponding ray } R_{e,s} \in \mathcal{R}_e. \] Moreover, for each~${i \in [k]}$, we define \[ E_{e,s,i}^{n+1} = (E^{n}_{e,s,i} \circ_{\mathcal{P}''} \mathcal{A}) \circ_{\mathcal{P}'} \mathcal{B}, \] noting that~$\mathcal{P}''$ is also a linkage from~$\mathcal{E}_n$ to~${\mathcal{A}}$. By construction, all these rays are, except for their first vertex, disjoint from~$\mathcal{Q}_{n+1}$. Moreover, ${\mathcal{E}_{n+1} := ( E^{n+1}_{e,s,i} \colon (e,s,i) \in I(\partial_\epsilon(T_k),k+1) )}$ is an extender for~$\mathcal{Q}_{n+1}$. Note that each ray in~$\mathcal{E}_{n+1}$ shares a tail with a ray in~${F^{\downarrow}(\mathcal{R}^{-})}$. We claim that ${(X_{n+1}, \mathcal{E}_{n+1})}$ is an extension scheme for~$\mathcal{Q}_{n+1}$ and hence property~\ref{item:induction3} is satisfied. Since every ray in~$\mathcal{E}_{n+1}$ has a tail which is also a tail of a ray in~$F^{\downarrow}(\mathcal{R}^{-})$, property~\ref{item:ES-core} is satisfied by Remark~\ref{rem:core-remarks}\ref{rem:core-tail}.
Since~$\mathcal{P}'$ is preserving on~$\mathcal{A}$ and~$\mathcal{P}''$ is preserving on~$\mathcal{E}_n^-$, Remark~\ref{rem:core-preserving}\ref{rem:core-preserving-concat} implies that the linkage~${{\mathcal P}'' + {\mathcal P}'}$ is preserving on~$\mathcal{E}_n^-$. Hence, property~\ref{item:ES-correct} holds for each~${i \in [k]}$. Furthermore, since ${E^{n+1}_{e,s,k+1} = H^{\downarrow}(R_{e,s})}$ for each~${e \in \partial_\epsilon(T_k)}$ and~${s \in S(e)}$ and~$\mathcal{F}_k$ strongly agrees about~$\partial(T_k)$, it is clear that property~\ref{item:ES-correct} holds for~${i = k+1}$. Finally, property~\ref{item:ES-interval} holds for~${i=k+1}$ since for each~${e \in \partial_\epsilon(T_k)}$, the rays in~${H^{\downarrow}(\mathcal{R}_{e})}$ are an interval with respect to~$\leqslant_\epsilon$ on~$F^{\downarrow}(\mathcal{R}^-)$, and it holds for~${i \in [k]}$ by the fact that ${{\mathcal P}'' + {\mathcal P}'}$ is preserving on~$\mathcal{E}_n^-$ together with the fact that ${{\mathcal P}'' + {\mathcal P}'}$ links~$\mathcal{E}_n^-$ to an interval of~${F^{\downarrow}(\mathcal{R}^-)}$ containing no ray in~${H^{\downarrow}(\mathcal{R})}$. To conclude, note that~\ref{item:induction1} is still satisfied by~${\mathcal F}_k$ and~$T_k$, and~\ref{item:induction4} is vacuously satisfied. \begin{sidewaysfigurepage} \centering \resizebox{.8\textwidth}{!}{ \input{pic-adding-a-copy.tikz} } \caption{Adding a new copy when~${n=2k}$ is even.} \label{f:adding} \end{sidewaysfigurepage} \noindent \textbf{Construction part 2: ${n=2k-1}$ is odd (for~${k \geqslant 1}$).} Let~$f$ denote the unique edge of~$T$ between~$T_{k-1}$ and~${T_{k} \setminus T_{k-1}}$. \noindent \textbf{Case 1:} ${f \notin \partial_{\epsilon} (T_{k-1})}$. Let~${{\mathcal F}_{k} := {\mathcal F}_{k-1}}$. Since~$\mathcal{F}_{k-1}$ is well-separated from~$\epsilon$ at~$T_{k-1}$, it follows that~${e \in \partial_{\neg \epsilon}(T_k)}$ for every~${e \in \partial(T_k) \setminus \partial(T_{k-1})}$.
Hence~${T^{\neg \epsilon}_{k} = T^{\neg \epsilon}_{k-1}}$ and~${\partial_\epsilon(T_{k-1}) = \partial_\epsilon(T_k)}$, and so~$\mathcal{F}_k$ is well-separated from~$\epsilon$ at~$T_k$ and we can simply take ${\mathcal{Q}_{n+1} := \mathcal{Q}_n}$, ${\mathcal{E}_{n+1} := \mathcal{E}_n}$, ${\mathcal{J}_{k} := \mathcal{J}_{k-1}}$ and ${X_{n+1} := X_n}$ to satisfy~\ref{item:induction1}, \ref{item:induction2}, \ref{item:induction3} and~\ref{item:induction4}. \noindent \textbf{Case 2:} ${f \in \partial_{\epsilon} (T_{k-1})}$. (See Figure~\ref{f:new}.) By~\ref{item:induction1} we can apply Lemma~\ref{lem:refinement-lemma} to~${\mathcal F}_{k-1}$ and~$T_{k-1}$ in order to find a thick $G$-tribe~${\mathcal F}_{k}$ and a thick flat subtribe~$\mathcal{F}^*$ of~$\mathcal{F}_{k-1}$, both concentrated at~$\epsilon$, satisfying properties \ref{itemconcentrated}--\ref{consistentpushingalong} from that lemma. It follows that~$\mathcal{F}_{k}$ satisfies~\ref{item:induction1} for the next step. Let~${F \in \mathcal{F}^*}$ be a layer of~$\mathcal{F}^*$ such that \[ {|F| \geqslant (|\partial_\epsilon(T_k)| + 2) \cdot |I(\partial_\epsilon (T_k), k)|} \] and consider the rays~${F^{\downarrow}(\mathcal{R}_f)}$. Consider the rays in the extender corresponding to the edge~$f$, that is~${\mathcal{E}_f := (E_{f,s,i}^n \colon i \in [k], s \in S(f))}$. By Lemma~\ref{lem:core-preserving2}, there is, for every subset~$\mathcal{S}$ of~${F^{\downarrow}(\mathcal{R}_f)}$ of size~${|\mathcal{E}_f^-|}$, a transitional linkage~$\mathcal{P}$ from~${\mathcal{E}_f^- \subseteq \overline{\mathcal{E}_n^-}}$ to~${\mathcal{S} \subseteq \overline{F^{\downarrow}(\mathcal{R}_f)}}$ after~${X_n \cup \, \init(\mathcal{E}_n)}$, which is preserving on~$\mathcal{E}^-_f$. Let us choose~${H_1,H_2,\ldots,H_k \in F}$ and let~${\mathcal{S} = \left(H^{\downarrow}_i(R_{f,s}) \colon i \in [k], s\in S(f)\right)}$.
Let~$\mathcal{P}$ be the linkage given by the previous paragraph, which we recall is preserving on~$\mathcal{E}^-_f$. Since for every~${i \leqslant k}$, the family ${\left(E^{n-}_{f,s,i} \colon s \in S(f)\right)}$ forms an interval in~$\mathcal{E}^-_n$ and the set~${H_i^{\downarrow}(\mathcal{R}_{f})}$ forms an interval in~${F^{\downarrow}(\mathcal{R}_f)}$, and furthermore the order~$\leqslant_\epsilon$ agrees with~$\leqslant_{\mathcal{F}_k,f}$ on~${S(f)}$, it follows that, after perhaps relabelling the~$H_i$, for every~${i \in [k]}$ and~${s \in S(f)}$, ${\mathcal P}$ links~$E^{n-}_{f,s,i}$ to~${H^{\downarrow}_i(R_{f,s})}$. Let~${Z \subseteq V(\Gamma)}$ be a finite set such that~${\top(\omega, R)}$ and~${\bot(\omega, R)}$ are separated by~$Z$ in~${\Gamma - V(R)}$ for all~${R \in F^{\downarrow}(\mathcal{R}_f)}$ (cf.~Lemma~\ref{lem:core-exchange}). Since~${|F|}$ is finite and ${(T,\mathcal{V})}$ is an extensive tree-decomposition, there exists an~${m \in \mathbb{N}}$ such that if~${e \in T_{f^+}}$ with~${\dist(f^-,e^-) = m}$, then~${H(B(e)) \cap \left({X_n \cup Z \cup V(\bigcup \mathcal{P})}\right) = \emptyset}$ for every~${H \in F}$. Let~${F' \in \mathcal{F}_{k}}$ be as in Lemma~\ref{lem:refinement-lemma}\ref{consistentpushingalong} for~$F$ with such an~$m$. Hence, by definition, for each~${H_i \in F}$ there is some~${H'_i \in F'}$ which is a push-out of~$H_i$ to depth~$m$ along~$f$, and so there is some edge~${e \in T_{f^+}}$ with~${\dist(f^-,e^-) = m}$ and some subgraph ${W_i \subseteq H_i(B(e))}$ which is an~${I\overline{G[B(f)]}}$ such that for each~${s \in S(f)}$, we have that~${W_i(s)}$ contains the first vertex of~$W_i$ on~${H_i^{\downarrow}(R_{f,s})}$. For each~${i \in [k]}$ we construct~$Q_{i}^{n+1}$ from~$Q_{i}^{n}$ as follows.
Consider the part of~$G$ that we want to add to~${G(T_{k-1}^{\neg \epsilon})}$ to obtain~${G(T_k^{\neg \epsilon})}$, namely \[ {D := \overline{G[B(f)]} \left[ V_{f^+} \cup \bigcup \big\{ B(e) \colon {e \in \partial_{\neg \epsilon}(T_k) \setminus \partial_{\neg \epsilon}(T_{k-1})} \big\} \right]}. \] Let~${K_i := W_i(D)}$. Note that this is an inflated copy of~$D$, and for each~${s \in S(f)}$ and each~${i \in [k]}$ the branch set~${K_i(s)}$ contains the first vertex of~$K_i$ on~${H_i^{\downarrow}(R_{f,s})}$. Note further that, by the choice of~$m$, all the~$K_i$ are disjoint from~$\mathcal{Q}_n$. Let~$x_{f,s,i}$ denote the first vertex on the ray~${H^{\downarrow}_i(R_{f,s})}$ in~$K_i$, and let \[ O_{s,i} := (E^n_{f,s,i} \circ_{\mathcal{P}} F^{\downarrow}(\mathcal{R}_f)) x_{f,s,i}, \] where as before we note that~$\mathcal{P}$ is also a linkage from~${\mathcal{E}_n}$ to~${F^{\downarrow}(\mathcal{R}_f)}$. Then, if we let ${\mathcal{O}_i := (O_{s,i} \colon s \in S(f))}$ and ${\mathcal{O} = (O_{s,i} \colon s \in S(f), i\in [k])}$, we see that \[ Q_{i}^{n+1} := Q_{i}^{n} \oplus_{\mathcal{O}_i} K_i \] (see Definition~\ref{d:amalgamation}) is an inflated copy of~${G(T_k^{\neg \epsilon})}$ extending~$Q_{i}^{n}$. Hence, \[ {\mathcal{Q}_{n+1} := ( Q^{n+1}_i \colon i \in [k])} \] is a family satisfying~\ref{item:induction2}. Since~$\mathcal{F}_k$ is well-separated from~$\epsilon$ at~$T_k$, and each~$K_i$ is a subgraph of the restriction of ${W_i \subseteq H'_i}$ to~$D$, for each~$K_i$, there is a finite set~$\hat{X}_i$ separating~$K_i$ from~$\epsilon$, and hence the set \[ X_{n+1} := X_n \cup \bigcup_{i \in [k]} \hat{X}_i \cup V \left( \bigcup \mathcal{O} \right) \] is a bounder for~$\mathcal{Q}_{n+1}$.
For ${e \in \partial_\epsilon(T_{k-1}) \setminus \{f\}}$, ${s \in S(e)}$, and~${i \in [k]}$, we set \[ {E}^{n+1}_{e,s,i} = E^n_{e,s,i} \circ_\mathcal{P} F^{\downarrow}(\mathcal{R}_f), \] and set \[ \mathcal{E}' := \left(E^{n+1}_{e,s,i} \colon (e,s,i) \in I\left(\partial_\epsilon(T_{k-1}) \setminus \{f\},k\right)\right). \] Moreover, for~${e \in \partial_\epsilon(T_k) \setminus \partial_\epsilon(T_{k-1})}$, ${s \in S(e)}$, and~${i \in [k]}$, we set \[ {E}^{n+1}_{e,s,i} = H'^{\downarrow}_i(R_{e,s}), \] and set \[ \mathcal{E}'' := \left(E^{n+1}_{e,s,i} \colon (e,s,i) \in I\left(\partial_\epsilon(T_k)\setminus\partial_\epsilon(T_{k-1}),k\right)\right). \] Note that, by construction, any such ray~$E^{n+1}_{e, s, i}$ has its initial vertex in the branch set~$Q^{n+1}_i(s)$ and is otherwise disjoint from~${\bigcup \mathcal{Q}_{n+1}}$. We set~${\mathcal{E}_{n+1} := \mathcal{E}' \cup \mathcal{E}''}$. It is easy to check that this is an extender for~$\mathcal{Q}_{n+1}$. We claim that ${(X_{n+1}, \mathcal{E}_{n+1})}$ is an extension scheme. Property~\ref{item:ES-defs} is apparent. Since~$\mathcal{F}_k$ strongly agrees about~$\partial(T_k)$, every $\epsilon$-ray in any member of~$\mathcal{F}_k$ is core. Then, since~$\mathcal{F}^*$ is a flat subtribe of~$\mathcal{F}_k$ and every ray in~$\mathcal{E}_{n+1}$ shares a tail with a ray in a member of~$\mathcal{F}_k$ or~$\mathcal{F}^*$, it follows by Remark~\ref{rem:core-remarks}\ref{rem:core-tail} that all rays in~$\mathcal{E}_{n+1}$ are core rays, and so~\ref{item:ES-core} holds. For any~${e \in \partial_\epsilon(T_{k-1}) \setminus \{f\}}$ and~${i \in [k]}$, the rays~$\mathcal{E}_{n+1,e,i}$ are a subfamily of~$\mathcal{E}'$, obtained by transitioning from the family~$\mathcal{E}_{n,e,i}$ to~${F^{\downarrow}(\mathcal{R}_f)}$ along the linkage~$\mathcal{P}$.
By the induction hypothesis, $\leqslant_\epsilon$ agrees with the order induced by~$\leqslant_{\mathcal{F}_{k-1},e}$ on~$\mathcal{E}_{n,e,i}$, and, since ${\mathcal{F}_k \cup \mathcal{F}_{k-1}}$ strongly agrees about~${\partial_\epsilon(T_{k-1}) \setminus \{f\}}$, this is also the order induced by~$\leqslant_{\mathcal{F}_{k},e}$. Hence, since~$\mathcal{P}$ is preserving, by Lemma~\ref{l:core-preserving}, it follows that the order induced by~$\leqslant_{\mathcal{F}_{k},e}$ on~$\mathcal{E}_{n+1,e,i}$ agrees with~$\leqslant_\epsilon$. For ${e \in \partial_\epsilon(T_k) \setminus \partial_\epsilon(T_{k-1})}$ and~${i \in [k]}$, the rays~$\mathcal{E}_{n+1,e,i}$ are ${( H'^{\downarrow}_i(R_{e,s}) \colon s \in S(e))}$. Since ${H'_i \in F' \in \mathcal{F}_k}$ and~$\mathcal{F}_k$ strongly agrees about~$\partial(T_k)$, it follows that the order induced by~$\leqslant_{\mathcal{F}_{k},e}$ on~$\mathcal{E}_{n+1,e,i}$ agrees with~$\leqslant_\epsilon$. Hence Property~\ref{item:ES-correct} holds. Finally, by Lemma~\ref{l:rayinducedsubgraph} it is clear that for any~${e \in \partial_\epsilon(T_k) \setminus \partial_\epsilon(T_{k-1})}$ and~${i \in [k]}$, the rays~$\mathcal{E}^{-}_{n+1,e,i}$ form an interval with respect to~$\leqslant_\epsilon$ on~$\mathcal{E}^-_{n+1}$, since they are each contained in a connected subgraph~$H'_i$ to which the tails of the rest of~$\mathcal{E}^-_{n+1}$ are disjoint. Furthermore, by the choice of~$Z$ and Lemma~\ref{lem:core-exchange}, it is clear that, since~$\mathcal{P}$ is preserving on~$\mathcal{E}^{-}_n$, for each~${e \in \partial_\epsilon(T_{k-1}) \setminus \{f\}}$ and~${i \in [k]}$, the rays~$\mathcal{E}^-_{n+1,e,i}$ also form an interval with respect to~$\leqslant_\epsilon$ on~$\mathcal{E}^-_{n+1}$. Hence, property~\ref{item:ES-interval} holds and therefore~\ref{item:induction3} is satisfied for the next step.
For property~\ref{item:induction4}, we note that every ray in~$\mathcal{E}_{n+1}$ has a tail in some~${H \in F \in \mathcal{F}^*}$ or some push-out~$H'$ of~$H$ in~$\mathcal{F}_k$. Note that~${V(H') \subseteq V(H)}$. Since there is at least one core $\epsilon$-ray in each~${H \in F \in \mathcal{F}^*}$, and the~$H$ in~$F$ are pairwise disjoint, we can find a family of at least~${|F| - |\mathcal{E}_{n+1}|}$ such rays disjoint from~$\mathcal{E}_{n+1}$. However, since \[ |F| \geqslant (|\partial_\epsilon(T_k)| + 2) \cdot |\mathcal{E}_{n+1}|, \] it follows that we can find a suitable family~$\mathcal{J}_k$ of size at least ${(|\partial_\epsilon(T_k)|+1) \cdot |\mathcal{E}_{n+1}|}$. This concludes the induction step. \qed \begin{sidewaysfigurepage} \centering \resizebox{.8\textwidth}{!}{ \input{pic-extending.tikz} } \caption{Extending the copies when~${n=2k-1}$ is odd.} \label{f:new} \end{sidewaysfigurepage} \subsection{The grid-like case} \label{s:gridlike} In this section we will give a brief sketch of how the argument differs in the case where the end~$\epsilon$, towards which we may assume our $G$-tribe~$\mathcal{F}$ is concentrated, is grid-like. In the case where~$\epsilon$ is half-grid-like we showed that the end~$\epsilon$ had a roughly linear structure, in the sense that there is a global partial order~$\leqslant_{\epsilon}$ which is defined on almost all of the $\epsilon$-rays, namely the core ones, such that every pair of disjoint core rays are comparable, and that this order determines the relative structure of any finite family of disjoint core rays, since it determines the ray graph. Since, by Corollary~\ref{c:linearend}, $\RG_G(\mathcal{R}_e)$ is a path whenever~${e \in \partial_\epsilon(T_k)}$, there are only two ways that~$\leqslant_{\epsilon}$ can order~${H^{\downarrow}(\mathcal{R}_e)}$, and, since~${\partial_\epsilon(T_k)}$ is finite, by various pigeonhole-type arguments we can assume that it does so consistently for each~${H \in \bigcup \mathcal{F}_k}$ and each~${\mathcal E}_{e,i}$.
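The fact that a path ray graph admits only two compatible orders can be seen concretely: a linear order in which consecutive elements must be adjacent on the path is a Hamiltonian path of the path graph, and hence is the path order or its reverse. An illustrative sketch (the four-vertex path and its labels are hypothetical, purely for illustration):

```python
from itertools import permutations

def orders_consistent_with_path(path):
    """All linear orders of the vertices of a path graph whose
    consecutive elements are adjacent on the path; for a path these
    are exactly the path order and its reverse."""
    edges = {frozenset(e) for e in zip(path, path[1:])}
    return [p for p in permutations(path)
            if all(frozenset((a, b)) in edges for a, b in zip(p, p[1:]))]
```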
We use this fact crucially in part~2 of the construction, where we wish to extend the graphs ${(Q^n_i \colon i \in [k])}$ from inflated copies of~${G(T^{\neg \epsilon}_{k-1})}$ to inflated copies of~${G(T^{\neg \epsilon}_{k})}$ along an edge~${e \in \partial(T_{k-1})}$. We wish to do so by constructing a linkage from the extender~${\mathcal E}_n$ to some layer~${F \in \mathcal{F}_k}$, using the self-similarity of~$G$ to find an inflated copy of~${G[B(e)]}$ which is `rooted' on the rays~${H^{\downarrow}(\mathcal{R}_e)}$ and extending each~$Q^n_i$ by such a subgraph. However, for this step to work it is necessary that the linkage from~${\mathcal E}_n$ to~${F^{\downarrow}(\mathcal{R}_e)}$ is such that for each~${i \in [k]}$, there is some~${H \in F}$ such that the ray~$E_{e,s,i}$ is linked to~${H^{\downarrow}(R_{e,s})}$ for each~${s \in S(e)}$. Since any transitional linkage we construct between~${\mathcal E}_n$ and a layer~${F \in \mathcal{F}_k}$ will respect~$\leqslant_{\epsilon}$, we can use a transition box to `re-route' our linkage such that the above property holds. In the case where~$\epsilon$ is grid-like we would like to say that the end has a roughly cyclic structure, in the sense that there is a global `partial cyclic order'~$C_\epsilon$, defined again on almost all of the $\epsilon$-rays, which will again determine the relative structure of any finite family of disjoint `core' rays. As before, since~${\RG_{G}(\mathcal{R}_e)}$ is a path whenever~${e \in \partial_\epsilon(T_n)}$, there are only two ways that~$C_\epsilon$ can order~${H^{\downarrow}(\mathcal{R}_e)}$ (`clockwise' or `anti-clockwise') and so we can use similar arguments to assume that it does so consistently for each~${H \in \bigcup \mathcal{F}_k}$ and each~${\mathcal E}_{e,i}$, which allows us as before to control the linkages we build.
To this end, suppose~$\epsilon$ is a grid-like end, and that~$N$ is as in Lemma~\ref{l:gridstructure}, so that the ray graph of any family of at least~${N+2}$ disjoint rays is a cycle. We say that an $\epsilon$-ray~$R$ is a \emph{core ray (of~$\epsilon$)} if there is some finite family~${(R_i \colon i \in[n])}$ of~${n \geqslant N+3}$ disjoint $\epsilon$-rays such that~${R = R_i}$ for some~${i \in [n]}$\footnote{We note that it is possible to show that, if~$\epsilon$ is grid-like, then in fact~${N=3}$.}. Every large enough ray graph is a cycle, which has a correct orientation by Lemma~\ref{l:gridstructure}, and we would like to say that this orientation is induced by a global `partial cyclic order' defined on the core rays of~$\epsilon$. By a similar argument as in Section~\ref{s:core}, one can show the following: \begin{lemma} For every core ray~$R$ of a grid-like end~$\epsilon$ there is a unique sub-end of~$\epsilon$ in~${G - V(R)}$, which is linear (cf.~Definition~\ref{d:linear}). \end{lemma} It follows that if~$R$ and~$R'$ are disjoint core rays then~$\epsilon$ splits into at most two ends in~${G - (V(R) \cup V(R'))}$. \begin{definition} Let~$R$ and~$R'$ be disjoint core rays of~$\epsilon$. We denote by~${\top(\epsilon, R,R')}$ the end of~${G - (V(R) \cup V(R'))}$ containing rays which appear between~$R$ and~$R'$ according to the correct orientation of some ray graph of a family of at least $N+3$ $\epsilon$-rays and by~${\bot(\epsilon, R,R')}$ the end of~${G - (V(R) \cup V(R'))}$ containing rays which appear between~$R'$ and~$R$ in the correct orientation of some ray graph of a family of at least $N+3$ $\epsilon$-rays. \end{definition} We will model our global `partial cyclic order' as a ternary relation on the set of core rays of~$\epsilon$. That is, a \emph{partial cyclic order} on a set~$X$ is a relation~${C \subset X^3}$ written~${[a,b,c]}$ satisfying the following axioms: \begin{itemize} \item If~${[a,b,c]}$ then~${[b,c,a]}$.
\item If~${[a,b,c]}$ then not~${[c,b,a]}$. \item If~${[a,b,c]}$ and~${[a,c,d]}$ then~${[a,b,d]}$. \end{itemize} \begin{lemmadef} Let~${\core(\epsilon)}$ denote the set of core rays of~$\epsilon$. We define a partial cyclic order~$C_\epsilon$ on~${\core(\epsilon)}$ as follows: \[ [R,S,T] \text{ if and only if } R,S,T \text{ have disjoint tails } xR,yS,zT \text{ and } yS \in \top(\epsilon, xR,zT). \] Then, for any family~${(R_i \colon i \in [n])}$ of~${n \geqslant N+3}$ disjoint $\epsilon$-rays, the cyclic order induced on~${(R_i \colon i \in [n])}$ by~$C_\epsilon$ agrees with the correct orientation. \end{lemmadef} Again, by a similar argument as in Section~\ref{s:core}, one can show that this relation is in fact a partial cyclic order and that it always agrees with the correct orientation of large enough ray graphs. Furthermore, by Lemma~\ref{l:gridstructure}, given two families~${\mathcal R}$ and~${\mathcal S}$ of at least~${N+3}$ disjoint $\epsilon$-rays, every transitional linkage between~${\mathcal R}$ and~${\mathcal S}$ \emph{preserves}~$C_\epsilon$, for the obvious definition of preserving. Given a family of disjoint $\epsilon$-rays ${\mathcal{R} = (R_i \colon i \in [n])}$ with a linear order~$\leqslant$ on~$\mathcal{R}$, we say that~$\leqslant$ \emph{agrees} with~$C_\epsilon$ if~${[R_i,R_j,R_k]}$ whenever~${R_i < R_j < R_k}$. Given a family ${F = ( f_i \colon i \in I )}$ and a linear order~$\leqslant$ on~$I$, we denote by~${F(\leqslant)}$ the linear order on~$F$ induced by~$\leqslant$, i.e.~the order defined by~${f_i F(\leqslant) f_j}$ if and only if~${i \leqslant j}$.
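Since the three axioms above may be easier to digest with a model in hand, here is a standard concrete example (an illustration of ours, not needed for the argument): the natural cyclic order on $\mathbb{Z}/n\mathbb{Z}$.

```latex
% Illustration (ours): the natural cyclic order on Z/nZ.
% For pairwise distinct a, b, c in Z/nZ define
\[
  [a,b,c] \;:\Longleftrightarrow\; 0 < (b-a) \bmod n < (c-a) \bmod n .
\]
% Writing b' := (b-a) mod n and c' := (c-a) mod n, the axioms hold:
% rebasing at b gives (c-b) mod n = c'-b' and (a-b) mod n = n-b',
% and c'-b' < n-b' yields [b,c,a];
% rebasing at c gives (b-c) mod n = n-(c'-b') and (a-c) mod n = n-c',
% and n-(c'-b') < n-c' would force b' < 0, so [c,b,a] fails;
% finally [a,b,c] and [a,c,d] give b' < c' < d', hence [a,b,d].
```

On $\core(\epsilon)$ the relation $C_\epsilon$ plays the same role, except that it is only partial: triples of rays without disjoint tails are simply left uncompared.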
As in Section~\ref{s:tribes} we say a thick $G$-tribe~$\mathcal{F}$ \emph{strongly agrees about~${\partial(T_n)}$} if \begin{itemize} \item it weakly agrees about~$\partial(T_n)$; \item for each~${H \in \bigcup \mathcal{F}}$ every $\epsilon$-ray ${R \subseteq H}$ is in $\core(\epsilon)$; \item for every~${e \in \partial_\epsilon(T_n)}$ there is a linear order~$\leqslant_{\mathcal{F},e}$ on~${S(e)}$ such that~${H^{\downarrow}(\mathcal{R}_e)(\leqslant_{\mathcal{F},e})}$ agrees with~$C_\epsilon$ on~${H^{\downarrow}(\mathcal{R}_e)}$ for all~${H \in \bigcup \mathcal{F}}$. \end{itemize} Using this definition, the $G$-tribe refinement lemma (Lemma~\ref{lem:refinement-lemma}) can also be shown to hold in the case where~$\epsilon$ is a grid-like end. Furthermore, we modify the definition of an extension scheme for a family of disjoint inflated copies of~${G(T^{\neg \epsilon}_n)}$. \begin{definition}[Extension scheme] Let ${\mathcal{Q} = (Q_i \colon i \in [k])}$ be a family of disjoint inflated copies of~${G(S^{\neg \epsilon})}$ and~${\mathcal F}$ be a $G$-tribe which strongly agrees about~${\partial(S)}$. We call a tuple~${(X,{\mathcal E})}$ an \emph{extension scheme} for~$\mathcal{Q}$ if the following holds: \begin{enumerate}[label=(ES\arabic*)] \item $X$ is a bounder for~$\mathcal{Q}$ and~${\mathcal E}$ is an extender for~$\mathcal{Q}$; \item ${\mathcal E}$ is a family of core rays; \item the order~${{\mathcal E}_{e,i}^-(\leqslant_{\mathcal{F},e})}$ agrees with~$C_\epsilon$ for every~${e \in \partial_\epsilon(S)}$; \item the sets~$\mathcal{E}_{e,i}^-$ are intervals of~$C_\epsilon$ on~${\mathcal{E}^-}$ for all~${e \in \partial_\epsilon(S)}$ and~${i \in [k]}$. \end{enumerate} \end{definition} We can then proceed by induction as before, with the same induction hypotheses. For the most part the proof will follow verbatim, apart from one slight technical issue.
Recall that, in the case where~$n$ is even, we use the existence of the family of rays~${\overline{\mathcal{C}}}$ to find a linkage from~$\mathcal{C}$ to~${F^{\downarrow}(\mathcal{R}^-)}$ which is preserving on~$\mathcal{C}$ and similarly, in the case where~$n$ is odd, we do the same for~$\overline{\mathcal{E}_n^-}$. In the grid-like case we do not have to be so careful, since every transitional linkage from~$\mathcal{C}$ to~${F^{\downarrow}(\mathcal{R}^-)}$ will preserve~$C_\epsilon$, as long as~${|\mathcal{C}|}$ is large enough. However, in order to ensure that~${|\mathcal{C}|}$ and~${|\mathcal{E}_n^-|}$ are large enough in each step, we should start by building~${N+3}$ inflated copies of~${G(T^{\neg \epsilon}_0)}$ in the first step, which can be done relatively straightforwardly. Indeed, in the case~${n=0}$ most of the argument in the construction is unnecessary, since a large part of it consists in constructing a new copy whilst re-routing the rays~$\mathcal{E}_n$ to avoid this new copy, but~$\mathcal{E}_0$ is empty. Therefore, it is enough to choose a layer~${F \in \mathcal{F}_0}$ with~${|F| \geqslant N+3}$, with say~${H_1,\ldots,H_{N+3} \in F}$, to take \[ Q^1_i := H_i(T^{\neg \epsilon}_0) \] for each~${i \in [N+3]}$, and to take~${E^1_{e,s,i} = H^{\downarrow}_i(R_{e,s})}$ for each~${e \in \partial_\epsilon(T_0)}$, ${s \in S(e)}$, and ${i \in [N+3]}$. One can then proceed as before, extending the copies in odd steps and adding a new copy in even steps. \section{Outlook: connections with well-quasi-ordering and better-quasi-ordering} \label{s:WQO} Our aim in this section is to sketch what we believe to be the limitations of the techniques of this paper. We will often omit or ignore technical details in order to give a simpler account of the relationship of the ideas involved.
Our strategy for proving ubiquity is heavily reliant on well-quasi-ordering results. The reason is that they are the only known tool for finding extensive tree-decompositions for broad classes of graphs. To understand this more fully, let us recall how well-quasi-ordering was used in the proofs of Lemmas~\ref{lem:finendsisext} and~\ref{lem:boundwidthisext}. Lemma~\ref{lem:finendsisext} states that any locally finite connected graph with only finitely many ends, all of them thin, has an extensive tree-decomposition. The key idea of the proof was as follows: for each end, there is a sequence of separators converging towards that end. The graphs between these separators are finite, and so are well-quasi-ordered by the Graph Minor Theorem. This well-quasi-ordering guarantees the necessary self-similarity. Lemma~\ref{lem:boundwidthisext}, where infinitely many ends are allowed but the graph must have finite tree-width, is similar: once more, for each end there is a sequence of separators converging towards that end. The graphs between these separators are not necessarily finite, but they have bounded tree-width and so they are again well-quasi-ordered. Note that the Graph Minor Theorem is not needed for this latter result. Instead, the reason it works can be expressed in the following slogan, which will motivate the considerations in the rest of this section: \begin{quote} Trees of wombats are well-quasi-ordered precisely when wombats themselves are better-quasi-ordered. \end{quote} Here better-quasi-ordering is a strengthening of well-quasi-ordering, introduced by Nash-Williams in~\cite{N65} essentially in order to make this slogan true. Since graphs of bounded tree-width can be encoded as trees of graphs of bounded size, what is used here is that graphs of bounded size are better-quasi-ordered. What if we wanted to go a little further, for example by allowing infinite tree-width but requiring that all ends should be thin?
In that case, all we would know about the graphs between the separators would be that all their ends are thin. Such graphs are essentially trees of finite graphs. So, by the slogan above, to show that such trees are well-quasi-ordered we would need the statement that finite graphs are better-quasi-ordered. Indeed, this problem arises even if we restrict our attention to the following natural common strengthening of Theorems~\ref{t:And1} and~\ref{t:And2}: \begin{conjecture} Any locally finite connected graph in which all blocks are finite is $\preccurlyeq$-ubiquitous. \end{conjecture} In order to attack this conjecture with our current techniques we would need better-quasi-ordering of finite graphs. Thomas has conjectured~\cite{T89} that countable graphs are well-quasi-ordered with respect to the minor relation. If this were true, it could allow us to resolve problems like those discussed above for countable graphs at least, since all the graphs appearing between the separators are countable. But this approach does not allow us to avoid the issue of better-quasi-ordering of finite graphs. Indeed, since countable trees of finite graphs can be coded as countable graphs, well-quasi-ordering of countable graphs would imply better-quasi-ordering of finite graphs. Thus until better-quasi-ordering of finite graphs has been established, the best that we can hope for -- using our current techniques -- is to drop the condition of local finiteness from the main results of this paper. For countable graphs we hope to show this in the next paper in the series; however, for graphs of larger cardinality, further issues arise. \end{document}
\begin{document} \title{Dissipative boundary conditions for nonlinear 1-D hyperbolic systems: sharp conditions through an approach via time-delay systems} \begin{abstract} We analyse dissipative boundary conditions for nonlinear hyperbolic systems in one space dimension. We show that a previously known sufficient condition for exponential stability with respect to the $C^1$-norm is optimal. In particular, a known weaker sufficient condition for exponential stability with respect to the $H^2$-norm is not sufficient for the exponential stability with respect to the $C^1$-norm. Hence, due to the nonlinearity, even in the case of classical solutions, the exponential stability depends strongly on the norm considered. We also give a new sufficient condition for the exponential stability with respect to the $W^{2,p}$-norm. The methods used are inspired by the theory of linear time-delay systems and incorporate the characteristic method. \end{abstract} \noindent Keywords: Hyperbolic systems, dissipative boundary conditions, time-delay systems. \noindent AMS Subject classification: 35L50, 93D20. \section{Introduction} Let $n$ be a positive integer. We are concerned with the following nonlinear hyperbolic system: \begin{equation}\label{sys-P} u_t + F(u) u_x = 0 \quad \mbox{ for every } (t, x) \in [0, + \infty) \times [0, 1], \end{equation} where $u: [0, + \infty) \times [0, 1] \to \mathbb{R}^n$ and $F: \mathbb{R}^n \to {\cal M}_{n, n}(\mathbb{R})$. Here, as usual, ${\cal M}_{n, n}(\mathbb{R})$ denotes the set of $n \times n$ real matrices. We assume that $F$ is of class $C^\infty$ and that $F(0)$ has $n$ distinct real nonzero eigenvalues.
Then, replacing, if necessary, $u$ by $Mu$ where $M\in {\cal M}_{n, n}(\mathbb{R})$ is a suitable invertible matrix, we may assume that \begin{equation}\label{cond-F1} F(0) = \mbox{diag}(\Lambda_1, \cdots, \Lambda_n) \end{equation} with \begin{equation}\label{cond-G1} \Lambda_i\in \mathbb{R}, \, \Lambda_i \neq \Lambda_j \mbox{ for } i \neq j, \, i\in\{1,\cdots,n\},\, j\in\{1,\cdots,n\}. \end{equation} For simplicity of presentation, we assume that \begin{equation}\label{lambdai>0} \Lambda_i > 0 \mbox{ for } i =1, \cdots, n. \end{equation} The case where $\Lambda_i$ changes sign can be worked out similarly as in \cite{2008-Coron-Bastin-Novel-SICON}. In this article, we consider the following boundary condition \begin{equation}\label{bdry-P} u(t, 0) = G \big( u(t, 1) \big)\quad \mbox{ for every } t \in [0, + \infty), \end{equation} where the map $G: \mathbb{R}^n \to \mathbb{R}^n$ is of class $C^\infty $ and satisfies \begin{equation}\label{G(0)=0} G(0)=0, \end{equation} which implies that $0$ is a solution of \begin{equation}\label{system} \left\{ \begin{array}{ll} u_t + F(u) u_x = 0 &\mbox{ for every } (t, x) \in [0, + \infty) \times [0, 1], \\ u(t, 0) = G \big( u(t, 1) \big) &\mbox{ for every } t \in [0, + \infty). \end{array} \right. \end{equation} In this paper, we are concerned about conditions on $G$ for which the equilibrium solution $0$ of \eqref{system} is exponentially stable. We first review known results in the linear case, i.e., when $F$ and $G$ are linear. In that case, \eqref{system} is equivalent to \begin{equation}\label{time-delay-system} \phi_i(t) = \sum_{j=1}^n K_{ij} \phi_j(t - r_j) \quad \mbox{ for } i =1, \cdots, n, \end{equation} where \begin{equation}\label{def-K} K = G'(0) \in {\cal M}_{n \times n}(\mathbb{R}) \end{equation} and \begin{equation}\label{def-phi-ri} \phi_i(t) : = u_i(t, 0), \quad r_i : = 1/ \Lambda_i \quad \mbox{ for } i = 1, \cdots, n.
\end{equation} Hence \eqref{system} can be viewed as a linear time-delay system. It is known from the work of Hale and Verduyn Lunel \cite[Theorem 3.5 on page 275]{HaleVerduynLunel-book} on delay equations that $0$ is exponentially stable (in $L^2((0,1);\mathbb{R}^n)$) for \eqref{time-delay-system} if and only if there exists $\delta > 0$ such that \begin{equation}\label{cond-LN1} \Big( \mbox{det} \big(Id_n - \big(\mbox{diag} (e^{- r_1 z}, \cdots, e^{- r_n z})\big)K \big) = 0 , z \in \mathbb{C} \Big) \implies \mathrm{Re}(z) \le - \delta. \end{equation} For many applications it is interesting to have an exponential stability of \eqref{time-delay-system} which is robust with respect to the small changes on the $\Lambda_i$'s (or, equivalently, on the $r_i$'s), i.e., the speeds of propagation. One says that the exponential stability of $0$ for \eqref{time-delay-system} is robust with respect to the small changes on the $r_i$'s if there exists $\varepsilon \in (0, \min\{r_1,r_2,\cdots,r_n\})$ such that, for every $(\tilde r_1,\tilde r_2,\cdots,\tilde r_n)\in \mathbb{R}^n$ satisfying \begin{equation}\label{Lambda} |\tilde r_i- r_i|\leq \varepsilon \quad \mbox{ for } i =1, \cdots, n, \end{equation} $0$ is exponentially stable (in $L^2((0,1);\mathbb{R}^n)$) for \begin{equation}\label{eqperturb} \phi_i(t) = \sum_{j=1}^n K_{ij} \phi_j(t - \tilde r_j) \quad \mbox{ for } i =1, \cdots, n.
\end{equation} Silkowski (see, e.g., \cite[Theorem 6.1 on page 286]{HaleVerduynLunel-book}) proved that $0$ is exponentially stable (in $L^2((0,1);\mathbb{R}^n)$) for \eqref{time-delay-system} with an exponential stability which is robust with respect to the small changes on the $r_i$'s if and only if \begin{equation}\label{cond-LN-2} \hat \rho_0 \big(K \big) < 1. \end{equation} Here \begin{equation} \hat \rho_0(K) : = \max \Big\{ \rho \big( \mbox{diag} (e^{i \theta_1}, \cdots, e^{ i \theta_n}) K \big); \theta_i \in \mathbb{R} \Big\}, \end{equation} where, for $M\in {\cal M}_{n \times n}(\mathbb{R})$, $\rho(M)$ denotes the spectral radius of $M$. In fact, Silkowski proved that, if the $r_i$'s are rationally independent, i.e., if \begin{equation} \label{rat-ind} \left(\sum_{i=1}^n q_ir_i=0 \text{ and } q:=(q_1,\cdots, q_n)^T\in \mathbb{Q}^n\right) \implies \left( q=0\right), \end{equation} then $0$ is exponentially stable (in $L^2((0,1);\mathbb{R}^n)$) for \eqref{time-delay-system} if and only if \eqref{cond-LN-2} holds. In \eqref{rat-ind} and in the following, $\mathbb{Q}$ denotes the set of rational numbers. The nonlinear case has been considered in the literature for more than three decades. To our knowledge, the first results are due to Slemrod in \cite{Slemrod} and Greenberg and Li in \cite{GreenbergLi} in two dimensions, i.e., $n=2$. These results were later generalized to higher dimensions. All these results rely on a systematic use of direct estimates of the solutions and their derivatives along the characteristic curves. The weakest sufficient condition in this direction was obtained by Qin \cite{1985-Qin-Tie-hu}, Zhao \cite{1986-Zhao-Yan-chun} and Li \cite[Theorem 1.3 on page 173]{Li-book}. In these references, it is proved that $0$ is exponentially stable for system \eqref{system} with respect to the $C^1$-norm if \begin{equation}\label{cond-Li} \hat \rho_{\infty} \big(K \big) < 1.
\end{equation} Here and in the following \begin{equation} \hat \rho_p (M): = \inf \big\{ \|\Delta M \Delta^{-1} \|_p; \; \Delta \in {\cal D}_{n, +} \big\} \quad \mbox{ for every } M \in {\cal M}_{n \times n}(\mathbb{R}), \end{equation} where ${\cal D}_{n, + }$ denotes the set of all $n \times n$ real diagonal matrices whose entries on the diagonal are strictly positive, with, for $1 \le p \le \infty$, \begin{gather} \label{def|x|p} \|x \|_p:=\Big(\sum_{i=1}^n|x_i|^p\Big)^{1/p} \quad \forall x:=(x_1,\cdots,x_n)^T \in \mathbb{R}^n,\, \forall p\in [1,+\infty), \\ \label{def|x|infty} \|x \|_\infty:=\max\left\{|x_i|;\, i\in \{1,\cdots,n\}\right\} \quad \forall x:=(x_1,\cdots,x_n)^T \in \mathbb{R}^n, \\ \label{def|P|p} \| M\|_p : = \max_{\|x \|_p = 1} \|M x \|_p\quad \forall M \in {\cal M}_{n \times n}(\mathbb{R}). \end{gather} (In fact, in \cite{Li-book,1985-Qin-Tie-hu,1986-Zhao-Yan-chun}, $K$ is assumed to have a special structure; however it was pointed out in \cite{2003-De-Halleux-et-al-Automatica} that the case of a general $K$ can be reduced to the case of this special structure.) We will see later that \eqref{cond-Li} is also a sufficient condition for the exponential stability with respect to the $W^{2, \infty}$-norm (see Theorem~\ref{thm2}). Robustness issues of the exponential stability were studied by Prieur et al. in \cite{2008-Prieur-Winkin-Bastin-MCSS} using again direct estimates of the solutions and their derivatives along the characteristic curves. Using a totally different approach, which is based on a Lyapunov stability analysis, a new criterion on the exponential stability is obtained in \cite{2008-Coron-Bastin-Novel-SICON}: it is proved in this paper that $0$ is exponentially stable for system \eqref{system} with respect to the $H^2$-norm if \begin{equation}\label{cond-Coron} \hat \rho_{2} \big(K\big) < 1.
\end{equation} This result extends a previous one obtained in \cite{2007-Coron-Andrea-Novel-Bastin-IEEE} where the same result is established under the assumption that $n=2$ and $F$ is diagonal. See also the prior works \cite{1974-Rauch-Taylor-IUMJ} by Rauch and Taylor, and \cite{02Xu-Sallet} by Xu and Sallet in the case of linear hyperbolic systems. It is known (see \cite{2008-Coron-Bastin-Novel-SICON}) that \begin{equation*} \hat \rho_0(M) \le \hat \rho_2 (M) \le \hat \rho_\infty (M) \end{equation*} and that the second inequality is strict in general if $n \ge 2$: for $n \ge 2$ there exists $M \in {\cal M}_{n, n}(\mathbb{R}) $ such that \begin{equation}\label{compare-1} \hat \rho_2(M) < \hat \rho_\infty(M). \end{equation} In fact, let $a > 0$ and define \begin{equation*} M : = \left( \begin{array}{cc} a & a \\[6pt] -a & a \end{array}\right). \end{equation*} Then \begin{equation*} \hat \rho_2(M) = \sqrt{2} a \end{equation*} and \begin{equation*} \hat \rho_\infty(M) = 2 a. \end{equation*} This implies \eqref{compare-1} in the case $n=2$. The case $n \ge 3$ follows similarly by considering the matrices \begin{equation*} \left(\begin{array}{cc} M & 0 \\[6pt] 0 & 0 \end{array}\right) \in {\cal M}_{n, n}(\mathbb{R}). \end{equation*} The Lyapunov approach introduced in \cite{2008-Coron-Bastin-Novel-SICON} has been shown in \cite{2014-Coron-Bastin} to be applicable to the study of the exponential stability with respect to the $C^1$-norm. It gives a new proof that \eqref{cond-Li} implies that $0$ is exponentially stable for system \eqref{system} with respect to the $C^1$-norm. The result obtained in \cite{2008-Coron-Bastin-Novel-SICON} is sharp for $n \le 5$. In fact, they established in \cite{2008-Coron-Bastin-Novel-SICON} the following result: \begin{equation*} \hat \rho_0 = \hat \rho_2 \quad \mbox{ for } n=1,\, 2, \, 3, \, 4, \, 5.
\end{equation*} For $n \ge 6$, they showed that there exists $M \in {\cal M}_{n, n}(\mathbb{R})$ such that \begin{equation*} \hat \rho_0(M) < \hat \rho_2(M). \end{equation*} Taking into account these results, a natural question is the following: does $\hat \rho_2(K)<1$ imply that $0$ is exponentially stable for \eqref{system} with respect to the $C^1$-norm? We give a negative answer to this question and prove that the condition $\hat \rho_\infty (K)< 1$ is, in some sense, optimal for the exponential stability with respect to the $C^1$-norm (Theorem~\ref{thm1}). Hence, different norms require different criteria for the exponential stability with respect to them. Let us emphasize that this phenomenon is due to the nonlinearities: it does not appear when $F$ is constant. We then show that the condition $\hat \rho_p (K)< 1$ is sufficient to obtain the exponential stability with respect to the $W^{2, p}$-norm (Theorem~\ref{thm2}). The method used in this paper is strongly inspired by the theory of linear time-delay systems and incorporates the characteristic method. In order to state precisely our first result, we need to recall the compatibility conditions in connection with the well-posedness for the Cauchy problem associated to \eqref{system}. Let $m\in \mathbb{N}$. Let $\mathcal{H}: C^0([0,1];\mathbb{R}^n)\rightarrow C^0([0,1];\mathbb{R}^n)$ be a map of class $C^m$. For $k\in \{0,1,\ldots, m\}$, we define, by induction on $k$, $D^k\mathcal{H}: C^k([0,1];\mathbb{R}^n)\rightarrow C^0([0,1];\mathbb{R}^n)$ by \begin{gather} \label{defDO} (D^0\mathcal{H})(u):=\mathcal{H}(u) \quad \forall u \in C^0([0,1];\mathbb{R}^n), \\ (D^k\mathcal{H})(u):=\big((D^{k-1}\mathcal{H})'(u)\big)F(u)u_x \quad \forall \; u\in C^k([0,1];\mathbb{R}^n), \, \forall k\in \{1,\ldots, m\}.
\end{gather} For example, if $m=2$, \begin{equation} (D^1\mathcal{H})(u)=\mathcal{H}'(u)F(u)u_x \quad \forall u\in C^1([0,1];\mathbb{R}^n), \end{equation} \begin{multline} (D^2\mathcal{H})(u)=\mathcal{H}''(u)\big(F(u)u_x,F(u)u_x\big)+\mathcal{H}'(u)\big(F'(u)F(u)u_x\big)u_x \\ +\mathcal{H}'(u)F(u)\big((F'(u)u_x)u_x+F(u)u_{xx}\big)\quad \forall u\in C^2([0,1];\mathbb{R}^n). \end{multline} Let $\mathcal{I}$ be the identity map from $C^0([0,1];\mathbb{R}^n)$ into $C^0([0,1];\mathbb{R}^n)$ and let $\mathcal{G}: C^0([0,1];\mathbb{R}^n)\rightarrow C^0([0,1];\mathbb{R}^n)$ be defined by \begin{equation}\label{defcalG} \big(\mathcal{G}(v)\big)(x)=G\big(v(x)\big) \quad \text{for every } v \in C^0([0,1];\mathbb{R}^n) \text{ and for every } x\in [0,1]. \end{equation} Let $u^0\in C^m([0,1];\mathbb{R}^n)$. We say that $u^0$ satisfies the compatibility conditions of order $m$ if \begin{equation}\label{condition-order-m} ((D^k\mathcal{I})(u^0))(0)= ((D^k\mathcal{G})(u^0))(1)\quad \text{for every } k\in \{0,1,\ldots, m\} . \end{equation} For example, for $m=1$, $u^0\in C^1([0,1];\mathbb{R}^n)$ satisfies the compatibility conditions of order 1 if and only if \begin{gather} \label{compatibilty-C1-0} u^0(0)=G\big(u^0(1)\big), \\ \label{compatibilty-C1-1} F\big(u^0(0)\big) u^0_x(0) = G' \big(u^0(1) \big) F\big(u^0(1) \big)u^0_x(1). \end{gather} With this definition of the compatibility conditions of order $m$, we can recall the following classical theorem due to Li and Yu \cite[Chapter 4]{LiYu} on the well-posedness of the Cauchy problem associated to \eqref{system}. \begin{theorem} \label{wellposedCm} Let $m\in \mathbb{N}\setminus\{0\}$. Let $T>0$.
There exist $\varepsilon>0$ and $C>0$ such that, for every $u^0\in C^m([0,1];\mathbb{R}^n)$ satisfying the compatibility conditions of order $m$ \eqref{condition-order-m} and such that $\| u^0\|_{C^m([0,1];\mathbb{R}^n)}\leq \varepsilon$, there exists one and only one solution $u\in C^m([0,T]\times [0,1];\mathbb{R}^n)$ of \eqref{system} satisfying the initial condition $u(0,\cdot)=u^0$. Moreover, \begin{equation}\label{inedCmCauchy} \| u \|_{C^m([0,T]\times [0,1];\mathbb{R}^n)}\leq C \| u^0 \|_{C^m([0,1];\mathbb{R}^n)}. \end{equation} \end{theorem} \begin{remark} In fact \cite[Chapter 4]{LiYu} deals only with the case $m=1$; however the proof given there can be adapted to treat the case $m\geq 2$. \end{remark} We can now define the notion of exponential stability with respect to the $C^m$-norm. \begin{definition} \label{defexpC1} The equilibrium solution $u \equiv 0$ is exponentially stable for system \eqref{system} with respect to the $C^m$-norm if there exist $\varepsilon > 0$, $\nu > 0$ and $C>0$ such that, for every $u^0\in C^m([0,1];\mathbb{R}^n)$ satisfying the compatibility conditions of order $m$ \eqref{condition-order-m} and such that $\| u^0 \|_{C^m([0,1];\mathbb{R}^n)}\leq \varepsilon$, there exists one and only one solution $u\in C^m([0,+\infty)\times [0,1];\mathbb{R}^n)$ of \eqref{system} satisfying the initial condition $u(0,\cdot)=u^0$ and this solution satisfies \begin{equation*} \|u(t, \cdot)\|_{C^m([0,1];\mathbb{R}^n)} \le C e^{- \nu t} \| u^0\|_{C^m([0,1];\mathbb{R}^n)} \quad \forall \, t > 0. \end{equation*} \end{definition} With this definition, let us return to the results which are already known concerning the exponential stability with respect to the $C^m$-norm. \begin{itemize} \item [(i)] \textbf{For linear $F$ and $G$.} Let $m\in \mathbb{N}$. If $\hat \rho_0\big (G'(0)\big )<1$, then $0$ is exponentially stable for system \eqref{system} with respect to the $C^m$-norm and the converse holds if the $r_i$'s are rationally independent.
This result was proved for the $L^2$-norm. But the proof can be adapted to treat the case of the $C^m$-norm. \item [(ii)] \textbf{For general $F$ and $G$.} Let $m\in \mathbb{N}\setminus\{0\}$. If $\hat \rho_\infty \big (G'(0)\big)<1$, then $0$ is exponentially stable for system \eqref{system} with respect to the $C^m$-norm. This result was proved only for the case $m=1$. However the proofs given in \cite{Li-book, 1985-Qin-Tie-hu, 1986-Zhao-Yan-chun} for this case can be adapted to treat the case $m\geq 2$. \item [(iii)] \textbf{For general $F$ and $G$, and $n=1$.} Let $m\in \mathbb{N}\setminus\{0\}$. Then $0$ is exponentially stable for system \eqref{system} with respect to the $C^m$-norm if and only if $\hat \rho_0\big (G'(0)\big )<1$. Note that, for $n=1$, the $\hat \rho_p\big (G'(0)\big )$'s do not depend on $p\in [1,+\infty]$: they are all equal to $|G'(0)|$. \end{itemize} The first result of this paper is the following one. \begin{theorem}\label{thm1} Let $m\in \mathbb{N}\setminus\{0\}$, $n \ge 2$ and $\tau >0$. There exist $F \in C^\infty (\mathbb{R}^n; {\cal M}_{n \times n}(\mathbb{R}))$ and a linear map $G: \mathbb{R}^n \to \mathbb{R}^n$ such that $F$ is diagonal, $F(0)$ has distinct positive eigenvalues, \begin{equation} \hat \rho_\infty\big( G'(0)\big) < 1 + \tau, \,\hat \rho_0\big( G'(0)\big)=\hat \rho_2\big( G'(0)\big) < 1 \end{equation} and $0$ is \textbf{not} exponentially stable for system \eqref{system} with respect to the $C^m$-norm. \end{theorem} The second result of this paper is on a sufficient condition for the exponential stability with respect to the $W^{2, p}$-norm. In order to state it, we use the following definition, adapted from Definition~\ref{defexpC1}. \begin{definition} \label{defexpW2p} Let $p\in [1,+\infty]$. 
The equilibrium solution $u \equiv 0$ is exponentially stable for \eqref{system} with respect to the $W^{2,p}$-norm if there exist $\varepsilon > 0$, $\nu > 0$ and $C>0$ such that, for every $u^0\in W^{2,p}((0,1);\mathbb{R}^n)$ satisfying the compatibility conditions of order 1 \eqref{compatibilty-C1-0}-\eqref{compatibilty-C1-1} and such that \begin{equation}\label{u0petitC1} \|u^0\|_{W^{2,p}((0,1) ;\mathbb{R}^n)} \le \varepsilon, \end{equation} there exists one and only one solution $u\in C^1([0,+\infty)\times [0,1];\mathbb{R}^n)$ of \eqref{system} satisfying the initial condition $u(0,\cdot)=u^0$ and this solution satisfies \begin{equation*} \|u(t, \cdot) \|_{W^{2,p}((0,1);\mathbb{R}^n)}\le C e^{- \nu t} \| u^0 \|_{W^{2,p}((0,1);\mathbb{R}^n)} \quad \forall \, t > 0. \end{equation*} \end{definition} Again, for every $T>0$, for every initial condition $u^0\in W^{2,p}((0,1);\mathbb{R}^n)$ satisfying the compatibility conditions \eqref{compatibilty-C1-0}-\eqref{compatibilty-C1-1} and such that $\| u^0 \|_{W^{2,p}((0,1);\mathbb{R}^n)}$ is small enough, there exists a unique $C^1$ solution $u\in L^\infty([0,T];W^{2,p}((0,1);\mathbb{R}^n))$ of \eqref{system} satisfying the initial condition $u(0,\cdot)=u^0$ (and this solution is in $C^0([0,T];W^{2,p}((0,1);\mathbb{R}^n))$ if $p\in [1,+\infty)$). The (sketch of) proof given in \cite{2008-Coron-Bastin-Novel-SICON} of this result for $p=2$ can be adapted to treat the other cases. Our next result is the following theorem. \begin{theorem}\label{thm2} Let $p\in [1,+\infty]$. Assume that \begin{equation} \hat \rho_{p} \big(G'(0)\big) < 1. \end{equation} Then, the equilibrium solution $u \equiv 0$ of the system \eqref{system} is exponentially stable with respect to the $W^{2,p}$-norm. \end{theorem} Let us recall that the case $p=2$ is proved in \cite{2008-Coron-Bastin-Novel-SICON}. Let us emphasize that, even in this case, our proof is completely different from the one given in \cite{2008-Coron-Bastin-Novel-SICON}.
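As a concrete illustration of the quantities $\hat \rho_p$ appearing in Theorems~\ref{thm1} and~\ref{thm2}, one can verify by hand the values $\hat \rho_\infty(M) = 2a$ and $\hat \rho_2(M) = \sqrt{2}\,a$ claimed in the introduction for the matrix $M$ with rows $(a,a)$ and $(-a,a)$; the following computation is a sketch of ours, not part of the original argument.

```latex
% Sketch (ours): for Delta = diag(d_1, d_2) set t := d_1/d_2 > 0. Then
\[
  \Delta M \Delta^{-1}
  = a \begin{pmatrix} 1 & t \\ -1/t & 1 \end{pmatrix} =: N .
\]
% Infinity-norm (maximal absolute row sum):
\[
  \|N\|_\infty = a \max\{1+t,\; 1+1/t\} \;\geq\; 2a ,
\]
% with equality at t = 1, whence \hat\rho_\infty(M) = 2a.
% 2-norm: the largest eigenvalue of the symmetric matrix N^T N is at
% least half its trace, so
\[
  \|N\|_2^2 \;\geq\; \tfrac{1}{2}\operatorname{tr}(N^T N)
  = \tfrac{a^2}{2}\bigl(2 + t^2 + t^{-2}\bigr) \;\geq\; 2a^2 ,
\]
% again with equality at t = 1 (where N^T N = 2a^2 Id_2),
% whence \hat\rho_2(M) = \sqrt{2}\, a.
```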
\begin{remark} The notations on various conditions on exponential stability used in this paper are different from the ones in \cite{2008-Coron-Bastin-Novel-SICON}. In fact, one has \begin{equation*} \hat \rho_0 = \rho_0, \quad \hat \rho_2 = \rho_1, \quad \mbox{ and } \quad \hat \rho_\infty = \rho_2. \end{equation*} Here $\rho_0$, $\rho_1$, and $\rho_2$ are the notations used in \cite{2008-Coron-Bastin-Novel-SICON}. \end{remark} The paper is organized as follows. In Sections~\ref{sect-thm1} and \ref{sect-thm2}, we establish Theorems~\ref{thm1} and \ref{thm2} respectively. \section{Proof of Theorem~\ref{thm1}}\label{sect-thm1} We give the proof in the case $n=2$. The general case $n\geq 2$ follows immediately from the case considered here. Let $F\in C^{\infty}(\mathbb{R}^2;\mathcal{M}_{2 \times 2}(\mathbb{R}))$ be such that \begin{equation} \label{defF} F(u) = \left( \begin{array}{cc} \Lambda_1 & 0 \\[6pt] 0 & \displaystyle \frac{1}{ r_2 + u_2} \end{array} \right) \quad \forall u=(u_1,u_2)^T\in \mathbb{R}^2 \text{ with } u_2>-\frac{r_2}{2}, \end{equation} for some $0 < \Lambda_1 < \Lambda_2$. We recall that \begin{equation*} r_1 = 1/ \Lambda_1 \quad \mbox{ and } \quad r_2 = 1/ \Lambda_2. \end{equation*} We assume that $r_1$ and $r_2$ are rationally independent, i.e., \begin{equation}\label{independence} \left(k_1 r_1 + k_2 r_2 = 0\text{ and } (k_1,k_2)^T\in \mathbb{Z}^2\right)\implies \left(k_1 = k_2 = 0\right). \end{equation} Define $G: \mathbb{R}^2 \to \mathbb{R}^2 $ as the following linear map \begin{equation} \label{defG} G(u) := a\left( \begin{array}{cc} 1 & \xi \\ -1 & \eta \\ \end{array} \right) u \quad \text{for } u\in \mathbb{R}^2. \end{equation} Here $a> 0$ and $\xi, \eta$ are two positive numbers such that \begin{equation}\label{cond-xieta} \mbox{ if } P_k(\xi, \eta) = 0 \quad \mbox{ then } \quad P_k \equiv 0, \end{equation} for every polynomial $P_k$ of degree $k$ ($k \ge 0$) with rational coefficients.
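At the reference point ${\xi = \eta = 1}$ the relevant norms of this $G$ can be computed exactly; the following sketch is ours and is only meant as an illustration.

```latex
% Sketch (ours): for xi = eta = 1 the matrix G is a multiple of a rotation:
\[
  G = a \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}
    = a\sqrt{2}\,
      \begin{pmatrix} \cos\tfrac{\pi}{4} & \sin\tfrac{\pi}{4} \\[3pt]
                      -\sin\tfrac{\pi}{4} & \cos\tfrac{\pi}{4} \end{pmatrix},
\]
% so ||Gx||_2 = a\sqrt{2} ||x||_2 for every x, which gives
% \hat\rho_2(G) <= ||G||_2 = a\sqrt{2}. Conversely, the eigenvalues of G
% are a(1 \pm i), of modulus a\sqrt{2}, so taking theta_1 = theta_2 = 0
% in the definition of \hat\rho_0 gives \hat\rho_0(G) >= \rho(G) = a\sqrt{2}.
% Combined with \hat\rho_0 <= \hat\rho_2 this yields
\[
  \hat\rho_0(G) = \hat\rho_2(G) = a\sqrt{2},
\]
% which is < 1 precisely when a < 1/\sqrt{2}.
```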
Note that if \begin{equation} a \mbox{ is close to } 1/2 \quad \mbox{ and } \quad \xi, \eta \mbox{ are close to 1}, \end{equation} then \begin{equation} \hat \rho_\infty(G) \mbox{ is close to } 1 \end{equation} and \begin{equation} \hat \rho_0(G) = \hat \rho_2(G) \mbox{ are close to } \frac{1}{\sqrt{2}}<1. \end{equation} Here, and in the following, for notational ease, we use the convention $G = K = G'(0)$. Let $\tau_0>1$ (which will be defined below). We take $a \in \mathbb{Q}$, $a>1/2$ but close to $1/2$ and choose $\xi, \eta >1$ but close to 1 so that \begin{gather} \label{rhoinftyGbon} \hat \rho_\infty (G) < \tau_0, \\ \label{axietapres} a(1+\xi+\eta )\leq 2, \end{gather} and there exists $ c>0 $ such that \begin{equation}\label{def-c} \frac{\max\{\xi, \eta \}}{a(\xi + \eta)} < c < 1. \end{equation} We also impose that $\xi, \, \eta$ satisfy \eqref{cond-xieta}. We start with the case $m=1$. We argue by contradiction. We assume that there exists $\tau_0 > 1$ such that for all $G$ with $\hat \rho_\infty\big( G'(0) \big) < \tau_0$, there exist positive numbers $\varepsilon_0$, $C_0$, $\nu$ such that \begin{equation} \label{udecroitexp} \| u(t, \cdot ) \|_{C^1([0,1];\mathbb{R}^2)} \le C_0 e^{-\nu t } \|u^0 \|_{C^1([0,1];\mathbb{R}^2)}, \end{equation} if $u^0\in C^1([0,1];\mathbb{R}^2)$ satisfies the compatibility conditions \eqref{compatibilty-C1-0}-\eqref{compatibilty-C1-1} and is such that $\| u^0 \|_{C^1([0,1];\mathbb{R}^2)} \le \varepsilon_0$. Here $u$ denotes the solution of \eqref{system} satisfying the initial condition $u(0,\cdot)=u^0$. Assume that $u \in C^1([0, + \infty) \times [0, 1];\mathbb{R}^2)$ is a solution to \eqref{system}. Define \begin{equation*} v(t) = u(t, 0). \end{equation*} Then \begin{equation}\label{Delay1} v\Big(t + r_2 + v_2(t) \Big) = v_1\Big(t + r_2 + v_2(t) - r_1\Big) G_1 + v_2(t) G_2, \end{equation} where $G_1$ and $G_2$ are the first and the second columns of $G$. Equation \eqref{Delay1} motivates our construction below.
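To see where \eqref{Delay1} comes from, here is a sketch (ours) of the characteristics computation behind it.

```latex
% Sketch (ours). Since F is diagonal, each component u_i is constant
% along its own characteristics. The first family has constant speed
% Lambda_1 = 1/r_1, so the characteristic reaching (s, 1) left x = 0
% at time s - r_1:
\[
  u_1(s,1) = u_1(s - r_1, 0) = v_1(s - r_1).
\]
% The second family has speed 1/(r_2 + u_2); since u_2 is constant
% along it, the characteristic issued from (t, 0) crosses [0,1] in
% time r_2 + v_2(t), i.e.
\[
  u_2\bigl(t + r_2 + v_2(t),\, 1\bigr) = u_2(t,0) = v_2(t).
\]
% Writing the boundary condition u(s,0) = G u(s,1) columnwise,
% u(s,0) = u_1(s,1) G_1 + u_2(s,1) G_2, and evaluating it at
% s = t + r_2 + v_2(t) then yields \eqref{Delay1}.
```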
Fix $T > 0$ (arbitrarily large) such that \begin{equation*} T - (k r_1 + l r_2) \neq 0\quad \mbox{ for every } k, l \in \mathbb{N}. \end{equation*} Let $\varepsilon\in (0,1)$ be (arbitrarily) small such that \begin{equation}\label{nodefect} \inf_{k, l \in \mathbb{N}} |T - (kr_1 + lr_2)| \ge \varepsilon. \end{equation} (Note that the smallness of $\varepsilon$ required to have \eqref{nodefect} depends on $T$: it goes to $0$ as $T\rightarrow +\infty$.) Let $n$ be the integer part of $T/r_2$ plus $1$. In particular $ n r_2 > T$. Fix $n$ rational points $(s_i^0, t_i^0)^T \in \mathbb{Q}^2$, $i = 1, \cdots, n$, such that their coordinates are distinct, i.e., $s_i^0 \neq s_j^0$, $t_i^0 \neq t_j^0$ for $i \neq j$, and \begin{equation}\label{initial} \|(s_i^0, t_i^0) \|_\infty \le \varepsilon^3/ 4^n \quad \mbox{ for every } i \in \{1,\cdots,n\}. \end{equation} For $0 \le k \le n-1$, we define $(s_i^{k+1}, t_i^{k+1})^T$ for $i=1, \cdots, n-(k+1)$ by induction as follows \begin{equation}\label{construction-st} (s_i^{k+1}, t_i^{k+1})^T = G (s_i^k, t_{i+1}^k)^T = a \left( \begin{array}{c} s_i^k + \xi t_{i+1}^k \\[6pt] - s_i^k + \eta t_{i+1}^k \end{array}\right). \end{equation} Set \begin{equation} \label{initVetdV} V(T) := (s_1^n, t_1^n)^T, \quad dV(T) = \varepsilon (1, 0)^T. \end{equation} Define \begin{gather} \label{Tinit} T_{1} := T- r_1, \quad T_{2} := T - r_2 - t_{2}^{n-1}, \\ \label{Vinit} V(T_{1}) = (s_1^{n-1}, t_1^{n-1})^T, \quad V(T_{2}) = (s_2^{n-1}, t_2^{n-1})^T, \\ \label{dVinit} dV(T_1) = \varepsilon \Big(\frac{\eta}{a(\xi + \eta)}, 0 \Big)^T, \quad dV(T_2) = \varepsilon \Big(0, \frac{1}{a(\xi + \eta)} \Big)^T. \end{gather} Assume that $T_{\gamma_1 \cdots \gamma_k}$ is defined for $\gamma_i = 1, 2$. Set \begin{equation}\label{def-t1} T_{\gamma_1 \cdots \gamma_k 1} = T_{\gamma_1 \cdots \gamma_k} - r_1 \end{equation} and \begin{equation}\label{def-t2} T_{\gamma_1 \cdots \gamma_k 2} = T_{\gamma_1 \cdots \gamma_k} - r_2 - t_{1 + l}^{n-(k+1)},
\end{equation} where\footnote{Roughly speaking, $l$ is the number of indices $j \le k$ with $\gamma_j = 2$, i.e., the number of times $r_2$ occurs in the construction of $T_{\gamma_1 \cdots \gamma_k}$. } \begin{equation}\label{def-l} l = \sum_{j = 1}^k (\gamma_j - 1). \end{equation} Note that, by \eqref{initial}, \eqref{construction-st}, \eqref{Tinit}, \eqref{def-t1}, \eqref{def-t2} and \eqref{def-l}, \begin{equation}\label{estTgamma} \Big|T_{\gamma_1\cdots\gamma_k}-T+kr_1+(r_2-r_1) \sum_{j=1}^k\left(\gamma_j-1\right)\Big|\leq C \varepsilon^3 \quad\forall k\in\{1,\cdots,n\}, \end{equation} for some $C>0$ which is independent of $T>r_1$ and $\varepsilon \in (0,+\infty)$. We claim that \begin{equation}\label{T} \text{the }T_{\gamma_1 \cdots \gamma_k}, \, k\in\{1,\cdots,n-1\}, \mbox{ are distinct}. \end{equation} (See Fig.~\ref{ref-Tgamma-fig}.) We admit this fact, which will be proved later on, and continue the proof. Define $V(T_{\gamma_1 \cdots \gamma_k \gamma_{k+1}})$ and $dV (T_{\gamma_1 \cdots \gamma_k \gamma_{k+1}})$ as follows \begin{equation}\label{def-v} V (T_{\gamma_1 \cdots \gamma_k \gamma_{k+1}}) = (s_{1 + l}^{n-(k+1)}, t_{1 + l}^{n - (k+1)})^T \end{equation} and \begin{equation}\label{def-dv} dV (T_{\gamma_1 \cdots \gamma_k 1}) = (x, 0)^T \quad \mbox{ and } \quad dV (T_{\gamma_1 \cdots \gamma_k 2}) = (0, y)^T, \end{equation} where $l$ is given by \eqref{def-l} and the real numbers $x, y$ are chosen such that \begin{equation} \label{defxy} G (x, y) ^T = dV (T_{\gamma_1 \cdots \gamma_k}). \end{equation} Let us also point out that, by \eqref{dVinit} and \eqref{def-dv}, \begin{equation}\label{aumoinsunecompsantenulle} \text{at least one of the two components of $dV (T_{\gamma_1 \cdots \gamma_k})$ is 0.} \end{equation} From \eqref{defG}, we have \begin{equation}\label{G-1} G^{-1}=\frac{1}{a(\eta+\xi)} \begin{pmatrix} \eta & -\xi \\ 1 & 1 \end{pmatrix}.
\end{equation} It follows from \eqref{def-c}, \eqref{def-dv}, \eqref{defxy}, \eqref{aumoinsunecompsantenulle} and \eqref{G-1} that \begin{equation}\label{cond-dv} \|dV (T_{\gamma_1 \cdots \gamma_k \gamma_{k+1}})\|_\infty \le c \| dV (T_{\gamma_1 \cdots \gamma_k}) \|_\infty. \end{equation} Using \eqref{T}, we may construct $\mathfrak{v}\in C^1([0, r_1];\mathbb{R}^2)$ such that \begin{equation}\label{def-v-1} \mathfrak{v}' (T_{\alpha_1 \cdots \alpha_k}) = dV(T_{\alpha_1 \cdots \alpha_k}), \end{equation} and \begin{equation}\label{def-v-2} \mathfrak{v} (T_{\alpha_1 \cdots \alpha_k}) = V(T_{\alpha_1 \cdots \alpha_k}), \end{equation} if \begin{equation*} T_{\alpha_1 \cdots \alpha_k} \in (0, r_1), \end{equation*} (recall that $r_1 > r_2 > 0$ and $n r_2 > T$). It follows from \eqref{axietapres}, \eqref{initial}, \eqref{construction-st}, \eqref{def-v} and \eqref{def-v-2} that \begin{equation}\label{norm-v} \| \mathfrak{v}(T_{\alpha_1 \cdots \alpha_k})\|_\infty \le \varepsilon^3 \quad \mbox{ if } T_{\alpha_1 \cdots \alpha_k} \in (0, r_1). \end{equation} Let $T_{\alpha_1 \cdots \alpha_k} \in (0, r_1)$ and $T_{\gamma_1 \cdots \gamma_m}\in (0, r_1)$ be such that \begin{equation} \label{vdifferent} \mathfrak{v}(T_{\alpha_1 \cdots \alpha_k}) \neq \mathfrak{v}(T_{\gamma_1 \cdots \gamma_m}). \end{equation} From \eqref{construction-st}, \eqref{def-v}, \eqref{def-v-2} and \eqref{vdifferent}, we get that \begin{equation}\label{cnpournonequality} k\not = m \text{ or } \mbox{card}\{i\in \{1,\cdots,k\};\, \alpha_i=1\}\not = \mbox{card}\{i\in \{1,\cdots,m\};\, \gamma_i=1\}. \end{equation} See also Fig. \ref{ref-Tgamma-fig}. \begin{figure}\label{ref-Tgamma-fig} \end{figure} From \eqref{nodefect}, \eqref{Tinit}, \eqref{def-t1}, \eqref{def-t2} and \eqref{cnpournonequality}, we get that, at least if $\varepsilon >0 $ is small enough, \begin{equation}\label{dist-v} |T_{\alpha_1 \cdots \alpha_k} - T_{\gamma_1 \cdots \gamma_m}| \ge \varepsilon/ 2. 
\end{equation} Using \eqref{nodefect}, \eqref{norm-v} and \eqref{dist-v}, we may also impose that \begin{gather} \label{v=0presde0} \text{$\mathfrak{v} = 0$ in a neighborhood of $0$ in $[0, r_1]$,} \\ \label{v=0presder1} \text{$\mathfrak{v} = 0$ in a neighborhood of $r_1$ in $[0, r_1]$,} \\ \label{v=0presder2} \text{$\mathfrak{v} = 0$ in a neighborhood of $r_2$,} \\ \label{v-1} \|\mathfrak{v} \|_{C^1([0, r_1])} \le C \max\{\varepsilon^2, A \}, \end{gather} where \begin{equation} \label{defA} A : = \max\big\{ \|dV(T_{\alpha_1 \cdots \alpha_k})\|_\infty; \; T_{\alpha_1 \cdots \alpha_k} \in (0, r_1) \big\}. \end{equation} In \eqref{v-1}, $C$ denotes a positive constant which does not depend on $T>r_1$ and on $\varepsilon>0$ provided that $\varepsilon>0$ is small enough, this smallness depending on $T$. We use this convention until the end of this section; the constants $C$ may vary from one place to another. Note that if $T_{\alpha_1 \cdots \alpha_k} \in (0, r_1)$ then \begin{equation*} k r_1 > T/2. \end{equation*} It follows that \begin{equation*} k > T/ (2 r_1), \end{equation*} which, together with \eqref{initVetdV}, \eqref{cond-dv} and $c\in(0,1)$, implies that \begin{equation} \label{dVpetit} \|dV(T_{\alpha_1 \cdots \alpha_k}) \|_\infty \le \varepsilon c^{T/ (2 r_1)}. \end{equation} From \eqref{v-1} and \eqref{dVpetit}, one has \begin{equation}\label{v-2} \| \mathfrak{v}\|_{C^1([0, r_1];\mathbb{R}^2)} \le C \max\big\{\varepsilon^2, \varepsilon c^{T/ (2 r_1)} \big\} \leq C\varepsilon c^{T/ (2 r_1)}. \end{equation} Let $\tilde u\in C^1([0,r_1]\times [0,1];\mathbb{R}^2)$ be the solution to the backward Cauchy problem \begin{equation}\label{tildeubackward} \left\{ \begin{array}{ll} \tilde u_t + F(\tilde u) \tilde u_x = 0 &\mbox{ for every } (t, x) \in [0,r_1] \times [0, 1], \\ \tilde u(t, 1) = G^{-1} \mathfrak{v}(t) &\mbox{ for every } t \in [0,r_1], \\ \tilde u(r_1,x)=0 &\mbox{ for every } x \in [0,1]. \end{array} \right.
\end{equation} Note that, by \eqref{v=0presder1}, the boundary condition at $x=1$ for the backward Cauchy problem \eqref{tildeubackward} vanishes in a neighborhood of $r_1$ in $[0,r_1]$ and therefore the necessary compatibility conditions for the existence of $\tilde u$, namely \begin{equation}\label{bondarycompatildeu} G^{-1} \mathfrak{v}(r_1)=0 \text{ and } G^{-1} \mathfrak{v}'(r_1)=0, \end{equation} are satisfied. Moreover, if $\varepsilon>0$ is small enough, this solution indeed exists by \cite[pp. 96-107]{LiYu}. Let $u^0 \in C^1([0, 1];\mathbb{R}^2)$ be defined by \begin{equation} \label{defu} u^0(x) := \tilde u(0,x)\quad \mbox{for every } x \in [0, 1]. \end{equation} Using \eqref{v-2}, \eqref{def-c} and the definition of $u^0$, we have \begin{equation}\label{choice-u0} \| u^0 \|_{C^1([0,1];\mathbb{R}^2)} \le C \| \mathfrak{v} \|_{C^1([0, r_1];\mathbb{R}^2)} \le C \max\big\{\varepsilon^2, \varepsilon c^{T/ (2 r_1)} \big\} \le C \varepsilon. \end{equation} Note that $u^0$ satisfies the compatibility conditions \eqref{compatibilty-C1-0} and \eqref{compatibilty-C1-1} since, by \eqref{v=0presder1} and \eqref{v=0presder2}, $u^0$ vanishes in a neighborhood of $0$ in $[0,1]$ and, by \eqref{v=0presde0}, $u^0$ vanishes in a neighborhood of $1$ in $[0,1]$. Let $u\in C^1([0,+\infty)\times[0,1];\mathbb{R}^2)$ be the solution of \eqref{system} satisfying the initial condition \begin{equation*} u(0,x) = u^0(x) \quad \mbox{ for every } x\in [0,1]. \end{equation*} Since $0$ is assumed to be exponentially stable for \eqref{system} with respect to the $C^1$-norm, $u$ exists for all positive time if $\varepsilon$ is small enough. Let us define $v\in C^1([0,+\infty);\mathbb{R}^2)$ by \begin{equation}\label{deffrakv} v(t):=u(t,0)\quad \mbox{ for every } t\in[0,+\infty). \end{equation} Then, by the constructions of $u$ and $\tilde u$, one has \begin{equation}\label{fracvetv} v(t)=\mathfrak{v}(t)\quad \text{for every } t\in [0,r_1].
\end{equation} Then, using \eqref{Delay1} together with the definition of $T_{\gamma_1 \cdots \gamma_k}$ and $V(T_{\gamma_1 \cdots \gamma_k})$, one has \begin{gather}\label{vandVTgamma} v(T_{\gamma_1 \cdots \gamma_k} )=V(T_{\gamma_1 \cdots \gamma_k}) \quad \text{if } T_{\gamma_1 \cdots \gamma_k}\in [0,T], \end{gather} with the convention that, if $k=0$, $T_{\gamma_1 \cdots \gamma_k}=T$. Differentiating \eqref{Delay1} with respect to $t$, we get \begin{equation} \label{equation-der-1} \big(1 + v_2'(t) \big) v'\Big(t + r_2 + v_2(t) \Big) = \big(1 + v_2'(t) \big) v_1'\Big(t + r_2 + v_2(t) - r_1\Big) G_1 + v_2'(t) G_2. \end{equation} It follows that \begin{equation} \label{expressionv'} v'\Big(t + r_2 + v_2(t) \Big) = v_1'\Big(t + r_2 + v_2(t) - r_1\Big) G_1 + v_2'(t) G_2 - \frac{v_2'(t)^2}{1 + v_2'(t)} G_2. \end{equation} From the definition of $dV$, \eqref{def-v-1}, \eqref{dVpetit}, \eqref{fracvetv} and \eqref{expressionv'}, one gets, for every $T>r_1$, the existence of $C(T)>0$ such that \begin{equation}\label{diffv'dV} | v'(T) - dV(T)| \le C(T) \varepsilon^2, \end{equation} provided that $\varepsilon$ is small enough (the smallness depending on $T$). In \eqref{diffv'dV} and in the following we use the notation \begin{equation}\label{def|x|} |x|:=\|x \|_2 \quad \forall x\in \mathbb{R}^n. \end{equation} From \eqref{system}, \eqref{udecroitexp} and \eqref{deffrakv}, \begin{equation} \label{expdecrea} |v'(t)| \le 2\Lambda_2 C_0 e^{-\nu t} \|u^0 \|_{C^1([0,1];\mathbb{R}^2)}\quad \text{for every } t\in [0,+\infty), \end{equation} provided that $\|u^0 \|_{C^1([0,1];\mathbb{R}^2)}\leq \varepsilon_0$. Using \eqref{initVetdV}, \eqref{choice-u0}, \eqref{diffv'dV} and \eqref{expdecrea}, one gets the existence of $C_1>0$ such that, for every $T>0$, there exist $C(T)>0$ and $\varepsilon(T)>0$ such that \begin{equation}\label{estimate-contradiction} 1\leq C_1 e^{-\nu T} + C(T) \varepsilon \quad \text{ for every } \varepsilon \in (0,\varepsilon(T)] .
\end{equation} We choose $T>0$ large enough so that $C_1 e^{-\nu T}\leq 1/2$. Then, letting $\varepsilon \rightarrow 0^+$ in \eqref{estimate-contradiction}, we get a contradiction. It remains to prove \eqref{T} in order to conclude the proof of Theorem~\ref{thm1} if $m=1$. Let us assume \begin{equation}\label{T-T} T_{\gamma_1 \cdots \gamma_{k}} = T_{\alpha_1 \cdots \alpha_{m}} \text{ with } k,m\in \{1,\ldots,n-1\} \end{equation} ($\gamma_i, \alpha_i =1, 2$). Using \eqref{independence} and \eqref{estTgamma}, we derive that \begin{equation}\label{ell} m=k, \, \mbox{card}\big\{i; \gamma_i = 2 \big\} = \mbox{card} \big\{i; \alpha_i = 2 \big\} =: \ell \end{equation} for some $0 \le \ell \le m $. Let $k_1 < \cdots < k_\ell$ and $m_1 < \cdots < m_\ell$ be such that \begin{equation*} \gamma_{k_l} = \alpha_{m_l} = 2 \quad \mbox{ for } 1\le l \le \ell. \end{equation*} Define \begin{equation*} i_l := \sum_{i=1}^{k_l} (\gamma_i - 1) \quad \mbox{ and } \quad j_l := \sum_{i=1}^{m_l} (\alpha_i - 1). \end{equation*} It follows from \eqref{def-t2}, \eqref{def-l}, and \eqref{T-T} that \begin{equation}\label{T-T-1} \sum_{l = 1}^\ell t_{i_l}^{n - k_l} = \sum_{l = 1}^\ell t_{j_l}^{n- m_l}. \end{equation} Hence \begin{equation}\label{conclusion-T-T} \gamma_i = \alpha_i \quad \mbox{ for } i =1, \cdots, k=m \end{equation} is proved if one can verify that \begin{equation}\label{claimP} i_l = j_l \quad \mbox{ and } \quad k_l = m_l \quad \quad \forall \, l =1, \cdots, \ell. \end{equation} By induction on $\ell$, it suffices to prove that \begin{equation}\label{claim1} i_\ell = j_\ell \quad \mbox{ and } k_\ell = m_\ell. \end{equation} Note that, by \eqref{construction-st}, \begin{equation}\label{observation} t_{j}^k = a^k \eta^k t_{j + k}^0 + P_{k-1}(\xi, \eta), \end{equation} where $P_{k-1}$ is a polynomial of degree $k-1$ with rational coefficients.
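On small instances, \eqref{observation} can be checked with exact rational arithmetic; the following sketch (the value of $a$ and the initial data are illustrative) encodes polynomials in $(\xi,\eta)$ as dictionaries mapping monomials to rational coefficients:

```python
# Hypothetical symbolic check of (observation): in the recurrence
# (construction-st), t_j^k equals a^k eta^k t_{j+k}^0 plus a polynomial of
# degree k-1 in (xi, eta) with rational coefficients (a and the initial
# data are rational).  A polynomial is a dict {(p, q): coeff} for xi^p eta^q.
from fractions import Fraction

a = Fraction(51, 100)

def add(P, Q):
    R = dict(P)
    for mono, c in Q.items():
        R[mono] = R.get(mono, Fraction(0)) + c
    return {mono: c for mono, c in R.items() if c != 0}

def scale(P, c):            # multiply by a rational constant
    return {mono: c * cm for mono, cm in P.items()}

def shift(P, dx, dy):       # multiply by xi^dx * eta^dy
    return {(p + dx, q + dy): c for (p, q), c in P.items()}

n = 5                        # rational, distinct initial data (degree 0)
s = [{(0, 0): Fraction(i + 1, 7**(i + 1))} for i in range(n)]
t = [{(0, 0): Fraction(i + 1, 11**(i + 1))} for i in range(n)]
t0 = [next(iter(p.values())) for p in t]

k = 3                        # iterate (construction-st) k times
for _ in range(k):
    s, t = ([scale(add(s[i], shift(t[i + 1], 1, 0)), a)
             for i in range(len(s) - 1)],
            [scale(add(scale(s[i], Fraction(-1)), shift(t[i + 1], 0, 1)), a)
             for i in range(len(s) - 1)])

j = 1                        # subtract the leading term a^k eta^k t_{j+k}^0
lead = {(0, k): -a**k * t0[j + k - 1]}
rest = add(t[j - 1], lead)
print(max(p + q for p, q in rest))  # total degree of the remainder: k-1
```

Since the remainder's coefficients are rational combinations of the (distinct, rational) initial data, its vanishing at $(\xi,\eta)$ satisfying \eqref{cond-xieta} forces the identification of indices used above.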
Since $\xi, \eta$ satisfy \eqref{cond-xieta}, it follows from \eqref{T-T-1} and \eqref{observation} that \begin{equation*} k_\ell = m_\ell, \end{equation*} and \begin{equation*} i_\ell = j_\ell. \end{equation*} Thus claim \eqref{claim1} is proved and so are claims \eqref{claimP}, \eqref{conclusion-T-T}, and \eqref{T}. This concludes the proof of Theorem~\ref{thm1} if $m=1$. Let us show how to modify the above proof to treat the case $m\geq 2$. Instead of \eqref{initial}, one requires \begin{equation}\label{initial-new} \|(s_i^0, t_i^0) \|_\infty \le \varepsilon^{2+m}/ 4^n \quad \mbox{ for every } i \in \{1,\cdots,n\}. \end{equation} Then, instead of \eqref{norm-v}, one gets \begin{equation}\label{norm-v-new} \| \mathfrak{v}(T_{\alpha_1 \cdots \alpha_k})\|_\infty \le \varepsilon^{2+m} \quad \mbox{ if } T_{\alpha_1 \cdots \alpha_k} \in (0, r_1). \end{equation} Instead of \eqref{def-v-1}, one requires \begin{equation}\label{def-v-1-new} \mathfrak{v}^{(m)} (T_{\alpha_1 \cdots \alpha_k}) = dV(T_{\alpha_1 \cdots \alpha_k}), \end{equation} and instead of \eqref{v-1}, one has \begin{gather} \label{v-1-new} \|\mathfrak{v} \|_{C^m([0, r_1])} \le C \max\{\varepsilon^2, A \}, \end{gather} where $A$ is still given by \eqref{defA}. Then \eqref{choice-u0} becomes \begin{equation}\label{choice-u0-new} \| u^0 \|_{C^m([0,1];\mathbb{R}^2)} \le C \| \mathfrak{v} \|_{C^m([0, r_1];\mathbb{R}^2)} \le C \varepsilon c^{T/ (2 r_1)}. \end{equation} In the case $m=1$ we differentiated \eqref{Delay1} once with respect to $t$ in order to get \eqref{expressionv'}. Now we differentiate \eqref{Delay1} $m$ times with respect to $t$ in order to get \begin{gather*} \left|v^{(m)}\Big(t + r_2 + v_2(t) \Big) -v_1^{(m)}\Big(t + r_2 + v_2(t) - r_1\Big) G_1 - v_2^{(m)}(t) G_2 \right|\leq C \sum_{i=0}^m |v^{(i)}(t)|^2, \end{gather*} which allows us to get, instead of \eqref{diffv'dV}, \begin{equation}\label{diffv'dV-new} | v^{(m)}(T) - dV(T)| \le C(T) \varepsilon^2.
\end{equation} We then get a contradiction as in the case $m=1$. This concludes the proof of Theorem~\ref{thm1}. $\Box$ \begin{remark} \label{remdiffTcapital} Property \eqref{T} is a key point. It explains why the condition $\hat \rho _0(K)<1$ is not sufficient for exponential stability in the case of \textbf{nonlinear} systems. Indeed $\hat \rho _0(K)<1$ gives an exponential stability which is robust with respect to perturbations on the delays which are \textbf{constant}: these perturbations are not allowed to depend on time. However, with this type of perturbation, \eqref{T} does not hold: with constant perturbations on the delays, one has \begin{equation*} T_{12}=T_{21}, \, T_{122}=T_{212}=T_{221} \end{equation*} and, more generally, \begin{equation*} T_{\gamma_1 \cdots \gamma_k}=T_{\alpha_1 \cdots \alpha_k} \text{ if }\mbox{card}\{i\in \{1,\cdots,k\};\, \gamma _i=1\}= \mbox{card}\{i\in \{1,\cdots,k\};\, \alpha_i=1\}. \end{equation*} \end{remark} \section{Proof of Theorem~\ref{thm2}}\label{sect-thm2} This section, which contains two subsections, is devoted to the proof of Theorem~\ref{thm2}. In the first subsection, we present some lemmas which will be used in the proof. In the second subsection, we give the proof of Theorem~\ref{thm2}. \subsection{Some useful lemmas} The first lemma is a standard one on the well-posedness of \eqref{sys-P} and \eqref{bdry-P}. \begin{lemma}\label{lem1} Let $p\in [1,+\infty]$. There exist $C>0$ and $\gamma >0$ such that, for every $T>0$, there exists $\varepsilon_0>0$ such that, for every $u^0 \in W^{2, p}((0, 1);\mathbb{R}^n)$ with $\| u^0\|_{W^{2, p}((0, 1);\mathbb{R}^n)} < \varepsilon_0$ satisfying the compatibility conditions \eqref{compatibilty-C1-0}-\eqref{compatibilty-C1-1}, there exists one and only one solution $u\in C^1([0,T]\times [0,1];\mathbb{R}^n)$ of \eqref{system} satisfying the initial condition $u(0,\cdot)=u^0$.
Moreover \begin{equation*} \| u(t, \cdot) \|_{W^{2, p}((0, 1);\mathbb{R}^n)} \le C e^{\gamma t} \| u^0 \|_{W^{2, p}((0, 1);\mathbb{R}^n)}. \end{equation*} \end{lemma} We next present two lemmas dealing with the system \begin{equation*} v_t+ A(t, x) v_x = 0, \end{equation*} and its perturbation where $A$ is diagonal. The first lemma is the following one. \begin{lemma}\label{lemP1} Let $p\in [1,+\infty]$, $m$ be a positive integer, $\lambda_1 \ge \cdots \ge \lambda_m > 0$ and $\hat K\in (0,1)$. Then there exist three constants $\varepsilon_0>0$, $\gamma >0$ and $C>0$ such that, for every $T>0$, every $A \in C^1([0, T] \times [0, 1];\mathcal{D}_{m,+} )$, every $K \in C^1([0, T];\mathcal{M}_{m,m}(\mathbb{R}))$, every $v \in W^{1, p}([0, T] \times [0, 1]; \mathbb{R}^m)$ such that \begin{gather} \label{eqvlinear} v_t + A(t, x) v_x = 0 \mbox{ for } (t, x) \in (0, T) \times (0, 1), \\ \label{boundaryvlinear} v(t, 0) = K(t) v(t, 1)\mbox{ for } t \in [0, T], \\ \label{pro-K} \sup_{t \in [0, T]} \|K(t) \|_p \le \hat K < 1, \\ \label{derAK} \| A - \mbox{diag}(\lambda_1, \cdots, \lambda_m)\|_{C^1([0, T] \times [0, 1];\mathcal{M}_{m,m}(\mathbb{R}))} + \mathop{\sup}_{t \in [0, T]}\|K'(t) \|_{p} \le \varepsilon_0, \end{gather} one has \begin{equation*} \|v(t, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \le C e^{-\gamma t} \|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \mbox{ for } t \in [0, T]. \end{equation*} \end{lemma} \noindent{\bf Proof of Lemma~\ref{lemP1}.} We only consider the case $1 \le p < + \infty$, the case $p=+ \infty$ follows similarly (the proof is even easier) and is left to the reader. For $t \ge 0$, let $\varphi_i(t, s)$ be such that \begin{equation*} \partial_s \varphi_i(t, s) = A_{ii}(s, \varphi_i(t, s)) \quad \mbox{ and } \quad \varphi_i(t, t) = 0. \end{equation*} Then \begin{equation*} v_i(s, \varphi_i(t, s)) =v_i(t, 0). \end{equation*} We define $s_i$ as a function of $t$ by $\varphi_i (t, s_i(t)) = 1$. 
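As a numerical illustration of the characteristics $\varphi_i$ just introduced (the speed field below is hypothetical), the crossing time $s_i(t)-t$ of the strip $[0,1]$ stays close to $1/\lambda_i$ when $A_{ii}$ is $\varepsilon_0$-close to the constant $\lambda_i$:

```python
# Hypothetical sketch of the characteristics defining s_i: integrate
# d(phi)/ds = A(s, phi) with phi(t) = 0 and return the time s_i(t) - t at
# which phi reaches 1.  For A within eps0 of the constant lambda, this
# crossing time stays within O(eps0) of 1/lambda.
import math

lam, eps0 = 2.0, 0.01

def A(s, x):                 # illustrative C^1 perturbation of a constant speed
    return lam * (1.0 + eps0 * math.sin(3 * s + 2 * x))

def travel_time(t, h=1e-4):  # explicit Euler integration of the ODE
    s, phi = t, 0.0
    while phi < 1.0:
        phi += h * A(s, phi)
        s += h
    return s - t

r = 1.0 / lam
for t in [0.0, 0.7, 1.3]:
    print(abs(travel_time(t) - r) < 0.01)  # True: crossing time near 1/lambda
```

This is the quantitative content of the estimate $|s_i'(t)-1|\le C\varepsilon_0$ derived next.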
Note that $A_{ii}(s, \varphi_i(t, s))> \lambda_m/ 2 > 0$, at least if $\varepsilon_0>0$ is small enough, a property which is always assumed in this proof. Hence $s_i$ is well-defined. It follows from the definition of $s_i$ that \begin{equation}\label{def-si-lem} v_i(s_i(t), 1) = v_i(t,0). \end{equation} Using classical results on the dependence of solutions of ordinary differential equations on the initial conditions together with the inverse mapping theorem, one gets \begin{equation}\label{dsi-1-lem} |s_i'(t) - 1| \le C \varepsilon_0. \end{equation} Here and in what follows in this proof $'$ denotes the derivative with respect to $t$, e.g., $s_i'(t) = ds_i/dt$ and $v'(t, x) = \partial_t v(t, x)$, and $C$ denotes a positive constant which changes from one place to another and may depend on $p$, $m$, $\lambda_1 \ge \cdots \ge \lambda_m > 0$ and $\hat K\in (0,1)$ but is independent of $\varepsilon_0>0$, which is always assumed to be small enough, $T>0$, $A$ and $v$, which are always assumed to satisfy \eqref{eqvlinear} to \eqref{derAK}. Set $r_i:=1/\lambda_i$ for every $i\in \{1,\cdots,m\}$ and define, for $t \ge 2 r_m$, \begin{equation}\label{def-ri-lem} \hat r_i(t) := t- s_i^{-1}(t). \end{equation} From \eqref{dsi-1-lem}, we have \begin{equation}\label{derivative-ri-11-lem} \sup_{t \in [2 r_m, T]} | \hat r_i'|\le C \varepsilon_0. \end{equation} Set \begin{equation*} V(t) = v(t, 0). \end{equation*} We derive from \eqref{boundaryvlinear}, \eqref{def-si-lem} and \eqref{def-ri-lem} that \begin{equation}\label{important-lem} V(t) = K(t) \Big(V_1\big(t - \hat r_1 (t)\big), \cdots, V_i \big(t - \hat r_i (t)\big), \cdots, V_m\big(t - \hat r_m (t)\big)\Big)^T, \quad \mbox{ for } t \ge 2 r_m. \end{equation} From \eqref{pro-K} and \eqref{important-lem}, we obtain \begin{equation}\label{key-1-lem} \int_{2 r_m}^{T}\| V(t) \|_{p}^{p} \, dt \le \hat K^p \sum_{i=1}^{m} \int_{2 r_m}^T |V_{i} \big(t - \hat r_{i}(t) \big)|^{p} \, dt.
\end{equation} Since \begin{equation*} \int_{2 r_m}^T |V_{i} \big(t - \hat r_{i} (t) \big)|^{p} \, dt = \int_{2 r_m - \hat r_i (2 r_m) }^{T - \hat r_i(T)} |V_{i}(t)|^{p} s_i' (t) \, dt, \end{equation*} it follows from \eqref{dsi-1-lem} that \begin{equation}\label{inter-1-lem} \int_{2 r_m}^T |V_{i}\big(t - \hat r_{i}(t)\big)|^{p} \, dt \le \int_{0}^{T} (1 + C\varepsilon_0) |V_{i}(t)|^{p} \, dt. \end{equation} A combination of \eqref{key-1-lem} and \eqref{inter-1-lem} yields \begin{equation*} \int_{2 r_m}^T \| V(t) \|_p^p \, dt \le \int_0^{T} \hat K^p (1 + C \varepsilon_0)\|V(t) \|_p^p \, dt. \end{equation*} By taking $\varepsilon_0$ small enough so that $\hat K^p (1 + C \varepsilon_0) \le [(1 + \hat K)/2 ]^p$, we have \begin{equation}\label{fact1-lem} \int_0^T \|V(t) \|_p^p \, dt \le C \int_0^{2 r_m} \|V(t) \|_p^p \, dt. \end{equation} We next establish similar estimates for the derivatives of $V$. Let us define \begin{equation}\label{def-w-lem} W(t) : =(W_1(t),\cdots,W_m(t))^T:=V'(t). \end{equation} Differentiating \eqref{important-lem} with respect to $t$, we have \begin{equation}\label{eq-w-lem} W(t) = K(t) \Big(W_1\big(t - \hat r_1 (t)\big), \cdots, W_i \big(t - \hat r_i (t)\big) , \cdots, W_m\big(t - \hat r_m (t)\big) \Big)^T + g_1(t) + f_1(t), \end{equation} where \begin{equation}\label{def-g1-lem} g_1(t) := - K(t) \Big(W_1\big(t - \hat r_1 (t)\big) \hat r_1'(t) , \cdots, W_i \big(t - \hat r_i (t)\big) \hat r_i'(t) , \cdots, W_m\big(t - \hat r_m (t)\big) \hat r_m'(t) \Big)^T \end{equation} and \begin{equation}\label{def-f1-lem} f_1(t) := K'(t) \Big(V_1\big(t - \hat r_1 (t)\big), \cdots, V_i \big(t - \hat r_i (t)\big), \cdots, V_m\big(t - \hat r_m (t)\big)\Big)^T. \end{equation} From \eqref{eq-w-lem}, we have \begin{equation}\label{estimate-dv} \|W(t)\|_p^p \le [(\hat K + 1)/2]^p \sum_{i=1}^{m} |W_{i} \big(t - \hat r_{i}(t) \big)|^{p} + C\Big(\|f_1(t) \|_p^p + \|g_1(t)\|_p^p \Big).
\end{equation} Using \eqref{derAK} and \eqref{derivative-ri-11-lem}, we derive from \eqref{def-g1-lem} and \eqref{def-f1-lem}, as in \eqref{inter-1-lem}, that \begin{equation}\label{estimate-g1-f1-lem} \int_{2 r_m}^T \big(\|g_1(t) \|_p^p + \|f_1(t) \|_p^p\big) \, dt \le C\varepsilon_0^p \int_{0}^{T} \big( \|W(t) \|_p^p + \|V(t) \|_p^p \big)\, dt. \end{equation} It follows from \eqref{estimate-dv} and \eqref{estimate-g1-f1-lem}, as in \eqref{fact1-lem}, that \begin{equation}\label{fact2-lem} \int_0^T \|V'(t) \|_p^p \, dt \le C \int_0^{2 r_m} \big( \|V(t) \|_{p}^p + \|V'(t) \|_p^p \big)\, dt. \end{equation} Combining \eqref{fact1-lem} and \eqref{fact2-lem}, we reach the conclusion. $\Box$ As a consequence of Lemma~\ref{lemP1}, we obtain the following lemma, where $\mathcal{B}(\mathbb{R}^m)$ denotes the set of bilinear forms on $\mathbb{R}^m$. \begin{lemma}\label{lemP2} Let $p \ge 1$, $m$ be a positive integer, $\lambda_1 \ge \cdots \ge \lambda_m > 0$, $\hat K\in (0,1)$ and $M\in(0,+\infty)$. Then there exist three constants $\varepsilon_0>0$, $\gamma >0$ and $C>0$ such that, for every $T>0$, every $A \in C^1([0, T] \times [0, 1];\mathcal{D}_{m,+} )$, every $K \in C^1([0, T];\mathcal{M}_{m,m}(\mathbb{R}))$, every $Q \in C^1([0, T] \times [0, 1];\mathcal{B}(\mathbb{R}^m) )$ and every $v \in W^{1, p}([0, T] \times [0, 1]; \mathbb{R}^m)$ such that \begin{gather} \label{vequation} v_t + A(t, x) v_x = Q(t, x)(v, v) \mbox{ for } (t, x) \in (0, T) \times (0, 1), \\ \label{vbord} v(t, 0) = K(t) v(t, 1)\mbox{ for } t \in (0, T), \\ \label{Kp<1} \sup_{t \in [0, T]}\|K(t) \|_p \le \hat K < 1, \\ \| A - \mbox{diag}(\lambda_1, \cdots, \lambda_m)\|_{C^1([0, T] \times [0, 1])} + \mathop{\sup}_{t \in [0, T]}\|K'(t) \|_{p}\le \varepsilon_0, \\ \|Q\|_{C^1([0, T] \times [0, 1];\mathcal{B}(\mathbb{R}^m) )}\leq M, \\ \label{V0petit} \|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)}\le \varepsilon_0, \end{gather} one has \begin{equation*} \|v(t, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \le C e^{-\gamma t} \|v(0,
\cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \mbox{ for } t \in (0,T). \end{equation*} \end{lemma} \noindent{\bf Proof of Lemma~\ref{lemP2}.} Let $\tilde v \in W^{1, p}([0, T] \times [0, 1]; \mathbb{R}^m)$ be the solution of the linear Cauchy problem \begin{gather}\label{deftildev-eq} \tilde v_t + A(t, x) \tilde v_x = 0 \mbox{ for } (t, x) \in (0, T) \times (0, 1), \\ \label{deftildev-bord} \tilde v(t, 0) = K(t) \tilde v(t, 1)\mbox{ for } t \in (0, T), \\ \label{deftildev-init} \tilde v(0, x) = v(0, x) \mbox{ for } x \in (0, 1). \end{gather} (Note that $v(0,0)=K(0)v(0,1)$; hence such a $\tilde v$ exists.) From Lemma~\ref{lemP1}, \eqref{deftildev-eq}, \eqref{deftildev-bord} and \eqref{deftildev-init}, one has \begin{equation}\label{estsurtildev} \|\tilde v(t, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \le C e^{-\gamma t} \|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \mbox{ for } t \in [0, T]. \end{equation} Let \begin{gather}\label{defbarv} \bar v:=v-\tilde v. \end{gather} From \eqref{vequation}, \eqref{vbord}, \eqref{deftildev-eq}, \eqref{deftildev-bord}, \eqref{deftildev-init} and \eqref{defbarv}, one has \begin{gather}\label{barv-eq} \bar v_t + A(t, x) \bar v_x = Q(t, x)(\tilde v+\bar v , \tilde v+ \bar v) \mbox{ for } (t, x) \in (0, T) \times (0, 1), \\ \label{barv-bord} \bar v(t, 0) = K(t) \bar v(t, 1)\mbox{ for } t \in (0, T), \\ \label{barv-init} \bar v(0, x) = 0 \mbox{ for } x \in (0, 1). \end{gather} Let, for $t\in [0,T]$, \begin{equation}\label{defe(t)} e(t):=\|\bar v(t,\cdot)\|_{L^\infty((0,1);\mathbb{R}^m)}. \end{equation} Following the characteristics and using \eqref{estsurtildev}, \eqref{barv-eq}, \eqref{barv-bord} and the Sobolev embedding $W^{1, p}((0, 1);\mathbb{R}^m)\subset L^\infty((0,1);\mathbb{R}^m)$, one gets, in the sense of distributions on $(0,T)$, \begin{equation}\label{este(t)} e'(t)\leq C (\|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)}^2+ e(t)+ e(t)^2).
\end{equation} In \eqref{este(t)}, $C$ is as in the proof of Lemma~\ref{lemP1} except that it may now depend on $M$. From \eqref{barv-init}, \eqref{defe(t)} and \eqref{este(t)}, one gets the existence of $\varepsilon_0$, of an increasing function $T\in[0, +\infty) \mapsto C(T)\in (0,+\infty)$ and of a decreasing function $T\in[0, +\infty) \mapsto \varepsilon(T)\in (0,+\infty)$, such that, for every $T\in [0,+\infty)$, for every $A \in C^1([0, T] \times [0, 1];\mathcal{D}_{m,+} )$, every $K \in C^1([0, T];\mathcal{M}_{m,m}(\mathbb{R}))$, every $Q \in C^1([0, T] \times [0, 1];\mathcal{B}(\mathbb{R}^m) )$ and every $v \in W^{1, p}([0, T] \times [0, 1]; \mathbb{R}^m)$ satisfying \eqref{vequation} to \eqref{V0petit}, \begin{multline}\label{epetit} \left(\|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)}\le \varepsilon(T)\right)\implies \\ \left( \|\bar v(t,\cdot)\|_{L^\infty((0,1);\mathbb{R}^m)} \le C (T) \|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)}^{2} \mbox{ for } t \in (0,T)\right). \end{multline} Let $\bar w:=\bar v_x$. Differentiating \eqref{barv-eq} with respect to $x$, we get \begin{multline}\label{barv-eq-der} \bar w_t + A(t, x) \bar w_x + A_x(t,x) \bar w= Q_x(t, x)(\tilde v+\bar v , \tilde v+ \bar v) \\+Q(t, x)(\tilde v_x+\bar w , \tilde v+ \bar v) + Q(t, x)(\tilde v+\bar v , \tilde v_x+ \bar w) \mbox{ for } (t, x) \in (0, T) \times (0, 1). \end{multline} Differentiating \eqref{barv-bord} with respect to $t$ and using \eqref{barv-eq}, we get, for $t \in [0, T]$, \begin{multline}\label{barv-bord-der} A(t, 0)\bar w(t,0) - Q(t, 0)(\tilde v(t,0)+\bar v (t,0), \tilde v(t,0)+ \bar v(t,0)) = \\K(t) \big(A(t, 1)\bar w(t,1) - Q(t,1)(\tilde v(t,1)+\bar v (t,1), \tilde v(t,1)+ \bar v(t,1)) \big) -K'(t)\bar v(t,1). \end{multline} Differentiating \eqref{barv-init} with respect to $x$, one gets \begin{gather} \label{barw-init} \bar w(0, x) = 0 \mbox{ for } x \in (0, 1).
\end{gather} We consider \eqref{barv-eq-der}, \eqref{barv-bord-der} and \eqref{barw-init} as a nonhomogeneous linear hyperbolic system where the unknown is $\bar w$ and the data are $A$, $K$, $Q$, $\tilde v $, and $\bar v$. Then, from straightforward estimates on the solutions of linear hyperbolic equations, one gets that, for every $t\in [0,T]$, \begin{equation}\label{estimatewLp} \begin{array}{rcl} \displaystyle \|\bar w(t,\cdot)\|_{L^p((0,1);\mathbb{R}^m)}&\leq &e^{CT\left(1+ \|\tilde v\|_{L^\infty((0,T)\times(0,1);\mathbb{R}^m)}+ \|\bar v\|_{L^\infty((0,T)\times(0,1);\mathbb{R}^m)}\right)} \\ && \displaystyle\times\left(\|\tilde v\|_{L^\infty((0,T);W^{1,p}((0,1);\mathbb{R}^m))}^2+\|\bar v\|_{L^\infty((0,T)\times(0,1);\mathbb{R}^m)}^2\right). \end{array} \end{equation} From \eqref{estsurtildev}, \eqref{epetit} and \eqref{estimatewLp}, one gets the existence of $\varepsilon_0$, of an increasing function $T\in[0, +\infty) \mapsto C(T)\in (0,+\infty)$ and of a decreasing function $T\in[0, +\infty) \mapsto \varepsilon(T)\in (0,+\infty)$, such that, for every $T\in [0,+\infty)$, every $A \in C^1([0, T] \times [0, 1];\mathcal{D}_{m,+} )$, every $K \in C^1([0, T];\mathcal{M}_{m,m}(\mathbb{R}))$, every $Q \in C^1([0, T] \times [0, 1];\mathcal{B}(\mathbb{R}^m) )$ and every $v \in W^{1, p}([0, T] \times [0, 1]; \mathbb{R}^m)$ satisfying \eqref{vequation} to \eqref{V0petit}, \begin{multline}\label{v0petit0K} \left(\|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)}\le \varepsilon(T)\right)\implies \\ \left(\|\bar v(t, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)} \le C (T) \|v(0, \cdot)\|_{W^{1, p}((0, 1);\mathbb{R}^m)}^{2} \mbox{ for } t \in (0,T)\right), \end{multline} which, together with \eqref{estsurtildev} and \eqref{defbarv}, concludes the proof of Lemma~\ref{lemP2}.
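The decay mechanism behind Lemmas~\ref{lemP1} and \ref{lemP2} can be caricatured by a pure delay relation $V(t)=KV(t-r)$; a discrete sketch (the matrix and the discretization are illustrative) in which $\|K\|_\infty\le\hat K<1$ forces geometric decay of the sup-norm from one delay window to the next:

```python
# Hypothetical discrete sketch of the decay mechanism: a pure delay relation
# V(t) = K V(t - r) with ||K||_inf = Khat < 1 maps the history on one delay
# window to the next, shrinking its sup-norm by at least the factor Khat.
K = [[0.3, 0.4], [0.2, 0.5]]          # ||K||_inf = 0.7 < 1
Khat = max(sum(abs(e) for e in row) for row in K)

N = 50                                 # samples per delay window
hist = [(1.0, -1.0)] * N               # arbitrary data on one window

sups = []
for window in range(6):                # advance six delay windows
    hist = [(K[0][0]*x + K[0][1]*y, K[1][0]*x + K[1][1]*y) for x, y in hist]
    sups.append(max(max(abs(x), abs(y)) for x, y in hist))

print(all(sups[i+1] <= Khat * sups[i] for i in range(len(sups)-1)))  # True
```

In the lemmas the delays $\hat r_i(t)$ are state-dependent and only $\varepsilon_0$-close to constants, which is what the integral estimates \eqref{key-1-lem}-\eqref{fact2-lem} handle.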
\endproof \subsection{Proof of Theorem~\ref{thm2}} Replacing, if necessary, $u$ by $Du$ where $D$ (depending only on $p$ and $G'(0)$) is a diagonal matrix with positive entries, we may assume that \begin{equation}\label{p-normG'0} \|G'(0)\|_p<1. \end{equation} For $a\in \mathbb{R}^n$, let $\lambda_i(a)$ be the $i$-th eigenvalue of $F(a)$ and $l_i(a)$ be a left eigenvector of $F(a)$ for this eigenvalue. The functions $\lambda_i$ are of class $C^\infty$ in a neighborhood of $0\in \mathbb{R}^n$. We may also require the $l_i$ to be of class $C^\infty$ in a neighborhood of $0\in \mathbb{R}^n$ and $l_i(0)^T$ to be the $i$-th vector of the canonical basis of $\mathbb{R}^n$. Set \begin{equation*} \left\{\begin{array}{l} v_i = l_i(u) u \\[6pt] w_i = l_i(u) \partial_t u \end{array} \right. \quad \mbox{ for } i =1, \cdots, n. \end{equation*} From \cite[(3.5) and (3.6) on page 187]{Li-book}, we have, for $i =1, \cdots, n$, \begin{equation}\label{sys1} \left\{\begin{array}{l} u_i = v_i + \sum_{j, k=1}^n b_{ijk}(v) v_j v_k\\[6pt] \partial_t u_i = w_i + \sum_{j, k=1}^n \bar b_{ijk} (v) v_j w_k \end{array} \right. , \end{equation} where $b_{ijk}$ and $\bar b_{ijk}$ are of class $C^\infty$. From \cite[(3.7) and (3.8)]{Li-book}, we obtain, for $i =1, \cdots, n$, \begin{equation}\label{sys2} \left\{\begin{array}{l} \displaystyle \partial_t v_i + \lambda_i(u) \partial_x v_i = \sum_{j, k=1}^n c_{ijk}(u)v_j v_k + \sum_{j, k=1}^n d_{ijk}(u) v_j w_k, \\[6pt] \displaystyle \partial_t w_i + \lambda_i(u) \partial_x w_i = \sum_{j, k=1}^n \bar c_{ijk}(u)w_j w_k + \sum_{j, k=1}^n \bar d_{ijk}(u) v_j w_k, \\[6pt] \end{array}\right. \end{equation} where $c_{ijk}, \bar c_{ijk}, d_{ijk}, \bar d_{ijk}$ are of class $C^\infty$ in a neighborhood of $0\in \mathbb{R}^n$.
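The rescaling $u\mapsto Du$ used at the beginning of this proof can be illustrated numerically for $p=\infty$; a hypothetical instance (the matrix $K$ below is illustrative) in which $\|K\|_\infty\ge 1$ while $\|DKD^{-1}\|_\infty<1$ for a suitable positive diagonal $D$:

```python
# Hypothetical instance of the diagonal rescaling behind (p-normG'0):
# the raw norm ||K||_inf may exceed 1 while ||D K D^{-1}||_inf < 1 for a
# suitable D = diag(1, d) with d > 0 (here p = infinity, K illustrative).
import math

K = [[0.0, 2.0], [0.1, 0.0]]

def norm_inf(M):
    return max(sum(abs(e) for e in row) for row in M)

def scaled(M, d):                      # D = diag(1, d):  D M D^{-1}
    return [[M[0][0], M[0][1] / d], [M[1][0] * d, M[1][1]]]

d_opt = math.sqrt(K[0][1] / K[1][0])   # balances the two row sums
print(norm_inf(K))                     # 2.0 : unscaled norm exceeds 1
print(norm_inf(scaled(K, d_opt)))      # about 0.447 < 1
```

This is the finite-dimensional content of the quantity $\rho_p$: the infimum of $\|DKD^{-1}\|_p$ over positive diagonal matrices $D$.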
We also have, for some $\hat G: \mathbb{R}^{2n} \to \mathbb{R}^{2n}$ of class $C^\infty$ in a neighborhood of $0\in \mathbb{R}^{2n}$, \begin{equation*} \left( \begin{array}{c} v(t, 0) \\[6pt] w(t, 0) \end{array} \right) = \hat G \left( \begin{array}{c} v(t, 1) \\[6pt] w(t, 1) \end{array} \right) \end{equation*} and, by \eqref{bdry-P}, \begin{equation*} \hat G' \left( \begin{array}{c} 0 \\[6pt] 0 \end{array} \right) = \left( \begin{array}{cc} G'(0) & 0 \\[6pt] 0 & G'(0) \end{array} \right), \end{equation*} which, together with \eqref{p-normG'0}, implies that \begin{equation*} \|\hat G'(0)\|_p < 1. \end{equation*} Applying Lemma~\ref{lemP2} to \eqref{sys2}, we obtain the exponential stability of $(v, w)$ with respect to the $W^{1, p}$-norm, from which, noticing that $u_x=-F(u)^{-1}u_t$, Theorem~\ref{thm2} readily follows. $\Box$ \end{document}
\begin{document} \begin{Titul} {\large \bf ON HARDY\,--\,LITTLEWOOD-TYPE\\ AND HAUSDORFF\,--\,YOUNG-TYPE INEQUALITIES\\[0.2em] FOR GENERALIZED GEGENBAUER EXPANSIONS }\\[3ex] {{\bf Roman~A.~Veprintsev} \\[5ex]} \end{Titul} \begin{Anot} {\bf Abstract.} Using well-known techniques, we establish Hardy\,--\,Littlewood-type and Hausdorff\,--\,Young-type inequalities for generalized Gegenbauer expansions and their unification. {\bf Key words and phrases:} orthogonal polynomials, Jacobi polynomials, Gegenbauer polynomials, generalized Gegenbauer polynomials, Hardy\,--\,Littlewood-type inequalities, Hausdorff\,--\,Young-type inequalities {\bf MSC 2010:} 33C45, 41A17, 42C10 \end{Anot} \section{Introduction and preliminaries} In this section, we introduce some classes of orthogonal polynomials on $[-1,1]$, including the so-called generalized Gegenbauer polynomials. For a background and more details on the orthogonal polynomials, the reader is referred to \cite{dai_xu_book_approximation_theory_2013,dunkl_xu_book_orthogonal_polynomials_2014,andrews_askey_roy_book_special_functions_1999,szego_book_orthogonal_polynomials_1975}. Let $\alpha,\,\beta>-1$. The Jacobi polynomials, denoted by $P_n^{(\alpha,\beta)}(\cdot)$, where $n=0,1,\ldots$, are orthogonal with respect to the Jacobi weight function $w_{\alpha,\beta}(t)=(1-t)^\alpha(1+t)^\beta$ on $[-1,1]$, namely, \begin{equation*} \int\nolimits_{-1}^1 P_n^{(\alpha,\beta)}(t)\, P_m^{(\alpha,\beta)}(t)\, w_{\alpha,\beta}(t)\,dt=\begin{cases}\dfrac{2^{\alpha+\beta+1}\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{(2n+\alpha+\beta+1)\Gamma(n+1)\Gamma(n+\alpha+\beta+1)},&n=m,\\ 0,&n\not=m. \end{cases} \end{equation*} Here, as usual, $\Gamma$ is the gamma function. 
For $\lambda>-\frac{1}{2}$, $\mu\geq0$, and $n=0,1,\ldots$, the generalized Gegenbauer polynomials $C_n^{(\lambda,\mu)}(\cdot)$ are defined by \begin{equation*}\label{coefficients_for_generalized_Gegenbauer_polynomials} \begin{array}{ll} C_{2n}^{(\lambda,\mu)}(t)=a_{2n}^{(\lambda,\mu)}P_n^{(\lambda-1/2,\mu-1/2)}(2t^2-1), & a_{2n}^{(\lambda,\mu)}=\dfrac{(\lambda+\mu)_n}{(\mu+\frac{1}{2})_n},\\[1.0em] C_{2n+1}^{(\lambda,\mu)}(t)=a_{2n+1}^{(\lambda,\mu)}\,t P_n^{(\lambda-1/2,\mu+1/2)}(2t^2-1),\quad & a_{2n+1}^{(\lambda,\mu)}=\dfrac{(\lambda+\mu)_{n+1}}{(\mu+\frac{1}{2})_{n+1}}, \end{array} \end{equation*} where $(\lambda)_n$ denotes the Pochhammer symbol given by \begin{equation*} (\lambda)_0=1,\quad (\lambda)_n=\lambda(\lambda+1)\cdots(\lambda+n-1)\quad\text{ for}\quad n=1,2,\ldots. \end{equation*} They are orthogonal with respect to the weight function \begin{equation}\label{weight_function} v_{\lambda,\mu}(t)=|t|^{2\mu}(1-t^2)^{\lambda-1/2},\quad t\in[-1,1]. \end{equation} For $\mu=0$, these polynomials, denoted by $C_n^{\lambda}(\cdot)$, are called the Gegenbauer polynomials: \begin{equation*} C_n^{\lambda}(t)=C_n^{(\lambda,0)}(t)=\frac{(2\lambda)_n}{(\lambda+\frac{1}{2})_n} P_n^{(\lambda-1/2,\lambda-1/2)}(t). \end{equation*} For $\lambda>-\frac{1}{2}$, $\mu>0$, and $n=0,1,\ldots$, we have the following connection: \begin{equation*} C_n^{(\lambda,\mu)}(t)=c_\mu\int\nolimits_{-1}^1 C_n^{\lambda+\mu}(tx)(1+x)(1-x^2)^{\mu-1}\,dx,\quad c_\mu^{-1}=2\int\nolimits_0^1 (1-x^2)^{\mu-1}\,dx. \end{equation*} Denote by $\bigl\{\widetilde{C}_n^{(\lambda,\mu)}(\cdot)\bigr\}_{n=0}^{\infty}$ the sequence of orthonormal generalized Gegenbauer polynomials. 
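As a quick numerical sanity check of these definitions (a sketch of our own, not part of the original development; the parameter values $\lambda=0.7$, $\mu=0.3$ are arbitrary admissible choices, and SciPy's Jacobi evaluator is used for the representation above), one can evaluate $C_n^{(\lambda,\mu)}$ and verify orthogonality with respect to $v_{\lambda,\mu}$ by quadrature:

```python
import numpy as np
from scipy.special import eval_jacobi, poch
from scipy.integrate import quad

LAM, MU = 0.7, 0.3  # hypothetical parameters with lambda > -1/2, mu >= 0

def gen_gegenbauer(n, t):
    """C_n^{(lam,mu)}(t) via its Jacobi representation given in the text."""
    if n % 2 == 0:
        k = n // 2
        a = poch(LAM + MU, k) / poch(MU + 0.5, k)          # Pochhammer ratio a_{2k}
        return a * eval_jacobi(k, LAM - 0.5, MU - 0.5, 2 * t**2 - 1)
    k = (n - 1) // 2
    a = poch(LAM + MU, k + 1) / poch(MU + 0.5, k + 1)      # Pochhammer ratio a_{2k+1}
    return a * t * eval_jacobi(k, LAM - 0.5, MU + 0.5, 2 * t**2 - 1)

def inner(m, n):
    """<C_m, C_n> with respect to v_{lam,mu}(t) = |t|^{2 mu} (1 - t^2)^{lam - 1/2}."""
    integrand = lambda t: (gen_gegenbauer(m, t) * gen_gegenbauer(n, t)
                           * abs(t)**(2 * MU) * (1 - t**2)**(LAM - 0.5))
    val, _ = quad(integrand, -1.0, 1.0, points=[0.0])  # |t|^{2 mu} has a kink at 0
    return val

print(inner(2, 4))  # distinct degrees: numerically zero
print(inner(3, 3))  # squared norm: strictly positive
```

Degrees of different parity are orthogonal because the weight is even; for equal parity, the substitution $s=2t^2-1$ reduces the claim to the orthogonality of the Jacobi polynomials for the weight $w_{\lambda-1/2,\mu\mp1/2}$.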
It is easily verified that these polynomials are given by the following formulae: \begin{equation*}\label{coefficients_for_orthonormal_generalized_Gegenbauer_polynomials} \begin{split} &\widetilde{C}_{2n}^{(\lambda,\mu)}(t)=\widetilde{a}_{2n}^{\,(\lambda,\mu)}P_n^{(\lambda-1/2,\mu-1/2)}(2t^2-1),\\ &\widetilde{a}_{2n}^{\,(\lambda,\mu)}=\Biggl(\dfrac{(2n+\lambda+\mu)\Gamma(n+1)\Gamma(n+\lambda+\mu)}{\Gamma(n+\lambda+\frac{1}{2})\Gamma(n+\mu+\frac{1}{2})}\Biggr)^{1/2},\\[0.4em] &\widetilde{C}_{2n+1}^{(\lambda,\mu)}(t)=\widetilde{a}_{2n+1}^{\,(\lambda,\mu)}\,t P_n^{(\lambda-1/2,\mu+1/2)}(2t^2-1),\\ &\widetilde{a}_{2n+1}^{\,(\lambda,\mu)}=\Biggl(\dfrac{(2n+\lambda+\mu+1)\Gamma(n+1)\Gamma(n+\lambda+\mu+1)}{\Gamma(n+\lambda+\frac{1}{2})\Gamma(n+\mu+\frac{3}{2})}\Biggr)^{1/2}. \end{split} \end{equation*} The generalized Gegenbauer polynomials play an important role in Dunkl harmonic analysis (see, for example, \cite{dunkl_xu_book_orthogonal_polynomials_2014,dai_xu_book_approximation_theory_2013}). So, the study of these polynomials and their applications is very natural. The notation $f(n)\asymp g(n)$, $n\to\infty$, means that there exist positive constants $C_1$, $C_2$, and a positive integer $n_0$ such that $0\leq C_1 g(n)\leq f(n)\leq C_2 g(n)$ for all $n\geq n_0$. For brevity, we will omit ``$n\to\infty$'' in the asymptotic notation. Define the uniform norm of a continuous function $f$ on $[-1,1]$ by \begin{equation*} \|f\|_{\infty}=\max\limits_{-1\leq t\leq 1} |f(t)|. \end{equation*} The maximum of two real numbers $x$ and $y$ is denoted by $\max(x,y)$. In \cite{veprintsev_preprint_max_value_2015}, we prove the following result. \begin{teoen}\label{main_result_theorem_of_max_value_preprint_for_orthonormal_system} Let $\lambda>-\frac{1}{2}$, $\mu>0$. Then \begin{equation*} \bigl\|\widetilde{C}_n^{(\lambda,\mu)}\bigr\|_{\infty}\asymp n^{\max(\lambda,\mu)}. 
\end{equation*} \end{teoen} Given $1\leq p\leq\infty$, we denote by $L_p(v_{\lambda,\mu})$ the space of complex-valued Lebesgue measurable functions $f$ on $[-1,1]$ with finite norm \begin{equation*} \begin{array}{lr} \|f\|_{L_p(v_{\lambda,\mu})}=\Bigl(\int\nolimits_{-1}^1 |f(t)|^p\,v_{\lambda,\mu}(t)\,dt\Bigr)^{1/p},&\quad 1\leq p<\infty,\\[1.0em] \|f\|_{L_\infty}=\esssup\limits_{x\in[-1,1]} |f(x)|,& p=\infty. \end{array} \end{equation*} For a function $f\in L_p(v_{\lambda,\mu})$, $1\leq p\leq\infty$, the generalized Gegenbauer expansion is defined by \begin{equation*} f(t)\sim\sum\limits_{n=0}^\infty \hat{f}_n \widetilde{C}_n^{(\lambda,\mu)}(t),\qquad \text{where}\quad\hat{f}_n=\int\nolimits_{-1}^1 f(t)\, \widetilde{C}_n^{(\lambda,\mu)}(t)\,v_{\lambda,\mu}(t)\,dt. \end{equation*} For $1<p<\infty$, we denote by $p'$ the conjugate exponent to $p$, that is, $\frac{1}{p}+\frac{1}{p'}=1$. The aim of this paper is to establish Hardy\,--\,Littlewood-type and Hausdorff\,--\,Young-type inequalities for generalized Gegenbauer expansions in Sections \ref{section_for_Hardy-Littlewood_inequality} and \ref{section_for_Hausdorff-Young_inequality}, respectively. Also, we give their unification in Section \ref{section_for_unification_of_inequalities}. \section{Hardy\,--\,Littlewood-type inequalities\\ for generalized Gegenbauer expansions}\label{section_for_Hardy-Littlewood_inequality} The analogue of the Hardy\,--\,Littlewood inequality is given in the following theorem, which can be deduced as a corollary from \cite[Theorems 3.2 and 3.6]{stein_weiss_article_interpolation_1958} (for \eqref{first_part_of_Hardy-Littlewood_inequality} and \eqref{second_part_of_Hardy-Littlewood_inequality}, respectively). Nevertheless, for convenience we give a direct proof of the theorem, based on Theorem \ref{main_result_theorem_of_max_value_preprint_for_orthonormal_system} and our settings. 
\begin{teoen}\label{Hardy-Littlewood_inequality_for_generalized_Gegenbauer_expansions} $(a)$ If $1<p\leq2$ and $f\in L_p(v_{\lambda,\mu})$, then \begin{equation}\label{first_part_of_Hardy-Littlewood_inequality} \Bigl\{\sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{p'}-\frac{1}{p}\right)\left(\max(\lambda,\mu)+1\right)}|\hat{f}_n|\Bigl)^{p}\Bigr\}^{1/p}\leq A_p\,\|f\|_{L_p(v_{\lambda,\mu})}. \end{equation} $(b)$ If $2\leq q<\infty$ and $\phi$ is a function on non-negative integers satisfying \begin{equation}\label{assumption_for_second_part_of_Hardy-Littlewood_inequality} \sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\left(\max(\lambda,\mu)+1\right)}|\phi(n)|\Bigl)^{q}<\infty, \end{equation} then the algebraic polynomials \begin{equation*} \Phi_N(t)=\sum\limits_{n=0}^N \phi(n)\,\widetilde{C}_n^{(\lambda,\mu)}(t) \end{equation*} converge in $L_q(v_{\lambda,\mu})$ to a function $f$ satisfying $\hat{f}_n=\phi(n)$, $n=0,1,\ldots$, and \begin{equation}\label{second_part_of_Hardy-Littlewood_inequality} \|f\|_{L_q(v_{\lambda,\mu})}\leq A_{q'} \Bigl\{\sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\left(\max(\lambda,\mu)+1\right)}|\phi(n)|\Bigl)^{q}\Bigr\}^{1/q}. \end{equation} \end{teoen} \proofen Let $\sigma=\max(\lambda,\mu)+1$. (a) To prove \eqref{first_part_of_Hardy-Littlewood_inequality}, we note that for $p=2$ the Parseval identity implies equality in \eqref{first_part_of_Hardy-Littlewood_inequality} with $A_2=1$. Consider \eqref{first_part_of_Hardy-Littlewood_inequality} as the transformation from $L_p(v_{\lambda,\mu})$ into the sequence $\bigl\{(n+1)^{\sigma}\hat{f}_n\bigr\}_{n=0}^\infty$ in the $\ell_p$ norm with the weight $\bigl\{(n+1)^{-2\sigma}\bigr\}_{n=0}^\infty$ and show that this transformation is of weak type $(1,1)$. We have \begin{equation*} m\bigl\{n\colon\, (n+1)^{\sigma}|\hat{f}_n|>t\bigr\}=\sum\limits_{(n+1)^{\sigma}|\hat{f}_n|>t} (n+1)^{-2\sigma}\equiv I_t. 
\end{equation*} By Theorem \ref{main_result_theorem_of_max_value_preprint_for_orthonormal_system}, $|\hat{f}_n|\leq C_1 \|f\|_{L_1(v_{\lambda,\mu})} (n+1)^{\sigma-1}$ and consequently \begin{equation*} I_t\leq\sum\limits_{(n+1)>A} (n+1)^{-2\sigma},\quad A=C_2 \left(\dfrac{t}{\|f\|_{L_1(v_{\lambda,\mu})}}\right)^{\frac{1}{2\sigma-1}}. \end{equation*} Hence, using the easily verified inequality \begin{equation*} \sum\limits_{(n+1)>\widetilde{A}} (1+n)^{-\delta}\leq 2^{\delta-1}\,{\widetilde{A}}^{-\delta+1},\quad \widetilde{A}>0,\quad \delta\geq 2, \end{equation*} we observe that, for $\widetilde{A}=A$ and $\delta=2\sigma$, \begin{equation*} I_t\leq C_3 \frac{\|f\|_{L_1(v_{\lambda,\mu})}}{t}. \end{equation*} The last estimate is a weak $(1,1)$ estimate which, using the Marcinkiewicz interpolation theorem, implies \eqref{first_part_of_Hardy-Littlewood_inequality}. (b) We have $1<q'\leq2$. For brevity, write $\psi_n$ in place of $ \Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\sigma}|\phi(n)|\Bigl)^{q}. $ Suppose that $g\in L_{q'}(v_{\lambda,\mu})$ and that $N<N'$ are positive integers. Applying H\"{o}lder's inequality and (a), we find that \begin{equation}\label{first_inequality_for_second_part_of_Hardy-Littlewood_inequality} \begin{split} \Bigl|\int\nolimits_{-1}^1 \Phi_N(t)\, g(t)\, v_{\lambda,\mu}(t)\,dt\Bigr|&=\Bigl|\sum\limits_{n=0}^N\phi(n)\hat{g}_n\Bigr|=\\ &=\Bigl|\sum\limits_{n=0}^N (n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\sigma}\phi(n) \, (n+1)^{\left(\frac{1}{q}-\frac{1}{q'}\right)\sigma}\hat{g}_n\Bigr|\leq\\ &\leq \Bigl\{\sum\limits_{n=0}^N \psi_n\Bigr\}^{1/q}\,\Bigl\{\sum\limits_{n=0}^N \Bigl((n+1)^{\left(\frac{1}{q}-\frac{1}{q'}\right)\sigma}|\hat{g}_n|\Bigl)^{q'} \Bigr\}^{1/q'}\leq\\ &\leq \Bigl\{\sum\limits_{n=0}^N \psi_n\Bigr\}^{1/q} \, A_{q'} \|g\|_{L_{q'}(v_{\lambda,\mu})}. 
\end{split} \end{equation} Similarly, \begin{equation}\label{second_inequality_for_second_part_of_Hardy-Littlewood_inequality} \Bigl|\int\nolimits_{-1}^1 \bigl(\Phi_N(t)-\Phi_{N'}(t)\bigr)\, g(t)\, v_{\lambda,\mu}(t)\,dt\Bigr|\leq \Bigl\{\sum\limits_{n=N+1}^{N'}\psi_n\Bigr\}^{1/q}\, A_{q'} \|g\|_{L_{q'}(v_{\lambda,\mu})}. \end{equation} Hence, by \cite[Theorem~(12.13)]{hewitt_ross_book_analysis_1963}, the inequalities \eqref{first_inequality_for_second_part_of_Hardy-Littlewood_inequality} and \eqref{second_inequality_for_second_part_of_Hardy-Littlewood_inequality} lead respectively to the estimates \begin{equation}\label{third_inequality_for_second_part_of_Hardy-Littlewood_inequality} \|\Phi_N\|_{L_q(v_{\lambda,\mu})}\leq \Bigl\{\sum\limits_{n=0}^N \psi_n\Bigr\}^{1/q} \, A_{q'} \end{equation} and \begin{equation*}\label{fourth_inequality_for_second_part_of_Hardy-Littlewood_inequality} \|\Phi_N-\Phi_{N'}\|_{L_q(v_{\lambda,\mu})}\leq \Bigl\{\sum\limits_{n=N+1}^{N'}\psi_n\Bigr\}^{1/q}\, A_{q'}. \end{equation*} The last inequality combined with \eqref{assumption_for_second_part_of_Hardy-Littlewood_inequality} show that the sequence $\{\Phi_N\}_{N=1}^\infty$ is a Cauchy sequence in $L_q(v_{\lambda,\mu})$ and therefore convergent in $L_q(v_{\lambda,\mu})$; let $f$ be its limit. Then, by mean convergence, \begin{equation*} \hat{f}_n=\lim\limits_{N\to\infty} \bigl(\widehat{\Phi_N}\bigr)_n,\quad n=0,1,\ldots, \end{equation*} which is easily seen to equal $\phi(n)$. Moreover, the defining relation \begin{equation*} f=\lim\limits_{N\to\infty} \Phi_N\qquad \text{in \, $L_q(v_{\lambda,\mu})$} \end{equation*} and the inequality \eqref{third_inequality_for_second_part_of_Hardy-Littlewood_inequality} show that \eqref{second_part_of_Hardy-Littlewood_inequality} holds and so complete the proof. 
$\square$ \section{Hausdorff\,--\,Young-type inequalities\\ for generalized Gegenbauer expansions}\label{section_for_Hausdorff-Young_inequality} To prove the following result, we use the Riesz\,--\,Thorin interpolation theorem. \begin{teoen}\label{Hausdorff-Young_inequality_for_generalized_Gegenbauer_expansions} $(a)$ If $1<p\leq2$ and $f\in L_{p}(v_{\lambda,\mu})$, then \begin{equation}\label{first_part_of_Hausdorff-Young_inequality} \Bigl\{\sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{p'}-\frac{1}{p}\right)\max(\lambda,\mu)}|\hat{f}_n|\Bigr)^{p'}\Bigr\}^{1/p'}\leq B_p\,\|f\|_{L_p(v_{\lambda,\mu})}. \end{equation} $(b)$ If $2\leq q<\infty$ and $\phi$ is a function on non-negative integers satisfying \begin{equation}\label{assumption_for_second_part_of_Hausdorff-Young_inequality} \sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\max(\lambda,\mu)}|\phi(n)|\Bigr)^{q'}<\infty, \end{equation} then the algebraic polynomials \begin{equation*} \Phi_N(t)=\sum\limits_{n=0}^N \phi(n)\,\widetilde{C}_n^{(\lambda,\mu)}(t) \end{equation*} converge in $L_q(v_{\lambda,\mu})$ to a function $f$ satisfying $\hat{f}_n=\phi(n)$, $n=0,1,\ldots$, and \begin{equation}\label{second_part_of_Hausdorff-Young_inequality} \|f\|_{L_q(v_{\lambda,\mu})}\leq B_{q'} \Bigl\{\sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\max(\lambda,\mu)}|\phi(n)|\Bigr)^{q'}\Bigr\}^{1/q'}. \end{equation} \end{teoen} \proofen Let $\sigma=\max(\lambda,\mu)$. (a) Note that for $p=2$ the Parseval identity implies equality in \eqref{first_part_of_Hausdorff-Young_inequality} with $B_2=1$. We now consider \eqref{first_part_of_Hausdorff-Young_inequality} as the transformation from $L_p(v_{\lambda,\mu})$ into the sequence $\bigl\{(n+1)^{-\sigma}\hat{f}_n\bigr\}_{n=0}^\infty$ in the $\ell_{p'}$ norm with the weight $\bigl\{(n+1)^{2\sigma}\bigr\}_{n=0}^\infty$ and show that this transformation is of strong type $(1,\infty)$. 
Using Theorem \ref{main_result_theorem_of_max_value_preprint_for_orthonormal_system}, we get \begin{equation*} \sup\limits_{n=0,1,\ldots} \Bigl\{(n+1)^{-\sigma} |\hat{f}_n|\Bigr\}\leq B_1 \|f\|_{L_1(v_{\lambda,\mu})}. \end{equation*} Thus, applying the Riesz\,--\,Thorin theorem, we deduce \eqref{first_part_of_Hausdorff-Young_inequality} with $B_p=B_1^{\frac{1}{p}-\frac{1}{p'}}$. (b) The proof of this part closely parallels that of part (b) of Theorem \ref{Hardy-Littlewood_inequality_for_generalized_Gegenbauer_expansions} and could be omitted; we give it here for completeness. We have $1<q'\leq 2$. For brevity, write $\psi_n$ in place of $\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\sigma}|\phi(n)|\Bigr)^{q'}$. Suppose that $g\in L_{q'}(v_{\lambda,\mu})$ and that $N<N'$ are positive integers. Applying H\"{o}lder's inequality and (a), we find that \begin{equation}\label{first_inequality_for_second_part_of_Hausdorff-Young_inequality} \begin{split} \Bigl|\int\nolimits_{-1}^1 \Phi_N(t)\, g(t)\, v_{\lambda,\mu}(t)\,dt\Bigr|&=\Bigl|\sum\limits_{n=0}^N\phi(n)\hat{g}_n\Bigr|=\\ &=\Bigl|\sum\limits_{n=0}^N (n+1)^{\left(\frac{1}{q'}-\frac{1}{q}\right)\sigma}\phi(n) \, (n+1)^{\left(\frac{1}{q}-\frac{1}{q'}\right)\sigma}\hat{g}_n\Bigr|\leq\\ &\leq \Bigl\{\sum\limits_{n=0}^N \psi_n\Bigr\}^{1/q'}\,\Bigl\{\sum\limits_{n=0}^N \Bigl((n+1)^{\left(\frac{1}{q}-\frac{1}{q'}\right)\sigma}|\hat{g}_n|\Bigr)^{q} \Bigr\}^{1/q}\leq\\ &\leq \Bigl\{\sum\limits_{n=0}^N \psi_n\Bigr\}^{1/q'} \, B_{q'} \|g\|_{L_{q'}(v_{\lambda,\mu})}. \end{split} \end{equation} Similarly, \begin{equation}\label{second_inequality_for_second_part_of_Hausdorff-Young_inequality} \Bigl|\int\nolimits_{-1}^1 \bigl(\Phi_N(t)-\Phi_{N'}(t)\bigr)\, g(t)\, v_{\lambda,\mu}(t)\,dt\Bigr|\leq \Bigl\{\sum\limits_{n=N+1}^{N'}\psi_n\Bigr\}^{1/q'}\, B_{q'} \|g\|_{L_{q'}(v_{\lambda,\mu})}. 
\end{equation} Hence, by \cite[Theorem~(12.13)]{hewitt_ross_book_analysis_1963}, the inequalities \eqref{first_inequality_for_second_part_of_Hausdorff-Young_inequality} and \eqref{second_inequality_for_second_part_of_Hausdorff-Young_inequality} lead respectively to the estimates \begin{equation}\label{third_inequality_for_second_part_of_Hausdorff-Young_inequality} \|\Phi_N\|_{L_q(v_{\lambda,\mu})}\leq \Bigl\{\sum\limits_{n=0}^N \psi_n\Bigr\}^{1/q'} \, B_{q'} \end{equation} and \begin{equation*}\label{fourth_inequality_for_second_part_of_Hausdorff-Young_inequality} \|\Phi_N-\Phi_{N'}\|_{L_q(v_{\lambda,\mu})}\leq \Bigl\{\sum\limits_{n=N+1}^{N'}\psi_n\Bigr\}^{1/q'}\, B_{q'}. \end{equation*} The last inequality combined with \eqref{assumption_for_second_part_of_Hausdorff-Young_inequality} show that the sequence $\{\Phi_N\}_{N=1}^\infty$ is a Cauchy sequence in $L_q(v_{\lambda,\mu})$ and therefore convergent in $L_q(v_{\lambda,\mu})$; let $f$ be its limit. Then, by mean convergence, \begin{equation*} \hat{f}_n=\lim\limits_{N\to\infty} \bigl(\widehat{\Phi_N}\bigr)_n,\quad n=0,1,\ldots, \end{equation*} which is easily seen to equal $\phi(n)$. Moreover, the defining relation \begin{equation*} f=\lim\limits_{N\to\infty} \Phi_N\qquad \text{in \, $L_q(v_{\lambda,\mu})$} \end{equation*} and the inequality \eqref{third_inequality_for_second_part_of_Hausdorff-Young_inequality} show that \eqref{second_part_of_Hausdorff-Young_inequality} holds and so complete the proof. $\square$ \section{Unification of the Hardy\,--\,Littlewood-type\\ and the Hausdorff\,--\,Young-type inequalities}\label{section_for_unification_of_inequalities} Theorem \ref{unification_of_different_types_of_inequalities} contains the Hardy\,--\,Littlewood-type and the Hausdorff\,--\,Young-type inequalities for the expansions by orthonormal polynomials with respect to the weight function $v_{\lambda,\mu}$ (see \eqref{weight_function}). 
To prove it, we need Stein's modification of the Riesz\,--\,Thorin interpolation theorem (see \cite[Theorem~2, p.~485]{stein_article_interpolation_1956}) given below. \begin{teoen}[(Stein)]\label{Stein's_modification} Suppose $\nu_1$ and $\nu_2$ are $\sigma$-finite measures on $M$ and $S$, respectively, and $T$ is a linear operator defined on $\nu_1$-measurable functions on $M$ to $\nu_2$-measurable functions on $S$. Let $1\leq r_0,\,r_1,\,s_0,\,s_1\leq\infty$ and $\frac{1}{r}=\frac{1-t}{r_0}+\frac{t}{r_1}$, $\frac{1}{s}=\frac{1-t}{s_0}+\frac{t}{s_1}$, where $0\leq t\leq1$. Suppose further that \begin{equation*} \|(Tg) \cdot v_i\|_{L_{s_i}(S,\nu_2)}\leq L_i\|g \cdot u_i\|_{L_{r_i}(M,\nu_1)},\quad i=0,1, \end{equation*} where $u_i$ and $v_i$ are non-negative weight functions. Let $u=u_0^{1-t} \cdot u_1^t$, $v=v_0^{1-t} \cdot v_1^t$. Then \begin{equation*} \|(Tg) \cdot v\|_{L_{s}(S,\nu_2)}\leq L\|g \cdot u\|_{L_{r}(M,\nu_1)} \end{equation*} with $L=L_0^{1-t} \cdot L_1^t$. \end{teoen} \begin{teoen}\label{unification_of_different_types_of_inequalities} Let $\sigma=\max(\lambda,\mu)$. $(a)$ If $1<p\leq 2$, $f\in L_p(v_{\lambda,\mu})$, and $p\leq s\leq p'$, then \begin{equation}\label{first_part_of_unification} \Bigl\{\sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{s}-\frac{1}{p}\right)\sigma+\left(\frac{1}{p'}-\frac{1}{s}\right)(\sigma+1)}|\hat{f}_n|\Bigr)^s\Bigr\}^{1/s}\leq C_p(s)\,\|f\|_{L_p(v_{\lambda,\mu})}. 
\end{equation} $(b)$ If $2\leq q<\infty$, $q'\leq r\leq q$, and $\phi$ is a function on non-negative integers satisfying \begin{equation*} \sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{r}\right)\sigma+\left(\frac{1}{r}-\frac{1}{q}\right)(\sigma+1)}|\phi(n)|\Bigr)^{r'}<\infty, \end{equation*} then the algebraic polynomials \begin{equation*} \Phi_N(t)=\sum\limits_{n=0}^N \phi(n)\widetilde{C}_n^{(\lambda,\mu)}(t) \end{equation*} converge in $L_q(v_{\lambda,\mu})$ to a function $f$ satisfying $\hat{f}_n=\phi(n)$, $n=0,1,\ldots$, and \begin{equation*} \|f\|_{L_q(v_{\lambda,\mu})}\leq C_{q'}(r) \Bigl\{\sum\limits_{n=0}^\infty\Bigl((n+1)^{\left(\frac{1}{q'}-\frac{1}{r}\right)\sigma+\left(\frac{1}{r}-\frac{1}{q}\right)(\sigma+1)}|\phi(n)|\Bigr)^{r'}\Bigr\}^{1/r'}. \end{equation*} \end{teoen} \proofen (a) This part was proved for $s=p$ (with $C_p(p)=A_p$) and $s=p'$ (with $C_p(p')=B_p$) in Theorems \ref{Hardy-Littlewood_inequality_for_generalized_Gegenbauer_expansions} and \ref{Hausdorff-Young_inequality_for_generalized_Gegenbauer_expansions}, respectively. So for $p=2$, we obtain the equality in \eqref{first_part_of_unification} with $C_2(2)=1$. Consider now the case that $1<p<2$. To prove \eqref{first_part_of_unification}, we set in Theorem \ref{Stein's_modification}: $M=[-1,1]$, $\nu_1$ the Lebesgue measure, $S=\{n\}_{n=0}^\infty$, $\nu_2$ the counting measure, $g=f$, $Tg=\{\hat{f}_n\}_{n=0}^\infty$, $r=r_0=r_1=p$, $u=u_0=u_1=v_{\lambda,\mu}$, $s_0=p'$, $s_1=p$, $v_0=\bigl\{(n+1)^{\left(\frac{1}{p'}-\frac{1}{p}\right)\sigma}\bigr\}_{n=0}^\infty$, $v_1=\bigl\{(n+1)^{\left(\frac{1}{p'}-\frac{1}{p}\right)(\sigma+1)}\bigr\}_{n=0}^\infty$, $L_0=B_p$, $L_1=A_p$, and $\frac{1}{s}=\frac{1-t}{p'}+\frac{t}{p}$. As $\frac{1}{p}+\frac{1}{p'}=1$, $\frac{1}{s}-\frac{1}{p}=(1-t)\bigl(\frac{1}{p'}-\frac{1}{p}\bigr)$, $\frac{1}{p'}-\frac{1}{s}=t\bigl(\frac{1}{p'}-\frac{1}{p}\bigr)$, the proof of \eqref{first_part_of_unification} is concluded. 
Since \begin{equation*} 1-t=\frac{\frac{1}{s}-\frac{1}{p}}{\frac{1}{p'}-\frac{1}{p}},\quad t=\frac{\frac{1}{p'}-\frac{1}{s}}{\frac{1}{p'}-\frac{1}{p}}, \end{equation*} it is clear that $C_p(s)=B_p^{1-t}A_p^t$. (b) Taking into account the proofs given previously (see part (b) of Theorems \ref{Hardy-Littlewood_inequality_for_generalized_Gegenbauer_expansions} and \ref{Hausdorff-Young_inequality_for_generalized_Gegenbauer_expansions}), the proof is straightforward and left to the reader. $\square$ \begin{Biblioen} \bibitem{andrews_askey_roy_book_special_functions_1999}G.\,E.~Andrews, R.~Askey, and R.~Roy, \textit{Special Functions}, Encyclopedia of Mathematics and its Applications \textbf{71}, Cambridge University Press, Cambridge, 1999. \bibitem{dai_xu_book_approximation_theory_2013}F.~Dai and Y.~Xu, \textit{Approximation theory and harmonic analysis on spheres and balls}, Springer Monographs in Mathematics, Springer, 2013. \bibitem{ditzian_article_estimates_2011}Z.~Ditzian, Estimates of the coefficients of the Jacobi expansion by measures of smoothness, \textit{J. Math. Anal. Appl.} \textbf{384} (2011), 303--306. \bibitem{ditzian_article_smoothness_2012}Z.~Ditzian, Relating smoothness to expressions involving Fourier coefficients or to a Fourier transform, \textit{J. Approx. Theory} \textbf{164} (2012), 1369--1389. \bibitem{ditzian_article_norm_and_smoothness_2015}Z.~Ditzian, Norm and smoothness of a function related to the coefficients of its expansion, \textit{J. Approx. Theory} \textbf{196} (2015), 101--110. \bibitem{dunkl_xu_book_orthogonal_polynomials_2014}C.\,F.~Dunkl and Y.~Xu, \textit{Orthogonal polynomials of several variables}, 2nd ed., Encyclopedia of Mathematics and its Applications \textbf{155}, Cambridge University Press, Cambridge, 2014. \bibitem{hewitt_ross_book_analysis_1963}E.~Hewitt and K.\,A.~Ross, \textit{Abstract harmonic analysis. Vol. I}, Springer-Verlag, Heidelberg, 1963. 
\bibitem{stein_article_interpolation_1956}E.~Stein, Interpolation of linear operators, \textit{Trans. Amer. Math. Soc.} \textbf{83} (1956), 482--492. \bibitem{stein_weiss_article_interpolation_1958}E.~Stein and G.~Weiss, Interpolation of operators with change of measures, \textit{Trans. Amer. Math. Soc.} \textbf{87} (1958), 159--172. \bibitem{szego_book_orthogonal_polynomials_1975}G.~Szeg\"{o}, \textit{Orthogonal polynomials}, 4th ed., American Mathematical Society Colloquium Publications \textbf{23}, American Mathematical Society, Providence, Rhode Island, 1975. \bibitem{veprintsev_preprint_max_value_2015}R.\,A.~Veprintsev, On the asymptotic behavior of the maximum absolute value of generalized Gegenbauer polynomials, arXiv preprint 1602.01023 (2015). \end{Biblioen} \noindent \textsc{Department of scientific research, Tula State University, Tula, Russia } \noindent \textit{E-mail address}: \textbf{[email protected]} \end{document}
\begin{document} \title{Linear Branching Programs and Directional Affine Extractors} \begin{abstract} A natural model of \emph{read-once{} linear branching programs} is a branching program where queries are $\mathbb{F}_2$ linear forms, and along each path, the queries are linearly independent. We consider two restrictions of this model, which we call \emph{weakly} and \emph{strongly} read-once, both generalizing standard read-once branching programs and parity decision trees. Our main results are as follows. \begin{itemize} \item {\bf Average-case complexity.} We define a pseudo-random class of functions which we call \emph{directional affine extractors}, and show that these functions are hard on average for the strongly read-once{} model. We then present an explicit construction of such a function with good parameters. This strengthens the result of Cohen and Shinkar (ITCS'16) who gave such average-case hardness for parity decision trees. Directional affine extractors are stronger than the more familiar class of affine extractors. Given the significance of these functions, we expect that our new class of functions might be of independent interest. \item {\bf Proof complexity.} We also consider the proof system $\reslin$, which is an extension of resolution with linear queries. A refutation of a CNF in this proof system naturally defines a linear branching program solving the corresponding search problem. Conversely, we show that a weakly read-once{} linear BP solving the search problem can be converted to a $\reslin$ refutation with constant blow-up. \end{itemize} \end{abstract} \section{Introduction} Circuit complexity and proof complexity are two major lines of inquiry in complexity theory (see \cite{DBLP:books/daglib/0028687, kraj_2019} for extensive introductions). 
The former theme attempts to identify explicit Boolean functions which are not computable by small circuits from a certain restricted class, and the latter aims to find tautologies which are not provable by short proofs in a given restricted proof system. These seemingly unrelated topics are bound together in at least two different ways: via feasible interpolation where a circuit lower bound for a concrete computational problem implies proof size lower bounds (see, e.g., \cite{DBLP:conf/focs/HrubesP17}), and more fundamentally many proof systems have an underlying circuit class where proof lines come from. Notable examples are Frege, bounded depth Frege, and extended Frege systems where proof lines are De Morgan formulas, $\ac0$ circuits, and general Boolean circuits, respectively. Intuitively we expect that understanding a circuit class in terms of lower bounds and techniques should yield results in the proof complexity counterpart. This intuition has been supported by bounded depth Frege lower bounds using specialized Switching Lemmas (see, e.g., \cite{DBLP:journals/jacm/Hastad21}), the essential ingredient of $\ac0$ lower bounds. \noindent{\bf $\ac0[2]$ circuits and $\reslin$ proof system.} It is not clear if this intuition should always hold. Lower bounds for $\ac0[p]$ circuits ($\ac0$ circuits with $\Mod_p$ gates) have been known for a long time \cite{MR897705,DBLP:conf/stoc/Smolensky87} yet lower bounds for bounded depth Frege systems with modular gates still elude us. Perhaps this failure is not too surprising since our understanding of $\ac0[p]$ circuits is not of the same status as our understanding of $\ac0$. For example, even for $\ac0[2]$, that is $\ac0$ with parity gates, no strong average-case lower bound is known. 
Settling such bounds is an important challenge, since Shaltiel and Viola \cite{DBLP:journals/siamcomp/ShaltielV10} showed that for standard worst-case to average-case hardness amplification techniques to work, the circuit class is required to compute the majority function, which is not the case for $\ac0[2]$. Several works have highlighted the special case of $\ac0\circ \Mod_2$, where the parity gates are next to the input \cite{DBLP:journals/eccc/ServedioV12,DBLP:conf/innovations/AkaviaBGKR14,DBLP:conf/innovations/CohenS16}. Among these works we pay special attention to the result of Cohen and Shinkar \cite{DBLP:conf/innovations/CohenS16} who considered the depth-3 case of this problem and proved a strong average-case hardness for the special case of parity decision trees. The more general case of $\dnf \circ \Mod_2$ remains open. In the proof complexity parallel, a special case of $\ac0[2]$-Frege was suggested by Itsykson and Sokolov \cite{DBLP:journals/apal/ItsyksonS20}. They considered the system $\reslin$ that is an extension of resolution which reasons about disjunctions of linear equations over $\mathbb{F}_2$, which we call linear clauses. The rules of this system are: \begin{itemize} \item \emph{the weakening rule}: from a linear clause we can derive any other linear clause which is semantically implied, \item \emph{the resolution rule}: for every two linear clauses $C$ and $D$ and linear form $f$, we can derive $C \vee D$ from $(f = 0) \vee C$ and $(f= 1) \vee D$. \end{itemize} They proved exponential lower bounds for the tree-like restriction of this system. These lower bounds were later extended in \cite{DBLP:conf/csr/Gryaznov19,DBLP:journals/cc/PartT21}. For DAG-like proofs, the only known results are due to Khaniki \cite{DBLP:journals/eccc/Khaniki20} who proved almost quadratic lower bounds, and to Lauria \cite{DBLP:journals/ipl/Lauria18a} for a restriction of the system when parities are on a bounded number of variables. 
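To make the resolution rule of $\reslin$ concrete, here is a brute-force soundness check (a toy sketch of our own; the clause encoding and the example clauses are arbitrary choices, not from the literature). A linear clause is encoded as a list of pairs $(\mathrm{mask}, b)$, each representing an equation $\langle \mathrm{mask}, x\rangle = b$ over $\mathbb{F}_2$; since $f$ cannot evaluate to both $0$ and $1$, every assignment satisfying both premises satisfies the conclusion:

```python
def holds(eq, x):
    """One linear equation over F_2: (mask, b) holds at the assignment x (a bitmask)."""
    mask, b = eq
    return bin(mask & x).count("1") % 2 == b

def clause_sat(clause, x):
    """A linear clause is a disjunction of linear equations."""
    return any(holds(eq, x) for eq in clause)

# Resolution rule: from (f = 0) v C and (f = 1) v D derive C v D.
# Exhaustive soundness check on n = 3 variables.
n = 3
f = 0b101                 # the linear form x0 + x2
C = [(0b010, 1)]          # the clause "x1 = 1"
D = [(0b110, 0)]          # the clause "x1 + x2 = 0"
for x in range(2 ** n):
    if clause_sat([(f, 0)] + C, x) and clause_sat([(f, 1)] + D, x):
        assert clause_sat(C + D, x)   # conclusion holds: f is not both 0 and 1
print("resolution rule sound on this example")
```

The weakening rule is sound by definition (semantic implication), and the exhaustive check over all $2^n$ assignments is of course feasible only for toy sizes.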
Super-polynomial lower bounds for unrestricted DAG-like $\reslin$ are widely open. \noindent{\bf Parity decision trees and tree-like $\reslin$.} Given an unsatisfiable CNF $F = C_1 \wedge \ldots \wedge C_m$, the \emph{search problem for $F$} is the computational problem of finding a clause $C_i$ falsified by a given assignment to the variables. A tree-like $\reslin$ refutation of $F$ can be viewed as a parity decision tree solving the search problem for an unsatisfiable CNF \cite{DBLP:conf/csr/Gryaznov19}. Recall that the strongest average-case lower bounds for $\ac0[2]$ are in fact for parity decision trees. Thus it seems that parity decision trees are at the frontier of our understanding in these two areas. Therefore a natural approach to make progress towards both general $\reslin$ lower bounds and average-case hardness for $\ac0[2]$ is to consider DAG-like structures more general than decision trees. \subsection{Our contributions} Motivated by strengthening tree-like $\reslin$ lower bounds as well as average-case lower bounds for parity decision trees to more general models, we consider a model of read-once branching programs (BPs) with linear queries. The most natural way to interpret the property of being read-once in BPs with linear queries is to impose that along every path, the queries are linearly independent. We consider two restrictions of this model which we call weakly read-once{} and strongly read-once{}, both of which extend parity decision trees as well as standard read-once branching programs. For strongly read-once{} BPs, we prove average-case hardness for a new class of pseudo-random functions, and we give an explicit construction of such a function, thus strengthening the result of Cohen and Shinkar \cite{DBLP:conf/innovations/CohenS16} and making progress towards average-case hardness for $\dnf \circ \Mod_2$. Our pseudo-random functions are defined below and might be of independent interest. 
\noindent{\bf Directional affine extractors.} The average-case hardness result of Cohen and Shinkar \cite{DBLP:conf/innovations/CohenS16} is for affine extractors. An affine extractor for dimension $d$ and bias $\epsilon$ is a function such that, restricted to any affine subspace of dimension at least $d$, it has bias at most $\epsilon$. Explicit constructions for such functions are known (e.g., \cite{MR2306652,DBLP:journals/combinatorica/Yehudayoff11,DBLP:journals/siamcomp/Ben-SassonK12}). For our purposes it is not clear if affine extractors are sufficient. Therefore we consider a more robust concept. We say that a function $f\colon {\{0,1\}}^n \rightarrow {\{0,1\}}$ is a \emph{directional affine extractor} for dimension $d$ with bias $\epsilon$ if for every non-zero $a \in {\{0,1\}}^n$, the derivative of $f$ in the direction $a$, $D_a f(x) = f(x+a) + f(x)$, is an affine extractor for dimension $d$ with bias $\epsilon$. We give an explicit construction of a good directional affine extractor for dimension larger than $2n/3$. For weakly read-once{} BPs we show a correspondence with $\reslin$. More precisely, we show that a weakly read-once{} BP solving the search problem for a CNF $F$ can be converted to a $\reslin$ refutation of $F$. This also justifies considering a $\reslin$ counterpart to regular resolution. Recall that in a regular resolution proof, no variable is resolved more than once along any path. It is well-known that a read-once BP solving the search problem for an unsatisfiable CNF can be converted to a regular resolution refutation of the formula. Our result should be interpreted as an extension of this result to $\reslin$. \subsection{Read-once{} linear branching programs} The model of read-once branching programs is a natural and extensively studied model of computation for which strong lower bounds are known~\cite{DBLP:journals/tcs/SavickyZ00,DBLP:conf/icalp/AndreevBCR99}.
Here we consider an extension of this model where queries are linear forms. A linear branching program $\mathcal{P}$ in the variables $x$ is a DAG with the following properties: \begin{itemize} \item it has exactly one source; \item it has two sinks labeled with $0$ and $1$ representing the values of the function that $\mathcal{P}$ computes; \item every inner node is labeled by a linear form $q$ over $\mathbb{F}_2$ in $x$, which we call a \emph{query}; \item every inner node with a label $q$ has two outgoing edges labeled with $0$ and $1$ representing the value of $q$. \end{itemize} Any assignment to the input variables naturally defines a path in the program. We say that $\mathcal{P}$ computes a Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ if for every $x \in {\{0,1\}}^n$, the path in $\mathcal{P}$ defined by $x$ ends in the sink labeled with $f(x)$. We now define read-once{} linear BPs. Given an inner node $v$ of a linear branching program $\mathcal{P}$, we define $\pre_v$ as the span of all queries that appear on any path from the source of $\mathcal{P}$ to $v$, excluding the query at $v$. We define $\post_v$ as the span of all queries in the subprogram starting at $v$. \begin{definition}[Weakly read-once{} linear branching program] We say that a linear branching program $\mathcal{P}$ is \emph{weakly read-once} if for every inner node $v$ of $\mathcal{P}$ which queries $q$, it holds that $q \not \in \pre_v$. \end{definition} We can make this requirement more strict. \begin{definition}[Strongly read-once{} linear branching program] We say that a linear branching program $\mathcal{P}$ is \emph{strongly read-once} if for every inner node $v$ of $\mathcal{P}$, it holds that $\pre_v \cap \post_v = \{0\}$. \end{definition} It follows from both definitions that the queries along any path in a weakly or strongly read-once{} BP are linearly independent. Furthermore, both of these models generalize standard read-once BPs and parity decision trees.
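A minimal executable sketch of these definitions (our own toy example; the node names are ours) evaluates a small linear BP and verifies the consequence shared by both read-once notions, namely that the queries along every path are linearly independent:

```python
# Toy linear branching program (our own example): each inner node holds the
# coefficient vector a of a linear query <a, x> over F_2; sinks are 0 and 1.
# This program computes x1 + x2 + x3 by querying x1 + x2 and then x3.
import itertools

NODES = {
    'r':  ((1, 1, 0), 'u0', 'u1'),  # query x1 + x2
    'u0': ((0, 0, 1), 0, 1),        # answer was 0: output x3
    'u1': ((0, 0, 1), 1, 0),        # answer was 1: output 1 + x3
}

def evaluate(nodes, source, x):
    """Follow the path defined by the input x until a sink is reached."""
    v = source
    while v not in (0, 1):
        q, succ0, succ1 = nodes[v]
        v = succ1 if sum(qi * xi for qi, xi in zip(q, x)) % 2 else succ0
    return v

def independent(queries):
    """Linear independence over F_2, via Gaussian elimination on bitmasks."""
    basis = []                       # kept sorted by decreasing leading bit
    for q in queries:
        v = int(''.join(map(str, q)), 2)
        for b in basis:
            v = min(v, v ^ b)        # clears b's leading bit from v if set
        if v == 0:
            return False
        basis.append(v)
        basis.sort(reverse=True)
    return True

def paths(nodes, v, acc=()):
    """Query sequences along all source-to-sink paths."""
    if v in (0, 1):
        yield acc
        return
    q, s0, s1 = nodes[v]
    yield from paths(nodes, s0, acc + (q,))
    yield from paths(nodes, s1, acc + (q,))

# Queries along every path are linearly independent (implied by both
# definitions), and the program indeed computes the parity of its inputs.
read_once = all(independent(p) for p in paths(NODES, 'r'))
correct = all(evaluate(NODES, 'r', x) == sum(x) % 2
              for x in itertools.product((0, 1), repeat=3))
```

Note that this check only certifies path-wise independence; the strongly read-once{} condition $\pre_v \cap \post_v = \{0\}$ is strictly stronger in general.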
When the distinction between weakly or strongly read-once is not important, we simply write ``read-once''. \section{Notation and basic facts} Each path in a read-once{} program defines an affine subspace given by the set of solutions of the system corresponding to the queries on the path. Any affine subspace can be represented by a vector space shifted by a vector from the affine space. For our purposes, we need to choose this shift carefully. Let $p$ be a path in $\mathcal{P}$ leading to a node $v$ with queries $q_1, \ldots, q_k$ and answers $a_1, \ldots, a_k$ to these queries which define the affine subspace $S_p = \{x:\bigwedge_{i=1}^k q_i(x) = a_i\}$. Let $V_p$ be the supporting vector space of $S_p$, i.e., $V_p = \{x : \bigwedge_{i = 1}^k q_i(x) = 0\}$. Then clearly $S_p = V_p + b$ for any $b \in S_p$. Choose an arbitrary basis $q'_1, \ldots, q'_t$ for $\post_v$. Since $q'_1, \ldots, q'_t$ are independent of $q_1, \ldots, q_k$, there exists $b$ such that $\bigwedge_{i=1}^k q_i(b) = a_i$ and $\bigwedge_{i = 1}^t q'_i(b) = 0$. Then $S_p = V_p + b$ and for every $q \in \post_v$, we have $q(b) = 0$. \begin{definition}[Canonical affine subspace] Given a path $p$ which ends at a node $v$, we call $S_p$ the \emph{canonical affine subspace} for $p$. Furthermore a \emph{canonical representation} of $S_p$ is any $V_p + b = S_p$ where every $q \in \post_v$ vanishes on $b$. \end{definition} Throughout the paper we drop the word \emph{representation} and simply say $V_p + b$ is the canonical affine subspace of $p$ to mean that it is a canonical representation of $S_p$. Since we will often use canonical affine subspaces to represent paths in BPs, we adopt the following algebraic notation. Let us denote the space of all linear forms on $\mathbb{F}_2^n$ (the dual space) as $\dual{(\mathbb{F}_2^n)}$. 
Given a subspace $V$ of $\mathbb{F}_2^n$, we define $\orth{V}$ as the space of all linear forms from $\dual{(\mathbb{F}_2^n)}$ that vanish on $V$ (this space is sometimes called the annihilator of $V$), i.e., \[ \orth{V} = \{\ell \in \dual{(\mathbb{F}_2^n)} : \forall v \in V,\ \ell(v) = 0\}. \] Given a path $p$ with queries $q_1, \ldots, q_k$ and its canonical affine subspace $V+b$, the space $\orth{V}$ is the query space of $p$, i.e., $\orth{V} = \Span(q_1, \ldots, q_k)$. Throughout the paper we adopt the following notation. \begin{itemize} \item Given a vector $c \in {\{0,1\}}^n$ the \emph{support of $c$} is defined as \[ \supp(c) \coloneqq \{i : c_i \neq 0\}. \] \item Let $\sigma$ be a partial assignment to the variables $x_1, \ldots, x_n$. Then \[ \dom(\sigma) \coloneqq \{i : \sigma(x_i)\ \text{is defined}\}. \] \item We say that $a \in {\{0,1\}}^n$ is \emph{consistent} with a partial assignment $\sigma$ to $x_1, \ldots, x_n$ if for every $i \in \dom(\sigma)$, it holds that $\sigma(x_i) = a_i$. \item Let $V$ and $W$ be two subspaces. Then the sum of $V$ and $W$ is the subspace \[ V+W \coloneqq \{v+w : v \in V,\ w \in W\}. \] Note that $V + W = \Span(V \cup W)$. \item We write $a+b$ without specifying the underlying field when it is clear from the context; most often the field is $\mathbb{F}_2$. \end{itemize} \subsection{The trace map} The trace map $\Tr\colon \mathbb{F}_{p^n} \to \mathbb{F}_p$ is defined as \[ \Tr(x) \coloneqq \sum_{i=0}^{n-1} x^{p^i}. \] One important property that we need is that $\Tr$ is a linear map. We also use the following fact about the trace. \begin{proposition}[cf.~\cite{lidl_niederreiter_1996}]\label{prop:trace_linear} For every $\mathbb{F}_p$-linear map $\pi\colon \mathbb{F}_{p^n} \to \mathbb{F}_p$ there exists $\mu \in \mathbb{F}_{p^n}$ such that for all $x \in \mathbb{F}_{p^n}$ we have \[ \pi(x) = \Tr(\mu \cdot x). \] Furthermore, $\pi$ is trivial if and only if $\mu = 0$.
\end{proposition} Since we are interested in Boolean functions, we will only consider the case $p=2$. Let $\phi\colon \mathbb{F}_2^n \to \mathbb{F}_{2^n}$ be any $\mathbb{F}_2$-linear isomorphism. Then $\Tr(\mu \cdot \phi(x))$ is linear in $x$ and we have the following: \begin{proposition}\label{prop:trace_linear_boolean} The set of all linear Boolean functions coincides with the set of functions $\ell_\mu(x) = \Tr(\mu \cdot \phi(x))$, where $\mu \in \mathbb{F}_{2^n}$. \end{proposition} In the rest of the paper we fix $\phi$. To make the proofs more readable we use bold font to denote the corresponding elements of $\mathbb{F}_{2^n}$, e.g., $\iso{x}$ for $\phi(x)$. \subsection{Affine extractors and dispersers} A Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ is an \emph{affine disperser} for dimension $d$ if $f$ is not constant on any affine subspace of dimension at least $d$. Let us also recall affine extractors, which are quantitative strengthenings of affine dispersers. The \emph{bias} of $f$ is defined as \[ \bias(f) \coloneqq \card{\E_{x \in U_n}[(-1)^{f(x)}]}, \] where $U_n$ is the uniform distribution on ${\{0,1\}}^n$. Given a subset $S \subseteq {\{0,1\}}^n$, \emph{the bias of $f$ restricted to $S$} is defined as \[ \bias(\restr{f}{S}) \coloneqq \card{\E_{x \in U(S)}[(-1)^{f(x)}]}, \] where $U(S)$ is the uniform distribution on $S$. A Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ is an \emph{affine extractor} for dimension $d$ with bias $\epsilon$ if for every affine subspace $S$ of dimension at least $d$, the bias of $f$ restricted to $S$, $\bias(\restr{f}{S})$, is at most $\epsilon$. \section{Affine mixedness} In this section we give a criterion for functions to be worst-case hard for read-once{} linear BPs. Let us first recall \emph{mixedness} from standard read-once BPs.
\begin{definition} A Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ is \emph{$d$-mixed} if for every $I \subseteq [n]$ of size at most $n-d$\footnote{This definition is commonly given for sets of size $d$ instead of $n - d$. We deviate from this convention since in our generalization to affine subspaces the parameter $d$ corresponds to the dimension, which is more natural.} and every two distinct partial assignments $\sigma$ and $\tau$ with $\dom(\sigma) = \dom(\tau) = I$, it holds that $\restr{f}{\sigma} \neq \restr{f}{\tau}$. \end{definition} \begin{theorem}[Folklore; see~\cite{DBLP:books/daglib/0028687} for a proof]\label{thm:mixedness_1bp} Let $f\colon {\{0,1\}}^n \to {\{0,1\}}$ be a $d$-mixed Boolean function. Then any read-once branching program computing $f$ has size at least $2^{n-d}-1$. \end{theorem} Explicit constructions of $d$-mixed functions with $d = o(n)$ and thus $2^{n - o(n)}$ size lower bounds for read-once BPs were given in \cite{DBLP:journals/tcs/SavickyZ00,DBLP:conf/icalp/AndreevBCR99}. We generalize this notion to linear branching programs. We need the following equivalent definition of $d$-mixedness. \begin{lemma}\label{lm:equiv_d-mixed} A Boolean function $f$ is $d$-mixed if and only if for every partial assignment $\sigma$ of size at most $n-d$ and every $c \neq 0$ with $\supp(c) \subseteq \dom(\sigma)$, there exists $x$ consistent with $\sigma$ such that $f(x) \neq f(x+c)$. \end{lemma} \begin{proof} ($\Leftarrow$) Let $\sigma$ and $\tau$ be two distinct partial assignments with domain $I$ of size at most $n - d$. Define $c_i = \tau(x_i) + \sigma(x_i)$ for $i \in I$ and $c_i = 0$ otherwise. By assumption there exists $x$ consistent with $\sigma$ such that $f(x) \ne f(x + c)$. It follows from the definition of $c$ that $x+c$ is consistent with $\tau$. Define $J = [n] \setminus I$ and $z = x_{J} = {(x+c)}_J$. Then $\restr{f}{\sigma}(z) = f(x) \neq f(x+c) = \restr{f}{\tau}(z)$.
($\Rightarrow$) Let $\sigma$ be a partial assignment with a domain of size at most $n - d$ and let $c$ be given such that $\supp(c) \subseteq \dom(\sigma)$. Define $\tau(x_i) = \sigma(x_i) + c_i$ for $i \in \dom(\sigma)$. By assumption $\restr{f}{\sigma} \neq \restr{f}{\tau}$, hence there exists $z$ such that $\restr{f}{\sigma}(z) \neq \restr{f}{\tau}(z)$. Define $x$ to take the same value as $\sigma$ on $\dom(\sigma)$ and equal to $z$ otherwise. Then $f(x) = \restr{f}{\sigma}(z) \neq \restr{f}{\tau}(z) = f(x+c)$. \end{proof} \begin{definition} A Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ is \emph{$d$-affine mixed} if for every affine subspace $S$ of dimension at least $d$ and every vector $c \not\in V$, where $V$ is the supporting vector space of $S$, there exists $x \in S$ such that $f(x) \neq f(x+c)$. \end{definition} It follows from \cref{lm:equiv_d-mixed} that $d$-affine mixedness implies $d$-mixedness since a partial assignment is a special case of an affine subspace. Now we are ready to prove a generalization of \cref{thm:mixedness_1bp}. \begin{theorem} Let $f\colon {\{0,1\}}^n \to {\{0,1\}}$ be a $d$-affine mixed Boolean function. Then any strongly read-once{} linear branching program computing $f$ has size at least $2^{n-d}-1$. \end{theorem} \begin{proof} We prove that any such program $\mathcal{P}$ computing $f$ starts with a complete binary tree of depth $n-d-1$. Assume for the sake of contradiction that there are two paths $p$ and $q$ of length at most $n-d-1$, which meet for the first time at a node $v$. Let $V+a$ and $W+b$ be their corresponding canonical affine subspaces. Both of them have dimension at least $d+1$. We start by proving $\orth{V} = \orth{W}$ which implies $V = W$. Suppose that it is not the case. Without loss of generality, there exists $\ell \in \orth{W} \setminus \orth{V}$. By the read-once{} property $\ell \not\in \post_v$. 
Consider two affine subspaces $V'+a_1$ and $V'+a_2$ obtained by intersecting $V+a$ with $\ell(x) = 0$ and $\ell(x)=1$ such that for every $\ell' \in \post_v$, $\ell'(a_1) = 0$ and $\ell'(a_2) = 0$ (recall that we can choose such $a_1$ and $a_2$ since $\pre_v \cap \post_v = \{0\}$). By construction, they have dimension at least $d$. Since $f$ is $d$-affine mixed, there exists $z \in V'$ such that $f(z+a_1) \neq f(z+a_2)$. Consider any query $\ell'$ in the subprogram starting at $v$. The fact that $\ell' \in \post_v$ implies $\ell'(a_1) = \ell'(a_2) = 0$. Thus, we have $\ell'(z+a_1) = \ell'(z) = \ell'(z+a_2)$. This implies that in the subprogram starting at $v$ both $z+a_1$ and $z+a_2$ must follow the same path, contradicting $f(z+a_1) \neq f(z+a_2)$. Now, since $V = W$, $V+b$ is the canonical affine subspace for $q$, and $a \ne b$ since $p$ and $q$ are different paths. Again, since $f$ is $d$-affine mixed, there exists $z \in V$ such that $f(z+a) \neq f(z+b)$. Analogously to the previous case, for every $\ell' \in \post_v$ we have $\ell'(a)=\ell'(b)=0$, and thus $\ell'(z+a) = \ell'(z+b)$, contradicting $f(z+a) \neq f(z+b)$. \end{proof} \section{Affine dispersers for directional derivatives} In this section we give an explicit construction of a $d$-affine mixed function for $d$ linear in $n$. In fact, we give an even more powerful construction, which allows us to get an average-case lower bound for strongly read-once{} linear branching programs. For a Boolean function $f$, its directional derivative with respect to a non-zero vector $a$ is defined as \[ D_{a} f(x) \coloneqq f(x+a) + f(x). \] \begin{definition} A Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ is a \emph{directional affine extractor} for dimension $d$ with bias $\epsilon$ if for every non-zero $a$, the derivative $D_a f$ is an affine extractor for dimension $d$ with bias $\epsilon$.
Similarly, $f$ is a \emph{directional affine disperser} for dimension $d$ if for every non-zero $a$, $D_a f$ is an affine disperser for dimension $d$. \end{definition} Observe that this notion is stronger than the one defined in the previous section: if $f$ is a directional affine disperser for dimension $d$, then it is $d$-affine mixed. In what follows we construct a Boolean function $f$ in $n$ variables that is a good directional affine extractor for dimensions larger than $\frac{2}{3}n$. It is a well-known fact that the inner product function IP is an affine extractor; indeed, IP belongs to the class of bent functions, all of which are affine extractors. A Boolean function $f\colon {\{0,1\}}^n \to {\{0,1\}}$ is called a \emph{bent function} if all Fourier coefficients of its $\pm 1$ representation $f_\pm(x) \coloneqq {(-1)}^{f(x)}$ have the same absolute value. \begin{lemma}[Folklore; for a proof see, e.g.,~\cite{DBLP:conf/innovations/CohenS16,DBLP:journals/jcss/CheraghchiGJWX18}] \label{lm:bent} Let $f$ be a bent function on $n$ variables and $c \ge 1$ be an integer. Then, $f$ is an affine extractor for dimension $k = n/2 + c$ with bias at most $2^{-c}$. In particular, $f$ is an affine disperser for dimension $n/2 + 1$. \end{lemma} We apply this result to prove that the following function is an affine extractor. \begin{lemma}\label{lem:trace_affine_extractor} Let $a_0, a_1, a_2, a_3 \in \mathbb{F}_{2^k}$ with $a_0 \ne 0$. Let $g\colon {\{0,1\}}^k \times {\{0,1\}}^k \to {\{0,1\}}$ be the function defined as \[ g(x, y) = \Tr(a_0 \cdot \phi(x) \cdot \phi(y) + a_1 \cdot \phi(x) + a_2 \cdot \phi(y) + a_3). \] Then $g$ is an affine extractor for dimension $k+c$ with bias at most $2^{-c}$. In particular, $g$ is an affine disperser for dimension $k+1$. \end{lemma} \begin{proof} Let $g_\pm$ be the $\pm 1$ representation of $g$. By Lemma \ref{lm:bent}, it is enough to prove that all Fourier coefficients of $g_\pm$ have the same absolute value.
Recall that given $\alpha \in {\{0,1\}}^{2k}$ the $\alpha$-character $\chi_\alpha$ is defined as $\chi_\alpha(x, y) = {(-1)}^{\alpha \cdot (x, y)}$, where $\alpha \cdot (x, y)$ is the inner product. Fourier coefficient $\widehat{g_\pm}(\alpha)$ can be computed as follows. \[ \widehat{g_\pm}(\alpha) = \sum_{x,y \in {\{0,1\}}^k} g_\pm(x, y) \cdot \chi_\alpha(x, y) = \sum_{x,y \in {\{0,1\}}^k} {(-1)}^{\Tr(a_0 \cdot \iso{x} \cdot \iso{y} + a_1 \cdot \iso{x} + a_2 \cdot \iso{y} + a_3)} \cdot \chi_\alpha(x, y). \] Split $\alpha$ into two equal parts: $\alpha = (\alpha_1, \alpha_2)$. Then $\alpha \cdot (x, y) = \alpha_1 \cdot x + \alpha_2 \cdot y$. By \cref{prop:trace_linear_boolean}, there exist $\mu_1, \mu_2 \in \mathbb{F}_{2^k}$ such that $\alpha_1 \cdot x = \Tr(\mu_1 \cdot \iso{x})$ and $\alpha_2 \cdot y = \Tr(\mu_2 \cdot \iso{y})$. Also define \begin{align*} b_1 &\coloneqq a_0^{-1} \cdot (a_1 + \mu_1), \\ b_2 &\coloneqq a_0^{-1} \cdot (a_2 + \mu_2), \\ b_3 &\coloneqq a_3 + a_0^{-1} \cdot (a_1 + \mu_1) \cdot (a_2 + \mu_2). \end{align*} Then we can express $\widehat{g_\pm}(\alpha)$ in terms of $b_i$: \begin{align*} \widehat{g_\pm}(\alpha) &= \sum\limits_{x,y \in {\{0,1\}}^k} {(-1)}^{\Tr(a_0 \cdot \iso{x} \cdot \iso{y} + a_1 \cdot \iso{x} + a_2 \cdot \iso{y} + a_3)} \cdot {(-1)}^{\Tr(\mu_1 \cdot \iso{x}) + \Tr(\mu_2 \cdot \iso{y})} \\ &= \sum\limits_{x,y \in {\{0,1\}}^k} {(-1)}^{\Tr(a_0 \cdot \iso{x} \cdot \iso{y} + a_1 \cdot \iso{x} + a_2 \cdot \iso{y} + a_3 + \mu_1 \cdot \iso{x} + \mu_2 \cdot \iso{y})} \\ &= \sum\limits_{x,y \in {\{0,1\}}^k} {(-1)}^{\Tr(a_0 \cdot (\iso{x} + b_2) \cdot (\iso{y} + b_1) + b_3)} \\ &= {(-1)}^{\Tr(b_3)} \cdot \sum\limits_{x,y \in {\{0,1\}}^k} {(-1)}^{\Tr(a_0 \cdot (\iso{x} + b_2) \cdot (\iso{y} + b_1))}. \end{align*} Since $x$ and $y$ iterate through all vectors from ${\{0,1\}}^k$, $a_0 \cdot (\iso{x} + b_2)$ and $\iso{y} + b_1$ take all possible values from $\mathbb{F}_{2^k}$. 
It follows that \[ \card{\widehat{g_\pm}(\alpha)} = \card{\sum_{u,v \in \mathbb{F}_{2^k}} {(-1)}^{\Tr(u \cdot v)}}, \] which does not depend on $\alpha$; hence all Fourier coefficients of $g_\pm$ have the same absolute value. \end{proof} We are now ready to present our directional affine extractor. \begin{theorem} \label{thm:construction} Let $f\colon {\{0,1\}}^k \times {\{0,1\}}^k \times {\{0,1\}}^k \to {\{0,1\}}$ be the function defined by \[ f(x, y, z) = \Tr(\phi(x) \cdot \phi(y) \cdot \phi(z)). \] Then $f$ is a directional affine extractor for dimension $2k+c$ with bias $\epsilon \le 2^{-c}$. In particular, $f$ is a directional affine disperser for dimension $2k+1$. \end{theorem} \begin{proof} Consider the directional derivative of $f$ in the non-zero direction $a = (a_1,a_2,a_3)$: \begin{align*} D_{a} f(x, y, z) &= f(x+a_1, y+a_2, z+a_3) + f(x, y, z) \\ &= \Tr(\phi(x+a_1) \cdot \phi(y+a_2) \cdot \phi(z+a_3)) + \Tr(\iso{x} \cdot \iso{y} \cdot \iso{z}). \end{align*} By linearity of $\Tr$ and $\phi$ we have \begin{align*} D_{a} f(x, y, z) &= \Tr((\iso{x}+\iso{a_1}) \cdot (\iso{y} + \iso{a_2}) \cdot (\iso{z} + \iso{a_3}) + \iso{x} \cdot \iso{y} \cdot \iso{z}) \\ &= \Tr(\iso{a_1} \cdot \iso{y} \cdot \iso{z} + \iso{a_2} \cdot \iso{x} \cdot \iso{z} + \iso{a_3} \cdot \iso{x} \cdot \iso{y} + \ell(\iso{x},\iso{y},\iso{z})), \end{align*} where $\ell$ is an affine function. Without loss of generality we may assume that $a_3 \neq 0$. Let $S$ be an affine subspace with dimension at least $2k+c$. We need to show that the bias of $D_a f$ restricted to $S$ is at most $\epsilon$. Given $z_0 \in {\{0,1\}}^k$ define $S_{z_0} \coloneqq \{(x, y) : (x, y, z_0) \in S\}$. For every $z_0$ the affine subspace $S_{z_0}$ is either empty or has dimension at least $k+c$. Consider the restriction of $D_a f$ to $z = z_0$. \[ h_{z_0}(x, y) \coloneqq D_a f(x, y, z_0) = \Tr(\iso{a_3} \cdot \iso{x} \cdot \iso{y} + \ell'_{z_0}(\iso{x}, \iso{y})), \] where $\ell'_{z_0}$ is an affine function. By \cref{lem:trace_affine_extractor}, $h_{z_0}$ is an affine extractor for dimension $k+c$ with bias $\epsilon \le 2^{-c}$.
In particular, if $S_{z_0}$ is non-empty, then $\bias(\restr{h_{z_0}}{S_{z_0}}) \le \epsilon$. Thus, the bias of $D_a f$ restricted to $S$ can easily be bounded as follows: \begin{align*} \bias(\restr{D_a f}{S}) &= \left|\frac{1}{\card{S}} \sum_{(x, y, z) \in S} {(-1)}^{D_a f(x, y, z)}\right| \\ &= \left|\frac{1}{\card{S}} \sum_{z_0 \in {\{0,1\}}^k} \sum_{(x, y, z_0) \in S} {(-1)}^{D_a f(x, y, z_0)}\right| \\ &= \left|\frac{1}{\card{S}} \sum_{z_0 \in {\{0,1\}}^k} \sum_{(x, y) \in S_{z_0}} {(-1)}^{h_{z_0}(x, y)}\right| \\ &\le \frac{1}{\card{S}} \sum_{z_0 \in {\{0,1\}}^k} \left|\sum_{(x, y) \in S_{z_0}} {(-1)}^{h_{z_0}(x, y)}\right| \\ &\le \frac{1}{\card{S}} \sum_{z_0 \in {\{0,1\}}^k} \epsilon \cdot \card{S_{z_0}} = \epsilon. \end{align*} \end{proof} \section{Average-case lower bound} We consider a canonical form of strongly read-once{} linear branching programs. We adopt the terminology of~\cite{DBLP:journals/cc/ChenKKSZ15} and say that a read-once{} linear branching program is \emph{full} if for every inner node $v$ of the program, all the paths leading to $v$ have the same query space. A \emph{multipath} $(w_1, \ldots, w_m, v)$ is a linear branching program of the form \begin{center} \begin{tikzpicture} \tikzset{pnode/.style={draw,circle,outer sep=1mm,inner sep=1mm}} \tikzset{pedge/.style={thick,->}} \node[pnode] (u1) at (0, 0) {$w_1$}; \node[pnode] (u2) at (2, 0) {$w_2$}; \node (ellipsis) at (4, 0) {$\cdots$}; \node[pnode] (um) at (6, 0) {$w_m$}; \node[pnode] (v) at (8, 0) {$v$}; \draw[pedge] (u1) to [out=40,in=140] (u2); \draw[pedge] (u1) to [out=-40,in=-140] (u2); \draw[pedge] (u2) to [out=40,in=140] (ellipsis); \draw[pedge] (u2) to [out=-40,in=-140] (ellipsis); \draw[pedge] (ellipsis) to [out=40,in=140] (um); \draw[pedge] (ellipsis) to [out=-40,in=-140] (um); \draw[pedge] (um) to [out=40,in=140] (v); \draw[pedge] (um) to [out=-40,in=-140] (v); \end{tikzpicture} \end{center} That is, the program ignores the answers to the queries at $w_i$ for every $i$.
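The fullness condition and the multipath padding can be made concrete with a small self-contained sketch (our own toy example; the node names and the `rref` helper are ours, not from the paper). Query vectors are bitmasks over $\mathbb{F}_2$; a node reachable by paths with different query spaces violates fullness, and inserting a multipath node on the shorter path repairs it:

```python
# Toy illustration (not from the paper): checking the fullness property.
# Queries are bitmasks of linear forms over F_2; x1 = 0b100, x2 = 0b010, x3 = 0b001.

def rref(vectors):
    """Canonical reduced basis of the span of the given bitmask vectors."""
    basis = []
    for v in vectors:
        for b in sorted(basis, reverse=True):
            v = min(v, v ^ b)                 # clear b's leading bit from v
        if v:
            basis = [min(b, b ^ v) for b in basis] + [v]
    return frozenset(basis)

def query_spaces(nodes, target, v, acc=()):
    """Query spaces of all paths from node v to the target node."""
    if v == target:
        yield rref(acc)
    elif v in nodes:
        q, s0, s1 = nodes[v]
        yield from query_spaces(nodes, target, s0, acc + (q,))
        yield from query_spaces(nodes, target, s1, acc + (q,))

def is_full_at(nodes, source, v):
    """All paths from the source to v must have the same query space."""
    return len(set(query_spaces(nodes, v, source))) == 1

# Node 't' is reachable directly (query space {x1}) and via 'a' ({x1, x2}).
NOT_FULL = {'r': (0b100, 'a', 't'), 'a': (0b010, 't', 't'), 't': (0b001, 0, 1)}
# Padding the short edge with a multipath node 'w' (which ignores its answer)
# equalizes the query spaces without changing the computed function.
FULL = {'r': (0b100, 'a', 'w'), 'a': (0b010, 't', 't'),
        'w': (0b010, 't', 't'), 't': (0b001, 0, 1)}
```

The `rref` helper returns a canonical basis (reduced row echelon form over $\mathbb{F}_2$), so two paths span the same query space exactly when their `rref` values coincide.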
Given a program $\mathcal{P}$, we say that a subset of nodes is an \emph{antichain} if none of its nodes is a descendant of another. For example, the set of nodes at a fixed depth and the set of leaves form antichains. The following lemma and its proof are easy extensions of Lemma 3.7 in \cite{DBLP:journals/cc/ChenKKSZ15}. \begin{lemma}\label{lem:full_bp} Every weakly read-once{} or strongly read-once{} linear branching program $\mathcal{P}$ of size $s$ in $n$ variables has an equivalent full weakly read-once{} or strongly read-once{} linear branching program $\mathcal{P}'$, respectively, of size at most $3n \cdot s$. Furthermore, the size of every antichain in $\mathcal{P}'$ is at most $2s$. \end{lemma} \begin{proof} We construct $\mathcal{P}'$ inductively. Consider the nodes of $\mathcal{P}$ in topological order. It is clear that the source node satisfies the fullness property. Let $v$ be a node of $\mathcal{P}$ and $p_1, \ldots, p_k$ the paths that meet at $v$, and $V_1 + a_1, \ldots, V_k + a_k$ their canonical affine subspaces. For every $i \in [k]$ choose a set of linearly independent queries $Q_i$ such that $\orth{V_i} + \Span(Q_i) = \pre(v)$. For every $i \in [k]$ do the following. Let $Q_i = \{q_1, \ldots, q_m\}$ and let $u_i$ be the predecessor of $v$ on $p_i$. Replace the edge $u_i \to v$ with a multipath $(w_1, \ldots, w_m, v)$ and an edge $u_i \to w_1$, where $w_j$ is labeled with $q_j$ for every $j \in [m]$. After this transformation, every path to $v$ will have the query space $\pre(v)$. Since a branching program of size $s$ has at most $2s$ edges and we replaced every edge with a multipath of length at most $n$, the size of the constructed full read-once{} linear branching program $\mathcal{P}'$ is at most $s+2s \cdot n \le 3n \cdot s$. Consider an antichain $A$ in $\mathcal{P}'$. We map every node in $A$ to nodes in $\mathcal{P}$. Each node in $A$ is either originally in $\mathcal{P}$ or it was created by a multipath.
In the former case we map it to itself, and in the latter case we map it to the parent node from which it was created. Since the out-degree in $\mathcal{P}$ is 2 and $A$ is an antichain, at most 2 nodes are mapped to the same node. This proves the result. \end{proof} Denote by $\dist(f,g)$ the relative distance between Boolean functions $f$ and $g$. \begin{theorem}\label{thm:avg_case_full} Let $f\colon {\{0,1\}}^n \to {\{0,1\}}$ be a directional affine extractor for dimension $d$ with bias $\epsilon < \frac{1}{2}$. Then for every $g\colon {\{0,1\}}^n \to {\{0,1\}}$ computed by a strongly read-once{} linear branching program $\mathcal{P}$ of size at most $\epsilon \cdot 2^{n-d-1}$, it holds that $\dist(f, g) \ge \frac{1-\sqrt{2\epsilon}}{2}$. \end{theorem} \begin{proof} Let $s$ denote the size of $\mathcal{P}$. We first convert $\mathcal{P}$ into a full program. By \Cref{lem:full_bp}, the size of every antichain is at most $2s$. We then construct an equivalent program $\mathcal{P}'$ in which every path has length at least $n-d$. We can achieve this by extending every leaf of low depth by a multipath of an appropriate length. Consider the set $A$ of nodes in $\mathcal{P}'$ at depth exactly $n-d$. Note that every $v \in A$ is either a node at depth $n - d$ in $\mathcal{P}$, or it lies on a multipath extending a leaf of $\mathcal{P}$ and is thus uniquely determined by that leaf. Thus $A$ can be identified with an antichain in $\mathcal{P}$, and hence $\card{A} \le 2s$. We call an input $x$ \emph{wrong} if $f(x) \neq g(x)$. The distance $\dist(f, g)$ between $f$ and $g$ is the fraction of wrong inputs. \begin{claim}\label{clm:avg_case_1} Let $v \in A$ and let $k$ be the number of paths that meet at $v$. Then the number of wrong inputs that pass through $v$ is at least \[ \frac{k\cdot 2^d}{2} \left(1-\sqrt{\epsilon+\frac{1}{k}}\right).
\] \end{claim} \begin{claimproof} Since the program is full, the corresponding canonical affine subspaces for the paths that meet at $v$ are $V+a_1, \ldots, V+a_k$, for some $d$-dimensional vector space $V$ and distinct $a_1, \ldots, a_k \in {\{0,1\}}^n$. Recall that $f$ is a directional affine extractor with bias $\epsilon$. Then for every $i \neq j$, it holds that $D_{a_i + a_j} f = f(x+(a_i + a_j)) + f(x)$ is an affine extractor with bias $\epsilon$, thus \begin{equation}\label{eq:extractor_assumption} \begin{aligned} \left|\sum_{x \in V} {(-1)}^{f(x+a_i)} \cdot {(-1)}^{f(x+a_j)}\right| &= \left|\sum_{x \in V + a_j} {(-1)}^{f(x+a_i+a_j) + f(x)}\right| \\ &= \bias\left(\restr{D_{a_i + a_j} f}{V+a_j}\right) \cdot \card{V} \le \epsilon \card{V}. \end{aligned} \end{equation} Every $x \in V$ produces a partition of $[k]$ into two parts $(J, [k] \setminus J)$ such that $f(x+a_i) = 0$ for $i \in J$ and $f(x+a_i) = 1$ for $i \not\in J$. Let $m_x$ be the size of the \emph{smallest} part. By the definition of the canonical affine subspace and the choice of the $a_i$, for any linear query $q \in \post_v$ we have $q(a_i) = 0$ for all $i \in [k]$. Then $x+a_1, \ldots, x+a_k$ will follow the same path in the subprogram starting at $v$. Hence, for every $x \in V$ it holds that $g(x+a_1) = \cdots = g(x+a_k)$. This implies that at least $m_x$ inputs of the form $x+a_i$ are wrong and the total number of wrong inputs passing through $v$ is at least \[ m \coloneqq \sum_{x \in V} m_x. \] Now consider the following sum: \[ E \coloneqq \sum_{\substack{x \in V \\ 1\le i<j \le k}} |f(x+a_i) - f(x+a_j)|. \] We apply double counting to this quantity to obtain the result. On the one hand, by the definitions of $m_x$ and $m$, we have \begin{equation*} E = \sum_{x \in V} m_x \cdot (k - m_x) = k m - \sum_{x \in V} m_x^2. \end{equation*} By the Cauchy--Schwarz inequality, $\sum_{x \in V} m_x^2 \ge {\left(\sum_{x \in V} m_x\right)}^2 / \card{V} = m^2/\card{V}$.
Thus, \begin{equation}\label{eq:bad_inputs_upper_bound} E \le km-m^2/\card{V}. \end{equation} On the other hand, $E$ can be rewritten as follows. \begin{equation*} \begin{aligned} E &= \sum_{\substack{x \in V \\ 1\le i<j \le k}} \frac{1}{4} {\left({(-1)}^{f(x+a_i)} - {(-1)}^{f(x+a_j)}\right)}^2 \\ &= \frac{1}{4} \sum_{1 \le i < j \le k} \left( 2\card{V} - 2\sum_{x\in V} {(-1)}^{f(x+a_i)} \cdot {(-1)}^{f(x+a_j)}\right). \end{aligned} \end{equation*} Applying~\eqref{eq:extractor_assumption}, we obtain the following lower bound on $E$. \begin{equation}\label{eq:bad_inputs_lower_bound} E \ge \frac{1}{2} \binom{k}{2} \card{V} (1-\epsilon). \end{equation} Combining~\eqref{eq:bad_inputs_upper_bound} and~\eqref{eq:bad_inputs_lower_bound}, we get \[ km-m^2/\card{V} \ge \frac{1}{2} \binom{k}{2} \card{V} (1-\epsilon). \] This can be written as \begin{align*} {\left(m-\frac{k\card{V}}{2}\right)}^2 &\le \frac{1}{4} k^2 \card{V}^2 - \frac{1-\epsilon}{2}\binom{k}{2}\card{V}^2 \\ &= \frac{k^2 \card{V}^2}{4} \left(1 - (1-\epsilon)\left(1-\frac{1}{k}\right)\right) \\ &\le \frac{k^2 \card{V}^2}{4} \left(\epsilon + \frac{1}{k}\right). \end{align*} Thus, \[ m \ge \frac{k\card{V}}{2}\left(1 - \sqrt{\epsilon + \frac{1}{k}}\right) = \frac{k\cdot 2^d}{2}\left(1 - \sqrt{\epsilon + \frac{1}{k}}\right). \] \end{claimproof} Let $k(v)$ denote the number of paths that meet at $v$ and define $w(k)$ as \[ w(k) \coloneqq \frac{k 2^d}{2}\left(1-\sqrt{\epsilon+\frac{1}{k}}\right). \] Then by \cref{clm:avg_case_1} the total number of wrong inputs that pass through $A$ is at least $\sum_{v\in A} w(k(v)) = \sum_{v\in A} \frac{k(v) 2^d}{2}\left(1-\sqrt{\epsilon+\frac{1}{k(v)}}\right)$. Since all paths in $\mathcal{P}'$ have length at least $n-d$, $\sum_{v \in A} k(v) = 2^{n-d}$.
The function $w$ is convex, hence by Jensen's inequality, the total number of bad inputs passing through $A$ is at least \begin{align*} \sum_{v \in A} w(k(v)) \ge \card{A} \cdot w\left(\frac{\sum_{v \in A} k(v)}{\card{A}}\right) = \frac{1}{2} 2^n \left(1-\sqrt{\epsilon + \frac{\card{A}}{2^{n-d}}}\right). \end{align*} Since $\card{A} \le 2s \le \epsilon 2^{n-d}$, this expression is at least \[ \frac{1-\sqrt{2\epsilon}}{2} 2^n. \] \end{proof} Plugging in the function of \Cref{thm:construction} we get the following corollary. \begin{corollary} Let $f:{\{0,1\}}^{\frac{n}{3}}\times {\{0,1\}}^{\frac{n}{3}} \times {\{0,1\}}^{\frac{n}{3}} \rightarrow {\{0,1\}}$ be defined by $f(x,y,z) = \Tr(\phi(x) \cdot \phi(y) \cdot \phi(z))$. Then for every $g:{\{0,1\}}^n \rightarrow {\{0,1\}}$ computed by a strongly read-once{} linear BP of size at most $2^{\frac{n}{3} - o(n)}$, $\dist(f, g) \ge \frac{1}{2} - 2^{-o(n)}$. \end{corollary} \section{\texorpdfstring{Weakly read-once{} BPs and $\reslin$}{Weakly read-once{} BPs and Res[+]}} In this section we prove an analogue of the correspondence between read-once BPs and regular resolution for $\reslin$ and weakly read-once{} BPs. The proof is a simple extension of standard arguments. \begin{theorem}\label{thm:res-bp}\leavevmode \begin{enumerate} \item Every $\reslin$ refutation of an unsatisfiable CNF $F$ can be translated into a linear BP solving the corresponding search problem without increasing its size. \item Every weakly read-once{} BP of size $s$ solving the search problem for CNF $F = C_1 \wedge \ldots \wedge C_m$ in $n$ variables can be translated into a regular $\reslin$ refutation of $F$ of size $O(ns)$. \end{enumerate} \end{theorem} \begin{proof} \noindent1. Consider an application of the resolution rule in the proof DAG $G$. Suppose that it is applied to clauses $C_0 \lor (f = 0)$ and $C_1 \lor (f = 1)$. Then we label the outgoing edges with $f=1$ and $f=0$ respectively. 
We leave the edges corresponding to the weakening rule unlabeled. Let $u$ be a vertex in $G$ and $C_u$ the clause it is labeled with. It can be shown by induction on the depth of $u$ that for every path to $u$, the linear system obtained from the equations written on the edges on this path implies $\neg C_u$. The source contains the empty clause, hence the base case holds. For the inductive step, consider any path leading to $u$ and let $v$ be the parent of $u$ on this path. Consider the case when $v$ corresponds to an application of the resolution rule and let $w$ be its other child. Let $C_0 \lor (f=b)$, $C_1 \lor (f=b+1)$, and $C_0 \lor C_1$ be the labels of $u$, $w$, and $v$ respectively, where $b \in \{0,1\}$. By the induction hypothesis, every path to $v$ implies $\neg (C_0 \lor C_1)$. In particular, it implies $\neg C_0$. By construction, the edge $(v, u)$ is labeled with $f=b+1$. Then every path to $u$ going through $v$ implies $\neg C_0 \land (f=b+1) = \neg (C_0 \lor (f = b))$, which is the negation of the label of $u$. Now consider the case when $v$ corresponds to an application of the weakening rule. Let $C$ and $D$ be the labels of $u$ and $v$. Every path to $v$ implies $\neg D$ by the induction hypothesis and $\neg D \vDash \neg C$. Thus, every path to $u$ through $v$ implies $\neg C$. In particular, every path to the sinks of $G$ falsifies some clause of $F$. To obtain the weakly read-once linear BP, we remove labels at the inner nodes and contract all unlabeled edges. \noindent2. A linear clause $C = \bigvee_{i=1}^k (f_i = a_i)$ can be viewed as the negation of a linear system $\neg C = \bigwedge_{i=1}^k (f_i = a_i + 1)$. We first convert $P$ into a full BP of size $O(ns)$ using \cref{lem:full_bp}. Inductively, to every node $v$ we associate a linear clause $C_v$ such that: \begin{enumerate} \item Every assignment reaching $v$ falsifies $C_v$. \item If $\neg C_v$ represents a linear system $Bx = b$, then the row space of $B$ is $\pre(v)$.
\end{enumerate} For the base case, with each leaf $v$ we associate the clause $C_v$ it is labeled with. The first condition holds since $P$ solves the search problem. To see the second property, note that any path reaching $v$ can be expressed as a linear system on a basis for $\pre(v)$ which forces every literal in $C_v$. This implies that single variables in $C_v$ are in $\pre(v)$. For the inductive step, consider a node $v$, which queries $q$ with outgoing neighbors $u$ and $w$, in the directions $q=0$ and $q=1$ respectively. Observe that $\neg C_u \not \models q(x) = 1$ and $\neg C_w \not \models q(x) = 0$. Thus, there are only two cases to consider: \begin{enumerate} \item $\neg C_u \not \models q(x)=0$ or $\neg C_w \not \models q(x)=1$, \item $\neg C_u \models q(x)=0$ and $\neg C_w \models q(x)=1$. \end{enumerate} In the first case, we simply let $C_v$ be $C_u$ or $C_w$, depending on which condition holds. For the second case, let $B = \{\beta_1, \ldots, \beta_t\}$ be a basis of $\pre(v)$. Fullness implies $\pre(u) = \pre(w) = \pre(v) + \Span(q)$. Applying the inductive hypothesis, we can write $\neg C_u = (q(x)=0) \wedge (B_u x = b_u)$ and $\neg C_w = (q(x) = 1) \wedge (B_w x = b_w)$, where $B_u$ and $B_w$ are matrices with rows in $\beta_1, \ldots, \beta_t$ and $b_u$ and $b_w$ are some vectors. To write $C_u$ and $C_w$ in these forms, we might need to change the basis, which we can do by applying the weakening rule. We claim that setting $C_v$ so that $\neg C_v$ can be written as $B_u x= b_u \wedge B_w x = b_w$ satisfies the requirements. Consider any path to $v$. Such a path can be described by a system $Rx = b$ where rows in $R$ are from $B$. Since every such path can be extended to both $u$ and $w$, it follows that $B_u x = b_u \vDash Rx=b$ and $B_w x = b_w \vDash Rx = b$. This means that $B_u x = b_u \wedge B_w x = b_w$ is consistent and thus the derivation of $C_v$ from $C_u$ and $C_w$ (possibly changing the basis) is a valid $\reslin$ step. 
It is easy to see that conditions 1 and 2 hold for $C_v$. Since for every $v$ we create at most $2$ extra clauses, the total size of the proof is at most $O(ns)$. \end{proof} \section{Conclusion} Several problems are immediately suggested by our work: \begin{itemize} \item \emph{Explicit constructions.} Give an explicit construction of directional affine extractors (or dispersers) for smaller dimension $d$, ideally $d = o(n)$. \item \emph{BP lower bounds.} Prove worst-case and average-case hardness results for the weakly read-once{} BPs. \item \emph{Proof complexity.} Prove a read-once{} linear BP lower bound for a search problem, that is, for some unsatisfiable CNF $F = C_1 \wedge \ldots \wedge C_m$, show that any read-once{} linear BP with leaves labeled by the $C_i$'s solving the corresponding search problem must be large. \end{itemize} \end{document}
\begin{document} \title[Euler matrices and their algebraic properties]{Euler matrices and their algebraic properties revisited} \author[Y. Quintana]{Yamilet Quintana$^{(1)}$} \address{Departamento de Matem\'aticas Puras y Aplicadas, Edificio Matem\'aticas y Sistemas (MYS), Apartado Postal: 89000, Caracas 1080 A, Universidad Sim\'on Bol\'{\i}var, Venezuela} \email{[email protected]} \thanks{$(1)\,\,\,$Partially supported by the grants Impacto Caribe (IC-002627-2015) from Universidad del Atl\'antico, Colombia, and DID-USB (S1-IC-CB-003-16) from Decanato de Investigaci\'on y Desarrollo. Universidad Sim\'on Bol\'{\i}var, Venezuela.} \author[W. Ram\'{\i}rez]{William Ram\'{\i}rez$^{(2)}$} \address{Departamento de Ciencias B\'asicas, Universidad de la Costa - CUC, Barranquilla, Colombia.} \email{[email protected]} \thanks{$(2)\,\,\,$Partially supported by the grant Impacto Caribe (IC-002627-2015) from Universidad del Atl\'antico, Colombia.} \author[A. Urieles]{Alejandro Urieles$^{(2)}$} \address{Programa de Matem\'aticas, Universidad del Atl\'antico, Km 7 V\'{i}a Pto. Colombia, Barranquilla, Colombia.} \email{[email protected]} \subjclass[2010]{11B68, 11B83, 11B39, 05A19.} \keywords{Euler polynomials, Euler matrix, generalized Euler matrix, generalized Pascal matrix, Fibonacci matrix, Lucas matrix.} \begin{abstract} This paper is concerned with the generalized Euler polynomial matrix $\mathrsfs{E}^{(\alpha)}(x)$ and the Euler matrix $\mathrsfs{E}$. Taking into account some properties of Euler polynomials and numbers, we deduce product formulae for $\mathrsfs{E}^{(\alpha)}(x)$ and determine the inverse matrix of $\mathrsfs{E}$. We establish some explicit expressions for the Euler polynomial matrix $\mathrsfs{E}(x)$, which involve the generalized Pascal, Fibonacci and Lucas matrices, respectively. From these formulae we get some new interesting identities involving Fibonacci and Lucas numbers.
Also, we provide some factorizations of the Euler polynomial matrix in terms of Stirling matrices, as well as a connection between the shifted Euler matrices and Vandermonde matrices. \end{abstract} \maketitle \section{Introduction} \label{intro} The classical Euler polynomials $E_{n}(x)$ and the generalized Euler polynomials $E_{n}^{(\alpha)}(x)$ of (real or complex) order $\alpha$, are usually defined as follows (see, for details, \cite{AIK2014,N,SCh2012,SM}): \begin{equation} \label{euler1} \displaystyle\left(\frac{2}{e^{z}+1}\right)^{\alpha}e^{xz} =\displaystyle\sum\limits_{n=0}^{\infty} E_{n}^{(\alpha)}(x)\frac{z^n}{n!}, \quad |z|<\pi, \quad 1^{\alpha}:=1, \end{equation} and \begin{equation} \label{euler2} E_{n}(x):= E_{n}^{(1)}(x), \quad n\in \mathbb{N}_{0}, \end{equation} where $ \mathbb{N}_{0}:= \mathbb{N}\cup\{0\}$ and $\mathbb{N}=\{1,2,3,\ldots\}$. The numbers $E_{n}^{(\alpha)}:= E_{n}^{(\alpha)}(0)$ are called generalized Euler numbers of order $\alpha$, $n\in \mathbb{N}_{0}$. It is well-known that the classical Euler numbers are defined by the generating function \begin{equation} \label{euler3} \frac{2}{e^{z}+e^{-z}}=\sum_{n=0}^{\infty} \varepsilon_{n}\frac{z^{n}}{n!}. \end{equation} The sequence $\{\varepsilon_{n}\}_{n\geq 0}$ counts the number of alternating $n$-permutations. Let us recall that a permutation $\sigma$ of a set of $n$ elements (or $n$-permutation) is said to be alternating if and only if the $n-1$ differences $\sigma(i+1)-\sigma(i)$ for $i=1,2,\ldots, n-1$ have alternating signs (cf. \cite[p. 258]{C1974}). From \eqref{euler2} and \eqref{euler3} it is easy to check that the connection between the classical Euler numbers and the Euler polynomials is given by the formula \begin{equation} \label{euler4} \varepsilon_{n}=2^{n}E_{n}\left(\frac{1}{2}\right), \quad n\in\mathbb{N}_{0}. \end{equation} So, the numbers $E_{n}:=E_{n}(0)$ are also known in the literature as Euler numbers (cf., e.g., \cite{LS2005,SCh2012}).
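Identity \eqref{euler4} is easy to check numerically. The sketch below (illustrative only, in exact rational arithmetic) computes the classical Euler polynomials from the recurrence $E_{n}(x)=x^{n}-\frac{1}{2}\sum_{k=0}^{n-1}\binom{n}{k}E_{k}(x)$, which follows from the identities $E_{n}(x+1)+E_{n}(x)=2x^{n}$ and $E_{n}(x+1)=\sum_{k=0}^{n}\binom{n}{k}E_{k}(x)$, and then recovers the first classical Euler numbers $\varepsilon_{n}$:

```python
# Illustrative check of eps_n = 2**n * E_n(1/2), using exact rationals.
from fractions import Fraction
from math import comb

def euler_polys(n_max):
    # polys[n] holds the coefficients of E_n(x); index = power of x.
    polys = [[Fraction(1)]]
    for n in range(1, n_max + 1):
        coeffs = [Fraction(0)] * (n + 1)
        coeffs[n] = Fraction(1)             # leading term x^n
        for k in range(n):                  # minus (1/2) * C(n,k) * E_k(x)
            for p, c in enumerate(polys[k]):
                coeffs[p] -= Fraction(comb(n, k), 2) * c
        polys.append(coeffs)
    return polys

def eval_poly(coeffs, x):
    return sum(c * x**p for p, c in enumerate(coeffs))

polys = euler_polys(6)
eps = [int(2**n * eval_poly(polys[n], Fraction(1, 2))) for n in range(7)]
print(eps)  # [1, 0, -1, 0, 5, 0, -61]
```

The output matches the coefficients of the expansion of \eqref{euler3}, $\operatorname{sech} z = 1 - \tfrac{z^{2}}{2!} + 5\tfrac{z^{4}}{4!} - 61\tfrac{z^{6}}{6!} + \cdots$.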
The first six generalized Euler polynomials are $$\begin{aligned} E_{0}^{(\alpha)}(x)=& \,1,\quad E_{1}^{(\alpha)}(x)=x-\frac{\alpha}{2},\quad E_{2}^{(\alpha)}(x)= x^{2}-\alpha x+\frac{\alpha(\alpha-1)}{4},\\ E_{3}^{(\alpha)}(x)=& \, x^{3}-\frac{3\alpha}{2}x^{2}+\frac{3\alpha(\alpha-1)}{4}x-\frac{\alpha^{2}(\alpha-3)}{8}, \\ E_{4}^{(\alpha)}(x)=&\, x^{4}-2\alpha x^{3}+\frac{3\alpha(\alpha-1)}{2}x^{2}-\frac{\alpha^{2}(\alpha-3)}{2}x +\frac{\alpha(\alpha^{3}-6\alpha^{2}+3\alpha+2)}{16},\\ E_{5}^{(\alpha)}(x)=& \, x^{5}-\frac{5\alpha}{2}x^{4}+\frac{5\alpha(\alpha-1)}{2}x^{3}-\frac{5\alpha^{2}(\alpha-3)}{4}x^{2} +\frac{5\alpha(\alpha-1)(\alpha^{2}-5\alpha-2)}{16}x \\ &-\frac{\alpha^{2}(\alpha^{3}-10\alpha^{2}+15\alpha+10)}{32}. \end{aligned}$$ Recent and interesting works dealing with these polynomials, Appell and Apostol type polynomials, their properties and applications in several areas such as combinatorics, number theory, numerical analysis and partial differential equations, can be found in the current literature on this subject. For broad information on the older literature and new research trends concerning these classes of polynomials, we refer the interested reader to \cite{C1974,HAS2016,HASA2015,HQU2015,LS2005,N,PS2013,QRU,Rio68,SBR2018,SCh2012,SKS2017,SMR2018,SOK2013,SOY2014,SP2003}. From the generating relation \eqref{euler1}, it is fairly straightforward to deduce the addition formula: \begin{equation} \label{euler6} E_{n}^{(\alpha+\beta)}(x+y)= \sum_{k=0}^{n}\binom{n}{k}E_{k}^{(\alpha)}(x)E_{n-k}^{(\beta)}(y). \end{equation} And, it follows also that \begin{equation} \label{euler11} E_{n}^{(\alpha)}(x+1)+ E_{n}^{(\alpha)}(x) =2 E_{n}^{(\alpha-1)}(x). \end{equation} Since $E_{n}^{(0)}(x)=x^{n}$, making the substitution $\beta=0$ into \eqref{euler6} and interchanging $x$ and $y$, we get \begin{equation} \label{euler7} E_{n}^{(\alpha)}(x+y)= \sum_{k=0}^{n}\binom{n}{k}E_{k}^{(\alpha)}(y)x^{n-k}.
\end{equation} And, as an immediate consequence, we have \begin{eqnarray} \label{euler8} E_{n}(x+y)&=& \sum_{k=0}^{n}\binom{n}{k}E_{k}(y)x^{n-k},\\ \label{euler9} E_{n}(x)&=&\sum_{k=0}^{n}\binom{n}{k}E_{k}\,x^{n-k}. \end{eqnarray} Using \eqref{euler4}, \eqref{euler8} and the well-known relation $E_{n}(1-x)=(-1)^{n}E_{n}(x)$, it is possible to deduce the following connection formula between $E_{n}$ and the classical Euler numbers $\varepsilon_{n}$ (note that $E_{0}=1$): \begin{equation} \label{euler5} E_{n}=\left\{ \begin{array}{l}-\frac{1}{2^{n}}\sum_{k=0}^{n}\binom{n}{k}\varepsilon_{n-k},\quad \mbox{ if } n \mbox{ is odd},\\ \\ 0, \quad \mbox{ if } n \mbox{ is even and } n\geq 2. \end{array}\right. \end{equation} Inspired by the article \cite{ZW2006} in which the authors introduce the generalized Bernoulli matrix and establish some algebraic properties of the Bernoulli polynomial and Bernoulli matrices, in the present article we focus our attention on the algebraic and differential properties of the generalized Euler matrix. It is worthwhile to mention that the authors of \cite{ZW2006} point out that their proposed methodology can be used for obtaining similar properties in the setting of the generalized Euler matrix. However, they do not actually write out a proof of this statement. The outline of the paper is as follows. Section \ref{sec:1} has an auxiliary character and provides some background as well as some results which will be used throughout the paper. Making use of some of the identities above, we introduce the generalized Euler matrix in Section \ref{sec:2}. Then, we study some interesting particular cases of this matrix, namely, the Euler polynomial matrix, the Euler matrix and the specialized Euler matrix.
The main results of this section are Theorems \ref{teogeneuler1}, \ref{teogeneuler2}, \ref{teogeneuler3} and \ref{teogeneuler4}, since these theorems contain the information concerning the product formula for the Euler matrix, an explicit expression for the inverse matrix of the specialized Euler matrix, the factorization of the Euler matrix via the generalized Pascal matrix of first kind, and a useful factorization for the inverse matrix of a particular ``horizontal sliding'' of the Euler polynomial matrix, respectively. Also, some consequences of these results are shown (see, for instance, Corollaries \ref{corgeneuler1}, \ref{corgeneuler2}, \ref{corgeneuler3} and \ref{corgeneuler4}). Section \ref{sec:3} shows several factorizations of the generalized Euler matrix in terms of the Fibonacci and Lucas matrices, respectively (cf. Theorems \ref{teogeneuler5} and \ref{teogeneuler6}). Also, some new identities involving Fibonacci and Lucas numbers are given in this section. Finally, in Section \ref{sec:5} we provide some factorizations of the Euler polynomial matrix in terms of Stirling matrices, and we present the shifted Euler matrices and their connection with Vandermonde matrices. \section{Background and previous results} \label{sec:1} Throughout this paper, all matrices are in $M_{n+1}(\mathbb{R})$, the set of all $(n+1)$-square matrices over the real field. Also, for $i,j$ any nonnegative integers we adopt the following convention $$\binom{i}{j}=0, \mbox{ whenever } j>i.$$ In this section we recall the definitions of the generalized Pascal matrix, the Fibonacci matrix and the Lucas matrix, as well as some properties of these matrices. \begin{definition} \label{defi1} Let $x$ be any nonzero real number.
The generalized Pascal matrix of first kind $P[x]$ is an $(n+1)\times(n+1)$ matrix whose entries are given by (see \cite{CV1993,Z1997}): \begin{equation} \label{pascal1} p_{i,j}(x)=\left\{\begin{array}{l} \binom{i}{j}x^{i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} \end{definition} In \cite{CV1993,Z1997,Z1998} some properties of the generalized Pascal matrix of first kind are shown, for example, its matrix factorization by special summation matrices, its associated differential equation and its bivariate extensions. The following proposition summarizes some algebraic and differential properties of $P[x]$. \begin{prop} Let $P[x]$ be the generalized Pascal matrix of first kind and order $n+1$. Then the following statements hold. \begin{enumerate} \item[(a)] Special value. If the convention $0^{0}=1$ is adopted, then it is possible to define \begin{equation} \label{pascal2} P[0]:= I_{n+1}={\rm diag}(1,1,\ldots,1), \end{equation} where $I_{n+1}$ denotes the identity matrix of order $n+1$. \item[(b)] $P[x]$ is an invertible matrix and its inverse is given by \begin{equation} \label{pascal3} P^{-1}[x]:=\left(P[x]\right)^{-1}= P[-x]. \end{equation} \item[(c)] \cite[Theorem 2]{CV1993} Addition theorem of the argument. For $x,y\in \mathbb{R}$ we have \begin{equation} \label{pascal4} P[x+y]= P[x]P[y]. \end{equation} \item[(d)] \cite[Theorem 5]{CV1993} Differential relation (Appell type polynomial entries).
$P[x]$ satisfies the following differential equation \begin{equation} \label{pascal5} D_{x}P[x]= \mathfrak{L} P[x]= P[x] \mathfrak{L}, \end{equation} where $D_{x}P[x]$ is the matrix resulting from taking the derivative with respect to $x$ of each entry of $P[x]$ and the entries of the $(n+1)\times(n+1)$ matrix $\mathfrak{L}$ are given by $$\begin{aligned} {\rm l}_{i,j}=&\left\{\begin{array}{l} p_{i,j}'(0), \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}, \end{array}\right.\\ =&\left\{\begin{array}{l} j+1, \quad i=j+1,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{aligned}$$ \item[(e)] (\cite[Theorem 1]{Z1997}) The matrix $P[x]$ can be factorized as follows. \begin{equation} \label{pascal6} P[x]=G_{n}[x]G_{n-1}[x]\cdots G_{1}[x], \end{equation} where $G_{k}[x]$ is the $(n+1)\times(n+1)$ summation matrix given by $$\begin{aligned} G_{k}[x]=& \left\{\begin{array}{l} \begin{bmatrix} I_{n-k}&0\\ 0&S_{k}[x] \end{bmatrix}, \quad k=1,\ldots, n-1,\\ \\ S_{n}[x], \quad k=n, \end{array}\right. \end{aligned}$$ where $S_{k}[x]$ is the $(k+1)\times(k+1)$ matrix whose entries $S_{k}(x;i,j)$ are given by $$\begin{aligned} S_{k}(x;i,j)=&\left\{\begin{array}{l} x^{i-j}, \quad j\leq i,\\ \\ 0, \quad j>i, \end{array}\right. \quad (0\leq i,j\leq k). \end{aligned}$$ \end{enumerate} \end{prop} Other structured matrices needed in what follows are the Fibonacci and Lucas matrices. Below, we recall the definition of each of them. \begin{definition} \label{defi2} Let $\{F_{n}\}_{n\geq 1}$ be the Fibonacci sequence, i.e., $F_{n}=F_{n-1}+F_{n-2}$ for $n\geq 2$ with initial conditions $F_{0}=0$ and $F_{1}=1$. The Fibonacci matrix $\mathrsfs{F}$ is an $(n+1)\times(n+1)$ matrix whose entries are given by \cite{LKL}: \begin{equation} \label{fibo2} f_{i,j}=\left\{\begin{array}{l} F_{i-j+1}, \quad i-j+1\geq 0,\\ \\ 0, \quad i-j+1< 0. \end{array}\right.
\end{equation} \end{definition} Let $\mathrsfs{F}^{-1}$ be the inverse of $\mathrsfs{F}$ and denote by $\tilde{f}_{i,j}$ the entries of $\mathrsfs{F}^{-1}$. In \cite{LKL} the authors obtained the following explicit expression for $\mathrsfs{F}^{-1}$. \begin{equation} \label{fibo1} \begin{aligned} \tilde{f}_{i,j}=&\left\{\begin{array}{l} 1, \quad i=j,\\ \\ -1, \quad i=j+1, j+2,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{aligned} \end{equation} \begin{definition} \label{defi3} Let $\{L_{n}\}_{n\geq 1}$ be the Lucas sequence, i.e., $L_{n+2}=L_{n+1}+L_{n}$ for $n\geq 1$ with initial conditions $L_{1}=1$ and $L_{2}=3$. The Lucas matrix $\mathrsfs{L}$ is an $(n+1)\times(n+1)$ matrix whose entries are given by \cite{ZZ2007}: \begin{equation} \label{lucas} l_{i,j}=\left\{\begin{array}{l} L_{i-j+1}, \quad i-j\geq 0,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} \end{definition} Let $\mathrsfs{L}^{-1}$ be the inverse of $\mathrsfs{L}$ and denote by $\tilde{l}_{i,j}$ the entries of $\mathrsfs{L}^{-1}$. In \cite[Theorem 2.2]{ZZ2007} the authors obtained the following explicit expression for $\mathrsfs{L}^{-1}$. \begin{equation} \label{lucas1} \begin{aligned} \tilde{l}_{i,j}=&\left\{\begin{array}{l} 1, \quad i=j,\\ \\ -3, \quad i=j+1, \\ \\ 5(-1)^{i-j}2^{i-j-2}, \quad i\geq j+2,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{aligned} \end{equation} For $x$ any nonzero real number, the following relation between the matrices $P[x]$ and $\mathrsfs{L}$ was stated and proved in \cite[Theorem 3.1]{ZZ2007}. 
\begin{equation} \label{lucas2} P[x]=\mathrsfs{L} \mathrsfs{G}[x] =\mathrsfs{H}[x]\mathrsfs{L}, \end{equation} where the entries of the $(n+1)\times(n+1)$ matrices $\mathrsfs{G}[x]$ and $\mathrsfs{H}[x]$ are given by $$\begin{aligned} g_{i,j}(x)=& x^{-j-1}\left[x^{i+1} \binom{i}{j} -3x^{i}\binom{i-1}{j}+ 5(-1)^{i+1} 2^{i-1}m_{i-1, j+1}\left(\frac{x}{2}\right)\right],\\ \\ h_{i,j}(x)=&x^{-j-1}\left[x^{i+1} \binom{i}{j} -3x^{i}\binom{i}{j+1}+ (-1)^{j+1}\frac{5x^{i+j+2}}{2^{j+3}} n_{i+1, j+3}\left(\frac{2}{x}\right)\right], \end{aligned}$$ respectively, with $$\begin{aligned} m_{i,j}(x):=& \left\{\begin{array}{l} \sum_{k=j}^{i}(-1)^{k}\binom{k}{j}x^{k}, \quad i\geq j,\\ \\ 0, \quad i<j, \end{array}\right. \end{aligned} \quad \mbox{ and } \quad \begin{aligned} n_{i,j}(x):=& \left\{\begin{array}{l} \sum_{k=j}^{i}(-1)^{k}\binom{i}{k}x^{k}, \quad i\geq j,\\ \\ 0, \quad i<j. \end{array}\right. \end{aligned}$$ \section{The generalized Euler matrix} \label{sec:2} \begin{definition} \label{def3} The generalized $(n+1)\times(n+1)$ Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ is defined by \begin{equation} \label{euler10} E^{(\alpha)}_{i,j}(x)= \left\{\begin{array}{l} \binom{i}{j} E^{(\alpha)}_{i-j}(x), \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} The matrices $\mathrsfs{E}(x):= \mathrsfs{E}^{(1)}(x)$ and $\mathrsfs{E}:= \mathrsfs{E}(0)$ are called the Euler polynomial matrix and the Euler matrix, respectively. In the particular case $x=\frac{1}{2}$, we call the matrix $\mathbb{E}:=\mathrsfs{E}\left(\frac{1}{2}\right)$ the specialized Euler matrix. \end{definition} It is clear that \eqref{euler11} yields the following matrix identity: \begin{equation} \label{geneuler5} \mathrsfs{E}^{(\alpha)}(x+1)+ \mathrsfs{E}^{(\alpha)}(x)= 2\mathrsfs{E}^{(\alpha-1)}(x). \end{equation} Since $\mathrsfs{E}^{(0)}(x)=P[x]$, replacing $\alpha$ by $1$ in \eqref{geneuler5} we have \begin{equation} \label{euler20} \mathrsfs{E}(x+1)+ \mathrsfs{E}(x)= 2P[x].
\end{equation} Then, putting $x=0$ in \eqref{euler20} and taking into account \eqref{pascal2}, we get $$\mathrsfs{E}(1)+ \mathrsfs{E}= 2I_{n+1}.$$ Analogously, putting $x=-1$ in \eqref{euler20}, $$\mathrsfs{E}+ \mathrsfs{E}(-1)= 2P\left[-1\right].$$ From \eqref{euler4} it follows that the entries of the specialized Euler matrix $\mathbb{E}$ are given by \begin{equation} \label{euler12} e_{i,j}= \left\{\begin{array}{l} \binom{i}{j} 2^{j-i}\varepsilon_{i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} From \eqref{euler5} it follows that the entries of the Euler matrix $\mathrsfs{E}$ are given by \begin{equation} \label{euler18} E_{i,j}= \left\{\begin{array}{l} \binom{i}{j} E_{i-j}, \quad i>j \mbox{ and } i-j \mbox{ odd},\\ \\ 1, \quad i=j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} The next result is an immediate consequence of Definition \ref{def3} and the addition formula \eqref{euler6}. \begin{teo} \label{teogeneuler1} The generalized Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ satisfies the following product formula. \begin{equation} \label{geneuler1} \mathrsfs{E}^{(\alpha+\beta)}(x+y)= \mathrsfs{E}^{(\alpha)}(x)\,\mathrsfs{E}^{(\beta)}(y)= \mathrsfs{E}^{(\beta)}(x)\,\mathrsfs{E}^{(\alpha)}(y)= \mathrsfs{E}^{(\alpha)}(y)\, \mathrsfs{E}^{(\beta)}(x). \end{equation} \end{teo} \begin{proof} We proceed as in the proof of \cite[Theorem 2.1]{ZW2006}, making the corresponding modifications.
Let $A_{i,j}^{(\alpha,\beta)}(x,y)$ be the $(i,j)$-th entry of the matrix product $ \mathrsfs{E}^{(\alpha)}(x)\,\mathrsfs{E}^{(\beta)}(y)$, then by the addition formula \eqref{euler6} we have $$\begin{aligned} A_{i,j}^{(\alpha,\beta)}(x,y)=&\sum_{k=0}^{n}\binom{i}{k}E_{i-k}^{(\alpha)}(x)\binom{k}{j}E_{k-j}^{(\beta)}(y)\\ =&\sum_{k=j}^{i}\binom{i}{k}E_{i-k}^{(\alpha)}(x)\binom{k}{j}E_{k-j}^{(\beta)}(y)\\ =& \sum_{k=j}^{i}\binom{i}{j}\binom{i-j}{i-k}E_{i-k}^{(\alpha)}(x)E_{k-j}^{(\beta)}(y)\\ =&\binom{i}{j}\sum_{k=0}^{i-j}\binom{i-j}{k}E_{i-j-k}^{(\alpha)}(x)E_{k}^{(\beta)}(y)\\ =& \binom{i}{j}E_{i-j}^{(\alpha)}(x+y), \end{aligned}$$ which implies the first equality of \eqref{geneuler1}. The second and third equalities of \eqref{geneuler1} can be derived in a similar way. \end{proof} \begin{coro} \label{corgeneuler1} Let $(x_{1},\ldots,x_{k})\in \mathbb{R}^{k}$ and let $\alpha_{1},\ldots,\alpha_{k}$ be real or complex parameters. Then the Euler matrices $\mathrsfs{E}^{(\alpha_{j})}(x_{j})$, $j=1,\ldots, k$, satisfy the following product formula. \begin{equation} \label{geneuler2} \mathrsfs{E}^{(\alpha_{1}+\alpha_{2}+\cdots+\alpha_{k})}(x_{1}+x_{2}+\cdots+x_{k})= \mathrsfs{E}^{(\alpha_{1})}(x_{1})\,\mathrsfs{E}^{(\alpha_{2})}(x_{2})\,\cdots\, \mathrsfs{E}^{(\alpha_{k})}(x_{k}). \end{equation} \end{coro} \begin{proof} The application of induction on $k$ gives the desired result. \end{proof} If we take $x=x_{1}=x_{2}=\cdots=x_{k}$ and $\alpha=\alpha_{1}=\alpha_{2}=\cdots=\alpha_{k}$, then we obtain the following simple formula for the powers of the generalized Euler matrix, and consequently, for the powers of the Euler polynomial and Euler matrices. \begin{coro} \label{corgeneuler2} The generalized Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ satisfies the following identity. \begin{equation} \label{geneuler3} \left(\mathrsfs{E}^{(\alpha)}(x)\right)^{k}= \mathrsfs{E}^{(k\alpha)}(kx).
\end{equation} In particular, \begin{equation} \label{geneuler4} \begin{aligned} \left(\mathrsfs{E}(x)\right)^{k}=& \mathrsfs{E}^{(k)}(kx),\\ \mathrsfs{E}^{k}=& \mathrsfs{E}^{(k)}. \end{aligned} \end{equation} \end{coro} \begin{remark} Note that Theorem \ref{teogeneuler1} and Corollaries \ref{corgeneuler1} and \ref{corgeneuler2} are, respectively, the analogues of Theorem 2.1 and Corollaries 2.2 and 2.3 of \cite{ZW2006} in the setting of Euler matrices. \end{remark} Let $\mathrsfs{D}$ be the $(n+1)\times(n+1)$ matrix whose entries are defined by \begin{equation} \label{euler13} d_{i,j}=\left\{\begin{array}{l} (1+(-1)^{i-j})\binom{i}{j} 2^{j-i-1}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} \begin{teo} \label{teogeneuler2} The inverse matrix of the specialized Euler matrix $\mathbb{E}$ is given by $$\mathbb{E}^{-1}=\mathrsfs{D}.$$ Furthermore, $$\left[\mathrsfs{E}^{(k)}\left(\frac{k}{2}\right)\right]^{-1}= \mathrsfs{D}^{k}.$$ \end{teo} \begin{proof} Taking into account \eqref{euler4} and \eqref{euler12}, it is possible to deduce $$ \sum_{k=0}^{n}\frac{(1+(-1)^{k})}{2}\binom{n}{k} 2^{n-k}E_{n-k}\left(\frac{1}{2}\right)=\sum_{k=0}^{n}\frac{(1+(-1)^{k})}{2}\binom{n}{k}\varepsilon_{n-k}= \delta_{n,0},$$ where $\delta_{n,0}$ is the Kronecker delta (cf., e.g., \cite[pp. 107-109]{Rio68}). So, we obtain that the $(i,j)$-th entry of the matrix product $\mathrsfs{D}\mathbb{E}$ may be written as $$ \sum_{k=j}^{i} \binom{i}{k}\frac{(1+(-1)^{i-k})}{2}2^{k-i}\binom{k}{j}E_{k-j}\left(\frac{1}{2}\right) $$ $$\begin{aligned} =&\binom{i}{j}2^{j-i}\sum_{k=j}^{i}\binom{i-j}{k-j}\frac{(1+(-1)^{i-k})}{2}2^{k-j}E_{k-j}\left(\frac{1}{2}\right)\\ =&\binom{i}{j}2^{j-i}\sum_{k=0}^{i-j}\binom{i-j}{k} \frac{(1+(-1)^{i-j-k})}{2}2^{k}E_{k}\left(\frac{1}{2}\right)\\ =&\binom{i}{j}2^{j-i}\delta_{i-j,0}, \end{aligned}$$ and consequently, $\mathrsfs{D}\mathbb{E}=I_{n+1}$.
Similar arguments allow us to show that $\mathbb{E}\mathrsfs{D}=I_{n+1}$, and hence $\mathbb{E}^{-1}=\mathrsfs{D}$. Finally, from the identity $\mathbb{E}^{-1}=\mathrsfs{D}$ and \eqref{geneuler4} we see that $$\left[\mathrsfs{E}^{(k)}\left(\frac{k}{2}\right)\right]^{-1}= \left(\mathbb{E}^{k}\right)^{-1}=\left(\mathbb{E}^{-1}\right)^{k}=\mathrsfs{D}^{k}.$$ This last chain of equalities finishes the proof. \end{proof} It is worthwhile to mention that the calculation of $\mathbb{E}^{-1}$ strongly depends on the use of inverse relations derived from exponential generating functions (cf. \cite[Chap. 3, Sec. 3.4]{Rio68}). This tool can be applied in order to determine $\mathbb{E}^{-1}$, but it does not work for determining $\mathrsfs{E}^{-1}$. This fact and \eqref{euler5} suggest that the methodology proposed in \cite{ZW2006} does not suffice for finding an explicit formula for $\mathrsfs{E}^{-1}$. The next result establishes the relation between the generalized Euler matrix and the generalized Pascal matrix of first kind. \begin{teo} \label{teogeneuler3} The generalized Euler matrix $\mathrsfs{E}^{(\alpha)}(x)$ satisfies the following relation. \begin{equation} \label{euler141} \mathrsfs{E}^{(\alpha)}(x+y)= \mathrsfs{E}^{(\alpha)}(x)P[y]= P[x]\mathrsfs{E}^{(\alpha)}(y)= \mathrsfs{E}^{(\alpha)}(y)P[x]. \end{equation} In particular, \begin{equation} \label{euler14} \mathrsfs{E}(x+y)=P[x]\mathrsfs{E}(y)=P[y]\mathrsfs{E}(x), \end{equation} \begin{equation} \label{euler17} \mathrsfs{E}(x)=P[x]\mathrsfs{E}, \end{equation} \begin{equation} \label{euler15} \mathrsfs{E}\left(x+\frac{1}{2}\right)=P[x]\mathbb{E}, \end{equation} and \begin{equation} \label{euler19} \mathrsfs{E}=P\left[-\frac{1}{2}\right]\mathbb{E}.
\end{equation} \end{teo} \begin{proof} The substitution $\beta=0$ into \eqref{geneuler1} yields $$\mathrsfs{E}^{(\alpha)}(x+y)= \mathrsfs{E}^{(\alpha)}(x)\,\mathrsfs{E}^{(0)}(y)= \mathrsfs{E}^{(0)}(x)\,\mathrsfs{E}^{(\alpha)}(y)= \mathrsfs{E}^{(\alpha)}(y)\, \mathrsfs{E}^{(0)}(x).$$ Since $\mathrsfs{E}^{(0)}(x)=P[x]$, we obtain $$\mathrsfs{E}^{(\alpha)}(x+y)=P[x]\mathrsfs{E}^{(\alpha)}(y).$$ A similar argument allows us to show that $\mathrsfs{E}^{(\alpha)}(x+y)=\mathrsfs{E}^{(\alpha)}(x)P[y]$ and \\$\mathrsfs{E}^{(\alpha)}(x+y)= \mathrsfs{E}^{(\alpha)}(y)P[x]$. Next, the substitution $\alpha=1$ into \eqref{euler141} yields \eqref{euler14}. From the substitutions $y=0$ and $y=\frac{1}{2}$ into \eqref{euler14}, we obtain the relations \eqref{euler17} and \eqref{euler15}, respectively. Finally, the substitution $x=-\frac{1}{2}$ into \eqref{euler15} completes the proof. \end{proof} \begin{remark} Note that the relation \eqref{euler14} is the analogue of \cite[Eq. (13)]{ZW2006} in the context of Euler polynomial matrices and, the counterpart of \eqref{euler17} is \cite[Eq. (14)]{ZW2006}. However, the relation \eqref{euler15} is slightly different from \cite[Eq. (14)]{ZW2006}, since it involves an Euler polynomial matrix with ``shifted argument'' and the specialized Euler matrix. More precisely, the relation \cite[Eq. (14)]{ZW2006} reads as $$\mathrsfs{B}(x)=P[x]\mathrsfs{B},$$ consequently, this relation expresses the Bernoulli polynomial matrix $\mathrsfs{B}(x)$ in terms of the matrix product between the generalized Pascal matrix of first kind $P[x]$ and the Bernoulli matrix $\mathrsfs{B}$. On the other hand, on the left hand side of \eqref{euler15} appears an Euler polynomial matrix with ``shifted argument'', and the matrix product on the right hand side of \eqref{euler15} contains the specialized Euler matrix $\mathbb{E}$. \end{remark} The following example illustrates Theorem \ref{teogeneuler3}. \begin{eje} Let us consider $n=3$.
It follows from the definition \ref{def3} that $$\mathbb{E}=\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ -\frac{1}{4} & 0 & 1 & 0\\ 0 & -\frac{3}{4} & 0 & 1 \end{bmatrix} \quad \mbox{ and }\quad \mathrsfs{E}\left(x+\frac{1}{2}\right)= \begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2-\frac{1}{4} & 2x & 1 & 0\\ x^3-\frac{3}{4}x & 3x^2-\frac{3}{4} & 3x & 1 \end{bmatrix}.$$ On the other hand, from \eqref{euler15} and a simple computation we have \begin{eqnarray*} \mathrsfs{E}\left(x+\frac{1}{2}\right)&=& \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2 & 2x & 1 & 0\\ x^3 & 3x^2 & 3x & 1 \end{bmatrix}}_{P[x]}\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ -\frac{1}{4} & 0 & 1 & 0\\ 0 & -\frac{3}{4} & 0 & 1 \end{bmatrix}}_{\mathbb{E}} =\begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2-\frac{1}{4} & 2x & 1 & 0\\ x^3-\frac{3}{4}x & 3x^2-\frac{3}{4} & 3x & 1 \end{bmatrix}. \end{eqnarray*} \end{eje} The next theorem follows by a simple computation. \begin{teo} \label{teogeneuler4} The inverse of the Euler polynomial matrix $\mathrsfs{E}\left(x+\frac{1}{2}\right)$ can be expressed as \begin{equation} \label{euler16} \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}= \mathbb{E}^{-1}P[-x]=\mathrsfs{D} P[-x]. \end{equation} In particular, \begin{equation} \label{euler21} \mathrsfs{E}^{-1}= \mathrsfs{D} P\left[\frac{1}{2}\right]. \end{equation} \end{teo} \begin{proof} Using \eqref{pascal3}, \eqref{euler15} and Theorem \ref{teogeneuler2} the relation \eqref{euler16} is deduced. The substitution $x=-\frac{1}{2}$ into \eqref{euler16} yields \eqref{euler21}. \end{proof} \begin{eje} Let us consider $n=3$. 
From Definition \ref{def3} and a standard computation we obtain \begin{eqnarray*} \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}&=&\begin{bmatrix} 1 & 0 & 0 & 0\\ x & 1 & 0 & 0\\ x^2-\frac{1}{4} & 2x & 1 & 0\\ x^3-\frac{3}{4}x & 3x^2-\frac{3}{4} & 3x & 1 \end{bmatrix}^{-1}=\begin{bmatrix} 1& 0& 0& 0\\ -x& 1& 0& 0\\ x^{2}+\frac{1}{4}& -2x& 1& 0\\ -x^{3}-\frac{3}{4}x& 3x^{2}+\frac{3}{4}& -3x& 1 \end{bmatrix}. \end{eqnarray*} On the other hand, from \eqref{euler16} we have \begin{eqnarray*} \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}&=& \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \frac{1}{4} &0 & 1 & 0\\ 0 & \frac{3}{4} & 0 & 1 \end{bmatrix}}_{\mathrsfs{D}}\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ -x & 1 & 0 & 0\\ x^2 & -2x & 1 & 0\\ -x^3 & 3x^2 & -3x & 1 \end{bmatrix}}_{P[-x]}= \begin{bmatrix} 1& 0& 0& 0\\ -x& 1& 0& 0\\ x^{2}+\frac{1}{4}& -2x& 1& 0\\ -x^{3}-\frac{3}{4}x& 3x^{2}+\frac{3}{4}& -3x& 1 \end{bmatrix}. \end{eqnarray*} Hence, when $x=-\frac{1}{2}$, we get $$\mathrsfs{E}^{-1}= \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \frac{1}{4} &0 & 1 & 0\\ 0 & \frac{3}{4} & 0 & 1 \end{bmatrix}}_{\mathrsfs{D}}\underbrace{\begin{bmatrix} 1 & 0 & 0 & 0\\ \frac{1}{2} & 1 & 0 & 0\\ \frac{1}{4} & 1 & 1 & 0\\ \frac{1}{8} & \frac{3}{4}& \frac{3}{2} & 1 \end{bmatrix}}_{P\left[\frac{1}{2}\right]}= \begin{bmatrix} 1& 0& 0& 0\\ \frac{1}{2}& 1& 0& 0\\ \frac{1}{2}& 1& 1& 0\\ \frac{1}{2}& \frac{3}{2}&\frac{3}{2}& 1 \end{bmatrix}.$$ \end{eje} At this point, the recent work \cite{IRS} deserves special mention, since it gives an explicit formula for the inverse of the $q$-Pascal matrix plus one in terms of the $q$-analogue of the Euler matrix $\mathrsfs{E}$. As a consequence of the relations \eqref{pascal6}, \eqref{lucas2}, and Theorems \ref{teogeneuler3} and \ref{teogeneuler4}, we obtain the following corollaries.
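Before the corollaries, the factorization \eqref{euler16} can also be verified numerically beyond the $n=3$ example above. The following sketch (illustrative only; the size $N=6$ and the rational point $x=1/3$ are arbitrary choices) checks, in exact arithmetic, that $\mathrsfs{D}P[-x]$ is indeed the inverse of $\mathrsfs{E}\left(x+\frac{1}{2}\right)$:

```python
# Illustrative exact-arithmetic check of (E(x+1/2))^{-1} = D P[-x].
from fractions import Fraction
from math import comb

N = 6                   # (N x N) matrices, an arbitrary illustrative size
x = Fraction(1, 3)      # arbitrary rational evaluation point

def euler_poly_value(n, t):
    # E_n(t) from the recurrence 2 E_m(t) = 2 t**m - sum_{k<m} C(m,k) E_k(t)
    vals = []
    for m in range(n + 1):
        vals.append(t**m - sum(Fraction(comb(m, k), 2) * vals[k] for k in range(m)))
    return vals[n]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

# E(x + 1/2), D from (euler13), and P[-x]
E_half = [[comb(i, j) * euler_poly_value(i - j, x + Fraction(1, 2)) if i >= j
           else Fraction(0) for j in range(N)] for i in range(N)]
D = [[Fraction((1 + (-1)**(i - j)) * comb(i, j), 2**(i - j + 1)) if i >= j
      else Fraction(0) for j in range(N)] for i in range(N)]
P_neg = [[comb(i, j) * (-x)**(i - j) if i >= j else Fraction(0)
          for j in range(N)] for i in range(N)]

identity = [[Fraction(int(i == j)) for j in range(N)] for i in range(N)]
assert matmul(E_half, matmul(D, P_neg)) == identity
print("(E(x+1/2))^{-1} = D P[-x] verified for N =", N)
```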
\begin{coro} \label{corgeneuler3} The Euler polynomial matrix $\mathrsfs{E}\left(x+\frac{1}{2}\right)$ and its inverse can be factorized by summation matrices as follows. $$\begin{aligned} \mathrsfs{E}\left(x+\frac{1}{2}\right)=& G_{n}[x]G_{n-1}[x]\cdots G_{1}[x] \mathbb{E},\\ \\ \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}=& \mathrsfs{D} G_{n}[-x]G_{n-1}[-x]\cdots G_{1}[-x]. \end{aligned}$$ In particular, $$\begin{aligned} \mathrsfs{E}=& G_{n}\left[-\frac{1}{2}\right]G_{n-1}\left[-\frac{1}{2}\right]\cdots G_{1}\left[-\frac{1}{2}\right] \mathbb{E},\\ \\ \mathrsfs{E}^{-1}=&\mathrsfs{D} G_{n}\left[\frac{1}{2}\right]G_{n-1}\left[\frac{1}{2}\right]\cdots G_{1}\left[\frac{1}{2}\right]. \end{aligned}$$ \end{coro} \begin{coro} \label{corgeneuler4} For $x$ any nonzero real number, the Euler polynomial matrix $\mathrsfs{E}\left(x+\frac{1}{2}\right)$ and its inverse can be factorized, respectively, in terms of the Lucas matrix $\mathrsfs{L}$ and its inverse as follows. $$\begin{aligned} \mathrsfs{E}\left(x+\frac{1}{2}\right)=& \mathrsfs{L} \mathrsfs{G}[x]\mathbb{E}= \mathrsfs{H}[x]\mathrsfs{L} \mathbb{E} ,\\ \\ \left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}=& \mathrsfs{D} (\mathrsfs{G}[x])^{-1} \mathrsfs{L}^{-1}= \mathrsfs{D}\mathrsfs{L}^{-1} (\mathrsfs{H}[x])^{-1} . \end{aligned}$$ In particular, $$\begin{aligned} \mathrsfs{E}=& \mathrsfs{L} \mathrsfs{G}\left[-\frac{1}{2}\right]\mathbb{E}=\mathrsfs{H}\left[-\frac{1}{2}\right]\mathrsfs{L} \mathbb{E},\\ \\ \mathrsfs{E}^{-1}=&\mathrsfs{D} \left(\mathrsfs{G}\left[-\frac{1}{2}\right]\right)^{-1} \mathrsfs{L}^{-1}= \mathrsfs{D} \mathrsfs{L}^{-1} \left(\mathrsfs{H}\left[-\frac{1}{2}\right]\right)^{-1}. \end{aligned}$$ \end{coro} We end this section by showing other identities, which can be easily deduced from the content of this paper; hence, we omit the details of their proofs.
$$\begin{aligned} D_{x}\mathrsfs{E}(x+y)=&\mathfrak{L} P[x]\mathrsfs{E}(y),\\ &\\ D_{x}\mathrsfs{E}(x)=&\mathfrak{L} P[x]\mathrsfs{E},\\ &\\ D_{x}\mathrsfs{E}\left(x+\frac{1}{2}\right)=&\mathfrak{L} P[x]\mathbb{E},\\ &\\ D_{x}\left[\mathrsfs{E}\left(x+\frac{1}{2}\right)\right]^{-1}=& \mathrsfs{D}\mathfrak{L} P[-x]. \end{aligned}$$ \section{Generalized Euler polynomial matrices via Fibonacci and Lucas matrices} \label{sec:3} For $0\leq i,j\leq n$ and $\alpha$ a real or complex number, let $\mathrsfs{M}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by (cf. \cite[Eq. (18)]{ZW2006}): \begin{equation} \label{euler28a} \tilde{m}^{(\alpha)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-\binom{i-1}{j}E^{(\alpha)}_{i-j-1}(x)-\binom{i-2}{j}E^{(\alpha)}_{i-j-2}(x). \end{equation} We denote $\mathrsfs{M}(x)=\mathrsfs{M}^{(1)}(x)$ and $\mathrsfs{M}=\mathrsfs{M}(0)$. Similarly, let $\mathrsfs{N}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by (cf. \cite[Eq. (32)]{ZW2006}): \begin{equation} \label{euler28b} \tilde{n}^{(\alpha)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-\binom{i}{j+1}E^{(\alpha)}_{i-j-1}(x)-\binom{i}{j+2}E^{(\alpha)}_{i-j-2}(x). \end{equation} We denote $\mathrsfs{N}(x)=\mathrsfs{N}^{(1)}(x)$ and $\mathrsfs{N}=\mathrsfs{N}(0)$. 
From the definitions of $\mathrsfs{M}^{(\alpha)}(x)$ and $\mathrsfs{N}^{(\alpha)}(x)$, we see that $$\begin{array}{l} \tilde{m}^{(\alpha)}_{0,0}(x)= \tilde{m}^{(\alpha)}_{1,1}(x)= \tilde{n}^{(\alpha)}_{0,0}(x)= \tilde{n}^{(\alpha)}_{1,1}(x)= E^{(\alpha)}_{0}(x)=1,\\ \\ \tilde{m}^{(\alpha)}_{0,j}(x)=\tilde{n}^{(\alpha)}_{0,j}(x)=0, \quad j\geq 1,\\ \\ \tilde{m}^{(\alpha)}_{1,0}(x)= \tilde{n}^{(\alpha)}_{1,0}(x)= E^{(\alpha)}_{1}(x)- E^{(\alpha)}_{0}(x)=x-\frac{\alpha}{2}-1,\\ \\ \tilde{m}^{(\alpha)}_{1,j}(x)=\tilde{n}^{(\alpha)}_{1,j}(x)=0, \quad j\geq 2,\\ \\ \tilde{m}^{(\alpha)}_{i,0}(x)= \tilde{n}^{(\alpha)}_{i,0}(x) = E^{(\alpha)}_{i}(x)-E^{(\alpha)}_{i-1}(x)-E^{(\alpha)}_{i-2}(x), \quad i\geq 2. \end{array}$$ For $0\leq i,j\leq n$ and $\alpha$ a real or complex number, let $\mathrsfs{L}_{1}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by \begin{equation} \label{euler28c} \hat{l}^{(\alpha,1)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-3\binom{i-j}{j}E^{(\alpha)}_{i-j-1}(x)+ 5\sum_{k=j}^{i-2}(-1)^{i-k}2^{i-k-2}\binom{k}{j}E^{(\alpha)}_{k-j}(x). \end{equation} We denote $\mathrsfs{L}_{1}(x)=\mathrsfs{L}_{1}^{(1)}(x)$ and $\mathrsfs{L}_{1}=\mathrsfs{L}_{1}(0)$. Similarly, let $\mathrsfs{L}_{2}^{(\alpha)}(x)$ be the $(n+1)\times(n+1)$ matrix whose entries are given by \begin{equation} \label{euler28d} \hat{l}^{(\alpha,2)}_{i,j}(x)= \binom{i}{j}E^{(\alpha)}_{i-j}(x)-3\binom{i}{j+1}E^{(\alpha)}_{i-j-1}(x)+ 5\sum_{k=j+1}^{i}(-1)^{k-j}2^{k-j-2}\binom{i}{k}E^{(\alpha)}_{i-k}(x). \end{equation} We denote $\mathrsfs{L}_{2}(x)=\mathrsfs{L}_{2}^{(1)}(x)$ and $\mathrsfs{L}_{2}=\mathrsfs{L}_{2}(0)$. 
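The definitions above are easy to exercise on a machine. The following sketch (a numerical sanity check only, not part of the formal development) uses exact rational arithmetic to confirm, for integer order $\alpha=m$, that multiplying the Fibonacci matrix $\mathrsfs{F}$ (entries $F_{i-j+1}$) by the matrix with entries \eqref{euler28a} reproduces $\mathrsfs{E}^{(\alpha)}(x)$; this is the factorization established in Theorem \ref{teogeneuler5} below. For integer $m$, the generalized polynomials are generated here by the binomial convolution $E^{(m)}_{n}(x)=\sum_{k=0}^{n}\binom{n}{k}E^{(m-1)}_{k}(0)E_{n-k}(x)$ (an Appell-type property) together with the recurrence $E_{n}(x)=x^{n}-\frac{1}{2}\sum_{k=0}^{n-1}\binom{n}{k}E_{k}(x)$, which follows from $E_{n}(x+1)+E_{n}(x)=2x^{n}$:

```python
from fractions import Fraction as F
from math import comb

def c(a, b):
    # binomial coefficient extended by zero outside Pascal's triangle
    return comb(a, b) if 0 <= b <= a else 0

def euler_poly(n, x):
    # E_p(x) = x^p - (1/2) sum_{k<p} C(p,k) E_k(x), from E_p(x+1) + E_p(x) = 2 x^p
    E = []
    for p in range(n + 1):
        E.append(x**p - F(1, 2) * sum(comb(p, k) * E[k] for k in range(p)))
    return E[n]

def gen_euler(n, m, x):
    # E_n^{(m)}(x) for integer m >= 1, via binomial convolution of E^{(m-1)} and E
    if m == 1:
        return euler_poly(n, x)
    return sum(comb(n, k) * gen_euler(k, m - 1, F(0)) * euler_poly(n - k, x)
               for k in range(n + 1))

def fibonacci_matrix(n):
    # (i,j) entry F_{i-j+1} for i >= j (F_1 = F_2 = 1), zero otherwise
    fib = [1, 1]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])
    return [[F(fib[i - j]) if i >= j else F(0)
             for j in range(n + 1)] for i in range(n + 1)]

def M_matrix(n, m, x):
    # entries: C(i,j) E_{i-j} - C(i-1,j) E_{i-j-1} - C(i-2,j) E_{i-j-2}
    def e(p):
        return gen_euler(p, m, x) if p >= 0 else F(0)
    return [[c(i, j) * e(i - j) - c(i - 1, j) * e(i - j - 1)
             - c(i - 2, j) * e(i - j - 2)
             for j in range(n + 1)] for i in range(n + 1)]

def matmul(A, B):
    s = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(s)) for j in range(s)]
            for i in range(s)]

n, m, x = 4, 2, F(1, 3)
assert gen_euler(1, m, x) == x - F(m, 2)          # E_1^{(m)}(x) = x - m/2
E_matrix = [[c(i, j) * gen_euler(i - j, m, x) if i >= j else F(0)
             for j in range(n + 1)] for i in range(n + 1)]
assert matmul(fibonacci_matrix(n), M_matrix(n, m, x)) == E_matrix
```

The same script, with the entries \eqref{euler28b} and multiplication by $\mathrsfs{F}$ from the right, can be used to check the companion factorization.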
From the definitions of $\mathrsfs{L}_{1}^{(\alpha)}(x)$ and $\mathrsfs{L}_{2}^{(\alpha)}(x)$, we see that $$\begin{array}{l} \hat{l}^{(\alpha,1)}_{i,i}(x)= \hat{l}^{(\alpha,2)}_{i,i}(x)=1, \quad i\geq 0,\\ \\ \hat{l}^{(\alpha,1)}_{0,j}(x)= \hat{l}^{(\alpha,2)}_{0,j}(x)=0, \quad j\geq 1,\\ \\ \hat{l}^{(\alpha,1)}_{1,0}(x)= \hat{l}^{(\alpha,2)}_{1,0}(x)= E^{(\alpha)}_{1}(x)- 3E^{(\alpha)}_{0}(x)=x-\frac{\alpha}{2}-3,\\ \\ \hat{l}^{(\alpha,1)}_{1,j}(x)= \hat{l}^{(\alpha,2)}_{1,j}(x)=0, \quad j\geq 2,\\ \\ \hat{l}^{(\alpha,1)}_{i,0}(x)= E^{(\alpha)}_{i}(x)-3E^{(\alpha)}_{i-1}(x)+ 5\sum_{k=0}^{i-2}(-1)^{i-k}2^{i-k-2}E^{(\alpha)}_{k}(x), \quad i\geq 2,\\ \\ \hat{l}^{(\alpha,2)}_{i,0}(x)= E^{(\alpha)}_{i}(x)-3iE^{(\alpha)}_{i-1}(x)+ 5\sum_{k=1}^{i}(-1)^{k}2^{k-2}\binom{i}{k} E^{(\alpha)}_{i-k}(x), \quad i\geq 2,\\ \\ \hat{l}^{(\alpha,1)}_{i,1}(x)= iE^{(\alpha)}_{i-1}(x)-\frac{7(i-1)}{2}E^{(\alpha)}_{i-2}(x)+ 5\sum_{k=1}^{i-2}(-1)^{i-k}2^{i-k-2}kE^{(\alpha)}_{k-1}(x), \quad i\geq 3. \end{array}$$ The following results show several factorizations of $\mathrsfs{E}^{(\alpha)}(x)$ in terms of Fibonacci and Lucas matrices, respectively. \begin{teo} \label{teogeneuler5} The generalized Euler polynomial matrix $\mathrsfs{E}^{(\alpha)}(x)$ can be factorized in terms of the Fibonacci matrix $\mathrsfs{F}$ as follows. \begin{equation} \label{euler22} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{F} \mathrsfs{M}^{(\alpha)}(x), \end{equation} or, \begin{equation} \label{euler22a} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{N}^{(\alpha)}(x)\mathrsfs{F}. \end{equation} In particular, \begin{equation} \label{euler23a} \mathrsfs{F}\mathrsfs{M}(x)=\mathrsfs{E}(x)=\mathrsfs{N}(x)\mathrsfs{F}, \end{equation} \begin{equation} \label{euler23} \mathrsfs{F}\mathrsfs{M}=\mathrsfs{E}=\mathrsfs{N}\mathrsfs{F}, \end{equation} and \begin{equation} \label{euler24} \mathrsfs{F}\mathrsfs{M}\left(\frac{1}{2}\right)=\mathbb{E}=\mathrsfs{N}\left(\frac{1}{2}\right)\mathrsfs{F}. 
\end{equation} \end{teo} \begin{proof} Since the relation \eqref{euler22} is equivalent to $\mathrsfs{F}^{-1}\mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{M}^{(\alpha)}(x)$, it is possible to follow the proof given in \cite[Theorem 4.1]{ZW2006}, making the corresponding modifications, to obtain \eqref{euler22}. The relation \eqref{euler22a} can be obtained using a similar procedure. The relations \eqref{euler23a}, \eqref{euler23} and \eqref{euler24} are straightforward consequences of \eqref{euler22} and \eqref{euler22a}. \end{proof} Also, the relations \eqref{euler22} and \eqref{euler22a} allow us to deduce the following identity: $$\mathrsfs{M}^{(\alpha)}(x)=\mathrsfs{F}^{-1}\mathrsfs{N}^{(\alpha)}(x)\,\mathrsfs{F}.$$ As a consequence of Theorems \ref{teogeneuler4} and \ref{teogeneuler5}, we can derive simple factorizations for the inverses of the polynomial matrices $\mathrsfs{M}\left(x+\frac{1}{2}\right)$ and $\mathrsfs{N}\left(x+\frac{1}{2}\right)$: \begin{coro} \label{teogeneuler6} The inverses of the polynomial matrices $\mathrsfs{M}\left(x+\frac{1}{2}\right)$ and $\mathrsfs{N}\left(x+\frac{1}{2}\right)$ can be factorized as follows. \begin{equation} \label{euler24a} \left[\mathrsfs{M}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{D} P[-x]\mathrsfs{F}, \end{equation} \begin{equation} \label{euler24b} \left[\mathrsfs{N}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{F} \mathrsfs{D} P[-x]. \end{equation} In particular, \begin{equation} \label{euler25} \mathrsfs{M}^{-1}= \mathrsfs{D} P\left[\frac{1}{2}\right]\mathrsfs{F}, \quad \mbox{ and } \quad \mathrsfs{N}^{-1}= \mathrsfs{F}\mathrsfs{D} P\left[\frac{1}{2}\right], \end{equation} \begin{equation} \label{euler26} \left[\mathrsfs{M}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{D}\mathrsfs{F}, \quad \mbox{ and } \quad \left[ \mathrsfs{N}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{F}\mathrsfs{D}.
\end{equation} \end{coro} An analogous reasoning as used in the proof of Theorem \ref{teogeneuler5} allows us to prove the results below. \begin{teo} \label{teogeneuler8} The generalized Euler polynomial matrix $\mathrsfs{E}^{(\alpha)}(x)$ can be factorized in terms of the Lucas matrix $\mathrsfs{L}$ as follows. \begin{equation} \label{euler33} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{L} \mathrsfs{L}_{1}^{(\alpha)}(x), \end{equation} or, \begin{equation} \label{euler34} \mathrsfs{E}^{(\alpha)}(x)= \mathrsfs{L}_{2}^{(\alpha)}(x)\mathrsfs{L}. \end{equation} In particular, \begin{equation} \label{euler34a} \mathrsfs{L} \mathrsfs{L}_{1}(x)=\mathrsfs{E}(x)=\mathrsfs{L}_{2}(x)\mathrsfs{L}, \end{equation} \begin{equation} \label{euler34b} \mathrsfs{L}\mathrsfs{L}_{1}=\mathrsfs{E}=\mathrsfs{L}_{2}\mathrsfs{L}, \end{equation} and \begin{equation} \label{euler34e} \mathrsfs{L}\mathrsfs{L}_{1}^{\left(\frac{1}{2}\right)}(x)=\mathbb{E}=\mathrsfs{L}_{2}^{\left(\frac{1}{2}\right)}(x)\mathrsfs{L}. \end{equation} \end{teo} Also, the relations \eqref{euler33} and \eqref{euler34} allow us to deduce the following identity: $$\mathrsfs{L}_{1}^{(\alpha)}(x)=\mathrsfs{L}^{-1}\mathrsfs{L}_{2}^{(\alpha)}(x)\,\mathrsfs{L}.$$ \begin{coro} \label{teogeneuler9} The inverses of the polynomial matrices $\mathrsfs{L}_{1}\left(x+\frac{1}{2}\right)$ and $\mathrsfs{L}_{2}\left(x+\frac{1}{2}\right)$ can be factorized as follows. \begin{equation} \label{euler35} \left[\mathrsfs{L}_{1}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{D} P[-x]\mathrsfs{L}, \end{equation} \begin{equation} \label{euler35a} \left[\mathrsfs{L}_{2}\left(x+\frac{1}{2}\right)\right]^{-1}=\mathrsfs{L} \mathrsfs{D} P[-x]. 
\end{equation} In particular, \begin{equation} \label{euler35b} \mathrsfs{L}_{1}^{-1}= \mathrsfs{D} P\left[\frac{1}{2}\right]\mathrsfs{L}, \quad \mbox{ and } \quad \mathrsfs{L}_{2}^{-1}= \mathrsfs{L}\mathrsfs{D} P\left[\frac{1}{2}\right], \end{equation} \begin{equation} \label{euler35c} \left[\mathrsfs{L}_{1}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{D}\mathrsfs{L}, \quad \mbox{ and } \quad \left[ \mathrsfs{L}_{2}\left(\frac{1}{2}\right)\right]^{-1}= \mathrsfs{L}\mathrsfs{D}. \end{equation} \end{coro} \begin{remark} It is worthwhile to mention that if we consider $a\in \mathbb{C}$, $b\in \mathbb{C}\setminus\{0\}$ and $s=0,1$, then Theorems \ref{teogeneuler5} and \ref{teogeneuler8}, as well as their corollaries, have analogous forms for generalized Fibonacci matrices of type $s$, $\mathrsfs{F}^{(a,b,s)}$, and for generalized Fibonacci matrices $\mathrsfs{U}^{(a,b,0)}$ with second order recurrent sequence $U_{n}^{(a,b)}$ subject to certain constraints. The reader may consult \cite{SNS2008} for the details of this assertion. \end{remark} We finish this section with some new identities involving the Fibonacci numbers, the Lucas numbers and the generalized Euler polynomials and numbers. \begin{teo} \label{teogeneuler7} For $0\leq r\leq n$ and $\alpha$ any real or complex number, we have \begin{eqnarray} \nonumber \label{euler27} \binom{n}{r}E^{(\alpha)}_{n-r}(x)&=& F_{n-r+1}+\left[(r+1)x -\frac{(r+1)\alpha +2}{2}\right]F_{n-r}\\ \nonumber & & + \sum_{k=r+2}^{n}\binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)- \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)+ \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}F_{n-k+1}\\ \nonumber \label{euler31} & & \\ \nonumber & & \\ \nonumber &=& F_{n-r+1}+\left[n\left(x-\frac{\alpha}{2}\right)-1\right]F_{n-r}\\ \nonumber & & + \sum_{k=0}^{n-2}\binom{n}{k}\left\{E^{(\alpha)}_{n-k}(x)- \frac{n-k}{k+1}\left[E^{(\alpha)}_{n-k-1}(x)+ \frac{n-k-1}{k+2}E^{(\alpha)}_{n-k-2}(x)\right]\right\}F_{k-r+1}.
\end{eqnarray} \end{teo} \begin{proof} We proceed as in the proof of \cite[Theorem 4.2]{ZW2006}, making the corresponding modifications. From \eqref{euler28a}, it is clear that $$\tilde{m}_{r,r}^{(\alpha)}(x)= 1, \quad \tilde{m}_{r+1,r}^{(\alpha)}(x)= (r+1)x -\frac{(r+1)\alpha +2}{2},$$ and, for $k\geq 2$: $$\tilde{m}_{k,r}^{(\alpha)}(x)= \binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)- \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)+ \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}.$$ Next, it follows from \eqref{euler22} that \begin{eqnarray} \nonumber \binom{n}{r}E^{(\alpha)}_{n-r}(x)&=& E^{(\alpha)}_{n,r}(x)\\ \nonumber &=&\sum_{k=r}^{n}F_{n-k+1}\tilde{m}_{k,r}^{(\alpha)}(x)\\ \nonumber &=&F_{n-r+1}+ F_{n-r}\tilde{m}_{r+1,r}^{(\alpha)}(x) +\sum_{k=r+2}^{n}F_{n-k+1}\tilde{m}_{k,r}^{(\alpha)}(x)\\ \nonumber &=&F_{n-r+1}+\left[(r+1)x -\frac{(r+1)\alpha +2}{2}\right]F_{n-r}\\ \nonumber & & + \sum_{k=r+2}^{n}\binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)- \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)+ \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}F_{n-k+1}. \end{eqnarray} This chain of equalities completes the first part of the proof. 
The second one is obtained in a similar way, taking into account the following identities: $$\tilde{n}_{n,n}^{(\alpha)}(x)= 1, \quad \tilde{n}_{n,n-1}^{(\alpha)}(x)= n\left(x-\frac{\alpha}{2}\right)-1,$$ and, for $0\leq k\leq n-2$: $$\tilde{n}_{n,k}^{(\alpha)}(x)= \binom{n}{k}\left\{E^{(\alpha)}_{n-k}(x)- \frac{n-k}{k+1}\left[E^{(\alpha)}_{n-k-1}(x)+ \frac{n-k-1}{k+2}E^{(\alpha)}_{n-k-2}(x)\right]\right\}.$$ \end{proof} \begin{coro} \label{corgeneuler5} For $0\leq r\leq n$ and $\alpha$ any real number, we have \begin{multline*} \nonumber (-1)^{n}\binom{n}{r}E^{(\alpha)}_{n-r}(x)=(-1)^{r}F_{n-r+1}+(-1)^{r+1}\left[\frac{(r+1)(2x-\alpha)+2}{2}\right]F_{n-r}\\ \hspace{2.5cm}+ \sum_{k=r+2}^{n}(-1)^{k}\binom{k}{r}\left\{E^{(\alpha)}_{k-r}(x)+ \frac{k-r}{k}\left[E^{(\alpha)}_{k-r-1}(x)- \frac{k-r-1}{k-1}E^{(\alpha)}_{k-r-2}(x)\right]\right\}F_{n-k+1}\\ \\ \nonumber = (-1)^{r}F_{n-r+1}+(-1)^{r+1}\left[n\left(x-\frac{\alpha}{2}\right)-1\right]F_{n-r}\\ \hspace{2cm} + \sum_{k=0}^{n-2}(-1)^{n-k+r}\binom{n}{k}\left\{E^{(\alpha)}_{n-k}(x)+ \frac{n-k}{k+1}\left[E^{(\alpha)}_{n-k-1}(x)+ \frac{n-k-1}{k+2}E^{(\alpha)}_{n-k-2}(x)\right]\right\}F_{k-r+1}. \end{multline*} \end{coro} \begin{proof} Replacing $x$ by $\alpha-x$ in \eqref{euler27} and applying the formula $$E^{(\alpha)}_{n}(x)= (-1)^{n}E^{(\alpha)}_{n}(\alpha-x)$$ to the resulting identity, we obtain the first identity of Corollary \ref{corgeneuler5}. An analogous reasoning yields the second identity. \end{proof} Analogous reasonings to those used in the proofs of Theorem \ref{teogeneuler7} and Corollary \ref{corgeneuler5} allow us to prove the following results. 
\begin{teo} \label{teogeneuler10} For any real or complex number $\alpha$, we have the following identities \begin{eqnarray} \nonumber \label{lucas3} E^{(\alpha)}_{n}(x)&=& L_{n+1}+\left(x-\frac{\alpha}{2}-3\right)L_{n}+ \sum_{k=2}^{n}\left(E^{(\alpha)}_{k}(x)-3E^{(\alpha)}_{k-1}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=2}^{n}\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}L_{n-k+1}E^{(\alpha)}_{s}(x), \end{eqnarray} whenever $n\geq 2$. \begin{eqnarray} \label{lucas4} \nonumber nE^{(\alpha)}_{n-1}(x)&=& L_{n}+\left(2x-\alpha-3\right)L_{n-1}+ \sum_{k=3}^{n}\left(kE^{(\alpha)}_{k-1}(x)-3(k-1)E^{(\alpha)}_{k-2}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-s}2^{k-s-2}sL_{n-k+1}E^{(\alpha)}_{s-1}(x), \end{eqnarray} whenever $n\geq3$. \end{teo} \begin{coro} The following identities hold. \begin{eqnarray} \nonumber \label{lucas5} (-1)^{n}E^{(\alpha)}_{n}(x)&=& L_{n+1}-\left(x-\frac{\alpha}{2}+3\right)L_{n}+ \sum_{k=2}^{n}(-1)^{k}\left(E^{(\alpha)}_{k}(x)+3E^{(\alpha)}_{k-1}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=2}^{n}\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}L_{n-k+1}E^{(\alpha)}_{s}(x), \end{eqnarray} whenever $n\geq 2$. \begin{eqnarray} \label{lucas6} \nonumber (-1)^{n-1}nE^{(\alpha)}_{n-1}(x)&=& L_{n}+\left(\alpha-2x -3\right)L_{n-1}\\ \nonumber && + \sum_{k=3}^{n}(-1)^{k-1}\left(kE^{(\alpha)}_{k-1}(x)+3(k-1)E^{(\alpha)}_{k-2}(x) \right)L_{n-k+1}\\ & & + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-1}2^{k-s-2}sL_{n-k+1}E^{(\alpha)}_{s-1}(x), \end{eqnarray} whenever $n\geq3$. \end{coro} By \eqref{lucas3}, \eqref{lucas4}, \eqref{lucas5} and \eqref{lucas6} we obtain the following interesting identities involving Lucas and Euler numbers. 
\begin{itemize} \item For $n\geq 2$: $$ E_{n}-\left(L_{n+1}-\frac{7}{2}L_{n}\right)= \sum_{k=2}^{n}\left(E_{k}-3E_{k-1}+5\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}E_{s}\right)L_{n-k+1}, $$ $$ (-1)^{n}E_{n}= L_{n+1}-\frac{5}{2}L_{n}+ \sum_{k=2}^{n}(-1)^{k}\left(E_{k}+3E_{k-1}\right)L_{n-k+1}+ 5\sum_{k=2}^{n}\sum_{s=0}^{k-2}(-1)^{k-s}2^{k-s-2}L_{n-k+1}E_{s}. $$ \item For $n\geq 3$: \begin{eqnarray*} nE_{n-1}-(L_{n}-4L_{n-1})&=& \sum_{k=3}^{n}\left(kE_{k-1}-3(k-1)E_{k-2}\right)L_{n-k+1}\\ & & + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-s}2^{k-s-2}sE_{s-1}L_{n-k+1}, \end{eqnarray*} $$(-1)^{n-1}nE_{n-1}-L_{n}+2L_{n-1} = \sum_{k=3}^{n}(-1)^{k-1}\left(kE_{k-1}+3(k-1)E_{k-2} \right)L_{n-k+1}$$ $$\qquad\qquad\hspace{4cm}\qquad + 5\sum_{k=3}^{n}\sum_{s=1}^{k-2}(-1)^{k-1}2^{k-s-2}sL_{n-k+1}E_{s-1}.$$ \end{itemize} Other similar combinatorial identities may be obtained using the results of \cite{LKC2003}; we leave their formulation to the interested reader. \section{Euler matrices and their relation with Stirling and Vandermonde matrices} \label{sec:5} Let $s(n, k)$ and $S(n, k)$ be the Stirling numbers of the first and second kind, which are respectively defined by the generating functions \cite[Chapter 1, Section 1.6]{SCh2012}: \begin{eqnarray} \label{seuler1} \sum_{k=0}^{n}s(n, k)z^{k}&=& z(z-1)\cdots(z-n+1), \\ \nonumber (\log(1+z))^{k}&=&k!\sum_{n=k}^{\infty}s(n, k)\frac{z^{n}}{n!}, \quad |z|<1,\\ \nonumber z^{n}&=& \sum_{k=0}^{n}S(n, k)z(z-1)\cdots(z-k+1),\\ \nonumber (e^{z}-1)^{k} &=& k!\sum_{n=k}^{\infty}S(n, k)\frac{z^{n}}{n!}. \end{eqnarray} In combinatorics, it is well known that the value $|s(n, k)|$ represents the number of permutations of $n$ elements with $k$ disjoint cycles, while the Stirling numbers of the second kind $S(n, k)$ give the number of partitions of $n$ objects into $k$ non-empty subsets. Another way to compute these numbers is by means of the formula (see \cite[Eq. (5.1)]{E2012} or \cite[p.
226]{Rio68}): \begin{equation*} S(n, k)= \frac{1}{k!}\sum_{l=0}^{k}(-1)^{k-l}\binom{k}{l}l^{n}, \quad 1\leq k\leq n. \end{equation*} A recent connection between the Stirling numbers of the second kind and the Euler polynomials is given by the formula (see \cite[Theorem 3.1, Eq. (3.3)]{GQ2014}): \begin{equation} \label{seuler11} E_{n}(x)=\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}\left[\sum_{l=1}^{n-k+1}\frac{(-1)^{l-1}(l-1)!}{2^{l-1}}S(n-k+1,l)\right]x^{k}. \end{equation} Proceeding as in the proof of \cite[Theorem 3.1]{GQ2014}, one can find a relation similar to the previous one, but connecting Stirling numbers of the first kind and a particular class of generalized Euler polynomials. \begin{teo} \label{stinglemma} Let us assume that $\alpha=m\in \mathbb{N}$. Then, the connection between the Stirling numbers of the first kind and the generalized Euler polynomial $E^{(m)}_{n}(x)$ is given by the formula: \begin{equation} \label{sneuler1} E^{(m)}_{n}(x)=\frac{1}{2^{n}}\sum_{k=0}^{n}\binom{n}{k}\left[\sum_{j=0}^{n-k}s(n-k,j)(-m)^{j}\right](2x)^{k}. \end{equation} \end{teo} \begin{proof} By Leibniz's theorem for differentiation we have \begin{eqnarray*} \frac{\partial^{r}}{\partial z^{r}}\left[\left(\frac{2}{e^{z}+1}\right)^{m}e^{xz}\right]&=& \sum_{k=0}^{r}\binom{r}{k}\left[\left(\frac{2}{e^{z}+1}\right)^{m}\right]^{(k)}\frac{\partial^{r-k}}{\partial z^{r-k}}(e^{xz})\\ &=&\sum_{k=0}^{r}\binom{r}{k}\left[\left(\frac{2}{e^{z}+1}\right)^{m}\right]^{(k)}x^{r-k}e^{xz}\\ &=&\left(\frac{2}{e^{z}+1}\right)^{m}e^{(x+1)z}\sum_{k=0}^{r}\binom{r}{k}\frac{(-m)_{k}\,x^{r-k}}{(e^{z}+1)^{k}}, \end{eqnarray*} where in the last expression $(-m)_{k}$ denotes the falling factorial with opposite argument $-m$.
Combining this with the $r$-th differentiation on both sides of the generating function in \eqref{euler1} reveals that $$\sum_{n=r}^{\infty}E_{n}^{(m)}(x)\frac{z^{n-r}}{(n-r)!}= \left(\frac{2}{e^{z}+1}\right)^{m}e^{(x+1)z}\sum_{k=0}^{r}\binom{r}{k}\frac{(-m)_{k}\,x^{r-k}}{(e^{z}+1)^{k}}.$$ Further taking $z\rightarrow 0$ and employing \eqref{seuler1} give \begin{eqnarray*} E_{r}^{(m)}(x)&=&\sum_{k=0}^{r}\binom{r}{k}(-m)_{k}\,\frac{x^{r-k}}{2^{k}}=\frac{1}{2^{r}}\sum_{k=0}^{r}\binom{r}{k} (-m)_{r-k}\,(2x)^{k}\\ &=&\frac{1}{2^{r}}\sum_{k=0}^{r}\binom{r}{k}\left[\sum_{j=0}^{r-k}s(r-k, j)(-m)^{j} \right](2x)^{k}. \end{eqnarray*} Finally, replacing $r$ by $n$ completes the proof of formula \eqref{sneuler1}. \end{proof} \begin{definition} \label{def6} For the Stirling numbers $s(i,j)$ and $S(i,j)$ of the first kind and of the second kind respectively, define $\mathfrak{S}$ and $\mathrsfs{S}$ to be the $(n+1)\times(n+1)$ matrices given by \begin{equation} \label{seuler10} \mathfrak{S}_{i,j}= \left\{\begin{array}{l} s(i,j), \quad i\geq j,\\ 0, \quad \mbox{otherwise}, \end{array}\right. \quad \mbox{ and } \quad \mathrsfs{S}_{i,j}= \left\{\begin{array}{l} S(i,j), \quad i\geq j,\\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} The matrices $\mathfrak{S}$ and $\mathrsfs{S}$ are called the Stirling matrices of the first and second kind, respectively (see \cite{ChK2001}). \end{definition} In order to obtain factorizations for Euler matrices via Stirling matrices, we will need the following matrices: Let $\tilde{S}_{n}$ be the factorial Stirling matrix, i.e., the $n\times n$ matrix whose $(i,j)$-th entry is given by $\tilde{S}_{i,j,n}:=j!\,\mathrsfs{S}_{i,j}$, $i\geq j$ and otherwise $0$.
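In passing, formula \eqref{seuler11} lends itself to a direct machine check. The sketch below (a numerical sanity check only) combines the explicit formula for $S(n,k)$ quoted above with the recurrence $E_{n}(x)=x^{n}-\frac{1}{2}\sum_{k=0}^{n-1}\binom{n}{k}E_{k}(x)$, a consequence of $E_{n}(x+1)+E_{n}(x)=2x^{n}$, using exact rational arithmetic:

```python
from fractions import Fraction as F
from math import comb, factorial

def euler_poly(n, x):
    # E_p(x) = x^p - (1/2) sum_{k<p} C(p,k) E_k(x)
    E = []
    for p in range(n + 1):
        E.append(x**p - F(1, 2) * sum(comb(p, k) * E[k] for k in range(p)))
    return E[n]

def stirling2(n, k):
    # S(n,k) = (1/k!) sum_l (-1)^{k-l} C(k,l) l^n, valid here for n >= 1
    return sum((-1) ** (k - l) * comb(k, l) * l ** n
               for l in range(k + 1)) // factorial(k)

x = F(2, 5)
for n in range(6):
    # right-hand side of the Stirling-number formula for E_n(x)
    rhs = sum((-1) ** (n - k) * comb(n, k)
              * sum(F((-1) ** (l - 1) * factorial(l - 1), 2 ** (l - 1))
                    * stirling2(n - k + 1, l)
                    for l in range(1, n - k + 2))
              * x ** k
              for k in range(n + 1))
    assert rhs == euler_poly(n, x)
```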
For $m\in\mathbb{N}$, let $\mathfrak{S}^{(m)}$ be the $(n+1)\times(n+1)$ matrix whose $(i,j)$-entries are defined by \begin{equation} \label{kind1} \mathfrak{S}^{(m)}_{i,j}=\left\{\begin{array}{l} \binom{i}{j}\sum_{k=0}^{i-j}s(i-j, k)(-m)^{k}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} Let $\tilde{C}$ and $\tilde{D}$ be the $(n+1)\times(n+1)$ matrices whose $(i,j)$-entries are defined by \begin{equation} \label{kind2} \tilde{C}_{i,j}=\left\{\begin{array}{l} \binom{i}{j} (-1)^{i-j}\sum_{k=0}^{i-j}\left(-\frac{1}{2}\right)^{k}\tilde{S}_{i-j-k,k,i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} \begin{equation} \label{kind3} \tilde{D}_{i,j}=\left\{\begin{array}{l} \binom{i}{j} (-1)^{i-j}\sum_{k=0}^{i-j}\left(-\frac{1}{2}\right)^{k}\tilde{S}_{i-j-k,k+1,i-j}, \quad i\geq j,\\ \\ 0, \quad \mbox{otherwise}. \end{array}\right. \end{equation} The next theorem shows the corresponding factorizations of the generalized Euler matrix $\mathrsfs{E}^{(m)}$, $m\in\mathbb{N}$, in terms of the Stirling matrices, when the expressions \eqref{seuler11} and \eqref{sneuler1} are incorporated. \begin{teo} \label{teogenseuler5} For $m\in \mathbb{N}$, the generalized Euler matrix $\mathrsfs{E}^{(m)}(x)$ can be factorized as follows. \begin{equation} \label{seuler22} \mathrsfs{E}^{(m)}(x)= \mathfrak{S}^{(m)}P[x]. \end{equation} In the case of the Stirling matrix of the second kind, we have \begin{equation} \label{sneuler22b} \mathrsfs{E}(x)= (\tilde{C}+\tilde{D})P[x]. \end{equation} Furthermore, \begin{equation} \label{sneuler22r} \mathfrak{S}^{(1)}= \tilde{C}+\tilde{D}. 
\end{equation} \end{teo} \begin{proof} For $m\in \mathbb{N}$ and $i\geq j$, let $A_{i,j}^{(m)}(x)$ be the $(i,j)$-th entry of the matrix product $\mathfrak{S}^{(m)}P[x]$, then $$\begin{aligned} A_{i,j}^{(m)}(x)=&\sum_{k=j}^{i}\mathfrak{S}^{(m)}_{i,k}\,p_{k,j}(x)=\sum_{k=j}^{i}\binom{i}{k}\binom{k}{j}\left[\sum_{r=0}^{i-k}s(i-k, r)(-m)^{r}\right]x^{k-j}\\ =&\sum_{k=j}^{i}\binom{i}{j}\binom{i-j}{k-j}2^{j-i}\left[\sum_{r=0}^{i-k}s(i-k, r)(-m)^{r}\right](2x)^{k-j}\\ =&\frac{\binom{i}{j}}{2^{i-j}}\sum_{k=0}^{i-j}\binom{i-j}{k}\left[\sum_{r=0}^{i-j-k}s(i-j-k, r)(-m)^{r}\right](2x)^{k}=\binom{i}{j}E_{i-j}^{(m)}(x). \end{aligned}$$ The last equality is an immediate consequence of \eqref{sneuler1}, and \eqref{seuler22} follows from the previous chain of equalities. In order to prove \eqref{sneuler22b}, we proceed in a similar way to the previous one, taking into account \eqref{seuler11}, \eqref{kind2}, \eqref{kind3}, and making the corresponding modifications. Finally, the substitution $m=1$ into \eqref{seuler22} yields \eqref{sneuler22r}. \end{proof} \begin{definition} \label{shift1} The $(n+1)\times(n+1)$ shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$ is given by \begin{equation} \label{shift2} \tilde{\mathrsfs{E}}_{i,j}(x)= \mathrsfs{E}_{i}(j+x), \quad 0\leq i,j\leq n. \end{equation} \end{definition} Let us consider the Vandermonde matrix: $$\mathrsfs{V}(x):= \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ x & 1+x & 2+x & \cdots & n+x \\ x^{2} &(1+x)^{2} & (2+x)^{2} & \cdots & (n+x)^{2} \\ \vdots & \vdots & \vdots& \ddots & \vdots \\ x^{n}& (1+x)^{n} & (2+x)^{n} & \cdots & (n+x)^{n} \end{bmatrix}.$$ In \cite[Theorem 2.1]{ChK2002}, the following factorization for the Vandermonde matrix $\mathrsfs{V}(x)$ was stated.
\begin{equation} \label{veuler3} \mathrsfs{V}(x)= ([1]\oplus \tilde{S}_{n}) \Delta_{n+1}(x) P^{T}:=([1]\oplus \tilde{S}_{n}) \Delta_{n+1}(x) (P[1])^{T}, \end{equation} where $\Delta_{n+1}(x)(P[1])^{T}$ represents the LU-factorization of a lower triangular matrix whose $(i,j)$-th entry is $\binom{x}{i-j}$, if $i\geq j$ and otherwise $0$. The relation between the shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$ and the matrices $\mathrsfs{V}(x)$ and $\tilde{S}_{n}$ is contained in the following result. \begin{teo} \label{teogenvaneuler5} The shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$ can be factorized in terms of the Vandermonde matrix $\mathrsfs{V}(x)$ and consequently, in terms of the factorial Stirling matrix $\tilde{S}_{n}$ as follows: \begin{equation} \label{veuler22} \tilde{\mathrsfs{E}}(x)= \mathrsfs{E} \mathrsfs{V}(x), \end{equation} \begin{equation} \label{veuler22c} \tilde{\mathrsfs{E}}(x)= \mathrsfs{E} ([1]\oplus \tilde{S}_{n}) \Delta_{n+1}(x) P^{T}. \end{equation} \end{teo} \begin{proof} Let $\tilde{\mathrsfs{E}}_{i,j}(x)$ be the $(i,j)$-th entry of the shifted Euler polynomial matrix $\tilde{\mathrsfs{E}}(x)$. Then, using \eqref{euler9} we get $$\tilde{\mathrsfs{E}}_{i,j}(x)= \mathrsfs{E}_{i}(j+x)=\sum_{k=0}^{i}\binom{i}{k}E_{i-k}(j+x)^{k}=\sum_{k=0}^{i}E_{i,k}\mathrsfs{V}_{k,j}(x).$$ Hence, \eqref{veuler22} follows from this chain of equalities. The relation \eqref{veuler22c} is a straightforward consequence of \eqref{veuler3}. \end{proof} \begin{remark} Note that the relations \eqref{veuler22} and \eqref{veuler22c} are the analogues of \cite[Eqs. (37), (38)]{ZW2006}, respectively, in the context of Euler polynomial matrices. \end{remark} Finally, in the present paper, all matrix identities have been expressed using finite matrices. Since such matrix identities involve lower triangular matrices, they have natural counterparts for infinite matrices. We state this briefly as follows.
Let $\mathrsfs{E}^{(\alpha)}_{\infty}(x)$, $\mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right)$, $\mathrsfs{E}_{\infty}(x)$, $\mathrsfs{E}_{\infty}$, $\mathbb{E}_{\infty}$, $\mathrsfs{D}_{\infty}$, $\tilde{\mathrsfs{E}}_{\infty}(x)$, $P_{\infty}[x]$, $\mathrsfs{F}_{\infty}$, $\mathrsfs{F}^{-1}_{\infty}$, $\mathrsfs{L}_{\infty}$, $\mathrsfs{G}_{\infty}[x]$, $\mathrsfs{H}_{\infty}[x]$, $\mathrsfs{M}^{(\alpha)}_{\infty}(x)$, $\mathrsfs{M}_{\infty}\left(x+\frac{1}{2}\right)$, $\mathrsfs{N}^{(\alpha)}_{\infty}(x)$, $\mathrsfs{N}_{\infty}\left(x+\frac{1}{2}\right)$, $\mathrsfs{V}_{\infty}$ and $\mathfrak{S}^{(m)}_{\infty}$, be the infinite cases of the matrices $\mathrsfs{E}^{(\alpha)}(x)$, $\mathrsfs{E}\left(x+\frac{1}{2}\right)$, $\mathrsfs{E}(x)$, $\mathrsfs{E}$, $\mathbb{E}$, $\mathrsfs{D}$, $\tilde{\mathrsfs{E}}(x)$, $P[x]$, $\mathrsfs{F}$, $\mathrsfs{F}^{-1}$, $\mathrsfs{L}$, $\mathrsfs{G}[x]$, $\mathrsfs{H}[x]$, $\mathrsfs{M}^{(\alpha)}(x)$, $\mathrsfs{M}\left(x+\frac{1}{2}\right)$, $\mathrsfs{N}^{(\alpha)}(x)$, $\mathrsfs{N}\left(x+\frac{1}{2}\right)$, $\mathrsfs{V}$ and $\mathfrak{S}^{(m)}$ respectively. Then the following identities hold. 
\begin{eqnarray*} 2\mathrsfs{E}^{(\alpha-1)}_{\infty}(x)&=& \mathrsfs{E}^{(\alpha)}_{\infty}(x+1)+\mathrsfs{E}^{(\alpha)}_{\infty}(x),\\ \mathrsfs{E}^{(\alpha)}_{\infty}(x+y) &=& \mathrsfs{E}^{(\alpha)}_{\infty}(x)P_{\infty}[y]=P_{\infty}[x]\mathrsfs{E}^{(\alpha)}_{\infty}(y)=\mathrsfs{E}^{(\alpha)}_{\infty}(y)P_{\infty}[x], \\ \mathrsfs{E}_{\infty}(x+y) &=& P_{\infty}[x] \mathrsfs{E}_{\infty}(y)=P_{\infty}[y] \mathrsfs{E}_{\infty}(x), \\ \mathrsfs{E}_{\infty}(x) &=& P_{\infty}[x]\mathrsfs{E}_{\infty}, \\ \mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right) &=& P_{\infty}[x] \mathbb{E}_{\infty}, \\ \left[\mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right)\right]^{-1} &=& \mathrsfs{D}_{\infty}P_{\infty}[-x], \\ \mathrsfs{E}_{\infty}\left(x+\frac{1}{2}\right) &=& \mathrsfs{L}_{\infty}\mathrsfs{G}_{\infty}[x]\mathbb{E}_{\infty}=\mathrsfs{H}_{\infty}[x]\mathrsfs{L}_{\infty}\mathbb{E}_{\infty}, \\ \mathrsfs{E}^{(\alpha)}_{\infty}(x) &=& \mathrsfs{F}_{\infty}\mathrsfs{M}^{(\alpha)}_{\infty}(x)=\mathrsfs{N}^{(\alpha)}_{\infty}(x)\mathrsfs{F}_{\infty}, \\ \mathrsfs{M}^{(\alpha)}_{\infty}(x) &=& \mathrsfs{F}^{-1}_{\infty}\mathrsfs{N}^{(\alpha)}_{\infty}(x)\mathrsfs{F}_{\infty},\\ \left[\mathrsfs{M}_{\infty}\left(x+\frac{1}{2}\right)\right]^{-1} &=& \mathrsfs{D}_{\infty}P_{\infty}[-x] \mathrsfs{F}_{\infty},\\ \left[\mathrsfs{N}_{\infty}\left(x+\frac{1}{2}\right)\right]^{-1} &=& \mathrsfs{F}_{\infty} \mathrsfs{D}_{\infty}P_{\infty}[-x],\\ \mathrsfs{E}^{(m)}_{\infty}(x)&=&\mathfrak{S}^{(m)}_{\infty}P_{\infty}[x],\\ \tilde{\mathrsfs{E}}_{\infty}(x)&=& \mathrsfs{E}_{\infty} \mathrsfs{V}_{\infty}(x). \end{eqnarray*} \end{document}
\begin{document} \title{Peer Methods for the Solution of Large-Scale Differential Matrix Equations} \noindent {\bfseries Authors' addresses:}\\[2ex] Peter Benner \\ Max Planck Institute for Dynamics of Complex Technical Systems,\\ Computational Methods in Systems and Control Theory,\\ D-39106 Magdeburg, Germany,\\ and\\ Technische Universit\"at Chemnitz,\\ Faculty of Mathematics, Mathematics in Industry and Technology\\ D-09126 Chemnitz, Germany\\ \url{[email protected]}\\[2ex] Norman Lang\\ Technische Universit\"at Chemnitz,\\ Faculty of Mathematics, Mathematics in Industry and Technology\\ D-09126 Chemnitz, Germany\\ \url{[email protected]} \begin{abstract} We consider the application of implicit and linearly implicit (Rosenbrock-type) peer methods to matrix-valued ordinary differential equations. In particular, the differential Riccati equation (DRE) is investigated. For the Rosenbrock-type schemes, a reformulation capable of avoiding a number of Jacobian applications is developed that, in the autonomous case, reduces the computational complexity of the algorithms. Dealing with large-scale problems, an efficient implementation based on low-rank symmetric indefinite factorizations is presented. The performance of both peer approaches up to order 4 is compared to existing implicit time integration schemes for matrix-valued differential equations. \end{abstract} \section{Introduction}\label{intro} Differential matrix equations are of major importance in many fields like optimal control and model reduction of linear dynamical systems, see, e.g.,~\cite{Loc01,AboFIetal03} and \cite{ShoSV83,VerK83}, respectively. In that context, the most common differential matrix equations are the differential Riccati and Lyapunov equations (DREs/DLEs), where the latter can be considered as a special case of the Riccati equation.
Therefore, as an illustrative example, in this article we consider the numerical solution of the time-varying DRE \begin{align} \begin{aligned} \dot{X}(t)&=A(t)^{T}X(t)+X(t)A(t)-X(t)S(t)X(t)+W(t)=:\ensuremath{\mathcal{R}}(t,X),\\ X(t_0)&=X_{0}, \end{aligned}\label{eq:DRE} \end{align} where $t\in[t_{0},t_{f}]$, $X(t)\in\ensuremath{\R^{n\times n}}$ is the sought solution of Equation~(\ref{eq:DRE}), $A(t)$, $W(t)$, $S(t)\in\ensuremath{\R^{n\times n}}$ are given matrix-valued functions, and the matrix $X_{0}\in\ensuremath{\R^{n\times n}}$ denotes the initial value, with $n$ being the problem dimension. The differential Lyapunov equation results if we set \(S(t)\equiv 0\). Provided that the matrices $A,W,S$ are piecewise continuous and locally bounded, from~\cite[Theorem 4.1.6]{AboFIetal03} we have that a solution to Equation~(\ref{eq:DRE}) exists and is unique. The DRE is one of the most deeply studied nonlinear matrix differential equations due to its importance in optimal control, optimal filtering, $\mathbf{H}_\infty$-control of linear time-varying systems, differential games, and many other fields (see, e.g.,~\cite{AboFIetal03,IchK99,Jac93,PetUS00}). Over the last four decades many solution strategies have been presented, see, e.g., \cite{DavM73,Lai76,Lau82,HarWA83,KenL85,Meh91}. Most of these methods are only applicable to small-scale systems, i.e., systems with a rather small number $n$ of unknowns. Others, suitable for large-scale problems, have to deal with numerical difficulties such as instability, see~\cite[Section 4.1]{Men07} for a detailed overview. Since in many control applications fast and slow modes are present, the DRE~(\ref{eq:DRE}) is usually fairly stiff.
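For very small dimensions, the DRE can simply be integrated by unrolling $X$ into a vector and calling a general-purpose ODE solver; this is precisely the naive approach the methods below are designed to replace. The following sketch (illustrative data only; the matrices \texttt{A}, \texttt{B}, \texttt{C} are hypothetical) integrates an autonomous instance of the DRE~(\ref{eq:DRE}) with \texttt{scipy} and checks the long-time limit against the stabilizing algebraic Riccati solution:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Small illustrative (autonomous) data: A stable, S = B B^T, W = C^T C.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 1.0]])
S, W = B @ B.T, C.T @ C
n = A.shape[0]

def riccati_rhs(t, x):
    """Vectorized Riccati operator R(t, X) = A^T X + X A - X S X + W."""
    X = x.reshape(n, n)
    return (A.T @ X + X @ A - X @ S @ X + W).ravel()

X0 = np.zeros((n, n))
sol = solve_ivp(riccati_rhs, (0.0, 40.0), X0.ravel(), rtol=1e-10, atol=1e-12)
X_T = sol.y[:, -1].reshape(n, n)

# For stabilizable/detectable autonomous data, X(t) approaches the
# stabilizing solution of the algebraic Riccati equation as t grows.
X_inf = solve_continuous_are(A, B, W, np.eye(1))
print(np.linalg.norm(X_T - X_inf))
```

Note that this already illustrates the complexity obstacle: the solver operates on vectors of length $n^{2}$, which is infeasible for large $n$.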
For that reason, the numerical solution based on matrix versions of classical implicit time integration schemes, such as the BDF, Rosenbrock methods, and the Midpoint and Trapezoidal rules~\cite{Die92,BenM04,Men07,BenM13,BenM18}, has become a popular tool for the solution of~(\ref{eq:DRE}). Recently, also splitting methods~\cite{HanS14,Sti15,Sti17} and a structure preserving solution method for large-scale DREs~\cite{KosM17}, using Krylov subspace methods, were proposed. Note that ``unrolling'' the matrix differential equation into a standard (vector-valued) ordinary differential equation (ODE) is usually infeasible due to the resulting complexity: the ODE would then be posed in \(\R^{n^{2}}\), with Jacobians of size \(n^{2}\times n^{2}\). In the field of implicit time integration methods, linear multistep and one-step methods, such as the BDF and the Rosenbrock methods, respectively, have been known for many decades and have proven their effectiveness over the years for a wide range of problems. These two traditional classes of time integration methods have been studied separately until recently. The general linear methods (GLMs), introduced in~\cite{But66}, provide a unifying framework for the stability, consistency and convergence analysis of a wide variety of methods, including the aforementioned classes. Detailed explanations on GLMs are given in the surveys~\cite{But85,But96}. Most of the classical methods contain a number of solution variables $X_{k+1,j}$ and in addition compute a separate number of auxiliary variables related to function evaluations $F(\ensuremath{\tilde{t}}_{k},\ensuremath{\tilde{X}}_{k})$ at $t_{k}\leq \ensuremath{\tilde{t}}_{k}\leq t_{k+1}$, $\ensuremath{\tilde{X}}_{k}\approx X(\ensuremath{\tilde{t}}_{k})$, that are designed to improve the accuracy and stability properties of the approximate solutions. Typically, only one solution variable per time interval is employed to approximate the solution.
Moreover, for different time intervals, as, e.g., for variable time step sizes, these solution variables may have differing accuracy and stability properties. As a consequence, Rosenbrock methods, for example, often suffer from order reduction. Now, the idea of the so-called \emph{peer methods} is to define an integration scheme that only contains peer variables, each representing an approximation to the exact solution of~\eqref{eq:DRE} at a different time location, all sharing the same accuracy and stability properties. For all the above mentioned implicit solution methods, including the peer methods to be presented, it turns out that the main ingredient for the solution of the DRE~(\ref{eq:DRE}) is to solve a number of either algebraic Riccati or Lyapunov equations (AREs/ALEs) in each time step. Dealing with large-scale systems, a naive application of the implicit integration methods leads to an enormous computational effort and storage requirement, since the solution to the DRE~(\ref{eq:DRE}) is a dense square matrix of dimension $n$ that is computed at each point of the discrete time set. However, in practice it is often observed that the singular values of the solutions of the ALEs occurring in the innermost iteration of the solution methods decay rapidly to zero, see, e.g.,~\cite{AntSZ02,Gra04,Pen00a,TruV07}. Thus, the solution is of low numerical rank, meaning it can be well approximated by products of low-rank matrices. Based on this observation, modern and efficient algorithms rely on low-rank solution representations. In~\cite{Men07,BenM13,BenM18}, classical implicit time integration methods, originally developed for standard scalar and vector-valued ODEs, exploiting this low-rank phenomenon were presented. Therein a factorization $X=ZZ^{T}$ with $Z\in\ensuremath{\R^{n\times k}}$, $k\ll n$, is employed in order to efficiently solve large-scale DREs. This decomposition will be referred to as a low-rank Cholesky-type factorization (LRCF) in the remainder.
In~\cite{LanMS14,LanMS15}, it has been shown that for integration methods of order $\geq 2$, the right hand sides of the ALEs to be solved become indefinite; the LRCF then involves complex data and therefore requires complex arithmetic and storage. Moreover, therein a low-rank symmetric indefinite factorization (LRSIF) of the form $X=LDL^{T}$, $L\in\ensuremath{\R^{n\times k}},~D\in\ensuremath{\R^{k\times k}}$, $k\ll n$, of the solution was introduced in order to avoid complex data. The paper is organized as follows. In Section~\ref{sec:peer}, the implicit and linearly implicit Ro\-sen\-brock-type peer methods are introduced for the application to the matrix-valued differential Riccati equation. Efficient numerical algorithms based on the low-rank symmetric indefinite factorization for both peer approaches are presented in Section~\ref{sec:low-rank}. In Section~\ref{sec:numerics}, the performance of the new peer solution methods up to order 4 is compared to the existing implicit integration schemes of similar orders. A conclusion is given in Section~\ref{sec:conc}. \section{Peer Methods}\label{sec:peer} The class of peer methods first appeared in~\cite{SchW04} in terms of linearly implicit integration schemes with peer variables, suitable for parallel computations by only using information from the previous time interval. A number of specific peer schemes and applications are presented in, e.g.,~\cite{PodWS05,PodWS06,SchWE05,SchWP05}. Further, for a recent detailed overview see~\cite[Chapters 5,10]{StrWP12}.
\subsection{General Implicit Peer Methods}\label{sec:implPeer} A general (one-step) implicit peer method, applied to the matrix-valued initial value problem~\eqref{eq:DRE}, reads \begin{align} X_{k,i}=&\sum_{j=1}^{s}b_{i,j}X_{k-1,j} +\tau_{k}\sum_{j=1}^{i}g_{i,j}\ensuremath{\mathcal{R}}(t_{k,j},X_{k,j}).\label{eq:peer} \end{align} Here, $s$ is the number of stages and \begin{align} t_{k,j}=t_{k}+c_{j}\tau_{k},\label{eq:peertime} \end{align} where the variables $c_{j},~j=1,\dots,s$, with $c_{s}=1,~t_{k,s}=t_{k+1}$, define the locations of the peer variables \(X_{k,i},~i=1,\dots,s\), for the time step $t_{k}\rightarrow t_{k+1}$. In general, $c_{j}<0$ for some $j$ will also be allowed. Furthermore, the peer variables $X_{k,i}$ represent the solution approximations of~(\ref{eq:DRE}) at the time locations $t_{k,i}$, i.e., $X_{k,i}\approx X(t_{k,i})=X(t_{k}+c_{i}\tau_{k})$. Since $c_{s}=1$, the solution $X_{k}$ at time $t_{k}$ is given by $X_{k-1,s}$. The variables \(b_{i,j}\) and \(g_{i,j}\) are the determining coefficients of the method. The convergence order of these methods is restricted to $s-1$. By additionally using function values from the previous time interval, two-step peer methods of order $s$ can be constructed. Under special conditions, even a superconvergent subclass of the implicit peer methods with convergence order $s+1$ can be found. Details on the convergence analysis are given in, e.g.,~\cite{SolW17} and the references therein. The corresponding two-step scheme becomes \begin{align} X_{k,i}=&\sum_{j=1}^{s}b_{i,j}X_{k-1,j}+ \tau_{k}\sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) +\tau_{k}\sum_{j=1}^{i}g_{i,j}\ensuremath{\mathcal{R}}(t_{k,j},X_{k,j})\label{eq:peer2} \end{align} with additional coefficients \(a_{i,j}\). Note that, due to the order conditions, the coefficients will in general depend on the step size ratio $\tau_{k}/\tau_{k-1}$ of two consecutive time steps.
Moreover, the computation of the coefficients is based on highly sophisticated optimization processes. Therefore, for details on the order conditions and the computation of the associated coefficients, we refer to~\cite{SolW17} and the references therein. Further, note that the scheme~\eqref{eq:peer} can easily be recovered from~\eqref{eq:peer2} by setting $a_{i,j}=0$; therefore, in the remainder we restrict the statements to the more general class~(\ref{eq:peer2}) of implicit (two-step) peer methods. From the application of the peer scheme~\eqref{eq:peer2} to the DRE~\eqref{eq:DRE} with $F(t_{k,i},X_{k,i})=\ensuremath{\mathcal{R}}(t_{k,i},X_{k,i})$ one obtains \begin{align} \ensuremath{\tilde{A}}_{k,i}^TX_{k,i}+X_{k,i}\ensuremath{\tilde{A}}_{k,i} -X_{k,i}\ensuremath{\tilde{S}}_{k,i}X_{k,i}+\ensuremath{\tilde{W}}_{k,i}=0,\qquad i=1,\dots,s,\label{eq:peerARE} \end{align} which is in fact an algebraic Riccati equation. Here, the coefficient matrices are given by \begin{align*} \ensuremath{\tilde{A}}_{k,i}&=\tau_{k}g_{i,i}A_{k,i}-\frac{1}{2}I,\quad \ensuremath{\tilde{S}}_{k,i}=\tau_{k}g_{i,i}S_{k,i},\\ \ensuremath{\tilde{W}}_{k,i}&=\tau_{k}g_{i,i}W_{k,i}+\sum_{j=1}^{s}b_{i,j}X_{k-1,j} +\tau_{k}\sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) +\tau_{k}\sum_{j=1}^{i-1}g_{i,j}\ensuremath{\mathcal{R}}(t_{k,j},X_{k,j}). \end{align*} Moreover, we have $A_{k,i}=A(t_{k,i})$, $W_{k,i}=W(t_{k,i})$ and $S_{k,i}=S(t_{k,i})$ with $t_{k,i}$ from Equation~(\ref{eq:peertime}). Note that, according to the number of peer variables to be computed, $s$ AREs have to be solved at every time step $t_{k}\to t_{k+1}$ of the method. In comparison, the BDF methods, as well as the Midpoint and Trapezoidal rules, require the solution of only one ARE at every time step, see, e.g.,~\cite{Men07,LanMS15,Lan17}.
That is, when directly solving the occurring algebraic Riccati equations, the expected computational effort of the peer methods is in general $s$ times higher than that of the other implicit methods. Still, since $s$ peer variables with the same accuracy and stability properties are computed within every time interval, the peer methods allow us to use larger step sizes in order to achieve a comparable accuracy. A comparison and detailed investigation is given in Section~\ref{sec:numerics}. Note that, analogously to the DRE case, the peer method can be applied to differential Lyapunov equations or any other differential matrix equation. The application to differential Lyapunov equations is presented in~\cite{Lan17}. For solving the AREs, in general, any solution method suitable for sparse large-scale problems can be applied. A detailed overview can, e.g., be found in~\cite{BenS13,Sim16}. In this contribution, Newton's method is going to be used in order to find a solution to the AREs~(\ref{eq:peerARE}) arising within the peer scheme~(\ref{eq:peer2}). Following~\cite{Kle68,LanR95}, Newton's method applied to the AREs~(\ref{eq:peerARE}) results in the solution of an algebraic Lyapunov equation \begin{align} {{}\ensuremath{\hat{A}}_{k,i}^{(\ell)}}^TX_{k,i}^{(\ell)}+X_{k,i}^{(\ell)}\ensuremath{\hat{A}}_{k,i}^{(\ell)} =-\ensuremath{\tilde{W}}_{k,i}-X_{k,i}^{(\ell-1)}\ensuremath{\tilde{S}}_{k,i}X_{k,i}^{(\ell-1)} \label{eq:ALE_implPeer} \end{align} with \begin{align*} \ensuremath{\hat{A}}_{k,i}^{(\ell)}&=\ensuremath{\tilde{A}}_{k,i}-\ensuremath{\tilde{S}}_{k,i}X_{k,i}^{(\ell-1)} \end{align*} at each step $\ell$ of the Newton iteration. Thus, the solution of \eqref{eq:DRE} using the implicit peer scheme~\eqref{eq:peer2} boils down to the solution of a sequence of ALEs at every time step of the integration scheme.
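Structurally, this is the classical Kleinman iteration: each Newton step replaces the quadratic ARE by a linear Lyapunov equation in the closed-loop matrix. A minimal dense sketch for a generic small ARE $\tilde{A}^{T}X+X\tilde{A}-X\tilde{S}X+\tilde{W}=0$ (illustrative matrices, none of the low-rank machinery of Section~\ref{sec:low-rank}):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative small ARE data: At stable, St = B B^T >= 0, Wt = C^T C >= 0.
At = np.array([[-1.0, 0.3], [0.0, -1.5]])
B = np.array([[1.0], [1.0]])
C = np.array([[2.0, 0.0]])
St, Wt = B @ B.T, C.T @ C

X = np.zeros_like(At)          # X^(0) = 0 keeps At - St X^(0) stable
for _ in range(25):            # Newton (Kleinman) iteration
    Ahat = At - St @ X         # closed-loop matrix for this Newton step
    # Each step solves the ALE  Ahat^T X + X Ahat = -Wt - X St X.
    X = solve_continuous_lyapunov(Ahat.T, -(Wt + X @ St @ X))

residual = At.T @ X + X @ At - X @ St @ X + Wt
print(np.linalg.norm(residual))
```

With a stabilizing initial guess (here $X^{(0)}=0$ and $\tilde{A}$ stable), the iteration converges quadratically to the stabilizing ARE solution.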
\subsection{Rosenbrock-Type Peer Methods}\label{sec:RosPeer} \subsubsection{Standard Representation}\label{sec:stand-repr} For the implicit peer methods applied to the DRE, a number of AREs has to be solved. In order to avoid the solution of these nonlinear matrix equations, we also consider linearly implicit peer methods in terms of the two-step Rosenbrock-type peer schemes \begin{align} \begin{aligned} (I-\tau_{k}g_{i,i}J_{k})X_{k,i}=&\sum_{j=1}^{s}b_{i,j}X_{k-1,j}+ \tau_{k}\sum_{j=1}^{s}a_{i,j}\left(F(t_{k-1,j},X_{k-1,j})-J_{k}X_{k-1,j}\right)\\ &+\tau_{k}J_{k}\sum_{j=1}^{i-1}g_{i,j}X_{k,j}, \end{aligned}\label{eq:peerRos} \end{align} introduced in~\cite{PodWS05}. As for the implicit schemes, here we consider methods with\\ \mbox{$g_{1,1}=\dots=g_{s,s}=\gamma$}. For the comprehensive derivation of coefficients $a_{i,j},~b_{i,j}$ and $g_{i,j}$ that result in stable schemes~(\ref{eq:peerRos}), for arbitrary step size ratios, we refer to~\cite[Section 3]{PodWS05}. Here, $J_{k}$ denotes the Jacobian of $F$ at $(t_{k},X_{k})$, represented by the Fr\'{e}chet derivative \begin{align} J_{k}:=\frac{\partial \ensuremath{\mathcal{R}}}{\partial X}(t_k,X_k): U \rightarrow (A_k-S_kX_k)^TU+U(A_k-S_kX_k).\label{eq:frechet} \end{align}
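Since $\ensuremath{\mathcal{R}}$ is quadratic in $X$, the expansion $\ensuremath{\mathcal{R}}(X+U)=\ensuremath{\mathcal{R}}(X)+J_{k}(U)-USU$ holds exactly, and $J_{k}$ can be applied as an operator without ever forming an $n^{2}\times n^{2}$ matrix. A small numpy check of this identity on random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = S @ S.T      # symmetric S
W = rng.standard_normal((n, n)); W = W @ W.T      # symmetric W
X = rng.standard_normal((n, n)); X = X + X.T      # symmetric base point
U = rng.standard_normal((n, n)); U = U + U.T      # symmetric direction

def R(X):
    """Riccati operator R(X) = A^T X + X A - X S X + W."""
    return A.T @ X + X @ A - X @ S @ X + W

def J(U):
    """Frechet derivative of R at X applied to U; no Jacobian matrix is formed."""
    Ahat = A - S @ X
    return Ahat.T @ U + U @ Ahat

# R is quadratic, so its Taylor expansion terminates after the second term:
lhs = R(X + U)
rhs = R(X) + J(U) - U @ S @ U
print(np.linalg.norm(lhs - rhs))
```

This matrix-free application of $J_{k}$ is exactly what makes the linearly implicit schemes feasible for large sparse $A_k$.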
Now, replacing the Jacobian $J_{k}$ in~(\ref{eq:peerRos}) by~(\ref{eq:frechet}), the procedure for the solution of the DRE reads \begin{align} \begin{aligned} \ensuremath{\tilde{A}}_{k,i}^{T}X_{k,i}&+X_{k,i}\ensuremath{\tilde{A}}_{k,i}=-\ensuremath{\tilde{W}}_{k,i},\quad i=1,\dots,s,\\ \ensuremath{\tilde{W}}_{k,i}=&\sum_{j=1}^{s}b_{i,j}X_{k-1,j} +\tau_{k}\sum_{j=1}^{s}a_{i,j}\left(\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) -(\ensuremath{\hat{A}}_{k}^{T}X_{k-1,j}+X_{k-1,j}\ensuremath{\hat{A}}_{k})\right)\\ &+\tau_{k}\sum_{j=1}^{i-1}g_{i,j}(\ensuremath{\hat{A}}_{k}^{T}X_{k,j}+X_{k,j}\ensuremath{\hat{A}}_{k}) \end{aligned}\label{eq:ALE_RosPeer} \end{align} with the matrices $\ensuremath{\hat{A}}_{k}=A_{k}-S_{k}X_{k}$ and $\ensuremath{\tilde{A}}_{k,i}=\tau_{k}g_{i,i}\ensuremath{\hat{A}}_{k}-\frac{1}{2}I$. \subsubsection{Reformulation to Avoid Jacobian Applications} \label{sec:jacob-avoid-reform} The Rosenbrock-type peer scheme~(\ref{eq:peerRos}) involves the solution of an ALE at each stage. The right hand sides of these ALEs particularly require the application of the Jacobian $J_{k}$ to the sums $\sum_{j=1}^{s}a_{i,j}X_{k-1,j}$ and $\sum_{j=1}^{i}g_{i,j}X_{k,j}$ of the previous and current solution approximations, respectively. In order to at least avoid the application of the Jacobian to the sum of the new variables $X_{k,j}$, a reformulation, similar to what is standard for the classical Rosenbrock schemes, see, e.g.,~\cite[Chapter IV.7]{HaiW02}, based on the variables \begin{align} Y_{k,i}=\sum_{j=1}^{i}g_{i,j}X_{k,j},~i=1,\dots,s\Leftrightarrow \ensuremath{\bm{Y}}_{k}=(G\otimes I_{n})\ensuremath{\bm{X}}_{k}\label{eq:auxvar_peer} \end{align} can be derived. Here, $\ensuremath{\bm{X}}_{k}=(X_{k,i})_{i=1}^{s},~\ensuremath{\bm{Y}}_{k}=(Y_{k,i})_{i=1}^{s}\in\R^{sn\times n}$ and $\otimes$ denotes the Kronecker product.
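Since $G$ is lower triangular with nonzero diagonal, and hence invertible, any linear combination of the $X_{k,j}$ can be rewritten as a linear combination of the $Y_{k,j}$ with transformed coefficients $(a_{i,j})G^{-1}$; this is the mechanism exploited below. A quick numpy sanity check with random, purely illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 3, 4
Aco = rng.standard_normal((s, s))                 # coefficients (a_{i,j})
# Lower triangular G with diagonal bounded away from zero (g_{i,i} != 0).
G = np.tril(rng.standard_normal((s, s)), k=-1) + np.diag(2.0 + rng.random(s))

Xs = rng.standard_normal((s, n, n))               # stacked variables X_{j}
Ys = np.einsum('ij,jab->iab', G, Xs)              # Y = (G (x) I) X

Amod = Aco @ np.linalg.inv(G)                     # transformed (a_{i,j}) G^{-1}
lhs = np.einsum('ij,jab->iab', Aco, Xs)           # sum_j a_{i,j} X_j
rhs = np.einsum('ij,jab->iab', Amod, Ys)          # sum_j a'_{i,j} Y_j
print(np.linalg.norm(lhs - rhs))
```

The two stacked sums agree exactly up to roundoff, confirming that the change of variables only reshuffles coefficients.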
Provided that $g_{i,i}\neq 0,~\forall i$, the lower triangular matrix $G=(g_{i,j})$ is non-singular and the original variables $X_{k,i}$ can be recovered from the relation \begin{align} \ensuremath{\bm{X}}_{k}=(G^{-1}\otimes I_{n})\ensuremath{\bm{Y}}_{k}\Leftrightarrow X_{k,i}=\sum_{j=1}^{i}\ensuremath{\bm{g}}_{i,j}Y_{k,j},~i=1,\dots,s\label{eq:origvar_peer} \end{align} where $G^{-1}=(\ensuremath{\bm{g}}_{i,j})$ and $\ensuremath{\bm{g}}_{i,i}=\frac{1}{g_{i,i}}$. Then, from~(\ref{eq:origvar_peer}), we obtain \begin{align} \begin{aligned} \sum_{j=1}^{s}\!a_{i,j}X_{k-1,j}, i=1,\dots,s \Leftrightarrow\quad \!&~((a_{i,j})\otimes I)\ensuremath{\bm{X}}_{k-1}&\\ \!=&~((a_{i,j})\otimes I)(G^{-1}\!\otimes I)\ensuremath{\bm{Y}}_{k-1}&\\ \!=&~((a_{i,j})G^{-1}\otimes I)\ensuremath{\bm{Y}}_{k-1} &\!\!\!\!\!\Leftrightarrow\!\sum_{j=1}^{s}\!\ensuremath{\bm{a}}_{i,j}Y_{k-1,j},i=1,\dots,s \end{aligned}\label{eq:modsumPeer} \end{align} with the coefficients \begin{align} \begin{aligned} (\ensuremath{\bm{a}}_{i,j})=(a_{i,j})G^{-1} \end{aligned}\label{auxcoeff_peer} \end{align} and analogously, for the sum $\sum_{j=1}^{s}b_{i,j}X_{k-1,j}$, we have \begin{align*} \sum_{j=1}^{s}b_{i,j}X_{k-1,j}=\sum_{j=1}^{s}\ensuremath{\bm{b}}_{i,j}Y_{k-1,j},~i=1,\dots,s \end{align*} with $(\ensuremath{\bm{b}}_{i,j})=(b_{i,j})G^{-1}$. Now, inserting the auxiliary variables~(\ref{eq:auxvar_peer}) into~(\ref{eq:peerRos}) and dividing the result by $\tau_{k}$, the linearly implicit scheme can be reformulated to \begin{align} \begin{aligned} \left(\frac{1}{\tau_{k}g_{i,i}}I-J_{k}\right)Y_{k,i}=& ~\sum_{j=1}^{s}\frac{\ensuremath{\bm{b}}_{i,j}}{\tau_{k}}Y_{k-1,j}+ \sum_{j=1}^{s}a_{i,j}F(t_{k-1,j},\sum_{\ell=1}^{j}\ensuremath{\bm{g}}_{j,\ell}Y_{k-1,\ell})\\ &-J_{k}\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}Y_{k-1,j} -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j},\quad i=1,\dots,s.
\end{aligned}\label{eq:auxpeerRos} \end{align} Again, replacing the Jacobian $J_{k}$ by~(\ref{eq:frechet}), the modified Rosenbrock-type scheme, applied to the DRE, reads \begin{align} \begin{aligned} \ensuremath{\tilde{A}}_{k,i}^{T}Y_{k,i}&+Y_{k,i}\ensuremath{\tilde{A}}_{k,i}=-\ensuremath{\tilde{W}}_{k,i},\\ \ensuremath{\tilde{W}}_{k,i}=&\sum_{j=1}^{s}\frac{\ensuremath{\bm{b}}_{i,j}}{\tau_{k}}Y_{k-1,j} +\sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},\sum_{\ell=1}^{j}\ensuremath{\bm{g}}_{j,\ell}Y_{k-1,\ell})\\ &-\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}(\ensuremath{\hat{A}}_{k}^{T}Y_{k-1,j}+Y_{k-1,j}\ensuremath{\hat{A}}_{k}) -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j} \end{aligned}\label{eq:ALE_mRosPeer} \end{align} with $\ensuremath{\hat{A}}_{k}$ from the original scheme and $\ensuremath{\tilde{A}}_{k,i}=\ensuremath{\hat{A}}_{k}-\frac{1}{2\tau_{k}g_{i,i}}I$. Recall that the introduction of the auxiliary variables makes it possible to avoid applying the Jacobian $J_{k}$ to the sum of the current stage variables. Still, the application to the sum of the previously determined peer variables remains. Moreover, in contrast to the classical Rosenbrock methods, the original solution approximations $X_{k,i}$ have to be reconstructed from the auxiliary variables by~(\ref{eq:origvar_peer}). That is, the reconstruction doubles the online storage requirement, since both the solution approximations $X_{k,i}$ and the corresponding auxiliary variables $Y_{k,i},~i=1,\dots,s$, have to be stored during the runtime of the integration method. Summarizing, the linearly implicit peer methods result in the solution of $s$ ALEs, just like the classical Rosenbrock methods~\cite{Men07,BenM13}, but directly compute the sought solutions instead of additional stage variables. Moreover, the additional stage variables from the classical Rosenbrock methods have a low stage order and therefore the integration procedures may suffer from order reduction.
The computation of peer variables in the Rosenbrock-type peer scheme, sharing the same accuracy and stability properties, can overcome this well-known disadvantage~\cite{PodWS05} and again allows us to use larger time steps compared to the classical Rosenbrock methods. \section{Efficient Solution using Low-Rank Representations} \label{sec:low-rank} As mentioned in the introduction, for small-scale problems, the implicit and Rosenbrock-type peer methods can directly be applied to the DRE, in general resulting in dense solutions. Thus, the explicit computation of the solution is not recommended for large-scale applications. Based on the observation that the solution to the ALEs in the innermost iteration often is of low numerical rank~\cite{AntSZ02,Gra04,Pen00a,TruV07}, the literature provides a number of solution methods for large-scale ALEs based on low-rank versions of the alternating directions implicit (ADI) iteration and Krylov subspace methods. First developments considered two-term LRCFs for both ALE solution philosophies. Most recent improvements can, e.g., be found in~\cite{BenKS13a,BenKS13b,BenKS14,Kue16} and~\cite{JaiK94,StyS12,DruSZ14}, respectively. Three-term LRSIF based formulations of these solution strategies have first been investigated for the more general case of Sylvester equations~\cite{BenLT09}. The specific application to ALEs is extensively studied in~\cite{LanMS14,LanMS15,Lan17}. The latter factorization is of major importance for the efficient solution of differential matrix equations. That is, the LRSIF makes it possible to avoid the complex data and arithmetic that arise within the classical low-rank two-term representation of the ALEs within the classical implicit integration schemes of order $\geq 2$. Note that for the implicit and Rosenbrock-type peer schemes complex data and arithmetic, in general, already occur for order \(1\).
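The complex-data issue can be seen on a toy indefinite right hand side: a two-term factorization $M=ZZ^{T}$ forces complex columns in $Z$ wherever $M$ has negative eigenvalues, whereas the three-term $M=LDL^{T}$ form stays in real arithmetic. A sketch using \texttt{scipy.linalg.ldl} on an illustrative $2\times 2$ matrix:

```python
import numpy as np
from scipy.linalg import ldl

# A symmetric but indefinite matrix (det < 0, so one negative eigenvalue).
M = np.array([[2.0,  1.0],
              [1.0, -3.0]])

# Two-term factorization M = Z Z^T via the eigendecomposition: square
# roots of the negative eigenvalue force Z into complex arithmetic.
w, V = np.linalg.eigh(M)
Z = V @ np.diag(np.sqrt(w.astype(complex)))
assert np.iscomplexobj(Z)                  # complex data unavoidable here

# Three-term M = L D L^T (Bunch-Kaufman): everything stays real.
L, D, perm = ldl(M)
print(np.linalg.norm(L @ D @ L.T - M))
```

Both factorizations reconstruct $M$, but only the $LDL^{T}$ form avoids complex storage, which is the motivation for the LRSIF throughout this section.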
The LRSIF has been shown to deliver considerably better performance with respect to computation times and storage requirements in most applications. Note that there are some exceptions, see~\cite{LanMS15,Lan17} for details. Still, for the numerical experiments in Section~\ref{sec:numerics}, the algorithms used are restricted to the LRSIF based schemes. Moreover, we restrict ourselves to implementations using the ADI iteration for the solution of the innermost ALEs. In order to exploit the low-rank phenomenon, a suitable low-rank representation of the right hand sides of the ALEs~(\ref{eq:ALE_implPeer}) and (\ref{eq:ALE_RosPeer})/(\ref{eq:ALE_mRosPeer}) within the implicit and linearly implicit Rosenbrock-type peer schemes, respectively, has to be found. In what follows, the LRSIF representations are presented. A detailed derivation of the LRCF based strategy and an extension to generalized DREs, also for the LRSIF approach, can be found in~\cite{Lan17}. For the remainder, we define the mapping \begin{align*} H:\R^{n\times n}\rightarrow \R^{2n\times 2n}, \quad H:I\mapsto H(I)= \begin{bmatrix} 0&I\\ I&0 \end{bmatrix}.
\end{align*} \subsection{Low-Rank Implicit Peer Scheme} For the solution of the DRE~(\ref{eq:DRE}) by implicit peer schemes, the main ingredient is to solve the algebraic Lyapunov equation \begin{align} \begin{aligned} &{{}\ensuremath{\hat{A}}_{k,i}^{(\ell)}}^TX_{k,i}^{(\ell)}+X_{k,i}^{(\ell)}\ensuremath{\hat{A}}_{k,i}^{(\ell)} =-\ensuremath{\tilde{W}}_{k,i}-\tau_{k}g_{i,i}X_{k,i}^{(\ell-1)}S_{k,i}X_{k,i}^{(\ell-1)},\\ &\ensuremath{\tilde{A}}_{k,i}=\tau_{k}g_{i,i}A_{k,i}-\frac{1}{2}I,\quad \ensuremath{\tilde{S}}_{k,i}=\tau_{k}g_{i,i}S_{k,i},\\ &\ensuremath{\hat{A}}_{k,i}^{(\ell)}=\ensuremath{\tilde{A}}_{k,i}-\ensuremath{\tilde{S}}_{k,i}X_{k,i}^{(\ell-1)},\\ &\ensuremath{\tilde{W}}_{k,i}=\tau_{k}g_{i,i}W_{k,i}\!+\!\sum_{j=1}^{s}b_{i,j}X_{k-1,j} \!+\!\tau_{k}\!\sum_{j=1}^{s}\!a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) \!+\!\tau_{k}\!\sum_{j=1}^{i-1}\!g_{i,j}\ensuremath{\mathcal{R}}(t_{k,j},X_{k,j}) \end{aligned}\label{eq:RHS_implpeer} \end{align} within every Newton step $\ell$ at each time step $t_{k}\to t_{k+1}$. Using low-rank versions of the ADI method, this requires the right hand side to be given in low-rank form as well. Provided $S_{k,i}$ and $W_{k,i}$ in the DRE~(\ref{eq:DRE}) are given in the form \begin{align*} S_{k,i}=B_{k,i}B_{k,i}^{T},\quad W_{k,i}=C_{k,i}^{T}C_{k,i} \end{align*} with $B_{k,i}\in\ensuremath{\R^{n\times m}}$ and $C_{k,i}\in\ensuremath{\R^{q\times n}}$, $m,q\ll n$, the right hand side of the ALE can also be written in factored form. Assume that the previous solution approximations $X_{k,j}$ admit a decomposition of the form $X_{k,j}=L_{k,j}D_{k,j}L_{k,j}^{T}$ with $L_{k,j}\in\R^{n\times n_{L_{k,j}}},~D_{k,j}\in\R^{n_{L_{k,j}}\times n_{L_{k,j}}}$ such that the right hand side of~(\ref{eq:RHS_implpeer}) can be written in the form $-G_{k,i}^{(\ell)}S_{k,i}^{(\ell)}{{}G_{k,i}^{(\ell)}}^{T}$.
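The symmetric indefinite factorization of the Riccati operator derived next can be verified numerically: for $X=LDL^{T}$ one has $\ensuremath{\mathcal{R}}=\ensuremath{\mathcal{T}}\ensuremath{\mathcal{M}}\ensuremath{\mathcal{T}}^{T}$ with the thin factor $\ensuremath{\mathcal{T}}=[C^{T},A^{T}L,L]$. A numpy sketch on random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q, m, r = 6, 2, 1, 3                      # r = rank of X = L D L^T
A = rng.standard_normal((n, n))
Bm = rng.standard_normal((n, m))
Cm = rng.standard_normal((q, n))
L = rng.standard_normal((n, r))
D = np.diag(rng.standard_normal(r))
X = L @ D @ L.T

# Riccati operator value R(X) = C^T C + A^T X + X A - X B B^T X ...
Rval = Cm.T @ Cm + A.T @ X + X @ A - X @ Bm @ Bm.T @ X

# ... rewritten as T M T^T with the thin factor T = [C^T, A^T L, L].
T = np.hstack([Cm.T, A.T @ L, L])
M = np.zeros((q + 2 * r, q + 2 * r))
M[:q, :q] = np.eye(q)                        # identity block for C^T C
M[q:q + r, q + r:] = D                       # off-diagonal D blocks give
M[q + r:, q:q + r] = D                       # A^T X + X A
M[q + r:, q + r:] = -D @ L.T @ Bm @ Bm.T @ L @ D   # quadratic term
print(np.linalg.norm(T @ M @ T.T - Rval))
```

The factor $\ensuremath{\mathcal{T}}$ has only $q+2r$ columns, so the Riccati residual is represented at low-rank cost while the indefiniteness is absorbed into the small middle block $\ensuremath{\mathcal{M}}$.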
In order to find such a symmetric indefinite decomposition of the entire right hand side, we first define a factorization for the Riccati operator $\ensuremath{\mathcal{R}}(.,.)$ in the form \begin{align} \begin{aligned} \ensuremath{\mathcal{R}}(t_{k,j},\!X_{k,j})&\!=\!C_{k,j}^{T}C_{k,j}\!+\!A_{k,j}^{T}X_{k,j}\! +\!\!X_{k,j}A_{k,j} \!-\!\!X_{k,j}B_{k,j}B_{k,j}^{T}X_{k,j}\!=\!\ensuremath{\mathcal{T}}_{k,j}\ensuremath{\mathcal{M}}_{k,j}\ensuremath{\mathcal{T}}_{k,j}^{T},\\ \ensuremath{\mathcal{T}}_{k,j}&= \begin{bmatrix} C_{k,j}^{T},&A_{k,j}^{T}L_{k,j},&L_{k,j} \end{bmatrix}\in\R^{n\times (q+2n_{L_{k,j}})},\\ \ensuremath{\mathcal{M}}_{k,j}&= \begin{bmatrix} I_q&0&0\\ 0&0&D_{k,j}\\ 0&\quad D_{k,j}&\quad -D_{k,j}L_{k,j}^TB_{k,j}B_{k,j}^TL_{k,j}D_{k,j}\\ \end{bmatrix}\!\in\R^{(q+2n_{L_{k,j}})\times (q+2n_{L_{k,j}})}. \end{aligned}\label{eq:RiccOp_peerLR} \end{align} For a more detailed derivation, we refer to~\cite{Lan17}. Then, applying~(\ref{eq:RiccOp_peerLR}) to the right hand side \begin{align*} -\ensuremath{\tilde{W}}_{k,i}&-\tau_{k}g_{i,i}X_{k,i}^{(\ell-1)}S_{k,i}X_{k,i}^{(\ell-1)}, \end{align*} of the ALE from Equation~(\ref{eq:RHS_implpeer}), the decomposition \(G_{k,i}^{(\ell)}S_{k,i}^{(\ell)}{{}G_{k,i}^{(\ell)}}^{T}\) can be formulated with the factors \begin{align*} G_{k,i}^{(\ell)}=& \begin{bmatrix} C_{k,i}^{T},&L_{k-1,1},\dots,L_{k-1,s},&\ensuremath{\mathcal{T}}_{k-1,1},\dots,\ensuremath{\mathcal{T}}_{k-1,s},& \ensuremath{\mathcal{T}}_{k,1},\dots,\ensuremath{\mathcal{T}}_{k,i-1},&X_{k,i}^{(\ell-1)}B_{k,i} \end{bmatrix},\\ S_{k,i}^{(\ell)}=& \ensuremath{\operatorname{diag}}\left(\tau_{k}g_{i,i}I_{q},~ b_{i,1}D_{k-1,1},\dots,b_{i,s}D_{k-1,s},~ \tau_{k}a_{i,1}\ensuremath{\mathcal{M}}_{k-1,1},\dots,\tau_{k}a_{i,s}\ensuremath{\mathcal{M}}_{k-1,s},\right.\\ ~&\qquad\left.\tau_{k}g_{i,1}\ensuremath{\mathcal{M}}_{k,1},\dots,\tau_{k}g_{i,i-1}\ensuremath{\mathcal{M}}_{k,i-1},~ \tau_{k}g_{i,i}I_{m}\right) \end{align*} The desired factor $G_{k,i}^{(\ell)}$
is of column size \begin{align*} &~q+\sum_{j=1}^{s}n_{L_{k-1,j}}+\sum_{j=1}^{s}(q+2n_{L_{k-1,j}}) +\sum_{j=1}^{i-1}(q+2n_{L_{k,j}})+n_{L_{k,i}^{(\ell-1)}}\\ =&~(s+i)q+3\sum_{j=1}^{s}n_{L_{k-1,j}}+2\sum_{j=1}^{i-1}n_{L_{k,j}}+m. \end{align*} For autonomous systems with constant system matrices, the inner ALE becomes \begin{align*} {{}\ensuremath{\hat{A}}_{k,i}^{(\ell)}}^TX_{k,i}^{(\ell)}+X_{k,i}^{(\ell)}\ensuremath{\hat{A}}_{k,i}^{(\ell)} =-\ensuremath{\tilde{W}}_{k,i}-\tau_{k}g_{i,i}X_{k,i}^{(\ell-1)}BB^{T}X_{k,i}^{(\ell-1)}= -G_{k,i}^{(\ell)}S_{k,i}^{(\ell)}{{}G_{k,i}^{(\ell)}}^{T}, \end{align*} where $\ensuremath{\hat{A}}_{k,i}$ and the right hand side factors $G_{k,i}^{(\ell)},~S_{k,i}^{(\ell)}$ are given by \begin{align*} \ensuremath{\hat{A}}_{k,i}^{(\ell)}&=\!\!\tau_{k}g_{i,i}(A-BB^{T}X_{k,i}^{(\ell-1)})-\frac{1}{2}I\\ G_{k,i}^{(\ell)}&= \begin{bmatrix} C^{T},&L_{k-1,1},\dots,L_{k-1,s},&\ensuremath{\mathcal{T}}_{k-1,1},\dots,\ensuremath{\mathcal{T}}_{k-1,s},& \ensuremath{\mathcal{T}}_{k,1},\dots,\ensuremath{\mathcal{T}}_{k,i-1},&X_{k,i}^{(\ell-1)}B \end{bmatrix},\\ S_{k,i}^{(\ell)}=& \ensuremath{\operatorname{diag}}\left(\tau_{k}(\sum_{j=1}^{s}a_{i,j}+\sum_{j=1}^{i}g_{i,j})I_{q},~ b_{i,1}D_{k-1,1},\dots,b_{i,s}D_{k-1,s},\right.\\ ~&~~\qquad\left.\tau_{k}a_{i,1}\ensuremath{\mathcal{M}}_{k-1,1},\dots,\tau_{k}a_{i,s}\ensuremath{\mathcal{M}}_{k-1,s},~ \tau_{k}g_{i,1}\ensuremath{\mathcal{M}}_{k,1},\dots,\tau_{k}g_{i,i-1}\ensuremath{\mathcal{M}}_{k,i-1},~ \tau_{k}g_{i,i}I_{m}\vphantom{\sum_{j=1}^{i}}\right) \end{align*} where the factors \(\ensuremath{\mathcal{T}}_{k,j},\ensuremath{\mathcal{M}}_{k,j}\) simplify to \begin{align*} \ensuremath{\mathcal{T}}_{k,j}&= \begin{bmatrix} A^{T}L_{k,j},&L_{k,j} \end{bmatrix}\in\R^{n\times 2n_{L_{k,j}}},\\ \ensuremath{\mathcal{M}}_{k,j}&= \begin{bmatrix} 0&D_{k,j}\\ D_{k,j}&\quad -D_{k,j}L_{k,j}^TB_{k,j}B_{k,j}^TL_{k,j}D_{k,j}\\ \end{bmatrix}\in\R^{2n_{L_{k,j}}\times 2n_{L_{k,j}}}. 
\end{align*} Then, the right hand side factor $G_{k,i}^{(\ell)}$ is of column size \begin{align*} q+3\sum_{j=1}^{s}n_{L_{k-1,j}}+2\sum_{j=1}^{i-1}n_{L_{k,j}}+m. \end{align*} \subsection{Low-Rank Rosenbrock-type Peer Scheme} \subsubsection{Standard Rosenbrock-type Peer Representation} \label{sec:LDL_RosPeer_DRE} For the low-rank symmetric indefinite factorization based solution of a non-autonomous DRE~(\ref{eq:DRE}), using the Rosenbrock-type peer method, we consider the ALE \begin{align} \begin{aligned} \ensuremath{\tilde{A}}_{k,i}^{T}X_{k,i}&+X_{k,i}\ensuremath{\tilde{A}}_{k,i}=-\ensuremath{\tilde{W}}_{k,i},\quad i=1,\dots,s,\\ \ensuremath{\tilde{W}}_{k,i}=&~\sum_{j=1}^{s}b_{i,j}X_{k-1,j} +\tau_{k}\sum_{j=1}^{s}a_{i,j}\left(\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) -(\ensuremath{\hat{A}}_{k}^{T}X_{k-1,j}+X_{k-1,j}\ensuremath{\hat{A}}_{k})\right),\\ &+\tau_{k}\sum_{j=1}^{i-1}g_{i,j}(\ensuremath{\hat{A}}_{k}^{T}X_{k,j}+X_{k,j}\ensuremath{\hat{A}}_{k}), \end{aligned}\label{eq:stRosPeer_ALE} \end{align} where we have $\ensuremath{\hat{A}}_{k}=A_{k}-B_{k}B_{k}^{T}X_{k}, ~\ensuremath{\tilde{A}}_{k,i}=\tau_{k}g_{i,i}\ensuremath{\hat{A}}_{k}-\frac{1}{2}I$. In contrast to small-scale and dense computations, it is recommended to never explicitly form the matrices $\ensuremath{\hat{A}}_{k}$. Therefore, instead we use \begin{align} \begin{aligned} \ensuremath{\hat{A}}_{k}^{T}X_{k-1,j}+X_{k-1,j}\ensuremath{\hat{A}}_{k}=&~A_{k}^{T}X_{k-1,j}+X_{k-1,j}A_{k}\\ &-X_{k}B_{k}B_{k}^{T}X_{k-1,j}-X_{k-1,j}B_{k}B_{k}^{T}X_{k}. 
\end{aligned}\label{eq:lin_expand} \end{align} Using~(\ref{eq:lin_expand}) and further exploiting the structure of the Riccati operators $\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j})$, the right hand side $\ensuremath{\tilde{W}}_{k,i}$ of the standard Rosenbrock-type peer scheme~(\ref{eq:stRosPeer_ALE}) can be reformulated in the form \begin{align*} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=&~\tau_{k}\sum_{j=1}^{i-1}g_{i,j}\left(A_{k}^{T}X_{k,j}+X_{k,j}A_{k} -X_{k}B_{k}B_{k}^{T}X_{k,j}-X_{k,j}B_{k}B_{k}^{T}X_{k}\right)\\ &+\sum_{j=1}^{s}\biggl(\tau_{k}a_{i,j}\bigl(C_{k-1,j}^{T}C_{k-1,j} -X_{k-1,j}B_{k-1,j}B_{k-1,j}^{T}X_{k-1,j}\biggr.\\ &~\biggl.+X_{k}B_{k}B_{k}^{T}X_{k-1,j}+X_{k-1,j}B_{k}B_{k}^{T}X_{k}\bigr) +\check{A}_{k,i,j}^{T}X_{k-1,j}+X_{k-1,j}\check{A}_{k,i,j}\biggr), \end{aligned} \end{align*} where $\check{A}_{k,i,j}=\tau_{k}a_{i,j}(A_{k-1,j}-A_{k})+\frac{b_{i,j}}{2}I$. The matrix $\check{A}_{k,i,j}$ can efficiently be computed, since $A_{k-1,j}$ and $A_{k}$ are sparse matrices and so is $\check{A}_{k,i,j}$. Note that for $j=s$, we have $A_{k-1,s}=A_{k},~B_{k-1,s}=B_{k}$ and $X_{k-1,s}=X_{k}$. Therefore $\check{A}_{k,i,s}=\frac{b_{i,s}}{2}I$ and the right hand side at every stage $i=1,\dots,s$ reduces to \begin{align*} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=&~\tau_{k}\sum_{j=1}^{i-1}g_{i,j}\left(A_{k}^{T}X_{k,j}+X_{k,j}A_{k} -X_{k}B_{k}B_{k}^{T}X_{k,j}-X_{k,j}B_{k}B_{k}^{T}X_{k}\right)\\ &+\sum_{j=1}^{s-1}\biggl(\tau_{k}a_{i,j}\bigl(C_{k-1,j}^{T}C_{k-1,j} -X_{k-1,j}B_{k-1,j}B_{k-1,j}^{T}X_{k-1,j}\biggr.\\ &~\biggl.+X_{k}B_{k}B_{k}^{T}X_{k-1,j}+X_{k-1,j}B_{k}B_{k}^{T}X_{k}\bigr) +\check{A}_{k,i,j}^{T}X_{k-1,j}+X_{k-1,j}\check{A}_{k,i,j}\biggr)\\ &+\tau_{k}a_{i,s}\left(C_{k}^{T}C_{k}+X_{k}B_{k}B_{k}^{T}X_{k}\right) +b_{i,s}X_{k}. \end{aligned} \end{align*} Also, we see that a considerable number of quadratic terms share the product $X_{k}B_{k}$ or its transpose. 
Combining these expressions, we obtain the formulation \begin{align} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=&~\tau_{k}\sum_{j=1}^{i-1}g_{i,j}\left(A_{k}^{T}X_{k,j}+X_{k,j}A_{k}\right) +X_{k}B_{k}K_{k,i}^{T}+K_{k,i}B_{k}^{T}X_{k}\\ &+\sum_{j=1}^{s-1}\biggl(\tau_{k}a_{i,j}\bigl(C_{k-1,j}^{T}C_{k-1,j} -X_{k-1,j}B_{k-1,j}B_{k-1,j}^{T}X_{k-1,j}\bigr)\\ &+\check{A}_{k,i,j}^{T}X_{k-1,j}+X_{k-1,j}\check{A}_{k,i,j}\biggr) +\tau_{k}a_{i,s}C_{k}^{T}C_{k}+b_{i,s}X_{k}, \end{aligned}\label{eq:rhs_RosPeer_DRE} \end{align} where \begin{align*} K_{k,i}=\tau_{k}\left(\sum_{j=1}^{s-1}a_{i,j}X_{k-1,j}+\frac{a_{i,s}}{2}X_{k} -\sum_{j=1}^{i-1}g_{i,j}X_{k,j}\right)B_{k} \end{align*} collects all products, interacting with $X_{k}B_{k}$. Again, the previous solution approximations $X_{k-1,j}=L_{k-1,j}D_{k-1,j}L_{k-1,j}^{T},j=1,\dots,s$, $X_{k}=L_{k}D_{k}L_{k}^{T}$ with \(L_{k}=L_{k-1,s},D_{k}=D_{k-1,s}\) and $X_{k,j}=L_{k,j}D_{k,j}L_{k,j}^{T},j=1,\dots,i-1$, are assumed to be given in low-rank format. Then, defining the matrices \begin{align*} \ensuremath{\mathcal{T}}_{k,j}&= \begin{bmatrix} A_{k}^{T}L_{k,j},&L_{k,j} \end{bmatrix}\in\R^{n\times 2n_{L_{k,j}}},\ \ensuremath{\mathcal{M}}_{k,j}=\tau_{k}g_{i,j}H(D_{k,j}) \in\R^{2n_{L_{k,j}} \times 2n_{L_{k,j}}},\\ \check{\ensuremath{\mathcal{T}}}_{k,i,j}&= \begin{bmatrix} C_{k-1,j}^{T},&\check{A}_{k,i,j}^{T}L_{k-1,j},&L_{k-1,j} \end{bmatrix},\\ \check{\ensuremath{\mathcal{M}}}_{k,i,j}&= \begin{bmatrix} \tau_{k}a_{i,j}I_{q}&0&0\\ 0&0&D_{k-1,j}\\ 0&\quad D_{k-1,j}&\quad -\tau_{k}a_{i,j}D_{k-1,j}L_{k-1,j}^{T}B_{k-1,j}B_{k-1,j}^{T}L_{k-1,j}D_{k-1,j} \end{bmatrix} \end{align*} with \(\check{\ensuremath{\mathcal{T}}}_{k,i,j}\in\R^{n\times (q+2n_{L_{k-1,j}})}\), \(\check{\ensuremath{\mathcal{M}}}_{k,i,j}\in\R^{(q+2n_{L_{k-1,j}}) \times (q+2n_{L_{k-1,j}})}\), the low-rank symmetric indefinite factorization \(\ensuremath{\tilde{W}}_{k,i}=G_{k,i}S_{k,i}G_{k,i}^{T}\) of~(\ref{eq:rhs_RosPeer_DRE}) is given by \begin{align*} G_{k,i}=& \begin{bmatrix} 
\ensuremath{\mathcal{T}}_{k,1},\dots,\ensuremath{\mathcal{T}}_{k,i-1},&X_{k}B_{k},&K_{k,i}, &\check{\ensuremath{\mathcal{T}}}_{k,i,1},\dots,\check{\ensuremath{\mathcal{T}}}_{k,i,s-1},&C_{k}^{T},&L_{k} \end{bmatrix},\\ S_{k,i}=&\ensuremath{\operatorname{diag}} \left(\ensuremath{\mathcal{M}}_{k,1},\dots,\ensuremath{\mathcal{M}}_{k,i-1},~H(I_{m}),~ \check{\ensuremath{\mathcal{M}}}_{k,i,1},\dots,\check{\ensuremath{\mathcal{M}}}_{k,i,s-1},~ \tau_{k}a_{i,s}I_{q},~ b_{i,s}D_{k}\right) \end{align*} with $G_{k,i}$ being of column size \begin{align} \begin{aligned} &~\sum_{j=1}^{i-1}2n_{L_{k,j}}+2m+\sum_{j=1}^{s-1}(q+2n_{L_{k-1,j}})+q+n_{L_{k}}\\ =&~2\sum_{j=1}^{i-1}n_{L_{k,j}}+2\sum_{j=1}^{s-1}n_{L_{k-1,j}}+n_{L_{k}}+sq+2m. \end{aligned}\label{eq:clmsz_RosPeer} \end{align} In the autonomous case, we in particular have $A_{k-1,j}=A_{k}=A$. Hence, $\check{A}_{k,i,j}=\frac{b_{i,j}}{2}I$, $i,j=1,\dots,s$, and together with the modifications for $j=s,~X_{k-1,s}=X_{k}$, the right hand side $\ensuremath{\tilde{W}}_{k,i}$ in~(\ref{eq:rhs_RosPeer_DRE}) becomes \begin{align*} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=& \tau_{k}\sum_{j=1}^{i-1}g_{i,j}\left(A_{k}^{T}X_{k,j}+X_{k,j}A_{k}\right) +X_{k}B_{k}K_{k,i}^{T}+K_{k,i}B_{k}^{T}X_{k}\\ &+\sum_{j=1}^{s}\left(\tau_{k}a_{i,j}C^{T}C+b_{i,j}X_{k-1,j}\right) -\sum_{j=1}^{s-1}\tau_{k}a_{i,j}X_{k-1,j}BB^{T}X_{k-1,j}. 
\end{aligned} \end{align*} Then, similarly to the non-autonomous scheme, for the simplified right hand side, we have \begin{align*} G_{k,i}=& \begin{bmatrix} \ensuremath{\mathcal{T}}_{k,1},\dots,\ensuremath{\mathcal{T}}_{k,i-1},&X_{k}B,&K_{k,i}, &C^{T},&L_{k-1,1},\dots,L_{k-1,s-1},&L_{k} \end{bmatrix},\\ S_{k,i}=& \ensuremath{\operatorname{diag}}\left(\ensuremath{\mathcal{M}}_{k,1},\dots,\ensuremath{\mathcal{M}}_{k,i-1},~ H(I_{m}),~ \tau_{k}\sum_{j=1}^{s}a_{i,j}I_{q},~ \ensuremath{\tilde{D}}_{k-1,1},\dots,\ensuremath{\tilde{D}}_{k-1,s-1},~ b_{i,s}D_{k}\right) \end{align*} where \begin{align*} \ensuremath{\mathcal{T}}_{k,j}&= \begin{bmatrix} A_{k}^{T}L_{k,j},&L_{k,j} \end{bmatrix}\in\R^{n\times 2n_{L_{k,j}}},\ \ensuremath{\mathcal{M}}_{k,j}=\tau_{k}g_{i,j}H(D_{k,j}) \in\R^{2n_{L_{k,j}} \times 2n_{L_{k,j}}},\\ \ensuremath{\tilde{D}}_{k-1,j}&=b_{i,j}D_{k-1,j}-\tau_{k}a_{i,j}D_{k-1,j}L_{k-1,j}^{T}BB^{T}L_{k-1,j}D_{k-1,j}. \end{align*} Here, the column size of the factor $G_{k,i}$ is \begin{align} \sum_{j=1}^{i-1}2n_{L_{k,j}}+2m+q+\sum_{j=1}^{s}n_{L_{k-1,j}} =2\sum_{j=1}^{i-1}n_{L_{k,j}}+\sum_{j=1}^{s}n_{L_{k-1,j}}+q+2m.
\label{eq:clmsz_RosPeer_aut} \end{align} \subsubsection{Modified Rosenbrock-type Peer Representation} \label{sec:LDL_mRosPeer_DRE} Now, for the modified Rosenbrock-type peer formulation applied to the non-autonomous DRE, we consider the ALE \begin{align*} \begin{aligned} \ensuremath{\tilde{A}}_{k,i}^{T}Y_{k,i}&+Y_{k,i}\ensuremath{\tilde{A}}_{k,i}=-\ensuremath{\tilde{W}}_{k,i},\quad i=1,\dots,s,\\ \ensuremath{\tilde{W}}_{k,i}=&\sum_{j=1}^{s}\frac{\ensuremath{\bm{b}}_{i,j}}{\tau_{k}}Y_{k-1,j} +\sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},\sum_{\ell=1}^{j}\ensuremath{\bm{g}}_{j,\ell}Y_{k-1,\ell})\\ &-\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}(\ensuremath{\hat{A}}_{k}^{T}Y_{k-1,j}+Y_{k-1,j}\ensuremath{\hat{A}}_{k}) -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j},\\ \ensuremath{\tilde{A}}_{k,i}=&~\ensuremath{\hat{A}}_{k}-\frac{1}{2\tau_{k}g_{i,i}}I,\quad \ensuremath{\hat{A}}_{k}=A_{k}-B_{k}B_{k}^{T}X_{k}. \end{aligned} \end{align*} Note that the matrix $\ensuremath{\hat{A}}_{k}$ is given in terms of $X_{k}$. This is due to the fact that $\ensuremath{\hat{A}}_{k}$ originates from the Jacobian~(\ref{eq:frechet}) that, as in the original scheme, is given as the Fr\'{e}chet derivative of $\ensuremath{\mathcal{R}}(t_{k},X_{k})=\ensuremath{\mathcal{R}}(t_{k-1,s},X_{k-1,s})= \ensuremath{\mathcal{R}}(t_{k-1,s},\sum_{\ell=1}^{s}\ensuremath{\bm{g}}_{s,\ell}Y_{k-1,\ell})$. Thus, instead of explicitly forming $\ensuremath{\hat{A}}_{k}$, again relation~(\ref{eq:lin_expand}) is utilized. For the sake of simplicity, the original variables $X_{k}$ within $\ensuremath{\hat{A}}_{k}$, as well as in the Riccati operators $\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j})$, are kept throughout the computations. As previously mentioned in Section~\ref{sec:RosPeer}, we have to reconstruct the solution from the auxiliary variables anyway. Thus, using both sets of variables does not require additional computations.
To motivate in more detail why the original and auxiliary schemes are mixed, consider the following. From the relation of the original and auxiliary variables, given in~(\ref{eq:origvar_peer}), we have \begin{align*} X_{k-1,j}=\sum_{\ell=1}^{j}\ensuremath{\bm{g}}_{j,\ell}Y_{k-1,\ell}. \end{align*} Further, defining the decomposition $Y_{k-1,\ell}=\ensuremath{\hat{L}}_{k-1,\ell}\ensuremath{\hat{D}}_{k-1,\ell}\ensuremath{\hat{L}}_{k-1,\ell}^{T}$, $\ell=1,\dots,j$ with $\ensuremath{\hat{L}}_{k-1,\ell}\in\R^{n\times n_{\ensuremath{\hat{L}}_{k-1,\ell}}}$, \(\ensuremath{\hat{D}}_{k-1,\ell}\in\R^{n_{\ensuremath{\hat{L}}_{k-1,\ell}}\times n_{\ensuremath{\hat{L}}_{k-1,\ell}}}\), the original solution approximation admits a factorization $X_{k-1,j}= L_{k-1,j}D_{k-1,j}L_{k-1,j}^{T}$, $j=1,\dots,s$, based on the factors \begin{align*} L_{k-1,j}&= \begin{bmatrix} \ensuremath{\hat{L}}_{k-1,1},\dots,\ensuremath{\hat{L}}_{k-1,j} \end{bmatrix},\quad D_{k-1,j}=\ensuremath{\operatorname{diag}}\left(\ensuremath{\bm{g}}_{j,1}\ensuremath{\hat{D}}_{k-1,1},\dots,\ensuremath{\bm{g}}_{j,j}\ensuremath{\hat{D}}_{k-1,j}\right). \end{align*} The factors $L_{k-1,j}\in\R^{n\times n_{L_{k-1,j}}},D_{k-1,j}\in\R^{n_{L_{k-1,j}}\times n_{L_{k-1,j}}}$ are given as a block concatenation of the solution factors of the auxiliary variables $Y_{k-1,\ell}$, $\ell=1,\dots,j$. That is, the column size $n_{L_{k-1,j}}=\sum_{\ell=1}^{j}n_{\ensuremath{\hat{L}}_{k-1,\ell}}$ may grow dramatically with the number of stages and time steps. Still, the numerical rank of the original solution is assumed to be ``small''. Thus, using column compression techniques (see~\cite[Section 6.3]{Lan17}), which are a tacit requirement for large-scale problems anyway, the column size of $L_{k-1,j}$ remains ``small'' as well. To be more precise, the factors $L_{k-1,j}$ and $\ensuremath{\hat{L}}_{k-1,j}$ are expected to be of compatible size.
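The block concatenation of auxiliary factors and a subsequent column compression can be sketched as follows. This is a minimal \textsf{numpy} illustration; the helper names and the QR-plus-eigendecomposition compression are our own and only mimic the $LDL^{T}$-preserving truncation of~\cite[Section 6.3]{Lan17}. The auxiliary factors are made deliberately redundant so that a rank deficiency is actually present.

```python
import numpy as np

def blkdiag(blocks):
    """Assemble a block-diagonal matrix from square blocks."""
    ntot = sum(b.shape[0] for b in blocks)
    out = np.zeros((ntot, ntot))
    i = 0
    for b in blocks:
        out[i:i + b.shape[0], i:i + b.shape[0]] = b
        i += b.shape[0]
    return out

def compress_ldl(L, D, tol=1e-12):
    """Column compression of X = L D L^T: thin QR of the tall factor,
    then an eigendecomposition of the small core R D R^T."""
    Q, R = np.linalg.qr(L)
    w, V = np.linalg.eigh(R @ D @ R.T)
    keep = np.abs(w) > tol * max(np.abs(w).max(), 1.0)
    return Q @ V[:, keep], np.diag(w[keep])

rng = np.random.default_rng(1)
n, r, j = 50, 4, 3
Lh = rng.standard_normal((n, r))     # shared auxiliary factor
Lhs = [Lh, Lh, Lh]                   # redundant on purpose
Dhs = [np.diag(rng.standard_normal(r)) for _ in range(j)]
g = rng.standard_normal(j)           # weights g_{j,l}

# block concatenation: X_{k-1,j} = L D L^T with D = diag(g_l * Dh_l)
L = np.hstack(Lhs)
D = blkdiag([g[l] * Dhs[l] for l in range(j)])
X = L @ D @ L.T

Lc, Dc = compress_ldl(L, D)
print(L.shape[1], "->", Lc.shape[1])   # columns before/after compression
```

The compressed pair reproduces $X$ up to the truncation tolerance while the column count drops to the numerical rank.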
Consequently, one can use either representation where appropriate without unduly increasing the notational or computational complexity of the formulations. However, expanding $\ensuremath{\hat{A}}_{k}$ and combining the linear parts with respect to $Y_{k-1,j}$, the right hand side reads \begin{align} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}\!=&\! -\!\!\sum_{j=1}^{s}\biggl(\check{A}_{k,i,j}^{T}Y_{k-1,j}+Y_{k-1,j}\check{A}_{k,i,j} \!-\!\ensuremath{\bm{a}}_{i,j}\left(X_{k}B_{k}B_{k}^{T}Y_{k-1,j} \!+\!Y_{k-1,j}B_{k}B_{k}^{T}X_{k}\right)\biggr)\\ &+\sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j} \end{aligned}\label{eq:rhs_mRosPeer_DRE} \end{align} with $\check{A}_{k,i,j}=\ensuremath{\bm{a}}_{i,j}A_{k}-\frac{\ensuremath{\bm{b}}_{i,j}}{2\tau_{k}}I$. Then, separating $\ensuremath{\mathcal{R}}(t_{k-1,s},X_{k-1,s})=\ensuremath{\mathcal{R}}(t_{k},X_{k})$ and again combining the quadratic terms including the products $X_{k}B_{k}$, we end up with the formulation \begin{align*} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=&-\sum_{j=1}^{s} \biggl(\check{A}_{k,i,j}^{T}Y_{k-1,j}+Y_{k-1,j}\check{A}_{k,i,j}\biggr) +X_{k}B_{k}K_{k,i}^{T}+K_{k,i}B_{k}^{T}X_{k}\\ &+\sum_{j=1}^{s-1}a_{i,j}\ensuremath{\mathcal{R}}(t_{k-1,j},X_{k-1,j}) +a_{i,s}\left(C_{k}^{T}C_{k}+A_{k}^{T}X_{k}+X_{k}A_{k}\right) -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j},\\ K_{k,i}=&~\left( \sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}Y_{k-1,j}-\frac{a_{i,s}}{2}X_{k} \right)B_{k}.
\end{aligned} \end{align*} Then, the associated symmetric indefinite formulation is given by the factors \begin{align*} G_{k,i}=&~\bigl[ \check{\ensuremath{\mathcal{T}}}_{k-1,i,1},\dots,\check{\ensuremath{\mathcal{T}}}_{k-1,i,s}, ~X_{k}B_{k},~K_{k,i}, ~\ensuremath{\mathcal{T}}_{k-1,1},\dots,\ensuremath{\mathcal{T}}_{k-1,s-1},\\ &~~~C_{k}^{T},A_{k}^{T}L_{k},~L_{k}, ~\ensuremath{\hat{L}}_{k,1},\dots,\ensuremath{\hat{L}}_{k,i-1} \bigr],\\ S_{k,i}=& \ensuremath{\operatorname{diag}}\left(\vphantom{\frac{\ensuremath{\bm{g}}_{i,i-1}}{\tau_{k}}} -\check{\ensuremath{\mathcal{M}}}_{k-1,i,1},\dots,-\check{\ensuremath{\mathcal{M}}}_{k-1,i,s},~ H(I_{m}),~ a_{i,1}\ensuremath{\mathcal{M}}_{k-1,1},\dots,a_{i,s-1}\ensuremath{\mathcal{M}}_{k-1,s-1},\right.\\ ~&\qquad~ \left. a_{i,s}I_{q},~ a_{i,s}H(D_{k}),~ -\frac{\ensuremath{\bm{g}}_{i,1}}{\tau_{k}}\ensuremath{\hat{D}}_{k,1},\dots, -\frac{\ensuremath{\bm{g}}_{i,i-1}}{\tau_{k}}\ensuremath{\hat{D}}_{k,i-1} \right) \end{align*} with \begin{align*} \check{\ensuremath{\mathcal{T}}}_{k-1,i,j}&= \begin{bmatrix} \check{A}_{k,i,j}^{T}\ensuremath{\hat{L}}_{k-1,j},&\ensuremath{\hat{L}}_{k-1,j} \end{bmatrix}\!\in\R^{n\times 2n_{\ensuremath{\hat{L}}_{k-1,j}}}, \check{\ensuremath{\mathcal{M}}}_{k-1,i,j}=H(\ensuremath{\hat{D}}_{k-1,j})\! \in\R^{2n_{\ensuremath{\hat{L}}_{k-1,j}}\times 2n_{\ensuremath{\hat{L}}_{k-1,j}}},\\ \ensuremath{\mathcal{T}}_{k-1,j}&= \begin{bmatrix} C_{k-1,j}^{T},&A_{k-1,j}^{T}L_{k-1,j},&L_{k-1,j} \end{bmatrix}\in\R^{n\times (q+2n_{L_{k-1,j}})},\\ \ensuremath{\mathcal{M}}_{k-1,j}&= \begin{bmatrix} I_{q}&\!\! 0&\!\! 0\\ 0&\!\! 0&\!\!D_{k-1,j}\\ 0 &\!\! D_{k-1,j}&\!\! -D_{k-1,j}L_{k-1,j}^{T}B_{k-1,j}B_{k-1,j}^{T}L_{k-1,j}D_{k-1,j} \end{bmatrix}\!\!\!\in\!\R^{(q+2n_{L_{k-1,j}})\times(q+2n_{L_{k-1,j}})}, \end{align*} defining the factorization of the Lyapunov-type expression and the Riccati operators, respectively.
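Assuming that $H(\cdot)$ denotes the antidiagonal block matrix $H(D)=\left[\begin{smallmatrix}0&D\\D&0\end{smallmatrix}\right]$ (an assumption consistent with the blocks above), the Lyapunov-type factors satisfy $\ensuremath{\mathcal{T}}H(D)\ensuremath{\mathcal{T}}^{T}=A^{T}X+XA$ for $\ensuremath{\mathcal{T}}=[A^{T}L,~L]$ and $X=LDL^{T}$, which the following sketch verifies numerically:

```python
import numpy as np

def H(D):
    """Antidiagonal block matrix [[0, D], [D, 0]] (assumed meaning of H)."""
    z = np.zeros_like(D)
    return np.block([[z, D], [D, z]])

rng = np.random.default_rng(2)
n, r = 6, 2
A = rng.standard_normal((n, n))
L = rng.standard_normal((n, r))
D = np.diag(rng.standard_normal(r))
X = L @ D @ L.T

T = np.hstack([A.T @ L, L])        # factor [A^T L, L]
print(np.allclose(T @ H(D) @ T.T, A.T @ X + X @ A))
```

This is exactly the pattern used for the $\ensuremath{\mathcal{T}}$/$\ensuremath{\mathcal{M}}$ blocks throughout the factorizations.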
The resulting column size of $G_{k,i}$ is then given by \begin{align} \begin{aligned} &~\sum_{j=1}^{s}2n_{\ensuremath{\hat{L}}_{k-1,j}}+2m+\sum_{j=1}^{s-1}(q+2n_{L_{k-1,j}}) +q+2n_{L_{k}}+\sum_{j=1}^{i-1}n_{\ensuremath{\hat{L}}_{k,j}}\\ =&~\sum_{j=1}^{i-1}n_{\ensuremath{\hat{L}}_{k,j}} +2\sum_{j=1}^{s}(n_{\ensuremath{\hat{L}}_{k-1,j}}+n_{L_{k-1,j}})+sq+2m. \end{aligned}\label{eq:clmsz_mRosPeer} \end{align} Note that using both the auxiliary variables in the linear parts and the original variables within the Fr\'{e}chet derivative and the Riccati operator does not allow us to completely combine these parts, as we have seen for the condensed form~(\ref{eq:rhs_RosPeer_DRE}) of the original Rosenbrock-type peer scheme. Therefore, assume the associated low-rank factors $L_{k-1,j},D_{k-1,j}$ and $\ensuremath{\hat{L}}_{k-1,j},\ensuremath{\hat{D}}_{k-1,j}$ of $X_{k-1,j}$ and $Y_{k-1,j}$, respectively, to be of comparable column sizes $n_{L_{k-1,j}}$ and $n_{\ensuremath{\hat{L}}_{k-1,j}}$. Then, comparing (\ref{eq:clmsz_RosPeer}) and~(\ref{eq:clmsz_mRosPeer}), the modified scheme results in a larger overall number of columns in the right hand side factorization, although avoiding the application of the Jacobian to the current solutions $Y_{k,j},~j=1,\dots,i$, saves $2\sum_{j=1}^{i-1}n_{\ensuremath{\hat{L}}_{k,j}}$ columns in the first place. That is, for large-scale non-autonomous DREs, the standard version of the Rosenbrock-type peer schemes seems to be preferable. Still, a more beneficial situation can be found for autonomous DREs. Here, additional modifications, based on the time-invariant nature of the system matrices, allow us to further reduce the complexity of the ALEs to be solved.
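The column-size comparison between~(\ref{eq:clmsz_RosPeer}) and~(\ref{eq:clmsz_mRosPeer}) can be made concrete with a small script; the sample ranks below are illustrative assumptions only, chosen to match the MIMO dimensions used later ($q=6$, $m=7$).

```python
# Column sizes of the right-hand-side factor G_{k,i} for the standard
# (eq. clmsz_RosPeer) and modified (eq. clmsz_mRosPeer) non-autonomous
# Rosenbrock-type peer schemes.
def cols_standard(n_prev, n_cur, n_k, s, q, m, i):
    # 2*sum_{j<i} n_{L_{k,j}} + 2*sum_{j<s} n_{L_{k-1,j}} + n_{L_k} + s*q + 2*m
    return 2 * sum(n_cur[:i - 1]) + 2 * sum(n_prev[:s - 1]) + n_k + s * q + 2 * m

def cols_modified(nh_prev, nh_cur, n_prev, s, q, m, i):
    # sum_{j<i} n_{Lh_{k,j}} + 2*sum_{j<=s}(n_{Lh_{k-1,j}} + n_{L_{k-1,j}}) + s*q + 2*m
    return (sum(nh_cur[:i - 1])
            + 2 * sum(nh_prev[l] + n_prev[l] for l in range(s))
            + s * q + 2 * m)

s, i, q, m = 2, 2, 6, 7       # two stages, second stage, 6 outputs, 7 inputs
ranks = [40] * s               # assume all solution factors have 40 columns
std = cols_standard(ranks, ranks, 40, s, q, m, i)
mod = cols_modified(ranks, ranks, ranks, s, q, m, i)
print(std, mod)                # the modified scheme needs more columns
```

For these (assumed) comparable ranks the modified scheme indeed produces the larger factorization, in line with the discussion above.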
In that case, the associated ALEs are of the form \begin{align} \begin{aligned} \ensuremath{\tilde{A}}_{k,i}^{T}Y_{k,i}&+Y_{k,i}\ensuremath{\tilde{A}}_{k,i}=-\ensuremath{\tilde{W}}_{k,i},\quad i=1,\dots,s,\\ \ensuremath{\tilde{W}}_{k,i}=&\sum_{j=1}^{s}\frac{\ensuremath{\bm{b}}_{i,j}}{\tau_{k}}Y_{k-1,j} +\sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(\sum_{\ell=1}^{j}\ensuremath{\bm{g}}_{j,\ell}Y_{k-1,\ell})\\ &-\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}(\ensuremath{\hat{A}}_{k}^{T}Y_{k-1,j}+Y_{k-1,j}\ensuremath{\hat{A}}_{k}) -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j},\\ \ensuremath{\tilde{A}}_{k,i}=&~\ensuremath{\hat{A}}_{k}-\frac{1}{2\tau_{k}g_{i,i}}I,\quad \ensuremath{\hat{A}}_{k}=A-BB^{T}X_{k}. \end{aligned}\label{eq:mRosPeer_autDRE} \end{align} We start the investigation with $\ensuremath{\mathcal{R}}(\sum_{\ell=1}^{j}\ensuremath{\bm{g}}_{j,\ell}Y_{k-1,\ell})$. For that, first consider the sum of Riccati operators \begin{align*} \sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(X_{k-1,j}) =\sum_{j=1}^{s}a_{i,j}\left(C^{T}C+A^{T}X_{k-1,j}+X_{k-1,j}A -X_{k-1,j}BB^{T}X_{k-1,j}\right). \end{align*} Further, recall the definitions $\ensuremath{\bm{X}}_{k}=(X_{k,i})_{i=1}^{s}$ and $\ensuremath{\bm{Y}}_{k}=(Y_{k,i})_{i=1}^{s}$. Then, since $A_{k}=A$ is constant, and motivated by~(\ref{eq:modsumPeer}), for the linear part we find \begin{align*} \begin{aligned} &\sum_{j=1}^{s}a_{i,j}A^{T}X_{k-1,j}+\sum_{j=1}^{s}a_{i,j}X_{k-1,j}A,~i=1,\dots,s\\ \Leftrightarrow\quad& ((a_{i,j})\otimes A^{T})\ensuremath{\bm{X}}_{k-1}+((a_{i,j})\otimes I )\ensuremath{\bm{X}}_{k-1}A.
\end{aligned} \end{align*} Moreover, from the definition~(\ref{eq:origvar_peer}) of $\ensuremath{\bm{X}}_{k}$ in terms of the auxiliary variables $\ensuremath{\bm{Y}}_{k}$ the following reformulation holds: \begin{align*} \begin{aligned} &~((a_{i,j})\otimes A^{T})\ensuremath{\bm{X}}_{k-1}\!+\!((a_{i,j})\otimes I)\ensuremath{\bm{X}}_{k-1}A\\ =&~((a_{i,j})\otimes A^{T})(G^{ -1}\otimes I)\ensuremath{\bm{Y}}_{k-1} +((a_{i,j})\otimes I)(G^{-1}\otimes I)\ensuremath{\bm{Y}}_{k-1}A\\ =&~((a_{i,j})G^{-1}\otimes A^{T})\ensuremath{\bm{Y}}_{k-1} +((a_{i,j})G^{-1}\otimes I)\ensuremath{\bm{Y}}_{k-1}A\\ =&~((\ensuremath{\bm{a}}_{i,j})\otimes A^{T})\ensuremath{\bm{Y}}_{k-1}+((\ensuremath{\bm{a}}_{i,j})\otimes I )\ensuremath{\bm{Y}}_{k-1}A. \end{aligned} \end{align*} with $(\ensuremath{\bm{a}}_{i,j})=(a_{i,j})G^{-1}$ from~(\ref{auxcoeff_peer}). Then, together with \begin{align*} \begin{aligned} &\quad((\ensuremath{\bm{a}}_{i,j})\otimes A^{T})\ensuremath{\bm{Y}}_{k-1}+((\ensuremath{\bm{a}}_{i,j})\otimes I)\ensuremath{\bm{Y}}_{k-1}A\\ \Leftrightarrow&\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}A^{T}Y_{k-1,j}+ \sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}Y_{k-1,j}A,~i=1,\dots,s, \end{aligned} \end{align*} the sum of Riccati operators $\ensuremath{\mathcal{R}}(X_{k-1,j})$ can be written in the mixed form \begin{align*} \sum_{j=1}^{s}a_{i,j}\ensuremath{\mathcal{R}}(X_{k-1,j}) &=\sum_{j=1}^{s}a_{i,j}\left(C^{T}C+A^{T}X_{k-1,j}+X_{k-1,j}A -X_{k-1,j}BB^{T}X_{k-1,j}\right)\\ &=\sum_{j=1}^{s}a_{i,j}\left(C^{T}C-X_{k-1,j}BB^{T}X_{k-1,j}\right) +\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}\left(A^{T}Y_{k-1,j}+Y_{k-1,j}A\right). 
\end{align*} Note that in this formulation only the quadratic term of the Riccati operator uses the original variables and analogously to the right hand side $\ensuremath{\tilde{W}}_{k,i}$ in~(\ref{eq:rhs_mRosPeer_DRE}), for an autonomous DRE, we obtain \begin{align*} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=& -\sum_{j=1}^{s}\biggl(\check{A}_{k,i,j}^{T}Y_{k-1,j}+Y_{k-1,j}\check{A}_{k,i,j} -\ensuremath{\bm{a}}_{i,j}\left(X_{k}BB^{T}Y_{k-1,j} +Y_{k-1,j}BB^{T}X_{k}\right)\biggr)\\ &+\sum_{j=1}^{s}a_{i,j}\left(C^{T}C-X_{k-1,j}BB^{T}X_{k-1,j}\right) +\sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}\left(A^{T}Y_{k-1,j}+Y_{k-1,j}A\right)\\ &-\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j} \end{aligned} \end{align*} with $\check{A}_{k,i,j}=\ensuremath{\bm{a}}_{i,j}A-\frac{\ensuremath{\bm{b}}_{i,j}}{2\tau_{k}}I$. Now, combining the expressions that are linear in $Y_{k-1,j}$, as well as the quadratic terms containing $X_{k}B$ and again paying particular attention to $j=s$ with $X_{k-1,s}=X_{k}$, $Y_{k-1,s}=Y_{k}$, the right hand side reads \begin{align} \begin{aligned} \ensuremath{\tilde{W}}_{k,i}=& ~\sum_{j=1}^{s}a_{i,j}C^{T}C-\sum_{j=1}^{s-1}a_{i,j}X_{k-1,j}BB^{T}X_{k-1,j} +X_{k}BK_{k,i}^{T}+K_{k,i}B^{T}X_{k}\\ &+\sum_{j=1}^{s}\frac{\ensuremath{\bm{b}}_{i,j}}{\tau_{k}}Y_{k-1,j} -\sum_{j=1}^{i-1}\frac{\ensuremath{\bm{g}}_{i,j}}{\tau_{k}}Y_{k,j},\\ K_{k,i}=&~\left( \sum_{j=1}^{s}\ensuremath{\bm{a}}_{i,j}Y_{k-1,j}-\frac{a_{i,s}}{2}X_{k} \right)B. 
\end{aligned}\label{eq:rhs_mRosPeer_autDRE} \end{align} For the autonomous case, with the associated ALE~(\ref{eq:mRosPeer_autDRE}) and its condensed right hand side~(\ref{eq:rhs_mRosPeer_autDRE}), we find the factors \begin{align*} G_{k,i}=&~\bigl[ C^{T},~X_{k-1,1}B,\dots,X_{k-1,s-1}B,~X_{k}B,~K_{k,i},\\ &~~~\ensuremath{\hat{L}}_{k-1,1},\dots,\ensuremath{\hat{L}}_{k-1,s},~\ensuremath{\hat{L}}_{k,1},\dots,\ensuremath{\hat{L}}_{k,i-1} \bigr],\\ S_{k,i}=& \ensuremath{\operatorname{diag}}\left( \sum_{j=1}^{s}a_{i,j}I_{q},~ -a_{i,1}I_{m},\dots,-a_{i,s-1}I_{m},~ H(I_{m}),\right.\\ ~&\qquad~ \left. \frac{\ensuremath{\bm{b}}_{i,1}}{\tau_{k}}\ensuremath{\hat{D}}_{k-1,1},\dots, \frac{\ensuremath{\bm{b}}_{i,s}}{\tau_{k}}\ensuremath{\hat{D}}_{k-1,s},~ -\frac{\ensuremath{\bm{g}}_{i,1}}{\tau_{k}}\ensuremath{\hat{D}}_{k,1},\dots, -\frac{\ensuremath{\bm{g}}_{i,i-1}}{\tau_{k}}\ensuremath{\hat{D}}_{k,i-1} \right) \end{align*} where $G_{k,i}$ is of column size \begin{align} q+\sum_{j=1}^{s-1}m+2m+\sum_{j=1}^{s}n_{\ensuremath{\hat{L}}_{k-1,j}}+\sum_{j=1}^{i-1}n_{\ensuremath{\hat{L}}_{k,j}} =\sum_{j=1}^{i-1}n_{\ensuremath{\hat{L}}_{k,j}}+\sum_{j=1}^{s}n_{\ensuremath{\hat{L}}_{k-1,j}}+q+(s+1)m. \label{eq:clmsz_mRosPeer_aut} \end{align} Again, assume that the column sizes of the solution factors \(L_{k,j}\) and \(\ensuremath{\hat{L}}_{k,j}\) of the original Rosenbrock-type peer and its modified version, respectively, are compatible. Then, from (\ref{eq:clmsz_RosPeer_aut}) and (\ref{eq:clmsz_mRosPeer_aut}) it can be seen that the modified version can save a number of system solves within the ALE solver, as long as \((s-1)m\) does not exceed \(\sum_{j=1}^{i-1}n_{L_{k,j}}\) from the original scheme. This will most likely be true for a small number \(m\), i.e., a low numerical rank of \(S(t)\) in the DRE~\eqref{eq:DRE}. Considering control problems, \(m\) represents the number of inputs to the system to be controlled and thus will be rather small for numerous examples.
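The Kronecker-product manipulation used in the derivation above rests on the mixed-product rule $(M\otimes N)(P\otimes Q)=(MP)\otimes(NQ)$. A quick numerical check of the identity $((a_{i,j})\otimes A^{T})(G^{-1}\otimes I)=((a_{i,j})G^{-1})\otimes A^{T}$ reads:

```python
import numpy as np

rng = np.random.default_rng(3)
s, n = 3, 4
a = rng.standard_normal((s, s))                 # coefficient matrix (a_{i,j})
G = np.tril(rng.standard_normal((s, s))) + 2.0 * np.eye(s)  # invertible, lower triangular
A = rng.standard_normal((n, n))
Ginv = np.linalg.inv(G)

# mixed-product rule: (a (x) A^T)(G^{-1} (x) I) = (a G^{-1}) (x) A^T
lhs = np.kron(a, A.T) @ np.kron(Ginv, np.eye(n))
rhs = np.kron(a @ Ginv, A.T)
print(np.allclose(lhs, rhs))
```

The same rule justifies passing from the original variables $\ensuremath{\bm{X}}_{k-1}$ to the auxiliary variables $\ensuremath{\bm{Y}}_{k-1}$ in the linear part.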
\section{Numerical Experiments}\label{sec:numerics} The following computations have been executed on a 64-bit CentOS 5.5 system with two {Intel\textsuperscript{\textregistered}}\ {Xeon\textsuperscript{\textregistered}}\ X5650@2.67 GHz with a total of 12 cores and 48GB main memory, being one computing node of the Linux cluster {\rmfamily otto}\footnote{\url{http://www.mpi-magdeburg.mpg.de/1012477/otto}} at the {\itshape Max Planck Institute for Dynamics of Complex Technical Systems} in Magdeburg. The numerical algorithms have been implemented and tested in {\matlab}~version 8.0.0.783 (R2012b). For the numerical experiments, we consider the implicit peer method~(\ref{eq:peer2}) and both versions of the Rosenbrock-type schemes~(\ref{eq:peerRos}),~(\ref{eq:auxpeerRos}) up to order \(4\). For a comparison of the computational times and relative errors with respect to a reference solution, the peer schemes are also compared to the BDF methods of order 1 to 4~\cite{BenM04,LanMS15,Lan17}, Rosenbrock methods of orders \(1,2\)~\cite{BenM13}, \(4\)~\cite{Sha82}, and the midpoint and trapezoidal rules~\cite{Die92}. The relative errors are given in the Frobenius norm $\|.\|_F$. An overview of the corresponding low-rank formulations, except for the Rosenbrock method of order 4, can be found in~\cite{LanMS15,Lan17}. For the latter, no low-rank representation has been published so far. The additional initial values required by the multi-step and peer integrators of order \(\geq 2\) are computed by one-step Rosenbrock methods of appropriate order. In what follows, the abbreviations given in Table~\ref{tab:acronyms} are used to identify the different integration schemes. For the integration methods using Newton's method to solve the arising AREs, a tolerance of \(1e\)-10 and a maximum number of 15 Newton steps are chosen. The ADI iteration, used in the innermost loop of all schemes, is terminated at a tolerance of \(n\varepsilon\) or at a maximum of 100 ADI steps.
Here again, \(n\) is the system dimension and \(\varepsilon\) denotes the machine precision. \begin{table}[t] \centering \caption{Acronyms of the time integration methods (\(s=1,\dots,4\)).} \label{tab:acronyms} {\rowcolors{2}{gray!25}{white} \begin{tabular}{cc} \rowcolor{gray!60} {Time integration method}&{Acronym}\\ BDF of order \(s\)& BDF\((s)\)\\ Rosenbrock of order \(s\) & Ros\((s)\)\\ Midpoint rule & Mid\\ Trapezoidal rule & Trap\\ Implicit peer of order \(s\)& Peer\((s)\)\\ Rosenbrock-type peer of order \(s\)& RosPeer\((s)\)\\ Modified RosPeer\((s)\) & mRosPeer\((s)\)\\ \end{tabular} } \end{table} \begin{table}[t] \centering \caption{$2$-stage implicit peer method of order $2$.} \label{tab:peer2} {\rowcolors{1}{white}{gray!25} \begin{tabular}{|lrlr|}\firsthline $c_{1}:$&$0.4831632475943920$ & $c_{2}:$&$1.0000000000000000$\\ $b_{1,1}:$&$-0.3045407685048590$ & $b_{1,2}:$&$1.3045407685048591$\\ $b_{2,1}:$&$-0.3045407685048590$ & $b_{2,2}:$&$1.3045407685048591$\\ $g_{1,1}:$&$0.2584183762028040$ & $g_{1,2}:$&$0.0000000000000000$\\ $g_{2,1}:$&$0.4376001712448750$ & $g_{2,2}:$&$0.2584183762028040$\\\hline \end{tabular} } \end{table} \paragraph{Implicit Peer Coefficients} \label{sec:impl-peer-coeff} The \(1\)-stage implicit peer scheme is given by the coefficients $c_{1}=1$, $b_{1,1}=1$ and $g_{1,1}=1$. The coefficients of the $2$-stage implicit peer method, given in Table~\ref{tab:peer2}, were provided by the group of Prof. R. Weiner at the Martin-Luther-Universität Halle and cannot, to the best of the authors' knowledge, be found in any publication so far. The coefficients for the \(3\)- and \(4\)-stage peer schemes are provided by methods 3a and 4b in~\cite{SolW17}. \paragraph{Rosenbrock-type Peer Coefficients} \label{sec:rosenbrock-type-peer} The \(1\)-stage Rosenbrock-type peer method is given by the coefficients $c_{1}=1$, $a_{1,1}=1$, $b_{1,1}=1$ and $g_{1,1}=1$.
The coefficients for the Rosenbrock-type peer schemes used here can be computed following the instructions in~\cite[Section~3]{PodWS05}. \subsection{Steel Profile} \label{sec:steel-profile} \begin{figure} \caption{Peer(1-4)} \label{fig:acc_Peer} \caption{RosPeer(1-4)} \label{fig:acc_RosPeer} \caption{mRosPeer(1-4)} \label{fig:acc_modRosPeer} \caption{Peer(1-4)} \label{fig:eff_Peer} \caption{RosPeer(1-4)} \label{fig:eff_RosPeer} \caption{mRosPeer(1-4)} \label{fig:eff_modRosPeer} \caption{Steel profile: Accuracy and efficiency plots} \label{fig:acc_eff} \end{figure} {\rowcolors{3}{white}{gray!25} \begin{table}[t] \centering \caption{Steel profile: Computational timings and relative errors with respect to the reference solution for $\tau=0.1125\ s$, 400 steps.}\label{tab:rail_timings} \begin{tabular}{lrc} \rowcolor{gray!50} Method&Time in $s$&Rel. Frobenius err.\\ BDF(1)&1\,627.76&3.75e-03\\ BDF(2)&1\,347.55&3.20e-04\\ BDF(3)&1\,228.07&1.29e-04\\ BDF(4)&1\,179.00&4.58e-05\\ Ros1&806.00&3.75e-03\\ Ros2&1\,028.04&1.24e-03\\ Ros4&1\,001.05&1.30e-06\\ Mid&1\,239.78&1.33e-04\\ Trap&1\,202.90&1.32e-04\\ Peer(1)&1\,551.03&3.75e-03\\ Peer(2)&1\,635.30&6.09e-05\\ Peer(3)&2\,815.84&1.01e-07\\ Peer(4)&3\,268.56&3.57e-07\\ RosPeer(1)&605.64&3.75e-03\\ RosPeer(2)&702.46&1.50e-05\\ RosPeer(3)&892.35&2.41e-06\\ RosPeer(4)&1\,087.55&2.41e-07\\ mRosPeer(1)&610.31&3.75e-03\\ mRosPeer(2)&698.74&1.50e-05\\ mRosPeer(3)&883.33&2.41e-06\\ mRosPeer(4)&1\,088.86&2.41e-07 \end{tabular} \end{table} } As a first example, we consider a semi-discretized heat transfer problem for optimal cooling of steel profiles~\cite{BenS05b,morwiki_steel}. This example is a multi-input multi-output (MIMO) system with $m=7$ inputs and $q=6$ outputs. The solution to the DRE is computed on the simulation time interval $[0,\;4\,500]\ s$ with the step sizes $\tau\in\{180, 90, 45, 22.5, 11.25\}\ s$ and \(\{25, 50, 100, 200, 400\}\) steps, respectively.
Note that the actual time line is implicitly scaled by $1e2$ within the model such that a real time of $[0,45]\ s$ with corresponding step sizes is investigated. To ensure the computability of a reference solution in appropriate time, the smallest available discretization level with $n=371$ is chosen. The reference is computed by the small-scale dense version of the fourth-order Rosenbrock (Ros4) method. In particular, the {\itshape Parareal} based implementation with \(450\) coarse and additionally \(1000\) fine steps at each of those intervals, considered in~\cite{KoeLS16}, has been used. Figures~\ref{fig:acc_eff}(\subref{fig:acc_Peer})-(\subref{fig:acc_modRosPeer}) show the accuracy plots for the implicit peer methods, the RosPeer schemes and the modified RosPeer integrators, respectively. It can be observed that, for this example, the convergence orders are reached asymptotically. Further, note that the Peer(3) scheme outperforms its Peer(4) successor. This is due to the superconvergence of the Peer(3) method (see~\cite[Section 4, Method 3a]{SolW17}) and the fact that, for this example, the convergence order 4 of the Peer(4) scheme is only just reached at the last step size refinement. It can further be observed that the implicit peer and the Rosenbrock-type schemes of corresponding order achieve a comparable accuracy. This is not too surprising for an autonomous problem. The efficiency plots are presented in Figures~\ref{fig:acc_eff}(\subref{fig:eff_Peer})-(\subref{fig:eff_modRosPeer}). In Table~\ref{tab:rail_timings}, the LRSIF computation times and the relative errors with respect to the reference solution are given. Here, it becomes clear that the peer methods of order \(s\geq 2\) achieve significantly better accuracy than the other implicit time integrators of similar order. Solely comparing the computational times, the Rosenbrock-type peer scheme of first order shows the best performance.
Taking the efficiency into account, i.e., studying the required computational time versus the achieved error level, see also Figures~\ref{fig:acc_eff}(\subref{fig:eff_Peer})-(\subref{fig:eff_modRosPeer}), the RosPeer schemes and their modified versions surpass the already existing LRSIF versions of the implicit integration schemes for DREs. Further, it is noteworthy that the fourth-order peer schemes do not reach better error levels, which was already visible from Figures~\ref{fig:acc_eff}(\subref{fig:acc_Peer})-(\subref{fig:acc_modRosPeer}). A more detailed investigation of all methods up to order \(2\) and in particular the peer schemes can be found in~\cite{Lan17}. \subsection{Convection-diffusion - Small-Scale LTV} \label{sec:conv-diff-small} \begin{figure} \caption{Peer(1-4)} \label{fig:FDM81_acc_Peer} \caption{RosPeer(1-4)} \label{fig:FDM81_acc_RosPeer} \caption{mRosPeer(1-4)} \label{fig:FDM81_acc_modRosPeer} \caption{Peer(1-4)} \label{fig:FDM81_eff_Peer} \caption{RosPeer(1-4)} \label{fig:FDM81_eff_RosPeer} \caption{mRosPeer(1-4)} \label{fig:FDM81_eff_modRosPeer} \caption{Convection-diffusion LTV: Accuracy and efficiency plots} \label{fig:FDM81_acc_eff} \end{figure} The second example is a convection-diffusion model problem originating from a centered finite differences discretization of the partial differential equation \begin{align} \dot{v}=\Delta v-f_{1}\frac{\partial v}{\partial\xi_{1}}-f_{2} \frac{\partial v}{\partial\xi_{2}}-f_{3}v,\label{eq:conv-diff} \end{align} for \(v=v(t,\xi_{1},\xi_{2})\) defined on the unit square \(\Omega=(0, 1)^{2}\) with homogeneous Dirichlet boundary conditions. Here, \(f_{i},~i=1,2,3\), are functions depending on \(\xi_{1},\xi_{2}\) and are often referred to as convection and reaction terms.
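For reference, a centered finite differences discretization of the convection-diffusion operator on the unit square with homogeneous Dirichlet boundary conditions can be sketched as follows. This is a dense Python stand-in for the sparse construction used in the experiments; the routine name is ours, and constant convection and reaction terms are assumed.

```python
import numpy as np

def fdm_2d(n0, f1, f2, f3):
    """Centered differences for Delta - f1 d/dxi1 - f2 d/dxi2 - f3
    on (0,1)^2 with homogeneous Dirichlet BCs; n0 interior points
    per direction (dense here for brevity, sparse in practice)."""
    h = 1.0 / (n0 + 1)
    n = n0 * n0
    A = np.zeros((n, n))
    for i in range(n0):          # grid index in the xi_2 direction
        for j in range(n0):      # grid index in the xi_1 direction
            k = i * n0 + j
            A[k, k] = -4.0 / h**2 - f3
            if j > 0:
                A[k, k - 1] = 1.0 / h**2 + f1 / (2 * h)
            if j < n0 - 1:
                A[k, k + 1] = 1.0 / h**2 - f1 / (2 * h)
            if i > 0:
                A[k, k - n0] = 1.0 / h**2 + f2 / (2 * h)
            if i < n0 - 1:
                A[k, k + n0] = 1.0 / h**2 - f2 / (2 * h)
    return A

A = fdm_2d(9, 20.0, 5.0, 0.0)    # n = 81 unknowns, as in the example
print(A.shape)
```

The symmetric part of this operator is the (negative definite) discrete Laplacian, so all eigenvalues of $A$ have negative real part, i.e., the discretized system is stable.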
The system matrices \(A\) and \(B,C\) are generated by the \matlab ~routines \texttt{fdm\_2d\_matrix} and \texttt{fdm\_2d\_vector}, respectively, from \textsf{LyaPack}~\cite{Pen00b} with \(n_{0}=9\) equidistant grid points for each spatial dimension, resulting in \(n=n_{0}^{2}=81\) unknowns, and the convection and reaction terms are chosen as \(f_{1}=20,~f_{2}=5,~f_{3}=0\). Further, the model represents a single-input single-output (SISO) system with \(m=1\) input and \(q=1\) output. The regions where \(B\) and \(C\) act are restricted to the lower left corner \(\xi_{1}\in (0,0.35),\xi_{2}\in (0,0.35)\) for the input and the upper area defined by \(\xi_{1}\in (0,1),\xi_{2}\in (0.95,1)\) for the output, respectively. In order to obtain an LTV model, we introduce an artificial time-variability $\mu(t)=\frac{3}{4}\sin(8\pi t)+1\in[0.25,1.75]$ into the system matrix $A$. As a result, we obtain a time-varying system with constant matrices $E,B,C$ and a time-dependent matrix $A(t)=\mu(t)A$. The model is simulated for the time interval \([0,0.5]\ s\) with the time step sizes \(\tau\in\{\frac{1}{100}, \frac{1}{200}, \frac{1}{400}, \frac{1}{800}, \frac{1}{1600}\}\ s\), resulting in \(\{50, 100, 200, 400, 800\}\) steps, respectively. As for the previous example, Figures~\ref{fig:FDM81_acc_eff}(\subref{fig:FDM81_acc_Peer})-(\subref{fig:FDM81_acc_modRosPeer}) show the error behavior with respect to the different time step sizes used. Here, the predicted convergence behavior is clearly visible except for the Peer(4) scheme. The efficiency plots are presented in Figures~\ref{fig:FDM81_acc_eff}(\subref{fig:FDM81_eff_Peer})-(\subref{fig:FDM81_eff_modRosPeer}). For this example, again the peer schemes show the best performance with respect to the achieved accuracy. Additionally considering the computational effort of the integration schemes, the BDF methods up to order 3 show the best performance. See also Table~\ref{tab:fdmLTV_timings}.
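As a quick sanity check, the artificial time-variability $\mu(t)=\frac{3}{4}\sin(8\pi t)+1$ described above indeed stays within the stated range $[0.25,1.75]$ on the simulation interval:

```python
import numpy as np

# mu(t) = (3/4) sin(8 pi t) + 1, sampled densely over [0, 0.5] s
mu = lambda t: 0.75 * np.sin(8.0 * np.pi * t) + 1.0

t = np.linspace(0.0, 0.5, 100001)
vals = mu(t)
print(vals.min(), vals.max())    # extremes of the time-variability
```

Since $8\pi t$ sweeps two full periods over $[0,0.5]$, both extremes are attained inside the simulation interval.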
For large-scale model problems, the computational effort for solving the ARE inside the implicit schemes will become more expensive compared to the ALE solves within the linear implicit time integrators such that the latter will become more effective. {\rowcolors{3}{white}{gray!25} \begin{table}[t] \centering \caption{Convection-diffusion LTV: Computational timings and relative errors with respect to the reference solution for $\tau=6.25e$-4 \(s\), 800 steps.}\label{tab:fdmLTV_timings} \begin{tabular}{lrc} \rowcolor{gray!50} Method&Time in $s$&Rel. Frobenius err.\\ BDF(1)&25.07&2.32e-02\\ BDF(2)&23.33&6.79e-04\\ BDF(3)&23.05&7.34e-05\\ BDF(4)&23.08&2.91e-05\\ Ros1&12.57&2.09e-02\\ Ros2&48.50&2.87e-03\\ Ros4&62.18&4.36e-04\\ Mid&29.46&1.91e-04\\ Trap&29.11&2.13e-04\\ Peer(1)&26.92&2.32e-02\\ Peer(2)&51.93&4.26e-05\\ Peer(3)&84.82&3.84e-06\\ Peer(4)&108.81&9.81e-06\\ RosPeer(1)&11.16&2.09e-02\\ RosPeer(2)&22.15&4.32e-04\\ RosPeer(3)&33.28&1.54e-05\\ RosPeer(4)&45.03&2.77e-06\\ mRosPeer(1)&13.09&2.09e-02\\ mRosPeer(2)&25.60&4.32e-04\\ mRosPeer(3)&37.00&1.54e-05\\ mRosPeer(4)&51.74&2.77e-06 \end{tabular} \end{table} } \subsection{Convection-Diffusion - Large-Scale LTI} \label{sec:convection-diffusion} {\rowcolors{3}{white}{gray!25} \begin{table}[t] \centering \caption{Convection-diffusion LTI: Computational timings for $\tau=6.25e$-4, 480 steps.}\label{tab:fdmLTI_timings} \begin{tabular}{lr} \rowcolor{gray!50} Method&Time in $s$\\ BDF(1)&1\,260.43\\ BDF(2)&1\,038.41\\ BDF(3)&870.76\\ BDF(4)&813.07\\ Ros1&1\,107.56\\ Ros2&5\,779.17\\ Ros4&10\,571.82\\ Mid&793.03\\ Trap&796.44\\ Peer(1)&1\,239.61\\ Peer(2)&1\,322.49\\ Peer(3)&2\,068.50\\ Peer(4)&2\,652.84\\ RosPeer(1)&583.14\\ RosPeer(2)&561.36\\ RosPeer(3)&740.52\\ RosPeer(4)&913.28\\ mRosPeer(1)&584.07\\ mRosPeer(2)&543.07\\ mRosPeer(3)&647.03\\ mRosPeer(4)&887.04 \end{tabular} \end{table} } The third example is again the convection-diffusion model~\eqref{eq:conv-diff} from Example 2. 
Here, the convection and reaction terms are chosen as \(f_{1}=50,~f_{2}=10,~f_{3}=0\), and no additional artificial time-variability is used. Further, \(n_{0}=45\) grid nodes in each direction, yielding a system dimension of \(n=2\,025\), are considered. The model is simulated for the time interval \([0,0.3]\ s\) with time step sizes \(\tau\in\{\frac{1}{100}, \frac{1}{200}, \frac{1}{400}, \frac{1}{800}, \frac{1}{1600}\}\ s\), and \(\{30, 60, 120, 240, 480\}\) steps, respectively. Due to the system size, no reference solution is computed. As for the previous examples, Table~\ref{tab:fdmLTI_timings} shows the computational timings for the various integration schemes. Again, the Rosenbrock-type peer schemes up to order 3 achieve the lowest computational times. It can also be seen that for this autonomous SISO system, the reformulated Rosenbrock-type schemes (mRosPeer) outperform their counterparts given in the original formulation. \section{Conclusion}\label{sec:conc} In this contribution, the classes of implicit and Rosenbrock-type peer methods have been applied to matrix-valued ODEs. Further, a reformulation of the latter has been proposed in order to avoid a number of Jacobian applications to the currently computed stage variables. An efficient low-rank formulation in terms of the low-rank symmetric indefinite factorization (LRSIF) has been presented. The performance of the peer methods was demonstrated for three different examples. It has been shown that the Rosenbrock-type schemes and their reformulated version outperform their classical implicit one-step and multi-step competitors with respect to the trade-off between accuracy and computational effort in most cases. Thus, the peer methods, and in particular the Rosenbrock-type schemes, make an important contribution to the efficient low-rank based solution of differential Riccati equations and, most likely, of differential matrix equations in general. \end{document}
\begin{document} \noindent \title[Kronecker limit formulas]{The Kronecker limit formulas via the distribution relation} \author{Kenichi Bannai} \address{Department of Mathematics, Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama 223-8522, Japan} \email{[email protected]} \author{Shinichi Kobayashi} \address{Graduate School of Mathematics, Nagoya University, Furo-cho Chikusa-ku, Nagoya 464-8602, Japan} \email{[email protected]} \footnote{The 2000 Mathematics Subject Classification: 11M35, 11M36, 11S80} \begin{abstract} In this paper, we give a proof of the classical Kronecker limit formulas using the distribution relation of the Eisenstein-Kronecker series. Using a similar idea, we then prove $p$-adic analogues of the Kronecker limit formulas for the $p$-adic Eisenstein-Kronecker functions defined in our previous paper. \end{abstract} \maketitle \section{Introduction} In these notes, we will give a proof of the classical first and second Kronecker limit formulas concerning the limit of values of Eisenstein-Kronecker-Lerch series. Our proof is based on the distribution relation of the Eisenstein-Kronecker-Lerch series. Using a similar idea, we then prove $p$-adic analogues of these formulas for the $p$-adic Eisenstein-Kronecker functions defined in our previous paper \cite{BFK}. Let $\Gamma \subset \mathbb{C}$ be a lattice. We define a pairing for $z$, $w \in \mathbb{C}$ by $\pair{z,w}_\Gamma := \Exp {(z \overline{w} - w \overline{z})/A(\Gamma)}$, where $A(\Gamma)$ is the area of the fundamental domain of $\Gamma$ divided by $\boldsymbol\pi = 3.1415\cdots$. 
Then for an integer $a$ and a fixed $z_0$, $w_0 \in \mathbb{C}$, the Eisenstein-Kronecker-Lerch series is defined as \begin{equation*} \label{equation: definition of Eisenstein-Kronecker*} K^*_a(z_0,w_0,s; \Gamma) = {\sum}^*_{\gamma \in \Gamma} \frac{(\overline{z}_0 + \overline{\gamma})^a}{|z_0 + \gamma|^{2s}} \pair{\gamma, w_0}_\Gamma, \end{equation*} where $\sum^*$ denotes the sum taken over all $\gamma\in\Gamma$ except for $\gamma=-z_0$ if $z_0 \in \Gamma$. The above series converges for $\operatorname{Re}(s) > a/2+1$, but one may give it meaning for general $s$ by analytic continuation. In what follows, we omit $\Gamma$ from the notation if there is no fear of confusion. We let $\theta(z)$ be the reduced theta function on $\mathbb{C}/\Gamma$ associated to the divisor $[0] \subset \mathbb{C}/\Gamma$, normalized so that $\theta'(0) =1$. Then the Kronecker limit formulas are given as follows. \begin{theorem}[Kronecker limit formulas]\label{thm: KLF} Let $c$ be the Euler constant $ c:=\lim_{n\rightarrow \infty} \left(1+\frac{1}{2}+ \cdots +\frac{1}{n}-\log n\right), $ and let $\Delta$ be the discriminant of $\Gamma$ defined as $\Delta:=g_2^3-27g_3^2$, where $ g_2:=60\sum_{\gamma \in \Gamma \setminus \{0\}} \gamma^{-4} $ and $ g_3:=140\sum_{\gamma \in \Gamma \setminus \{0\}} \gamma^{-6}. $ Then we have the following. \begin{enumerate} \item The first limit formula $$ \lim_{s \rightarrow 1}\left(A K^*_0(0, 0, s)-\frac{1}{s-1}\right) =-\frac{1}{12}\log|\Delta|^2-2\log A+2c. $$ \item For $z \notin \Gamma$, the second limit formula $$ A K^*_0(0,z,1)=-\log |\theta(z)|^2+\frac{|z|^2}{A}-\frac{1}{12}\log|\Delta|^2. $$ \end{enumerate} \end{theorem} Numerous proofs exist for these classical formulas, and many of them rely on arguments concerning the moduli space. We give a proof of the above theorem, valid for a fixed lattice $\Gamma \subset \mathbb{C}$, using the Kronecker theta function. As in the original proof by Kronecker, we first prove the second limit formula, and then deduce the first limit formula from the second.
The key in our proof is the distribution relation for the Eisenstein-Kronecker function. Our interpretation of the Kronecker limit formulas in terms of the Kronecker theta function and the distribution relation allows us to prove the following $p$-adic analogues of Theorem \ref{thm: KLF}. Suppose now that $\Gamma$ is the period lattice corresponding to the invariant differential $\omega = dx/y$ of an elliptic curve $E: y^2 = 4x^3 - g_2 x - g_3$ with complex multiplication by the ring of integers $\mathcal{O}_{\boldsymbol{K}}$ of an imaginary quadratic field ${\boldsymbol{K}}$. We assume in addition that $E$ is defined over ${\boldsymbol{K}}$, and that the model above has good reduction at the primes above $p \geq 5$. We fix a branch of the $p$-adic logarithm, which is a homomorphism $\log_p : \mathbb{C}_p^\times \rightarrow \mathbb{C}_p$. In the paper \cite{BFK}, we introduced the $p$-adic Eisenstein-Kronecker series $E^\mathrm{col}_{a,b}(z)$ as a Coleman function for integers $a$, $b$ such that $b \geq 0$. This function is a $p$-adic analogue of the Kronecker double series $$ K^*_{a+b}(0,z,b) = {\sum}^* _{\gamma\in\Gamma}\frac{\overline\gamma^{a+b}}{|\gamma|^{2b}} \pair{\gamma,z}. $$ In this paper, we let $$ K^\mathrm{col}_{a+b}(0,z,b) := E^\mathrm{col}_{a,b}(z) $$ to highlight the analogy. Then in analogy with Theorem \ref{thm: KLF} (ii), we have the following. \begin{theorem}[$p$-adic Kronecker second limit formula]\label{thm: pKLF} For any prime $p \geq 5$ of good reduction, we have the second limit formula $$ K^\mathrm{col}_0(0,z,1)= - \log_p \theta(z) - \frac{1}{12}\log_p \Delta, $$ where $\log_p \theta(z)$ is a certain $p$-adic analogue of the function $\log|\theta(z)|^2 - |z|^2/A$ defined in Definition \ref{def: log-p-theta} using the reduced theta function $\theta(z)$ and the branch of our $p$-adic logarithm.
\end{theorem} The $p$-adic analogues of the Kronecker second limit formula were previously investigated by Katz \cite{Ka} and de Shalit \cite{dS} in the context of $p$-adic $L$-functions when $p$ is a prime of good ordinary reduction. Our formulation via $p$-adic Eisenstein-Kronecker series gives a direct $p$-adic analogue, and is valid even for supersingular $p$. When $p \geq 5$ is a prime of good ordinary reduction, we defined in \cite{BK1} \S 3.1 a two-variable $p$-adic measure $\mu:=\mu_{0,0}$ on $\mathbb{Z}_p \times \mathbb{Z}_p$ interpolating Eisenstein-Kronecker numbers, or more precisely, the values $K^*_{a+b}(0,0,b)/A^a$ for $a$, $b \geq 0$. We define the $p$-adic Eisenstein-Kronecker-Lerch series by $$ K^{(p)}_{a}(0, 0, s) := \int_{\mathbb{Z}^\times_p \times \mathbb{Z}^\times_p} \pair{x}^{s-1} \omega(y)^{a-1}\pair{y}^{a-s} d \mu(x,y) $$ for any $s \in \mathbb{Z}_p$, where $\pair{-} : \mathbb{Z}_p^\times \rightarrow \mathbb{C}_p^\times$ is given as the composition $\mathbb{Z}_p^\times \rightarrow 1 + p\mathbb{Z}_p \hookrightarrow \mathbb{C}_p^\times$ and $\omega: \mathbb{Z}_p^\times \rightarrow \mu_{p-1}$ is the Teichm\"uller character. Then an argument similar to the proof of Theorem \ref{thm: KLF} (i) gives the following. \begin{proposition}\label{pro: pKLF} Suppose $p \geq 5$ is a prime of good ordinary reduction. Then $$ \lim_{s\rightarrow 1} K^{(p)}_{0}(0,0,s) = \Omega_p^{-1}\left( 1 - \frac{1}{p} \right) \log_p \overline\pi $$ where $\Omega_p$ is a $p$-adic period of the formal group of $E$. \end{proposition} The proof of the above proposition is similar to that of Theorem \ref{thm: KLF} (i). However, due to the existence of a trivial zero for the function $K^{(p)}_0(0,0,s)$ at $s=1$, the analogy with the classical case is not perfect. See Remark \ref{rem: not pKLF} for details.
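The Teichm\"uller character used above is computable in elementary terms; the following sketch (plain Python integers, not part of the paper) realizes $\omega(x)=\lim_{n\to\infty}x^{p^n}$ modulo $p^k$ and checks the decomposition $x=\omega(x)\pair{x}$ with $\pair{x}\in 1+p\mathbb{Z}_p$:

```python
# Sketch (not from the paper): the Teichmueller character omega on Z_p^x,
# computed modulo p^k via the limit omega(x) = lim_{n -> oo} x^(p^n).

def teichmuller(x, p, k):
    """Return omega(x) mod p^k for x coprime to p."""
    m = p ** k
    t = x % m
    # x^(p^n) mod p^k stabilises once n >= k; iterate a couple of extra times.
    for _ in range(k + 2):
        t = pow(t, p, m)
    return t

p, k = 5, 6
m = p ** k
for x in range(1, p):
    w = teichmuller(x, p, k)
    assert w % p == x % p          # omega(x) = x (mod p)
    assert pow(w, p - 1, m) == 1   # omega(x) is a (p-1)-st root of unity mod p^k
    one_unit = (x * pow(w, -1, m)) % m
    assert one_unit % p == 1       # <x> = x / omega(x) lies in 1 + p Z_p
```

This decomposition is exactly what allows the integrand $\pair{x}^{s-1}\omega(y)^{a-1}\pair{y}^{a-s}$ above to make $p$-adic sense for arbitrary $s \in \mathbb{Z}_p$.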
\section{Kronecker limit formulas} \subsection{Kronecker's Theorem} In this section, we recall the definition of the Eisenstein-Kronecker-Lerch series and the Kronecker theta function. Then we will state Kronecker's theorem giving the relation between the two. All of the results are contained in \cite{We1}. See also \cite{BK1} or \cite{BKT}. We fix a lattice $\Gamma$ in $\mathbb{C}$ and let $A$ be the area of the fundamental domain of $\Gamma$ divided by $\pi$. Let $a$ be an integer $\geq 0$. For a fixed $z_0$, $w_0 \in \mathbb{C}$, we let $\theta^*_a(t,z_0,w_0)$ be the function $$ \theta^*_a(t,z_0,w_0) = {\sum_{\gamma \in \Gamma}}^* \exp(- t|z_0 + \gamma|^2/A) \pair{\gamma, w_0} (\overline z_0 +\overline \gamma)^a, $$ where $\sum^*$ means the sum taken over all $\gamma \in \Gamma$ other than $-z_0$ if $z_0$ is in $\Gamma$. Furthermore, we let $$ I_a(z_0, w_0, s) := \int_{1}^\infty\theta^*_a(t,z_0,w_0) t^{s-1} dt. $$ Then we have \begin{multline}\label{eq: integral expression} A^s \Gamma(s) K^*_a(z_0, w_0, s) = I_a(z_0, w_0, s) - \frac{\delta_{z_0, a}}{s} \pair{w_0,z_0}\\ + I_{a} (w_0, z_0, a+1-s) \pair{w_0,z_0} + \frac{\delta_{w_0, a}}{s-1}, \end{multline} where $\delta_{x, a} = 1$ if $a =0$ and $x \in \Gamma$, and $\delta_{x, a} = 0$ otherwise. The above integral expression gives the meromorphic continuation of $K^*_a(z_0, w_0, s)$ to the whole complex plane, and also the functional equation. We next review the definition of the Kronecker theta function. We let $\theta(z)$ be the reduced theta function on $\mathbb{C}/\Gamma$ associated to the divisor $[0] \subset \mathbb{C}/\Gamma$, normalized so that $\theta'(0) =1$. This function may be given explicitly in terms of the Weierstrass $\sigma$-function \begin{equation*}\label{product expansion of sigma} \sigma(z) : = z \prod_{\gamma \in \Gamma \setminus \{ 0 \}} \left( 1 - \frac{z}{\gamma} \right) \Exp{ \frac{z}{\gamma} + \frac{z^2}{2 \gamma^2}} \end{equation*} as follows.
Let $e_{0,2}^* := \lim_{s \rightarrow 2^+} \sum_{\gamma \in \Gamma \setminus \{ 0 \}} \overline{\gamma}^2 |\gamma|^{-2s}$. Then $\theta(z)$ is given as $$ \theta(z) = \Exp{\frac{- e_{0,2}^* z^2 }{2}} \sigma(z). $$ This function is known to satisfy the transformation formula $$ \theta(z+ \gamma) = \varepsilon(\gamma) \Exp{ \frac{z \overline \gamma}{A} + \frac{\gamma \overline \gamma}{2A}} \theta(z) $$ for any $\gamma \in \Gamma$, where $\varepsilon : \Gamma \rightarrow \{ \pm 1 \}$ is such that $\varepsilon(\gamma) = 1$ if $\gamma \in 2 \Gamma$ and $\varepsilon(\gamma)=-1$ otherwise. We define the Kronecker theta function $\Theta(z,w)$ by $$ \Theta(z,w) := \frac{\theta(z+w)}{ \theta(z)\theta(w)}. $$ The above function is known to be a reduced theta function associated to the Poincar\'e bundle on $\mathbb{C}/\Gamma\times\mathbb{C}/\Gamma$. For any $z$, $w \in \mathbb{C}$ such that $z$, $w \not\in \Gamma$, we let $K_a(z,w,s) := K^*_a(z,w,s)$, which we view as a $\mathscr C^\infty$ function of $z$ and $w$. The relation between this function and the Kronecker theta function is given by the following theorem due to Kronecker. \begin{theorem}[Kronecker]\label{theorem; kronecker} $$ \Theta(z,w) = \exp\left[ \frac{z \overline w}{A}\right] K_1(z,w,1). $$ \end{theorem} The above theorem was originally proved in terms of Jacobi theta functions by Kronecker using moduli arguments (see for example \cite{We1}). In \cite{BK1} or \cite{BK2}, we give another proof valid for a fixed lattice $\Gamma\subset\mathbb{C}$ using the fact that both sides of the equality are reduced meromorphic theta functions associated to the Poincar\'e bundle on $\mathbb{C}/\Gamma \times \mathbb{C}/\Gamma$, with the same poles and the same residue at each pole. \subsection{Proof of the second limit formula.} In this subsection, we deduce Theorem \ref{thm: KLF} (ii) from Theorem \ref{theorem; kronecker}.
\begin{proposition}\label{pro: up to C} There exists a constant $C$ such that $$ \log |\theta(z)|^2-\frac{|z|^2}{A}=-A K^*_0(0,z,1)+C $$ for any $z \notin \Gamma$. \end{proposition} \begin{proof} By Theorem \ref{theorem; kronecker} and the fact that $$ \lim_{z \rightarrow 0} \left[ K_1(z,w,1) -\frac{1}{ z} \right] = K^*_1(0,w,1), $$ we have \begin{equation*} \lim_{z \rightarrow 0} \left( \Theta(z,w) - \frac{1}{z} \right) = K^*_1(0,w,1) + \frac{\overline w}{A}. \end{equation*} Direct computation also shows that \begin{equation*} \lim_{z \rightarrow 0} \left(\Theta(z,w) - \frac{1}{z} \right)= \frac{\theta'(w)}{\theta(w)}. \end{equation*} Hence we have $$ K^*_1(0,w,1) + \frac{\overline w}{A} = \frac{\theta'(w)}{\theta(w)}. $$ In particular, \begin{equation*}\label{equation: leff} \frac{\partial}{\partial z} \left( \log \theta(z) - \frac{z \overline z}{A} \right)= K^*_1(0,z,1). \end{equation*} Therefore, if we let $\Xi(z)$ be the function $$ \Xi(z) := \log |\theta(z)|^2 - \frac{|z|^2}{A}, $$ then we have \begin{align*} \frac{\partial}{\partial z} \Xi(z) = K^*_1(0,z,1), \qquad \frac{\partial}{\partial \overline z} \Xi(z) = \overline{K^*_{1}(0,z,1)}. \end{align*} On the other hand, one can directly show that \begin{align*} A\frac{\partial}{\partial z} K^*_0(0,z,1) =- {K^*_1(0,z,1)}, \qquad A\frac{\partial}{\partial \overline z} K^*_0(0,z,1)=- {\overline{K^*_1(0,z,1)}}. \end{align*} (See for example Lemma 2.5 and the first formula on p.~22 of \cite{BKT}.) Hence the partial derivatives of $\Xi(z)+AK^*_0(0,z,1)$ with respect to $z$ and $\overline z$ both vanish, so this function must be constant. \end{proof} Our goal is to determine the constant $C$. We use the following result, which is a type of distribution relation. \begin{lemma}\label{lem: C} We have $$ \sum_{z_n \not=0} \;K^*_0(0, z_n, 1)=-\frac{2\log n}{A}, $$ where the sum is over all $n$-torsion points $z_n$ of $\mathbb{C}/\Gamma$ except zero.
\end{lemma} \begin{proof} We have $$ \sum_{z_n \in \frac{1}{n}\Gamma/\Gamma} \pair{\gamma, z_n}= \begin{cases} n^2 \qquad &(\gamma \in n\Gamma) \\ 0 \qquad &(\gamma \notin n\Gamma). \end{cases} $$ Hence $$ \frac{1}{n^2}\sum_{z_n} \;K^*_0(0, z_n, s)=\sum_{\gamma \in n\Gamma} \frac{1}{|\gamma|^{2s}}=\frac{1}{n^{2s}}K^*_0(0, 0, s) $$ when the real part of $s$ is sufficiently large, and hence for any $s$ by analytic continuation. In particular, we have $$ \frac{1}{n^2}\sum_{z_n\not=0} \;K^*_0(0, z_n, s)=\left(\frac{1}{n^{2s}}-\frac{1}{n^2}\right)K^*_0(0, 0, s). $$ Since the residue of $K^*_0(0, 0, s)$ at $s=1$ is $1/A$ and $\frac{1}{n^{2s}}-\frac{1}{n^2}=-\frac{2\log n}{n^{2}}(s-1)+O\left((s-1)^2\right)$, letting $s \rightarrow 1$ we obtain $$ \frac{1}{n^2}\sum_{z_n\not=0} \;K^*_0(0, z_n, 1)=-\frac{2 \log n}{n^{2}A} $$ as desired. \end{proof} The above lemma shows that the constant $C$ is $$ C=\frac{1}{n^2-1}\left[\sum_{z_n\not=0} \left( \log |\theta(z_n)|^2-\frac{|z_n|^2}{A}\right)-2\log n\right]. $$ We will now calculate this value explicitly in terms of $\Delta$. \begin{proposition}\label{pro: C} We have $$ \frac{1}{4} \log|\Delta'|^2=- \sum_{z_2\not=0}\left( \log |\theta(z_2)|^2-\frac{|z_2|^2}{A}\right) $$ where $z_2$ runs through the non-trivial $2$-torsion points of $\mathbb{C}/\Gamma$ and $$ \Delta'=(e_1-e_2)^2(e_2-e_3)^2(e_3-e_1)^2 $$ for $y^2=4x^3-g_2x-g_3=4(x-e_1)(x-e_2)(x-e_3)$. \end{proposition} \begin{proof} Note that $$ (x-e_1)(x-e_2)(x-e_3)=\prod_{z_2\not=0}(x-\wp(z_2)). $$ Then if $\Gamma=\mathbb{Z} \omega_1+\mathbb{Z} \omega_2$, we may suppose that $e_1=\wp(\omega_1/2)$, $e_2=\wp(\omega_2/2)$ and $e_3=\wp((\omega_1+\omega_2)/2)$.
Since $$ {\theta(z+w) \theta(z-w)}{\theta(z)^{-2}\theta(w)^{-2}}=\wp(w)-\wp(z), $$ we have $$ {\theta\left(\frac{\omega_1+\omega_2}{2}\right)\theta\left(\frac{\omega_1-\omega_2}{2}\right)} {\theta\left(\frac{\omega_1}{2}\right)^{-2}\theta\left(\frac{\omega_2}{2}\right)^{-2}} =e_2-e_1, $$ $$ {\theta\left(\omega_1+\frac{\omega_2}{2}\right)\theta\left(\frac{\omega_2}{2}\right)} {\theta\left(\frac{\omega_1+\omega_2}{2}\right)^{-2}\theta\left(\frac{\omega_1}{2}\right)^{-2}} =e_1-e_3, $$ $$ {\theta\left(\omega_2+\frac{\omega_1}{2}\right)\theta\left(\frac{\omega_1}{2}\right)} {\theta\left(\frac{\omega_1+\omega_2}{2}\right)^{-2}\theta\left(\frac{\omega_2}{2}\right)^{-2}} =e_2-e_3. $$ Hence using the transformation formula of $\theta(z)$, the value $\Delta'$ is $$ \exp\left[\frac{\omega_1\overline{\omega_1}+\omega_2\overline{\omega_2}+\overline{\omega_1}\omega_2}{A}\right] {\theta\left(\frac{\omega_1}{2}\right)^{-4} \theta\left(\frac{\omega_2}{2}\right)^{-4}\theta\left(\frac{\omega_1+\omega_2}{2}\right)^{-4}}. $$ Multiplying this expression by its complex conjugate and taking the logarithm, we obtain the formula. Note that since we take the logarithm of {\it positive real} numbers, the values do not depend on the choice of the branch of the logarithm. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: KLF} (ii)] Since the Ramanujan $\Delta$ is given by $\Delta=2^4 \Delta'$, we have by Lemma \ref{lem: C} and Proposition \ref{pro: C} $$ C=\frac{1}{3}\left(-\frac{1}{4} \log|\Delta'|^2-2\log 2\right)=-\frac{1}{12}\log|\Delta|^2. $$ Our assertion now follows from Proposition \ref{pro: up to C}. \end{proof} \subsection{Proof of the first limit formula.} We now prove Theorem \ref{thm: KLF} (i) using the second limit formula. \begin{proof}[Proof of Theorem \ref{thm: KLF} (i)] From (\ref{eq: integral expression}), we have \begin{align*} A^{s-1} \Gamma(s) & \left(A K^*_0(0, 0, s)-\frac{1}{s-1}\right)\\ &= I_0(0, 0, s) - \frac{1}{s} + I_{0} (0, 0, 1-s) - \frac{A^{s-1} \Gamma(s)-1}{s-1}.
\end{align*} Therefore, we have \begin{equation}\label{eq: integral expression 1} \lim_{s \rightarrow 1}\left(A K^*_0(0, 0, s)-\frac{1}{s-1}\right) = I_0(0, 0, 1) -1 + I_{0} (0, 0, 0) -\log A+c, \end{equation} where $c$ is the Euler constant as before and we used the fact that $\Gamma'(1)=-c$. On the other hand, we have \begin{equation*} AK^*_0(0, z, 1) = I_0(0, z, 1) - 1 + I_{0} (z, 0, 0). \end{equation*} We let $$I_{0}^* (z, 0, s)= I_{0} (z, 0, s)-\int_{1}^\infty \exp(-t|z|^2/A) t^{s-1} dt.$$ Then $\displaystyle\lim_{z \rightarrow 0} I_0(0, z, 1)= I_0(0, 0, 1)$ and $\displaystyle\lim_{z \rightarrow 0} I_0^*(z, 0, 0)= I_0(0, 0, 0)$. We have \begin{align*} \Gamma(s)-\frac{1}{s}&= \int_{|z|^2/A}^\infty e^{-t} t^{s-1} dt+\int_{0}^{|z|^2/A} e^{-t} t^{s-1} dt-\frac{1}{s}\\ &= \int_{|z|^2/A}^\infty e^{-t} t^{s-1} dt+\int_{0}^{|z|^2/A} (e^{-t}-1) t^{s-1} dt+ \frac{1}{s}\left[ \left(\frac{|z|^2}{A}\right)^s-1\right]. \end{align*} Taking $s \rightarrow 0$, we have $$ -c=\int_{|z|^2/A}^\infty e^{-t} t^{-1} dt+\int_{0}^{|z|^2/A} (e^{-t}-1) t^{-1} dt +\log \left(\frac{|z|^2}{A}\right). $$ Hence \begin{align*} &AK^*_0(0, z, 1) = I_0(0, z, 1) - 1 + I^*_{0} (z, 0, 0)+\int_{1}^\infty \exp(-t|z|^2/A) t^{-1} dt \\ &= I_0(0, z, 1) - 1 + I^*_{0} (z, 0, 0) -c-\int_{0}^{|z|^2/A} (e^{-t}-1) t^{-1} dt-\log \left(\frac{|z|^2}{A}\right). \end{align*} Therefore \begin{align*} \lim_{z \rightarrow 0} \left(AK^*_0(0, z, 1)+\log |z|^2 \right)= I_0(0, 0, 1) - 1 + I_{0} (0, 0, 0) -c+\log A. \end{align*} Finally, combining this with (\ref{eq: integral expression 1}) and the second limit formula, we have \begin{align*} \lim_{s \rightarrow 1}\left(A K^*_0(0, 0, s)-\frac{1}{s-1}\right)&= \lim_{z \rightarrow 0} \left(AK^*_0(0, z, 1)+\log |z|^2 \right) -2\log A+2c \\ &=-\frac{1}{12}\log|\Delta|^2-2\log A+2c. \end{align*} This proves our assertion.
\end{proof} \section{$p$-adic Kronecker limit formulas} \subsection{The $p$-adic Eisenstein-Kronecker functions} Assume now the conditions of the second half of the introduction. In \cite{BFK}, we defined a $p$-adic analogue of the Kronecker double series as a Coleman function on a CM elliptic curve. In order to prove the $p$-adic limit formulas, we define in this subsection a $p$-adic analogue of the function $\log |\theta(z)|^2-|z|^2/A$, which turns out to be a Coleman function. We then prove the distribution relation, which will be used to characterize this function. Let $p$ be a prime $\geq 5$. In what follows, we fix an embedding of $\overline{\mathbb{Q}}$ into $\mathbb{C}_p$. We fix a branch of the logarithm, which is a homomorphism $\log_p: \mathbb{C}_p^\times \rightarrow \mathbb{C}_p$. We extend this homomorphism to $\mathbb{C}_p[[t]]^\times$ by using the decomposition $\mathbb{C}_p[[t]]^\times=\mathbb{C}_p^\times \times (1+t\mathbb{C}_p[[t]])$ and defining $\log_p (1-tf(t))=-\sum_{n \geq 1} {t^nf(t)^n}/n$ for any $f(t) \in \mathbb{C}_p[[t]]$. Let $E$ be a CM elliptic curve as in the introduction and let $\Gamma$ be the period lattice of $E\otimes\mathbb{C}$. For $z_0 \in \Gamma \otimes \mathbb{Q}$, we let $$ \theta_{z_0}(z):=\theta(z+z_0) \exp\left(-\frac{z\overline{z_0}}{A}-\frac{z_0\overline{z_0}}{2A}\right). $$ Then by \cite{BK1}, the Taylor series of $\theta_{z_0}(z)$ at $z=0$ has algebraic coefficients. If we consider the formal composition $$ \widehat{\theta}_{z_0}(t):=\theta_{z_0}(z)|_{z=\lambda(t)} $$ of this series with $\lambda(t)$, where $\lambda(t)$ is the formal logarithm of the formal group of $E$, then we may regard this power series as an element in $\mathbb{C}_p[[t]]$. Considering its derivatives, we may prove that $\log_p \widehat{\theta}_{z_0}(t)$ is a rigid analytic function on the open unit disc over $\mathbb{C}_p$, namely, it is convergent if $|t|<1$. We use the same notations as in \cite{BFK}.
In particular, we fix a prime $\mathfrak{p}$ in $\mathcal{O}_{\boldsymbol{K}}$ over $p \geq 5$, and we let $\pi:= \psi_{E/{\boldsymbol{K}}}(\mathfrak{p})$, where $\psi_{E/{\boldsymbol{K}}}$ is the Gr\"ossen character of ${\boldsymbol{K}}$ associated to the elliptic curve $E$. Then $\pi$ is a generator of the ideal $\mathfrak{p}$. \begin{definition}\label{def: log-p-theta} We let $\log _p \theta$ be the function in $A_{\log}(U)$ defined by $$ \log _p \theta |_{]z_0[}:=\log_p \widehat{\theta}_{z_0}(t) \in A_{\log}(]z_0[) $$ on each residue disc $]z_0[$. \end{definition} Now we investigate basic properties of $\log_p {\theta}$. \begin{proposition} For $z_0 \in \Gamma \otimes \mathbb{Q}$ and $z_\alpha$ such that $\alpha z_\alpha \in \Gamma$ for a $\pi$-power morphism $\alpha \in \mathrm{End}_{\overline{\mathbb{Q}}}(E)$, we have $$ \log_p \widehat{\theta}_{z_0}(t \oplus t_\alpha)=\log_p\widehat{\theta}_{z_0+z_\alpha}(t) $$ where $z_\alpha \in \mathbb{C}$ is a lift of a torsion point in $\mathbb{C}/\Gamma$ corresponding to $t_\alpha \in E(\mathbb{C}_p)_{\mathrm{tor}}$, and the right hand side is independent of the choice of the lift. \end{proposition} \begin{proof} Let $\alpha$ and $\beta$ be elements of $\mathcal{O}_{\boldsymbol{K}}$ such that $2\alpha | \beta$ and $\beta z_0 \in \Gamma$. Then $f_{\beta}(z):=\theta(z)^{N\beta}/\theta( \beta z)$ is a rational function on $E$ over $\overline{\mathbb{Q}}$. We have \begin{equation}\label{equation: logtheta} \theta_{z_0+z_\alpha}(z)^{N\beta}= \pm \theta( \beta z) \tau_{z_0+z_\alpha}^*f_{\beta}(z). \end{equation} Similarly, we have $$ \widehat{\theta}_{z_0}(t)^{N\beta} = \pm \theta( [\beta] t) \tau_{z_0}^*f_{\beta}(t). $$ Since $f_\beta$ is a rational function, we also have $$ \tau_{z_0+z_\alpha}^*f_{\beta}(t)= \tau_{z_0}^*f_{\beta}(t\oplus t_\alpha). $$ Hence we have \begin{equation}\label{equation: logtheta2} \widehat{\theta}_{z_0}(t \oplus t_\alpha)^{N\beta} = \pm \theta( [\beta] t) \tau_{z_0+z_\alpha}^*f_{\beta}(t).
\end{equation} Our assertion now follows from \eqref{equation: logtheta} and \eqref{equation: logtheta2}. \end{proof} \begin{corollary}\label{corollary: interpolation} Let $t_\alpha$ be a $\pi$-power torsion point, and assume that $z_0 \not=0$ or $t_\alpha\not=0$. Then we have $$ \log_p \widehat{\theta}_{z_0}(t_\alpha)= \log_p \left({\theta}(z_0+z_\alpha) \exp\left[-\frac{(z_0+z_\alpha)\overline{(z_0+z_\alpha)}}{2A}\right] \right). $$ \end{corollary} Roughly speaking, $\log_p \theta(z)$ is a $p$-adic function which interpolates the special values $\log \theta(z)-z \overline{z}/2A$ at torsion points. We thus regard $\log_p \theta(z)$ as a $p$-adic analogue of the function $\log |\theta(z)|^2-|z|^2/A$. \subsection{The $p$-adic second limit formula} In this subsection, we prove Theorem \ref{thm: pKLF}, which is a $p$-adic analogue of the Kronecker second limit formula. \begin{proposition}\label{proposition: theta distribution} Let $z_0$ be an $\mathfrak{f}$-torsion point of $\mathbb{C}/\Gamma$. Then for $\alpha \in \mathcal{O}_{\boldsymbol{K}}$ we have $$ \theta_{\alpha z_0}( \alpha z)^{24 N(\alpha\mathfrak{f})}= \Delta^{2N(\alpha\mathfrak{f})(N\alpha-1)} \prod_{z_\alpha \in E[\alpha]} \theta_{z_0+z_\alpha}(z)^{24 N(\alpha\mathfrak{f})} $$ where $z_\alpha$ is a lift of an $\alpha$-torsion point of $E$ and the right hand side is independent of the choice of the lifts $z_0$ and $z_\alpha$ in $\mathbb{C}$. \end{proposition} \begin{proof} Since for $\gamma \in \Gamma$ we have $\theta_{z_0+\gamma}(z)=\pm \pair{z_0/2, \gamma} \theta_{z_0}(z)$, the function $ \theta_{z_0}(z)^{2N\mathfrak{f}}$ is independent of the lift $z_0$ if $(N\mathfrak{f})z_0 \in \Gamma$. The independence of the lifts of $z_0$ and $z_\alpha$ follows from this fact. The logarithmic derivatives of both sides coincide (see for example Proposition 2.15 of \cite{BFK}).
Hence for each $\alpha$, there exists a constant $c_\alpha$ such that $$ \theta_{\alpha z_0}( \alpha z)^{2N(\alpha\mathfrak{f})}= c_\alpha \prod_{z_\alpha \in E[\alpha]} \theta_{z_0+z_\alpha}(z)^{2N(\alpha\mathfrak{f})}. $$ By the definition of $\theta_{z_0}(z)$, we see that $c_{\alpha}$ is independent of $z_0$. Hence we may assume that $z_0=0$ and $N\mathfrak{f}=1$. Then we have \begin{align*} \prod_{z_{\alpha \beta} \in E[\alpha \beta]} \theta_{z_{\alpha \beta}}(z)^{2N(\alpha\beta)} &= \prod_{z_{\alpha \beta} \in E[\alpha \beta]/E[\alpha]} \prod_{z_{\alpha } \in E[\alpha ]} \theta_{z_{\alpha \beta}+z_{\alpha}}(z)^{2N(\alpha\beta)}\\ &= \prod_{z_{\alpha \beta} \in E[\alpha \beta]/E[\alpha]} c_\alpha^{N\beta} \theta_{ \alpha z_{\alpha \beta}}(\alpha z)^{2N(\alpha\beta)}\\ &= c_\alpha^{N\beta^2} c_\beta^{2N\alpha} \theta(\beta \alpha z)^{2N(\alpha\beta)}. \end{align*} Hence we have $ c_\alpha^{N\beta^2} c_\beta^{2N\alpha}=c_{\alpha \beta}=c_\beta^{N\alpha^2} c_\alpha^{2N\beta} $ or equivalently, $$ c_\alpha^{N\beta(N\beta-1)}=c_\beta^{N\alpha(N\alpha-1)}. $$ In particular, $c_\alpha^{12}=c_2^{N\alpha(N\alpha-1)}$. On the other hand, we consider the constant term of $$ \frac{\theta(2z)^8}{\theta(z)^8}=c_2 \prod_{z_2 \in E[2]-\{0\}} \theta_{z_2}(z)^{8}. $$ As in the proof of Proposition \ref{pro: C}, we have $$ \prod_{z_2 \in E[2]-\{0\}} \theta_{z_2}(0)^{8}=\Delta'^{-2}. $$ Hence $c_2=2^8 \Delta'^2=\Delta^2$. Our assertion now follows from these facts. \end{proof} \begin{corollary} The function $\Xi(z):= - \log_p \theta (z) -\frac{1}{12}\log_p \Delta$ satisfies the distribution relation $$ \Xi(\alpha z)=\sum_{z_\alpha \in E[\alpha]} \Xi(z+z_\alpha). $$ \end{corollary} \begin{proof} By Proposition \ref{proposition: theta distribution}, we have $$ \log_p \widehat{\theta}_{\alpha z_0}([\alpha]t)=\frac{N\alpha-1}{12} \log_p \Delta +\sum_{t_\alpha \in E[\alpha]} \log_p \widehat{\theta}_{z_0}(t \oplus t_\alpha). $$ Our assertion follows from this formula.
\end{proof} We now prove the $p$-adic second limit formula. \begin{proof}[Proof of Theorem \ref{thm: pKLF}] Since the derivatives of $\Xi(z)$ and $$ K_0^\mathrm{col}(0,z,1):=E^{\mathrm{col}}_{1,1}(z) $$ are equal, their difference $c(z)=\Xi(z)-E^{\mathrm{col}}_{1,1}(z)$ is a constant on each residue disc $]z_0[$. By the corollary above and the definition of $E^{\mathrm{col}}_{1,1}(z)$, the locally constant function $c(z)$ satisfies the distribution relation. For any torsion point $z_0$ of order $\mathfrak{f}$, we take $N$ such that $\pi^N \equiv 1 \mod \mathfrak{f}$. Then $[\pi^N]^*(]z_0[)=]z_0[$ and $$ [\pi^N]^*c(z) |_{]z_0[}=\sum_{w \in E[\pi^N]} c(z+w) |_{]z_0[}. $$ Since $c(z) |_{]z_0[}$ is constant, the above relation shows $c(z) |_{]z_0[}=0$. \end{proof} The above result shows in particular that $\Xi(z)=- \log_p \theta (z) -\frac{1}{12}\log_p \Delta$ is in fact a Coleman function. \subsection{$p$-adic Eisenstein-Kronecker-Lerch series} In this subsection, we introduce the $p$-adic Eisenstein-Kronecker-Lerch series. Then we consider a $p$-adic counterpart to the arguments concerning the Kronecker first limit formula in the classical case and prove Proposition \ref{pro: pKLF}. Let $p \geq 5$ be a prime of good \textit{ordinary} reduction for $E$, and fix a prime $\mathfrak{p}$ of $\mathcal{O}_{\boldsymbol{K}}$ over $p$. We defined in \cite{BK1} \S 3.1 a $p$-adic measure $\mu:=\mu_{0, 0}$ on $\mathbb{Z}_p \times \mathbb{Z}_p$ interpolating the Eisenstein-Kronecker numbers, or more precisely, the special values of the Eisenstein-Kronecker-Lerch series $K^*_{a+b}(0,0,b; \Gamma)/A(\Gamma)^a$ for $a$, $b \geq 0$, where $\Gamma$ is the period lattice of $E$. We define the $p$-adic Eisenstein-Kronecker-Lerch function as in the introduction as follows.
\begin{definition} For any integer $a \in \mathbb{Z}$, we define the \textit{$p$-adic Eisenstein-Kronecker-Lerch function} by $$ K^{(p)}_a(0,0,s) := \int_{\mathbb{Z}_p^\times \times \mathbb{Z}_p^\times} \pair{x}^{s-1}\pair{y}^{a-s} \omega(y)^{a-1} d \mu(x,y). $$ \end{definition} The $p$-adic Eisenstein-Kronecker-Lerch function is analytic in $s \in \mathbb{Z}_p$. The reason we view this function as a $p$-adic analogue of the Eisenstein-Kronecker-Lerch series is the following interpolation property. \begin{proposition} For any integers $a$, $b$ such that $a \geq b > 0$ and $b \equiv 1 \pmod{p-1}$, we have \begin{equation}\label{eq: interpolation} \frac{K^{(p)}_a(0,0,b)}{\Omega_p^{a-1}} =(-1)^{a-1}(b-1)! \left(1-\frac{\pi^{a}}{p^{a-b+1}}\right) \left(1-\frac{\pi^{a}}{p^b} \right)\frac{K^*_{a}(0,0,b)}{A(\Gamma)^{a-b}}, \end{equation} where $\Omega_p$ is a $p$-adic period of the formal group of $E$. \end{proposition} \begin{proof} This follows from the interpolation property of the measure $\mu:=\mu_{0,0}$ given in \cite{BK1} Proposition 3.5. \end{proof} We now give the proof of Proposition \ref{pro: pKLF}. \begin{proof}[Proof of Proposition \ref{pro: pKLF}] We consider the function $$ f(t):=\Omega_p \int_{\mathbb{Z}_p^\times \times \mathbb{Z}_p^\times}y^{-1} \exp(y\Omega_p^{-1} \lambda(t)) d\mu(x,y) $$ on the $p$-adic residue disc $]0[$ around $0$. Then \begin{multline*} \lambda'(t)^{-1}\frac{d}{dt}f(t)=\int_{\mathbb{Z}_p^\times \times \mathbb{Z}_p^\times} \exp(y\Omega_p^{-1} \lambda(t)) d\mu(x,y)\\ = F_1(t; \Gamma)-{\overline{\pi}}^{-1}F_1([\pi]t; \Gamma)-F_1(t; \overline{\mathfrak{p}}\Gamma) +{\overline{\pi}}^{-1} F_1([\pi]t; \overline{\mathfrak{p}}\Gamma). \end{multline*} Hence for $E_{1,1}^{(p)}(z; \Gamma) := E_{1,1}^\mathrm{col}(z; \Gamma) - p^{-1} E_{1,1}^\mathrm{col}(\pi z; \Gamma)$, the function $$ f(t)-E_{1,1}^{(p)}(z; \Gamma)+E_{1,1}^{(p)}(z; \overline{\mathfrak{p}}\Gamma) $$ is constant on the residue disc $]0[$.
Furthermore, both functions satisfy the distribution relation $\sum_{t_{\pi} \in E[\pi]} f(t\oplus t_{\pi})=0$ and $\sum_{t_{\pi} \in E[\pi]} E_{1,1}^{(p)}(t\oplus t_{\pi})=0$. Hence we must have $f(t)=E_{1,1}^{(p)}(z; \Gamma)-E_{1,1}^{(p)}(z; \overline{\mathfrak{p}}\Gamma)$ on ${]0[}$. On the other hand, the $p$-adic second limit formula shows that $$ E_{1,1}^{(p)}(z; \Gamma)=\log_p \theta(z; \Gamma) -\frac{1}{p}\log_p \theta(\pi z; \Gamma)+\frac{1}{12}\left(1-\frac{1}{p}\right) \log_p \Delta(\Gamma), $$ hence we have $$ f(0) = \bigl.E_{1,1}^{(p)}(z; \Gamma)-E_{1,1}^{(p)}( z; \overline{\mathfrak{p}}\Gamma)\;\bigr|_{z=0}= \left(1-\frac{1}{p}\right) \log_p \overline{\pi}. $$ Our assertion now follows from the fact that $ f(0)=\Omega_p K^{(p)}_0(0,0,1). $ \end{proof} \begin{remark}\label{rem: not pKLF} In the interpolation formula \eqref{eq: interpolation}, if we let $a=0$ and $b=1$, then the interpolation factor on the right hand side vanishes. Hence the value $$ \Omega_p K^{(p)}_0(0,0,1) = \int_{\mathbb{Z}_p^\times \times \mathbb{Z}_p^\times} y^{-1} d\mu(x,y) $$ is in some sense not the constant term but the residue at $s=1$ of the $p$-adic analogue of $\sum_{\gamma \in \Gamma}^*1/|\gamma|^{2s}$. Because of this fact, the formula of Proposition \ref{pro: pKLF} is not a perfect $p$-adic analogue of the classical Kronecker first limit formula. \end{remark} \end{document}
\begin{document} \title{Monte Carlo Co-Ordinate Ascent Variational Inference} \author[1]{Lifeng Ye} \author[1]{Alexandros Beskos} \author[1,2]{Maria De Iorio} \author[3]{Jie Hao} \affil[1]{Department of Statistical Science, University College London} \affil[2]{Yale-NUS College, Singapore} \affil[3]{Key Laboratory of Systems Biomedicine (Ministry of Education), Shanghai Center for Systems Biomedicine, Shanghai Jiao Tong University} \affil[ ]{\textit {[email protected], [email protected], [email protected], [email protected]}} \date{} \setcounter{Maxaffil}{0} \renewcommand\Affilfont{\itshape\small} \maketitle \begin{abstract} In Variational Inference (VI), coordinate-ascent and gradient-based approaches are two major types of algorithms for approximating difficult-to-compute probability densities. In real-world implementations of complex models, Monte Carlo methods are widely used to estimate expectations in coordinate-ascent approaches and gradients in derivative-driven ones. We discuss a Monte Carlo Co-ordinate Ascent VI (MC-CAVI) algorithm that makes use of Markov chain Monte Carlo (MCMC) methods in the calculation of expectations required within Co-ordinate Ascent VI (CAVI). We show that, under regularity conditions, an MC-CAVI recursion will get arbitrarily close to a maximiser of the evidence lower bound (ELBO) with arbitrarily high probability. In numerical examples, the performance of the MC-CAVI algorithm is compared with that of MCMC and -- as a representative of derivative-based VI methods -- of Black Box VI (BBVI). We discuss and demonstrate MC-CAVI's suitability for models with \emph{hard constraints} in simulated and real examples. We compare MC-CAVI's performance with that of MCMC in an important complex model used in Nuclear Magnetic Resonance (NMR) spectroscopy data analysis -- BBVI is nearly impossible to employ in this setting due to the hard constraints involved in the model. 
\end{abstract} \keywords{Variational Inference; Markov chain Monte Carlo; Coordinate-Ascent; \\ Gradient-Based Optimisation; Bayesian Inference; Nuclear Magnetic Resonance.} \section{Introduction} Variational Inference (VI) \citep{Jordan1999, wainwright_jordan_2014} is a powerful method to approximate intractable integrals. As an alternative strategy to Markov chain Monte Carlo (MCMC) sampling, VI is fast, relatively straightforward for monitoring convergence and typically easier to scale to large data \citep{bleivi} than MCMC. The key idea of VI is to approximate difficult-to-compute conditional densities of latent variables, given observations, via use of optimization. A family of distributions is assumed for the latent variables, as an approximation to the exact conditional distribution. VI aims at finding the member, amongst the selected family, that minimizes the Kullback-Leibler (KL) divergence from the conditional law of interest. Let $x$ and $z$ denote, respectively, the observed data and latent variables. The goal of the inference problem is to identify the conditional density (assuming a relevant reference measure, e.g.~Lebesgue) of latent variables given observations, i.e. $p(z| x)$. Let $\mathcal{L}$ denote a family of densities defined over the space of latent variables -- we denote members of this family as $q=q(z)$ below. The goal of VI is to find the element of the family closest in KL divergence to the true $p(z| x)$. Thus, the original inference problem can be rewritten as an optimization one: identify $q^*$ such that \begin{equation} \label{eq:min} q^* = \operatornamewithlimits{argmin}\limits_{q\in \mathcal{L}}\textrm{KL}(q\mid p(\cdot| x)) \end{equation} for the KL-divergence defined as \begin{align*} \textrm{KL}(q\mid p(\cdot | x)) &= \mathbb{E}_{q}[\log q(z)] - \mathbb{E}_q[\log p(z| x)] \\ &= \mathbb{E}_q[\log q(z)] - \mathbb{E}_q[\log p(z,x)] + \log p(x), \end{align*} with $\log p(x)$ being constant w.r.t.~$z$. 
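The decomposition above can be checked numerically on a toy conjugate model for which every term is available in closed form. The sketch below is purely illustrative (the model $z\sim\mathrm{N}(0,1)$, $x\,|\,z\sim\mathrm{N}(z,1)$ and all numerical choices are ours, not from the paper): it confirms that $\mathbb{E}_q[\log p(z,x)]-\mathbb{E}_q[\log q(z)]$ agrees with $\log p(x)-\textrm{KL}(q\mid p(\cdot|x))$ up to Monte Carlo error.

```python
import numpy as np

# Toy conjugate model (our own choice): z ~ N(0,1), x|z ~ N(z,1),
# so p(z|x) = N(x/2, 1/2) and the evidence is p(x) = N(x; 0, 2).
rng = np.random.default_rng(0)
x = 1.0

def log_norm(v, mean, var):
    return -0.5 * np.log(2 * np.pi * var) - (v - mean) ** 2 / (2 * var)

# Variational candidate q(z) = N(m, s^2)
m, s = 0.3, 0.8

# ELBO(q) = E_q[log p(z,x)] - E_q[log q(z)], estimated by Monte Carlo
z = rng.normal(m, s, size=200_000)
elbo = np.mean(log_norm(z, 0.0, 1.0) + log_norm(x, z, 1.0) - log_norm(z, m, s))

# Closed-form KL divergence between the two Gaussians q and p(.|x)
mu, var = x / 2, 0.5
kl = np.log(np.sqrt(var) / s) + (s**2 + (m - mu) ** 2) / (2 * var) - 0.5

log_px = log_norm(x, 0.0, 2.0)
print(elbo, log_px - kl)  # the two quantities agree up to Monte Carlo error
```

Since $\log p(x)$ does not depend on $q$, maximising the first quantity over $(m,s)$ is the same as minimising the KL term, which is the point of the identity.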
Notation $\mathbb{E}_q$ refers to expectation taken over $z\sim q$. Thus, minimizing the KL divergence is equivalent to maximising the evidence lower bound, ELBO$(q)$, given by \begin{equation} \textrm{ELBO}(q) = \mathbb{E}_q[\log p(z,x)] - \mathbb{E}_q[\log q(z)]. \label{elbo1} \end{equation} Let $\mathsf{S}_p\subseteq \mathbb{R}^{m}$, $m\ge 1$, denote the support of the target $p(z|x)$, and $\mathsf{S}_{q}\subseteq \mathbb{R}^{m}$ the support of a variational density $q\in\mathcal{L}$ -- assumed to be common over all members $q\in\mathcal{L}$. Necessarily, $\mathsf{S}_p\subseteq \mathsf{S}_q$, otherwise the KL-divergence will diverge to $+\infty$. Many VI algorithms focus on the mean-field variational family, where variational densities in $\mathcal{L}$ are assumed to factorise over blocks of $z$. That is, \begin{equation} \label{eq:meanfield} q(z) = \prod_{i=1}^b q_i(z_i),\quad \mathsf{S}_q = \mathsf{S}_{q_{1}}\times \cdots \times \mathsf{S}_{q_{b}}, \quad z=(z_1,\ldots, z_{b})\in \mathsf{S}_q, \,\,\,z_i\in \mathsf{S}_{q_{i}}, \end{equation} for individual supports $\mathsf{S}_{q_{i}}\subseteq\mathbb{R}^{m_i}$, $m_i\ge 1$, $1\le i\le b$, for some $b\ge 1$, and $\sum_{i}m_i =m$. It is advisable that highly correlated latent variables are placed in the same block to improve the performance of the VI method. There are, in general, two types of approaches to maximise ELBO in VI: a co-ordinate ascent approach and a gradient-based one. Co-ordinate ascent VI (CAVI) \citep{bishop_2006} is amongst the most commonly used algorithms in this context. To obtain a local maximiser for ELBO, CAVI sequentially optimizes each factor of the mean-field variational density, while holding the others fixed. 
Analytical calculations on function space -- involving variational derivatives -- imply that, for given fixed $q_1,\ldots, q_{i-1},q_{i+1},\ldots, q_b$, ELBO$(q)$ is maximised for \begin{equation} \label{eq:recursion} q_i(z_i)\propto \exp\big\{\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]\big\}, \end{equation} \noindent where $z_{-i}:=(z_{i_-},z_{i_+})$ denotes vector $z$ having removed component $z_i$, with ${i_-}$ (resp.~${i_+}$) denoting the ordered indices that are smaller (resp.~larger) than~$i$; $\mathbb{E}_{-i}$ is the expectation taken under $z_{-i}$ following its variational distribution, denoted $q_{-i}$. The above immediately suggests an iterative algorithm, guaranteed to provide values for ELBO$(q)$ that cannot decrease as the updates are carried out. The expected value $\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]$ can be difficult to derive analytically. Also, CAVI typically requires traversing the entire dataset at each iteration, which can be overly computationally expensive for large datasets. Gradient-based approaches, which can potentially scale up to large data -- alluding here to recent Stochastic-Gradient-type methods -- can be an effective alternative for ELBO optimisation. However, such algorithms have their own challenges: e.g.~in the case of reparameterization Variational Bayes (VB), analytical derivation of gradients of the log-likelihood can often be problematic, while in the case of score-function VB the requirement of the gradient of $\log q$ restricts the range of the family $\mathcal{L}$ we can choose from. In real-world applications, hybrid methods combining Monte Carlo with recursive algorithms are common, e.g., Auto-Encoding Variational Bayes, Doubly-Stochastic Variational Bayes for non-conjugate inference, Stochastic Expectation-Maximisation (EM) \citep{Beaumont2025, Sisson1760, mcem}. In VI, Monte Carlo is often used to estimate the expectation within CAVI or the gradient within derivative-driven methods. 
This is the case, e.g., for Stochastic VI \citep{svi} and Black-Box VI (BBVI) \citep{bbvi}. BBVI is used in this work as a representative of gradient-based VI algorithms. It allows carrying out VI over a wide range of complex models. The variational density $q$ is typically chosen within a parametric family, so finding $q^*$ in~(\ref{eq:min}) is equivalent to determining an optimal set of parameters that characterize $q_i=q_i(\cdot|\lambda_i)$, $\lambda_{i}\in \Lambda_i\subseteq \mathbb{R}^{d_i}$, $1\le d_i$, $1\le i\le b$, with $\sum_{i=1}^{b}d_i=d$. The gradient of ELBO w.r.t.~the variational parameters $\lambda=(\lambda_1,\ldots,\lambda_b)$ equals \begin{equation} \label{eq:mainBB} \nabla_{\lambda} \textrm{ELBO}(q) := \mathbb{E}_q\big[\nabla_{\lambda}\log q(z| \lambda)\{\log p(z,x)-\log q(z| \lambda)\}\big] \end{equation} and can be approximated by black-box Monte Carlo estimators as, e.g., \begin{equation} \label{eq:BBVIest} \widehat{\nabla_{\lambda}\textrm{ELBO}(q)} := \tfrac{1}{N}\sum^N_{n=1}\big[\nabla_{\lambda}\log q(z^{(n)}| \lambda)\{\log p(z^{(n)},x)-\log q(z^{(n)}|\lambda)\}\big], \end{equation} with $z^{(n)} \stackrel{iid}{\sim} q(z| \lambda)$, $1\le n\le N$, $N\ge 1$. The approximated gradient of ELBO can then be used within a stochastic optimization procedure to update $\lambda$ at the $k$th iteration with \begin{equation} \lambda_{k+1} \leftarrow \lambda_k + \rho_k \widehat{\nabla_{\lambda_k}\textrm{ELBO}(q)}, \label{eq:BBVIit} \end{equation} where $\{\rho_k\}_{k\ge 0}$ is a Robbins-Monro-type step-size sequence \citep{robbins1951}. As we will see in later sections, BBVI is accompanied by generic variance reduction methods, as the variability of (\ref{eq:BBVIest}) for complex models can be large. 
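The estimator (\ref{eq:BBVIest}) and the update (\ref{eq:BBVIit}) can be illustrated on a one-dimensional toy problem. In the sketch below, the target, the variational family $q(z|\lambda)=\mathrm{N}(\lambda,1)$, the step sizes and all constants are our own illustrative choices; the exact ELBO maximiser here is $\lambda=2$.

```python
import numpy as np

# Score-function (BBVI-style) gradient ascent on a toy target.
# Illustrative setup: log p(z,x) is taken to be the log-density of N(2,1),
# and q(z | lam) = N(lam, 1), so grad_lam log q(z | lam) = z - lam.
rng = np.random.default_rng(1)

def log_norm(v, mean):
    return -0.5 * np.log(2 * np.pi) - 0.5 * (v - mean) ** 2

lam, N = 0.0, 500
for k in range(300):
    z = rng.normal(lam, 1.0, size=N)
    score = z - lam                           # grad_lam log q(z | lam)
    grad = np.mean(score * (log_norm(z, 2.0) - log_norm(z, lam)))
    lam += (1.0 / (k + 10)) * grad            # Robbins-Monro step sizes
print(lam)  # approaches the maximiser lam = 2
```

Even in this trivial example the raw score-function estimator is noisy, which is why the variance reduction devices discussed later in the paper matter in practice.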
\begin{remark}[Hard Constraints] \label{rem:issue} Though gradient-based VI methods are sometimes more straightforward to apply than co-ordinate ascent ones -- e.g.~combined with the use of modern approaches for automatic differentiation \citep{advi} -- co-ordinate ascent methods can still be important for models with \emph{hard constraints}, where gradient-based algorithms are laborious to apply. (We adopt the viewpoint here that one chooses variational densities that respect the constraints of the target, for improved accuracy.) Indeed, notice from the brief descriptions we have given above for CAVI and BBVI that the two methodologies are structurally different, as CAVI does not necessarily need to be built up via the introduction of an exogenous variational parameter $\lambda$. Thus, in the context of a support for the target $p(z|x)$ that involves complex constraints, a CAVI approach overcomes this issue naturally by blocking together the $z_i$'s responsible for the constraints. In contrast, introduction of the variational parameter $\lambda$ sometimes creates severe complications in the development of the derivative-driven algorithm, as normalising constants that depend on $\lambda$ are extremely difficult to calculate analytically, let alone differentiate. Thus, a main argument running through this work -- and illustrated within it -- is that co-ordinate-ascent-based VI methods have a critical role to play amongst VI approaches for important classes of statistical models. \end{remark} The main contributions of the paper are: \begin{itemize} \item[(i)] We discuss, and then apply, a Monte Carlo CAVI (MC-CAVI) algorithm in a sequence of problems of increasing complexity, and study its performance. As the name suggests, MC-CAVI uses the Monte Carlo principle for the approximation of the difficult-to-compute conditional expectations, $\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]$, within CAVI. 
\item[(ii)] We provide a justification for the algorithm by showing analytically that, under suitable regularity conditions, MC-CAVI will get arbitrarily close to a maximiser of the ELBO with high probability. \item[(iii)] We contrast MC-CAVI with MCMC and BBVI through simulated and real examples, some of which involve hard constraints; we demonstrate MC-CAVI's effectiveness in an important application imposing such hard constraints, with real data in the context of Nuclear Magnetic Resonance (NMR) spectroscopy. \end{itemize} \begin{remark} Inserting Monte Carlo steps within a VI approach (that might use a mean field or another approximation) is not uncommon in the VI literature. E.g., \cite{forb:07} employ an MCMC procedure in the context of a Variational EM (VEM), to obtain estimates of the normalizing constant for Markov Random Fields -- they provide asymptotic results for the correctness of the complete algorithm; \cite{tran:16} apply Mean-Field Variational Bayes (VB) for Generalised Linear Mixed Models, and use Monte Carlo for the approximation of analytically intractable required expectations under the variational densities; several references for related works are given in the above papers. Our work focuses on MC-CAVI, and develops theory that is appropriate for this VI method. This algorithm has \emph{not} been studied analytically in the literature, thus the development of its theoretical justification -- even if it borrows elements from Monte Carlo EM -- is new. \end{remark} The rest of the paper is organised as follows. Section \ref{sec:MCCAVI} presents briefly the MC-CAVI algorithm. It also provides -- in a specified setting -- an analytical result illustrating non-accumulation of Monte Carlo errors in the execution of the recursions of the algorithm. 
That is, with a probability arbitrarily close to 1, the variational solution provided by MC-CAVI can be as close as required to the one of CAVI, for a big enough Monte Carlo sample size, regardless of the number of algorithmic iterations. Section \ref{sec:numerics} shows two numerical examples, contrasting MC-CAVI with alternative algorithms. Section \ref{sec:nmr} presents an implementation of MC-CAVI in a real, complex, challenging posterior distribution arising in metabolomics. This is a practical application, involving hard constraints, chosen to illustrate the potential of MC-CAVI in this context. We finish with some conclusions in Section \ref{sec:discussion}. \section{MC-CAVI Algorithm} \label{sec:MCCAVI} \subsection{Description of the Algorithm} \label{subsec:CAVI} We begin with a description of the basic CAVI algorithm. A double subscript will be used to identify block variational densities: $q_{i,k}(z_i)$ (resp.~$q_{-i,k}(z_{-i})$) will refer to the density of the $i$th block (resp.~all blocks but the $i$th), after $k$ updates have been carried out on that block density (resp.~$k$ updates have been carried out on the blocks preceding the $i$th, and $k-1$ updates on the blocks following the $i$th). \begin{itemize} \item Step 0: Initialize probability density functions $q_{i,0}(z_i)$, $i=1,\ldots, b$. \item Step $k$: For $k\ge 1$, given $q_{i,k-1}(z_i)$, $i=1,\ldots, b$, execute: \begin{itemize} \item For $i=1,\ldots, b$, update: \begin{align*} \log q_{i,k}(z_i) = const. + \mathbb{E}_{-i,k}[\log p(z,x)], \end{align*} with $\mathbb{E}_{-i,k}$ taken w.r.t.~$z_{-i}\sim q_{-i,k}$. \end{itemize} \item Iterate until convergence. \end{itemize} \noindent Assume that the expectations $\mathbb{E}_{-i}[\log p(z,x)]$, $\{i:i\in\mathcal{I}\}$, for an index set $\mathcal{I}\subseteq\{1,\ldots, b\}$, can be obtained analytically, over all updates of the variational density $q(z)$; and that this is not the case for $i\notin\mathcal{I}$. 
Intractable integrals can be approximated via a Monte Carlo method. (As we will see in the applications in the sequel, such a Monte Carlo device typically uses samples from an appropriate MCMC algorithm.) In particular, for $i\notin \mathcal{I}$, one obtains $N\ge 1$ samples from the current $q_{-i}(z_{-i})$ and uses the standard Monte Carlo estimate \begin{equation*} \widehat{\mathbb{E}}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)] = \frac{1}{N}\sum_{n=1}^{N} \log p(z_{i_-}^{(n)},z_{i},z_{i_+}^{(n)},x). \end{equation*} Implementation of such an approach gives rise to MC-CAVI, described in Algorithm~\ref{MC-CAVI}. \begin{algorithm}[!h] \SetAlgoLined \SetKwInOut{Input}{Require} \SetKw{KwBy}{by} \Input{Number of iterations $T$. } \Input{Number of Monte Carlo samples $N$. } \Input{$\mathbb{E}_{-i} [\log p(z_{i_-},z_i, z_{i_+},x)]$ in closed form, for $i\in \mathcal{I}$. } Initialize $q_{i,0}(z_i)$, $i=1,\ldots, b$. \\ \For{$k= 1:T$ }{ \For{$i=1:b$ }{ If $i\in\mathcal{I}$, set $q_{i,k}(z_i) \propto \exp \big\{ \mathbb{E}_{-i,k}[\log p(z_{i_-},z_i, z_{i_+},x)] \big\} $ \; If $i\notin\mathcal{I}$:\\ Obtain $N$ samples, $(z_{i_{-},k}^{(n)},z_{i_{+},k-1}^{(n)})$, $1\le n \le N$, from $q_{-i,k}(z_{-i})$. \\ Set $$q_{i,k}(z_i) \propto \exp \big\{ \tfrac{1}{N}\sum_{n=1}^{N} \log p(z_{i_-,k}^{(n)},z_{i},z_{i_+,k-1}^{(n)},x) \big\}.$$ }} \caption{MC-CAVI}\label{MC-CAVI} \end{algorithm} \subsection{Applicability of MC-CAVI} We discuss here the class of problems for which MC-CAVI can be applied. It is desirable to avoid settings where the order of samples or statistics to be stored in memory increases with the iterations of the algorithm. To set up the ideas we begin with CAVI itself. Motivated by the standard exponential class of distributions, we work as follows. 
Consider the case when the target density $p(z,x)\equiv f(z)$ -- we omit reference to the data $x$ in what follows, as $x$ is fixed and irrelevant for our purposes (notice that $f$ is not required to integrate to $1$) -- is assumed to have the structure, \begin{align} \label{eq:class} f(z) = h(z)\exp\big\{ \langle \eta, T(z) \rangle - A(\eta) \big\},\quad z\in \mathsf{S}_p, \end{align} for $s$-dimensional constant vector $\eta=(\eta_1,\ldots, \eta_s)$, vector function $T(z)=(T_1(z),\ldots, T_{s}(z))$, with some $s\ge 1$, and relevant scalar functions $h>0$, $A$; $\langle \cdot,\cdot \rangle$ is the standard inner product in $\mathbb{R}^{s}$. Also, we are given the choice of block-variational densities $q_1(z_1),\ldots, q_b(z_b)$ in (\ref{eq:meanfield}). Following the definition of CAVI from Section \ref{subsec:CAVI} -- assuming that the algorithm can be applied, i.e.~all required expectations can be obtained analytically -- the number of `sufficient' statistics, say $T_{i,k}$ giving rise to the definition of $q_{i,k}$ will always be upper bounded by $s$. Thus, in our working scenario, CAVI will be applicable with a computational cost that is upper bounded by a constant within the class of target distributions in (\ref{eq:class}) -- assuming relevant costs for calculating expectations remain bounded over the algorithmic iterations. Moving on to MC-CAVI, following the definition of index set $\mathcal{I}$ in Section \ref{subsec:CAVI}, recall that a Monte Carlo approach is required when updating $q_i(z_i)$ for $i\notin \mathcal{I}$, $1\le i \le b$. In such a scenario, controlling computational costs amounts to having a target (\ref{eq:class}) admitting the factorisations, \begin{equation} \label{eq:fact} h(z) \equiv h_i(z_i)h_{-i}(z_{-i}),\quad T_{l}(z) \equiv T_{l,i}(z_{i})T_{l,-i}(z_{-i}), \,\,\,1\le l\le s, \quad\,\, \textrm{for all }\,i\notin \mathcal{I}. 
\end{equation} Once (\ref{eq:fact}) is satisfied, we do not need to store all $N$ samples from $q_{-i}(z_{-i})$, but simply some relevant averages, keeping the cost per iteration of the algorithm bounded. We stress that the combination of characterisations in (\ref{eq:class})-(\ref{eq:fact}) is very general and will typically be satisfied for most practical statistical models. \subsection{Theoretical Justification of MC-CAVI} An advantageous feature of MC-CAVI versus derivative-driven VI methods is its structural similarity with Monte Carlo Expectation-Maximization (MCEM). Thus, one can build on results in the MCEM literature to prove asymptotic properties of MC-CAVI; see e.g.~\cite{mc-em, boot:99, levi:01, fort:03}. To avoid technicalities related to working on general spaces of probability density functions, we begin by assuming a parameterised setting for the variational densities -- as in the BBVI case -- with the family of variational densities being closed under CAVI or (more generally) MC-CAVI updates. \begin{assumption}[Closedness of Parameterised $q(\cdot)$ Under Variational Update] \label{ass:family} For the CAVI or the MC-CAVI algorithm, each $q_{i,k}(z_i)$ density obtained during the iterations of the algorithm, $1\leq i\leq b$, $k\ge 0$, is of the parametric form $$q_{i,k}(z_i) = q_i(z_i|\lambda_{i}^{k}),$$ for a unique $\lambda_{i}^{k}\in \Lambda_i\subseteq \mathbb{R}^{d_i}$, for some $d_i\ge 1$, for all $1\le i \le b$. \\ (Let $d=\sum_{i=1}^b {d_i}$ and $\Lambda =\Lambda_1 \times \cdots \times \Lambda_b $.) 
\end{assumption} \noindent Under Assumption \ref{ass:family}, CAVI and MC-CAVI correspond to well-defined maps $M:\Lambda\mapsto\Lambda$, $\mathcal{M}_N:\Lambda\mapsto\Lambda$ respectively, so that, given current variational parameter $\lambda$, one step of the algorithms can be expressed in terms of a new parameter $\lambda'$ (different for each case) obtained via the updates \begin{equation*} \textrm{CAVI:}\,\,\,\,\lambda' = M(\lambda); \qquad \textrm{MC-CAVI:}\,\,\,\,\lambda' = \mathcal{M}_N(\lambda). \end{equation*} \indent For an analytical study of the convergence properties of CAVI itself and relevant regularity conditions, see e.g.~\cite[Proposition 2.7.1]{bert:99}, or numerous other resources in numerical optimisation. Expressing the MC-CAVI update -- say, the $(k+1)$th one -- as \begin{equation} \label{eq:perturb} \lambda^{k+1} = M(\lambda^k) + \{ \mathcal{M}_N(\lambda^k) - M(\lambda^k) \}, \end{equation} it can be seen as a random perturbation of a CAVI step. In the rest of this section we will explore the asymptotic properties of MC-CAVI. We follow closely the approach in \cite{mc-em} -- as it provides a less technical procedure, compared e.g.~to \cite{fort:03} or other works about MCEM -- making all appropriate adjustments to fit the derivations into the setting of the MC-CAVI methodology along the way. We denote by $M^{k}$, $\mathcal{M}_N^{k}$, the $k$-fold composition of $M$, $\mathcal{M}_{N}$ respectively, for $k\ge 0$. \begin{assumption} \label{ass:regular} $\Lambda$ is an open subset of $\mathbb{R}^{d}$, and the mappings $\lambda\mapsto \textrm{ELBO}(q(\lambda))$, $\lambda\mapsto M(\lambda)$ are continuous on $\Lambda$. \end{assumption} \noindent If $M(\lambda)=\lambda$ for some $\lambda\in \Lambda$, then $\lambda$ is a fixed point of $M(\cdot)$. 
A given $\lambda^*\in \Lambda$ is called an isolated local maximiser of the ELBO$(q(\cdot))$ if there is a neighborhood of $\lambda^*$ over which $\lambda^*$ is the unique maximiser of the ELBO$(q(\cdot))$. \begin{assumption}[Properties of $M(\cdot)$ Near a Local Maximum] \label{ass:M} Let $\lambda^*\in\Lambda$ be an isolated local maximum of ELBO$(q(\cdot))$. Then, \begin{itemize} \item[(i)] $\lambda^*$ is a fixed point of $M(\cdot)$; \item[(ii)] there is a neighborhood $V\subseteq \Lambda$ of $\lambda^*$ over which $\lambda^*$ is a unique maximum, such that $\textrm{ELBO}(q(M(\lambda)))>\textrm{ELBO}(q(\lambda))$ for any $\lambda\in V\backslash\{\lambda^*\}$. \end{itemize} \end{assumption} \noindent Notice that the above assumption refers to the deterministic update $M(\cdot)$, which performs co-ordinate ascent; thus requirements (i), (ii) are fairly weak for such a recursion. The critical technical assumption required for delivering the convergence results in the rest of this section is the following one. \begin{assumption}[Uniform Convergence in Probability on Compact Sets] \label{ass:technical} For any compact set $C\subseteq\Lambda$ the following holds: for any $\varrho,\varrho'>0$, there exists a positive integer $N_0$, such that for all $N\ge N_0$ we have, \begin{equation*} \inf_{\lambda\in C} \mathrm{Prob}\,\big[\, \big| \mathcal{M}_N(\lambda)-M(\lambda) \big| < \varrho \, \big] > 1-\varrho' . \end{equation*} \end{assumption} \noindent It is beyond the scope of this paper to examine Assumption \ref{ass:technical} in more depth. We will only stress that Assumption \ref{ass:technical} is the sufficient structural condition that allows us to extend closeness between CAVI and MC-CAVI updates in a single algorithmic step to closeness over an arbitrary number of steps. We continue with a definition. 
\begin{define} \label{def:stable} A fixed point $\lambda^*$ of $M(\cdot)$ is said to be asymptotically stable if, \begin{itemize} \item[(i)] for any neighborhood $V_1$ of $\lambda^*$, there is a neighborhood $V_2$ of $\lambda^*$ such that for all~$k\ge 0$ and all $\lambda\in V_2$, $M^k(\lambda)\in V_1$; \item[(ii)] there exists a neighbourhood $V$ of $\lambda^*$ such that $\lim_{k\rightarrow\infty}M^k(\lambda)=\lambda^*$ if $\lambda\in V$. \end{itemize} \end{define} We will state the main asymptotic result for MC-CAVI in Theorem \ref{th:stable} that follows; first we require Lemma \ref{lem:stable}. \begin{lemma} \label{lem:stable} Let Assumptions \ref{ass:family}-\ref{ass:M} hold. If $\lambda^*$ is an isolated local maximiser of $\textrm{ELBO}(q(\cdot))$, then $\lambda^*$ is an asymptotically stable fixed point of $M(\cdot)$. \end{lemma} The main result of this section is as follows. \begin{theorem} \label{th:stable} Let Assumptions \ref{ass:family}-\ref{ass:technical} hold and $\lambda^*$ be an isolated local maximiser of $\mathrm{ELBO}(q(\cdot))$. Then there exists a neighbourhood, say $V_1$, of $\lambda^*$ such that for starting values $\lambda\in V_1$ of the MC-CAVI algorithm and for all $\epsilon_1>0$, there exists a $k_0$ such that \begin{equation*} \lim_{N\rightarrow \infty}\mathrm{Prob}\,\big(\,|\mathcal{M}_N^{k}(\lambda)-\lambda^* | < \epsilon_1 \textrm{ for some } k\leq k_0\,\big)= 1. \end{equation*} \end{theorem} \noindent The proofs of Lemma \ref{lem:stable} and Theorem \ref{th:stable} can be found in Appendices \ref{sec:lem} and \ref{sec:theorem}, respectively. \subsection{Stopping Criterion and Sample Size} The method requires the specification of the Monte Carlo size $N$ and a stopping rule. \subsubsection*{Principled - but Impractical - Approach} As the algorithm approaches a local maximum, changes in ELBO should be getting closer to zero. 
To evaluate the performance of MC-CAVI, one could, in principle, attempt to monitor the evolution of ELBO during the algorithmic iterations. For current variational distribution $q=(q_1,\ldots, q_b)$, assume that MC-CAVI is about to update $q_i$ with $q'_i= q'_{i,N}$, where the addition of the second subscript at this point emphasizes the dependence of the new value for $q_i$ on the Monte Carlo size $N$. Define, \begin{equation*} \Delta\mathrm{ELBO}(q, N) = \mathrm{ELBO}(q_{i-},q'_{i,N},q_{i+}) - \mathrm{ELBO}(q). \end{equation*} If the algorithm is close to a local maximum, $\Delta$ELBO$(q, N)$ should be close to zero, at least for sufficiently large $N$. Given such a choice of $N$, an MC-CAVI recursion should be terminated once $\Delta$ELBO$(q, N)$ is smaller than a user-specified tolerance threshold. Assume that the random variable $\Delta$ELBO$(q, N)$ has mean $\mu = \mu(q, N)$ and variance $\sigma^2 = \sigma^2(q, N)$. Chebyshev's inequality implies that, with probability greater than or equal to $(1-1/K^2)$, $\Delta$ELBO$(q, N)$ lies within the interval $(\mu-K\sigma, \mu + K\sigma)$, for any real $K>0$. Assume that one fixes a large enough $K$. The choice of $N$ and of a stopping criterion should be based on the requirements: \begin{itemize} \item[(i)] $\sigma\leq \nu$, with $\nu$ a predetermined level of tolerance; \item[(ii)] the effective range $(\mu-K\sigma, \mu + K\sigma)$ should include zero, implying that $\Delta$ELBO$(q, N)$ differs from zero by less than $2K\sigma$. \end{itemize} Requirement (i) provides a rule for the choice of $N$ -- assumed to be applied over all $1\le i \le b$, for $q$ in areas close to a maximiser -- and requirement (ii) a rule for defining a stopping criterion. Unfortunately, the above considerations -- based on the proper term ELBO$(q)$ that VI aims to maximise -- involve quantities that are typically impossible to obtain analytically or via an approximation of reasonable cost. 
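If replicates of $\Delta$ELBO$(q, N)$ were somehow available, requirements (i)-(ii) would translate into a simple check, sketched below. This is purely illustrative: the function name `should_stop`, the values of $K$ and $\nu$, and the synthetic replicates are all our own choices.

```python
import numpy as np

# Illustrative check of the stopping rule: given hypothetical replicates of
# Delta-ELBO(q, N), require (i) their spread sigma <= nu, and (ii) that the
# Chebyshev interval (mu - K*sigma, mu + K*sigma) contains zero.
def should_stop(delta_elbos, K=3.0, nu=0.05):
    mu = np.mean(delta_elbos)
    sigma = np.std(delta_elbos, ddof=1)
    return bool(sigma <= nu and (mu - K * sigma) < 0.0 < (mu + K * sigma))

rng = np.random.default_rng(2)
near_max = rng.normal(0.0, 0.01, size=50)   # fluctuating around zero
improving = rng.normal(0.5, 0.01, size=50)  # ELBO still clearly increasing
print(should_stop(near_max), should_stop(improving))
```

In the first case the interval covers zero and the rule terminates; in the second the ELBO is still moving and the rule correctly refuses to stop.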
\subsubsection*{Practical Considerations} Similarly to MCEM, it is recommended that $N$ gets increased as the algorithm becomes more stable. It is computationally inefficient to start with a large value of $N$ when the current variational distribution can be far from the maximiser. In practice, one may monitor the convergence of the algorithm by plotting relevant \emph{statistics} of the variational distribution versus the number of iterations. We can declare that convergence has been reached when such traceplots show relatively small random fluctuations (due to the Monte Carlo variability) around a fixed value. At this point, one may terminate the algorithm or continue with a larger value of $N$, which will further decrease the traceplot variability. In the applications we encounter in the sequel, we typically have $N\le 100$, so calculating, for instance, Effective Sample Sizes to monitor the mixing performance of the MCMC steps is not practical. \section{Numerical Examples -- Simulation Study} \label{sec:numerics} In this section we illustrate MC-CAVI with two simulated examples. First, we apply MC-CAVI and CAVI on a simple model to highlight main features and implementation strategies. Then, we contrast MC-CAVI, MCMC, BBVI in a complex scenario with hard constraints. \subsection{Simulated Example 1} \label{sec:example1} We generate $n=10^3$ data points from $\mathrm{N}(10,100)$ and fit the semi-conjugate Bayesian model \begin{align*} \textrm{\underline{Example Model 1}} \\ {x_1, \ldots, x_n} &\sim \mathrm{N}(\vartheta,\tau^{-1}), \\ \vartheta &\sim \mathrm{N}(0,\tau^{-1}), \\ \tau &\sim \textrm{Gamma}(1,1). \end{align*} Let $\bar{x}$ be the data sample mean. 
In each iteration, the CAVI density function -- see (\ref{eq:recursion}) -- for $\tau$ is that of the Gamma distribution $\textrm{Gamma}(\tfrac{n+3}{2},\zeta)$, with \begin{align*} \zeta = 1 + \tfrac{(1+n)\mathbb{E}(\vartheta^2)-2(n\bar{x})\mathbb{E}(\vartheta)+\sum^n_{j=1}x^2_j}{2}, \end{align*} whereas for $\vartheta$ it is that of the normal distribution $\mathrm{N}(\frac{n\bar{x}}{1+n},\frac{1}{(1+n)\mathbb{E}(\tau)})$. $(\mathbb{E}(\vartheta),\mathbb{E}(\vartheta^2))$ and $\mathbb{E}(\tau)$ denote the relevant expectations under the current CAVI distributions for $\vartheta$ and $\tau$ respectively; the former are initialized at 0 -- there is no need to initialise $\mathbb{E}(\tau)$ in this case. Convergence of CAVI can be monitored, e.g., via the sequence of values of $\theta := (1+n)\mathbb{E}(\tau)$ and $\zeta$. If the change in values of these two parameters is smaller than, say, $0.01\%$, we declare convergence. Figure \ref{viresult1} shows the traceplots of $\theta$, $\zeta$. \begin{figure} \caption{Traceplots of $\zeta$ (left), $\theta$ (right) from application of CAVI on Simulated Example~1.} \label{viresult1} \end{figure} Convergence is reached within 0.0017 secs\footnote{A Dell Latitude E5470 with an Intel(R) Core(TM) i5-6300U CPU is used for all experiments in this paper.}, after precisely two iterations, due to the simplicity of the model. The resulting CAVI distribution for $\vartheta$ is $\mathrm{N}(9.6,0.1)$, and for $\tau$ it is Gamma$(501.5,50130.3)$, so that $\mathbb{E}(\tau) \approx 0.01$. Assume now that $q(\tau)$ were intractable. Since $\mathbb{E}(\tau)$ is required to update the approximate distribution of $\vartheta$, an MCMC step can be employed to sample $\tau_1,\ldots, \tau_{N}$ from $q(\tau)$ to produce the Monte Carlo estimator $\widehat{\mathbb{E}}(\tau)=\sum^{N}_{j=1}\tau_j/N$. Within this MC-CAVI setting, $\widehat{\mathbb{E}}(\tau)$ will replace the exact ${\mathbb{E}}(\tau)$ during the algorithmic iterations. 
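The two CAVI updates above can be sketched in a few lines. The implementation below uses our own synthetic draws from $\mathrm{N}(10,100)$, so the exact numbers will differ slightly from those quoted in the text, but the recursion and the fixed point it reaches are the same.

```python
import numpy as np

# Sketch of the CAVI recursion for Example Model 1 (our own synthetic data).
rng = np.random.default_rng(3)
x = rng.normal(10.0, 10.0, size=1000)      # n = 10^3 points from N(10, 100)
n, xbar, sumsq = len(x), x.mean(), np.sum(x**2)

E_th, E_th2 = 0.0, 0.0                     # initialise E(theta), E(theta^2)
for _ in range(10):
    # q(tau) = Gamma((n+3)/2, zeta), rate parameterisation
    zeta = 1.0 + ((1 + n) * E_th2 - 2 * n * xbar * E_th + sumsq) / 2.0
    E_tau = ((n + 3) / 2.0) / zeta         # Gamma mean = shape / rate
    # q(theta) = N(n*xbar/(1+n), 1/((1+n)*E_tau))
    mean, var = n * xbar / (1 + n), 1.0 / ((1 + n) * E_tau)
    E_th, E_th2 = mean, var + mean**2
print(mean, E_tau)  # approx. the sample mean and the reciprocal sample variance
```

The MC-CAVI variant described in the text simply replaces the exact `E_tau` line with an average of MCMC draws from the current $q(\tau)$.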
$(\mathbb{E}(\vartheta),\mathbb{E}(\vartheta^2))$ are initialised as in CAVI. For the first 10 iterations we set $N=10$, and for the remaining ones, $N=10^3$ to reduce variability. We monitor the values of $\widehat{\mathbb{E}}(\tau)$ shown in Figure \ref{mcmcresult1}. \begin{figure} \caption{Traceplot of $\widehat{\mathbb{E}}(\tau)$ from application of MC-CAVI on Simulated Example~1.} \label{mcmcresult1} \end{figure} The figure shows that MC-CAVI has stabilized after about $15$ iterations; algorithmic time was 0.0114 secs. To remove some Monte Carlo variability, the final estimator of $\mathbb{E}(\tau)$ is produced by averaging the last 10 values of its traceplot, which gives $\widehat{\mathbb{E}}(\tau) = 0.01$, i.e.~a value very close to the one obtained by CAVI. The estimated distribution of $\vartheta$ is $\mathrm{N}(9.6,0.1)$, the same as with CAVI. The performance of MC-CAVI depends critically on the choice of $N$. Let A be the value of $N$ in the burn-in period, B the number of burn-in iterations and C the value of $N$ after burn-in. Figure \ref{mitertune} shows trace plots of $\widehat{\mathbb{E}}(\tau)$ under different settings of the triplet A-B-C. \begin{figure} \caption{Traceplots of $\widehat{\mathbb{E}}(\tau)$ under different settings of the triplet A-B-C.} \label{mitertune} \end{figure} \begin{figure} \caption{Plots of convergence time versus variance of $\widehat{\mathbb{E}}(\tau)$ (left) and versus $N$ (right).} \label{convergence_plot} \end{figure} \begin{table} \centering \begin{tabular}{|l|l|l|l|l|l|} \hline A-B-C & $10$-$10$-$10^5$ & $10^3$-$10$-$10^5$ & $10^5$-$10$-$10^5$ & $10$-$30$-$10^5$ & $10$-$50$-$10^5$ \\ \hline time (secs) & 0.4640 & 0.4772 & 0.5152 & 0.3573 & 0.2722 \\ \hline $\widehat{\mathbb{E}}(\tau)$ & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 \\ \hline \end{tabular} \caption{Results of MC-CAVI for Simulated Example 1.} \label{my-label} \end{table} As with MCEM, $N$ should typically be set to a small number at the beginning of the iterations so that the algorithm can quickly reach a region of relatively high probability. 
$N$ should then be increased to reduce algorithmic variability close to the convergence region. Figure \ref{convergence_plot} shows plots of convergence time versus variance of $\widehat{\mathbb{E}}(\tau)$ (left panel) and versus $N$ (right panel). In VI, iterations are typically terminated when the (absolute) change in the monitored estimate is less than a small threshold. In MC-CAVI the estimate fluctuates around the limiting value after convergence. In the simulation in Figure \ref{convergence_plot}, we terminate the iterations when the difference between the estimated mean (disregarding the first half of the chain) and the true value ($0.01$) is less than $10^{-5}$. Figure \ref{convergence_plot} shows that: (i) convergence time decreases when the variance of $\widehat{\mathbb{E}}(\tau)$ decreases, as anticipated; (ii) convergence time decreases when $N$ increases. In (ii), the decrease is most evident when $N$ is still relatively small. After $N$ exceeds $200$, convergence time remains almost fixed, as the benefit brought by the decrease in variance is offset by the cost of the extra samples. (This also agrees with the policy of setting $N$ to a small value in the initial iterations of the algorithm.) \subsection{Variance Reduction for BBVI} In non-trivial applications, the variability of the initial estimator $\nabla_{\lambda}\widehat{\textrm{ELBO}}(q)$ within BBVI in (\ref{eq:BBVIest}) will typically be large, so variance reduction approaches such as Rao-Blackwellization and control variates \citep{bbvi} are also used. Rao-Blackwellization \citep{raoblack} reduces variance by analytically calculating conditional expectations.
In BBVI, within the factorization framework of (\ref{eq:meanfield}), where $\lambda = (\lambda_1,\ldots, \lambda_b)$, and recalling identity (\ref{eq:mainBB}) for the gradient, a Monte Carlo estimator for the gradient with respect to $\lambda_i$, $i\in\{1,\ldots, b\}$, can be simplified as \begin{equation} \label{raograd} \nabla_{\lambda_i}\widehat{\textrm{ELBO}}(q_i) = \tfrac{1}{N}\sum^N_{n=1}\big[\nabla_{\lambda_i}\log q_i(z_i^{(n)}|\lambda_i)\{\log c_i(z_i^{(n)},x)-\log q_i(z_i^{(n)}| \lambda_i)\}\big], \end{equation} with $z_i^{(n)} \stackrel{iid}{\sim} q_i(z_i|\lambda_i)$, $1\le n\le N$, and, \begin{align*} c_i(z_i,x):= \exp\big\{\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]\big\}. \end{align*} Depending on the model at hand, the term $c_i(z_i,x)$ can be obtained analytically or via a double Monte Carlo procedure (for estimating $c_i(z_i^{(n)},x)$, over all $1\le n\le N$) -- or a combination thereof. In BBVI, control variates \citep{ross_2002} can be defined on a per-component basis and be applied to the Rao-Blackwellized noisy gradients of the ELBO in (\ref{raograd}) to provide the estimator, \begin{equation} \label{deltaelbo} \nabla_{\lambda_i}\widehat{\textrm{ELBO}}(q_i) = \tfrac{1}{N}\sum^N_{n=1}\big[\nabla_{\lambda_i}\log q_i(z_i^{(n)}| \lambda_i)\{\log c_i(z_i^{(n)},x)-\log q_i(z_i^{(n)}| \lambda_i)-\widehat{a}^*_i\}\big], \end{equation} for the control, \begin{equation*} \widehat{a}^*_i := \frac{\sum^{d_i}_{j=1}\widehat{\textrm{Cov}}(f_{i,j},g_{i,j})}{\sum^{d_i}_{j=1}\widehat{\textrm{Var}}(g_{i,j})}, \end{equation*} where $f_{i,j}$, $g_{i,j}$ denote the $j$th co-ordinate of the vector-valued functions $f_i$, $g_i$ respectively, given below, \begin{align*} g_i(z_i)&:= \nabla_{\lambda_i}\log q_i(z_i| \lambda_i), \\ f_i(z_i)&:= \nabla_{\lambda_i}\log q_i(z_i| \lambda_i)\{\log c_i(z_i,x)-\log q_i(z_i| \lambda_i)\}.
\end{align*} \subsection{Simulated Example 2: Model with Hard Constraints} In this section, we discuss the performance and challenges of MC-CAVI, MCMC and BBVI for models where the support of the posterior -- thus, also of the variational distribution -- involves hard constraints. Here, we provide an example which offers a simplified version of the NMR problem discussed in Section~\ref{sec:nmr} but allows for the implementation of BBVI, as the involved normalising constants can be easily computed. Moreover, as with other gradient-based methods, BBVI requires tuning the step-size sequence $\{\rho_k\}$ in (\ref{eq:BBVIit}), which can be a laborious task, in particular for increasing dimension. Although there are several proposals aimed at optimising the choice of $\{\rho_k\}$ (\citealp{Bottou2012,advi}), MC-CAVI does not face such a tuning requirement. We simulate data according to the following scheme: observations $\{y_j\}$ are generated from $\mathrm{N}(\vartheta + \kappa_j,\theta^{-1})$, $j = 1,\ldots,n$, with $\vartheta = 6$, $\kappa_j = 1.5\cdot \sin(-2\pi+4\pi(j-1)/{n})$, $\theta = 3$, $n = 100$.
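The data-generating scheme can be sketched as follows (the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
vartheta, theta = 6.0, 3.0
j = np.arange(1, n + 1)
# smooth offsets kappa_j = 1.5 * sin(-2*pi + 4*pi*(j - 1)/n)
kappa = 1.5 * np.sin(-2.0 * np.pi + 4.0 * np.pi * (j - 1) / n)
# observations y_j ~ N(vartheta + kappa_j, 1/theta)
y = rng.normal(vartheta + kappa, 1.0 / np.sqrt(theta))
```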
We fit the following model: \begin{align*} \textrm{\underline{Example Model 2}} \\ y_j \mid \vartheta, \kappa_j, \theta&\sim \mathrm{N}(\vartheta + \kappa_j,\theta^{-1}), \\[0.1cm] \vartheta &\sim \mathrm{N}(0,10),\\[0.1cm] \kappa_j \mid \psi_j &\sim \mathrm{TN}(0,10,-\psi_j,\psi_j),\\ \psi_j \hspace{0.1cm} &\!\!\stackrel{i.i.d.}{\sim} \mathrm{TN}(0.05,10,0,2),\quad j = 1,\ldots,n, \\[0.1cm] \theta &\sim \mathrm{Gamma}(1,1). \end{align*} \subsubsection*{MCMC} \label{sec:MCMC} We use a standard Metropolis-within-Gibbs. We set $y = (y_1, \ldots, y_{n})$, $\kappa = (\kappa_1, \ldots, \kappa_{n})$ and $\psi = (\psi_1, \ldots, \psi_{n})$. Notice that we have the full conditional distributions, \begin{align*} p(\vartheta| y,\theta, \kappa, \psi) &= \mathrm{N}\big(\tfrac{\sum^{n}_{j=1}(y_j-\kappa_j)\theta}{\frac{1}{10}+{n}\theta},\tfrac{1}{\frac{1}{10}+{n}\theta}\big),\\[0.1cm] p(\kappa_j| y,\theta,\vartheta, \psi)&= \mathrm{TN}\big(\tfrac{(y_j-\vartheta)\theta}{\frac{1}{10}+\theta},\tfrac{1}{\frac{1}{10}+\theta},-\psi_j,\psi_j\big) ,\\[0.1cm] p(\theta|y,\vartheta, \kappa, \psi) &= \mathrm{Gamma}\big(1+\tfrac{n}{2},1+\tfrac{\sum^{n}_{j=1}(y_j-\vartheta-\kappa_j)^2}{2}\big). \end{align*} (Above, and in similar expressions written in the sequel, equality is meant to be properly understood as stating that `the density on the left is equal to the density of the distribution on the right'.) For each $\psi_j$, $1\le j\le {n}$, the full conditional is, \begin{equation*} p(\psi_j | y,\theta,\vartheta, \kappa) \propto \frac{ \phi(\tfrac{\psi_j-\frac{1}{20}}{\sqrt{10}})}{\Phi(\tfrac{\psi_j}{\sqrt{10}})-\Phi(\tfrac{-\psi_j}{\sqrt{10}})}\, \mathbb{I}\,[\,|\kappa_j|<\psi_j<2\,],\quad j = 1,\ldots,{n}, \end{equation*} where $\phi(\cdot)$ is the density of $\mathrm{N}(0,1)$ and $\Phi(\cdot)$ its cdf. The Metropolis-Hastings proposal for $\psi_j$ is a uniform variate from $\textrm{U}(0,2)$. 
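As an illustration, a minimal sketch of this independence Metropolis-Hastings update for a single $\psi_j$ is given below; the input value $\kappa_j = 0.3$ and the chain length are hypothetical choices.

```python
import math
import numpy as np

SQRT10 = math.sqrt(10.0)

def Phi(z):
    # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_target_psi(psi, kappa_j):
    # full conditional of psi_j up to a constant, supported on |kappa_j| < psi < 2
    if not (abs(kappa_j) < psi < 2.0):
        return -math.inf
    z = (psi - 0.05) / SQRT10            # prior TN(0.05, 10, 0, 2)
    log_phi = -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)
    return log_phi - math.log(Phi(psi / SQRT10) - Phi(-psi / SQRT10))

def mh_step_psi(psi, kappa_j, rng):
    # independence Metropolis-Hastings: the U(0,2) proposal cancels in the ratio
    prop = rng.uniform(0.0, 2.0)
    log_a = log_target_psi(prop, kappa_j) - log_target_psi(psi, kappa_j)
    return prop if rng.uniform() < math.exp(min(0.0, log_a)) else psi

rng = np.random.default_rng(0)
psi = 1.0
for _ in range(200):
    psi = mh_step_psi(psi, kappa_j=0.3, rng=rng)
```

Proposals outside the support $(|\kappa_j|,2)$ receive log-density $-\infty$ and are always rejected, so the hard constraint is respected throughout the chain.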
\subsubsection*{MC-CAVI} For MC-CAVI, the logarithm of the joint distribution is given by, \begin{align*} \log p(y,\vartheta, \kappa, \psi,\theta) &= const. + \tfrac{n}{2}\log \theta - \tfrac{\theta\sum^{n}_{j=1}(y_j - \vartheta-\kappa_j)^2}{2} - \tfrac{\vartheta^2}{2\cdot 10} -\theta-\sum^{n}_{j=1}\tfrac{\kappa_j^2+(\psi_j-\frac{1}{20})^2}{2\cdot 10} \\[-0.4cm] &\qquad \qquad \qquad -\sum^{n}_{j=1} \log(\Phi(\tfrac{\psi_j }{\sqrt{10}})-\Phi(\tfrac{-\psi_j }{\sqrt{10}})), \end{align*} under the constraints, \begin{align*} |\kappa_j|<\psi_j<2, \quad j = 1,\ldots,{n}. \end{align*} To comply with the above constraints, we factorise the variational distribution as, \begin{align} \label{eq:parts} q(\vartheta,\theta, \kappa, \psi)=q(\vartheta)q(\theta)\prod^{n}_{j=1}q(\kappa_j,\psi_j). \end{align} Here, for the relevant iteration $k$, we have, \begin{align*} q_k(\vartheta) &= \mathrm{N}\big(\tfrac{\sum^{n}_{j=1}(y_j-\mathbb{E}_{k-1}(\kappa_j))\mathbb{E}_{k-1}(\theta)}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)},\tfrac{1}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)}\big),\\[0.2cm] q_k(\theta) &= \mathrm{Gamma}\big(1+\tfrac{n}{2}, 1+\tfrac{\sum^{n}_{j=1}\mathbb{E}_{k,k-1}((y_j-\vartheta-\kappa_j)^2)}{2}\big), \\[0.3cm] q_k(\kappa_j,\psi_j) &\propto \exp\big\{- \tfrac{\mathbb{E}_{k}(\theta) (\kappa_j-(y_j-\mathbb{E}_{k}(\vartheta)))^2}{2} -\tfrac{\kappa_j^2+(\psi_j-\frac{1}{20})^2}{2\cdot 10} \big\} \big/ \big(\Phi(\tfrac{\psi_j }{\sqrt{10}})-\Phi(\tfrac{-\psi_j }{\sqrt{10}})\big)\\ &\qquad \qquad\qquad\qquad\qquad\qquad \qquad\cdot \mathbb{I}\,[\,|\kappa_j|<\psi_j<2\,],\qquad 1\le j\le {n}. \end{align*} The quantity $\mathbb{E}_{k,k-1}((y_j-\vartheta-\kappa_j)^2)$ used in the second line above means that the expectation is considered under $\vartheta\sim q_k(\vartheta)$ and (independently) $\kappa_{j}\sim q_{k-1}(\kappa_{j},\psi_j)$.
Then, MC-CAVI develops as follows: \begin{itemize} \item Step 0: For $k=0$, initialize $\mathbb{E}_{0}(\theta)=1$, $\mathbb{E}_{0}(\vartheta)=4$, $\mathbb{E}_{0}(\vartheta^2)=17$. \item Step $k$: For $k\ge 1$, given $\mathbb{E}_{k-1}(\theta)$, $\mathbb{E}_{k-1}(\vartheta)$, execute: \begin{itemize} \item For $j=1,\ldots, {n}$, apply an MCMC algorithm -- with invariant law $q_{k-1}(\kappa_j,\psi_j)$ -- consisting of a number, $N$, of Metropolis-within-Gibbs iterations carried out over the relevant full conditionals, \begin{align*} q_{k-1}(\psi_j| \kappa_j) &\propto\frac{\phi(\tfrac{\psi_j-\frac{1}{20}}{\sqrt{10}})}{\Phi(\tfrac{\psi_j}{\sqrt{10}})-\Phi(\tfrac{-\psi_j}{\sqrt{10}})}\, \mathbb{I}\,[\,|\kappa_j|<\psi_j<2\,], \\[0.3cm] q_{k-1}(\kappa_j|\psi_j)&= \mathrm{TN}\big(\tfrac{(y_j-\mathbb{E}_{k-1}(\vartheta))\mathbb{E}_{k-1}(\theta)}{\frac{1}{10}+\mathbb{E}_{k-1}(\theta)},\tfrac{1}{\frac{1}{10}+\mathbb{E}_{k-1}(\theta)},-\psi_j,\psi_j\big). \end{align*} As with the full conditional $p(\psi_j | y,\theta,\vartheta,\kappa)$ within the MCMC sampler, we use a uniform proposal $\mathrm{U}(0,2)$ at the Metropolis-Hastings step applied for $q_{k-1}(\psi_j| \kappa_j)$. For each $k$, the $N$ iterations begin from the $(\kappa_j,\psi_j)$-values obtained at the end of the corresponding MCMC iterations at step $k-1$, with the very first initial values being $(\kappa_j,\psi_j)=(0,1)$. Use the $N$ samples to obtain $\mathbb{E}_{k-1}(\kappa_j)$ and $\mathbb{E}_{k-1}(\kappa_j^2)$. \item Update the variational distribution for $\vartheta$, \begin{align*} q_{k}(\vartheta) &= \mathrm{N}\big(\tfrac{\sum^{n}_{j=1}(y_j-\mathbb{E}_{k-1}(\kappa_j))\mathbb{E}_{k-1}(\theta)}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)},\tfrac{1}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)}\big) \end{align*} and evaluate $\mathbb{E}_{k}(\vartheta)$, $\mathbb{E}_{k}(\vartheta^2)$.
\item Update the variational distribution for $\theta$, \begin{align*} q_{k}(\theta)&= \mathrm{Gamma}\big(1+\tfrac{n}{2},1+\tfrac{\sum^{n}_{j=1}\mathbb{E}_{k,k-1}((y_j-\vartheta-\kappa_j)^2)}{2}\big) \end{align*} and evaluate $\mathbb{E}_{k}(\theta)$. \end{itemize} \item Iterate until convergence. \end{itemize} \subsubsection*{BBVI} For BBVI we assume a variational distribution $q(\theta,\vartheta, \kappa, \psi\,|\,\boldsymbol{\alpha},\boldsymbol{\gamma})$ that factorises as in the case of CAVI in (\ref{eq:parts}), where \begin{align*} \boldsymbol{\alpha} &= (\alpha_{\vartheta}, \alpha_{\theta}, \alpha_{\kappa_1}, \ldots, \alpha_{\kappa_{n}}, \alpha_{\psi_1}, \ldots, \alpha_{\psi_{n}})\ , \\ \boldsymbol{\gamma} &= (\gamma_{\vartheta}, \gamma_{\theta}, \gamma_{\kappa_1}, \ldots, \gamma_{\kappa_{n}}, \gamma_{\psi_1}, \ldots, \gamma_{\psi_{n}}) \end{align*} to be the variational parameters. Individual marginal distributions are chosen to agree -- in type -- with the model priors.
In particular, we set, \begin{align*} q(\vartheta) &= \mathrm{N}(\alpha_{\vartheta},\exp(\gamma_{\vartheta})),\\[0.2cm] q(\theta) &= \mathrm{Gamma}(\exp(\alpha_{\theta}),\exp(\gamma_{\theta})), \\[0.2cm] q(\kappa_j,\psi_j) &= \mathrm{TN}(\alpha_{\kappa_j},\exp(2\gamma_{\kappa_j}),-\psi_j,\psi_j)\otimes \mathrm{TN}(\alpha_{\psi_j},\exp(2\gamma_{\psi_j}),0,2), \quad 1\leq j \leq {n}. \end{align*} It is straightforward to derive the required gradients (see Appendix \ref{sec:gradient} for the analytical expressions). BBVI is applied using Rao-Blackwellization and control variates for variance reduction. The algorithm is as follows, \begin{itemize} \item Step 0: Set $\eta = 0.5$; initialise $\boldsymbol{\alpha}^0 = 0$, $\boldsymbol{\gamma}^0 = 0$ with the exception $\alpha^0_{\vartheta}=4$. \item Step $k$: For $k\ge 1$, given $\boldsymbol{\alpha}^{k-1}$ and $\boldsymbol{\gamma}^{k-1}$ execute: \begin{itemize} \item Draw $(\vartheta^i, \theta^i, \kappa^i,\psi^i)$, for $1\leq i \leq N$, from $q_{k-1}(\vartheta)$, $q_{k-1}(\theta)$, $q_{k-1}(\kappa,\psi)$. \item With the samples, use (\ref{deltaelbo}) to evaluate: \begin{align*} &\nabla^{k}_{\alpha_{\vartheta}}\widehat{\textrm{ELBO}}(q(\vartheta)),\quad \nabla^{k}_{\gamma_{\vartheta}}\widehat{\textrm{ELBO}}(q(\vartheta)), \\ &\nabla^{k}_{\alpha_{\theta}}\widehat{\textrm{ELBO}}(q(\theta)),\quad \nabla^{k}_{\gamma_{\theta}}\widehat{\textrm{ELBO}}(q(\theta)), \\ &\nabla^{k}_{\alpha_{\kappa_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)), \quad \nabla^{k}_{\gamma_{\kappa_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)),\quad 1\leq j \leq n, \\ &\nabla^{k}_{\alpha_{\psi_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)),\quad \nabla^{k}_{\gamma_{\psi_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)), \quad 1\leq j \leq n. \end{align*} (Here, superscript $k$ at the gradient symbol $\nabla$ specifies the BBVI iteration.) 
\item Evaluate $\boldsymbol{\alpha}^{k}$ and $\boldsymbol{\gamma}^{k}$: \begin{align*} (\boldsymbol{\alpha},\boldsymbol{\gamma})^{k} &= (\boldsymbol{\alpha},\boldsymbol{\gamma})^{k-1} + \rho_k\nabla^{k}_{(\boldsymbol{\alpha},\boldsymbol{\gamma})}\widehat{\textrm{ELBO}}(q), \end{align*} where $q = (q(\vartheta), q(\theta), q(\kappa_1, \psi_1), \ldots, q(\kappa_n, \psi_n))$. For the learning rate, we employed the AdaGrad algorithm \citep{duchi2011adaptive} and set $\rho_k = \eta \, \textrm{diag}(G_k)^{-1/2}$, where $G_k$ is the sum, over the first $k$ iterations, of the outer products of the gradient, and $\textrm{diag}(\cdot)$ maps a matrix to its diagonal version. \end{itemize} \item Iterate until convergence. \end{itemize} \subsubsection*{Results} The three algorithms have different stopping criteria. For parity, we run each for $100$secs. A summary of results is given in Table \ref{resulttable}. Model fitting plots and algorithmic traceplots are shown in Figure \ref{resultplot}. \begin{table}[!h] \begin{tabular}{|l|l|l|l|} \hline & MCMC & MC-CAVI & BBVI \\ \hline Iterations & \begin{tabular}[c]{@{}l@{}}No. Iterations = 2,500\\ Burn-in = 1,250\end{tabular} & \begin{tabular}[c]{@{}l@{}}No. Iterations = 300\\ $N = 10$\\ Burn-in = 150\end{tabular} & \begin{tabular}[c]{@{}l@{}}No. Iterations = 100\\ $N = 10$\end{tabular} \\ \hline $\vartheta$ & 5.927 (0.117) & 5.951 (0.009) & 6.083 (0.476) \\ \hline $\theta$ & 1.248 (0.272) & 8.880 (0.515) & 0.442 (0.172) \\ \hline \end{tabular} \caption{Summary of results: the last two rows show the average for the corresponding parameter (in horizontal direction) and algorithm (in vertical direction), after burn-in (the number in brackets is the corresponding standard deviation). All algorithms were executed for $10^2$secs.
The first row gives some algorithmic details.} \label{resulttable} \end{table} \begin{figure} \caption{Model fit (left panel), traceplots of $\vartheta$ (middle panel) and traceplots of $\theta$ (right panel) for the three algorithms: MCMC (first row), MC-CAVI (second row) and BBVI (third row) -- for Example Model 2 -- when allowed $100$secs of execution. In the plots showing model fit, the green line represents the data without noise, the orange line the data with noise; the blue line shows the corresponding posterior means and the grey area the pointwise 95\% posterior credible intervals.} \label{resultplot} \end{figure} \begin{figure} \caption{Density plots for the true posterior of $\vartheta$ (blue line) -- obtained via an expensive MCMC -- and the corresponding approximate distribution provided by MC-CAVI.} \label{resultdensity} \end{figure} \noindent Table \ref{resulttable} indicates that all three algorithms approximate the posterior mean of $\vartheta$ effectively; the estimate from MC-CAVI has smaller variability than that of BBVI; the opposite holds for the variability of the estimates of $\theta$. Figure \ref{resultplot} shows that the traceplots for BBVI are unstable, a sign that the gradient estimates have high variability. In contrast, MCMC and MC-CAVI perform rather well. Figure \ref{resultdensity} shows the `true' posterior density of $\vartheta$ (obtained from an expensive MCMC with 10,000 iterations -- 5,000 burn-in) and the corresponding approximation obtained via MC-CAVI. In this case, the variational approximation is quite accurate in estimating the mean but underestimates the posterior variance (as is typical of VI methods). We mention that for BBVI we also tried using normal laws as variational distributions -- as this is the standard choice in the literature -- however, in this case, the performance of BBVI deteriorated even further.
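The AdaGrad learning-rate rule used for BBVI above can be sketched as follows; this is written in gradient-ascent form (the ELBO is maximised), and the toy quadratic objective and the small stabilising constant `eps` are illustrative assumptions, not part of the original rule.

```python
import numpy as np

def adagrad_step(params, grad, G_diag, eta=0.5, eps=1e-8):
    # rho_k = eta * diag(G_k)^(-1/2), with G_k the running sum of
    # element-wise squared gradients (the diagonal of the outer products)
    G_diag = G_diag + grad**2
    params = params + eta * grad / (np.sqrt(G_diag) + eps)
    return params, G_diag

# illustrative use: ascend the toy objective f(x) = -(x - 3)^2
x, G = np.array([0.0]), np.zeros(1)
for _ in range(2000):
    grad = -2.0 * (x - 3.0)
    x, G = adagrad_step(x, grad, G)
```

The per-coordinate step size shrinks automatically for coordinates with historically large gradients, which is what makes the rule convenient for the heterogeneous variational parameters of the example.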
\section{Application to $^1$H NMR Spectroscopy} \label{sec:nmr} We demonstrate the utility of MC-CAVI in a statistical model proposed in the field of metabolomics by \cite{batmanmodel}, and used in NMR (Nuclear Magnetic Resonance) data analysis. Proton nuclear magnetic resonance ($^1$H NMR) is an extensively used technique for measuring abundance (concentration) of a number of metabolites in complex biofluids. NMR spectra are widely used in metabolomics to obtain profiles of metabolites present in biofluids. An NMR spectrum can contain information on a few hundred compounds. Resonance peaks generated by each compound must be identified in the spectrum after deconvolution. The spectral signature of a compound is given by a combination of peaks, not necessarily close to each other. Such compounds can generate hundreds of resonance peaks, many of which overlap. This causes difficulty in peak identification and deconvolution. The analysis of an NMR spectrum is further complicated by fluctuations in peak positions among spectra, induced by uncontrollable variations in experimental conditions and the chemical properties of the biological samples, e.g.~the pH. Nevertheless, extensive information on the patterns of spectral resonance generated by human metabolites is now available in online databases. By incorporating this information into a Bayesian model, we can deconvolve resonance peaks from a spectrum and obtain explicit concentration estimates for the corresponding metabolites. Spectral resonances that cannot be deconvolved in this way may also be of scientific interest; these are modelled in \cite{batmanmodel} using wavelet basis functions. More specifically, an NMR spectrum is a collection of peaks convoluted with various horizontal translations and vertical scalings, with each peak having the form of a Lorentzian curve.
A number of metabolites of interest have known NMR spectrum shape, with the height of the peaks or their width in a particular experiment providing information about the abundance of each metabolite. The zero-centred, standardized Lorentzian function is defined as: \begin{equation} \ell_\gamma(x) = \frac{2}{\pi}\frac{\gamma}{4x^2+\gamma^2}, \end{equation} where $\gamma$ is the peak width at half height. An example of a $^1$H NMR spectrum is shown in Figure \ref{nmrexample}. The x-axis of the spectrum measures chemical shift in parts per million (ppm) and corresponds to the resonance frequency. The y-axis measures relative resonance intensity. Each spectrum peak corresponds to magnetic nuclei resonating at a particular frequency in the biological mixture, with every metabolite having a characteristic molecular $^1$H NMR `signature'; the result is a convolution of Lorentzian peaks that appear in specific positions in $^1$H NMR spectra. Each metabolite in the experiment usually gives rise to more than one `multiplet' in the spectrum -- i.e.~a linear combination of Lorentzian functions, symmetric around a central point. Spectral signatures (i.e.~multiplet patterns) of many metabolites are stored in public databases. The aim of the analysis is: (i) to deconvolve resonance peaks in the spectrum and assign them to a particular metabolite; (ii) to estimate the abundance of the catalogued metabolites; (iii) to model the component of a spectrum that cannot be assigned to known compounds. \cite{batmanmodel} propose a two-component joint model for a spectrum, in which the metabolites whose peaks we wish to assign explicitly are modelled parametrically, using information from the online databases, while the unassigned spectrum is modelled using wavelets. \begin{figure} \caption{An example of a $^1$H NMR spectrum.} \label{nmrexample} \end{figure} \subsection{The Model} We now describe the model of \cite{batmanmodel}.
The available data are represented by the pair $(\mathbf{x},\mathbf{y})$, where $\mathbf{x}$ is a vector of $n$ ordered points (of the order $10^3$--$10^4$) on the chemical shift axis -- often regularly spaced -- and $\mathbf{y}$ is the vector of the corresponding resonance intensity measurements (scaled, so that they sum up to $1$). The conditional law of $\mathbf{y}|\mathbf{x}$ is modelled under the assumption that the $y_i| \mathbf{x}$ are independent normal variables and, \begin{equation} \mathbb{E}\,[\,y_i\,|\,\mathbf{x}\,] = \phi(x_i) + \xi(x_i), \quad 1\leq i \leq n. \end{equation} Here, the $\phi$ component of the model represents signatures that we wish to assign to target metabolites. The $\xi$ component models signatures of the remaining metabolites present in the spectrum, but not explicitly modelled. We refer to the latter as the residual spectrum and highlight that it is important to account for it, as it can unveil important information not captured by $\phi(\cdot)$. The function $\phi$ is constructed parametrically, using results from the physical theory of NMR and information available in online databases or from expert knowledge, while $\xi$ is modelled semiparametrically with wavelets generated by a mother wavelet (symlet 6) that resembles the Lorentzian curve. More analytically, \begin{equation*} \phi(x_i) = \sum_{m=1}^{M}t_m(x_i)\beta_{m}, \end{equation*} where $M$ is the number of metabolites modelled explicitly and $\beta = (\beta_{1},\ldots,\beta_{M})^{\top}$ is a parameter vector corresponding to metabolite concentrations.
The function $t_m(\cdot)$ is a continuous template specifying the NMR signature of metabolite $m$; it is defined as, \begin{equation} t_m(\delta) = \sum_u \sum^{V_{m,u}}_{v=1}z_{m,u}\,\omega_{m,u,v}\,\ell_{\gamma}(\delta-\delta^*_{m,u}-c_{m,u,v}),\quad \delta>0, \end{equation} where $u$ is an index running over all multiplets assigned to metabolite $m$, $v$ is an index representing a peak in a multiplet and $V_{m,u}$ is the number of peaks in multiplet $u$ of metabolite $m$. In addition, $\delta^*_{m,u}$ specifies the theoretical position on the chemical shift axis of the centre of mass of the $u$th multiplet of the $m$th metabolite; $z_{m,u}$ is a positive quantity, usually equal to the number of protons in a molecule of metabolite $m$ that contribute to the resonance signal of multiplet $u$; $\omega_{m,u,v}$ is the weight determining the relative heights of the peaks of the multiplet; $c_{m,u,v}$ is the translation determining the horizontal offsets of the peaks from the centre of mass of the multiplet. Both $\omega_{m,u,v}$ and $c_{m,u,v}$ can be computed from empirical estimates of the so-called $J$-coupling constants; see \cite{hore2015nuclear} for more details. The $z_{m,u}$'s and the $J$-coupling constants can be found in online databases or obtained from expert knowledge. The residual spectrum is modelled through wavelets, \begin{equation*} \xi(x_i) = \sum_{j,k}\varphi_{j,k}(x_i)\vartheta_{j,k}, \end{equation*} where the $\varphi_{j,k}(\cdot)$ denote the orthogonal wavelet functions generated by the symlet-6 mother wavelet (see \cite{batmanmodel} for full details); here, $\vartheta = (\vartheta_{1,1},\ldots,\vartheta_{j,k},\ldots)^{\top}$ is the vector of wavelet coefficients. Indices $j,k$ correspond to the $k$th wavelet in the $j$th scaling level.
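A sketch of the Lorentzian peak and of the template construction above is given below; the doublet used in the usage line (proton count, centre, weights, offsets) is purely hypothetical.

```python
import numpy as np

def lorentzian(x, gamma):
    # zero-centred, standardized Lorentzian; gamma = width at half height
    return (2.0 / np.pi) * gamma / (4.0 * x**2 + gamma**2)

def template(delta, multiplets, gamma):
    # t_m(delta) = sum_u sum_v z[u] * w[u][v] * ell_gamma(delta - centre[u] - c[u][v])
    out = np.zeros_like(delta, dtype=float)
    for z, centre, weights, offsets in multiplets:
        for w, c in zip(weights, offsets):
            out += z * w * lorentzian(delta - centre - c, gamma)
    return out

# hypothetical doublet: 3 protons, centre 1.33 ppm, equal weights, +/- 0.004 ppm offsets
delta = np.linspace(1.2, 1.45, 2000)
signature = template(delta, [(3, 1.33, [0.5, 0.5], [-0.004, 0.004])], gamma=0.002)
```

Note that $\ell_\gamma(\pm\gamma/2) = \tfrac{1}{2}\,\ell_\gamma(0)$, confirming the interpretation of $\gamma$ as the width at half height.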
Finally, overall, the model for an NMR spectrum can be re-written in matrix form as: \begin{equation} \mathcal{W}(\mathbf{y} -\mathbf{T} \beta) = \mathbf{I}_{n_1} \vartheta + \boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon} \sim \mathrm{N}(0,\mathbf{I}_{n_1}/\theta), \label{nmrlikelihood} \end{equation} where $\mathcal{W}\in \mathbb{R}^{n\times {n_1}}$ is the inverse wavelet transform, $M$ is the total number of known metabolites, $\mathbf{T}$ is an $n \times M$ matrix with its $(i,m)$th entry equal to $t_m(x_i)$ and $\theta$ is a scalar precision parameter. \subsection{Prior Specification} \label{sec:priordist} \cite{batmanmodel} assign the following prior distributions to the parameters in the Bayesian model. For the concentration parameters, we assume \begin{equation*} \beta_m \sim \mathrm{TN}(e_m,1/s_m,0,\infty), \end{equation*} where $e_m = 0$ and $s_m = 10^{-3}$, for all $m=1,\ldots, M$. Moreover, \begin{align*} \gamma &\sim \mathrm{LN}(0,1); \\ \delta^*_{m,u} &\sim \mathrm{TN}(\hat{\delta}^*_{m,u},10^{-4},\hat{\delta}^*_{m,u}-0.03,\hat{\delta}^*_{m,u}+0.03), \end{align*} where LN denotes a log-normal distribution and $\hat{\delta}^*_{m,u}$ is the estimate for $\delta^*_{m,u}$ obtained from the online database HMDB \citep[see][]{hmdb1, hmdb2, hmdb3, hmdb4}. In the regions of the spectrum where both parametric (i.e.~$\phi$) and semiparametric (i.e.~$\xi$) components need to be fitted, the likelihood is unidentifiable. To tackle this problem, \cite{batmanmodel} opt for shrinkage priors for the wavelet coefficients and include a vector of hyperparameters $\psi$ -- each component $\psi_{j,k}$ of which corresponds to a wavelet coefficient -- to penalize the semiparametric component.
To reflect prior knowledge that NMR spectra are usually restricted to the half plane above the chemical shift axis, \cite{batmanmodel} introduce a vector of hyperparameters $\tau$, each component of which, $\tau_i$, corresponds to a spectral data point, to further penalize spectral reconstructions in which some components of $\mathcal{W}^{-1}\boldsymbol{\vartheta}$ are less than a small negative threshold. In conclusion, \cite{batmanmodel} specify the following joint prior density for $(\vartheta, \psi,\tau,\theta)$, \begin{align*} p(\vartheta, \psi, \tau,\theta) &\propto \theta^{a+\tfrac{n+n_1}{2}-1} \Big\{\prod_{j,k}\psi^{c_j-0.5}_{j,k}\exp\big(-\tfrac{\psi_{j,k} d_j}{2}\big)\Big\}\\ &\qquad \qquad \times \exp\Big\{-\tfrac{\theta}{2}\Big(e+\sum_{j,k}\psi_{j,k}\,\vartheta^2_{j,k}+ r\sum^{n}_{i=1}(\tau_i-h )^2\Big)\Big\} \\ &\qquad \qquad \qquad \qquad \times \mathbbm{1}\,\big\{\,\mathcal{W}^{-1} \vartheta\geq \tau,\,\, h\mathbf{1}_{n}\geq \tau\,\big\}, \end{align*} where $ \psi$ introduces local shrinkage for the marginal prior of $\vartheta$ and $\tau$ is a vector of $n$ truncation limits, which bounds $\mathcal{W}^{-1} \vartheta$ from below. The truncation imposes an identifiability constraint: without it, when the signature template does not match the shape of the spectral data, the mismatch will be compensated by negative wavelet coefficients, such that an ideal overall model fit is achieved even though the signature template is erroneously assigned and the concentration of metabolites is overestimated. Finally we set $c_j = 0.05$, $d_j = 10^{-8}$, $h = -0.002$, $r = 10^5$, $a = 10^{-9}$, $e = 10^{-6}$; see \cite{batmanmodel} for more details. \subsection{Results} BATMAN is an $\mathsf{R}$ package for estimating metabolite concentrations from NMR spectral data using a specifically designed MCMC algorithm \citep{batman} to perform posterior inference from the Bayesian model described above. 
We implement an MC-CAVI version of BATMAN and compare its performance with the original MCMC algorithm. Details of the implementation of MC-CAVI are given in Appendix \ref{sec:BATMAN}. Due to the complexity of the model and the size of the data, it is challenging for both algorithms to reach convergence. We run the two methods, MC-CAVI and MCMC, for approximately an equal amount of time, to analyse a full spectrum with 1,530 data points, modelling parametrically 10 metabolites. We fix the number of iterations for MC-CAVI to 1,000, with a burn-in of 500 iterations; we set the Monte Carlo size to $N=10$ for all iterations. The execution time for this MC-CAVI algorithm is $2,048$secs. For the MCMC algorithm, we fix the number of iterations to 2,000, with a burn-in of 1,000 iterations. This MCMC algorithm has an execution time of $2,098$secs. In $^1$H NMR analysis, $\beta$ (the concentrations of metabolites in the biofluid) and $\delta^*_{m,u}$ (the peak positions) are the most important parameters from a scientific point of view. Traceplots of four examples ($\beta_3$, $\beta_4$, $\beta_9$ and $\delta_{4,1}$) are shown in Figure \ref{paracomparison}. These four parameters are chosen because the two methods perform differently on them, as examined more closely in Figure \ref{detailcomparison}. For $\beta_3$ and $\beta_9$, the traceplots are still far from convergence for MCMC, while they move in the correct direction (see Figure \ref{paracomparison}) when using MC-CAVI. For $\beta_4$ and $\delta_{4,1}$, both parameters reach a stable regime very quickly with MC-CAVI, whereas the same parameters only make local moves under MCMC. For the remaining parameters in the model, both algorithms give similar results.
\begin{figure} \caption{Traceplots of Parameter Value against Number of Iterations after the burn-in period for $\beta_3$ (upper left panel), $\beta_4$ (upper right panel), $\beta_9$ (lower left panel) and $\delta_{4,1}$ (lower right panel).} \label{paracomparison} \end{figure} \begin{figure} \caption{Comparison of MC-CAVI and MCMC in terms of Spectral Fit. The upper panel shows the Spectral Fit from the MC-CAVI algorithm. The lower panel shows the Spectral Fit from the MCMC algorithm. The $x$-axis corresponds to chemical shift measured in ppm. The $y$-axis corresponds to standard density.} \label{speccomparison} \end{figure} Figure \ref{speccomparison} shows the fit obtained from both algorithms, while Table \ref{betatable} reports posterior estimates for $\beta$. From Figure \ref{speccomparison}, it is evident that the overall performance of MC-CAVI is similar to that of MCMC, since in most areas the metabolites fit (orange line) captures the shape of the original spectrum quite well. Table \ref{betatable} shows that, in line with standard VI behaviour, MC-CAVI underestimates the variance of the posterior density. We examine in more detail the posterior distribution of the $\beta$ coefficients for which the posterior means obtained with the two algorithms differ by more than 1.0e-4. Figure \ref{detailcomparison} shows that MC-CAVI manages to capture the shapes of the peaks while MCMC does not, around ppm values of 2.14 and 3.78, which correspond to spectral regions where many peaks overlap, making peak deconvolution challenging. This is probably due to the faster convergence of MC-CAVI. Figure \ref{detailcomparison} also shows that for areas with no overlap (e.g.~around ppm values of 2.66 and 7.53), MC-CAVI and MCMC produce similar results.
\begin{table}[!h] \centering \begin{center} \begin{tabular}{|l|l|l|l|l|l|l|} \hline & & $\beta_1$ & $\beta_2$ & $\boldsymbol{\beta_3}$ & $\boldsymbol{\beta_4}$ & $\beta_5$ \\ \hline \multirow{2}{*}{MC-CAVI} & mean & 6.0e-6 & 7.8e-5 & 1.4e-3 & 4.2e-4 & 2.6e-5 \\ \cline{2-7} & sd & 1.8e-11 & 4.0e-11 & 1.3e-11 & 1.0e-11 & 6.2e-11 \\ \hline \multirow{2}{*}{MCMC} & mean & 1.2e-5 & 4.0e-5 & 1.5e-3 & 2.1e-5 & 3.4e-5 \\ \cline{2-7} & sd & 1.1e-10 & 5.0e-10 & 1.6e-9 & 6.4e-10 & 3.9e-10 \\ \hline & & $\beta_6$ & $\beta_7$ & $\beta_8$ & $\boldsymbol{\beta_9}$ & $\beta_{10}$ \\ \hline \multirow{2}{*}{MC-CAVI} & mean & 6.1e-4 & 3.0e-5 & 1.9e-4 & 2.7e-3 & 1.0e-3 \\ \cline{2-7} & sd & 1.5e-11 & 1.6e-11 & 3.9e-11 & 1.6e-11 & 3.6e-11 \\ \hline \multirow{2}{*}{MCMC} & mean & 6.0e-4 & 3.0e-5 & 1.8e-4 & 2.5e-3 & 1.0e-3 \\ \cline{2-7} & sd & 2.3e-10 & 7.5e-11 & 3.7e-10 & 5.1e-9 & 7.9e-10 \\ \hline \end{tabular} \caption{Estimates of $\beta$ obtained with MC-CAVI and MCMC. (The coefficients of $\beta$ for which the posterior means obtained with the two algorithms differ by more than 1.0e-4 are shown in bold.)} \label{betatable} \end{center} \end{table} \begin{figure} \caption{Comparison of the metabolites fit obtained with MC-CAVI and MCMC. The $x$-axis corresponds to chemical shift measured in ppm. The $y$-axis corresponds to standard density. The upper left panel shows the area around ppm value 2.14 ($\beta_4$ and $\beta_9$). The upper right panel shows the area around ppm value 2.66 ($\beta_6$). The lower left panel shows the area around ppm value 3.78 ($\beta_3$ and $\beta_9$). The lower right panel shows the area around ppm value 7.53 ($\beta_{10}$).} \label{detailcomparison} \end{figure} Comparing the performance of MC-CAVI and MCMC on the NMR model, we can draw the following conclusions: \begin{itemize} \item In NMR analysis, if many peaks overlap (see Figure \ref{detailcomparison}), MC-CAVI can provide better results than MCMC.
\item In high-dimensional models, where the number of parameters grows with the size of the data, MC-CAVI can converge faster than MCMC. \item The choice of $N$ is important for optimising the performance of MC-CAVI. Building on results derived for other Monte Carlo methods (e.g.~MCEM), it is reasonable to choose a relatively small Monte Carlo size at the beginning, when the algorithm can be far from regions of parameter space of high posterior probability, and to increase it gradually, with the maximum size used once the algorithm has reached a mode. \end{itemize} \section{Discussion} \label{sec:discussion} As a combination of VI and MCMC, MC-CAVI provides a powerful inferential tool, particularly in high-dimensional settings where full posterior inference is computationally demanding and the application of optimization and noisy-gradient-based approaches, e.g.~BBVI, is hindered by the presence of hard constraints. The MCMC step of MC-CAVI is necessary to deal with parameters for which VI approximating distributions are difficult or impossible to derive, for example because no closed-form expression for the normalising constant is available. General Monte Carlo algorithms, such as sequential Monte Carlo and Hamiltonian Monte Carlo, can be incorporated within MC-CAVI. Compared with MCMC, the VI step of MC-CAVI speeds up convergence and provides reliable estimates in a shorter time. Moreover, MC-CAVI scales better in high-dimensional settings. As an optimization algorithm, convergence monitoring for MC-CAVI is easier than for MCMC. Moreover, MC-CAVI offers a flexible alternative to BBVI. The latter algorithm, although very general and suitable for a large range of complex models, depends crucially on the quality of the approximation to the true target provided by the variational distribution, which in high-dimensional settings (in particular with hard constraints) is very difficult to assess.
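To make the alternation between closed-form VI updates and Monte Carlo estimates concrete, the following sketch runs an MC-CAVI-style loop on a toy normal model with unknown mean and precision. The model, priors, data and the schedule for the Monte Carlo size $N$ are illustrative assumptions, not the NMR model of this paper: the update of $q(\mu)$ is available in closed form, while the expected sum of squares needed by $q(\tau)$ is estimated by sampling from $q(\mu)$, with $N$ increased in later iterations as suggested above.

```python
import numpy as np

# Toy model (illustrative, not the NMR model): x_i ~ N(mu, 1/tau),
# priors mu ~ N(0, 1) and tau ~ Gamma(1, 1); factorisation q(mu) q(tau).
rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=200)
n, xbar = x.size, x.mean()

E_tau = 1.0                                 # initial guess for E_q[tau]
for k in range(60):
    N = 10 if k < 30 else 100               # grow the Monte Carlo size later on
    # Exact CAVI update for q(mu) = N(m, v), given E_q[tau]
    v = 1.0 / (1.0 + n * E_tau)
    m = v * n * E_tau * xbar
    # Monte Carlo step: estimate E_q[sum_i (x_i - mu)^2] by sampling q(mu)
    mu_draws = rng.normal(m, np.sqrt(v), size=N)
    E_ss = np.mean([np.sum((x - mu) ** 2) for mu in mu_draws])
    # Closed-form q(tau) = Gamma(a, b) given the Monte Carlo estimate
    a, b = 1.0 + 0.5 * n, 1.0 + 0.5 * E_ss
    E_tau = a / b

print(round(m, 2), round(E_tau, 2))         # m close to 2, E[tau] close to 1
```

The same skeleton underlies the BATMAN implementation of Appendix \ref{sec:BATMAN}, with the Gibbs and Metropolis--Hastings draws playing the role of the Monte Carlo step.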
\section*{Acknowledgments} We thank two anonymous referees for their comments, which greatly improved the content of the paper. AB acknowledges funding by the Leverhulme Trust Prize. \appendix \section{Proof of Lemma \ref{lem:stable}} \label{sec:lem} \begin{proof}{Part (i)}: Given a neighborhood of $\lambda^*$, we can choose a sub-neighborhood $V$ as described in Assumption \ref{ass:M}. For some small $\epsilon>0$, the set $V_0 = \{\lambda:\textrm{ELBO}(q(\lambda))\geq \textrm{ELBO}(q(\lambda^*))-\epsilon\}$ has a connected component, say $V'$, such that $\lambda^*\in V'$ and $V'\subseteq V$; we can assume that $V'$ is compact. Assumption \ref{ass:M} implies that $M(V')\subseteq V_0$; in fact, since $M(V')$ is connected and contains $\lambda^*$, we have $M(V')\subseteq V'$. This completes the proof of part (i) of Definition \ref{def:stable}.\\ {Part (ii)}: Let $\lambda\in V'$. Consider the sequence $\{M^{k}(\lambda)\}_k$ and a convergent subsequence $M^{a_k}(\lambda)\rightarrow \lambda_1\in V'$, for an increasing sequence of integers $\{a_k\}$. We then have $\textrm{ELBO}(q(M^{a_{k+1}}(\lambda)))\ge \textrm{ELBO}(q(M(M^{a_{k}}(\lambda))))\rightarrow \textrm{ELBO}(q(M(\lambda_1)))$, whereas we also have $\textrm{ELBO}(q(M^{a_{k+1}}(\lambda)))\rightarrow\textrm{ELBO}(q(\lambda_1))$. These last two limits imply that $\textrm{ELBO}(q(M(\lambda_1))) = \textrm{ELBO}(q(\lambda_1))$, so that $\lambda_1=\lambda^*$. We have shown that any convergent subsequence of $\{M^k(\lambda)\}_k$ has limit $\lambda^*$; the compactness of $V'$ then gives $M^{k}(\lambda)\rightarrow \lambda^*$. This completes the proof of part (ii) of Definition \ref{def:stable}. \end{proof} \section{Proof of Theorem \ref{th:stable}} \label{sec:theorem} \begin{proof} Let $V_1$ be as $V'$ within the proof of Lemma \ref{lem:stable}. Define $V_2 = \{\lambda\in V_1:|\lambda-\lambda^*|\ge \epsilon\}$, for an $\epsilon>0$ small enough so that $V_2\neq \emptyset$.
For $\lambda\in V_2$, we have $M(\lambda)\neq \lambda$; thus there are $\nu,\nu_1>0$ such that for all $\lambda\in V_2$ and for all $\lambda'$ with $|\lambda'-M(\lambda)|<\nu$, we obtain $\textrm{ELBO}(q(\lambda'))-\textrm{ELBO}(q(\lambda))>\nu_1$. Also, by continuity and compactness, there is $\nu_2>0$ such that for all $\lambda\in V_1$ and for all $\lambda'$ such that $|\lambda'-M(\lambda)|<\nu_2$, we have $\lambda'\in V_1$. Let $R=\sup_{\lambda,\lambda'\in V_1}\{\textrm{ELBO}(q(\lambda))-\textrm{ELBO}(q(\lambda'))\}$ and $k_0 = [ R/\nu_1]+1$, where $[\cdot]$ denotes the integer part. Notice that, given $\lambda^k_N:=\mathcal{M}_N^{k}(\lambda)$, we have $\{|\lambda^{k+1}_N - M(\lambda^k_N)|<\nu_2\}\subseteq \{ \lambda^{k+1}_N\in V_1 \}$. Consider the event $F_N=\{\lambda_{N}^{k}\in V_1\,;\,k=0,\ldots, k_0\}$. Under Assumption \ref{ass:technical}, we have $\mathrm{Prob}[F_N]\ge p^{k_0}$ for $p$ arbitrarily close to 1. Within $F_N$, we have $|\lambda_N^{k}-\lambda^*|<\epsilon$ for some $k\le k_0$, or else $\lambda_{N}^{k}\in V_2$ for all $k\le k_0$, giving $\textrm{ELBO}(q(\lambda^{k_0}_N))-\textrm{ELBO}(q(\lambda)) > \nu_1\cdot k_0 >R$, which is impossible.
\end{proof} \section{Gradient Expressions for BBVI} \label{sec:gradient} \begin{align*} \nabla_{\alpha_{\vartheta}} \log q(\vartheta) &= (\vartheta-\alpha_{\vartheta})\cdot \exp(-\gamma_{\vartheta}),\\ \nabla_{\gamma_{\vartheta}} \log q(\vartheta) &= -\tfrac{1}{2} + \tfrac{ (\vartheta-\alpha_{\vartheta})^2}{2}\cdot \exp(-\gamma_{\vartheta}),\\ \nabla_{\alpha_{\theta}} \log q(\theta) &= \big(\gamma_{\theta} - \tfrac{\Gamma'(\exp(\alpha_{\theta}))}{\Gamma(\exp(\alpha_{\theta}))} + \log(\theta)\big)\cdot \exp(\alpha_{\theta}),\\ \nabla_{\gamma_{\theta}} \log q(\theta) &= \exp(\alpha_{\theta})-\theta\cdot\exp(\gamma_{\theta}),\\ \nabla_{\alpha_{\kappa_j}} \log q(\kappa_j,\psi_j) &= \tfrac{\kappa_j-\alpha_{\kappa_j}}{\exp(2\gamma_{\kappa_j}) } + \tfrac{\phi(\frac{\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})})-\phi(\frac{-\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})})}{\exp(\gamma_{\kappa_j})(\Phi(\frac{\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})})-\Phi(\frac{-\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})}))},\quad 1\leq j \leq {n}\\ \nabla_{\alpha_{\psi_j}} \log q(\kappa_j,\psi_j) &= \tfrac{\psi_j-\alpha_{\psi_j}}{\exp(2\gamma_{\psi_j}) } + \tfrac{ \phi(\frac{2-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})})- \phi(\frac{-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})})}{\exp(\gamma_{\psi_j})(\Phi(\frac{2-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})})-\Phi(\frac{-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})}))},\quad 1\leq j \leq {n}\\ \nabla_{\gamma_{\kappa_j}} \log q(\kappa_j,\psi_j) &= \tfrac{(\kappa_j-\alpha_{\kappa_j})^2}{\exp(2\gamma_{\kappa_j})} - 1 + \tfrac{(\psi_j-\alpha_{\kappa_j}) \phi(\frac{\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})})+(\psi_j+\alpha_{\kappa_j}) \phi(\frac{-\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})})}{\exp(\gamma_{\kappa_j})(\Phi(\frac{\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})})-\Phi(\frac{-\psi_j-\alpha_{\kappa_j}}{\exp(\gamma_{\kappa_j})}))}, \quad 1\leq j \leq {n}\\ \nabla_{\gamma_{\psi_j}} \log q(\kappa_j,\psi_j) &= 
\tfrac{(\psi_j-\alpha_{\psi_j})^2}{\exp(2\gamma_{\psi_j})} - 1 + \tfrac{(2-\alpha_{\psi_j})\boldsymbol{\phi}(\frac{2-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})})+(\alpha_{\psi_j})\boldsymbol{\phi}(\frac{-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})})}{\exp(\gamma_{\psi_j})(\Phi(\frac{2-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})})-\Phi(\frac{-\alpha_{\psi_j}}{\exp(\gamma_{\psi_j})}))}, \quad 1\leq j \leq {n}. \end{align*} \section{MC-CAVI Implementation of BATMAN} \label{sec:BATMAN} In the MC-CAVI implementation of BATMAN, taking both computation efficiency and model structure into consideration, we assume that the variational distribution factorises over four partitions of the parameter vectors, $q( \beta, \delta^*,\gamma)$, $q( \vartheta, \tau)$, $q(\psi)$, $q(\theta)$. This factorization is motivated by the original Metropolis-Hastings block updates in \cite{batmanmodel}. Let $B$ denote the wavelet basis matrix defined by the transform $\mathcal{W}$, so $\mathcal{W}(B) = \mathbf{I}_{n_1}$. We use $v_{-i}$ to represent vector $v$ without the $i$th component and analogous notation for matrices (resp., without the $i$th column).\\ \noindent Set $\mathbb{E}(\theta) = 2a/e$, $ {\mathbb{E}}(\vartheta^2_{j,k}) = 0$, ${\mathbb{E}}( \vartheta) = 0$, $ {\mathbb{E}}(\tau) = 0$, ${\mathbb{E}}(\mathbf{T}\beta) = \mathbf{y}$, ${\mathbb{E}} \big((\mathbf{T} \beta)^{\top}(\mathbf{T}\beta)\big) = \mathbf{y}^{\top}\mathbf{y}$.\\ \noindent For each iteration: \begin{enumerate} \item Set $q(\psi_{j,k}) = \mathrm{Gamma}\big(c_j+\frac{1}{2}, \tfrac{{\mathbb{E}}(\theta) \mathbb{E}(\vartheta^2_{j,k})+d_j}{2} \big)$; calculate $\mathbb{E}(\psi_{j,k})$. 
\item Set $q(\theta) = \mathrm{Gamma}(c,c')$, where we have defined, \begin{align*} c &= a_1+n_1+\tfrac{n}{2}, \\[0.3cm] c'&=\tfrac{1}{2}\Big\{\sum_{j,k}\mathbb{E}(\psi_{j,k}) \mathbb{E}(\vartheta^2_{j,k})+ \mathbb{E}\big((\mathcal{W}\mathbf{y}-\mathcal{W}\mathbf{T} \beta- \vartheta)^{\top}(\mathcal{W}\mathbf{y}-\mathcal{W}\mathbf{T} \beta- \vartheta)\big)\\[-0.45cm] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad +r(\mathbb{E}(\tau)-h\mathbf{1}_{n})+e\Big\}; \end{align*} calculate $\mathbb{E}(\theta)$. \item Use Monte Carlo to draw $N$ samples from $q( \beta,\delta^*_{m,u},\gamma)$, which is derived via (\ref{eq:recursion}) as, \begin{align*} q( \beta, \delta^*,\gamma) &\propto \exp\Big\{-\tfrac{\mathbb{E}(\theta)}{2} \big( (\mathcal{W}\boldsymbol{T}\beta)^{\top} \mathcal{W}\boldsymbol{T}\beta - 2\mathcal{W}\boldsymbol{T}\beta(\mathcal{W}\mathbf{y}- {\mathbb{E}}( \vartheta)) \big) \Big\}\\ & \qquad \qquad \qquad \qquad\qquad\qquad \times p( \beta)p( \delta^*)p(\gamma), \end{align*} where $p( \beta)$, $p( \delta^*)$, $p(\gamma)$ are the prior distributions specified in Section \ref{sec:priordist}. \begin{itemize} \item Use a Gibbs sampler update to draw samples from $q( \beta| \delta^*_{m,u},\gamma)$. Draw each component of $ \beta=(\beta_m)$ from a univariate normal, truncated below at zero, with precision and mean parameters given, respectively, by \begin{gather*} P := s_m + {\mathbb{E}}(\theta)(\mathcal{W}\boldsymbol{T}_i)^{\top}(\mathcal{W}\boldsymbol{T}_i) ,\\[0.2cm] (\mathcal{W}\boldsymbol{T}_i)^\top(\mathcal{W}\mathbf{y}-\mathcal{W}\boldsymbol{T}_{-i}\beta_{-i}- {\mathbb{E}}(\vartheta)) {\mathbb{E}}(\theta)/P. \end{gather*} \item Use Metropolis--Hastings to update $\gamma$. Propose $\log(\gamma')\sim \mathrm{N}(\log(\gamma),V_{\gamma}^2)$. Perform accept/reject. Adapt $V_{\gamma}^2$ to obtain average acceptance rate of approximately 0.45. \item Use Metropolis--Hastings to update $\delta^*_{m,u}$. 
Propose, $$({\delta^*_{m,u}})' \sim \mathrm{TN}(\delta^*_{m,u},V_{\delta^*_{m,u}}^2,\hat{\delta}^*_{m,u}-0.03,\hat{\delta}^*_{m,u}+0.03).$$ Perform accept/reject. Adapt $V_{\delta^*_{m,u}}^2$ to target acceptance rate 0.45. \end{itemize} Calculate ${\mathbb{E}}(\mathbf{T}\beta)$ and ${\mathbb{E}}\big((\mathbf{T}\beta)^{\top}(\mathbf{T}\beta)\big)$. \item Use Monte Carlo to draw $N$ samples from $q( \vartheta, \tau)$, which is derived via (\ref{eq:recursion}) as, \begin{align*} &q( \vartheta, \tau) \propto \\ &\exp\Big\{-\tfrac{\mathbb{E}(\theta)}{2} \Big(\sum_{j,k}\vartheta_{j,k}\big( (\psi_{j,k}+1)\,\vartheta_{j,k} -2\big(\mathcal{W}\mathbf{y}- \mathcal{W} \mathbb{E} (\boldsymbol{T}\beta)\big)_{j,k} \big) + r\sum^{n}_{i=1}(\tau_i-h)^2 \Big) \Big\}\\ &\quad\quad\quad\quad\quad \quad \quad \quad \quad \quad \quad\quad \quad \quad \quad \quad \quad \times \mathbb{I}\,\big\{\,\mathcal{W}^{-1} \vartheta\geq \tau,\,\, h\mathbf{1}_{n}\geq\tau\,\big\} \end{align*} \begin{itemize} \item Use Gibbs sampler to draw from $q( \vartheta| \tau)$. Draw $\vartheta_{j,k}$ from: \begin{equation*} \mathrm{TN}\big(\tfrac{1}{1+\mathop{\mathbb{E}}(\psi_{j,k})}\big(\mathcal{W}\mathbf{y}- \mathcal{W}{\mathbb{E}}(\boldsymbol{T}\beta)\big)_{j,k},\tfrac{1}{ {\mathbb{E}}(\theta)(1+ {\mathbb{E}}(\psi_{j,k}))},L,U\big) \end{equation*} where we have set, \begin{align*} L &= \max_{i:B_{i\{j,k\}}>0} \frac{\tau_i- B_{i-\{j,k\}} \vartheta_{-\{j,k\}}}{B_{i\{j,k\}}} \\ U &= \min_{i:B_{i\{j,k\}}<0} \frac{\tau_i-B_{i-\{j,k\}} \vartheta_{-\{j,k\}}}{B_{i\{j,k\}}} \end{align*} and $B_{i\{j,k\}}$ is the $(j,k)$th element of the $i$th column of $B$. \item Use Gibbs sampler to update $\tau_i$. Draw, $$\tau_i\sim\mathrm{TN}\big(h,1/({\mathbb{E}}(\theta)r),-\infty,\min\big\{h,(\mathcal{W}^{-1} \vartheta)_i\big\}\big).$$ \end{itemize} Calculate $ {\mathbb{E}}(\vartheta^2_{j,k})$, ${\mathbb{E}}(\vartheta)$, $ {\mathbb{E}}(\tau)$. \end{enumerate} \end{document}
\begin{document} \begin{frontmatter} \title{Exact Density matrix of an oscillator-bath system: Alternative derivation} \author{Fardin Kheirandish} \address{Department of Physics, Faculty of Science, University of Kurdistan, P.O.Box 66177-15175, Sanandaj, Iran} \begin{abstract} \noindent Starting from a total Lagrangian describing an oscillator-bath system, an alternative derivation of the exact quantum propagator is presented. From the quantum propagator, the exact density matrix, the reduced density matrix of the main oscillator and the thermal equilibrium fixed point are obtained. The modified quantum propagator is obtained in the generalised case where the main oscillator is under the influence of a classical external force. By introducing auxiliary classical external fields, the generalised quantum propagator, or generating functional of position correlation functions, is obtained. \end{abstract} \begin{keyword} \texttt{Density matrix}\sep Oscillator-Bath\sep Propagator \sep Generating function \end{keyword} \end{frontmatter} \linenumbers \section{Introduction} \noindent The quantum propagator is the most important function in quantum theories \cite{Propagator-1,Propagator-2}. Knowing the quantum propagator, we can obtain all measurable quantities related to the physical system exactly; that is, we have a complete physical description of the underlying system at any time. Unfortunately, except for some simple physical systems, obtaining the exact form of the quantum propagator is usually a difficult task and we have to invoke perturbative methods. Among the different approaches to finding the quantum propagator, we can refer to two main ones. In the first method, the quantum propagator is written as a bilinear function of the eigenfunctions of the Schr\"{o}dinger equation. The main task in this method is to find the eigenfunctions of the Hamiltonian, which are usually difficult to obtain, and even with these eigenfunctions, extracting a closed-form quantum propagator from them may be cumbersome.
The second approach is based on the Feynman path-integral technique \cite{Feynman-1,Kleinert,Zin}. One of the most powerful features of this method is its perturbative machinery, known as Feynman diagrams, which extends the applicability of the method to non-quadratic Lagrangians. The path-integral technique has been applied to the oscillator-bath system in \cite{Path-0,Path-1,Path-2,Path-3,Path-4,Path-5,Path-6}. Here we follow an alternative approach to finding the quantum propagator. This approach, which we will describe in detail, is based on the position and momentum operators in the Heisenberg picture. In this scheme, using elementary quantum mechanical relations, two independent partial differential equations are found that the quantum propagator satisfies. The solutions of these partial differential equations are easily found, and the unknown functions are determined from the basic properties of quantum propagators. The first message of the present paper is that this method is easier to apply than other methods for deriving the quantum propagator of an oscillator-bath system with a linear coupling; in particular, compared with the path-integral technique, there is no need to introduce more advanced mathematical notions like infinite-dimensional integrals, operator determinants and Weyl ordering. The second message is that, since we find a closed form for the total quantum propagator, we obtain a closed-form density matrix describing the combined oscillator-bath system. Also, by tracing out the bath degrees of freedom, we find a reduced density matrix describing the main oscillator at any time. In the following, we generalise the oscillator-bath model by including external classical sources in the Hamiltonian, and find the modified quantum propagator under the influence of classical forces.
The modified quantum propagator can also be interpreted as a generating functional from which time-ordered correlation functions of different position operators can be determined \cite{Greiner}. The basic ingredient of the approach is a symmetric time-independent matrix $B$ (Eq.(\ref{14-2})) depending on the natural frequencies of the bath oscillators and the coupling constants. Therefore, from a numerical or simulation point of view, the only challenge is finding the inverse of the matrix $B$ or, equivalently, diagonalizing it. The efficiency of the method introduced here in determining the exact form of the quantum propagator for quadratic Lagrangians inspires the idea of developing a perturbative approach to include non-quadratic Lagrangians too. The process presented to determine the quantum propagator suggests that such perturbative techniques may be based on perturbative solutions of nonlinear partial differential equations. This development deserves to be investigated in an independent work. \section{Lagrangian} \noindent In this section, we set the stage for what will be investigated in the following sections. We start with a total Lagrangian describing an interacting oscillator-bath system. Then, from the corresponding Hamiltonian and the Heisenberg equations of motion, we find explicit expressions for the position and momentum operators as the main ingredients of the approach that will be applied in the next section.
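As a numerical illustration of this point (with assumed frequencies $\omega_\mu$ and couplings $g_k$, not taken from any physical system), one can diagonalize the matrix $B$ of Eq.(\ref{14-2}) and evaluate $F(t)=\sin(\sqrt{B}\,t)/\sqrt{B}$ of Eqs.(\ref{15-1}), checking the result against the power series of Eqs.(\ref{15}):

```python
import numpy as np
from math import factorial

# Assumed example parameters: omega_0..omega_3 and couplings g_1..g_3
w = np.array([1.0, 1.3, 1.7, 2.2])
g = np.array([0.20, 0.15, 0.10])

# Symmetric matrix B of Eq. (14-2)
B = np.diag(w ** 2)
B[0, 1:] = B[1:, 0] = -g

# F(t) = sin(sqrt(B) t)/sqrt(B) via the eigendecomposition B = X diag(z^2) X^T
z2, X = np.linalg.eigh(B)
z, t = np.sqrt(z2), 0.7
F_diag = X @ np.diag(np.sin(z * t) / z) @ X.T

# Power series F(t) = sum_n (-1)^n t^(2n+1) B^n / (2n+1)!, Eqs. (15)
F_series = sum((-1) ** n * t ** (2 * n + 1) / factorial(2 * n + 1)
               * np.linalg.matrix_power(B, n) for n in range(15))

print(np.max(np.abs(F_diag - F_series)))   # agreement to machine precision
```

For weak couplings the matrix $B$ is diagonally dominant and positive definite, so $\sqrt{B}$ is real; the diagonalisation route costs a single symmetric eigendecomposition, after which $F(t)$ is available for any $t$.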
The Lagrangian describing a main oscillator interacting linearly with a bath of oscillators is given by \cite{Weiss} \begin{equation}{\langle}bel{1} L=\haf \dot{x}^2-\haf\omega_0^2 x^2+\sum_{i=1}^N \haf (\dot{X}^2_i-\omega_i^2 X^2_i)+\sum_{i=1}^N g_i X_i x, \end{equation} Eq.(\ref{1}) can be rewritten in a more compact form as \begin{equation}{\langle}bel{2} L=\haf \sum_{\mu=0}^N (\dot{Y}^2_\mu-\omega_\mu^2 Y_\mu^2)+\haf\sum_{\mu,\nu=0}^N Y_\mu \Omega_{\mu\nu}^2 Y_\nu, \end{equation} where the matrix $\Omega^2_{\mu\nu}$ is given by \begin{equation}{\langle}bel{3} \Omega^2_{\mu\nu}=\left( \begin{array}{ccccc} 0 & g_1 & g_2 & \cdots & g_N \\ g_1 & 0 & 0 & \cdots & 0 \\ g_2 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ g_N & 0 & 0 & \cdots & 0 \\ \end{array} \right), \end{equation} and \begin{equation}{\langle}bel{4} Y_0=x,\,\,\,\,Y_k=X_k,\,\,\,\,k=1,\cdots,N. \end{equation} The corresponding Hamiltonian is \begin{equation}{\langle}bel{5} H=\haf \sum_{\mu=0}^N (P^2_\mu+\omega_\mu^2 Y_\mu^2)-\haf\sum_{\mu,\nu=0}^N Y_\mu \Omega_{\mu\nu}^2 Y_\nu, \end{equation} where $P_\mu=\dot{Y}_\mu$ is the canonical conjugate momentum corresponding to the canonical position $Y_\mu$. The system is quantized by imposing the equal-time commutation relations \begin{eqnarray}{\langle}bel{6} && [\hat{Y}_\mu, \hat{P}_\nu] = i\hbar\,\delta_{\mu\nu},\nonumber\\ && [\hat{Y}_\mu, \hat{Y}_\nu] = [\hat{P}_\mu, \hat{P}_\nu]=0, \end{eqnarray} and from Heisenberg equations of motion one finds \begin{equation}{\langle}bel{7} \ddot{\hat{Y}}_\mu +\omega_\mu^2 \hat{Y}_\mu=\sum_\nu \Omega_{\mu\nu}^2 \hat{Y}_\nu. \end{equation} Note that $(\hat{Y}_0, \hat{P}_0)$ refer to the position and momentum of the main oscillator and $(\hat{Y}_k, \hat{P}_k),\,\,(k=1,\cdots,N)$ refer to position and momentum operators of bath oscillators. 
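As a quick consistency check of Eqs.(\ref{5})--(\ref{7}) at the classical level (with illustrative, assumed frequencies and couplings), the linear equations of motion can be integrated with a symplectic scheme and conservation of the Hamiltonian verified; this is a sketch, not part of the derivation:

```python
import numpy as np

# Assumed parameters: one oscillator coupled to N = 2 bath modes
w = np.array([1.0, 1.5, 2.0])               # omega_0, omega_1, omega_2
g = np.array([0.3, 0.2])                    # g_1, g_2
Om2 = np.zeros((3, 3))                      # matrix Omega^2 of Eq. (3)
Om2[0, 1:] = Om2[1:, 0] = g
A = np.diag(w ** 2) - Om2                   # Eq. (7) reads  Ydd = -A Y

def H(Y, P):                                # Hamiltonian of Eq. (5)
    return 0.5 * (P @ P + (w ** 2) @ Y ** 2) - 0.5 * Y @ Om2 @ Y

# Leapfrog (velocity Verlet) integration of Ydd = -A Y
Y, P = np.array([1.0, 0.0, 0.0]), np.zeros(3)
E0, dt = H(Y, P), 1e-3
for _ in range(5000):
    P += 0.5 * dt * (-A @ Y)
    Y += dt * P
    P += 0.5 * dt * (-A @ Y)

print(f"energy drift: {abs(H(Y, P) - E0):.2e}")  # stays small
```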
Taking the Laplace transform of both sides of Eq.(\ref{7}) we find \begin{equation}\label{8} \sum_{\nu}\Lambda_{\mu\nu}(s)\hat{\tilde{Y}}_\nu (s)=s \hat{Y}_\mu (0)+\hat{P}_\mu (0), \end{equation} where the $(N+1)$-dimensional matrix $\Lambda$ is defined by \begin{equation}\label{9} \Lambda_{\mu\nu} (s)=[(s^2 +\omega_\mu^2)\delta_{\mu\nu}-\Omega_{\mu\nu}^2]. \end{equation} Therefore, applying the inverse matrix, we find \begin{equation}\label{10} \hat{\tilde{Y}}_\mu (s)=\sum_{\nu} [s\Lambda^{-1}_{\mu\nu} (s)\hat{Y}_\nu (0)+\Lambda^{-1}_{\mu\nu} (s)\hat{P}_\nu (0)], \end{equation} and a formal solution is obtained by the inverse Laplace transform as \begin{equation}\label{11} \hat{Y}_\mu (t)=\sum_{\nu}\big[\dot{F}_{\mu\nu} (t) \hat{Y}_\nu (0)+F_{\mu\nu} (t) \hat{P}_\nu (0)\big], \end{equation} where we have defined \begin{equation}\label{12} F_{\mu\nu} (t)=\mathcal{L}^{-1}[\Lambda^{-1}(s)]_{\mu\nu}. \end{equation} The matrix $\Lambda$ is explicitly given by \begin{equation}\label{13} \Lambda (s)=\left( \begin{array}{ccccc} s^2 + \omega_0^2 & -g_1 & -g_2 & \cdots & -g_N \\ -g_1 & s^2 + \omega_1^2 & 0 & \cdots & 0 \\ -g_2 & 0 & s^2 + \omega_2^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -g_N & 0 & 0 & \cdots & s^2 + \omega_N^2 \\ \end{array} \right), \end{equation} which can be rewritten as \begin{equation}\label{14-1} \Lambda (s)=s^2 \,\mathbb{I}+B, \end{equation} wherein \begin{equation}\label{14-2} B=\left( \begin{array}{ccccc} \omega_0^2 & -g_1 & -g_2 & \cdots & -g_N \\ -g_1 & \omega_1^2 & 0 & \cdots & 0 \\ -g_2 & 0 & \omega_2^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -g_N & 0 & 0 & \cdots & \omega_N^2 \\ \end{array} \right).
\end{equation} The inverse matrix can be formally written as \begin{eqnarray}\label{14-3} \Lambda^{-1} (s) &=& \frac{1}{s^2 \,\mathbb{I}+B}=\frac{1}{s^2}\,\frac{1}{\mathbb{I}+\frac{1}{s^2}\,B}\nonumber \\ &=& \frac{1}{s^2}\,\bigg(\mathbb{I}-\frac{1}{s^2}\,B+\frac{1}{s^4}B^2-\cdots\bigg)\nonumber \\ &=& \sum_{n=0}^\infty \frac{(-1)^n}{s^{2n+2}}\,B^n,\,\,\,\,(B^0=\mathbb{I}). \end{eqnarray} Therefore, from Eq.(\ref{12}) we have \begin{eqnarray}\label{15} && F_{\mu\nu}(t)=\sum_{n=0}^\infty \frac{(-1)^n \,t^{2n+1}}{(2n+1)!}\,(B^n)_{\mu\nu},\nonumber\\ && \dot{F}_{\mu\nu}(t)=\Big(\frac{dF}{dt}\Big)_{\mu\nu}=\sum_{n=0}^\infty \frac{(-1)^n \,t^{2n}}{(2n)!}\,(B^n)_{\mu\nu}. \end{eqnarray} Eqs.(\ref{15}) can be written formally as \begin{eqnarray}\label{15-1} && F(t)=\frac{1}{\sqrt{B}}\,\sin(\sqrt{B}\,t),\nonumber\\ && \dot{F}(t)=\cos(\sqrt{B}\,t). \end{eqnarray} From Eqs.(\ref{15}) we deduce that the matrices $F_{\mu\nu}(t)$ and $\dot{F}_{\mu\nu}(t)$ are odd and even in $t$, respectively. \subsection{Connection to previous works} \noindent Eq. (\ref{11}) appeared in \cite{Haake}, with a minor change of notation, in the framework of the Ullersma diagonalisation technique \cite{Ullersma}. Let $X_{\mu\nu}$ be an orthogonal matrix that diagonalizes the symmetric matrix $B$ given by Eq. (\ref{14-2}), with corresponding eigenvalues $z^2_{\alpha},\,(\alpha=0,1,\cdots,N)$. Therefore, in matrix notation we have \begin{equation}\label{C1} (X^t B X)_{\alpha\beta}=z^2_{\alpha}\,\delta_{\alpha\beta}, \end{equation} and using the first equation of Eqs.(\ref{15-1}), we find \cite{Haake} \begin{equation}\label{C2} F_{\mu\nu} (t)=\sum_{\alpha=0}^N X_{\mu\alpha}X_{\nu\alpha}\frac{1}{z_{\alpha}}\sin (z_{\alpha}t).
\end{equation} The eigenvalues $z^2_{\alpha}$ of the matrix $B$ satisfy the characteristic equation \begin{equation}\label{C3} \det(B-z^2\mathbb{I})=0\Rightarrow \left| \begin{array}{ccccc} \omega_0^2-z^2 & -g_1 & -g_2 & \cdots & -g_N \\ -g_1 & \omega_1^2-z^2 & 0 & \cdots & 0 \\ -g_2 & 0 & \omega_2^2-z^2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -g_N & 0 & 0 & \cdots & \omega_N^2-z^2 \\ \end{array} \right|=0. \end{equation} The determinant can be evaluated by mathematical induction, leading to the following characteristic equation \begin{equation}\label{C4} g(z)=z^2-\omega_0^2-\sum_{n}\frac{g_n^2}{z^2-\omega_n^2}=0. \end{equation} By making use of Eq. (\ref{7}), we find the following quantum Langevin equation for the main oscillator \begin{equation}\label{C5} \ddot{\hat{Y}}_0 (t)-\int_0^t dt'\,\chi(t-t')\,\hat{Y}_0 (t')+\omega_0^2 \,\hat{Y}_0 (t)=\hat{\Upsilon}(t), \end{equation} where the susceptibility of the environment is defined by \begin{equation}\label{C6} \chi(t)=\sum_{k=1}^N g^2_k\,\frac{\sin(\omega_k t)}{\omega_k}, \end{equation} and the noise operator by \begin{equation}\label{C7} \hat{\Upsilon}(t)=\sum_{k=1}^N g_k \,\big[\cos(\omega_k t)\hat{Y}_k (0)+\frac{\sin(\omega_k t)}{\omega_k}\hat{P}_k (0)\big], \end{equation} where $\hat{Y}_k (0)$ and $\hat{P}_k (0)$ are the position and momentum operators at the initial time ($t=0$). Taking the Laplace transform of the Langevin equation Eq. (\ref{C5}), we find the Laplace transform of the corresponding Green's function as \begin{equation}\label{C8} \tilde{G}(s)=\frac{1}{s^2-\tilde{\chi}(s)+\omega_0^2}=\frac{1}{s^2+\omega_0^2-\sum\limits_{k=1}^N \frac{g_k^2}{s^2+\omega_k^2}}, \end{equation} where \begin{equation}\label{C9} \tilde{\chi}(s)=\sum_{k=1}^N \frac{g_k^2}{s^2+\omega_k^2}.
\end{equation} The Green's function in frequency space ($G(\omega)$) can be obtained from the Laplace transformed Green's function $\tilde{G}(s)$ using the identity $G(\omega)=\tilde{G}(i\omega)$, therefore, \begin{equation}{\langle}bel{C10} G(\omega)=\frac{-1}{\omega^2-\omega_0^2-\sum\limits_{k=1}^N \frac{g_k^2}{\omega^2-\omega_k^2}}=\frac{-1}{g(\omega)}, \end{equation} that is the roots of the characteristic equation $g(z)=0$ are the poles of the Green's function $G(\omega)$ in frequency domain. \section{Quantum Propagator} \noindent In this section a novel scheme to derive the quantum propagator of the combined oscillator-bath system is introduced in detail. Let $|y_0{\rangle}$ be an eigenket of $\hat{Y}_0$ and $|y_k{\rangle}$ an eigenket of $\hat{Y}_k$, then in Heisenberg picture, we can write \begin{equation}{\langle}bel{16} \hat{Y}_\mu (t)\,|\mathbf{y},t{\rangle}=y_\mu \,|\mathbf{y},t{\rangle}, \end{equation} where for notational simplicity the tensor product is abbreviated as \begin{equation}{\langle}bel{17} |\mathbf{y},t{\rangle}=|y_0,t{\rangle}\otimes|y_1,t{\rangle}\otimes\cdots\otimes|y_N,t{\rangle}=|y_0,\cdots,y_N,t{\rangle}. \end{equation} Multiplying Eq.(\ref{16}) from the left by ${\langle} \mathbf{y}'|$ and using Eq.(\ref{11}), we find \begin{equation}{\langle}bel{18} \sum_{\nu=0}^N\bigg(\dot{F}_{\mu\nu} (t)\,y'_\nu-i\hbar\,F_{\mu\nu} (t)\,\frac{{\partial}}{{\partial} y'_\nu}\bigg) \,\mathcal{K}(\mathbf{y}'|\mathbf{y},t)=y_\mu\,\mathcal{K}(\mathbf{y}'|\mathbf{y},t), \end{equation} where we have defined the function $\mathcal{K}$ as \begin{equation}{\langle}bel{18-1} {\langle} \mathbf{y}'|\mathbf{y},t{\rangle}=\mathcal{K}(\mathbf{y}'|\mathbf{y},t), \end{equation} and made use of the identities \begin{eqnarray}{\langle}bel{19} {\langle} \mathbf{y}'|\hat{Y}_\mu (0) &=& y'_\mu {\langle} \mathbf{y}'|,\nonumber \\ {\langle} \mathbf{y}'|\hat{P}_\mu (0) &=& -i\hbar\,\frac{{\partial}}{{\partial} y'_\mu}{\langle} \mathbf{y}'|. 
\end{eqnarray} Eq.(\ref{18}) can be rewritten as \begin{equation}{\langle}bel{20} \sum_{\nu=0}^N F_{\mu\nu} (t)\,\frac{{\partial}}{{\partial} y'_\nu}\ln \mathcal{K}(\mathbf{y}'|\mathbf{y},t)= \frac{i}{\hbar}\bigg(y_\mu-\sum_{\nu}\dot{F}_{\mu\nu} (t)\,y'_\nu\bigg). \end{equation} The right hand side of Eq.(\ref{20}) is linear in $y'_\mu$, so the following quadratic form can be assumed for $\ln \mathcal{K}$ \begin{equation}{\langle}bel{21} \ln\mathcal{K}(\mathbf{y}'|\mathbf{y},t)=A(\mathbf{y},t)+\sum_{\mu=0}^N A_\mu (\mathbf{y},t)y'_\mu+\haf \sum_{\mu,\nu=0}^N y'_\mu C_{\mu\nu} (\mathbf{y},t) y'_\nu, \end{equation} where $C_{\mu\nu}=C_{\nu\mu}$. By inserting Eq.(\ref{21}) into Eq.(\ref{20}), we easily find \begin{eqnarray}{\langle}bel{22} A_\mu (y,t) &=& \frac{i}{\hbar}\,\sum_{\nu=0}^N F^{-1}_{\mu\nu} (t)\,y_\nu,\nonumber\\ C_{\mu\nu} (t) &=& -\frac{i}{\hbar}\,\sum_{\sigma=0}^N F^{-1}_{\mu\sigma} (t)\,\dot{F}_{\sigma\nu} (t), \end{eqnarray} therefore, in dyadic notation, we can write \begin{equation}{\langle}bel{23} \mathcal{K}(\mathbf{y}'|\mathbf{y},t)=e^{A(\mathbf{y},t)}e^{\frac{i}{\hbar}\mathbf{y}'\cdot\mathbf{F}^{-1}(t)\cdot \mathbf{y}} e^{-\frac{i}{2\hbar}\,\mathbf{y}'\cdot \mathbf{F}^{-1} (t)\dot{\mathbf{F}}(t)\cdot \mathbf{y}'}. \end{equation} The form of $A(\mathbf{y},t)$ can be determined from the properties of propagators. Since the Hamiltonian Eq.(\ref{5}) is time-independent, we can write \begin{equation}{\langle}bel{24} \mathcal{K}(\mathbf{y}'|\mathbf{y},t)={\langle} \mathbf{y}'|\mathbf{y},t{\rangle}={\langle} \mathbf{y}'|e^{\frac{it}{\hbar}\hat{H}}|\mathbf{y}{\rangle}. 
\end{equation} Eq.(\ref{24}), is invariant under successive transformations (i) complex conjugation (ii) $\mathbf{y}\leftrightarrow \mathbf{y}'$ (iii) $t\rightarrow -t$, therefore, \begin{equation}{\langle}bel{25} \mathcal{K}(\mathbf{y}'|\mathbf{y},t)=\mathcal{K}^{*}(\mathbf{y}|\mathbf{y}',-t), \end{equation} leading to \begin{eqnarray}{\langle}bel{26} e^{A(\mathbf{y},t)} &=& e^{\varphi(t)}\,e^{-\frac{i}{2\hbar}\,\mathbf{y}\cdot \mathbf{F}^{-1} (t)\dot{\mathbf{F}}(t)\cdot \mathbf{y}},\nonumber\\ \varphi^{*} (-t) &=& \varphi(t). \end{eqnarray} Note that in Sec.VI, the Hamiltonian will be time-dependent and to find $A(\mathbf{y},t)$ we can not use these transformations and we will follow another approach. Up to now the form of the propagator is as follows \begin{eqnarray}{\langle}bel{27} \mathcal{K}(\mathbf{y}'|\mathbf{y},t) &=& e^{\varphi(t)}\,e^{-\frac{i}{2\hbar}\,\mathbf{y}\cdot \mathbf{F}^{-1} (t)\dot{\mathbf{F}}(t)\cdot \mathbf{y}}e^{\frac{i}{\hbar}\mathbf{y}'\cdot\mathbf{F}^{-1}(t)\cdot \mathbf{y}} e^{-\frac{i}{2\hbar}\,\mathbf{y}'\cdot \mathbf{F}^{-1} (t)\dot{\mathbf{F}}(t)\cdot \mathbf{y}'},\nonumber\\ &=& e^{\varphi(t)}\,e^{-\frac{i}{2\hbar}[\mathbf{y}\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}+ \mathbf{y}'\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}'-2\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{y}]}. 
\end{eqnarray} From Eqs.(\ref{15-1}) we find the following asymptotic behaviours of the matrices $\mathbf{F},\,\mathbf{F}^{-1}$, and $\dot{\mathbf{F}}$ \begin{eqnarray}\label{28} \mathbf{F}(t) &\thickapprox& t\,\mathbb{I},\nonumber \\ \mathbf{F}^{-1}(t) &\thickapprox& \frac{1}{t}\,\mathbb{I},\nonumber \\ \dot{\mathbf{F}}(t) &\thickapprox& \mathbb{I},\qquad t\rightarrow 0. \end{eqnarray} By inserting these asymptotic behaviours into Eq.(\ref{27}) we find \begin{equation}\label{29} \lim_{t\rightarrow 0}\mathcal{K}(\mathbf{y}'|\mathbf{y},t)=\delta(\mathbf{y}'-\mathbf{y})= \lim_{t\rightarrow 0} e^{\varphi(t)}\,e^{-\frac{i}{2\hbar t}(\mathbf{y}'-\mathbf{y})^2}. \end{equation} Comparing Eq.(\ref{29}) with the following one-dimensional representation of the Dirac delta function \begin{equation}\label{30} \lim_{t\rightarrow 0} \sqrt{\frac{A}{\pi t}}\,e^{-\frac{A}{t}\,(x-x')^2}=\delta(x-x'), \end{equation} we deduce immediately \begin{equation}\label{31} \lim_{t\rightarrow 0} e^{\varphi(t)}=\bigg(\frac{i}{2\pi\hbar\,t}\bigg)^{\frac{N+1}{2}}, \end{equation} so we can assume \begin{equation}\label{32} e^{\varphi(t)}=\bigg(\frac{i}{2\pi\hbar\,t}\bigg)^{\frac{N+1}{2}}e^{\lambda(t)}, \end{equation} where the unknown function $\lambda(t)$ satisfies \begin{equation}\label{33} \lim_{t\rightarrow 0} \lambda(t)=0. \end{equation} The function $\mathcal{K}$ now takes the form \begin{eqnarray}\label{34} \mathcal{K}(\mathbf{y}'|\mathbf{y},t) &=& e^{\lambda(t)}\bigg(\frac{i}{2\pi\hbar t}\bigg)^{\frac{N+1}{2}}\,e^{-\frac{i}{2\hbar}[\mathbf{y}\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}+\mathbf{y}'\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}'-2\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{y}]}.
\end{eqnarray} To find $\lambda(t)$ we make use of the identity \begin{equation}\label{35} \delta(\mathbf{y}'-\mathbf{y})=\int d\mathbf{y}''\,\mathcal{K}(\mathbf{y}'|\mathbf{y}'',t)\,\mathcal{K}^{*}(\mathbf{y}|\mathbf{y}'',t), \end{equation} which can easily be checked using the definition of $\mathcal{K}$, Eq.~(\ref{18-1}). By inserting Eq.~(\ref{34}) and its complex conjugate into Eq.~(\ref{35}) and performing the integral we find \begin{equation}\label{36} e^{\lambda(t)}=\frac{t^{\frac{N+1}{2}}}{\sqrt{|\det \mathbf{F}(t)|}}\,e^{i\theta}, \end{equation} where $\theta$ is a real function that will be determined from the limiting case where the coupling constants are turned off ($g_1=\cdots=g_N=0$) and from the fact that the propagator should satisfy the Schr\"{o}dinger equation. It should be noted that, according to the definition Eq.~(\ref{18-1}), the Feynman propagator is related to the function $\mathcal{K}$ by \begin{equation}\label{37} K(\mathbf{y},t;\mathbf{y}',0)=\langle \mathbf{y},t|\mathbf{y}',0\rangle=\langle \mathbf{y}'|\mathbf{y},t\rangle^{*}=\mathcal{K}^{*}(\mathbf{y}'|\mathbf{y},t); \end{equation} therefore, the Feynman propagator is given by \begin{eqnarray}\label{38} K(\mathbf{y},t;\mathbf{y}',0) &=& \frac{e^{-i\theta}}{\sqrt{|\det F(t)|}}\bigg(\frac{1}{2\pi i\hbar}\bigg)^{\frac{N+1}{2}}\,e^{\frac{i}{2\hbar}[\mathbf{y}\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}+\mathbf{y}'\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}'-2\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{y}]}.\nonumber\\ \end{eqnarray} Now we set $\mathbf{y}'=0$ and require that $K(\mathbf{y},t;0,0)$ satisfy the Schr\"{o}dinger equation \begin{equation}\label{38-1} i\hbar\frac{\partial K(\mathbf{y},t;0,0)}{\partial t}=\bigg[\frac{1}{2} \sum_{\mu=0}^N \bigg(-\hbar^2\frac{\partial^2}{\partial y_\mu^2}+\omega_\mu^2 y_\mu^2\bigg)- \frac{1}{2}\sum_{\mu,\nu=0}^N y_\mu \Omega_{\mu\nu}^2 y_\nu\bigg]\,K(\mathbf{y},t;0,0),
\end{equation}
After the spatial differentiations we set $\mathbf{y}=0$, and by comparing both sides of Eq.~(\ref{38-1}) we find that $\theta$ is a constant. To find the constant $\theta$, we turn off the coupling constants ($g_1=g_2=\cdots=g_N=0$); by consistency we should then recover the quantum propagator of $N+1$ noninteracting oscillators. When the coupling constants are turned off, we have \begin{eqnarray}\label{39} \mathbf{F}^{-1} (t) &=& \mbox{diag}\bigg(\frac{\omega_0}{\sin(\omega_0 t)},\frac{\omega_1}{\sin(\omega_1 t)},\cdots,\frac{\omega_N}{\sin(\omega_N t)}\bigg),\nonumber\\ \dot{\mathbf{F}}(t) &=& \mbox{diag}(\cos(\omega_0 t),\cos(\omega_1 t),\cdots,\cos(\omega_N t)). \end{eqnarray} Inserting Eqs.~(\ref{39}) into Eq.~(\ref{38}) we find \begin{equation}\label{40} K(\mathbf{y},t;\mathbf{y}',0)=e^{-i\theta}\prod_{\mu=0}^N \sqrt{\frac{\omega_\mu}{2\pi i\hbar \sin(\omega_\mu t)}} \,e^{\frac{i \omega_\mu}{2\hbar\sin(\omega_\mu t)}\big[(y_\mu^2+{y'}^2_\mu)\cos(\omega_\mu t)-2y_\mu y'_\mu\big]}, \end{equation} which is the propagator of $N+1$ noninteracting oscillators if we set $\theta=0$. Finally, we find the quantum propagator of the oscillator-bath system as \begin{eqnarray}\label{41} K(\mathbf{y},t;\mathbf{y}',0) &=& \frac{1}{\sqrt{|\det F(t)|}}\bigg(\frac{1}{2\pi i\hbar}\bigg)^{\frac{N+1}{2}}\,e^{\frac{i}{2\hbar}[\mathbf{y}\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}+\mathbf{y}'\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}'-2\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{y}]}.\nonumber\\ \end{eqnarray} \section{Density matrix} \noindent In this section we will find the density matrix of the oscillator-bath system using the explicit form of the quantum propagator of the combined system, Eq.~(\ref{41}).
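Before proceeding, the matrix functions entering Eq.~(\ref{41}) can be sanity-checked numerically: assuming, as in the text, a symmetric positive-definite matrix $B$ with $F(t)=\sin(\sqrt{B}\,t)/\sqrt{B}$, both the small-$t$ behaviour of Eq.~(\ref{28}) and the uncoupled limit of Eq.~(\ref{39}) can be verified by diagonalizing $B$. The $3\times 3$ matrices below are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative symmetric positive-definite stand-in for the matrix B of
# the text; F(t) = sin(sqrt(B) t)/sqrt(B) is built mode by mode.
B = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])
w2, V = np.linalg.eigh(B)              # B = V diag(w2) V^T
w = np.sqrt(w2)                        # normal-mode frequencies

F    = lambda t: (V * (np.sin(w * t) / w)) @ V.T
Fdot = lambda t: (V * np.cos(w * t)) @ V.T

# Small-t behaviour quoted in Eq. (28):
t, I = 1e-6, np.eye(3)
print(np.allclose(F(t), t * I, atol=1e-12))                        # F(t) ~ t I
print(np.allclose(np.linalg.inv(F(t)), I / t, rtol=1e-5, atol=1e-6))  # F^{-1} ~ I/t
print(np.allclose(Fdot(t), I, atol=1e-6))                          # F'(t) ~ I

# Uncoupled limit, Eq. (39): a diagonal B gives diagonal F' = cos(sqrt(B) t).
om = np.array([1.0, 2.0, 3.0])
wd2, Vd = np.linalg.eigh(np.diag(om**2))
Fdot_d = (Vd * np.cos(np.sqrt(wd2) * 0.5)) @ Vd.T
print(np.allclose(Fdot_d, np.diag(np.cos(om * 0.5))))
```

Since any function of $B$ shares the eigenvectors of $B$, diagonalizing $B$ once suffices for all of $F$, $\dot F$ and $F^{-1}$.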
If we denote the evolution operator by $\hat{U}(t)$, the density matrix at time $t$ can be obtained from the initial density matrix at $t=0$ as \begin{equation}\label{42} \hat{\rho}(t)=\hat{U}(t) \hat{\rho}(0) \hat{U}^{\dag}(t). \end{equation} In the position representation we have \begin{eqnarray}\label{43} \rho(\mathbf{y},\mathbf{y}';t) &=& \langle \mathbf{y}|\rho (t)|\mathbf{y}'\rangle \nonumber\\ &=& \int d\mathbf{y}_1 d\mathbf{y}_2\,\langle \mathbf{y}|\hat{U}(t)|\mathbf{y}_1\rangle\langle \mathbf{y}_1|\rho (0)|\mathbf{y}_2\rangle\langle \mathbf{y}_2 |\hat{U}^{\dag}(t)|\mathbf{y}'\rangle,\nonumber \\ &=& \int d\mathbf{y}_1 d\mathbf{y}_2\, K(\mathbf{y},t;\mathbf{y}_1,0)\rho(\mathbf{y}_1,\mathbf{y}_2;0) K^{*}(\mathbf{y}',t;\mathbf{y}_2,0). \end{eqnarray} We could assume an arbitrary initial state for the oscillator-bath system, but for simplicity we take the initial state to be a product state, \begin{equation}\label{44} \rho(\mathbf{y}_1,\mathbf{y}_2;0)=\rho_{red} (y_{10},y_{20};0)\otimes \rho_B (\vec{y}_1,\vec{y}_2;0), \end{equation} where $\mathbf{y}_1=(y_{10},\vec{y}_1)$ and $\mathbf{y}_2=(y_{20},\vec{y}_2)$. To find the reduced density matrix of the main oscillator, we trace over the degrees of freedom of the bath oscillators.
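An elementary property of Eqs.~(\ref{42})-(\ref{43}) is that conjugation by a unitary preserves the trace, hermiticity and positivity of the density matrix. A minimal numerical sketch, with an arbitrary illustrative three-level Hamiltonian (not the oscillator-bath one):

```python
import numpy as np

# rho(t) = U rho(0) U^dagger for a random 3-level system; H and rho(0)
# are illustrative, not the oscillator-bath quantities of the text.
rng = np.random.default_rng(1)
H = rng.normal(size=(3, 3)); H = H + H.T             # Hermitian Hamiltonian
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho0 = A @ A.conj().T
rho0 /= np.trace(rho0).real                          # valid density matrix

t, hbar = 0.7, 1.0
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T   # U = exp(-iHt/hbar)
rho_t = U @ rho0 @ U.conj().T

print(np.isclose(np.trace(rho_t).real, 1.0))         # trace preserved
print(np.allclose(rho_t, rho_t.conj().T))            # hermitian
print(np.all(np.linalg.eigvalsh(rho_t) > -1e-12))    # positive semidefinite
```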
Straightforward calculations lead to \begin{eqnarray}\label{46} \rho_{red} (y_0,y'_0;t) &=& \frac{1}{|\det F|}\frac{1}{(2\pi\hbar)^{N+1}}\int dy_{01} dy_{02}\,e^{\frac{ia(y_0^2-{y'_0}^2)}{2\hbar}}\nonumber\\ &\cdot & e^{\frac{ia(y^2_{01}-y^2_{02})}{2\hbar}-\frac{ib(y_0 y_{01}-y_0' y_{02})}{\hbar}}\rho_{red}(y_{01},y_{02};0)\nonumber\\ &\cdot & \,I(y_0,y'_0;y_{01},y_{02}), \end{eqnarray} where \begin{eqnarray}\label{I} I(y_0,y'_0;y_{01},y_{02}) &=& \int d\vec{y}\,e^{\frac{i}{\hbar}\sum\limits_{k=1}^N\big[(y_0-y'_0)B_k-(y_{01}-y_{02})C_k\big]y_k}\int d\vec{y}_1 d\vec{y}_2\,\rho_B (\vec{y}_1,\vec{y}_2;0)\nonumber\\ &\cdot & \,e^{\frac{i}{2\hbar}\sum\limits_{k,l=1}^N y_{1k}A_{kl}y_{1l}}\,e^{\frac{i}{\hbar}\sum\limits_{k=1}^N \big[y_{01}B_k-y_0 C_k -\sum\limits_{l=1}^N D_{kl}y_l\big] y_{1k}}\nonumber\\ &\cdot & e^{\frac{-i}{2\hbar}\sum\limits_{k,l=1}^N y_{2k}A_{kl}y_{2l}}e^{\frac{-i}{\hbar}\sum\limits_{k=1}^N \big[y_{02}B_k-y'_0 C_k -\sum\limits_{l=1}^N D_{kl}y_l\big]y_{2k}}. \end{eqnarray} Eq.~(\ref{46}) can be rewritten as \begin{equation}\label{J1} \rho_{red} (y_0,y'_0;t) = \int dy_{01} dy_{02}\,J(y_0,y'_0;t|y_{01},y_{02})\,\rho_{red}(y_{01},y_{02};0), \end{equation} where \begin{eqnarray}\label{J2} J(y_0,y'_0;t|y_{01},y_{02}) &=& \frac{1}{|\det F|}\frac{1}{(2\pi\hbar)^{N+1}}\,e^{\frac{ia(y_0^2-{y'_0}^2)}{2\hbar}}\,e^{\frac{ia(y^2_{01}-y^2_{02})}{2\hbar}-\frac{ib(y_0 y_{01}-y_0' y_{02})}{\hbar}}\nonumber\\ & \times & I(y_0,y'_0;y_{01},y_{02}). \end{eqnarray} The function $J(y_0,y'_0;t|y_{01},y_{02})$, which can be interpreted as a reduced kernel, has been expressed in the path-integral language in terms of the Feynman-Vernon influence functional \cite{Path-0,Path-1,Path-4}. Here we have obtained the reduced kernel in terms of quadratic (Gaussian) integrals.
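The quadratic Gaussian integrals appearing here can all be done in closed form. As a quick numerical sanity check, the following sketch verifies the one-dimensional instance of the standard Gaussian formula, $\int dx\, e^{-\frac{1}{2}\Gamma x^2 + jx} = \sqrt{2\pi/\Gamma}\, e^{j^2/2\Gamma}$; the values of $\Gamma$ and $j$ are arbitrary:

```python
import numpy as np

# One-dimensional Gaussian integral vs. its closed form; Gamma and j are
# arbitrary illustrative values.
Gamma, j = 1.7, 0.4
x = np.linspace(-30.0, 30.0, 400001)
g = np.exp(-0.5 * Gamma * x**2 + j * x)
# trapezoidal rule (implemented directly for portability)
numeric = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x))
closed = np.sqrt(2 * np.pi / Gamma) * np.exp(j**2 / (2 * Gamma))
print(abs(numeric - closed) < 1e-8)   # True
```

The multivariate version replaces $\Gamma$ by a positive symmetric matrix, $2\pi/\Gamma$ by $(2\pi)^N/\det\Gamma$, and $j^2/\Gamma$ by $j\cdot\Gamma^{-1}j$.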
The time-dependent functions ($a, b$), vectors ($C_k, B_k$) and matrices ($A_{kl}, D_{kl}$) are defined by \begin{eqnarray}\label{defs} a(t) &=& (F^{-1}\dot{F})_{00},\nonumber\\ b(t) &=& (F^{-1})_{00},\nonumber\\ C_k (t) &=& (F^{-1})_{k0}=(F^{-1})_{0k},\nonumber\\ B_k (t) &=& (F^{-1}\dot{F})_{0k}=(F^{-1}\dot{F})_{k0},\nonumber\\ A_{kl} &=& (\mathbf{A})_{kl}=(F^{-1}\dot{F})_{kl},\nonumber\\ D_{kl} &=& (\mathbf{D})_{kl}=(F^{-1})_{kl}=(F^{-1})_{lk}, \end{eqnarray} which can be written more compactly in matrix form as \begin{equation}\label{FF} F^{-1}\dot{F}=\left( \begin{array}{cc} a & \mathbf{B}^T \\ \mathbf{B} & \mathbf{A} \\ \end{array} \right),\,\,\,\, F^{-1}=\left( \begin{array}{cc} b & \mathbf{C}^T \\ \mathbf{C} & \mathbf{D} \\ \end{array} \right), \end{equation} \begin{equation}\label{CB} \mathbf{B}^T=[B_1, B_2,\cdots,B_N],\,\,\,\mathbf{C}^T=[C_1, C_2,\cdots,C_N]. \end{equation} Let the initial state of the bath be a thermal state, \begin{eqnarray}\label{ex-1} \rho_B (\vec{y}_1,\vec{y}_2;0) &=& \bigg(\prod_{k=1}^N\sqrt{\frac{\omega_k}{2\pi\hbar\sinh(\beta\hbar\omega_k)}}\bigg)\nonumber\\ &\times & e^{-\sum\limits_{k=1}^N \frac{\omega_k}{2\hbar\sinh(\beta\hbar\omega_k)}\big[(y_{1k}^2+y_{2k}^2)\cosh(\beta\hbar\omega_k)-2y_{1k}y_{2k}\big]}. \end{eqnarray} Then the integrals over $\vec{y}_1$, $\vec{y}_2$ and $\vec{y}$ in Eq.~(\ref{I}) are of Gaussian type and can be evaluated using the generic formula \cite{Zin} \begin{equation}\label{zin} \int d\vec{x}\,e^{-\frac{1}{2}\sum\limits_{k,l=1}^N x_k \Gamma_{kl} x_l+\sum\limits_{k=1}^N j_k x_k}=(2\pi)^{N/2}(\det \Gamma)^{-1/2}\, e^{\frac{1}{2}\sum\limits_{k,l=1}^N j_k \Gamma^{-1}_{kl} j_l}, \end{equation} where $\Gamma$ is a positive, symmetric matrix. \subsection{Master equation} \noindent The central object of open-quantum-system theory is the master equation.
To find the master equation satisfied by the reduced density matrix $\rho_{red}$, we insert the initial bath state Eq.~(\ref{ex-1}) into Eq.~(\ref{I}) and perform the integrals over $\vec{y}$, $\vec{y}_1$ and $\vec{y}_2$; after straightforward but tedious calculations we find the following expression for the reduced kernel defined in Eq.~(\ref{J2}): \begin{eqnarray}\label{J} J(y_0,y'_0;y_{01},y_{02}) &=& \frac{b_3}{2\pi}\,e^{ib_1 X\xi+ib_2 X_0\xi-ib_3 X\xi_0-ib_4 X_0 \xi_0}\nonumber\\ &\times & e^{-a_{11} \xi^2-a_{12}\xi\xi_0-a_{22}\xi_0^2}, \end{eqnarray} where, for later convenience, we have chosen the same notation for the time-dependent coefficients $b_k (t)$ and $a_{ij} (t)$ introduced by Paz in \cite{Paz} following the path-integral technique. These coefficients can be obtained in terms of the functions given by Eqs.~(\ref{defs}) or in terms of the environment properties described in \cite{Paz}. Following the same process described by Paz in \cite{Paz}, we recover the master equation $(\hbar=1)$ for the reduced density matrix, \begin{eqnarray}\label{Master} i\frac{\partial \rho_{red} (y_0,y'_0,t)}{\partial t} &=& \langle y_0|[H_{ren},\rho_{red}]|y'_0\rangle-i\gamma(t)(y_0-y'_0)\bigg(\frac{\partial }{\partial y_0}-\frac{\partial}{\partial y'_0}\bigg)\rho_{red}(y_0,y'_0,t)\nonumber\\ && -i D(t) (y_0-y'_0)^2\,\rho_{red}(y_0,y'_0,t)+f(t) (y_0-y'_0) \bigg(\frac{\partial }{\partial y_0}+\frac{\partial}{\partial y'_0}\bigg)\rho_{red}(y_0,y'_0,t), \end{eqnarray} where $H_{ren}$ is the renormalized Hamiltonian of the main oscillator with the renormalized frequency $\omega_{ren} (t)$. For the connection between the functions $\omega_{ren} (t), \gamma(t), D(t), f(t)$ and the coefficients $b_k (t),\,a_{ij}(t)$, the interested reader is referred to \cite{Paz}.
\section{Thermal Equilibrium: fixed point} \noindent In the equilibrium state, the density matrix of the oscillator-bath system can be obtained from the quantum propagator using the correspondence between quantum propagator and partition function, \begin{equation}\label{P1} \rho(y_0,\vec{y};y'_0,\vec{y}',\beta)=\frac{1}{Z(\beta)}\,K(y_0,\vec{y},-i\hbar\beta;y'_0,\vec{y}',0), \end{equation} where $\beta=1/\kappa_B T$ is the inverse temperature and $\kappa_B$ is the Boltzmann constant. The function $Z(\beta)$ is the total partition function \begin{eqnarray}\label{P2} Z(\beta) &=& \int dy_0 d\vec{y}\,K(y_0,\vec{y},-i\hbar\beta;y_0,\vec{y},0),\nonumber\\ &=& \frac{1}{2^{\frac{N+1}{2}}}\frac{1}{\sqrt{\det (\dot{F}-\mathbb{I})}}\bigg|_{t=-i\hbar\beta}, \end{eqnarray} and $\mathbb{I}$ is the $(N+1)$-dimensional identity matrix. The reduced density matrix of the oscillator is obtained by integrating out the bath degrees of freedom: \begin{eqnarray}\label{p3} \rho_{red}(y_0,y'_0;\beta) &=& \frac{1}{Z(\beta)}\int d\vec{y}\,K(y_0,\vec{y},-i\hbar\beta;y'_0,\vec{y},0),\nonumber\\ &=& \sqrt{\frac{\det (\dot{F}-\mathbb{I})}{i\pi\hbar\det F\det(\mathbf{A}-\mathbf{D})}}\,e^{\frac{i}{2\hbar}\big[({y_0}^2+{y'_0}^2)(a-\frac{\eta}{2})-2 y_0 y'_0 (b+\frac{\eta}{2})\big]},\nonumber\\ \end{eqnarray} where \begin{equation}\label{p4} \eta=\sum_{k,l=1}^N(B_k-C_k)(\mathbf{A}-\mathbf{D})^{-1}_{kl} (B_l-C_l)|_{t=-i\hbar\beta}.
\end{equation} From Eq.~(\ref{FF}) we have \begin{equation}\label{p5} F^{-1}(\dot{F}-\mathbb{I})=\left( \begin{array}{cc} a-b & \mathbf{B}^T-\mathbf{C}^T \\ \mathbf{B}-\mathbf{C} & \mathbf{A}-\mathbf{D} \\ \end{array} \right). \end{equation} By making use of the identity \cite{Matrix} \begin{eqnarray}\label{p6} \det[F^{-1}(\dot{F}-\mathbb{I})] &=& \det(\mathbf{A}-\mathbf{D})\,\big[a-b-\underbrace{(\mathbf{B}^T-\mathbf{C}^T)(\mathbf{A}-\mathbf{D})^{-1}(\mathbf{B}-\mathbf{C})}_{\eta}\big],\nonumber\\ &=& \det(\mathbf{A}-\mathbf{D})(a-b-\eta), \end{eqnarray} Eq.~(\ref{p3}) can be rewritten as \begin{equation}\label{p7} \rho_{red}(y_0,y'_0;\beta)=\sqrt{\frac{a-b-\eta}{i\pi\hbar}}\,e^{\frac{i}{2\hbar}\big[({y_0}^2+{y'_0}^2)(a-\frac{\eta}{2})-2 y_0 y'_0 (b+\frac{\eta}{2})\big]}. \end{equation} From Eq.~(\ref{p7}) we find the thermal mean squares of position and momentum, \begin{eqnarray} \langle y_0^2 \rangle &=& \frac{i\hbar}{2(a-b-\eta)}\bigg|_{t=-i\hbar\beta},\nonumber \\ \langle p_0^2 \rangle &=& -i\hbar\frac{a+b}{2}\bigg|_{t=-i\hbar\beta}; \end{eqnarray} therefore, \begin{equation}\label{p8} \rho_{red}(y_0,y'_0;\beta)=\frac{1}{\sqrt{2\pi\langle y_0^2 \rangle}}\,e^{-\frac{\langle p_0^2 \rangle}{2\hbar^2}(y_0-y_0')^2-\frac{1}{8\langle y_0^2 \rangle}(y_0+y_0')^2}. \end{equation} For another derivation, see \cite{Weiss}. \section{Main oscillator interacting with an external field} \noindent Now assume that the main oscillator is under the influence of an external classical field $f(t)$. In this case the total Lagrangian is \begin{equation}\label{47} L=\frac{1}{2} \sum_{\mu=0}^N (\dot{Y}^2_\mu-\omega_\mu^2 Y_\mu^2)+\frac{1}{2}\sum_{\mu,\nu=0}^N Y_\mu \Omega_{\mu\nu}^2 Y_\nu-f(t) Y_0, \end{equation} and the corresponding Hamiltonian is \begin{equation}\label{48} H=\frac{1}{2} \sum_{\mu=0}^N (P^2_\mu+\omega_\mu^2 Y_\mu^2)-\frac{1}{2}\sum_{\mu,\nu=0}^N Y_\mu \Omega_{\mu\nu}^2 Y_\nu+f(t) Y_0.
\end{equation} Note that the Hamiltonian is now time-dependent and we cannot use Eqs.~(\ref{24},\ref{25}). In this case, we can find another partial differential equation satisfied by $\mathcal{K}(\mathbf{y}'|\mathbf{y},t)$ as follows. From the Heisenberg equations of motion we find \begin{equation}\label{48-1} \ddot{\hat{Y}}_\mu +\omega_\mu^2 \hat{Y}_\mu-\sum_\nu \Omega_{\mu\nu}^2 \hat{Y}_\nu=-f(t)\,\delta_{\mu 0}. \end{equation} The Green tensor corresponding to Eq.~(\ref{48-1}) is defined by \begin{equation}\label{48-2} \sum_\nu\bigg(\big[\partial^2_t +\omega_\mu^2\big]\delta_{\mu\nu}-\Omega^2_{\mu\nu}\bigg)G_{\nu\alpha} (t-t')=\delta_{\mu\alpha}\,\delta(t-t'). \end{equation} By making use of the Laplace transform and the definitions in Eqs.~(\ref{9},\ref{12}), we find the retarded Green tensor \begin{equation}\label{48-3} G_{\mu\nu} (t-t')=F_{\mu\nu} (t-t'), \end{equation} and the position and momentum operators are respectively given by \begin{eqnarray}\label{49} && \hat{Y}_\mu (t)=\sum_{\nu} \big[\dot{F}_{\mu\nu} (t) \hat{Y}_\nu (0)+F_{\mu\nu} (t) \hat{P}_\nu (0)\big]-R_\mu (t),\nonumber\\ && \hat{P}_\mu =\dot{\hat{Y}}_\mu=\sum_{\nu} \big[\ddot{F}_{\mu\nu} (t) \hat{Y}_\nu (0)+\dot{F}_{\mu\nu} (t) \hat{P}_\nu (0)\big]-\dot{R}_\mu (t), \end{eqnarray} where we have defined \begin{equation}\label{50} R_\mu (t)=\int_0^t dt'\,F_{\mu 0} (t-t') f(t'). \end{equation} We can rewrite the identity \begin{equation}\label{51} \hat{P}_\mu (t)=\hat{U}^{\dag}(t) \hat{P}_\mu (0) \hat{U}(t), \end{equation} as \begin{equation}\label{52} \hat{P}_\mu (t) \hat{U}^{\dag}(t)=\hat{U}^{\dag}(t) \hat{P}_\mu (0), \end{equation} then \begin{equation}\label{53} \langle \mathbf{y}'|\hat{P}_\mu (t) \hat{U}^{\dag}(t)|\mathbf{y}\rangle=\langle \mathbf{y}'|\hat{U}^{\dag}(t) \hat{P}_\mu (0)|\mathbf{y}\rangle.
\end{equation} By inserting the momentum operator from the second line of Eqs.~(\ref{49}) into Eq.~(\ref{53}), we easily find \begin{equation}\label{54} \sum_{\nu}\bigg(\ddot{F}_{\mu\nu} (t)\,y'_\nu-i\hbar\,\dot{F}_{\mu\nu} (t)\,\frac{\partial}{\partial y'_\nu}\bigg)\mathcal{K}(\mathbf{y}'|\mathbf{y},t)-\dot{R}_\mu (t)\,\mathcal{K}(\mathbf{y}'|\mathbf{y},t)=y_\mu\,\mathcal{K}(\mathbf{y}'|\mathbf{y},t). \end{equation} By making use of Eqs.~(\ref{18},\ref{29},\ref{35},\ref{54}), and following the same process as in Sec.~III, we find \begin{eqnarray}\label{55} K^{(f)}(\mathbf{y},t;\mathbf{y}',0) &=& \frac{e^{-i\zeta(t)}}{\sqrt{|\det F(t)|}}\bigg(\frac{1}{2\pi i\hbar}\bigg)^{\frac{N+1}{2}}\,e^{\frac{i}{2\hbar}\big[\mathbf{y}\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}+\mathbf{y}'\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}'-2\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{y}\big]}\nonumber\\ &\times & e^{-\frac{i}{\hbar}\,\big[\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{R} +\mathbf{y}\cdot \mathbf{F}^{-1}\cdot\check{\mathbf{R}}\big]}, \end{eqnarray} where we have defined $\check{\mathbf{R}}$ as \begin{equation}\label{56} \check{R}_\mu (t)=\int_0^{t} dt'\,F_{\mu 0}(t') f(t'), \end{equation} and the function $\zeta(t)$ can be determined from the Schr\"{o}dinger equation \begin{eqnarray}\label{57} && i\hbar\frac{\partial K^{(f)}(\mathbf{y},t;0,0)}{\partial t}\bigg|_{\mathbf{y}=0}=\nonumber\\ && \bigg[\frac{1}{2} \sum_{\mu=0}^N \bigg(-\hbar^2\frac{\partial^2}{\partial y_\mu^2}+\omega_\mu^2 y_\mu^2\bigg)- \frac{1}{2}\sum_{\mu,\nu=0}^N y_\mu \Omega_{\mu\nu}^2 y_\nu +f(t) y_0\bigg]\,K^{(f)}(\mathbf{y},t;0,0)\bigg|_{\mathbf{y}=0},\nonumber\\ \end{eqnarray} as \begin{eqnarray}\label{58} \zeta(t) &=& \frac{1}{2\hbar}\int_0^t ds\,\check{\mathbf{R}}(s)\cdot \mathbf{F}^{-2} (s)\cdot \check{\mathbf{R}}(s),\nonumber\\ &=& \frac{1}{\hbar}\int_0^t ds\,\int_0^{s} du\,f(s)\bigg[\frac{\sin(\sqrt{B}u)\sin[\sqrt{B}(t-s)]}{\sqrt{B}\sin(\sqrt{B}t)}\bigg]_{00}f(u).
\end{eqnarray} Finally, the quantum propagator for the oscillator-bath system under the influence of an external classical force on the main oscillator is obtained as \begin{eqnarray}\label{59} K^{(f)}(\mathbf{y},t;\mathbf{y}',0) &=& \frac{1}{\sqrt{|\det F(t)|}}\bigg(\frac{1}{2\pi i\hbar}\bigg)^{\frac{N+1}{2}}\,e^{\frac{i}{2\hbar}\big[\mathbf{y}\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}+\mathbf{y}'\cdot \mathbf{F}^{-1} \dot{\mathbf{F}}\cdot \mathbf{y}'-2\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{y}\big]}\nonumber\\ &\times & e^{-\frac{i}{\hbar}\,\big[\mathbf{y}'\cdot \mathbf{F}^{-1}\cdot \mathbf{R} +\mathbf{y}\cdot \mathbf{F}^{-1}\cdot\check{\mathbf{R}}\big]}\,e^{-\frac{i}{2\hbar}\int_0^t ds\,\check{\mathbf{R}}(s)\cdot \mathbf{F}^{-2} (s)\cdot \check{\mathbf{R}}(s)}. \end{eqnarray} \subsection{A generalization: generating function} \noindent We can generalize the Lagrangian Eq.~(\ref{47}) to \begin{equation}\label{60} L=\frac{1}{2} \sum_{\mu=0}^N (\dot{Y}^2_\mu-\omega_\mu^2 Y_\mu^2)+\frac{1}{2}\sum_{\mu,\nu=0}^N Y_\mu \Omega_{\mu\nu}^2 Y_\nu-\sum_{\mu=0}^N f_{\mu}(t) Y_\mu. \end{equation} In this case the quantum propagator is again given by Eq.~(\ref{59}), but now the definitions Eqs.~(\ref{50},\ref{56}) have to be replaced by \begin{eqnarray}\label{61} R_\mu (t) &=& \sum_{\nu=0}^N\int_0^t dt'\,F_{\mu\nu} (t-t') f_{\nu} (t'),\nonumber\\ \check{R}_\mu (t) &=& \sum_{\nu=0}^N\int_0^t dt'\,F_{\mu\nu} (t') f_{\nu} (t'). \end{eqnarray} The path-integral representation of the quantum propagator Eq.~(\ref{59}) is \cite{Greiner} \begin{equation}\label{62} K^{(f)}(\mathbf{y},t;\mathbf{y}',0)=\int D[\mathbf{x}]\,e^{\frac{i}{\hbar}\int_0^t d\tau\,L}, \end{equation} where $L$ is the Lagrangian Eq.~(\ref{60}). Having the closed-form expression Eq.~(\ref{59}), we can find ordered correlation functions among the position operators of the oscillator-bath system.
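The Green-tensor relations (\ref{48-2})-(\ref{50}) underlying these formulas lend themselves to a numerical sanity check: $F(t)=\sin(\sqrt{B}\,t)/\sqrt{B}$ satisfies $\ddot F+BF=0$ with $F(0)=0$ and $\dot F(0)=\mathbb{I}$, and for a constant unit force on the main oscillator the retarded convolution $R(t)=\int_0^t F(t-s)\,e_0\,ds$ reduces (by direct integration mode by mode) to $B^{-1}(\mathbb{I}-\dot F(t))e_0$. A sketch with an illustrative $2\times 2$ matrix $B$, not taken from any data:

```python
import numpy as np

# Illustrative symmetric positive-definite matrix B; F(t) = sin(sqrt(B)t)/sqrt(B).
B = np.array([[2.0, 0.4],
              [0.4, 1.3]])
w2, V = np.linalg.eigh(B)
w = np.sqrt(w2)
F  = lambda t: (V * (np.sin(w * t) / w)) @ V.T
Fd = lambda t: (V * np.cos(w * t)) @ V.T
e0 = np.array([1.0, 0.0])

t, h = 0.8, 1e-3
Fdd = (F(t + h) - 2.0 * F(t) + F(t - h)) / h**2        # second difference
print(np.allclose(Fdd + B @ F(t), 0.0, atol=1e-4))     # F'' + B F = 0
print(np.allclose(F(0.0), 0.0), np.allclose(Fd(0.0), np.eye(2)))

# Retarded convolution for a constant unit force, via the trapezoidal rule,
# against the closed form B^{-1}(I - F'(t)) e0.
s = np.linspace(0.0, t, 20001)
vals = np.array([F(t - si) @ e0 for si in s])
R = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s)[:, None], axis=0)
print(np.allclose(R, np.linalg.solve(B, (np.eye(2) - Fd(t)) @ e0), atol=1e-6))
```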
In this case, the external source $f_\mu (t)$ is an auxiliary force that is set to zero at the end of the functional differentiations \cite{Greiner}; we have \begin{eqnarray}\label{63} && \langle \mathbf{y},t|\hat{T}[\hat{Y}_{\mu_1}(t_1) \hat{Y}_{\mu_2}(t_2)\cdots \hat{Y}_{\mu_N}(t_N)]|\mathbf{y}',0 \rangle = \nonumber\\ && \frac{\int D[\mathbf{x}]\,y_{\mu_1}(t_1) y_{\mu_2}(t_2)\cdots y_{\mu_N}(t_N)\,e^{\frac{i}{\hbar}\int_0^t d\tau\,L}}{\int D[\mathbf{x}]\,e^{\frac{i}{\hbar}\int_0^t d\tau\,L}}= \nonumber \\ && \frac{(i\hbar)^N}{K^{(0)}(\mathbf{y},t;\mathbf{y}',0)} \frac{\delta^{N}}{\delta f_{\mu_1} (t_1)\cdots \delta f_{\mu_N} (t_N)} K^{(f)}(\mathbf{y},t;\mathbf{y}',0)\bigg|_{f=0}, \end{eqnarray} where $\hat{T}$ is the time-ordering operator, acting on bosonic operators as \begin{equation}\label{64} \hat{T}[\hat{A}(t)\hat{B}(t')]=\left\{ \begin{array}{ll} \hat{A}(t)\hat{B}(t'), & \hbox{$t>t'$;} \\ \hat{B}(t')\hat{A}(t), & \hbox{$t'>t$.} \end{array} \right. \end{equation} \section{Conclusions} \noindent Using elementary quantum mechanical calculations and basic properties of quantum propagators, an alternative derivation of the exact quantum propagator for the oscillator-bath system was presented. Compared with other methods for deriving the quantum propagator of an oscillator-bath system with linear interaction, or more generally of quadratic Lagrangians, this method is easier to apply; in particular, compared with the path-integral approach, there is no need to introduce more advanced mathematical notions such as infinite-dimensional integration, operator determinants and Weyl ordering. From the quantum propagator, a closed-form density matrix describing the combined oscillator-bath system was obtained, from which the reduced density matrix could be derived. The problem was generalised to the case where the main oscillator is under the influence of an external classical source.
By introducing auxiliary classical fields, the modified quantum propagator, i.e. the generating functional of position correlation functions, was found. The basic ingredient of the approach is the symmetric, time-independent matrix $B$, which depends on the natural frequencies of the bath oscillators and on the coupling constants. From a numerical or simulation point of view, the only challenge is therefore to invert, or equivalently to diagonalize, the matrix $B$. The efficiency of the method in determining the exact quantum propagator for quadratic Lagrangians suggests developing a perturbative extension to non-quadratic Lagrangians as well. \end{document}
\begin{document} \title{Parallelized POD-based Suboptimal Economic Model Predictive Control of a State-Constrained Boussinesq Approximation} \author{Julian Andrej\footnote{Lawrence Livermore National Laboratory, 7000 East Ave., Livermore, 94550, USA.} , Lars Gr\"une\footnote{University of Bayreuth, Universit\"atsstra\ss e 30, 95447 Bayreuth, Germany.} , Luca Mechelli\footnote{University of Konstanz, Universit\"atsstra\ss e 10, 78464 Konstanz, Germany.} , Thomas Meurer\footnote{Chair of Automatic Control, Kiel University, Kaiserstra\ss e 2, 24143, Kiel, Germany.} ,\\Simon Pirkelmann$^\dag$ and Stefan Volkwein$^\ddag$} \date{} \maketitle \begin{abstract} Motivated by an energy-efficient building application, we want to optimize a quadratic cost functional subject to the Boussinesq approximation of the Navier-Stokes equations and to bilateral state and control constraints. Since the computation of such an optimal solution is numerically costly, we design an efficient strategy to compute a suboptimal (but acceptable for the application) solution with significantly reduced computational effort. We employ an economic Model Predictive Control (MPC) strategy to obtain a feedback control. The MPC sub-problems are based on a linear-quadratic optimal control problem subject to mixed control and state constraints and a convection-diffusion equation, reduced with proper orthogonal decomposition. To solve each sub-problem, we apply a primal-dual active set strategy. The method can be fully parallelized, which enables the solution of large problems with real-world parameters. \\ \\ \textit{Keywords}: Boussinesq approximation, Model predictive control, State constraints, Proper orthogonal decomposition, Parallel computing.
\end{abstract} \section{Introduction} Coupled heat-convection phenomena like those occurring in heating, ventilation and air-conditioning (HVAC) of residential buildings can be accurately modeled by the incompressible Navier-Stokes equations; see \cite{AT90,Tem84,Tri77}, for instance. These equations describe how the physical quantities temperature, air velocity and pressure are connected and how they influence each other. While the underlying physical principles are completely described by these equations, working with them directly for the control of HVAC processes has turned out to be challenging due to the high computational complexity. Instead, simplified models of varying granularity are used in control applications. One common simplification that still retains much of the relevant physics in the HVAC regime is the Boussinesq approximation \cite{Tem84,Tri77}. The general idea is to reduce the coupling between heat and airflow by considering only buoyancy-induced changes of the fluid velocity. More specifically, only density variations along the gravity field are considered, which lead to, e.g., the rise of warmer air. Even this simplified model, however, remains expensive to solve: many degrees of freedom (DOFs) are necessary to accurately capture the involved dynamics, especially when dealing with real-world parameters; see, e.g., \cite{And19}. One viable approach is to apply model order reduction (MOR). This field has developed extensively during the last two decades and several techniques have been proposed. Among them, in the case of computational fluid dynamics, proper orthogonal decomposition (POD) appears to deliver the best performance for nonlinear dynamical systems; see, e.g., \cite{BMQR15,HLBR12,SR18}.
In the particular case of the Boussinesq approximation, the POD-based MOR approach is also certified by an a-posteriori error analysis \cite{Rav2000,Rav2011} that guarantees the validity of the computed reduced-order model. Since many PDE solutions must be computed, MOR techniques are naturally applied in the field of optimal control. In \cite{HU07}, for example, a POD-based model predictive control (MPC) algorithm is considered to approximate a feedback solution to the Boussinesq equation. MPC is a well-established model-based control method, sometimes also called Receding Horizon Control; cf. \cite{GrPa17,RMD17}. The key idea is to decompose an infinite-horizon optimal control problem into several finite-horizon sub-problems and, therefore, to optimize predictions of the model behavior over relatively short times. After a sub-problem is solved, the initial part of the computed optimal control is stored and used as feedback control, the time horizon is shifted, and the problem parameters can be updated before the procedure is iterated. This technique is particularly useful in HVAC applications: it permits merging predictions, such as weather forecasts, with system measurements \cite{OPJG12,SOCP11}. Let us mention that POD-based MPC is also considered in \cite{FFV19,GU14,Mec19,MV19} for different models. Furthermore, we are particularly interested in economic MPC; cf. \cite{AmRA11,ELC17,FaGM18,GP17,GP20,Pir20}. In this branch of MPC, more general cost functionals than tracking-type costs are allowed. In our specific application, the cost functional depends only on the control variables, without any dependence on the state variable. The focus is then to keep the system trajectory inside pre-defined bounds with as little control effort as possible, which matches the typical requirements of HVAC applications.
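The receding-horizon mechanism just described can be illustrated with a minimal sketch: a scalar surrogate "room temperature" model with a lower temperature bound and a purely control-dependent (economic) cost. All numbers are illustrative, and the brute-force subproblem solver below merely stands in for the PDASS used later in the paper:

```python
import numpy as np

# Toy scalar model y+ = y + b*u + c (c < 0 models heat loss to the outside).
# Economic MPC: minimize sum of u^2 over the horizon subject to y >= y_min.
b, c = 0.5, -0.8
y_min, N = 19.0, 3                    # lower state bound, prediction horizon
u_grid = np.linspace(0.0, 5.0, 501)   # admissible (boxed) control values

def open_loop_cost(y0, u_seq):
    y, cost = y0, 0.0
    for u in u_seq:
        y = y + b * u + c
        if y < y_min:                 # predicted state-constraint violation
            return np.inf
        cost += u ** 2
    return cost

def mpc_step(y0):
    # constant-input parametrization keeps the brute-force search 1-D
    costs = [open_loop_cost(y0, [u] * N) for u in u_grid]
    return u_grid[int(np.argmin(costs))]

y, traj = 21.0, []
for k in range(30):                   # closed loop: apply first input, shift
    u = mpc_step(y)
    y = y + b * u + c
    traj.append(y)

print(all(t >= y_min - 1e-6 for t in traj))   # bound respected in closed loop
```

In the paper the subproblem is a linear-quadratic PDE-constrained problem solved by PDASS on a POD surrogate, but the shift-and-reoptimize loop is exactly the one above.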
This work aspires to unify the above-mentioned contributions into a reliable method for computing a suboptimal solution to an optimal control problem subject to the Boussinesq approximation and bilateral state and control constraints. In particular, starting from \cite{And19,Mec19,Pir20}, we propose a POD-based economic MPC scheme which considers linear-quadratic subproblems governed by convection-diffusion equations with a virtual control approach \cite{KR09,Mec19} and possible updates of the reduced-order model through parallel solution of the Boussinesq approximation \cite{And19}. Furthermore, a primal-dual active set strategy (PDASS) \cite{HIK02,IK03} is used to solve each sub-problem. The goal is a robust and reliable method that reduces the computation time. In Section~\ref{sec:setting}, we present the optimal control problem. Section~\ref{sec:pdass} describes the virtual control approach and the PDASS for linear-quadratic optimal control problems, while Section~\ref{sec:POD} contains a brief explanation of the POD method and the a-posteriori error estimate. The POD-based economic MPC algorithm is explained in Section~\ref{sec:MPC}, and further details about the implementation are given in Section~\ref{sec:parallel}. Finally, we test our algorithm on a simplified problem in Section~\ref{sec:num_test}.
\section{The setting of the optimal control problem} \label{sec:setting} The Boussinesq approximation of the Navier-Stokes equations is given by the following PDE system: \begin{subequations} \label{eq:Bous} \begin{align} \frac{\partial{\mathbf{v}}}{\partial t} + {\mathbf{v}}\cdot \nabla {\mathbf{v}} &= - \frac{1}{\rho} \nabla p + \nu \Delta {\mathbf{v}} - \textbf{g} \alpha ( {y} - \tilde{y}) && \text{in } Q:=(0,T)\times\Omega, \label{eq:pde-navier-stokes}\\ \nabla \cdot {\mathbf{v}}&= 0 && \text{in } Q, \label{eq:pde-continuity}\\ \mathbf{v}(0)& = \mathbf{v}_\circ && \text{in } \Omega, \\ \frac{\partial y}{\partial t} + {\mathbf{v}}\cdot \nabla y &= \frac{\kappa}{\rho c_p}\,\Delta y && \text{in } Q, \label{eq:pde-heat-equation} \\ y(0) & = y_\circ && \text{in } \Omega, \label{eq:pde-heat-equation-initguess} \end{align} where $\mathbf{v}:Q\to\mathbb{R}^{2}$ is the \emph{air velocity}, $p:Q\to\mathbb{R}$ stands for the \emph{air pressure}, $y:Q\to\mathbb{R}$ is the \emph{temperature}, $\Omega\subset\mathbb{R}^2$ denotes an open and bounded domain with sufficiently smooth boundary, $T>0$ is the time horizon, $\textbf{g}\in\mathbb{R}^2$ is the \emph{gravitational acceleration}, $\mathbf{v}_\circ$, $y_\circ$ are given initial conditions and $\tilde{y}\in\mathbb{R}$ is the reference temperature at which we measure: \begin{itemize} \item the (constant) density $\rho>0$, \item the (constant) kinematic viscosity $\nu>0$, \item the (constant) coefficient of thermal expansion $\alpha>0$, \item the (constant) thermal conductivity $\kappa>0$, \item the (constant) isobaric specific heat $c_p>0$ \end{itemize} of the studied fluid (air in our case). \begin{figure} \caption{Domain $\Omega$ in the case of an underfloor heating system.
}\label{fig:domain} \end{figure} In addition, we consider time-varying boundary conditions of the form \begin{align} \mathbf{v} & = 0 && \text{ on } (0,T)\times \Gamma, \\ -\kappa\frac{\partial y}{\partial{\bm n}} &= \gamma (y - y_{\text{out}}) && \text{ on } (0,T)\times \Gamma \setminus \Gamma_\mathsf{c} ,\label{eq:pde-heat-equation-outside} \\ -\kappa\frac{\partial y}{\partial{\bm n}} &= \gamma_\mathsf{c} (y - u) && \text{ on } (0,T)\times \Gamma_\mathsf{c} ,\label{eq:pde-heat-equation-control} \end{align} \end{subequations} where $\Gamma= \partial\Omega$ and $\gamma,\gamma_\mathsf{c}>0$ are fixed parameters. The function $u\in \mathcal{U}= L^2(0,T)$ is a control input acting on a predefined part of the domain's boundary, namely $\Gamma_\mathsf{c}$, and it represents a device for controlling the temperature in the room, e.g. a radiator or an underfloor heating system; cf. Figure~\ref{fig:domain}. The quantity $y_{\text{out}}\in L^2(0,T)$ represents the outside temperature. We are interested in the weak solution to \eqref{eq:Bous}; for the straightforward variational formulation of \eqref{eq:Bous} we refer to \cite{AT90,Tem84} and \cite{HU07}. The goal is to minimize the control effort while keeping the temperature $y$ inside desired bounds, i.e. we are interested in minimizing \begin{subequations} \label{eq:opt_cont_prob} \begin{equation} \mathcal{J}(u)= \frac{1}{2} \|u\|^2_{\mathcal{U}} \end{equation} subject to (s.t.) the weak formulation of \eqref{eq:Bous} and the inequality constraints \begin{align} &u_\mathsf{a}(t) \leq u(t) \leq u_\mathsf{b}(t) && \text{ a.e. in } (0,T) \label{control_constraints},\\ & y_\mathsf{a}(t,{\bm x}) \leq y(t,{\bm x}) \leq y_\mathsf{b}(t,{\bm x}) && \text{ a.e. in } Q.
\label{state_constraints} \end{align} \end{subequations} Here $y$ is the (weak) solution of \eqref{eq:Bous}, $u_\mathsf{a},u_\mathsf{b}\in \mathcal{U}$ are given control constraints and $y_\mathsf{a},y_\mathsf{b}\in C(Q)$ are given state constraints. It follows from \cite[Chap.~III,\,\S3]{Tem84} that \eqref{eq:Bous} admits a unique weak solution \begin{align*} (\mathbf{v},y)\in \mathcal{Y}_\mathsf B=W(0,T;\tilde V)\times W(0,T;V), \end{align*} where $V=H^1(\Omega)$ and the Hilbert space $\tilde V$ is the closure of the space $\{v\in C^\infty_0(\Omega)^2\,|\,\nabla\cdot v =0\}$ with respect to the norm $\|\cdot\|_{V\times V}$. Moreover, we have $W(0,T;\tilde V)=L^2(0,T;\tilde V)\cap H^1(0,T;\tilde V')$, and the set $W(0,T;V)$ is defined analogously; see \cite[Chap.~XVIII]{DL00} for more details. Thus, considering the control-to-state map \begin{align*} \mathcal S_\mathsf B: \mathcal{U} \to \mathcal{Y}_\mathsf B,\quad u\mapsto \mathcal S_\mathsf B(u)=(\textbf{v}(u),y(u))\in\mathcal{Y}_\mathsf B \end{align*} and the admissible set \begin{align*} \mathcal{U}_{\mathsf B,\mathsf{ad}}= \big\{ u\in\mathcal{U}\,\big|\,u\text{ and }y=\mathcal S_\mathsf B(u)\text{ satisfy \eqref{control_constraints} and \eqref{state_constraints}, respectively}\big\}, \end{align*} problem \eqref{eq:opt_cont_prob} can be rewritten as the following purely control-constrained optimization problem \begin{equation} \tag{$\mathbf{\hat P}_\mathsf B$} \label{eq:opt_cont_prob_in_u_bous} \min \mathcal{J}(u)\quad\text{s.t.}\quad u\in\mathcal{U}_{\mathsf B,\mathsf{ad}}. \end{equation} We call \eqref{eq:opt_cont_prob_in_u_bous} the \emph{reduced problem} because -- in contrast to \eqref{eq:opt_cont_prob} -- it is an optimization problem in the control variable only. Note that \eqref{eq:opt_cont_prob_in_u_bous} admits a global solution $\bar u_\mathsf B\in\mathcal{U}_{\mathsf B,\mathsf{ad}}$ provided $\mathcal{U}_{\mathsf B,\mathsf{ad}}$ has non-empty interior; cf., e.g., \cite{HPUU2009}.
Throughout this work a bar indicates optimality. While the Boussinesq approximation already reduces the numerical effort for simulating the fluid processes compared to the full compressible Navier-Stokes equations, the nonlinearity induced by the two-way coupling of temperature and velocity still makes it computationally infeasible when used, e.g., as the prediction model in an MPC scheme. This is because each step of the MPC algorithm typically requires many evaluations of the prediction model, which would be too costly for an online application of the method. For that reason we elaborate a simplified strategy that aims at decreasing the computational cost significantly. We introduce several simplifications from both a modeling and an optimal control perspective. First, we note that the buoyancy effect in the Boussinesq model has a significant impact only over long time horizons; therefore, in the MPC open-loop problem one can fix the current velocity field $\mathbf{v}$ and solve only the linear convection-diffusion equation \eqref{eq:pde-heat-equation}. However, we still carry out occasional simulations of the Boussinesq approximation in order to generate so-called snapshots of the system state. From those we can then construct a local (with respect to time) reduced-order model of the convection-diffusion equation \eqref{eq:pde-heat-equation} using POD. This reduced model captures the essential dynamics currently present in the system and is much smaller than the original system, which allows us to use it for the fast computation of approximate solutions of the optimal control problems within the MPC algorithm. One may ask why we do not perform POD directly for the coupled system \eqref{eq:Bous}, as in \cite{Rav2000,Rav2011}. Although this is a viable approach, using the velocity field $\textbf{v}$ as a datum of the problem rather than as an unknown further improves and simplifies the POD approximation.
This has the double advantage of requiring fewer basis functions and of admitting an a-posteriori error estimator that is easier to derive and to evaluate, both mathematically and computationally, which may result in tighter estimates than for the fully coupled system. The last challenge is caused by the state constraints. These do not guarantee the existence of regular ($L^2$) Lagrange multipliers (cf., e.g., \cite{Tro13}), which introduces additional computational effort. To overcome this issue, we employ a virtual control approach together with a primal-dual active set strategy. \section{Virtual control approach and PDASS for the linear-quadratic problem} \label{sec:pdass} As mentioned in Section~\ref{sec:setting}, as a first simplification we assume that the velocity field $\textbf{v}$ is given and we focus on a model based only on the heat equation \eqref{eq:pde-heat-equation}-\eqref{eq:pde-heat-equation-initguess} with convection together with the boundary conditions \eqref{eq:pde-heat-equation-outside}-\eqref{eq:pde-heat-equation-control}. It is well-known that this equation admits a unique weak solution $y\in \mathcal{Y}=W(0,T;V)$ for any given control $u\in\mathcal{U}$; cf., e.g., \cite{DL00}. Let $\mathcal S: \mathcal{U}\to\mathcal{Y}$ be the control-to-state map for the heat equation; then we define the admissible set of controls \[ \mathcal{U}_\mathsf{ad}= \big\{u\in\mathcal{U}\,\big|\,u\text{ and }y=\mathcal S(u)\text{ satisfy \eqref{control_constraints} and \eqref{state_constraints}, respectively}\big\}. \] We are, thus, interested in solving the strictly convex optimal control problem \begin{equation} \tag{$\mathbf{\hat P}$} \label{eq:opt_cont_prob_in_u} \min \mathcal{J}(u)\quad\text{s.t.}\quad u\in\mathcal{U}_\mathsf{ad}, \end{equation} which admits a unique solution $\bar u$ if the set $\mathcal{U}_\mathsf{ad}$ has non-empty interior; see, e.g., \cite{HPUU2009}. Note that the difference between \eqref{eq:opt_cont_prob_in_u_bous} and \eqref{eq:opt_cont_prob_in_u} lies in the different control-to-state map, i.e.
on the different underlying state model. As noted above, we expect $\bar u_\mathsf{B}$ and $\bar u$ to be close for short time horizons $T$, since the buoyancy effect influences the temperature trajectory $y$ only on larger time scales. This allows us to compute a sufficiently accurate approximation of the velocity field $\textbf{v}$ for \eqref{eq:pde-heat-equation}, which is then used as a datum of the problem. We explain in Section~\ref{sec:parallel} how we compute this approximation. What remains to clarify now is how to compute the solution to \eqref{eq:opt_cont_prob_in_u}. As mentioned, the presence of pointwise state constraints leads to non-regular Lagrange multipliers; cf. \cite{Cas97,Ray97,RMRT08}. To overcome this issue, we apply a virtual control approach as in \cite{KR09,Mec19,MV18}. Let $w\in\mathcal{W}= L^2(Q)$ be an additional (artificial) control. For given $\varepsilon>0$ we relax the bilateral state constraints \eqref{state_constraints} as follows: \begin{equation} \label{relaxed_state_constraints} y_\mathsf{a} \leq \mathcal{S}(u) + \varepsilon w \leq y_\mathsf{b}\quad\text{a.e. in } Q \end{equation} and introduce the artificial constraints \begin{equation} \label{artificial_constraints} w_\mathsf{a} \leq w(t,{\bm x}) \leq w_\mathsf{b}\quad\text{a.e. in } Q \end{equation} with fixed (safeguard) scalars $w_\mathsf{a}\ll0$ and $w_\mathsf{b}\gg0$, i.e., sufficiently small and large, respectively. Furthermore, we define the admissible set of controls \[ \mathcal{Z}^\varepsilon_\mathsf{ad} = \big\{(u,w)\in\mathcal{U}\times\mathcal{W}\,\big|\,u, w\text{ and }\mathcal{S}(u)\text{ satisfy } \eqref{control_constraints}, \eqref{relaxed_state_constraints}, \eqref{artificial_constraints} \big\}.
\] Now we are interested in solving \begin{equation} \tag{$\mathbf{\hat P}^\varepsilon$} \label{eq:optimal_control_in_u_w} \min J(u,w)\quad\text{s.t.}\quad(u,w)\in\mathcal{Z}^\varepsilon_\mathsf{ad}, \end{equation} where \[ J(u,w)= \mathcal{J}(u)+\frac{\sigma}{2}\,{\|w\|}^2_\mathcal{W} \] with $\sigma>0$. Note that \eqref{eq:optimal_control_in_u_w} admits a unique solution $(\bar u^\varepsilon,\bar w^\varepsilon)\in \mathcal{U}\times\mathcal{W}$ for each $\varepsilon>0$ provided $\mathcal{Z}^\varepsilon_\mathsf{ad}$ has non-empty interior (cf. \cite{HPUU2009,Mec19}). \begin{remark} \em We point out that the artificial bounds $w_\mathsf{a}$, $w_\mathsf{b}$ are necessary to guarantee $\bar u^\varepsilon \to\bar u$ in $\mathcal{U}$ as $\varepsilon\to 0$ with an order of convergence $O(\sqrt{\varepsilon})$; cf. \cite[Section~1.3.3]{Mec19}. They can be omitted if $\varepsilon$ is taken small but fixed. In this case, the existence of a unique solution is still guaranteed \cite{MV18}. Moreover, we can choose $w_\mathsf{a}$ and $w_\mathsf{b}$ as large in absolute value as one desires, because they do not have any physical meaning \cite{Mec19}. $\Diamond$ \end{remark} Let us define the non-symmetric continuous weakly-coercive bilinear form $a(t;\cdot\,,\cdot):V\times V\to\mathbb{R}$ as \[ a(t; \varphi,\psi) = \int_\Omega {\textstyle\frac{\kappa}{\rho c_p}}\nabla\varphi\cdot\nabla\psi + (\textbf{v}(t,\cdot)\cdot \nabla\varphi)\psi\,\mathrm{d}{\bm x} + \gamma\int_{\Gamma\setminus\Gamma_c} \varphi\psi\,\mathrm{d}{\bm s} + \gamma_c\int_{\Gamma_c} \varphi\psi\,\mathrm{d}{\bm s}. \] Then the first-order necessary optimality conditions for \eqref{eq:optimal_control_in_u_w} are as follows (cf. \cite{Mec19}): \begin{theorem} \label{TheoremFOC} Let $\mathcal{Z}^\varepsilon_\mathsf{ad}$ have a non-empty interior. 
Suppose that the pair $(\bar u^\varepsilon,\bar w^\varepsilon)\in\mathcal{Z}^\varepsilon_\mathsf{ad}$ is the solution to \eqref{eq:optimal_control_in_u_w} with associated optimal state $\bar y^\varepsilon=\mathcal S(\bar u^\varepsilon)$, i.e., satisfying \begin{subequations} \label{VC:OptSyst} \begin{equation} \label{StateEquation} \begin{aligned} \frac{\mathrm d}{\mathrm dt}\,{\langle \bar y^\varepsilon(t),\varphi\rangle}_H+a(t;\bar y^\varepsilon(t),\varphi)&=\gamma y_\text{out}(t)\int_{\Gamma\setminus\Gamma_c} \varphi\,\mathrm{d}{\bm s} + \gamma_c \bar u^\varepsilon(t)\int_{\Gamma_c} \varphi\,\mathrm d {\bm s},\\ \bar y^\varepsilon(0)&= y_\circ \end{aligned} \end{equation} for all $\varphi\in V$ and a.e. in $[0,T]$, where $H= L^2(\Omega)$. Then, there exist unique Lagrange multipliers $\bar p^\varepsilon\in \mathcal{Y}$, $\bar\beta^\varepsilon,\bar \vartheta^\varepsilon \in\mathcal{W}$ and $\bar\alpha^\varepsilon\in\mathcal{U}$ satisfying the dual equation \begin{equation} \label{DualEquation} \begin{aligned} -\frac{\mathrm d}{\mathrm dt}\,{\langle \bar p^\varepsilon(t),\varphi\rangle}_H+a(t;\varphi,\bar p^\varepsilon(t))&=-{\langle\bar\beta^\varepsilon(t),\varphi\rangle}_H,\\ \bar p^\varepsilon(T)&=-\bar \beta^\varepsilon(T) \end{aligned} \end{equation} for all $\varphi\in V$ and a.e. in $[0,T]$ and the optimality system \begin{align} \label{OptConda} \bar u^\varepsilon -\gamma_c \int_{\Gamma_c}\bar p^\varepsilon\mathrm{d} s +\bar\alpha^\varepsilon &=0&&\text{in }\mathcal{U},\\ \label{OptCondb} \sigma \bar w^\varepsilon+\varepsilon\bar\beta^\varepsilon+\bar\vartheta^\varepsilon&=0&&\text{in }\mathcal{W}.
\end{align} Moreover, \begin{align} \label{NCP-1} \bar\beta^\varepsilon&=\max\big\{0,\bar\beta^\varepsilon+\eta(\bar y^\varepsilon+\varepsilon\bar w^\varepsilon-y_\mathsf{b})\big\}+\min\big\{0,\bar\beta^\varepsilon+\eta(\bar y^\varepsilon+\varepsilon\bar w^\varepsilon-y_\mathsf{a})\big\},\\ \label{NCP-2} \bar\alpha^\varepsilon&=\max\big\{0,\bar\alpha^\varepsilon+\eta_u(\bar u^\varepsilon-u_\mathsf{b})\big\}+\min\big\{0,\bar\alpha^\varepsilon+\eta_u(\bar u^\varepsilon-u_\mathsf{a})\big\}, \\ \label{NCP-3} \bar\vartheta^\varepsilon&=\max\big\{0,\bar\vartheta^\varepsilon+\eta_w(\bar w^\varepsilon-w_\mathsf{b})\big\}+\min\big\{0,\bar\vartheta^\varepsilon+\eta_w(\bar w^\varepsilon-w_\mathsf{a})\big\} \end{align} \end{subequations} for arbitrarily chosen $\eta,\eta_u,\eta_w>0$, where the max- and min-operations are interpreted in the pointwise almost everywhere sense. \end{theorem} \begin{remark} \em The well-defined bounded solution operator $\mathcal A:\mathcal{W}\to\mathcal{Y}$ is defined as follows: for given $\beta\in\mathcal{W}$, the function $p=\mathcal A\beta$ is the unique solution to \begin{align*} -\frac{\mathrm d}{\mathrm dt}\,{\langle p(t),\varphi\rangle}_H+a(t;\varphi,p(t))&=-{\langle \beta(t),\varphi\rangle}_H&&\forall\varphi\in V\text{ a.e. in }[0,T),\\ p(T)&=-\beta(T)&& \end{align*} cf. Remark~2.3 in \cite{MV18}. Then, $\bar p^\varepsilon=\mathcal A(\bar\beta^\varepsilon)$ solves \eqref{DualEquation}.
$\Diamond$ \end{remark} Letting $\nu=(u,w,\vartheta)\in \mathcal N=\mathcal{U}\times\mathcal{W}\times\mathcal{W}$, we define the active sets corresponding to \eqref{VC:OptSyst} as \begin{subequations} \label{ActInactSets} \begin{equation} \label{ActiveSets} \begin{aligned} {\mathscr A_{{\mathsf a}}^\U}(\nu)&=\big\{t\in(0,T)\,\big|\,\alpha(\nu)+u-u_\mathsf{a}<0\text{ a.e.}\big\},\\ {\mathscr A_{{\mathsf b}}^\U}(\nu)&=\big\{t\in(0,T)\,\big|\,\alpha(\nu)+u-u_\mathsf{b}>0\text{ a.e.}\big\},\\ {\mathscr A_{\mathsf a}^\W}(\nu)&=\bigg\{(t,{\bm x})\in Q\,\big|\,\beta(\nu)+\frac{\sigma}{\varepsilon^2}\big(y(\nu)+\varepsilon w-y_\mathsf{a}\big)<0\text{ a.e.}\bigg\},\\ {\mathscr A_{\mathsf b}^\W}(\nu)&=\bigg\{(t,{\bm x})\in Q\,\big|\,\beta(\nu)+\frac{\sigma}{\varepsilon^2}\big(y(\nu)+\varepsilon w-y_\mathsf{b}\big)>0\text{ a.e.}\bigg\}, \\ {\mathscr A_{\mathsf a,2}^\W} (\nu)& = \big\{(t,{\bm x})\in Q\,\big|\,w-w_\mathsf{a}<0\text{ a.e.}\big\}, \\ {\mathscr A_{\mathsf b,2}^\W} (\nu)& = \big\{(t,{\bm x})\in Q\,\big|\,w-w_\mathsf{b}>0\text{ a.e.}\big\}. \end{aligned} \end{equation} Similarly, the associated inactive sets are \begin{equation} \label{InactiveSets} \begin{aligned} {\mathscr I^\U}(\nu)&=(0,T)\setminus\big({\mathscr A_{{\mathsf a}}^\U}(\nu)\cup{\mathscr A_{{\mathsf b}}^\U}(\nu)\big),\\ {\mathscr I^\W}(\nu)&=Q\setminus\big({\mathscr A_{\mathsf a}^\W}(\nu)\cup{\mathscr A_{\mathsf b}^\W}(\nu)\big),\quad{\mathscr I^\W_2}(\nu)= Q\setminus\big({\mathscr A_{\mathsf a,2}^\W}(\nu)\cup{\mathscr A_{\mathsf b,2}^\W}(\nu)\big). \end{aligned} \end{equation} \end{subequations} One can solve \eqref{VC:OptSyst} by applying a primal-dual active set strategy or, equivalently, a semi-smooth Newton method; see, e.g., \cite{HIK02,IK03,Mec19}.
Doing so, at each iteration $k$ of the semi-smooth Newton method we have to solve the following system \begin{subequations} \label{PDASS-System} \begin{align} \label{PDASS-System-a} \gamma_c\int_{\Gamma_c}p^{k+1}\,\mathrm{d}{\bm s}-u^{k+1}&=0 &&\text{in }{\mathscr I^\U}(\nu^k),\\ \label{PDASS-System-b} u^{k+1}&=u_\mathsf{a}&&\text{in }{\mathscr A_{{\mathsf a}}^\U}(\nu^k),\\ \label{PDASS-System-c} u^{k+1}&=u_\mathsf{b}&&\text{in }{\mathscr A_{{\mathsf b}}^\U}(\nu^k),\\ \label{PDASS-System-d} w^{k+1}&=0&&\text{in }{\mathscr I^\W}(\nu^k),\\ \label{PDASS-System-e} y^{k+1}+\varepsilon\,w^{k+1}&=y_\mathsf{a}&&\text{in }{\mathscr A_{\mathsf a}^\W}(\nu^k),\\ \label{PDASS-System-f} y^{k+1}+\varepsilon\,w^{k+1}&=y_\mathsf{b}&&\text{in }{\mathscr A_{\mathsf b}^\W}(\nu^k), \\ \label{PDASS-System-g} \vartheta^{k+1}&=0 && \text{in }{\mathscr I^\W_2}(\nu^k), \\ \label{PDASS-System-h} \sigma w^{k+1}-\varepsilon \vartheta^{k+1} &= \sigma w_\mathsf{a} && \text{in }{\mathscr A_{\mathsf a,2}^\W}(\nu^k), \\ \label{PDASS-System-i} \sigma w^{k+1}-\varepsilon \vartheta^{k+1} &= \sigma w_\mathsf{b} && \text{in }{\mathscr A_{\mathsf b,2}^\W}(\nu^k).
\end{align} \end{subequations} Taking into account \eqref{VC:OptSyst}, conditions \eqref{PDASS-System} lead to the following coupled equations: \begin{subequations} \label{eq:pdass_coupled_system} \begin{align} &\frac{\mathrm d}{\mathrm dt} {\langle y^{k+1}(t),\varphi \rangle}_H+a(t;y^{k+1}(t),\varphi) -\gamma_c \mathcal{H}^{k}(t; p^{k+1}(t))\int_{\Gamma_c}\varphi\,\mathrm{d}{\bm s}\\\nonumber &\hspace{0.36\textwidth}=\gamma y_\text{out}(t)\int_{\Gamma\setminus\Gamma_c} \varphi\,\mathrm{d}{\bm s}+ \gamma_c r^k(t)\int_{\Gamma_c} \varphi\,\mathrm{d}{\bm s}, \\ &\label{eq:pdass_dual} -\frac{\mathrm d}{\mathrm dt}\,{\langle p^{k+1}(t),\varphi\rangle}_H+a(t;\varphi,p^{k+1}(t)) +\frac{\sigma}{\varepsilon^2}\,\left\langle \mathcal{G}^{k,\varepsilon}(t;y^{k+1}(t)),\varphi\right\rangle_H \\\nonumber & \hspace{0.63\textwidth} = \frac{\sigma}{\varepsilon^2}\,{\langle r^{k,\varepsilon}(t),\varphi\rangle}_H \end{align} for all $\varphi\in V$ and a.e. in $(0,T)$, together with the conditions \begin{align} y^{k+1}(0) &=y_\circ, \\ \label{pT} p^{k+1}(T) + \frac{\sigma}{\varepsilon^2}\,\mathcal{G}^{k,\varepsilon}(T;y^{k+1}(T)) & = \frac{\sigma}{\varepsilon^2}\, r^{k,\varepsilon}(T), \end{align} \end{subequations} where \begin{align*} \mathcal{G}^{k,\varepsilon}(t;y^{k+1}(t))&= y^{k+1}(t)\sum_{i=1}^6 \chi_{\mathscr A_i^\W(\nu^k)}(t)+ \frac{1}{\varepsilon}y^{k+1}(t)\sum_{i=3}^6 \chi_{\mathscr A_i^\W(\nu^k)}(t) \\ \mathcal{H}^k(t;p^{k+1}(t)) &= \chi_{{\mathscr I^\U}(\nu^k)}(t)\gamma_c\int_{\Gamma_c} p^{k+1}(t)\mathrm d s \\ r^{k}(t) &= \chi_{{\mathscr A_{{\mathsf a}}^\U}(\nu^k)}(t)u_\mathsf{a}(t)+\chi_{{\mathscr A_{{\mathsf b}}^\U}(\nu^k)}(t)u_\mathsf{b}(t) \\ r^{k,\varepsilon}(t) &= y_\mathsf{a}\chi_{\mathscr A_1^\W(\nu^k)}(t) + y_\mathsf{b}\chi_{\mathscr A_2^\W(\nu^k)}(t) - w_\mathsf{a}\big(\chi_{\mathscr A_3^\W(\nu^k)}(t)+\chi_{\mathscr A_5^\W(\nu^k)}(t)\big)\\ & \quad - w_\mathsf{b}\big(\chi_{\mathscr A_4^\W(\nu^k)}(t)+\chi_{\mathscr A_6^\W(\nu^k)}(t)\big) \\ & \quad +
\frac{\varepsilon+1}{\varepsilon} y_\mathsf{a} \big( \chi_{\mathscr A_3^\W(\nu^k)}(t)+\chi_{\mathscr A_4^\W(\nu^k)}(t)\big) \\ & \quad + \frac{\varepsilon+1}{\varepsilon} y_\mathsf{b} \big( \chi_{\mathscr A_5^\W(\nu^k)}(t)+\chi_{\mathscr A_6^\W(\nu^k)}(t)\big), \end{align*} with $\chi_\bullet(t)$ being the indicator functions of the sets \begin{align*} \mathscr A_1^\W(\nu^k)&= {\mathscr A_{\mathsf a}^\W}(\nu^k)\cap{\mathscr I^\W_2}(\nu^k), & \mathscr A_2^\W(\nu^k)&={\mathscr A_{\mathsf b}^\W}(\nu^k)\cap{\mathscr I^\W_2}(\nu^k),\\ \mathscr A_3^\W(\nu^k)&={\mathscr A_{\mathsf a}^\W}(\nu^k)\cap{\mathscr A_{\mathsf a,2}^\W}(\nu^k), & \mathscr A_4^\W(\nu^k)&={\mathscr A_{\mathsf a}^\W}(\nu^k)\cap {\mathscr A_{\mathsf b,2}^\W}(\nu^k), \\ \mathscr A_5^\W(\nu^k)&={\mathscr A_{\mathsf b}^\W}(\nu^k)\cap {\mathscr A_{\mathsf a,2}^\W}(\nu^k), & \mathscr A_6^\W(\nu^k)&={\mathscr A_{\mathsf b}^\W}(\nu^k)\cap {\mathscr A_{\mathsf b,2}^\W}(\nu^k), \end{align*} respectively. Summarizing, the coupled system of equations \eqref{eq:pdass_coupled_system} can be formulated only in the variables $y^{k+1}$ and $p^{k+1}$: \begin{equation} \label{OpSystem} \left( \begin{array}{cc} \mathcal A_{11}^k&\mathcal A_{12}^k\\[1ex] \mathcal A_{21}^k&\mathcal A_{22}^k \end{array} \right)\left( \begin{array}{c} y^{k+1}\\[1ex] p^{k+1} \end{array} \right)=\left( \begin{array}{c} \mathcal Q_1(\nu^k;u_\mathsf{a},u_\mathsf{b},\gamma_c,\gamma,y_\text{out})\\[1ex] \mathcal Q_2(\nu^k;y_\mathsf{a},y_\mathsf{b},w_\mathsf{a},w_\mathsf{b},\varepsilon,\sigma) \end{array} \right), \end{equation} where the operators $\mathcal A_{11}^k$ and $\mathcal A_{22}^k$ are of the form \[ \mathcal A_{11}^k=\mathcal C+\tilde{\mathcal A}_{11}^k, \quad\mathcal A_{22}^k=\mathcal C^\star+\tilde{\mathcal A}_{22}^k \] and $\mathcal C$ stands for the heat-convection operator, which does not depend on $k$. 
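Before discretizing, it may help to see the active-set iteration \eqref{PDASS-System} in a model situation. The following Python sketch (our illustration, not part of the paper's implementation) applies the same strategy with $\eta=1$ to the finite-dimensional box-constrained quadratic program $\min_u \frac{1}{2}u^\top Q u - c^\top u$ subject to $a\le u\le b$, where the multiplier $\alpha = c - Qu$ plays the role of $\bar\alpha^\varepsilon$:

```python
import numpy as np

def pdass_box_qp(Q, c, a, b, max_iter=50):
    """Primal-dual active set strategy for min 0.5 u'Qu - c'u s.t. a <= u <= b.

    Finite-dimensional analogue of the PDASS (Q symmetric positive definite,
    eta = 1): iterate until the active sets stop changing.
    """
    n = len(c)
    u, alpha = np.zeros(n), np.zeros(n)
    Aa_old = Ab_old = None
    for _ in range(max_iter):
        Aa = alpha + (u - a) < 0                # lower active set
        Ab = alpha + (u - b) > 0                # upper active set
        if Aa_old is not None and np.array_equal(Aa, Aa_old) \
                and np.array_equal(Ab, Ab_old):
            break                               # active sets stabilized
        I = ~(Aa | Ab)                          # inactive set
        u_new = np.empty(n)
        u_new[Aa], u_new[Ab] = a[Aa], b[Ab]     # fix active components
        if I.any():                             # reduced stationarity system
            rhs = c[I] - Q[np.ix_(I, ~I)] @ u_new[~I]
            u_new[I] = np.linalg.solve(Q[np.ix_(I, I)], rhs)
        alpha = c - Q @ u_new                   # multiplier update
        alpha[I] = 0.0
        u, Aa_old, Ab_old = u_new, Aa, Ab
    return u, alpha
```

For $Q$ the identity the iteration reproduces the pointwise projection of $c$ onto $[a,b]$ and terminates after a few steps, mirroring the stopping test of the PDASS above.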
A discretization of \eqref{OpSystem} leads to a discretized system of the form \begin{equation} \label{DiscOpSystem} \left( \begin{array}{cc} \mathrm A_{11}^k&\mathrm A_{12}^k\\[1ex] \mathrm A_{21}^k&\mathrm A_{22}^k \end{array} \right)\left( \begin{array}{c} \mathrm y^{k+1}\\[1ex] \mathrm p^{k+1} \end{array} \right)=\left( \begin{array}{c} \mathrm Q_1^k\\[1ex] \mathrm Q_2^k \end{array} \right) \end{equation} with \[ \mathrm A_{11}^k=\mathrm C+\tilde{\mathrm A}_{11}^k, \quad\mathrm A_{22}^k=\mathrm C^\top+\tilde{\mathrm A}_{22}^k, \] where $\mathrm C$ stands for the discretized heat-convection operator, which is again independent of $k$. For further details, we refer to \cite{Mec19}. We summarize the PDASS in Algorithm~\ref{Alg:PDASS}. \begin{algorithm} \caption{(Primal-dual active set strategy)} \label{Alg:PDASS} \begin{algorithmic}[1] \STATE Choose starting value $\nu^0=(u^0,w^0,\vartheta^0)\in\mathcal{N}$, set $k=0$ and \texttt{flag}~= \texttt{false}; \STATE Determine $y^0=\mathcal S u^0$ and $p^0=- \mathcal A(\sigma w^0+\vartheta^0)/\varepsilon$.
\STATE Determine ${\mathscr A_{{\mathsf a}}^\U}(\nu^0)$, ${\mathscr A_{{\mathsf b}}^\U}(\nu^0)$, ${\mathscr I^\U}(\nu^0)$, ${\mathscr A_{\mathsf a}^\W}(\nu^0)$, ${\mathscr A_{\mathsf b}^\W}(\nu^0)$, ${\mathscr I^\W}(\nu^0)$ and ${\mathscr A_{\mathsf a,2}^\W}(\nu^0)$, ${\mathscr A_{\mathsf b,2}^\W}(\nu^0)$, ${\mathscr I^\W_2}(\nu^0)$ from \eqref{ActInactSets}; \REPEAT \STATE Compute the solution $(y^{k+1},p^{k+1})$ by solving \eqref{DiscOpSystem}; \STATE Compute $\nu^{k+1}=(u^{k+1},w^{k+1},\vartheta^{k+1})\in\mathcal{N}$ from \eqref{PDASS-System} and set $k=k+1$; \STATE Determine ${\mathscr A_{{\mathsf a}}^\U}(\nu^k)$, ${\mathscr A_{{\mathsf b}}^\U}(\nu^k)$, ${\mathscr I^\U}(\nu^k)$, ${\mathscr A_{\mathsf a}^\W}(\nu^k)$, ${\mathscr A_{\mathsf b}^\W}(\nu^k)$, ${\mathscr I^\W}(\nu^k)$ and ${\mathscr A_{\mathsf a,2}^\W}(\nu^k)$, ${\mathscr A_{\mathsf b,2}^\W}(\nu^k)$, ${\mathscr I^\W_2}(\nu^k)$ from \eqref{ActInactSets}; \IF{${\mathscr A_{{\mathsf a}}^\U}(\nu^k)={\mathscr A_{{\mathsf a}}^\U}(\nu^{k-1})$ \textbf{and} ${\mathscr A_{{\mathsf b}}^\U}(\nu^k)={\mathscr A_{{\mathsf b}}^\U}(\nu^{k-1})$} \IF{${\mathscr A_{\mathsf a}^\W}(\nu^k)={\mathscr A_{\mathsf a}^\W}(\nu^{k-1})$ \textbf{and} ${\mathscr A_{\mathsf b}^\W}(\nu^k)={\mathscr A_{\mathsf b}^\W}(\nu^{k-1})$} \IF{${\mathscr A_{\mathsf a,2}^\W}(\nu^k)={\mathscr A_{\mathsf a,2}^\W}(\nu^{k-1})$ \textbf{and} ${\mathscr A_{\mathsf b,2}^\W}(\nu^k)={\mathscr A_{\mathsf b,2}^\W}(\nu^{k-1})$} \STATE Set \texttt{flag}~=~\texttt{true}; \ENDIF \ENDIF \ENDIF \UNTIL{\texttt{flag}~=~\texttt{true};} \end{algorithmic} \end{algorithm} \begin{remark} \label{Remark:PDASS} \em The discrete linear system \eqref{DiscOpSystem} can be obtained by discretizing the primal and dual equations \eqref{eq:pdass_coupled_system}, for example, with piecewise linear finite elements (FE) in space and the implicit Euler scheme in time.
Let $V_h\subset V$ be the FE space with $\dim V_h = N_{\bm x}$ and let $N_t$ be the number of time steps. Notice that the matrix in \eqref{DiscOpSystem} has size $(2N_tN_{\bm x})\times(2N_tN_{\bm x})$. Clearly, its dimension may cause problems regarding memory consumption and computational time. In our application, in fact, $N_{\bm x}$ is not small, since a coarse FE grid cannot capture the complexity of the dynamics involved. Therefore, we apply POD-based reduced-order modeling to approximate the solution of \eqref{DiscOpSystem} by a reduced-order system of dimension $2\ell N_t$ with $\ell\ll N_{\bm x}$, gaining computational time at the price of an approximation error; cf. Section~\ref{sec:POD}. The number of time steps may not be small either, since the time horizon $T$ could be extremely large (e.g. one month) and the time step small (e.g. a minute). This is not a disadvantage for us: since we apply an MPC method, we iteratively solve optimal control subproblems on a smaller time horizon, computing a feedback control that approximates the optimal control $\bar u^\varepsilon$; cf.\ Section~{\ref{sec:MPC}}. Finally, note that the matrix in \eqref{DiscOpSystem} has a sparse block structure that allows us to store only the FE matrices, e.g., the mass and stiffness matrices, which have dimension $N_{\bm x}\times N_{\bm x}$. These blocks are then projected into the POD space. In this way we can save memory significantly, in particular when the number of FE nodes is large. Note that, in comparison to the Boussinesq approximation, we gain even more, since in general computing the solution of that model requires an additional piecewise quadratic FE approximation of the velocity space, which would definitely become infeasible in combination with the PDASS.
\end{remark} \section{POD and a-posteriori error estimate} \label{sec:POD} Model order reduction and, in particular, POD is a well-established field of numerical analysis, which has been intensively developed over the past 20 years. The key idea is to construct a low-dimensional subspace $V^\ell\subset V_h=\mathrm{span}\,\{\varphi_1,\ldots,\varphi_{N_{\bm x}}\}$, spanned by the so-called POD basis functions. For the sake of completeness, in the first part of this section we briefly describe the POD method in the discrete case, restricting ourselves to one snapshot ensemble for brevity. For more details we refer the reader to \cite{GV17}, for instance. Let $t_j=j\Delta t$, $j=0,\ldots,n$, be the time discretization of a suitable time interval with constant time step $\Delta t>0$. By $\{y_j\}_{j=0}^n\subset V_h$ we denote a given discrete trajectory, also called the snapshot ensemble. We define the snapshot subspace $\mathscr V_n= \mathrm{span}\,\{y_0,\ldots,y_n\}\subset V_h$. Let $d_n = \dim\mathscr V_n\le n+1$. For any $\ell\in\{1,\ldots,d_n\}$ we are interested in finding an orthonormal set $\{\psi_i\}_{i=1}^\ell\subset\mathscr V_n$ that solves the minimization problem \begin{equation} \label{eq:POD_prob} \min \sum_{j=0}^n \alpha_j \Big\|y_j-\sum_{i=1}^\ell {\langle y_j,\psi_i\rangle}_V\,\psi_i\Big\|_V^2\quad\text{s.t.}\quad{\langle\psi_i,\psi_j\rangle}_V=\delta_{ij}, \end{equation} where $\delta_{ij}$ is the Kronecker delta and $\alpha_j>0$ are given (trapezoidal) weights. We set $V^\ell=\mathrm{span}\,\{\psi_1,\ldots,\psi_\ell\}\subset V_h$. Obviously, if $\dim V^\ell=\ell\ll N_{\bm x}$ holds, we gain a computational speed-up. Since $y_j\in V_h$ ($0\le j\le n$), there exists a snapshot matrix $Y\in\mathbb R^{N_{\bm x}\times(n+1)}$ such that \begin{align*} y_j({\bm x})=\sum_{i=1}^{N_{\bm x}}Y_{ij}\varphi_i({\bm x})\quad\text{for }{\bm x}\in\Omega.
\end{align*} Introducing the weighting matrices $D=\mathrm{diag}\,(\alpha_0,\ldots,\alpha_n)\in\mathbb{R}^{(n+1)\times (n+1)}$ and $W=((W_{ij}))\in\mathbb{R}^{N_{\bm x}\times N_{\bm x}}$ with $W_{ij}=\langle\varphi_j,\varphi_i\rangle_V$, a solution $\{\psi_j\}_{j=1}^\ell$ to \eqref{eq:POD_prob} can be computed as follows: \begin{itemize} \item Solve the symmetric eigenvalue problem \begin{align*} \big(D^{1/2}Y^\top WYD^{1/2}\big)\bm\phi_j = \lambda_j\bm\phi_j\quad\text{for }j=1,\ldots,\ell \end{align*} with $\lambda_1\ge\ldots\ge \lambda_\ell>0$. \item Define the vectors $\bm\psi_j=YD^{1/2}\bm\phi_j/\sqrt{\lambda_j}\in\mathbb R^{N_{\bm x}}$ for $1\leq j\leq \ell$ and the matrix $\Psi=[\bm\psi_1|\ldots|\bm\psi_\ell]\in\mathbb R^{N_{\bm x}\times\ell}$. \item Set \begin{align*} \psi_j({\bm x})=\sum_{i=1}^{N_{\bm x}}\Psi_{ij}\varphi_i({\bm x})\quad\text{for }j=1,\ldots,\ell\text{ and }{\bm x}\in\Omega. \end{align*} \end{itemize} Using a Galerkin ansatz, the reduced-order version of \eqref{eq:pdass_coupled_system} is derived by replacing the test functions $\varphi$ by the POD basis functions $\left\{\psi_i\right\}_{i=1}^\ell$. We denote by $\mathcal{S}^\ell$ the POD solution operator corresponding to the reduced-order model. We can then construct the discretized system \begin{equation} \label{DiscOpSystemPOD} \left( \begin{array}{cc} \mathrm A_{11}^{k,\ell}&\mathrm A_{12}^{k,\ell}\\[1ex] \mathrm A_{21}^{k,\ell}&\mathrm A_{22}^{k,\ell} \end{array} \right)\left( \begin{array}{c} \mathrm y^{k+1,\ell}\\[1ex] \mathrm p^{k+1,\ell} \end{array} \right)=\left( \begin{array}{c} \mathrm Q_1^{k,\ell}\\[1ex] \mathrm Q_2^{k,\ell} \end{array} \right), \end{equation} whose matrix belongs to $\mathbb{R}^{(2\ell N_t)\times (2\ell N_t)}$. Starting from \eqref{DiscOpSystemPOD}, one can easily implement a POD-based PDASS following the structure of Algorithm~\ref{Alg:PDASS}.
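The three computational steps above can be sketched in a few lines of Python (a schematic method-of-snapshots implementation; the function name and the use of a dense eigensolver are our choices, and we assume the snapshot matrix $Y$, the Gramian $W$ and the weights $\alpha_j$ are available):

```python
import numpy as np

def pod_basis(Y, W, alpha, ell):
    """Method-of-snapshots POD (a sketch).

    Y:     N_x x (n+1) snapshot matrix of FE coefficients,
    W:     N_x x N_x Gramian with W_ij = <phi_j, phi_i>_V,
    alpha: trapezoidal weights alpha_0, ..., alpha_n,
    ell:   number of POD modes (the first ell eigenvalues must be > 0).
    Returns Psi (N_x x ell) with W-orthonormal columns and all eigenvalues.
    """
    D12 = np.diag(np.sqrt(alpha))
    K = D12 @ Y.T @ W @ Y @ D12          # (n+1) x (n+1) correlation matrix
    lam, Phi = np.linalg.eigh(K)         # eigh returns ascending eigenvalues
    lam, Phi = lam[::-1], Phi[:, ::-1]   # reorder descending
    Psi = Y @ D12 @ Phi[:, :ell] / np.sqrt(lam[:ell])
    return Psi, lam
```

The columns of `Psi` are orthonormal with respect to the $W$-inner product, and truncating at $\ell$ discards exactly the energy contained in the remaining eigenvalues.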
Now, a first question is whether the chosen number $\ell$ of POD basis functions is large enough to guarantee a small approximation error in reconstructing the snapshots. For this, the following error formula holds: \begin{equation} \label{aprioriellselection} \sum_{j=0}^n \alpha_j \bigg\|y_j-\sum_{i=1}^\ell {\langle y_j,\psi_i\rangle}_V \psi_i\bigg\|_V^2 = \sum_{i=\ell+1}^{d_n}\lambda_i. \end{equation} More importantly, to check that the constructed POD basis is able to reconstruct the full-order optimal solution $\bar u^\varepsilon$, we need an a-posteriori error estimator. Let $\bar u^{\varepsilon,\ell}$ be the solution of the POD-based PDASS; then the following result from \cite{Mec19} holds: \begin{proposition} \label{prop:err_est} Let $(\bar u^\varepsilon,\bar w^\varepsilon)\in\mathcal{Z}^\varepsilon_\mathsf{ad}$ be the optimal solution to \eqref{eq:optimal_control_in_u_w}. Suppose that $z^\mathsf{ap}=(u^\mathsf{ap},w^\mathsf{ap})\in\mathcal{Z}^\varepsilon_\mathsf{ad}$ is given arbitrarily. Then, there exists a perturbation $\zeta=(\zeta^u,\zeta^w)\in\mathcal{U}\times\mathcal{W}$, which is independent of $(\bar u^\varepsilon,\bar w^\varepsilon)$, so that \begin{equation} \label{APostError} \|\bar u^\varepsilon-u^\mathsf{ap}\|_\mathcal{U} + \|\bar w^\varepsilon-w^\mathsf{ap}\|_\mathcal{W}\le\frac{1}{\sigma^\mathsf{ap}}\,{\|\mathcal T^\star\zeta\|}_\mathcal{U}, \end{equation} where $\sigma^\mathsf{ap}:=\min\{\sigma,1\}>0$ and $\mathcal T^\star$ is defined as \[ \mathcal T^\star=\left( \begin{array}{cc} \mathcal I_\mathcal{U}& \mathcal S^\star \\ 0 & \varepsilon \mathcal I_\mathcal{W} \end{array} \right):\mathcal{U}\times\mathcal{W}\to\mathcal{U}\times\mathcal{W}. \] Here $\mathcal I_\mathcal{U}$ and $\mathcal I_\mathcal{W}$ denote the identities on $\mathcal{U}$ and $\mathcal{W}$, respectively.
The perturbation $\zeta$ is computed as follows: Let $\xi=(\xi^u,\xi^w)\in\mathcal{U}\times\mathcal{W}$ be given as the solution of the linear system $\mathcal T^\star\xi=\nabla\hat J(z^\mathsf{ap})$, i.e., \begin{equation} \label{SystemAPostError} \left(\begin{array}{cc} \mathcal I_\mathcal{U}& \mathcal S^\star \\ 0&\varepsilon \mathcal I_\mathcal{W} \end{array}\right) \left(\begin{array}{c} \xi^u\\ \xi^w \end{array}\right) =\left( \begin{array}{c} u^\mathsf{ap}\\ \sigma w^\mathsf{ap} \end{array} \right), \end{equation} then \begin{subequations} \label{PertZeta} \begin{equation} \label{PertZeta-a} \zeta^u(t)=\left\{\begin{array}{ll} -\min\{0,\xi^u(t)\}&\text{for }t\in {\mathscr A_{{\mathsf a}}^\U}(z^\mathsf{ap}),\\[1mm] -\max\{0,\xi^u(t)\}&\text{for }t\in{\mathscr A_{{\mathsf b}}^\U}(z^\mathsf{ap}),\\[1mm] -\xi^u(t)&\text{for }t\in{\mathscr I^\U}(z^\mathsf{ap}) \end{array}\right. \end{equation} and \begin{equation} \label{PertZeta-b} \zeta^w(t,{\bm x})=\left\{ \begin{array}{ll} -\min\{0,\xi^w(t,{\bm x})\}&\text{for }(t,{\bm x})\in{\mathscr A_{\mathsf a}^\W}(z^\mathsf{ap}),\\[1mm] -\max\{0,\xi^w(t,{\bm x})\}&\text{for }(t,{\bm x})\in{\mathscr A_{\mathsf b}^\W}(z^\mathsf{ap}),\\[1mm] -\xi^w(t,{\bm x})&\text{for }(t,{\bm x})\in{\mathscr I^\W}(z^\mathsf{ap}). \end{array}\right. \end{equation} \end{subequations} \end{proposition} \begin{remark} \em Note that we can easily decouple the equations in system \eqref{SystemAPostError} by computing $\xi^w=\sigma w^\mathsf{ap}/\varepsilon$ from the second equation. Then $\xi^u = -\mathcal S^\star \xi^w + u^\mathsf{ap} = -\sigma\mathcal S^\star w^\mathsf{ap}/\varepsilon + u^\mathsf{ap}$. Moreover, we recall that $\mathcal S^\star: W_0(0,T)'\to\mathcal{U}$ denotes the dual operator of the linear solution operator $\mathcal S$; see \cite[Lemma 2.4]{TV09}. Finally, we remark that in our numerical realization $z^\mathsf{ap}$ is given by the POD suboptimal solution pair $(\bar u^{\varepsilon,\ell},\bar w^{\varepsilon,\ell})$.
Thus, \eqref{APostError} can be utilized as an a-posteriori error estimate. For further details regarding the a-posteriori error estimator, we refer to \cite{Mec19}. $\Diamond$ \end{remark} \section{Economic MPC} \label{sec:MPC} MPC is a well-established method for computing a closed-loop control for a dynamical system on an infinite (or large) time horizon. MPC splits the solution of such problems into the consecutive solution of problems on a small finite time horizon, the so-called prediction horizon. The goal is to reduce the complexity of the problem in time, while at the same time being able to react to model uncertainties, disturbances, and possible parameter changes. We are interested in economic MPC \cite{AmRA11,FaGM18}. This term comprises all MPC schemes in which the cost function does not merely penalize the distance to a pre-defined target. Here we do not have such a pre-defined target. Rather, the goal is to minimize the control effort, while satisfying certain control and state constraints; see, e.g., \cite{GP17,GP20,Pir20}. Applying MPC to the virtual control problem \eqref{eq:optimal_control_in_u_w} has several advantages, which will become clear once the method has been introduced. Let $N\Delta t$, $N\in\mathbb{N}$, be the prediction horizon. We define the Hilbert spaces \[ \mathcal{U}_{N,j} = L^2(t_j,t_{j+N}), \quad \mathcal{W}_{N,j} = L^2((t_j,t_{j+N})\times\Omega) \] for $j=0,\ldots,N_t$, with the notation $t_j= j\Delta t$ for $j>N_t$. Then the MPC cost functional is \[ J^j_N(u,w)= \frac{1}{2} \|u\|^2_{\mathcal{U}_{N,j}} + \frac{\sigma}{2}\|w\|^2_{\mathcal{W}_{N,j}} \] and the MPC admissible sets are \[ \begin{aligned} \mathcal{Z}^\varepsilon_{\mathsf{ad},N,j} = \big\{& (u,w)\in\mathcal{U}_{N,j}\times\mathcal{W}_{N,j}: u, w, \mathcal{S}u+\varepsilon w \text{ satisfy } \\ &u_\mathsf{a}(t)\leq u(t)\leq u_\mathsf{b}(t) \text{ a.e.
in } (t_j,t_{j+N}], \\ & w_\mathsf{a}\leq w(t,{\bm x})\leq w_\mathsf{b}, \, y_\mathsf{a}\leq \mathcal{S}u+\varepsilon w\leq y_\mathsf{b} \text{ a.e. in } (t_j,t_{j+N}]\times\Omega, \\ & \text{ respectively} \big\}, \end{aligned} \] for $j=0,\ldots,N_t$. Furthermore, note that the fully discretized version of \eqref{StateEquation} can be rewritten as the discrete dynamical system \begin{equation} \label{eq:discrete_dynamical_system} \begin{aligned} y_{j+1} = f(j,y_j,u_j)\text{ for } j=0,\ldots,N_t,\quad y_0 = y_\circ, \end{aligned} \end{equation} where the function $f$ comprises all the terms arising from the discretization and the model, which we omit for brevity. The MPC method is summarized in Algorithm~\ref{Alg:MPC}. \begin{algorithm} \caption{Model predictive control \label{Alg:MPC}} \begin{algorithmic}[1] \STATE Choose an initial state $y_0$, an MPC prediction horizon $N$ and the regularization parameter $\varepsilon>0$; \FOR {each time instant $j=0,1,2,\ldots,N_t$} \STATE \label{MPC_measurestep} Measure the current state $y_j$ of the system and the model parameters; \STATE Solve the optimal control problem \begin{equation} \label{eq:opt_control_problem_MPC} \min J_{N}^j(u,w)\quad\text{s.t.}\quad(u,w)\in\mathcal{Z}^{\varepsilon}_{\mathsf{ad},N,j} \end{equation} to obtain the open-loop optimal control sequence $\bar u^\varepsilon_N$; \STATE Apply the first element of $\bar u^\varepsilon_N$ as a control to the system \eqref{eq:discrete_dynamical_system} during the next sampling period, i.e.\ use the feedback law $\mu_N(y_j) = \bar u^\varepsilon_N(0)$. \ENDFOR \end{algorithmic} \end{algorithm} To guarantee well-posedness and existence of the optimal control for each open-loop optimal control problem in Algorithm~\ref{Alg:MPC}, we assume that the admissible sets $\mathcal{Z}^{\varepsilon}_{\mathsf{ad},N,j}$ have non-empty interior for all $j=0,\ldots,N_t$.
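The receding-horizon loop of Algorithm~\ref{Alg:MPC} can be sketched generically as follows. The callables \texttt{solve\_open\_loop} and \texttt{step} are hypothetical placeholders for the problem-specific open-loop solver of \eqref{eq:opt_control_problem_MPC} and the discrete dynamics \eqref{eq:discrete_dynamical_system}.

```python
def mpc_loop(y0, N, Nt, solve_open_loop, step):
    """Generic MPC loop: at each time instant j, solve the finite-horizon
    open-loop problem over (t_j, t_{j+N}), apply only the first control
    element (the feedback law mu_N), and advance the dynamics one step.
    `solve_open_loop(j, y, N)` returns a control sequence of length N;
    `step(j, y, u)` realizes y_{j+1} = f(j, y_j, u_j)."""
    y, controls = y0, []
    for j in range(Nt + 1):
        u_bar = solve_open_loop(j, y, N)   # open-loop optimal control
        u = u_bar[0]                       # feedback value mu_N(y_j)
        controls.append(u)
        y = step(j, y, u)                  # apply over one sampling period
    return y, controls
```

For instance, with the toy dynamics $y_{j+1}=y_j+u_j$ and a solver that always proposes the constant sequence $u\equiv -y_j/2$, the closed loop halves the state at every step.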
Algorithm~\ref{Alg:MPC} produces a sequence of optimal controls in discrete time and uses the first element of each optimal control sequence as a feedback control value. This results in a closed-loop sub-optimal trajectory, which solves \eqref{eq:discrete_dynamical_system} for $u_j= \mu_N(y_j)$, $j=0,\ldots,N_t$. Note that in step~\ref{MPC_measurestep} of the MPC algorithm it is possible to measure (and update) the parameters of the model. This is particularly advantageous for our application, since we usually do not have, e.g., a precise weather forecast for the entire long time horizon $T$. In this way, we can update the outside temperature while the algorithm runs. To determine $y_j$, an estimator is typically needed that processes the available sensor data in order to reconstruct the spatio-temporal temperature distribution. In addition, we can proceed similarly for the velocity field $\textbf{v}$: as already mentioned, it is provided by an approximation of the Boussinesq velocity field, so we gain the possibility of updating this approximation while the simulation runs, improving the accuracy of our simplified model. The optimal control problems \eqref{eq:opt_control_problem_MPC} are solved on a much smaller time horizon $(t_j,t_{j+N})$, since $N\ll N_t$. This guarantees a considerable computational speed-up. For general optimal control problems it is not necessarily the case that the MPC closed-loop trajectory is approximately optimal on the long time horizon $N_t$. However, in recent years structural conditions were discovered that allow one to conclude this property for $N$ large enough; cf.\ \cite[Chapter 7]{GrPa17} and \cite{FaGM18,GP20}. The most important among these conditions are the turnpike property and the closely related strict dissipativity property \cite{TreZ15,GruM16}.
The turnpike property can be seen as a similarity condition for optimal trajectories on different time horizons, and we will check numerically that this property holds for the optimal control problem considered in this paper; cf. Section~\ref{sec:turnpike}. It is also possible to check the quality of the MPC horizon and trajectory with a-posteriori error estimators, such as those in \cite{GMPV20,GP09}, but we do not utilize them in this work. What remains to be explained is how to successfully combine MPC and POD. The challenge is to guarantee a small approximation error for the reduced-order model while the MPC algorithm proceeds. This is demanding because the problem parameters are not only time-variant; the parameter values at a certain time instant in the future may also change when new measurement information becomes available. When this happens, the POD model needs to be updated. Fortunately, the a-posteriori error estimate from Proposition~\ref{prop:err_est} is a useful tool to check whether such an update is necessary.
\begin{algorithm}[t] \caption{MPC-POD \label{Alg:MPCPOD}} \begin{algorithmic}[1] \STATE Choose an initial state $y_0$, an MPC prediction horizon $N$, the regularization parameter $\varepsilon>0$, and tolerances $\tau_1$ for the a-priori estimate \eqref{aprioriellselection} and $\tau_2$ for the a-posteriori estimate; \STATE Set $t_0=0$, $y_0(0)=y_\circ$, and {\texttt flag} = {\texttt true}; \FOR{ $j=0,1,2,\dots,N_t$ } \STATE Measure the current state $y_j$ of the system at time $t_j$; \IF { {\texttt flag} = {\texttt true} } \STATE Update the model parameters; \STATE \label{Alg:MPCPOD:step:compute_snapshots} Compute primal and dual snapshots; \STATE \label{Alg:MPCPOD:step:compute_basis} Compute a POD basis $\{\psi_i\}_{i=1}^\ell$ of rank $\ell$ according to $\tau_1$; \STATE Set {\texttt flag} = {\texttt false}; \ENDIF \STATE Solve the MPC-POD optimal control problem \begin{equation} \label{eq:opt_control_problem_MPCPOD} \min J^j_N(u,w) \text{ s.t. } (u,w)\in \mathcal{Z}^{\varepsilon,\ell}_{\mathsf{ad},N,j} \end{equation} to obtain the open-loop suboptimal control sequence $\bar u^{\varepsilon,\ell}_N$; \STATE \label{Alg:MPCPOD:step:compute_feedback} Apply the first element of $\bar u^{\varepsilon,\ell}_N$ as a control to the system \eqref{eq:discrete_dynamical_system} during the next sampling period, i.e.\ use the feedback law $\mu_N(y_j) = \bar u^{\varepsilon,\ell}_N(0)$; \STATE Compute the a-posteriori error estimate $e= {\|\mathcal T^\star\zeta\|}_{\mathcal{U}\times\mathcal{W}}$ from \eqref{APostError}; \IF { $e> \tau_2$} \STATE Set {\texttt flag} = {\texttt true}; \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} We report the MPC-POD algorithm in Algorithm~\ref{Alg:MPCPOD}. For further details, we refer to \cite{Mec19,MV19}.
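The update logic of Algorithm~\ref{Alg:MPCPOD}, namely rebuilding the POD model only when the a-posteriori estimate exceeds $\tau_2$, can be sketched as follows. All callables are hypothetical placeholders for the problem-specific components.

```python
def mpc_pod_loop(y0, N, Nt, tau2,
                 update_model, solve_reduced, step, error_estimate):
    """MPC-POD loop: a reduced-order model is (re)built only when a flag is
    set, and the flag is raised whenever the a-posteriori error estimate
    exceeds tau2.  `update_model(j, y)` computes snapshots and a POD basis,
    `solve_reduced(model, j, y, N)` solves the reduced open-loop problem,
    `step` advances the dynamics, `error_estimate` evaluates (APostError)."""
    y, flag, model = y0, True, None
    for j in range(Nt + 1):
        if flag:
            model = update_model(j, y)   # snapshots + POD basis (rank by tau1)
            flag = False
        u_bar = solve_reduced(model, j, y, N)
        y = step(j, y, u_bar[0])         # apply feedback over one period
        if error_estimate(model, j, y) > tau2:
            flag = True                  # trigger a basis update next step
    return y
```

The point of the flag mechanism is that the expensive snapshot and basis computations are performed only when the certified error actually demands it.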
The admissible sets in \eqref{eq:opt_control_problem_MPCPOD} are defined as \[ \begin{aligned} \mathcal{Z}^{\varepsilon,\ell}_{\mathsf{ad},N,j} = \big\{& (u,w)\in\mathcal{U}_N^j\times\mathcal{W}_N^j: u, w, \mathcal{S}^\ell u+\varepsilon w \text{ satisfy } \\ &u_\mathsf{a}(t)\leq u(t)\leq u_\mathsf{b}(t) \text{ a.e. in } (t_j,t_{j+N}], \\ & w_\mathsf{a}\leq w(t,{\bm x})\leq w_\mathsf{b}, \, y_\mathsf{a}\leq \mathcal{S}^\ell u+\varepsilon w\leq y_\mathsf{b} \text{ a.e. in } (t_j,t_{j+N}]\times\Omega, \\ & \text{ respectively} \big\}, \end{aligned} \] where we recall that $\mathcal{S}^\ell$ is the POD solution operator for the reduced-order state equation. To guarantee existence and uniqueness of the suboptimal controls solving \eqref{eq:opt_control_problem_MPCPOD}, we assume that the sets $\mathcal{Z}^{\varepsilon,\ell}_{\mathsf{ad},N,j}$ have non-empty interior for all $j=0,\ldots,N_t$. The only aspect left to explain is how to compute the snapshots in Algorithm~\ref{Alg:MPCPOD}. We will do so in the next section. Let us mention that other possibilities are explored in \cite{FFV19,GU14,Mec19,MV19}. \section{Computational details and additional parallelization issues} \label{sec:parallel} In this section, we clarify some aspects of our simplification strategy for computing an approximate solution to \eqref{eq:opt_cont_prob_in_u_bous}. We give details about the implementation of the algorithm, mentioning which parts can be performed in parallel. Looking at Algorithm~\ref{Alg:MPCPOD}, some aspects remain unclear. First, it is not clear how to compute the POD snapshots in Step~\ref{Alg:MPCPOD:step:compute_snapshots}. At iteration $j=0$, we initialize our model by solving the Boussinesq approximation \eqref{eq:Bous} with a given initial control $u_0$ on the time horizon $[0,T+N\Delta t]$, using the FE method for the spatial and the implicit Euler method for the temporal discretization.
The interval $[0,T+N\Delta t]$ is discretized by $t_j=j\Delta t$, $j=0,\ldots,m$, with $m=N_t+N$. Exploiting parallel computing, we thus obtain the solution sequence $\{(\textbf{v}_{j,h}^0,y_{j,h}^0)\}_{j=0}^m$. Then, we compute the sequence $\{p_{j,h}^0\}_{j=0}^m$ by solving in parallel \begin{align*} &\frac{1}{\Delta t}\,{\langle p_{j,h}^0-p_{j+1,h}^0,\varphi_h\rangle}_H+a(t_j;\varphi_h,p_{j,h}^0) +\frac{\sigma}{\varepsilon^2}\,{\langle \mathcal{G}^{0,\varepsilon}(t_j;y_{j,h}^0),\varphi_h\rangle}_H\\ &\qquad= \frac{\sigma}{\varepsilon^2}\,{\langle r^{0,\varepsilon}(t_j),\varphi_h\rangle}_H\quad\text{for all }\varphi_h\in V_h\text{ and }j=m-1,\ldots,1,\\ &\bigg\langle p_{m,h}^0 + \frac{\sigma}{\varepsilon^2}\,\mathcal{G}^{0,\varepsilon}(t_m;y_{m,h}^0),\varphi_h\bigg\rangle_H = \frac{\sigma}{\varepsilon^2}\, {\langle r^{0,\varepsilon}(t_m),\varphi_h\rangle}_H\quad\text{for all }\varphi_h\in V_h \end{align*} (compare \eqref{eq:pdass_dual} and \eqref{pT}), where the terms $\mathcal{G}^{0,\varepsilon}$ and $r^{0,\varepsilon}$ involve the computation of the initial active and inactive sets \eqref{ActiveSets}-\eqref{InactiveSets}, which can also be performed in parallel. The snapshots $\{y_{j,h}^0\}_{j=0}^m$ and $\{p_{j,h}^0\}_{j=0}^m$ are used to compute the POD basis in step~\ref{Alg:MPCPOD:step:compute_basis}. Note that the discrete POD eigenvalue problem has dimension $2m$, which may be large; we address this issue below. For subsequent basis updates at an MPC step $j$, triggered by the a-posteriori error estimator, we compute the snapshots with the same procedure, but using the control $\bar u^{\varepsilon,\ell}_N$ computed at the previous iteration $j-1$. Note that we thereby also obtain an update of the velocity field $\textbf{v}$. We use the velocity field $\textbf{v}$ (computed at step~\ref{Alg:MPCPOD:step:compute_snapshots}) as a fixed input for the open-loop optimal control problem \eqref{eq:opt_control_problem_MPCPOD}, which is solved with POD-PDASS.
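In matrix form, the backward-in-time sweep above might look as follows. This is a sketch under strong assumptions: \texttt{M} is the FE mass matrix, \texttt{A\_list[j]} a hypothetical matrix representing $a(t_j;\cdot,\cdot)$, and \texttt{g\_list}, \texttt{r\_list} the coefficient vectors of $\mathcal G^{0,\varepsilon}$ and $r^{0,\varepsilon}$; in this simplified form the sweep itself is sequential in $j$, while the per-step assembly of the right-hand sides (including the active sets) is what lends itself to parallelization.

```python
import numpy as np

def dual_sweep(M, A_list, g_list, r_list, sigma, eps, dt):
    """Implicit-Euler backward sweep for the discretized dual equation:
       (M/dt + A_j) p_j = M p_{j+1}/dt + (sigma/eps^2) M (r_j - G_j),
    with the terminal condition p_m = (sigma/eps^2) (r_m - G_m) in
    coefficient form.  Runs j = m-1, ..., 1 as in the text; p_0 is unused."""
    m = len(A_list) - 1
    p = [None] * (m + 1)
    p[m] = (sigma / eps**2) * (r_list[m] - g_list[m])   # terminal condition
    for j in range(m - 1, 0, -1):
        rhs = M @ p[j + 1] / dt \
            + (sigma / eps**2) * M @ (r_list[j] - g_list[j])
        p[j] = np.linalg.solve(M / dt + A_list[j], rhs)
    return p
```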
In this way, every time the basis and the velocity field are updated, the MPC can correct its approximation. We remark that the POD-PDASS matrix for problem \eqref{eq:opt_control_problem_MPCPOD} has size $(2N\ell)\times (2N\ell)$ (and is thus small); therefore, there is no need for parallelization when solving the linear system \eqref{DiscOpSystemPOD}. We nevertheless compute the active and inactive sets \eqref{ActiveSets}-\eqref{InactiveSets} in parallel, since their size depends on $N_x$ and not on $\ell$. With the current information, one can already run Algorithm~\ref{Alg:MPCPOD}. In addition, we use a couple of improvements, which we describe next. A first disadvantage of the POD basis updates is that any time we need to compute the snapshots, we need to solve the Boussinesq equation for $t\in [t_j,T+N\Delta t]$. Although the measure of this time domain decreases with increasing $j$, computing the solution of the Boussinesq approximation \eqref{eq:Bous} over this large time horizon can be computationally demanding, even using parallel computing. This computational effort can be reduced by generating snapshots in a smaller time interval $[t_j,t_j+(N+M)\Delta t]$, where $M>0$ is chosen to be small. In this way, we have an updated velocity field $\textbf{v}$ (and snapshots) for the subsequent $M$ iterations of the algorithm. If a POD basis update is performed in one of these steps, we have saved computational time. If not, we need to solve \eqref{eq:Bous} for the remaining time horizon up to the end (or for another $M$ steps, say). Having only $2(N+M)$ snapshots is also beneficial for computing the POD basis, since we have to solve a smaller eigenvalue problem. A second disadvantage of the POD basis update is that overwriting the past information while computing a new basis may not always be a good choice. The old snapshots may, in fact, still carry useful information for future computations.
Yet simply accumulating snapshots increases the number of POD basis functions needed to achieve the a-priori tolerance $\tau_1$. We therefore employ the snapshot selection strategy proposed in \cite{Mec19}, reported in Algorithm~\ref{Alg:Snapshotselection}. This strategy consists of keeping only those old snapshots that represent dynamics close to the new ones. The procedure is motivated by the fact that the MPC algorithm provides a forecast of the future dynamics (the newly computed snapshots). Therefore, it makes sense to use them as a starting point and possibly add previously computed information, if it describes similar trajectories. \begin{algorithm} \caption{Snapshot selection from \cite{Mec19}.\label{Alg:Snapshotselection}} \begin{algorithmic}[1] \REQUIRE Snapshots previously computed and stored in a list $\mathsf{L}$ and tolerances $0<\rho <\varrho \ll 1$. \STATE Compute the new $2(N+M)$ snapshots and store them in a list $\mathsf{S}$; \FOR {$i \leq$ length($\mathsf{L}$)} \FOR {$j \leq$ $2(N+M)$} \IF { \label{SnapSelIfcondition} $(1-\varrho)\|\mathsf{S}[j]\|\|\mathsf{L}[i]\| \leq |\langle \mathsf{S}[j],\mathsf{L}[i] \rangle | \leq (1-\rho) \|\mathsf{S}[j]\|\|\mathsf{L}[i]\| $} \STATE Add $\mathsf{L}[i]$ to $\mathsf{S}$; \STATE \textbf{break} \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Furthermore, in step~\ref{Alg:MPCPOD:step:compute_feedback} of Algorithm \ref{Alg:MPCPOD}, one can solve in parallel one time step of the Boussinesq model \eqref{eq:Bous} applying the feedback law, instead of solving \eqref{StateEquation}. In this way, we further improve the MPC approximation. This operation is not costly, considering that it is done for a single time step only. Finally, note that the computation of the a-posteriori error estimate \eqref{APostError} requires full-order solves of the state and adjoint equations. These can also be carried out in parallel.
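Algorithm~\ref{Alg:Snapshotselection} can be sketched in a few lines: an old snapshot is kept if its normalized inner product with some new snapshot lies in the band $[1-\varrho,\,1-\rho]$, i.e.\ it is close to the new dynamics but not redundant with them. The names \texttt{new\_snaps} and \texttt{old\_snaps} (lists of coefficient vectors) are illustrative.

```python
import numpy as np

def select_snapshots(new_snaps, old_snaps, rho, varrho):
    """Snapshot selection: start from the new snapshots S and append an old
    snapshot L[i] as soon as |<S[j], L[i]>| falls in the band
    [(1-varrho)||S[j]|| ||L[i]||, (1-rho)||S[j]|| ||L[i]||],
    with 0 < rho < varrho << 1 (close but not redundant)."""
    selected = list(new_snaps)
    for old in old_snaps:
        for new in new_snaps:
            bound = np.linalg.norm(new) * np.linalg.norm(old)
            ip = abs(np.dot(new, old))
            if (1 - varrho) * bound <= ip <= (1 - rho) * bound:
                selected.append(old)
                break                      # one match suffices
    return selected
```

With $\rho=0.01$, $\varrho=0.1$ and unit vectors, a copy of a new snapshot (inner product $1$) is rejected as redundant, an orthogonal old snapshot is rejected as irrelevant, and a snapshot at angle $\cos\theta=0.95$ is kept.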
To conclude, we summarize the steps of Algorithm~\ref{Alg:MPCPOD} that are performed in parallel: \begin{itemize} \item Computation of POD snapshots: this involves solving the Boussinesq approximation \eqref{eq:Bous} and the adjoint equation \eqref{eq:pdass_dual}; \item Computation of the active and inactive sets \eqref{ActiveSets}-\eqref{InactiveSets} of the POD-PDASS; \item Computation of the initial guess for the next MPC iteration $j$: this involves solving one time step of \eqref{eq:Bous}; \item The POD a-posteriori error estimator \eqref{APostError}. \end{itemize} \section{Numerical tests} \label{sec:num_test} The numerical tests were performed on the compute server of the Chair of Applied Mathematics at the University of Bayreuth, a computer with 32 physical cores (64 logical) Intel(R) Xeon(R) CPU E7-2830 @ 2.13GHz and a total RAM of 512GB. For the numerical realization of the Boussinesq approximation, we refer to \cite{And19}. \subsection{Numerical verification of the turnpike property} \label{sec:turnpike} In this subsection we check numerically that the turnpike property holds for \eqref{eq:opt_control_problem_MPC}, because this is the most important ingredient for proving that MPC provides nearly optimal solutions; see \cite{GP20}. Before stating it formally, we need to introduce the concept of optimal operation for an infinite horizon optimal control problem. Consider the discrete-time, time-varying control system \eqref{eq:discrete_dynamical_system}. In this subsection, we denote by $y_u(\cdot\,;t_j,\tilde y_\circ)$ the solution of \eqref{eq:discrete_dynamical_system} for a given control $u \in \mathcal{U}_N^j$ starting in $\tilde y_\circ\in H$ at time $t_j$. With the stage cost function $l:\mathbb{R}\times H\to \mathbb{R}$, $l(u,w) = (|u|^2+\sigma\,\|w\|_H^2)/2$, the cost functional $J_N^{j}$ can be written as \[ J_N^{j}(u,w) = \sum_{i=0}^N l(u(t_{j+i}),w(t_{j+i})).
\] We recall that the finite horizon optimal control problem \eqref{eq:opt_control_problem_MPC} is \[ \min_{(u,w)\in \mathcal{Z}^{\varepsilon}_{\mathsf{ad},N,j}} J_N^j(u,w) \] with (locally) optimal solution $(\bar u^\varepsilon_N,\bar w^\varepsilon_N)\in \mathcal{Z}^{\varepsilon}_{\mathsf{ad},N,j}$. By $\mathcal{Z}^{\varepsilon}_{\mathsf{ad},\infty,j}$, $J_\infty^j$, $\bar u^\varepsilon_\infty$ and $\bar w^\varepsilon_\infty$ we denote the extensions of the above quantities to the infinite time horizon problem. Note that $\mathcal{Z}^{\varepsilon}_{\mathsf{ad},N,j}$ changes if the starting value $\tilde y_\circ\in H$ of the PDE changes. Therefore, in what follows we indicate this dependency by $\mathcal{Z}^{\varepsilon}_{\mathsf{ad},N,j}(\tilde y_\circ)$. \begin{definition} \label{def:optimal_operation} Let $\tilde y_\circ\in H$ be an arbitrary feasible starting value at time $t_j$. Consider a control sequence $(\bar u,\bar w)\in \mathcal{Z}^{\varepsilon}_{\mathsf{ad},\infty,j}(\tilde y_\circ)$ with corresponding trajectory $\bar y$. We say that the system \eqref{eq:discrete_dynamical_system} is optimally operated at $(\bar y,\bar u,\bar w)$ if \[ \liminf_{I\to\infty} \sum_{i=0}^{I-1} \big[ l(u(t_{j+i}),w(t_{j+i}))-l(\bar u(t_{j+i}),\bar w(t_{j+i})) \big] \geq 0 \] for all feasible $\hat y\in H$ and all $(u,w)\in \mathcal{Z}^{\varepsilon}_{\mathsf{ad},\infty,j}(\hat y)$. The trajectory $\bar y$ is called an optimal operation trajectory. \end{definition} We remark that there may exist more than one solution to \eqref{eq:discrete_dynamical_system} satisfying Definition~\ref{def:optimal_operation}. Moreover, it is well known that, in general, it is not easy to compute an optimal operation trajectory explicitly. Still, one can expect that the MPC trajectory converges to this trajectory for a sufficiently large MPC horizon $N$. To ensure this behavior, one has to assume structural conditions such as the turnpike property.
This property demands that the solution to the finite (and infinite) time horizon optimal control problem is, for most of the time, close to an optimal operation trajectory $\bar y$, according to the following definition. \begin{figure} \caption{Time-varying turnpike property for the finite horizon optimal control problem.} \label{Fig:turnpike} \end{figure} \begin{definition} \label{def:Turnpike} Consider $(\bar y,\bar u,\bar w)$ at which system \eqref{eq:discrete_dynamical_system} is optimally operated. The finite horizon optimal control problem \eqref{eq:opt_control_problem_MPC} has the turnpike property at $(\bar y, \bar u, \bar w)$ if there exists $\eta\in \mathcal L$\footnote{ $\mathcal L= \{\eta\,:\,\mathbb{R}^+_0\to \mathbb{R}^+_0 \,\big|\, \eta \text{ is continuous and strictly decreasing with } \lim_{s\to\infty} \eta(s) = 0\}.$} such that for each $j\in\mathbb{N}_0$, each feasible initial value $y_j$, each optimal trajectory $y_{\bar u^\varepsilon_N}(\cdot\,; t_j, y_j)$ and all $N,P\in\mathbb{N}$ there is a set $\mathcal{Q}(t_j,y_j,P,N)\subseteq\{0,\ldots,N\}$ with at most $P$ elements such that \[ \begin{aligned} & \|y_{\bar u^\varepsilon_N}(t_{j+M}; t_j, y_j)-\bar y(t_{j+M})\|_H+|\bar u^\varepsilon_N(t_{j+M})-\bar u(t_{j+M})| \\ & \quad + \|\bar w^\varepsilon_N(t_{j+M})-\bar w(t_{j+M})\|_H\leq \eta(P) \end{aligned} \] for all $M\in\{0,\ldots,N\}\setminus\mathcal{Q}(t_j,y_j,P,N)$. \end{definition} The definition of the turnpike property seems technically involved, but it is easy to illustrate graphically; cf. Figure~\ref{Fig:turnpike}. The property demands that for each optimal open-loop trajectory there can only be a finite number $P$ of time steps, with $P$ independent of $N$, at which the trajectory is far from the optimal operation trajectory. Moreover, it provides a bound on the distance for the remaining, close time steps.
Contrary to the computation of the optimal operation trajectory, it is easy to find numerical evidence that a given optimal control problem has the turnpike property \cite{GP19}. \begin{figure} \caption{Advection field $\textbf{v}$.} \label{Fig:advection_field_turnpike} \end{figure} For this test, we choose $\Omega= (0,1)^2$ discretized with a structured quadrilateral mesh of $N_{\bm x}= 1769$ nodes, $\Gamma_c= (0,1)\times \{0\}$, $\rho= C_p= \alpha= 1.0$, $\nu= 0.1$, $\kappa= 0.025$ and $\textbf{g} = (0,-9.8)$. Moreover, we consider $\gamma= 0.1$ and $\gamma_c= 10^5$ for the boundary conditions, so that the action of the control on the boundary can be seen (numerically) as that of a Dirichlet boundary condition. We first solve \eqref{eq:Bous} with fixed control $u(t)=22.5$, outside temperature $y_{\text{out}}= \max(13,16-t)$ and initial guess $(\textbf{v}_0,y_0)(t,{\bm x})=(0,20)$ for all ${\bm x}\in\Omega$ and $t\in[0,T]$, in order to generate the advection field $\textbf{v}$ used in \eqref{eq:opt_control_problem_MPC}; cf. Figure~\ref{Fig:advection_field_turnpike}. Then, we solve \eqref{eq:opt_control_problem_MPC} with $u_\mathsf{a}(t) = 0$, $u_\mathsf{b}(t) = 10^6$, $w_\mathsf{b}= -w_\mathsf{a}= 10^9$, $\varepsilon=0.025$, $\sigma=1.0$, $y_\mathsf{a}(t) = 17.5+\min(t,2)$ and $y_\mathsf{b}(t) = 23$ for all $t\in[0,T]$. We use a time step $\Delta t= 0.01$ and compute the open-loop solution for $j=0$ (i.e. $t_0=0$) for different horizons $N$, so that $T= N \Delta t$ in each test. As shown in Figure~\ref{Fig:open_loop_diff_horizon}, the $L^2$ norm of the solution to \eqref{eq:opt_control_problem_MPC} remains the same over a growing number of time steps as $N$ increases. Moreover, the solutions show a similar behavior. \begin{figure} \caption{Turnpike test results.} \label{Fig:open_loop_diff_horizon} \label{Fig:open_loop_diff_init_guesses} \end{figure} This provides clear evidence for the turnpike behavior.
This evidence is further strengthened by the results in Figure~\ref{Fig:open_loop_diff_init_guesses}, where the open-loop trajectories starting from different initial guesses converge to the same region as time passes. For more information about how to check the conditions for near optimal performance of MPC numerically, we refer to \cite{GP19}. \subsection{MPC-POD algorithm for the optimal control problem \eqref{eq:opt_cont_prob}} \begin{figure} \caption{Results for Algorithm~\ref{Alg:MPCPOD}.} \label{Fig:MPCtest} \label{Fig:ActivePoints} \end{figure} In this section we test Algorithm~\ref{Alg:MPCPOD} with the same data as in Section~\ref{sec:turnpike}, choosing an MPC horizon $N=300$. This horizon is a good compromise between computational time and the turnpike property; cf. Figure~\ref{Fig:open_loop_diff_horizon}. We also fix $M=60$, the a-priori tolerance $\tau_1= 10^{-8}\sum_{i=1}^\ell \lambda_i$ and the a-posteriori error estimator tolerance $\tau_2= 3.5$. Although $\tau_2$ seems large, one has to consider that the control will take values close to the state constraint range, roughly between 20 and 30, due to the fact that the control boundary condition is (numerically) close to a Dirichlet one. Moreover, the MPC horizon $N=300$ corresponds to a time horizon for the open-loop problem equal to three. Therefore, one can expect an average $L^2$ norm for the control of about $43$ ($\approx 25\sqrt{3}$). Hence, fixing $\tau_2= 3.5$ amounts to asking for a relative error of approximately $8\%$. We expect, moreover, an overestimation by the a-posteriori error estimator, since $(\sigma^\mathsf{ap})^{-1}\|\mathcal{T}^\star \zeta\|_\mathcal{U}$ in \eqref{APostError} also bounds the error in reconstructing the artificial control, which might attain significantly larger values depending on the chosen $\varepsilon$. In addition, a too small $\tau_2$ would trigger basis updates too often and thus worsen the performance of the algorithm.
Figure~\ref{Fig:MPCtest} shows the minimum, maximum and average values of the optimal MPC solution together with the desired bounds. One can notice that the average temperature is kept inside the bounds, whereas the minimum and maximum temperatures exceed them. This is typical of the $L^2$ regularization effect of the virtual control approach and can be mitigated by taking a small $\varepsilon$. This effect is also accentuated by the POD method, which provides a sub-optimal solution, as is well known from the literature \cite{Rav2000,TV09}. As a result, the average number of active points at each time step amounts to $3.5\%$ of the total number of grid points. This can be seen in Figure~\ref{Fig:ActivePoints}. Note that the number of points below the lower bound $y_\mathsf{a}$ increases as time passes. The reason is that the upper bound $y_\mathsf{b}$ prevents the control at the boundary from growing significantly to counteract the decreasing outside temperature. This, together with the sub-optimality of the POD solution, prevents the control from reacting more quickly and strongly to these external inputs. A possible solution is to consider local constraints in the domain $\Omega$ \cite{Pir20} and/or local POD basis functions \cite{AH2015,AHKO2012}. Nevertheless, the fact that the average temperature stays inside the bounds and only a small portion of the discretization points are active makes Algorithm~\ref{Alg:MPCPOD} a viable approach for HVAC of residential buildings. \begin{figure} \caption{Results for Algorithm~\ref{Alg:MPCPOD}.} \label{Fig:MPCtest_smaller} \label{Fig:ActivePoints_smaller} \label{Fig:smaller} \end{figure} For the smaller values $\gamma=0.01$ and $\varepsilon=0.0075$, we can significantly improve the results in terms of violation of the state constraints; cf. Figure~\ref{Fig:smaller}.
In this test, in fact, the influence of the outside temperature is lower and the method is capable of reducing significantly the number of active points; see Figure~\ref{Fig:ActivePoints_smaller}. In particular, the minimum temperature in the room never lies below the given lower bound $y_\mathsf{a}$ (Figure~\ref{Fig:MPCtest_smaller}). The maximum temperature, instead, is slightly above the upper bound $y_\mathsf{b}$. This can be explained by the chosen value of the relaxation parameter $\varepsilon$ and also by the use of the POD method: the reduced-order model approximation introduces a further discretization error, which may not vanish even when passing to the limit $\varepsilon\to 0$. This violation is nevertheless negligible for HVAC of residential buildings, since the maximum temperature attained over the whole time-space domain is $23.07$. Furthermore, we already discussed the conflict (and a possible solution) between the state constraints and the control. This can be clearly seen here, since the 33 active points are exactly the grid points belonging to $\Gamma_c$. \section{Conclusion} We presented a POD-based economic MPC algorithm to handle HVAC of residential buildings. The strategy consists of computing a sub-optimal solution to an optimal control problem governed by the Boussinesq approximation of the Navier-Stokes equations with bilateral state and control constraints. The goal was to achieve a good compromise between robustness and computational cost. The former is obtained via MPC and a goal-oriented a-posteriori error estimate, the latter through POD and parallel computing. The numerical results confirm the expectations. While some limitations of the strategy show up, particularly the emergence over time of several discretization points at which the state constraints are violated, the resulting control strategy can be regarded as accurate enough for typical HVAC applications.
The performance can be further improved, for example, by considering localized MOR and localized constraints. This will be the focus of future work. \section*{Acknowledgments} The authors gratefully acknowledge support of DFG grants 274852524, 274852737 and 274853298. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-JRNL-818244). This document was prepared as an account of work sponsored by an agency of the United States government. Neither the United States government nor Lawrence Livermore National Security, LLC, nor any of their employees makes any warranty, expressed or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States government or Lawrence Livermore National Security, LLC. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States government or Lawrence Livermore National Security, LLC, and shall not be used for advertising or product endorsement purposes. \end{document}
\begin{document} \RUNAUTHOR{Ghosh and Lam} \RUNTITLE{Robust Analysis in Stochastic Simulation} \TITLE{Robust Analysis in Stochastic Simulation: Computation and Performance Guarantees} \ARTICLEAUTHORS{ \AUTHOR{Soumyadip Ghosh} \AFF{IBM Research AI, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, \EMAIL{[email protected]}} \AUTHOR{Henry Lam} \AFF{Department of Industrial Engineering and Operations Research, Columbia University, New York, NY 10027, \EMAIL{[email protected]}} } \ABSTRACT{ Any performance analysis based on stochastic simulation is subject to the errors inherent in misspecifying the modeling assumptions, particularly the input distributions. In situations with little support from data, we investigate the use of worst-case analysis to analyze these errors, by representing the partial, nonparametric knowledge of the input models via optimization constraints. We study the performance and robustness guarantees of this approach. We design and analyze a numerical scheme for solving a general class of simulation objectives and uncertainty specifications. The key steps involve a randomized discretization of the probability spaces, a simulable unbiased gradient estimator using a nonparametric analog of the likelihood ratio method, and a Frank-Wolfe (FW) variant of the stochastic approximation (SA) method (which we call FWSA) run on the space of input probability distributions. A convergence analysis for FWSA on non-convex problems is provided. We test the performance of our approach via several numerical examples. } \maketitle \section{Introduction} Simulation-based performance analysis of stochastic models, or stochastic simulation, is built on input model assumptions that to some extent deviate from the truth. Consequently, a performance analysis subject to these input errors may lead to poor prediction and suboptimal decision-making.
To address this important problem, a typical framework in the stochastic simulation literature focuses on output variability measures or confidence bounds that account for the input uncertainty when input data are available. Established statistical techniques such as the bootstrap (e.g., \cite{barton1993uniform,barton2013quantifying}), goodness-of-fit tests (e.g., \cite{banks2000dm}), Bayesian inference and model selection (e.g., \cite{chick2001input,zouaoui2004accounting}) and the delta method (e.g., \cite{cheng1998two,cheng2004calculation}) have been proposed and have proven effective in many situations. In this paper, we take a different approach for situations with insufficient data, or when the modeler wants to assess risk beyond what the data or the model indicates. Such situations can arise when the system, service target or operational policy under study is at a testing stage without much prior experience. To find reliable output estimates in these settings, we investigate a worst-case approach with respect to the input models. In this framework, the modeler represents the partial and nonparametric beliefs about the input models as constraints, and computes tight worst-case bounds among all models that satisfy them. More precisely, let $Z(P^1,\ldots,P^m)$ be a performance measure that depends on $m$ input models, each generated from a probability distribution $P^i$. The formulations for computing the worst-case bounds are \begin{equation} \min_{P^i\in\mathcal U^i, i=1,\ldots,m}Z(P^1,\ldots,P^m)\text{\ \ \ \ and\ \ \ \ }\max_{P^i\in\mathcal U^i, i=1,\ldots,m}Z(P^1,\ldots,P^m). \label{generic} \end{equation} The set $\mathcal U^i$ encodes the collection of all possible $P^i$ from the knowledge of the modeler. The decision variables in the optimizations in \eqref{generic} are the unknown models $P^i,i=1,\ldots,m$.
The primary motivation for using \eqref{generic} is the robustness against model misspecification, where a proper construction of the set $\mathcal U^i$ avoids making specific assumptions beyond the modeler's knowledge. The following three examples motivate and explain further. \begin{example}[Robust bounds under expert opinion] When little information is available for an input model, a common practice in stochastic simulation is to summarize its range (say $[a,b]$) and mean (say $\mu$, or mode) as a triangular distribution, where the base of the triangle denotes the range and the position of the peak is calibrated from the mean. This specific distribution only crudely describes the knowledge of the modeler and may deviate from the true distribution, even if $a,b,\mu$ are correctly specified. Instead, using \begin{equation} \mathcal U^i=\{P^i:E_{P^i}[X^i]=\mu,\ \text{supp\ }P^i=[a,b]\}\label{triangle} \end{equation} in formulation \eqref{generic}, where $X^i$ is the random variate, $E_{P^i}[\cdot]$ is the expectation under $P^i$, and $\text{supp\ }P^i$ is the support of $P^i$, will give a valid interval that covers the true performance measure whenever $a,b,\mu$ are correctly specified. Moreover, when these parameters are not fully known but instead specified within a range, \eqref{triangle} can be relaxed to $$\mathcal U^i=\{P^i:\underline\mu\leq E_{P^i}[X^i]\leq\overline\mu,\ \text{supp\ }P^i=[\underline a,\overline b]\}$$ where $[\underline\mu,\overline\mu]$ denotes the range of the mean and $\underline a,\overline b$ denote the lower estimate of the lower support end and upper estimate of the upper support end respectively. The resulting bound will cover the truth as long as these ranges are supplied correctly.\label{triangle ex} \Halmos \end{example} \begin{example}[Dependency modeling] In constructing dependent input models, common approaches in the simulation literature fit the marginal description and the correlation of a multivariate model to a specified family.
Examples include Gaussian copula (e.g., \cite{lurie1998approximate,channouf2009fitting}; also known as normal-to-anything (NORTA), e.g. \cite{cario1997modeling}) and chessboard distribution (\cite{ghosh2002chessboard}) that uses a domain discretization. These distributions are correctly constructed up to their marginal description and correlation, provided that this information is correctly specified. However, dependency structure beyond correlation can introduce errors into these approaches (e.g., \cite{lam2016serial}), and formulation \eqref{generic} can be used to get bounds that address such dependency. For example, suppose $P^i$ is a bivariate input model with marginal distributions $P^{i,1},P^{i,2}$, marginal means $\mu^{i,1},\mu^{i,2}$ and covariance $\rho^i$. We can set $$\mathcal U^i=\{P^i:P_{P^{i,1}}(X^{i,1}\leq q_j^{i,1})=\nu_j^{i,1},j=1,\ldots,l_1,\ P_{P^{i,2}}(X^{i,2}\leq q_j^{i,2})=\nu_j^{i,2},j=1,\ldots,l_2,\ E[X^{i,1}X^{i,2}]=\rho^i+\mu^{i,1}\mu^{i,2}\}$$ where $(X^{i,1},X^{i,2})$ denote the random vector under $P^i$, and $q_j^{i,1},q_j^{i,2},\nu_j^{i,1},\nu_j^{i,2}$ are pre-specified quantiles and probabilities of the respective marginal distributions. Unlike previous approaches, \eqref{generic} outputs correct bounds on the truth given correctly specified marginal quantiles and correlation, regardless of the dependency structure. \label{dependency ex} \Halmos \end{example} \begin{example}[Model risk] Model risk refers broadly to the uncertainty in analysis arising from the adopted model not being fully accurate. This inaccuracy occurs as the adopted model (often known as the baseline model), typically obtained from the best statistical fit or expert opinion, deviates from the truth due to the real-world non-stationarity and the lack of full modeling knowledge or capability. To assess model risk, a recently surging literature studies the use of statistical distance as a measurement of model discrepancy (e.g., \cite{gx12b,lam2013robust}).
Given the baseline model $P_b^i$, the idea is to represent the uncertainty in terms of the distance away from the baseline via a neighborhood ball \begin{equation} \mathcal U^i=\{P^i:d(P^i,P_b^i)\leq\eta^i\}\label{ball} \end{equation} where $d$ is a distance defined on the nonparametric space of distributions (i.e., without restricting to any parametric families). The bounds drawn from formulation \eqref{generic} assess the effect of model risk due to the input models, tuned by the ball size parameter $\eta^i$ that denotes the uncertainty level. Besides risk assessment, this approach can also be used to obtain consistent confidence bounds for the true performance measure, when $P_b^i$ is taken as the empirical distribution and $\eta^i$ and $d$ are chosen suitably (discussed further in Section \ref{sec:discretization}). \label{model ex} \Halmos \end{example} Our worst-case approach is inspired by the literature of robust optimization (\cite{ben2009robust,bertsimas2011theory}), which considers decision-making under uncertainty and advocates optimizing decisions over worst-case scenarios. In particular, when the uncertainty lies in the probability distributions that govern a stochastic problem, the decision is made to optimize under the worst-case distributions, a class of problems known as distributionally robust optimization (e.g. \cite{delage2010distributionally,lim2006model}). Such an approach has also appeared in so-called robust simulation or robust Monte Carlo in the simulation literature (\cite{hu2012robust,gx12b}). However, the methodologies presented in the above literature focus on structured problems where the objective function is tractable, such as linear or linearly decomposable. In contrast, $Z(\cdot)$ for most problems in stochastic simulation is nonlinear and unstructured, obstructing the direct adaptation of the existing methods.
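For intuition, in the simplest setting of a single input model with $T=1$, $Z(P)=E_P[h(X)]$ is linear in $P$, and under a mean-and-support constraint as in Example 1 the two worst-case problems reduce, after discretizing the support, to finite linear programs. The sketch below illustrates this tractable special case (all numerical choices of $h$, $a$, $b$, $\mu$ are our own illustrative assumptions, not from the paper; the paper's algorithm targets the general nonlinear case):

```python
import numpy as np
from scipy.optimize import linprog

# Worst-case bounds for E_P[h(X)] over all P supported on [a, b] with
# mean mu: after discretizing the support, both problems are finite LPs
# in the probability weights p_j >= 0.
a, b, mu = 0.0, 1.0, 0.3
y = np.linspace(a, b, 201)               # discretized support points
h = (y > 0.8).astype(float)              # h(x) = I(x > 0.8), a tail probability

A_eq = np.vstack([np.ones_like(y), y])   # sum_j p_j = 1 and sum_j p_j y_j = mu
b_eq = np.array([1.0, mu])

lower = linprog(h, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))   # min E_P[h]
upper = linprog(-h, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))  # max E_P[h]
print(lower.fun, -upper.fun)             # worst-case lower and upper bounds
```

The optimal distributions are point masses on a few support values, which is the typical structure of moment-constrained worst cases; for nonlinear $Z(\cdot)$ no such LP reduction is available, which is the gap the proposed method fills.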
In view of this, our main objective is to design an efficient simulation-based method to compute the worst-case bounds for formulation \eqref{generic} that can be applied to broad classes of simulation models and input uncertainty representations. \subsection{Our Contributions} We study a simulation-based iterative procedure for the worst-case optimizations \eqref{generic}, based on a modified version of the celebrated stochastic approximation (SA) method (e.g. \cite{kushner}). Because of the iterative nature, it is difficult to directly operate on the space of continuous distributions except in very special cases. Thus, our first contribution (Section \ref{sec:discretization}) is to provide a randomized discretization scheme that can provably approximate the continuous counterpart. This allows one to focus on discrete distributions on fixed support points as the decision variable to feed into our SA algorithm. We develop the SA method in several aspects. In Section \ref{sec:gradient}, we construct an unbiased gradient estimator for $Z$ based on the idea of the Gateaux derivative for functionals of probability distributions (\cite{serfling2009approximation}), which is used to obtain the direction in each subsequent SA iterate. The need for such a construction is motivated by the difficulty in na\"{i}ve implementation of standard gradient estimators: An arbitrary perturbation of a probability distribution, which is the decision variable in the optimization, may shoot outside the probability simplex and result in a gradient that does not bear any probabilistic meaning and subsequently does not support simulation-based estimation. Our approach effectively restricts the direction of perturbation to points within the probability simplex, leading to a simulable gradient estimator.
We justify our approach as a nonparametric version of the classical likelihood ratio method (or the score function method) (\cite{glynn1990likelihood,reiman1989sensitivity,rubinstein1986score}). Next, in Sections \ref{sec:procedure} and \ref{sec:guarantees}, we design and analyze our SA scheme under the uncertainty constraints. We choose to use a stochastic counterpart of the so-called Frank-Wolfe (FW) method (\cite{frank1956algorithm}), known synonymously as the conditional gradient method in deterministic nonlinear programming. For convenience we call our scheme FWSA. Note that a standard SA iteration follows the estimated gradient up to a pre-specified step size to find the next candidate iterate. When the formulation includes constraints, the common approach in the SA literature projects the candidate solution onto the feasible region in order to define the next iterate (e.g. \cite{kushner}). Instead, our method looks in advance for a feasible direction along which the next iterate is guaranteed to lie in the (convex) feasible region. In order to find this feasible direction, an optimization subproblem with a linear objective function is solved in each iteration. We base our choice of using FWSA on its computational benefit in solving these subproblems, as their linear objectives allow efficient solution schemes for high-dimensional decision variables for many choices of the set $\mathcal U^i$. We characterize the convergence rate of FWSA in terms of the step size and the number of simulation replications used to estimate the gradient at each iteration. The form of our convergence bounds suggests prescriptions for the step-size and sample-size sequences that are efficient with respect to the cumulative number of sample paths simulated to generate all the gradients until the current iterate. The literature on the stochastic FW methods for non-convex problems is small.
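The feasible-direction idea just described can be sketched in the special case where the feasible region is a bare probability simplex (the actual scheme handles the richer sets $\mathcal U^i$, and the gradient below stands in for an estimated gradient). The linear subproblem over a simplex is solved in closed form by a vertex, i.e., a point mass, which illustrates the computational benefit mentioned above; this is our own illustrative sketch, not the paper's full algorithm:

```python
import numpy as np

def fw_step(p, g, gamma):
    """One Frank-Wolfe step on a probability simplex: solve the linear
    subproblem min_{q in simplex} g'q (its solution is the vertex at the
    smallest gradient coordinate), then move toward it.  The convex
    combination keeps the iterate inside the feasible region."""
    q = np.zeros_like(p)
    q[np.argmin(g)] = 1.0            # closed-form vertex solution
    return p + gamma * (q - p)       # next iterate, still a distribution

p = np.array([0.25, 0.25, 0.25, 0.25])   # current probability weights
g = np.array([0.3, -0.1, 0.5, 0.2])      # estimated gradient (illustrative)
p_next = fw_step(p, g, gamma=0.5)
print(p_next)                            # mass moves toward the minimizing coordinate
```

For the sets \eqref{moment set} and \eqref{phi set}, the linear subproblem is instead a small LP or a convex program with one divergence constraint, but the update $p + \gamma(q - p)$ has the same form.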
\cite{kushner1974stochastic} proves almost sure convergence under assumptions that can prescribe algorithmic specifications only for one-dimensional settings. During the review process of this paper, two other convergence rate studies \cite{reddi2016stochastic} and \cite{lafond2016} have appeared. Both of them assume the so-called $G$-Lipschitz condition on the gradient estimator, which does not apply to our setting. Consequently, our obtained convergence rates are generally inferior to their results. Nonetheless, we will point out how our rates almost match theirs under stronger assumptions on the behavior of the iterates that we will discuss. Finally, in Section \ref{sec:numerics} we provide numerical validation of our approach using two sets of experiments, one testing the performance of our proposed randomized discretization strategy, and one on the convergence of FWSA. \subsection{Literature Review} We briefly survey three lines of related work. First, our paper is related to the literature on input model uncertainty. In the parametric regime, studies have focused on the construction of confidence intervals or variance decompositions to account for both parameter and stochastic uncertainty using data, via for instance the delta method (\cite{cheng1998two,cheng2004calculation}), the bootstrap (\cite{barton2013quantifying,cheng1997sensitivity}), Bayesian approaches (\cite{zouaoui2003accounting,xie2014bayesian,saltelli2010variance,saltelli2008global}), and metamodel-assisted analysis (\cite{xie2014bayesian,xie2013statistical}). Model selection beyond a single parametric model can be handled through goodness-of-fit or Bayesian model selection and averaging (\cite{chick2001input,zouaoui2004accounting}). Fully nonparametric approaches using the bootstrap have also been investigated (\cite{barton1993uniform,barton2001resampling,song2015quickly}).
Second, formulation \eqref{generic} relates to the literature on robust stochastic control (\cite{pjd00,iyengar2005robust,nilim2005robust,doi:10.1287/moor.1120.0540}) and distributionally robust optimization (\cite{delage2010distributionally,goh2010distributionally,ben2013robust,wiesemann2014distributionally}), where the focus is to make decision rules under stochastic environments that are robust against the ambiguity of the underlying probability distributions. This is usually cast in the form of a minimax problem where the inner maximization is over the space of distributions. This idea has spanned multiple areas like economics (\cite{hsAER01,hansen2008robustness}), finance (\cite{gx12a,lsw11}), queueing (\cite{bertsimas2007semidefinite,jls10}), dynamic pricing (\cite{ls07}), inventory management (\cite{xin2015distributionally}), physical sciences (\cite{dupuis2016path}), and more recently machine learning (\cite{shafieezadeh2015distributionally,blanchet2016robust}). In the simulation context, \cite{hu2012robust} compared different global warming policies using Gaussian models with uncertain mean and covariance information. \cite{gx12b,glasserman2016bounding} studied approaches based on sample average approximation for solving distance-based constrained optimizations to quantify model risk in finance. \cite{lam2013robust,lam2016serial} investigated infinitesimal approximations for related optimizations to quantify model errors arising from sequences of uncertain input variates. \cite{bandi2012tractable} studied the view of deterministic robust optimization to compute various stochastic quantities. Simulation optimization under input uncertainty has also been studied via the robust optimization framework (\cite{fan2013robust,ryzhov2012ranking}), and the closely related approach using risk measures (\cite{qian2015composite,zhou2015simulation}).
Lastly, optimizations over probability distributions have also arisen as generalized moment problems, applied to decision analysis (\cite{smith95,smith1993moment,bertsimas2005optimal}) and stochastic programming (\cite{birge1987computing}). Our algorithm relates to the literature on the FW method (\cite{frank1956algorithm}) and constrained SA. The former is a nonlinear programming technique initially proposed for convex optimization, based on sequential linearization of the objective function using the gradient at the solution iterate. The classical work of \cite{canon1968tight}, \cite{dunn1979rates} and \cite{dunn1980convergence} analyzed convergence properties of FW for deterministic convex programs. More recently, \cite{jaggi2013revisiting}, \cite{freund2014new} and \cite{hazan2016variance} carried out finite-time analysis for the FW method motivated by machine learning applications. For stochastic FW on non-convex problems (viewed as a class of constrained SA), \cite{kushner1974stochastic} focused on almost sure convergence based on a set of assumptions about the probabilistic behavior of the iterations, which were then used to tune the algorithm for one-dimensional problems. While this paper was under review, \cite{reddi2016stochastic} provided a complexity analysis in terms of the sample size in estimating gradients and the number of calls of the linear optimization routine. \cite{lafond2016} studied the performance in terms of regret in an online setting. Both \cite{reddi2016stochastic} and \cite{lafond2016} relied on the $G$-Lipschitz condition that our gradient estimator violates. Other types of constrained SA schemes include the Lagrangian method (\cite{buche2002rate}) and mirror descent SA (\cite{nemirovski2009robust}). Finally, general convergence results for SA can be found in \cite{fu1994optimization}, \cite{kushner} and \cite{pasupathy2011stochastic}.
\section{Formulation and Assumptions}\label{sec:formulation} We focus on $Z(P^1,\ldots,P^m)$ that is a finite horizon performance measure generated from i.i.d. replications from the independent input models $P^1,\ldots,P^m$. Let $\mathbf X^i=(X_t^i)_{t=1,\ldots,T^i}$ be $T^i$ i.i.d. random variables on the space $\mathcal X^i\subset\mathbb R^{v^i}$, each generated under $P^i$. The performance measure can be written as \begin{equation} Z(P^1,\ldots,P^m)=E_{P^1,\ldots,P^m}[h(\mathbf X^1,\ldots,\mathbf X^m)]=\int\cdots\int h(\mathbf x^1,\ldots,\mathbf x^m)\prod_{t=1}^{T^1}dP^1(x_t^1)\cdots\prod_{t=1}^{T^m}dP^m(x_t^m)\label{perf measure} \end{equation} where $h(\cdot):\prod_{i=1}^m(\mathcal X^i)^{T^i}\to\mathbb R$ is a cost function, and $E_{P^1,\ldots,P^m}[\cdot]$ denotes the expectation associated with the generation of the i.i.d. replications. We assume that $h(\cdot)$ can be evaluated by the computer given the inputs. In other words, the performance measure \eqref{perf measure} can be approximated by running simulation. Equation \eqref{perf measure} is the stylized representation for transient performance measures in discrete-event simulation. For example, $\mathbf X^1$ and $\mathbf X^2$ can be the sequences of interarrival and service times in a queue, and $P^1$ and $P^2$ are the interarrival time and service time distributions. When $h(\mathbf X^1,\mathbf X^2)$ is the indicator function of the waiting time exceeding a threshold, \eqref{perf measure} will denote the corresponding threshold exceedance probability. Next we discuss the constraints in \eqref{generic}. Following the terminology in robust optimization, we call $\mathcal U^i$ the \emph{uncertainty set} for the $i$-th input model.
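The queueing illustration of \eqref{perf measure} above can be simulated directly. The sketch below estimates the threshold exceedance probability of the $T$-th waiting time via the Lindley recursion; the exponential choices for $P^1$ and $P^2$ and all numerical values are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def waiting_time_exceedance(T=10, threshold=2.0, reps=20000):
    """Monte Carlo estimate of Z = P(W_T > threshold) for a single-server
    FIFO queue, where the X^1_t are i.i.d. interarrival times (from P^1)
    and the X^2_t are i.i.d. service times (from P^2)."""
    inter = rng.exponential(1.0, size=(reps, T))   # draws from P^1
    serv = rng.exponential(0.8, size=(reps, T))    # draws from P^2
    w = np.zeros(reps)                             # W_0 = 0 (empty queue)
    for t in range(T):                             # Lindley recursion
        w = np.maximum(w + serv[:, t] - inter[:, t], 0.0)
    return np.mean(w > threshold)                  # h = I(W_T > threshold)

est = waiting_time_exceedance()
print(est)
```

Each replication evaluates the cost function $h$ on one realization of $(\mathbf X^1,\mathbf X^2)$, exactly the simulation-based evaluation of $Z$ assumed in the text.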
Motivated by the examples in the Introduction, we focus on two types of convex uncertainty sets: \begin{enumerate} \item\emph{Moment and support constraints: }We consider \begin{equation} \mathcal U^i=\{P^i:E_{P^i}[f_l^i(X^i)]\leq\mu_l^i,l=1,\ldots,s^i,\ \text{supp\ }P^i=A^i\}\label{moment set} \end{equation} where $X^i$ is a generic random variable under distribution $P^i$, $f_l^i:\mathcal X^i\to\mathbb R$, and $A^i\subset\mathcal X^i$. For instance, when $\mathcal X^i=\mathbb R$, $f_l^i(x)$ being $x$ or $x^2$ denotes the first two moments. When $\mathcal X^i=\mathbb R^2$, $f_l^i(x_1,x_2)=x_1x_2$ denotes the cross-moment. Equalities can also be represented via \eqref{moment set} by including $E_{P^i}[-f_l^i(X^i)]\leq-\mu_l^i$. Thus the uncertainty set \eqref{moment set} covers Examples \ref{triangle ex} and \ref{dependency ex} in the Introduction. Furthermore, the neighborhood measured by certain types of statistical distance (Example \ref{model ex}) can also be cast as \eqref{moment set}. For instance, suppose $d$ is induced by the sup-norm on the distribution function on $\mathbb R$. Suppose $P^i$ is a continuous distribution and the baseline distribution $P_b^i$ is discrete with support points $y_j,j=1,\ldots,n^i$. The constraint \begin{equation} \sup_{x\in\mathbb R}|F^i(x)-F_b^i(x)|\leq\eta^i\label{KS} \end{equation} where $F^i$ and $F_b^i$ denote the distribution functions for $P^i$ and $P_b^i$ respectively, can be reformulated as $$F_b^i(y_j+)-\eta^i\leq F^i(y_j)\leq F_b^i(y_j-)+\eta^i,\ j=1,\ldots,n^i$$ where $F_b^i(y_j-)$ and $F_b^i(y_j+)$ denote the left and right limits of $F_b^i$ at $y_j$, by using the monotonicity of distribution functions. Thus $$\mathcal U^i=\{P^i:F_b^i(y_j+)-\eta^i\leq E_{P^i}[I(X^i\leq y_j)]\leq F_b^i(y_j-)+\eta^i,\ j=1,\ldots,n^i,\ \text{supp\ }P^i=\mathbb R\}$$ where $I(\cdot)$ denotes the indicator function, falls into the form of \eqref{moment set}.
\cite{bertsimas2014robust} considers this reformulation for constructing uncertainty sets for stochastic optimization problems, and suggests selecting $\eta^i$ as the quantile of the Kolmogorov-Smirnov statistic if $F_b^i$ is the empirical distribution function constructed from continuous i.i.d. data. \item\emph{Neighborhood of a baseline model measured by $\phi$-divergence: }Consider \begin{equation} \mathcal U^i=\{P^i:d_\phi(P^i,P_b^i)\leq\eta^i\}\label{phi set} \end{equation} where $d_\phi(P^i,P_b^i)$ denotes the $\phi$-divergence from a baseline distribution $P_b^i$ given by $$d_\phi(P^i,P_b^i)=\int\phi\left(\frac{dP^i}{dP_b^i}\right)dP_b^i$$ which is finite only when $P^i$ is absolutely continuous with respect to $P_b^i$. The function $\phi$ is a convex function satisfying $\phi(1)=0$. This family covers many widely used distances. Common examples are $\phi(x)=x\log x-x+1$ giving the KL divergence, $\phi(x)=(x-1)^2$ giving the (modified) $\chi^2$-distance, and $\phi(x)=(1-\theta+\theta x-x^\theta)/(\theta(1-\theta)),\ \theta\neq0,1$ giving the Cressie-Read divergence. Details of $\phi$-divergence can be found in, e.g., \cite{pardo2005statistical,ben2013robust,bayraksan2015data}. \end{enumerate} As noted in the Introduction, in the context of simulation analysis where $(P^1,\ldots,P^m)$ are the input models, $Z(\cdot)$ in \eqref{perf measure} is in general a complex nonlinear function. This raises challenges in solving \eqref{generic} beyond the literature of robust control and optimization that typically considers more tractable objectives. Indeed, if $Z(\cdot)$ is a linear function in $P^i$'s, then optimizing over the two types of uncertainty sets above can both be cast as specialized classes of convex programs that can be efficiently solved. But linear $Z(\cdot)$ is too restrictive to describe the input-output relation in simulation.
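The $\phi$-divergences defined above are straightforward to evaluate once the distributions are discrete, which is the form they take after the discretization introduced in the next section. A sketch with the KL and modified $\chi^2$ choices of $\phi$ (the distributions are illustrative; weights of the baseline are assumed strictly positive on the common support):

```python
import numpy as np

def phi_divergence(p, p_b, phi):
    """d_phi(P, P_b) = sum_j phi(p_j / p_b_j) * p_b_j for discrete
    distributions p and p_b on a common support."""
    p, p_b = np.asarray(p, float), np.asarray(p_b, float)
    ratio = p / p_b                      # the likelihood ratio dP/dP_b
    return float(np.sum(phi(ratio) * p_b))

kl = lambda x: x * np.log(x) - x + 1     # phi for the KL divergence
chi2 = lambda x: (x - 1) ** 2            # phi for the modified chi^2 distance

p_b = np.array([0.25, 0.25, 0.25, 0.25])
p = np.array([0.4, 0.3, 0.2, 0.1])
print(phi_divergence(p, p_b, kl), phi_divergence(p, p_b, chi2))
```

Both choices satisfy $\phi(1)=0$, so the divergence of the baseline from itself is zero, and convexity of $\phi$ makes the constraint $d_\phi(P,P_b)\leq\eta$ convex in the weights of $P$.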
To handle a broader class of $Z(\cdot)$ and to address its simulation-based nature, we propose to use a stochastic iterative method. The next sections will discuss our methodology in relation to the performance guarantees provided by \eqref{generic}. \section{Performance Guarantees and Discretization Strategy}\label{sec:discretization} This section describes the guarantees provided by our framework. Section \ref{sec:randomized} first presents the motivation and justification of a discretization scheme for continuous input distributions. Section \ref{sec:stat} then discusses the statistical implications in more detail. \subsection{Randomized Discretization}\label{sec:randomized} Suppose there is a ``ground truth'' distribution $P_0^i$ for each input model. Let $Z_*$ and $Z^*$ be the minimum and maximum values of the worst-case optimizations \eqref{generic}. Let $Z_0$ be the true performance measure, i.e. $Z_0=Z(P_0^1,\ldots,P_0^m)$. The following highlights an immediate implication of using \eqref{generic}: \begin{proposition} If $P_0^i\in\mathcal U^i$ for all $i$, then $Z_*\leq Z_0\leq Z^*$.\label{basic guarantee} \end{proposition} In other words, the bounds from the worst-case optimizations form an interval that covers the true performance measure if the uncertainty sets contain the true distributions. We discuss a discretization strategy for the worst-case optimizations for continuous input distributions. We will show that, by replacing the continuous distribution with a discrete distribution on support points that are initially sampled from some suitably chosen distribution, we can recover the guarantee in Proposition \ref{basic guarantee} up to a small error. The motivation for using discretization comes from the challenges in handling decision variables in the form of continuous distributions when running our iterative optimization scheme proposed later. We focus on the two uncertainty sets \eqref{moment set} and \eqref{phi set}.
The following states our guarantee: \begin{theorem} Consider $Z(P^1,\ldots,P^m)$ in \eqref{perf measure}. Assume $h$ is bounded a.s. Let $n^i,i=1,\ldots,m$ and $n$ be positive integers such that $n^i=nw^i$ for some fixed $w^i>0$, for all $i$. For each input model $i$, we sample $n^i$ i.i.d. observations $\{y_1^i,\ldots,y_{n^i}^i\}$ from a distribution $Q^i$ such that the true distribution $P_0^i$ is absolutely continuous with respect to $Q^i$, with $L^i=dP_0^i/dQ^i$ satisfying $\|L^i\|_\infty<\infty$, where $\|L^i\|_\infty$ denotes the essential supremum of $L^i$ under $Q^i$. Consider the optimizations \begin{equation} \hat Z_*=\min_{P^i\in\hat{\mathcal U}^i, i=1,\ldots,m}Z(P^1,\ldots,P^m)\text{\ \ \ \ and\ \ \ \ }\hat Z^*=\max_{P^i\in\hat{\mathcal U}^i, i=1,\ldots,m}Z(P^1,\ldots,P^m)\label{sample counterpart} \end{equation} where each $\hat{\mathcal U}^i$ contains discrete distributions supported on $\{y_1^i,\ldots,y_{n^i}^i\}$, defined in one of the two cases below. For each case, we also make additional assumptions as follows: \begin{enumerate} \item Set \begin{equation} \hat{\mathcal U}^i=\{P^i:E_{P^i}[f_l^i(X^i)]\leq\mu_l^i,l=1,\ldots,s^i,\ \text{supp\ }P^i\subset\{y_1^i,\ldots,y_{n^i}^i\}\}\label{moment discrete} \end{equation} Moreover, assume that $P_0^i$ satisfies $E_{P_0^i}|f_l^i(X^i)|<\infty$ and $E_{P_0^i}[f_l^i(X^i)]<\mu_l^i$ for all $l=1,\ldots,s^i$. \item The distribution $Q^i$ is chosen such that $P_b^i$ is absolutely continuous with respect to $Q^i$, and we denote $L_b^i=dP_b^i/dQ^i$. Set \begin{equation} \hat{\mathcal U}^i=\{P^i:d_\phi(P^i,\hat P_b^i)\leq\eta^i\}\label{phi discrete} \end{equation} where $\hat P_b^i$ is defined as $$\hat P_b^i=\sum_{j=1}^{n^i}\frac{L_b^i(y_j^i)}{\sum_{r=1}^{n^i}L_b^i(y_r^i)}\delta(y_j^i)$$ with $\delta(y)$ denoting the delta measure at $y$. Moreover, assume $P_0^i$ satisfies $E_{P_b^i}|\phi(dP_0^i/dP_b^i)|<\infty$ and $d_\phi(P_0^i,P_b^i)<\eta^i$.
Additionally, assume $\phi(\cdot)$ satisfies the continuity condition $|\phi(t(1+\lambda))-\phi(t)|\leq|\phi(t)|\kappa_1(\lambda)+\kappa_2(\lambda)$ for any $t\geq0$ and $\lambda$ in a fixed neighborhood of 0, where $\kappa_1(\cdot)$ and $\kappa_2(\cdot)$ are two functions such that $\kappa_1(\lambda)=O(\lambda)$ and $\kappa_2(\lambda)=O(\lambda)$ as $\lambda\to0$. \end{enumerate} Then we have \begin{equation} \hat Z_*\leq Z_0+O_p\left(\frac{1}{\sqrt n}\right)\leq\hat Z^*\label{approx bound} \end{equation}\label{sample thm} \end{theorem} Here $O_p(1/\sqrt n)$ is an error term $e_n$ that is of stochastic order $1/\sqrt n$, i.e., for any $0<\epsilon<1$, there exist $M,N>0$ such that $P(|\sqrt ne_n|<M)>1-\epsilon$ for any $n>N$. Theorem \ref{sample thm} is proved in Appendix \ref{sec:proofs}. We have a few immediate remarks: \begin{enumerate} \item Optimizations \eqref{sample counterpart} are the sample counterparts of the original worst-case optimizations \eqref{generic} with uncertainty sets given by \eqref{moment set} or \eqref{phi set}, which optimize discrete distributions over support points that are sampled from generating distributions $Q^i$'s. Theorem \ref{sample thm} guarantees that, if the original worst-case optimizations give valid covering bounds for the true performance measure (in the spirit of Proposition \ref{basic guarantee}), then so are the sample counterparts, up to an error $O_p(1/\sqrt n)$ where $n$ denotes the order of the sample size used to construct the sets of support points. The constant implicit in this $O_p(1/\sqrt n)$ error depends on the sensitivity of $Z$ with respect to the input distributions, as well as the discrepancies between the true input distributions and the support-generating distributions. \item The condition $\|L^i\|_\infty<\infty$ implies that $Q^i$ has a tail at least as heavy as $P_0^i$. In practice, the tail of the true distribution $P_0^i$ is not exactly known a priori. 
This means that it is safer to sample the support points from a heavy-tailed distribution. Additionally, in the case of $\phi$-divergence, the generating distribution should also support the baseline. One easy choice is simply to use the baseline as the generating distribution. \item The conditions $E_{P_0^i}[f_l^i(X^i)]<\mu_l^i$ and $d_\phi(P_0^i,P_b^i)<\eta^i$ state that $E_{P_0^i}[f_l^i(X^i)]$ and $d_\phi(P_0^i,P_b^i)$ are in the interior of $\{(z_1,\ldots,z_{s^i}):z_l\leq\mu_l^i,\ l=1,\ldots,s^i\}$ and $\{z:z\leq\eta^i\}$ respectively. These conditions guarantee that $P_0^i$ projected on a sample approximation of the support is asymptotically feasible for \eqref{sample counterpart}, which helps lead to the guarantee \eqref{approx bound}. In general, the closer $P_0^i$ is to the boundary of the uncertainty set, i.e., the smaller the values of $\mu_l^i-E_{P_0^i}[f_l^i(X^i)]$ and $\eta^i-d_\phi(P_0^i,P_b^i)$, the larger the sample size needed for the asymptotic behavior in \eqref{approx bound} to kick in, a fact that is not revealed explicitly in Theorem \ref{sample thm}. One way to control this required sample size is to expand the uncertainty set by a small margin, say $\epsilon>0$, i.e., use $E_{P^i}[f_l^i(X^i)]\leq\mu_l^i+\epsilon$ and $d_\phi(P^i,P_b^i)\leq\eta^i+\epsilon$, in \eqref{moment discrete} and \eqref{phi discrete}. Note that, in the case of a moment equality constraint, say $E_{P^i}[f_l^i(X^i)]=\mu_l^i$, one does have to deliberately relax the constraint to $\mu_l^i-\epsilon\leq E_{P^i}[f_l^i(X^i)]\leq\mu_l^i+\epsilon$ for the interior-point conditions to hold. \item The continuity assumption imposed on $\phi(\cdot)$ in Case 2 is satisfied by many common choices, including KL, (modified) $\chi^2$-distance, and Burg entropy (see the definitions in \cite{ben2013robust}). \item As $n^i$ increases, the sampled uncertainty set $\hat{\mathcal U}^i$ enlarges as it contains distributions supported on more values.
As a result, $\hat Z_*$ becomes smaller and $\hat Z^*$ larger as $n^i$ increases. Moreover, since $\hat{\mathcal U}^i\subset\mathcal U^i$, we have $\hat Z_*\geq Z_*$ and $\hat Z^*\leq Z^*$. This means that as the generated support size increases, the interval $[\hat Z_*,\hat Z^*]$ progressively widens and is always contained in the interval $[Z_*,Z^*]$. \end{enumerate} \subsection{Statistical Implications}\label{sec:stat} We further discuss the statistical guarantees implied from Section \ref{sec:randomized}. First, a probabilistic analog of Proposition \ref{basic guarantee} is: \begin{proposition} If $\mathcal U^i$ contains the true distribution $P_0^i$ for all $i$ with confidence $1-\alpha$, i.e. $\mathbb P(\mathcal U^i\ni P_0^i\text{\ for all\ }i=1,\ldots,m)\geq1-\alpha$, then $\mathbb P(Z_*\leq Z_0\leq Z^*)\geq1-\alpha$, where $\mathbb P$ denotes the probability generated from a combination of data and prior belief.\label{simple conf} \end{proposition} Proposition \ref{simple conf} follows immediately from Proposition \ref{basic guarantee}. In the frequentist framework, $\mathbb P$ refers to the probability generated from data. However, Proposition \ref{simple conf} can also be cast in a Bayesian framework, in which $\mathbb P$ can represent the prior (e.g., from expert opinion) or the posterior belief. Proposition \ref{simple conf} reconciles with the established framework in distributionally robust optimization that the uncertainty set $\mathcal U^i$ should be chosen as a confidence set for the true distribution, in order to provide a guarantee for the coverage probability on the true objective, in the case that $\mathbb P$ represents the generation of data under a true model. Some strategies for constructing confidence sets are: \begin{enumerate} \item For moment constraint $E_{P^i}[f_l^i(X^i)]\leq\mu_l^i$, one can choose $\mu_l^i$ as the upper confidence bound of the moment.
\item For the sup-norm constraint in \eqref{KS}, supposing that $P^i$ is continuous, $\eta^i$ chosen as the $(1-\alpha)$-quantile of $\sup_{t\in[0,1]}|B(t)|/\sqrt{n^i}$, where $B(t)$ is a standard Brownian bridge, gives an approximate $(1-\alpha)$ confidence region. This follows from the limiting distribution of the Kolmogorov-Smirnov statistic (see, e.g., \cite{bertsimas2014robust}). This calibration becomes conservative (but still correct) when $P^i$ is discrete, and one could use the bootstrap as a remedy. Note that the Kolmogorov-Smirnov-based confidence region is crude for the tail in that it can include a wide range of tail behaviors, and thus is not recommended if the performance measure of interest is sensitive to the tail. \item For the $\phi$-divergence-based constraint in \eqref{phi set}, under the assumption that $P^i$ has finite support of size $r^i$, \cite{ben2013robust} proposes using $\eta^i=(\phi''(1)/(2n^i))\chi^2_{r^i-1,1-\alpha}$ in the case $P_b^i$ is taken as the empirical distribution, where $\chi^2_{r^i-1,1-\alpha}$ is the $(1-\alpha)$-quantile of a $\chi^2$-distribution with degree of freedom $r^i-1$. This leads to an approximate $(1-\alpha)$ confidence region by using the asymptotics of goodness-of-fit statistics (\cite{pardo2005statistical}). The resulting region from this approach, however, can be conservative as the involved degree of freedom can be large. Recent works such as \cite{lam2015quantifying,duchistatistics,lam2016recovering} investigate the tightening of divergence-based regions and extend their use to continuous data using the empirical likelihood theory. This theory can also potentially shed insights on the (second-order) accuracies achieved using different divergences (\cite{owen2001empirical}). Other alternatives include using the Wasserstein distance; see, e.g., \cite{esfahani2015data,blanchet2016quantifying,gao2016distributionally} for these developments and the involved ball-size calibration methods.
\end{enumerate} When discretization is applied, the probabilistic analog of Theorem \ref{sample thm} is: \begin{theorem} Suppose all assumptions in Theorem \ref{sample thm} are in place except that $E_{P_0^i}[f_l^i(X^i)]<\mu_l^i$ or $d_\phi(P_0^i,P_b^i)<\eta^i$ now holds jointly for all $i$ with confidence $1-\alpha$ under $\mathbb P$. Then $\mathbb P(\hat Z_*\leq Z_0+O_p(1/\sqrt n)\leq\hat Z^*)\geq1-\alpha$.\label{sample conf} \end{theorem} Theorem \ref{sample conf} follows immediately from Theorem \ref{sample thm}. Like before, Theorem \ref{sample conf} translates \eqref{generic}, whose input models can be continuously represented, to \eqref{sample counterpart}, which is imposed over discrete distributions, at the price of a small error. In the next section we discuss our algorithm run over discrete distributions and point out clearly why the discretization is necessary when the input distributions are continuous. We close this section with two cautionary remarks. First, while our discretization strategy works for problems involving independent low-dimensional input distributions (which occur often in stochastic simulation), high-dimensional joint dependent models may greatly inflate the constant implicit in the error term, and we do not advise using our strategy in such settings. Second, in general, the finer the discretization scale (i.e., the more generated support points), the higher the dimension of the decision space for the resulting optimization problem, and there is thus a tradeoff on the discretization scale between the approximation error and the optimization effort. Obviously, when the input model is finite discrete, the sampling step depicted in Theorems \ref{sample thm} and \ref{sample conf} is unnecessary, and our subsequent results regarding the algorithm apply readily to this case. 
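To make the Kolmogorov-Smirnov calibration in item 2 of the list above concrete, the radius $\eta^i=q_{1-\alpha}/\sqrt{n^i}$ can be computed by inverting the limiting distribution of $\sup_{t\in[0,1]}|B(t)|$ (the Kolmogorov distribution). The following Python sketch is an illustration only; the series truncation and the bisection-based quantile inversion are our implementation choices, not part of the calibration prescriptions above:

```python
import math

def kolmogorov_cdf(x: float, terms: int = 100) -> float:
    """CDF of sup_{t in [0,1]} |B(t)| for a standard Brownian bridge B
    (the Kolmogorov distribution): K(x) = 1 - 2 sum_{k>=1} (-1)^(k-1) exp(-2 k^2 x^2)."""
    if x <= 0:
        return 0.0
    s = sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * x * x)
            for k in range(1, terms + 1))
    return 1.0 - 2.0 * s

def ks_threshold(n: int, alpha: float = 0.05) -> float:
    """Approximate (1-alpha)-level sup-norm radius eta = q_{1-alpha} / sqrt(n),
    where q_{1-alpha} is the Kolmogorov quantile, found by bisection."""
    lo, hi = 0.0, 5.0
    for _ in range(80):  # bisection: kolmogorov_cdf is increasing in x
        mid = 0.5 * (lo + hi)
        if kolmogorov_cdf(mid) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / math.sqrt(n)
```

For example, at $\alpha=0.05$ the Kolmogorov quantile is approximately $1.358$, so with $n^i=100$ data points the radius is roughly $0.136$.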
\section{Gradient Estimation on Probability Simplices via a Nonparametric Likelihood Ratio Method}\label{sec:gradient} Since we work in the discrete space, for simplicity we denote $\mathbf p^i=(p_j^i)_{j=1,\ldots,n^i}\in\mathbb R^{n^i}$ as the vector of probability weights for the discretized input model $i$. This probability vector is understood to apply on the support points $\{y_1^i,\ldots,y_{n^i}^i\}$. Moreover, let $\mathbf p=\text{vec}(\mathbf p^i:i=1,\ldots,m)\in\mathbb R^N$ where $\text{vec}$ denotes a concatenation of the vectors $\mathbf p^i$'s as a single vector, and $N=\sum_{i=1}^mn^i$. We denote $\mathcal P_l=\{(p_1,\ldots,p_l)\in\mathbb R^l:\sum_{j=1}^lp_j=1, p_j\geq0, j=1,\ldots,l\}$ as the $l$-dimensional probability simplex. Hence $\mathbf p^i\in\mathcal P_{n^i}$. For convenience, let $\mathcal P=\prod_{i=1}^m\mathcal P_{n^i}$, so that $\mathbf p\in\mathcal P$. The performance measure in \eqref{sample counterpart} can be written as $Z(\mathbf p)$. Furthermore, denote $T=\max_{i=1,\ldots,m}T^i$ as the maximum length of replications among all input models. We also write $\mathbf X=(\mathbf X^1,\ldots,\mathbf X^m)$ and $h(\mathbf X)=h(\mathbf X^1,\ldots,\mathbf X^m)$ for simplicity. Recall that $I(E)$ denotes the indicator function for the event $E$. In the rest of this paper, $'$ denotes transpose, and $\|\mathbf x\|$ denotes the Euclidean norm of a vector $\mathbf x$. We also write $Var_{\mathbf p}(\cdot)$ as the variance under the input distribution $\mathbf p$. Inequalities for vectors are defined component-wise. We shall present an iterative simulation-based scheme for optimizing \eqref{sample counterpart}. The first step is to design a method to extract the gradient information of $Z(\mathbf p)$. Note that the standard gradient of $Z(\mathbf p)$, which we denote as $\nabla Z(\mathbf p)$, obtained through differentiation of $Z(\mathbf p)$, may not lead to any simulable object. 
This is because an arbitrary perturbation of $\mathbf p$ may leave the probability simplex, and the resulting gradient will be a high-degree polynomial in $\mathbf p$ that may have no probabilistic interpretation and thus is not amenable to simulation-based estimation. We address this issue by considering the set of perturbations within the simplices. Our approach resembles the Gateaux derivative of a functional of a probability distribution (\cite{serfling2009approximation}), as follows. Given any $\mathbf p^i$, define a mixture distribution $(1-\epsilon)\mathbf p^i+\epsilon\mathbf 1_j^i$, where $\mathbf 1_j^i$ represents a point mass on $y_j^i$, i.e. $\mathbf 1_j^i=(0,0,\ldots,1,\ldots,0)\in\mathcal P_{n^i}$ with the 1 at the $j$-th coordinate. The number $0\leq\epsilon\leq1$ is the mixture parameter. When $\epsilon=0$, the mixture reduces to the given distribution $\mathbf p^i$. We treat $\epsilon$ as a parameter and differentiate $Z(\mathbf p^1,\ldots,\mathbf p^{i-1},(1-\epsilon)\mathbf p^i+\epsilon\mathbf 1_j^i,\mathbf p^{i+1},\ldots,\mathbf p^m)$ with respect to $\epsilon$ for each $i,j$. More precisely, let $$\psi_j^i(\mathbf p)=\frac{d}{d\epsilon}Z(\mathbf p^1,\ldots,\mathbf p^{i-1},(1-\epsilon)\mathbf p^i+\epsilon\mathbf 1_j^i,\mathbf p^{i+1},\ldots,\mathbf p^m)\big|_{\epsilon=0}.$$ Denote $\bm\psi^i(\mathbf p)=(\psi_j^i(\mathbf p))_{j=1,\ldots,n^i}\in\mathbb R^{n^i}$, and $\bm\psi(\mathbf p)=\text{vec}(\bm\psi^i(\mathbf p):i=1,\ldots,m)\in\mathbb R^N$. 
We show that $\bm\psi$ possesses the following two properties: \begin{theorem} Given $\mathbf p\in\mathcal P$ such that $\mathbf p>\mathbf 0$, we have: \begin{enumerate} \item \begin{equation} \nabla Z(\mathbf p)'(\mathbf q-\mathbf p)=\sum_{i=1}^m\nabla^iZ(\mathbf p)'(\mathbf q^i-\mathbf p^i)=\sum_{i=1}^m\bm\psi^i(\mathbf p)'(\mathbf q^i-\mathbf p^i)=\bm\psi(\mathbf p)'(\mathbf q-\mathbf p)\label{gradient equivalence} \end{equation} for any $\mathbf q^i\in\mathcal P_{n^i}$ and $\mathbf q=\text{vec}(\mathbf q^i:i=1,\ldots,m)$, where $\nabla^iZ(\mathbf p)\in\mathbb R^{n^i}$ is the gradient of $Z$ taken with respect to $\mathbf p^i$. \item \begin{equation} \psi_j^i(\mathbf p)=E_{\mathbf p}[h(\mathbf X)s_j^i(\mathbf X^i)]\label{score function} \end{equation} where $s_j^i(\cdot)$ is defined as \begin{equation} s_j^i(\mathbf x^i)=\sum_{t=1}^{T^i}\frac{I(x_t^i=y_j^i)}{p_j^i}-T^i\label{score function2} \end{equation} for $\mathbf x^i=(x_1^i,\ldots,x_{T^i}^i)\in\mathbb R^{T^i}$. \end{enumerate} \label{prop:gradient} \end{theorem} The proof of Theorem \ref{prop:gradient} is in Appendix \ref{sec:proofs}. The first property above states that $\bm\psi(\mathbf p)$ and $\nabla Z(\mathbf p)$ are identical when viewed as directional derivatives, as long as the direction lies within $\mathcal P$. Since the feasible region of optimizations \eqref{sample counterpart} lies in $\mathcal P$, it suffices to focus on $\bm\psi(\mathbf p)$. The second property above states that $\bm\psi(\mathbf p)$ can be estimated unbiasedly in a way similar to the classical likelihood ratio method (\cite{glynn1990likelihood,reiman1989sensitivity}), with $s_j^i(\cdot)$ playing the role of the score function. Since this representation holds without assuming any specific parametric form for $\mathbf p$, we view it as a nonparametric version of the likelihood ratio method. 
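A minimal numerical sketch of the likelihood-ratio estimator in \eqref{score function}--\eqref{score function2}, for a single input model; the performance function $h$, the support, and the sample sizes below are toy assumptions chosen only for illustration:

```python
import random

def score(x_path, support, p):
    """Score function: s_j(x) = sum_t 1{x_t = y_j} / p_j - T."""
    T = len(x_path)
    return [sum(1 for xt in x_path if xt == y) / pj - T
            for y, pj in zip(support, p)]

def estimate_psi(h, support, p, T, R, seed=0):
    """Unbiased Monte Carlo estimate of psi_j = E_p[h(X) s_j(X)],
    averaging h(X) * s_j(X) over R i.i.d. paths X of length T drawn from p."""
    rng = random.Random(seed)
    acc = [0.0] * len(support)
    for _ in range(R):
        x = rng.choices(support, weights=p, k=T)  # one sample path
        hx = h(x)
        for j, sj in enumerate(score(x, support, p)):
            acc[j] += hx * sj
    return [a / R for a in acc]

# Toy check: h = path mean, support {0, 1}, p = (0.5, 0.5). Then Z(p)
# equals the weight on the point 1, and the mixture derivatives of the
# definition of psi work out to psi = (-0.5, 0.5).
psi = estimate_psi(lambda x: sum(x) / len(x), [0, 1], [0.5, 0.5], T=5, R=20000)
```

The Monte Carlo averages recover $(-0.5,0.5)$ up to noise whose size is consistent with the variance bound of Lemma \ref{prop:var}.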
From \eqref{score function}, an unbiased estimator for $\psi_j^i(\mathbf p)$ using a single simulation run is $h(\mathbf X)s_j^i(\mathbf X^i)$, where $\mathbf X=(\mathbf X^1,\ldots,\mathbf X^m)$ is the sample path. The following provides a bound on the variance of this estimator (see Appendix \ref{sec:proofs} for the proof): \begin{lemma} Assume $h(\mathbf X)$ is bounded a.s., i.e. $|h(\mathbf X)|\leq M$ for some $M>0$, and that $\mathbf p>\mathbf 0$. Each sample for estimating $\psi_j^i(\mathbf p)$, given by $h(\mathbf X)s_j^i(\mathbf X^i)$ using one sample path of $\mathbf X$, possesses a variance bounded from above by $M^2T^i(1-p_j^i)/p_j^i$. \label{prop:var} \end{lemma} The function $\bm\psi(\mathbf p)$ derived via the above Gateaux derivative framework can be interpreted as a discrete version of the so-called influence function in robust statistics (\cite{hampel1974influence,hampel2011robust}), which is commonly used to approximate the first-order effect on a given statistic due to contamination of the data. In general, the gradient represented by the influence function is defined as an operator on the domain of the random object distributed under $\mathbf p$. Thus, in the continuous case, this object has an infinite-dimensional domain and can be difficult to compute and encode. This is the main reason why we seek a discretization in the first place. \section{Frank-Wolfe Stochastic Approximation (FWSA)}\label{sec:procedure} With the implementable form of the gradient $\bm\psi(\mathbf p)$ described in Section \ref{sec:gradient}, we design a stochastic nonlinear programming technique to solve \eqref{sample counterpart}. We choose to use the Frank-Wolfe method because, for the types of $\hat{\mathcal U}^i$ we consider in Section \ref{sec:discretization}, effective routines exist for solving the induced linearized subproblems. 
\subsection{Description of the Algorithm}\label{sec:description} For convenience denote $\hat{\mathcal U}=\prod_{i=1}^{m}\hat{\mathcal U}^i$. We focus on the choices of $\mathcal U^i$ depicted in Section \ref{sec:formulation}, which are all convex, and consequently $\hat{\mathcal U}^i$ and hence $\hat{\mathcal U}$ are convex. FWSA works as follows. To avoid repetition we focus only on the minimization formulation in \eqref{generic}. First, pretending that $\nabla Z(\mathbf p)$ can be computed exactly, it iteratively updates a solution sequence $\mathbf p_1,\mathbf p_2,\ldots$ by, given a current solution $\mathbf p_k$, solving \begin{equation} \min_{\mathbf p\in\hat{\mathcal U}}\nabla Z(\mathbf p_k)'(\mathbf p-\mathbf p_k) \label{step optimization} \end{equation} Let the optimal solution to \eqref{step optimization} be $\mathbf q_k$. The quantity $\mathbf q_k-\mathbf p_k$ gives a feasible minimization direction starting from $\mathbf p_k$ (recall that $\hat{\mathcal U}$ is convex). This is then used to update $\mathbf p_k$ to $\mathbf p_{k+1}$ via $\mathbf p_{k+1}=\mathbf p_k+\epsilon_k(\mathbf q_k-\mathbf p_k)$ for some step size $\epsilon_k$. This expression can be rewritten as $\mathbf p_{k+1}=(1-\epsilon_k)\mathbf p_k+\epsilon_k\mathbf q_k$, which can be interpreted as a mixture of the distributions $\mathbf p_k$ and $\mathbf q_k$. When $\nabla Z(\mathbf p_k)$ is not exactly known, one can replace it by an empirical counterpart. Theorem \ref{prop:gradient} suggests that we can replace $\nabla Z(\mathbf p_k)$ by $\bm\psi(\mathbf p_k)$, and so the empirical counterpart of \eqref{step optimization} is \begin{equation} \min_{\mathbf p\in\hat{\mathcal U}}\hat{\bm\psi}(\mathbf p_k)'(\mathbf p-\mathbf p_k) \label{step optimization1} \end{equation} where $\hat{\bm\psi}(\mathbf p_k)$ is an estimator of $\bm\psi(\mathbf p_k)$ using a sample size $R_k$. 
Note that all components of $\hat{\bm\psi}(\mathbf p_k)$ can be obtained from these $R_k$ sample paths simultaneously. Letting $\hat{\mathbf q}_k$ be the optimal solution to \eqref{step optimization1}, the update rule will be $\mathbf p_{k+1}=(1-\epsilon_k)\mathbf p_k+\epsilon_k\hat{\mathbf q}_k$ for some step size $\epsilon_k$. The sample size $R_k$ at each step needs to grow suitably to compensate for the bias introduced in solving \eqref{step optimization1}. All these steps are summarized in Algorithm \ref{SCG}. \begin{algorithm} \label{proc} \caption{FWSA for solving \eqref{generic}} \textbf{Initialization: }$\mathbf p_1\in\mathcal P$ where $\mathbf p_1>\mathbf 0$. \textbf{Input: }Step size sequence $\epsilon_k$, sample size sequence $R_k$, $k=1,2,\ldots$. \textbf{Procedure: }For each iteration $k=1,2,\ldots$, given $\mathbf p_k$: \begin{algorithmic} \State \textbf{1. }Repeat $R_k$ times: Compute $$h(\mathbf X)s_j^i(\mathbf X^i)\text{\ \ \ \ for all $j=1,\ldots,n^i$ and $i=1,\ldots,m$}$$ using one sample path $\mathbf X=(\mathbf X^1,\ldots,\mathbf X^m)$, where $s_j^i(\mathbf X^i)=\sum_{t=1}^{T^i}I(X_t^i=y_j^i)/p_j^i-T^i$ for $j=1,\ldots,n^i$ and $i=1,\ldots,m$. Call these $R_k$ i.i.d. replications $\zeta_j^i(r)$, for $j=1,\ldots,n^i,\ i=1,\ldots,m,\ r=1,\ldots,R_k$. \State \textbf{2. } Estimate $\bm\psi(\mathbf p_k)$ by $$\hat{\bm\psi}(\mathbf p_k)=(\hat\psi_j^i(\mathbf p_k))_{i=1,\ldots,m,\ j=1,\ldots,n^i}=\left(\frac{1}{R_k}\sum_{r=1}^{R_k}\zeta_j^i(r)\right)_{i=1,\ldots,m,\ j=1,\ldots,n^i}.$$ \State \textbf{3. } Solve $\hat{\mathbf q}_k\in\text{argmin}_{\mathbf p\in\hat{\mathcal U}}\hat{\bm\psi}(\mathbf p_k)'(\mathbf p-\mathbf p_k)$. \State \textbf{4. } Update $\mathbf p_{k+1}=(1-\epsilon_k)\mathbf p_k+\epsilon_k\hat{\mathbf q}_k$. 
\end{algorithmic}\label{SCG} \end{algorithm} \subsection{Solving the Subproblem}\label{sec:subprogram} By \eqref{gradient equivalence} and the separability of the uncertainty set $\hat{\mathcal U}=\prod_{i=1}^m\hat{\mathcal U}^i$, the subproblem at each iteration can be written as \begin{equation} \min_{\mathbf q\in\hat{\mathcal U}}\sum_{i=1}^m\hat{\bm\psi}^i(\mathbf p)'(\mathbf q^i-\mathbf p^i)=\sum_{i=1}^m\min_{\mathbf q^i\in\hat{\mathcal U}^i}\hat{\bm\psi}^i(\mathbf p)'(\mathbf q^i-\mathbf p^i)\label{step multiple} \end{equation} where $\hat{\bm\psi}^i(\mathbf p)=(\hat\psi_j^i(\mathbf p))_{j=1,\ldots,n^i}$ is the empirical counterpart of $\bm\psi^i(\mathbf p)$ obtained in Algorithm \ref{SCG}. Hence \eqref{step multiple} can be solved by $m$ separate convex programs. The update step follows by taking $\mathbf p_{k+1}=\text{vec}(\mathbf p_{k+1}^i:i=1,\ldots,m)$, where $\mathbf p_{k+1}^i=(1-\epsilon_k)\mathbf p_k^i+\epsilon_k\hat{\mathbf q}_k^i$ and $\hat{\mathbf q}_k^i$ is the solution to the $i$-th separate program. The separate programs in \eqref{step multiple} can be efficiently solved for the uncertainty sets considered in Section \ref{sec:discretization}. To facilitate discussion, we denote a generic form of each separate program in \eqref{step multiple} as \begin{equation} \min_{\mathbf p^i\in\hat{\mathcal U}^i}\bm\xi'\mathbf p^i \label{step optimization2} \end{equation} for an arbitrary vector $\bm\xi=(\xi_j)_{j=1,\ldots,n^i}\in\mathbb R^{n^i}$. \\ \noindent\emph{Case 1 in Theorem \ref{sample thm}: Moment and support constraints. }Consider $\hat{\mathcal U}^i=\{\mathbf p^i\in\mathcal P_{n^i}:{\mathbf f_l^i}'\mathbf p^i\leq\mu_l^i,l=1,\ldots,s^i\}$ where $\mathbf f_l^i=(f_l^i(y_j^i))_{j=1,\ldots,n^i}\in\mathbb R^{n^i}$. Then \eqref{step optimization2} is a linear program. \\ \noindent\emph{Case 2 in Theorem \ref{sample thm}: $\phi$-divergence neighborhood. 
}Consider \begin{equation} \hat{\mathcal U}^i=\{\mathbf p^i\in\mathcal P_{n^i}:d_\phi(\mathbf p^i,\mathbf p_b^i)\leq\eta^i\}\label{phi uncertainty reformulated} \end{equation} where $\mathbf p_b^i=(p_{b,j}^i)_{j=1,\ldots,n^i}\in\mathcal P_{n^i}$ and $d_\phi(\mathbf p^i,\mathbf p_b^i)=\sum_{j=1}^{n^i}p_{b,j}^i\phi(p_j^i/p_{b,j}^i)$. We have: \begin{proposition} Consider \eqref{step optimization2} with $\hat{\mathcal U}^i$ presented in \eqref{phi uncertainty reformulated}, where $\mathbf p_b^i>\mathbf 0$. Let $\phi^*(t)=\sup_{x\geq0}\{tx-\phi(x)\}$ be the conjugate function of $\phi$, and define $0\phi^*(s/0)=0$ if $s\leq0$ and $0\phi^*(s/0)=+\infty$ if $s>0$. Solve the program \begin{equation} (\alpha^*,\lambda^*)\in\text{argmax}_{\alpha\geq0,\lambda\in\mathbb R}\left\{-\alpha\sum_{j=1}^{n^i}p_{b,j}^i\phi^*\left(-\frac{\xi_j+\lambda}{\alpha}\right)-\alpha\eta^i-\lambda\right\}\label{opt phi} \end{equation} An optimal solution $\mathbf q^i=(q_j^i)_{j=1,\ldots,n^i}$ for \eqref{step optimization2} is given as follows: \begin{enumerate} \item If $\alpha^*>0$, then \begin{equation} q_j^i=p_{b,j}^i\cdot \text{argmax}_{r\geq0}\left\{-\frac{\xi_j+\lambda^*}{\alpha^*}r-\phi(r)\right\}\label{opt phi1} \end{equation} \item If $\alpha^*=0$, then \begin{equation} q_j^i=\left\{\begin{array}{ll} \frac{p_{b,j}^i}{\sum_{k\in\mathcal M^i}p_{b,k}^i}&\text{\ for\ }j\in\mathcal M^i\\ 0&\text{\ otherwise} \end{array}\right.\label{opt phi2} \end{equation} \end{enumerate} where $\mathcal M^i=\text{argmin}_j\xi_j$ is the set of indices $j\in\{1,\ldots,n^i\}$ that attain the minimum $\xi_j$. \label{phisolution} \end{proposition} Problem \eqref{opt phi} is a two-dimensional convex optimization. Note that both the function $\phi^*$ and the solutions to the $n^i$ one-dimensional maximizations \eqref{opt phi1} have closed-form expressions for all common $\phi$-divergences (\cite{pardo2005statistical}). 
The proof of Proposition \ref{phisolution} follows closely \cite{ben2013robust} and is deferred to Appendix \ref{sec:proofs}. In the special case where $\phi(x)=x\log x-x+1$, i.e. the KL divergence, the solution scheme can be simplified to a one-dimensional root-finding problem. More precisely, we have \begin{proposition} Consider \eqref{step optimization2} with $\hat{\mathcal U}^i$ presented in \eqref{phi uncertainty reformulated}, where $\phi(x)=x\log x-x+1$ and $\mathbf p_b^i>\mathbf 0$. Denote $\mathcal M^i=\text{argmin}_j\xi_j$ as in Proposition \ref{phisolution}. An optimal solution $\mathbf q^i=(q_j^i)_{j=1,\ldots,n^i}$ for \eqref{step optimization2} is: \begin{enumerate} \item If $-\log\sum_{j\in\mathcal M^i}p_{b,j}^i\leq\eta^i$, then \begin{equation} q_j^i=\left\{\begin{array}{ll} \frac{p_{b,j}^i}{\sum_{k\in\mathcal M^i}p_{b,k}^i}&\text{\ for\ }j\in\mathcal M^i\\ 0&\text{\ otherwise} \end{array}\right.\label{opt2} \end{equation} \item If $-\log\sum_{j\in\mathcal M^i}p_{b,j}^i>\eta^i$, then \begin{equation} q_j^i=\frac{p_{b,j}^ie^{\beta\xi_j}}{\sum_{k=1}^{n^i}p_{b,k}^ie^{\beta\xi_k}}\label{opt1} \end{equation} for all $j$, where $\beta<0$ satisfies \begin{equation} \beta{\varphi_{\bm\xi}^i}'(\beta)-\varphi_{\bm\xi}^i(\beta)=\eta^i\label{root} \end{equation} Here $\varphi_{\bm\xi}^i(\beta)=\log\sum_{j=1}^{n^i}p_{b,j}^ie^{\beta\xi_j}$ is the logarithmic moment generating function of $\bm\xi$ under $\mathbf p_b^i$. \end{enumerate}\label{KLsolution} \end{proposition} The proof of Proposition \ref{KLsolution} follows from techniques in, e.g., \cite{hansen2008robustness}, and is deferred to Appendix \ref{sec:proofs}. \section{Theoretical Guarantees of FWSA} \label{sec:guarantees} This section establishes the convergence properties of our proposed FWSA. We first present results on almost sure convergence, followed by a local convergence rate analysis. 
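As a side note before the analysis, Proposition \ref{KLsolution} translates into a short deterministic routine: concentrate on the argmin set when the KL ball is large enough, and otherwise exponentially tilt the baseline with the tilt parameter found by root finding as in \eqref{root}. The sketch below is an illustration in Python; the bracketing and bisection scheme (exploiting that the left-hand side of \eqref{root} decreases in $\beta$ on $\beta<0$) is our implementation choice:

```python
import math

def kl_subproblem(xi, pb, eta):
    """Minimize xi'q over {q : KL(q || pb) <= eta} by exponential tilting:
    q_j proportional to pb_j * exp(beta * xi_j) with beta <= 0."""
    m = min(xi)
    M = [j for j, x in enumerate(xi) if x == m]          # argmin set
    PM = sum(pb[j] for j in M)
    if -math.log(PM) <= eta:                             # ball large enough:
        return [pb[j] / PM if j in M else 0.0            # mass on argmin set
                for j in range(len(xi))]

    def tilt(beta):
        # shift xi by its minimum for numerical stability of exp
        w = [p * math.exp(beta * (x - m)) for p, x in zip(pb, xi)]
        s = sum(w)
        return [wi / s for wi in w]

    def kl_gap(beta):                                    # KL(q_beta || pb)
        q = tilt(beta)
        return sum(qj * math.log(qj / pj) for qj, pj in zip(q, pb) if qj > 0)

    lo = -1.0
    while kl_gap(lo) < eta:                              # expand until eta is bracketed
        lo *= 2.0
    hi = 0.0
    for _ in range(200):                                 # bisection; kl_gap decreases in beta
        mid = 0.5 * (lo + hi)
        if kl_gap(mid) < eta:
            hi = mid
        else:
            lo = mid
    return tilt(0.5 * (lo + hi))
```

For instance, with a uniform baseline on three points and a small radius, the returned solution tilts mass toward the coordinates with small $\xi_j$ while keeping the KL divergence from the baseline exactly at the radius.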
Throughout our analysis we assume that the subproblem at any iteration can be solved using a deterministic optimization routine to within negligible error. \subsection{Almost Sure Convergence} An important object that we will use in our analysis is the so-called Frank-Wolfe (FW) gap (\cite{frank1956algorithm}): For any $\tilde{\mathbf p}\in\hat{\mathcal U}$, let $g(\tilde{\mathbf p})=-\min_{\mathbf p\in\hat{\mathcal U}}\bm\psi(\tilde{\mathbf p})'(\mathbf p-\tilde{\mathbf p})$, which is the negation of the optimal value of the next subproblem when the current solution is $\tilde{\mathbf p}$. Note that $g(\tilde{\mathbf p})$ is non-negative for any $\tilde{\mathbf p}\in\hat{\mathcal U}$, since one can always take $\mathbf p=\tilde{\mathbf p}$ in the definition of $g(\tilde{\mathbf p})$ to get a lower bound of 0. In the case of a convex objective function, it is well-known that $g(\tilde{\mathbf p})$ provides an upper bound on the actual optimality gap (\cite{frank1956algorithm}). However, we shall make no convexity assumption in our subsequent analysis, and will see that $g(\tilde{\mathbf p})$ still plays an important role in bounding the local convergence rate of our procedure under the conditions we impose. 
Our choices on the step size $\epsilon_k$ and sample size per iteration $R_k$ of the procedure are as follows: \begin{assumption} We choose $\epsilon_k,k=1,2,\ldots$ that satisfy $$\sum_{k=1}^\infty\epsilon_k=\infty\text{\ \ \ \ and\ \ \ \ }\sum_{k=1}^\infty\epsilon_k^2<\infty$$\label{tuning} \end{assumption} \begin{assumption} The sample sizes $R_k,k=1,2,\ldots$ are chosen such that $$\sum_{k=1}^\infty\frac{\epsilon_k}{\sqrt{R_k}}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}<\infty$$ where for convenience we denote $\prod_{j=1}^0(1-\epsilon_j)^{-1/2}=1$.\label{sample size tuning} \end{assumption} Note that among all $\epsilon_k$ of the form $c/k^\alpha$ for $c>0$ and $\alpha>0$, only $\alpha=1$ satisfies both Assumptions \ref{tuning} and \ref{sample size tuning} and simultaneously avoids super-polynomial growth in $R_k$ (recall that $R_k$ represents the simulation effort expended in iteration $k$, which can be expensive). To see this, observe that Assumption \ref{tuning} requires $\alpha\in(1/2,1]$. Now, if $\alpha<1$, then it is easy to see that $\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}$ grows faster than any polynomial, so that $R_k$ cannot be polynomial if Assumption \ref{sample size tuning} needs to hold. On the other hand, when $\alpha=1$, then $\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}$ grows at rate $\sqrt k$, and it is legitimate to choose $R_k$ growing at rate $k^\beta$ with $\beta>1$. Assumption \ref{tuning} is standard in the SA literature. The growing per-iteration sample size in Assumption \ref{sample size tuning} is needed to compensate for the bias caused by the subproblem in FWSA. Note that in standard SA, a solution update is obtained by moving in the gradient descent direction, and Assumption \ref{tuning} suffices if this direction is estimated unbiasedly. In FWSA, the subprogram introduces bias in the feasible direction despite the unbiasedness of the gradient estimate. 
The increasing simulation effort at each iteration is introduced to shrink this bias as the iteration proceeds. We also note that the expression $\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}$ in Assumption \ref{sample size tuning} is imposed to compensate for a potentially increasing estimation variance, due to the form of the gradient estimator in \eqref{score function} and \eqref{score function2}, which has $p_j^i$ in the denominator and can thus have a larger variance as the iteration progresses. We state our result on almost sure convergence in two parts. The first part only assumes the continuity of $g(\cdot)$. The second part assumes a stronger uniqueness condition on the optimal solution, stated as: \begin{assumption} There exists a unique minimizer $\mathbf p^*$ for $\min_{\mathbf p\in\hat{\mathcal U}}Z(\mathbf p)$. Moreover, $g(\cdot)$ is continuous over $\hat{\mathcal U}$ and $\mathbf p^*$ is the only feasible solution such that $g(\mathbf p^*)=0$.\label{main assumption} \end{assumption} In light of Assumption \ref{main assumption}, $g$ plays a similar role as the gradient in unconstrained problems. The condition $g(\mathbf p^*)=0$ in Assumption \ref{main assumption} is a simple implication of the optimality of $\mathbf p^*$ (since $g(\mathbf p^*)>0$ would imply the existence of a better solution). Our convergence result is: \begin{theorem} Suppose that $h(\mathbf X)$ is bounded a.s. and that Assumptions \ref{tuning}-\ref{sample size tuning} hold. We have the following properties of $\mathbf p_k$ generated in Algorithm \ref{SCG}: \begin{enumerate} \item Assume that $g(\cdot)$ is continuous and an optimal solution exists. 
Then $D(Z(\mathbf p_k),\mathcal Z^*)\to0$ a.s., where $\mathcal Z^*=\{Z(\mathbf p):\mathbf p\text{\ satisfies\ }g(\mathbf p)=0\}$ and $D(x,A)=\inf_{y\in A}\|x-y\|$ for any point $x$ and set $A$ in the Euclidean space.\label{as part1} \item Under Assumption \ref{main assumption}, $\mathbf p_k$ converges to $\mathbf p^*$ a.s.\label{as part2} \end{enumerate} \label{as} \end{theorem} Part \ref{as part1} of Theorem \ref{as} states that the objective value generated by Algorithm \ref{SCG} will eventually get close to an objective value evaluated at a point where the FW gap is zero. Part \ref{as part2} strengthens the convergence to the unique optimal solution $\mathbf p^*$ under Assumption \ref{main assumption}. In practice, this uniqueness condition may not hold, and we propose combining Algorithm \ref{SCG} with multi-start of the initial solution $\mathbf p_1$ as a remedy. Section \ref{expt:mltcls} and Appendix \ref{sec:multi-start} show some numerical results on this strategy. \subsection{Local Convergence Rate}\label{sec:local rate} We impose several additional assumptions. The first is a Lipschitz continuity condition on an optimal solution for the generic subproblem \eqref{step optimization2}, with respect to the coefficients in the objective, in a neighborhood of the gradient evaluated at $\mathbf p^*$. Denote $\mathbf v(\bm\xi)$ as an optimal solution of \eqref{step optimization2}. \begin{assumption} We have $$\|\mathbf v(\bm\xi_1)-\mathbf v(\bm\xi_2)\|\leq L\|\bm\xi_1-\bm\xi_2\|$$ for some $L>0$, for any $\bm\xi_1,\bm\xi_2\in\mathcal N_\Delta(\bm\psi(\mathbf p^*))$, where $\mathcal N_\Delta(\bm\psi(\mathbf p^*))$ denotes a Euclidean neighborhood of $\bm\psi(\mathbf p^*)$ with radius $\Delta$, and $\mathbf p^*$ is assumed to be the unique optimal solution for $\min_{\mathbf p\in\hat{\mathcal U}}Z(\mathbf p)$.\label{bias} \end{assumption} Next, we denote $\mathbf q(\tilde{\mathbf p})$ as an optimizer in the definition of the FW gap at $\tilde{\mathbf p}$, i.e. 
$\mathbf q(\tilde{\mathbf p})\in\text{argmin}_{\mathbf p\in\hat{\mathcal U}}\bm\psi(\tilde{\mathbf p})'(\mathbf p-\tilde{\mathbf p})$. \begin{assumption} $$g(\mathbf p)\geq c\|\bm\psi(\mathbf p)\|\|\mathbf q(\mathbf p)-\mathbf p\|$$ for any $\mathbf p\in\hat{\mathcal U}$, where $c>0$ is a small constant.\label{angle} \end{assumption} \begin{assumption} $$\|\bm\psi(\mathbf p)\|>\tau>0$$ for any $\mathbf p\in\hat{\mathcal U}$, for some constant $\tau$.\label{nonzero gradient} \end{assumption} Assumption \ref{angle} guarantees that the angle between the descent direction and the gradient is bounded away from $90^\circ$ uniformly at any point $\mathbf p$. This assumption has been used in the design and analysis of gradient descent methods for nonlinear programs that are singular (i.e. without assuming the existence of the Hessian matrix; \cite{bertsekas1999nonlinear}, Proposition 1.3.3). The non-zero gradient condition in Assumption \ref{nonzero gradient} effectively suggests that a local optimum must occur on the relative boundary of $\hat{\mathcal U}$ (i.e. the boundary with respect to the lower-dimensional subspace induced by the probability simplex constraint), which warrants further explanation. Note that the alternative scenario for local optimality is that it occurs in the interior region of the feasible set $\hat{\mathcal U}$. In the latter scenario, the gradient at the optimal solution is zero. While the convergence analysis can be simplified (and plausibly gives a better rate) in this scenario, its statistical implication is rather pathological. Note that our optimizations are imposed on decision variables that are input probability distributions. As discussed at the end of Section \ref{sec:gradient}, the gradient vector $\bm\psi(\mathbf p)$ is the influence function for the performance measure $Z(\cdot)$. 
If the influence function is zero, it is known that a Gaussian limit does not hold in the central limit theorem as the input sample size gets large (where the central limit theorem is on the difference between a simulation driven by empirical distributions and the truth). Instead, a $\chi^2$-limit occurs (\cite{serfling2009approximation}, Section 6.4.1, Theorem B). Such a limit is unusual and has never been reported in simulation analysis. Indeed, in all our experiments, the obtained local optimal solution is always at the boundary. For this reason we impose Assumption \ref{nonzero gradient} rather than a more straightforward zero-gradient type condition. The following are our main results on the convergence rate, first on the FW gap $g(\mathbf p_k)$ and then on the optimality gap $Z(\mathbf p_k)-Z(\mathbf p^*)$, in terms of the number of iterations $k$. Similar to the almost sure convergence analysis, we assume here that the deterministic routine for solving the subproblems can be carried out with high precision. \begin{theorem} Suppose $|h(\mathbf X)|\leq M$ for some $M>0$ and that Assumptions \ref{tuning}-\ref{nonzero gradient} hold. Additionally, set $$\epsilon_k=\frac{a}{k}\text{\ \ \ \ and\ \ \ \ }R_k=bk^\beta$$ when $k>a$, and take arbitrary $\epsilon_k<1$ when $k\leq a$. 
Given any $0<\varepsilon<1$, it holds that, with probability $1-\varepsilon$, there exists a large enough positive integer $k_0$ and small enough positive constants $\nu,\vartheta,\varrho$ such that $0<g(\mathbf p_{k_0})\leq\nu$, and for $k\geq k_0$, \begin{equation} g(\mathbf p_k)\leq\frac{A}{k^C}+B\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)k^\gamma}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}k^C}&\text{if $\gamma>C$}\\ \frac{\log((k-1)/(k_0-1))}{k^C}&\text{if $\gamma=C$} \end{array}\right.\label{main} \end{equation} where $$A=g(\mathbf p_{k_0})k_0^C,$$ $$B=\left(1+\frac{1}{k_0}\right)^C\left(a\varrho+\frac{2a^2\varrho K}{c\tau k_0}\left(\frac{\nu}{c\tau}+L\vartheta\right)\right)$$ and \begin{equation} C=a\left(1-\frac{2KL\vartheta}{c\tau}-\frac{2K\nu}{c^2\tau^2}\right)\label{C} \end{equation} Here the constants $L,c,\tau$ appear in Assumptions \ref{bias}, \ref{angle} and \ref{nonzero gradient} respectively. The sample size power $\beta$ needs to be chosen such that $\beta>2\gamma+a+1$. More precisely, the constants $a,b,\beta$ that appear in the specification of the algorithm, the other constants $k_0,\vartheta,\varrho,\gamma,K$, and two new constants $\rho>1$ and $\delta>0$ are chosen to satisfy Conditions \ref{c1}-\ref{c8} listed in Appendix \ref{sec:proofs}. \label{rate thm} \end{theorem} \begin{corollary} Suppose that all the assumptions are satisfied and all the constants are chosen as indicated in Theorem \ref{rate thm}. 
Then with probability $1-\varepsilon$, there exists a large enough positive integer $k_0$ and small enough positive constants $\nu,\vartheta,\varrho$ such that $0\leq g(\mathbf p_{k_0})\leq\nu$, and for $k\geq k_0$, \begin{equation} Z(\mathbf p_k)-Z(\mathbf p^*)\leq\frac{D}{k-1}+\frac{E}{(k-1)^C}+F\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)\gamma(k-1)^\gamma}&\text{\ if\ }0<\gamma<C\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}C(k-1)^C}&\text{\ if\ }\gamma>C\\ \frac{\log((k-1)/(k_0-1))}{C(k-1)^C}&\text{\ if\ }\gamma=C \end{array}\right.\label{main1} \end{equation} where $$D=a^2K,\ \ E=\frac{aA}{C},\ \ F=aB$$ and $a,A,B,C,K$ are the same constants as in Theorem \ref{rate thm}.\label{rate cor} \end{corollary} A quick summary extracted from Theorem \ref{rate thm} and Corollary \ref{rate cor} is the following: Consider the local convergence rate measured by workload, i.e. the number of simulation replications. To achieve the most efficient rate, approximately speaking, $a$ should be chosen to be $1+\omega$ and $\beta$ chosen to be $5+\zeta+\omega$ for some small $\omega,\zeta>0$. The local convergence rate is then $O(W^{-1/(6+\zeta+\omega)})$ where $W$ is the total number of simulation replications. Note that the bounds in Theorem \ref{rate thm} and Corollary \ref{rate cor} are local asymptotic statements since they only hold starting from $k\geq k_0$ and $g(\mathbf p_k)\leq\nu$ for some large $k_0$ and small $\nu$. It should be cautioned that they do not say anything about the behavior of the algorithm before reaching the small neighborhood of $\mathbf p^*$ as characterized by $0\leq g(\mathbf p_{k_0})\leq\nu$. 
The above summary therefore should be interpreted in the way that, given that the algorithm has already run $k_0$ iterations and $g(\mathbf p_k)\leq\nu$ for a suitably small $\nu$ (which occurs with probability 1 by Theorem \ref{as}), the convergence rate of $O(W^{-1/(6+\zeta+\omega)})$ for the optimality gap is guaranteed with probability $1-\varepsilon$ starting from that point. The summary above is derived based on the following observations: \begin{enumerate} \item The local convergence rate of the optimality gap, in terms of the number of iterations $k$, is at best $O(1/k^{C\wedge\gamma\wedge1})$. This is seen from \eqref{main1}. \item We now consider the convergence rate in terms of simulation replications. Note that at iteration $k$, the cumulative number of replications is of order $\sum_{j=1}^kj^\beta\approx k^{\beta+1}$. Thus from Point 1 above, the convergence rate of the optimality gap in terms of replications is of order $1/W^{(C\wedge\gamma\wedge1)/(\beta+1)}$. \item The constants $C$ and $\gamma$ respectively depend on $a$, the constant factor in the step size, and $\beta$, the polynomial growth rate of the sample size, as follows: \begin{enumerate} \item \eqref{C} defines $C=a(1-2KL\vartheta/(c\tau)-2K\nu/(c^2\tau^2))$. For convenience, we let $\omega=2KL\vartheta/(c\tau)+2K\nu/(c^2\tau^2)$, and so $C=a(1-\omega)$. \item From Condition \ref{c5} in Theorem \ref{rate thm} (shown in Appendix \ref{sec:proofs}), we have $\beta=2\gamma+\rho a+2+\zeta$ for some $\zeta>0$. In other words $\gamma=(\beta-\rho a-\zeta-2)/2$. \end{enumerate} \item Therefore, the convergence rate in terms of replications is $1/W^{((a(1-\omega))\wedge((\beta-\rho a-\zeta-2)/2)\wedge1)/(\beta+1)}$. 
Let us focus on maximizing \begin{equation} \frac{(a(1-\omega))\wedge((\beta-\rho a-\zeta-2)/2)\wedge1}{\beta+1}\label{index} \end{equation} over $a$ and $\beta$, whose solution is given by the following lemma: \begin{lemma} The maximizer of \eqref{index} is given by $$a=\frac{1}{1-\omega},\ \ \beta=\frac{\rho}{1-\omega}+\zeta+4$$ and the optimal value is $$\frac{1}{\rho/(1-\omega)+\zeta+5}$$\label{index max} \end{lemma} The proof is in Appendix \ref{sec:proofs}. With Lemma \ref{index max}, let us choose $\vartheta$ and $\nu$, and hence $\omega$, to be small. We also choose $\rho$ to be close to 1. (Unfortunately, these choices can lead to a small neighborhood around $\mathbf p^*$ in which the convergence rate holds.) This gives rise to the approximate choices $a\approx1+\omega$ and $\beta\approx5+\zeta+\omega$. The convergence rate is then $O(W^{-1/(6+\zeta+\omega)})$. \end{enumerate} We compare our results to some recent work in stochastic FW. \cite{hazan2016variance} showed that to achieve $\epsilon$ error in terms of the optimality gap one needs $O(1/\epsilon^{1.5})$ calls to the gradient estimation oracle, when the objective function is strongly convex. \cite{reddi2016stochastic} showed that the number needed increases to $O(1/\epsilon^4)$ for non-convex objectives, and suggested several more sophisticated algorithms to improve the rate. Corollary \ref{rate cor} and our discussion above suggest that we need a sample size of $O(1/\epsilon^{6+\zeta+\omega})$, for some small $\zeta,\omega>0$, a rate that is inferior to the one achieved in \cite{reddi2016stochastic}.
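The maximization in Lemma \ref{index max} is easy to check numerically. The following sketch (in Python, with hypothetical values for the problem-dependent constants $\omega$, $\rho$, $\zeta$) compares the closed-form maximizer against a crude grid search over $(a,\beta)$:

```python
import numpy as np

# Hypothetical small constants; in the paper these depend on the problem instance.
omega, rho, zeta = 0.05, 0.9, 0.1

def rate_exponent(a, beta):
    """The quantity (a(1-omega)) ^ ((beta-rho*a-zeta-2)/2) ^ 1, divided by beta+1."""
    return min(a * (1 - omega), (beta - rho * a - zeta - 2) / 2, 1.0) / (beta + 1)

# Closed-form maximizer claimed by the lemma.
a_star = 1 / (1 - omega)
beta_star = rho / (1 - omega) + zeta + 4
val_star = 1 / (rho / (1 - omega) + zeta + 5)

# The closed form attains its claimed value ...
assert abs(rate_exponent(a_star, beta_star) - val_star) < 1e-12

# ... and a crude grid search does not beat it.
best = max(rate_exponent(a, b)
           for a in np.linspace(0.1, 5, 200)
           for b in np.linspace(2.2, 10, 200))
assert best <= val_star + 1e-9
```

The grid search is only a sanity check; the analytic argument is that the numerator is capped at 1, so the smallest $\beta$ achieving numerator 1 maximizes $1/(\beta+1)$.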
However, \cite{reddi2016stochastic} has assumed that the gradient estimator is uniformly bounded over the feasible space, a condition known as $G$-Lipschitz (Theorem 2 in \cite{reddi2016stochastic}), which does not hold in our case due to the presence of $p_j^i$ in the denominator in \eqref{score function2}, which leads to a potentially increasing estimation variance as the iteration progresses. This complication motivates our sample size and step size sequences depicted in Assumption \ref{sample size tuning} and the subsequent analysis. On the other hand, if Assumption \ref{bias} is relaxed to hold for any $\bm\xi_1,\bm\xi_2\in\mathbb R^N$, it can be seen that by choosing $\beta\approx3+\zeta+\omega$ our complexity improves to $O(1/\epsilon^{4+\zeta+\omega})$, which almost matches the one in \cite{reddi2016stochastic} (see Remark \ref{remark:rate} in Appendix \ref{sec:proofs}). However, such a relaxed condition would not hold if the constraints are linear, because the optimal solutions of the subproblems are located at the corner points and will jump from one to the other under perturbation of the objective function. \section{Numerical Experiments} \label{sec:numerics} \newcommand{\DefAs}{\mbox{$\,\stackrel{\bigtriangleup}{=}\,$}} \newcommand{\ubar}[1]{\underline{#1}} \newcommand{\obar}[1]{{\overline{#1}}} This section describes two sets of numerical experiments. The first set (Section \ref{expt:mltcls}) studies the performance guarantees from Section \ref{sec:discretization} regarding our randomized discretization strategy and the tightness of the bounds coming from moment constraints.
The second set of experiments (Section \ref{sec:gg1example}) studies the numerical convergence of FWSA. The appendix provides additional details and results. Unless specified otherwise, in all experiments we terminate the FWSA algorithm at iteration $k$ if at least one of the following criteria is met (as an indication that the convergence studied in Section \ref{sec:guarantees} is attained): \begin{itemize} \item The cumulative simulation replications $W_k$ reaches $5\times 10^8$, or \item The relative difference between the objective value $Z({\mathbf{p}}_k)$ and the average of the observed values in the $30$ previous iterations, $(\sum_{v=1}^{30} Z({\mathbf{p}}_{k-v}) )/30$, is below $5\times 10^{-5}$, or \item The gradient estimate $\hat{\bm\psi}({\mathbf{p}}_k)$ has an $l_2$-norm smaller than $1\times 10^{-3}$. \end{itemize} \subsection{Performance Bounds for Multiple Continuous and Unbounded Input Models}\label{expt:mltcls} We use the example of a generic multi-class $M/G/1$ queue where jobs from three distinct classes arrive and are attended to by one server. Such structures are common in service systems such as call centers. Let $\mathbf P = \{P^1, P^2, P^3\}$ represent all the constituent probability measures, where each $P^i = \{P^{i,j}\},\,\,i=1,2,3$ with $j=1$ for interarrival and $j=2$ for service, denotes the joint measure of the interarrival and service distributions of jobs of class $i$. The performance measure of interest is the weighted average waiting time: \begin{equation} Z(\mathbf P) = E_{\mathbf P} \left[ \sum_{i=1}^3 \left(c_i\,\,\frac 1 {T^i} \sum_{t=1}^{T^i} W_t^{i}\right) \right], \end{equation} where the average is observed up to a (fixed) $T^i=500$ customers of class $i$ and $c_i$ is the cost assigned to its waiting times. Jobs within each class are served on a first-come-first-served basis.
The server uses a fixed priority ordering based on the popular $c\mu$ rule (\cite{kleinrockv2}), which prioritizes classes for the next service in decreasing order of the product of $c_i$ and the mean service rate $\mu^i$ of class $i$ (as discussed momentarily, the $\mu^i$'s are unknown, so we fix a specific guess throughout this example). To handle the uncertainty in specifying the interarrival and service time distributions of each class (due to, e.g., the novelty of the service operation with little pre-existing data), we use the uncertainty set based on moment constraints on the $P^i$: \begin{equation} \mathcal U=\prod_i \mathcal U^i,\,\,\,\text{where}\,\,\mathcal U^i = \{P^i:\ubar{\mu}^{i,j}_l \leq E_{P^i}[(X^{i,j})^l]\leq \obar{\mu}^{i,j}_l,\,\,l=1,2,\,\,j=1,2\}\label{indep moment set} \end{equation} where the index $l=1,2$ represents the first two moments of the marginals $P^{i,j}$. This set is motivated by queueing theory, which suggests that mean system responses depend on the means and variances of the input distributions. The moment bounds $\ubar{\mu}^{i,j}_l$ and $\obar{\mu}^{i,j}_l$ can be specified from prior or expert opinion. Here, to test the information value with respect to the accuracy of the moments, we specify the bounds from a confidence interval on the corresponding moments calculated from $N_s$ synthetically generated observations for each $i,j$. For example, \[\obar{\mu}^{i,j}_l = \hat{\mu}^{i,j}_l + t_{\alpha/2,N_s-1} \hat{\sigma}^{i,j}_l / \sqrt{N_s}, \] where $t_{\alpha/2,N_s-1}$ is the $(1-\alpha/2)$-quantile of the Student-t distribution with $N_s-1$ degrees of freedom, $\hat{\mu}^{i,j}_l$ is the empirical $l$-th moment and $\hat{\sigma}^{i,j}_l$ is the associated sample standard deviation as observed from the $N_s$ data points.
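The moment-bound construction above can be sketched as follows. This is a minimal illustration in Python; the synthetic data, sample size $N_s$, and confidence level are stand-ins, and `scipy.stats.t.ppf` supplies the Student-t quantile $t_{\alpha/2,N_s-1}$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical synthetic data standing in for the N_s observations of one (i,j) pair;
# the true interarrival distribution in the example is exponential with rate 0.5.
N_s, alpha = 500, 0.05
x = rng.exponential(scale=2.0, size=N_s)  # rate 0.5 -> mean 2

def moment_bounds(x, l, alpha):
    """Confidence bounds on the l-th moment E[X^l], following the formula in the text."""
    z = x ** l
    mu_hat = z.mean()                       # empirical l-th moment
    sigma_hat = z.std(ddof=1)               # sample standard deviation of X^l
    half_width = stats.t.ppf(1 - alpha / 2, df=len(z) - 1) * sigma_hat / np.sqrt(len(z))
    return mu_hat - half_width, mu_hat + half_width

lo1, hi1 = moment_bounds(x, 1, alpha)       # bounds on the mean
lo2, hi2 = moment_bounds(x, 2, alpha)       # bounds on the second moment
assert lo1 < hi1 and lo2 < hi2
```

As $N_s$ grows the half-width shrinks like $1/\sqrt{N_s}$, which is the mechanism behind the interval tightening observed in the experiments below.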
Suppose that the true marginal distribution of interarrival times for each class is exponential with rate $0.5$ and the true service distributions of the three classes are exponential with rates $2.25$, $2.0$ and $1.75$, respectively, yielding an overall traffic intensity of $0.75$. The FWSA algorithm is run by first sampling a discrete approximate support from bivariate independent-marginal lognormal distributions as representative of each $P^i$, with support size $n=50, 100, 250$ (we assume the support size corresponding to each distribution $P^i$ is equal to $n$). Theorem \ref{sample thm} suggests that selecting lognormal distributions is reasonable if the modeler conjectures that the true distributions are light-tailed. Here we set the means and standard deviations of the lognormals to $1$. The parameter $n$ should ideally be large to minimize discretization error, but at the cost of slowing down the FWSA algorithm. \begin{figure} \caption{lognormal for discretization}\label{fig:mm13lgnml} \caption{exponential for discretization}\label{fig:mm13exp} \caption{The range from max to min worst-case objectives when $N_s$ and $n$ vary as indicated.}\label{fig:mm13perf} \end{figure} Figure~\ref{fig:mm13lgnml} shows the output of our approach over various $n$ and $N_s$ to illustrate the effect of discretization and the accuracy of moment information on the tightness of our bounds. The true steady-state performance measure of the multiclass $M/M/1$ system, available in explicit form (\cite{kleinrockv2}), is indicated as the dotted line in each plot. The bounds provided by our method are all seen to cover the true performance value when $n\geq45$. This is predicted by Theorem \ref{sample thm} as the moment constraints are all correctly calibrated (i.e. contain the true moments) in this example.
Moreover, as predicted by discussion point 5 in Section \ref{sec:randomized}, the obtained intervals widen as $n$ increases, since the expansion of support size enlarges the feasible region. On the other hand, the intervals shrink as $N_s$ increases, since this tightens the moment constraints and consequently reduces the feasible region. The results do not appear very sensitive to the support size in this example. Thus, taking into account the optimization efficiency, a support size of about 45 points appears sufficient. Figure~\ref{fig:mm13exp} plots the performance when the supports of the distributions are sampled from the true distributions. The performance trends are similar to Figure~\ref{fig:mm13lgnml}. However, the obtained bounds are slightly looser. Note that Theorem \ref{sample thm} guarantees that the obtained bounds under the generated support points cover the truth with high confidence, when the generating distributions satisfy the heavier-tail condition. In this example, both the lognormal and the exponential distributions (the latter being the truth) satisfy these conditions and lead to correct bounds. On the other hand, the tightness of the bounds, which is not captured in Theorem \ref{sample thm}, depends on the size and geometry of the feasible region, which is determined by a complex interplay between the choice of the uncertainty set and the support-generating distributions. The feasible region using the true exponential distributions includes probability weights that are close to uniform weights (since the moment constraints are calibrated using the same distribution). The region using the lognormal, however, does not contain such weights; in fact, when $N_s=500$, the resulting optimizations can be infeasible for $n\leq60$, signaling the need to use more support-generating samples, whereas they are always feasible using the exponential, whose values are shown in the rightmost set of intervals in Figure~\ref{fig:mm13exp}.
The results above are implemented with an initialization that assigns equal probabilities to the support points. Appendix \ref{sec:multi-start} shows results under different initializations to provide evidence that the formulation in this example has a unique global optimal solution or similar local optimal solutions. \subsection{Convergence of FWSA and Worst-case Input Distributions} \label{sec:gg1example} We test the numerical convergence of FWSA. The key parameters in the algorithm are the sample-size growth rate $\beta$ and the step-size constant $a$. Varying these two parameters, we empirically test the rate of convergence of the FW gap to zero analyzed in Theorem~\ref{rate thm}, and of the objective function $Z({\mathbf{p}}_k)$ to the true optimal value $Z({\mathbf{p}}^*)$ analyzed in Corollary~\ref{rate cor}. We also investigate the magnitude of the optimal objective value and the form of the identified optimal solution. Here we consider an $M/G/1$ queue where the arrival process is Poisson with rate known, to high accuracy, to be $1$. On the other hand, the service time $X_t$ for the $t$-th customer is uncertain but assumed i.i.d. A simulation model is used to estimate the expected long-run average of the waiting times $Z({\mathbf{p}})=E_{{\mathbf{p}}}[h({\mathbf X})]$, where \[ h(\mathbf X)=\frac 1 T \sum_{t=1}^T W_t \] and $W_t$ is the waiting time obtained from Lindley's recursion. We test our FWSA with a KL-divergence-based uncertainty set for $X_t$: \begin{equation} \hat{\mathcal U}=\left\{\mathbf p:\sum_{j=1}^n p_j \log \left(\frac {p_j}{p_{b,j}} \right) \le \eta\right\}\label{mg1prob} \end{equation} where $\mathbf p_b=(p_{b,j})_{j=1,\ldots,n}$ is a baseline model chosen to be a discretized mixture of beta distributions given by $0.3\times\text{Beta}(2,6)+0.7\times\text{Beta}(6,2)$. The discrete supports are obtained by uniformly discretizing the interval $[0,1]$ into $n$ points, i.e., $y_j=(j+1)/n$.
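The ingredients of this test problem, the discretized beta-mixture baseline, the KL feasibility check in \eqref{mg1prob}, and the waiting-time sample performance $h(\mathbf X)$ via Lindley's recursion, can be sketched as follows. This is a simplified single-class illustration in Python; the grid convention $y_j=j/n$ and the arrival rate are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Discretized baseline: mixture 0.3*Beta(2,6) + 0.7*Beta(6,2) on a grid of [0,1].
n, T, eta = 100, 500, 0.025
y = np.arange(1, n + 1) / n                              # support points (grid assumed)
dens = 0.3 * stats.beta.pdf(y, 2, 6) + 0.7 * stats.beta.pdf(y, 6, 2)
p_b = dens / dens.sum()                                  # baseline weights p_b

def kl(p, q):
    """KL divergence sum_j p_j log(p_j/q_j), with the convention 0*log 0 = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

assert kl(p_b, p_b) <= eta        # the baseline itself is feasible (divergence 0)

def waiting_time_avg(p, lam=1.0):
    """One replication of h(X): average waiting time of T customers, Lindley's recursion."""
    service = rng.choice(y, size=T, p=p)
    interarr = rng.exponential(1 / lam, size=T)
    W = np.zeros(T)
    for t in range(1, T):
        W[t] = max(0.0, W[t - 1] + service[t - 1] - interarr[t])
    return W.mean()

est = np.mean([waiting_time_avg(p_b) for _ in range(20)])  # crude estimate of Z(p_b)
assert est >= 0.0
```

Under the baseline the mean service time is about $0.6$, so the queue is stable and the waiting-time average is finite; FWSA then searches over the feasible $\mathbf p$ to maximize this objective.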
The set \eqref{mg1prob} provides a good testing ground because steady-state analysis allows obtaining an approximate optimal solution directly, which serves as a benchmark for verifying the convergence of our FWSA algorithm (see Appendix \ref{sec:numerics appendix} for further details of this approximate optimal solution). \begin{figure} \caption{small $a$, $\beta$ varied as shown}\label{fig:mg1_small_a} \caption{$a=1$, $\beta$ varied as shown}\label{fig:mg1_a_1} \caption{$\beta=3.1$, $a$ varied}\label{fig:mg1_beta_3} \caption{Frank-Wolfe gap vs iteration count}\label{fig:mg1fwgap} \caption{Figs.~\ref{fig:mg1_small_a}--\ref{fig:mg1fwgap}: performance of FWSA under different choices of $a$ and $\beta$.}\label{fig:mg1perf} \end{figure} Figure~\ref{fig:mg1perf} captures the performance of our FWSA algorithm as a function of the $a$ and $\beta$ parameters. Figures~\ref{fig:mg1_small_a}--\ref{fig:mg1_beta_3} plot the (approximate) optimality gap as a function of the cumulative simulation replications $W_k$ for the maximization problem under~\eqref{mg1prob}. We set the parameters $\eta=0.025$, $n=100$ and $T=500$. Figures~\ref{fig:mg1_small_a},~\ref{fig:mg1_a_1} and~\ref{fig:mg1_beta_3} provide further insights into the actual observed finite-sample performance (when interpreting these graphs, note that they are plotted in log-log scale; thus, roughly speaking, the slope of the curve represents the power of the cumulative samples whereas the intercept represents the multiplicative constant in the rate): \begin{itemize} \item {\em Fig.~\ref{fig:mg1_small_a} vs.~\ref{fig:mg1_a_1}--\ref{fig:mg1_beta_3}:} Convergence is much slower when $a<1$, regardless of the value of $\beta$. \item {\em Fig.~\ref{fig:mg1_a_1}:} For $a>1$, convergence is again slow if $\beta >4$. \item {\em Fig.~\ref{fig:mg1_a_1}:} For $a$ slightly greater than $1$, the convergence rates are similar for $\beta\in [2.75, 3.25]$, with better performance for the lower end.
\item {\em Fig.~\ref{fig:mg1_beta_3}:} For $\beta=3.1$, the rate of convergence generally improves as $a$ increases in the range $[1.10,2.75]$. \item {\em Figs.~\ref{fig:mg1_small_a},~\ref{fig:mg1_a_1} and~\ref{fig:mg1_beta_3}:} The approximation $Z^*_{\infty}$ of the true $Z({\mathbf{p}}^*)$ (from {\bf (SS)} in Appendix \ref{sec:numerics appendix}) has an error of about $0.006$ for the chosen $T$, as observed by the leveling off of all plots around this value as the sampling effort grows. \end{itemize} Figure~\ref{fig:mg1fwgap} shows the FW gap as a function of the iteration count. In general, the sample paths with similar $\beta$ are clustered together, indicating that more effort expended in estimating the gradient at each iterate leads to a faster drop in the FW gap per iteration. Within each cluster, performance is inferior when $a<1$, consistent with Theorem~\ref{rate thm}. Since most runs terminate when the maximum allowed budget of simulation replications is expended, the end points of the curves indicate that a combination of $a\geq1$ and a $\beta$ of around $3$ achieves the best finite-sample performance in terms of the FW gap. These choices are consistent with the discussion at the end of Section \ref{sec:local rate} when Assumption \ref{bias} is relaxed to hold for any $\bm\xi_1,\bm\xi_2\in\mathbb R^N$. We provide further discussion on the shape of the obtained optimal distributions in Appendix \ref{sec:numerics appendix1}. \section{Conclusion}\label{conclusion} In this paper we investigated a methodology based on worst-case analysis to quantify input errors in stochastic simulation, by using optimization constraints to represent the partial nonparametric information on the model. The procedure involved a randomized discretization of the support and running FWSA using a gradient estimation technique akin to a nonparametric version of the likelihood ratio or the score function method.
We studied the statistical guarantees of the discretization and convergence properties of the proposed FWSA. We also tested our method and verified the theoretical implications on queueing examples. We suggest several lines of future research. First is the extension of the methodology to dependent models, such as Markovian inputs or more general time series inputs, which would involve new sets of constraints in the optimizations. Second is the design and analysis of alternative numerical procedures and comparisons with the proposed method. Third is the utilization of the proposed worst-case optimizations in various classes of decision-making problems. \section*{Acknowledgments} We thank the Area Editor, the Associate Editor and the three referees for many helpful suggestions that have greatly improved the paper. We gratefully acknowledge support from the National Science Foundation under grants CMMI-1542020, CMMI-1523453 and CAREER CMMI-1653339. \ECSwitch \ECHead{Appendix} \section{Technical Proofs}\label{sec:proofs} \proof{Proof of Theorem \ref{sample thm}.} Let $\delta(y)$ be the delta measure at $y$. For each $i=1,\ldots,m$, define $$\tilde P^i=\sum_{j=1}^{n^i}\frac{L^i(y_j^i)}{\sum_{r=1}^{n^i}L^i(y_r^i)}\delta(y_j^i)$$ i.e., the distribution with point mass $L^i(y_j^i)/\sum_{r=1}^{n^i}L^i(y_r^i)$ on each $y_j^i$, where $L^i=dP_0^i/dQ^i$. We first show that as $n\to\infty$, the solution $(\tilde P^i)_{i=1,\ldots,m}$ is feasible for the optimization problems in \eqref{sample counterpart} in an appropriate sense. Consider Case 1. For each $l=1,\ldots,s^i$, by a change of measure we have $E_{Q^i}|f_l^i(X^i)L^i(X^i)|=E_{P_0^i}|f_l^i(X^i)|$, which is finite by our assumption. Also note that $E_{Q^i}L^i=1$.
Therefore, by the law of large numbers, $$E_{\tilde P^i}[f_l^i(X^i)]=\frac{\sum_{j=1}^{n^i}L^i(y_j^i)f_l^i(y_j^i)}{\sum_{j=1}^{n^i}L^i(y_j^i)}=\frac{(1/n^i)\sum_{j=1}^{n^i}L^i(y_j^i)f_l^i(y_j^i)}{(1/n^i)\sum_{j=1}^{n^i}L^i(y_j^i)}\to E_{Q^i}[f_l^i(X^i)L^i(X^i)]\text{\ \ a.s.}$$ Since $E_{Q^i}[f_l^i(X^i)L^i(X^i)]=E_{P_0^i}[f_l^i(X^i)]<\mu_l^i$ by our assumption, we have $E_{\tilde P^i}[f_l^i(X^i)]\leq\mu_l^i$ eventually as $n^i\to\infty$. Consider Case 2. We have \begin{eqnarray*} d_\phi(\tilde P^i,\hat P_b^i) &=&\sum_{j=1}^{n^i}\phi\left(\frac{L^i(y_j^i)/\sum_{r=1}^{n^i}L^i(y_r^i)}{L_b^i(y_j^i)/\sum_{r=1}^{n^i}L_b^i(y_r^i)}\right)\frac{L_b^i(y_j^i)}{\sum_{r=1}^{n^i}L_b^i(y_r^i)}\notag\\ &=&\frac{1}{n^i}\sum_{j=1}^{n^i}\phi\left(\tilde L^i(y_j^i)\frac{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}{(1/n^i)\sum_{r=1}^{n^i}L^i(y_r^i)}\right)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)} \end{eqnarray*} where $\tilde L^i=dP_0^i/dP_b^i$. Consider, for a given $\epsilon>0$, \begin{eqnarray} &&P(|d_\phi(\tilde P^i,\hat P_b^i)-d_\phi(P_0^i,P_b^i)|>\epsilon)\notag\\ &=&P\left(\left|\frac{1}{n^i}\sum_{j=1}^{n^i}\phi\left(\tilde L^i(y_j^i)\frac{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}{(1/n^i)\sum_{r=1}^{n^i}L^i(y_r^i)}\right)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}-d_\phi(P_0^i,P_b^i)\right|>\epsilon\right)\notag\\ &\leq&P\left(\left|\frac{1}{n^i}\sum_{j=1}^{n^i}\phi\left(\tilde L^i(y_j^i)\frac{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}{(1/n^i)\sum_{r=1}^{n^i}L^i(y_r^i)}\right)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}-\frac{1}{n^i}\sum_{j=1}^{n^i}\phi(\tilde L^i(y_j^i))\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}\right|>\frac{\epsilon}{2}\right){}\notag\\ &&+P\left(\left|\frac{1}{n^i}\sum_{j=1}^{n^i}\phi(\tilde L^i(y_j^i))\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}-d_\phi(P_0^i,P_b^i)\right|>\frac{\epsilon}{2}\right)\label{interim revised} \end{eqnarray} We analyze the two terms in \eqref{interim revised}.
For any sufficiently small $\lambda>0$, the first term is bounded from above by \begin{eqnarray} &&P\Bigg(\left|\frac{1}{n^i}\sum_{j=1}^{n^i}\phi\left(\tilde L^i(y_j^i)\frac{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}{(1/n^i)\sum_{r=1}^{n^i}L^i(y_r^i)}\right)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}-\frac{1}{n^i}\sum_{j=1}^{n^i}\phi(\tilde L^i(y_j^i))\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}\right|>\frac{\epsilon}{2}{}\notag\\ &&{};\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L^i(y_r^i)-1\right|\leq\lambda,\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L_b^i(y_r^i)-1\right|\leq\lambda\Bigg){}\notag\\ &&{}+P\Bigg(\left|\frac{1}{n^i}\sum_{j=1}^{n^i}\phi\left(\tilde L^i(y_j^i)\frac{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}{(1/n^i)\sum_{r=1}^{n^i}L^i(y_r^i)}\right)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}-\frac{1}{n^i}\sum_{j=1}^{n^i}\phi(\tilde L^i(y_j^i))\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}\right|>\frac{\epsilon}{2}{}\notag\\ &&{};\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L^i(y_r^i)-1\right|>\lambda\text{\ or\ }\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L_b^i(y_r^i)-1\right|>\lambda\Bigg)\notag\\ &\leq&P\left(\frac{1}{n^i}\sum_{j=1}^{n^i}(|\phi(\tilde L^i(y_j^i))|+1)O(\lambda)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}>\frac{\epsilon}{2};\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L^i(y_r^i)-1\right|\leq\lambda,\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L_b^i(y_r^i)-1\right|\leq\lambda\right){}\notag\\ &&{}+P\left(\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L^i(y_r^i)-1\right|>\lambda\text{\ or\ }\left|\frac{1}{n^i}\sum_{r=1}^{n^i}L_b^i(y_r^i)-1\right|>\lambda\right)\label{interim revised1} \end{eqnarray} where the first term in the last inequality follows from the continuity condition on $\phi$, with $O(\lambda)$ being a deterministic positive function of $\lambda$ that converges to 0 as $\lambda\to0$. 
This first term is further bounded from above by \begin{equation} P\left(\frac{1}{n^i}\sum_{j=1}^{n^i}(|\phi(\tilde L^i(y_j^i))|+1)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}O(\lambda)>\frac{\epsilon}{2}\right)\label{interim revised2} \end{equation} By the law of large numbers, we have $$\frac{1}{n^i}\sum_{j=1}^{n^i}(|\phi(\tilde L^i(y_j^i))|+1)L_b^i(y_j^i)\to E_{Q^i}[(|\phi(\tilde L^i(X^i))|+1)L_b^i(X^i)]=E_{P_b^i}|\phi(\tilde L^i(X^i))|+1\text{\ \ a.s.}$$ by using our assumption $E_{P_b^i}|\phi(\tilde L^i(X^i))|<\infty$. Moreover, by the law of large numbers again, we have $(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)\to1$ a.s.. Thus, $$\frac{1}{n^i}\sum_{j=1}^{n^i}(|\phi(\tilde L^i(y_j^i))|+1)\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}\to E_{P_b^i}|\phi(\tilde L^i(X^i))|+1\text{\ \ a.s.}$$ When $\lambda$ is chosen small enough relative to $\epsilon/2$, we have \eqref{interim revised2} go to 0 as $n^i\to\infty$. Since both $\frac{1}{n^i}\sum_{r=1}^{n^i}L^i(y_r^i)\to1$ and $\frac{1}{n^i}\sum_{r=1}^{n^i}L_b^i(y_r^i)\to1$ a.s., the second term in \eqref{interim revised1} also goes to 0 as $n^i\to\infty$. This concludes that the first term in \eqref{interim revised} goes to 0. For the second term in \eqref{interim revised}, note that $$\frac{1}{n^i}\sum_{j=1}^{n^i}\phi(\tilde L^i(y_j^i))L_b^i(y_j^i)\to E_{Q^i}[\phi(\tilde L^i(X^i))L_b^i(X^i)]=E_{P_b^i}[\phi(\tilde L^i(X^i))]=d_\phi(P_0^i,P_b^i)\text{\ \ a.s.}$$ by the law of large numbers and the assumption that $E_{P_b^i}|\phi(\tilde L^i(X^i))|<\infty$. Moreover, since $(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)\to1$, we get $$\frac{1}{n^i}\sum_{j=1}^{n^i}\phi(\tilde L^i(y_j^i))\frac{L_b^i(y_j^i)}{(1/n^i)\sum_{r=1}^{n^i}L_b^i(y_r^i)}\to d_\phi(P_0^i,P_b^i)\text{\ \ a.s.}$$ Thus, the second term in \eqref{interim revised} goes to 0 as $n^i\to\infty$. Therefore, we conclude that $d_\phi(\tilde P^i,\hat P_b^i)\stackrel{p}{\to}d_\phi(P_0^i,P_b^i)$. 
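The convergence $d_\phi(\tilde P^i,\hat P_b^i)\stackrel{p}{\to}d_\phi(P_0^i,P_b^i)$ can be illustrated numerically with the self-normalized weights defining $\tilde P^i$ and $\hat P_b^i$. The sketch below (Python) uses a hypothetical one-dimensional instance with exponential $P_0$, $P_b$ and support-generating $Q$, and $\phi(t)=t\log t$, for which the divergence between exponentials has a closed form:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical instance: P0 = Exp(1), baseline P_b = Exp(0.8), generator Q = Exp(0.5);
# phi(t) = t log t gives the KL divergence.
lam0, lam_b, lam_q, n = 1.0, 0.8, 0.5, 200_000
y = rng.exponential(1 / lam_q, size=n)

L = (lam0 * np.exp(-lam0 * y)) / (lam_q * np.exp(-lam_q * y))      # dP0/dQ at the samples
L_b = (lam_b * np.exp(-lam_b * y)) / (lam_q * np.exp(-lam_q * y))  # dP_b/dQ at the samples

p_tilde = L / L.sum()        # the measure tilde P^i from the proof
p_hat_b = L_b / L_b.sum()    # the discretized baseline hat P_b^i

phi = lambda t: t * np.log(t)
d_phi = np.sum(phi(p_tilde / p_hat_b) * p_hat_b)                   # d_phi(tilde P, hat P_b)

# Closed form: KL(Exp(lam0) || Exp(lam_b)) = log(lam0/lam_b) + lam_b/lam0 - 1.
d_true = np.log(lam0 / lam_b) + lam_b / lam0 - 1
assert abs(d_phi - d_true) < 0.05
```

With all likelihood ratios bounded, as in the proof's assumptions, the estimate concentrates around the true divergence as $n$ grows.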
Since $d_\phi(P_0^i,P_b^i)<\eta^i$ by our assumption, we have $P(d_\phi(\tilde P^i,\hat P_b^i)\leq\eta^i)\to1$ as $n^i\to\infty$. Next we consider the objective in \eqref{sample counterpart}. We show that $Z(\tilde P^1,\ldots,\tilde P^m)-Z(P_0^1,\ldots,P_0^m)=O_p(1/\sqrt n)$, following the argument in the theory of differentiable statistical functionals (e.g., \cite{serfling2009approximation}, Chapter 6). For any $\lambda$ between 0 and 1, we write \begin{eqnarray*} &&Z(P_0^1+\lambda(\tilde P^1-P_0^1),\ldots,P_0^m+\lambda(\tilde P^m-P_0^m))\\ &=&\int\cdots\int h(\mathbf x^1,\ldots,\mathbf x^m)\prod_{i=1}^m\prod_{t=1}^{T^i}d[P_0^i+\lambda(\tilde P^i-P_0^i)](x_t^i)\\ &=&\sum_{k=0}^T\lambda^k\sum_{u\in\mathcal I^k}\int\cdots\int h(\mathbf x^1,\ldots,\mathbf x^m)\prod_{(i,t)\in(\mathcal S_u^k)^c}dP_0^i(x_t^i)\prod_{(i,t)\in\mathcal S_u^k}d(\tilde P^i-P_0^i)(x_t^i) \end{eqnarray*} where $\{\mathcal S_u^k\}_{u\in\mathcal I^k}$ is the collection of all subsets of $\{(i,t):i=1,\ldots,m,t=1,\ldots,T^i\}$ with cardinality $k$, and $\mathcal I^k$ indexes all these subsets. 
Note that \begin{eqnarray} &&\frac{d}{d\lambda}Z(P_0^1+\lambda(\tilde P^1-P_0^1),\ldots,P_0^m+\lambda(\tilde P^m-P_0^m))\Bigg|_{\lambda=0^+}\notag\\ &=&\sum_{i=1}^m\sum_{t=1}^{T^i}\int\cdots\int h(\mathbf x^1,\ldots,\mathbf x^m)\prod_{(j,s):(j,s)\neq(i,t)}dP_0^j(x_s^j)d(\tilde P^i-P_0^i)(x_t^i)\notag\\ &=&\sum_{i=1}^m\int\varphi^i(x;P_0^1,\ldots,P_0^m)d(\tilde P^i-P_0^i)(x) \label{interim new1} \end{eqnarray} where \begin{equation} \varphi^i(x;P_0^1,\ldots,P_0^m)=\sum_{t=1}^{T^i}E_{P_0^1,\ldots,P_0^m}[h(\mathbf X^1,\ldots,\mathbf X^m)|X_t^i=x]\label{interim new91} \end{equation} By the definition of $L^i$, we can write \eqref{interim new1} as \begin{eqnarray} &&\sum_{i=1}^m\left(\frac{\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)d\hat Q^i(x)}{(1/n^i)\sum_{j=1}^{n^i}L^i(y_j^i)}-\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)dQ^i(x)\right)\notag\\ &=&\sum_{i=1}^m\left(\frac{\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)d(\hat Q^i-Q^i)(x)}{(1/n^i)\sum_{j=1}^{n^i}L^i(y_j^i)}-\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)dQ^i(x)\left(1-\frac{1}{(1/n^i)\sum_{j=1}^{n^i}L^i(y_j^i)}\right)\right) \label{update interim2} \end{eqnarray} where $\hat Q^i$ is the empirical distribution $(1/n^i)\sum_{j=1}^{n^i}\delta(y_j^i)$ on the $n^i$ observations generated from $Q^i$. Suppose $\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)=0$ a.s., then $\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)d(\hat Q^i-Q^i)(x)=0$ a.s.. Otherwise, using the assumed boundedness of $h$, hence $\varphi^i(x;P_0^1,\ldots,P_0^m)$, and $L^i$, we have, by the central limit theorem, $$\sqrt{n^i}\left(\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)d(\hat Q^i-Q^i)(x)\right)\Rightarrow N(0,(\sigma^i)^2)$$ where $(\sigma^i)^2=Var_{Q^i}(\varphi^i(X^i;P_0^1,\ldots,P_0^m)L^i(X^i))>0$ is finite. Since $(1/n^i)\sum_{j=1}^{n^i}L^i(y_j^i)\to1$ a.s. by the law of large numbers, and that $\int\varphi^i(x;P_0^1,\ldots,P_0^m)L^i(x)d\hat Q^i(x)$ is bounded, the second term in \eqref{update interim2} converges to 0 a.s.. 
Thus, by Slutsky's theorem, each summand in \eqref{update interim2} converges in distribution to $N(0,(\sigma^i)^2)$. Since for each $i$ we have $n^i=nw^i$ for some fixed $w^i>0$, we conclude that \eqref{update interim2} is $O_p(1/\sqrt n)$. Now consider \begin{eqnarray} &&\frac{d^2}{d\lambda^2}Z(P_0^1+\lambda(\tilde P^1-P_0^1),\ldots,P_0^m+\lambda(\tilde P^m-P_0^m))\notag\\ &=&\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\int\cdots\int h(\mathbf x^1,\ldots,\mathbf x^m)\prod_{(i,t)\in(\mathcal S_u^k)^c}dP_0^i(x_t^i)\prod_{(i,t)\in\mathcal S_u^k}d(\tilde P^i-P_0^i)(x_t^i)\label{update interim3} \end{eqnarray} Fixing each $\mathcal S_u^k$, we define $$h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})=\int\cdots\int h(\mathbf x^1,\ldots,\mathbf x^m)\prod_{(i,t)\in(\mathcal S_u^k)^c}dP_0^i(x_t^i)$$ where $\mathbf x_{\mathcal S_u^k}=(x_t^i)_{(i,t)\in\mathcal S_u^k}$. Next define \begin{eqnarray*} &&\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\\ &=&h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})-\sum_{(j,t)\in\mathcal S_u^k}\int h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})dP_0^j(x_t^j)+\sum_{(j_1,t_1),(j_2,t_2)\in\mathcal S_u^k}\int\int h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})dP_0^{j_1}(x_{t_1}^{j_1})dP_0^{j_2}(x_{t_2}^{j_2})-\cdots{}\\ &&{}+(-1)^k\int\cdots\int h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})dP_0^{j_1}(x_{t_1}^{j_1})\cdots dP_0^{j_k}(x_{t_k}^{j_k}) \end{eqnarray*} where each summation above is over the set of all possible combinations of $(j,t)\in\mathcal S_u^k$ with increasing size.
Direct verification shows that $\tilde h_{\mathcal S_u^k}$ has the property that $$\int\cdots\int\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\prod_{(i,t)\in\mathcal S_u^k}dR^j(x_t^j)=\int\cdots\int h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\prod_{(i,t)\in\mathcal S_u^k}d(R^j(x_t^j)-P_0^j(x_t^j))$$ for any probability measures $R^j$'s, and \begin{equation} \int\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})dP_0^j(x_t^j)=0\label{update interim4} \end{equation} for any $(j,t)\in\mathcal S_u^k$. Thus, \eqref{update interim3} is equal to $$\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\int\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\prod_{(i,t)\in\mathcal S_u^k}d\tilde P^i(x_t^i)$$ Now, viewing $\tilde P^i$ as randomly generated from $Q^i$, consider \begin{eqnarray} &&E_{Q^1,\ldots,Q^m}\left(\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\int\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\prod_{(i,t)\in\mathcal S_u^k}d\tilde P^i(x_t^i)\right)^2\notag\\ &=&E_{Q^1,\ldots,Q^m}\left[\left(\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\int\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\prod_{(i,t)\in\mathcal S_u^k}d\tilde P^i(x_t^i)\right)^2;\frac{1}{n^i}\sum_{r=1}^{n^i}L^{i}(Y_r^i)\geq1-\epsilon\text{\ for all\ }i=1,\ldots,m\right]{}\notag\\ &&{}+E_{Q^1,\ldots,Q^m}\left[\left(\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\int\tilde h_{\mathcal S_u^k}(\mathbf x_{\mathcal S_u^k})\prod_{(i,t)\in\mathcal S_u^k}d\tilde P^i(x_t^i)\right)^2;\frac{1}{n^i}\sum_{r=1}^{n^i}L^{i}(Y_r^i)<1-\epsilon\text{\ for some\ }i=1,\ldots,m\right]\label{interim revised3} \end{eqnarray} We analyze the two terms in \eqref{interim revised3}. 
Note that the first term can be written as \begin{eqnarray} &&E_{Q^1,\ldots,Q^m}\Bigg[\left(\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\frac{1}{n^{i_1}n^{i_2}\cdots n^{i_k}}\sum_{j_1=1}^{n^{i_1}}\cdots\sum_{j_k=1}^{n^{i_k}}\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})\frac{L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k})}{\prod_{s=1}^k((1/n^{i_s})\sum_{r=1}^{n^{i_s}}L^{i_s}(Y_r^{i_s}))}\right)^2;{}\notag\\ &&{}\frac{1}{n^i}\sum_{r=1}^{n^i}L^{i}(Y_r^i)\geq1-\epsilon\text{\ for all\ }i=1,\ldots,m\Bigg]\notag\\ &\leq&\Bigg(\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\frac{1}{n^{i_1}n^{i_2}\cdots n^{i_k}}\Bigg(E_{Q^1,\ldots,Q^m}\Bigg[\Bigg(\sum_{j_1=1}^{n^{i_1}}\cdots\sum_{j_k=1}^{n^{i_k}}\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k}){}\notag\\ &&{}\frac{L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k})}{\prod_{s=1}^k((1/n^{i_s})\sum_{r=1}^{n^{i_s}}L^{i_s}(Y_r^{i_s}))}\Bigg)^2;\frac{1}{n^i}\sum_{r=1}^{n^i}L^{i}(Y_r^i)\geq1-\epsilon\text{\ for all\ }i=1,\ldots,m\Bigg]\Bigg)^{1/2}\Bigg)^2\label{interim revised5} \end{eqnarray} by Minkowski's inequality, where we view $Y_j^i$'s as the random variables constituting the observations generated from $Q^i$'s. 
Since the expression $\prod_{s=1}^k((1/n^{i_s})\sum_{r=1}^{n^{i_s}}L^{i_s}(Y_r^{i_s}))$ inside the expectation in \eqref{interim revised5} does not depend on the $j_s$'s, \eqref{interim revised5} is further bounded from above by \begin{eqnarray} &&\Bigg(\sum_{k=2}^Tk(k-1)\lambda^{k-2}\sum_{u\in\mathcal I^k}\frac{1}{n^{i_1}n^{i_2}\cdots n^{i_k}}\left(E_{Q^1,\ldots,Q^m}\left(\sum_{j_1=1}^{n^{i_1}}\cdots\sum_{j_k=1}^{n^{i_k}}\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})\frac{L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k})}{(1-\epsilon)^k}\right)^2\right)^{1/2}\Bigg)^2\notag\\ &=&\Bigg(\sum_{k=2}^T\frac{k(k-1)\lambda^{k-2}}{(1-\epsilon)^k}\sum_{u\in\mathcal I^k}\frac{1}{n^{i_1}n^{i_2}\cdots n^{i_k}}\Bigg(\sum_{j_1=1}^{n^{i_1}}\cdots\sum_{j_k=1}^{n^{i_k}}\sum_{j_1'=1}^{n^{i_1}}\cdots\sum_{j_k'=1}^{n^{i_k}} E_{Q^1,\ldots,Q^m}\Bigg[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k}){}\notag\\ &&{}\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})L^{i_1}(Y_{j_1'}^{i_1})\cdots L^{i_k}(Y_{j_k'}^{i_k})\Bigg]\Bigg)^{1/2}\Bigg)^2\label{update interim6} \end{eqnarray} Note that \begin{equation} E_{Q^1,\ldots,Q^m}\left[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k})\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})L^{i_1}(Y_{j_1'}^{i_1})\cdots L^{i_k}(Y_{j_k'}^{i_k})\right]=0\label{update interim5} \end{equation} if any $Y_j^i$ shows up only once among all those in both $\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})$ and $\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})$ in the expectation. To see this, suppose without loss of generality that $Y_{j_1}^{i_1}$ appears only once. 
Then we have \begin{eqnarray*} &&E_{Q^1,\ldots,Q^m}\left[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k})\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})L^{i_1}(Y_{j_1'}^{i_1})\cdots L^{i_k}(Y_{j_k'}^{i_k})\right]\\ &=&E_{Q^1,\ldots,Q^m}\Big[E_{Q^1,\ldots,Q^m}\left[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})L^{i_1}(Y_{j_1}^{i_1})\Big|Y_{j_2}^{i_2},\ldots,Y_{j_k}^{i_k},Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k}\right]L^{i_2}(Y_{j_2}^{i_2})\cdots L^{i_k}(Y_{j_k}^{i_k}){}\\ &&{}\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})L^{i_1}(Y_{j_1'}^{i_1})\cdots L^{i_k}(Y_{j_k'}^{i_k})\Big]\\ &=&E_{Q^1,\ldots,Q^m}\Big[E_{P_0^{i_1}}\left[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})\Big|Y_{j_2}^{i_2},\ldots,Y_{j_k}^{i_k},Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k}\right]L^{i_2}(Y_{j_2}^{i_2})\cdots L^{i_k}(Y_{j_k}^{i_k}){}\\ &&{}\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})L^{i_1}(Y_{j_1'}^{i_1})\cdots L^{i_k}(Y_{j_k'}^{i_k})\Big]\\ &=&0 \end{eqnarray*} since $E_{P_0^{i_1}}\left[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})\Big|Y_{j_2}^{i_2},\ldots,Y_{j_k}^{i_k},Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k}\right]=0$ by \eqref{update interim4}. The observation in \eqref{update interim5} implies that the summation in \eqref{update interim6} $$\sum_{j_1=1}^{n^{i_1}}\cdots\sum_{j_k=1}^{n^{i_k}}\sum_{j_1'=1}^{n^{i_1}}\cdots\sum_{j_k'=1}^{n^{i_k}}E_{Q^1,\ldots,Q^m}\left[\tilde h_{\mathcal S_u^k}(Y_{j_1}^{i_1},\ldots,Y_{j_k}^{i_k})L^{i_1}(Y_{j_1}^{i_1})\cdots L^{i_k}(Y_{j_k}^{i_k})\tilde h_{\mathcal S_u^k}(Y_{j_1'}^{i_1},\ldots,Y_{j_k'}^{i_k})L^{i_1}(Y_{j_1'}^{i_1})\cdots L^{i_k}(Y_{j_k'}^{i_k})\right]$$ contains only $O(n^k)$ non-zero summands. This is because each non-zero summand can contain at most $k$ distinct $Y_{j}^{i}$'s inside the expectation, and the number of such combinations is $O(n^k)$.
Note that each summand is bounded since $h$, hence $\tilde h_{\mathcal S_u^k}$, and $L^i$ are all bounded by our assumptions. Hence \eqref{update interim6} is bounded by \begin{equation} \left(\sum_{k=2}^T\frac{k(k-1)\lambda^{k-2}}{(1-\epsilon)^k}\binom{T}{k}O\left(\frac{1}{n^{k/2}}\right)\right)^2=O\left(\frac{1}{n^2}\right)\label{update interim7} \end{equation} This shows that \eqref{update interim3} is $O_p(1/n)$ for any $\lambda$ between 0 and 1. Therefore, by a Taylor expansion, and the conclusion that \eqref{update interim2} is $O_p(1/\sqrt n)$, we have \begin{equation} Z(\tilde P^1,\ldots,\tilde P^m)=Z(P_0^1,\ldots,P_0^m)+O_p\left(\frac{1}{\sqrt n}\right)=Z_0+O_p\left(\frac{1}{\sqrt n}\right)\label{interim revised4} \end{equation} Note that we have shown previously that $P(\tilde P^i\in\hat{\mathcal U}^i)\to1$ for any $i=1,\ldots,m$ in both Cases 1 and 2. Using this and \eqref{interim revised4}, for any given $\epsilon>0$, we can choose $M,N>0$ large enough such that $$P(\sqrt n(\hat Z_*-Z_0)>M)\leq P(|\sqrt n(Z(\tilde P^1,\ldots,\tilde P^m)-Z_0)|>M)+\sum_{i=1}^mP(\tilde P^i\notin\hat{\mathcal U}^i)<\epsilon$$ and similarly $$P(\sqrt n(Z_0-\hat Z^*)>M)\leq P(|\sqrt n(Z(\tilde P^1,\ldots,\tilde P^m)-Z_0)|>M)+\sum_{i=1}^mP(\tilde P^i\notin\hat{\mathcal U}^i)<\epsilon$$ for any $n>N$. We conclude that $$\hat Z_*\leq Z_0+O_p\left(\frac{1}{\sqrt n}\right)\leq\hat Z^*$$ \endproof \proof{Proof of Theorem \ref{prop:gradient}.} To prove 1., consider first a mixture of $\mathbf p^i=(p_j^i)_{j=1,\ldots,n^i}$ with an arbitrary $\mathbf q^i\in\mathcal P_{n^i}$, in the form $(1-\epsilon)\mathbf p^i+\epsilon\mathbf q^i$. It satisfies $$\frac{d}{d\epsilon}Z(\mathbf p^1,\ldots,\mathbf p^{i-1},(1-\epsilon)\mathbf p^i+\epsilon\mathbf q^i,\mathbf p^{i+1},\ldots,\mathbf p^m)\big|_{\epsilon=0}=\nabla^iZ(\mathbf p)'(\mathbf q^i-\mathbf p^i)$$ by the chain rule.
In particular, we must have \begin{equation} \psi_j^i(\mathbf p)=\nabla^iZ(\mathbf p)'(\mathbf 1_j^i-\mathbf p^i)=\partial_j^iZ(\mathbf p)-\nabla^iZ(\mathbf p)'\mathbf p^i\label{interim gradient} \end{equation} where $\partial_j^iZ(\mathbf p)$ denotes the partial derivative of $Z$ with respect to $p_j^i$. Writing \eqref{interim gradient} for all $j$ together gives $$\bm\psi^i(\mathbf p)=\nabla^iZ(\mathbf p)-(\nabla^iZ(\mathbf p)'\mathbf p^i)\mathbf 1^i$$ where $\mathbf 1^i\in\mathbb R^{n^i}$ is a vector of ones. Therefore $$\bm\psi^i(\mathbf p)'(\mathbf q^i-\mathbf p^i)=(\nabla^iZ(\mathbf p)-(\nabla^iZ(\mathbf p)'\mathbf p^i)\mathbf 1^i)'(\mathbf q^i-\mathbf p^i)=\nabla^iZ(\mathbf p)'(\mathbf q^i-\mathbf p^i)$$ since $\mathbf q^i,\mathbf p^i\in\mathcal P_{n^i}$. Summing up over $i$, \eqref{gradient equivalence} follows. To prove 2., note that we have \begin{align} \psi_j^i(\mathbf p)&=\frac{d}{d\epsilon}Z(\mathbf p^1,\ldots,\mathbf p^{i-1},(1-\epsilon)\mathbf p^i+\epsilon\mathbf 1_j^i,\mathbf p^{i+1},\ldots,\mathbf p^m)\big|_{\epsilon=0}\notag\\ &=\frac{d}{d\epsilon}E_{\mathbf p^1,\ldots,\mathbf p^{i-1},(1-\epsilon)\mathbf p^i+\epsilon\mathbf 1_j^i,\mathbf p^{i+1},\ldots,\mathbf p^m}[h(\mathbf X)]\Bigg|_{\epsilon=0}\notag\\ &=E_{\mathbf p}[h(\mathbf X)s_j^i(\mathbf X^i)]\label{Gateaux} \end{align} where $s_j^i(\cdot)$ is the score function defined as \begin{equation} s_j^i(\mathbf x^i)=\sum_{t=1}^{T^i}\frac{d}{d\epsilon}\log((1-\epsilon)p^i(x_t^i)+\epsilon I(x_t^i=y_j^i))\Bigg|_{\epsilon=0}.\label{score function1} \end{equation} Here $p^i(x_t^i)=p_j^i$ where $j$ is chosen such that $x_t^i=y_j^i$.
The last equality in \eqref{Gateaux} follows from the fact that $$\frac{d}{d\epsilon}\prod_{t=1}^{T^i}((1-\epsilon)p^i(x_t^i)+\epsilon I(x_t^i=y_j^i))\Bigg|_{\epsilon=0}=\frac{d}{d\epsilon}\sum_{t=1}^{T^i}\log((1-\epsilon)p^i(x_t^i)+\epsilon I(x_t^i=y_j^i))\Bigg|_{\epsilon=0}\cdot\prod_{t=1}^{T^i}p^i(x_t^i)$$ Note that \eqref{score function1} can be further written as $$\sum_{t=1}^{T^i}\frac{-p^i(x_t^i)+I(x_t^i=y_j^i)}{p^i(x_t^i)}=-T^i+\sum_{t=1}^{T^i}\frac{I(x_t^i=y_j^i)}{p^i(x_t^i)}=-T^i+\sum_{t=1}^{T^i}\frac{I(x_t^i=y_j^i)}{p_j^i}$$ which leads to \eqref{score function}. \endproof \proof{Proof of Lemma \ref{prop:var}} We have \begin{equation} Var_{\mathbf p}(h(\mathbf X)s_j^i(\mathbf X^i))\leq E_{\mathbf p}(h(\mathbf X)s_j^i(\mathbf X^i))^2\leq M^2E_{\mathbf p}(s_j^i(\mathbf X^i))^2=M^2(Var_{\mathbf p}(s_j^i(\mathbf X^i))+(E_{\mathbf p}[s_j^i(\mathbf X^i)])^2)\label{score function var} \end{equation} Now note that by the definition of $s_j^i(\mathbf X)$ in \eqref{score function2} we have $E_{\mathbf p}[s_j^i(\mathbf X^i)]=0$ and $$Var_{\mathbf p}(s_j^i(\mathbf X^i))=\frac{T^iVar_{\mathbf p}(I(X_t^i=y_j^i))}{(p_j^i)^2}=\frac{T^i(1-p_j^i)}{p_j^i}$$ Hence, from \eqref{score function var}, we conclude that $Var_{\mathbf p}(h(\mathbf X)s_j^i(\mathbf X^i))\leq M^2T^i(1-p_j^i)/p_j^i$. 
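As a side check (not part of the proof), the two moment identities just used, $E_{\mathbf p}[s_j^i(\mathbf X^i)]=0$ and $Var_{\mathbf p}(s_j^i(\mathbf X^i))=T^i(1-p_j^i)/p_j^i$, can be verified by exact enumeration over all $T^i$-tuples; the following sketch uses a hypothetical three-point distribution (only the support indices matter, not the support values):

```python
from itertools import product

def score_moments(p, T, j):
    # Exact mean and variance of s_j(X) = -T + sum_t I(X_t = y_j) / p_j,
    # computed by enumerating all T-tuples of support indices under the
    # product measure induced by the probability vector p
    mean = second = 0.0
    for tup in product(range(len(p)), repeat=T):
        prob = 1.0
        for t in tup:
            prob *= p[t]
        s = -T + sum(1 for t in tup if t == j) / p[j]
        mean += prob * s
        second += prob * s * s
    return mean, second - mean * mean
```

With $p=(0.2,0.3,0.5)$, $T^i=3$ and the first support point (index $0$ in the code), the mean is $0$ and the variance is $3(1-0.2)/0.2=12$, matching the lemma.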
\endproof \proof{Proof of Proposition \ref{phisolution}} Consider the Lagrangian relaxation \begin{eqnarray} &&\max_{\alpha\geq0,\lambda\in\mathbb R}\min_{\mathbf p^i\geq\mathbf 0}\sum_{j=1}^{n^i}p_j^i\xi_j+\alpha\left(\sum_{j=1}^{n^i}p_{b,j}^i\phi\left(\frac{p_j^i}{p_{b,j}^i}\right)-\eta^i\right)+\lambda\left(\sum_{j=1}^{n^i}p_j^i-1\right)\label{Lagrangian phi}\\ &=&\max_{\alpha\geq0,\lambda\in\mathbb R}-\alpha\sum_{j=1}^{n^i}p_{b,j}^i\max_{p_j^i\geq0}\left\{-\frac{\xi_j+\lambda}{\alpha}\frac{p_j^i}{p_{b,j}^i}-\phi\left(\frac{p_j^i}{p_{b,j}^i}\right)\right\}-\alpha\eta^i-\lambda\notag\\ &=&\max_{\alpha\geq0,\lambda\in\mathbb R}-\alpha\sum_{j=1}^{n^i}p_{b,j}^i\phi^*\left(-\frac{\xi_j+\lambda}{\alpha}\right)-\alpha\eta^i-\lambda\notag \end{eqnarray} In the particular case that $\alpha^*=0$, the optimal value of \eqref{Lagrangian phi} is the same as $$\max_{\lambda\in\mathbb R}\min_{\mathbf p^i\geq\mathbf 0}\sum_{j=1}^{n^i}p_j^i\xi_j+\lambda\left(\sum_{j=1}^{n^i}p_j^i-1\right)$$ whose inner minimization is equivalent to $\min_{\mathbf p^i\in\mathcal P^i}\sum_{j=1}^{n^i}p_j^i\xi_j=\min_{j\in\{1,\ldots,n^i\}}\xi_j$. 
Among all solutions that lead to this objective value, we find the one that solves \begin{equation} \min_{p_j^i,j\in\mathcal M^i:\sum_{j\in\mathcal M^i}p_j^i=1}\sum_{j\in\mathcal M^i}p_{b,j}^i\phi\left(\frac{p_j^i}{p_{b,j}^i}\right)\label{opt phi3} \end{equation} Now note that by the convexity of $\phi$ and Jensen's inequality, for any $\sum_{j\in\mathcal M^i}p_j^i=1$, we have \begin{equation} \sum_{j\in\mathcal M^i}p_{b,j}^i\phi\left(\frac{p_j^i}{p_{b,j}^i}\right)=\sum_{r\in\mathcal M^i}p_{b,r}^i\sum_{j\in\mathcal M^i}\frac{p_{b,j}^i}{\sum_{r\in\mathcal M^i}p_{b,r}^i}\phi\left(\frac{p_j^i}{p_{b,j}^i}\right)\geq\sum_{j\in\mathcal M^i}p_{b,j}^i\phi\left(\frac{1}{\sum_{j\in\mathcal M^i}p_{b,j}^i}\right)\label{opt phi4} \end{equation} It is easy to see that choosing $p_j^i$ in \eqref{opt phi3} as the $q_j^i$ given in \eqref{opt phi2} achieves the lower bound in \eqref{opt phi4}, which concludes the proposition. \endproof \proof{Proof of Proposition \ref{KLsolution}} Consider the Lagrangian for the optimization \eqref{step optimization2} \begin{equation} \min_{\mathbf p^i\in\mathcal P^i}\sum_{j=1}^{n^i}\xi_jp_j^i+\alpha\left(\sum_{j=1}^{n^i}p_j^i\log\frac{p_j^i}{p_{b,j}^i}-\eta^i\right)\label{Lagrangian} \end{equation} By Theorem 1, p.~220 in \cite{luenberger1969optimization}, if one can find $\alpha^*\geq0$ such that $\mathbf q^i=(q_j^i)_{j=1,\ldots,n^i}\in\mathcal P_{n^i}$ minimizes \eqref{Lagrangian} for $\alpha=\alpha^*$ and moreover $\alpha^*\left(\sum_{j=1}^{n^i}q_j^i\log\frac{q_j^i}{p_{b,j}^i}-\eta^i\right)=0$, then $\mathbf q^i$ is optimal for \eqref{step optimization2}. If $\alpha^*=0$, then the minimizer of \eqref{Lagrangian} can be any probability distribution with mass concentrated on the set of indices in $\mathcal M^i$. Any one of these distributions that lies in $\hat{\mathcal U}^i$ will be an optimal solution to \eqref{step optimization2}.
To check whether any of them lies in $\hat{\mathcal U}^i$, consider the one attaining the minimum $d_\phi(\mathbf q^i,\mathbf p_b^i)$ and check whether this minimum is less than or equal to $\eta^i$. In other words, we want to find $\min_{p_j^i,j\in\mathcal M^i:\sum_{j\in\mathcal M^i}p_j^i=1}\sum_{j\in\mathcal M^i}p_j^i\log(p_j^i/p_{b,j}^i)$. The optimal solution to this minimization is $p_{b,j}^i/\sum_{r\in\mathcal M^i}p_{b,r}^i$ for $j\in\mathcal M^i$, which gives an optimal value $-\log\sum_{j\in\mathcal M^i}p_{b,j}^i$. Thus, if $-\log\sum_{j\in\mathcal M^i}p_{b,j}^i\leq\eta^i$, we find an optimal solution $\mathbf q^i$ to \eqref{step optimization2} given by \eqref{opt2}. In the case that $\alpha^*=0$ does not lead to an optimal solution, or equivalently $-\log\sum_{j\in\mathcal M^i}p_{b,j}^i>\eta^i$, we consider $\alpha^*>0$. We write the objective value of \eqref{Lagrangian} with $\alpha=\alpha^*$ as \begin{equation} \sum_{j=1}^{n^i}\xi_jp_j^i+\alpha^*\sum_{j=1}^{n^i}p_j^i\log\frac{p_j^i}{p_{b,j}^i}-\alpha^*\eta^i\label{interim new10} \end{equation} By Jensen's inequality, $$\sum_{j=1}^{n^i}p_j^ie^{-\xi_j/\alpha^*-\log(p_j^i/p_{b,j}^i)}\geq e^{-\sum_{j=1}^{n^i}\xi_jp_j^i/\alpha^*-\sum_{j=1}^{n^i}p_j^i\log(p_j^i/p_{b,j}^i)}$$ giving \begin{equation} \sum_{j=1}^{n^i}\xi_jp_j^i+\alpha^*\sum_{j=1}^{n^i}p_j^i\log\frac{p_j^i}{p_{b,j}^i}\geq-\alpha^*\log\sum_{j=1}^{n^i}p_{b,j}^ie^{-\xi_j/\alpha^*}\label{interim new11} \end{equation} It is easy to verify that setting $p_j^i$ to $$q_j^i=\frac{p_{b,j}^ie^{-\xi_j/\alpha^*}}{\sum_{r=1}^{n^i}p_{b,r}^ie^{-\xi_r/\alpha^*}}$$ gives the lower bound in \eqref{interim new11}. Thus $q_j^i$ minimizes \eqref{interim new10}.
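As a numerical aside (with hypothetical inputs, not taken from the paper), the tilted minimizer just derived is easy to compute: writing $\beta=-1/\alpha^*<0$, the minimizer is $q_j^i\propto p_{b,j}^ie^{\beta\xi_j}$, its KL divergence from $\mathbf p_b^i$ equals $\beta{\varphi_{\bm\xi}^i}'(\beta)-\varphi_{\bm\xi}^i(\beta)$, and this quantity decreases continuously to $0$ as $\beta$ increases to $0$, so a $\beta$ matching a divergence budget $\eta^i$ can be located by bisection:

```python
import math

def tilted(pb, xi, beta):
    # q_j proportional to pb_j * exp(beta * xi_j)
    w = [p * math.exp(beta * x) for p, x in zip(pb, xi)]
    s = sum(w)
    return [v / s for v in w]

def kl_from_baseline(pb, xi, beta):
    # KL(q(beta) || pb), which equals beta * phi'(beta) - phi(beta)
    q = tilted(pb, xi, beta)
    return sum(qj * math.log(qj / pj) for qj, pj in zip(q, pb))

def solve_beta(pb, xi, eta, lo=-50.0):
    # Bisect on [lo, 0]: the divergence is 0 at beta = 0 and grows as beta
    # decreases, so move lo up toward 0 whenever it overshoots the budget eta
    hi = 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if kl_from_baseline(pb, xi, mid) > eta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since $\beta<0$, the tilting weights decrease in $\xi_j$, so the resulting distribution shifts mass toward low-cost support points, consistent with minimizing $\sum_j\xi_jp_j^i$ subject to the divergence constraint.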
Moreover, $\alpha^*>0$ can be chosen such that $$\sum_{j=1}^{n^i}q_j^i\log\frac{q_j^i}{p_{b,j}^i}=-\frac{\sum_{j=1}^{n^i}\xi_jp_{b,j}^ie^{-\xi_j/\alpha^*}}{\alpha^*\sum_{j=1}^{n^i}p_{b,j}^ie^{-\xi_j/\alpha^*}}-\log\sum_{j=1}^{n^i}p_{b,j}^ie^{-\xi_j/\alpha^*}=\eta^i$$ Letting $\beta=-1/\alpha^*$, we obtain \eqref{opt1} and \eqref{root}. Note that \eqref{root} must have a negative root, for the following reason. The left-hand side of \eqref{root} is continuous, and goes to 0 as $\beta\to0$. Defining $\xi_*=\min\{\xi_j:j=1,\ldots,n^i\}$, we have, as $\beta\to-\infty$, $\varphi_{\bm\xi}^i(\beta)=\log\sum_{j=1}^{n^i}p_{b,j}^ie^{\beta\xi_j}=\log\left(\sum_{j\in\mathcal M^i}p_{b,j}^ie^{\beta\xi_*}(1+\sum_{j\notin\mathcal M^i}p_{b,j}^ie^{\beta(\xi_j-\xi_*)}/\sum_{j\in\mathcal M^i}p_{b,j}^i)\right)=\beta\xi_*+\log\sum_{j\in\mathcal M^i}p_{b,j}^i+O(e^{c_1\beta})$ for some positive constant $c_1$, and ${\varphi_{\bm\xi}^i}'(\beta)=\sum_{j=1}^{n^i}\xi_jp_{b,j}^ie^{\beta\xi_j}/\sum_{j=1}^{n^i}p_{b,j}^ie^{\beta\xi_j}=\xi_*(1+\sum_{j\notin\mathcal M^i}\xi_jp_{b,j}^ie^{\beta(\xi_j-\xi_*)}/\sum_{j\in\mathcal M^i}p_{b,j}^i)/(1+\sum_{j\notin\mathcal M^i}p_{b,j}^ie^{\beta(\xi_j-\xi_*)}/\sum_{j\in\mathcal M^i}p_{b,j}^i)=\xi_*+O(e^{c_2\beta})$ for some positive constant $c_2$. So $\beta{\varphi_{\bm\xi}^i}'(\beta)-\varphi_{\bm\xi}^i(\beta)=-\log\sum_{j\in\mathcal M^i}p_{b,j}^i+O(e^{(c_1\wedge c_2)\beta})>\eta^i$ when $\beta$ is negative enough. \endproof \proof{Proof of Theorem \ref{as}} The proof is an adaptation of \cite{blum1954multidimensional}. Recall that $\mathbf p_k=\text{vec}(\mathbf p_k^i:i=1,\ldots,m)$ where we write each component of $\mathbf p_k$ as $p_{k,j}^i$. Let $N=\sum_{i=1}^mn^i$ be the total count of support points. Since $h(\mathbf X)$ is bounded a.s., we have $|h(\mathbf X)|\leq M$ a.s. for some $M$. Without loss of generality, we assume that $Z(\mathbf p)\geq0$ for all $\mathbf p$.
Also note that $Z(\mathbf p)$, as a high-dimensional polynomial, is continuous everywhere in $\hat{\mathcal U}$. For notational convenience, we write $\mathbf d_k=\mathbf q(\mathbf p_k)-\mathbf p_k$ and $\hat{\mathbf d}_k=\hat{\mathbf q}(\mathbf p_k)-\mathbf p_k$, i.e. $\mathbf d_k$ is the best feasible direction at step $k$ given the exact gradient, and $\hat{\mathbf d}_k$ is its counterpart under the estimated gradient. Now, given $\mathbf p_k$, consider the iterative update $\mathbf p_{k+1}=(1-\epsilon_k)\mathbf p_k+\epsilon_k\hat{\mathbf q}(\mathbf p_k)=\mathbf p_k+\epsilon_k\hat{\mathbf d}_k$. We have, by a Taylor series expansion, $$Z(\mathbf p_{k+1})=Z(\mathbf p_k)+\epsilon_k\nabla Z(\mathbf p_k)'\hat{\mathbf d}_k+\frac{\epsilon_k^2}{2}\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k$$ for some $\theta_k$ between 0 and 1. By Theorem \ref{prop:gradient}, we can rewrite the above as \begin{equation} Z(\mathbf p_{k+1})=Z(\mathbf p_k)+\epsilon_k\bm\psi(\mathbf p_k)'\hat{\mathbf d}_k+\frac{\epsilon_k^2}{2}\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k\label{interim as1} \end{equation} Consider the second term on the right-hand side of \eqref{interim as1}.
We can write \begin{eqnarray} \bm\psi(\mathbf p_k)'\hat{\mathbf d}_k&=&\hat{\bm\psi}(\mathbf p_k)'\hat{\mathbf d}_k+(\bm\psi(\mathbf p_k)-\hat{\bm\psi}(\mathbf p_k))'\hat{\mathbf d}_k\notag\\ &\leq&\hat{\bm\psi}(\mathbf p_k)'\mathbf d_k+(\bm\psi(\mathbf p_k)-\hat{\bm\psi}(\mathbf p_k))'\hat{\mathbf d}_k\text{\ \ \ \ by the definition of $\hat{\mathbf d}_k$}\notag\\ &=&\bm\psi(\mathbf p_k)'\mathbf d_k+(\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k))'\mathbf d_k+(\bm\psi(\mathbf p_k)-\hat{\bm\psi}(\mathbf p_k))'\hat{\mathbf d}_k\notag\\ &=&\bm\psi(\mathbf p_k)'\mathbf d_k+(\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k))'(\mathbf d_k-\hat{\mathbf d}_k)\label{interim as11} \end{eqnarray} Hence \eqref{interim as1} and \eqref{interim as11} together imply $$Z(\mathbf p_{k+1})\leq Z(\mathbf p_k)+\epsilon_k\bm\psi(\mathbf p_k)'\mathbf d_k+\epsilon_k(\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k))'(\mathbf d_k-\hat{\mathbf d}_k)+\frac{\epsilon_k^2}{2}\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k$$ Let $\mathcal F_k$ be the filtration generated by $\mathbf p_1,\ldots,\mathbf p_k$. We then have \begin{eqnarray} E[Z(\mathbf p_{k+1})|\mathcal F_k]&\leq& Z(\mathbf p_k)+\epsilon_k\bm\psi(\mathbf p_k)'\mathbf d_k+\epsilon_kE[(\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k))'(\mathbf d_k-\hat{\mathbf d}_k)|\mathcal F_k]{}\notag\\ &&{}+\frac{\epsilon_k^2}{2}E[\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k|\mathcal F_k] \label{interim as} \end{eqnarray} We analyze \eqref{interim as} term by term. First, since $Z(\mathbf p)$ is a high-dimensional polynomial and $\hat{\mathcal U}$ is a bounded set, the largest eigenvalue of the Hessian matrix $\nabla^2Z(\mathbf p)$, for any $\mathbf p\in\hat{\mathcal U}$, is uniformly bounded by a constant $H>0$. 
Hence \begin{equation} E[\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k|\mathcal F_k]\leq HE[\|\hat{\mathbf d}_k\|^2|\mathcal F_k]\leq V<\infty\label{interim as3} \end{equation} for some $V>0$. Now \begin{eqnarray} &&E[(\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k))'(\mathbf d_k-\hat{\mathbf d}_k)|\mathcal F_k]\notag\\ &\leq&\sqrt{E[\|\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k)\|^2|\mathcal F_k]E[\|\mathbf d_k-\hat{\mathbf d}_k\|^2|\mathcal F_k]}\text{\ \ \ \ by the Cauchy-Schwarz inequality}\notag\\ &\leq&\sqrt{E[\|\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k)\|^2|\mathcal F_k]E[2(\|\mathbf d_k\|^2+\|\hat{\mathbf d}_k\|^2)|\mathcal F_k]}\text{\ \ \ \ by the parallelogram law}\notag\\ &\leq&\sqrt{8mE[\|\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k)\|^2|\mathcal F_k]}\text{\ \ \ \ since $\|\mathbf d_k\|^2,\|\hat{\mathbf d}_k\|^2\leq2m$ by using the fact that $\mathbf p_k,\mathbf q(\mathbf p_k),\hat{\mathbf q}(\mathbf p_k)\in\mathcal P$}\notag\\ &\leq&\sqrt{\frac{8mM^2T}{R_k}\sum_{i,j}\frac{1-p_{k,j}^i}{p_{k,j}^i}}\text{\ \ \ \ by Lemma \ref{prop:var}}\notag\\ &\leq&M\sqrt{\frac{8mTN}{R_k\min_{i,j}p_{k,j}^i}}\label{interim as2} \end{eqnarray} Note that by iterating the update rule $\mathbf p_{k+1}=(1-\epsilon_k)\mathbf p_k+\epsilon_k\hat{\mathbf q}(\mathbf p_k)$, we have $$\min_{i,j}p_{k,j}^i\geq\prod_{j=1}^{k-1}(1-\epsilon_j)\delta$$ where $\delta=\min_{i,j}p_{1,j}^i>0$.
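This lower bound on $\min_{i,j}p_{k,j}^i$ follows from the mixture form of the update alone, regardless of how the mixing distribution is chosen; a small numerical sanity check (with arbitrary random mixing distributions and a hypothetical step-size constant, purely illustrative) is:

```python
import random

def run_mixture_updates(p0, steps, a=0.5, seed=0):
    # Iterate p_{k+1} = (1 - eps_k) p_k + eps_k q_k with eps_k = a / k and
    # arbitrary probability vectors q_k, tracking delta * prod_j (1 - eps_j)
    rng = random.Random(seed)
    p = list(p0)
    bound = min(p0)  # delta = min_{i,j} p_{1,j}^i
    for k in range(1, steps + 1):
        eps = a / k
        q = [rng.random() for _ in p]
        total = sum(q)
        q = [v / total for v in q]
        p = [(1 - eps) * pi + eps * qi for pi, qi in zip(p, q)]
        bound *= 1 - eps
    return p, bound
```

Every component of the iterate stays above the product bound, which is what keeps the likelihood-ratio term in the derivation above under control.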
We thus have \eqref{interim as2} less than or equal to \begin{equation} M\sqrt{\frac{8mTN}{\delta R_k}}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}\label{interim as4} \end{equation} Therefore, noting that $\bm\psi(\mathbf p_k)'\mathbf d_k\leq0$ by the definition of $\mathbf d_k$, from \eqref{interim as} we have \begin{align} E[Z(\mathbf p_{k+1})-Z(\mathbf p_k)|\mathcal F_k]\leq\epsilon_kM\sqrt{\frac{8mTN}{\delta R_k}}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}+\frac{\epsilon_k^2V}{2}\label{interim as5} \end{align} and hence $$\sum_{k=1}^\infty E[E[Z(\mathbf p_{k+1})-Z(\mathbf p_k)|\mathcal F_k]^+]\leq M\sqrt{\frac{8mTN}{\delta}}\sum_{k=1}^\infty\frac{\epsilon_k}{\sqrt{R_k}}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}+\sum_{k=1}^\infty\frac{\epsilon_k^2V}{2}$$ By Assumptions \ref{tuning} and \ref{sample size tuning}, and Lemma \ref{prelim} (stated after this proof), we have that $Z(\mathbf p_k)$ converges to an integrable random variable. Now taking expectations in \eqref{interim as} gives \begin{eqnarray*} E[Z(\mathbf p_{k+1})]&\leq&E[Z(\mathbf p_k)]+\epsilon_kE[\bm\psi(\mathbf p_k)'\mathbf d_k]+\epsilon_kE[(\hat{\bm\psi}(\mathbf p_k)-\bm\psi(\mathbf p_k))'(\mathbf d_k-\hat{\mathbf d}_k)]{}\\ &&{}+\frac{\epsilon_k^2}{2}E[\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k] \end{eqnarray*} and telescoping gives \begin{eqnarray} E[Z(\mathbf p_{k+1})]&\leq&E[Z(\mathbf p_1)]+\sum_{j=1}^k\epsilon_jE[\bm\psi(\mathbf p_j)'\mathbf d_j]+\sum_{j=1}^k\epsilon_jE[(\hat{\bm\psi}(\mathbf p_j)-\bm\psi(\mathbf p_j))'(\mathbf d_j-\hat{\mathbf d}_j)]{}\notag\\ &&{}+\sum_{j=1}^k\frac{\epsilon_j^2}{2}E[\hat{\mathbf d}_j'\nabla^2Z(\mathbf p_j+\theta_j\epsilon_j\hat{\mathbf d}_j)\hat{\mathbf d}_j]\label{interim as6} \end{eqnarray} Now take the limit on both sides of \eqref{interim as6}. Note that $E[Z(\mathbf p_{k+1})]\to E[Z_\infty]$ for some integrable $Z_\infty$ by the dominated convergence theorem.
Also $Z(\mathbf p_1)<\infty$, and by \eqref{interim as3} and \eqref{interim as4} respectively, we have $$\lim_{k\to\infty}\sum_{j=1}^k\frac{\epsilon_j^2}{2}E[\hat{\mathbf d}_j'\nabla^2Z(\mathbf p_j+\theta_j\epsilon_j\hat{\mathbf d}_j)\hat{\mathbf d}_j]\leq\sum_{j=1}^\infty\frac{\epsilon_j^2V}{2}<\infty$$ and $$\lim_{k\to\infty}\sum_{j=1}^k\epsilon_jE[(\hat{\bm\psi}(\mathbf p_j)-\bm\psi(\mathbf p_j))'(\mathbf d_j-\hat{\mathbf d}_j)]\leq M\sqrt{\frac{8mTN}{\delta}}\sum_{j=1}^\infty\frac{\epsilon_j}{\sqrt{R_j}}\prod_{i=1}^{j-1}(1-\epsilon_i)^{-1/2}<\infty$$ Therefore, from \eqref{interim as6}, and since $E[\bm\psi(\mathbf p_j)'\mathbf d_j]\leq0$, the partial sums $\sum_{j=1}^k\epsilon_jE[\bm\psi(\mathbf p_j)'\mathbf d_j]$ must converge as $k\to\infty$, which implies that $\limsup_{k\to\infty}E[\bm\psi(\mathbf p_k)'\mathbf d_k]=0$. So there exists a subsequence $k_i$ such that $\lim_{i\to\infty}E[\bm\psi(\mathbf p_{k_i})'\mathbf d_{k_i}]=0$. This in turn implies that $\bm\psi(\mathbf p_{k_i})'\mathbf d_{k_i}\stackrel{p}{\to}0$. Then, there exists a further subsequence $l_i$ such that $\bm\psi(\mathbf p_{l_i})'\mathbf d_{l_i}\to0$ a.s. Consider part \ref{as part1} of the theorem. Let $S^*=\{\mathbf p\in\mathcal P:g(\mathbf p)=0\}$. Since $g(\cdot)$ is continuous, we have $D(\mathbf p_{l_i},S^*)\to0$ a.s. Since $Z(\cdot)$ is continuous, we have $D(Z(\mathbf p_{l_i}),\mathcal Z^*)\to0$ a.s. But since we have proven that $Z(\mathbf p_k)$ converges a.s., we have $D(Z(\mathbf p_k),\mathcal Z^*)\to0$ a.s. This gives part \ref{as part1} of the theorem. Now consider part \ref{as part2}. By Assumption \ref{main assumption}, since $\mathbf p^*$ is the only $\mathbf p$ such that $g(\mathbf p)=0$ and $g(\cdot)$ is continuous, we must have $\mathbf p_{l_i}\to\mathbf p^*$ a.s. Since $Z(\cdot)$ is continuous, we have $Z(\mathbf p_{l_i})\to Z(\mathbf p^*)$. But since $Z(\mathbf p_k)$ converges a.s. as shown above, we must have $Z(\mathbf p_k)\to Z(\mathbf p^*)$.
Then by Assumption \ref{main assumption} again, since $\mathbf p^*$ is the unique optimizer, we have $\mathbf p_k\to\mathbf p^*$ a.s. This concludes part \ref{as part2} of the theorem. \endproof \begin{lemma}[Adapted from \cite{blum1954multidimensional}] Consider a sequence of integrable random variables $Y_k$, $k=1,2,\ldots$. Let $\mathcal F_k$ be the filtration generated by $Y_1,\ldots,Y_k$. Assume $$\sum_{k=1}^\infty E[E[Y_{k+1}-Y_k|\mathcal F_k]^+]<\infty$$ where $x^+$ denotes the positive part of $x$, i.e. $x^+=x$ if $x\geq0$ and $0$ if $x<0$. Moreover, assume that $Y_k$ is bounded uniformly from above. Then $Y_k\to Y_\infty$ a.s., where $Y_\infty$ is an integrable random variable.\label{prelim} \end{lemma} The lemma follows from \cite{blum1954multidimensional}, with the additional conclusion that $Y_\infty$ is integrable, which is a direct consequence of the martingale convergence theorem. \begin{theorem}[Conditions in Theorem \ref{rate thm}] Conditions \ref{c1}--\ref{c8} needed in Theorem \ref{rate thm} are: \begin{enumerate} \item $$k_0\geq2a\left(\frac{4KMTm}{c^2\tau^2}+\frac{KL\vartheta}{c\tau}\right)$$\label{c1} \item $$-\left(1-\frac{2KL\vartheta}{c\tau}-\frac{2a\varrho K}{c^2\tau^2k_0}\right)\nu+\frac{2aKL\vartheta\varrho}{c\tau k_0^{1+\gamma}}+\frac{\varrho}{k_0^\gamma}+\frac{2K\nu^2}{c^2\tau^2}\leq0$$\label{c2} \item $$\frac{2KL\vartheta}{c\tau}+\frac{2K\nu}{c^2\tau^2}<1$$\label{c3} \item $$\frac{a}{k_0}\left(1-\frac{2KL\vartheta}{c\tau}-\frac{2K\nu}{c^2\tau^2}\right)<1$$\label{c extra} \item $$k_0\geq\frac{a\rho}{\rho-1}$$\label{c4} \item $$\beta>\rho a+2\gamma+2$$\label{c5} \item \begin{eqnarray*} &&\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1}\frac{M^2TN}{\vartheta^2\delta b}\frac{1}{(\beta-\rho a-1)(k_0-1)^{\beta-1}}{}\\ &&+\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1/2}\frac{M}{\varrho}\sqrt{\frac{8mTN}{\delta b}}\frac{1}{((\beta-\rho a)/2-\gamma-1)(k_0-1)^{\beta/2-\gamma-1}}<\varepsilon \end{eqnarray*} where $N=\sum_{i=1}^mn^i$ is the total count of all support
points.\label{c6} \item $K>0$ is a constant such that $|\mathbf x'\nabla^2Z(\mathbf p)\mathbf y|\leq K\|\mathbf x\|\|\mathbf y\|$ for any $\mathbf x,\mathbf y\in\mathbb R^N$ and $\mathbf p\in\mathcal A$ (which must exist because $Z(\cdot)$ is a polynomial defined over a bounded set).\label{c7} \item $\delta=\min_{\substack{i=1,\ldots,m\\j=1,\ldots,n^i}}p_{1,j}^i>0$\label{c8} \end{enumerate} \end{theorem} \proof{Proof of Theorem \ref{rate thm}} We adopt the notation from the proof of Theorem \ref{as}. In addition, for convenience, we write $\bm\psi_k=\bm\psi(\mathbf p_k)$, $\hat{\bm\psi}_k=\hat{\bm\psi}(\mathbf p_k)$, $\mathbf q_k=\mathbf q(\mathbf p_k)$, $\hat{\mathbf q}_k=\hat{\mathbf q}(\mathbf p_k)$, $g_k=g(\mathbf p_k)=-\bm\psi(\mathbf p_k)'\mathbf d_k$, $\nabla Z_k=\nabla Z(\mathbf p_k)$, and $\nabla^2Z_k=\nabla^2Z(\mathbf p_k)$. Note that $\mathbf p_{k+1}=\mathbf p_k+\epsilon_k\hat{\mathbf d}_k$. First, by the proof of Theorem \ref{as}, given any $\nu$ and $\tilde k_0$, almost surely there must exist a $k_0\geq\tilde k_0$ such that $g_{k_0}\leq\nu$. If the optimal solution is reached and kept thereafter, then $g_k=0$ from then on and the algorithm reaches and remains at the optimum in finite time, hence there is nothing to prove. So let us assume that $0<g_{k_0}\leq\nu$. Moreover, let us assume that $\nu$ is chosen small enough so that for any $\mathbf p$ with $g(\mathbf p)\leq\nu$ and $\mathbf p>\mathbf 0$, we have $\bm\psi(\mathbf p)\in\mathcal N_{\Delta-\vartheta}(\bm\psi(\mathbf p^*))$ (which can be done since $g(\cdot)$ is assumed continuous by Assumption \ref{main assumption} and $\bm\psi(\mathbf p)$ is continuous for any $\mathbf p>\mathbf 0$ by the construction in Theorem \ref{prop:gradient}).
We consider the event $$\mathcal E=\bigcup_{k=k_0}^\infty\mathcal E_k\cup\bigcup_{k=k_0}^\infty\mathcal E_k'$$ where $$\mathcal E_k=\{\|\hat{\bm\psi}_k-\bm\psi_k\|>\vartheta\}$$ and $$\mathcal E_k'=\left\{|(\hat{\bm\psi}_k-\bm\psi_k)'(\hat{\mathbf d}_k-\mathbf d_k)|>\frac{\varrho}{k^\gamma}\right\}$$ Note that by the Markov inequality, $$P(\mathcal E_k)\leq\frac{E\|\hat{\bm\psi}_k-\bm\psi_k\|^2}{\vartheta^2}\leq\frac{M^2T}{\vartheta^2R_k}\sum_{i,j}\frac{1-p_{k,j}^i}{p_{k,j}^i}\leq\frac{M^2TN}{\vartheta^2R_k\delta}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1}$$ where the second inequality follows from Lemma \ref{prop:var} and the last inequality follows as in the derivation in \eqref{interim as2} and \eqref{interim as4}. On the other hand, we have \begin{equation} P(\mathcal E_k')\leq\frac{k^\gamma E|(\hat{\bm\psi}_k-\bm\psi_k)'(\hat{\mathbf d}_k-\mathbf d_k)|}{\varrho}\leq\frac{k^\gamma M}{\varrho}\sqrt{\frac{8mTN}{\delta R_k}}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}\label{interim remark} \end{equation} by following the derivation in \eqref{interim as2} and \eqref{interim as4}. Therefore, \begin{align} P(\mathcal E)&\leq\sum_{k=k_0}^\infty P(\mathcal E_k)+\sum_{k=k_0}^\infty P(\mathcal E_k')\notag\\ &\leq\frac{M^2TN}{\vartheta^2\delta}\sum_{k=k_0}^\infty\frac{1}{R_k}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1}+\frac{M}{\varrho}\sqrt{\frac{8mTN}{\delta}}\sum_{k=k_0}^\infty\frac{k^\gamma}{\sqrt{R_k}}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1/2}\notag\\ &=\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1}\frac{M^2TN}{\vartheta^2\delta}\sum_{k=k_0}^\infty\frac{1}{R_k}\prod_{j=k_0}^{k-1}(1-\epsilon_j)^{-1}+\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1/2}\frac{M}{\varrho}\sqrt{\frac{8mTN}{\delta}}\sum_{k=k_0}^\infty\frac{k^\gamma}{\sqrt{R_k}}\prod_{j=k_0}^{k-1}(1-\epsilon_j)^{-1/2}\label{interim rate1} \end{align} Now recall that $\epsilon_k=a/k$. 
Using the fact that $1-x\geq e^{-\rho x}$ for any $0\leq x\leq(\rho-1)/\rho$ and $\rho>1$, we have that for any $k$ with $$\frac{a}{k}\leq\frac{\rho-1}{\rho},$$ or equivalently $$k\geq\frac{a\rho}{\rho-1},$$ it holds that $$1-\epsilon_k=1-\frac{a}{k}\geq e^{-\rho a/k}$$ Hence choosing $k_0$ satisfying Condition \ref{c4}, we get \begin{equation} \prod_{j=k_0}^{k-1}(1-\epsilon_j)^{-1}\leq e^{\rho a\sum_{j=k_0}^{k-1}1/j}\leq\left(\frac{k-1}{k_0-1}\right)^{\rho a}\label{interim rate2} \end{equation} Therefore, picking $R_k=bk^\beta$ and using \eqref{interim rate2}, we have \eqref{interim rate1} bounded from above by \begin{eqnarray} &&\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1}\frac{M^2TN}{\vartheta^2\delta b}\sum_{k=k_0}^\infty\frac{1}{(k_0-1)^{\rho a}k^{\beta-\rho a}}+\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1/2}\frac{M}{\varrho}\sqrt{\frac{8mTN}{\delta b}}\sum_{k=k_0}^\infty\frac{1}{(k_0-1)^{\rho a/2}k^{(\beta-\rho a)/2-\gamma}}\notag\\ &\leq&\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1}\frac{M^2TN}{\vartheta^2\delta b}\frac{1}{(\beta-\rho a-1)(k_0-1)^{\beta-1}}{}\notag\\ &&{}+\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1/2}\frac{M}{\varrho}\sqrt{\frac{8mTN}{\delta b}}\frac{1}{((\beta-\rho a)/2-\gamma-1)(k_0-1)^{\beta/2-\gamma-1}}\label{interim remark1} \end{eqnarray} if Condition \ref{c5} holds. Then Condition \ref{c6} guarantees that $P(\mathcal E)<\varepsilon$. The rest of the proof will show that under the event $\mathcal E^c$, we must have the bound \eqref{main}, hence concluding the theorem. To this end, we first set up a recursive representation of $g_k$.
Consider \begin{align} g_{k+1}&=-\bm\psi_{k+1}'\mathbf d_{k+1}=-\bm\psi_{k+1}'(\mathbf q_{k+1}-\mathbf p_{k+1})\notag\\ &=-\bm\psi_k'(\mathbf q_{k+1}-\mathbf p_{k+1})+(\bm\psi_k-\bm\psi_{k+1})'(\mathbf q_{k+1}-\mathbf p_{k+1})\notag\\ &=-\bm\psi_k'(\mathbf q_{k+1}-\mathbf p_k)+\bm\psi_k'(\mathbf p_{k+1}-\mathbf p_k)+(\bm\psi_k-\bm\psi_{k+1})'(\mathbf q_{k+1}-\mathbf p_{k+1})\notag\\ &\leq g_k+\epsilon_k\bm\psi_k'\hat{\mathbf d}_k+(\bm\psi_k-\bm\psi_{k+1})'\mathbf d_{k+1}\text{\ \ \ \ by the definition of $g_k$, $\hat{\mathbf d}_k$ and $\mathbf d_{k+1}$}\notag\\ &\leq g_k-\epsilon_kg_k+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k)+(\bm\psi_k-\bm\psi_{k+1})'\mathbf d_{k+1}\text{\ \ \ \ by \eqref{interim as11}}\notag\\ &=(1-\epsilon_k)g_k+(\nabla Z_k-\nabla Z_{k+1})'\mathbf d_{k+1}+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k)\label{interim rate} \end{align} Now since $\nabla Z(\cdot)$ is continuously differentiable, we have $\nabla Z_{k+1}=\nabla Z_k+\epsilon_k\nabla^2Z(\mathbf p_k+\tilde{\theta}_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k$ for some $\tilde{\theta}_k$ between 0 and 1.
Therefore the right-hand side of \eqref{interim rate} is equal to \begin{eqnarray} &&(1-\epsilon_k)g_k-\epsilon_k\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\tilde{\theta}_k\epsilon_k\hat{\mathbf d}_k)\mathbf d_{k+1}+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k)\notag\\ &\leq&(1-\epsilon_k)g_k+\epsilon_kK\|\hat{\mathbf d}_k\|\|\mathbf d_{k+1}\|+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k)\text{\ \ \ \ by Condition \ref{c7}}\notag\\ &\leq&(1-\epsilon_k)g_k+\epsilon_kK\|\mathbf d_k\|\|\mathbf d_{k+1}\|+\epsilon_kK\|\hat{\mathbf d}_k-\mathbf d_k\|\|\mathbf d_{k+1}\|+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k){}\notag\\ &&{}\text{\ \ \ \ by the triangle inequality}\notag\\ &\leq&(1-\epsilon_k)g_k+\epsilon_kK\frac{g_kg_{k+1}}{c^2\|\bm\psi_k\|\|\bm\psi_{k+1}\|}+\epsilon_kKL\|\hat{\bm\psi}_k-\bm\psi_k\|\frac{g_{k+1}}{c\|\bm\psi_{k+1}\|}+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k){}\notag\\ &&{}\text{\ \ \ \ by using Assumption \ref{bias} with the fact that $g_k\leq\nu$ and hence $\bm\psi_k,\hat{\bm\psi}_k\in\mathcal N_\Delta(\bm\psi(\mathbf p^*))$, and also}{}\notag\\ &&{}\text{\ \ \ \ Assumption \ref{angle}.
The fact $g_k\leq\nu$ will be proved later by induction.}\notag\\ &\leq&(1-\epsilon_k)g_k+\epsilon_k\frac{K}{c^2\tau^2}g_kg_{k+1}+\epsilon_k\frac{KL}{c\tau}\|\hat{\bm\psi}_k-\bm\psi_k\|g_{k+1}+\epsilon_k(\hat{\bm\psi}_k-\bm\psi_k)'(\mathbf d_k-\hat{\mathbf d}_k){}\label{interim rate3}\\ &&{}\text{\ \ \ \ by Assumption \ref{nonzero gradient}}\notag \end{eqnarray} Now under the event $\mathcal E^c$, and noting that $\epsilon_k=a/k$, \eqref{interim rate3} implies that $$g_{k+1}\leq\left(1-\frac{a}{k}\right)g_k+\frac{aK}{c^2\tau^2k}g_kg_{k+1}+\frac{aKL\vartheta}{c\tau k}g_{k+1}+\frac{a\varrho}{k^{1+\gamma}}$$ or $$\left(1-\frac{aK}{c^2\tau^2k}g_k-\frac{aKL\vartheta}{c\tau k}\right)g_{k+1}\leq\left(1-\frac{a}{k}\right)g_k+\frac{a\varrho}{k^{1+\gamma}}$$ We claim that $|g_k|=|\bm\psi_k'\mathbf d_k|\leq4MTm$, which can be seen by writing \begin{align} \psi_j^i(\mathbf p)&=E_{\mathbf p}[h(\mathbf X)s_j^i(\mathbf X^i)]=\sum_{t=1}^{T^i}E_{\mathbf p}\left[h(\mathbf X)\frac{I(X_t^i=y_j^i)}{p_j^i}\right]-T^iE_{\mathbf p}[h(\mathbf X)]\notag\\ &=\sum_{t=1}^{T^i}E_{\mathbf p}[h(\mathbf X)|X_t^i=y_j^i]-T^iE_{\mathbf p}[h(\mathbf X)] \end{align} so that $|\psi_j^i(\mathbf p)|\leq2MT^i$ for any $\mathbf p$, $i$ and $j$. Using this and the fact that $1/(1-x)\leq1+2x$ for any $0\leq x\leq1/2$, whenever \begin{equation} \frac{4aKMTm}{c^2\tau^2k}+\frac{aKL\vartheta}{c\tau k}\leq\frac{1}{2}\label{interim rate4} \end{equation} holds, we must have \begin{equation} g_{k+1}\leq\left(1+\frac{2aK}{c^2\tau^2k}g_k+\frac{2aKL\vartheta}{c\tau k}\right)\left(\left(1-\frac{a}{k}\right)g_k+\frac{a\varrho}{k^{1+\gamma}}\right)\label{interim rate5} \end{equation} Note that \eqref{interim rate4} holds if $$k\geq2a\left(\frac{4KMTm}{c^2\tau^2}+\frac{KL\vartheta}{c\tau}\right)$$ which is Condition \ref{c1} in the theorem.
Now \eqref{interim rate5} can be written as \begin{align} g_{k+1}&\leq\left(1-\frac{a}{k}+\frac{2aKL\vartheta}{c\tau k}+\frac{2a^2K\varrho}{c^2\tau^2k^{2+\gamma}}\right)g_k+\frac{a\varrho}{k^{1+\gamma}}+\frac{2a^2KL\vartheta\varrho}{c\tau k^{2+\gamma}}-\frac{2a^2KL\vartheta}{c\tau k^2}g_k+\frac{2aK}{c^2\tau^2k}\left(1-\frac{a}{k}\right)g_k^2\notag\\ &\leq\left(1-\frac{a}{k}+\frac{2aKL\vartheta}{c\tau k}+\frac{2a^2K\varrho}{c^2\tau^2k^{2+\gamma}}\right)g_k+\frac{a\varrho}{k^{1+\gamma}}+\frac{2a^2KL\vartheta\varrho}{c\tau k^{2+\gamma}}+\frac{2aK}{c^2\tau^2k}\left(1-\frac{a}{k}\right)g_k^2\label{interim rate6} \end{align} We argue that under Condition \ref{c2}, we must have $g_k\leq\nu$ for all $k\geq k_0$. This can be seen by induction using \eqref{interim rate6}. By our setting at the beginning of this proof we have $g_{k_0}\leq\nu$. Suppose $g_k\leq\nu$ for some $k$. We then have \begin{align} g_{k+1}&\leq\left(1-\frac{a}{k}+\frac{2aKL\vartheta}{c\tau k}+\frac{2a^2K\varrho}{c^2\tau^2k^{2+\gamma}}\right)\nu+\frac{a\varrho}{k^{1+\gamma}}+\frac{2a^2KL\vartheta\varrho}{c\tau k^{2+\gamma}}+\frac{2aK}{c^2\tau^2k}\left(1-\frac{a}{k}\right)\nu^2\notag\\ &\leq\nu+\frac{a}{k}\left(\left(-1+\frac{2KL\vartheta}{c\tau}+\frac{2aK\varrho}{c^2\tau^2k^{1+\gamma}}\right)\nu+\frac{\varrho}{k_0^\gamma}+\frac{2aKL\vartheta\varrho}{c\tau k_0^{1+\gamma}}+\frac{2K\nu^2}{c^2\tau^2}\right)\notag\\ &\leq\nu\label{interim rate8} \end{align} by Condition \ref{c2}. This concludes our claim. 
Given that $g_k\leq\nu$ for all $k\geq k_0$, \eqref{interim rate5} implies that \begin{align} g_{k+1}&\leq\left(1-\frac{a}{k}\left(1-\frac{2KL\vartheta}{c\tau}\right)-\frac{2a^2KL\vartheta}{c\tau k^2}+\frac{2aK\nu}{c^2\tau^2k}\left(1-\frac{a}{k}\right)\right)g_k+\frac{a\varrho}{k^{1+\gamma}}+\frac{a^2\varrho}{k^{2+\gamma}}\left(\frac{2K\nu}{c^2\tau^2}+\frac{2KL\vartheta}{c\tau}\right)\notag\\ &\leq\left(1-\frac{a}{k}\left(1-\frac{2KL\vartheta}{c\tau}-\frac{2K\nu}{c^2\tau^2}\right)\right)g_k+\frac{a\varrho}{k^{1+\gamma}}+\frac{a^2\varrho}{k^{2+\gamma}}\left(\frac{2K\nu}{c^2\tau^2}+\frac{2KL\vartheta}{c\tau}\right)\notag\\ &\leq\left(1-\frac{C}{k}\right)g_k+\frac{G}{k^{1+\gamma}}\label{interim rate7} \end{align} where $$C=a\left(1-\frac{2KL\vartheta}{c\tau}-\frac{2K\nu}{c^2\tau^2}\right)$$ and $$G=a\varrho+\frac{a^2\varrho}{k_0}\left(\frac{2K\nu}{c^2\tau^2}+\frac{2KL\vartheta}{c\tau}\right)$$ Now note that Conditions \ref{c3} and \ref{c extra} imply $C>0$ and $1-C/k>0$ respectively. By recursing the relation \eqref{interim rate7}, we get \begin{align*} g_{k+1}&\leq\prod_{j=k_0}^k\left(1-\frac{C}{j}\right)g_{k_0}+\sum_{j=k_0}^k\prod_{i=j+1}^k\left(1-\frac{C}{i}\right)\frac{G}{j^{1+\gamma}}\\ &\leq e^{-C\sum_{j=k_0}^k1/j}g_{k_0}+\sum_{j=k_0}^ke^{-C\sum_{i=j+1}^k1/i}\frac{G}{j^{1+\gamma}}\\ &\leq\left(\frac{k_0}{k+1}\right)^Cg_{k_0}+\sum_{j=k_0}^k\left(\frac{j+1}{k+1}\right)^C\frac{G}{j^{1+\gamma}}\\ &\leq\left(\frac{k_0}{k+1}\right)^Cg_{k_0}+\left(1+\frac{1}{k_0}\right)^CG\times\left\{\begin{array}{ll} \frac{1}{(C-\gamma)(k+1)^\gamma}&\text{\ if\ }0<\gamma<C\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}(k+1)^C}&\text{\ if\ }\gamma>C\\ \frac{\log(k/(k_0-1))}{(k+1)^C}&\text{\ if\ }\gamma=C \end{array}\right. \end{align*} which gives \eqref{main}. This concludes the proof. \endproof \proof{Proof of Corollary \ref{rate cor}} We use the notations in the proof of Theorem \ref{rate thm}. 
Our analysis starts from \eqref{interim as1}, namely $$Z_{k+1}=Z_k+\epsilon_k\bm\psi_k'\hat{\mathbf d}_k+\frac{\epsilon_k^2}{2}\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k$$ for some $\tilde\theta_k$ between 0 and 1. Using the fact that $\bm\psi_k'\hat{\mathbf d}_k\geq\bm\psi_k'\mathbf d_k$ by the definition of $\mathbf d_k$, we have \begin{align*} Z_{k+1}&\geq Z_k+\epsilon_k\bm\psi_k'\mathbf d_k+\frac{\epsilon_k^2}{2}\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k\\ &=Z_k-\epsilon_kg_k+\frac{\epsilon_k^2}{2}\hat{\mathbf d}_k'\nabla^2Z(\mathbf p_k+\theta_k\epsilon_k\hat{\mathbf d}_k)\hat{\mathbf d}_k \end{align*} Now, using \eqref{main}, Condition \ref{c7} in Theorem \ref{rate thm} and $\|\hat{\mathbf d}_k\|^2\leq2$, we have \begin{align} Z_{k+1}&\geq Z_k-\epsilon_k\left(\frac{A}{k^C}+B\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)k^\gamma}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}k^C}&\text{if $\gamma>C$}\\ \frac{\log((k-1)/(k_0-1))}{k^C}&\text{if $\gamma=C$} \end{array}\right\}\right)-\epsilon_k^2K\notag\\ &=Z_k-\frac{aA}{k^{1+C}}-aB\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)k^{1+\gamma}}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}k^{1+C}}&\text{if $\gamma>C$}\\ \frac{\log((k-1)/(k_0-1))}{k^{1+C}}&\text{if $\gamma=C$} \end{array}\right\}-\frac{a^2K}{k^2}\label{interim rate12} \end{align} Now iterating \eqref{interim rate12} from $k$ to $l$, we have $$Z_l\geq Z_k-\sum_{j=k}^{l-1}\frac{aA}{j^{1+C}}-aB\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)}\sum_{j=k}^{l-1}\frac{1}{j^{1+\gamma}}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}}\sum_{j=k}^{l-1}\frac{1}{j^{1+C}}&\text{if $\gamma>C$}\\ \sum_{j=k}^{l-1}\frac{\log((j-1)/(k_0-1))}{j^{1+C}}&\text{if $\gamma=C$} \end{array}\right\}-a^2K\sum_{j=k}^{l-1}\frac{1}{j^2}$$ and letting $l\to\infty$, we get \begin{equation} Z^*\geq 
Z_k-\sum_{j=k}^\infty\frac{aA}{j^{1+C}}-aB\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)}\sum_{j=k}^\infty\frac{1}{j^{1+\gamma}}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}}\sum_{j=k}^\infty\frac{1}{j^{1+C}}&\text{if $\gamma>C$}\\ \sum_{j=k}^\infty\frac{\log((j-1)/(k_0-1))}{j^{1+C}}&\text{if $\gamma=C$} \end{array}\right\}-a^2K\sum_{j=k}^\infty\frac{1}{j^2}\label{interim rate13} \end{equation} where the convergence to $Z^*$ is guaranteed by Theorem \ref{as}. Note that \eqref{interim rate13} implies that \begin{align*} Z^*&\geq Z_k-\frac{aA}{C(k-1)^C}-aB\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)\gamma(k-1)^\gamma}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}C(k-1)^C}&\text{if $\gamma>C$}\\ \frac{\log((k-1)/(k_0-1))}{C(k-1)^C}&\text{if $\gamma=C$} \end{array}\right\}-\frac{a^2K}{k-1}\\ &\geq Z_k-\frac{D}{k-1}-\frac{E}{(k-1)^C}-F\times\left\{\begin{array}{ll}\frac{1}{(C-\gamma)\gamma(k-1)^\gamma}&\text{if $0<\gamma<C$}\\ \frac{1}{(\gamma-C)(k_0-1)^{\gamma-C}C(k-1)^C}&\text{if $\gamma>C$}\\ \frac{\log((k-1)/(k_0-1))}{C(k-1)^C}&\text{if $\gamma=C$} \end{array}\right. \end{align*} where $D=a^2K$, $E=aA/C$ and $F=aB$. This gives \eqref{main1}. \endproof \proof{Proof of Lemma \ref{index max}} Consider first a fixed $a$. When $a(1-\omega)>1$, \eqref{index} reduces to $\frac{\beta-\rho a-\zeta-2}{2(\beta+1)}\wedge\frac{1}{\beta+1}$. Since $\frac{\beta-\rho a-\zeta-2}{2(\beta+1)}$ is increasing in $\beta$ and $\frac{1}{\beta+1}$ is decreasing in $\beta$, the maximizer of $\frac{\beta-\rho a-\zeta-2}{2(\beta+1)}\wedge\frac{1}{\beta+1}$ occurs at the intersection of $\frac{\beta-\rho a-\zeta-2}{2(\beta+1)}$ and $\frac{1}{\beta+1}$, which is $\beta=\rho a+\zeta+4$. The associated value of \eqref{index} is $\frac{1}{\rho a+\zeta+5}$. When $a(1-\omega)\leq1$, \eqref{index} reduces to $\frac{a(1-\omega)}{\beta+1}\wedge\frac{\beta-\rho a-\zeta-2}{2(\beta+1)}$. 
By a similar argument, the maximizer is $\beta=a(2-2\omega+\rho)+\zeta+2$, with the value of \eqref{index} equal to $\frac{a(1-\omega)}{a(2-2\omega+\rho)+\zeta+3}$. Thus, overall, given $a$, the optimal choice of $\beta$ is $\beta=\rho a+\zeta+2+2((a(1-\omega))\wedge1)$, with the value of \eqref{index} given by $\frac{(a(1-\omega))\wedge1}{\rho a+\zeta+3+2((a(1-\omega))\wedge1)}$. When $a(1-\omega)>1$, the value of \eqref{index} is $\frac{1}{\rho a+\zeta+5}$ which is decreasing in $a$, whereas when $a(1-\omega)\leq1$, the value of \eqref{index} is $\frac{a(1-\omega)}{a(2-2\omega+\rho)+\zeta+3}$ which is increasing in $a$. Thus the maximum occurs when $a(1-\omega)=1$, or $a=\frac{1}{1-\omega}$. The associated value of \eqref{index} is $\frac{1}{\rho/(1-\omega)+\zeta+5}$. \endproof \remark Suppose that Assumption \ref{bias} is replaced by letting $$\|\mathbf v(\bm\xi_1)-\mathbf v(\bm\xi_2)\|\leq L\|\bm\xi_1-\bm\xi_2\|$$ hold for any $\bm\xi_1,\bm\xi_2\in\mathbb R^N$. Then, in the proof of Theorem \ref{rate thm}, the inequality \eqref{interim remark} can be replaced by \begin{align*} P(\mathcal E_k')&\leq\frac{k^\gamma E|(\hat{\bm\psi}_k-\bm\psi_k)'(\hat{\mathbf d}_k-\mathbf d_k)|}{\varrho}\\ &\leq\frac{k^\gamma}{\varrho}\sqrt{E[\|\hat{\bm\psi}_k-\bm\psi_k\|^2]E[\|\mathbf d_k-\hat{\mathbf d}_k\|^2]}\text{\ \ \ \ by the Cauchy-Schwarz inequality}\\ &\leq\frac{k^\gamma L}{\varrho}E[\|\hat{\bm\psi}_k-\bm\psi_k\|^2]\text{\ \ \ \ by the relaxed Assumption \ref{bias}}\\ &\leq\frac{LM^2TNk^\gamma}{R_k\varrho\delta}\prod_{j=1}^{k-1}(1-\epsilon_j)^{-1}\text{\ \ \ \ by following the derivation in \eqref{interim as2} and \eqref{interim as4}} \end{align*} Consequently, equation \eqref{interim remark1} becomes $$\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1}\frac{M^2TN}{\delta b}\left(\frac{1}{\vartheta^2(\beta-\rho a-1)(k_0-1)^{\beta-1}}+\frac{L}{\varrho(\beta-\gamma-\rho a-1)(k_0-1)^{\beta-\gamma-1}}\right)$$ if Condition \ref{c5} is replaced by $$\beta>\gamma+\rho a+1$$ 
Correspondingly, Condition \ref{c6} needs to be replaced by $$\prod_{j=1}^{k_0-1}(1-\epsilon_j)^{-1}\frac{M^2TN}{\delta b}\left(\frac{1}{\vartheta^2(\beta-\rho a-1)(k_0-1)^{\beta-1}}+\frac{L}{\varrho(\beta-\gamma-\rho a-1)(k_0-1)^{\beta-\gamma-1}}\right)<\varepsilon$$ The results in Theorem \ref{rate thm} and Corollary \ref{rate cor} then remain valid. Under these modified Conditions \ref{c5} and \ref{c6}, discussion point 3(b) in Section \ref{sec:local rate} then gives $\beta=\gamma+\rho a+1+\zeta$ for some $\zeta>0$ and $\gamma=\beta-\rho a-\zeta-1$. In discussion point 4, the convergence rate in terms of replications becomes $1/W^{((a(1-\omega))\wedge(\beta-\rho a-\zeta-1)\wedge1)/(\beta+1)}$. By maximizing \begin{equation} \frac{(a(1-\omega))\wedge(\beta-\rho a-\zeta-1)\wedge1}{\beta+1}\label{index1} \end{equation} in the same way as \eqref{index} is maximized in Lemma \ref{index max} (see Lemma \ref{index max1} right after this remark), we get $$a=\frac{1}{1-\omega},\ \ \beta=\frac{\rho}{1-\omega}+\zeta+2$$ and the optimal value is $$\frac{1}{\rho/(1-\omega)+\zeta+3}$$ So, following the argument there, we choose $\vartheta$ and $\nu$, and hence $\omega$, to be small, and we choose $\rho$ to be close to 1. This gives rise to the approximate choice that $a\approx1+\omega$ and $\beta\approx3+\zeta+\omega$. The convergence rate is then $O(W^{-1/(4+\zeta+\omega)})$, leading to our claim in Section \ref{sec:local rate} that the complexity can improve to $O(1/\epsilon^{4+\zeta+\omega})$ if Assumption \ref{bias} is relaxed. \label{remark:rate} \endremark \begin{lemma} The maximizer of \eqref{index1} is given by $$a=\frac{1}{1-\omega},\ \ \beta=\frac{\rho}{1-\omega}+\zeta+2$$ and the optimal value is $$\frac{1}{\rho/(1-\omega)+\zeta+3}$$\label{index max1} \end{lemma} \proof{Proof of Lemma \ref{index max1}} Consider first a fixed $a$. When $a(1-\omega)>1$, \eqref{index1} reduces to $\frac{\beta-\rho a-\zeta-1}{\beta+1}\wedge\frac{1}{\beta+1}$.
Since $\frac{\beta-\rho a-\zeta-1}{\beta+1}$ is increasing in $\beta$ and $\frac{1}{\beta+1}$ is decreasing in $\beta$, the maximizer of $\frac{\beta-\rho a-\zeta-1}{\beta+1}\wedge\frac{1}{\beta+1}$ occurs at the intersection of $\frac{\beta-\rho a-\zeta-1}{\beta+1}$ and $\frac{1}{\beta+1}$, which is $\beta=\rho a+\zeta+2$. The associated value of \eqref{index1} is $\frac{1}{\rho a+\zeta+3}$. When $a(1-\omega)\leq1$, \eqref{index1} reduces to $\frac{a(1-\omega)}{\beta+1}\wedge\frac{\beta-\rho a-\zeta-1}{\beta+1}$. By a similar argument, the maximizer is $\beta=a(1-\omega+\rho)+\zeta+1$, with the value of \eqref{index1} equal to $\frac{a(1-\omega)}{a(1-\omega+\rho)+\zeta+2}$. Thus, overall, given $a$, the optimal choice of $\beta$ is $\beta=\rho a+\zeta+1+(a(1-\omega))\wedge1$, with the value of \eqref{index1} given by $\frac{(a(1-\omega))\wedge1}{\rho a+\zeta+2+(a(1-\omega))\wedge1}$. When $a(1-\omega)>1$, the value of \eqref{index1} is $\frac{1}{\rho a+\zeta+3}$ which is decreasing in $a$, whereas when $a(1-\omega)\leq1$, the value of \eqref{index1} is $\frac{a(1-\omega)}{a(1-\omega+\rho)+\zeta+2}$ which is increasing in $a$. Thus the maximum occurs when $a(1-\omega)=1$, or $a=\frac{1}{1-\omega}$. The associated value of \eqref{index1} is $\frac{1}{\rho/(1-\omega)+\zeta+3}$. \endproof \section{Additional Details of the Numerical Results} \subsection{Multi-start Initialization}\label{sec:multi-start} The results in Section \ref{expt:mltcls} are implemented with an initialization that assigns equal probabilities to the support points. To test the procedure under different initializations, we repeat ten runs of the FWSA algorithm where the initial probability masses for the support points (held constant for all runs) are sampled independently from the uniform distribution and then normalized to sum to one. Figure~\ref{fig:multirun} provides a box-plot of the identified optima. The sample size for moment constraint generation is $N_s=50$ and the discretization support size is $n=30$.
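In code, this initialization scheme can be sketched as follows (a minimal illustration using NumPy; the function name `random_initial_pmf` is ours, and the FWSA runs themselves are omitted):

```python
import numpy as np

# Sketch of the multi-start initialization: for each of the 10 runs,
# sample the n initial probability masses independently from
# Uniform(0, 1) and normalize them so that they sum to one.
def random_initial_pmf(n, rng):
    w = rng.uniform(size=n)
    return w / w.sum()

rng = np.random.default_rng(0)
starts = [random_initial_pmf(30, rng) for _ in range(10)]  # n = 30
```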
The returned optimal solutions for each of the minimization and maximization formulations all agree up to the first two digits (the box plot shows the small spread of the max values, while the min values are tightly clustered and appear to overlap at a single point). This indicates that the formulations have a unique global optimal solution or similar local optimal solutions. Note that the bounds generated from this setting are quite loose with a small $N_s$. \begin{figure} \caption{Returned optimal solutions from $10$ runs on $n=30$, $M=50$, exponential for discretization} \label{fig:multirun} \end{figure} \subsection{Details of the Benchmark Steady-State Formulation in Section \ref{sec:gg1example}}\label{sec:numerics appendix} We consider the depicted $Z(\mathbf p)$ in Section \ref{sec:gg1example}. As $T$ grows, the average waiting time converges to the corresponding steady-state value, which, when the traffic intensity $\rho_{{\mathbf{p}}} = E_{{\mathbf{p}}}[X_t]$ is less than $1$, is given in closed form by the Pollaczek-Khinchine formula (\cite{klin32}) as: \[ {Z}_{\infty}({\mathbf{p}})=\frac{\rho_{{\mathbf{p}}} E_{{\mathbf{p}}}[X_1] + Var_{{\mathbf{p}}}(X_1)}{2(1-\rho_{{\mathbf{p}}} )}. \] So, when $T$ is large, an approximation $Z^*_{\infty}$ to the worst-case performance estimate can be obtained by replacing $Z({\mathbf{p}})$ with ${Z}_{\infty}({\mathbf{p}})$. (In experiments, a choice of $T=500$ seems to show close agreement.) With $E_{{\mathbf{p}}}[X_1]=\sum p_jy_j$ and $E_{{\mathbf{p}}}[X_1^2]=\sum p_jy_j^2$, the steady-state approximation to~(\ref{mg1prob}) is given by {\bf(SS)} below, which is equivalent to {\bf(SS$'$)} via variable substitutions (see p.191 in~\cite{boyd2009convex}): \noindent\begin{minipage}{0.45\textwidth} \begin{align} \min_{{\mathbf{p}}} &\quad \frac { \sum_j p_jy_j^2} {2(1- \sum_j p_jy_j)}\quad\quad\quad \mbox{\bf (SS)} \nonumber\\ s.t.
&\quad \sum_j p_j \log \left(\frac {p_j}{p_{b,j}} \right) \le \eta\nonumber\\ &\quad \sum_j p_j = 1 \nonumber \\ &\quad 0 \le p_j\le 1,\quad \forall j=1,\ldots,n \nonumber \end{align} \end{minipage} $\quad \Longrightarrow \quad$ \begin{minipage}{0.35\textwidth} \begin{align} \min_{{\mathbf{p}}} & \quad \sum_j w_j {y_j}^{2} \quad\quad\quad\quad\quad\quad \mbox{\bf (SS$'$)} \nonumber\\ s.t.&\quad \sum_j w_j \log \left(\frac {w_j} {tp_{b,j}}\right) \le \eta t \nonumber\\ & \quad 2t - 2 \sum_j w_jy_j = 1 \nonumber \\ & \quad \sum_j w_j = t \nonumber \\ & \quad 0\le w_j \le t \quad \forall j=1,\ldots,n \nonumber \end{align} \end{minipage} \subsection{Shape of the Obtained Optimal Distributions in Section \ref{sec:gg1example}}\label{sec:numerics appendix1} Continuing with the example in Section \ref{sec:gg1example}, Figure~\ref{fig:mg1worstdistn} shows the form of the optimal distributions ${\mathbf{p}}^*$ identified by the FWSA algorithm for the minimization (Figure~\ref{fig:mg1optdistnminbeta}) and maximization (Figure~\ref{fig:mg1optdistnmaxbeta}) problems under~\eqref{mg1prob}. The optimal distributions follow a bimodal structure similar to that of the baseline distribution $\mathbf p_b$. The maximization version assigns probability masses in an unequal manner to the two modes in order to drive up both the mean and the variance of ${\mathbf{p}}$, as {\bf (SS)} (in Appendix \ref{sec:numerics appendix}) leads us to expect, whereas the minimization version makes the mass allocation more equal in order to minimize the mean and the variance of ${\mathbf{p}}$ while maintaining the maximum allowed KL divergence. \begin{figure} \caption{(min) $\mathbf p_b$ from beta-mixture} \label{fig:mg1optdistnminbeta} \caption{(max) $\mathbf p_b$ from beta-mixture} \label{fig:mg1optdistnmaxbeta} \caption{Optimal solutions ${\mathbf{p}}^*$} \label{fig:mg1worstdistn} \end{figure} \end{document}
\begin{document} \begin{frontmatter} \title{Lock in Feedback in Sequential Experiments} \runtitle{Lock in Feedback} \begin{aug} \author{\fnms{Maurits} \snm{Kaptein}} and \author{\fnms{Davide} \snm{Iannuzzi}\thanksref{t1}} \affiliation{Radboud University, Nijmegen, the Netherlands,\\and Vrije Universiteit, Amsterdam, the Netherlands} \runauthor{Kaptein \& Iannuzzi} \end{aug} \begin{abstract} We often encounter situations in which an experimenter wants to find, by sequential experimentation, $x_{max} = \arg\max_{x} f(x)$, where $f(x)$ is a (possibly unknown) function of a well controllable variable $x$. Taking inspiration from physics and engineering, we have designed a new method to address this problem. In this paper, we first introduce the method in continuous time, and then present two algorithms for use in sequential experiments. Through a series of simulation studies, we show that the method is effective for finding maxima of unknown functions by experimentation, even when the maximum of the functions drifts or when the signal to noise ratio is low. \end{abstract} \thankstext{t1}{D.I. acknowledges the support of the European Research Council (grant agreement n. 615170) and of the Stichting voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported by the Netherlands organization for scientific research (NWO).} \end{frontmatter} \section{Introduction} When designing an experiment where a given parameter must be kept constant throughout the entire duration of the measurement, physicists and engineers often rely on feedback techniques that, in real time, can properly re-adjust the configuration of the experiment to compensate for unexpected drifts \citep{Scofield1994}. Fig. \ref{fig:intro} illustrates, for instance, a well-established approach that is used to maintain a variable $x$ always locked at the value that maximizes the value of another variable $y$, which is some function -- possibly with large noise -- of $x$. 
The algorithm behind this approach, which will be described in more depth later in the text, is based on the following steps: \begin{itemize} \item[1] Fix a central value $x_0$ of the variable $x$; \item[2] Add an oscillation of amplitude $A_x$ at a fixed angular frequency $\omega$: $x=x_0+A_x\cos\left( \omega t \right)$. \item[3] Measure the amplitude of the oscillations that the variable $y$ exhibits, in response to the oscillation of the variable $x$, at the same angular frequency $\omega$, and further measure whether the oscillations are in phase or out of phase; \item[4] Set a new value of $x_0$, adding (if the oscillations of $y$ are in phase with the oscillation of $x$) or subtracting (if the oscillations of $y$ are out of phase with respect to the oscillation of $x$) a value proportional to the value measured in step 3: $x_{0,new}=x_0\pm \gamma A_y$, where $\gamma$ is a constant. Iterate steps 2 to 4 for the whole duration of the experiment. \end{itemize} The above described feedback loop pushes the value of $x_0$ closer and closer to the value $x_{max}$ that maximizes $y$. As $x_0$ approaches $x_{max}$, the oscillations in $y$ become smaller and smaller, moving $x_0$ in a series of steps of decreasing size. Finally, when $x_0=x_{max}$, the variable $y$ ceases to oscillate at frequency $\omega$: because of this, $x_0$ can stay locked onto $x_{max}$. However, if the curve suddenly shifts to another position (e.g., if the relationship between $x$ and $y$ changes, a phenomenon referred to as concept drift \citep{Gaber2005, Anagnostopoulos2012a}), the $\omega$ component of $y$ becomes different from zero again, forcing the feedback loop to move the value of $x_0$ towards the new value of $x_{max}$. Hence, the feedback loop enables one to sequentially hold on to the value of $x$ that maximizes the value of $y$. Interestingly, the feedback loop described above can work well even if the variable $y$ is affected by a high degree of noise.
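For concreteness, steps 1 to 4 can be sketched in a few lines of code (a minimal illustration, not the algorithms presented later in the paper; the quadratic test function, step size $\gamma$, amplitude, and noise level are all illustrative choices):

```python
import numpy as np

def lock_in_feedback(f, x0, A_x=0.5, omega=2*np.pi, gamma=0.1,
                     n_iter=50, n_samples=100, noise=0.1, seed=0):
    """One pass of steps 2-4 per iteration: oscillate x around x0,
    demodulate the noisy response at frequency omega, and move x0 by
    gamma times the signed in-phase amplitude A_y."""
    rng = np.random.default_rng(seed)
    # sample times covering exactly one oscillation period
    t = np.linspace(0.0, 2*np.pi/omega, n_samples, endpoint=False)
    for _ in range(n_iter):
        x = x0 + A_x*np.cos(omega*t)                      # step 2
        y = f(x) + noise*rng.standard_normal(n_samples)   # noisy response
        # step 3: signed in-phase amplitude of y at frequency omega
        # (positive when y oscillates in phase with x, negative otherwise)
        A_y = 2.0*np.mean(y*np.cos(omega*t))
        x0 = x0 + gamma*A_y                               # step 4
    return x0

# toy example: the (hidden) curve has its maximum at x = 2
x_hat = lock_in_feedback(lambda x: -(x - 2.0)**2, x0=0.0)
```

Each pass demodulates the noisy response at $\omega$, so the sign of $A_y$ tells the loop on which side of $x_{max}$ the current $x_0$ lies, and its shrinking magnitude produces the steps of decreasing size described above.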
To extract the signal at frequency $\omega$, in fact, one can make use of a commercial instrument called a \textit{lock-in amplifier}, which rejects all the components of the signals that do not beat at the frequency of interest. The algorithm used by a lock-in amplifier can of course be applied to digital (discrete timepoints) data as well. It is thus worth asking whether the approach adopted in a lock-in amplifier may be used in other contexts where, in the presence of a highly noisy set of data, one wants to maintain one variable locked to the value that maximizes the value of another. \begin{figure} \caption{Illustration of the lock-in principle used in physics and engineering to bring and maintain an independent, controllable variable $x$ onto the value $x_{max}$} \label{fig:intro} \end{figure} Tantalized by this opportunity, we propose here to use lock-in feedback (LiF) algorithms for the optimization of the price in (e.g.,) a rebate action. The idea is to present each customer with a different price, which is changed sinusoidally around a central value, causing the revenue to oscillate at the same frequency. As the customers take their purchasing decisions, a lock-in algorithm monitors the oscillations of the revenue at the price oscillation frequency. As in the feedback loop described above, the central value of the price is continuously adjusted until the revenue ceases to oscillate at that frequency. At this point, in fact, the revenue is maximum (price elasticity = 1). If an unexpected event moves the price elasticity curve, the algorithm will automatically push the central price towards the new maximizing value. Next to product pricing in rebate actions, many more examples could be conceived in the social sciences: \begin{itemize} \item In economics, firms might be able to manipulate the price $x$ of an offering and subsequently observe their revenue $y$.
Here a firm seeks to find the value of $x$ that maximizes $y$ \citep[for examples see][]{Kung2002, Jiang2011}. \item In industry, the outcome $y$ of a business process might depend on the amount of some raw material $x$ used in the process. \item In communication research, a communication professional might seek to find the length of an email message $x$ that leads to the highest number of clicks $y$ on a link in that message \citep{Ansari2005}. \item In medicine, a physician seeks to find the optimal dose $x$ of a medicine to maximize the health outcome $y$ of her patients \citep[see, e.g.,][]{Sapareto1984, Marschner2007}. \item In education, scholars might seek to select learning tasks, quantified by their difficulty $x$, that have the highest effect on the learning $y$ of their pupils. \end{itemize} In the above cases the functional form of $f(x)$ is often not known, the outcome $y$ is observed with noise, and likely the treatment values that maximize the outcome are subject to concept drift \citep{Gaber2005, Anagnostopoulos2012a} (thus, they change over time). Here we present a method to find $x_{max}$ which does not require an explicit specification of $f(x)$ or its derivatives, performs well in the face of noise, and is robust to concept drift. To prove the merits of LiF in such cases, we have performed an extensive numerical exercise that simulates the performance of LiF in a diverse range of situations, including ones where the observed signal is merely the binary choice of a consumer whether or not to adopt a product at a given (rebate) price, a scenario directly in line with the pricing challenges identified above. We show that, in the presence of the noise induced by the variance of the willingness to pay across the population of the customers entering the shop, our lock-in algorithm allows the seller to both determine and maintain the price that optimizes the revenue of the shop.
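A toy version of this purchase experiment can be written as follows (the normal willingness-to-pay distribution, its parameters, and the central price are our own illustrative assumptions, not the settings of the experiments reported later in the paper):

```python
import numpy as np

# Toy pricing experiment: each arriving customer buys one unit iff
# their willingness to pay (WTP) exceeds the offered price. The normal
# WTP distribution and all parameters are illustrative assumptions.
rng = np.random.default_rng(1)

def revenue_per_customer(price, n_customers=5000):
    wtp = rng.normal(10.0, 3.0, n_customers)
    return price * np.mean(wtp > price)

# oscillate the price around a central value of 8 (one period = 20 steps)
steps = np.arange(100)
prices = 8.0 + 1.0*np.cos(2*np.pi*steps/20)
revenues = np.array([revenue_per_customer(p) for p in prices])
# in-phase component of the revenue at the price oscillation frequency;
# for a unimodal revenue curve its sign indicates on which side of the
# revenue-maximizing price the central price lies
A_y = 2.0*np.mean(revenues*np.cos(2*np.pi*steps/20))
```

Because the revenue here is averaged over many noisy binary purchase decisions, the $\omega$-component of the observed revenue is exactly the quantity the lock-in step extracts.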
Furthermore, we demonstrate that if the price elasticity curve changes, the algorithm can detect the direction of the change and converge again to the optimal price. It has to be noted that finding optimal (according to some specified criterion) treatment values in (sequential) experiments is a well-known and well-studied challenge. This challenge is acknowledged in many branches of science and engineering \citep[see, e.g.,][]{allen2003experimental, bardsley1996optimal, kuck2006smc}. An often-researched topic is that of design optimization (DO), in which experimental designs are identified that lead to the smallest possible variances in the estimated model parameters \citep{burnetas1996optimal,McClelland1997}. More recently, an interest in adaptive design optimization (ADO) methods \citep{myung2009optimal,myung2013tutorial} and sequential experimentation methods has emerged: researchers are looking for effective ways to sequentially determine optimal treatment values in experiments as the experimental data is being collected \citep{zhang2010optimal}. Notably, work on Multi-Armed Bandit (MAB) problems \citep[e.g.,][]{Lai1987, Whittle1980, Scott2010, Bubeck2011, Yue2012} and stochastic optimization \citep[e.g.,][]{NIPS2011_4475} has led to efficient sequential sampling schemes for various experimental designs and optimization criteria. This paper, however, introduces a novel sequential sampling scheme for a specific sequential design problem: we examine the problem in which the treatment values are continuous (e.g., $x \in \mathbb{R}$) and the researcher seeks a treatment value $x_{max}$ at which the observed outcome $y$---which, at least in part, depends on $x$---obtains its maximal value. Thus we examine the situation in which an experimenter wants to find, by sequential experimentation, $x_{max} = \arg\max_{x} f(x)$, where $f(x)$ is a (possibly unknown) function of a well controllable variable $x$ and is likely observed with noise.
We focus on the simple case where $x$ is a scalar. In the remainder of the paper, we index sequential trials by $t \in \{1, \dots, \mathcal{T} \}$. Our ultimate aim is to describe an experimental method for manipulating $x_t$ (in discrete time) to find, sequentially, the value of $x$ that maximizes $y$. The current manuscript is structured as follows: first, we briefly review the literature on DO, MAB problems, and stochastic optimization to position our method. Next, we discuss LiF as a solution to the treatment optimization problem considered in this paper. LiF is based on a solution that is routinely implemented in physics and engineering applications and that relies on the idea of systematically changing the value of the treatment in time via so-called \textit{lock-in amplifier techniques} \citep{Scofield1994}. We introduce its basic principles in continuous time. Subsequently, we present two algorithms to use LiF in sequential experiments. We then, by simulation, compare the two algorithms, and examine the performance of LiF under several signal-to-noise ratios and in situations of concept drift. Furthermore, we examine the use of LiF in cases in which the observable outcome is discrete, which is, for example, the case in the optimization of prices as described above. Finally, we examine the empirical \emph{regret} -- the search cost of the algorithm compared to an algorithm which has full information -- of the proposed procedure and compare it to a standard solution in the MAB literature \citep{berry1985bandit}. \subsection{Treatment optimization methods} The problem of finding $x_{max}$ is treated in a number of branches in the experimental design and machine learning literature. The problem can be approached as an optimal design problem, in which the main aim is to design an experiment that efficiently provides us with information regarding $f(\vec{x})$ \citep[see, e.g.,][]{o2003gentle, Myung2009}.
Often, in the DO literature, experiments are treated statically, and the functional form of the data-generating function is assumed known: the remaining question is to determine the optimal treatments given a fixed size of the experiment and an assumed relationship to precisely estimate the parameters of interest. Recently, \citet{myung2013tutorial} introduced an advanced method of DO into the psychology literature called Adaptive Design Optimization (ADO). The aim of ADO is to create adaptive experiments which are optimized to distinguish between competing explanations of the data \citep{myung2009optimal}. However, in this literature the main aim is to find treatment values to efficiently estimate parameters given a number of model assumptions. Instead, our focus is on efficiently finding treatment values which maximize some observable outcome of the experiment. Sequentially finding optimal treatments, where optimal is defined in terms of observed outcomes, is explicitly studied in the MAB literature \citep{berry1985bandit}. In this problem specification researchers consider policies $\mathcal{P}$ which describe how to select actions $a \in \mathcal{A}$ (the treatment values) at different times $t$ where the aim is to maximize the cumulative reward $R(T) = \sum_{t=1}^T r_t$ \citep{Bubeck2011}. The reward is assumed to be a function, possibly with noise, of the actions. Many specifications of the MAB problem exist: researchers have considered independent treatments (the traditional $k$-armed bandit problem \citep{Whittle1980}), related treatments, continuous treatments, etc. \citep{Audibert2009, Bubeck2011b}. The MAB problem and its generalization, the contextual MAB problem \citep{Li2010a, Beygelzimer2011}, present an active area of research in the machine learning literature.
The literature on stochastic optimization with bandit feedback \citep{NIPS2011_4475, agarwal2010optimal} considers the problem of finding the optimal value of continuous treatments \citep{flaxman2005online}. Of special interest for the current proposal are derivative-free (or gradient-free) methods in which the gradient of the function (which is needed by, e.g., (stochastic) gradient descent methods) is assumed unknown and is itself approximated during the sequential experiment \citep{shamir2012complexity}. In this paper we present a derivative-free method to perform stochastic optimization with bandit feedback. The presented method is well-suited for practical use in sequential experiments due to its ease of implementation: in the current paper we provide several algorithms for performing the optimization in real-life settings. Before presenting our novel sequential approach to solving the continuous treatment optimization problem, we first introduce its theoretical background assuming that the treatment does not vary in discrete sequential steps, but rather can be varied continuously (in continuous time). \section{Finding the maximum of a curve with a lock-in algorithm} In this section we detail the basic principles behind LiF assuming continuous time in which $x$ can be manipulated. Let us assume that $y$ is a continuous function $f$ of $x$: $y=f(x)$. Let us further assume that $x$ oscillates with time according to: \begin{align} \label{oscillation} x(t) &=x_0+A\cos\left( \omega t \right) \end{align} where $\omega$ is the angular frequency of the oscillation, $x_0$ its central value, and $A$ its amplitude. For relatively small values of $A$, Taylor expanding $f(x)$ around $x_0$ to the second order, one obtains: \begin{align} \label{taylor} \begin{split} y(x(t)) &=f(x_0)+\left( x_0+A \cos\left( \omega t \right) - x_0 \right) \left( \left.
\frac{\partial f}{\partial x}\right|_{x=x_0} \right) \\ & +\frac{1}{2}\left( x_0+A \cos\left( \omega t \right) - x_0 \right)^2 \left( \left. \frac{\partial^2 f}{\partial x^2}\right|_{x=x_0} \right) \end{split} \end{align} which can be simplified to: \begin{align} \label{evidence} \begin{split} y(x(t)) &=k+A \cos\left(\omega t\right)\left( \left. \frac{\partial f}{\partial x}\right|_{x=x_0}\right) \\ & +\frac{1}{4}A ^2\cos\left(2\omega t\right)\left(\left. \frac{\partial^2 f}{\partial x^2}\right|_{x=x_0}\right) \end{split} \end{align} where $k=f(x_0)+1/4 A^2 \left(\left. \partial^2 f / \partial x^2 \right|_{x=x_0} \right)$. It is thus evident that, for small oscillations, $y$ becomes the sum of three terms: a constant term, a term oscillating at angular frequency $\omega$, and a term oscillating at angular frequency $2\omega$. Suppose that we can actively manipulate $x$ and measure $y$, and that $f$ is continuous and has only one maximum and no minimum.\footnote{For simplicity of exposition we only consider these well-behaved functions in this paper.} Further suppose that one is interested in finding the value $\arg\max_{x} f(x)$, which we denote by $x_{max}$, and that our measurements of $y$ contain noise \begin{align} \label{real} y(t) &= f(x(t))+ \epsilon_t \end{align} where $\epsilon$ denotes the noise, $\epsilon \sim \pi()$ for some probability density function $\pi$, and $\mathbb{E}[\epsilon | x] = 0$. Following the scheme used in physical lock-in amplifiers \citep[see, e.g.,][]{Scofield1994}, we multiply the observed $y$ variable by $\cos\left( \omega t \right)$. Using eq. \ref{evidence} and eq. \ref{real}, one obtains: \begin{align} \begin{split} \label{complete} y_\omega(t) &= \cos\left(\omega t \right) \biggl[ k+A \cos\left(\omega t\right)\left( \left. \frac{\partial f}{\partial x}\right|_{x=x_0}\right) \\ & +\frac{1}{4}A ^2\cos\left(2\omega t\right)\left(\left. 
\frac{\partial^2 f}{\partial x^2}\right|_{x=x_0}\right) + \epsilon \biggr] \end{split} \end{align} where $y_\omega$ is the value of $y$ after it has been multiplied by $\cos \left( \omega t \right)$. Eq. \ref{complete} can be written more compactly as: \begin{align} \label{rewrite} \begin{split} y_\omega & = \frac{A}{2}\left( \left. \frac{\partial f}{\partial x}\right|_{x=x_0} \right) \\ & + k_\omega \cos\left( \omega t\right) + k_{2\omega}\cos\left( 2\omega t \right) \\ & +k_{3\omega}\cos\left(3\omega t \right)+ \epsilon \cos\left(\omega t \right) \end{split} \end{align} where \begin{align} k_\omega & = k+A^2/8\left(\left. \partial^2 f / \partial x^2 \right|_{x=x_0} \right) \\ k_{2\omega} & =A/2\left(\left. \partial f / \partial x \right|_{x=x_0} \right) \\ k_{3\omega} & =A^2/8\left(\left. \partial^2 f / \partial x^2 \right|_{x=x_0} \right). \end{align} Integrating $y_\omega$ over a time $T=\frac{2\pi N}{\omega}$, where $N$ is a positive integer and $T$ denotes the time needed to integrate $N$ full oscillations, all oscillating terms vanish and one obtains: \begin{align} \label{rewrite2} y_\omega^{*}=\frac{TA}{2}\left( \left. \frac{\partial f}{\partial x}\right|_{x=x_0} \right)+\int_0^T \epsilon \cos\left(\omega t \right) dt \end{align} Depending on the noise level, one can tailor the integration time $T$ in such a way as to reduce the second term on the right-hand side of eq. \ref{rewrite2} to negligible levels, effectively averaging out the noise in the measurements. Under those circumstances, $y_\omega^{*}$ provides a direct measurement of the value of the first derivative of $f$ at $x=x_0$. The above method thus yields quantitative information regarding the first derivative of $f$ at $x=x_0$, providing, in this way, a logical update strategy for $x_0$: if $y_\omega^{*}<0$, then $x_0$ is larger than the value of $x$ that maximizes $f$; likewise, if $y_\omega^{*}>0$, $x_0$ is smaller than the value of $x$ that maximizes $f$. 
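The demodulation step can be verified numerically. The sketch below is a minimal illustration of our own (the quadratic test function and all parameter values are arbitrary choices, not taken from the simulation studies): multiplying noise-free samples of $y$ by $\cos(\omega t)$ and averaging over one full period recovers $\frac{A}{2} \left.\frac{\partial f}{\partial x}\right|_{x=x_0}$ exactly, because for a parabola the second-order Taylor expansion is exact.

```python
import numpy as np

# Lock-in estimate of f'(x0): sample y over one full oscillation period,
# multiply by cos(omega*t), and average.
f = lambda x: -2.0 * (x - 5.0) ** 2       # test function with maximum at x = 5
x0, A, T = 3.0, 1.0, 100                  # centre, amplitude, samples per period
omega = 2.0 * np.pi / T
t = np.arange(T)
y = f(x0 + A * np.cos(omega * t))         # noise-free measurements
y_star = np.mean(y * np.cos(omega * t))   # demodulated average ~ (A/2) f'(x0)
print(y_star)                             # f'(3) = 8, so y_star = (A/2)*8 = 4
```

The oscillating terms at $\omega$, $2\omega$, and $3\omega$ sum to zero exactly over the full period, leaving only the constant term $\frac{A}{2}f'(x_0)$.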
Thus, based on the oscillation observed in $y_\omega$ we are now able to move $x_0$ closer to $x_{max} = \arg\max_{x} f(x)$ using an update rule $x_0 := x_0 + \gamma y_\omega^{*}$, where $\gamma$ quantifies the learning rate of the procedure. Hence, we can set up a feedback loop that allows us to keep $x_0$ close to $x_{max}$, even if $f(x)$ changes over time. Note that, by multiplying $y$ by $\cos\left(2\omega t\right)$ and using a similar approach to the one described above to extract the amplitude of the oscillation of $y$ at frequency $2\omega$, one can also measure the second derivative of the function $f$ at $x=x_0$. This property can be useful when, for instance, $f(x)$ is known to be an exact parabola: one can then not only derive the direction of the step towards the maximum, but also work out the exact step size (see Appendix \ref{app:exact}). \section{Algorithm for LiF in discrete time} In practical terms, measurements can never run in continuous mode. Therefore, we now present an algorithm for LiF in discrete time. To simplify notation, we will index sequential measurements by $y_t$, where $t=1, \dots, \mathcal{T}$ and $\mathcal{T}$ denotes the length---possibly infinite---of the experiment that is run to find $\arg\max_{x} f(x)$. In discrete time we can use the same procedure as above, in which we start with $x_0$ and for each sample oscillate around $x_0$ with a known frequency $\omega$ and known amplitude $A$: \begin{align} x_t = x_0 + A \cos{\omega t} \end{align} which will result in measurements given by \begin{align} y_t = f(x_0 + A \cos{\omega t}) + \epsilon_t \end{align} On the basis of the arguments reported above, we can now implement a feedback loop that iteratively adjusts the value of $x_0$ until it reaches $x_{max}$. After that, if the function $f$ changes, the loop can follow the value of $x$ to the new maximizing position and thus stay ``locked''. 
The procedure is similar to that given in Equations \ref{rewrite} and \ref{rewrite2}: we first multiply the outcome $y_t$ by $\cos(\omega t)$ and subsequently integrate out the noise term (summing in the discrete case). In the following sections we present two possible implementations of LiF in discrete time for use in sequential experiments. \subsection{LiF-\rom{1}: Batch updates of $x_0$} Our first implementation of LiF (denoted LiF-\rom{1}) is presented in Algorithm \ref{Alg:LiF1}. In this implementation we sum the observations $y_t$, multiplied by $\cos(\omega t)$, for a batch period of length $T$, after which we update $x_0$. The variable $y_{\omega}^{\Sigma}$ contains a running sum of $y_t \cos{\omega t}$ over $t$ that is used for the integration. \begin{algorithm} \caption{Implementation of LiF-\rom{1} for single variable maximization in a data stream using a batch approach.} \label{Alg:LiF1} \begin{algorithmic} \REQUIRE $x_0$, $A$, $T$, $\gamma$, $y_{\omega}^{\Sigma} = 0$ \STATE $\omega = \frac{2 \pi}{T}$ \FOR{$t=1, \dots, \mathcal{T}$} \STATE $x_t = x_0 + A \cos{\omega t}$ \STATE $y_t = f(x_0 + A \cos{\omega t}) + \epsilon_t$ \STATE $y_{\omega}^{\Sigma} = y_{\omega}^{\Sigma} + y_t \cos{\omega t}$ \IF{$(t \bmod T == 0)$} \STATE $y_{\omega}^{*} = y_{\omega}^{\Sigma} / T$ \STATE $x_0 = x_0 + \gamma y_{\omega}^{*} $ \STATE $y_{\omega}^{\Sigma} = 0$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} The tuning parameters for LiF-\rom{1}, which should be set by the experimenter, are $x_0$, $A$, $T$, and $\gamma$. Below we describe some general criteria on which these choices may be based: \begin{itemize} \item It is advised to set $x_0$ as close as possible to $x_{max}$. The choice can only be based on the available information on $f$: the more accurate this information, the closer the initial $x_0$ can be set to $x_{max}$, and the faster the convergence of the loop to $x_{max}$. 
\item The amplitude $A$ affects the costs of the search procedure, because a large $A$ implies querying a large range of $x$ values with (possibly) low resulting $y$ values. However, $A$ also influences the learning speed: a very small $A$ leads to small update steps, while a large value of $A$ might lead to a value of $\gamma y_{\omega}^{*}$ that ``overshoots'' $x_{max}$. \item The integration time $T$ affects the variability of the update of $x_0$, with larger integration times leading to a smoother update but slower convergence. \item The learning rate $\gamma < 1$ determines the step size at each update of $x_0$. It can be interpreted, and tuned, akin to learning rates in, for instance, stochastic gradient descent methods \citep{Poggio2011}. \end{itemize} \subsection{LiF-\rom{2}: Continuous updates of $x_0$} For some applications the batch updates of $x_0$ -- as implied by the continuous time analysis and defined in Algorithm \ref{Alg:LiF1} -- might not be feasible. Algorithm \ref{Alg:LiF2} presents a modified version of LiF (denoted LiF-\rom{2}) in which $x_0$ is updated at every observation. LiF-\rom{2} starts by filling up a buffer of length $T$, which we denote by the vector $\vec{y}_{\omega} = \{NA_1, \dots, NA_T\}$, after which each observation leads to an update of $x_0$. In the algorithm description the values $y_{t-T}, \dots, y_t$ are stored in the vector $\vec{y}_{\omega}$. By defining the learning rate as $\frac{\gamma}{T}$, the tuning parameters in LiF-\rom{2} are the same as those discussed for LiF-\rom{1}. 
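The streaming variant can be transcribed directly into Python. The sketch below is our own illustrative code (the noise-free parabola and the tuning values match the first simulation study, but any choices would do); the buffer always covers exactly one oscillation period, so the demodulated sum approximates $\frac{A}{2}f'(x_0)$ at every step.

```python
import math
from collections import deque

def lif2(f, x0, A=1.0, T=100, gamma=0.1, steps=10000):
    """Streaming LiF (LiF-II): update x0 after every observation once a
    buffer of one full oscillation period (T samples) has been filled."""
    omega = 2.0 * math.pi / T
    buf = deque(maxlen=T)                      # last T values of y_t * cos(omega*t)
    for t in range(1, steps + 1):
        y_t = f(x0 + A * math.cos(omega * t))  # query the outcome at the oscillated x
        buf.append(y_t * math.cos(omega * t))
        if t > T:                              # buffer full: demodulate and update
            y_star = sum(buf) / T              # ~ (A/2) f'(x0)
            x0 += (gamma / T) * y_star
    return x0

x_hat = lif2(lambda x: -2.0 * (x - 5.0) ** 2, x0=-5.0)
print(x_hat)  # converges towards the maximum at x = 5
```

The `deque` with `maxlen=T` implements the `push` operation of Algorithm \ref{Alg:LiF2}: appending a new value silently evicts the oldest one.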
\begin{algorithm} \caption{Implementation of LiF-\rom{2} for single variable maximization using continuous updates.} \label{Alg:LiF2} \begin{algorithmic} \REQUIRE $x_0$, $A$, $T$, $\gamma$, $\vec{y}_{\omega} = \{NA_1, \dots, NA_T\}$ \STATE $\omega = \frac{2 \pi}{T}$ \FOR{$t=1, \dots, \mathcal{T}$} \STATE $x_t = x_0 + A \cos{\omega t}$ \STATE $y_t = f(x_0 + A \cos{\omega t}) + \epsilon_t$ \STATE $\vec{y}_{\omega} = \texttt{push}(\vec{y}_{\omega} , y_t \cos{\omega t})$ \IF{$(t > T)$} \STATE $y_{\omega}^{*} = (\sum \vec{y}_{\omega}) / T$ \STATE $x_0 = x_0 + \frac{\gamma}{T} y_{\omega}^{*} $ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \section{Simulation study 1: Comparison of batched and streaming LiF and examination of tuning parameters} \label{sec:1sims} In this section we study, by simulation, the differences between LiF-\rom{1} and LiF-\rom{2}, and the effects of the tuning parameters $A$, $T$, and $\gamma$ in a situation in which $y=f(x)$ is measured without noise. \begin{figure} \caption{Examination of the effect of the LiF tuning parameters $\gamma$ and $T$ for $A=1$. Displayed are the results for LiF-\rom{1} and LiF-\rom{2}.\label{fig1}} \end{figure} \begin{figure} \caption{Examination of the effect of tuning parameters $A$ and $T$ for $\gamma=.1$. Displayed are the results for LiF-\rom{1} and LiF-\rom{2}.\label{fig2}} \end{figure} Figure \ref{fig1} presents the performance of both LiF-\rom{1} and LiF-\rom{2} for data generated using \begin{align} \label{generate:model} f(x) & = -2(x-5)^2 + \epsilon \end{align} where $\epsilon \sim \mathcal{N}(0,0)$ (i.e., the noise-free case) and obviously $x_{max} = 5$. The figure displays the performance of LiF for $\mathcal{T}=10000$ using the following tuning parameter settings: \begin{itemize} \item $x_0 = -5$ \item $T \in \{10, 100, 1000\}$ \item $A = 1$ \item $\gamma \in \{.01, .1, .5, .9\}$ \end{itemize} The rows of Figure \ref{fig1} (top to bottom) present decreasing values of $\gamma$, while the columns (left to right) present increasing values of $T$. We fix $A=1$. 
Each panel presents the value of $x_0$ during the data stream as selected using LiF-\rom{1} (black solid line) and LiF-\rom{2} (gray dotted line). It is clear that LiF can ``overshoot'' the maximum for values of $\gamma$ that are too high (top two rows). This happens for both LiF-\rom{1} and LiF-\rom{2}, although LiF-\rom{1} seems more robust. For small values of $\gamma$ the performance of the algorithms is very similar, and increases in the integration window $T$ merely smooth the updating procedure. In Figure \ref{fig2} the results are plotted for the same setup, but this time we vary $A \in \{.1, 1, 2, 10\}$, while we fix $\gamma = .1$. Here it is clear that for large values of $A$ LiF-\rom{1} has a tendency to become unstable (see top rows), while the streaming LiF-\rom{2} is much more robust to an erroneous selection of $A$. Very small choices of the amplitude $A$ lead to very slow updates of $x_0$ in both cases. Again, increases in $T$ merely smooth the process. The simulations give an impression of the importance of the tuning parameters $x_0$, $A$, $T$, $\gamma$ and their relationships. In the remainder of this paper we will focus on the evaluation -- through simulation -- of the performance of LiF-\rom{2} in cases of noise and concept drift. \section{Simulation study 2: Effects of noise} \begin{figure} \caption{Examination of the effect of different levels of noise $\sigma^2 \in \{10,100,1000,10000\}$ on the performance of LiF-\rom{2}.\label{fig3}} \end{figure} To examine the impact of (measurement) noise on the performance of LiF-\rom{2} we repeat the simulations as described in Simulation Study 1, using the data generating model described by Equation \ref{generate:model} with $\epsilon \sim \mathcal{N}(0,\sigma^2)$ and $\sigma^2 \in \{10,100,1000,10000\}$. We choose tuning parameters $x_0=-5$, $A=1$, $T=100$, $\gamma=.1$. 
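The robustness to noise follows from the demodulation itself: the noise enters the estimate only through $\frac{1}{T}\sum_t \epsilon_t \cos(\omega t)$, whose standard deviation shrinks as $\sigma/\sqrt{2T}$. A quick numerical check, of our own construction (seed and parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -2.0 * (x - 5.0) ** 2
x0, A, T = 3.0, 1.0, 10000                  # long integration window (one period)
omega = 2.0 * np.pi / T
t = np.arange(T)
sigma = 10.0                                # noise sd, i.e. sigma^2 = 100
y = f(x0 + A * np.cos(omega * t)) + rng.normal(0.0, sigma, T)
y_star = np.mean(y * np.cos(omega * t))     # still ~ (A/2) f'(3) = 4
print(y_star)  # close to 4; sd of the error is sigma/sqrt(2T) ~ 0.07
```

Even with noise whose standard deviation dwarfs the oscillation amplitude, the averaged estimate remains accurate, which is exactly what Figure \ref{fig3} illustrates for the full feedback loop.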
In contrast to the simulations presented in Section \ref{sec:1sims}, we now repeat the procedure $m=100$ times: Figure \ref{fig3} presents the average $x_0$ over the $100$ simulation runs as well as the $95\%$ confidence bounds. From Figure \ref{fig3} it is clear that LiF-\rom{2} performs very well in the face of noise. \section{Simulation study 3: Performance of LiF-\rom{2} in cases of concept drift} One of the advantages of Lock in Feedback over other methods of finding $x_{max}$ is that LiF can also be used to find the maximum of a function in cases of concept drift \citep{Gaber2005}: even when $f(x)$ changes over time, LiF provides a method to keep the value of the treatment $x$ close to $x_{max}$. To illustrate this latter advantage of LiF-\rom{2} we set up a simulation using the following data generating model: \begin{align} \label{generate:model2} f(x,t) & = -2( (x-.0025t) - 5)^2 + \epsilon \end{align} where the $(x-.0025t)$ term ensures that during the stream, running from $t=0$ to $t=10^4=\mathcal{T}$, the value of $x_{max}$ moves from $5$ to $30$. We choose $x_0=-20$ (note the different starting position compared to the previous simulations), $A=1$, $T=100$, $\gamma=.1$, and $\sigma^2 = 10$. We investigate the performance of LiF-\rom{2} in this case of concept drift. \begin{figure} \caption{Illustration of LiF in the case of concept drift. As the true maximum shifts (top panel), LiF is able to follow the maximum and keep $x_0$ close to $x_{max}$.\label{fig:concept}} \end{figure} Figure \ref{fig:concept} presents in the top panel $y=f(x,t)$ for distinct values of $t \in \{0, 1000, \dots, 10000\}$ in different shades of grey. The concept drift is illustrated by the different locations of the parabola. Superimposed in blue is the value of $x_0$ as selected by LiF-\rom{2}. In the bottom panel the value of $x_0$ as a function of the length of the stream is presented. 
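The tracking behavior can be reproduced in a few lines. The sketch below is our own illustration of the setup in Equation \ref{generate:model2} (seed chosen arbitrarily); with $\gamma=.1$ and $T=100$ the feedback gain is large enough to follow a drift of $.0025$ per step with only a small lag.

```python
import math, random
from collections import deque

random.seed(1)
x0, A, T, gamma = -20.0, 1.0, 100, 0.1
sigma = math.sqrt(10.0)                     # noise sd, sigma^2 = 10
omega = 2.0 * math.pi / T
buf = deque(maxlen=T)
for t in range(1, 10001):
    target = 5.0 + 0.0025 * t               # the drifting maximum of f(x, t)
    y_t = -2.0 * ((x0 + A * math.cos(omega * t)) - target) ** 2 \
          + random.gauss(0.0, sigma)
    buf.append(y_t * math.cos(omega * t))
    if t > T:
        x0 += (gamma / T) * (sum(buf) / T)  # streaming LiF update
print(x0)  # ends near the final maximum x_max = 30, with a small tracking lag
```

The residual lag is the steady state of the feedback loop: the drift per step divided by the effective per-step gain.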
It is clear that LiF-\rom{2} quickly finds $x_{max}$ and follows the maximum as it moves during the stream. \section{Simulation study 4: Dichotomous observations} In the introduction we described, as a use case of our proposed method, the optimization of sales prices to maximize revenue. This specific case presents a novel problem, since the dependent variable $y$, encoding the purchase decision of a customer after a price has been pitched, is dichotomous, and the actual outcome of interest---if the firm aims to maximize its revenue---is a function of the observable and the manipulated variable: $r(t)= \sum_{i=1}^{t} y_i x_i$. Since $y_i \in \{0,1\}$, the signal $r(t)$ used as an optimization criterion contains a different type of noise: while the expected value $Pr(y=1 | x) \times x$ of an offer could be approximated, the data itself contains non-zero values only when the decision is made to purchase a product. To empirically examine the performance of LiF in such a setting we set up a simulation study in which we assume that the data generating model looks as follows: \begin{align} \label{eq:logit} y_t & \sim \texttt{Bernoulli}\left(\frac{1}{1 + e^{-(10-x_t)}}\right)\\ r_t & = y_t x_t \end{align} Intuitively, the above specification indicates that the probability that a consumer chooses to buy a product decreases as the price $x$ of the product increases, while the (expected) revenue is computed using the probability of a purchase given a specific price multiplied by that price. Given this setup, the (expected) $x_{max}$ is approximately $8$. Figure \ref{fig:price} shows the performance of LiF-\rom{2} for two different starting values, $x_0 = 4$ and $x_0 =15$, using the same set of tuning parameters as those used in Study 3 ($\mathcal{T}=10^4$, $A=1$, $T=100$, $\gamma=.1$). The only change in the algorithm compared to the earlier simulations is that $r_t = y_t x_t$ is integrated (summed) over, instead of using the observed $y_t$ directly. 
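The claim that the expected-revenue maximum lies near $x=8$ can be checked directly (our own verification, not part of the original simulations): the expected revenue under Equation \ref{eq:logit} is $x \cdot Pr(y=1 \mid x) = x/(1+e^{-(10-x)})$, and a simple grid search locates its maximum.

```python
import numpy as np

# Expected revenue for the Bernoulli purchase model:
# E[r | x] = x * Pr(y = 1 | x) = x / (1 + exp(-(10 - x)))
x = np.arange(0.0, 20.0, 0.01)
expected_revenue = x / (1.0 + np.exp(-(10.0 - x)))
x_max = x[np.argmax(expected_revenue)]
print(round(x_max, 2))  # just above 8, consistent with "approximately 8"
```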
Also in this case, LiF finds $x_{max}$ fairly quickly (in under 6000 iterations). It has to be noted that too high a starting value, and thereby a very low $Pr(y = 1 | x_0)$, might lead to a failure to find $x_{max}$, since LiF then gets stuck in a local ``maximum'': for very high values of $x$ the revenue $r$ will always be $0$. \begin{figure} \caption{Use of LiF to find the revenue maximizing sales price for a firm: example of a setup in which the observed $y \in \{0,1\}$.\label{fig:price}} \end{figure} \section{Simulation study 5: Empirical Regret} The previous studies show that LiF is effective in finding the value of $x_{max}$. However, the oscillation that is introduced clearly adds search costs to the procedure: LiF continuously runs experiments with a certain amplitude in its variation of $x$ to find $x_{max}$. In the previous simulations these search costs have not been considered; hence, while these simulations demonstrate that LiF finds the value of $x_{max}$, they are uninformative regarding the costs of the procedure. To address this problem we run another simulation study in which we monitor the empirical \emph{regret} \begin{align} \mathcal{R}(t) = \sum_{i=1}^t \left( f(x_{max}) - f(x_i) \right) \end{align} of the procedure. Thus, we compare over time in the data stream how much ``is lost'' when using LiF as compared to always selecting the exact right value of $x$ that maximizes the outcome if the data generating process had been known. We use the exact setup of Simulation Study 4 (the same data generating model and tuning parameter settings), but we increase $\mathcal{T}$ to $10^5$. Also, because of the noise and our interest in LiF as a general procedure, not merely in one specific attempt, we replicate the simulation $M=100$ times. 
To give insight into the performance of LiF when examining the regret of the procedure, we contrast the use of LiF not only with selecting the optimal value, but also with two other sequential experimentation schemes: \begin{itemize} \item \emph{$\epsilon$-first:} in this approach we run a limited time (up to $n=1000$) experiment in which we randomly sample values of $x$ uniformly between 0 and 20. Subsequently, we fit a simple logistic regression modeling $Pr(y=1|x) = \mathcal{L}(\beta_0 + \beta_1 x)$, where $\mathcal{L}()$ denotes the logistic (inverse logit) link (see also Equation \ref{eq:logit}), and determine $x_{\epsilon} = \arg\max_{x} \mathcal{L}(\beta_0 + \beta_1 x) x$. The remaining $\mathcal{T} - n$ observations in the stream are allocated to $x_{\epsilon}$. \item \emph{Bootstrap Thompson Sampling (BTS)}: in this sequential experimentation scheme we again fit a simple logistic regression to estimate $Pr(y=1|x)$. We use Stochastic Gradient Descent to update the parameters of the model at each time point during the data stream. Furthermore, we maintain $J=100$ models, each using an online half-sampling bootstrap, to perform bootstrap Thompson sampling \citep[see for details of this sequential allocation scheme][]{Kaptein}. This gives $J$ different estimates of the model parameters ($\{\beta_0^j, \beta_1^j\}$). We then randomly uniformly select $j'$ out of $j=1, \dots, J$ and select treatment $x_{bts} = \arg\max_{x} \mathcal{L}(\beta_0^{j'} + \beta_1^{j'} x) x$. This bootstrapped sampling scheme quantifies the uncertainty in the model estimates and uses this directly to balance exploration (querying new values of $x$ to learn more about the data-generating model) and exploitation (selecting the value of $x$ which one believes leads to the highest outcomes). 
\end{itemize} Note that we choose random starting points of the parameter values for BTS that are relatively close to the true values, and that the functional form of the model that is used is the same as the true data generating model. Hence, this latter scheme is expected to do very well on the current problem, since it incorporates a lot of knowledge regarding the data-generating function that is not accessible to LiF. Figure \ref{fig:regret} shows the performance of LiF-\rom{2} -- in terms of average regret -- compared to $\epsilon$-first and BTS. It is clear that $\epsilon$-first does not perform very well: logically, during the experimentation stage $t=1, \dots, n$ this method incurs a large regret. However, since the probability that the true $x_{max}$ is found exactly in the experiment is smaller than $1$, (expected) linear regret is also incurred after the experiment period. BTS performs much better in the long run: the regret is not linear but rather seems to be $\mathcal{O}(\sqrt{t})$, which is the proven minimal regret bound known for this problem \citep{NIPS2011_4475}. Early on, LiF performs very well on this problem; LiF is very efficient in finding $x_{max}$. It is even more efficient than BTS for small $t$, despite the fact that in the current setup BTS is heavily favored by using the correct form of the data generating model, something which is very unlikely in practice. However, in the long run the regret of LiF is linear in $t$. This latter fact is easily explained: due to the continuous oscillation of $x$ obtained by adding $A \cos{\omega t}$, LiF keeps exploring the space and thus keeps incurring additional costs. Even if $x_{max}$ has been found, these search costs grow linearly with $t$. 
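The linear growth rate can be made explicit with a back-of-the-envelope calculation of our own, under the local quadratic approximation $f(x)\approx f(x_{max})-\alpha(x-x_{max})^2$: once $x_0 = x_{max}$, each query still pays the average oscillation cost \begin{align*} f(x_{max}) - \overline{f\left(x_{max}+A\cos\left(\omega t\right)\right)} = \alpha A^2 \, \overline{\cos^2\left(\omega t\right)} = \frac{\alpha A^2}{2}, \end{align*} where the bar denotes the average over one oscillation period. The regret therefore grows by approximately $\alpha A^2/2$ per step in the locked-in state, which makes explicit why shrinking $A$ over time is the natural route to sublinear regret.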
\begin{figure} \caption{Overview of the (mean) empirical regret of three possible sequential allocation schemes.} \label{fig:regret} \end{figure} This simulation suggests that, in the bandit feedback case, LiF can be improved by gradually decreasing the amplitude of the oscillation: if $A$ can be decreased as a function of (e.g.) the approximated gradient as well as the current time in the stream, the exploration behavior of LiF can be systematically decreased over time in the stream. However, this would make LiF less sensitive to concept drift, which might in practice be infeasible. Hence, we currently regard the linear regret incurred by LiF as an exploration cost necessary to ensure its robustness in a changing environment. \section{Discussion and Future work} In this paper we presented Lock in Feedback as a method to find $\arg\max_{x} f(x)$ through sequential experiments. The method is appealing since it a) does not require the functional form of $f(x)$ to be known to derive its maximum, b) performs well in situations in which measurements are obtained with large noise, and c) allows following the maximum of a function even if that function changes over time. We have presented the basic mathematical arguments behind LiF, demonstrating how known (or imposed) oscillations in $x$ can be used to determine the derivative(s) of $f(x)$, which can subsequently be used to find $\arg\max_{x} f(x)$. Next, we detailed two possible implementations of LiF and examined their performance for a variety of tuning parameter settings. We then showed that a streaming version of LiF is robust to both noise and concept drift. We believe LiF can be of use in many sequential experimentation problems in which the independent variable is continuous; in the introduction we discussed pricing, medication dosing, and the selection of items by their difficulty as possible examples. LiF is extremely easy to implement, and very robust to noise and concept drift. 
We thus hope that LiF can be a valuable tool in treatment optimization in sequential experiments. However, the current exposition of LiF also raises a number of questions. For example, the ability to use LiF for problems of higher dimensions, e.g., where $y = f(\vec{x})$ is a function of multiple variables, has not been explored here, even though this extension is relatively easily made. Also, the suggested decrease of the amplitude in the bandit setting (Simulation Study 5) needs further scrutiny and begs for an analytical treatment of the use of LiF in stochastic optimization with bandit feedback \citep[see, e.g.,][]{NIPS2011_4475}. Finally, the currently proposed version of LiF allows one to find local maxima (or minima), but convergence to a global maximum is not guaranteed. Throughout this paper we have been considering unimodal functions, which might, in practical applications, be too stringent an assumption. In this paper we have demonstrated the use of LiF only in cases where $x$ is scalar. However, when $x$ is a vector, a very similar approach can be used to find the maximum of the function $f(\vec{x})$ in more than one dimension. In the two-dimensional case LiF can be extended by oscillating both elements of $\vec{x}$ at different frequencies: \begin{align*} x_{1,t} = x_{1,0} + A_1 \cos{\omega_1 t} \\ x_{2,t} = x_{2,0} + A_2 \cos{\omega_2 t} \end{align*} After oscillating both elements of $\vec{x}$ we observe $y_t = f(x_{1,t}, x_{2,t})$, and we can obtain information regarding the gradient by separately computing: \begin{align*} y_{1,\omega} = y_t \cos{\omega_1 t} \\ y_{2,\omega} = y_t \cos{\omega_2 t} \end{align*} This simple extension allows for the use of LiF in higher dimensions. However, besides the constraint that $\omega_1$ and $\omega_2$ should not be multiples of each other, the effects of the tuning parameters and the performance of this higher dimensional version of LiF need to be further examined. 
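The two-frequency idea can be checked numerically. The sketch below is our own illustration (the additively separable quadratic test function and the choice of 3 and 7 cycles per window are arbitrary assumptions): demodulating $y$ at each frequency recovers the corresponding partial derivative, because over a window containing full periods of both oscillations all cross terms average to zero.

```python
import numpy as np

# Two-frequency lock-in: oscillate x1 and x2 at different frequencies and
# demodulate y at each frequency to recover the two partial derivatives.
N = 100                                        # window = full periods of both
t = np.arange(N)
w1, w2 = 2 * np.pi * 3 / N, 2 * np.pi * 7 / N  # 3 and 7 cycles per window
A1 = A2 = 1.0
x1 = 1.0 + A1 * np.cos(w1 * t)                 # centre x1_0 = 1
x2 = 4.0 + A2 * np.cos(w2 * t)                 # centre x2_0 = 4
y = -2.0 * (x1 - 3.0) ** 2 - (x2 - 7.0) ** 2   # maximum at (3, 7)
g1 = 2.0 / A1 * np.mean(y * np.cos(w1 * t))    # ~ df/dx1 at (1,4): -4*(1-3) = 8
g2 = 2.0 / A2 * np.mean(y * np.cos(w2 * t))    # ~ df/dx2 at (1,4): -2*(4-7) = 6
print(g1, g2)
```

Note that 7 is not a multiple of 3 and neither frequency coincides with a harmonic or intermodulation product of the other within the window, which is the condition the text alludes to.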
Our proposed LiF algorithm, like many other procedures for function maximization, is prone to uncovering local maxima instead of global maxima. A logical solution to this problem would be to consider multiple starting points $\vec{x}_0$ which are oscillated independently (possibly alternating within a data stream). Effectively, this would allow the experimenter to find multiple maxima. By evaluating the value of $y$ one could decide on the best possible solution, or one could pool the results of multiple alternating threads to update each of them. Such approaches, and their robustness to the existence of local maxima, need further scrutiny. \section*{References} \section*{Algorithm for finding the exact maximum of a parabola using the second order approximation.} \label{app:exact} Suppose that the curve $y=f(x)$ is a parabola: \begin{align*} y=-\alpha(x-x_0)^2+\gamma \end{align*} Clearly, $f(x)$ has a maximum for $x=x_0$. Furthermore, the second derivative is always equal to $-2\alpha$, regardless of the value of $x$. Interestingly, the value of $\alpha$ can be easily extracted from the data accumulated during the lock-in procedure. For this purpose, $y(t)$ has to be multiplied by $\cos\left( 2\omega t \right)$. Following the steps illustrated in eq. \ref{complete}, eq. \ref{rewrite}, and eq. \ref{rewrite2}, one obtains: \begin{align*} y_{2\omega} & = \frac{TA^2}{8}\left(\left. \frac{\partial^2 f}{\partial x^2}\right|_{x=x_0}\right) \\ & + \int_{0}^{T} \epsilon \cos\left( 2\omega t \right) dt \end{align*} which, since $\left.\partial^2 f/\partial x^2\right|_{x=x_0}=-2\alpha$, allows us to calculate $\alpha$ (once the noise term has been averaged out) as: \begin{align*} \alpha=-\frac{4 y_{2\omega}}{TA^2} \end{align*} \end{document}
\begin{document} \pagestyle{headings} \flushbottom \maketitle \subsection*{} Keywords: scalar conservation laws, level sets, kinetic approximation, maximal monotone operator \\ \\ AMS classification: 35L65, 47H05 \section*{Abstract} We show that Kruzhkov's theory of entropy solutions to multidimensional scalar conservation laws \cite{Kr} can be entirely recast in $L^2$ and fits into the general theory of maximal monotone operators in Hilbert spaces. Our approach is based on a combination of level-set, kinetic and transport-collapse approximations, in the spirit of previous works by Giga, Miyakawa, Osher, Tsai and the author \cite{Br1,Br2,Br3,Br4,GM,TGO}. \section{A short review of Kruzhkov's theory} First order systems of conservation laws read: $$ \partial_t u+\sum_{i=1}^d \partial_{x_i}(Q_i(u))=0, $$ or, in short, using the nabla notation, \begin{equation} \label{scalar} \partial_t u+\nabla_x\cdot (Q(u))=0, \end{equation} where $u=u(t,x)\in {\Bbb R}^m$ depends on $t\ge 0$, $x\in {\Bbb R}^d$, and $\cdot$ denotes the inner product in ${\Bbb R}^d$. The $Q_i$ (for $i=1,\dots,d$) are given smooth functions from ${\Bbb R}^m$ into itself. The system is called hyperbolic when, for each $\tau\in{\Bbb R}^d$ and each $U\in{\Bbb R}^m$, the $m\times m$ matrix $\sum_{i=1}^d\tau_i Q'_i(U)$ can be put in diagonal form with real eigenvalues. There is no general theory to solve globally in time the initial value problem for such systems of PDEs. (See \cite{BDLL,Da,Ma,Se} for a general introduction to the field.) In general, smooth solutions are known to exist for short times but are expected to blow up in finite time. Therefore, it is usual to consider discontinuous weak solutions, satisfying additional 'entropy' conditions, to address the initial value problem, but nothing is known, in general, about their existence. Some special situations are far better understood. 
First, for some special systems (enjoying 'linear degeneracy' or 'null conditions'), smooth solutions may be global (shock free), at least for 'small' initial data (see \cite{Kl}, for instance). Next, in one space dimension $d=1$, for a large class of systems, existence and uniqueness of global weak entropy solutions have been (recently) proven for initial data of sufficiently small total variation \cite{BB}. Still in one space dimension, for a limited class of systems (typically for $m=2$), existence of global weak entropy solutions has been obtained for large initial data by 'compensated compactness' arguments \cite{Ta,Di,LPS}. Finally, there is a very comprehensive theory in the much simpler case of a $single$ conservation law, i.e. when $m=1$. Then, equation (\ref{scalar}) is called a 'scalar conservation law'. Kruzhkov \cite{Kr} showed that such a scalar conservation law has a unique 'entropy solution' $u\in L^\infty$ for each given initial condition $u_0\in L^\infty$. (If the derivative $Q'$ is further assumed to be bounded, then we can substitute $L^1_{loc}$ for $L^\infty$ in this statement.) An entropy (or Kruzhkov) solution is an $L^\infty$ function that satisfies the following distributional inequality \begin{equation} \label{entropy} \partial_t C(u)+\nabla_x\cdot (Q^C(u))\le 0, \end{equation} for all Lipschitz convex functions $C:{\Bbb R}\rightarrow {\Bbb R}$, where the derivative of $Q^C$ is defined by $(Q^C)'=C'Q'$. In addition, the initial condition $u_0$ is prescribed in $L^1_{loc}$, namely: \begin{equation} \label{continuity} \lim _{t\rightarrow 0} \int_{B} |u(t,x)-u_0(x)|dx=0, \end{equation} for all compact subsets $B$ of ${\Bbb R}^d$. Beyond their existence and uniqueness, the Kruzhkov solutions enjoy many interesting properties. 
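Before turning to these properties, it is worth recalling a standard special case of (\ref{entropy}), stated here for concreteness: choosing the Lipschitz convex entropies $C(u)=|u-c|$ for constants $c\in{\Bbb R}$, the rule $(Q^C)'=C'Q'$ gives $Q^C(u)={\rm sgn}(u-c)\,(Q(u)-Q(c))$, and (\ref{entropy}) becomes the classical family of Kruzhkov inequalities $$ \partial_t |u-c|+\nabla_x\cdot \left({\rm sgn}(u-c)\,(Q(u)-Q(c))\right)\le 0, \;\;\;\forall c\in{\Bbb R}, $$ which already suffices to characterize entropy solutions, since every Lipschitz convex $C$ can be recovered from these entropies together with affine functions.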
Each entropy solution $u(t,\cdot)$, with initial condition $u_0$, continuously depends on $t\ge 0$ in $L^1_{loc}$ and can be written $T(t)u_0$, where $(T(t),t\ge 0)$ is a family of order preserving operators: \begin{equation} \label{order T} T(t)u_0\;\ge\; T(t)\tilde u_0\;,\;\;\forall t\ge 0, \end{equation} whenever $u_0\ge \tilde u_0$. Since constants are trivial entropy solutions to (\ref{scalar}), it follows that if $u_0$ takes its values in some fixed compact interval, so does $u(t,\cdot)$ for all $t\ge 0$. Next, two solutions $u$ and $\tilde u$, with $u_0-\tilde u_0\in L^1$, are $L^1$ stable with respect to their initial conditions: \begin{equation} \label{L1 stability} \int|u(t,x)-\tilde u(t,x)|dx \le \int|u_0(x)-\tilde u_0(x)|dx, \end{equation} for all $t\ge 0$. As a consequence, the total variation $TV(u(t,\cdot))$ of a Kruzhkov solution $u$ at time $t\ge 0$ cannot be larger than the total variation of its initial condition $u_0$. This follows easily from the translation invariance of (\ref{scalar}) and from the following definition of the total variation of a function $v$: \begin{equation} \label{TV} TV(v)=\sup_{\eta\in{\Bbb R}^d,\;\;\eta\ne 0} \int\frac{|v(x+\eta)-v(x)|}{||\eta||}dx, \end{equation} where $||\cdot||$ denotes the Euclidean norm on ${\Bbb R}^d$. The space $L^1$ plays a key role in Kruzhkov's theory. There is no $L^p$ stability with respect to initial conditions for any $p>1$. Typically, for $p>1$, the Sobolev norm $||u(t,\cdot)||_{W^{1,p}}$ of a Kruzhkov solution blows up in finite time. This fact has induced a great amount of pessimism about the possibility of a unified theory of global solutions for general multidimensional systems of hyperbolic conservation laws. Indeed, simple linear systems, such as the wave equation (written as a first order system) or the Maxwell equations, are not well posed in any $L^p$ except $p=2$ \cite{Brn}. 
However, as shown in the present work, $L^2$ is a perfectly suitable space for entropy solutions to multidimensional scalar conservation laws, provided a different formulation is used, based on a combination of level-set, kinetic and transport-collapse approximations, in the spirit of previous works by Giga, Miyakawa, Osher, Tsai and the author \cite{Br1,Br2,Br3,Br4,GM,TGO}. \section{Kruzhkov solutions revisited} \subsection{A maximal monotone operator in $L^2$} Subsequently, we restrict ourselves, for simplicity, to initial conditions $u_0(x)$ valued in $[0,1]$ and spatially periodic of period 1 in each direction. In other words, the variable $x$ will be valued in the flat torus ${\Bbb T}^d={\Bbb R}^d/{\Bbb Z}^d$. \\ \\ Let us now introduce: \\ 1) the space $L^2([0,1]\times {\Bbb T}^d)$ of all square integrable functions $$(a,x)\in [0,1]\times {\Bbb T}^d\rightarrow Y(a,x)\in{\Bbb R}\;,$$ 2) the closed convex cone $K$ of all $Y\in L^2$ such that $\partial_a Y\ge 0$ (in the sense of distributions), \\ 3) the subdifferential of $K$ defined at each point $Y\in K$ by: \begin{equation} \label{subdifferential} \partial K(Y)=\{Z\in L^2,\;\;\; \int (\tilde Y-Y)Z\;dadx\le 0\;, \;\;\;\forall \tilde Y\in K\}\;, \end{equation} 4) the maximal monotone operator (MMO) (see \cite{Brz}): \begin{equation} \label{operator} Y\rightarrow - q(a)\cdot\nabla_x Y+\partial K(Y), \end{equation} where $q(a)=Q'(a)$, and the corresponding subdifferential equation \cite{Brz}: \begin{equation} \label{inclusion} 0\;\in\;\; \partial_t Y+ q(a)\cdot\nabla_x Y+\partial K(Y). \end{equation} From maximal monotone operator theory \cite{Brz}, we know that, for each initial condition $Y_0\in K$, there is a unique solution $Y(t,\cdot)\in K$ to (\ref{inclusion}), for all $t\ge 0$. 
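As an aside (our remark, not used in the paper's proofs): the resolvent $(I+\lambda\,\partial K)^{-1}$ attached to this maximal monotone operator is the $L^2$ projection onto the cone $K$, which, once the $a$ variable is discretized, is exactly isotone regression and can be computed by the classical pool-adjacent-violators (PAV) algorithm. A sketch, assuming numpy; `project_K` is our own helper name.

```python
import numpy as np

def project_K(z):
    """L^2 projection of a vector onto the cone of nondecreasing vectors (PAV)."""
    vals, wts = [], []
    for v in z:
        vals.append(float(v)); wts.append(1)
        # pool adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            w = wts[-2] + wts[-1]
            m = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / w
            vals[-2:], wts[-2:] = [m], [w]
    return np.concatenate([np.full(w, v) for v, w in zip(vals, wts)])

rng = np.random.default_rng(0)
z1, z2 = rng.random(200), rng.random(200)
p1, p2 = project_K(z1), project_K(z2)
assert np.all(np.diff(p1) >= 0)                                    # lands in K
assert np.allclose(project_K(p1), p1)                              # idempotent
assert np.linalg.norm(p1 - p2) <= np.linalg.norm(z1 - z2) + 1e-9   # non-expansive
```

The last assertion illustrates the non-expansiveness of projections onto convex sets, the same structural fact that makes the operator above maximal monotone.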
More precisely, we will use the following definition (which includes the possibility of a left-hand side $q_0\in L^2([0,1])$): \begin{Definition} \label{def} $Y$ is a solution to \begin{equation} \label{inclusion bis} q_0(a)\;\in\;\; \partial_t Y+ q(a)\cdot\nabla_x Y+\partial K(Y), \end{equation} with initial value $Y_0\in K$ and left-hand side $q_0\in L^2([0,1])$, if: \\ 1) $t\rightarrow Y(t,\cdot)\in L^2$ is continuous and valued in $K$, with $Y(0,\cdot)=Y_0,$ \\ 2) $Y$ satisfies, in the sense of distributions, \begin{equation} \label{semi-integral} \frac{d}{dt} \int |Y-Z|^2 dadx \le 2\;\int (Y-Z)(q_0(a)-\partial_t Z- q(a)\cdot\nabla_x Z)dadx, \end{equation} for each smooth function $Z(t,a,x)$ such that $\partial_a Z\ge 0$. \end{Definition} \begin{Proposition} \label{Proposition} For each $Y_0\in K$ and $q_0\in L^2([0,1])$, there is a unique solution $Y$ to (\ref{inclusion bis}) in the sense of Definition \ref{def}. If both $Y_0$ and $q_0$ belong to $L^\infty$, then we have for all $t\ge0$: \begin{equation} \label{maxi} -t\sup (-q_0)_++\inf Y_0\le Y(t,\cdot)\le \sup Y_0+t\sup (q_0)_+. \end{equation} If $\nabla_x Y_0$ belongs to $L^2$, then so do $\partial_t Y(t,\cdot)$ and $\nabla_x Y(t,\cdot)$ for all $t\ge 0$. Two solutions $Y$ and $\tilde Y$ to (\ref{inclusion bis}) (with different left-hand sides $q_0$ and $\tilde q_0$) are $L^2$ stable with respect to their initial conditions $Y_0$ and $\tilde Y_0$ in $K$: \begin{equation} \label{L2 stability} ||Y(t,\cdot)-\tilde Y(t,\cdot)||_{L^2} \le ||Y_{0}-\tilde Y_{0}||_{L^2}+t||q_0-\tilde q_0||_{L^2}, \end{equation} for all $t\ge 0$. This is also true for all $p\ge 1$, when both $Y_0-\tilde Y_0$ and $q_0-\tilde q_0$ belong to $L^p$: \begin{equation} \label{Lp stability} ||Y(t,\cdot)-\tilde Y(t,\cdot)||_{L^p} \le ||Y_{0}-\tilde Y_{0}||_{L^p}+t||q_0-\tilde q_0||_{L^p}. \end{equation} \end{Proposition} For the sake of completeness, a brief proof of these (standard) results will be provided at the end of the paper. 
\subsection{The main result} Our main result is \begin{Theorem} \label{main} Let $Y=Y(t,a,x)$ be a solution to the subdifferential equation (\ref{inclusion}) with initial condition $Y_0\in L^\infty$, with $\partial_a Y_0\ge 0$. Then, \begin{equation} \label{solution} u(t,y,x)=\int_0^1 H(y-Y(t,a,x))da, \end{equation} defines a one parameter family (parameterized by $y\in{\Bbb R}$) of Kruzhkov solutions to (\ref{scalar}), valued in $[0,1]$. In addition, all Kruzhkov solutions, with initial values in $L^\infty$, can be recovered this way (up to a trivial rescaling). \end{Theorem} Let us quickly check the last statement of our main result. We must show that any Kruzhkov solution $U(t,x)$ with initial condition $U_0(x)$ valued in $L^\infty$ can be recovered from a solution to (\ref{inclusion}). To do that, according to the first part of the theorem, it is enough to find an $L^\infty$ function $Y_0(a,x)$ such that $\partial_a Y_0\ge 0$ and $$ U_0(x)=\int_0^1 H(y-Y_0(a,x))da, $$ for some $y\in{\Bbb R}$, say $y=1$. This is always possible, up to rescaling, by assuming: $$r\le U_0(x)\le 1-r$$ for some constant $r>0$. Indeed, we set $$ u_0(y,x)=\max(0,\min(1,y\;U_0(x))) $$ so that $U_0(x)=u_0(1,x)$ and $\partial_y u_0\ge 0$. \\ Then, for each fixed $x$, we solve $u_0(y,x)=a$ by $y=Y_0(a,x)$, setting: $$ Y_0(a,x)=\frac{a}{U_0(x)},\;\;\;\forall a\in [0,1],\;\;\;\forall x\in{\Bbb T}^d, $$ so that $$ u_0(y,x)=\int_0^1 H(y-Y_0(a,x))da. $$ (Notice that $Y_0$ is valued in $[0,r^{-1}]$.) Finally, according to the first part of the theorem, we get $$ U(t,x)=\int_0^1 H(1-Y(t,a,x))da, $$ where $Y$ is the solution to (\ref{inclusion}) with initial condition $Y_0$. \subsubsection{Remark} Notice that, for all $t\ge 0$, the level sets of $Y$ and $U$ are related by: $$ \{(a,x),\;\;\;U(t,x)\ge a\}=\{(a,x),\;\;\;Y(t,a,x)\le 1\}. 
$$ Thus, the method of construction of $Y_0$ out of $U_0$ and the derivation of $U(t,x)$ from $Y(t,a,x)$ can be related to level-set methods in the spirit of \cite{FSS,Gi1,OF,TGO}. This is why we may call the subdifferential equation (\ref{inclusion}) the 'level-set formulation' of the scalar conservation law (\ref{scalar}). \subsubsection{Remark} The solutions $(t,x)\rightarrow u(t,y,x)$, parameterized by $y\in{\Bbb R}$, are automatically ordered in $y$. Indeed, $\partial_y u\ge 0$ immediately follows from representation formula (\ref{solution}). This is consistent with the order preserving property of Kruzhkov's theory (as explained in the first section). \subsection{A second result} The function $u(t,y,x)$, given by (\ref{solution}), can also be considered as a $single$ Kruzhkov solution of a scalar conservation law in the enlarged $(1+d)$ dimensional space ${\Bbb R}\times{\Bbb T}^d$, namely \begin{equation} \label{scalar bis} \partial_t u+\partial_y (Q_0(u))+\nabla_x\cdot(Q(u))=0, \end{equation} with $(y,x)\in{\Bbb R}\times{\Bbb T}^d$, provided: \\ \\ 1) $Q_0$ is zero, \\ 2) the initial condition $u_0(y,x)$ is valued in $[0,1]$ and $\partial_y u_0\ge 0$. \\ Furthermore, it turns out that, if we add the left-hand side $q_0(a)=Q'_0(a)$ to (\ref{inclusion}), so that we get (\ref{inclusion bis}): $$ q_0(a)\;\in\;\; \partial_t Y+ q(a)\cdot\nabla_x Y+\partial K(Y), $$ and solve for $Y$, then the corresponding $u$ given by (\ref{solution}) is a Kruzhkov solution to (\ref{scalar bis}). \\ As a matter of fact, our proof will be done in this larger framework. We assume that $q_0$, $q$ and $Y_0$ are given in $L^\infty$, for simplicity. Without loss of generality, up to easy rescalings, we may assume that both $q_0$ and $Y_0$ are nonnegative, which simplifies some notations. \begin{Theorem} \label{main bis} Assume that $q_0$ and $q$ are given in $L^\infty$, with $q_0\ge 0$. 
Let $Y=Y(t,a,x)$ be a solution to the subdifferential equation (\ref{inclusion bis}), with initial condition $Y_0\in L^\infty$, $Y_0\ge 0$ and $\partial_a Y_0\ge 0$. Then, \begin{equation} \label{solution bis} u(t,y,x)=\int_0^1 H(y-Y(t,a,x))da, \end{equation} is the unique Kruzhkov solution to (\ref{scalar bis}) with initial condition: \begin{equation} \label{initial bis} u_0(y,x)=\int_0^1 H(y-Y_0(a,x))da. \end{equation} In addition, $Y$ is nonnegative and can be recovered from $u$ as: \begin{equation} \label{level} Y(t,a,x)=\int_0^\infty H(a-u(t,y,x))dy. \end{equation} \end{Theorem} Before proving the theorem, let us observe that the recovery of $Y$ from $u$ through (\ref{level}) is just a consequence of the following elementary lemma which generalizes (in a standard way) the inversion of a strictly increasing function of one real variable: \begin{Lemma} \label{lemma} Let $a\in[0,1]\rightarrow Z(a)\in {\Bbb R}_+$ with $Z'\ge 0$. We define the generalized inverse of $Z$: $$ v(y)=\int_0^1 H(y-Z(a))da,\;\;\;\forall y\in{\Bbb R}. $$ Then $v'\ge 0$, $H(y-Z(a))=H(v(y)-a)$ holds true a.e. in $(a,y)\in [0,1]\times{\Bbb R}$ and: $$ Z(a)=\int_0^\infty H(a-v(y))dy. $$ In addition, for a pair $(Z,v)$, $(\tilde Z,\tilde v)$ of such functions, we have the co-area formula: \begin{equation} \label{coarea} \int_0^1|Z(a)-\tilde Z(a)|da= \int_0^1\int_0^\infty|H(y-Z(a))-H(y-\tilde Z(a))|dyda \end{equation} $$ =\int_0^1\int_0^\infty|H(v(y)-a)-H(\tilde v(y)-a)|dyda= \int_0^\infty|v(y)-\tilde v(y)|dy. $$ \end{Lemma} To recover (\ref{level}), we notice first that $\partial_a Y\ge 0$ follows from Definition \ref{def} of a solution to (\ref{inclusion bis}). Next, $Y\ge 0$ follows from (\ref{maxi}) and the assumptions $q_0\ge 0$, $Y_0\ge 0$. Then, we apply Lemma \ref{lemma}, for each fixed $x\in{\Bbb T}^d$ and $t\ge 0$, by setting $Z(a)=Y(t,a,x)$ and $u(t,y,x)=v(y)$. 
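Lemma \ref{lemma} is easy to probe numerically. The following sketch (ours, not from the paper; numpy and the grid sizes are arbitrary choices) builds a nondecreasing $Z$ with jumps, computes its generalized inverse $v$, recovers $Z(a)=\int_0^\infty H(a-v(y))\,dy$ by quadrature, and checks the co-area identity against a second nondecreasing profile.

```python
import numpy as np

# Discretize a on [0,1] and y on [0,4] (beyond sup Z) with midpoint grids.
n = 2000
a = (np.arange(n) + 0.5) / n
y = (np.arange(4 * n) + 0.5) / n

Z = 1.0 + a**2 + np.floor(3 * a) / 3.0            # nondecreasing, with jumps
Zt = 1.5 + 0.5 * a                                # a second nondecreasing profile

v = np.mean(y[:, None] >= Z[None, :], axis=1)     # v(y)  = int_0^1 H(y - Z(a)) da
vt = np.mean(y[:, None] >= Zt[None, :], axis=1)   # vt(y) = int_0^1 H(y - Zt(a)) da

# Inversion formula of the Lemma: Z(a) = int_0^infty H(a - v(y)) dy
Z_rec = np.sum(a[:, None] > v[None, :], axis=1) / n
assert np.max(np.abs(Z_rec - Z)) < 1e-2

# Co-area formula: ||Z - Zt||_{L^1(da)} = ||v - vt||_{L^1(dy)}
assert abs(np.mean(np.abs(Z - Zt)) - np.sum(np.abs(v - vt)) / n) < 1e-2
```

Both checks hold up to the $O(1/n)$ quadrature error, jumps included, which is precisely the point of working with generalized inverses rather than classical ones.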
\subsubsection{Remark} The function $f(t,a,y,x)=H(y-Y(t,a,x))=H(u(t,y,x)-a)$ valued in $\{0,1\}$ is nothing but the solution of the Lions-Perthame-Tadmor \cite{LPT} 'kinetic formulation' of (\ref{scalar bis}), which satisfies: $$ \partial_t f+q_0(a)\partial_y f+q(a)\cdot\nabla_x f=\partial_a \mu, $$ for some nonnegative measure $\mu(t,a,y,x)$. \subsubsection{Remark} As already mentioned, the solutions of (\ref{inclusion bis}) enjoy the $L^p$ stability property with respect to initial conditions (\ref{Lp stability}), not only for $p=2$ but also for all $p\ge 1$. The case $p=1$ is of particular interest. Let us consider two solutions $Y$ and $\tilde Y$ of (\ref{inclusion bis}) and the corresponding Kruzhkov solutions $u$ and $\tilde u$ given by Theorem \ref{main bis}. Using the co-area formula (\ref{coarea}), we find, for all $t\ge 0$, $$ \int_{{\Bbb R}}\int_{{\Bbb T}^d}|u(t,y,x)-\tilde u(t,y,x)| dxdy= $$ $$ =\int_0^1\int_{{\Bbb R}}\int_{{\Bbb T}^d}|H(u(t,y,x)-a)-H(\tilde u(t,y,x)-a)| dadxdy $$ $$ =\int_0^1\int_{{\Bbb R}}\int_{{\Bbb T}^d}|H(y-Y(t,a,x))-H(y-\tilde Y(t,a,x))| dadxdy $$ $$ =\int_0^1\int_{{\Bbb T}^d}|Y(t,a,x)-\tilde Y(t,a,x)| dxda \le \int_0^1\int_{{\Bbb T}^d}|Y_0(a,x)-\tilde Y_0(a,x)| dxda $$ $$ =\int_{{\Bbb R}}\int_{{\Bbb T}^d}|u_0(y,x)-\tilde u_0(y,x)| dxdy. $$ Thus, Kruzhkov's $L^1$ stability property is nothing but a $very$ incomplete output of the much stronger $L^p$ stability property provided by equation (\ref{inclusion bis}) for all $p\ge 1$. \subsubsection{Remark} As a matter of fact, in Theorem \ref{main bis}, it is possible to translate the $L^p$ stability of the level set function $Y$ in terms of the Kruzhkov solution $u$ by using Monge-Kantorovich (MK) distances. 
Let us first recall that for two probability measures $\mu$ and $\nu$ compactly supported on ${\Bbb R}^D$, their MK distance of exponent $p\ge 1$ can be defined (see \cite{Vi} for instance) by: $$ \delta_p^p(\mu,\nu)=\sup\int \phi(x)d\mu(x)+\int \psi(y)d\nu(y), $$ where the supremum is taken over all pairs of continuous functions $\phi$ and $\psi$ such that: $$ \phi(x)+\psi(y)\le |x-y|^p,\;\;\;\forall x,y\in{\Bbb R}^D. $$ In dimension $D=1$, this definition reduces to: $$ \delta_p(\mu,\nu)=||Y-Z||_{L^p}, $$ where $Y$ and $Z$ are respectively the generalized inverses (in the sense of Lemma \ref{lemma}) of $u$ and $v$ defined on ${\Bbb R}$ by: $$ u(y)=\mu(]-\infty,y]),\;\;\;v(y)=\nu(]-\infty,y]),\;\;\;\forall y\in{\Bbb R}. $$ Next, observe that, for each $x\in{\Bbb T}^d$, the $y$ derivative of the Kruzhkov solution $u(t,y,x)$, as described in Theorem \ref{main bis}, can be seen as a probability measure compactly supported on ${\Bbb R}$. (Indeed, $\partial_y u\ge 0$, $u=0$ near $y=-\infty$ and $u=1$ near $y=+\infty$.) Then, the $L^p$ stability property simply reads: $$ \int_{{\Bbb T}^d} \delta_p^p(\partial_y u(t,\cdot,x),\partial_y \tilde u(t,\cdot,x))dx \le \int_{{\Bbb T}^d} \delta_p^p(\partial_y u_0(\cdot,x),\partial_y \tilde u_0(\cdot,x))dx. $$ We refer to \cite{BBL} and \cite{CFL} for recent occurrences of MK distances in the field of scalar conservation laws. \section{Proofs} Let us now prove Theorem \ref{main bis} (which contains the first part of Theorem \ref{main} as the special case $q_0=0$). The main idea is to provide, for both formulations (\ref{scalar bis}) and (\ref{inclusion bis}), the same time-discrete approximation scheme, namely the 'transport-collapse' method \cite{Br1,Br2,Br3,GM}, and get the same limits. \subsection{A time-discrete approximation} We fix a time step $h>0$ and approximate $Y(nh,a,x)$ by $Y_n(a,x)$, for each positive integer $n$. 
To get $Y_{n}$ from $Y_{n-1}$, we perform two steps, making the following induction assumptions: \begin{equation} \label{induction} \partial_a Y_{n-1}\ge 0,\;\;\;0\le Y_{n-1} \le \sup Y_0+(n-1)h\sup q_0, \end{equation} which are consistent with our assumptions on $Y_0$. \subsubsection*{Predictor step} The first 'predictor' step amounts to solving the linear equation \begin{equation} \label{linear} \partial_t Y+ q(a)\cdot\nabla_x Y=q_0(a) \end{equation} for $nh-h<t<nh$, with $Y_{n-1}$ as initial condition at $t=nh-h$. We exactly get at time $t=nh$ the predicted value: \begin{equation} \label{predictor} Y^*_{n}(a,x)=Y_{n-1}(a,x-h\;q(a))+h\;q_0(a). \end{equation} Notice that, since $q_0$ is supposed to be nonnegative, the induction assumption (\ref{induction}) implies: \begin{equation} \label{induction bis} 0\le Y^*_{n}\le \sup Y_0+nh\sup q_0. \end{equation} However, although $\partial_a Y_{n-1}$ is nonnegative, the same may not be true for $\partial_a Y^*_n$. This is why we need a correction step. \subsubsection*{Rearrangement step} In the second step, we 'rearrange' $Y^*$ in increasing order with respect to $a\in [0,1]$, for each fixed $x$, and get the corrected function $Y_{n}$. Let us recall some elementary facts about rearrangements: \begin{Lemma} \label{lemma bis} Let $a\in[0,1]\rightarrow X(a)\in {\Bbb R}_+$ be an $L^\infty$ function. Then, there is a unique $L^\infty$ function $Y:[0,1]\rightarrow {\Bbb R}_+$, such that $Y'\ge 0$ and: $$ \int_0^1 H(y-Y(a))da=\int_0^1 H(y-X(a))da,\;\;\;\forall y\in {\Bbb R}. $$ We say that $Y$ is the rearrangement of $X$. In addition, for all $Z\in L^\infty$ such that $Z'\ge 0$, the following rearrangement inequality holds true for all $p\ge 1$: \begin{equation} \label{inequality} \int |Y(a)-Z(a)|^p da\le \int |X(a)-Z(a)|^p da. \end{equation} 
\end{Lemma} So, we define $Y_n(a,x)$ to be, for each fixed $x$, the rearrangement of $Y^*_n(a,x)$ in $a\in [0,1]$: \begin{equation} \label{corrector} \partial_a Y_n \ge 0,\;\;\; \int_0^1 H(y-Y_n(a,x))da=\int_0^1 H(y-Y^*_n(a,x))da,\;\;\;\forall y\in {\Bbb R}. \end{equation} Equivalently, we may define the auxiliary function: \begin{equation} \label{u} u_n(y,x)=\int_0^1 H(y-Y^*_n(a,x))da,\;\;\;\forall y\in {\Bbb R}, \end{equation} i.e. \begin{equation} \label{predictor bis} u_n(y,x)=\int_0^1 H(y-h\;q_0(a)-Y_{n-1}(a,x-h\;q(a)))da, \end{equation} and set: \begin{equation} \label{corrector bis} Y_n(a,x)=\int_0^\infty H(a-u_n(y,x))dy. \end{equation} At this point, $Y_n$ is entirely determined by $Y_{n-1}$ through formulae (\ref{predictor}), (\ref{corrector}), or, equivalently, through formulae (\ref{predictor bis}), (\ref{corrector bis}). Notice that, from the very definition (\ref{corrector}) of the rearrangement step, $u_n$, defined by (\ref{u}), can be equivalently written: \begin{equation} \label{u bis} u_n(y,x)=\int_0^1 H(y-Y_n(a,x))da. \end{equation} Also notice that, for all functions $Z(a,x)$ such that $\partial_a Z\ge 0$, and all $p\ge 1$: \begin{equation} \label{comparison} \int |Y_n(a,x)-Z(a,x)|^p dadx\le \int |Y^*_n(a,x)-Z(a,x)|^p dadx \end{equation} follows from the rearrangement inequality (\ref{inequality}). Finally, we see that $\partial_a Y_n \ge 0$ is automatically satisfied (this was the purpose of the rearrangement step) and $$ 0\le Y_{n}\le \sup Y_0+nh\sup q_0. $$ follows from (\ref{induction bis}) (since the range of $Y^*_n$ is preserved by the rearrangement step). So, the induction assumption (\ref{induction}) holds at step $n$ and the scheme is well defined. \subsubsection{Remark} Observe that, for any fixed $x$, $u_n(y,x)$, as a function of $y$, is the (generalized) inverse of $Y_n(a,x)$, viewed as a function of $a$, in the sense of Lemma \ref{lemma}. 
Also notice that the level sets $\{(a,y);\;\;y\ge Y_n(a,x)\}$ and $\{(a,y);\;\;a\le u_n(y,x)\}$ coincide. \subsection{The transport-collapse scheme revisited} The time-discrete scheme can be entirely recast in terms of $u_n$ (defined by (\ref{u bis})). Indeed, introducing \begin{equation} \label{tcm1} ju_n(a,y,x)=H(u_n(y,x)-a), \end{equation} we can rewrite (\ref{predictor bis}), (\ref{corrector bis}) in terms of $u_n$ and $ju_n$ only: \begin{equation} \label{tcm2} u_n(y,x)=\int_0^1 ju_{n-1}(a,y-h\;q_0(a),x-h\;q(a))da. \end{equation} We observe that formulae (\ref{tcm1}), (\ref{tcm2}) exactly define the 'transport-collapse' (TC) approximation to (\ref{scalar bis}), or, equivalently, its 'kinetic' approximation, according to \cite{Br1,Br2,Br3,GM}. \subsection{Convergence to the Kruzhkov solution} We are now going to prove that, on one hand, $Y_n(a,x)$ converges to $Y(t,a,x)$ as $nh\rightarrow t$, and, on the other hand, $u_n(y,x)$ converges to $u(t,y,x)$, where $Y$ and $u$ are respectively the unique solution to subdifferential equation (\ref{inclusion bis}) with initial condition $Y_0(a,x)$ and the unique Kruzhkov solution to (\ref{scalar bis}) with initial condition \begin{equation} \label{initial} u_0(y,x)=\int_0^1 H(y-Y_0(a,x))da. \end{equation} \\ From the convergence analysis of the TC method \cite{Br1,Br2,Br3,GM}, we already know that, as $nh\rightarrow t$, $$ \int |u_n(y,x)-u(t,y,x)|dydx\rightarrow 0, $$ where $u$ is the unique Kruzhkov solution with initial value $u_0$ given by (\ref{initial}). More precisely, if we extend the time-discrete approximations $u_n(y,x)$ to all $t\in [0,T]$ by linear interpolation in time: \begin{equation} \label{interpo bis} u^h(t,y,x)=u_{n+1}(y,x)\frac{t-nh}{h}+u_n(y,x)\frac{nh+h-t}{h}, \end{equation} then $u^h-u$ converges to $0$ in the space $C^0([0,T],L^1({\Bbb R}\times{\Bbb T}^d))$ as $h\rightarrow 0$. 
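The two-step scheme is very short to implement. The sketch below is ours (numpy assumed; the grid sizes, the choice $d=1$, $q_0=0$, $q(a)=a$, i.e. the Burgers flux $Q(u)=u^2/2$, and the periodic-interpolation predictor are all our own choices): the predictor shifts each level $a$ by $h\,q(a)$ along the periodic $x$ axis, and the collapse is a sort along the $a$ axis.

```python
import numpy as np

na, nx, h, steps = 128, 256, 0.005, 100
a = (np.arange(na) + 0.5) / na
x = np.arange(nx) / nx

U0 = 0.5 + 0.4 * np.sin(2 * np.pi * x)    # initial datum, values in [0.1, 0.9]
Y = a[:, None] / U0[None, :]              # level-set initial condition Y0 = a/U0
ymax0 = Y.max()

for _ in range(steps):
    for i in range(na):                   # predictor: Y*(a,x) = Y(a, x - h q(a))
        Y[i] = np.interp(x - h * a[i], x, Y[i], period=1.0)
    Y = np.sort(Y, axis=0)                # collapse: increasing rearrangement in a

u = np.mean(Y <= 1.0, axis=0)             # recover u(t,x) = int_0^1 H(1 - Y) da
assert np.all(np.diff(Y, axis=0) >= 0)    # Y stays in the cone K
assert Y.min() >= -1e-9 and Y.max() <= ymax0 + 1e-9   # range is preserved
assert 0.0 <= u.min() and u.max() <= 1.0
```

The assertions check exactly the two structural properties established above: the rearrangement step enforces $\partial_a Y_n\ge 0$, and the range bound (\ref{induction bis}) propagates, here with $q_0=0$.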
Following (\ref{level}), it is now natural to introduce the level-set function $Y$ defined from the Kruzhkov solution by: $$ Y(t,a,x)=\int_0^\infty H(a-u(t,y,x))dy. $$ (Notice that, at this point, we do not know that $Y$ is a solution to the subdifferential formulation (\ref{inclusion bis})!) Let us interpolate the $Y_n$ by \begin{equation} \label{interpo} Y^h(t,a,x)=Y_{n+1}(a,x)\frac{t-nh}{h}+Y_n(a,x)\frac{nh+h-t}{h}, \end{equation} for all $t\in [nh,nh+h]$ and $n\ge 0$. By the co-area formula (\ref{coarea}), we have $$ \int |Y(t,a,x)-Y_n(a,x)|dadx=\int |u(t,y,x)-u_n(y,x)|dydx. $$ Thus: $$ \sup_{t\in [0,T]}||Y(t,\cdot)-Y^h(t,\cdot)||_{L^1} \le \sup_{t\in [0,T]}||u(t,\cdot)-u^h(t,\cdot)||_{L^1}\rightarrow 0, $$ and we conclude that the approximate solution $Y^h$ must converge to $Y$ in $C^0([0,T],L^1([0,1]\times{\Bbb T}^d))$ as $h\rightarrow 0$. Notice that, since the $Y^h$ are uniformly bounded in $L^\infty$, the convergence also holds true in $C^0([0,T],L^2([0,1]\times{\Bbb T}^d))$. We finally have to prove that $Y$ is the solution to the subdifferential formulation (\ref{inclusion bis}) with initial condition $Y_0$. \subsection{Consistency of the transport-collapse scheme} Let us check that the TC scheme is consistent with the subdifferential formulation (\ref{inclusion bis}) in its semi-integral formulation (\ref{semi-integral}). 
For each smooth function $Z(t,a,x)$ with $\partial_a Z\ge 0$ and $p\ge 1$, we have $$ \int |Y_{n+1}(a,x)-Z(nh+h,a,x)|^p dadx $$ $$ \le \int |Y_{n+1}^*(a,x)-Z(nh+h,a,x)|^p dadx $$ (because of property (\ref{comparison}) due to the rearrangement step (\ref{corrector})) $$ =\int |Y_{n}(a,x-h\;q(a))+h\;q_0(a)-Z(nh+h,a,x)|^p dadx $$ (by definition of the predictor step (\ref{predictor})) $$ =\int |Y_{n}(a,x)+h\;q_0(a)-Z(nh+h,a,x+h\;q(a))|^p dadx $$ $$ =\int |Y_{n}-Z(nh,\cdot)|^p dadx+h\;\Gamma+o(h) $$ where: $$ \Gamma=p\int (Y_{n}-Z(nh,\cdot))|Y_{n}-Z(nh,\cdot)|^{p-2} \{q_0-\partial_t Z(nh,\cdot)-q\cdot\nabla_x Z(nh,\cdot)\}dadx $$ (by Taylor expanding $Z$ about $(nh,a,x)$). Since the approximate solution provided by the TC scheme has a unique limit $Y$, as shown in the previous section, this limit must satisfy: $$ \frac{d}{dt} \int |Y-Z|^p dadx \le p\;\int (Y-Z)|Y-Z|^{p-2} (q_0(a)-\partial_t Z-q(a)\cdot\nabla_x Z)dadx, $$ in the distributional sense in $t$. In particular, for $p=2$, we exactly recover the semi-integral version (\ref{semi-integral}) of (\ref{inclusion bis}). We conclude that the approximate solutions generated by the TC scheme do converge to the solutions of (\ref{inclusion bis}) in the sense of Definition \ref{def}, which completes the proof of Theorem \ref{main bis}. \section{Viscous approximations} A natural regularization for subdifferential equation (\ref{inclusion bis}) amounts to substituting a barrier function for the convex cone $K$ in $L^2([0,1]\times {\Bbb T}^d)$ of all functions $Y$ such that $\partial_a Y\ge 0$. Typically, we introduce a convex function $\phi:{\Bbb R}\rightarrow ]-\infty,+\infty]$ such that $\phi(\tau)=+\infty$ if $\tau<0$, we define, for all $Y\in K$, \begin{equation} \label{potential} \Phi(Y)=\int \phi(\partial_a Y)dadx, \end{equation} and set $\Phi(Y)=+\infty$ if $Y$ does not belong to $K$. 
Typical examples are: $$\phi(\tau)=-\log(\tau),\;\;\; \phi(\tau)=\tau\log(\tau),\;\;\; \phi(\tau)=\frac{1}{\tau},\;\;\;\forall\tau>0.$$ Then, we consider the perturbed subdifferential equation \begin{equation} \label{perturbed} 0\;\in\;\; \partial_t Y+ q(a)\cdot\nabla_x Y-q_0(a) +\varepsilon\partial \Phi(Y), \end{equation} for $\varepsilon>0$. The general theory of maximal monotone operators guarantees the convergence of the corresponding solutions to those of (\ref{inclusion bis}) as $\varepsilon\rightarrow 0$. It is not difficult (at least formally) to identify the corresponding perturbation to the scalar conservation law (\ref{scalar bis}). Indeed, assuming $\phi(\tau)$ to be smooth for $\tau>0$, we get, for each smooth function $Y$ such that $\partial_a Y>0$: $$ \partial\Phi(Y)=-\partial_a(\phi'(\partial_a Y)). $$ Thus, any smooth solution $Y$ to (\ref{perturbed}), satisfying $\partial_a Y>0$, solves the following parabolic equation: \begin{equation} \label{viscous} \partial_t Y+ q(a)\cdot\nabla_x Y-q_0(a) =\varepsilon\partial_a(\phi'(\partial_a Y)). \end{equation} Introducing the function $u(t,y,x)$ implicitly defined by $$ u(t,Y(t,a,x),x)=a, $$ we get (by differentiating with respect to $a$, $t$ and $x$): $$ (\partial_y u)(t,Y(t,a,x),x)\partial_a Y(t,a,x)=1, $$ $$ (\partial_t u)(t,Y,x)+(\partial_y u)(t,Y,x)\partial_t Y=0, $$ $$ (\nabla_x u)(t,Y,x)+(\partial_y u)(t,Y,x)\nabla_x Y=0. $$ Multiplying (\ref{viscous}) by $(\partial_y u)(t,Y(t,a,x),x)$, we get: \begin{equation} \label{viscous bis} -\partial_t u-q(u)\cdot\nabla_x u-q_0(u)\partial_y u =\varepsilon \partial_y (\phi'(\frac{1}{\partial_y u})). \end{equation} In particular, in the case $\phi(\tau)=-\log\tau$, we recognize a linear viscous approximation to the scalar conservation law (\ref{scalar bis}): \begin{equation} \label{viscous ter} \partial_t u+q(u)\cdot\nabla_x u+q_0(u)\partial_y u =\varepsilon \partial^2_{yy} u, \end{equation} with viscosity only in the $y$ variable. 
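The log-barrier computation behind this last step can be checked symbolically; a short sketch (assuming sympy, which is of course not used anywhere in the paper):

```python
import sympy as sp

# With phi(tau) = -log(tau), phi'(1/u_y) = -u_y, so the right-hand side of the
# regularized equation is eps * d/dy(-u_y) = -eps * u_yy: after moving signs to
# the other side, this is a linear viscosity in the y variable only.
tau, y = sp.symbols('tau y')
u = sp.Function('u')(y)
phi = -sp.log(tau)
rhs = sp.diff(sp.diff(phi, tau).subs(tau, 1 / sp.diff(u, y)), y)
assert sp.simplify(rhs + sp.diff(u, y, 2)) == 0
```

The same computation with, say, $\phi(\tau)=1/\tau$ produces a nonlinear (degenerate) diffusion in $y$ instead.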
\subsubsection{Remark} Of course, these statements are not rigorous since the parabolic equations we have considered are degenerate and their solutions may not be smooth. \subsubsection{Remark} In the case of our main result, Theorem \ref{main}, we have $q_0=0$ and the variable $y$ is just a dummy variable in (\ref{scalar}). Thus, the corresponding regularized version \begin{equation} \label{viscous quart} -\partial_t u-q(u)\cdot\nabla_x u =\varepsilon \partial_y (\phi'(\frac{1}{\partial_y u})). \end{equation} includes viscous effects not on the space variable $x$ but rather on the 'parameter' $y\in{\Bbb R}$. This unusual type of regularization has already been used and analyzed in the level-set framework developed by Giga for Hamilton-Jacobi equations \cite{Gi2}, and by Giga, Giga, Osher, Tsai for scalar conservation laws \cite{GG,TGO}. \section{Related equations} A similar method can be applied to some special systems of conservation laws. A typical example (which was crucial for our understanding) is the 'Born-Infeld-Chaplygin' system considered in \cite{Br4}, and the related concept of 'order-preserving strings'. This system reads: \begin{equation} \label{bi} \partial_t(hv)+\partial_y(hv^2-hb^2)-\partial_x(hb)=0, \end{equation} $$ \partial_t h+\partial_y(hv)=0,\;\;\; \partial_t (hb)-\partial_x(hv)=0, $$ where $h,b,v$ are real valued functions of time $t$ and two space variables $x,y$. In \cite{Br4}, this system is related to the following subdifferential system: \begin{equation} \label{bi-subdif} 0\in \partial_t Y-\partial_x W+\partial K(Y), \;\;\;\partial_t W=\partial_x Y, \end{equation} where $(Y,W)$ are real valued functions of $(t,a,x)$ and $K$ is the convex cone of all $Y$ such that $\partial_a Y\ge 0$. The (formal) correspondence between (\ref{bi}) and (\ref{bi-subdif}) is obtained by setting: $$ h(t,x,Y(t,a,x))\partial_a Y(t,a,x)=1, $$ $$ v(t,x,Y(t,a,x))=\partial_t Y(t,a,x),\;\;\; b(t,x,Y(t,a,x))=\partial_x Y(t,a,x). 
$$ Unfortunately, this system is very special (its smooth solutions are easily integrable). In our opinion, it is very unlikely that $L^2$ formulations can be found for general hyperbolic conservation laws as easily as in the multidimensional scalar case. \section{Appendix: proof of Proposition \ref{Proposition}} In the case when $q_0$ and $Y_0$ belong to $L^\infty$ and are nonnegative, we already know, from the convergence of the TC scheme, that there is a solution $Y$ to (\ref{inclusion bis}), with initial value $Y_0$, in the sense of Definition \ref{def}. From (\ref{induction}), we also get for such solutions, when $q_0\ge 0$ and $Y_0\ge 0$, $$ 0\le Y(t,\cdot)\le \sup Y_0+t\sup q_0,\;\;\;\forall t\ge 0. $$ By elementary rescalings, we can remove the assumptions that both $Y_0$ and $q_0$ are nonnegative and get estimate (\ref{maxi}). \\ Let us now examine some additional properties of the solutions to (\ref{inclusion bis}) obtained from the TC approximations. First, we observe that, in the TC scheme, \\ 1) the predictor step (a translation in the $x$ variable by $h\;q(a)$ plus an addition of $h\;q_0(a)$) is isometric in all $L^p$ spaces, \\ 2) the corrector step (an increasing rearrangement in the $a$ variable) is non-expansive in all $L^p$. \\ Thus the scheme is non-expansive in all $L^p([0,1]\times {\Bbb T}^d)$. More precisely, for two different initial conditions $Y_0$ and $\tilde Y_0$, and two different data $q_0$ and $\tilde q_0$, all in $L^\infty$, we get for the corresponding approximate solutions $Y_n$ and $\tilde Y_n$: \begin{equation} \label{non expansive} ||Y_{n}-\tilde Y_{n}||_{L^p} \le ||Y_{n-1}-\tilde Y_{n-1}||_{L^p}+h||q_0-\tilde q_0||_{L^p}\;. \end{equation} This shows that (\ref{Lp stability}) holds true for all solutions of (\ref{inclusion bis}) generated by the TC scheme. 
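The second structural fact, non-expansiveness of the increasing rearrangement, is easy to probe numerically. On a uniform grid in $a$ the rearrangement is just a sort; the sketch below (ours, assuming numpy) checks that sorting can only decrease the discrete $L^p$ distance to any already-nondecreasing comparison vector, for several $p$ and many random inputs.

```python
import numpy as np

# Probe of the rearrangement inequality (non-expansiveness of the corrector step):
# replacing X by its increasing rearrangement sort(X) does not increase the
# discrete L^p distance to any nondecreasing Z.
rng = np.random.default_rng(1)
for p in (1.0, 2.0, 3.0):
    for _ in range(100):
        X = rng.random(50)
        Z = np.sort(rng.random(50))      # a nondecreasing comparison vector
        Y = np.sort(X)                   # increasing rearrangement of X
        assert np.sum(np.abs(Y - Z) ** p) <= np.sum(np.abs(X - Z) ** p) + 1e-12
```

Combined with the (exact) isometry of the predictor, this is the whole content of (\ref{non expansive}) at the discrete level.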
\\ Since the scheme is also invariant under translations in the $x$ variable, we get the following a priori estimate: \begin{equation} \label{esti2} ||\nabla_x Y_n||_{L^p}\le ||\nabla_x Y_0||_{L^p}. \end{equation} Finally, let us compare two solutions of the scheme $Y_n$ and $\tilde Y_n=Y_{n+1}$ obtained with initial condition $\tilde Y_0=Y_1$. Using (\ref{non expansive}), we deduce: $$ \int |Y_{n+1}(a,x)- Y_{n}(a,x)|^p dadx \le \int |Y_{1}(a,x)- Y_{0}(a,x)|^p dadx $$ $$ \le \int |Y^*_{1}(a,x)- Y_{0}(a,x)|^p dadx =\int |Y_{0}(a,x-h\;q(a))+h\;q_0(a)-Y_0(a,x)|^p dadx. $$ So we get a second a priori estimate: \begin{equation} \label{esti3} ||Y_{n+1}-Y_n||_{L^p}\le (||q_0||_{L^p}+||q||_{L^\infty}||\nabla_x Y_0||_{L^p})h. \end{equation} Thus the solutions $Y$ to (\ref{inclusion bis}) obtained from the TC scheme satisfy the a priori bounds: \begin{equation} \label{esti2 bis} ||\nabla_x Y(t,\cdot)||_{L^p}\le ||\nabla_x Y_0||_{L^p}, \end{equation} \begin{equation} \label{esti3 bis} ||\partial_t Y(t,\cdot)||_{L^p}\le ||q_0||_{L^p}+||q||_{L^\infty}||\nabla_x Y_0||_{L^p}. \end{equation} Notice that, at this level, we still do not know if solutions in the sense of Definition \ref{def} exist when $Y_0\in K$ and $q_0\in L^2([0,1])$ are not in $L^\infty$, and we know nothing about their uniqueness. This can be easily addressed by standard functional analysis arguments. \subsubsection*{Existence for general data} Let $Y_0\in K$ and $q_0\in L^2([0,1])$. We can find two Cauchy sequences in $L^2$, labelled by $k\in{\Bbb N}$, namely $Y_0^k\in K$ and $q_0^k\in L^2([0,1])$, made of smooth functions, with limits $Y_0$ and $q_0$ respectively. Let us denote by $Y^k$ the corresponding solutions, generated by the TC scheme. Because of their $L^2$ stability, they satisfy: $$ \sup_{t\in [0,T]}||Y^k(t,\cdot)-Y^{k'}(t,\cdot)||_{L^2} \le ||Y_0^k-Y_0^{k'}||_{L^2}+T||q_0^k-q_0^{k'}||_{L^2}. 
$$ So, $Y^k$ is a Cauchy sequence in $C^0([0,T],L^2)$ of solutions of (\ref{inclusion bis}) in the sense of Definition \ref{def}, with a definite limit $Y$. Definition \ref{def} is clearly stable under this convergence process. So, we conclude that $Y$ satisfies the requirements of Definition \ref{def} and is a solution with initial condition $Y_0$ and left-hand side $q_0$. Notice that, through our approximation process, we keep the a priori estimates (\ref{esti2 bis}),(\ref{esti3 bis}), for general data $q_0\in L^2([0,1])$. \subsubsection*{Uniqueness} Let us consider a solution $Y$ to (\ref{inclusion bis}), with initial condition $Y_0\in K$ and left-hand side $q_0\in L^2([0,1])$, in the sense of Definition \ref{def}. By definition $Y(t,\cdot)\in K$ depends continuously on $t\in [0,T]$ in $L^2$. From definition (\ref{semi-integral}), using $Z=0$ as a test function, we see that: $$ \frac{d}{dt}||Y(t,\cdot)||^2_{L^2} \le 2\int Y(t,a,x)q_0(a)\; dadx \le ||Y(t,\cdot)||^2_{L^2}+||q_0||^2_{L^2}, $$ which implies that the $L^2$ norm of $Y(t,\cdot)$ stays uniformly bounded on any finite interval $[0,T]$. Thus, $T>0$ being fixed, we can mollify $Y$ and get, for each $\epsilon\in ]0,1]$ a smooth function $Y_\epsilon$, valued in $K$, so that: \begin{equation} \label{error} \sup_{t\in [0,T]}||Y(t,\cdot)-Y_\epsilon(t,\cdot)||_{L^2}\le \epsilon. \end{equation} Let us now consider an initial condition $Z_0$ such that $\nabla_x Z_0$ belongs to $L^2$. We know that there exists a solution $Z$ to (\ref{inclusion bis}), still in the sense of Definition \ref{def}, obtained by TC approximation, for which both $\partial_t Z(t,\cdot)$ and $\nabla_x Z(t,\cdot)$ stay uniformly bounded in $L^2$ for all $t\in [0,T]$. This function $Z$ has enough regularity to be used as a test function in (\ref{semi-integral}) when expressing that $Y$ is a solution in the sense of Definition \ref{def}. 
So, for each smooth nonnegative function $\theta(t)$, compactly supported in $]0,T[$, we get from (\ref{semi-integral}): $$ \int \{\theta'(t)|Y-Z|^2 +2\theta(t)(Y-Z)(q_0(a)-\partial_t Z- q(a)\cdot\nabla_x Z)\}dadxdt\ge 0. $$ Substituting $Y_\epsilon$ for $Y$, we have, thanks to estimate (\ref{error}), $$ \int \{\theta'(t)|Y_\epsilon-Z|^2 +2\theta(t)(Y_\epsilon-Z)(q_0(a)-\partial_t Z- q(a)\cdot\nabla_x Z)\}dadxdt \ge -C\epsilon, $$ where $C$ is a constant depending on $\theta$, $Z$, $q_0$ and $q$ only. Since $Z$ is also a solution, using $Y_\epsilon$ as a test function, we get from formulation (\ref{semi-integral}): $$ \int \{\theta'(t)|Z-Y_\epsilon|^2 +2\theta(t)(Z-Y_\epsilon) (q_0(a)-\partial_t Y_\epsilon- q(a)\cdot\nabla_x Y_\epsilon)\}dadxdt \ge 0. $$ Adding up these two inequalities, we deduce: $$ \int\{2\theta'(t)|Y_\epsilon-Z|^2 +2\theta(t)(Y_\epsilon-Z)(\partial_t (Y_\epsilon-Z)+ q(a)\cdot\nabla_x(Y_\epsilon- Z))\}dadxdt \ge -C\epsilon. $$ Integrating by parts in $t\in [0,T]$ and $x\in{\Bbb T}^d$, we simply get: $$ \int \theta'(t)|Y_\epsilon-Z|^2 dadxdt \ge -C\epsilon. $$ Letting $\epsilon\rightarrow 0$, we deduce: $$ \frac{d}{dt}\int |Y-Z|^2 dadx\le 0. $$ We conclude, at this point, that: $$ ||Y(t,\cdot)-Z(t,\cdot)||_{L^2}\le ||Y_0-Z_0||_{L^2},\;\;\;\forall t\in [0,T]. $$ This immediately implies the uniqueness of $Y$. Indeed, any other solution $\tilde Y$ with initial condition $Y_0$ must also satisfy: $$ ||\tilde Y(t,\cdot)-Z(t,\cdot)||_{L^2}\le ||Y_0-Z_0||_{L^2}. $$ Thus, by the triangle inequality: $$ ||\tilde Y(t,\cdot)-Y(t,\cdot)||_{L^2}\le 2||Y_0-Z_0||_{L^2}. $$ Since $Z_0\in K$ is any function such that $\nabla_x Z_0$ belongs to $L^2$, we can make $||Y_0-Z_0||_{L^2}$ arbitrarily small and conclude that $\tilde Y=Y$, which completes the proof of uniqueness. \subsection*{Acknowledgments} This article was written at the Bernoulli Centre, EPFL, Lausanne, in September 2006, during the program ``Asymptotic Behaviour in Fluid Mechanics''. 
The author is grateful to the organizers, Dragos Iftimie, Genevi\`eve Raugel and Tudor Ratiu, for their kind invitation. \end{document}
\begin{document} \title{Representation of asymptotic values for nonexpansive stochastic control systems \thanks{The work has been supported in part by the NSF of P.R.China (No. 11222110), Shandong Province (No. JQ201202), NSFC-RS (No. 11661130148), 111 Project (No. B12023).}} \author{Juan Li,\,\, Nana Zhao\footnote{Corresponding author}\\ {\small School of Mathematics and Statistics, Shandong University, Weihai, Weihai 264209, P.~R.~China.}\\ {\small{\it E-mails: [email protected], [email protected].}} \date{August 02, 2017}} \maketitle \begin{abstract} In ergodic stochastic problems one studies the limit of the value function $V_\lambda$ of the associated discounted cost functional with infinite time horizon as the discount factor $\lambda$ tends to zero. These problems have been well studied in the literature, and the assumptions used there guarantee that the function $\lambda V_\lambda$ converges uniformly to a constant as $\lambda\to 0$. The objective of this work is to study these problems under an assumption, namely the nonexpansivity assumption, under which the limit function is not necessarily constant. Our discussion goes beyond the case of the stochastic control problem with infinite time horizon and also covers the case where $V_\lambda$ is given by a Hamilton-Jacobi-Bellman equation of second order which is not necessarily associated with a stochastic control problem. On the other hand, the stochastic control case generalizes considerably earlier works by considering cost functionals defined through a backward stochastic differential equation with infinite time horizon, and we give an explicit representation formula for the limit of $\lambda V_\lambda$, as $\lambda\to 0$. \end{abstract} \noindent \textbf{Keywords.} Stochastic nonexpansivity condition; limit value; BSDE.
\noindent \textbf{AMS Subject classification:} 60H10; 60K35 \section{{\protect \large {Introduction}}} In our paper we study the limit behaviour of the optimal value of a discounted cost functional with infinite time horizon as the discount factor $\lambda>0$ tends to zero. For this we consider a stochastic control system given by the controlled stochastic equation \begin{equation}\label{0} dX_t^{x,u}=b(X_t^{x,u},u_t)dt+\sigma(X_t^{x,u},u_t)dW_t,\, t\ge 0,\ \ X_0^{x,u}=x\in \mathbb{R}^N, \end{equation} \noindent driven by a Brownian motion $W$ and an admissible control $u\in{\cal U}$, i.e., a control process $u$ which is adapted with respect to the filtration $\mathbf{F}=({\cal F}_t)_{t\ge 0}$ generated by $W$ and completed by all null sets. As we are interested in the limit behaviour of the controlled system, as $t\rightarrow +\infty$, we have to add to the usual Lipschitz and growth conditions on the coefficients $\sigma$ and $b$ also assumptions guaranteeing that the process $X^{x,u}$ takes all its values in a compact set $\overline{\theta}\subset \mathbb{R}^N$, for all $x\in\overline{\theta}$ and all $u\in{\cal U}$.
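As a purely illustrative aside (not from the paper), dynamics of the type (\ref{0}) can be simulated with a standard Euler--Maruyama scheme. The one-dimensional coefficients $b(x,u)=-3x$ and $\sigma(x,u)=x$ below are the toy choices used in the example of Section 2, and the constant control is our simplifying assumption.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, u, T=1.0, n=1000, seed=0):
    """Simulate dX_t = b(X_t,u) dt + sigma(X_t,u) dW_t on [0,T] (1-d sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over one step
        x[k + 1] = x[k] + b(x[k], u) * dt + sigma(x[k], u) * dW
    return x

# Toy coefficients from the example of Section 2: b = -3x, sigma = x.
path = euler_maruyama(lambda x, u: -3.0 * x, lambda x, u: x, x0=1.0, u=None)
print(len(path), path[-1])
```

With these coefficients $X$ is a geometric Brownian motion with strongly negative drift, so the simulated path decays towards the origin; in the paper this kind of dissipativity is what keeps the state inside the compact set $\overline{\theta}$.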
The cost functional $\overline{Y}_0^{\lambda,x,u}$ associated with the dynamics $X^{x,u}$ is defined through a backward stochastic differential equation (BSDE) on the infinite time interval $[0,+\infty)$: \begin{equation}\label{1} \overline{Y}^{\lambda,x,u}_t=\overline{Y}^{\lambda,x,u}_T+\int_t^T(\psi(X_s^{x,u}, \overline{Z}^{\lambda,x,u}_s,u_s)-\lambda \overline{Y}^{\lambda,x,u}_s)ds-\int_t^T\overline{Z}^{\lambda,x,u}_sdW_s,\, 0\le t\le T<+\infty,\end{equation} \noindent and we define the value function \begin{equation}\label{3} V_\lambda(x):=\inf_{u\in{\cal U}}\overline{Y}_0^{\lambda,x,u}.\end{equation} We remark that, if $\psi(x,z,u)$ doesn't depend on $z$, we get the cost functional considered in \cite{Buckdahn 2013}: \begin{equation}\label{2} \overline{Y}_0^{\lambda,x,u}=E\left[\int_0^\infty e^{-\lambda t}\psi(X_t^{x,u},u_t)dt\right].\end{equation} \noindent Since the pioneering work by Pardoux and Peng \cite{peng1990} on BSDEs in 1990 and its extension by Darling and Pardoux \cite{Darling 1997} and by Peng \cite{S. Peng 1991}, and in particular since the works by Peng \cite{P 1992}, \cite{Peng 1997} on BSDE methods in stochastic control, it has become usual to study stochastic control systems whose cost functionals are defined through a BSDE. As concerns BSDEs with infinite time horizon, Chen \cite{Chen 1992} was the first to study such equations on an unbounded random time interval; Hamad\`{e}ne, Lepeltier and Wu \cite{Wu 1999} studied reflected BSDEs with one reflecting barrier and with infinite time horizon. Moreover, Briand and Hu \cite{Hu 1998} and Royer \cite{Royer 2004} generalized the existence results for BSDEs with unbounded random terminal time. In our paper we begin our studies with the above infinite time horizon BSDE (\ref{1}), where we use techniques developed by Debussche, Hu and Tessitore \cite{Debussche 2011}, and we provide new estimates.
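When $\psi$ does not depend on $z$, the Abel mean in (\ref{2}) can be approximated by straightforward numerical quadrature. The sketch below is ours: it uses a hypothetical uncontrolled, deterministic flow $X_t=e^{-3t}x$ and $\psi(x)=x^2$ as stand-ins, and only illustrates how $\lambda\int_0^\infty e^{-\lambda t}\psi(X_t)\,dt$ behaves as the discount factor $\lambda$ decreases.

```python
import numpy as np

def abel_mean(psi_vals, lam, dt):
    """Left-Riemann approximation of lam * int_0^T exp(-lam t) psi(X_t) dt."""
    t = np.arange(len(psi_vals)) * dt
    return lam * np.sum(np.exp(-lam * t) * psi_vals) * dt

dt, T = 1e-3, 200.0
t = np.arange(0.0, T, dt)
traj = np.exp(-3.0 * t)          # toy flow X_t = exp(-3t) * x with x = 1
psi_vals = traj**2               # psi(x) = x^2

for lam in (1.0, 0.1, 0.01):
    print(lam, abel_mean(psi_vals, lam, dt))
```

In this toy case the exact value is $\lambda/(\lambda+6)$, which tends to $\psi(0)=0$ as $\lambda\to 0$: the Abel mean increasingly weights the long-time behaviour of the trajectory, which is the phenomenon the limit $\lambda\to 0$ captures.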
Let us point out that in \cite{Debussche 2011} the authors have studied ergodic BSDEs, first introduced by Fuhrman, Hu and Tessitore \cite{Fuhrman 2009}; there the constant $\lambda$ is a part of the solution. Our BSDE differs from theirs: its driving coefficient $\psi$ depends also on the control process. Its study differs as well, since we are not interested in the ergodic case; we study the limit behaviour of the value function $\lambda V_\lambda$ as $\lambda\rightarrow 0$ under assumptions which don't imply that the limit value function is a constant. The limit problem for deterministic and stochastic control systems has been studied by different authors. Quincampoix and Renault \cite{Quincampoix 2011} studied a deterministic control problem with infinite time horizon and investigated the limit behaviour of the discounted value function, when the discount factor tends to zero. For this they used a so-called nonexpansivity condition, and they gave, in particular, examples which show that, unlike in the ergodic case, the limit value function can depend on the initial state $x$. In Buckdahn, Goreac and Quincampoix \cite{Buckdahn 2013} these studies are extended to stochastic control problems with value functions of the form (\ref{2}) (Abel mean) but also of Ces\'{a}ro mean type. In \cite{Marc 2015}, for the case of deterministic controls, Cannarsa and Quincampoix extend these approaches by using a measurable viability theorem of Frankowska, Plaskacz and Rzezuchowski \cite{Frankowska 1995}; they characterize $V_\lambda$ as a constrained viscosity solution of an associated Hamilton-Jacobi equation, and they study the limit problem. The studies in our paper are heavily inspired by \cite{Buckdahn 2013} and \cite{Frankowska 1995}. The key assumption in \cite{Buckdahn 2013}, which allows one to take the limit of the classical value function $\lambda V_\lambda$ (see (\ref{3})) as $\lambda\rightarrow 0$, is the nonexpansivity condition.
However, as we generalize the cost functional by defining it through an infinite time horizon BSDE, we also have to extend the nonexpansivity assumption to the more general case we investigate (see our assumption (\ref{r49}) in Section 2). This extension is nontrivial: it makes the assumption stable under the Girsanov transformations we have to work with; however, our condition coincides with that given in \cite{Buckdahn 2013} if $\psi$ is independent of $z$. Under our nonexpansivity condition we show that the family of functions $\{\lambda V_\lambda\}$ is equi-continuous and equi-bounded on $\overline{\theta}$. Hence, due to the Arzel\`{a}-Ascoli Theorem, as $\lambda \rightarrow 0$, $\{\lambda V_\lambda\}$ has an accumulation point in the space of continuous functions over $\overline{\theta}$ endowed with the supremum norm. The main objective of our paper is to get the existence of the limit, i.e., the uniqueness of this accumulation point, and to characterize the limit function $w_0=\lim_{\lambda\rightarrow 0}\lambda {V}_\lambda$. In our approach PDE methods play a central role. We recall that the PDE approach for the study of the limit behaviour of solutions of Hamilton-Jacobi equations with coercive Hamiltonian essentially originates from Lions, Papanicolaou and Varadhan \cite{P.-L. Lions}. This work was extended by Arisawa \cite{M. Arisawa 1998} for the deterministic control setting and by Arisawa and Lions \cite{Arisawa Lions 1998} to the stochastic control framework. For subsequent works and extensions the reader is referred to \cite{Artstein 2000}, \cite{Quincampoix 2011} for the deterministic control case, and to \cite{G. K. Basak 1997}, \cite{V. Borkar 2007}, \cite{R. Buckdahn 2005}, \cite{A. Richou 2009} and the references therein for the stochastic framework. All these approaches, however, concern the ergodic case, under suitable assumptions guaranteeing that the limit value is independent of the initial data.
In our paper we too use a PDE approach. To this end we characterize $V_\lambda$ as a constrained viscosity solution of the associated Hamilton-Jacobi-Bellman (HJB) equation \centerline{$\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))=0,\, \, x\in\theta,$} \centerline{$\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))\ge 0,\, x\in\partial\theta,$} \noindent (see Section 3). Avoiding assumptions which lead to the ergodic case, we suppose that the Hamiltonian $H$ satisfies a radial monotonicity condition $$H(x,lp,lA)\le H(x,p,A),\, l\ge 1,\, (p,A)\in \mathbb{R}^N\times {\cal S}^N,$$ \noindent where ${\cal S}^N$ denotes the set of symmetric $N\times N$ matrices. This condition was introduced in \cite{Marc 2015}, and it guarantees the monotone and uniform convergence of $\lambda V_\lambda$, as $\lambda\rightarrow 0.$ As this convergence result for the constrained solution $V_\lambda$ of the above HJB equation is not directly related to the characterization of $V_\lambda$ as the value function of our stochastic control problem, by using Katsoulakis' comparison results \cite{Katsoulakis 1994} for constrained solutions of PDEs, we extend our discussion to more general Hamiltonians which are not necessarily related to a stochastic control problem, but which satisfy the radial monotonicity condition. For this general case we characterize the limit $w_0=\lim_{\lambda\rightarrow 0}\lambda V_\lambda$ as the maximal viscosity subsolution of some limit HJB equation (Theorem \ref{the:3.4}). More precisely, we prove that $$w_0(x)=\mathop{\rm sup}\{w(x):\, w\in \mbox{Lip}_{M_0}(\overline{\theta}),\, w+\overline{H}(x,Dw,D^2w)\le 0\mbox{ on }\theta \mbox{ in viscosity sense}\},$$ \noindent $x\in\overline{\theta},$ where $\overline{H}(x,p,A)=\min\left\{M_0,\mathop{\rm sup}_{l>0}H(x,lp,lA)\right\}$ (for details, see Theorem \ref{the:3.4}).
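To make the radial monotonicity condition concrete, here is a small numerical check (our illustration; the one-dimensional Hamiltonian $H(x,p,A)=-|p|-\max(A,0)$ is a hypothetical example, not taken from the paper). It is nonpositive and positively homogeneous of degree one in $(p,A)$, hence $H(x,lp,lA)=l\,H(x,p,A)\le H(x,p,A)$ for all $l\ge1$.

```python
import numpy as np

def H(x, p, A):
    # Hypothetical 1-d Hamiltonian: nonpositive and 1-homogeneous in (p, A),
    # so it satisfies the radial monotonicity H(x, l*p, l*A) <= H(x, p, A), l >= 1.
    return -abs(p) - max(A, 0.0)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, p, A = rng.normal(size=3)
    l = 1.0 + rng.exponential()  # sample an arbitrary l >= 1
    assert H(x, l * p, l * A) <= H(x, p, A) + 1e-12
print("radial monotonicity verified on all sampled points")
```

A Hamiltonian that is homogeneous but positive somewhere, such as $H(x,p,A)=|p|$, would fail the same test, which shows the condition genuinely restricts the sign structure of $H$.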
\noindent Afterwards, coming back to the special case where $V_\lambda$ is the value function of our stochastic control problem, we characterize the limit function $w_0=\lim_{\lambda\rightarrow 0}\lambda V_\lambda$ as a viscosity solution by passing to the limit in the HJB equation associated with $V_\lambda$. For the special case $\psi(x,z,u)=\psi_1(x,u)+g(z)$ we give an explicit representation of $w_0$ (see Theorem \ref{th:4.2}) using Peng's notion of $g$-expectation $\varepsilon^g[\cdot]$ (\cite{peng}); it is a nonlinear expectation introduced through a BSDE with driving coefficient $g$. More precisely, we show that $$ w_0(x)=\inf_{t\ge 0,u\in{\cal U}}\varepsilon^g [\min_{v\in U}\psi(X_t^{x,u},0,v) ],\, x\in \overline{\theta}.$$ Our paper is organized as follows. In Section 2 we present the basic assumptions on the coefficient functions $b, \sigma, \psi$, we define the value function $V_\lambda(x)$, and we prove the existence and the uniqueness of the solution of the BSDEs on the infinite time interval $[0,\infty)$ (Proposition \ref{th:2.4}). We introduce the stochastic nonexpansivity condition and show that the nonexpansivity condition combined with standard assumptions implies the stochastic nonexpansivity condition (Proposition \ref{p:2.1}). A consequence is that the family of functions $\{\lambda V_\lambda\}_{\lambda>0}$ is equicontinuous and equibounded on $\overline{\theta}$ (Lemma \ref{lem:2.6}). In Section 3 we first define constrained viscosity solutions of general HJB equations which are not necessarily related to a stochastic control problem, and then we show in this general framework that $\lambda V_\lambda$ is monotone and converges uniformly to some limit $w_0$ as $\lambda\rightarrow 0$ (Theorem \ref{th:3.3}). Moreover, we give an explicit representation of $w_0(x)$ (Theorem \ref{the:3.4}).
In Section 4 we consider the Hamiltonian $H$ related to the stochastic control problem, and we characterize $V_\lambda$ as the unique viscosity solution on $\overline{\theta}$ of the associated HJB equation (Proposition \ref{th:3.3.1} and Proposition \ref{th:3.2}). For the convenience of the reader, we give the proof of the dynamic programming principle (DPP) in the Appendix. Moreover, still in the stochastic control case, we study the HJB equation satisfied by $w_0(x)$ (Theorem \ref{th:4.1}) and give an explicit formula for $w_0(x)$ (Theorem \ref{th:4.2}) with the help of the $g$-expectation, a nonlinear expectation introduced by Peng in \cite{peng}. \section{ {\protect \large Preliminaries}} Let $\{W_t\}_{t\geq0}$ be a standard $d$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal{F},\mathbb{P})$. Let $\mathbb{F}=\{\mathcal{F}_t\}_{t\geq0}$ be the filtration generated by $\{W_t\}_{t\geq0}$ and augmented by all $\mathbb{P}$-null sets. We put $\mathcal{F}_\infty=\bigvee\limits_{t\geq0}\mathcal{F}_t$. For any $N\geq1$, $|x|$ denotes the Euclidean norm of $x\in\mathbb{R}^N$ and $\langle\cdot,\cdot\rangle$ denotes the Euclidean scalar product.
We introduce the following spaces of stochastic processes: \begin{equation*} \begin{split} &S_{\mathbb{F}}^2(\mathbb{R}):=\Big\{(\phi_t)_{0\leq t< \infty}\ \text{real-valued continuous}\ \mathbb {F}\text{-adapted process}: \mathbb{E}[\mathop{\rm sup}\limits_{t\in[0,\infty)}|\phi_t|^2] <\infty\Big\};\\ &\mathcal{H}_{\mathbb{F}}^2(\mathbb{R}^{d}):= \Big\{(\phi_t)_{0\leq t< \infty}\ \mathbb{R}^{d}\text{-valued}\ \mathbb{F}\text{-progressively measurable process}: \mathbb{E}[\int_0^\infty|\phi_t|^2dt] <\infty\Big\};\\ &\mathcal{H}_{\mathbb{F}}^{2,-2\lambda}(0,T;\mathbb{R}^d):=\{(\phi_t)_{0\leq t\leq T}\ \mathbb{R}^d\text{-valued}\ \mathbb{F}\text{-progressively measurable process}: \\ &\qquad\qquad\qquad\qquad\quad \mathbb{E}[\int_0^T\exp(-2\lambda t)|\phi_t|^2dt]<\infty\};\\ &L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d):=\{(\phi_t)_{0\leq t< \infty}\ \mathbb{R}^{d}\text{-valued}\ \mathbb {F}\text{-adapted\ essentially\ bounded\ process}\};\\ &L^2(\mathcal{F}_\infty;\mathbb{R}):=\Big\{\xi\ \text{real-valued}\ \mathcal{F}_\infty \text{-measurable random variable}:\mathbb{E}[|\xi|^2]<\infty\Big\}. \end{split} \end{equation*} We suppose that $(U,d)$ is a compact metric space, $U$ is our control state space, and $\mathcal{U}=L_{\mathbb{F}}^\infty(0,\infty;U)$ is the space of all admissible control processes. It is defined as the set of all $U$-valued $\mathbb{F}$-adapted processes. 
Let us consider functions $b:\mathbb{R}^N\times U \rightarrow \mathbb{R}^N$ and $\sigma:\mathbb{R}^N\times U\rightarrow\mathbb{R}^{N\times d}$ satisfying standard conditions of continuity and Lipschitz property: \begin{equation*}\label{r1} \left\{ \begin{array}{llll} \mbox{(Hi)}\ b,\ \sigma\ \mbox{are\ uniformly\ continuous\ on}\ \mathbb{R}^N\times U,\\ \mbox{(Hii)}\ \text{There\ exists\ a\ constant}\ c>0\ \mbox{such\ that} \\ \ \ \ |b(x,u)-b(x',u)|+|\sigma(x,u)-\sigma(x',u)| \leq c|x-x'|,\ \mbox{for\ all}\ x,\ x'\in\mathbb{R}^N,\ u\in U,\\ \tag{H1} \ \ \ |b(x,u)|+|\sigma(x,u)|\leq c(1+|x|),\ \mbox{for\ all}\ x\in\mathbb{R}^N,\ u\in U. \end{array} \right. \end{equation*} \begin{lemma}\label{l:2.1} Under our standard assumptions (H1), for all control $u\in\mathcal{U}$, the controlled stochastic system \begin{equation}\label{r2} \left\{ \begin{array}{ll} dX_t^{x,u}=b(X_t^{x,u},u_t)dt+\sigma(X_t^{x,u},u_t)dW_t, \ \ t\geq 0, \\ X_0^{x,u}=x\in\mathbb{R}^N, \end{array} \right. \end{equation} has a unique $\mathbb{R}^N$-valued continuous, $\mathbb{F}$-adapted solution $X^{x,u}=(X^{x,u}_t)_{t\geq0}$. Moreover, for all $T>0$, and $k\geq2$, there is a constant $C_k(T)>0$ such that \begin{equation*} \begin{split} &\mathbb{E}[\mathop{\rm sup}\limits_{0\leq s\leq t}|X_s^{x,u}|^k]\leq C_k(T)(1+|x|^k),\\ &\mathbb{E}[\mathop{\rm sup}\limits_{0\leq s\leq t}|X_s^{x,u}-X_s^{x',u}|^k]\leq C_k(T)|x-x'|^k,\ t\in[0,T], x,\ x'\in\mathbb{R}^N, u\in\mathcal{U}. \end{split} \end{equation*} \end{lemma} The above result on SDEs is by now well known; for its proof the readers can refer to Ikeda, Watanabe \cite[pp.166-168]{Ikeda 1989} or Karatzas, Shreve \cite[pp.289-290]{Karatzas 1987}. We suppose that there exists a non-empty open set $\theta\subset \mathbb{R}^N$ with compact closure $\overline{\theta}$ such that $\overline{\theta}$ is invariant with respect to the control system (\ref{r2}). 
Recall that the invariance of $\overline{\theta}$ means that, for every control process $u\in\mathcal{U}$ and every $x\in\overline{\theta}$, we have $X_t^{x,u}\in\overline{\theta}$, for all $t\geq0$, $\mathbb{P}$-a.s. Given now a function $\psi:\mathbb{R}^N\times \mathbb{R}^d\times U\rightarrow \mathbb{R}$, for any $\lambda>0$, we consider the following BSDE on the infinite time interval $[0,\infty)$: \begin{equation}\label{r40} \overline{Y}_t^{\lambda,x,u}=\overline{Y}_T^{\lambda,x,u}+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\lambda \overline{Y}_s^{\lambda,x,u})ds-\int_t^T\overline{Z}_s^{\lambda,x,u}dW_s,\ 0\leq t\leq T<\infty. \end{equation} \begin{definition} A couple of processes $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ is called a solution of BSDE (\ref{r40}) on the infinite time interval if $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ satisfies the equation (\ref{r40}), $\overline{Y}^{\lambda,x,u}=(\overline{Y}^{\lambda,x,u}_t)_{t\geq0}\in\mathcal{S}_{\mathbb{F}}^2(\mathbb{R})$ is bounded by some constant $\widetilde{M}$, and $\overline{Z}^{\lambda,x,u}=(\overline{Z}^{\lambda,x,u}_t)_{t\geq0}$ is in the space $$\mathcal{H}_{loc}^2(\mathbb{R}^d)=\{(\phi_t)_{0\leq t<\infty}: \mathbb{R}^d\mbox{-valued}\ \mathbb{F}\mbox{-progressively measurable},\ \displaystyle\mathbb{E}[\int_0^T|\phi_t|^2dt]<+\infty,\ 0\leq T<\infty\}.$$ \end{definition} We suppose that $\psi:\mathbb{R}^N\times \mathbb{R}^d\times U\rightarrow \mathbb{R}$ satisfies the following conditions: \begin{equation*}\label{r48} \left\{ \begin{array}{llll} \mbox{(Hiii)}\ \psi\ \text{is\ continuous\ on}\ \mathbb{R}^N\times\mathbb{R}^d\times U;\\ \mbox{(Hiv)}\ \text{There\ exist\ nonnegative\ constants}\ K_x, K_z\ \mbox{and}\ M \ \text{such\ that}\\ \ \ \ |\psi(x,z,u)-\psi(x',z',u)|\leq K_x|x-x'|+K_z|z-z'|,\\ \ \ \ |\psi(x,0,u)|\leq M, \ \ (x,x',z,z',u)\in \mathbb{R}^{2N}\times\mathbb{R}^{2d}\times U.\tag{H2} \end{array} \right.
\end{equation*} The following proposition will be used frequently in what follows. We adapt the proof from \cite{Debussche 2011}, and also prove new estimates. \begin{proposition}\label{th:2.4} Under the assumptions (\ref{r1}) and (\ref{r48}), BSDE (\ref{r40}) on the infinite time interval $[0,\infty)$ has a unique solution $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})\in L^\infty _{\mathbb{F}}(0,\infty;\mathbb{R})\times\mathcal{H}^2_{loc}(\mathbb{R}^d)$. Moreover, we have \begin{equation*} |\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0,\ \text{and}\ \mathbb{E}[\int_0^\infty|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq2(\frac{M}{\lambda})^2(2+\frac{K_z^2}{\lambda}). \end{equation*} \end{proposition} \begin{proof} \textbf{Uniqueness}. Let $x\in\mathbb{R}^N$ and $u\in \mathcal{U}$ be arbitrarily given. Suppose that $(\overline{Y}_t^{1,\lambda,x,u},\overline{Z}_t^{1,\lambda,x,u})_{t\geq0}$ and $(\overline{Y}_t^{2,\lambda,x,u},\overline{Z}_t^{2,\lambda,x,u})_{t\geq0}$ are two solutions of BSDE (\ref{r40}) such that $\overline{Y}^{1,\lambda,x,u}, \overline{Y}^{2,\lambda,x,u}$ are continuous and bounded and $\overline{Z}^{1,\lambda,x,u}, \overline{Z}^{2,\lambda,x,u}\in \mathcal{H}^2_{loc}(\mathbb{R}^d)$. Let us set $\widehat{Y}_t=\overline{Y}_t^{1,\lambda,x,u}-\overline{Y}_t^{2,\lambda,x,u}$ and $\widehat{Z}_t=\overline{Z}_t^{1,\lambda,x,u}-\overline{Z}_t^{2,\lambda,x,u}$, $t\geq0$. Then, $\widehat{Y}$ is continuous and $|\widehat{Y}|\leq\overline{M}$, for some constant $\overline{M}$. We define \begin{equation*} \gamma_s=\left\{ \begin{array}{lll} &\frac{\psi(X_s^{x,u},\overline{Z}_s^{1,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{2,\lambda,x,u},u_s)}{|\widehat{Z}_s|^2}(\widehat{Z}_s)^*,\ \mbox{if}\ \widehat{Z}_s\neq 0;\\ & 0, \ \mbox{otherwise}, \end{array}\right. \end{equation*} and we notice that $|\gamma_s|\leq K_z,\ s\geq0$. Let $0\leq T<\infty$ be arbitrarily fixed.
We define the probability $\mathbb{P}_T^\gamma$ on $(\Omega,\mathcal{F})$ by setting \begin{equation*} \frac{d\mathbb{P}_T^\gamma}{d\mathbb{P}}=\exp\{\int_0^T\gamma_sdW_s-\frac{1}{2}\int_0^T|\gamma_s|^2ds\}. \end{equation*} Then, from Girsanov's theorem, \begin{equation*} \begin{split} \widehat{Y}_t=&\widehat{Y}_T+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{1,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{2,\lambda,x,u},u_s))ds-\lambda\int_t^T\widehat{Y}_sds-\int_t^T\widehat{Z}_sdW_s\\ =& \widehat{Y}_T-\lambda\int_t^T\widehat{Y}_sds-\int_t^T\widehat{Z}_s(dW_s-\gamma_sds)\\ =&\widehat{Y}_T-\lambda\int_t^T\widehat{Y}_sds-\int_t^T\widehat{Z}_sdW^{\gamma,T}_s,\ t\in[0,T], \end{split} \end{equation*} where $\displaystyle W^{\gamma,T}_t=W_t-\int_0^t\gamma_sds,\ t\in[0,T],$ is an $(\mathbb{F},\mathbb{P}_T^\gamma)$-Brownian motion. Applying It\^{o}'s formula to $e^{-\lambda s}\widehat{Y}_s$, we get \begin{equation*} e^{-\lambda T}\widehat{Y}_T-e^{-\lambda t}\widehat{Y}_t=\int_t^Te^{-\lambda s}\widehat{Z}_sdW_s^{\gamma,T},\ t\in[0,T]. \end{equation*} From standard estimates we see that $\displaystyle (\int_0^te^{-\lambda s}\widehat{Z}_sdW_s^{\gamma,T})_{t\in[0,T]}$ is an $(\mathbb{F},\mathbb{P}_T^\gamma)$-martingale. Thus, denoting by $\mathbb{E}^\gamma_T[\cdot\big|\mathcal{F}_t]$ the conditional expectation under $\mathbb{P}_T^\gamma$, it follows that \begin{equation*} \widehat{Y}_t=\mathbb{E}^\gamma_T[\widehat{Y}_t\big|\mathcal{F}_t]=\mathbb{E}_T^\gamma[e^{-\lambda (T-t)}\widehat{Y}_T\big|\mathcal{F}_t]-\mathbb{E}_T^\gamma[\int_t^Te^{-\lambda (s-t)}\widehat{Z}_sdW_s^{\gamma,T}\big|\mathcal{F}_t]=\mathbb{E}_T^\gamma[e^{-\lambda (T-t)}\widehat{Y}_T\big|\mathcal{F}_t],\ t\in[0,T]. \end{equation*} Recall that $|\widehat{Y}_s|\leq\overline{M},\ s\geq0$. Hence, $|\widehat{Y}_t|\leq e^{-\lambda(T-t)}\overline{M},\ 0\leq t\leq T<\infty$. 
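The Girsanov densities used throughout this proof are uniformly well behaved because $|\gamma|\le K_z$ (Novikov's condition is trivially satisfied), so each $L_T^\gamma=\frac{d\mathbb{P}_T^\gamma}{d\mathbb{P}}$ has mean one. A small Monte Carlo sketch (ours, with a constant $\gamma$ as a simplifying assumption) illustrates this normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, T, n_steps, n_paths = 0.5, 1.0, 200, 200_000
dt = T / n_steps
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
# Girsanov density L_T = exp( int_0^T gamma dW_s - (1/2) int_0^T |gamma|^2 ds )
# with a constant scalar gamma (our simplifying assumption):
L_T = np.exp(gamma * dW.sum(axis=1) - 0.5 * gamma**2 * T)
print(L_T.mean())  # close to 1, since E[L_T] = 1
```

The same normalization is what makes the conditional-expectation arguments below legitimate: reweighting by $L_T^\gamma$ changes the drift of $W$ without changing total mass.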
Finally, letting $T$ tend to infinity, we obtain that, for any $t\geq0$, $\widehat{Y}_t=0,\ \mathbb{P}\text{-}a.s.$, i.e., $\overline{Y}_t^{1,\lambda,x,u}=\overline{Y}_t^{2,\lambda,x,u},\ \text{for\ all}\ t\geq0,\ \mathbb{P}\text{-}a.s.$ \textbf{Existence.} For arbitrarily given $x\in\mathbb{R}^N, u\in \mathcal{U}\ \mbox{and}\ n\geq1$, we define $(\overline{Y}_t^{n,\lambda,x,u},\overline{Z}_t^{n,\lambda,x,u})_{t\geq 0}\in\mathcal{S}_{\mathbb{F}}^2([0,n];\mathbb{R})\times\mathcal{H}_{\mathbb{F}}^2([0,n];\mathbb{R}^d)$ as the unique solution of the following BSDE: \begin{equation}\label{r80} \overline{Y}_t^{n,\lambda,x,u}=\int_t^n(\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\lambda\overline{Y}_s^{n,\lambda,x,u})ds-\int_t^n\overline{Z}_s^{n,\lambda,x,u}dW_s,\ t\in[0,n]. \end{equation} Then, from a classical result for BSDEs we get the existence and the uniqueness of the solution $(\overline{Y}^{n,\lambda,x,u},\overline{Z}^{n,\lambda,x,u})$ under the assumption (H2). Now we will give the proof in four steps.\\ \textbf{Step 1.} $(\overline{Y}_t^{n,\lambda,x,u})_{t\in[0,n]}$ is bounded, uniformly with respect to $n$.\\ \indent Indeed, by introducing the ${\mathbb{F}}$-adapted process \begin{equation*} \gamma_s^n=\left\{ \begin{array}{lll} &\frac{\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\psi(X_s^{x,u},0,u_s)}{|\overline{Z}_s^{n,\lambda,x,u}|^2}(\overline{Z}_s^{n,\lambda,x,u})^*, \ \mbox{if}\ \overline{Z}_s^{n,\lambda,x,u}\neq 0;\\ & 0, \ \mbox{otherwise},\ s\in[0,n], \end{array}\right. \end{equation*} the above BSDE takes the form \begin{equation*} \overline{Y}_t^{n,\lambda,x,u}=\int_t^n(\psi(X_s^{x,u},0,u_s)-\lambda\overline{Y}_s^{n,\lambda,x,u})ds-\int_t^n\overline{Z}_s^{n,\lambda,x,u}(dW_s-\gamma_s^nds), \ t\in[0,n]. 
\end{equation*} As $\mid\gamma_s^n\mid\leq K_z,\ s\in[0,n],\ n\geq0,$ we know from the Girsanov Theorem that $\displaystyle W_t^n=W_t-\int_0^t\gamma_s^nds,\ t\in[0,n],$ is a Brownian motion under $\displaystyle d\mathbb{P}^n=\exp\{\int_0^n\gamma_s^ndW_s-\frac{1}{2}\int_0^n|\gamma_s^n|^2ds\}d\mathbb{P}$. Consequently, applying It\^{o}'s formula to $e^{-\lambda t}\overline{Y}^{n,\lambda,x,u}_t,\ t\in[0,n]$, and taking the conditional expectation $\mathbb{E}^n[\cdot\big|\mathcal{F}_t]$ with respect to $\mathbb{P}^n$, we obtain \begin{equation*} \overline{Y}_t^{n,\lambda,x,u}=\mathbb{E}^n[\int_t^ ne^{-\lambda(s-t)}\psi(X_s^{x,u},0,u_s)ds\big|\mathcal{F}_t],\ t\in[0,n]. \end{equation*} Finally, as $|\psi(x',0,u')|\leq M,\ (x',u')\in\mathbb{R}^N\times U$, it follows that \begin{equation*} |\overline{Y}_t^{n,\lambda,x,u}|\leq M\int_t^ne^{-\lambda(s-t)}ds\leq\frac{M}{\lambda},\ t\in[0,n],\ n\geq1. \end{equation*} Let us show now the second step.\\ \textbf{Step 2.} The sequence $(\overline{Y}_t^{n,\lambda,x,u})_{t\in[0,n]},\ n\geq1,$ converges uniformly on compacts, $\mathbb{P}$-a.s., as $n\rightarrow\infty$. For $n,\ m\geq1$ with $n\geq m$, we define\\ $$ \gamma_s^{n,m}=\frac{\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{m,\lambda,x,u},u_s)} {|\overline{Z}_s^{n,\lambda,x,u}-\overline{Z}_s^{m,\lambda,x,u}|^2}(\overline{Z}_s^{n,\lambda,x,u}-\overline{Z}_s^{m,\lambda,x,u})^*,\ \mbox{if}\ \overline{Z}_s^{n,\lambda,x,u}\neq\overline{Z}_s^{m,\lambda,x,u};$$ \noindent and $\gamma_s^{n,m}=0$, otherwise, $s\in[0,m]$. As $|\gamma_s^{n,m}|\leq K_z,\ s\in[0,m]$, we can use the Girsanov Theorem to introduce the probability measure $\displaystyle d\mathbb{P}^{n,m}=\exp\{\int_0^m\gamma_s^{n,m}dW_s-\frac{1}{2}\int_0^m|\gamma_s^{n,m}|^2ds\}d\mathbb{P},$ under which $\displaystyle W_t^{n,m}=W_t-\int_0^t\gamma_s^{n,m}ds,\ t\in[0,m],$ is an $\mathbb{F}$-Brownian motion. 
From (\ref{r80}) we have \begin{equation*} \overline{Y}_t^{n,\lambda,x,u}-\overline{Y}_t^{m,\lambda,x,u}=e^{-\lambda(m-t)}\overline{Y}_m^{n,\lambda,x,u}-\int_t^me^{-\lambda (s-t)}(\overline{Z}_s^{n,\lambda,x,u}-\overline{Z}_s^{m,\lambda,x,u})dW_s^{n,m},\ t\in[0,m]. \end{equation*} Consequently, considering that $|\overline{Y}_m^{n,\lambda,x,u}|\leq\frac{M}{\lambda}$ (Step 1), by taking the conditional expectation under $\mathbb{P}^{n,m}$, we get \begin{equation*} |\overline{Y}_t^{n,\lambda,x,u}-\overline{Y}_t^{m,\lambda,x,u}|=|\mathbb{E}^{n,m}[e^{-\lambda(m-t)}\overline{Y}_m^{n,\lambda,x,u}\big|\mathcal{F}_t]|\leq\frac{M}{\lambda}e^{-\lambda(m-t)},\ 0\leq t\leq m\leq n, \end{equation*} i.e., for all $T>0,\ n\geq m\geq T,$ \begin{equation}\label{t1} \mathop{\rm sup}\limits_{t\in[0,T]}|\overline{Y}_t^{n,\lambda,x,u}-\overline{Y}_t^{m,\lambda,x,u}|\leq \frac{M}{\lambda}e^{-\lambda(m-T)}\xrightarrow[ n\geq m\rightarrow\infty]{} 0,\ \mathbb{P}\mbox{-}a.s. \end{equation} Consequently, there is a continuous adapted process $\overline{Y}^{\lambda,x,u}=(\overline{Y}_t^{\lambda,x,u})_{t\geq0}$ to which $(\overline{Y}_{t\wedge n}^{n,\lambda,x,u})_{t\geq0}$ converges uniformly on compacts, $\mathbb{P}$-a.s. Moreover, $|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0,\ \mathbb{P}$-a.s.\\ \textbf{Step 3.} There is a process $\overline{Z}^{\lambda,x,u}=(\overline{Z}_t^{\lambda,x,u})_{t\geq0}\in\mathcal{H}^2_{loc}(\mathbb{R}^d)$ such that, for all $T>0$, \begin{equation*} \mathbb{E}[\int_0^T|\overline{Z}_t^{\lambda,x,u}-\overline{Z}_t^{n,\lambda,x,u}|^2dt]\xrightarrow[n\rightarrow\infty]{} 0.
\end{equation*} Indeed, for $n\geq m\geq T$, we get from (\ref{r80}) and (\ref{t1}) \begin{equation*} \begin{split} &\mathbb{E}[\int_0^T|\overline{Z}_t^{n,\lambda,x,u}-\overline{Z}_t^{m,\lambda,x,u}|^2dt]\\ \leq&2\mathbb{E}[\int_0^T|\overline{Y}_s^{n,\lambda,x,u}-\overline{Y}_s^{m,\lambda,x,u}||\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\psi(X_s^{x,u},\overline{Z}_s^{m,\lambda,x,u},u_s)| ds]\\ &+\mathbb{E}[|\overline{Y}_T^{n,\lambda,x,u}-\overline{Y}_T^{m,\lambda,x,u}|^2]\leq (4K_z^2T+1)\frac{M^2}{\lambda^2}e^{-2\lambda(m-T)}+\frac{1}{2}\mathbb{E}[\int_0^T|\overline{Z}_t^{n,\lambda,x,u}-\overline{Z}_t^{m,\lambda,x,u}|^2dt]. \end{split} \end{equation*} This proves that, for $n\geq m\geq T,$ \begin{equation*} \mathbb{E}[\int_0^T|\overline{Z}_t^{n,\lambda,x,u}-\overline{Z}_t^{m,\lambda,x,u}|^2dt]\leq2(4K_z^2T+1)\frac{M^2}{\lambda^2}e^{-2\lambda(m-T)}\xrightarrow[n\geq m\rightarrow\infty]{}0. \end{equation*} This completes Step 3.\\ \textbf{Step 4.} Finally, recall that from (\ref{r80}), for $n\geq T$, \begin{equation*} \overline{Y}_t^{n,\lambda,x,u}= \overline{Y}_T^{n,\lambda,x,u}+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{n,\lambda,x,u},u_s)-\lambda\overline{Y}_s^{n,\lambda,x,u})ds-\int_t^T\overline{Z}_s^{n,\lambda,x,u}dW_s,\ t\in[0,T]. \end{equation*} The Steps 2 and 3 allow to take the limit in this BSDE, as $n\rightarrow\infty$, and we obtain that $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ is the solution of the following BSDE: \begin{equation}\label{r81} \overline{Y}_t^{\lambda,x,u}=\overline{Y}_T^{\lambda,x,u}+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\lambda\overline{Y}_s^{\lambda,x,u})ds-\int_t^T\overline{Z}_s^{\lambda,x,u}dW_s,\ 0\leq t\leq T<+\infty, \end{equation} with $| \overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0,$ and $\overline{Z}^{\lambda,x,u}\in\mathcal{H}^2_{loc}(\mathbb{R}^d)$. 
It remains to show that \begin{equation*} \mathbb{E}[\int_0^\infty|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq2(\frac{M}{\lambda})^2(2+\frac{K_z^2}{\lambda}). \end{equation*} For this, by applying It\^{o}'s formula to $|e^{-\lambda t}\overline{Y}_t^{\lambda,x,u}|^2$, it follows from (\ref{r81}) that, for all $T>0$, \begin{equation*} \begin{split} &\mathbb{E}[\int_0^T|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq\mathbb{E}[|e^{-\lambda T}\overline{Y}_T^{\lambda,x,u}|^2+2\int_0^Te^{-2\lambda t}\overline{Y}_t^{\lambda,x,u}\psi(X_t^{x,u},\overline{Z}_t^{\lambda,x,u},u_t)dt]\\ \leq&\mathbb{E}[|e^{-\lambda T}\overline{Y}_T^{\lambda,x,u}|^2+2\int_0^Te^{-2\lambda t}|\overline{Y}_t^{\lambda,x,u}|(M+K_z|\overline{Z}_t^{\lambda,x,u}|)dt], \end{split} \end{equation*} and, hence, \begin{equation*} \frac{1}{2}\mathbb{E}[\int_0^T|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]\leq\mathbb{E}[|e^{-\lambda T}\overline{Y}_T^{\lambda,x,u}|^2]+2\mathbb{E}[\int_0^Te^{-2\lambda t}M|\overline{Y}_t^{\lambda,x,u}|dt]+2K_z^2\mathbb{E}[\int_0^T|e^{-\lambda t}\overline{Y}_t^{\lambda,x,u}|^2dt]. \end{equation*} Therefore, using that $|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda},\ t\geq0$, we obtain the stated estimate for $\displaystyle \mathbb{E}[\int_0^\infty|e^{-\lambda t}\overline{Z}_t^{\lambda,x,u}|^2dt]$. \end{proof} For any $\lambda>0$, let us define the value function \begin{equation}\label{r94} V_\lambda(x):=\inf\limits_{u\in\mathcal{U}}\overline{Y}_0^{\lambda,x,u},\ x\in\mathbb{R}^N, \end{equation} \noindent where $\overline{Y}^{\lambda,x,u}$ is introduced by the BSDE (\ref{r40}). 
We impose the following so-called \underline{nonexpansivity condition} for (\ref{r94}): For all $x,x'\in \mathbb{R}^N, u\in U,$ there exists $v\in U$ such that, for all $z\in\mathbb{R}^d$, \begin{equation*}\label{r49} \left\{ \begin{array}{llll} \mbox{(i)}\ g(x,x',u,v):=\langle x-x',b(x,u)-b(x',v)\rangle+\frac{1}{2}|\sigma(x,u)-\sigma(x',v)|^2\\ \qquad\qquad\qquad\qquad+K_z|\sigma(x,u)-\sigma(x',v)||x-x'|\leq0;\\ \mbox{(ii)}\ \text{There\ exists\ a\ constant}\ \overline{c}_0>0\ \text{such\ that}\\ \ \ \ \ \ \widetilde{\psi}(x,x',z,u,v):=|\psi(x,z,u)-\psi(x',z,v)|-\overline{c}_0|x-x'|\leq0, \tag{H3} \end{array} \right. \end{equation*} with $K_z>0$ introduced in (\ref{r48}). We also introduce a new \underline{stochastic nonexpansivity condition}: For all $\varepsilon>0,\ \lambda>0,\ x,\ x'\in \overline{\theta}$, and all $u\in\mathcal{U}$, there exists $v\in\mathcal{U}$ such that, for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, $dsd\mathbb{P}$-a.e., and with the notation $\displaystyle L_t^\gamma=\exp\{\int_0^t\gamma_sdW_s-\frac{1}{2}\int_0^t|\gamma_s|^2ds\}$, \begin{equation*}\label{r50} \left\{ \begin{array}{llll} \mbox{(i)}\ \displaystyle\big(\mathbb{E}[L_t^\gamma|X_t^{x,u}-X_t^{x',v}|^2]\big)^{\frac{1}{2}}\leq|x-x'|+\varepsilon,\ t\geq0;\\ \mbox{(ii)}\ \displaystyle\lambda\int_0^\infty e^{-\lambda s}\mathbb{E}[L_s^\gamma|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v},\overline{Z}_s^{\lambda,x,u},v_s)|]ds\leq \overline{c}_0|x-x'|+\varepsilon. \tag{H4} \end{array} \right. \end{equation*} \begin{remark} Let us recall the nonexpansivity condition in \cite{Buckdahn 2013}, established for $\psi=\psi(x,u)$, which is extended by (\ref{r49}): For all $(x,x')\in\mathbb{R}^{2N}$, \begin{equation*} \mathop{\rm sup}\limits_{u\in U}\inf\limits_{v\in U}\max\big((\langle x-x',b(x,u)-b(x',v)\rangle+\frac{1}{2}|\sigma(x,u)-\sigma(x',v)|^2),|\psi(x,u)-\psi(x',v)|-\overline{c}_0|x-x'|\big)\leq 0.
\end{equation*} Observe that, if $\psi$ is independent of $z$ (that is, $\psi=\psi(x,u)$), then $K_z=0$ and (\ref{r49}) coincides with the above nonexpansivity condition in \cite{Buckdahn 2013}.\\ \indent However, (\ref{r50}) is new; it reformulates the stochastic nonexpansivity condition in \cite{Buckdahn 2013} by taking into account the BSDE over the infinite time interval $[0,\infty)$. \end{remark} \begin{example} Let $d=1$ and $b(x,u)=-3x,\ \sigma(x,u)=x,\ \psi(x,z,u)=z$, for $x\in\mathbb{R}^N,\ u\in U$ and $z\in\mathbb{R}$. Then $K_z=1$, and for $\overline{c}_0=1$ we have \begin{equation*} \begin{split} g(x,x',u,v):=&\langle x-x',b(x,u)-b(x',v)\rangle+\frac{1}{2}|\sigma(x,u)-\sigma(x',v)|^2+K_z|\sigma(x,u)-\sigma(x',v)||x-x'|\\ =&-\frac{3}{2}|x-x'|^2\leq0, \end{split} \end{equation*} and \begin{equation*} \widetilde{\psi}(x,x',z,u,v):=|\psi(x,z,u)-\psi(x',z,v)|-\overline{c}_0|x-x'|=-\overline{c}_0|x-x'|\leq0. \end{equation*} \end{example} \begin{proposition}\label{p:2.1} Under the assumptions (\ref{r1}) and (\ref{r48}) the nonexpansivity condition (\ref{r49}) implies the stochastic nonexpansivity condition (\ref{r50}). \end{proposition} \begin{proof} We fix arbitrarily $(x,x')\in \overline{\theta}^2$, $\lambda>0$, $T>0$, $\varepsilon>0$, and $u\in\mathcal{U}$. Without loss of generality, let us suppose that $u$ is a step process, i.e., that there exists a partition of $[0,T]$, denoted by $0=t_0<t_1<t_2<\cdots<t_M=T$, and random variables $u_i\in L^0(\mathcal{F}_{t_i}; U)$, $0\leq i\leq M-1$, such that \begin{equation*} u=\sum^{M-1}_{i=0}u_i1_{(t_i,t_{i+1}]}. \end{equation*} We refer the reader to \cite{Krylov 1999} for further details.
Indeed, we can make this choice, since such step processes are dense in the space of admissible controls $\mathcal{U}$ endowed with the metric $\displaystyle(\mathbb{E}[\int_0^\infty e^{-t}d(u_t,u'_t)^2dt])^{\frac{1}{2}},\ u,\ u'\in \mathcal{U}$, and the controlled state process $X^{x,u}$ as well as the solution $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ of the BSDE (\ref{r40}) are $L^2$-continuous in $u\in \mathcal{U}$. Now we introduce the set-valued function \begin{equation*} \begin{split} \overline{\theta}^2\times U\ni(x,x',u)\rightsquigarrow\Xi(x,x',u):=&\{v\in U:g(x,x',u,v)\leq0,\ \widetilde{\psi}(x,x',z,u,v)\leq0,\ \text{for\ all}\ z\in\mathbb{R}^d\}. \end{split} \end{equation*} From the fact that $\Xi$ is upper semicontinuous and has nonempty compact values we know that there exists a Borel function (see Aubin and Frankowska \cite{Aubin 1990}) \begin{equation*} \widehat{v}:\overline{\theta}^2\times U\to U,\ \text{with}\ \widehat{v}(x,x',u)\in\Xi(x,x',u),\ \text{for\ all}\ (x,x',u)\in\overline{\theta}^2\times U. \end{equation*} \textbf{Step 1.} On $[0,t_1]$, setting $\tau_0=0$, we define $$v_t^{0,0}:=\widehat{v}(X_0^{x,u},x',u_t)=\widehat{v}(x,x',u_0)(=v_0^{0,0}),$$ and \begin{equation*} \begin{split} \tau_1:=&\inf\{t\geq0:g(X_t^{x,u},X_t^{x',v^{0,0}},u_t,v_t^{0,0})>\delta\ \mbox{or}\ \mathop{\rm sup}\limits_{z\in\mathbb{R}^d}\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,0}},z,u_t,v_t^{0,0})>\delta\}\wedge\frac{t_1}{n},\ n\geq1, \end{split} \end{equation*} where $\delta>0$ is arbitrarily small and will be specified later.
Similar to the proof of Lemma 3 in \cite{Buckdahn 2013}, from the assumption that the compact $\overline{\theta}$ is invariant with respect to control system (\ref{r2}) and from (\ref{r1}) we get for all $t\in[0,t_1]$, \begin{equation*} \begin{split} &g(X_t^{x,u},X_t^{x',v^{0,0}},u_t,v_t^{0,0}) =\langle X_t^{x,u}-X_t^{x',v^{0,0}},b(X_t^{x,u},u_t)-b(X_t^{x',v^{0,0}},v_t^{0,0})\rangle\\ &+\frac{1}{2}|\sigma(X_t^{x,u},u_t)-\sigma(X_t^{x',v^{0,0}},v_t^{0,0})|^2+K_z|\sigma(X_t^{x,u},u_t)-\sigma(X_t^{x',v^{0,0}},v_t^{0,0})||X_t^{x,u}-X_t^{x',v^{0,0}}|\\ \leq&\langle x-x',b(x,u_0)-b(x',v_0^{0,0})\rangle+\frac{1}{2}|\sigma(x,u_0)-\sigma(x',v_0^{0,0})|^2+K_z|\sigma(x,u_0)-\sigma(x',v_0^{0,0})||x-x'|\\ &+c(|x-X_t^{x,u}|+|x'-X_t^{x',v^{0,0}}|)\\ =&g(x,x',u_0,v_0^{0,0})+c(|x-X_t^{x,u}|+|x'-X_t^{x',v^{0,0}}|), \end{split} \end{equation*} and \begin{equation*} \begin{split} &\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,0}},\overline{Z}_t^{\lambda,x,u},u_t,v_t^{0,0})\\ =&|\psi(X_t^{x,u},\overline{Z}_t^{\lambda,x,u},u_t)-\psi(X_t^{x',v^{0,0}},\overline{Z}_t^{\lambda,x,u},v_t^{0,0})|-\overline{c}_0| X_t^{x,u}-X_t^{x',v^{0,0}}|\\ \leq&|\psi(x,\overline{Z}_t^{\lambda,x,u},u_0)-\psi(x',\overline{Z}_t^{\lambda,x,u},v_0^{0,0})|-\overline{c}_0|x-x'|+c(|X_t^{x,u}-x|+| X_t^{x',v^{0,0}}-x'|)\\ =&\widetilde{\psi}(x,x',\overline{Z}_t^{\lambda,x,u},u_0,v_0^{0,0})+c(|X_t^{x,u}-x|+|X_t^{x',v^{0,0}}-x'|), \end{split} \end{equation*} for some constant $c$ depending on the coefficients $\sigma,b,\psi$ and on $\overline{\theta}$. Then, from the choice of $v^{0,0}$ we have that \begin{equation*} g(X_t^{x,u},X_t^{x',v^{0,0}},u_t,v_t^{0,0})\leq c(|x-X_t^{x,u}|+|x'-X_t^{x',v^{0,0}}|), \end{equation*} and \begin{equation*} \widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,0}},\overline{Z}_t^{\lambda,x,u},u_t,v_t^{0,0})\leq c(|X_t^{x,u}-x|+|X_t^{x',v^{0,0}}-x'|),\ t\in[0,t_1]. 
\end{equation*} Thus, applying Markov's inequality and Burkholder's inequality, we have that, for all $p>1,\ n\geq1$, there is a constant $c_p>0$ such that \begin{equation}\label{r88} \begin{split} &\mathbb{P}(\tau_1<\frac{t_1}{n})\leq\mathbb{P}(\mathop{\rm sup}\limits_{t\in[0,\frac{t_1}{n}]}c(|x'-X_t^{x',v^{0,0}}|+|x-X_t^{x,u}|)\geq\delta)\\ &\leq\frac{c}{\delta^{4p}}(\mathbb{E}[\mathop{\rm sup}\limits_{t\in[0,\frac{t_1}{n}]}|x'-X_t^{x',v^{0,0}}|^{4p}]+\mathbb{E}[\mathop{\rm sup}\limits_{t\in[0,\frac{t_1}{n}]}|x-X_t^{x,u}|^{4p}])\leq\frac{c^2_pt_1^{2p}}{\delta^{4p}n^{2p}}. \end{split} \end{equation} Recalling the definition of $L^\gamma$, we conclude that \begin{equation}\label{r89} \begin{split} \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{\frac{t_1}{n}}^{\gamma}1_{\{\tau_1<\frac{t_1}{n}\}}]\leq \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\big(\mathbb{E}[(L_{\frac{t_1}{n}}^{\gamma})^2]\big)^\frac{1}{2}\big(\mathbb{P}(\tau_1<\frac{t_1}{n})\big)^\frac{1}{2}\leq e^{\frac{1}{2}K_z^2\frac{t_1}{n}}\frac{c_pt_1^p}{\delta^{2p}n^p}. \end{split} \end{equation} For $1\leq i\leq n-1$, let us define iteratively $v^{0,i}$ and $\tau_{i+1}$. Given $\tau_i$ and $v^{0,i-1}\in\mathcal{U}$ we put \begin{equation*} v_t^{0,i}:=v_t^{0,i-1}1_{\{t\leq\tau_i\}}+\widehat{v}(X_{\tau_i}^{x,u},X_{\tau_i}^{x',v^{0,i-1}},u_t)1_{\{t>\tau_i\}}, \end{equation*} and \begin{equation*} \begin{split} \tau_{i+1}:=&\inf\{t\geq\tau_{i}:g(X_t^{x,u},X_t^{x',v^{0,i}},u_t,v_t^{0,i})>\delta\ \mbox{or}\ \mathop{\rm sup}\limits_{z\in\mathbb{R}^d}\widetilde{\psi}(X_t^{x,u},X_t^{x',v^{0,i}},z,u_t,v_t^{0,i})>\delta\}\wedge\frac{(i+1)t_1}{n}.
\end{split} \end{equation*} From the strong Markov property we have, in analogy to (\ref{r88}), \begin{equation*} \mathbb{P}(\tau_{i+1}-\tau_i<\frac{t_1}{n}\,\big|\,X_{\tau_i}^{x,u}=\widehat{x},X_{\tau_i}^{x',v^{0,i}}=\widehat{x}',\tau_i=\widehat{t}\ )\leq\frac{c^2_pt_1^{2p}}{\delta^{4p}n^{2p}},\ (\widehat{x},\widehat{x}')\in\overline{\theta}^2,\ \widehat{t}\in[0,\frac{it_1}{n}], \end{equation*} and, thus, \begin{equation*} \mathbb{P}(\tau_{i+1}-\tau_i<\frac{t_1}{n})\leq\frac{c^2_pt_1^{2p}}{\delta^{4p}n^{2p}}. \end{equation*} Moreover, similar to (\ref{r89}) we get \begin{equation*} \begin{split} \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{t_1}^{\gamma}1_{\{\tau_{i+1}-\tau_i<\frac{t_1}{n}\}}]\leq \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\big(\mathbb{E}[(L_{t_1}^{\gamma})^2]\big)^\frac{1}{2}\big(\mathbb{P}(\tau_{i+1}-\tau_i<\frac{t_1}{n})\big)^\frac{1}{2}\leq e^{\frac{1}{2}K_z^2t_1}\frac{c_pt_1^p}{\delta^{2p}n^p}. \end{split} \end{equation*} This shows that there exists a constant $\overline{c}_p>0$ such that \begin{equation}\label{112} \mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{t_1}^{\gamma}1_{\{\tau_n<t_1\}}]\leq \sum_{i=0}^{n-1}\mathop{\rm sup}\limits_{|\gamma|\leq K_z}\mathbb{E}[L_{t_1}^{\gamma}1_{\{\tau_{i+1}-\tau_i<\frac{t_1}{n}\}}]\leq e^{\frac{1}{2}K_z^2t_1}\frac{\overline{c}_pt_1^p}{\delta^{2p}n^{p-1}}. \end{equation} Let $d\mathbb{P}^\gamma_{t_1}=L_{t_1}^\gamma d\mathbb{P}$, and recall that due to Girsanov's theorem $\displaystyle W_t^\gamma=W_t-\int_0^t\gamma_sds,\ t\in[0,t_1],$ is an $(\mathbb{F},\mathbb{P}_{t_1}^\gamma)$-Brownian motion. Let us define $\displaystyle\mathbb{E}_{t_1}^\gamma[\cdot]=\int_\Omega(\cdot) d\mathbb{P}_{t_1}^\gamma=\mathbb{E}[L_{t_1}^\gamma(\cdot)]$.
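These estimates on $L^\gamma$ rest on the following standard exponential-moment bound, which only uses $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e.: for all $q\geq1$ and $t\geq0$,
\begin{equation*}
\mathbb{E}[(L_t^\gamma)^q]=\mathbb{E}\Big[\exp\Big\{\int_0^tq\gamma_sdW_s-\frac{1}{2}\int_0^t|q\gamma_s|^2ds\Big\}\exp\Big\{\frac{q(q-1)}{2}\int_0^t|\gamma_s|^2ds\Big\}\Big]\leq e^{\frac{q(q-1)}{2}K_z^2t},
\end{equation*}
since the second factor is bounded by $e^{\frac{q(q-1)}{2}K_z^2t}$ and the first factor is a Dol\'eans-Dade exponential with expectation one. The case $q=2$ yields the estimate $\big(\mathbb{E}[(L_{t_1}^\gamma)^2]\big)^{\frac{1}{2}}\leq e^{\frac{1}{2}K_z^2t_1}$ used above, and the case $q=4$ yields $\big(\mathbb{E}[(L_{t_1}^\gamma)^4]\big)^{\frac{1}{4}}\leq e^{\frac{3}{2}K_z^2t_1}\leq e^{2K_z^2t_1}$, which will be used below.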
Applying It\^{o}'s formula to $|X_t^{x,u}-X_t^{x',v^{0,n}}|^2$, for all $t\leq t_1$ we have \begin{eqnarray}\label{r41} \begin{split} \mathbb{E}_{t_1}^\gamma[|X_t^{x,u}-X_t^{x',v^{0,n}}|^2] =&|x-x'|^2 +2\mathbb{E}_{t_1}^\gamma\Big[\int_0^t\Big(\langle X_s^{x,u}-X_s^{x',v^{0,n}},b(X_s^{x,u},u_s)-b(X_s^{x',v^{0,n}},v_s^{0,n})\rangle\\ &+ \frac{1}{2}|\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n})|^2\Big) ds \Big] \\ +&2\mathbb{E}_{t_1}^\gamma\Big[\int_0^t(X_s^{x,u}-X_s^{x',v^{0,n}})(\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n}))dW_s\Big]. \end{split} \end{eqnarray} Thus, substituting $dW_s=dW_s^\gamma+\gamma_sds$ and taking into account that $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e., we obtain \begin{equation}\label{r91} \begin{split} &\mathbb{E}_{t_1}^\gamma[|X_t^{x,u}-X_t^{x',v^{0,n}}|^2] \leq|x-x'|^2+2\mathbb{E}_{t_1}^\gamma\Big[\int_0^t\Big(\langle X_s^{x,u}-X_s^{x',v^{0,n}},b(X_s^{x,u},u_s)-b(X_s^{x',v^{0,n}},v_s^{0,n})\rangle\\ &+\frac{1}{2}|\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n})|^2+K_z|\sigma(X_s^{x,u},u_s)-\sigma(X_s^{x',v^{0,n}},v_s^{0,n})||X_s^{x,u}-X_s^{x',v^{0,n}}|\Big) ds \Big]\\ =&|x-x'|^2+2\mathbb{E}_{t_1}^\gamma[\int_0^tg(X_s^{x,u},X_s^{x',v^{0,n}},u_s,v_s^{0,n})ds]\\ \leq& |x-x'|^2+2\mathbb{E}_{t_1}^\gamma[ct1_{\{t>\tau_n\}}+t\delta 1_{\{t\leq\tau_n\}}]\\ \leq& |x-x'|^2+\frac{ct_1^{p+1}}{\delta^{2p}n^{p-1}}e^{K_z^2t_1}+ct_1\delta,\ t\in[0,t_1]. \end{split} \end{equation} Here we have used the definition of $\tau_n$ and the boundedness of $g$ over $\overline{\theta}\times\overline{\theta}\times U\times U$.
Consequently, \begin{equation}\label{r90} \begin{split} &\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}[L_s^\gamma|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},v_s^{0,n})|]ds\\ =&\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})]ds+\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\overline{c}_0|X_s^{x,u}-X_s^{x',v^{0,n}}|]ds\\ \leq&\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\delta1_{\{\tau_n\geq s\}}+\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})1_{\{\tau_n<s\}}]ds\\ &+\overline{c}_0\lambda\int_0^{t_1}e^{-\lambda s}(| x-x'|+\frac{ct_1^{\frac{p+1}{2}}}{\delta^pn^{\frac{p-1}{2}}}e^{\frac{1}{2}K_z^2t_1}+(ct_1\delta)^{\frac{1}{2}})ds. \end{split} \end{equation} We remark that \begin{equation*} |\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)|\leq|\psi(X_s^{x,u},0,u_s)|+K_z|\overline{Z}_s^{\lambda,x,u}|\leq M+K_z| \overline{Z}_s^{\lambda,x,u}|, \end{equation*} and as the same estimate holds true for $\psi(X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},v^{0,n}_s)$, we have \begin{equation*} \widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})\leq2M+2K_z|\overline{Z}_s^{\lambda,x,u}|,\ s\in[0,t_1]. 
\end{equation*} Thus, \begin{equation*} \begin{array}{lll} &\displaystyle\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[\widetilde{\psi}(X_s^{x,u},X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},u_s,v_s^{0,n})1_{\{\tau_n<s\}}]ds\\ &\displaystyle\leq2M\mathbb{E}_{t_1}^\gamma[1_{\{\tau_n<t_1\}}]\lambda\int_0^{t_1}e^{-\lambda s}ds+2K_z\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[|\overline{Z}_s^{\lambda,x,u}|1_{\{\tau_n<s\}}]ds, \end{array} \end{equation*} where \begin{equation*} \lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[|\overline{Z}_s^{\lambda,x,u}|1_{\{\tau_n<s\}}]ds\leq \lambda(\mathbb{E}[\int_0^{t_1}|e^{-\lambda s}\overline{Z}_s^{\lambda,x,u}|^2ds])^{\frac{1}{2}}\cdot \sqrt{t_1}\cdot(\mathbb{E}[(L_{t_1}^\gamma)^4])^{\frac{1}{4}}(\mathbb{P}\{\tau_n<t_1\})^{\frac{1}{4}}. \end{equation*} Recall that \begin{equation*} \mathbb{E}[\int_0^{t_1}|e^{-\lambda s}\overline{Z}_s^{\lambda,x,u}|^2ds]\leq 2(\frac{M}{\lambda})^2(2+\frac{K_z^2}{\lambda}), \end{equation*} and observe that \begin{equation*} (\mathbb{E}[(L_{t_1}^\gamma)^4])^{\frac{1}{4}}\leq e^{2K_z^2t_1}, \end{equation*} and \begin{equation*} \mathbb{P}\{\tau_n<t_1\}\leq\sum\limits_{i=1}^n\mathbb{P}\{\tau_i-\tau_{i-1}<\frac{t_1}{n}\}\leq\frac{c_p^2t_1^{2p}}{\delta^{4p}n^{2p-1}}.
\end{equation*} Hence, \begin{equation*} \lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}_{t_1}^\gamma[|\overline{Z}_s^{\lambda,x,u}|1_{\{\tau_n<s\}}]ds\leq C_{M,\lambda}e^{2K_z^2t_1}\frac{c_p^{\frac{1}{2}}t_1^{\frac{p+1}{2}}}{\delta^{p}n^{\frac{p-1}{2}}}, \end{equation*} and supposing without loss of generality that $\delta\in(0,1),\ K_z\geq1$ and $t_1(=t_1-t_0)\leq1$, we get from (\ref{r90}), (\ref{112}) and the above estimates, \begin{equation*} \begin{split} &\lambda\int_0^{t_1}e^{-\lambda s}\mathbb{E}[L_s^\gamma| \psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{0,n}},\overline{Z}_s^{\lambda,x,u},v_s^{0,n})|]ds\\ \leq& \lambda\int_0^{t_1}e^{-\lambda s}ds\cdot(\overline{c}_0| x-x'|+c\delta^{\frac{1}{2}}+c_p\frac{t_1^{\frac{p}{2}}}{\delta^{p}n^{\frac{p-1}{2}}}e^{2K_z^2t_1}). \end{split} \end{equation*} Recall that $\delta\in(0,1)$ is arbitrary. Thus, choosing $\delta>0$ sufficiently small and $n$ large enough, we have for $v^0:=v^{0,n}\in\mathcal{U}$ \begin{equation}\label{t2} \begin{array}{lll} &{\rm (i)}\ (\mathbb{E}[L_t^\gamma|X_t^{x,u}-X_t^{x',v^{0}}|^2])^{\frac{1}{2}}\leq | x-x'|+\varepsilon\frac{t_1}{(\overline{c}_0(T+2))^M},\ t\in[0,t_1];\\ &{\rm (ii)}\ \displaystyle\lambda \int_0^{t_1}e^{-\lambda s}\mathbb{E}[L_s^\gamma|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{0}},\overline{Z}_s^{\lambda,x,u},v_s^{0})|]ds\\ &\displaystyle\ \ \ \ \ \ \ \leq\lambda\int_0^{t_1}e^{-\lambda s}ds\cdot\overline{c}_0(|x-x'|+\frac{\varepsilon}{(\overline{c}_0(T+2))^M}), \end{array} \end{equation} for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e.\\ \noindent\textbf{Step 2.} We consider the interval $[0,t_2]$: Starting now from $(X_{t_1}^{x,u},X_{t_1}^{x',v^0})$ at time $t_1$, and with $u=u_{t_1}$ on $[t_1,t_2]$, we construct $v^1$.
We begin by setting\\ $v_t^{1,0}:=v_t^01_{[0,t_1)}(t)+\widehat{v}(X_{t_1}^{x,u},X_{t_1}^{x',v^0},u_t)1_{[t_1,t_2]}(t) =v_t^01_{[0,t_1)}(t)+\widehat{v}(X_{t_1}^{x,u},X_{t_1}^{x',v^0},u_{t_1})1_{[t_1,t_2]}(t), t\in[0,t_2]$. Similar to Step 1, we construct a sequence of control processes $(v^{1,n})_{n\geq0}$. Letting $n$ be large enough, there exists $v^1:=v^{1,n}$ such that, for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e., \begin{equation}\label{t3} (\mathbb{E}_{t_2}^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2\big|\mathcal{F}_{t_1}])^{\frac{1}{2}}=(\mathbb{E}[\frac{L_{t_2}^\gamma}{L_{t_1}^\gamma}|X_{t}^{x,u}-X_{t}^{x',v^1}|^2\big|\mathcal{F}_{t_1}])^{\frac{1}{2}}\leq|X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M}, \end{equation} for all $t\in[t_1,t_2]$, and \begin{equation*} \begin{split} & \lambda\int_{t_1}^{t_2} e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^1},\overline{Z}_s^{\lambda,x,u},v_s^1)|\big|\mathcal{F}_{t_1}]ds\\ \leq&\lambda\int_{t_1}^{t_2} e^{-\lambda s}ds\cdot\overline{c}_0(| X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|+\frac{\varepsilon}{(\overline{c}_0(T+2))^M}). \end{split} \end{equation*} Then, from (\ref{t2}) and (\ref{t3}), \begin{equation*} \begin{split} &(\mathbb{E}_{t_2}^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2])^{\frac{1}{2}}=(\mathbb{E}_{t_1}^\gamma[((\mathbb{E}_{t_2}^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2\big|\mathcal{F}_{t_1}])^\frac{1}{2})^2])^\frac{1}{2}\\ & \leq (\mathbb{E}_{t_1}^\gamma[(|X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M})^2])^\frac{1}{2}\\ & \leq (\mathbb{E}_{t_1}^\gamma[|X_{t_1}^{x,u}-X_{t_1}^{x',v^0}|^2])^\frac{1}{2}+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M}\\ & \leq (|x-x'|+\varepsilon\frac{t_1}{(\overline{c}_0(T+2))^M})+\varepsilon\frac{t_2-t_1}{(\overline{c}_0(T+2))^M}.
\end{split} \end{equation*} This combined once more with the result (\ref{t2}) of Step 1 yields \begin{equation*} \mathop{\rm sup}\limits_{|\gamma|\leq K_z}(\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',v^1}|^2])^{\frac{1}{2}}\leq |x-x'|+\varepsilon\frac{t_2}{(\overline{c}_0(T+2))^M},\ t\in[0,t_2]. \end{equation*} On the other hand, arguing similarly, using (\ref{t2}) and (\ref{t3}), we get \begin{equation*} \begin{split} & \lambda\int_{t_1}^{t_2} e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^1},\overline{Z}_s^{\lambda,x,u},v_s^1)|]ds\\ \leq&\lambda\int_{t_1}^{t_2} e^{-\lambda s}ds\cdot\overline{c}_0((\mathbb{E}_{t_1}^\gamma[| X_{t_1}^{x,u}-X_{t_1}^{x',v^1}|^2])^{\frac{1}{2}}+\frac{\varepsilon}{(\overline{c}_0(T+2))^M})\\ \leq& \lambda\int_{t_1}^{t_2} e^{-\lambda s}ds(\overline{c}_0| x-x'|+\frac{\overline{c}_0(t_1+1)\varepsilon}{(\overline{c}_0(T+2))^{M}}), \end{split} \end{equation*} which combined with the corresponding estimate (\ref{t2}) of Step 1 yields \begin{equation*} \begin{split} & \lambda\int_0^{t_2}e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^1},\overline{Z}_s^{\lambda,x,u},v_s^1)|]ds\\ \leq&\lambda\int_0^{t_2}e^{-\lambda s}ds(\overline{c}_0|x-x'|+\frac{\overline{c}_0(t_1+2)\varepsilon}{(\overline{c}_0(T+2))^M})\\ \leq&\lambda\int_0^{t_2}e^{-\lambda s}ds(\overline{c}_0|x-x'|+\frac{\varepsilon}{(\overline{c}_0(T+2))^{M-1}}), \end{split} \end{equation*} for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e.
Similarly, we make our construction on $[t_2,t_3]$, $[t_3,t_4]$, $\cdots$, $[t_{M-1},t_M]$, to finally get a process $v^{M-1}$ defined on $[0,T]$, such that \begin{equation*} \left\{ \begin{split} &\mbox{(i)}\ \mathop{\rm sup}\limits_{|\gamma|\leq K_z}(\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',v^{M-1}}|^2])^{\frac{1}{2}}\leq |x-x'|+\varepsilon\frac{T}{(\overline{c}_0(T+2))^M}\leq|x-x'|+\frac{\varepsilon}{2},\ t\in[0,T];\\ &\mbox{(ii)}\displaystyle\ \lambda\int_0^{T}e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^{M-1}},\overline{Z}_s^{\lambda,x,u},v_s^{M-1})|]ds\\ &\ \ \ \ \ \ \leq\displaystyle \lambda\int_0^{T}e^{-\lambda s}ds(\overline{c}_0|x-x'|+\frac{\varepsilon}{2}), \end{split} \right. \end{equation*} for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e. Here we have supposed without loss of generality that $\overline{c}_0\geq1$. Let now $T=1$ and $\widetilde{v}^1:=v^{M-1}$; we can make the same construction on $[1,2]$, starting with $(X_1^{x,u},X_1^{x',v^{M-1}},u_1)$, but now with precision $\frac{\varepsilon}{4}$. Thus, we get $v^2$ on $[1,2]$, $\widetilde{v}^2:=\widetilde{v}^11_{[0,1]}+v^21_{[1,2]}$. Similarly, by iteration, with precision $\frac{\varepsilon}{2^{j+1}}$ we make our construction on $[j,j+1]$, $j\geq2$.
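The bookkeeping of this iteration can be summarized as follows: by construction, for every $j\geq1$ and all $t\in[0,j+1]$,
\begin{equation*}
\mathop{\rm sup}\limits_{|\gamma|\leq K_z}\big(\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',\widetilde{v}^{j+1}}|^2]\big)^{\frac{1}{2}}\leq|x-x'|+\varepsilon\sum_{i=1}^{j+1}\frac{1}{2^i},
\end{equation*}
and the analogous estimate holds for the $\psi$-terms, so that, letting $j\rightarrow\infty$, the accumulated error is at most $\varepsilon\sum_{j\geq1}2^{-j}=\varepsilon$.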
In this way we obtain a control $v\in\mathcal{U}$ such that \begin{equation*} \begin{array}{lll} &\displaystyle {\rm (i)}\ (\mathbb{E}_t^\gamma[|X_t^{x,u}-X_t^{x',v}|^2])^{\frac{1}{2}}\leq |x-x'|+\varepsilon(\sum_{j=1}^\infty\frac{1}{2^j})=|x-x'|+\varepsilon,\ t\geq0,\\ &\displaystyle {\rm (ii)}\ \lambda\int_0^\infty e^{-\lambda s}\mathbb{E}_s^\gamma[|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v},\overline{Z}_s^{\lambda,x,u},v_s)|]ds\\ &\displaystyle \ \ \ \ \leq\lambda\int_0^\infty e^{-\lambda s}ds(\overline{c}_0|x-x'|+\varepsilon)=\overline{c}_0|x-x'|+\varepsilon, \end{array} \end{equation*} for all $\gamma\in L_{\mathbb{F}}^\infty(0,\infty;\mathbb{R}^d)$ with $|\gamma_s|\leq K_z$, dsd$\mathbb{P}$-a.e. \end{proof} \begin{lemma}\label{lem:2.6} We suppose that (\ref{r1}), (\ref{r48}) and (\ref{r49}) hold. Then the family of functions $\{\lambda V_\lambda\}_\lambda$ is equicontinuous and equibounded on $\overline{\theta}$. Indeed, for the constant $M>0$ defined in (\ref{r48}) and $\overline{c}_0>0$ defined in (\ref{r49}), it holds that, for all $\lambda>0$, and for all $x,x'\in\overline{\theta},$ \begin{equation*} \left\{ \begin{array}{ll} {\rm{(i)}}\ |\lambda V_\lambda(x)-\lambda V_\lambda(x')|\leq \overline{c}_0|x-x'|, \\ {\rm{(ii)}}\ |\lambda V_\lambda(x)|\leq M. \end{array} \right. \end{equation*} \end{lemma} \begin{proof} From Proposition \ref{th:2.4} we know that for all $t\geq0,\ \lambda>0$, $|\overline{Y}_t^{\lambda,x,u}|\leq\frac{M}{\lambda}$. Thus we have \begin{equation*} |\lambda V_\lambda(x)|\leq\lambda\mathop{\rm sup}\limits_{u\in\mathcal{U}}|\overline{Y}_0^{\lambda,x,u}|\leq M. \end{equation*} It remains to prove (i). Let $\lambda>0,\ x,\ x'\in\overline{\theta}$. For any $\varepsilon>0$, let $u\in\mathcal{U}$ be such that \begin{equation}\label{lg1} V_\lambda(x)\geq \overline{Y}_0^{\lambda,x,u}-\frac{\varepsilon}{\lambda}. \end{equation} Then, by Proposition \ref{p:2.1}, there is $v^\varepsilon\in\mathcal{U}$ such that (\ref{r50}) holds true.
Let us define $Y_s^\varepsilon=\overline{Y}_s^{\lambda,x,u}-\overline{Y}_s^{\lambda,x',v^\varepsilon}, Z_s^\varepsilon=\overline{Z}_s^{\lambda,x,u}-\overline{Z}_s^{\lambda,x',v^\varepsilon},$ and \begin{equation*} \gamma_s^\varepsilon=\frac{\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon)- \psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x',v^\varepsilon},v_s^\varepsilon)} {|\overline{Z}_s^{\lambda,x,u}-\overline{Z}_s^{\lambda,x',v^\varepsilon}|^2} \cdot(\overline{Z}_s^{\lambda,x,u}-\overline{Z}_s^{\lambda,x',v^\varepsilon})^*,\ \mbox{if}\ \overline{Z}_s^{\lambda,x,u}\neq\overline{Z}_s^{\lambda,x',v^\varepsilon}; \end{equation*} \noindent otherwise, $\gamma_s^\varepsilon=0,\ s\geq0$, where $(\overline{Y}^{\lambda,x,u},\overline{Z}^{\lambda,x,u})$ and $(\overline{Y}^{\lambda,x',v^\varepsilon},\overline{Z}^{\lambda,x',v^\varepsilon})$ are the solutions of BSDE (\ref{r40}) with the driving coefficient $\psi(X^{x,u},\cdot,u)$ and $\psi(X^{x',v^\varepsilon},\cdot,v^\varepsilon)$, respectively. We note that from (\ref{r48}) it follows that $|\gamma_s^\varepsilon|\leq K_z$. Putting \begin{equation*} L_s^\varepsilon=\exp\{\int_0^s\gamma^\varepsilon_rdW_r-\frac{1}{2}\int_0^s|\gamma^\varepsilon_r|^2dr\},\ s\geq0, \end{equation*} we define probability measures $\mathbb{P}_s^\varepsilon$ on $(\Omega,\mathcal{F})$ by setting \begin{equation*} \frac{d\mathbb{P}_s^\varepsilon}{d\mathbb{P}}=L_s^\varepsilon,\ s\geq0. \end{equation*} Then, it follows from Girsanov's theorem that \begin{equation*} W^\varepsilon_t=W_t-\int_0^t\gamma^\varepsilon_rdr,\ t\in[0,s], \end{equation*} is an $(\mathbb{F},\mathbb{P}_s^\varepsilon)$-Brownian motion.
Then, for all $0\leq t\leq T<\infty$, \begin{equation*} \begin{split} Y_t^\varepsilon=&Y_T^\varepsilon-\lambda\int_t^TY_s^\varepsilon ds+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x',v^\varepsilon},v_s^\varepsilon))ds -\int_t^TZ_s^\varepsilon dW_s\\ =& Y_T^\varepsilon-\lambda\int_t^TY_s^\varepsilon ds+\int_t^T(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon))ds -\int_t^TZ_s^\varepsilon dW^\varepsilon_s. \end{split} \end{equation*} By applying It\^{o}'s formula to $e^{-\lambda t}Y_t^\varepsilon$, and taking the conditional expectation $\mathbb{E}^\varepsilon_T[\cdot\big|\mathcal{F}_t]$ with respect to $\mathbb{P}^\varepsilon_T$, we obtain \begin{equation*} \begin{split} Y_t^\varepsilon=&\mathbb{E}_T^\varepsilon[\int_t^Te^{-\lambda (s-t)}\big(\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon)\big)ds\big|\mathcal{F}_t] +\mathbb{E}_T^\varepsilon[e^{-\lambda(T-t)}Y_T^\varepsilon\big|\mathcal{F}_t]. \end{split} \end{equation*} Let $t=0$. Since $|Y_T^\varepsilon|=|\overline{Y}_T^{\lambda,x,u}-\overline{Y}_T^{\lambda,x',v^\varepsilon}|\leq \frac{2M}{\lambda}$, it follows that \begin{equation*} \begin{split} |Y_0^\varepsilon|\leq& \frac{2M}{\lambda}e^{-\lambda T}+\int_0^Te^{-\lambda s}\mathbb{E}[L_s^\varepsilon|\psi(X_s^{x,u},\overline{Z}_s^{\lambda,x,u},u_s)-\psi(X_s^{x',v^\varepsilon},\overline{Z}_s^{\lambda,x,u},v_s^\varepsilon)|]ds\\ \leq& \frac{2M}{\lambda}e^{-\lambda T}+\frac{\overline{c}_0}{\lambda}(|x-x'|+\varepsilon),\ T\geq0. \end{split} \end{equation*} Here we have used that, by the choice of $v^\varepsilon$, (\ref{r50}) is satisfied. Now letting $T$ tend to infinity we get \begin{equation}\label{lg2} |\overline{Y}_0^{\lambda,x,u}-\overline{Y}_0^{\lambda,x',v^\varepsilon}|\leq \frac{\overline{c}_0}{\lambda}(|x-x'|+\varepsilon).
\end{equation} Finally, from the arbitrariness of $u\in\mathcal{U}$ and $\varepsilon>0$ it follows that \begin{equation*} |\lambda V_\lambda(x)-\lambda V_\lambda(x')|\leq \overline{c}_0|x-x'|. \end{equation*} \noindent Indeed, from (\ref{lg1}) and (\ref{lg2}) we have \begin{equation*} \lambda V_\lambda(x)-\lambda V_\lambda(x')\geq\lambda(\overline{Y}_0^{\lambda,x,u}-\overline{Y}_0^{\lambda,x',v^\varepsilon})-\varepsilon\geq-\overline{c}_0|x-x'|-(\overline{c}_0+1)\varepsilon, \end{equation*} and letting $\varepsilon\downarrow0$ yields $\lambda V_\lambda(x)-\lambda V_\lambda(x')\geq-\overline{c}_0|x-x'|$. The symmetry of the argument in $x$ and $x'$ gives the reverse inequality. The proof is complete. \end{proof} \section{ {\protect \large Hamilton-Jacobi-Bellman equations}} Before studying, in the next section, the HJB equations associated with the stochastic control problem (\ref{r94}), we begin in this section with a more general discussion, where we consider a Hamiltonian $H:\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N\to\mathbb{R}$ not necessarily related to a stochastic control problem. By $\mathcal{S}^N$ we denote the set of symmetric $N\times N$ matrices. Let $H:\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N\to\mathbb{R}$ be a uniformly continuous function satisfying the monotonicity assumption:\\ \noindent\textbf{($A_H)$} (i) $H(x,p,A)\leq H(x,p,B),\ \text{for\ all}\ (x,p)\in\mathbb{R}^N\times\mathbb{R}^N,\ A,\ B\in\mathcal{S}^N\ \mbox{with}\ B\leq A$ (i.e., $A-B$ is positive semidefinite).\\ We consider the PDE \begin{equation}\label{r3.1} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}. \end{equation} Let $V:\overline{\theta}\to\mathbb{R}$ be a bounded measurable function. We define \begin{equation*} V^*(x)=\varlimsup\limits_{y\to x}V(y),\ V_*(x)=\varliminf\limits_{y\to x}V(y),\ x\in\overline{\theta}.
\end{equation*} Then, $V^*:\overline{\theta}\to\mathbb{R}$ is upper semicontinuous (we write $V^*\in \mbox{USC}(\overline{\theta})$) and $V_*:\overline{\theta}\to\mathbb{R}$ is lower semicontinuous ($V_*\in \mbox{LSC}(\overline{\theta})$). \begin{definition}\label{def3.1} $V$ is a constrained viscosity solution of (\ref{r3.1}), if it solves \begin{equation*} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in\theta, \end{equation*} \begin{equation*} \lambda V(x)+H(x,DV(x),D^2V(x))\geq0,\ x\in\partial\theta, \end{equation*} in the viscosity sense, i.e., if\\ \rm{i)} \emph{$V$ is a viscosity subsolution of (\ref{r3.1}) on $\theta$, and}\\ \rm{ii)} \emph{$V$ is a viscosity supersolution of (\ref{r3.1}) on $\overline{\theta}$}. \end{definition} \begin{remark} Recall that\\ \rm{i)} \emph{$V$ is a viscosity subsolution of (\ref{r3.1}) on $\theta$, if for all $x\in\theta$ and all $\varphi\in C^2(\mathbb{R}^N)$ such that $V^*-\varphi$ achieves a local maximum on $\theta$ at $x$, it holds} \begin{equation*} \lambda V^*(x)+H(x,D\varphi(x),D^2\varphi(x))\leq0; \end{equation*} \rm{ii)} \emph{$V$ is a viscosity supersolution of (\ref{r3.1}) on $\overline{\theta}$, if for all $x\in\overline{\theta}$ and all $\varphi\in C^2(\mathbb{R}^N)$ such that $V_*-\varphi$ achieves a local minimum on $\overline{\theta}$ at $x$, it holds} \begin{equation*} \lambda V_*(x)+H(x,D\varphi(x),D^2\varphi(x))\geq0.
\end{equation*} \emph{We refer the reader to Crandall, Ishii and Lions \cite{ISHII 1992}.} \end{remark} Existence and comparison results for the viscosity solution of (\ref{r3.1}) have been established (see Theorems 2.1, 2.2 and 3.1 in Katsoulakis \cite{Katsoulakis 1994}) under the additional assumptions:\\ \noindent\textbf{($A_\theta)$} There exists a bounded, uniformly continuous function $m:\overline{\theta}\rightarrow \mathbb{R}^N$ with $|m|\leq 1$ and a\\ \indent\ constant $r>0$ such that\\ \centerline{$B(x+sm(x),rs)\subset \theta,\ \text{for\ all}\ x\in \overline{\theta},\ s\in(0,r].$} \indent\ Here $B(x,s)\subset\mathbb{R}^N$ denotes the open ball with center at $x$ and radius $s$.\\ \noindent\textbf{($A_H)$} (ii) There is a continuity modulus $\rho: \mathbb{R}_+\to\mathbb{R}_+$, $\rho(0+)=0$, such that\\ \indent\qquad\ $|H(x,p,A)-H(x,q,B)|\leq\rho(|p-q|+|A-B|),\ x,\ p,\ q\in\mathbb{R}^N,\ A,\ B\in\mathcal{S}^N,\ \mbox{and}$\\ \indent \qquad\ $|H(y,p,B)-H(x,p,A)|\leq\rho(\frac{1}{\varepsilon}|x-y|^2+|x-y|(|p|+1)),\ \mbox{for\ all}\ x,\ y,\ p\in\mathbb{R}^N,\ \varepsilon>0,$ \indent \qquad\ $A,\ B\in\mathcal{S}^N,$ such that $$-\frac{3}{\varepsilon}\begin{pmatrix} I&0\\ 0&I \end{pmatrix}\leq \begin{pmatrix} A&0\\ 0&B \end{pmatrix}\leq \frac{3}{\varepsilon}\begin{pmatrix} I&-I\\ -I&I \end{pmatrix},$$ \indent \qquad\ where $I$ denotes the unit matrix in $\mathbb{R}^{N\times N}$.\\ Under the above assumptions Katsoulakis \cite{Katsoulakis 1994} has proved the following results: \begin{theorem}\label{th1}(Comparison principle; Theorem 2.2 in \cite{Katsoulakis 1994}) Let $u\in USC(\overline{\theta})$ be a subsolution of (\ref{r3.1}) on $\theta$ and $v\in LSC(\overline{\theta})$ a supersolution of (\ref{r3.1}) on $\overline{\theta}$. Then $u\leq v$ on $\overline{\theta}$.
\end{theorem} \begin{remark} In Theorem 2.2 in \cite{Katsoulakis 1994} the condition on $u$ is formulated in a slightly weaker way: $u\in USC(\theta)$ is nontangentially upper semicontinuous on $\partial\theta$. \end{remark} \begin{theorem}\label{th2}(Existence; Theorem 3.1 in \cite{Katsoulakis 1994}) If, in addition to the above assumptions, there is a bounded supersolution $\widetilde{v}\in LSC(\overline{\theta})$ of (\ref{r3.1}), then (\ref{r3.1}) has a constrained viscosity solution $v\in LSC(\overline{\theta})$; it is given by the smallest supersolution of (\ref{r3.1}) in $LSC(\overline{\theta})$. \end{theorem} \begin{remark} Let us suppose that $H$ satisfies the radial monotonicity assumption, which is introduced in Theorem \ref{th:3.3}. Then \begin{equation}\label{r3.2} H(x,p,A)\geq H(x,0,0),\ (x,p,A)\in\overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N\ (\mbox{see\ Lemma \ref{l:3.4}}). \end{equation} Furthermore, let $K\in\mathbb{R}$ be such that $K\geq-H(x,0,0),\ x\in\overline{\theta}$. Then one checks easily that $\widetilde{v}(x)=\frac{K}{\lambda},\ x\in\overline{\theta}$, is a viscosity supersolution of (\ref{r3.1}) on $\overline{\theta}$.\\ Indeed, if $x\in\overline{\theta}$ and $\varphi\in C^2(\mathbb{R}^N)$ such that $\widetilde{v}-\varphi\geq\widetilde{v}(x)-\varphi(x)$ on $\overline{\theta}$, then clearly \begin{equation*} \lambda \widetilde{v}(x)+ H(x,D\varphi(x),D^2\varphi(x))\geq 0,\ \ x\in\overline{\theta}, \end{equation*} with the help of (\ref{r3.2}). \end{remark} Let us introduce the space $Lip_{M_0}(\overline{\theta})$ ($M_0>0$) as the space of all Lipschitz functions $u: \overline{\theta}\to\mathbb{R}$ with $|u(x)|\leq M_0$, $|u(x)-u(y)|\leq M_0|x-y|, x,y\in\overline{\theta},$ and let us suppose that, for $M_0>0$ large enough,\\ (H) PDE (\ref{r3.1}) has a solution $V_\lambda$ such that $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$, for all $\lambda>0$.
\begin{remark} Under the assumption of the existence of a bounded supersolution on $\overline{\theta}$, we have from Theorem \ref{th2} the existence of a viscosity solution $V_\lambda\in LSC(\overline{\theta})$. With (H) we suppose that $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$. The uniqueness of this solution with $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$ is guaranteed by the comparison principle (Theorem \ref{th1}). Later, in the discussion of the case where the Hamiltonian $H$ is associated with the stochastic control problem (\ref{r94}), we will see that for such Hamiltonians PDE (\ref{r3.1}) has a unique constrained viscosity solution $V_\lambda$ with $\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$, for all $\lambda>0$. \end{remark} In what follows we work with the hypothesis (H). Associated with our problem is the family of Hamiltonians \begin{equation*} H_\lambda(x,p,A):=\lambda H(x,\frac{1}{\lambda}p,\frac{1}{\lambda}A),\ (x,p,A)\in\overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N,\ \lambda>0, \end{equation*} where $H$ is supposed to satisfy ($A_H$). \begin{theorem}\label{th:3.3} We suppose that, in addition to \textbf{($A_\theta$)}, \textbf{($A_H$)} and (H), the Hamiltonian $H$ satisfies the radial monotonicity condition: \begin{equation}\label{r15} H(x,l p,l A)\geq H(x,p,A),\ \text{for\ all\ real}\ l\geq 1,\ (x,p,A)\in\overline{\theta}\times \mathbb{R}^N\times \mathcal{S}^N.\tag{H5} \end{equation} For all $\lambda >0$, let $V_\lambda$ be the constrained viscosity solution of PDE (\ref{r3.1}) such that $\lambda V_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta})$.
Then \\ \rm{(i)} \emph{$\lambda\rightarrow\lambda V_\lambda(x)$ is nondecreasing, for every $x\in\overline{\theta}$;}\\ \rm{(ii)} \emph{The limit $\lim_{\lambda\rightarrow0^+}\lambda V_\lambda(x)$ exists, for every $x\in\overline{\theta}$;}\\ \rm{(iii)} \emph{The convergence in (ii) is uniform on $\overline{\theta}$.} \end{theorem} \begin{proof} First, we know that for every $\lambda >0$, $w_\lambda(x):=\lambda V_\lambda(x)$ is a constrained viscosity solution of \begin{equation}\label{r16} \lambda w_\lambda(x)+H_\lambda(x,D w_\lambda(x),D^2w_\lambda(x))=0. \end{equation} For any $\lambda,\ \mu>0$, we have $\displaystyle \frac{\lambda}{\mu}H_\mu(x,\frac{\mu}{\lambda}p,\frac{\mu}{\lambda}A)=\frac{\lambda}{\mu}(\mu H(x,\frac{\mu}{\lambda}(\frac{1}{\mu}p),\frac{\mu}{\lambda}(\frac{1}{\mu}A)))=H_\lambda(x,p,A). $\\ Using the radial monotonicity condition (\ref{r15}) we have, for any $\mu>\lambda>0$, in viscosity sense, \begin{eqnarray*} \begin{array}{lll} &\displaystyle\lambda w_\mu(x)+H_\lambda(x,D w_\mu(x),D^2w_\mu(x)) = \mu\cdot\frac{\lambda}{\mu} w_\mu(x)+\frac{\lambda}{\mu}H_\mu(x,\frac{\mu}{\lambda}D w_\mu(x),\frac{\mu}{\lambda}D^2w_\mu(x)) \\ &\displaystyle =\frac{\lambda}{\mu}[\mu w_\mu(x)+\mu H(x,\frac{\mu}{\lambda}(\frac{1}{\mu}D w_\mu(x)),\frac{\mu}{\lambda}(\frac{1}{\mu}D^2w_\mu(x)))] \\ &\displaystyle \geq\frac{\lambda}{\mu}[\mu w_\mu(x)+\mu H(x,\frac{1}{\mu}D w_\mu(x),\frac{1}{\mu}D^2w_\mu(x))] \\ &\displaystyle = \frac{\lambda}{\mu}(\mu w_\mu(x)+H_\mu(x,Dw_\mu(x),D^2w_\mu(x)))\geq0,\ x\in\overline{\theta}. \end{array} \end{eqnarray*} Therefore, $w_\mu\in Lip_{M_0}(\overline{\theta})$ is a viscosity supersolution to (\ref{r16}) on $\overline{\theta}$. From the comparison principle (Theorem \ref{th1}), $w_\mu\geq w_\lambda$ on $\overline{\theta}$.
Statement (ii) follows from (i) and the boundedness of $\lambda V_\lambda$, $\lambda>0$, while thanks to the fact that $\lambda V_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta}),\ \lambda>0$, the Arzel\`{a}-Ascoli Theorem yields (iii). \end{proof} \begin{lemma}\label{l:3.4} Let $H(x,p,A)$ be convex in $(p,A)\in\mathbb{R}^N\times\mathcal{S}^N$. Then we have the following equivalence:\\ \rm{i)}\ \emph{The radial monotonicity (\ref{r15}) holds true for $H(x,\cdot,\cdot)$;}\\ \rm{ii)}\ $H(x,l'p,l'A)\geq H(x,lp,lA),\ 0\leq l\leq l',\ (p,A)\in\mathbb{R}^N\times\mathcal{S}^N$;\\ \rm{iii)}\ $H(x,p,A)\geq H(x,0,0),\ (p,A)\in\mathbb{R}^N\times\mathcal{S}^N$. \end{lemma} \begin{proof} Indeed, i) and ii) are equivalent: ii) trivially implies i), and, conversely, for $0<l\leq l'$ one applies i) with the factor $l'/l\geq1$ at the point $(lp,lA)$, while the case $l=0$ follows by the continuity of $H$. Moreover, ii) implies iii) (take $l'=1$, and $l=0$). Thus, it remains only to show that iii) implies ii). To this end, given any $(p,A)\in\mathbb{R}^N\times\mathcal{S}^N$, we consider the function $G(l):=H(x,lp,lA),\ l\geq0$. The convexity of $H(x,\cdot,\cdot)$ implies that of $G$, and iii) implies that $G(l)\geq G(0),\ l\geq0$. Consequently, for $l'\geq l\geq0$ and $\displaystyle k:=\frac{l}{l'}\in[0,1]$, \begin{equation*} H(x,lp,lA)=G(l'k)=G(l'k+(1-k)0)\leq kG(l')+(1-k)G(0)\leq kG(l')+(1-k)G(l')=H(x,l'p,l'A). \end{equation*} \end{proof} \begin{remark} We suppose that $H$ is of the form (\ref{r20}) and $(-\psi)(x,\cdot,u)=\{z\mapsto(-\psi)(x,z,u)\}$ is convex, for all $(x,u)\in\overline{\theta}\times U$. Then $H(x,p,A)$ is convex in $(p,A)$, for all $x\in\overline{\theta}$.
Under the additional assumption of the existence of some $u_0\in U$ such that $b(x,u_0)=0,\ \sigma(x,u_0)=0$ and $\psi(x,0,u)\geq\psi(x,0,u_0)$, for all $u\in U$, we have \begin{equation*} \begin{split} &H(x,p,A)=\mathop{\rm sup}\limits_{u\in U}\{\langle-p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,p\sigma(x,u),u)\}\\ &\geq\langle-p,b(x,u_0)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u_0)A)-\psi(x,p\sigma(x,u_0),u_0)\\ &=-\psi(x,0,u_0)=\mathop{\rm sup}\limits_{u\in U}\{-\psi(x,0,u)\}=H(x,0,0),\ \ (p,A)\in\mathbb{R}^N\times\mathcal{S}^N. \end{split} \end{equation*} Then Lemma \ref{l:3.4} yields that $H$ satisfies the radial monotonicity condition. However, for $H$ of the form (\ref{r20}), assuming only that $(-\psi)(x,z,u)$ is convex in $z$, we do not, in general, have the radial monotonicity. Indeed, for example, if, for some $\varepsilon>0$ and $x\in\overline{\theta}$, $\sigma\sigma^*(x,u)\geq\varepsilon I, u\in U$, then \begin{equation*} \begin{split} & H(x,0,A)=\mathop{\rm sup}\limits_{u\in U}\{-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,0,u)\}\\ &\leq-\frac{1}{2}\varepsilon tr(A)+\mathop{\rm sup}\limits_{u\in U}\{-\psi(x,0,u)\}=-\frac{1}{2}\varepsilon tr(A)+H(x,0,0)\\ & < H(x,0,0), \ \mbox{for\ all}\ A\in\mathcal{S}^N \ \mbox{with}\ \ A\geq0\ \mbox{and}\ tr(A)>0. \end{split} \end{equation*} \end{remark} Under the assumptions of Theorem \ref{th:3.3} we let $w_0(x)=\lim\limits_{\lambda\rightarrow0^+}\lambda V_\lambda(x)$, $x\in\overline{\theta}$. Next we will characterize $w_0(x)$ under the condition of radial monotonicity of $H$ as the maximal viscosity subsolution of the PDE \begin{equation}\label{r3.4} W(x)+\overline{H}(x,DW(x),D^2W(x))=0,\ x\in\theta, \end{equation} where $\overline{H}(x,p,A):=\min\{M_0,\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)\}$.
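Before stating the characterization precisely, let us illustrate $\overline{H}$ on a hypothetical example of ours (it is not taken from the references): the first-order Hamiltonian $H(x,p,A)=|p|-f(x)$, with $f$ Lipschitz on $\overline{\theta}$ and $|f|\leq M_0$. It is convex in $(p,A)$ and satisfies the radial monotonicity condition (\ref{r15}), since $H(x,lp,lA)=l|p|-f(x)\geq H(x,p,A)$, for $l\geq1$. Moreover,
\begin{equation*}
\overline{H}(x,p,A)=\min\Big\{M_0,\mathop{\rm sup}\limits_{l>0}\{l|p|-f(x)\}\Big\}=\left\{
\begin{array}{ll}
M_0, & p\neq0,\\
-f(x), & p=0.
\end{array}
\right.
\end{equation*}
A constant function $W\equiv c$ is a viscosity subsolution of (\ref{r3.4}) if and only if $c\leq f(x)$, for all $x\in\theta$ (every test function touching $W$ from above at $x\in\theta$ has $D\varphi(x)=0$), so that the largest constant subsolution is $c=\min\limits_{\overline{\theta}}f$; for this fully controllable example one can check that the limit $w_0$ is indeed the constant $\min\limits_{\overline{\theta}}f$, in agreement with Corollary \ref{c2} below.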
\begin{remark} As $H:\overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N\to\mathbb{R}$ is continuous and, thanks to the radial monotonicity (\ref{r15}), $\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=\lim\limits_{l\to\infty}\uparrow H(x,lp,lA)$, the function $\overline{H}$ is lower semicontinuous. Recall that a function $W\in USC(\overline{\theta})$ is a viscosity subsolution of (\ref{r3.4}) on $\theta$, if for all $x\in\theta$, $\varphi\in C^2(\mathbb{R}^N)$ such that $W-\varphi\leq W(x)-\varphi(x)$ on $\theta$, $$W(x)+\overline{H}(x,D\varphi(x),D^2\varphi(x))\leq0.$$ \end{remark} \begin{theorem}\label{the:3.4} We make the same assumptions as in Theorem \ref{th:3.3}. For all $\lambda>0$, let $V_\lambda$ be the unique constrained viscosity solution of the PDE \begin{equation}\label{r99} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in \overline{\theta}, \end{equation} such that $\lambda V_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta})$, for some $M_0> 0$ large enough and independent of $\lambda$. Then, $w_0(x):=\lim\limits_{\lambda\to0^+}\lambda V_\lambda(x)$, $x\in\overline{\theta}$, is the maximal viscosity subsolution of (\ref{r3.4}), $$w_0(x)=\mathop{\rm sup}\{w(x):w\in \mbox{Lip}_{M_0}(\overline{\theta}), w+\overline{H}(x,Dw,D^2w)\leq 0 \ \mbox{on} \ \theta\ (\mbox{in\ viscosity\ sense})\},\ x\in\overline{\theta},$$ where $\overline{H}(x,p,A):=\min \Big\{M_0, \mathop{\rm sup}\limits_{l>0}H(x,l p,l A)\Big\}$. \end{theorem} \begin{proof} We define the set $$\mathcal{S}_{H,M_0}=\{w: w\in \mbox{Lip}_{M_0}(\overline{\theta}),\ w+\overline{{H}}(x,Dw,D^2w)\leq 0 \ \text{on} \ \theta\ (\mbox{in\ viscosity\ sense})\},$$ and we set $\bar{w}(x)=\mathop{\rm sup}\{w(x): w\in\mathcal{S}_{H,M_0}\}.$\\ \textbf{Step 1}. We show that $w_0$ is a viscosity subsolution of (\ref{r3.4}), which implies that $w_0\in\mathcal{S}_{H,M_0}$ and, thus, $w_0\leq \bar{w}$.\\ \textbf{Step 1.1}.
We first prove that $w_\lambda=\lambda V_\lambda\in Lip_{M_0}(\overline{\theta})$ is also a constrained viscosity solution of the equation \begin{equation}\label{rl7} w(x)+H^{M_0}(x,\frac{1}{\lambda}Dw(x),\frac{1}{\lambda}D^2w(x))=0,\ x\in\overline{\theta}, \end{equation} where $H^{M_0}(x,p,A):=\min\{M_0, H (x,p,A)\},\ \text{for\ all}\ (x,p,A)\in \overline{\theta}\times \mathbb{R}^N\times \mathcal{S}^N.$ In fact, let $x\in\theta$ and $\phi\in C^2(\mathbb{R}^N)$ be such that $(w_\lambda-\phi)(x)=\max\{(w_\lambda-\phi)(\overline{x}),\ \overline{x}\in\overline{\theta}\}$. Then, as $V_\lambda$ is a constrained viscosity solution of (\ref{r99}) and $w_\lambda=\lambda V_\lambda$, we have $$w_\lambda(x)+H(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq 0.$$ Furthermore, from $w_\lambda\in \mbox{Lip}_{M_0}(\overline{\theta})$ we get $$H(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq -w_\lambda(x)\leq M_0.$$ It follows that $$w_\lambda(x)+H^{M_0}(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))=w_\lambda(x)+H(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq 0,$$ i.e., $w_\lambda$ is a viscosity subsolution of (\ref{rl7}) in $\theta$. Next we show that $w_\lambda$ is also a supersolution on $\overline{\theta}$. Let $x\in\overline{\theta}$ and $\varphi\in C^2(\mathbb{R}^N)$ be such that $(w_\lambda-\varphi)(x)=\min\{(w_\lambda-\varphi)(\overline{x}), \overline{x}\in\overline{\theta}\}$. Obviously, from (\ref{r99}) and the fact that $w_\lambda\in\mbox{Lip}_{M_0}(\overline{\theta})$ we have both of the following inequalities, \begin{equation*} \left\{ \begin{array}{lll} w_\lambda(x)+H(x,\frac{1}{\lambda}D\varphi(x),\frac{1}{\lambda}D^2\varphi(x))\geq 0,\\ w_\lambda(x)+M_0\geq 0. \end{array} \right. \end{equation*} Thus, $w_\lambda(x)+H^{M_0}(x,\frac{1}{\lambda}D\varphi(x),\frac{1}{\lambda}D^2\varphi(x))\geq 0$, which shows that $w_\lambda$ is a supersolution of (\ref{rl7}) on $\overline{\theta}$.\\ \textbf{Step 1.2}.
Now we show $w_0\in\mathcal{S}_{H,M_0}$, i.e., $w_0\in\mbox{Lip}_{M_0}(\overline{\theta})$ and \begin{equation}\label{r18} w_0+\overline{H}(x,Dw_0,D^2w_0)\leq 0\ \mbox{in} \ \theta, \end{equation} in viscosity sense. Indeed, let us fix $l>0$. Then (\ref{r15}) and (\ref{rl7}) yield, for any $0<\lambda\leq\frac{1}{l}$, $$w_\lambda+H^{M_0}(x,l Dw_\lambda,l D^2w_\lambda)\leq w_\lambda+H^{M_0}(x,\frac{1}{\lambda}Dw_\lambda,\frac{1}{\lambda}D^2w_\lambda)=0 \ \ \mbox{in} \ \theta,$$ in viscosity sense. Recall that $w_\lambda\in\mbox{Lip}_{M_0}(\overline{\theta}), \lambda>0$, and that $w_0$ is the uniform limit of $w_\lambda$, as $\lambda\downarrow0$. Consequently, $w_0\in Lip_{M_0}(\overline{\theta})$. Moreover, by the result that the uniform limit of subsolutions is a subsolution again, we conclude that, in viscosity sense, $$w_0+H^{M_0}(x,l Dw_0,l D^2w_0)\leq 0, \ l>0.$$ Finally, taking the supremum with respect to $l>0$ over the increasing left-hand side, it follows that (\ref{r18}) holds. Consequently, $w_0\in\mathcal{S}_{H,M_0}$ and thus $w_0\leq \bar{w}$.\\ \textbf{Step 2}. Notice that also $\bar{w}\in\mbox{Lip}_{M_0}(\overline{\theta})$. Thus, in order to prove that $w_0\geq \bar{w}$, we need to check that \begin{equation}\label{r19} \bar{w}+\overline{H}(x,D\bar{w},D^2\bar{w})\leq 0 \ \mbox{in} \ \theta. \end{equation} The above property of the upper envelope of a bounded family of subsolutions is well known when $H$ is continuous and can be extended to $\overline{H}$. Let $x\in\theta$ and $\phi\in C^2(\mathbb{R}^N)$ be such that $(\bar{w}-\phi)(x)=\max\{(\bar{w}-\phi)(\overline{x}), \overline{x}\in\overline{\theta}\}$. By adding a constant to $\phi$ one can assume that $\bar{w}(x)=\phi(x)$. For $\varepsilon>0$ we put $\phi_\varepsilon(y)=\phi(y)+\varepsilon|x-y|^2$, $y\in\mathbb{R}^N$. Then, $$(\bar{w}-\phi_\varepsilon)(y)\leq -\varepsilon|y-x|^2,$$ for every $y$ in $\overline{\theta}$, and, hence, also in some closed ball $B_r(x)\subseteq\theta$. 
Thus, by the very definition of $\bar{w}$, there exists a sequence $\{w^n\}_n\subseteq \mathcal{S}_{H,M_0}$ such that $w^n(x)\geq \bar{w}(x)-\frac{1}{n}$ for all $n\geq 1$. Let $x_n^\varepsilon$ be a maximum point of $w^n-\phi_\varepsilon$ over $B_r(x)$. Then we have that $$-\frac{1}{n}\leq (w^n-\phi_\varepsilon)(x)\leq (w^n-\phi_\varepsilon)(x_n^\varepsilon)\leq -\varepsilon|x_n^\varepsilon-x|^2.$$ Consequently, $x_n^\varepsilon\rightarrow x$ and $(w^n-\phi_\varepsilon)(x_n^\varepsilon)\rightarrow 0$, as $n\rightarrow\infty$. Therefore, $w^n(x_n^\varepsilon)\rightarrow \bar{w}(x)$, as $n\rightarrow\infty$. Moreover, for all $l >0$ and $n$ large enough, we have \begin{equation}\label{th3.4.5} w^n(x_n^\varepsilon)+H^{M_0}(x_n^\varepsilon,l D\phi_\varepsilon(x_n^\varepsilon),l D^2\phi_\varepsilon(x_n^\varepsilon))\leq w^n(x_n^\varepsilon)+\overline{H}(x_n^\varepsilon,D\phi_\varepsilon(x_n^\varepsilon),D^2\phi_\varepsilon(x_n^\varepsilon))\leq 0. \end{equation} On the other hand, \begin{equation*} \overline{H}(x,p,A)=\lim\limits_{l\to\infty}\uparrow H^{M_0}(x,lp,lA),\ (x,p,A)\in \overline{\theta}\times\mathbb{R}^N\times\mathcal{S}^N. \end{equation*} Passing in (\ref{th3.4.5}) to the limit as $n\to \infty$ yields $\bar{w}(x)+H^{M_0}(x,l D\phi_\varepsilon(x),l D^2\phi_\varepsilon(x))\leq 0$, which in turn implies $$\bar{w}(x)+\overline{H}(x,D\phi(x),D^2\phi(x))\leq 0,$$ by first letting $\varepsilon\to 0$ (note that $D\phi_\varepsilon(x)=D\phi(x)$ and $D^2\phi_\varepsilon(x)=D^2\phi(x)+2\varepsilon I$) and then taking the supremum over $l>0$, i.e., we have shown (\ref{r19}). Now, with the same $\phi$ and the same $x\in\theta$ as above, from (\ref{r19}) it follows that, for any $\lambda>0$, $$\bar{w}(x)+H^{M_0}(x,\frac{1}{\lambda}D\phi(x),\frac{1}{\lambda}D^2\phi(x))\leq \bar{w}(x)+\overline{H}(x,D\phi(x),D^2\phi(x))\leq 0,$$ i.e., $\bar{w}$ is a (continuous) viscosity subsolution of (\ref{rl7}) in $\theta$.
Since $w_\lambda$ is a continuous constrained viscosity solution of (\ref{rl7}), from Theorem \ref{th1} (comparison principle) we have that $$\bar{w}(x)\leq w_\lambda(x), \ \mbox{for\ all}\ x\in\overline{\theta}.$$ Taking the limit as $\lambda\rightarrow 0^+$ yields $w_0\geq \bar{w}$ and completes the proof.\\ \end{proof} We now give an application of the above Theorem \ref{the:3.4}, which generalizes the results in \cite{Marc 2015}. For this, recall that the second order superjet at $x\in\overline{\theta}$ for a function $u\in USC(\overline{\theta})$ is defined by \begin{equation*} J^{2,+}u(x)=\{(D\varphi(x),D^2\varphi(x)): \varphi\in C^2(\mathbb{R}^N), u-\varphi\leq u(x)-\varphi(x)\ \mbox{on}\ \overline{\theta}\}, \end{equation*} while, for $v\in LSC(\overline{\theta})$, \begin{equation*} J^{2,-}v(x)=\{-(p,A)|(p,A)\in J^{2,+}(-v)(x)\} \end{equation*} defines the second order subjet. \begin{corollary}\label{c1} Under the same assumptions as in Theorem \ref{the:3.4}, we have, for all $x\in\theta$, \begin{equation*} \{(p,A)\in J^{2,+}w_0(x): \mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty\}=\emptyset. \end{equation*} \end{corollary} \begin{proof} Assume that, for some $x\in\theta$, $(p,A)\in J^{2,+}w_0(x)$ and \begin{equation}\label{b1} \mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty. \end{equation} From (\ref{b1}), $\overline{H}(x,p,A)=M_0$. Furthermore, from (\ref{r18}), in viscosity sense, \begin{equation*} w_0(x)+\overline{H}(x,Dw_0(x),D^2w_0(x))\leq0,\ x\in\theta. \end{equation*} Consequently, since $w_0\in Lip_{M_0}(\overline{\theta})$ we get $0\geq w_0(x)+\overline{H}(x,p,A)=w_0(x)+M_0$. Therefore, $w_0(x)\leq-M_0$. But, on the other hand $w_0\in Lip_{M_0}(\overline{\theta})$ implies $|w_0(x)|\leq M_0$. Hence, $w_0(x)=-M_0$. As for all $M\geq M_0$, $w_0\in\mathcal{S}_{H,M}$ and $w_0\in Lip_{M}(\overline{\theta})$, the same argument also gives $w_0(x)=-M$.
This is a contradiction, and it follows that there cannot exist $(p,A)\in J^{2,+}w_0(x)$ with $\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty$. \end{proof} \begin{corollary}\label{c2} Under the same assumptions as in Theorem \ref{the:3.4}, we suppose that, for all $x\in\theta, (p,A)\in(\mathbb{R}^N\backslash\{0\})\times\mathcal{S}^N$, $\mathop{\rm sup}\limits_{l>0}H(x,lp,lA)=+\infty$. Then, $w_0$ is a constant on $\overline{\theta}$. \end{corollary} \begin{proof} Let $x_0\in \theta$ and $\varphi_\varepsilon\in C^2(\mathbb{R}^N)$, $\varepsilon>0$, be such that i) $\varphi_\varepsilon(x)=\left\{ \begin{array}{lll} \frac{\varepsilon}{2}|x-x_0|^2,\ x\in\theta\ \mbox{with\ dist}(x,\partial\theta)\geq\frac{1}{2} \mbox{dist}(x_0,\partial\theta)\ (>0),\\ \geq3M_0,\ x\in \theta^C, \end{array} \right.$ ii) $D{\varphi_\varepsilon}(x)\neq0,\ x\in\mathbb{R}^N\setminus\{x_0\},$ iii) $\varphi_\varepsilon(x)\to0\ (\varepsilon\downarrow0),$ for all $x\in\theta$. \noindent For $\psi_\varepsilon(x)=w_0(x)-\varphi_\varepsilon(x)$, $x\in\overline{\theta}$, let $x_\varepsilon\in\overline{\theta}$ be a maximum point of $\psi_\varepsilon$. Since, for all $x'\in\partial\theta$, $\psi_\varepsilon(x')\leq w_0(x')-3M_0\leq-2M_0$ (recall that $w_0\in \mbox{Lip}_{M_0}(\overline{\theta})$), while $\psi_\varepsilon(x_\varepsilon)\geq\psi_\varepsilon(x_0)=w_0(x_0)\geq-M_0$, it follows that $x_\varepsilon\in\theta$. Since $(D\varphi_\varepsilon(x_\varepsilon),D^2\varphi_\varepsilon(x_\varepsilon))\in J^{2,+}w_0(x_\varepsilon)$, we get from Corollary \ref{c1} that $\mathop{\rm sup}\limits_{l>0}H(x_\varepsilon,lD\varphi_\varepsilon(x_\varepsilon),lD^2\varphi_\varepsilon(x_\varepsilon))<+\infty$. Hence, from the assumptions of Corollary \ref{c2}, $D\varphi_\varepsilon(x_\varepsilon)=0$, i.e., $x_\varepsilon=x_0,\ \varepsilon>0$.
Consequently, $$w_0(x_0)=\psi_\varepsilon(x_0)\geq\psi_\varepsilon(x)=w_0(x)-\varphi_\varepsilon(x)\to w_0(x),\ \mbox{as}\ \varepsilon\to 0,\ \mbox{for\ all}\ x\in \theta.$$ Then, from the arbitrariness of $x_0\in\theta$ it follows that $w_0(x)=w_0(x_0)$, for all $x\in\theta$, and from the continuity of $w_0$ on $\overline{\theta}$ we, finally, have $w_0(x)=w_0(x_0), x\in\overline{\theta}$. \end{proof} \section{ {\protect \large Convergence problem for the optimal control}} After a more general discussion in the previous section, in this part we consider the Hamiltonian $H$ of the form \begin{equation}\label{r20} H(x,p,A):=\mathop{\rm sup}\limits_{u\in U}\{{\langle -p,b(x,u)\rangle}-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\psi(x,p\sigma(x,u),u)\}, \ (x,p,A)\in\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N. \end{equation} \begin{proposition}\label{th:3.3.1} Under the assumptions (\ref{r1}), (\ref{r48}) and (\ref{r49}) the value function $V_\lambda$ defined by (\ref{r94}) is a viscosity solution of the Hamilton-Jacobi-Bellman equation \begin{equation}\label{r8} \lambda V(x)+H(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}, \end{equation} where $H(x,p,A)$ is defined by (\ref{r20}). \end{proposition} \begin{remark} Proposition \ref{th:3.3.1} shows that $V_\lambda$ defined by (\ref{r94}) is in $Lip_{\frac{M_0}{\lambda}}(\overline{\theta})$ and it is a viscosity solution on $\overline{\theta}$ of (\ref{r8}), i.e., a super- but also a subsolution on $\overline{\theta}$. Thus, $V_\lambda$ is, in particular, also a constrained viscosity solution of (\ref{r8}); but, unlike a general constrained viscosity solution, $V_\lambda$ also satisfies, in viscosity sense, \begin{equation*} \lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))\leq0,\ x\in\partial\theta, \end{equation*} i.e., for all $x\in\partial\theta$, $\varphi\in C^2(\mathbb{R}^N)$ with $V_\lambda-\varphi\leq V_\lambda(x)-\varphi(x)$ on $\overline{\theta}$, it holds \begin{equation*} \lambda V_\lambda(x)+H(x,D\varphi(x),D^2\varphi(x))\leq0.
\end{equation*} \end{remark} The proof of Proposition \ref{th:3.3.1} uses Peng's BSDE method developed in \cite{Peng 1997}. To prove the proposition we need the dynamic programming principle (DPP) and the following three lemmas, based on the notion of stochastic backward semigroups introduced by Peng \cite{Peng 1997}. Given the initial value $x$ at time $t=0$ of SDE (\ref{r2}), we define a stochastic backward semigroup as follows: for given $\lambda>0,\ x\in\overline{\theta},\ u\in\mathcal{U}$ and $t\in\mathbb{R}_+$, we put \begin{equation*} G_{s,t}^{\lambda,x,u}[\eta]:=Y_s^\eta,\ s\in[0,t],\ \eta\in L^2(\Omega,\mathcal{F}_t,\mathbb{P}), \end{equation*} where $(Y_s^\eta)_{s\in[0,t]}$ is the unique solution of the BSDE \begin{equation*} \left\{ \begin{array}{ll} dY_s^\eta=-(\psi(X_s^{x,u},Z_s^\eta,u_s)-\lambda Y_s^\eta)ds+Z_s^\eta dW_s,\ s\in[0,t],\\ Y_t^\eta=\eta. \end{array} \right. \end{equation*} \begin{proposition}\label{dpp}(DPP) Under the assumptions (\ref{r1}), (\ref{r48}) and (\ref{r49}), for all $\lambda>0,\ x\in\overline{\theta}$ and $t\geq0$, it holds \begin{equation*} V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]. \end{equation*} \end{proposition} The proof of the DPP uses arguments which are by now rather standard for control problems with a finite time horizon (see, e.g., \cite{Buckdahn 2008}). But here the time horizon is infinite, and for convenience we give the proof in the Appendix. Let us now give three auxiliary lemmas. For this, given a test function $\varphi\in C^3(\mathbb{R}^N)$, we define, for all $(x,y,z,u)\in\mathbb{R}^N\times\mathbb{R}\times\mathbb{R}^d\times U,$ \begin{equation*} \Phi(x,y,z,u):=\langle D\varphi(x),b(x,u)\rangle+\frac{1}{2}tr(\sigma\sigma^*(x,u)D^2\varphi(x))+\psi(x,z+D\varphi(x)\sigma(x,u),u)-\lambda\cdot(y+\varphi(x)).
\end{equation*} Let $(Y_s^{1,u},Z_s^{1,u})_{s\in[0,t]}\in\mathcal{S}_{\mathbb{F}}^2([0,t];\mathbb{R})\times\mathcal{H}^2_{\mathbb{F}}([0,t];\mathbb{R}^d)\footnotemark[1]$ \footnotetext[1]{$\mathcal{S}_{\mathbb{F}}^2([0,t];\mathbb{R}):=\{\phi=(\phi_{s})_{s\in[0,t]}:(\phi_{s\wedge t})_{s\geq0}\in\mathcal{S}_{\mathbb{F}}^2(\mathbb{R})\}$, $\mathcal{H}^2_{\mathbb{F}}([0,t];\mathbb{R}^d):=\{\varphi=(\varphi_s)_{s\in[0,t]}:(\varphi_s1_{[0,t]}(s))_{s\geq0}\in\mathcal{H}^2_{\mathbb{F}}(\mathbb{R}^d)\}.$} be the unique solution of the BSDE \begin{equation}\label{r95} \left\{ \begin{array}{ll} dY_s^{1,u}=-\Phi(X_s^{x,u},Y_s^{1,u},Z_s^{1,u},u_s)ds+Z_s^{1,u}dW_s,\ s\in[0,t],\\ Y_t^{1,u}=0. \end{array} \right. \end{equation} As $\Phi(x,\cdot,\cdot,u)$ is Lipschitz, uniformly in $(x,u)$, and $\Phi(x,0,0,u)$ is bounded on $\overline{\theta}\times U$, the existence and the uniqueness are by now standard. \begin{lemma}\label{lem1} $Y_s^{1,u}=G_{s,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]-\varphi(X_s^{x,u}),\ s\in[0,t].$ \end{lemma} \begin{proof} Notice that $G_{s,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]$ is defined by the solution of the following BSDE \begin{equation*} \left\{ \begin{array}{ll} dY_s^{\varphi}=-( \psi(X_s^{x,u},Z_s^{\varphi},u_s)-\lambda Y_s^\varphi)ds+Z_s^{\varphi}dW_s,\ s\in[0,t],\\ Y_t^{\varphi}=\varphi(X_t^{x,u}), \end{array} \right. \end{equation*} that is \begin{equation*} G_{s,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]=Y_s^{\varphi},\ s\in[0,t]. \end{equation*} We only need to prove that $Y_s^{\varphi}-\varphi(X_s^{x,u})= Y_s^{1,u}$, $s\in[0,t]$. Applying It\^{o}'s formula to $\varphi(X_s^{x,u})$, it is obvious that $d(Y_s^{\varphi}-\varphi(X_s^{x,u}))= dY_s^{1,u}$. As $Y_t^{\varphi}-\varphi(X_t^{x,u})=0=Y_t^{1,u}$, it follows that $Y_s^{\varphi}-\varphi(X_s^{x,u})= Y_s^{1,u}$, $s\in[0,t]$. 
\end{proof} Now we consider BSDE (\ref{r95}) in which $X_s^{x,u}$ is replaced by its initial condition $X_0^{x,u}=x$: \begin{equation}\label{r96} \left\{ \begin{array}{ll} dY_s^{2,u}=-\Phi(x,Y_s^{2,u},Z_s^{2,u},u_s)ds+Z_s^{2,u}dW_s,\ s\in[0,t],\\ Y_t^{2,u}=0. \end{array} \right. \end{equation} It has a unique solution $(Y^{2,u},Z^{2,u})\in\mathcal{S}_{\mathbb{F}}^2([0,t];\mathbb{R})\times\mathcal{H}_{\mathbb{F}}^2([0,t];\mathbb{R}^d)$. \begin{lemma}\label{lem2} We have $|Y_0^{1,u}-Y_0^{2,u}|\leq ct^{\frac{3}{2}},\ \mbox{for all}\ t\in [0, T],\ u\in\mathcal{U}$, where $c\in\mathbb{R}_+$ is independent of $u\in\mathcal{U}$ and depends only on $T>0$. \end{lemma} \begin{proof} As $\overline{\theta}\subset\mathbb{R}^N$ is compact, $\varphi, D\varphi$ and $D^2\varphi$ are bounded and Lipschitz on $\overline{\theta}$. Combined with the boundedness and the Lipschitz property of $b(\cdot,u), \sigma(\cdot,u)$, which are uniform with respect to $u\in U$, this has the consequence that $\overline{\theta}\ni x\rightarrow\Phi(x,y,z,u)$ is Lipschitz, uniformly in $(y,z,u)$. Then, using standard BSDE and SDE estimates, we get \begin{equation*} \begin{array}{lll} &\displaystyle|Y_0^{1,u}-Y_0^{2,u}|^2\leq\mathbb{E}[|Y_0^{1,u}-Y_0^{2,u}|^2+\int_0^t|Z_s^{1,u}-Z_s^{2,u}|^2ds]\\ \leq&\displaystyle c\mathbb{E}[(\int_0^t|\Phi(X_s^{x,u},Y_s^{2,u},Z_s^{2,u},u_s)-\Phi(x,Y_s^{2,u},Z_s^{2,u},u_s)|ds)^2]\\ \leq&\displaystyle ct^2\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|X_s^{x,u}-x|^2]\leq ct^3.\\ \end{array} \end{equation*} \end{proof} We now define $\overline{\Phi}(x,y,z):=\inf\limits_{u\in U}\Phi(x,y,z,u),\ (x,y,z)\in\overline{\theta}\times\mathbb{R}\times\mathbb{R}^d.$ Note that $\overline{\Phi}(x,y,z)=\overline{\Phi}(x,0,z)-\lambda y$ and that $(x,y,z)\rightarrow\overline{\Phi}(x,y,z)$ is Lipschitz. We consider the following ODE \begin{equation}\label{r97} \left\{ \begin{array}{ll} dY_s^0=-\overline{\Phi}(x,Y_s^0,0)ds,\ s\in[0,t],\\ Y_t^0=0. \end{array} \right.
\end{equation} \begin{remark} As $\overline{\Phi}(x,y,z)=\overline{\Phi}(x,0,z)-\lambda y$, the unique solution of (\ref{r97}) is given by \begin{equation*} Y_s^0=\int_s^te^{-\lambda(r-s)}dr\cdot\overline{\Phi}(x,0,0),\ s\in[0,t]. \end{equation*} \end{remark} \begin{lemma}\label{lem3} $Y_s^0=\mathop{\rm essinf}\limits_{u\in\mathcal{U}}Y_s^{2,u}, s\in[0,t]$, i.e., in particular, $Y_0^0=\inf\limits_{u\in\mathcal{U}}Y_0^{2,u}$. \end{lemma} \begin{proof} From the comparison theorem for BSDEs we obtain easily that $Y_s^0\leq Y_s^{2,u},\ s\in[0,t]$, for all $u\in\mathcal{U}$. On the other hand, as $U$ is compact and $\Phi(x,0,0,\cdot)$ continuous on $U$, there is $u^*\in U$ such that $\overline{\Phi}(x,0,0)=\Phi(x,0,0,u^*)$. Then, for $u=(u_s)_{s\geq0}\in\mathcal{U}$ defined by $u_s=u^*,\ s\geq0$, $(Y^0,Z^0)=(Y^0,0)$ solves the BSDE \begin{equation*} dY_s^{2,u}=-\Phi(x,Y_s^{2,u},Z_s^{2,u},u_s)ds+Z_s^{2,u}dW_s,\ s\in[0,t],\ Y_t^{2,u}=0, \end{equation*} and from the uniqueness of its solution we get $Y_s^{2,u}=Y_s^0,\ s\in[0,t]$. The proof is complete. \end{proof} Now we are able to give the proof of Proposition \ref{th:3.3.1}. \begin{proof}\textbf{(of Proposition \ref{th:3.3.1}.)} From Lemma \ref{lem:2.6} we know that $V_\lambda\in C(\overline{\theta})$. Let $x\in \overline{\theta}$ and $\varphi\in C^3(\mathbb{R}^N)$ be such that $0=(V_\lambda-\varphi)(x)\geq V_\lambda-\varphi$ on $\overline{\theta}$. Then, for all $u\in\mathcal{U}$ and $t>0,$ the DPP and the monotonicity of $G_{0,t}^{\lambda,x,u}[\cdot]$ yield: \begin{equation*} \varphi(x)=V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]\leq\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[\varphi(X_t^{x,u})].
\end{equation*} Thus, due to Lemma \ref{lem1}, \begin{equation*} \inf\limits_{u\in\mathcal{U}}Y_0^{1,u}= \inf\limits_{u\in\mathcal{U}}(G_{0,t}^{\lambda,x,u}[\varphi(X_t^{x,u})]-\varphi(x))\geq0, \end{equation*} and Lemma \ref{lem2} implies together with Lemma \ref{lem3}, \begin{equation*} Y_0^0=\inf\limits_{u\in\mathcal{U}}Y_0^{2,u}\geq-ct^{\frac{3}{2}},\ \mbox{i.e.},\ \int_0^te^{-\lambda r}dr\cdot\overline{\Phi}(x,0,0)\geq-ct^{\frac{3}{2}},\ t>0. \end{equation*} Dividing the above relation by $t$ and then letting $t\downarrow0$, we get \begin{equation*} 0\leq\overline{\Phi}(x,0,0)=\inf\limits_{u\in U}\{\langle D\varphi(x),b(x,u)\rangle+\frac{1}{2}tr(\sigma\sigma^*(x,u)D^2\varphi(x))+\psi(x,D\varphi(x)\sigma(x,u),u)\}-\lambda\varphi(x), \end{equation*} i.e., $\lambda V_\lambda(x)+H(x,D\varphi(x),D^2\varphi(x))\leq0$. This proves that $V_\lambda$ is a subsolution on $\overline{\theta}$; the proof that $V_\lambda$ is a viscosity supersolution on $\overline{\theta}$ is similar and thus omitted here. As $V_\lambda\in Lip_{\frac{M_0}{\lambda}}(\overline{\theta})$ is a constrained viscosity solution of (\ref{r8}), we have from Theorem \ref{th1} (Comparison principle) its uniqueness in $C(\overline{\theta})$. However, for the convenience of the reader let us give the following comparison result and its proof for Hamiltonians of the form (\ref{r20}). \end{proof} \indent We have the uniqueness of the viscosity solution from the following proposition. For this we recall that $\overline{\theta}$ is a compact subset of $\mathbb{R}^N$ and invariant with respect to the control system (\ref{r2}). \begin{proposition}\label{th:3.2} Assume (\ref{r1}) holds. Let $H_1,\ H_2:\mathbb{R}^N\times\mathbb{R}^N\times\mathcal{S}^N\rightarrow\mathbb{R}$ be two Hamiltonians of the form (\ref{r20}) with $\psi=\psi_1$ and $\psi=\psi_2$, respectively, where $\psi_1$ and $\psi_2$ are assumed to satisfy (\ref{r48}).
We suppose that $u\in USC(\overline{\theta})$ is a subsolution of \begin{equation*} \lambda V(x)+H_1(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}, \end{equation*} and $v\in LSC(\overline{\theta})$ is a supersolution of \begin{equation*} \lambda V(x)+H_2(x,DV(x),D^2V(x))=0,\ x\in\overline{\theta}. \end{equation*} Then it holds \begin{equation*} \lambda (u(x)- v(x))\leq\mathop{\rm sup}\limits_{y\in\overline{\theta},\ w\in U \atop \scriptstyle z\in\mathbb{R}^d}\{|\psi_1(y,z,w)-\psi_2(y,z,w)|\},\ \mbox{for any}\ x\in\overline{\theta}. \end{equation*} \end{proposition} \begin{proof} Let $u\in USC(\overline{\theta})$ be a subsolution and $v\in LSC(\overline{\theta})$ a supersolution. For $\varepsilon>0$ arbitrarily chosen, we define $\Phi_\varepsilon(x,x'):=u(x)-v(x')-\frac{1}{2\varepsilon}|x-x'|^2$, $(x,x')\in\overline{\theta}\times\overline{\theta}$. Let $(x_\varepsilon,x'_\varepsilon)\in\overline{\theta}\times\overline{\theta}$ denote a maximum point of the USC-function $\Phi_\varepsilon $ on the compact set $\overline{\theta}\times\overline{\theta}$. We set $\varphi_\varepsilon(x,x')=\frac{1}{2\varepsilon}|x-x'|^2$. Then $u(x)-\varphi_\varepsilon(x,x'_\varepsilon)$ attains a maximum at $x=x_\varepsilon$ and $v(x')+\varphi_\varepsilon(x_\varepsilon,x')$ attains a minimum at $x'=x'_\varepsilon$. From Theorem 3.2 in \cite{ISHII 1992} we have the existence of two matrices $A, B\in\mathcal{S}^N$ with \begin{equation*} (\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},A)\in \overline{J}^{2,+}u(x_\varepsilon),\quad (\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},B)\in \overline{J}^{2,-}v(x'_\varepsilon), \end{equation*} such that \begin{gather}\label{r98} \begin{pmatrix} A&0\\ 0&-B \end{pmatrix} \leq A_0+\varepsilon A_0^2,\ A_0=D^2\varphi_\varepsilon(x,x')=\frac{1}{\varepsilon} \begin{pmatrix} I&-I\\ -I&I \end{pmatrix}.
\end{gather} We notice that $A_0+\varepsilon A_0^2=\frac{3}{\varepsilon} \begin{pmatrix} I&-I\\ -I&I \end{pmatrix}.$ Then, as $u\in USC(\overline{\theta})$ is a subsolution on $\overline{\theta}$ and $v\in LSC(\overline{\theta})$ a supersolution on $\overline{\theta}$, \begin{equation}\label{t5} \lambda u(x_\varepsilon)+H_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},A)\leq0,\ \ \lambda v(x'_\varepsilon)+H_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon},B)\geq0. \end{equation} We set $\beta:=\mathop{\rm sup}\limits_{x\in\bar{\theta}}(u(x)-v(x))$. As $\overline{\theta}$ is compact, and $u-v$ upper semicontinuous on $\overline{\theta}$, there exists $\overline{x}\in\overline{\theta}$\ such that $u(\overline{x})-v(\overline{x})=\beta$. Then $\Phi_\varepsilon(x_\varepsilon,x'_\varepsilon)\geq u(\overline{x})-v(\overline{x})=\beta.$\\ Obviously, since \begin{equation*} \frac{|x_\varepsilon-x'_\varepsilon|^2}{2\varepsilon}=u(x_\varepsilon)-v(x'_\varepsilon)-\Phi_\varepsilon(x_\varepsilon,x'_\varepsilon)\leq u(x_\varepsilon)-v(x'_\varepsilon)-\beta\leq c,\ \varepsilon>0, \end{equation*} we have that $|x_\varepsilon-x'_\varepsilon|^2\leq c\varepsilon$, and by letting $\varepsilon\to 0$ we get $\lim\limits_{\varepsilon\downarrow0}|x_\varepsilon-x'_\varepsilon|=0.$ As $\overline{\theta}$ is compact, there exists a subsequence of $(x_\varepsilon,x'_\varepsilon)\in\overline{\theta}\times\overline{\theta},\ \varepsilon>0$, again denoted by $(x_\varepsilon,x'_\varepsilon)$, and some $\widehat{x}\in\overline{\theta}$ such that $x_\varepsilon\rightarrow\widehat{x},\ x'_\varepsilon\rightarrow\widehat{x}$ as $\varepsilon\downarrow0$.
Consequently, $$0\leq\varlimsup\limits_{\varepsilon\downarrow0}\frac{|x_\varepsilon-x'_\varepsilon|^2}{2\varepsilon}\leq u(\widehat{x})-v(\widehat{x})-\beta\leq0.$$ It follows that \begin{equation*} \left\{ \begin{array}{lll} u(\widehat{x})-v(\widehat{x})=\beta=\max\limits_{x\in\overline{\theta}}(u(x)-v(x)),\\ \lim\limits_{\varepsilon\downarrow0}\frac{|x_\varepsilon-x'_\varepsilon|^2}{2\varepsilon}=0. \end{array} \right. \end{equation*} From (\ref{t5}) we have \begin{equation*} \begin{split} 0\geq\lambda u(x_\varepsilon)+\mathop{\rm sup}\limits_{u\in U}\{-b(x_\varepsilon,u)\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}-\frac{1}{2}tr(\sigma\sigma^*(x_\varepsilon,u)A)-\psi_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,u),u)\},\\ 0\leq\lambda v(x'_\varepsilon)+\mathop{\rm sup}\limits_{u\in U}\{-b(x'_\varepsilon,u)\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}-\frac{1}{2}tr(\sigma\sigma^*(x'_\varepsilon,u)B)-\psi_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x'_\varepsilon,u),u)\}. 
\end{split} \end{equation*} Note that testing (\ref{r98}) with vectors of the form $(\sigma(x_\varepsilon,u)\xi,\sigma(x'_\varepsilon,u)\xi)$, $\xi\in\mathbb{R}^d$, and summing over an orthonormal basis of $\mathbb{R}^d$ gives $tr\big(\sigma\sigma^*(x_\varepsilon,u)A-\sigma\sigma^*(x'_\varepsilon,u)B\big)\leq\frac{3}{\varepsilon}|\sigma(x_\varepsilon,u)-\sigma(x'_\varepsilon,u)|^2$, where $|\cdot|$ denotes the Frobenius norm. Then, combined with (\ref{r98}), we obtain by using the Lipschitz assumption on $b, \sigma, \psi_1$ and $\psi_2$, \begin{equation*} \begin{array}{lll} &\displaystyle\lambda (u(x_\varepsilon)-v(x'_\varepsilon)) \leq\mathop{\rm sup}\limits_{u\in U}\{(b(x_\varepsilon,u)-b(x'_\varepsilon,u))\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}+\frac{1}{2}tr\big(\sigma\sigma^*(x_\varepsilon,u)A-\sigma\sigma^*(x'_\varepsilon,u)B\big)\\ &\displaystyle\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad +(\psi_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,u),u)-\psi_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x'_\varepsilon,u),u))\}\\ &\displaystyle\leq c\big(\frac{|x_\varepsilon-x'_\varepsilon|^2}{\varepsilon}+\mathop{\rm sup}\limits_{u\in U}\{|\psi_1(x_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,u),u)-\psi_2(x'_\varepsilon,\frac{x_\varepsilon-x'_\varepsilon}{\varepsilon}\sigma(x'_\varepsilon,u),u)|\}\big)\\ &\displaystyle\leq c(\frac{|x_\varepsilon-x'_\varepsilon|^2}{\varepsilon}+|x_\varepsilon-x'_\varepsilon|)+\mathop{\rm sup}\limits_{x\in\overline{\theta}, p\in\mathbb{R}^N, \atop \scriptstyle u\in U}\{|\psi_1(x,p\sigma(x,u),u)-\psi_2(x,p\sigma(x,u),u)|\}. \end{array} \end{equation*} Finally, letting $\varepsilon\downarrow0$, this yields \begin{equation*} \lambda\max\limits_{x\in\overline{\theta}}(u(x)-v(x))=\lambda (u(\widehat{x})-v(\widehat{x}))\leq \mathop{\rm sup}\limits_{x\in\overline{\theta}, p\in\mathbb{R}^N, \atop \scriptstyle u\in U}\{|\psi_1(x,p\sigma(x,u),u)-\psi_2(x,p\sigma(x,u),u)|\}. \end{equation*} \end{proof} \begin{theorem}\label{th:4.1} We suppose that the assumptions (\ref{r1}), (\ref{r48}) and (\ref{r49}) hold.
Moreover, we suppose: \begin{equation*}\label{r100} \begin{array}{lll} \mbox{There\ is\ a\ concave\ increasing\ function}\ \rho:\mathbb{R}_+\to\mathbb{R}_+ \ \mbox{with}\ \rho(0+)=0\ \mbox{such\ that,\ for\ all}\ (x,z)\\ \tag{H6} \in\mathbb{R}^N\times\mathbb{R}^d,\ u,\ u'\in U,\ \mid \psi(x,z,u)-\psi(x,z,u')\mid\leq(1+|z|)\rho(d(u,u')) \end{array} \end{equation*} (Recall that $d$\ is the metric we consider on the control state space $U$). Then there exist a subsequence $0<\lambda_n\downarrow 0$ and a function $\widetilde{w}\in C(\overline{\theta})$ such that $\widetilde{w}(x)=\lim\limits_{n\to\infty}\lambda_n V_{\lambda_n}(x)$, uniformly on $\overline{\theta}$ (recall that $V_\lambda(x)$ is defined by (\ref{r94})), and $\widetilde{w}$ is a viscosity solution of the equation $$h(x,D\widetilde{w}(x),D^2\widetilde{w}(x))=0,\ x\in \overline{\theta},$$ in the sense of Definition \ref{def3.1}, where $h(x,p,A)=\max\limits_{u\in U}\{\langle -p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\widetilde{\psi}(p\sigma(x,u),u)\}$. The function $\widetilde{\psi}$ is constructed in the proof below. \end{theorem} \begin{proof} Due to Proposition \ref{th:3.3.1}, $V_\lambda(x)$ defined by (\ref{r94}) is a viscosity solution of $$\lambda V(x)+H(x,DV(x),D^2V(x))=0\ \mbox{on}\ \overline{\theta}$$ (i.e., unlike a constrained viscosity solution, $V_\lambda$ is a viscosity super- but also subsolution on $\overline{\theta}$), and $\lambda V_\lambda\in\mbox{Lip}_{M_0}(\overline{\theta})$, for $M_0\geq \max\{\overline{c}_0,M\}$ (see Lemma \ref{lem:2.6}), and due to Proposition \ref{th:3.2} this viscosity solution is unique. We define $w_\lambda(x):=\lambda V_\lambda(x), x\in\overline{\theta}$.
Then $w_\lambda$ is the unique viscosity solution of \begin{equation}\label{226}\lambda w_\lambda(x)+H_\lambda(x,Dw_\lambda(x),D^2w_\lambda(x))=0\ \mbox{on}\ \overline{\theta},\end{equation} where $$H_\lambda(x,p,A):=\lambda H(x,\frac{1}{\lambda} p,\frac{1}{\lambda} A)=\max\limits_{u\in U}\{\langle -p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\lambda \psi(x,\frac{1}{\lambda}p\sigma(x,u),u)\}.$$ Due to (\ref{r48}) and (\ref{r100}) we have, for all $\lambda\in(0,1],\ (x,z,u),\ (x',z',u')\in\overline{\theta}\times\mathbb{R}^d\times U$, \begin{equation*} \begin{split} &{\rm i)}\ | \lambda\psi(x,\frac{1}{\lambda}z,u)|\leq \lambda M+K_z|z|;\\ &{\rm ii)}\ | \lambda\psi(x,\frac{1}{\lambda}z,u)-\lambda\psi(x',\frac{1}{\lambda}z',u')|\leq \lambda K_x|x-x'|+K_z|z-z'|+(\lambda+|z|)\rho(d(u,u')), \end{split} \end{equation*} i.e., combined with Lemma \ref{lem:2.6}, where we have shown that \begin{equation*} |w_\lambda(x)|\leq M,\ x\in\overline{\theta};\ \ |w_\lambda(x)-w_\lambda(x')|\leq\overline{c}_0|x-x'|,\ x,\ x'\in\overline{\theta},\ \lambda>0, \end{equation*} we can apply the Arzel\'{a}-Ascoli Theorem to conclude that, for some sequence $\lambda_n\downarrow0$ (as $n\to\infty$), there are functions $\widetilde{w}\in C(\overline{\theta})$ and $\widetilde{\psi}: \overline{\theta}\times\mathbb{R}^d\times U\to\mathbb{R}$ such that $w_{\lambda_n}\to\widetilde{w}$ ($n\to\infty$) uniformly on $\overline{\theta}$, and $\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\to\widetilde{\psi}(x,z,u)$ ($n\to\infty$), uniformly on compacts in $\overline{\theta}\times\mathbb{R}^d\times U$.
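As a simple illustration of the limit function $\widetilde{\psi}$ (an example, not needed for the proof): if $\psi(x,z,u)=\psi_1(x,u)+g(z)$ with $\psi_1$ bounded and $g$ positively homogeneous, then, for every $\lambda\in(0,1]$ and $(x,z,u)$,
\begin{equation*}
\lambda\psi(x,\tfrac{1}{\lambda}z,u)=\lambda\psi_1(x,u)+g(z)\xrightarrow[\lambda\downarrow0]{}g(z),
\end{equation*}
so that $\widetilde{\psi}(z,u)=g(z)$, independently of the chosen subsequence; this is precisely the setting considered in Theorem \ref{th:4.2} below.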
Obviously, \begin{equation*} \begin{split} &|\widetilde{w}(x)|\leq M,\ \ |\widetilde{w}(x)-\widetilde{w}(x')|\leq\overline{c}_0|x-x'|,\ x,\ x'\in\overline{\theta},\ \ \mbox{and}\\ &|\widetilde{\psi}(x,z,u)|\leq K_z|z|,\ \ |\widetilde{\psi}(x,z,u)-\widetilde{\psi}(x',z',u')|\leq K_z|z-z'|+|z|\rho(d(u,u')), \end{split} \end{equation*} i.e., $\widetilde{\psi}(x,z,u)=\widetilde{\psi}(z,u),\ (x,z,u)\in\overline{\theta}\times\mathbb{R}^d\times U$, is independent of $x\in\overline{\theta}$. Then, putting \begin{equation*} h(x,p,A):=\max\limits_{u\in U}\{\langle-p,b(x,u)\rangle-\frac{1}{2}tr(\sigma\sigma^*(x,u)A)-\widetilde{\psi}(p\sigma(x,u),u)\}, \end{equation*} it follows that also $H_{\lambda_n}\to h\ (n\to\infty)$ uniformly on compacts. Finally, from (\ref{226}) and the stability result for viscosity solutions we see that $\widetilde{w}$ is a viscosity solution of the equation \begin{equation*} h(x,D\widetilde{w}(x),D^2\widetilde{w}(x))=0,\ x\in\overline{\theta}. \end{equation*} \end{proof} \begin{remark} In Buckdahn, Li, Quincampoix \cite{Li} it is shown that the sequence $(w_\lambda)_{\lambda>0}$, as $\lambda\downarrow0$, can have at most one accumulation point in the space $C(\overline{\theta})$ endowed with the supremum norm. As $\widetilde{w}$ is an accumulation point of $(w_\lambda)_{\lambda>0}$ and as due to Lemma \ref{lem:2.6} every subsequence of $w_\lambda$, $\lambda\downarrow0$, has a convergent sub-subsequence (Arzel\'{a}-Ascoli Theorem), it follows that $w_\lambda\to\widetilde{w} (\lambda\downarrow0)$, uniformly on $\overline{\theta}$. In particular, if we also suppose (\ref{r15}), we have $\widetilde{w}=w_0$. \end{remark} \begin{theorem}\label{th:4.2} We suppose that the assumptions (\ref{r1}), (\ref{r49}) and (\ref{r15}) hold true.
Now we consider the case: $\psi(x,z,u)=\psi_1(x,u)+g(z)$, where $\psi_1:\overline{\theta}\times U\rightarrow \mathbb{R}$ is bounded (by $M$), uniformly continuous and satisfies \begin{equation*} |\psi_1(x,u)-\psi_1(x',u)|\leq K_x|x-x'|,\ \mbox{for any}\ x,\ x'\in\overline{\theta},\ u\in U, \end{equation*} while $g:\mathbb{R}^d\rightarrow\mathbb{R}$ is supposed to be Lipschitz (with Lipschitz constant $K_z$), positively homogeneous, concave and satisfies $g(0)=0$. For $\eta\in L^2(\mathcal{F}_t)$, we consider the following BSDE \begin{equation}\label{e1} Y_s^\eta=\eta+\int_s^tg(Z_r^\eta)dr-\int_s^tZ_r^\eta dW_r,\ s\in[0,t], \end{equation} and define the nonlinear expectation $\varepsilon^g[\eta]:=Y_0^\eta$. Then, there exists the uniform limit ${w}_0(x)=\lim\limits_{\lambda\to0^+}\lambda V_\lambda(x)$ (recall $V_\lambda(x)$ is defined by (\ref{r94})), and \begin{equation*} w_0(x)=\inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ \ \mbox{for any}\ x\in\overline{\theta}. \end{equation*} \end{theorem} \begin{remark} {\rm (i)}\ $\varepsilon^g[\cdot]$ is called the $g$-expectation; it was first introduced by Peng, see, e.g., \cite{peng}. Its definition is independent of $t$. Indeed, if $\eta\in L^2(\mathcal{F}_s),\ s\leq t$, then, in (\ref{e1}), $Z_r^\eta=0,\ r\in[s,t]$. {\rm (ii)} We recall the properties of $\varepsilon^g[\cdot]$, in particular, its concavity under the above assumptions on $g$: Let $\lambda_1, \lambda_2\in(0,1), \mbox{such\ that}\ \lambda_1+\lambda_2=1, \eta_1, \eta_2\in L^2(\mathcal{F}_t)$, $\overline{Y}_s:=(\lambda_1Y_s^{\eta_1}+\lambda_2Y_s^{\eta_2})-Y_s^{\lambda_1\eta_1+\lambda_2\eta_2}, \overline{Z}_s:=(\lambda_1Z_s^{\eta_1}+\lambda_2Z_s^{\eta_2})-Z_s^{\lambda_1\eta_1+\lambda_2\eta_2}$, $s\in[0,t]$.
As the function $g$ is Lipschitz and concave, we get \begin{equation*} \begin{split} &(\overline{Y}_s)^+(\lambda_1g(Z_s^{\eta_1})+\lambda_2g(Z_s^{\eta_2})-g(Z_s^{\lambda_1\eta_1+\lambda_2\eta_2}))\\ \leq&(\overline{Y}_s)^+(g(\lambda_1Z_s^{\eta_1}+\lambda_2Z_s^{\eta_2})-g(Z_s^{\lambda_1\eta_1+\lambda_2\eta_2}))\\ \leq&K_z(\overline{Y}_s)^+|\overline{Z}_s|, \ s\in[0,t]. \end{split} \end{equation*} Hence, $\displaystyle\mathbb{E}[((\overline{Y}_s)^+)^2]+\mathbb{E}[\int_s^t|\overline{Z}_r|^21_{\{\overline{Y}_r>0\}}dr]\leq 2K_z\mathbb{E}[\int_s^t(\overline{Y}_r)^+|\overline{Z}_r|dr],\ s\in[0,t],$ and a standard estimate and Gronwall's inequality give $(\overline{Y}_s)^+=0$, i.e., $\lambda_1Y_s^{\eta_1}+\lambda_2Y_s^{\eta_2}\leq Y_s^{\lambda_1\eta_1+\lambda_2\eta_2},\ s\in[0,t],\ \mathbb{P}$-a.s. Thus, for $s=0$, $\varepsilon^g[\lambda_1\eta_1+\lambda_2\eta_2]\geq \lambda_1\varepsilon^g[\eta_1]+\lambda_2\varepsilon^g[\eta_2]$.\end{remark} \begin{proof}\textbf{(of Theorem \ref{th:4.2}.)}\\ \textbf{Step 1}. From Proposition \ref{dpp} (DPP) we have $V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]$, where $G_{0,t}^{\lambda,x,u}[\eta]=\widetilde{Y}_{0,t}^{\lambda,x,u,\eta}$, for $\eta\in L^2(\mathcal{F}_t)$, defined by the BSDE \begin{equation*} \left\{ \begin{array}{ll} d\widetilde{Y}_{s,t}^{\lambda,x,u,\eta}=-(\psi(X_s^{x,u},\widetilde{Z}_{s,t}^{\lambda,x,u,\eta},u_s)-\lambda\widetilde{Y}_{s,t}^{\lambda,x,u,\eta})ds+\widetilde{Z}_{s,t}^{\lambda,x,u,\eta}dW_s,\\ \widetilde{Y}_{t,t}^{\lambda,x,u,\eta}=\eta,\ \eta:=V_\lambda(X_t^{x,u}). \end{array} \right. \end{equation*} Combined with the positive homogeneity of $g$, we obtain, for $s\in[0,t]$, \begin{equation*} d(\lambda e^{-\lambda s}\widetilde{Y}_{s,t}^{\lambda,x,u,\eta}+\lambda\int_0^se^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr)=-g(e^{-\lambda s}\lambda\widetilde{Z}_{s,t}^{\lambda,x,u,\eta})ds+e^{-\lambda s}\lambda\widetilde{Z}_{s,t}^{\lambda,x,u,\eta}dW_s.
\end{equation*} On the other hand, \begin{equation*} \lambda e^{-\lambda t}\widetilde{Y}_{t,t}^{\lambda,x,u,\eta}+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr=e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr. \end{equation*} Thus, for $\eta=V_\lambda(X_t^{x,u})$, \begin{equation*} \varepsilon^g[e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr]=\lambda\widetilde{Y}_{0,t}^{\lambda,x,u,\eta}=\lambda G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]. \end{equation*} Hence, \begin{equation}\label{f1} \lambda V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}\lambda G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]= \inf\limits_{u\in\mathcal{U}}\varepsilon^g[e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr]. \end{equation} Notice that (see, e.g., \cite{peng}, or use just classical estimates for BSDE) \begin{equation}\label{f2} \begin{split} &|\varepsilon^g[e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr]-\varepsilon^g[w_0(X_t^{x,u})]|\\ \leq&c\parallel e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})+\lambda\int_0^te^{-\lambda r}\psi_1(X_r^{x,u},u_r)dr-w_0(X_t^{x,u})\parallel_{L^2(\Omega)}\\ \leq&c((1-e^{-\lambda t})+\parallel\lambda V_\lambda-w_0\parallel_\infty+\lambda tM),\ \mbox{for any}\ \lambda,\ t\geq0,\ u\in\mathcal{U}. \end{split} \end{equation} Thus, combining (\ref{f1}) and (\ref{f2}), we have \begin{equation*} \lambda V_\lambda(x)= \inf\limits_{u\in\mathcal{U}}\varepsilon^g[w_0(X_t^{x,u})]+R_t^{\lambda,x},\ \text{with}\ |R_t^{\lambda,x}|\leq c((1-e^{-\lambda t})+\parallel \lambda V_\lambda-w_0\parallel_\infty+\lambda tM). \end{equation*} Then, letting $\lambda$ tend to 0 we get \begin{equation}\label{d1} w_0(x)=\inf\limits_{u\in\mathcal{U}}\varepsilon^g[w_0(X_t^{x,u})],\ \mbox{for any}\ t\geq0,\ x\in\overline{\theta}. 
\end{equation} \textbf{Step 2.}\ From (\ref{f1}), using the monotonicity of $\varepsilon^g$ (resulting from the BSDE comparison theorem) and recalling that $|\lambda V_\lambda(x)|\leq M, \ \mbox{for\ all}\ x\in\overline{\theta},\ \lambda\geq0$, we obtain \begin{equation*} \lambda V_\lambda(x)\geq \inf\limits_{u\in\mathcal{U}}\varepsilon^g[-Me^{-\lambda t}+\lambda\int_0^te^{-\lambda r}\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]. \end{equation*} Similarly to (\ref{f2}), we get \begin{equation*} \begin{split} &\mathop{\rm sup}\limits_{u\in\mathcal{U}}|\varepsilon^g[-Me^{-\lambda t}+\lambda\int_0^te^{-\lambda r}\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]-\varepsilon^g[\lambda\int_0^\infty e^{-\lambda r}\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]|\\ \leq& c(Me^{-\lambda t}+\lambda\int_t^\infty e^{-\lambda r}dr\cdot M)=2cMe^{-\lambda t}\xrightarrow[t\uparrow+\infty]{}0. \end{split} \end{equation*} Consequently, using the concavity of $\varepsilon^g[\cdot]$ (see Remark 4.3), this yields \begin{equation*} \begin{split} &\lambda V_\lambda(x)\geq \inf\limits_{u\in\mathcal{U}}\varepsilon^g[\lambda\int_0^\infty e^{-\lambda r}\cdot\min\limits_{v\in U}\psi_1(X_r^{x,u},v)dr]-2cMe^{-\lambda t}\\ &\geq \inf\limits_{u\in\mathcal{U}}\lambda\int_0^\infty e^{-\lambda r}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_r^{x,u},v)]dr-2cMe^{-\lambda t}\\ &\geq \inf\limits_{t\geq 0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_t^{x,u},v)]-2cMe^{-\lambda t},\ t\geq 0,\ x\in\overline{\theta}.
\end{split} \end{equation*} Taking the limit as $t\rightarrow +\infty$ we get immediately \begin{equation}\label{1000}\lambda V_\lambda(x)\geq \inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_t^{x,u},v)],\ \mbox{for any}\ x\in\overline{\theta}.\end{equation} Combining Propositions \ref{th:3.3.1} and \ref{th:3.2} (comparison result) with (\ref{r15}), the proof of Theorem \ref{the:3.4} can be used without assumption ($A_\theta$) (($A_H$) and (H) are satisfied since the Hamiltonian $H$ is of the form (\ref{r20})); it shows that $\lambda V_\lambda\to w_0$ uniformly on $\overline{\theta}$ as $\lambda\downarrow0$, and that $w_0$ is the maximal viscosity subsolution on $\overline{\theta}$ of \begin{equation}\label{r4.13} w_0(x)+\overline{H}(x,Dw_0(x),D^2w_0(x))\leq0,\ x\in\theta. \end{equation} Hence, letting $\lambda\downarrow0$ in the above inequality (\ref{1000}) yields $$w_0(x)\geq \inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi_1(X_t^{x,u},v)],\ \mbox{for any}\ x\in\overline{\theta}.$$ \textbf{Step 3.}\ Recall (\ref{r4.13}). Then, for all $x\in\theta$ and $(p,A)\in J^{2,+}w_0(x)$, thanks to (\ref{r15}) (see also Lemma \ref{l:3.4}) we have \begin{equation*} 0\geq w_0(x)+\overline{H}(x,p,A)\geq w_0(x)+\overline{H}(x,0,0)=w_0(x)+\mathop{\rm sup}\limits_{u\in U}(-\psi(x,0,u)). \end{equation*} This shows that, if $J^{2,+}w_0(x)\neq\emptyset$, then \begin{equation}\label{d2} w_0(x)\leq\min\limits_{v\in U}\psi(x,0,v). \end{equation} Let $x\in\theta$\ and $\varepsilon>0$, and define \begin{equation*} \psi_\varepsilon(y):=w_0(y)-\frac{1}{2\varepsilon}|y-x|^2,\ y\in\overline{\theta}. \end{equation*} Let $y_\varepsilon\in\overline{\theta}$ be a maximum point of $\psi_\varepsilon$. As $\psi_\varepsilon(y_\varepsilon)\geq\psi_\varepsilon(x)=w_0(x)$,\ $\frac{1}{2\varepsilon}|y_\varepsilon-x|^2\leq w_0(y_\varepsilon)-w_0(x)\leq2M$, we get $y_\varepsilon\xrightarrow[\varepsilon\downarrow0]{}x$, i.e., for $\varepsilon>0$ small enough, $y_\varepsilon\in\theta$.
On the other hand, we have $(p,A):=(\frac{y_\varepsilon-x}{\varepsilon}, \frac{1}{\varepsilon}I_{\mathbb{R}^N})\in J^{2,+}w_0(y_\varepsilon)$. From (\ref{d2}) we have $w_0(y_\varepsilon)\leq\min\limits_{v\in U}\psi(y_\varepsilon,0,v)$, and taking $\varepsilon\downarrow0$ yields $w_0(x)\leq \min\limits_{v\in U}\psi(x,0,v),$ $ \mbox{for any}\ x\in\theta$, and by the continuity of both sides of the inequality in $x\in\overline{\theta}$ we have \begin{equation*} w_0(x)\leq \min\limits_{v\in U}\psi(x,0,v),\ \mbox{for all}\ x\in\overline{\theta}. \end{equation*} Finally, it follows from (\ref{d1}) and the monotonicity of $\varepsilon^g[\cdot]$ that $$ w_0(x)\leq \inf\limits_{u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ t\geq0,\ x\in\overline{\theta},$$ which means $w_0(x)\leq \inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ x\in\overline{\theta}.$ Combined with Step 2 we get \begin{equation*} w_0(x)=\inf\limits_{t\geq0, u\in\mathcal{U}}\varepsilon^g[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)],\ \mbox{for any}\ x\in\overline{\theta}. \end{equation*}\end{proof} \begin{remark} Let us consider the special case where $\psi$ is independent of $z$, i.e., $g(z)=0$. Then we get $w_0(x)=\inf\limits_{t\geq0, u\in\mathcal{U}}\mathbb{E}[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)]$, for any $x\in\overline{\theta}$. \end{remark} Let us now come back to the general case of $\psi$. \begin{theorem}\label{th:4.3} We suppose that the assumptions (\ref{r1}), (\ref{r48}), (\ref{r49}), (\ref{r15}), (\ref{r100}) and\textbf{ ($A_\theta$)} hold true. Moreover, let $H(x,p,A)$ be convex in $(p,A)\in\mathbb{R}^N\times\mathcal{S}^N$, for all $x\in\overline{\theta}$.
Then, we have \begin{equation*} \begin{aligned} w_0(x)\leq &\inf\{G_{0,t}^{\widetilde{\psi},x,u}[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)]\mid u\in\mathcal{U},\ t\geq0,\ \widetilde{\psi}\ \mbox{such that there exists}\ \lambda_n\downarrow0\ \mbox{with}\\ &\ \ \ \ \lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\rightarrow\widetilde{\psi}(z,u)\},\ x\in\overline{\theta}. \end{aligned} \end{equation*} \end{theorem} \begin{proof} From Propositions \ref{th:3.3.1} and \ref{th:3.2} we get\\ 1) The limit $w_0(x)=\lim_{\lambda\rightarrow0^+}\lambda V_\lambda(x)$, for every $x\in\overline{\theta}$; and the convergence is uniform on $\overline{\theta}$.\\ 2) There exists $\widetilde{\psi}$\ such that $\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\to\widetilde{\psi}(z,u)$\ as $\lambda_n\downarrow0$, uniformly on compacts. Moreover, \begin{equation*} |\lambda_n\psi(x,\frac{1}{\lambda_n}z,u)| \leq \lambda_nM+K_z|z|,\ n\geq1,\quad |\widetilde{\psi}(z,u)|\leq K_z|z|,\ z\in\mathbb{R}^d. \end{equation*} From Proposition \ref{dpp} (DPP) we have \begin{equation*} V_\lambda(x)=\inf_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})],\ t>0,\ x\in \overline{\theta}. \end{equation*} We put $\widetilde{Y}_s^{\lambda,x,u}:=G_{s,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})],\ s\in[0,t].$\ Then $ V_\lambda(x)=\inf\limits_{u\in\mathcal{U}}\widetilde{Y}_0^{\lambda,x,u},$ where \begin{equation*} \widetilde{Y}_s^{\lambda,x,u}=V_\lambda(X_t^{x,u})+\int_s^t(\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r) -\lambda\widetilde{Y}_r^{\lambda,x,u})dr-\int_s^t\widetilde{Z}_r^{\lambda,x,u}dW_r,\ s\in[0,t]. \end{equation*} By applying It\^{o}'s formula to $e^{-\lambda s}\widetilde{Y}_s^{\lambda,x,u}$ we have \begin{equation*} e^{-\lambda s}\lambda\widetilde{Y}_s^{\lambda,x,u}=e^{-\lambda t}\lambda\widetilde{Y}_t^{\lambda,x,u}+\int_s^t\lambda e^{-\lambda r}\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)dr-\int_s^t\lambda e^{-\lambda r}\widetilde{Z}_r^{\lambda,x,u}dW_r. 
\end{equation*} As $e^{-\lambda t}\lambda V_\lambda(X_t^{x,u})\xrightarrow[L^\infty]{} w_0(X_t^{x,u})$ as $\lambda\rightarrow 0$, uniformly with respect to $(t,x),\ u\in\mathcal{U}$, we consider the following BSDE: \begin{equation}\label{326} Y_s^{x,u}=w_0(X_t^{x,u})+\int_s^t\widetilde{\psi}(Z_r^{x,u},u_r)dr-\int_s^tZ_r^{x,u}dW_r,\ s\in[0,t]. \end{equation} From a standard estimate for BSDEs it follows that, for all $p\in(1,2)$, \begin{equation*} \begin{split} &\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|e^{-\lambda s}(\lambda\widetilde{Y}_s^{\lambda,x,u})-Y_s^{x,u}|^p+(\int_0^t|e^{-\lambda s}(\lambda\widetilde{Z}_s^{\lambda,x,u})-Z_s^{x,u}|^2ds)^{\frac{p}{2}}]\\ \leq& C_p\mathbb{E}[|e^{-\lambda t}(\lambda\widetilde{Y}_t^{\lambda,x,u})-Y_t^{x,u}|^p]+C_p\mathbb{E}[(\int_0^t|e^{-\lambda r}\lambda\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)-\widetilde{\psi}(Z_r^{x,u},u_r)|dr)^p]\\ \leq& C_p\mathbb{E}[|e^{-\lambda t}(\lambda\widetilde{Y}_t^{\lambda,x,u})-Y_t^{x,u}|^p]\\ &+C_p\mathbb{E}[(\int_0^t\lambda|\psi(X_r^{x,u},\widetilde{Z}_r^{\lambda,x,u},u_r)-\psi(X_r^{x,u},\frac{1}{\lambda}{Z}_r^{x,u},u_r)|dr)^p] (=:I_1(\lambda))\\ &+C_p\mathbb{E}[(\int_0^t|e^{-\lambda r}(\lambda\psi(X_r^{x,u},\frac{1}{\lambda}{Z}_r^{x,u},u_r))-\widetilde{\psi}(Z_r^{x,u},u_r)|1_{\{|Z_r^{x,u}|\leq\alpha\}}dr)^p] (=:\rho_\alpha(\lambda))\\ &+C_p\mathbb{E}[(\int_0^t(\lambda M+2K_z|Z_r^{x,u}|)1_{\{|Z_r^{x,u}|>\alpha\}}dr)^p] (=:I_2(\lambda,\alpha)). \end{split} \end{equation*} Notice that $\displaystyle I_2(\lambda,\alpha)\leq C_{p,M}\lambda^p+C_{p,K_z}\mathbb{E}[\int_0^t\frac{|Z_r^{x,u}|^2}{\alpha^{2-p}}dr]$. As $w_0\in C_b(\overline{\theta})$ and $|\widetilde{\psi}(z,u)|\leq K_z|z|,\ (z,u)\in\mathbb{R}^d\times U$, it follows from the BSDE (\ref{326}) for $(Y^{x,u},Z^{x,u})$ that \begin{equation*} \mathop{\rm sup}\limits_{(x,u)\in\overline{\theta}\times\mathcal{U}}\mathbb{E}[\int_0^t|Z_r^{x,u}|^2dr]<\infty.
\end{equation*} We also remark that, again by standard BSDE estimates, there is some $K\in\mathbb{R}_+$ such that \begin{equation*} \lambda^2\mathbb{E}[\int_0^t|\widetilde{Z}_r^{\lambda,x,u}|^2dr]\leq K,\ \mbox{for\ all}\ \lambda>0. \end{equation*} Then, \begin{equation*} I_2(\lambda,\alpha)\leq C_{p,M}\lambda^p+C'_{p,K_z}\frac{1}{\alpha^{2-p}}. \end{equation*} For $I_1(\lambda)$ we have \begin{equation*} \begin{split} I_1(\lambda)\leq& C'_p\mathbb{E}[(\int_0^t|\lambda\widetilde{Z}_r^{\lambda,x,u}-Z_r^{x,u}|dr)^p]\\ \leq&C''_pt^{\frac{p}{2}}\mathbb{E}[(\int_0^t|e^{-\lambda r}(\lambda\widetilde{Z}_r^{\lambda,x,u})-Z_r^{x,u}|^2dr)^{\frac{p}{2}}]+C'''_p\mathbb{E}[(\int_0^t(1-e^{-\lambda r})^2|\lambda\widetilde{Z}_r^{\lambda,x,u}|^2dr)^{\frac{p}{2}}]\\ \leq& C''_pt^{\frac{p}{2}}\mathbb{E}[(\int_0^t|e^{-\lambda r}(\lambda\widetilde{Z}_r^{\lambda,x,u})-Z_r^{x,u}|^2dr)^{\frac{p}{2}}]+C'''_p(1-e^{-\lambda t})^pK. \end{split} \end{equation*} Hence, for $t>0$ small enough such that $C''_pt^{\frac{p}{2}}\leq\frac{1}{2}$, we get \begin{equation*} \begin{split} &\mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|e^{-\lambda s}(\lambda\widetilde{Y}_s^{\lambda,x,u})-Y_s^{x,u}|^p+\frac{1}{2}(\int_0^t|e^{-\lambda s}(\lambda\widetilde{Z}_s^{\lambda,x,u})-Z_s^{x,u}|^2ds)^{\frac{p}{2}}]\\ \leq& C\rho_\alpha(\lambda)+\frac{C}{\alpha^{2-p}},\ \mbox{for any}\ (x,u)\in\overline{\theta}\times\mathcal{U},\ \mbox{and any}\ \alpha>0. \end{split} \end{equation*} Observe that $\rho_\alpha(\lambda)\xrightarrow[\lambda=\lambda_n\downarrow0]{}0$, $\frac{C}{\alpha^{2-p}}\xrightarrow[\alpha\uparrow\infty]{}0$.
Hence, \begin{equation*} \mathbb{E}[\mathop{\rm sup}\limits_{s\in[0,t]}|e^{-\lambda s}(\lambda\widetilde{Y}_s^{\lambda,x,u})-Y_s^{x,u}|^p+\frac{1}{2}(\int_0^t|e^{-\lambda s}(\lambda\widetilde{Z}_s^{\lambda,x,u})-Z_s^{x,u}|^2ds)^{\frac{p}{2}}]\xrightarrow[\lambda=\lambda_n\downarrow0]{}0, \end{equation*} uniformly in $(x,u)\in\overline{\theta}\times\mathcal{U}$, for $t>0$ small enough; for general $t>0$, we choose $\delta>0$ small enough, repeat the above discussion first on $[t-\delta,t]$, then on $[t-2\delta,t-\delta]$, etc., and get by iteration \begin{equation*} \mathop{\rm sup}\limits_{u\in\mathcal{U}}|\lambda\widetilde{Y}_0^{\lambda,x,u}-Y_0^{x,u}|\xrightarrow[\lambda=\lambda_n\downarrow0]{}0, \end{equation*} and, consequently, \begin{equation*} |\inf\limits_{u\in\mathcal{U}}(\lambda\widetilde{Y}_0^{\lambda,x,u})-\inf\limits_{u\in\mathcal{U}}Y_0^{x,u}|\xrightarrow[\lambda=\lambda_n\downarrow0]{}0. \end{equation*} But this means that \begin{equation*} \inf\limits_{u\in\mathcal{U}}Y_0^{x,u}=w_0(x). \end{equation*} Notice that from BSDE (\ref{326}) we have \begin{equation*} Y_0^{x,u}=w_0(X_t^{x,u})+\int_0^t\widetilde{\psi}(Z_s^{x,u},u_s)ds-\int_0^tZ_s^{x,u}dW_s. \end{equation*} On the other hand, defining the backward stochastic semigroup \begin{equation*} G_{s,t}^{\widetilde{\psi},x,u}(\eta):=Y_s^{x,u,\eta} \end{equation*} through the associated BSDE \begin{equation*} Y_s^{x,u,\eta}=\eta+\int_s^t\widetilde{\psi}(Z_r^{x,u,\eta},u_r)dr-\int_s^tZ_r^{x,u,\eta}dW_r,\ \eta\in L^2(\Omega,\mathcal{F}_t,\mathbb{P}), \end{equation*} we get \begin{equation*} w_0(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\widetilde{\psi},x,u}[w_0(X_t^{x,u})].
\end{equation*} Consequently, \begin{equation}\label{426} \begin{aligned} w_0(x)= &\inf\{G_{0,t}^{\widetilde{\psi},x,u}[w_0(X_t^{x,u})]\mid u\in\mathcal{U},\ t\geq0,\ \widetilde{\psi}\ \mbox{such that there exists}\ \lambda_n\downarrow0\ \mbox{with}\\ &\ \ \ \ \lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\rightarrow\widetilde{\psi}(z,u)\},\ x\in\overline{\theta}. \end{aligned} \end{equation} From Lemma \ref{l:3.4} we have $H(x,p,A)\geq H(x,0,0),\ \mbox{for any}\ (p,A)\in \mathbb{R}^N\times\mathcal{S}^N, x\in\overline{\theta}.$ Therefore, from Proposition \ref{th:3.3.1}, in viscosity sense \begin{equation*} \begin{split} 0\geq&\lambda V_\lambda(x)+H(x,DV_\lambda(x),D^2V_\lambda(x))\\ \geq& \lambda V_\lambda(x)+H(x,0,0)=\lambda V_\lambda(x)+\max\limits_{u\in {U}}\{-\psi(x,0,u)\}, \end{split} \end{equation*} for all $x\in\theta$ with $J^{2,+}V_\lambda(x)\neq\emptyset$. Using the same argument as in Step 3 of the proof of Theorem \ref{th:4.2}, we see that this implies that $0\geq\lambda V_\lambda(x)+\max\limits_{u\in U}\{-\psi(x,0,u)\}$, for all $x\in\overline{\theta}$. By taking the limit, as $\lambda\rightarrow 0$, it follows that \begin{equation*} w_0(x)\leq \min\limits_{u\in {U}}\psi(x,0,u). \end{equation*} Therefore, from (\ref{426}) and the comparison theorem for BSDEs we get directly \begin{equation*} \begin{aligned} w_0(x)\leq &\inf\{G_{0,t}^{\widetilde{\psi},x,u}[\min\limits_{v\in U}\psi(X_t^{x,u},0,v)]\mid u\in\mathcal{U},\ t\geq0,\ \widetilde{\psi}\ \mbox{such that there exists}\ \lambda_n\downarrow0\ \mbox{with}\\ &\ \ \ \ \lambda_n\psi(x,\frac{1}{\lambda_n}z,u)\rightarrow\widetilde{\psi}(z,u)\},\ x\in\overline{\theta}. \end{aligned} \end{equation*} \end{proof} \section{ {\protect \large Appendix: Proof of Proposition \ref{dpp} (DPP)}} This appendix is devoted to the proof of the DPP (Proposition \ref{dpp}). For the proof we need an auxiliary result.
For this we note that, as the filtration used in Section 4 is the Brownian one, we can suppose without loss of generality that $(\Omega,\mathcal{F},\mathbb{P})$ is the standard Wiener space, $\Omega=C_0(\mathbb{R}_+;\mathbb{R}^d)=\{\omega\in C(\mathbb{R}_+;\mathbb{R}^d): \omega(0)=0\}$, endowed with the Borel $\sigma$-algebra over $C_0(\mathbb{R}_+;\mathbb{R}^d)$ and the Wiener measure, with respect to which $\mathbb{F}$ is completed. The coordinate process $W_t(\omega)=\omega_t, t\geq0, \omega\in\Omega$, is a $d$-dimensional Brownian motion, and the filtration $\mathbb{F}$ is generated by $W$. \begin{lemma} We assume that (H1) and (H2) hold. Let $t\geq0,\ u\in\mathcal{U}_t=L_{\mathbb{F}}^\infty(t,\infty;U)$. Let $X^{t,x,u}$ be the unique continuous and $\mathbb{F}$-adapted solution of the following SDE: \begin{equation*}\label{A1} X_s^{t,x,u}=x+\int_t^sb(X_r^{t,x,u},u_r)dr +\int_t^s\sigma(X_r^{t,x,u},u_r)dW_r,\ s\geq t,\ x\in\mathbb{R}^N, \tag{A1} \end{equation*} and let $(Y^{\lambda,t,x,u}, Z^{\lambda,t,x,u})$ be the unique solution of the following BSDE on the infinite time interval: \begin{equation*}\label{A2} Y_s^{\lambda,t,x,u}=Y_T^{\lambda,t,x,u}+\int_s^T(\psi(X_r^{t,x,u},Z_r^{\lambda,t,x,u},u_r)-\lambda Y_r^{\lambda,t,x,u})dr-\int_s^TZ_r^{\lambda,t,x,u}dW_r,\ t\leq s\leq T<+\infty, \tag{A2} \end{equation*} where $Y^{\lambda,t,x,u}=(Y_s^{\lambda,t,x,u})_{s\geq t}$ is a bounded continuous $\mathbb{F}$-adapted process and $Z^{\lambda,t,x,u}=(Z_s^{\lambda,t,x,u})_{s\geq t}\in\mathcal{H}^2_{loc}(t,\infty;\mathbb{R}^d)$. Let $\theta_t=\theta_t(\omega)$ be the translation operator on $\Omega$, $\theta_t(\omega)_s=\omega(s+t)-\omega(t),\ \omega\in\Omega, s\geq 0$. Given $u\in\mathcal{U}$ we can identify $u$ with a measurable functional applied to $W$. Thus, given an arbitrary element $u_0$ of $\mathcal{U}$, we can define \begin{equation*} \overline{u}_s:= \left\{ \begin{array}{lll} u_0,\ s\in[0,t),\\ u_{s-t}(\theta_t),\ s\geq t. \end{array} \right.
\end{equation*} Then, $\overline{u}\in\mathcal{U}$ and \begin{equation*}\label{A3} X_s^{x,u}(\theta_t)=X_{s+t}^{t,x,\overline{u}},\ Y_s^{\lambda,x,u}(\theta_t)=Y_{s+t}^{\lambda,t,x,\overline{u}},\ s\geq0,\ \mathbb{P}\text{-a.s.}, \tag{A3} \end{equation*} and \begin{equation*}\label{A4} Z_s^{\lambda,x,u}(\theta_t)=Z_{s+t}^{\lambda,t,x,\overline{u}},\ dsd\mathbb{P}\text{-a.e.}, \ s\geq 0. \tag{A4} \end{equation*} \end{lemma} \begin{proof} While the existence and the uniqueness of the solution for (\ref{A1}) is standard, that of (\ref{A2}) is shown in analogy to Proposition \ref{th:2.4}. Given $u\in\mathcal{U}$, it is obvious that also $\overline{u}\in\mathcal{U}$, and applying the transformation $\theta_t$ to (\ref{0}) and (\ref{r40}) we see that $(X_{s-t}^{x,u}(\theta_t))_{s\geq t}$, $(Y_{s-t}^{\lambda,x,u}(\theta_t),Z_{s-t}^{\lambda,x,u}(\theta_t))_{s\geq t}$ are solutions of (\ref{A1}) and (\ref{A2}), respectively, with control process $\overline{u}$ instead of $u$. From the uniqueness of the solutions of (\ref{A1}) and (\ref{A2}) we obtain (\ref{A3}) and (\ref{A4}). \end{proof} Now we can prove Proposition \ref{dpp} (DPP). \begin{proof}\textbf{(of Proposition \ref{dpp}.)} Let us put $\overline{V}_\lambda(x):=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]$. We have to show that $\overline{V}_\lambda(x)=V_\lambda(x)$.\\ 1) As, for all $y\in\mathbb{R}^d$, ${V}_\lambda(y)$ is deterministic, we obtain from the preceding Lemma 5.1 \begin{equation*} V_\lambda(y)=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_0^{\lambda,y,v}(\theta_t)=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_t^{\lambda,t,y,\overline{v}},\ \mathbb{P}\text{-a.s.}, \end{equation*} with \begin{equation*} \overline{v}_s:= \left\{ \begin{array}{lll} u_0,\ s\in[0,t),\\ v_{s-t}(\theta_t),\ s\geq t. \end{array} \right.
\end{equation*} Then, by a standard argument (see, e.g., \cite{Peng 1997}), \begin{equation*} V_\lambda(X_t^{x,u})=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_t^{\lambda,t,X_t^{x,u},\overline{v}}=\mathop{\rm essinf}\limits_{v\in\mathcal{U}}Y_t^{\lambda,x,u\oplus\overline{v}}, \end{equation*} where \begin{equation*} (u\oplus\overline{v})_s= \left\{ \begin{array}{lll} u_s,\ s\in[0,t)\\ \overline{v}_s,\ s\geq t \end{array} \right. \in\mathcal{U}. \end{equation*} Again by a by now standard argument (see, e.g., \cite{Peng 1997}), for all $\varepsilon>0$, there exists $v\in\mathcal{U}$ such that $u=v$, dsd$\mathbb{P}$-a.e. on $[0,t]\times\Omega$ and \begin{equation*} V_\lambda(X_t^{x,u})\geq Y_t^{\lambda,x,v}-\varepsilon,\ \mathbb{P}\text{-a.s.} \end{equation*} Then, from the monotonicity and the Lipschitz property (in $L^2$) of $G_{0,t}^{\lambda,x,u}[\cdot]$ (resulting from standard BSDE estimates, see, e.g., \cite{Peng 1997}), \begin{equation*} \begin{split} G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]\geq& G_{0,t}^{\lambda,x,u}[Y_t^{\lambda,x,v}-\varepsilon] \geq G_{0,t}^{\lambda,x,u}[Y_t^{\lambda,x,v}]-C\varepsilon =Y_0^{\lambda,x,v}-C\varepsilon\\ \geq&\inf\limits_{v\in\mathcal{U}}Y_0^{\lambda,x,v}-C\varepsilon=V_\lambda(x)-C\varepsilon,\ \mathbb{P}\text{-a.s.} \end{split} \end{equation*} Consequently, letting $\varepsilon\downarrow0$, we see that \begin{equation*} \overline{V}_\lambda(x)=\inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]\geq V_\lambda(x). \end{equation*} 2) To prove that $V_\lambda(x)\geq\overline{V}_\lambda(x)$, we let, for any given $\varepsilon>0$, $u\in\mathcal{U}$ be such that $V_\lambda(x)\geq Y_0^{\lambda,x,u}-\varepsilon$.
Then, \begin{equation*} \begin{split} V_\lambda(x)\geq& Y_0^{\lambda,x,u}-\varepsilon=G_{0,t}^{\lambda,x,u}[Y_t^{\lambda,x,u}]-\varepsilon\\ \geq&G_{0,t}^{\lambda,x,u}[\mathop{\rm essinf}\limits_{\overline{v}\in\mathcal{U}}Y_t^{\lambda,x,u\oplus\overline{v}}]-\varepsilon\\ =&G_{0,t}^{\lambda,x,u}[\mathop{\rm essinf}\limits_{\overline{v}\in\mathcal{U}}Y_t^{\lambda,t,X_t^{x,u},\overline{v}}]-\varepsilon,\ \mathbb{P}\text{-a.s.} \end{split} \end{equation*} But, $Y_t^{\lambda,t,X_t^{x,u},\overline{v}}=(Y_0^{\lambda,y,v})(\theta_t)|_{y=X_t^{x,u}}$, and thus \begin{equation*} \mathop{\rm essinf}\limits_{\overline{v}\in\mathcal{U}}Y_t^{\lambda,t,X_t^{x,u},\overline{v}}=(\inf\limits_{\overline{v}\in\mathcal{U}}Y_0^{\lambda,y,\overline{v}})(\theta_t)|_{y=X_t^{x,u}}=V_\lambda(X_t^{x,u}). \end{equation*} Consequently, \begin{equation*} V_\lambda(x)\geq G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]-\varepsilon,\ \text{for\ all}\ u\in\mathcal{U}, \end{equation*} from which it follows that \begin{equation*} V_\lambda(x)\geq \inf\limits_{u\in\mathcal{U}}G_{0,t}^{\lambda,x,u}[V_\lambda(X_t^{x,u})]-\varepsilon, \end{equation*} and letting $\varepsilon\downarrow0$ we get $V_\lambda(x)\geq\overline{V}_\lambda(x)$. \end{proof} \end{document}
\begin{document} \title{Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model} \begin{abstract} In this paper we consider the problem of computing an $\epsilon$-optimal policy of a discounted Markov Decision Process (DMDP) provided we can only access its transition function through a generative sampling model that given any state-action pair samples from the transition function in $O(1)$ time. Given such a DMDP with states $\cS$, actions $\cA$, discount factor $\gamma\in(0,1)$, and rewards in range $[0, 1]$ we provide an algorithm which computes an $\epsilon$-optimal policy with probability $1 - \delta$ where {\it both} the time spent and the number of samples taken are upper bounded by \[ O\left[\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\delta \epsilon} \right) \log\left(\frac{1}{(1-\gamma)\epsilon}\right)\right] ~. \] For fixed values of $\epsilon\in(0,1)$, this improves upon the previous best known bounds by a factor of $(1 - \gamma)^{-1}$ and matches the sample complexity lower bounds proved in \cite{azar2013minimax} up to logarithmic factors. We also extend our method to computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative model and provide a nearly matching sample complexity lower bound. \end{abstract} \section{Introduction} Markov decision processes (MDPs) are a fundamental mathematical abstraction used to model sequential decision making under uncertainty and are a basic model of discrete-time stochastic control and reinforcement learning (RL). Particularly central to RL is the case of computing or learning an approximately optimal policy when the MDP itself is not fully known beforehand.
One of the simplest such settings is when the states, rewards, and actions are all known but the transition between states when an action is taken is probabilistic, unknown, and can only be sampled from. Computing an approximately optimal policy with high probability in this case is known as PAC RL with a generative model. It is a well studied problem with multiple existing results providing algorithms with improved sample complexity (number of sample transitions taken) and running time (the total time of the algorithm) under various MDP reward structures, e.g. discounted infinite-horizon, finite-horizon, etc. (See Section~\ref{sec:prev_work} for a detailed review of the literature.) In this work, we consider this well studied problem of computing approximately optimal policies of discounted infinite-horizon Markov Decision Processes (DMDP) under the assumption that we can only access the DMDP by sampling state transitions. Formally, we suppose that we have a DMDP with a known set of states, $\cS$, a known set of actions that can be taken at each state, $\cA$, a known reward ${\boldsymbol{r}}_{s,a} \in [0, 1]$ for taking action $a \in \cA$ at state $s \in \cS$, and a discount factor $\gamma\in(0,1)$. We assume that taking action $a$ at state $s$ probabilistically transitions an agent to a new state based on a fixed, but unknown probability vector ${\boldsymbol{P}}_{s, a}$. The objective is to maximize the cumulative sum of discounted rewards in expectation. Throughout this paper, we assume that we have a \emph{generative model}, a notion introduced by \cite{kakade2003sample}, which allows us to draw random state transitions of the DMDP. In particular, we assume that we can sample from the distribution defined by ${\boldsymbol{P}}_{s, a}$ for all $(s,a) \in \mathcal{S} \times \mathcal{A}$ in $O(1)$ time.
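For concreteness, such a sampling oracle can be sketched as follows; this is our own minimal Python illustration, not code from the paper (the class name and interface are ours, and we assume the oracle internally holds the transition tensor, which is exactly what the learner itself does not see):

```python
import numpy as np

class GenerativeModel:
    """Generative sampling model: sample(s, a) draws a next state s' ~ P_{s,a}.

    After linear-time preprocessing (building cumulative distributions), each
    call costs O(log |S|) via binary search; an alias table would give O(1),
    matching the assumption in the text.
    """

    def __init__(self, P, seed=0):
        # P has shape (|S|, |A|, |S|); each P[s, a] is a probability vector.
        self.cdf = np.cumsum(np.asarray(P, dtype=float), axis=2)
        self.rng = np.random.default_rng(seed)

    def sample(self, s, a):
        # Inverse-CDF sampling: first index whose CDF value exceeds u.
        u = self.rng.random()
        return int(np.searchsorted(self.cdf[s, a], u, side="right"))

# Tiny deterministic example: from state 0, action 0 always leads to state 1.
P = np.zeros((2, 1, 2))
P[0, 0, 1] = 1.0
P[1, 0, 0] = 1.0
oracle = GenerativeModel(P)
```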
This is a natural assumption and can be achieved in expectation in certain computational models with linear time preprocessing of the DMDP.\footnote{If instead the oracle needed time $\tau$, every running time result in this paper should be multiplied by $\tau$.} The main result of this paper is that we provide the first algorithm that is sample-optimal and runtime-optimal (up to polylogarithmic factors) for computing an $\epsilon$-optimal policy of a DMDP with a generative model (in the regime of $ 1/\sqrt{(1-\gamma)|\mathcal{S}|}\le \epsilon\le 1$). In particular, we develop a randomized Variance-Reduced Q-Value Iteration (vQVI) based algorithm that computes an $\epsilon$-optimal policy with probability $1 - \delta$ with a number of samples, i.e. queries to the generative model, bounded by \[ O\left[\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)\delta \epsilon} \right) \log\left(\frac{1}{(1-\gamma)\epsilon}\right)\right] ~. \] This result matches (up to polylogarithmic factors) the following sample complexity lower bound established in \cite{azar2013minimax} for finding $\epsilon$-optimal policies with probability $1-\delta$ (see Appendix~\ref{sec:lower bound}): $$\Omega \left[\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3 \epsilon^2} \log\left(\frac{|\mathcal{S}||\mathcal{A}| }{\delta}\right) \right] ~.$$ Furthermore, we show that the algorithm can be implemented using sparse updates such that the overall run-time complexity is equal to its sample complexity up to constant factors, as long as each sample transition can be generated in $O(1)$ time. Consequently, up to logarithmic factors our run time complexity is optimal as well. In addition, the algorithm's space complexity is $\Theta(|\mathcal{S}||\mathcal{A}|)$. Our method and analysis build upon a number of prior works. (See Section~\ref{sec:prev_work} for an in-depth comparison.)
The paper \cite{azar2013minimax} provided the first algorithm that achieves the optimal sample complexity for finding $\epsilon$-optimal value functions (rather than $\epsilon$-optimal policies), as well as the matching lower bound. Unfortunately, an $\epsilon$-optimal value function does not imply an $\epsilon$-optimal policy, and if we directly use the method of \cite{azar2013minimax} to get an $\epsilon$-optimal policy for constant $\epsilon$, the best known sample complexity is $\tilde{O}(|\mathcal{S}||\mathcal{A}| (1-\gamma)^{-5} \epsilon^{-2})$. \footnote{\cite{azar2013minimax} showed that one can obtain an $\epsilon$-optimal \emph{value} $v$ (instead of an $\epsilon$-optimal policy) using sample size $\propto (1-\gamma)^{-3}\epsilon^{-2}$. By using this $\epsilon$-optimal value $v$, one can get a greedy policy that is $[(1-\gamma)^{-1}\epsilon]$-optimal. By setting $\epsilon\rightarrow (1-\gamma)\epsilon$, one can obtain an $\epsilon$-optimal policy using a number of samples $\propto (1-\gamma)^{-5}\epsilon^{-2}$.} This bound is known to be improvable through the related work of \cite{sidford2018variance}, which provides a method for computing an $\epsilon$-optimal policy using $\tilde{O}(|\mathcal{S}||\mathcal{A}| (1-\gamma)^{-4} \epsilon^{-2})$ samples and total runtime, and the work of \cite{azar2013minimax}, which, in the regime of small approximation error, i.e. where $\epsilon = O( (1-\gamma)^{-1/2}|\mathcal{S}|^{-1/2})$, already provides a method that achieves the optimal sample complexity. However, when the approximation error takes fixed values, e.g. $\epsilon\geq\Omega((1-\gamma)^{-1/2}|\mathcal{S}|^{-1/2})$, there remains a gap between the best known runtime and sample complexity for computing an $\epsilon$-optimal policy and the theoretical lower bounds.
For fixed values of $\epsilon$, which are the most common in real applications, our algorithm improves upon the previous best sample and time complexity bounds by a factor of $(1-\gamma)^{-1}$, where $\gamma\in(0,1)$, the discount factor, is typically close to 1. We achieve our results by combining and strengthening techniques from both \cite{azar2013minimax} and \cite{sidford2018variance}. On the one hand, in \cite{azar2013minimax} the authors showed that simply constructing a ``sparsified'' MDP model by taking samples and then solving this model to high precision yields a sample-optimal algorithm in our setting for computing the approximate value of every state. On the other hand, \cite{sidford2018variance} provided faster algorithms for solving explicit DMDPs and improved sample and time complexities given a sampling oracle. In fact, as we show in Appendix~\ref{sec:sparsify mdp}, simply combining these two results yields the first nearly optimal runtime for approximately learning the value function with a generative model. Unfortunately, it is known that an approximately optimal value function does not immediately yield an approximately optimal policy of comparable quality (see e.g. \cite{bertsekas2013abstract}), and it was previously unclear how to combine these methods to improve upon previously known bounds for computing an approximate policy. To achieve our policy computation algorithm we therefore open up both the algorithms and the analysis in \cite{azar2013minimax} and \cite{sidford2018variance}, combining them in nontrivial ways. Our proofs leverage techniques ranging from standard probabilistic analysis tools such as the Hoeffding and Bernstein inequalities, to optimization techniques such as variance reduction, to properties specific to MDPs such as the Bellman fixed-point recursion for the expectation and variance of the optimal value vector, and the monotonicity of value iteration.
Finally, we extend our method to finite-horizon MDPs, which also occur frequently in real applications. We show that the number of samples needed by this algorithm is $ \wt{O}(H^{3}|\mathcal{S}||\mathcal{A}| \epsilon^{-2}) $ in order to obtain an $\epsilon$-optimal policy for an $H$-horizon MDP (see Appendix~\ref{sec:finite_horizon}). We also show that the preceding sample complexity is optimal up to logarithmic factors by providing a matching lower bound. We hope this work ultimately opens the door for future practical and theoretical work on solving MDPs and efficient RL more broadly. \section{Comparison to Previous Work} \label{sec:prev_work} \begin{table*}[h] \begin{center} {\small \begin{tabular}{|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{5cm}|>{\centering\arraybackslash}m{3cm}|} \hline \textbf{Algorithm} & \textbf{Sample Complexity} & \textbf{References} \\ \hline & & \\[-1em] Phased Q-Learning& $\wt{O} (C\frac{|\mathcal{S}| |\mathcal{A}|}{(1-\gamma)^7\epsilon^2}) $ & \cite{kearns1999finite} \\ \hline & & \\[-1em] Empirical QVI& $\wt{O} (\frac{|\mathcal{S}| |\mathcal{A}|}{(1-\gamma)^5\epsilon^2})$ \footnotemark& \cite{azar2013minimax} \\ \hline && \\[-1em] Empirical QVI & $\wt{O} \big(\frac{|\mathcal{S}| |\mathcal{A}|}{(1-\gamma)^3\epsilon^2}\big)$ if $\epsilon = \wt{O}\big(\frac1{\sqrt{(1-\gamma)|\mathcal{S}|}}\big) $ & { \cite{azar2013minimax}} \\ \hline Randomized Primal-Dual Method& $\wt{O} (C\frac{|\mathcal{S}| |\mathcal{A}|}{(1-\gamma)^4\epsilon^2}) $ & \cite{wang2017randomized}\\ \hline Sublinear Randomized Value Iteration & $\wt{O} \left( \frac{|\mathcal{S}| |\mathcal{A}|}{(1-\gamma)^4 \epsilon^2}\right)$ & \cite{sidford2018variance} \\ \hline & & \\[-1em] Sublinear Randomized QVI & $\wt{O} \left(
\frac{|\mathcal{S}| |\mathcal{A}|}{(1-\gamma)^3 \epsilon^2}\right)$ & This Paper \\[-1em] & & \\ \hline \end{tabular} } \end{center} \caption{ \small \textbf{Sample Complexity to Compute $\epsilon$-Approximate Policies Using the Generative Sampling Model}: Here $|\mathcal{S}|$ is the number of states, $|\mathcal{A}|$ is the number of actions per state, $\gamma \in(0,1)$ is the discount factor, and $C$ is a problem-specific ergodicity measure. Rewards are bounded between 0 and 1. \label{table-sample}} \label{tab:literature_runtime_apx} \end{table*} \footnotetext{Although not explicitly stated, an immediate derivation shows that obtaining an $\epsilon$-optimal policy in \cite{azar2013minimax} requires $O(|S||A|(1-\gamma)^{-5}\epsilon^{-2})$ samples.} There exists a large body of literature on MDPs and RL (see e.g. \cite{kakade2003sample, strehl2009reinforcement, kalathil2014empirical, dann2015sample} and references therein). The classical MDP problem is to compute an optimal policy exactly or approximately, when the full MDP model is given as input. For a survey on existing complexity results when the full MDP model is given, see Appendix~\ref{sec:full model compare}. Despite the aforementioned results of \cite{kakade2003sample, azar2013minimax, sidford2018variance}, there exist only a handful of additional RL methods that achieve a small sample complexity and a small run-time complexity at the same time for computing an $\epsilon$-optimal policy. A classical result is the phased Q-learning method of \cite{kearns1999finite}, which takes samples from the generative model and runs a randomized value iteration.
The phased Q-learning method finds an $\epsilon$-optimal policy using $\mathcal{O}(|\mathcal{S}| |\mathcal{A}| \epsilon^{-2} / \text{poly}(1-\gamma))$ samples/updates, where each update uses $\wt{O}(1)$ run time.\footnote{The dependence on $(1-\gamma)$ in \cite{kearns1999finite} is not stated explicitly but we believe basic calculations yield $O(1/(1-\gamma)^7)$.} Another work \cite{wang2017randomized} gave a randomized mirror-prox method that applies to a special Bellman saddle point formulation of the DMDP. They achieve a total runtime of $\wt{O} (|\mathcal{S}|^3 |\mathcal{A}| \epsilon^{-2} (1-\gamma)^{-6} )$ for the general DMDP and $\wt{O} (C |\mathcal{S}| |\mathcal{A}| \epsilon^{-2} (1-\gamma)^{-4}) $ for DMDPs that are ergodic under all possible policies, where $C$ is a problem-specific ergodicity measure. A recent closely related work is \cite{sidford2018variance}, which gave a variance-reduced randomized value iteration that works with the generative model and finds an $\epsilon$-approximate policy in sample size/run time $\wt{O} (|\mathcal{S}| |\mathcal{A}| \epsilon^{-2} (1-\gamma)^{-4}) $, without requiring any ergodicity assumption. Finally, in the case where $\epsilon = {O}\Big(1/{\sqrt{(1-\gamma)|\mathcal{S}|}}\Big)$, \cite{azar2013minimax} showed that the solution obtained by performing exact PI on the empirical MDP model provides not only an $\epsilon$-optimal value but also an $\epsilon$-optimal policy. In this case, the number of samples is $\wt{O} ( |\mathcal{S}| |\mathcal{A}| (1-\gamma)^{-3} \epsilon^{-2})$ and matches the sample complexity lower bound. Although this sample complexity is optimal, it requires solving the empirical MDP exactly (see Appendix~\ref{sec:alg_value}), and is no longer sublinear in the size of the MDP model because of the very small approximation error $\epsilon = {O}(1/{\sqrt{(1-\gamma)|\mathcal{S}|}})$.
See Table~\ref{table-sample} for a list of comparable sample complexity results for solving MDPs based on the generative model. \section{Preliminaries} \label{sec:prelim} We use calligraphic upper-case letters for sets or operators, e.g., $\mathcal{S}$, $\mathcal{A}$ and $\mathcal{T}$. We use bold lower-case letters for vectors, e.g., ${\boldsymbol{v}}, {\boldsymbol{r}}$. We write ${\boldsymbol{v}}_{s}$ or ${\boldsymbol{v}}(s)$ for the $s$-th entry of a vector ${\boldsymbol{v}}$. We denote matrices by bold upper-case letters, e.g., ${\boldsymbol{P}}$, and constants by normal upper-case letters, e.g., $M$. For a vector ${\boldsymbol{v}}\in \mathbb{R}^{\mathcal{N}}$ with index set $\mathcal{N}$, we denote by $\sqrt{{\boldsymbol{v}}}$, $|{\boldsymbol{v}}|$, and ${\boldsymbol{v}}^2$ the vectors in $\mathbb{R}^{\mathcal{N}}$ obtained by applying $\sqrt{\cdot}$, $|\cdot|$, and $(\cdot)^2$ coordinate-wise. For two vectors ${\boldsymbol{v}}, {\boldsymbol{u}}\in \mathbb{R}^{\mathcal{N}}$, we write ${\boldsymbol{v}}\le {\boldsymbol{u}}$ for the coordinate-wise comparison, i.e., $\forall i\in \mathcal{N}: {\boldsymbol{v}}(i)\le {\boldsymbol{u}}(i)$; the same convention applies to the relations $\ge$, $<$ and $>$. We describe a DMDP by the tuple $(\mathcal{S}, \mathcal{A}, {\boldsymbol{P}}, {\boldsymbol{r}}, \gamma)$, where $\mathcal{S}$ is a finite state space, $\mathcal{A}$ is a finite action space, ${\boldsymbol{P}}\in\mathbb{R}^{\mathcal{S}\times \mathcal{A}\times \mathcal{S}}$ is the state-action-state transition matrix, ${\boldsymbol{r}}\in\mathbb{R}^{\mathcal{S}\times \mathcal{A}}$ is the state-action reward vector, and $\gamma \in (0, 1)$ is a discount factor. We use ${\boldsymbol{P}}_{s, a}(s')$ to denote the probability of going to state $s'$ from state $s$ when taking action $a$. We also identify each ${\boldsymbol{P}}_{s,a}$ with a vector in $\mathbb{R}^{\mathcal{S}}$.
We use ${\boldsymbol{r}}_{s,a}$ to denote the reward obtained from taking action $a \in \cA$ at state $s \in \cS$ and assume ${\boldsymbol{r}} \in [0, 1]^{\mathcal{S}\times \mathcal{A}}$.\footnote{A general ${\boldsymbol{r}}\in\mathbb{R}^{\mathcal{S}\times \mathcal{A}}$ can always be reduced to this case by shifting and scaling.} For a vector ${\boldsymbol{v}}\in \mathbb{R}^{\mathcal{S}}$, we denote by ${\boldsymbol{P}}{\boldsymbol{v}}\in \mathbb{R}^{\mathcal{S}\times \mathcal{A}}$ the vector with $({\boldsymbol{P}}{\boldsymbol{v}})_{s,a} = {\boldsymbol{P}}_{s,a}^\top {\boldsymbol{v}}$. A policy $\pi:\mathcal{S}\to\mathcal{A}$ maps each state to an action. The objective of the MDP is to find the optimal policy $\pi^*$ that maximizes the expectation of the cumulative sum of discounted rewards. In the remainder of this section we give definitions for several prominent concepts in MDP analysis that we use throughout the paper. \begin{definition}[Bellman Value Operator] For a given DMDP the \emph{value operator} $\mathcal{T} : \R^{\states} \mapsto \R^{\states}$ is defined for all ${\boldsymbol{u}} \in \R^{\states}$ and $s \in \cS$ by $\mathcal{T}({\boldsymbol{u}})_{s} = \max_{a \in \cA} [ {\boldsymbol{r}}_{s,a} + \gamma \cdot {\boldsymbol{P}}_{s,a}^\top {\boldsymbol{u}} ], \label{eq:value_operator} $ and we let $\bv^{*}$ denote the {\emph{value of the optimal policy $\pi^*$}}, which is the unique vector such that $\mathcal{T}(\bv^{*}) = \bv^{*}$. \end{definition} \begin{definition}[Policy] We call any vector $\pi \in \actions^\states$ a \emph{policy} and say that the action prescribed by policy $\pi$ to be taken at state $s \in \cS$ is $\pi_s$.
We let $\mathcal{T}_\pi : \R^{\states} \mapsto \R^{\states}$ denote the \emph{value operator associated with $\pi$} defined for all ${\boldsymbol{u}} \in \R^{\states}$ and $s \in \cS$ by $ \mathcal{T}_\pi({\boldsymbol{u}})_s = {\boldsymbol{r}}_{s, \pi(s)} + \gamma \cdot {\boldsymbol{P}}_{s, \pi(s)}^\top {\boldsymbol{u}} ~, $ and we let ${\boldsymbol{v}}^\pi$ denote the \emph{values of policy $\pi$}, which is the unique vector such that $\mathcal{T}_\pi({\boldsymbol{v}}^\pi) = {\boldsymbol{v}}^\pi$. \end{definition} Note that $\mathcal{T}_\pi$ can be viewed as the value operator for the modified MDP where the only available action from each state is given by the policy $\pi$. Note that this modified MDP is essentially just an uncontrolled Markov chain, i.e. there are no action choices that can be made. \begin{definition}[$\epsilon$-optimal value and policy] We say values ${\boldsymbol{u}} \in \R^{\states}$ are \emph{$\epsilon$-optimal} if $\| \bv^{*} - {\boldsymbol{u}}\|_{\infty} \leq \epsilon$ and policy $\pi \in \actions^\states$ is \emph{$\epsilon$-optimal} if $\| \bv^{*} - {\boldsymbol{v}}^{\pi}\|_{\infty} \leq \epsilon$, i.e. the values of $\pi$ are $\epsilon$-optimal. \end{definition} \begin{definition}[Q-function] For any policy $\pi$, we define the Q-function of a MDP with respect to $\pi$ as a vector $\boldsymbol{Q}\in \mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ such that $ \boldsymbol{Q}^{\pi}(s,a) = {\boldsymbol{r}}(s, a) + \gamma{\boldsymbol{P}}_{s, a}^\top {\boldsymbol{v}}^{\pi}. $ The optimal $Q$-function is defined as $\boldsymbol{Q}^{*} = \boldsymbol{Q}^{\pi^*}$.
We call any vector $\boldsymbol{Q}\in \mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ a Q-function even though it may not relate to a policy or a value vector and define ${\boldsymbol{v}}(\boldsymbol{Q})\in \mathbb{R}^{\mathcal{S}}$ and $\pi(\boldsymbol{Q})\in \mathcal{A}^{\mathcal{S}}$ as the value and policy implied by $\boldsymbol{Q}$, by \[ \forall s\in \mathcal{S}: {\boldsymbol{v}}(\boldsymbol{Q})(s) = \max_{a\in \mathcal{A}} \boldsymbol{Q}(s,a) \quad\text{and}\quad \pi(\boldsymbol{Q})(s) = \arg\max_{a\in \mathcal{A}} \boldsymbol{Q}(s,a). \] For a policy $\pi$, let ${\boldsymbol{P}}^{\pi}\boldsymbol{Q}\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ be defined as $({\boldsymbol{P}}^{\pi}\boldsymbol{Q})(s,a) = \sum_{s'\in\mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\boldsymbol{Q}(s',\pi(s'))$. \end{definition} \section{Technique Overview} \label{sec:overview} In this section we provide a more detailed and technical overview of our approach. At a high level, our algorithm shares a similar framework with the variance reduction algorithm presented in \cite{sidford2018variance}. That algorithm used two crucial algorithmic techniques, which are also critical in this paper. We refer to these techniques as the \emph{monotonicity} technique and the \emph{variance reduction} technique. Our algorithm and the results of this paper can be viewed as an advanced, non-trivial integration of these two methods, augmented with a third technique, which we refer to as the \emph{total-variance} technique and which was discovered in several papers \cite{munos1999variable, lattimore2012pac, azar2013minimax}. In the remainder of this section we give an overview of these techniques and, through this, explain our algorithm.
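In a minimal Python sketch (our own rendering, not code from the paper; `P` is the $|\mathcal{S}|\times|\mathcal{A}|\times|\mathcal{S}|$ transition tensor and `r` the $|\mathcal{S}|\times|\mathcal{A}|$ reward matrix), the operators and Q-function reductions defined in the preliminaries read:

```python
import numpy as np

def bellman(P, r, gamma, u):
    # Optimal value operator: T(u)_s = max_a [ r_{s,a} + gamma * P_{s,a}^T u ].
    return (r + gamma * P @ u).max(axis=1)

def bellman_pi(P, r, gamma, pi, u):
    # Policy value operator: T_pi(u)_s = r_{s,pi(s)} + gamma * P_{s,pi(s)}^T u.
    s_idx = np.arange(P.shape[0])
    return (r + gamma * P @ u)[s_idx, pi]

def value_and_policy(Q):
    # v(Q)(s) = max_a Q(s, a)  and  pi(Q)(s) = argmax_a Q(s, a).
    return Q.max(axis=1), Q.argmax(axis=1)

# 2-state, 2-action example (ours): action 0 stays put, action 1 swaps states;
# reward 1 only for staying at state 0.  With gamma = 1/2, v* = (2, 1).
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[1, 0, 1] = 1.0
P[0, 1, 1] = P[1, 1, 0] = 1.0
r = np.array([[1.0, 0.0], [0.0, 0.0]])
v_star = np.array([2.0, 1.0])
```

Here `v_star` is the fixed point of `bellman`, and `value_and_policy` applied to the Q-values at `v_star` recovers the optimal policy.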
\paragraph{The Monotonicity Technique} Recall that the classic value iteration algorithm for solving a MDP repeatedly applies the following rule \begin{align} \label{eqn:value iteration} {\boldsymbol{v}}^{(i)}(s)\gets \max_{a} (r(s,a) + \gamma{\boldsymbol{P}}_{s,a}^\top {\boldsymbol{v}}^{(i-1)}). \end{align} A greedy policy $\pi^{(i)}$ can be obtained at each iteration $i$ by \begin{align} \label{eqn:greedy policy} \forall s: \pi^{(i)}(s)\gets \argmax_{a} (r(s,a) + \gamma{\boldsymbol{P}}_{s,a}^\top {\boldsymbol{v}}^{(i)}). \end{align} For any $u>0$, it can be shown that if one can approximate ${\boldsymbol{v}}^{(i)}(s)$ with $\wh{{\boldsymbol{v}}}^{(i)}(s)$ such that $\|\wh{{\boldsymbol{v}}}^{(i)}-{\boldsymbol{v}}^{(i)}\|_{\infty}\le (1-\gamma)u$ and run the above value iteration algorithm using these approximated values, then after $\Theta((1-\gamma)^{-1}\log[u^{-1}(1-\gamma)^{-1}])$ iterations, the final iteration gives a value function that is $u$-optimal (\cite{bertsekas2013abstract}). However, a $u$-optimal value function only yields a $u/(1-\gamma)$-optimal greedy policy (in the worst case), even if \eqref{eqn:greedy policy} is precisely computed. To get around this additional loss, a monotone-VI algorithm was proposed in \cite{sidford2018variance} as follows. At each iteration, this algorithm maintains not only an approximate value ${\boldsymbol{v}}^{(i)}$ but also a policy $\pi^{(i)}$. The key to the improvement is to keep the values a lower bound on the value of the policy on a set of sample paths with high probability. In particular, the following {\it monotonicity condition} was maintained with high probability \[ {\boldsymbol{v}}^{(i)}\le \mathcal{T}_{\pi^{(i)}} ({\boldsymbol{v}}^{(i)}) ~. \] By the monotonicity of the Bellman operator, the above equation guarantees that ${\boldsymbol{v}}^{(i)}\le {\boldsymbol{v}}^{\pi^{(i)}}$.
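The exact iteration \eqref{eqn:value iteration} together with the greedy policy \eqref{eqn:greedy policy} can be sketched as follows (our Python illustration with exact expectations, i.e. no sampling; the small example MDP is ours):

```python
import numpy as np

def value_iteration(P, r, gamma, iters):
    """Run v <- max_a [ r_{s,a} + gamma * P_{s,a}^T v ] for `iters` rounds
    (iters >= 1); return the final value and the greedy policy of the last iterate."""
    v = np.zeros(P.shape[0])
    for _ in range(iters):
        q = r + gamma * P @ v     # Q-values under the current value estimate
        v = q.max(axis=1)
    return v, q.argmax(axis=1)

# 2-state example: action 0 stays, action 1 swaps states; reward 1 for staying
# at state 0.  Optimal values: v* = (2, 1) for gamma = 1/2.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[1, 0, 1] = 1.0   # action 0: stay
P[0, 1, 1] = P[1, 1, 0] = 1.0   # action 1: swap
r = np.array([[1.0, 0.0], [0.0, 0.0]])
v, pi = value_iteration(P, r, gamma=0.5, iters=60)
```

After 60 exact iterations the contraction has reduced the error to about $\gamma^{60}\|{\boldsymbol{v}}^*\|_\infty$, so `v` is numerically indistinguishable from $v^*=(2,1)$ and `pi` is the optimal policy (stay at state 0, move from state 1).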
If this condition is satisfied and, after $R$ iterations of approximate value iteration, we obtain a value $\wh{{\boldsymbol{v}}}^{(R)}$ that is $u$-optimal, then we also obtain a policy $\pi^{(R)}$ which, by the monotonicity condition and the monotonicity of the Bellman operator $\mathcal{T}_{\pi^{(R)}}$, yields \[ {\boldsymbol{v}}^{(R)} \le \mathcal{T}_{\pi^{(R)}}({\boldsymbol{v}}^{(R)}) \le \mathcal{T}_{\pi^{(R)}}^2 ({\boldsymbol{v}}^{(R)}) \le \ldots \le \mathcal{T}_{\pi^{(R)}}^{\infty} ({\boldsymbol{v}}^{(R)}) ={\boldsymbol{v}}^{\pi^{(R)}} \le {\boldsymbol{v}}^{*}, \] and therefore this $\pi^{(R)}$ is a $u$-optimal policy. Ultimately, this technique avoids the standard loss of a $(1-\gamma)^{-1}$ factor when converting values to policies. \paragraph{The Variance Reduction Technique} Suppose now that we provide an algorithm that maintains the monotonicity condition using random samples from ${\boldsymbol{P}}_{s,a}$ to approximately compute \eqref{eqn:value iteration}. Further, suppose we want to obtain a new value function and policy that are at least $(u/2)$-optimal. In order to obtain the desired accuracy, we need to approximate ${\boldsymbol{P}}_{s,a}^{\top}{\boldsymbol{v}}^{(i)}$ up to error at most $(1-\gamma)u/2$. Since $\|{\boldsymbol{v}}^{(i)}\|_{\infty}\le (1-\gamma)^{-1}$, by the Hoeffding bound, $\wt{O}((1-\gamma)^{-4}u^{-2})$ samples suffice. Note that the number of samples also determines the computation time; therefore each iteration takes $\wt{O}((1-\gamma)^{-4}u^{-2} |\mathcal{S}||\mathcal{A}|)$ samples/computation time, and $\wt{O}((1-\gamma)^{-1})$ iterations are needed for the value iteration to converge. Overall, this yields a sample/computation complexity of $\wt{O}((1-\gamma)^{-5}u^{-2} |\mathcal{S}||\mathcal{A}|)$.
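The sampled backup behind this count can be sketched as follows (our illustration; `m` plays the role of the $\wt{O}((1-\gamma)^{-4}u^{-2})$ per-pair sample budget, and the Hoeffding bound controls the error of the sample mean through `m` and $\|{\boldsymbol{v}}\|_\infty$):

```python
import numpy as np

def sampled_Pv(P, v, s, a, m, rng):
    """Monte Carlo estimate of P_{s,a}^T v: average v over m next states drawn
    from the generative model (here simulated from the known row P[s, a])."""
    next_states = rng.choice(len(v), size=m, p=P[s, a])
    return v[next_states].mean()

rng = np.random.default_rng(0)
P = np.array([[[0.0, 1.0]]])   # one state-action pair that always moves to state 1
v = np.array([3.0, 7.0])
est = sampled_Pv(P, v, 0, 0, m=100, rng=rng)   # exact here: the row is deterministic
```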
To reduce the $(1-\gamma)^{-5}$ dependence, \cite{sidford2018variance} uses properties of the input (and the initialization) vectors: $\|{\boldsymbol{v}}^{(0)} - {\boldsymbol{v}}^{*}\|_{\infty}\le u$, and rewrites value iteration \eqref{eqn:value iteration} as follows \begin{align} \label{eqn:value iteration variance reduction} {\boldsymbol{v}}^{(i)}(s)\gets \max_{a} \bigl[r(s,a) + {\boldsymbol{P}}_{s,a}^\top ({\boldsymbol{v}}^{(i-1)} - {\boldsymbol{v}}^{(0)}) + {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}\bigr]. \end{align} Notice that ${\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}$ is shared over all iterations, and we can approximate it up to error $(1-\gamma)u/4$ using only $\wt{O}((1-\gamma)^{-4}u^{-2})$ samples. For every iteration, we have $\|{\boldsymbol{v}}^{(i-1)} - {\boldsymbol{v}}^{(0)}\|_{\infty}\le u$ (recall that we demand that the monotonicity condition be satisfied at each iteration). Hence ${\boldsymbol{P}}_{s,a}^\top ({\boldsymbol{v}}^{(i-1)} - {\boldsymbol{v}}^{(0)})$ can be approximated up to error $(1-\gamma)u/4$ using only $\wt{O}((1-\gamma)^{-2})$ samples (note that there is no $u$-dependence here). By this technique, over $\wt{O}((1-\gamma)^{-1})$ iterations only $ \wt{O}((1-\gamma)^{-4}u^{-2} + (1-\gamma)^{-3}) $ samples/computation per state-action pair are needed, i.e. there is a $(1-\gamma)$ improvement.
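One variance-reduced backup in the spirit of \eqref{eqn:value iteration variance reduction} can be sketched as follows (our illustration, not the paper's implementation; the reference term ${\boldsymbol{P}}{\boldsymbol{v}}^{(0)}$ is assumed to have been estimated once with the large sample budget, and only the small residual is re-sampled with the per-iteration budget `m2`):

```python
import numpy as np

def vr_backup(P, r, gamma, w0, v_prev, v0, m2, rng):
    """One backup of  v <- max_a [ r + gamma*( P(v_prev - v0) + P v0 ) ].
    w0: a once-computed estimate of P v^{(0)} (shape |S| x |A|); the residual
    P(v_prev - v0), whose magnitude is only ||v_prev - v0||_inf, is
    re-estimated from m2 fresh samples per state-action pair."""
    S, A = r.shape
    resid = v_prev - v0
    g = np.empty((S, A))
    for s in range(S):
        for a in range(A):
            draws = rng.choice(S, size=m2, p=P[s, a])
            g[s, a] = resid[draws].mean()
    return (r + gamma * (w0 + g)).max(axis=1)

# Sanity check: with v_prev = v0 the residual vanishes, so the backup reduces
# to an exact Bellman backup through the reference estimate w0 = P v0.
rng = np.random.default_rng(0)
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[1, 0, 1] = 1.0
P[0, 1, 1] = P[1, 1, 0] = 1.0
r = np.array([[1.0, 0.0], [0.0, 0.0]])
v0 = np.array([1.0, 0.0])
out = vr_backup(P, r, 0.5, P @ v0, v0, v0, m2=10, rng=rng)
```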
In \cite{sidford2018variance} the update error in each iteration was set to be at most $(1-\gamma) u/2$ to compensate for error accumulation over a horizon of length $(1-\gamma)^{-1}$ (i.e., the accumulated error is the sum of the estimation errors over the iterations). To improve on this, we leverage previous work to show that the true error accumulation is much smaller. To see this, let us now switch to the Bernstein inequality. Suppose we would like to estimate the value function of some policy $\pi$. The estimation error vector of the value function is upper bounded by $\wt{O}(\sqrt{\boldsymbol{\sigma}_{\pi}/m})$, where $\boldsymbol{\sigma}_{\pi}(s) = \var_{s'\sim {\boldsymbol{P}}_{s, \pi(s)}}({\boldsymbol{v}}^{\pi}(s'))$ denotes the variance of the value of the next state when starting from state $s$ and playing policy $\pi$, and $m$ is the number of samples collected per state-action pair. The accumulated error due to estimating value functions can be shown to obey the following inequality (up to logarithmic factors) \[ \text{accumulated error}\propto\sum_{i=0}^{\infty} \gamma^{i}{\boldsymbol{P}}_{\pi}^i\sqrt{\boldsymbol{\sigma}_{\pi}/m} \le c_1 \left(\frac{1}{1-\gamma}\sum_{i=0}^{\infty}\gamma^{2i}{\boldsymbol{P}}_{\pi}^i\boldsymbol{\sigma}_{\pi}/m\right)^{1/2}, \] where $c_1$ is a constant and the inequality follows from a Cauchy--Schwarz-like inequality. According to the \emph{law of total variance}, for any given policy $\pi$ (in particular, the optimal policy $\pi^*$) and initial state $s$, the expected sum of the variances of the tail sums of rewards, $\sum\gamma^{2i}{\boldsymbol{P}}_{\pi}^i\boldsymbol{\sigma}_{\pi}$, is exactly the variance of the total return obtained by playing the policy $\pi$. This observation was previously used in the analysis of \cite{munos1999variable, lattimore2012pac, azar2013minimax}.
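This can be checked numerically on a small chain (our own illustration: for a fixed policy with transition matrix $P_\pi$ the geometric sum $\sum_i \gamma^{2i}P_\pi^i\boldsymbol{\sigma}_\pi$ has the closed form $(I-\gamma^2 P_\pi)^{-1}\boldsymbol{\sigma}_\pi$, and the law of total variance relates it to the variance of the total return, which yields the $(1-\gamma)^{-2}$ bound far below the naive term-by-term estimate):

```python
import numpy as np

rng = np.random.default_rng(1)
S, gamma = 5, 0.9

# Random Markov chain (a fixed policy pi already applied), rewards in [0, 1].
P = rng.random((S, S)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(S)

v = np.linalg.solve(np.eye(S) - gamma * P, r)   # v^pi = (I - gamma P)^{-1} r
sigma = P @ v**2 - (P @ v)**2                   # sigma_pi(s) = Var_{s'~P_s}(v(s'))

# Closed form of the geometric sum  sum_i gamma^{2i} P^i sigma.
total = np.linalg.solve(np.eye(S) - gamma**2 * P, sigma)

# Law-of-total-variance consequence: the sum is bounded by (1-gamma)^{-2},
# since it equals (up to a gamma^2 factor) the variance of the bounded return.
assert (total <= (1 - gamma) ** -2).all()
```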
Since the upper bound on the total return is $(1-\gamma)^{-1}$, it can be shown that $\sum_i\gamma^{2i}{\boldsymbol{P}}_{\pi}^i\boldsymbol{\sigma}_{\pi}\le (1-\gamma)^{-2}\cdot \boldsymbol{1}$, and therefore the total error accumulation is $\sqrt{(1-\gamma)^{-3}/m}$. Thus picking $m\approx(1-\gamma)^{-3}\epsilon^{-2}$ is sufficient to control the accumulated error (instead of $(1-\gamma)^{-4}$). To analyze our algorithm, we will apply the above inequality to the optimal policy $\pi^*$ to obtain our final error bound. \paragraph{Putting it All Together} In the next section we show how to combine these three techniques into one algorithm and make them work seamlessly. In particular, we provide and analyze Algorithm~\ref{alg-halfErr}, which can be used to at least halve the error of a current policy. Applying this routine a logarithmic number of times then yields our desired bounds. In the input of the algorithm, we demand that the input value ${\boldsymbol{v}}^{(0)}$ and policy $\pi^{(0)}$ satisfy the required monotonicity condition, i.e., ${\boldsymbol{v}}^{(0)}\le \mathcal{T}_{\pi^{(0)}} ({\boldsymbol{v}}^{(0)})$ (in the first iteration, the zero vector $\bf{0}$ and an arbitrary policy $\pi$ satisfy the requirement). We then pick a set of samples to estimate ${\boldsymbol{P}} {\boldsymbol{v}}^{(0)}$ accurately with $\wt{O}((1-\gamma)^{-3}\epsilon^{-2})$ samples per state-action pair. The same set of samples is used to estimate the variance vector $\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}$. These estimates serve as the initialization of the algorithm. In each iteration $i$, we draw fresh samples to compute an estimate of ${\boldsymbol{P}} ({\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)})$. The sum of the estimates of ${\boldsymbol{P}} {\boldsymbol{v}}^{(0)}$ and of ${\boldsymbol{P}} ({\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)})$ gives an estimate of ${\boldsymbol{P}} {\boldsymbol{v}}^{(i)}$.
We then make the above estimates have one-sided error by shifting them down by their estimated errors (obtained from Bernstein's inequality). These one-sided-error estimates allow us to preserve monotonicity, i.e., they guarantee that the value iterates are always improving along the entire sample path with high probability. The estimate of ${\boldsymbol{P}} {\boldsymbol{v}}^{(i)}$ is plugged into the Bellman operator, giving a new value function ${\boldsymbol{v}}^{(i+1)}$ and policy $\pi^{(i+1)}$ that satisfy monotonicity and have improved accuracy. Repeating the above procedure for the desired number of iterations completes the algorithm.
\begin{algorithm}[htb!]
\caption{Variance-Reduced QVI\label{alg-halfErr}}
\begin{algorithmic}[1]
\State \textbf{Input:} A sampling oracle for DMDP $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}}, \gamma)$
\State \textbf{Input:} Upper bound on error $u\in[0,(1-\gamma)^{-1}]$ and error probability $\delta \in (0, 1)$
\State \textbf{Input:} Initial values ${\boldsymbol{v}}^{(0)}$ and policy $\pi^{(0)}$ such that ${\boldsymbol{v}}^{(0)}\le \mathcal{T}_{\pi^{(0)}}{\boldsymbol{v}}^{(0)}$ and ${\boldsymbol{v}}^*-{\boldsymbol{v}}^{(0)} \le u \boldsymbol{1}$
\State \textbf{Output:} ${\boldsymbol{v}}, \pi$ such that ${\boldsymbol{v}} \le \mathcal{T}_{\pi}({\boldsymbol{v}})$ and ${\boldsymbol{v}}^*-{\boldsymbol{v}} \le (u/2)\cdot\boldsymbol{1}$
\State
\State \textbf{INITIALIZATION:}
\State Let $\beta\gets (1-\gamma)^{-1}$, and $R\gets\lceil c_1\beta\ln[\beta u^{-1}]\rceil$ for constant $c_1$;
\State Let $m_1 \gets c_2\beta^3u^{-2}\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})$ for constant $c_2$;
\State Let $m_2\gets c_3\beta^{2}\log[2R|\mathcal{S}||\mathcal{A}|\delta^{-1}]$ for constant $c_3$;
\State Let $\alpha_1\gets{m_1}^{-1}{\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})}$;
\State For each $(s, a)\in \mathcal{S}\times\mathcal{A}$, sample independent samples $s_{s,a}^{(1)}, s_{s,a}^{(2)}, \ldots, s_{s,a}^{(m_1)}$ from ${\boldsymbol{P}}_{s,a}$;
\State Initialize $\boldsymbol{w}=\wt{\boldsymbol{w}} = \wh{\boldsymbol{\sigma}}=\boldsymbol{Q}^{(0)} \gets {\bf0}_{\mathcal{S} \times \mathcal{A}}$, and $i\gets 0$;
\For{each $(s, a)\in \mathcal{S}\times\mathcal{A}$}
\State \emph{\textbackslash \textbackslash Compute empirical estimates of ${\boldsymbol{P}}_{s,a}^{\top}{\boldsymbol{v}}^{(0)}$ and $\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}(s,a)$}
\State Let $\wt{\boldsymbol{w}}(s,a) \gets \frac{1}{m_1} \sum_{j=1}^{m_1} {\boldsymbol{v}}^{(0)}(s_{s,a}^{(j)})$ \label{alg1: compute w1}
\State Let $\wh{\boldsymbol{\sigma}}(s,a)\gets \frac{1}{m_1} \sum_{j=1}^{m_1}({\boldsymbol{v}}^{(0)})^2(s_{s,a}^{(j)}) - \wt{\boldsymbol{w}}^2(s,a)$ \label{alg1: compute w}
\State
\State \emph{\textbackslash \textbackslash Shift the empirical estimate to have one-sided error and guarantee monotonicity}
\State $\boldsymbol{w}(s, a) \gets \wt{\boldsymbol{w}}(s,a) - \sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}(s,a)} - 4\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty - (2/3)\alpha_1\norm{{\boldsymbol{v}}^{(0)}}_{\infty}$
\State
\State \emph{\textbackslash \textbackslash Compute coarse estimate of the
$Q$-function}
\State $\boldsymbol{Q}^{(0)}(s,a) \gets {\boldsymbol{r}}(s,a) + \gamma \boldsymbol{w}(s,a)$
\EndFor
\State
\State \textbf{REPEAT:} \emph{\qquad\qquad\textbackslash \textbackslash successively improve}
\For{$i=1$ to $R$}
\State \emph{\textbackslash \textbackslash Compute $\boldsymbol{g}^{(i)}$, the estimate of ${\boldsymbol{P}} \big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big]$ with one-sided error}
\State\label{alg: v1} Let ${{\boldsymbol{v}}}^{(i)} \gets {\boldsymbol{v}}( \boldsymbol{Q}^{(i-1)})$, ${\pi}^{(i)}\gets \pi(\boldsymbol{Q}^{(i-1)})$; \textbackslash \textbackslash \emph{let $\wt{{\boldsymbol{v}}}^{(i)}\gets {\boldsymbol{v}}^{(i)}, \wt{\pi}^{(i)}\gets {\pi}^{(i)}$ (for analysis)};
\State\label{alg: v2} For each $s\in \mathcal{S}$, if ${{\boldsymbol{v}}}^{(i)}(s)\le {\boldsymbol{v}}^{(i-1)}(s)$, then ${\boldsymbol{v}}^{(i)}(s)\gets {\boldsymbol{v}}^{(i-1)}(s)$ and $\pi^{(i)}(s)\gets \pi^{(i-1)}(s)$;
\State For each $(s, a)\in \mathcal{S}\times\mathcal{A}$, draw independent samples $\wt{s}_{s,a}^{(1)}, \wt{s}_{s,a}^{(2)}, \ldots, \wt{s}_{s,a}^{(m_2)}$ from ${\boldsymbol{P}}_{s,a}$;
\State \label{alg1: compute g} Let $\boldsymbol{g}^{(i)}(s,a)\gets {\frac{1}{m_2}} \sum_{j=1}^{m_2} \big[{\boldsymbol{v}}^{(i)}(\wt{s}_{s,a}^{(j)}) - {\boldsymbol{v}}^{(0)}(\wt{s}_{s,a}^{(j)}) \big]- (1-\gamma)u/8$;
\State \emph{\textbackslash \textbackslash Improve $\boldsymbol{Q}^{(i)}$}
\State \label{alg: q-func} $\boldsymbol{Q}^{(i)}\gets {\boldsymbol{r}} + \gamma\cdot[\boldsymbol{w}+\boldsymbol{g}^{(i)}]$;
\EndFor
\State \textbf{return} ${\boldsymbol{v}}^{(R)}, \pi^{(R)}$.
\end{algorithmic}
\end{algorithm}
\section{Algorithm and Analysis}
\label{sec:alg_policy}
In this section we provide and analyze our near sample/time-optimal $\epsilon$-policy computation algorithm.
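The one-sided shift in the initialization of Algorithm~\ref{alg-halfErr} can be illustrated numerically (a sketch under our own toy setup; the constant inside $\alpha_1$ is illustrative): subtracting the Bernstein-style confidence terms from the empirical mean yields, with high probability, an estimate lying below ${\boldsymbol{P}}_{s,a}^{\top}{\boldsymbol{v}}^{(0)}$, which is exactly what preserves monotonicity.

```python
import numpy as np

# Numerical illustration (ours, not from the paper) of the one-sided shift:
# subtracting Bernstein-style confidence terms from the empirical mean gives
# an underestimate of P_{s,a}^T v^(0) with high probability.
rng = np.random.default_rng(2)
n, m1 = 100, 1000
p = rng.random(n); p /= p.sum()           # transition row P_{s,a}
v0 = rng.random(n)                        # value function v^(0), entries in [0, 1]
alpha1 = np.log(100) / m1                 # plays the role of L / m_1

samples = rng.choice(n, size=m1, p=p)
w_tilde = v0[samples].mean()              # empirical estimate of P^T v^(0)
sigma_hat = (v0[samples] ** 2).mean() - w_tilde**2   # empirical variance

vmax = np.abs(v0).max()
w = (w_tilde - np.sqrt(2 * alpha1 * sigma_hat)
     - 4 * alpha1**0.75 * vmax - (2 / 3) * alpha1 * vmax)

true_mean = p @ v0
assert w <= true_mean                     # one-sided: shifted estimate is below
assert true_mean - w < 0.2                # ...but not by too much
```

The shift is several times larger than the sampling fluctuation, so the estimate sits below the truth, yet is still close enough for the contraction analysis.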
As discussed in Section~\ref{sec:overview}, our algorithm combines three main ideas: variance reduction, the monotone value/policy iteration, and the reduction of accumulated error via Bernstein's inequality. These ingredients are used in Algorithm~\ref{alg-halfErr} to provide a routine that halves the error of a given policy. We analyze this procedure in Section~\ref{sub:var_reduce} and use it to obtain our main result in Section~\ref{sub:halving}.
\subsection{The Analysis of the Variance Reduced Algorithm}
\label{sub:var_reduce}
In this section we analyze Algorithm~\ref{alg-halfErr}, showing that each iteration of the algorithm approximately contracts towards the optimal value and policy, and that ultimately the algorithm halves the error of the input value and policy with high probability. All proofs in this section are deferred to Appendix~\ref{sec:proof of main alg}. We start by bounding the errors of $\wt{\boldsymbol{w}}$ and $\wh{\boldsymbol{\sigma}}$ defined in Lines \ref{alg1: compute w1} and \ref{alg1: compute w} of Algorithm~\ref{alg-halfErr}. Notice that these are the empirical estimates of ${\boldsymbol{P}}_{s,a}^{\top}{\boldsymbol{v}}^{(0)}$ and $\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}(s,a)$.
\begin{lemma}[Empirical Estimation Error]
\label{lemma: emprical mean}
Let $\wt{\boldsymbol{w}}$ and $\wh{\boldsymbol{\sigma}}$ be computed in Lines \ref{alg1: compute w1} and \ref{alg1: compute w} of Algorithm \ref{alg-halfErr}. Recall that $\wt{\boldsymbol{w}}$ and $\wh{\boldsymbol{\sigma}}$ are empirical estimates of ${\boldsymbol{P}}{\boldsymbol{v}}^{(0)}$ and $\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}={\boldsymbol{P}}({\boldsymbol{v}}^{(0)})^2 - ({\boldsymbol{P}}{\boldsymbol{v}}^{(0)})^2$ using $m_1$ samples per $(s,a)$ pair.
With probability at least $1-\delta$, for $L \defeq \log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})$, we have
\begin{align}
\label{eqn:estimate pv}
\big|{\wt{\boldsymbol{w}} - {\boldsymbol{P}}^\top{\boldsymbol{v}}^{(0)}}\big|\le \sqrt{{2m_1^{-1}\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}\cdot{L}}} + {2(3m_1)^{-1}\norm{{\boldsymbol{v}}^{(0)}}_{\infty} {L}}
\end{align}
and
\begin{align}
\label{eqn:estimate sigma}
\forall (s,a)\in \mathcal{S}\times \mathcal{A}:\quad \big|\wh{\boldsymbol{\sigma}}(s,a) - \boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}(s, a)\big| \le 4\norm{{\boldsymbol{v}}^{(0)}}_{\infty}^2 \cdot \sqrt{{2m_1^{-1}{L}}}.
\end{align}
\end{lemma}
The proof is a straightforward application of Bernstein's inequality and Hoeffding's inequality. Next we show that the difference between $\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}$ and $\boldsymbol{\sigma}_{{\boldsymbol{v}}^{*}}$ is also bounded.
\begin{lemma}
\label{lemma: variance triangle}
Suppose $\norm{{\boldsymbol{v}}-{\boldsymbol{v}}^*}_{\infty}\le \epsilon$ for some $\epsilon > 0$; then
$
\sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}}} \le \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}} + \epsilon\cdot\boldsymbol{1}.
$
\end{lemma}
Next we show that in Line~\ref{alg1: compute g}, the computed $\boldsymbol{g}^{(i)}$ concentrates to, and is an underestimate of, ${\boldsymbol{P}}[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}]$ with high probability.
\begin{lemma}
\label{lemma:bounds on g}
Let $\boldsymbol{g}^{(i)}$ be the estimate of ${\boldsymbol{P}}\big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big]$ defined in Line~\ref{alg1: compute g} of Algorithm~\ref{alg-halfErr}.
Then, conditioned on the event that $\norm{{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}}_{\infty}\le 2u$, with probability at least $1-\delta/R$,
\[
{\boldsymbol{P}} \big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big] - \frac{(1-\gamma)u}{4}\cdot\boldsymbol{1}\le \boldsymbol{g}^{(i)} \le {\boldsymbol{P}} \big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big],
\]
provided appropriately chosen constants $c_1$, $c_2$, and $c_3$ in Algorithm~\ref{alg-halfErr}.
\end{lemma}
Now we present the key contraction lemma, in which we set the constants $c_1, c_2, c_3$ in Algorithm~\ref{alg-halfErr} to be sufficiently large (e.g., $c_1 \ge 4, c_2\ge8192, c_3\ge128$). Note that these constants only need to be sufficiently large so that the concentration inequalities hold.
\begin{lemma}
\label{lemma:induction lemma}
Let $\boldsymbol{Q}^{(i)}$ be the estimated $Q$-function of ${\boldsymbol{v}}^{(i)}$ in Line~\ref{alg: q-func} of Algorithm~\ref{alg-halfErr}. Let $\pi^{(i)}$ and ${\boldsymbol{v}}^{(i)}$ be estimated in iteration $i$, as defined in Lines~\ref{alg: v1} and \ref{alg: v2}. Then, with probability at least $1- 2\delta$, for all $1\le i \le R$,
\[
{\boldsymbol{v}}^{(i-1)}\le {\boldsymbol{v}}^{(i)}\le \mathcal{T}_{\pi^{(i)}}[{\boldsymbol{v}}^{(i)}],\quad \boldsymbol{Q}^{(i)}\le {\boldsymbol{r}}+\gamma{\boldsymbol{P}} {\boldsymbol{v}}^{(i)}, \quad\text{and}\quad \boldsymbol{Q}^* - \boldsymbol{Q}^{(i)} \le \gamma {\boldsymbol{P}}^{\pi^*}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(i-1)}\big] +\boldsymbol{\xi},
\]
where for $\alpha_1=m_1^{-1}L< 1$ the error vector $\boldsymbol{\xi}$ satisfies
\[
{\bf0}\le \boldsymbol{\xi}\le C\sqrt{\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}} +\Big[{(1-\gamma)u}/C+ C\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty\Big]\cdot\boldsymbol{1},
\]
for some sufficiently large constant $C\ge 8$.
\end{lemma}
Using the previous lemmas we can prove the guarantees of Algorithm~\ref{alg-halfErr}.
\begin{proposition}
\label{thm:alg 1}
On an input value vector ${\boldsymbol{v}}^{(0)}$, policy $\pi^{(0)}$, and parameters $u\in(0, (1-\gamma)^{-1}], \delta\in(0,1)$ such that ${\boldsymbol{v}}^{(0)}\le \mathcal{T}_{\pi^{(0)}}[{\boldsymbol{v}}^{(0)}]$ and ${\boldsymbol{v}}^*-{\boldsymbol{v}}^{(0)} \le u \boldsymbol{1}$, Algorithm~\ref{alg-halfErr} halts in time
\[
O\bigg[\bigg(u^{-2} + \log\frac{1}{(1-\gamma)u}\bigg)\cdot \frac{|\mathcal{S}||\mathcal{A}|\cdot \log(|\mathcal{S}||\mathcal{A}|\delta^{-1})}{(1-\gamma)^{3}} \bigg]
\]
and outputs values ${\boldsymbol{v}}$ and policy $\pi$ such that $ {\boldsymbol{v}} \le \mathcal{T}_{\pi}({\boldsymbol{v}})$ and $ {\boldsymbol{v}}^*-{\boldsymbol{v}} \le (u/2) \boldsymbol{1}$ with probability at least $1-\delta$, provided appropriately chosen constants $c_1, c_2, c_3$.
\end{proposition}
We prove this proposition by iteratively applying Lemma~\ref{lemma:induction lemma}. Suppose ${\boldsymbol{v}}^{(R)}$ is the output of the algorithm after $R$ iterations. We show
$
{\boldsymbol{v}}^* - {\boldsymbol{v}}^{(R)} \le\gamma^{R-1}{\boldsymbol{P}}^{\pi^*}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big] + (\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{\xi}.
$
Notice that $(\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{\xi}$ is related to $(\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}}$. We then apply the variance analytical tools presented in Section~\ref{sec:var_bounds} to show that $(\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{\xi} \le (u/4) \boldsymbol{1}$ when setting the constants properly in Algorithm~\ref{alg-halfErr}. We refer to this technique as the \emph{total-variance technique}, since $\|(\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}}\|_{\infty}^2\le O[(1-\gamma)^{-3}]$ instead of the na\"ive bound of $(1-\gamma)^{-4}$.
We complete the proof by choosing $R=\wt{\Theta}((1-\gamma)^{-1}\log(u^{-1}))$ and showing that $\gamma^{R-1}{\boldsymbol{P}}^{\pi^*}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big]\le (u/4) \boldsymbol{1}$.
\subsection{From Halving the Error to Arbitrary Precision}
\label{sub:halving}
In the previous section, we provided an algorithm that, on an input policy, outputs a policy whose value vector has $\ell_\infty$ distance to the optimal value vector at most half that of the input one. In this section, we give a complete policy computation algorithm by showing that it is possible to apply this error ``halving'' procedure iteratively. We refer to Algorithm~\ref{alg-halfErr} as a subroutine \textsc{HalfErr}, which, given an input MDP $\mathcal{M}$ with a sampling oracle, an input value function ${\boldsymbol{v}}^{(i)}$, and an input policy $\pi^{(i)}$, outputs a value function ${\boldsymbol{v}}^{(i+1)}$ and a policy $\pi^{(i+1)}$. We summarize our meta algorithm in Algorithm~\ref{alg-meta}. Note that in the algorithm, each call of $\textsc{HalfErr}$ draws new samples from the sampling oracle.
{\small
\begin{algorithm}[tb!]\caption{Meta Algorithm\label{alg-meta}}
\begin{algorithmic}[1]
\State \textbf{Input:} A sampling oracle of some $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}}, \gamma)$, $\epsilon>0, \delta\in(0,1)$
\State \textbf{Initialize:} ${\boldsymbol{v}}^{(0)}\gets{\bf 0}$, $\pi^{(0)}\gets$ arbitrary policy, $R\gets \Theta[\log(\epsilon^{-1}(1-\gamma)^{-1})]$
\For{$i=\{1,2,\ldots,R\}$}
\State //\emph{\textsc{HalfErr}~is initialized with QVI$(u=2^{-i+1}(1-\gamma)^{-1}, \delta, {\boldsymbol{v}}^{(0)}={\boldsymbol{v}}^{(i-1)}, \pi^{(0)}=\pi^{(i-1)})$}
\State ${\boldsymbol{v}}^{(i)},\pi^{(i)}\gets\textsc{HalfErr}({\boldsymbol{v}}^{(i-1)},\pi^{(i-1)})$
\EndFor
\State \textbf{Output:} ${\boldsymbol{v}}^{(R)},\pi^{(R)}$.
\end{algorithmic}
\end{algorithm}
}
Combining Algorithm~\ref{alg-meta} and Algorithm~\ref{alg-halfErr}, we are ready to present our main result.
\begin{theorem}
\label{thm:dmdp1}
Let $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{P}}, {\boldsymbol{r}}, \gamma)$ be a DMDP with a generative model. Suppose we can sample a state from each probability vector ${\boldsymbol{P}}_{s,a}$ within time $O(1)$. Then for any $\epsilon,\delta\in(0,1)$, there exists an algorithm that halts in time
\[
T:=O\left[\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|\mathcal{S}||\mathcal{A}|}{ (1-\gamma)\delta\epsilon}\right)\log\left(\frac{1}{(1-\gamma)\epsilon}\right)\right]
\]
and obtains a policy $\pi$ such that ${\boldsymbol{v}}^* - \epsilon\boldsymbol{1}\le {\boldsymbol{v}}^{\pi} \le {\boldsymbol{v}}^*$, with probability at least $1-\delta$, where ${\boldsymbol{v}}^{*}$ is the optimal value of $\mathcal{M}$. The algorithm uses space $O(|\mathcal{S}||\mathcal{A}|)$ and queries the generative model for at most $O(T)$ fresh samples.
\end{theorem}
\begin{remark}
In the above theorem, we require $\epsilon\in(0,1)$. For $\epsilon\ge 1$, our sample complexity may fail to be optimal. We leave this for future work.
\end{remark}
\begin{remark}
The full analysis of the halving algorithm is presented in Section~\ref{sec:half}. Our algorithm can be implemented in space $O(|\mathcal{S}||\mathcal{A}|)$ since, in Algorithm~\ref{alg-halfErr}, the initialization phase can be carried out for each $(s,a)$ by computing $\boldsymbol{w}(s,a), \wt{\boldsymbol{w}}(s,a),$ $\wh{\boldsymbol{\sigma}}(s,a), \boldsymbol{Q}^{(0)}(s,a)$ without storing the samples. The updates can be computed in space $O(|\mathcal{S}||\mathcal{A}|)$ as well.
\end{remark}
\section{Concluding Remark}
In summary, for a discounted Markov Decision Process (DMDP) $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{P}}, {\boldsymbol{r}}, \gamma)$, provided we can only access the transition function of the DMDP through a generative sampling model, we give an algorithm which computes an $\epsilon$-approximate optimal policy (for $\epsilon\in(0,1)$) with probability $1 - \delta$, where both the time spent and the number of samples taken are upper bounded by $\wt{O}((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}||\mathcal{A}|)$. This improves upon the previous best known bounds by a factor of $1/(1 - \gamma)$ and matches the lower bounds proved in \cite{azar2013minimax} up to logarithmic factors.
The appendix is structured as follows. Section~\ref{sec:full model compare} surveys the existing runtime results for solving the DMDP when a full model is given. Section~\ref{sec:alg_value} provides a runtime-optimal algorithm for computing approximate value functions (by directly combining \cite{azar2013minimax} and \cite{sidford2018variance}). Section~\ref{sec:var_bounds} gives technical analysis and variance upper bounds for the total-variance technique. Section~\ref{sec:lower bound} discusses sample complexity lower bounds for obtaining approximate policies with a generative sampling model. Section~\ref{sec:missing proof} provides proofs of lemmas, propositions, and theorems in the main text of the paper. Section~\ref{sec:finite_horizon} extends our method and results to the finite-horizon MDP and provides a nearly matching sample complexity lower bound.
\appendix
\section{Previous Work on Solving DMDP with a Full Model}
\label{sec:full model compare}
Value iteration was proposed by \cite{bellman1957dynamic} to compute an exact optimal policy of a given DMDP in time $\mathcal{O}(({1-\gamma})^{-1}|\mathcal{S}|^2|\mathcal{A}| L {\log((1-\gamma)^{-1})})$, where $L$ is the total number of bits needed to represent the input; it can find an $\epsilon$-approximate solution in time $\mathcal{O}(|\mathcal{S}|^2 |\mathcal{A}| (1-\gamma)^{-1} \log(1/\epsilon(1-\gamma)))$; see e.g. \cite{tseng1990solving, littman1995complexity}. Policy iteration was introduced by \cite{howard1960dynamic} shortly after, where the policy is monotonically improved according to its associated value function. Its complexity has also been analyzed extensively; see e.g. \cite{mansour1999complexity,ye2011simplex,scherrer2013improved}. Ye \cite{ye2011simplex} showed that policy iteration and the simplex method are strongly polynomial for DMDPs and terminate in $\mathcal{O}(|\mathcal{S}|^2|\mathcal{A}| (1-\gamma)^{-1}\log(|\mathcal{S}|(1-\gamma)^{-1}))$ iterations. Later, \cite{hansen13} and \cite{scherrer2013improved} improved the iteration bound to $O(|\mathcal{S}||\mathcal{A}| (1-\gamma)^{-1} \log((1-\gamma)^{-1}))$ for Howard's policy iteration method. A third approach is to formulate the nonlinear Bellman equation as a linear program \cite{d1963probabilistic, de1960problemes} and solve it using standard linear program solvers, such as the simplex method by Dantzig \cite{dantzig2016linear} and the combinatorial interior-point algorithm by \cite{ye2005new}.
\cite{lee2014path, lee2015efficient} showed that one can solve linear programs in $\wt{O}(\sqrt{\hbox{rank}(A)} )$ linear system solves, which, applied to the DMDP, yields a running time of $\wt{O}( |\mathcal{S}|^{2.5} |\mathcal{A}| L )$ for computing the exact policy and $\wt{O}( |\mathcal{S}|^{2.5} |\mathcal{A}| \log(1/\epsilon))$ for computing an $\epsilon$-optimal policy. \cite{sidford2018variance} further improved the complexity of value iteration by using randomization and variance reduction. See Table~\ref{table-exact} for comparable run-time results for computing the optimal policy when the MDP model is fully given.
\begin{table}[htb!]
\begin{center}
\small
\centering
\begin{tabular}{|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{5cm}|>{\centering\arraybackslash}m{3cm}|}
\hline
\textbf{Algorithm} & \textbf{Complexity} & \textbf{References} \\
\hline
Value Iteration (exact) & $|\mathcal{S}|^2|\mathcal{A}| L \frac{\log(1/(1-\gamma))}{1-\gamma}$ & \cite{tseng1990solving, littman1995complexity}\\
\hline
Value Iteration & $|\mathcal{S}|^2|\mathcal{A}| \frac{\log(1/(1-\gamma)\epsilon)}{1-\gamma}$ & \cite{tseng1990solving, littman1995complexity}\\
\hline
Policy Iteration (Block Simplex) & $\frac{|\mathcal{S}|^4|\mathcal{A}|^2}{1-\gamma} \log(\frac1{1-\gamma})$ & \cite{ye2011simplex},\cite{scherrer2013improved} \\
\hline
& & \\[-1em]
Recent Interior Point Methods & $\wt{O}( |\mathcal{S}|^{2.5}|\mathcal{A}| L )$ $\wt{O}( |\mathcal{S}|^{2.5} |\mathcal{A}| \log(1/\epsilon))$ & \cite{lee2014path} \\[-1em]
& & \\
\hline
Combinatorial Interior Point Algorithm & $|\mathcal{S}|^4|\mathcal{A}|^4\log\frac{|\mathcal{S}|}{1-\gamma}$ & \cite{ye2005new} \\
\hline
& & \\[-1em]
High Precision Randomized Value Iteration & $ \wt{O} \bigg[\left(\nnz(P) +\frac{|\mathcal{S}| |\mathcal{A}|}{(1 - \gamma)^3} \right) \log\left(\frac{1}{\epsilon\delta}\right) \bigg]$ & \cite{sidford2018variance}\\[-1em]
& & \\
\hline
\end{tabular}
\end{center}
\caption{ \small \textbf{Running Times to Solve DMDPs Given the Full MDP Model}: In this table, $|\mathcal{S}|$ is the number of states, $|\mathcal{A}|$ is the number of actions per state, $\gamma \in(0,1)$ is the discount factor, and $L$ is a complexity measure of the linear program formulation that is at most the total bit size needed to present the DMDP input. Rewards are bounded between 0 and 1. \label{table-exact}}
\label{tab:literature_runtime_exact}
\end{table}
\section{Sample and Time Efficient Value Computation}
\label{sec:alg_value}
In this section, we describe an algorithm that obtains $\epsilon$-optimal values in time $\wt{O}(\epsilon^{-2}(1-\gamma)^{-3}|\mathcal{S}||\mathcal{A}|)$. Note that the time and number of samples of this algorithm are optimal (up to logarithmic factors) due to the lower bound in \cite{azar2013minimax}, which also established this upper bound on the sample complexity (but not the time complexity) of the problem. We achieve this by combining the algorithms in \cite{azar2013minimax} and \cite{sidford2018variance}. First, we use the ideas and analysis of \cite{azar2013minimax} to construct a sparse MDP whose optimal value function approximates the optimal value function of the original MDP, and then we run the high precision algorithm of \cite{sidford2018variance} on this sparsified MDP. We show that \cite{sidford2018variance} runs in nearly linear time on the sparsified MDP. Since the number of samples taken to construct the sparsified MDP is the optimal number of samples needed to solve the problem, the ultimate running time we achieve is nearly optimal, as any algorithm must spend at least as much time as the number of samples it obtains. We include this for completeness, but note that the approximate value function we show how to compute here does not suffice to compute a policy of comparable quality for the MDP.
The greedy policy of an $\epsilon$-optimal value function is an $\epsilon/(1-\gamma)$-optimal policy in the worst case. It has been shown in \cite{azar2013minimax} that the greedy policy of their value function is $\epsilon$-optimal if $\epsilon \le {(1-\gamma)^{1/2}|\mathcal{S}|^{-1/2}}$. However, when $\epsilon$ is this small, the seemingly sublinear runtime $\wt{O}((1-\gamma)^{-3}|\mathcal{S}||\mathcal{A}|/\epsilon^2)$ essentially amounts to a linear running time and sample complexity of $O((1-\gamma)^{-3}|\mathcal{S}|^2|\mathcal{A}|)$. The running time can be obtained by merely applying the result in \cite{sidford2018variance} (although with a slightly different computation model).
\subsection{The Sparsified DMDP}
\label{sec:sparsify mdp}
Suppose we are given a DMDP $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}}, \gamma)$ with a sampling oracle. To approximate the optimal value of this MDP, we perform a sparsification procedure as in \cite{azar2013minimax}. Sparsification of the DMDP is conducted as follows. Let $\delta>0,\epsilon>0$ be arbitrary. First we pick a number
\begin{align}
\label{eqn:num samples per sa}
m=\Theta\left[\frac{1}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|\mathcal{S}||\mathcal{A}|}{\delta}\right)\right] ~.
\end{align}
For each $s\in \mathcal{S}$ and each $a\in \mathcal{A}$, we generate a sequence of independent samples from $\mathcal{S}$ using the probability vector ${\boldsymbol{P}}_{s,a}$:
\[
s_{s,a}^{(1)}, s_{s,a}^{(2)}, \ldots, s_{s,a}^{(m)}.
\]
Next we construct a new and sparse probability vector $\wh{{\boldsymbol{P}}}_{s,a}\in\Delta_{|\mathcal{S}|}$ as
\[
\forall s'\in \mathcal{S}: \wh{{\boldsymbol{P}}}_{s,a}(s') = \frac{1}{m}\cdot\sum_{i=1}^m \boldsymbol{1}(s_{s,a}^{(i)}=s').
\]
Combining these $|\mathcal{S}||\mathcal{A}|$ new probability vectors, we obtain a new probability transition matrix $\wh{{\boldsymbol{P}}}\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}\times\mathcal{S}}$ with number of non-zeros
\[
\nnz(\wh{{\boldsymbol{P}}}) = O\left[\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3 \epsilon^2} \log \left(\frac{|\mathcal{S}||\mathcal{A}|}{\delta}\right)\right] ~.
\]
Denote by $\wh{\mathcal{M}} = (\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, \wh{{\boldsymbol{P}}}, \gamma)$ the sparsified DMDP. In the rest of this section, we use $\wh{\cdot}$ to represent the quantities corresponding to the DMDP $\wh{\mathcal{M}}$, e.g., $\wh{{\boldsymbol{v}}}^*$ for the optimal value function, $\wh{\pi}^*$ for an optimal policy, and $\wh{\boldsymbol{Q}}^*$ for the optimal $Q$-function. The optimal $Q$-function of the sparsified MDP enjoys a strong approximation guarantee, as follows.
\begin{theorem}[\cite{azar2013minimax}]
\label{thm:qvalue approx}
Let $\mathcal{M}$ be the original DMDP and $\wh{\mathcal{M}}$ be the corresponding sparsified version. Let $\boldsymbol{Q}^*$ be the optimal $Q$-function vector of the original DMDP and $\wh{\boldsymbol{Q}}^*$ be the optimal $Q$-function of $\wh{\mathcal{M}}$. Then with probability at least $1-\delta$ (over the randomness of the samples),
\[
\|{\wh{\boldsymbol{Q}}^* - \boldsymbol{Q}^*}\|_{\infty}\le \epsilon.
\]
\end{theorem}
Recall that ${{\boldsymbol{v}}}^*$ and $\wh{{\boldsymbol{v}}}^*$ are the optimal value functions of $\mathcal{M}$ and $\wh{\mathcal{M}}$. From Theorem~\ref{thm:qvalue approx}, we immediately have
\[
\forall s\in \mathcal{S}:~ |{\boldsymbol{v}}^*(s)-\wh{{\boldsymbol{v}}}^*(s)|= | \max_{a\in\mathcal{A}} \boldsymbol{Q}^*(s, a) - \max_{a\in\mathcal{A}} \wh{\boldsymbol{Q}}^*(s, a)| \le \max_{a\in \mathcal{A}}| \boldsymbol{Q}^*(s, a) - \wh{\boldsymbol{Q}}^*(s, a)| \le \epsilon,
\]
with probability at least $1-\delta$.
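The sparsification step amounts to replacing each row of ${\boldsymbol{P}}$ by the empirical distribution of $m$ i.i.d. samples from it. A minimal sketch (our own illustration, with toy sizes rather than the $m$ of \eqref{eqn:num samples per sa}):

```python
import numpy as np

# Sketch of the sparsification procedure (ours): each row P_{s,a} is
# replaced by the empirical distribution of m i.i.d. samples from it.
rng = np.random.default_rng(3)
S, A, m = 4, 2, 500

P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)          # true transition tensor

P_hat = np.zeros_like(P)
for s in range(S):
    for a in range(A):
        samples = rng.choice(S, size=m, p=P[s, a])
        P_hat[s, a] = np.bincount(samples, minlength=S) / m

assert np.allclose(P_hat.sum(axis=2), 1.0)        # rows are distributions
assert np.abs(P_hat - P).max() < 0.1              # close for the fixed seed
```

Each row of $\wh{{\boldsymbol{P}}}$ has at most $m$ non-zeros, which is what makes the downstream solver run in nearly linear time in $\nnz(\wh{{\boldsymbol{P}}})$.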
\subsection{High Precision Algorithm in the Sparsified MDP}
Next we use the high precision algorithm of \cite{sidford2018variance}, which has the following guarantee.
\begin{theorem}[\cite{sidford2018variance}]
\label{thm:high precision}
There is an algorithm which, given an input DMDP $\mathcal{M} = (\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}}, \gamma)$, runs in time\footnote{$\wt{O}(f)$ denotes $O(f\cdot \log^{O(1)} f)$.}
\[
\wt{O}\bigg[\bigg(\nnz({\boldsymbol{P}}) + \frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3}\bigg)\cdot \log\epsilon^{-1}\cdot\log\delta^{-1}\bigg]
\]
and outputs a vector $\wt{{\boldsymbol{v}}}^*$ such that with probability at least $1-\delta$,
\[
\|{\wt{{\boldsymbol{v}}}^* - {\boldsymbol{v}}^*}\|_{\infty}\le \epsilon,
\]
where ${\boldsymbol{v}}^*$ is the optimal value of $\mathcal{M}$.
\end{theorem}
Combining the above two theorems, we immediately obtain an algorithm for finding $\epsilon$-optimal value functions. It works by first generating enough samples for each state-action pair and then calling the high-precision MDP solver of \cite{sidford2018variance}. It does not sample transitions adaptively. We show that it achieves an optimal running time guarantee (up to $\mathrm{poly}\log$ factors) for obtaining the value function under the sampling oracle model.
\begin{theorem}
\label{thm:value}
Given an input DMDP $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}}, \gamma)$ with a sampling oracle and optimal value function ${\boldsymbol{v}}^*$, there exists an algorithm that runs in time
\[
\wt{O}\bigg(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3}\cdot\frac{1}{\epsilon^2}\cdot \log^2 \left(\frac{1}{\delta}\right)\bigg)
\]
and outputs a vector $\wh{{\boldsymbol{v}}}^*$ such that $\|{\wh{{\boldsymbol{v}}}^* - {\boldsymbol{v}}^*}\|_{\infty}\le O(\epsilon)$ with probability at least $1-O(\delta)$.
\end{theorem}
\begin{proof}
We first obtain a sparsified MDP $\wh{\mathcal{M}}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, \wh{{\boldsymbol{P}}}, \gamma)$ using the procedure described in Section~\ref{sec:sparsify mdp}. This procedure runs in time $O(|\mathcal{S}||\mathcal{A}|m)$, recalling that $m$ is the number of samples per $(s,a)$, defined in \eqref{eqn:num samples per sa}. Let $\wh{{\boldsymbol{u}}}^*$ be the optimal value function of $\wh{\mathcal{M}}$. By Theorem~\ref{thm:qvalue approx}, with probability at least $1-\delta$, $\|\wh{{\boldsymbol{u}}}^* -{\boldsymbol{v}}^*\|_{\infty}\le \epsilon$, which we condition on for the rest of the proof. Calling the algorithm in Theorem~\ref{thm:high precision}, we obtain a vector $\wt{{\boldsymbol{u}}}^*$ in time
\[
\wt{O}\bigg[\bigg(\nnz(\wh{{\boldsymbol{P}}}) + \frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3}\bigg)\cdot \log\epsilon^{-1}\cdot\log\delta^{-1}\bigg] = \wt{O}\bigg(\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3}\cdot\frac{1}{\epsilon^2}\cdot \log^2\frac{1}{\delta}\bigg)
\]
such that with probability at least $1-\delta$, $\|\wt{{\boldsymbol{u}}}^* - \wh{{\boldsymbol{u}}}^*\|_{\infty}\le \epsilon$, which we also condition on. By the triangle inequality, we have
\[
\|\wt{{\boldsymbol{u}}}^* - {\boldsymbol{v}}^*\|_{\infty}\le \|\wt{{\boldsymbol{u}}}^* - \wh{{\boldsymbol{u}}}^*\|_{\infty} + \|\wh{{\boldsymbol{u}}}^* - {{\boldsymbol{v}}}^*\|_{\infty}\le 2\epsilon.
\]
This concludes the proof.
\end{proof}
\section{Variance Bounds}
\label{sec:var_bounds}
In this section, we study some properties of a DMDP. Most of the content in this section is similar to \cite{azar2013minimax}; we provide slight modifications and improvements to make the results fit our application. The main result of this section is the following lemma.
\begin{lemma}[Upper Bound on Variance]
\label{lemma:variance bound}
For any $\pi$, we have
\[
\big\|(\boldsymbol{I} - \gamma{\boldsymbol{P}}^{\pi})^{-1}\sqrt{{\boldsymbol{\sigma}}_{{\boldsymbol{v}}^{\pi}}}\big\|_{\infty}^2 \le \frac{1 + \gamma}{\gamma^2(1 - \gamma)^3},
\]
where ${\boldsymbol{\sigma}}_{{\boldsymbol{v}}^{\pi}}= {\boldsymbol{P}}^{\pi} ({\boldsymbol{v}}^\pi)^2 - ({\boldsymbol{P}}^{\pi}{\boldsymbol{v}}^{\pi})^2$ is the ``one-step'' variance of playing policy $\pi$.
\end{lemma}
Before we prove this lemma, we introduce another piece of notation. We define $\boldsymbol{\Sigma}^{\pi}\in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ for all $ (s, a) \in \mathcal{S}\times\mathcal{A}$ by
\[
\boldsymbol{\Sigma}^{\pi}(s, a) := \mathbb{E}\bigg[{\bigg({\boldsymbol{r}}(s, a) + \sum_{t\ge 1}\gamma^{t}{\boldsymbol{r}}(s^t, a^t) - \boldsymbol{Q}^{\pi}(s, a)\bigg)^2\bigg| s^0 =s, a^0 = a, a^t=\pi(s^t)}\bigg],
\]
where $a^t = \pi(s^t)$. Thus $\boldsymbol{\Sigma}^{\pi}(s,a)$ is the variance of the total discounted reward when starting at $(s, a)$ and playing $\pi$ for infinitely many steps. The crucial observation for obtaining the near-optimal sample complexity is the following ``Bellman equation'' for the variance. It is a consequence of the \emph{law of total variance}.
\begin{lemma}[Bellman Equation for the Variance]
\label{lem:var_bell}
$\boldsymbol{\Sigma}^{\pi}$ satisfies the Bellman equation
\[
\boldsymbol{\Sigma}^{\pi} = \gamma^2{\boldsymbol{\sigma}}_{{\boldsymbol{v}}^\pi} + \gamma^2\cdot {\boldsymbol{P}}^{\pi} \boldsymbol{\Sigma}^\pi.
\]
\end{lemma}
\begin{proof}
By direct expansion,
\begin{align}
\boldsymbol{\Sigma}^{\pi}(s,a) &= \mathbb{E}\bigg[{\big({{\boldsymbol{r}}(s, a) + \sum_{t\ge 1}\gamma^{t}{\boldsymbol{r}}(s^t, a^t)}\big)^2 \bigg| s^0 =s, a^0 = a, a^t=\pi(s^t)}\bigg] - (\boldsymbol{Q}^{\pi}(s,a))^2.
\end{align} The first term on the RHS can be written as \begin{align*} \mathbb{E}&\bigg[{\bigg({{\boldsymbol{r}}(s, a) + \sum_{t\ge 1}\gamma^{t}{\boldsymbol{r}}(s^t, a^t)}\bigg)^2\bigg| s^0 =s, a^0 = a, a^t=\pi(s^t)} \bigg]\\ &=\sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\mathbb{E} \bigg[{\bigg({{\boldsymbol{r}}(s, a) + \gamma{\boldsymbol{r}}(s',\pi(s')) + \gamma\sum_{t\ge 1}\gamma^{t}{\boldsymbol{r}}(s^t, a^t)}\bigg)^2\bigg| s^0 =s', a^0 = \pi(s'), a^t=\pi(s^t)}\bigg]\\ &={\boldsymbol{r}}(s,a)^2 + 2\gamma {\boldsymbol{r}}(s,a)\cdot \sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\boldsymbol{Q}^{\pi}(s', \pi(s')) \\ & \qquad + \gamma^2\sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\mathbb{E} \bigg[{\bigg({{\boldsymbol{r}}(s',\pi(s')) + \sum_{t\ge 1}\gamma^{t}{\boldsymbol{r}}(s^t, a^t)}\bigg)^2\bigg| s^0 =s', a^0 = \pi(s'), a^t=\pi(s^t)}\bigg]\\ &={\boldsymbol{r}}(s,a)^2 + 2\gamma {\boldsymbol{r}}(s,a)\cdot \sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\boldsymbol{Q}^{\pi}(s', \pi(s')) + \gamma^2({\boldsymbol{P}}^{\pi}\boldsymbol{\Sigma}^{\pi})(s,a) + \gamma^2 \sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')(\boldsymbol{Q}^{\pi}(s', \pi(s')) )^2\\ &=\boldsymbol{Q}^{\pi}(s, a)^2 + \gamma^2({\boldsymbol{P}}^{\pi}\boldsymbol{\Sigma}^{\pi})(s,a) + \gamma^2 \sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')(\boldsymbol{Q}^{\pi}(s', \pi(s')) )^2 - \gamma^2 \bigg({\sum_{s'\in \mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\boldsymbol{Q}^{\pi}(s', \pi(s'))}\bigg)^2\\ & = \boldsymbol{Q}^{\pi}(s, a)^2 + \gamma^2({\boldsymbol{P}}^{\pi}\boldsymbol{\Sigma}^{\pi})(s,a) + \gamma^2{\boldsymbol{\sigma}}_{{\boldsymbol{v}}^{\pi}}(s,a). \end{align*} Combining the above two equations, we conclude the proof. \end{proof} As a remark, we note that \[ \boldsymbol{\Sigma}^{\pi} = \gamma^2(\boldsymbol{I} - \gamma^2{\boldsymbol{P}}^{\pi})^{-1}{\boldsymbol{\sigma}}_{{\boldsymbol{v}}^{\pi}}.
\] Furthermore, by definition, we have \[ \max_{(s, a)\in \mathcal{S}\times\mathcal{A}}\boldsymbol{\Sigma}^{\pi}(s,a) \le (1-\gamma)^{-2}. \] The next lemma is crucial in proving the error bounds. \begin{lemma} \label{lemma:sqrv} Let ${\boldsymbol{P}} \in \mathbb{R}^{n \times n}$ be a non-negative matrix in which every row has $\ell_1$ norm at most $1$, i.e., $\ell_\infty$ operator norm at most $1$. Then for all $\gamma \in(0, 1)$ and ${\boldsymbol{v}} \in \mathbb{R}^{n}_{\geq 0}$ we have \[ \| (\boldsymbol{I} - \gamma {\boldsymbol{P}})^{-1} \sqrt{{\boldsymbol{v}}} \|_\infty \leq \sqrt{\frac{1}{1 - \gamma} \left\| (\boldsymbol{I} - \gamma {\boldsymbol{P}} )^{-1} {\boldsymbol{v}} \right\|_\infty} \leq \sqrt{\frac{1 + \gamma}{1 - \gamma} \left\| (\boldsymbol{I} - \gamma^2 {\boldsymbol{P}} )^{-1} {\boldsymbol{v}} \right\|_\infty} ~. \] \end{lemma} \begin{proof} Since every row of ${\boldsymbol{P}}$ has $\ell_1$ norm at most $1$, by Cauchy-Schwarz, for $i \in [n]$ we have \[ [{\boldsymbol{P}} \sqrt{{\boldsymbol{v}}}]_i = \sum_{j \in [n]} {\boldsymbol{P}}_{ij} \sqrt{{\boldsymbol{v}}_j} \leq \sqrt{\sum_{j \in [n]} {\boldsymbol{P}}_{ij} \cdot \sum_{j \in [n]} {\boldsymbol{P}}_{ij} {\boldsymbol{v}}_j } \leq \sqrt{[{\boldsymbol{P}} {\boldsymbol{v}}]_i} ~. \] Since ${\boldsymbol{v}}$ is non-negative and applying ${\boldsymbol{P}}$ preserves non-negativity, applying this inequality repeatedly yields that ${\boldsymbol{P}}^{k} \sqrt{{\boldsymbol{v}}} \leq \sqrt{{\boldsymbol{P}}^{k} {\boldsymbol{v}}}$ entrywise for all $k >0$.
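As a quick numerical sanity check (not part of the proof), the entrywise inequality ${\boldsymbol{P}}^{k}\sqrt{{\boldsymbol{v}}}\le\sqrt{{\boldsymbol{P}}^{k}{\boldsymbol{v}}}$ and the first bound of the lemma can be verified with a few lines of NumPy; the matrix and vector below are arbitrary illustrative choices, not objects from the text.

```python
import numpy as np

# Arbitrary illustrative substochastic matrix (row sums <= 1) and v >= 0.
P = np.array([[0.2, 0.5, 0.3],
              [0.0, 0.4, 0.5],
              [0.6, 0.1, 0.3]])
v = np.array([1.0, 4.0, 0.25])
gamma = 0.9

# Entrywise: P^k sqrt(v) <= sqrt(P^k v) for all k > 0 (iterated Cauchy-Schwarz).
Pk = np.eye(3)
for k in range(1, 6):
    Pk = Pk @ P
    assert np.all(Pk @ np.sqrt(v) <= np.sqrt(Pk @ v) + 1e-12)

# First bound of the lemma:
# ||(I - g P)^{-1} sqrt(v)||_inf <= sqrt( ||(I - g P)^{-1} v||_inf / (1 - g) ).
M = np.linalg.inv(np.eye(3) - gamma * P)   # Neumann series, entrywise >= 0
lhs = np.max(M @ np.sqrt(v))
rhs = np.sqrt(np.max(M @ v) / (1 - gamma))
assert lhs <= rhs + 1e-12
```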
Consequently, Cauchy-Schwarz again yields \begin{align*} (\boldsymbol{I} - \gamma {\boldsymbol{P}})^{-1} \sqrt{{\boldsymbol{v}}} &= \sum_{i = 0}^{\infty} \left[\gamma {\boldsymbol{P}}\right]^{i} \sqrt{{\boldsymbol{v}}} \leq \sum_{i = 0}^{\infty} \gamma^{i} \sqrt{{\boldsymbol{P}}^{i} {\boldsymbol{v}}} \leq \sqrt{ \sum_{i = 0}^{\infty} \gamma^i \cdot \sum_{i = 0}^{\infty} \gamma^i {\boldsymbol{P}}^i {\boldsymbol{v}} }\\ &\le \sqrt{\frac{1}{1 - \gamma} \left\| (\boldsymbol{I} - \gamma {\boldsymbol{P}} )^{-1} {\boldsymbol{v}} \right\|_\infty} ~. \end{align*} Next, as $(\boldsymbol{I} - \gamma {\boldsymbol{P}})(\boldsymbol{I} + \gamma {\boldsymbol{P}}) = (\boldsymbol{I} - \gamma^2 {\boldsymbol{P}}^2)$ we see that $(\boldsymbol{I} - \gamma {\boldsymbol{P}})^{-1} = (\boldsymbol{I} + \gamma {\boldsymbol{P}}) (\boldsymbol{I} - \gamma^2 {\boldsymbol{P}}^2)^{-1}$. Furthermore, as $\|{\boldsymbol{P}} \boldsymbol{x}\|_\infty \leq \|\boldsymbol{x}\|_\infty$ for all $\boldsymbol{x}$ we have $\|(\boldsymbol{I} + \gamma {\boldsymbol{P}}) \boldsymbol{x}\|_\infty \leq (1 + \gamma) \| \boldsymbol{x}\|_\infty$ for all $\boldsymbol{x}$, and therefore $\| (\boldsymbol{I} - \gamma {\boldsymbol{P}} )^{-1} {\boldsymbol{v}} \|_\infty \leq (1 + \gamma) \| (\boldsymbol{I} - \gamma^2 {\boldsymbol{P}} )^{-1} {\boldsymbol{v}} \|_\infty$ as desired. \end{proof} We are now ready to prove Lemma~\ref{lemma:variance bound}. \begin{proof}[Proof of Lemma~\ref{lemma:variance bound}] The lemma follows directly from Lemma~\ref{lemma:sqrv} applied with ${\boldsymbol{P}} = {\boldsymbol{P}}^{\pi}$ and ${\boldsymbol{v}} = {\boldsymbol{\sigma}}_{{\boldsymbol{v}}^{\pi}}$, together with the fact that $\|(\boldsymbol{I} - \gamma^2 {\boldsymbol{P}}^{\pi})^{-1}{\boldsymbol{\sigma}}_{{\boldsymbol{v}}^{\pi}}\|_{\infty} = \gamma^{-2}\|\boldsymbol{\Sigma}^{\pi}\|_{\infty} \le \gamma^{-2}(1-\gamma)^{-2}$ by the preceding remark. This proof is slightly simpler, tighter, and more general than the one in \cite{azar2013minimax}. \end{proof} \section{Lower Bounds on Policy} \label{sec:lower bound} \begin{lemma} Suppose $\mathcal{M}=(\mathcal{S}, \mathcal{A}, P, \gamma, {\boldsymbol{r}})$ is a DMDP with a sampling oracle and $\pi$ is a given policy.
Then there is an algorithm that halts in $\wt{O}((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}|)$ time and outputs a vector ${\boldsymbol{v}}$ such that, with high probability, $\|{\boldsymbol{v}}^\pi - {\boldsymbol{v}}\|_\infty \le \epsilon$. \end{lemma} \begin{proof} The lemma follows from a direct application of Theorem~\ref{thm:high precision}. \end{proof} \begin{remark} Suppose $|\mathcal{A}|= \wt{\Omega}(1)$ and suppose there is an algorithm that obtains an $\epsilon$-optimal policy with $Z$ samples. Then the above lemma yields an algorithm for obtaining an $\epsilon$-optimal value function with $Z + \wt{O}((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}|)$ samples. By the $\Omega((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}||\mathcal{A}|)$ sample lower bound for obtaining approximate value functions given in \cite{azar2013minimax}, the above lemma therefore implies a \[ Z = \Omega((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}||\mathcal{A}|) - \wt{O}((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}|) = \Omega((1-\gamma)^{-3}\epsilon^{-2}|\mathcal{S}||\mathcal{A}|) \] sample lower bound for obtaining an $\epsilon$-optimal policy. \end{remark} \section{Missing Proofs} \label{sec:missing proof} Here are several standard properties of the Bellman value operator (see, e.g., \cite{bertsekas2013abstract}). \begin{fact} Let ${\boldsymbol{v}}_1, {\boldsymbol{v}}_2\in \mathbb{R}^{\mathcal{S}}$ be two vectors, let $\mathcal{T}$ be the value operator of a DMDP with discount factor $\gamma$, and let $\pi\in \mathcal{A}^{\mathcal{S}}$ be an arbitrary policy. Then the following hold.
\begin{itemize} \item \textbf{Monotonicity}: If ${\boldsymbol{v}}_1 \le {\boldsymbol{v}}_2$ then $\mathcal{T}({\boldsymbol{v}}_1) \le \mathcal{T}({\boldsymbol{v}}_2)$; \item \textbf{Contraction}: $\|\mathcal{T}({\boldsymbol{v}}_1) - \mathcal{T}({\boldsymbol{v}}_2)\|_{\infty}\le \gamma \|{\boldsymbol{v}}_1 - {\boldsymbol{v}}_2\|_{\infty}$ and $\|\mathcal{T}_{\pi}({\boldsymbol{v}}_1) - \mathcal{T}_{\pi}({\boldsymbol{v}}_2)\|_{\infty}\le \gamma \|{\boldsymbol{v}}_1 - {\boldsymbol{v}}_2\|_{\infty}$. \end{itemize} \end{fact} \subsection{Missing Proofs from Section~\ref{sec:alg_policy}} \label{sec:proof of main alg} To begin, we introduce two standard concentration results. Let $\boldsymbol{p}\in \Delta_{\mathcal{S}}$ be a probability vector and ${\boldsymbol{v}}\in\mathbb{R}^{\mathcal{S}}$ be a vector. Let ${\boldsymbol{p}}_m\in\Delta_{\mathcal{S}}$ be the empirical estimate of $\boldsymbol{p}$ from $m$ i.i.d. samples drawn from the distribution $\boldsymbol{p}$: if the samples are $s_1, s_2, \ldots, s_m\in \mathcal{S}$, then ${\boldsymbol{p}}_m(s) = \sum_{j=1}^m\boldsymbol{1}(s_j = s)/m$ for all $s\in \mathcal{S}$. \begin{theorem}[Hoeffding Inequality] \label{them:hoeffding} Let $\delta\in (0,1)$ be a parameter and let the vectors $\boldsymbol{p}, \boldsymbol{p}_m$ and ${\boldsymbol{v}}$ be defined as above. Then with probability at least $1-\delta$, \[ \big|{\boldsymbol{p}^{\top}{\boldsymbol{v}} - \boldsymbol{p}_m^{\top}{\boldsymbol{v}}}\big| \le \inorm{{\boldsymbol{v}}}\cdot\sqrt{{2m^{-1}\log(2\delta^{-1})}}. \] \end{theorem} \begin{theorem}[Bernstein Inequality] \label{them:bernstein} Let $\delta\in (0,1)$ be a parameter and let the vectors $\boldsymbol{p}, \boldsymbol{p}_m$ and ${\boldsymbol{v}}$ be defined as in Theorem~\ref{them:hoeffding}.
Then with probability at least $1-\delta$, \[ \big|{\boldsymbol{p}^{\top}{\boldsymbol{v}} - \boldsymbol{p}_m^{\top}{\boldsymbol{v}}}\big| \le \sqrt{2m^{-1}\underset{s'\sim \boldsymbol{p}}{\var}({\boldsymbol{v}}(s'))\cdot \log({2}{\delta^{-1}})} + ({2}/{3})m^{-1}{\inorm{{\boldsymbol{v}}}\cdot\log(2\delta^{-1})}, \] where $\underset{s'\sim \boldsymbol{p}}{\var}({\boldsymbol{v}}(s')) = \boldsymbol{p}^\top {\boldsymbol{v}}^2 - (\boldsymbol{p}^\top {\boldsymbol{v}})^2$. \end{theorem} \begin{proof}[Proof of Lemma~\ref{lemma: emprical mean}] By Theorem~\ref{them:bernstein} and a union bound over all $(s,a)$ pairs, with probability at least $1-\delta/4$, for every $(s,a)$, we have \begin{align} \label{eq:proof emprical 1} \big|{\wt{\boldsymbol{w}}(s,a) - {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}}\big|\le\sqrt{{2\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}(s,a)\cdot m_1^{-1}\cdot{L}}} + {2}\cdot (3m_1)^{-1}\cdot\norm{{\boldsymbol{v}}^{(0)}}_{\infty}\cdot {L}, \end{align} which is the first claimed inequality. Next, by Theorem~\ref{them:hoeffding} and a union bound over all $(s,a)$ pairs, with probability at least $1-\delta/4$, for every $(s,a)$, we have \[ \big|{\wt{\boldsymbol{w}}(s,a) - {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}}\big|\le \norm{{\boldsymbol{v}}^{(0)}}_{\infty} \cdot \sqrt{{2m_1^{-1}{L}}}, \] which we condition on.
Thus \begin{align*} \big|\wt{\boldsymbol{w}}(s,a)^2 - ({\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)})^2\big| &= (\wt{\boldsymbol{w}}(s,a) + {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)})\cdot |\wt{\boldsymbol{w}}(s,a) - {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}|\\ &\le \bigg[2 {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)} + \norm{{\boldsymbol{v}}^{(0)}}_{\infty} \cdot \sqrt{{2m_1^{-1}{L}}}\bigg]\cdot|\wt{\boldsymbol{w}}(s,a) - {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}|\\ &\le 2( {\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)})\cdot\norm{{\boldsymbol{v}}^{(0)}}_{\infty} \cdot \sqrt{{2m_1^{-1}{L}}} + \norm{{\boldsymbol{v}}^{(0)}}^2_{\infty} \cdot {{2m_1^{-1}{L}}}. \end{align*} Since ${\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)}\le \norm{{\boldsymbol{v}}^{(0)}}_\infty$, we obtain \[ \big|\wt{\boldsymbol{w}}(s,a)^2 - ({\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)})^2\big| \le 3\norm{{\boldsymbol{v}}^{(0)}}_{\infty}^2 \cdot \sqrt{{2m_1^{-1}{L}}}, \] provided $2m_1^{-1}{L}\le 1$. Next, by Theorem~\ref{them:hoeffding} and a union bound over all $(s,a)$ pairs, with probability at least $1-\delta/4$, for every $(s,a)$, we have \[ \left|\frac{1}{m_1} \sum_{j=1}^{m_1}({\boldsymbol{v}}^{(0)})^2(s_{s,a}^{(j)}) - {\boldsymbol{P}}_{s,a}^\top({\boldsymbol{v}}^{(0)})^2\right|\le \norm{{\boldsymbol{v}}^{(0)}}_{\infty}^2 \cdot \sqrt{{2L / m_1}}. \] By a union bound, we obtain, with probability at least $1-\delta/2$, \begin{align} \label{eq:proof emprical 2} \big|\wh{\boldsymbol{\sigma}}(s,a) - \boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}(s, a)\big| &\le \big|\wt{\boldsymbol{w}}(s,a)^2 - ({\boldsymbol{P}}_{s,a}^\top{\boldsymbol{v}}^{(0)})^2\big| + \big|{m_1^{-1}}\sum_{j=1}^{m_1}({\boldsymbol{v}}^{(0)})^2(s_{s,a}^{(j)}) - {\boldsymbol{P}}_{s,a}^\top({\boldsymbol{v}}^{(0)})^2\big|\nonumber\\ &\le 4\norm{{\boldsymbol{v}}^{(0)}}_{\infty}^2 \cdot \sqrt{{2m_1^{-1}{L}}}.
\end{align} By a union bound, with probability at least $1-\delta$, both \eqref{eq:proof emprical 1} and \eqref{eq:proof emprical 2} hold, concluding the proof. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma: variance triangle}] Since for each $(s,a)$, $\boldsymbol{\sigma}_{{\boldsymbol{v}}}(s,a)$ is a variance, the triangle inequality for the associated $L^2$ norm gives \[ \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}}} \le \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}} + \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}-{\boldsymbol{v}}^*}}. \] Observing that \[ \boldsymbol{\sigma}_{{\boldsymbol{v}}-{\boldsymbol{v}}^*}(s,a) \le {\boldsymbol{P}}_{s,a}^\top({\boldsymbol{v}}-{\boldsymbol{v}}^*)^2 \le \epsilon^2, \] we conclude the proof by taking square roots. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:bounds on g}] Recall that for each $(s, a) \in \mathcal{S}\times \mathcal{A}$, \[ \boldsymbol{g}^{(i)}(s,a)= \frac{1}{m_2} \sum_{j=1}^{m_2} \big[{\boldsymbol{v}}^{(i)}(s_{s,a}^{(j)}) - {\boldsymbol{v}}^{(0)}(s_{s,a}^{(j)}) \big]- (1-\gamma)\frac{u}{8} ~, \] where $m_2 = 128(1-\gamma)^{-2}\cdot\log(2|\mathcal{S}||\mathcal{A}|R/\delta)$ and $s_{s,a}^{(1)}, s_{s,a}^{(2)}, \ldots, s_{s,a}^{(m_2)}$ is a sequence of independent samples from ${\boldsymbol{P}}_{s,a}$. Thus by Theorem~\ref{them:hoeffding} and a union bound over $\mathcal{S}\times \mathcal{A}$, with probability at least $1-\delta/R$, we have \begin{align*} \forall (s,a)\in\mathcal{S}\times\mathcal{A}: \bigg|\frac{1}{m_2}\sum_{j=1}^{m_2} \big[{\boldsymbol{v}}^{(i)}(s_{s,a}^{(j)}) &- {\boldsymbol{v}}^{(0)}(s_{s,a}^{(j)}) \big] - {\boldsymbol{P}}_{s,a}^{\top} \big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big]\bigg| \\ &\le \norm{{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}}_\infty\sqrt{{2m_2^{-1}\log(2|\mathcal{S}||\mathcal{A}|\delta'^{-1})}} \le (1-\gamma)u/8.
\end{align*} Finally, shifting the estimate to have one-sided error yields the one-sided error $(1-\gamma)u/4$ in the statement of the lemma. \end{proof} \begin{proof}[Proof of Lemma~\ref{lemma:induction lemma}] For $i=0$, $\boldsymbol{Q}^{(0)} = {\boldsymbol{r}} + \gamma \boldsymbol{w}$. By Lemma~\ref{lemma: emprical mean}, with probability at least $1-\delta$, \begin{align*} |{\wt{\boldsymbol{w}} - {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}}| \le \sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}} + \frac{2}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}}_{\infty}\boldsymbol{1}, \end{align*} and \begin{align} \label{eqn:var} \big|\wh{\boldsymbol{\sigma}}- \boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}\big| \le 4\norm{{\boldsymbol{v}}^{(0)}}_{\infty}^2 \cdot \sqrt{2\alpha_1} \boldsymbol{1}, \end{align} which we condition on. We have \[ |{\wt{\boldsymbol{w}} - {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}}| \le \sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}} + (4\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty + \frac{2}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}}_{\infty})\boldsymbol{1}. \] Thus \begin{align} \label{eqn:upper w} \boldsymbol{w} = \wt{\boldsymbol{w}} - \sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}} - 4\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty\boldsymbol{1} - \frac{2}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}}_{\infty}\boldsymbol{1} \le {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}, \end{align} and \begin{align*} \boldsymbol{w}\ge {\boldsymbol{P}}{\boldsymbol{v}}^{(0)} - 2\sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}} - (8\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty + \frac{4}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}}_{\infty})\boldsymbol{1}.
\end{align*} By \eqref{eqn:var} and Lemma~\ref{lemma: variance triangle}, we have \[ \sqrt{\wh{\boldsymbol{\sigma}}} \le \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}}} + 2 \norm{{\boldsymbol{v}}^{(0)}}_{\infty} (2\alpha_1)^{1/4}\boldsymbol{1} \le \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^{*}}} + u \boldsymbol{1}+ 2 \norm{{\boldsymbol{v}}^{(0)}}_{\infty} (2\alpha_1)^{1/4}\boldsymbol{1}. \] Hence \begin{align} \label{eqn:lower w} \boldsymbol{w}\ge {\boldsymbol{P}}{\boldsymbol{v}}^{(0)} - 2\sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}} -2\sqrt{2\alpha_1}u\boldsymbol{1}- 16\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty\boldsymbol{1} - \frac{4}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}}_{\infty}\boldsymbol{1}. \end{align} For the rest of the proof, we condition on the event that \eqref{eqn:upper w} and \eqref{eqn:lower w} hold, which happens with probability at least $1-\delta$. Denote ${\boldsymbol{v}}^{(-1)}={\bf 0}$. Thus we have ${\boldsymbol{v}}^{(-1)}\le {\boldsymbol{v}}^{(0)} \le \mathcal{T}_{\pi^{(0)}} ({\boldsymbol{v}}^{(0)})$. Next we prove the lemma by induction on $i$. Assume that for some $i\ge 1$, with probability at least $1-(i-1)\delta'$, the following holds: \[ \forall 0\le k\le i-1:\quad{\boldsymbol{v}}^{(k-1)}\le {\boldsymbol{v}}^{(k)} \le \mathcal{T}_{\pi^{(k)}} ({\boldsymbol{v}}^{(k)}), \] which we condition on. We now show that the statement holds for $k=i$. By the definition of ${\boldsymbol{v}}^{(i)}$ (Lines~\ref{alg: v1} and \ref{alg: v2}), \[ {\boldsymbol{v}}^{(i-1)}\le {\boldsymbol{v}}^{(i)} \quad\text{and}\quad {\boldsymbol{v}}\big(\boldsymbol{Q}^{(i-1)}\big)\le {\boldsymbol{v}}^{(i)}.
\] Furthermore, since ${\boldsymbol{v}}^{(0)}\le {\boldsymbol{v}}^{(1)} \le \ldots \le {\boldsymbol{v}}^{(i-1)}\le \mathcal{T}_{\pi^{(i-1)}}{\boldsymbol{v}}^{(i-1)} \le \mathcal{T}{\boldsymbol{v}}^{(i-1)}\le \mathcal{T}^{\infty}{\boldsymbol{v}}^{(i-1)} = {\boldsymbol{v}}^*$, we have \[ {\boldsymbol{v}}^{(i)}-{\boldsymbol{v}}^{(0)} \le {\boldsymbol{v}}^{*} - {\boldsymbol{v}}^{(0)} \le u\boldsymbol{1}. \] By Lemma~\ref{lemma:bounds on g}, we have, with probability at least $1-\delta'$, \begin{align} \label{eqn:bound g} {\boldsymbol{P}} \big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big] - \frac{(1-\gamma) u}{8}\cdot\boldsymbol{1}\le \boldsymbol{g}^{(i)} \le {\boldsymbol{P}} \big[{\boldsymbol{v}}^{(i)} - {\boldsymbol{v}}^{(0)}\big], \end{align} which we condition on for the rest of the proof. Thus we have \[ \boldsymbol{Q}^{(i)} = {\boldsymbol{r}} + \gamma(\boldsymbol{w} + \boldsymbol{g}^{(i)}) \le {\boldsymbol{r}} + \gamma({\boldsymbol{P}}{\boldsymbol{v}}^{(0)} + {\boldsymbol{P}}{\boldsymbol{v}}^{(i)} - {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}) = {\boldsymbol{r}} + \gamma {\boldsymbol{P}} {\boldsymbol{v}}^{(i)}. \] To show ${\boldsymbol{v}}^{(i)}\le \mathcal{T}_{\pi^{(i)}} {\boldsymbol{v}}^{(i)}$, we notice that if for some $s$, $\pi^{(i)}(s)\neq \pi^{(i-1)}(s)$, then \[ {\boldsymbol{v}}^{(i)}(s) \le [\mathcal{T}_{\pi^{(i)}} {\boldsymbol{v}}^{(i-1)}](s) \le [\mathcal{T}_{\pi^{(i)}} {\boldsymbol{v}}^{(i)}](s), \] where the first inequality follows from ${\boldsymbol{v}}^{(i)}(s) \le {\boldsymbol{r}}(s, \pi^{(i)}(s))+\gamma{\boldsymbol{P}}_{s, \pi^{(i)}(s)}^\top{\boldsymbol{v}}^{(i-1)} = [\mathcal{T}_{\pi^{(i)}} {\boldsymbol{v}}^{(i-1)}](s)$. On the other hand, if $\pi^{(i)}(s)= \pi^{(i-1)}(s)$, then \[ {\boldsymbol{v}}^{(i)}(s) = {\boldsymbol{v}}^{(i-1)}(s) \le (\mathcal{T}_{\pi^{(i-1)}} {\boldsymbol{v}}^{(i-1)})(s) \le (\mathcal{T}_{\pi^{(i-1)}} {\boldsymbol{v}}^{(i)})(s) = (\mathcal{T}_{\pi^{(i)}} {\boldsymbol{v}}^{(i)})(s).
\] This completes the induction step. Lastly, combining \eqref{eqn:lower w} and \eqref{eqn:bound g}, we have \begin{align*} \boldsymbol{Q}^* - \boldsymbol{Q}^{(i)} &= \boldsymbol{Q}^{*} - {\boldsymbol{r}} - \gamma (\boldsymbol{w} + \boldsymbol{g}^{(i)}) = \gamma{\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}^*) - \gamma (\boldsymbol{w} + \boldsymbol{g}^{(i)}) \\ &=\gamma{\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}^*) - \gamma {\boldsymbol{P}}({\boldsymbol{v}}^{(i)} -{\boldsymbol{v}}^{(0)}) - \gamma {\boldsymbol{P}} {\boldsymbol{v}}^{(0)} + \boldsymbol{\xi}^{(i)}\\ &=\gamma{\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}^*) - \gamma {\boldsymbol{P}}{\boldsymbol{v}}^{(i)} + \boldsymbol{\xi}^{(i)}, \end{align*} where \[ \boldsymbol{\xi}^{(i)} \le {(1-\gamma) u}/{8}\cdot \boldsymbol{1} + 2\sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}} +2\sqrt{2\alpha_1} u\cdot\boldsymbol{1}+ 16\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty\cdot\boldsymbol{1} + ({4}/{3})\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}}_{\infty}\cdot\boldsymbol{1} \] and $\alpha_1=\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})/m_1 \le 1$. Moreover, since $ {\boldsymbol{v}}(\boldsymbol{Q}^{(i-1)})\le {\boldsymbol{v}}^{(i)}$, we obtain \begin{align*} \boldsymbol{Q}^* - \boldsymbol{Q}^{(i)} &\le\gamma{\boldsymbol{P}} {\boldsymbol{v}}(\boldsymbol{Q}^*) - \gamma {\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}^{(i-1)}) + \boldsymbol{\xi}^{(i)} \le \gamma{\boldsymbol{P}}^{\pi^*}\boldsymbol{Q}^* - \gamma {\boldsymbol{P}}^{\pi^*}\boldsymbol{Q}^{(i-1)} + \boldsymbol{\xi}^{(i)}, \end{align*} where $\pi^*$ is an arbitrary optimal policy and we use the fact that $\max_{a}\boldsymbol{Q}^{*}(s,a) = \boldsymbol{Q}^*(s,\pi^*(s))$. This completes the proof of the lemma. \end{proof} \begin{proof}[Proof of Proposition~\ref{thm:alg 1}] Recall that we are able to sample a state from each ${\boldsymbol{P}}_{s,a}$ in $O(1)$ time.
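As an aside, such a sampling oracle can be emulated by precomputing the cumulative distribution of each row ${\boldsymbol{P}}_{s,a}$ and inverting a uniform draw. The sketch below (with an arbitrary illustrative three-state row, not one from the text) uses binary search, which costs $O(\log|\mathcal{S}|)$ per sample; the alias method would bring this to the $O(1)$ assumed here.

```python
import numpy as np

def make_sampler(p):
    """Return an inverse-CDF sampler for the distribution p."""
    cdf = np.cumsum(p)          # precomputed once per (s, a) row
    def sample(u):              # u is a uniform draw in [0, 1)
        # number of CDF entries <= u gives the sampled state index
        return int(np.searchsorted(cdf, u, side="right"))
    return sample

# Arbitrary illustrative row P_{s,a} over three next-states.
sample = make_sampler([0.2, 0.5, 0.3])
assert sample(0.10) == 0   # u in [0, 0.2)   -> state 0
assert sample(0.25) == 1   # u in [0.2, 0.7) -> state 1
assert sample(0.95) == 2   # u in [0.7, 1)   -> state 2
```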
Let $\beta=(1-\gamma)^{-1}$, $R=\lceil c_1\beta\ln[\beta u^{-1}]\rceil$, $m_1= c_2\beta^3u^{-2}\cdot\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})$ and $m_2= c_3\beta^2\cdot\log[2R|\mathcal{S}||\mathcal{A}|\delta^{-1}]$ for the constants $c_1, c_2$ and $c_3$ required in Algorithm~\ref{alg-halfErr}. In the following proof, we set $c_1, c_2, c_3$ to be sufficiently large but otherwise arbitrary absolute constants (e.g., $c_1 \ge 4$, $c_2\ge 8192$, $c_3\ge 128$). By Lemma~\ref{lemma:induction lemma}, with probability at least $1-2\delta$, for each $1\le i\le R$ we have ${\boldsymbol{v}}^{(i-1)}\le {\boldsymbol{v}}^{(i)}\le \mathcal{T}_{\pi^{(i)}} {\boldsymbol{v}}^{(i)}$, $\boldsymbol{Q}^{(i)}\le {\boldsymbol{r}}+\gamma{\boldsymbol{P}} {\boldsymbol{v}}^{(i)}$, and \[ \boldsymbol{Q}^* - \boldsymbol{Q}^{(i)} \le \gamma {\boldsymbol{P}}^{\pi^*}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(i-1)}\big] +\boldsymbol{\xi}, \] where \[ \boldsymbol{\xi}\le {(1-\gamma){u}}/{C}\cdot \boldsymbol{1} + C\sqrt{\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}} + C\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}}_\infty\cdot\boldsymbol{1} \] for $\alpha_1=\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})/m_1$ and a sufficiently large constant $C$. Solving the recursion, we obtain \begin{align*} \boldsymbol{Q}^* - \boldsymbol{Q}^{(R-1)} &\le \gamma^{R-1} ({\boldsymbol{P}}^{\pi^*})^{R-1}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big] +\sum_{i=0}^{R-1}\gamma^{i}({\boldsymbol{P}}^{\pi^*})^{i}\boldsymbol{\xi} \\ &\le \gamma^{R-1} ({\boldsymbol{P}}^{\pi^*})^{R-1}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big] + (\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{\xi}. \end{align*} We first apply the na\"ive bound $\norm{({\boldsymbol{P}}^{\pi^*})^{R-1}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big]}_{\infty}\le ({1-\gamma})^{-1}$.
Hence \[ \gamma^{R-1}({\boldsymbol{P}}^{\pi^*})^{R-1}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big] \le \frac{{u}}{4}\cdot\boldsymbol{1}, \] where $R=\lceil(1-\gamma)^{-1}\ln[4(1-\gamma)^{-1}{u}^{-1}]\rceil + 1$. The next step is the key to the improvement in our analysis: we apply the bound in Lemma~\ref{lemma:variance bound}, which gives \[ (\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^*}}\le \min(2\gamma^{-1}(1-\gamma)^{-1.5}, (1-\gamma)^{-2})\cdot\boldsymbol{1} \le 3(1-\gamma)^{-1.5}\cdot\boldsymbol{1}, \] where the last inequality follows since $\min(2\gamma^{-1},(1-\gamma)^{-1/2})\le 3$. With $\norm{(\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{1}}_{\infty}\le (1-\gamma)^{-1}$ and $\norm{{\boldsymbol{v}}^{(0)}}_\infty\le (1-\gamma)^{-1}$, we have \begin{align*} (\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{\xi} &\le \left[\frac{{u}}{8} + C'\sqrt{\frac{2\alpha_1}{\gamma^2(1-\gamma)^{3}}} + C'\frac{\alpha_1^{3/4}}{(1-\gamma)^2} \right]\cdot\boldsymbol{1}\\ &\le \bigg[\frac{{u}}{8} + \frac{{u}}{16} + \bigg(\frac{(1-\gamma)^3{u}^2}{C''\cdot(1-\gamma)^{8/3}}\bigg)^{3/4} \bigg]\cdot\boldsymbol{1}\\ &\le \frac{{u}}{4}\cdot \boldsymbol{1}, \end{align*} for sufficiently large $C'$ and $C''$, which depend on $c_1, c_2$ and $c_3$. Since ${\boldsymbol{v}}(\boldsymbol{Q}^{(R-1)})\le{\boldsymbol{v}}^{(R)}$, we have \[ {\boldsymbol{v}}^* - {\boldsymbol{v}}^{(R)} \le {\boldsymbol{v}}^* -{\boldsymbol{v}}(\boldsymbol{Q}^{(R-1)}) \le\gamma^{R-1}({\boldsymbol{P}}^{\pi^*})^{R-1}\big[\boldsymbol{Q}^* - \boldsymbol{Q}^{(0)}\big] + (\boldsymbol{I}-\gamma{\boldsymbol{P}}^{\pi^*})^{-1}\boldsymbol{\xi} \le \frac{{u}}{2}\cdot\boldsymbol{1}. \] This completes the proof of correctness. It remains to bound the time complexity. The initialization stage costs $O(m_1)$ time per $(s,a)$ pair, and each iteration costs $O(m_2)$ time per $(s,a)$ pair.
We thus obtain a total time complexity of \[ O\big(( m_1 + R m_2)\cdot|\mathcal{S}||\mathcal{A}|\big) = O\bigg[{\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^3}\cdot \log\frac{|\mathcal{S}||\mathcal{A}|}{\delta\cdot(1-\gamma)\cdot{u}}\cdot\bigg(\frac{1}{{u}^2}+\log\frac{1}{(1-\gamma)\cdot{u}}\bigg)}\bigg]. \] Since $\log[(1-\gamma)^{-1}{u}^{-1}] = O(\log[(1-\gamma)^{-1}]{u}^{-2})$, we conclude the proof. \end{proof} \subsection{Missing Analysis of Halving Errors} \label{sec:half} In this section we refer to Algorithm~\ref{alg-halfErr} as a subroutine \textsc{HalfErr}, which, given an input MDP $\mathcal{M}$ with a sampling oracle, an input value function ${\boldsymbol{v}}^{(i)}$ and an input policy $\pi^{(i)}$, outputs a value function ${\boldsymbol{v}}^{(i+1)}$ and a policy $\pi^{(i+1)}$ such that, with high probability (over the new samples of the sampling oracle), \[ \|\boldsymbol{Q}^{(i+1)} - \boldsymbol{Q}^*\|_{\infty}\le \|\boldsymbol{Q}^{(i)} - \boldsymbol{Q}^*\|_{\infty}/2 \quad\text{and}\quad \|{\boldsymbol{v}}^{\pi^{(i+1)}}-{\boldsymbol{v}}^*\|_{\infty}\le \|{\boldsymbol{v}}^{\pi^{(i)}}-{\boldsymbol{v}}^*\|_{\infty}/2. \] After $\log[\epsilon^{-1}(1-\gamma)^{-1}]$ calls of the subroutine \textsc{HalfErr}, the final output policy and value function are $\epsilon$-close to the optimal ones with high probability. We summarize our meta algorithm in Algorithm~\ref{alg-meta}. Note that each call of $\textsc{HalfErr}$ draws new samples from the sampling oracle. These new samples guarantee the independence of successive improvements and also save space: for instance, \textsc{HalfErr}~only needs $O(|\mathcal{S}||\mathcal{A}|)$ words of memory instead of storing all the samples. The guarantee of the algorithm is summarized in Proposition~\ref{prop:meta}. \begin{proposition} \label{prop:meta} Let $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}}, \gamma)$ be a DMDP with a sampling oracle.
Suppose \textsc{HalfErr}~is an algorithm that takes as input a value function ${\boldsymbol{v}}^{(i)}$, a policy $\pi^{(i)}$, and a number $u\in[0,(1-\gamma)^{-1}]$ satisfying ${\boldsymbol{v}}^{*}-u\boldsymbol{1}\le {\boldsymbol{v}}^{(i)}\le {\boldsymbol{v}}^{\pi^{(i)}}$, halts in time $\tau$, and outputs a value function ${\boldsymbol{v}}^{(i+1)}$ and a policy $\pi^{(i+1)}$ satisfying \begin{align*} {\boldsymbol{v}}^{*}-\frac{u}{2}\cdot\boldsymbol{1}\le {\boldsymbol{v}}^{(i+1)}\le {\boldsymbol{v}}^{\pi^{(i+1)}} \le {\boldsymbol{v}}^{*} \end{align*} with probability at least $1-(1-\gamma)\cdot\epsilon\cdot\delta$ (over the randomness of the new samples given by the sampling oracle). Then the meta algorithm described in Algorithm~\ref{alg-meta}, given input $\mathcal{M}$ and the sampling oracle, halts in time $\tau \cdot \log(\epsilon^{-1}\cdot (1-\gamma)^{-1})$ and outputs a policy $\pi^{(R)}$ such that \[ {\boldsymbol{v}}^{*}-{\epsilon}\cdot\boldsymbol{1}\le {\boldsymbol{v}}^{(R)}\le {\boldsymbol{v}}^{\pi^{(R)}} \le {\boldsymbol{v}}^{*} \] with probability at least $1-\delta$ (over the randomness of all samples drawn from the sampling oracle). Moreover, if \textsc{HalfErr}~uses space $s$, then the meta algorithm uses space $s+O(|\mathcal{S}||\mathcal{A}|)$, and if each call of \textsc{HalfErr}~takes $m$ samples from the oracle, then the overall number of samples taken by Algorithm~\ref{alg-meta} is $m\cdot \log(\epsilon^{-1}\cdot (1-\gamma)^{-1})$. \end{proposition} The proof of this proposition is a straightforward application of conditional probability. \begin{proof}[Proof of Proposition~\ref{prop:meta}] The proof follows from a straightforward induction. For simplicity, denote $\beta = (1-\gamma)^{-1}$. In the meta-algorithm, the initialization is ${\boldsymbol{v}}^{(0)}= {\bf0}$ and $\pi^{(0)}$ is an arbitrary policy, so ${\boldsymbol{v}}^*- \beta\cdot \boldsymbol{1}\le {\boldsymbol{v}}^{(0)} \le {\boldsymbol{v}}^{\pi^{(0)}}$.
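As a quick illustration of the halving recursion (not part of the proof), after $R=\lceil\log_2(\beta/\epsilon)\rceil$ halvings of the initial gap $\beta=(1-\gamma)^{-1}$ the remaining gap is at most $\epsilon$. The sketch below is schematic only: `half_err` stands in for the \textsc{HalfErr} oracle and simply halves an abstract error bound.

```python
import math

def meta(half_err, beta, eps):
    """Run the halving subroutine until the error bound drops below eps."""
    R = math.ceil(math.log2(beta / eps))  # number of halving rounds
    gap = beta                            # initial bound: v* - v^(0) <= beta * 1
    for _ in range(R):
        gap = half_err(gap)
    return gap

# Schematic stand-in for HalfErr: each call halves the current error bound.
final_gap = meta(lambda g: g / 2, beta=100.0, eps=0.05)
assert final_gap <= 0.05
```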
By running the meta-algorithm, we obtain a sequence of value functions and policies $\{{\boldsymbol{v}}^{(i)}\}_{i=0}^R$ and $\{\pi^{(i)}\}_{i=0}^R$. Since each call of \textsc{HalfErr}~uses new samples from the oracle, this sequence satisfies the Markov property (given $({\boldsymbol{v}}^{(i)}, \pi^{(i)})$, the pair $({\boldsymbol{v}}^{(i+1)}, \pi^{(i+1)})$ is independent of $\{({\boldsymbol{v}}^{(j)}, \pi^{(j)})\}_{j=0}^{i-1}$). Thus \begin{align*} \Pr&\big[{\boldsymbol{v}}^*- 2^{-R}\beta\cdot \boldsymbol{1}\le {\boldsymbol{v}}^{(R)} \le {\boldsymbol{v}}^{\pi^{(R)}}\big] \\ &\ge \prod_{i=1}^{R}\Pr\big[{\boldsymbol{v}}^*- 2^{-i}\beta\cdot \boldsymbol{1}\le {\boldsymbol{v}}^{(i)} \le {\boldsymbol{v}}^{\pi^{(i)}} \,\big|\, {\boldsymbol{v}}^*- 2^{-i+1}\beta\cdot \boldsymbol{1}\le {\boldsymbol{v}}^{(i-1)} \le {\boldsymbol{v}}^{\pi^{(i-1)}}\big] \\ & \ge 1-\delta. \end{align*} Since $2^{-R}(1-\gamma)^{-1}\le \epsilon$, we conclude the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:dmdp1}] Our algorithm simply plugs Algorithm~\ref{alg-halfErr} in as the \textsc{HalfErr}~subroutine of Algorithm~\ref{alg-meta}. Correctness is guaranteed by Proposition~\ref{prop:meta} and Proposition~\ref{thm:alg 1}, and the running time guarantee follows from a straightforward calculation. \end{proof} \section{Extension to Finite Horizon} \label{sec:finite_horizon} In this section we show how to apply similar techniques to achieve improved sample complexities for solving finite-horizon MDPs given a generative model, and we prove that the sample complexity we achieve is optimal up to logarithmic factors. The finite-horizon problem is to compute an optimal non-stationary policy over a fixed time horizon $H$, i.e., a policy of the form $\pi(s, h)$ for $s \in \mathcal{S}$ and $h \in \{0, \ldots, H\}$, where the objective is the expected cumulative (un-discounted) reward for following this policy.
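For reference, the exact backward recursion that the sampled algorithm of this section approximates can be sketched as follows; the two-state, two-action MDP below is an arbitrary illustrative choice, not one from the text.

```python
import numpy as np

def backward_value_iteration(P, r, H):
    """Exact finite-horizon value iteration.

    P has shape (S, A, S), with P[s, a, s'] a transition probability;
    r has shape (S, A). Returns v_0 and the non-stationary policy pi[h, s].
    """
    S, A = r.shape
    v = np.zeros(S)                  # terminal condition: v_H := 0
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):     # backward recursion h = H-1, ..., 0
        Q = r + P @ v                # Q_h(s,a) = r(s,a) + sum_s' P(s'|s,a) v_{h+1}(s')
        pi[h] = np.argmax(Q, axis=1)
        v = Q[np.arange(S), pi[h]]
    return v, pi

# Illustrative MDP: in state 0, action 1 earns reward 1 and stays put;
# in state 1, action 1 moves (reward 0) to state 0, action 0 stays put.
P = np.zeros((2, 2, 2))
P[0, 0, 0] = P[0, 1, 0] = P[1, 0, 1] = P[1, 1, 0] = 1.0
r = np.array([[0.0, 1.0], [0.0, 0.0]])
v0, pi = backward_value_iteration(P, r, H=2)
assert np.allclose(v0, [2.0, 1.0])   # collect twice / move then collect once
```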
In classic value iteration, this is typically done by a backward recursion over times $H, H-1, \ldots, 0$. We show how to use the ideas in this paper to compute an $\epsilon$-approximate policy. As in the discounted case, it suffices to give an algorithm that halves the error of the value function at each stage. Our algorithm is presented in Algorithm~\ref{algH-halfErr}. To analyze it, we first provide an analogue of Lemma~\ref{lemma: emprical mean}. \begin{lemma}[Empirical Estimation Error] \label{lemma: emprical mean h} Let $\wt{\boldsymbol{w}}_h$ and $\wh{\boldsymbol{\sigma}}_h$ be computed in Line~\ref{alg1: computeH w} of Algorithm~\ref{algH-halfErr}. Recall that $\wt{\boldsymbol{w}}_h$ and $\wh{\boldsymbol{\sigma}}_h$ are empirical estimates of ${\boldsymbol{P}}{\boldsymbol{v}}_h$ and $\boldsymbol{\sigma}_{{\boldsymbol{v}}_h}={\boldsymbol{P}}{\boldsymbol{v}}_h^2 - ({\boldsymbol{P}}{\boldsymbol{v}}_h)^2$ using $m_1$ samples per $(s,a)$ pair. Then with probability at least $1-\delta$, for $L \defeq \log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})$ and every $h=1,2,\ldots, H$, we have {\small \begin{align} \label{eqn:estimateH pv} \big|{\wt{\boldsymbol{w}}_h - {\boldsymbol{P}}{\boldsymbol{v}}_h^{(0)}}\big|\le \sqrt{{2m_1^{-1}\boldsymbol{\sigma}_{{\boldsymbol{v}}_h^{(0)}}\cdot{L}}} + {2(3m_1)^{-1}\norm{{\boldsymbol{v}}_h^{(0)}}_{\infty} {L}} \end{align} } and {\small \begin{align} \label{eqn:estimateH sigma} \forall (s,a)\in \mathcal{S}\times \mathcal{A}:\quad \big|\wh{\boldsymbol{\sigma}}_h(s,a) - \boldsymbol{\sigma}_{{\boldsymbol{v}}_h^{(0)}}(s, a)\big| \le 4\norm{{\boldsymbol{v}}_h^{(0)}}_{\infty}^2 \cdot \sqrt{{2m_1^{-1}{L}}}. \end{align} } \end{lemma} \begin{proof} The proof is identical to that of Lemma~\ref{lemma: emprical mean}. \end{proof} An analogue of Lemma~\ref{lemma:bounds on g} is also presented here.
\begin{lemma}
\label{lemma:boundsH on g}
Let $\boldsymbol{g}_h^{(i)}$ be the estimate of ${\boldsymbol{P}}\big[{\boldsymbol{v}}_h^{(i)} - {\boldsymbol{v}}_h^{(0)}\big]$ defined in Line~\ref{alg1: computeH g} of Algorithm~\ref{algH-halfErr}. Then, conditioned on the event that $\norm{{\boldsymbol{v}}_h^{(i)} - {\boldsymbol{v}}_h^{(0)}}_{\infty}\le 2\epsilon$, with probability at least $1-\delta/H$,
\[
{\boldsymbol{P}} \big[{\boldsymbol{v}}_h^{(i)} - {\boldsymbol{v}}_h^{(0)}\big] - \frac{\epsilon}{4H}\cdot\boldsymbol{1}\le \boldsymbol{g}_h^{(i)} \le {\boldsymbol{P}} \big[{\boldsymbol{v}}_h^{(i)} - {\boldsymbol{v}}_h^{(0)}\big]
\]
provided appropriately chosen constants in Algorithm~\ref{algH-halfErr}.
\end{lemma}
\begin{proof}
The proof of this lemma is identical to that of Lemma~\ref{lemma:bounds on g} except that $(1-\gamma)^{-1}$ is replaced with $H$.
\end{proof}
Similarly, we can show the following improvement lemma.
\begin{lemma}
\label{lemma:inductionH lemma}
Let $\boldsymbol{Q}_h$ be the estimated $Q$-function of ${\boldsymbol{v}}_{h+1}$ in Line~\ref{alg: Hq-func} of Algorithm \ref{algH-halfErr}. Let $\boldsymbol{Q}_h^* = {\boldsymbol{r}} + {\boldsymbol{P}}_h {\boldsymbol{v}}^*_{h+1}$ be the optimal $Q$-function of the finite-horizon MDP. Let $\pi(\cdot, h)$ and ${\boldsymbol{v}}_h$ be estimated in iteration $h$, as defined in Lines~\ref{alg: Hv1} and \ref{alg: Hv2}. Let $\pi^*$ be an optimal policy for the finite-horizon MDP. For a policy $\pi$, let ${\boldsymbol{P}}_h^{\pi}\boldsymbol{Q}\in\mathbb{R}^{\mathcal{S}\times\mathcal{A}}$ be defined as $({\boldsymbol{P}}_h^{\pi}\boldsymbol{Q})(s,a) = \sum_{s'\in\mathcal{S}}{\boldsymbol{P}}_{s,a}(s')\boldsymbol{Q}(s',\pi(s',h))$. Suppose for all $h\in[H-1]$, ${\boldsymbol{v}}_h^{(0)}\le \mathcal{T}_{\pi^{(0)}(\cdot, h)} {\boldsymbol{v}}_{h+1}^{(0)}$. Let ${\boldsymbol{v}}_{H+1}\defeq \bf{0}$ and $\boldsymbol{Q}_{H+1}\defeq 0$.
Then, with probability at least $1- 2\delta$, for all $1\le h \le H$, ${\boldsymbol{v}}_h^{(0)}\le {\boldsymbol{v}}_h\le \mathcal{T}_{\pi(\cdot, h)} {\boldsymbol{v}}_{h+1} \le {\boldsymbol{v}}_{h}^*$, $\boldsymbol{Q}_h\le {\boldsymbol{r}}+{\boldsymbol{P}}_h {\boldsymbol{v}}_{h+1}$ and
{\small
\[
\boldsymbol{Q}^*_h - \boldsymbol{Q}_h \le {\boldsymbol{P}}^{\pi^*}_h\big[\boldsymbol{Q}^*_{h+1} - \boldsymbol{Q}_{h+1}\big] +\boldsymbol{\xi}_h,
\]
}
where the error vector $\boldsymbol{\xi}_h$ satisfies
\[
{\bf0}\le \boldsymbol{\xi}_h\le {(8H)^{-1}u}\cdot \boldsymbol{1} + 2\sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}^*_{h+1}}} +2\sqrt{2\alpha_1}u\cdot\boldsymbol{1}+ 16\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}_{h+1}}_\infty\cdot\boldsymbol{1} + ({4}/{3})\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}_{h+1}}_{\infty}\cdot\boldsymbol{1},
\]
and $\alpha_1=\log(8|\mathcal{S}||\mathcal{A}|H\delta^{-1})/m_1$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:inductionH lemma}]
By Lemma~\ref{lemma: emprical mean h}, for any $h=1, 2, \ldots, H$, with probability at least $1-\delta/H$,
\begin{align*}
|{\wt{\boldsymbol{w}}_h - {\boldsymbol{P}}{\boldsymbol{v}}_{h+1}^{(0)}}| \le \sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}_{h+1}^{(0)}}} + \frac{2}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}_{h+1}^{(0)}}_{\infty}\cdot\boldsymbol{1},
\end{align*}
and
\begin{align}
\label{eqn:varH}
\big|\wh{\boldsymbol{\sigma}}_{h+1}- \boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}_{h+1}}\big| \le 4\norm{{\boldsymbol{v}}^{(0)}_{h+1}}_{\infty}^2 \cdot \sqrt{2\alpha_1} \cdot\boldsymbol{1},
\end{align}
which we condition on. We have
\[
|{\wt{\boldsymbol{w}}_h - {\boldsymbol{P}}{\boldsymbol{v}}_{h+1}^{(0)}}| \le \sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}_{h+1}} + \Big(4\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}_{h+1}}_\infty + \frac{2}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}_{h+1}}_{\infty}\Big)\boldsymbol{1}.
\]
Thus
\begin{align}
\label{eqn:upper Hw}
\boldsymbol{w}_h = \wt{\boldsymbol{w}}_h - \sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}_{h+1}} - 4\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}_{h+1}}_\infty\boldsymbol{1} - \frac{2}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}_{h+1}}_{\infty}\boldsymbol{1} \le {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}_{h+1},
\end{align}
and
\begin{align*}
\boldsymbol{w}_{h}\ge {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}_{h+1} - 2\sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}_{h+1}} - \Big(8\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}_{h+1}}_\infty + \frac{4}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}^{(0)}_{h+1}}_{\infty}\Big)\boldsymbol{1} .
\end{align*}
By \eqref{eqn:varH} and Lemma~\ref{lemma: variance triangle}, we have
\[
\sqrt{\wh{\boldsymbol{\sigma}}_{h+1}} \le \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^{(0)}_{h+1}}} + 2 \norm{{\boldsymbol{v}}^{(0)}_{h+1}}_{\infty} (2\alpha_1)^{1/4}\boldsymbol{1} \le \sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}^{*}_{h+1}}} + u \boldsymbol{1}+ 2 \norm{{\boldsymbol{v}}_{h+1}^{(0)}}_{\infty} (2\alpha_1)^{1/4}\boldsymbol{1}.
\]
Hence
\begin{align}
\label{eqn:lower Hw}
\boldsymbol{w}_h\ge {\boldsymbol{P}}{\boldsymbol{v}}_{h+1}^{(0)} - 2\sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}_{h+1}^*}} -2\sqrt{2\alpha_1}u\boldsymbol{1}- 16\alpha_1^{3/4}\norm{{\boldsymbol{v}}_{h+1}^{(0)}}_\infty\boldsymbol{1} - \frac{4}{3}\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}_{h+1}^{(0)}}_{\infty}\boldsymbol{1}.
\end{align}
For the rest of the proof, we condition on the event that \eqref{eqn:upper Hw} and \eqref{eqn:lower Hw} hold for all $h=1, 2, \ldots, H$, which happens with probability at least $1-\delta$. Denote ${\boldsymbol{v}}_{H+1}^* = {\boldsymbol{v}}_{H+1} = {\boldsymbol{v}}_{H+1}^{(0)} ={\bf 0}$. Thus we have ${\boldsymbol{v}}_{H+1}^{(0)}\le {\boldsymbol{v}}_{H+1} \le {\boldsymbol{v}}_{H+1}^{*}$. Next we prove the lemma by induction on $h$.
Assume that for some $h$, with probability at least $1-(H-h)\delta/H$, the following holds for all $h' = h+1, h+2, \ldots, H$:
\[
{\boldsymbol{v}}^{(0)}_{h'} \le {\boldsymbol{v}}_{h'} \le {\boldsymbol{v}}^{*}_{h'},
\]
which we condition on. Next we show that the lemma statement holds for $h$ as well. By the definition of ${\boldsymbol{v}}_h$ (Lines~\ref{alg: Hv1} and \ref{alg: Hv2}),
\[
{\boldsymbol{v}}_{h}^{(0)}\le {\boldsymbol{v}}_h \quad\text{and}\quad {\boldsymbol{v}}(\boldsymbol{Q}_{h})\le {\boldsymbol{v}}_h.
\]
Furthermore, since ${\boldsymbol{v}}^{(0)}_{h+1}\le {\boldsymbol{v}}^*_{h+1} \le {\boldsymbol{v}}^{(0)}_{h+1} + u\boldsymbol{1}$, we have
\[
{\boldsymbol{v}}_{h+1}^*-{\boldsymbol{v}}_{h+1} \le {\boldsymbol{v}}^{*}_{h+1} - {\boldsymbol{v}}^{(0)}_{h+1} \le u\boldsymbol{1}.
\]
By Lemma~\ref{lemma:boundsH on g}, we have, with probability at least $1-\delta/H$,
\begin{align}
\label{eqn:bound Hg}
{\boldsymbol{P}} \big[{\boldsymbol{v}}_{h+1} - {\boldsymbol{v}}_{h+1}^{(0)}\big] - \frac{u}{8H}\cdot\boldsymbol{1}\le \boldsymbol{g}_h \le {\boldsymbol{P}} \big[{\boldsymbol{v}}_{h+1} - {\boldsymbol{v}}^{(0)}_{h+1}\big],
\end{align}
which we condition on for the rest of the proof. Thus we have
\[
\boldsymbol{Q}_h = {\boldsymbol{r}} + (\boldsymbol{w}_h + \boldsymbol{g}_h) \le {\boldsymbol{r}} + {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}_{h+1} + {\boldsymbol{P}}{\boldsymbol{v}}_{h+1} - {\boldsymbol{P}}{\boldsymbol{v}}^{(0)}_{h+1} = {\boldsymbol{r}} + {\boldsymbol{P}} {\boldsymbol{v}}_{h+1} \le \boldsymbol{Q}_h^*.
\]
To show ${\boldsymbol{v}}_h\le \mathcal{T}_{\pi(\cdot, h)} {\boldsymbol{v}}_{h+1}$, we notice that if for some $s$, $\pi(s,h)\neq \pi^{(0)}(s,h)$, then
\[{\boldsymbol{v}}_h(s) \le {\boldsymbol{r}}(s, \pi(s, h))+{\boldsymbol{P}}_{s, \pi(s,h)}^\top{\boldsymbol{v}}_{h+1} = (\mathcal{T}_{\pi(\cdot, h)} {\boldsymbol{v}}_{h+1})(s).
\]
On the other hand, if $\pi(s, h)= \pi^{(0)}(s, h)$, then
\[
{\boldsymbol{v}}_h(s) = {\boldsymbol{v}}_{h}^{(0)}(s) \le (\mathcal{T}_{\pi^{(0)}(\cdot, h)} {\boldsymbol{v}}_{h+1}^{(0)})(s) \le (\mathcal{T}_{\pi^{(0)}(\cdot, h)} {\boldsymbol{v}}_{h+1})(s) = (\mathcal{T}_{\pi(\cdot, h)} {\boldsymbol{v}}_{h+1})(s).
\]
This completes the induction step. Lastly, combining \eqref{eqn:lower Hw} and \eqref{eqn:bound Hg}, we have
\begin{align*}
\boldsymbol{Q}^*_h - \boldsymbol{Q}_h &= \boldsymbol{Q}^{*}_h - {\boldsymbol{r}} - (\boldsymbol{w}_h + \boldsymbol{g}_h) = {\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}_{h+1}^*) - (\boldsymbol{w}_h + \boldsymbol{g}_h) \\
&\le{\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}^*_{h+1}) - {\boldsymbol{P}}({\boldsymbol{v}}_{h+1} -{\boldsymbol{v}}_{h+1}^{(0)}) - {\boldsymbol{P}} {\boldsymbol{v}}^{(0)}_{h+1} + \boldsymbol{\xi}_h\\
&={\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}^*_{h+1}) - {\boldsymbol{P}}{\boldsymbol{v}}_{h+1} + \boldsymbol{\xi}_h,
\end{align*}
where
\[
\boldsymbol{\xi}_h \le {H^{-1}u}/{8}\cdot \boldsymbol{1} + 2\sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}_{h+1}^*}} +2\sqrt{2\alpha_1}u\cdot\boldsymbol{1}+ 16\alpha_1^{3/4}\norm{{\boldsymbol{v}}_{h+1}^{(0)}}_\infty\cdot\boldsymbol{1} + ({4}/{3})\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}_{h+1}^{(0)}}_{\infty}\cdot\boldsymbol{1},
\]
and $\alpha_1=\log(8|\mathcal{S}||\mathcal{A}|H\delta^{-1})/m_1$.
Moreover, since $ {\boldsymbol{v}}(\boldsymbol{Q}_{h+1})\le {\boldsymbol{v}}_{h+1}$, we obtain
\begin{align*}
\boldsymbol{Q}_h^* - \boldsymbol{Q}_h &\le{\boldsymbol{P}} {\boldsymbol{v}}(\boldsymbol{Q}_{h+1}^*) - {\boldsymbol{P}}{\boldsymbol{v}}(\boldsymbol{Q}_{h+1}) + \boldsymbol{\xi}_h \le {\boldsymbol{P}}^{\pi^*}_h\boldsymbol{Q}^*_{h+1} - {\boldsymbol{P}}^{\pi^*}_h\boldsymbol{Q}_{h+1} + \boldsymbol{\xi}_h,
\end{align*}
where $\pi^*$ is an arbitrary optimal policy and we use the fact that $\max_{a}\boldsymbol{Q}_{h}^{*}(s,a) = \boldsymbol{Q}_{h}^*(s,\pi^*(s, h))$. This completes the proof of the lemma.
\end{proof}
Furthermore, we show a lemma analogous to Lemma~\ref{lemma:variance bound}.
\begin{lemma}[Upper Bound on Variance]
\label{lemma:varianceH bound}
For any $\pi$, we have
\[
\bigg\|\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi}\bigg)\sqrt{{\boldsymbol{\sigma}}_{{\boldsymbol{v}}_{h'+1}^{\pi}}}\bigg\|_{\infty} \le H^{3/2}.
\]
\end{lemma}
\begin{proof}
First, by the Cauchy-Schwarz inequality, we have
\begin{align*}
\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi}\bigg) \sqrt{{\boldsymbol{\sigma}}_{{\boldsymbol{v}}_{h'+1}^{\pi}}}\le \sqrt{H\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi}\bigg){\boldsymbol{\sigma}}_{{\boldsymbol{v}}_{h'+1}^{\pi}}}.
\end{align*}
Next, by an argument similar to the proof of Lemma~\ref{lem:var_bell}, we can show that
\[
\bigg[\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi}\bigg){\boldsymbol{\sigma}}_{{\boldsymbol{v}}_{h'+1}^{\pi}}\bigg] (s) = \var\bigg[\sum_{t=h}^H r(s^t, \pi(s^t, t))\bigg|s^h = s\bigg] \le H^2.
\]
This completes the proof.
\end{proof}
We are now ready to present the guarantee of Algorithm~\ref{algH-halfErr}.
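The identity in the proof of Lemma~\ref{lemma:varianceH bound}, namely that the propagated per-step variances sum exactly to the variance of the cumulative reward, can be verified numerically on a small chain with a fixed policy (so a single transition matrix); the transition matrix and rewards below are synthetic, and the linear algebra is pure Python.

```python
# Exact check, on a synthetic two-state chain with a fixed policy, of
#   sum_{h'=h}^{H-1} P^(h'-h) sigma_{v_{h'+1}}  =  Var[ sum_{t=h}^H r(s^t) | s^h ].

def mat_vec(P, x):
    return [sum(P[s][t] * x[t] for t in range(len(x))) for s in range(len(P))]

P = [[0.9, 0.1], [0.4, 0.6]]
r = [0.0, 1.0]
H = 6

v = {H + 1: [0.0, 0.0]}                 # v_{H+1} = 0, then v_h = r + P v_{h+1}
for k in range(H, 0, -1):
    v[k] = [r[s] + y for s, y in enumerate(mat_vec(P, v[k + 1]))]

def sigma(vec):                         # per-state one-step variance of vec
    m, m2 = mat_vec(P, vec), mat_vec(P, [x * x for x in vec])
    return [a - b * b for a, b in zip(m2, m)]

h = 1
lhs = [0.0, 0.0]                        # sum of P^(h'-h) sigma_{v_{h'+1}}
for hp in range(h, H):
    term = sigma(v[hp + 1])
    for _ in range(hp - h):
        term = mat_vec(P, term)
    lhs = [a + b for a, b in zip(lhs, term)]

var = [0.0, 0.0]                        # Var_H = 0: r(s^H) is fixed given s^H
for hp in range(H - 1, h - 1, -1):      # Var_hp = P Var_{hp+1} + sigma_{v_{hp+1}}
    var = [a + b for a, b in zip(mat_vec(P, var), sigma(v[hp + 1]))]
```

Both sides agree to machine precision, and both are at most $H^2$, the crude variance bound used in the proof; this law-of-total-variance identity is what makes the $\sqrt{H^3}$ (rather than $H^2$) aggregate error term possible.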
\begin{proposition}
\label{thm:algH 1}
On input value vectors ${\boldsymbol{v}}_1^{(0)}, {\boldsymbol{v}}_2^{(0)}, \ldots, {\boldsymbol{v}}_H^{(0)}$, a policy $\pi^{(0)}$, and parameters $u\in(0, \beta], \delta\in(0,1)$ such that ${\boldsymbol{v}}_h^{(0)}\le \mathcal{T}_{\pi^{(0)}(\cdot,h)}{\boldsymbol{v}}_{h+1}^{(0)}$ for all $h\in[H-1]$, and ${\boldsymbol{v}}_h^{(0)}\le {\boldsymbol{v}}_h^* \le {\boldsymbol{v}}_h^{(0)} + u \boldsymbol{1}$, Algorithm~\ref{algH-halfErr} halts in time
$
O[u^{-2}\cdot H^4|\mathcal{S}||\mathcal{A}|\cdot \log(|\mathcal{S}||\mathcal{A}|\delta^{-1}H u^{-1})]
$
and outputs ${\boldsymbol{v}}_1, {\boldsymbol{v}}_2, \ldots, {\boldsymbol{v}}_{H}$ and $\pi:\mathcal{S}\times[H]\rightarrow \mathcal{A}$ such that
\begin{align*}
\forall h\in [H]: \quad {\boldsymbol{v}}_h \le \mathcal{T}_{\pi(\cdot, h)}({\boldsymbol{v}}_{h+1})\quad\text{and}\quad {\bf 0}\le {\boldsymbol{v}}_h^*-{\boldsymbol{v}}_h \le (u/2)\cdot\boldsymbol{1}
\end{align*}
with probability at least $1-\delta$, provided appropriately chosen constants $c_1, c_2$ and $c_3$ in Algorithm~\ref{algH-halfErr}. Moreover, the algorithm uses
$
O[u^{-2}\cdot H^3|\mathcal{S}||\mathcal{A}|\cdot \log(|\mathcal{S}||\mathcal{A}|\delta^{-1}H u^{-1})]
$
samples from the sampling oracle.
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{thm:algH 1}]
Recall that we are able to sample a state from each ${\boldsymbol{P}}_{s,a}$ in time $O(1)$. Let $R=\lceil c_1H\ln[H u^{-1}]\rceil$, $m_1= c_2H^3u^{-2}\cdot\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})$ and $m_2= c_3H^2\cdot\log[2R|\mathcal{S}||\mathcal{A}|\delta^{-1}]$ for the constants $c_1, c_2$ and $c_3$ required in Algorithm~\ref{algH-halfErr}. In the following proof, we set $c_1 = 4, c_2=8192, c_3=128$.
By Lemma~\ref{lemma:inductionH lemma}, with probability at least $1-2\delta$, for each $1\le h\le H$ we have ${\boldsymbol{v}}_h^{(0)}\le {\boldsymbol{v}}_h\le \mathcal{T}_{\pi(\cdot, h)} {\boldsymbol{v}}_{h+1}$ and $\boldsymbol{Q}_h\le {\boldsymbol{r}}+{\boldsymbol{P}} {\boldsymbol{v}}_{h+1}$, and
\[
\boldsymbol{Q}_h^* - \boldsymbol{Q}_h \le {\boldsymbol{P}}_h^{\pi^*}\big[\boldsymbol{Q}_{h+1}^* - \boldsymbol{Q}_{h+1}\big] +\boldsymbol{\xi}_{h},
\]
where
\[
\boldsymbol{\xi}_h\le {H^{-1}{u}}/{8}\cdot \boldsymbol{1} + 2\sqrt{2\alpha_1\boldsymbol{\sigma}_{{\boldsymbol{v}}_{h+1}^*}} +2\sqrt{2\alpha_1}{u}\cdot\boldsymbol{1}+ 16\alpha_1^{3/4}\norm{{\boldsymbol{v}}_{h+1}^{(0)}}_\infty\cdot\boldsymbol{1} + ({4}/{3})\cdot\alpha_1\cdot \norm{{\boldsymbol{v}}_{h+1}^{(0)}}_{\infty}\cdot\boldsymbol{1},
\]
and $\alpha_1=\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})/m_1$. Notice that ${\boldsymbol{v}}_H^{(0)} = {\boldsymbol{v}}_H^* = {\boldsymbol{v}}({\boldsymbol{r}})$, and thus ${\boldsymbol{v}}_H-{\boldsymbol{v}}_H^* = {\bf 0}$. Solving the recursion, we obtain
\begin{align*}
\boldsymbol{Q}_h^* - \boldsymbol{Q}_h &\le \sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi^*}\bigg)\boldsymbol{\xi}_{h'}.
\end{align*}
The next step is the key to the improvement in our analysis. We further apply the bound in Lemma~\ref{lemma:varianceH bound}, given by
\[
\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi^*}\bigg)\sqrt{\boldsymbol{\sigma}_{{\boldsymbol{v}}_{h'+1}^*}}\le H^{3/2}\cdot\boldsymbol{1}.
\]
With $\norm{ \sum_{h'=h}^{H-1}\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi^*}\boldsymbol{1}}_{\infty}\le H -h + 1$ and $\norm{{\boldsymbol{v}}^{(0)}_h}_\infty\le H$, we have
\begin{align*}
\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi^*}\bigg)\boldsymbol{\xi}_{h'} &\le \left[\frac{{u}}{8} + 4\sqrt{2\alpha_1H^{3}} + {2H\sqrt{2\alpha_1}{u}} + {16H^2\alpha_1^{3/4}} +\frac{4\alpha_1H^2}{3}\right]\boldsymbol{1}\\
&\le \bigg[\frac{{u}}{8} + \frac{{u}}{16} + \frac{\sqrt{H^{-1}}{u}}{32} + 16\bigg(\frac{H^{-3}{u}^2}{32\cdot256\cdot(H)^{-8/3}}\bigg)^{3/4} + \frac{4H^{-1}{u}^2}{24\cdot 256}\bigg]\cdot\boldsymbol{1}\\
&\le \frac{{u}}{4}\cdot \boldsymbol{1},
\end{align*}
provided
\[
\alpha_1=\frac{\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})}{m_1} = c_2^{-1}H^{-3}u^{2}\le\frac{ H^{-3}{u}^2}{32\cdot 256}.
\]
Since ${\boldsymbol{v}}(\boldsymbol{Q}_h)\le{\boldsymbol{v}}_h$, we have
\[
{\boldsymbol{v}}^*_h - {\boldsymbol{v}}_h \le {\boldsymbol{v}}^*_h -{\boldsymbol{v}}(\boldsymbol{Q}_h) \le\sum_{h'=h}^{H-1}\bigg(\prod_{i=h+1}^{h'}{\boldsymbol{P}}_i^{\pi^*}\bigg)\boldsymbol{\xi}_{h'} \le \frac{{u}}{2}\cdot\boldsymbol{1}.
\]
This completes the proof of correctness. It remains to bound the time complexity. The initialization stage costs $O( m_1)$ time per $(s,a)$ pair per stage $h$. Each iteration costs $O(m_2)$ time per $(s,a)$ pair. We thus have total time complexity
\[
O( Hm_1 + H m_2)\cdot|\mathcal{S}||\mathcal{A}| = O\bigg[{H^4\cdot |\mathcal{S}||\mathcal{A}|\cdot \log\frac{H|\mathcal{S}||\mathcal{A}|}{\delta\cdot{u}}\cdot\frac{1}{{u}^2}}\bigg].
\]
The total number of samples used is
\[
O( m_1 + H m_2)\cdot|\mathcal{S}||\mathcal{A}| = O\bigg[{H^3\cdot |\mathcal{S}||\mathcal{A}|\cdot \log\frac{H|\mathcal{S}||\mathcal{A}|}{\delta\cdot{u}}\cdot\frac{1}{{u}^2}}\bigg].
\]
This completes the proof.
\end{proof}
\begin{algorithm}\caption{FiniteHorizonRandomQVI\label{algH-halfErr}}
\begin{algorithmic}[1]
\State \textbf{Input:} $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{r}}, {\boldsymbol{P}})$ with a sampling oracle, ${\boldsymbol{v}}^{(0)}_1, {\boldsymbol{v}}^{(0)}_2, \ldots, {\boldsymbol{v}}^{(0)}_H, \pi^{(0)}: \mathcal{S}\times[H]\rightarrow\mathcal{A}, u, \delta\in(0,1)$;
\State \emph{\textbackslash \textbackslash $u$ is the initial error, $\pi^{(0)}$ is the input policy, and $\delta$ is the error probability}
\State\textbf{Output:} ${\boldsymbol{v}}_1, {\boldsymbol{v}}_2, \ldots, {\boldsymbol{v}}_H, \pi$
\State
\State\textbf{INITIALIZATION:}
\State Let $m_1 \gets{c_1H^3u^{-2}{\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})} }{}$ for constant $c_1$;
\State Let $m_2\gets {c_2H^{2}\log[2H|\mathcal{S}||\mathcal{A}|\delta^{-1}]}$ for constant $c_2$;
\State Let $\alpha_1\gets m_1^{-1}{\log(8|\mathcal{S}||\mathcal{A}|\delta^{-1})}$;
\State For each $(s, a)\in \mathcal{S}\times\mathcal{A}$, sample independent samples $s_{s,a}^{(1)}, s_{s,a}^{(2)}, \ldots, s_{s,a}^{(m_1)}$ from ${\boldsymbol{P}}_{s,a}$;
\State Initialize $\boldsymbol{w}_h=\wt{\boldsymbol{w}}_h = \wh{\boldsymbol{\sigma}}_h=\boldsymbol{Q}^{(0)}_h \gets {\bf0}_{\mathcal{S} \times \mathcal{A}}$ for all $h\in[H]$, and $i\gets 0$; \label{alg1: computeH w}
\State Let ${\boldsymbol{v}}_{H+1}\gets \bf{0}$ and $\boldsymbol{Q}_{H+1}\gets \bf{0}$;
\For{each $(s, a)\in \mathcal{S}\times\mathcal{A}$, $h\in [H]$}
\State \emph{\textbackslash \textbackslash Compute empirical estimates of ${\boldsymbol{P}}_{s,a}^{\top}{\boldsymbol{v}}_h^{(0)}$ and $\boldsymbol{\sigma}_{{\boldsymbol{v}}_{h}^{(0)}}(s,a)$}
\State Let $\wt{\boldsymbol{w}}_h(s,a) \gets \frac{1}{m_1} \sum_{j=1}^{m_1} {\boldsymbol{v}}^{(0)}_h(s_{s,a}^{(j)})$
\State Let
$\wh{\boldsymbol{\sigma}}_h(s,a)\gets \frac{1}{m_1} \sum_{j=1}^{m_1}({\boldsymbol{v}}^{(0)}_h)^2(s_{s,a}^{(j)}) - \wt{\boldsymbol{w}}_h^2(s,a)$
\State
\State \emph{\textbackslash \textbackslash Shift the empirical estimate to have one-sided error}
\State $\boldsymbol{w}_h(s, a) \gets \wt{\boldsymbol{w}}_h(s,a) - \sqrt{2\alpha_1\wh{\boldsymbol{\sigma}}_h(s,a)} - 4\alpha_1^{3/4}\norm{{\boldsymbol{v}}^{(0)}_h}_\infty - (2/3)\alpha_1\norm{{\boldsymbol{v}}^{(0)}_h}_{\infty}$
\EndFor
\State
\State\textbf{REPEAT:} \emph{\textbackslash \textbackslash successively improve}
\For{$h=H, H-1$ to $1$}
\State \emph{\textbackslash \textbackslash Compute ${\boldsymbol{P}}_{s,a}^\top \big[{\boldsymbol{v}}_h - {\boldsymbol{v}}_h^{(0)}\big]$ with one-sided error}
\State\label{alg: Hv1} Let $\wt{{\boldsymbol{v}}}_h \gets {\boldsymbol{v}}(\boldsymbol{Q}_{h+1})$, $\wt{\pi}(\cdot, h)\gets {\pi}(\cdot, h)\gets \pi(\boldsymbol{Q}_{h+1})$, and ${\boldsymbol{v}}_h\gets \wt{{\boldsymbol{v}}}_h$;
\State\label{alg: Hv2} For each $s\in \mathcal{S}$, if $\wt{{\boldsymbol{v}}}_{h}(s)\le {\boldsymbol{v}}^{(0)}_{h}(s)$, then ${\boldsymbol{v}}_h(s)\gets {\boldsymbol{v}}^{(0)}_{h}(s)$ and $\pi(s,h)\gets \pi^{(0)}(s,h)$;
\State For each $(s, a)\in \mathcal{S}\times\mathcal{A}$, sample independent samples $\wt{s}_{s,a}^{(1)}, \wt{s}_{s,a}^{(2)}, \ldots, \wt{s}_{s,a}^{(m_2)}$ from ${\boldsymbol{P}}_{s,a}$;
\State \label{alg1: computeH g} Let $\boldsymbol{g}_{h}(s,a)\gets {m_2^{-1}}\sum_{j=1}^{m_2} \big[{\boldsymbol{v}}_{h}(\wt{s}_{s,a}^{(j)}) - {\boldsymbol{v}}_h^{(0)}(\wt{s}_{s,a}^{(j)}) \big]- H^{-1}u/8$;
\State
\State \emph{\textbackslash \textbackslash Improve $\boldsymbol{Q}_h$:}
\State \label{alg: Hq-func} $\boldsymbol{Q}_h\gets
{\boldsymbol{r}} + \boldsymbol{w}_h+\boldsymbol{g}_h$;
\EndFor
\State \textbf{return} ${\boldsymbol{v}}_1, {\boldsymbol{v}}_2, \ldots, {\boldsymbol{v}}_H, \pi$.
\end{algorithmic}
\end{algorithm}
We can then use our meta-algorithm and obtain the following theorem.
\begin{theorem}
Let $\mathcal{M}=(\mathcal{S}, \mathcal{A}, {\boldsymbol{P}}, {\boldsymbol{r}}, H)$ be an $H$-MDP with a sampling oracle. Suppose we can sample a state from each probability vector ${\boldsymbol{P}}_{s,a}$ within time $O(1)$. Then there exists an algorithm that runs in time
\[
O\bigg[{\frac{1}{\epsilon^2}}\cdot{H^4|\mathcal{S}||\mathcal{A}|}\cdot \log\frac{H|\mathcal{S}||\mathcal{A}|}{\delta\cdot\epsilon}\cdot\log\frac{H}{\epsilon}\bigg]
\]
and outputs a policy $\pi$ such that, with probability at least $1-\delta$,
\[
\forall h\in[H]:\quad{\boldsymbol{v}}_h^* - \epsilon\boldsymbol{1}\le {\boldsymbol{v}}_h^{\pi} \le {\boldsymbol{v}}^*_h,
\]
where ${\boldsymbol{v}}_h^{*}$ is the optimal value of $\mathcal{M}$ at stage $h$. Moreover, the number of samples used by the algorithm is
\[
O\bigg[{\frac{1}{\epsilon^2}}\cdot{H^3|\mathcal{S}||\mathcal{A}|}\cdot \log\frac{H|\mathcal{S}||\mathcal{A}|}{\delta\cdot\epsilon}\cdot\log\frac{H}{\epsilon}\bigg].
\]
\end{theorem}

\subsection{Sample Lower Bound On $H$-MDP}
In this section we show that the sample complexity obtained by the algorithm in the last section is essentially tight. Our proof idea is simple: we reduce the $H$-MDP problem to a discounted MDP problem. If there is an algorithm that solves an $H$-MDP to obtain an $\epsilon$-optimal value, it also gives a value function for the discounted MDP. Therefore, the lower bound for solving $H$-MDPs is inherited from that for discounted MDPs. The formal guarantee is presented in the following theorem.
\begin{theorem}
Let $\mathcal{S}$ and $\mathcal{A}$ be finite sets of states and actions.
Let $H>0$ be a positive integer and $\epsilon\in (0,1/2)$ be an error parameter. Let $\mathcal{K}$ be an algorithm that, on input an $H$-MDP $\mathcal{M}\defeq(\mathcal{S}, \mathcal{A}, P, {\boldsymbol{r}})$ with a sampling oracle, outputs a value function ${\boldsymbol{v}}_1$ for the first stage such that $\|{\boldsymbol{v}}_1 - {\boldsymbol{v}}^*_1\|_{\infty}\le \epsilon$ with probability at least $0.9$. Then $\mathcal{K}$ calls the sampling oracle at least $\Omega(H^{3}\epsilon^{-2}|\mathcal{S}||\mathcal{A}|/\log\epsilon^{-1})$ times on some input $P$ and ${\boldsymbol{r}}\in [0,1]^{\mathcal{S}}$.
\end{theorem}
\begin{proof}
Let $s_0\in \mathcal{S}$ be a state, and let $\mathcal{S}' = \mathcal{S}\backslash\{s_0\}$. Let $\gamma\in (0, 1)$ be such that $(1-\gamma)^{-1}\log\epsilon^{-1} \le H$. Suppose we have a DMDP $\mathcal{M}' = (\mathcal{S}', \mathcal{A}, P', \gamma, {\boldsymbol{r}}')$ with a sampling oracle. Let ${\boldsymbol{v}}^{*'}$ be the optimal value function of $\mathcal{M}'$. Note that ${\boldsymbol{v}}^{*'}\in \mathbb{R}^{\mathcal{S}'}$. We will construct, in the next paragraph, an $H$-MDP $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, H, {\boldsymbol{r}})$ with first-stage value ${\boldsymbol{v}}_1^*$ such that $\|{\boldsymbol{v}}_1^*|_{\mathcal{S}'}-{\boldsymbol{v}}^{*'}\|_{\infty}\le \epsilon$. Therefore, an $\epsilon$-approximation of ${\boldsymbol{v}}_1^*$ gives a $2\epsilon$-approximation of ${\boldsymbol{v}}^{*'}$. It follows that $\mathcal{K}$ can be used to obtain an $\epsilon$-approximate value ${\boldsymbol{v}}_1$ for ${\boldsymbol{v}}_1^*$ of $\mathcal{M}$, and thus $\mathcal{K}$ inherits the lower bound for obtaining a $(2\epsilon)$-approximate value for $\gamma$-DMDPs.
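The embedding used in this reduction, which is spelled out in the next paragraph of the proof, can be sketched numerically: adjoin an absorbing zero-reward state $s_0$, scale the DMDP transitions by $\gamma$, and compare the $H$-step undiscounted value with the discounted value; they agree up to the geometric truncation error $\gamma^{H+1}/(1-\gamma)$. All numbers below are synthetic, and a single action is used so that the values come from plain linear recursions.

```python
# Embed a two-state discounted MDP (one action) into an (H+1)-step undiscounted
# chain with an absorbing state s0, following the construction in the proof.
gamma, H = 0.9, 120
Pp = [[0.7, 0.3], [0.2, 0.8]]           # P' of the DMDP (synthetic)
rp = [1.0, 0.5]                         # r' with entries in [0, 1]

vd = [0.0, 0.0]                         # discounted value: v = r' + gamma P' v
for _ in range(2000):
    vd = [rp[s] + gamma * sum(Pp[s][t] * vd[t] for t in range(2))
          for s in range(2)]

# States (s0, s1', s2'): from s in S', move to s0 w.p. 1-gamma, otherwise
# follow gamma * P'; s0 is absorbing with zero reward.
P = [[1.0, 0.0, 0.0],
     [1 - gamma, gamma * Pp[0][0], gamma * Pp[0][1]],
     [1 - gamma, gamma * Pp[1][0], gamma * Pp[1][1]]]
r = [0.0, rp[0], rp[1]]

v = [0.0, 0.0, 0.0]                     # v_{H+1} = 0; backup stages H, ..., 0
for _ in range(H + 1):
    v = [r[s] + sum(P[s][t] * v[t] for t in range(3)) for s in range(3)]

trunc = gamma ** (H + 1) / (1 - gamma)  # geometric truncation error, about 3e-5
```

Since $(1-\gamma)^{-1}\log\epsilon^{-1}\le H$ makes this truncation error at most $\epsilon$, the first-stage value of the horizon-$H$ chain is an $\epsilon$-accurate proxy for the discounted value, which is exactly what the reduction needs.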
For $\mathcal{M}$, in each state $s\in \mathcal{S}'$, for any action there is probability $(1-\gamma)$ of transiting to $s_0$ and probability $\gamma$ of following the original transitions of $\mathcal{M}'$; the state $s_0$ transits to itself with probability $1$ no matter what action is taken. Formally, for each pair of states $s,s'\in \mathcal{S}'$ and each $a\in \mathcal{A}$, $P(s'|s, a) = \gamma\cdot P'(s'|s,a)$ and $P(s_0|s, a) = (1-\gamma)$; $P(s'|s_0, a) = 0$ and $P(s_0|s_0, a) = 1$. For ${\boldsymbol{r}}$, we set ${\boldsymbol{r}}(s_0, \cdot) ={\bf0}$ and ${\boldsymbol{r}}(s, \cdot) = {\boldsymbol{r}}'(s, \cdot)$ for $s\in \mathcal{S}'$. It remains to show that $\|{\boldsymbol{v}}_1^*|_{\mathcal{S}'} - {\boldsymbol{v}}^{*'}\|_\infty\le \epsilon$. First we note that ${\boldsymbol{v}}({\boldsymbol{r}}) = {\boldsymbol{v}}_{H}^*$ and ${\boldsymbol{v}}_{H}^*|_{\mathcal{S}'} \le {\boldsymbol{v}}^{*'}$. Then, by monotonicity of the $\mathcal{T}$ operator, we have, for all $h\in [H-1]$ and $s\in\mathcal{S}'$,
\begin{align*}
{\boldsymbol{v}}_{h}^*|_{\mathcal{S}'}(s) = \max_a[{\boldsymbol{r}}'(s, a) + \gamma{\boldsymbol{P}}_{s,a}^{'\top} {\boldsymbol{v}}_{h+1}^*] \le {\boldsymbol{v}}^{*'}(s).
\end{align*}
In particular, ${\boldsymbol{v}}_{1}^*|_{\mathcal{S}'}\le {\boldsymbol{v}}^{*'}$. Since the optimal policy $\pi^{*'}$ of $\mathcal{M}'$ can be used as a (possibly suboptimal) policy for the $H$-MDP, we have
\[
{\boldsymbol{v}}^{*'} - \epsilon\cdot \boldsymbol{1}\le \bigg[1 + \gamma{\boldsymbol{P}}_{\pi^{*'}} + \gamma^2{\boldsymbol{P}}_{\pi^{*'}}^2+\cdots + \gamma^{H}\cdot {\boldsymbol{P}}_{\pi^{*'}}^{H}\bigg]{\boldsymbol{r}}^{\pi^{*'}} \le {\boldsymbol{v}}_1^*|_{\mathcal{S}'}.
\]
This completes the proof.
\end{proof}
The above lower bound together with our algorithm also implies a sample lower bound for computing an $\epsilon$-optimal policy.
\begin{corollary}
Let $\mathcal{S}$ and $\mathcal{A}$ be finite sets of states and actions. Let $H>0$ be a positive integer and $\epsilon\in (0,1/2)$ be an error parameter.
Let $\mathcal{K}$ be an algorithm that, on input an $H$-MDP $\mathcal{M}:=(\mathcal{S}, \mathcal{A}, P, {\boldsymbol{r}})$ with a sampling oracle, outputs a policy $\pi: \mathcal{S}\times[H]\rightarrow \mathcal{A}$ such that $\forall h: \|{\boldsymbol{v}}^{\pi}_h - {\boldsymbol{v}}^*_h\|_{\infty}\le \epsilon$ with probability at least $0.9$. Then $\mathcal{K}$ calls the sampling oracle at least $\Omega(H^{3}\epsilon^{-2}|\mathcal{S}||\mathcal{A}|/\log\epsilon^{-1})$ times on a worst-case input $P$ and ${\boldsymbol{r}}\in [0,1]^{\mathcal{S}}$.
\end{corollary}
\end{document}
\begin{document}
\title{Rohlin actions of finite groups on the Razak-Jacelon algebra}
\author{Norio Nawata}
\address{Department of Educational Collaboration, Osaka Kyoiku University, 4-698-1 Asahigaoka, Kashiwara, Osaka, 582-8582, Japan}
\email{[email protected]}
\keywords{Stably projectionless C$^*$-algebra; Rohlin property; Kirchberg-Phillips type theorem}
\subjclass[2010]{Primary 46L55, Secondary 46L35; 46L40}
\thanks{This work was supported by JSPS KAKENHI Grant Number 16K17614}
\begin{abstract}
Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be a strongly outer action of a finite group $G$ on $A$. In this paper, we show that $\alpha\otimes \mathrm{id}$ on $A\otimes\mathcal{W}$ has the Rohlin property, where $\mathcal{W}$ is the Razak-Jacelon algebra. Combining this result with the recent classification results and our previous result, we see that such actions are unique up to conjugacy.
\end{abstract}
\maketitle

\section{Introduction}
Let $\mathcal{O}_2$ be the Cuntz algebra generated by two isometries. It is known that $\mathcal{O}_2$ is a simple separable unital nuclear purely infinite C$^*$-algebra, and is $KK$-equivalent to $\{0\}$. Kirchberg and Phillips showed in \cite{KP} that a simple separable unital nuclear C$^*$-algebra $B$ is isomorphic to $\mathcal{O}_2$ if and only if $B$ has an asymptotically central inclusion of $\mathcal{O}_2$. In particular, if $A$ is a simple separable unital nuclear C$^*$-algebra, then $A\otimes\mathcal{O}_2$ is isomorphic to $\mathcal{O}_2$. It is known that $\mathcal{O}_2$ plays an important role in the classification of nuclear C$^*$-algebras (see, for example, \cite{G2} and \cite{Ror1}). Let $\mathcal{W}$ be the Razak-Jacelon algebra studied in \cite{J}, a certain simple separable nuclear stably projectionless C$^*$-algebra having trivial $K$-groups, a unique tracial state and no unbounded traces.
Note that $\mathcal{W}$ is $KK$-equivalent to $\{0\}$ and $\mathcal{O}_2$. Hence we may regard $\mathcal{W}$ as a stably finite analogue of $\mathcal{O}_2$. Combining Elliott, Gong, Lin and Niu's result \cite{EGLN} with Castillejos and Evington's result \cite{CE} (see also \cite{CETWW}), we see that if $A$ is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, then $A\otimes\mathcal{W}$ is isomorphic to $\mathcal{W}$. We refer the reader to \cite{EGLN0}, \cite{EGLN} (see also \cite{EN} and \cite{GL}) and \cite{GL2} for recent progress in the classification of stably projectionless C$^*$-algebras. In the theory of operator algebras, the classification of group actions is one of the most fundamental problems and has a long history. There exists a complete classification of actions of countable amenable groups on approximately finite dimensional (AFD) factors. Although there have been some successes in the classification of group actions on ``classifiable'' C$^*$-algebras, the classification of outer actions of countable amenable groups on ``classifiable'' C$^*$-algebras is far from complete because of $K$-theoretical obstructions. We refer the reader to \cite{I} and the references given there for details and results on the classification of group actions on operator algebras. We shall review only some results that are directly related to this paper. Connes \cite{C3} classified finite cyclic group actions on the AFD factor $\mathcal{R}_0$ of type II$_1$ up to conjugacy. More generally, Jones \cite{Jones} classified finite group actions on $\mathcal{R}_0$. In particular, outer actions of a finite group on $\mathcal{R}_0$ are unique up to conjugacy. In \cite{I1}, Izumi introduced the Rohlin property for finite group actions on unital C$^*$-algebras and showed an equivariant version of the Kirchberg-Phillips type theorem for finite group actions on $\mathcal{O}_2$.
Indeed, he characterized Rohlin actions on $\mathcal{O}_2$ by using the fixed point subalgebra of the central sequence C$^*$-algebra of $\mathcal{O}_2$ and showed that if $\alpha$ is an outer action of a finite group $G$ on a simple separable unital nuclear C$^*$-algebra $A$, then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{O}_2$ has the Rohlin property. In particular, such actions are unique up to conjugacy. Note that Izumi also showed that there exist uncountably many mutually non-conjugate outer actions of $\mathbb{Z}_2$ on $\mathcal{O}_2$. Also, Goldstein and Izumi obtained an equivariant Kirchberg-Phillips type result for finite group actions on $\mathcal{O}_{\infty}$ in \cite{GI}. Remarkably, Szab\'o generalized Izumi's result to countable amenable group actions in \cite{Sza4}. He showed that countable amenable group outer actions on $\mathcal{O}_2$ that equivariantly absorb the trivial action on $\mathcal{O}_2$ are unique up to strong cocycle conjugacy. Note that Szab\'o considered more general settings and obtained results for strongly self-absorbing C$^*$-dynamical systems. See \cite{Sza3}, \cite{Sza1}, \cite{Sza2}, \cite{Sza4}, \cite{Sza5} and \cite{Sza6}. In this paper, we shall consider an equivariant Kirchberg-Phillips type result for finite group actions on $\mathcal{W}$. Indeed, we shall show that if $\alpha$ is a strongly outer action of a finite group $G$ on a simple separable nuclear C$^*$-algebra $A$ with a unique tracial state and no unbounded traces, then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ has the Rohlin property (Theorem \ref{thm:main}). Since the author showed that Rohlin actions of a finite group on $\mathcal{W}$ are unique up to conjugacy in \cite{Na4}, we see that such actions are unique up to conjugacy by Elliott, Gong, Lin and Niu's result and Castillejos and Evington's result. Indeed, we obtain the following theorem. 
\begin{mainthm} (Corollary \ref{cor:main}) \ \\ Let $A$ and $B$ be simple separable nuclear C$^*$-algebras with a unique tracial state and no unbounded traces, and let $\alpha$ and $\beta$ be strongly outer actions of a finite group $G$ on $A$ and $B$, respectively. Then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ is conjugate to $\beta\otimes\mathrm{id}$ on $B\otimes\mathcal{W}$. \end{mainthm} Our main result (Theorem \ref{thm:main}) is shown by using a cohomology vanishing type result (Lemma \ref{lem:cohomology}). The proof of Lemma \ref{lem:cohomology} is based on Connes' $2\times 2$ matrix trick in \cite[Corollary 2.6]{C3}. For Connes' $2\times 2$ matrix trick, we need comparison theory for projections in the fixed point subalgebra $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ of the central sequence C$^*$-algebra of $A\otimes\mathcal{W}$. We obtain this as a corollary of a classification up to unitary equivalence of certain normal elements in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. This classification is based on arguments in \cite{Na3}, where the author classified certain unitary elements and projections in $F(\mathcal{W})$ up to unitary equivalence. This paper is organized as follows. In Section \ref{sec:Pre}, we collect notations, definitions and some results. In Section \ref{sec:target} and Section \ref{sec:stable-uniqueness}, we show a variant of \cite[Corollary 3.8]{Na3}, which is a main technical tool in this paper. In particular, we introduce a (non-separable) C$^*$-algebra $\mathcal{B}^{\gamma}$, and show that $\mathcal{B}^{\gamma}$ has strict comparison (Proposition \ref{pro:main-section3}) in Section \ref{sec:target}. Note that $\mathcal{B}^{\gamma}$ is a target algebra of a (natural) homomorphism from $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}\otimes\mathcal{W}$. The proof of Proposition \ref{pro:main-section3} is essentially based on arguments in \cite{MS}, \cite{MS2} and \cite{MS3}.
In particular, it is important to consider the property (SI) and the weak Rohlin property. These concepts were introduced by Sato in his pioneering work \cite{Sa0} and \cite{Sa} (see also \cite{Kis1}). We refer the reader to \cite{Sa1} for recent progress on arguments of this type. Section \ref{sec:stable-uniqueness} is essentially based on arguments in \cite{EN} (see \cite[Section 3]{Na3}). In Section \ref{sec:normal}, we classify certain normal elements in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ up to unitary equivalence (Theorem \ref{thm:classification-normal}), and show a comparison theorem for certain projections in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ (Corollary \ref{cor:comparison}). In Section \ref{sec:main}, we prove the main result of this paper. \section{Preliminaries}\label{sec:Pre} In this section we shall collect notations, definitions and some results. For a C$^*$-algebra $A$, let $A_{+}$ denote the set of positive elements in $A$ and $A_{+,1}$ the set of positive contractions in $A$. For $x,y\in A$, let $[x,y]$ be the commutator $xy-yx$. We denote by $K(H)$ and $M_{n^\infty}$ for $n\in\mathbb{N}$ the C$^*$-algebra of compact operators on a Hilbert space $H$ and the uniformly hyperfinite (UHF) algebra of type $n^{\infty}$, respectively. \subsection{Approximate units and actions} If $A$ is a separable C$^*$-algebra, then there exists a positive element $s\in A$ such that $sA$ is dense in $A$. Such a positive element $s$ is said to be {\it strictly positive} in $A$. For any $n\in\mathbb{N}$, define $f_n:[0,1]\to \mathbb{R}$ by $$ f_n(t):=\left\{\begin{array}{cl} 0 & t\in [0,\frac{1}{n+1}] \\ n(n+1)t-n & t\in (\frac{1}{n+1},\frac{1}{n}] \\ 1 & t\in (\frac{1}{n}, 1] \end{array} \right.. $$ If $s$ is a strictly positive element in $A$ and $\|s\|=1$, then $\{f_n(s)\}_{n\in\mathbb{N}}$ is an approximate unit for $A$ with $f_{n+1}(s)f_{n}(s)=f_{n}(s)$. Let $A^{\sim}$ denote the unitization algebra of $A$.
Note that we assume $A^{\sim}=A$ if $A$ is unital. Let $M(A)$ be the \textit{multiplier algebra} of $A$, which is the largest unital C$^*$-algebra that contains $A$ as an essential ideal. If $\alpha$ is an automorphism of $A$, then $\alpha$ extends uniquely to an automorphism of $M(A)$. We denote it by the same symbol $\alpha$ for simplicity. We denote by $\mathrm{Aut}(A)$ the automorphism group of $A$. An automorphism $\alpha$ of $A$ is said to be \textit{inner} if there exists a unitary element $u$ in $M(A)$ such that $\alpha (x)=\mathrm{Ad}(u)(x)=uxu^*$ for any $x\in A$. For a subset $F$ of $A$ and $\varepsilon >0$, we say a completely positive (c.p.) map $\varphi :A\to B$ is \textit{$(F,\varepsilon)$-multiplicative} if $$ \| \varphi (xy) - \varphi (x)\varphi(y) \| < \varepsilon $$ for any $x,y\in F$. An \textit{action} $\alpha$ of a discrete group $G$ on $A$ is a homomorphism from $G$ to $\mathrm{Aut}(A)$. We say that $\alpha$ is \textit{outer} if $\alpha_g$ is not inner for any $g\in G\setminus \{\iota \}$ where $\iota$ is the identity of $G$. An $\alpha$-\textit{cocycle} is a map $w$ from $G$ to the unitary group of $M(A)$ such that $w(gh)=w(g)\alpha_{g}(w(h))$ for any $g,h\in G$. We say that an $\alpha$-cocycle $w$ is a \textit{coboundary} if there exists a unitary element $v$ in $M(A)$ such that $w(g)=v\alpha_g(v^*)$ for any $g\in G$. For two $G$-actions $\alpha$ on $A$ and $\beta$ on $B$, we say that $\alpha$ and $\beta$ are \textit{conjugate} if there exists an isomorphism $\theta$ from $A$ onto $B$ such that $\theta \circ \alpha_g=\beta_{g}\circ \theta$ for any $g\in G$. We denote by $A^{\alpha}$ the fixed point algebra. Every tracial state $\tau$ on $A$ extends uniquely to a tracial state on $M(A)$. We denote it by the same symbol $\tau$ for simplicity. Let $(\pi_{\tau}, H_{\tau})$ be the Gelfand-Naimark-Segal (GNS) representation of $A$ associated with $\tau$. 
Then $\tau$ extends uniquely to a normal tracial state $\tilde{\tau}$ on $\pi_{\tau} (A)^{''}$. If $\alpha$ is an automorphism of $A$ such that $\tau \circ \alpha =\tau$, then $\alpha$ extends uniquely to an automorphism $\tilde{\alpha}$ of $\pi_{\tau} (A)^{''}$. Moreover if $\alpha$ is an action of $G$ on $A$ such that $\tau \circ \alpha_g =\tau$ for any $g\in G$, then $\alpha$ extends uniquely to a von Neumann algebraic action $\tilde{\alpha}$ on $\pi_{\tau}(A)^{''}$. We say that an action $\alpha$ of $G$ on a C$^*$-algebra $A$ with a unique tracial state $\tau$ is \textit{strongly outer} if $\tilde{\alpha}_g$ is not inner in $\pi_{\tau}(A)^{''}$ for any $g\in G\setminus \{\iota\}$. \subsection{Kirchberg's central sequence C$^*$-algebras} We shall recall Kirchberg's central sequence C$^*$-algebras in \cite{Kir2} (see also \cite[Section 5]{Na2} and \cite[Section 2.2]{Na3}). Fix a free ultrafilter $\omega$ on $\mathbb{N}$. For a C$^*$-algebra $A$, put $$ c_{\omega}(A):=\{\{x_n\}_{n\in\mathbb{N}}\in \ell^{\infty}(\mathbb{N}, A)\; |\; \lim_{n \to \omega}\| x_n\| =0 \}, \; A^{\omega}:=\ell^{\infty}(\mathbb{N}, A)/c_{\omega}(A). $$ A sequence $(x_n)_n$ is a representative of an element in $A^{\omega}$. Let $B$ be a C$^*$-subalgebra of $A$. We identify $A$ and $B$ with the C$^*$-subalgebras of $A^\omega$ consisting of equivalence classes of constant sequences. Set $$ A_{\omega}:=A^{\omega}\cap A^{\prime},\; \mathrm{Ann}(B,A^{\omega}):=\{(x_n)_n\in A^{\omega}\cap B^{\prime}\; |\; (x_n)_nb =0 \;\mathrm{for}\;\mathrm{any}\; b\in B \}. $$ Then $\mathrm{Ann}(B,A^{\omega})$ is a closed ideal of $A^{\omega}\cap B^{\prime}$. Define a \textit{central sequence C$^*$-algebra} $F(A)$ of $A$ by $$ F(A):=A_{\omega}/\mathrm{Ann}(A,A^{\omega}). $$ If $\{h_n\}_{n\in\mathbb{N}}$ is a countable approximate unit for $A$, then $[(h_n)_n]$ is a unit in $F(A)$. 
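To see that $[(h_n)_n]$ is indeed a unit, one can use the following short computation, which is left implicit above: for any $(x_n)_n\in A_{\omega}$ and $b\in A$ we have
$$
(h_nx_n-x_n)b=h_n[x_n,b]-[x_n,b]+(h_nb-b)x_n \longrightarrow 0
$$
along $\omega$, since $[x_n,b]\to 0$ and $h_nb\to b$. Hence $(h_nx_n-x_n)_n\in \mathrm{Ann}(A,A^{\omega})$, that is, $[(h_n)_n][(x_n)_n]=[(x_n)_n]$ in $F(A)$; the same argument applied to adjoints handles multiplication on the right.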
It can be easily checked that $F(A)$ is isomorphic to $(M(A)^{\omega}\cap A^{\prime})/\mathrm{Ann}(A,M(A)^{\omega})$ and ${A^{\sim}}_{\omega}/ \mathrm{Ann}(A,(A^{\sim})^{\omega})$. If $\alpha$ is an automorphism of $A$, then $\alpha$ induces natural automorphisms of $A^{\omega}$, $A_{\omega}$ and $F(A)$. We denote them by the same symbol $\alpha$ for simplicity. For a tracial state $\tau$ on $A$, define $\tau_{\omega}([(x_n)_n]):=\lim_{n\to\omega}\tau (x_n)$. Then $\tau_{\omega}$ is a well defined tracial state on $F(A)$ by \cite[Proposition 2.1]{Na3}. \subsection{Razak-Jacelon algebra} Let $\mathcal{W}$ be the Razak-Jacelon algebra studied in \cite{J}, which is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and is $KK$-equivalent to $\{0\}$. The Razak-Jacelon algebra $\mathcal{W}$ is constructed as an inductive limit C$^*$-algebra of Razak's building blocks in \cite{Raz}. Let $S_1$ and $S_2$ be the generators of the Cuntz algebra $\mathcal{O}_2$. For $\lambda_1,\lambda_2\in\mathbb{R}$, define a flow $\gamma$ on $\mathcal{O}_2$ by $\gamma_t (S_j)=e^{it\lambda_{j}}S_j$. In \cite{KK1} and \cite{KK2}, Kishimoto and Kumjian showed that if $\lambda_{1}$ and $\lambda_{2}$ are both non-zero, have the same sign and generate $\mathbb{R}$ as a closed subgroup, then $\mathcal{O}_2\rtimes_{\gamma}\mathbb{R}$ is a simple stably projectionless C$^*$-algebra with a unique (up to scalar multiple) trace. Robert showed in \cite{Rob} that $\mathcal{W}\otimes K(\ell^2(\mathbb{N}))$ is isomorphic to $\mathcal{O}_2\rtimes_{\gamma}\mathbb{R}$ for some $\lambda_1$ and $\lambda_2$. (See also \cite{Dean}.) Razak's classification theorem \cite{Raz} implies that $\mathcal{W}$ is UHF-stable, and hence $\mathcal{W}$ is $\mathcal{Z}$-stable. \subsection{Corollaries of Matui and Sato's results} We shall collect some corollaries of Matui and Sato's results in \cite{MS1} and \cite{MS2}.
Although they assume that C$^*$-algebras are unital, their arguments for the following results work for non-unital C$^*$-algebras with suitable modifications (see \cite{Na2} and \cite{Na3}). First, we recall the definition of the weak Rohlin property. See \cite[Definition 2.7]{MS1} and \cite[Definition 2.5]{MS2}. Note that Matui and Sato define the weak Rohlin property in more general settings. \begin{Def} Let $A$ be a simple C$^*$-algebra with a unique tracial state $\tau$, and let $\alpha$ be an action of a finite group $G$ on $A$. We say that $\alpha$ has the weak Rohlin property if there exists a positive contraction $f$ in $F(A)$ such that $$ \alpha_g(f)\alpha_h(f)=0, \quad \tau_{\omega}(f)= \frac{1}{|G|} $$ for any $g,h\in G$ with $g\neq h$. \end{Def} Essentially the same proof as \cite[Theorem 3.4]{MS1} shows the following theorem. See also the proof of \cite[Lemma 6.2]{Na3} and \cite[Theorem 3.6]{MS2}. \begin{thm}\label{thm:weak-rohlin} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be an action of a finite group $G$ on $A$. Then $\alpha$ has the weak Rohlin property if and only if $\alpha$ is strongly outer. \end{thm} Essentially the same proofs as \cite[Lemma 4.7]{MS2} and \cite[Proposition 4.8]{MS2} show the following proposition. See also \cite[Proposition 3.3]{MS3} and \cite[Theorem 4.1]{BBSTWW}. Note that if $A$ is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, then $A\otimes\mathcal{W}$ has property (SI) since $\mathcal{W}$ is $\mathcal{Z}$-stable (see \cite{Ror}, \cite{MS} and \cite{Na2}). \begin{pro}\label{thm:Matui-Sato} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be a strongly outer action of a finite group $G$ on $A$. Then: \ \\ (i) $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ has a unique tracial state $\tau_{\omega}$.
\ \\ (ii) If $a$ and $b$ are positive elements in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ satisfying $d_{\tau_{\omega}}(a)< d_{\tau_{\omega}} (b)$, then there exists an element $r\in F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $r^*br=a$. \end{pro} \subsection{Rohlin property and properties of $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$} We shall recall some results in \cite{Na4} (see also \cite{GS}) and \cite{Na3}. \begin{Def} (cf. \cite[Definition 3.1]{I1} and \cite[Definition 3.1]{Na4}). An action $\alpha$ of a finite group $G$ on a separable C$^*$-algebra $A$ is said to have the \textit{Rohlin property} if there exists a partition of unity $\{p_{g}\}_{g\in G}\subset F(A)$ consisting of projections satisfying $$ \alpha_{g} (p_{h}) =p_{gh}, $$ for any $g,h\in G$. \end{Def} For any finite group $G$, there exists an action of $G$ on $\mathcal{W}$ with the Rohlin property by \cite[Example 3.2]{Na4}. The following theorem is \cite[Corollary 3.7]{Na4}. \begin{thm}\label{thm:classification} Let $\alpha$ and $\beta$ be actions of a finite group $G$ on $\mathcal{W}$ with the Rohlin property. Then $\alpha$ and $\beta$ are conjugate. \end{thm} Note that there exists a strongly outer action $\alpha$ of $\mathbb{Z}_2$ on $\mathcal{W}$ such that $\alpha$ does not have the Rohlin property (see \cite[Example 5.6]{Na4}). Since we can regard $F(\mathcal{W})$ as a unital C$^*$-subalgebra of $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$, we obtain the following proposition by \cite[Proposition 4.2]{Na3} and Proposition \ref{thm:Matui-Sato}. \begin{pro}\label{pro:key-pro} Let $\tau_{\omega}$ be the unique tracial state on $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. \ \\ (i) For any $N\in\mathbb{N}$, there exists a unital homomorphism from $M_N(\mathbb{C})$ to $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$.
\ \\ (ii) For any $\theta\in [0,1]$, there exists a non-zero projection $p$ in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $\tau_{\omega}(p)=\theta$. \ \\ (iii) Let $h$ be a positive element in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $d_{\tau_{\omega}}(h)>0$. For any $\theta \in [0, d_{\tau_{\omega}}(h))$, there exists a non-zero projection $p$ in $\overline{hF(A\otimes\mathcal{W})^{\alpha\otimes \mathrm{id}}h}$ such that $\tau_{\omega}(p)=\theta$. \end{pro} Using the proposition above instead of \cite[Proposition 4.2]{Na3}, the same arguments as in \cite[Section 4]{Na3} show the following proposition. \begin{pro}\label{pro:MvN-u} (cf. \cite[Proposition 4.8]{Na3}). Let $p$ and $q$ be projections in $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$ such that $\tau_{\omega} (p)<1$ where $\tau_{\omega}$ is the unique tracial state on $F(A\otimes\mathcal{W})^{\alpha\otimes\mathrm{id}}$. Then $p$ and $q$ are Murray-von Neumann equivalent if and only if $p$ and $q$ are unitarily equivalent. \end{pro} \section{Target algebra}\label{sec:target} In the rest of this paper, we assume that $A$ is a simple separable nuclear C$^*$-algebra with a unique tracial state $\tau_A$ and no unbounded traces, and $\alpha$ is a strongly outer action of a finite group $G$ on $A$. Define an action $\gamma$ on $A\otimes\mathcal{W}$ by $\gamma:= \alpha\otimes\mathrm{id}$. Let $\tau_{\mathcal{W}}$ denote the unique tracial state on $\mathcal{W}$, and let $\tau:=\tau_A\otimes\tau_{\mathcal{W}}$ on $A\otimes\mathcal{W}$. For any $a\in A$ and $b\in \mathcal{W}$, we regard $a\otimes 1_{\mathcal{W}^{\sim}}$ and $1_{A^{\sim}}\otimes b$ as elements in $M(A\otimes\mathcal{W})$. Put $$ \mathcal{A}:= \{(x_n)_n\in (A\otimes\mathcal{W})^{\omega}\; |\; ([x_n ,a\otimes 1_{\mathcal{W}^{\sim}}])_n=0 \text{ for any }a\in A\} $$ and $$ \mathcal{I}:= \{(x_n)_n\in\mathcal{A} \; |\; (x_n (a\otimes 1_{\mathcal{W}^{\sim}}))_n=0 \text{ for any }a\in A \}. 
$$ Then $\mathcal{I}$ is a closed ideal of $\mathcal{A}$, and define $\mathcal{B}:= \mathcal{A}/\mathcal{I}$. Note that for any $[(x_n)_n]\in \mathcal{B}$, $$ \| [(x_n)_n] \| = \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|x_n(a\otimes 1_{\mathcal{W}^{\sim}}) \|. $$ Indeed, let $\| [(x_n)_n] \|^{\prime}:=\sup_{a\in A_{+,1}}\lim_{n\to \omega} \|x_n (a\otimes 1_{\mathcal{W}^{\sim}}) \|$ for any $[(x_n)_n]\in \mathcal{B}$. Then it can be easily checked that $\|\cdot\|^{\prime}$ is a well defined C$^*$-norm on $\mathcal{B}$. By the uniqueness of the C$^*$-norm, $\| [(x_n)_n] \|= \| [(x_n)_n] \|^{\prime}$ for any $[(x_n)_n] \in\mathcal{B}$. The action $\gamma$ on $A\otimes\mathcal{W}$ induces a natural action on $\mathcal{B}$. We denote it by the same symbol $\gamma$ for simplicity. In this section we shall consider properties of the fixed point algebra $\mathcal{B}^{\gamma}$. Consider the GNS representation $(\pi_{\tau},H_{\tau})$ of $A\otimes\mathcal{W}$ associated with $\tau$. Note that $\pi_{\tau}$ extends to a representation $\overline{\pi}_{\tau}$ of $M(A\otimes\mathcal{W})$ on $H_{\tau}$ and $\overline{\pi}_{\tau}(M(A\otimes\mathcal{W}))\subset \pi_{\tau}(A\otimes\mathcal{W})''$ (see, for example, \cite[3.12]{Ped2}). Put $$ M:= \ell^{\infty}(\mathbb{N}, \pi_{\tau}(A\otimes\mathcal{W})'')/\{\{x_n\}_{n\in\mathbb{N}} \; |\; \lim_{n\to\omega}\tilde{\tau} (x_n^*x_n)=0\}, $$ and define a homomorphism $\Pi$ from $A$ to $M$ by $\Pi (a):= (\overline{\pi}_{\tau}(a\otimes 1_{\mathcal{W}^{\sim}}))_n$. Note that $M$ is a von Neumann algebraic ultrapower of $\pi_{\tau}(A\otimes\mathcal{W})''$. Since $\tau=\tau_A\otimes\tau_\mathcal{W}$, $\pi_{\tau}(A\otimes\mathcal{W})''$ is isomorphic to $\pi_{\tau_A}(A)''\bar{\otimes}\pi_{\tau_\mathcal{W}}(\mathcal{W})''$. Moreover, $\pi_{\tau}(A\otimes\mathcal{W})''$, $\pi_{\tau_A}(A)''$ and $\pi_{\tau_\mathcal{W}}(\mathcal{W})''$ are isomorphic to the AFD II$_1$ factor $\mathcal{R}_0$. Set $$ \mathcal{M}:= M\cap \Pi (A)'. 
$$ It is easy to see that $\mathcal{M}$ is isomorphic to $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap (\mathcal{R}_0\bar{\otimes}\mathbb{C})'$ where $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}$ is the von Neumann algebraic ultrapower of $\mathcal{R}_0\bar{\otimes}\mathcal{R}_0$. \begin{pro}\label{pro:factor} With notation as above, $\mathcal{M}$ is a factor of type II$_1$. \end{pro} \begin{proof} Let $\{N_n\}_{n=1}^\infty$ be an increasing sequence of finite-dimensional subfactors such that $\mathcal{R}_{0}=(\bigcup_{n=1}^\infty N_n)''$. Since $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})' =(\mathcal{R}_0\cap N_n')\bar{\otimes}\mathcal{R}_0$ is a factor of type II$_1$, the same proof as in \cite[Theorem XIV.4.18]{Tak} shows this proposition. Indeed, let $(a_n)_n$ be an element in $\mathcal{M}\setminus \mathbb{C}1_{\mathcal{M}}$. In the same way as in \cite[Theorem XIV.4.18]{Tak}, we may assume that $\tilde{\tau} (a_n)=0$ for any $n\in\mathbb{N}$ and $\lim_{n\to\infty} \| a_n\|_2>0$ where $\| a_n\|_2= \tilde{\tau}(a_n^*a_n)^{1/2}$. Using the conditional expectation $\mathcal{E}_n$ from $\mathcal{R}_0\bar{\otimes}\mathcal{R}_0$ onto $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$, we can choose $\{k_n\; |\; n\in\mathbb{N}\}\in \omega$ and $b_{k_n}\in (\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$ such that $\tilde{\tau}(b_{k_{n}})=0$ and $\| b_{k_{n}}-a_{k_{n}}\|_2\leq 1/2^{n}$ for any $n\in\mathbb{N}$. Since $\tilde{\tau}$ is the unique tracial state on $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$, $0=\tilde{\tau}(b_{k_n})\in \overline{\mathrm{co}} \{ ub_{k_n}u^*\; |\; u\in (\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})', \; \mathrm{unitary}\}$.
Therefore there exists a unitary element $u_n\in (\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)\cap (N_n\bar{\otimes}\mathbb{C})'$ such that $\| b_{k_{n}}- u_n b_{k_n}u_n^*\|_2\geq \| b_{k_n}\|_2/2$ for any $n\in\mathbb{N}$. Then we have $(u_n)_n\in \mathcal{M}$ and $\limsup_{n\to\infty}\|a_{k_{n}}-u_na_{k_{n}}u_n^*\|_2>0$. This shows that $(a_n)_n$ cannot be in the center of $\mathcal{M}$. \end{proof} The action $\tilde{\gamma}=\tilde{\alpha}\otimes \mathrm{id}$ on $\pi_{\tau}(A\otimes\mathcal{W})''\cong \pi_{\tau_A}(A)''\bar{\otimes}\pi_{\tau_\mathcal{W}}(\mathcal{W})''$ induces an action on $\mathcal{M}$. We denote it by the same symbol $\tilde{\gamma}$ for simplicity. The following lemma is essentially based on \cite[Proposition 2.1.2]{C2}. \begin{lem}\label{lem:central-trivial} The action $\tilde{\gamma}$ on $\mathcal{M}$ is outer. \end{lem} \begin{proof} It is enough to show that for any element $(u_n)_n$ in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap(\mathcal{R}_0\bar{\otimes}\mathbb{C})'$, there exists an element $(x_n)_n$ in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap(\mathcal{R}_0\bar{\otimes}\mathbb{C})'$ such that $(\tilde{\gamma}(x_n))_n\neq (x_n)_n$ and $[(x_n)_n, (u_n)_n]=0$. Let $(u_n)_n$ be an element in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}\cap (\mathcal{R}_0\bar{\otimes}\mathbb{C})'$. By \cite[Theorem XIV.4.16]{Tak}, there exists an element $(a_n)_n$ in $\mathcal{R}_0^{\omega}\cap \mathcal{R}_0'$ such that $(\tilde{\alpha}(a_n))_n\neq (a_n)_n$ because $\tilde{\alpha}$ is outer and $\mathcal{R}_0$ is the AFD II$_1$ factor. Put $(x_n)_n:= (a_n\otimes 1_{\mathcal{R}_0})_n$ in $(\mathcal{R}_0\bar{\otimes}\mathcal{R}_0)^{\omega}$. Then $(\tilde{\gamma}(x_n))_n\neq (x_n)_n$ and $[(x_n)_n ,y]=0$ for any $y\in \mathcal{R}_0 \bar{\otimes}\mathcal{R}_0$. Taking a suitable subsequence of $(x_n)_n$, we obtain the conclusion.
\end{proof} By Proposition \ref{pro:factor} and Lemma \ref{lem:central-trivial}, we obtain the following proposition. \begin{pro}\label{pro:fixed-factor} The fixed point algebra $\mathcal{M}^{\tilde{\gamma}}$ is a factor of type II$_1$. \end{pro} Define a homomorphism $\Phi$ from $M(A\otimes\mathcal{W})^{\omega}$ to $M$ by $\Phi ((x_n)_n):= (\overline{\pi}_{\tau}(x_n))_n$. By Kaplansky's density theorem, we see that $\Phi |_{(A\otimes\mathcal{W})^{\omega}}$ is surjective. It is easy to see that $\Phi$ maps $\mathcal{A}$ into $\mathcal{M}$. The following proposition is essentially based on \cite[Theorem 3.3]{KR} and \cite[Theorem 3.1]{MS3}. \begin{pro} The restriction $\Phi|_{\mathcal{A}} : \mathcal{A}\to \mathcal{M}$ is surjective. \end{pro} \begin{proof} Let $x$ be a contraction in $\mathcal{M}$. Since $\Phi |_{(A\otimes\mathcal{W})^{\omega}}$ is surjective, there exists a contraction $(x_n)_n$ in $(A\otimes\mathcal{W})^{\omega}$ such that $\Phi ((x_n)_n)=x$. Let $D$ be the C$^*$-subalgebra of $M(A\otimes\mathcal{W})^{\omega}$ generated by $(x_n)_n$ and $\{ a\otimes1_{\mathcal{W}^{\sim}} \; |\; a\in A\} $, and put $I:=\mathrm{ker}\; \Phi|_{D}$. Then the rest of the proof is the same as that of \cite[Theorem 3.1]{MS3}. Indeed, let $\{e_{k}\}_{k\in\mathbb{N}}$ be an approximate unit for $I$ which is quasicentral for $D$. (Note that $D$ is separable.) Since $[(x_n)_n, a\otimes1_{\mathcal{W}^{\sim}}]\in I$, we have \begin{align*} 0 & =\lim_{k\to\infty} \| (1-e_{k})[(x_n)_n, a\otimes1_{\mathcal{W}^{\sim}}] (1-e_{k})\| \\ & =\lim_{k\to\infty} \|[(1-e_{k})(x_n)_n(1-e_{k}), a\otimes 1_{\mathcal{W}^{\sim}}]\| \end{align*} for any $a\in A$. Then we obtain the conclusion by the usual arguments (see the proofs of \cite[Theorem 3.3]{KR} and \cite[Theorem 3.1]{MS3}). \end{proof} Let $\{h_n\}_{n\in\mathbb{N}}$ be an approximate unit for $A$.
Since $\lim_{n\to \infty} \tau (h_n\otimes 1_{\mathcal{W}^{\sim}})=1$, a similar argument as in the proof of \cite[Proposition 2.1]{Na3} shows $\mathcal{I}\subset \mathrm{ker}\; \Phi|_{\mathcal{A}}$. Therefore $\Phi|_{\mathcal{A}}$ induces a surjective homomorphism $\varrho$ from $\mathcal{B}$ to $\mathcal{M}$. Since $\gamma$ is an action of a finite group, it is easy to show the following proposition. \begin{pro}\label{pro:surjective} The restriction $\varrho|_{\mathcal{B}^{\gamma}}: \mathcal{B}^{\gamma} \to \mathcal{M}^{\tilde{\gamma}}$ is surjective. \end{pro} The following lemma is essentially based on \cite[Lemma 3.2]{MS3}. This lemma may be interpreted as saying that the homomorphism $a\mapsto a\otimes 1_{\mathcal{W}^{\sim}}$ from $A$ to $M(A\otimes\mathcal{W})^{\omega}$ has ``property (SI) with respect to $A\otimes\mathcal{W}$''. \begin{lem}\label{lem:A-SI} Let $(x_n)_n$ and $(y_n)_n$ be positive contractions in $\mathcal{A}$ such that $$ \lim_{n\to\omega} \tau (x_n)=0 \quad \text{and} \quad \inf_{m\in\mathbb{N}}\lim_{n\to \omega} \tau (y_n^m)>0. $$ Then there exists an element $(s_n)_n$ in $\mathcal{A}$ such that $(s_n^*s_n)_n=(x_n)_n$ and $(y_ns_n)_n=(s_n)_n$. \end{lem} \begin{proof} Similar arguments as in the proofs of \cite[Lemma 3.2]{MS3} and \cite[Theorem 1.1]{MS} with some modifications in \cite[Section 5]{Na2} show this lemma. Indeed, let $\varphi$ be a pure state on $A$. We can uniquely extend $\varphi$ to a pure state $\tilde{\varphi}$ on $A^{\sim}$. Since we may assume that $A$ is a (separable simple) non-type I C$^*$-algebra, $K(H_{\tilde{\varphi}})\cap \pi_{\tilde{\varphi}}(A^{\sim})=\{0\}$. Therefore \cite[Proposition 5.9]{KR} implies that the identity map on $A^{\sim}$ can be approximated in the pointwise norm topology by completely positive maps $\psi$ of the form $$ \psi (a)= \sum_{i,j}\tilde{\varphi}(d_i^*ad_j)c_i^*c_j, \quad a\in A^{\sim}, $$ where $c_i,d_i\in A^{\sim}$.
Note that $\sum_{i,j}\tilde{\varphi}(d_i^*ad_j)(c_i^*c_j\otimes 1_{\mathcal{W}^{\sim}})x_n$ is an element in $A\otimes\mathcal{W}$. Since $A\otimes\mathcal{W}$ has strict comparison, a similar argument as in the proof of \cite[Lemma 3.2]{MS3} (we need to use \cite[Lemma 5.7]{Na2}) shows that there exists a sequence $(s_n)_n$ in $A\otimes\mathcal{W}$ such that $(f_ns_n)_n=(s_n)_n$ and $(s_n^*(a\otimes 1_{\mathcal{W}^{\sim}})s_n)_n= ((a\otimes 1_{\mathcal{W}^{\sim}})e_n)_n$ in $(A\otimes\mathcal{W})^{\omega}$ for any $a\in A^{\sim}$. Therefore we obtain the conclusion (see \cite[Remark 5.5]{Na2}). \end{proof} For any $[(x_n)_n]\in\mathcal{B}$, let $\tau_{\mathcal{B}}([(x_n)_n]):=\lim_{n\to\omega}\tau (x_n)$. By a similar argument as in the proof of \cite[Proposition 2.1]{Na3}, $\tau_{\mathcal{B}}$ is a well defined tracial state on $\mathcal{B}$. The following proposition is essentially based on \cite[Proposition 4.5]{MS2}. See also the proof of \cite[Theorem 4.7]{MS1}. \begin{pro}\label{pro:target-si} Let $x$ and $y$ be positive contractions in $\mathcal{B}^{\gamma}$ such that $$ \tau_{\mathcal{B}}(x)=0 \quad \text{and} \quad \inf_{m\in\mathbb{N}}\tau_{\mathcal{B}}(y^m)>0. $$ Then there exists an element $s$ in $\mathcal{B}^{\gamma}$ such that $s^*s=x$ and $ys=s$. \end{pro} \begin{proof} Let $(x_n)_n$ and $(y_n)_n$ be positive contractions in $\mathcal{A}$ such that $x=[(x_n)_n]$ and $y=[(y_n)_n]$. Then we have $$ (\gamma_g(x_n)-x_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 \quad \text{and} \quad (\gamma_g(y_n)-y_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 $$ for any $a\in A$ and $g\in G$. Since $\alpha$ is strongly outer, Theorem \ref{thm:weak-rohlin} implies that there exists a positive contraction $(f_n)_n$ in $A_{\omega}$ such that $$ (\alpha_g(f_n)\alpha_h(f_n))_na=0\quad \text{and} \quad \lim_{n\to\omega}\tau_A(f_n)=\frac{1}{|G|} $$ for any $a\in A$ and $g,h\in G$ with $g\neq h$. Let $\{k_n\}_{n=1}^\infty$ be an approximate unit for $\mathcal{W}$.
Then we have $(f_n\otimes k_n)_n\in \mathcal{A}$, $$ \lim_{n\to\omega}\gamma_g(f_n\otimes k_n)\gamma_h(f_n\otimes k_n)(a\otimes 1_{\mathcal{W}^{\sim}}) =0 \quad \text{and} \quad \lim_{n\to\omega}\tau (f_n\otimes k_n)=\frac{1}{|G|} $$ for any $a\in A$ and $g,h\in G$ with $g\neq h$. Using \cite[Lemma 5.6]{Na2} instead of \cite[Lemma 4.6]{MS}, a similar argument as in the proof of \cite[Proposition 4.5]{MS2} shows that there exists a positive contraction $(\tilde{y}_n)_n$ in $\mathcal{A}$ such that $$ (\tilde{y}_n)_n\leq (y_n)_n, \quad \inf_{m\in\mathbb{N}}\lim_{n\to \omega} \tau (\tilde{y}_n^m)>0 \quad \text{and} \quad \lim_{n\to\omega}\gamma_g(\tilde{y}_n)\gamma_h(\tilde{y}_n) (a\otimes 1_{\mathcal{W}^{\sim}})=0 $$ for any $a\in A$ and $g,h\in G$ with $g\neq h$. By Lemma \ref{lem:A-SI}, there exists an element $(r_n)_n$ in $\mathcal{A}$ such that $(r_n^*r_n)_n=(x_n)_n$ and $(\tilde{y}_nr_n)_n=(r_n)_n$. Since $(y_n)_n$ is a positive contraction and $(\tilde{y}_n)_n\leq (y_n)_n$, we have $(y_nr_n)_n=(r_n)_n$. Put $$ (s_n)_n:= \frac{1}{|G|}\sum_{g\in G} \gamma_g ((r_n)_n)\in\mathcal{A} . $$ Then we have $$ (\gamma_g(s_n)-s_n)_n=0, \quad (s_n^*s_n-x_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 \quad \text{and} \quad (y_ns_n-s_n)_n(a\otimes 1_{\mathcal{W}^{\sim}})=0 $$ for any $a\in A$ and $g\in G$. Therefore, putting $s:= [(s_n)_n]\in \mathcal{B}^{\gamma}$, we obtain the conclusion. \end{proof} The following proposition is essentially based on \cite[Proposition 4.8]{MS2} and \cite[Proposition 3.3]{MS3}. \begin{pro}\label{pro:main-section3} (i) $\tau_{\mathcal{B}}$ is the unique tracial state on $\mathcal{B}^{\gamma}$. \ \\ (ii) $\mathcal{B}^{\gamma}$ has strict comparison. \end{pro} \begin{proof} (i) By Proposition \ref{pro:fixed-factor} and Proposition \ref{pro:surjective}, it suffices to show that if $[(x_n)_n]$ is a positive contraction in $\mathrm{ker}\;\varrho|_{\mathcal{B}^{\gamma}}$, then $T([(x_n)_n])=0$ for any tracial state $T$ on $\mathcal{B}^{\gamma}$. 
Note that $[(x_n)_n]^{1/2}\in\mathrm{ker}\;\varrho|_{\mathcal{B}^{\gamma}}$, and hence $\tau_{\mathcal{B}}([(x_n)_n])=0$. Let $\{e_n\}_{n=1}^{\infty}$ be an approximate unit for $A\otimes\mathcal{W}$. Then it is easy to see that for any $m\in\mathbb{N}$, $\tau_{\mathcal{B}}(([(e_n)_n]-[(x_n)_n])^m)=1$. By Proposition \ref{pro:target-si}, there exists an element $s_1\in\mathcal{B}^{\gamma}$ such that $s_1^*s_1=[(x_n)_n]$ and $([(e_n)_n]-[(x_n)_n])s_1=s_1$. Hence we have $s_1s_1^*\leq [(e_n)_n]-[(x_n)_n]$. Since $[(x_n)_n]+s_1s_1^*$ is a positive contraction and $\tau_{\mathcal{B}}([(x_n)_n]+s_1s_1^*)=0$, the same argument as above shows that there exists an element $s_2\in\mathcal{B}^{\gamma}$ such that $s_2^*s_2=[(x_n)_n]$ and $([(e_n)_n]-[(x_n)_n]-s_1s_1^*)s_2=s_2$. Repeating this process, for any $N\in\mathbb{N}$, we obtain elements $s_1,s_2,\ldots,s_N$ in $\mathcal{B}^{\gamma}$ such that $$ s_i^*s_i=[(x_n)_n] \quad \text{and} \quad [(x_n)_n]+\sum_{i=1}^{N}s_is_i^* \leq [(e_n)_n]. $$ Since $T$ is a tracial state and $[(e_n)_n]$ is a contraction, $(N+1)T([(x_n)_n])\leq 1$. Therefore $T([(x_n)_n])=0$. (ii) Since $\mathcal{W}\otimes M_n(\mathbb{C})$ is isomorphic to $\mathcal{W}$, it can be easily checked that $\mathcal{B}^{\gamma}\otimes M_n(\mathbb{C})$ is isomorphic to $\mathcal{B}^{\gamma}$. Hence it is enough to show that if $a$ and $b$ are positive elements in $\mathcal{B}^{\gamma}$ with $d_{\tau_{\mathcal{B}}}(a)<d_{\tau_{\mathcal{B}}}(b)$, then there exists an element $r$ in $\mathcal{B}^{\gamma}$ such that $r^*br=a$. Using Proposition \ref{pro:fixed-factor}, Proposition \ref{pro:surjective} and Proposition \ref{pro:target-si} instead of \cite[Lemma 4.2]{MS2}, \cite[Theorem 4.3]{MS2} and \cite[Proposition 4.5]{MS2}, the same argument as in the proof of \cite[Proposition 4.8]{MS2} shows this. We shall recall a sketch of the proof for the reader's convenience. We may assume that $a$ and $b$ are contractions.
Let $\tilde{\tau}_{\omega}$ denote the unique tracial state on $\mathcal{M}^{\tilde{\gamma}}$. Since $\tau_{\mathcal{B}}=\tilde{\tau}_{\omega}\circ \varrho$, we have $$ d_{\tau_{\mathcal{B}}}(a)= \tilde{\tau}_{\omega} (1_{(0, \infty)}(\varrho(a))) \quad \text{and} \quad d_{\tau_{\mathcal{B}}}(b)= \tilde{\tau}_{\omega} (1_{(0, \infty)}(\varrho(b))) $$ where $1_{(0,\infty)}$ is the characteristic function of $(0,\infty)$. Note that $1_{(0, \infty)}(\varrho(a))$ and $1_{(0, \infty)}(\varrho(b))$ are projections in $\mathcal{M}^{\tilde{\gamma}}$ because $\mathcal{M}^{\tilde{\gamma}}$ is a von Neumann algebra. Since $\mathcal{M}^{\tilde{\gamma}}$ is a factor of type II$_1$ and $\varrho|_{\mathcal{B}^{\gamma}}$ is surjective, the same argument as in the proof of \cite[Proposition 4.8]{MS2} shows that there exist positive contractions $y_1$ and $y_2$ in $\mathcal{B}^{\gamma}$ and a projection $p$ in $\mathcal{M}^{\tilde{\gamma}}$ such that $$ y_1y_2=0,\quad \varrho(y_1)=p,\quad \varrho (y_2)=1-p, \quad y_1b=by_1, \quad y_2b=by_2 $$ and there exists $\varepsilon >0$ such that $$ \tilde{\tau}_{\omega}(1_{(\varepsilon, \infty)}(\varrho(b)p))>0 \quad \text{and} \quad \tilde{\tau}_{\omega}(1_{(0, \infty)}(\varrho(a)))< \tilde{\tau}_{\omega}(1_{(\varepsilon, \infty)}(\varrho(b)(1-p))). $$ Furthermore, there exists a unitary element $v$ in $\mathcal{M}^{\tilde{\gamma}}$ such that $$ 1_{(0, \infty)}(\varrho(a)) \leq v1_{(\varepsilon, \infty)}(\varrho(b)(1-p))v^* $$ because $\mathcal{M}^{\tilde{\gamma}}$ is a factor of type II$_1$. Since $\varrho|_{\mathcal{B}^{\gamma}}$ is surjective, there exists an element $w$ in $\mathcal{B}^{\gamma}$ such that $\varrho (w)=v$. Define continuous functions $g$ and $h$ on $ [0, \infty )$ by $g(t)=\min\{1/\varepsilon, 1/t \}$ and $h(t)=tg(t)$. 
Note that $g(b)$ is an element in $(\mathcal{B}^{\gamma})^{\sim}$ and $$ h(t)= \left\{\begin{array}{cl} t/\varepsilon & \text{if } t\in [0,\varepsilon] \\ 1 & \text{if } t\in (\varepsilon, \infty ) \end{array} \right.. $$ Put $r_1:= y_2^{1/2}g(b)^{1/2}w^*a^{1/2}\in\mathcal{B}^{\gamma}$; then we have $r_1^*br_1\leq a$ and $\varrho (r_1^*br_1)=\varrho(a)$. Therefore $a-r_1^*br_1$ is a positive contraction in $\mathrm{ker}\; \varrho$, and hence we have $$ \tau_{\mathcal{B}} (a-r_1^*br_1)=0. $$ Since $ \tau_{\mathcal{B}}((h(b)y_1)^m)= \tilde{\tau}_{\omega} (\varrho(h(b))^mp) \geq \tilde{\tau}_{\omega}(1_{(\varepsilon, \infty)}(\varrho(b)p)) $ for any $m\in\mathbb{N}$, we have $$ \inf_{m\in\mathbb{N}}\tau_{\mathcal{B}}((h(b)y_1)^m)>0. $$ Therefore Proposition \ref{pro:target-si} implies that there exists an element $s$ in $\mathcal{B}^{\gamma}$ such that $$ s^*s= a-r_1^*br_1 \quad \text{and} \quad h(b)y_1s=s. $$ Put $r_2:=y_1^{1/2}g(b)^{1/2}s\in \mathcal{B}^{\gamma}$; then we have $r_1^*br_2=0$ and $r_2^*br_2=a-r_1^*br_1$. Consequently, putting $r:=r_1+r_2$, we have $r^*br=a$. \end{proof} \section{Stable uniqueness theorem}\label{sec:stable-uniqueness} In this section we shall show a variant of \cite[Corollary 3.8]{Na3} which is based on the results in \cite{EN} (see also \cite{EGLN}), \cite{EllK} (see also \cite{G}), \cite{DE1} and \cite{DE2}. First, we shall define a homomorphism $\rho$ from $F(A\otimes \mathcal{W})^{\gamma}\otimes\mathcal{W}$ to $\mathcal{B}^{\gamma}$. Let $\{k_n\}_{n=1}^\infty$ be an approximate unit for $\mathcal{W}$ with $k_{n+1}k_{n}=k_{n}$, and let $\mathcal{W}_0:=\{k_nbk_n \; |\; n\in\mathbb{N}, b\in\mathcal{W}\}$. Then $\mathcal{W}_0$ is a dense self-adjoint subalgebra of $\mathcal{W}$.
For any $(x_n)_n\in (A\otimes\mathcal{W})_{\omega}$, $a\in A$, $b\in\mathcal{W}$ and $N\in\mathbb{N}$, we have \begin{align*} ((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes b)(a\otimes 1_{\mathcal{W}^{\sim}}))_n & = ((1_{A^{\sim}}\otimes k_Nk_{N+1})x_n(a\otimes b))_n \\ & = ((a\otimes k_{N}k_{N+1}b)x_n)_n \\ & = ((1_{A^{\sim}}\otimes k_N)x_n (a\otimes k_{N+1}b))_n \\ & = ((a\otimes k_Nk_{N+1})x_n (1_{A^{\sim}}\otimes b))_n \\ & = ((a\otimes 1_{\mathcal{W}^{\sim}})(1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes b))_n. \end{align*} Hence $((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes b))_n\in \mathcal{A}$. For any $[(x_n)_n]\in F(A\otimes\mathcal{W})^{\gamma}$ and $k_Nbk_N\in\mathcal{W}_0$, define $$ \rho ([(x_n)_n]\otimes k_{N}bk_{N}):= [((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes bk_N))_n]\in \mathcal{B}. $$ We shall show this is well defined. Let $[(x_n)_n]=[(y_n)_n]\in F(A\otimes\mathcal{W})^{\gamma}$ and $k_{N}bk_{N}=k_{N^{\prime}}b^{\prime}k_{N^{\prime}}\in \mathcal{W}_0$. For any $a\in A$, we have \begin{align*} & (((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes bk_N)-(1_{A^{\sim} }\otimes k_{N^{\prime}})y_n(1_{A^{\sim}}\otimes b^{\prime}k_{N^{\prime}})) (a\otimes 1_{\mathcal{W}^{\sim}}))_n \\ & = ((1_{A^{\sim}}\otimes k_N)x_n(a\otimes bk_N)-(1_{A^{\sim} }\otimes k_{N^{\prime}})y_n(a\otimes b^{\prime}k_{N^{\prime}}))_n \\ & = ((a\otimes k_{N}bk_{N})x_n-(a\otimes k_{N^{\prime}}b^{\prime}k_{N^{\prime}})y_n)_n= ((a\otimes k_{N}bk_{N})(x_n-y_n))_n=0. \end{align*} Therefore $[((1_{A^{\sim}}\otimes k_N)x_n(1_{A^{\sim}}\otimes bk_N))_n]= [((1_{A^{\sim}}\otimes k_{N^{\prime}})y_n(1_{A^{\sim}}\otimes b^{\prime}k_{N^{\prime}}))_n]$. By a similar argument, it can be easily checked that $\rho$ is a homomorphism from the algebraic tensor product $F(A\otimes\mathcal{W})^{\gamma}\odot\mathcal{W}_0$ to $\mathcal{B}^{\gamma}$. 
Since we have \begin{align*} \| \rho ([(x_n)_n]\otimes k_{N}bk_{N})\| & = \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|(1_{A^{\sim}}\otimes k_{N})x_n (1_{A^{\sim}}\otimes bk_{N})(a\otimes 1_{\mathcal{W}^{\sim}}) \| \\ & = \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|(a\otimes k_{N}bk_{N})x_n\| \\ & \leq \sup_{a\in A_{+,1}} \lim_{n\to \omega} \|a\| \| k_{N}bk_{N}\| \| x_n\| \\ & = \lim_{n\to\omega} \| x_n\| \cdot \| k_{N}bk_{N}\|, \end{align*} $\rho$ can be extended to a homomorphism from the algebraic tensor product $F(A\otimes\mathcal{W})^{\gamma}\odot\mathcal{W}$ to $\mathcal{B}^{\gamma}$. Consequently, $\rho$ can be extended to a homomorphism from $F(A\otimes\mathcal{W})^{\gamma}\otimes\mathcal{W}$ to $\mathcal{B}^{\gamma}$ because $\mathcal{W}$ is nuclear. By the construction of $\rho$, it is easy to show the following proposition. \begin{pro}\label{pro:homrho} Let $(z_n)_n$ be an element in $\mathcal{A}$ such that $[(z_n)_n]=\rho ([(x_n)_n]\otimes b)$ for some $[(x_n)_n]\in F(A\otimes\mathcal{W})^{\gamma}$ and $b\in \mathcal{W}$. Then $$ (z_n(a\otimes 1_{\mathcal{W}^{\sim}}))_n= (x_n(a\otimes b))_n $$ for any $a\in A$. \end{pro} \begin{rem} Note that there exists an element $(x_n)_n$ in $(A\otimes\mathcal{W})_{\omega}$ such that $(x_n)_n\notin \mathcal{A}$. Indeed, if $a$ is not an element in the center of $A$, $(a\otimes (k_n^2-k_n))_n$ is such an element. But we do not know whether there exist $(x_n)_n\in (A\otimes \mathcal{W})_{\omega}$ and $b\in \mathcal{W}$ such that $(x_n(1_{A^{\sim}}\otimes b))_n\notin \mathcal{A}$. \end{rem} The following lemma is an analogue of \cite[Lemma 3.6]{Na3}. \begin{lem}\label{lem:Lemma3.6} If $x$ is a positive element in $F(A\otimes\mathcal{W})$, then $$ \tau_{\mathcal{B}}(\rho (x\otimes b))= \tau_{\omega}(x) \tau_\mathcal{W} (b) $$ for any $b\in\mathcal{W}$.
\end{lem} \begin{proof} Let $(z_n)_n$ be an element in $\mathcal{A}$ such that $[(z_n)_n]=\rho (x\otimes b)$, and let $\{h_n\}_{n=1}^\infty$ be an approximate unit for $A$. Note that $\tau_{\mathcal{B}}(\rho (x\otimes b))=\lim_{n\to\omega}\tau (z_n)$. Since $\lim_{n\to \infty} \tau (h_n\otimes 1_{\mathcal{W}^{\sim}})=1$, an argument similar to that in the proof of \cite[Proposition 5.3]{Na2} shows $$ \lim_{n\to\omega}\tau (z_n)=\lim_{m\to \infty} \lim_{n\to \omega} \tau (z_n(h_m\otimes 1_{\mathcal{W}^{\sim}})). $$ By Proposition \ref{pro:homrho} and \cite[Lemma 3.6]{Na3}, $$ \lim_{n\to \omega} \tau (z_n(h_m\otimes 1_{\mathcal{W}^{\sim}}))= \tau_{\omega}(x) \tau (h_m\otimes b)=\tau_{\omega}(x)\tau_A (h_m)\tau_\mathcal{W}(b) $$ for any $m\in\mathbb{N}$. Therefore $\tau_{\mathcal{B}}(\rho (x\otimes b))= \tau_{\omega}(x) \tau_\mathcal{W} (b)$ since $\lim_{m\to \infty} \tau_A (h_m)=1$. \end{proof} For a projection $p$ in $F(A\otimes\mathcal{W})^{\gamma}$, let $$ \mathcal{B}^{\gamma}_p:=\overline{\rho (p\otimes s) \mathcal{B}^{\gamma} \rho (p\otimes s)} $$ where $s$ is a strictly positive element in $\mathcal{W}$. Note that $\mathcal{B}^{\gamma}_p$ is a hereditary subalgebra of $\mathcal{B}^{\gamma}$. Define a homomorphism $\sigma_p$ from $\mathcal{W}$ to $\mathcal{B}^{\gamma}_p$ by $$ \sigma_p (b) = \rho (p\otimes b) $$ for any $b\in \mathcal{W}$. Since the target algebra $\mathcal{B}^{\gamma}$ has strict comparison by Proposition \ref{pro:main-section3}, the same proof as that of \cite[Proposition 3.7]{Na3}, using Lemma \ref{lem:Lemma3.6} instead of \cite[Lemma 3.6]{Na3}, shows the following proposition. See \cite[Definition 3.2]{Na3} for the definition of $(L,N)$-fullness. \begin{pro}\label{pro:full-inclusion} There exist maps $L: \mathcal{W}_{+,1}\setminus \{0\}\times (0,1)\to \mathbb{N}$ and $N: \mathcal{W}_{+,1}\setminus \{0\}\times (0,1) \to (0,\infty)$ such that the following holds.
If $p$ is a projection in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega} (p)>0$, then $\sigma_p$ is $(L,N)$-full. \end{pro} The following corollary is an immediate consequence of \cite[Proposition 3.3]{Na3} and the proposition above. For finite sets $\mathcal{F}_1$ and $\mathcal{F}_2$, let $\mathcal{F}_1\odot \mathcal{F}_2:=\{a\otimes b \; |\; a\in \mathcal{F}_1, b\in \mathcal{F}_2\}$. \begin{cor}\label{cor:stable-uniqueness} Let $\Omega$ be a compact metrizable space. For any finite subsets $F_1\subset C(\Omega)$, $F_2\subset \mathcal{W}$ and $\varepsilon>0$, there exist finite subsets $\mathcal{F}_1\subset C(\Omega)$, $\mathcal{F}_2\subset \mathcal{W}$, $m\in\mathbb{N}$ and $\delta >0$ such that the following holds. Let $p$ be a projection in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega} (p)>0$. For any contractive $(\mathcal{F}_1\odot \mathcal{F}_2, \delta)$-multiplicative maps $\varphi, \psi : C(\Omega)\otimes \mathcal{W}\to \mathcal{B}^{\gamma}_p$, there exist a unitary element $u$ in $M_{m^2+1}(\mathcal{B}^{\gamma}_p)^{\sim}$ and $z_1,z_2,...,z_m\in\Omega$ such that \begin{align*} \| u & (\varphi(f\otimes b) \oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)\oplus \cdots \oplus\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) }^m) u^* \\ & - \psi(f\otimes b)\oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) \oplus \cdots \oplus \bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)}^m\| < \varepsilon \end{align*} for any $f\in F_1$ and $b\in F_2$. \end{cor} \section{Classification of normal elements in $F(A\otimes\mathcal{W})^{\gamma}$} \label{sec:normal} In this section we shall classify certain normal elements in $F(A\otimes\mathcal{W})^{\gamma}$ up to unitary equivalence. Furthermore, we shall consider the comparison theory for certain projections in $F(A\otimes\mathcal{W})^{\gamma}$. We assume that $\Omega$ is a compact metrizable space in this section.
Using Proposition \ref{pro:key-pro} and Proposition \ref{pro:MvN-u} instead of \cite[Proposition 4.1]{Na3}, \cite[Proposition 4.2]{Na3} and \cite[Proposition 4.8]{Na3}, we obtain the following lemma by the same proof as \cite[Lemma 5.1]{Na3}. See also \cite[Lemma 4.1]{M2} and \cite[Lemma 4.2]{M2}. \begin{lem}\label{lem:m4.2} Let $F$ be a finite subset of $C(\Omega)$ and $\varepsilon >0$. Suppose that $\varphi$ and $\psi$ are unital homomorphisms from $C(\Omega)$ to $F(A\otimes\mathcal{W})^{\gamma}$ such that $ \tau_{\omega} \circ \varphi = \tau_{\omega} \circ \psi . $ Then there exist a projection $p\in F(A\otimes\mathcal{W})^{\gamma}$, $(F,\varepsilon)$-multiplicative unital c.p. maps $\varphi^{\prime}$ and $\psi^{\prime}$ from $C(\Omega)$ to $pF(A\otimes\mathcal{W})^{\gamma}p$, a unital homomorphism $\sigma$ from $C(\Omega)$ to $(1-p)F(A\otimes\mathcal{W})^{\gamma}(1-p)$ with finite-dimensional range and a unitary element $u\in F(A\otimes\mathcal{W})^{\gamma}$ such that $$ 0 <\tau_{\omega} (p) < \varepsilon, \; \| \varphi (f)- (\varphi^{\prime}(f)+ \sigma (f))\| <\varepsilon, \; \| \psi (f)- u(\psi^{\prime}(f)+ \sigma (f))u^*\| <\varepsilon $$ for any $f\in F$. \end{lem} The following theorem is a variant of \cite[Theorem 5.2]{Na3}. See also \cite[Theorem 4.5]{M2}. \begin{thm}\label{thm:unitary-equivalence-ed} Let $F_1$ be a finite subset of $C(\Omega)$, $F_2$ a finite subset of $A$ and $F_3$ a finite subset of $\mathcal{W}$, and let $\varepsilon >0$. Then there exist mutually orthogonal positive elements $h_1,h_2,...,h_{l}$ in $C(\Omega)$ of norm one such that the following holds. For any $\nu >0$, there exist finite subsets $\mathcal{G}_1\subset C(\Omega)$, $\mathcal{G}_2\subset A\otimes \mathcal{W}$ and $\delta >0$ such that the following holds. If $\varphi$ and $\psi$ are unital c.p. 
maps from $C(\Omega)$ to $M(A\otimes\mathcal{W})$ such that $$ \tau (\varphi (h_i)) \geq \nu, \; \forall i\in\{1,2,...,l\}, $$ $$ \| [\varphi (f), x] \| < \delta , \; \| [\psi(f),x ] \| < \delta, \; \forall f\in \mathcal{G}_1, x\in \mathcal{G}_2, $$ $$ \| (\varphi (f_1f_2)- \varphi (f_1)\varphi (f_2))x\| < \delta, \; \| (\psi (f_1f_2)- \psi (f_1)\psi (f_2))x\| < \delta, \; \forall f_1,f_2\in \mathcal{G}_1, x\in \mathcal{G}_2, $$ $$ \| (\gamma_g (\varphi (f))-\varphi (f))x \| < \delta, \; \| (\gamma_g (\psi (f))-\psi (f))x \| < \delta, \; \forall g\in G, f\in \mathcal{G}_1, x\in \mathcal{G}_2, $$ $$ | \tau (\varphi (f)) -\tau (\psi (f)) | < \delta, \; \forall f\in \mathcal{G}_1, $$ then there exists a contraction $u$ in $(A\otimes\mathcal{W})^{\sim}$ such that $$ \| (a\otimes b) (u^*u -1)\| < \varepsilon, \; \| (a\otimes b)(uu^* -1) \| < \varepsilon, \; \| (a\otimes b)(\gamma_g(u)-u) \| < \varepsilon, $$ $$ \| u\varphi (f)(a\otimes b)u^* - \psi(f)(a\otimes b) \| < \varepsilon $$ for any $f\in F_1$, $a\in F_2$, $b\in F_3$ and $g\in G$. \end{thm} \begin{proof} We may assume that every element in $F_2$ and $F_3$ is positive and of norm one. Take positive elements $h_1,h_2,...,h_l$ in $C(\Omega)$ in the same way as in the proof of \cite[Theorem 5.2]{Na3}. We will show that $h_1,h_2,...,h_l$ have the desired property. Suppose, on the contrary, that $h_1,h_2,...,h_l$ do not have the desired property. Then there exists a positive number $\nu$ satisfying the following: For any $n\in\mathbb{N}$, there exist unital c.p.
maps $\varphi_n, \psi_n : C(\Omega)\to M(A\otimes\mathcal{W})$ such that $$ \tau (\varphi_n (h_i)) \geq \nu, \: \forall i \in\{1,2,...,l\}, $$ $$ \| [\varphi_n(f_1),x]\| \to 0, \; \| [\psi_n(f_1),x]\| \to 0, \; \| (\varphi_n(f_1f_2)- \varphi_n(f_1) \varphi_n(f_2))x\| \to 0, $$ $$ \| (\psi_n(f_1f_2)- \psi_n (f_1)\psi_n(f_2))x\| \to 0, \; \| (\gamma_g (\varphi_n (f_1))-\varphi_n (f_1))x \|\to 0, $$ $$ \| (\gamma_g (\psi_n (f_1))-\psi_n (f_1))x \|\to 0, \; |\tau (\varphi_n(f_1))-\tau (\psi_n(f_1))| \to 0 $$ as $n\to\infty$ for any $f_1,f_2\in C(\Omega)$, $x\in A\otimes\mathcal{W}$ and $g\in G$ and $$ \max_{f\in F_1, a\in F_2, b\in F_3} \| u\varphi_n (f)(a\otimes b)u^* - \psi_n (f)(a\otimes b)\| \geq \varepsilon $$ for any contraction $u$ in $(A\otimes\mathcal{W})^{\sim}$ satisfying $$ \| (a\otimes b)(\gamma_g(u)-u) \| < \varepsilon, \; \|(a\otimes b) (u^*u -1) \| < \varepsilon, \; \|(a\otimes b) (uu^* -1) \| < \varepsilon $$ for any $a\in F_2$, $b\in F_3$ and $g\in G$. Define homomorphisms $\varphi$ and $\psi$ from $C(\Omega)$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $\varphi (f) := [(\varphi_n(f))_n]$ and $\psi (f):= [(\psi_n(f))_n]$ for any $f\in C(\Omega)$. Then we have $$ \tau_{\omega} \circ \varphi= \tau_{\omega} \circ \psi \quad \text{and} \quad \tau_{\omega}(\varphi (h_i))\geq \nu $$ for any $i=1,2,...,l$. We obtain finite subsets $\mathcal{F}_1\subset C(\Omega)$, $\mathcal{F}_2\subset \mathcal{W}$, $m\in\mathbb{N}$ and $\delta >0$ by applying Corollary \ref{cor:stable-uniqueness} to $F_1$ and $F_3$ and $\varepsilon /7$. Put $$ F_1^{\prime}:= F_1\cup \mathcal{F}_1 \cup \{h_1, h_2,...,h_l \} \quad \text{and} \quad \varepsilon^{\prime}:= \min \left\{\frac{\varepsilon}{7}, \frac{\delta}{\max\{\| b\| \; |\; b\in \mathcal{F}_2\}}, \frac{\nu}{(m^2+2)} \right\}. 
$$ Applying Lemma \ref{lem:m4.2} to $F_1^{\prime}$, $\varepsilon^{\prime}$, $\varphi$ and $\psi$, there exist a projection $p\in F(A\otimes\mathcal{W})^{\gamma}$, $(F_1^{\prime},\varepsilon^{\prime})$-multiplicative unital c.p. maps $\varphi^{\prime}$ and $\psi^{\prime}$ from $C(\Omega)$ to $pF(A\otimes\mathcal{W})^{\gamma}p$, a unital homomorphism $\sigma$ from $C(\Omega)$ to $(1-p)F(A\otimes\mathcal{W})^{\gamma}(1-p)$ with finite-dimensional range and a unitary element $w\in F(A\otimes\mathcal{W})^{\gamma}$ such that $$ 0 <\tau_{\omega} (p) < \varepsilon^{\prime}, \; \| \varphi (f)- (\varphi^{\prime}(f)+ \sigma (f))\| <\varepsilon^{\prime}, \; \| \psi (f)- w(\psi^{\prime}(f)+ \sigma (f))w^*\| <\varepsilon^{\prime} $$ for any $f\in F_1^{\prime}$. The Choi-Effros lifting theorem implies that there exist sequences of contractive c.p. maps $\varphi_n^{\prime}$, $\psi_n^{\prime}$ and $\sigma_n$ from $C(\Omega )$ to $A\otimes\mathcal{W}$ such that $\varphi^{\prime}(f)=[(\varphi_n^{\prime}(f))_n]$, $\psi^{\prime}(f)=[(\psi_n^{\prime}(f))_n]$ and $\sigma (f)=[(\sigma_n (f))_n]$ for any $f\in C(\Omega )$. By \cite[Proposition 4.9]{Na3}, there exists a unitary element $(w_n)_n$ in $(A\otimes\mathcal{W})^{\sim}_{\omega}$ such that $w=[(w_n)_n]$. Note that we have $(x\gamma_g(w_n))_n=(xw_n)_n$ for any $g\in G$ and $x\in A\otimes\mathcal{W}$, $$ \lim_{n\to\omega}\| \varphi_n (f)(a\otimes b)- (\varphi_n^{\prime}(f)+ \sigma_n (f))(a\otimes b)\| <\frac{\varepsilon}{7} \eqno{(1)} $$ and $$ \lim_{n\to\omega}\| \psi_n (f)(a\otimes b)- w_n(\psi_n^{\prime}(f)+ \sigma_n (f))(a\otimes b)w_n^*\| <\frac{\varepsilon}{7} \eqno{(2)} $$ for any $f\in F_1^{\prime}$, $a\in F_2$ and $b\in F_3$. Define c.p. maps $\Phi^{\prime}$ and $\Psi^{\prime}$ from $C(\Omega)\otimes \mathcal{W}$ to $\mathcal{B}_p^{\gamma}$ by $$ \Phi^{\prime}:= \rho \circ (\varphi^{\prime}\otimes \mathrm{id}_{\mathcal{W}})\quad \text{and} \quad \Psi^{\prime}:= \rho \circ (\psi^{\prime}\otimes \mathrm{id}_{\mathcal{W}}). 
$$ Then $\Phi^{\prime}$ and $\Psi^{\prime}$ are contractive $(\mathcal{F}_1\odot \mathcal{F}_2, \delta)$-multiplicative maps. Hence Corollary \ref{cor:stable-uniqueness} implies that there exist a unitary element $U$ in $M_{m^2+1}(\mathcal{B}^{\gamma}_p)^{\sim}$ and $z_1,z_2,...,z_m\in\Omega$ such that \begin{align*} \| U & (\Phi^{\prime}(f\otimes b) \oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)\oplus \cdots \oplus\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) }^m) U^* \\ & - \Psi^{\prime}(f\otimes b)\oplus \overbrace{\bigoplus_{k=1}^m f(z_k)\rho (p\otimes b) \oplus \cdots \oplus \bigoplus_{k=1}^m f(z_k)\rho (p\otimes b)}^m\| < \frac{\varepsilon}{7} \end{align*} for any $f\in F_1$ and $b\in F_3$. Using Proposition \ref{thm:Matui-Sato} instead of \cite[Proposition 4.1]{Na3}, the same argument as in the proof of \cite[Theorem 5.2]{Na3} shows that there exist mutually orthogonal projections $\{p_{j,k} \}_{j,k=1}^m$ in $(1-p)F(A\otimes\mathcal{W})^{\gamma}(1-p)$ and a homomorphism $\sigma^{\prime\prime}: C(\Omega)\to (1-p-q)F(A\otimes\mathcal{W})^{\gamma}(1-p-q)$ where $q=\sum_{j,k=1}^mp_{j,k}$ such that $$ \| \sigma (f) - \left(\sum_{j=1}^m\sum_{k=1}^mf(z_k)p_{j,k} + \sigma^{\prime\prime} (f) \right) \| < \frac{2\varepsilon}{7} $$ for any $f\in F_1$ and $p_{j,k}$ is Murray-von Neumann equivalent to $p$ for any $j,k=1,2,...,m$. Define a homomorphism $\hat{\sigma}$ from $C(\Omega)$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $$ \hat{\sigma}(f):= \sum_{j=1}^m\sum_{k=1}^mf(z_k)p_{j,k}+ \sigma^{\prime\prime} (f) $$ for any $f\in C(\Omega)$. By the Choi-Effros lifting theorem, there exists a sequence of contractive c.p. maps $\hat{\sigma}_n$ from $C(\Omega)$ to $A\otimes\mathcal{W}$ such that $\hat{\sigma}(f)=[(\hat{\sigma}_n (f))_n]$. Note that we have $$ \lim_{n\to \omega} \|\sigma_n(f)(a\otimes b)- \hat{\sigma}_n(f)(a\otimes b) \|< \frac{2\varepsilon}{7} \eqno{(3)} $$ for any $f\in F_1$, $a\in F_2$ and $b\in F_3$. 
Since we can regard $\Phi^{\prime}(f\otimes b)+ \sum_{j=1}^m\sum_{k=1}^mf(z_k)\rho (p_{j,k}\otimes b)\in \mathcal{B}_{p+q}^{\gamma}$ as an element in $M_{m^2+1}(\mathcal{B}^{\gamma}_p)$, the same argument as in the proof of \cite[Theorem 5.2]{Na3} shows that there exists a unitary element $V$ in $(\mathcal{B}^{\gamma})^{\sim}$ such that $$ \| V(\Phi^{\prime} (f\otimes b) + \rho (\hat{\sigma}(f)\otimes b))V^*- (\Psi^{\prime}(f\otimes b) +\rho (\hat{\sigma}(f)\otimes b)) \|< \frac{\varepsilon}{7} $$ for any $f\in F_1$ and $b\in F_3$. Let $(v_n)_n$ be a contraction in $\mathcal{A}^{\sim}$ such that $V=[(v_n)_n]$. Then we have $((a\otimes 1_{\mathcal{W}^{\sim}})v_n^*v_n)_n =((a\otimes 1_{\mathcal{W}^{\sim}})v_nv_n^*)_n=a\otimes 1_{\mathcal{W}^{\sim}}$ and $((a\otimes 1_{\mathcal{W}^{\sim}})\gamma_g(v_n))_n=((a\otimes 1_{\mathcal{W}^{\sim}})v_n)_n$ for any $g\in G$ and $a\in A$. Furthermore, we see that $$ \lim_{n\to\omega} \|v_n(\varphi^{\prime}_n(f)+\hat{\sigma}_n(f))(a\otimes b) v_n^* -(\psi^{\prime}_n(f)+\hat{\sigma}_n(f))(a\otimes b)\| < \frac{\varepsilon}{7} \eqno{(4)} $$ for any $f\in F_1$, $a\in F_2$ and $b\in F_3$ by Proposition \ref{pro:homrho}. Put $(u_n)_n:=(w_nv_n)_n\in ((A\otimes\mathcal{W})^{\sim})^{\omega}$. Then we have $$ ((a\otimes b)u_n^*u_n)_n= ((a\otimes b)v_n^*v_n)_n= a\otimes b $$ and $$ ((a\otimes b)u_nu_n^*)_n=(w_n(a\otimes b)v_nv_n^*w_n^*)_n=(w_n(a\otimes b)w_n^*)_n=a\otimes b $$ for any $a\in A$ and $b\in\mathcal{W}$. Also, we have \begin{align*} ((a\otimes b)\gamma_g(u_n))_n & =((a\otimes b)\gamma_g(w_n)\gamma_g(v_n))_n=((a\otimes b)w_n\gamma_g(v_n))_n \\ & =(w_n(a\otimes b)\gamma_g(v_n))_n=(w_n(a\otimes b)v_n)_n=((a\otimes b)u_n)_n \end{align*} for any $g\in G$, $a\in A$ and $b\in\mathcal{W}$. By (1), (2), (3) and (4), we see that $$ \lim_{n\to\omega} \| u_n\varphi_n(f)(a\otimes b)u_n^* - \psi_n (f)(a\otimes b) \| < \varepsilon $$ for any $f\in F_1$, $a\in F_2$ and $b\in F_3$.
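For the reader's convenience, we indicate one way in which the estimates (1)--(4) combine to give this bound: writing $u_n=w_nv_n$ and discarding terms that vanish along $\omega$ (using that $w_n$ is a unitary element which asymptotically commutes with $a\otimes b$), the triangle inequality gives \begin{align*} \| u_n\varphi_n(f)(a\otimes b)u_n^* - \psi_n (f)(a\otimes b) \| \leq & \ \| u_n(\varphi_n(f)-\varphi_n^{\prime}(f)-\sigma_n(f))(a\otimes b)u_n^*\| \\ & + \| u_n(\sigma_n(f)-\hat{\sigma}_n(f))(a\otimes b)u_n^*\| \\ & + \| v_n(\varphi^{\prime}_n(f)+\hat{\sigma}_n(f))(a\otimes b)v_n^* -(\psi^{\prime}_n(f)+\hat{\sigma}_n(f))(a\otimes b)\| \\ & + \| w_n(\hat{\sigma}_n(f)-\sigma_n(f))(a\otimes b)w_n^*\| \\ & + \| w_n(\psi^{\prime}_n(f)+\sigma_n(f))(a\otimes b)w_n^*- \psi_n(f)(a\otimes b)\|, \end{align*} and along $\omega$ the five terms are bounded by $\varepsilon /7$, $2\varepsilon /7$, $\varepsilon /7$, $2\varepsilon /7$ and $\varepsilon /7$ by (1), (3), (4), (3) and (2), respectively, so that the sum is less than $\varepsilon$.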
Therefore, taking a sufficiently large $n$, we obtain a contradiction. Consequently, the proof is complete. \end{proof} The following theorem is the main result in this section. \begin{thm}\label{thm:classification-normal} Let $N_1$ and $N_2$ be normal elements in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\mathrm{Sp} (N_1)=\mathrm{Sp} (N_2)$ and $\tau_{\omega} (f(N_1)) >0$ for any $f\in C(\mathrm{Sp}(N_1))_{+}\setminus \{0\}$. Then there exists a unitary element $u$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $uN_1u^* =N_2$ if and only if $ \tau_{\omega} (f(N_1))= \tau_{\omega} (f(N_2)) $ for any $f\in C(\mathrm{Sp}(N_1))$. \end{thm} \begin{proof} This theorem can be proved by an argument similar to that in the proof of \cite[Theorem 5.3]{Na3}. We give a proof for the reader's convenience. Since the ``only if'' part is clear, we show the ``if'' part. Define unital homomorphisms $\varphi$ and $\psi$ from $C(\mathrm{Sp}(N_1))$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $\varphi (f):= f(N_1)$ and $\psi (f):=f(N_2)$, respectively. By the Choi-Effros lifting theorem, we see that there exist sequences of unital c.p. maps $\varphi_n$ and $\psi_n$ from $C(\mathrm{Sp}(N_1))$ to $(A\otimes\mathcal{W})^{\sim}$ such that $f(N_1)=[(\varphi_n(f))_n]$ and $f(N_2)=[(\psi_n(f))_n]$ for any $f\in C(\mathrm{Sp}(N_1))$. Then we have $$ |\tau (\varphi_n(f_1))-\tau_{\omega}(f_1(N_1))| \to 0, \; \| [\varphi_n(f_1),x]\| \to 0, \; \| [\psi_n(f_1),x]\| \to 0, $$ $$ \| (\varphi_n(f_1f_2)- \varphi_n(f_1)\varphi_n(f_2))x\| \to 0,\; \| (\psi_n(f_1f_2)- \psi_n (f_1)\psi_n(f_2))x\| \to 0, $$ $$ \| (\gamma_g (\varphi_n (f_1))-\varphi_n (f_1))x \|\to 0, \; \| (\gamma_g (\psi_n (f_1))-\psi_n (f_1))x \|\to 0, $$ $$ |\tau (\varphi_n(f_1))-\tau (\psi_n(f_1))| \to 0 $$ as $n\to\omega$ for any $f_1,f_2\in C(\mathrm{Sp}(N_1))$, $x\in A\otimes\mathcal{W}$ and $g\in G$. We denote by $\iota$ the identity function on $\mathrm{Sp}(N_1)$, that is, $\iota (z)=z$ for any $z\in\mathrm{Sp}(N_1)$.
Let $F_1:=\{1, \iota \}\subset C(\mathrm{Sp}(N_1))$, and let $\{F_{2,k}\}_{k\in\mathbb{N}}$ and $\{F_{3,k}\}_{k\in\mathbb{N}}$ be increasing sequences of finite subsets of $A$ and $\mathcal{W}$ such that $A=\overline{\bigcup_{k\in\mathbb{N}} F_{2,k}}$ and $\mathcal{W}=\overline{\bigcup_{k\in\mathbb{N}} F_{3,k}}$, respectively. For any $k\in\mathbb{N}$, we obtain mutually orthogonal positive elements $h_{1,k},h_{2,k},...,h_{l(k),k}$ in $C(\mathrm{Sp}(N_1))$ of norm one by applying Theorem \ref{thm:unitary-equivalence-ed} to $F_1$, $F_{2,k}$, $F_{3,k}$ and $1/k$. Put $$ \nu_k := \frac{1}{2} \min\{\tau_{\omega} (h_{1,k}(N_1)),\tau_{\omega} (h_{2,k}(N_1)),..., \tau_{\omega} (h_{l(k),k}(N_1)) \} >0. $$ Applying Theorem \ref{thm:unitary-equivalence-ed} to $\nu_k$, we obtain finite subsets $\mathcal{G}_{1,k}\subset C(\mathrm{Sp}(N_1))$, $\mathcal{G}_{2,k} \subset A\otimes\mathcal{W}$ and $\delta_k>0$. We may assume that $\{\mathcal{G}_{1,k}\}_{k\in\mathbb{N}}$ and $\{\mathcal{G}_{2,k}\}_{k\in\mathbb{N}}$ are increasing sequences and $\delta_k>\delta_{k+1}$ for any $k\in\mathbb{N}$. We can find a sequence $\{X_k\}_{k=1}^\infty$ of elements in $\omega$ such that $X_{k+1}\subset X_{k}$ and for any $n\in X_{k}$, $$ |\tau (\varphi_n(h_{i,k})) - \tau_{\omega}(h_{i,k}(N_1)) | < \nu_k , \; \| [\varphi_n(f_1),x]\| < \delta_k, \; \| [\psi_n(f_1),x]\| < \delta_k, $$ $$ \| (\varphi_n (f_1f_2)- \varphi_n (f_1)\varphi_n (f_2))x\| < \delta_k, \; \| (\psi_n (f_1f_2)- \psi_n (f_1)\psi_n (f_2))x\| < \delta_k, $$ $$ \| (\gamma_g (\varphi_n (f_1))-\varphi_n (f_1))x \| < \delta_k, \; \| (\gamma_g (\psi_n (f_1))-\psi_n (f_1))x \| < \delta_k, $$ $$ | \tau (\varphi_n (f_1)) -\tau (\psi_n (f_1)) | < \delta_k $$ for any $i\in\{1,2,...,l(k)\}$, $f_1,f_2\in\mathcal{G}_{1,k}$, $x\in\mathcal{G}_{2,k}$ and $g\in G$.
Since we have $$ \tau (\varphi_n(h_{i,k})) > \tau_{\omega}(h_{i,k}(N_1))-\nu_k \geq 2\nu_k - \nu_k = \nu_k $$ for any $i\in\{1,2,...,l(k)\}$, Theorem \ref{thm:unitary-equivalence-ed} implies that for any $n\in X_{k}$, there exists a contraction $u_{k,n}$ in $(A\otimes\mathcal{W})^{\sim}$ such that $$ \| (a\otimes b) (u_{k,n}^*u_{k,n}-1)\| < \frac{1}{k}, \; \| (a\otimes b)(u_{k,n}u_{k,n}^*-1)\| < \frac{1}{k}, $$ $$ \|(a\otimes b)(\gamma_g(u_{k,n})-u_{k,n})\| <\frac{1}{k}, \; \| u_{k,n}\varphi_n (f)(a\otimes b)u_{k,n}^* - \psi_n(f)(a\otimes b) \| <\frac{1}{k} $$ for any $f\in F_1$, $a\in F_{2,k}$, $b\in F_{3,k}$ and $g\in G$. Since $F_1=\{1, \iota \}$, we have $$ \| [u_{k,n}, a\otimes b]\| \leq \| u_{k,n} (a\otimes b)(1-u_{k,n}^*u_{k,n})\| + \| (u_{k,n} (a\otimes b)u_{k,n}^*- a\otimes b)u_{k,n} \|< \frac{2}{k} $$ and $$ \| u_{k,n}\varphi_n (\iota) (a\otimes b)u_{k,n}^*- \psi_n(\iota)(a\otimes b)\| < \frac{1}{k} $$ for any $n\in X_{k}$, $a\in F_{2,k}$ and $b\in F_{3,k}$. Put $$ u_{n} := \left\{\begin{array}{cl} 1 & \text{if } n\notin X_1 \\ u_{k,n} & \text{if } n\in X_k\setminus X_{k+1}\quad (k\in\mathbb{N}) \end{array} \right.. $$ Then $$ \| (a\otimes b)(u_n^*u_n-1)\|\to 0, \; \| (a\otimes b)(u_nu_n^*-1)\|\to 0, \; \| (a\otimes b)(\gamma_g (u_n)-u_n)\|\to 0, $$ $$ \| [u_n, a\otimes b]\|\to 0,\; \|(u_n\varphi_n(\iota)u_n^*-\psi_n(\iota)) (a\otimes b)\|\to 0 $$ as $n\to \omega$ for any $a\in A$, $b\in\mathcal{W}$ and $g\in G$. Therefore $[(u_n)_n]$ is a unitary element in $F(A\otimes\mathcal{W})^{\gamma}$ and $[(u_n)_n]N_1[(u_n)_n]^*=N_2$. \end{proof} Applying the theorem above to projections, we obtain the following corollary. Note that if $p$ is a projection, then $C(\mathrm{Sp}(p))$ can be identified with $\{\lambda_1 p+ \lambda_2(1-p)\; |\; \lambda_1,\lambda_2\in\mathbb{C}\}$. Hence it is clear that $\tau_{\omega}(f(p))>0$ for any $f\in C(\mathrm{Sp}(p))_{+}\setminus \{0\}$ if and only if $0<\tau_{\omega}(p)<1$.
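Indeed, under this identification (with $\mathrm{Sp}(p)=\{0,1\}$) every $f\in C(\mathrm{Sp}(p))$ satisfies $f(p)=f(1)p+f(0)(1-p)$, and hence $$ \tau_{\omega}(f(p))=f(1)\tau_{\omega}(p)+f(0)(1-\tau_{\omega}(p)). $$ Applying this to nonzero positive functions $f$ with $f(0)=0$ and with $f(1)=0$ shows the equivalence.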
Also, for projections $p$ and $q$, we have $\tau_{\omega}(f(p))=\tau_{\omega}(f(q))$ for any $f\in C(\mathrm{Sp}(p))$ if and only if $\tau_{\omega}(p)=\tau_{\omega}(q)$. \begin{cor}\label{thm:comparison} Let $p$ and $q$ be projections in $F(A\otimes \mathcal{W})^{\gamma}$ such that $0< \tau_{\omega} (p) <1$. Then $p$ and $q$ are unitarily equivalent if and only if $ \tau_{\omega} (p)= \tau_{\omega} (q) $. \end{cor} The following corollary is important in the next section. \begin{cor}\label{cor:comparison} Let $p$ and $q$ be projections in $F(A\otimes \mathcal{W})^{\gamma}$ such that $0< \tau_{\omega} (p) \leq 1$. Then $p$ and $q$ are Murray-von Neumann equivalent if and only if $ \tau_{\omega} (p)= \tau_{\omega} (q) $. \end{cor} \begin{proof} By Corollary \ref{thm:comparison}, it suffices to show that if $p$ is a projection in $F(A\otimes \mathcal{W})^{\gamma}$ such that $\tau_{\omega}(p)=1$, then $p$ is Murray-von Neumann equivalent to $1$. Proposition \ref{pro:key-pro} implies that there exists a projection $r$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $r\leq p$ and $\tau_{\omega}(r)=1/2$. By Corollary \ref{thm:comparison}, $p-r$ is unitarily equivalent to $1-r$. Therefore $p=(p-r)+r$ is Murray-von Neumann equivalent to $(1-r)+r=1$. \end{proof} \section{Rohlin type theorem}\label{sec:main} In this section we shall show that $\gamma$ has the Rohlin property. For a $\gamma$-cocycle $w$ in $F(A\otimes\mathcal{W})$, define an action $\gamma^{w}$ on $F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C})$ by $$ \gamma^{w}_g := \mathrm{Ad}\left(\left(\begin{array}{cc} 1 & 0 \\ 0 & w(g) \end{array} \right)\right) \circ (\gamma_g\otimes \mathrm{id}) $$ for any $g\in G$. Since $\gamma$ has the weak Rohlin property, we obtain the following lemma by arguments similar to those in \cite[Proposition 4.8]{MS2} and \cite[Proposition 3.3]{MS3} (see also arguments in Section \ref{sec:target}). We leave the proof to the reader.
\begin{lem}\label{lem:fixed-strict-comparison} Let $a$ and $b$ be positive elements in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $d_{\tau_{\omega}\otimes\mathrm{Tr}_2}(a) <d_{\tau_{\omega}\otimes\mathrm{Tr}_2}(b)$ where $\mathrm{Tr}_2$ is the (unnormalized) usual trace on $M_2(\mathbb{C})$. Then there exists an element $r$ in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $r^*br=a$. \end{lem} The proof of the following lemma is based on Connes' $2\times 2$ matrix trick in \cite[Corollary 2.6]{C3}. \begin{lem}\label{lem:cohomology} Every $\gamma$-cocycle $w$ in $F(A\otimes\mathcal{W})$ is a coboundary. \end{lem} \begin{proof} Let $\varepsilon >0$. By Proposition \ref{pro:key-pro}, there exists a projection $p_{\varepsilon}$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega}(p_{\varepsilon})=1-\varepsilon$. Taking a suitable subsequence of a representative of $p_{\varepsilon}$, we may assume that $w(g)p_{\varepsilon}=p_{\varepsilon}w(g)$ for any $g\in G$. Lemma \ref{lem:fixed-strict-comparison} implies that there exists an element $R_{\varepsilon}$ in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $$ R_{\varepsilon}^*\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) R_{\varepsilon} = \left(\begin{array}{cc} 0 & 0 \\ 0 & p_{\varepsilon} \end{array} \right) . $$ The diagonal argument shows that there exist a projection $p$ in $F(A\otimes\mathcal{W})^{\gamma}$ and an element $R$ in $(F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$ such that $\tau_{\omega}(p)=1$ and $$ R^*\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) R = \left(\begin{array}{cc} 0 & 0 \\ 0 & p \end{array} \right) . $$ By Corollary \ref{cor:comparison}, there exists an element $s$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $s^*s=1$ and $ss^*=p$. 
Taking suitable subsequences of representatives of $s$, $p$ and $R$, we may assume that $w(g)s=sw(g)$ for any $g\in G$ and $$ \left(\begin{array}{cc} 0 & 0 \\ 0 & s^* \end{array} \right) R^*\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) R \left(\begin{array}{cc} 0 & 0 \\ 0 & s \end{array} \right) = \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) . $$ It is easy to see that there exists a projection $q$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $\tau_{\omega}(q)=1$ and $$ \left(\begin{array}{cc} q & 0 \\ 0 & 0 \end{array} \right) = \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) R \left(\begin{array}{cc} 0 & 0 \\ 0 & p \end{array} \right) R^* \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) . $$ By Corollary \ref{cor:comparison}, there exists an element $t$ in $F(A\otimes\mathcal{W})^{\gamma}$ such that $t^*t=1$ and $tt^*=q$. Put $$ V:= \left(\begin{array}{cc} 0 & 0 \\ 0 & s^* \end{array} \right) R^* \left(\begin{array}{cc} t & 0 \\ 0 & 0 \end{array} \right) . $$ Then we have $V\in (F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$, $$ V^*V=\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \quad \text{and} \quad VV^*=\left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right) . $$ It is easy to see that there exists a unitary element $v$ in $F(A\otimes\mathcal{W})$ such that $$ V= \left(\begin{array}{cc} 0 & 0 \\ v & 0 \end{array} \right) . $$ Since $V\in (F(A\otimes\mathcal{W})\otimes M_{2}(\mathbb{C}))^{\gamma^{w}}$, we have $w(g)\gamma_g(v)=v$ for any $g\in G$. Consequently, $w$ is a coboundary. \end{proof} \begin{rem} The lemma above shows that the first cohomology of $\gamma$ vanishes. This property is one of the important properties for the Bratteli-Elliott-Evans-Kishimoto intertwining argument (see, for example, \cite{EK} and \cite{Kis1}) in the classification of Rohlin actions. \end{rem} The following theorem is the main result in this paper.
\begin{thm}\label{thm:main} Let $A$ be a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces, and let $\alpha$ be a strongly outer action of a finite group $G$ on $A$. Then $\gamma=\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ has the Rohlin property. \end{thm} \begin{proof} We identify $B(\ell^2(G))$ with $M_{|G|}(\mathbb{C})$. Also, we can identify $F(A\otimes\mathcal{W})^{\gamma}$ with $F(A\otimes \mathcal{W}\otimes \bigotimes_{n\in\mathbb{N}} M_{|G|}(\mathbb{C}))^{\gamma\otimes\mathrm{id}}$ because $\mathcal{W}$ is UHF stable. Let $\lambda$ be the left regular representation of $G$ on $\ell^2(G)$. Define a map $w$ from $G$ to $F(A\otimes\mathcal{W})^{\gamma}$ by $$ w (g) := [(h_n\otimes k_n \otimes \overbrace{1\otimes \cdots \otimes 1}^n \otimes \lambda(g) \otimes 1\otimes \cdots )_n ] $$ where $\{h_n\}_{n=1}^\infty$ and $\{k_n\}_{n=1}^\infty$ are approximate units for $A$ and $\mathcal{W}$, respectively. Then $w$ is a homomorphism, and hence $w$ is a $\gamma$-cocycle in $F(A\otimes\mathcal{W})$. By Lemma \ref{lem:cohomology}, there exists a unitary element $v$ in $F(A\otimes\mathcal{W})$ such that $w(g)=v\gamma_g(v^*)$ for any $g\in G$. For any $g\in G$, let $e_{g}$ be a projection onto $\mathbb{C}\delta_g$ where $\{\delta_h\; |\; h\in G\}$ is the canonical basis of $\ell^2(G)$, and put $$ p_g:= v^*[(h_n\otimes k_n \otimes \overbrace{1\otimes \cdots \otimes 1}^n \otimes e_g \otimes 1\otimes \cdots )_n ]v. $$ Then $\{p_{g}\}_{g\in G}$ is a partition of unity in $F(A\otimes\mathcal{W})$ consisting of projections satisfying $$ \gamma_{g} (p_{h}) =p_{gh} $$ for any $g,h\in G$. Consequently, $\gamma$ has the Rohlin property. \end{proof} Combining the theorem above and the classification results in \cite{CE} and \cite{EGLN}, we obtain the following corollary. 
\begin{cor}\label{cor:main} Let $A$ and $B$ be simple separable nuclear C$^*$-algebras with a unique tracial state and no unbounded traces, and let $\alpha$ and $\beta$ be strongly outer actions of a finite group $G$ on $A$ and $B$, respectively. Then $\alpha\otimes\mathrm{id}$ on $A\otimes\mathcal{W}$ is conjugate to $\beta\otimes\mathrm{id}$ on $B\otimes\mathcal{W}$. \end{cor} \begin{proof} By \cite[Theorem 6.1]{CE}, $A\otimes\mathcal{Z}$ and $B\otimes\mathcal{Z}$ have finite nuclear dimension. Hence \cite[Corollary 6.7]{EGLN} implies that $A\otimes\mathcal{W}$ and $B\otimes\mathcal{W}$ are isomorphic to $\mathcal{W}$. Therefore we obtain the conclusion by Theorem \ref{thm:classification} and Theorem \ref{thm:main}. \end{proof} \begin{rem} (1) If $\alpha^{\prime}$ is not a strongly outer action of a non-trivial finite group $G$ on $A$, then $(A\otimes\mathcal{W})\rtimes_{\alpha^{\prime}\otimes\mathrm{id}} G$ has at least two extremal tracial states. Hence $\alpha^{\prime}\otimes\mathrm{id}$ is not (cocycle) conjugate to the action in the corollary above. \ \\ (2) There exist uncountably many non-conjugate strongly outer actions of $\mathbb{Z}_2$ on $\mathcal{W}$ by \cite[Example 5.6]{Na4} and \cite[Remark 5.7]{Na4}. \ \\ (3) For generalizing the corollary above to amenable group actions, it seems important to characterize $\mathcal{W}$ using the central sequence C$^*$-algebra $F(\mathcal{W})$. \ \\ (4) If $\alpha$ is a strongly outer action of a finite group $G$ on a simple separable nuclear C$^*$-algebra $A$ with a unique tracial state and no unbounded traces, then $A\rtimes_{\alpha} G$ is a simple separable nuclear C$^*$-algebra with a unique tracial state and no unbounded traces. Hence $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}}G \cong (A\rtimes_{\alpha} G)\otimes\mathcal{W}$ is isomorphic to $\mathcal{W}$. \ \\ (5) We need the unique trace property of $A$ for $A\otimes\mathcal{W}\cong \mathcal{W}$. 
Moreover, we used this property in many arguments in Section \ref{sec:target}. It seems possible, however, to generalize some results in this paper to more general $KK$-contractible C$^*$-algebras with suitable modifications. \end{rem} \end{document}
\begin{document} \title{Preconditioners and Tensor Product Solvers for Optimal Control Problems from Chemotaxis} \author{Sergey Dolgov\footnote{University of Bath, Claverton Down, BA2 7AY, Bath, United Kingdom. {\tt [email protected]}} and John W. Pearson\footnote{School of Mathematics, The University of Edinburgh, James Clerk Maxwell Building, The King's Buildings, Peter Guthrie Tait Road, Edinburgh, EH9 3FD, United Kingdom. {\tt [email protected]}}} \maketitle \begin{abstract} In this paper, we consider the fast numerical solution of an optimal control formulation of the Keller--Segel model for bacterial chemotaxis. Upon discretization, this problem requires the solution of huge-scale saddle point systems to guarantee accurate solutions. We consider the derivation of effective preconditioners for these matrix systems, which may be embedded within suitable iterative methods to accelerate their convergence. We also construct low-rank tensor-train techniques which enable us to present efficient and feasible algorithms for problems that are finely discretized in the space and time variables. Numerical results demonstrate that the number of preconditioned GMRES iterations depends mildly on the model parameters. Moreover, the low-rank solver makes the computing time and memory costs sublinear in the original problem size. \end{abstract} \textbf{Keywords:} \textit{PDE-constrained optimization; Boundary control; Preconditioning; Chemotaxis; Mathematical biology} \section{Introduction}\label{sec:Intro} The process of chemotaxis in biology describes the movement of cells or organisms in a directed fashion as a response to external chemical signals. In 1971, Keller and Segel presented a mathematical model for bacterial chemotaxis \cite{KellerSegel}. In essence, for large numbers of bacteria, it is predicted that the bacteria will on average move up gradients of the chemoattractant concentration. 
Since Keller and Segel's work, an area of numerical mathematics that has become a subject of significant interest is that of PDE-constrained optimization, where one wishes to predict the circumstances in which some physical (or in this case biological) objective occurs, subject to a system of PDEs describing the process. Using this technology, one is able to pose an inverse problem for the chemotaxis mechanism: given an observed bacterial cell concentration profile, what can be said about the external chemoattractant at the boundaries of a domain of interest? The constraints for this problem are therefore the PDEs describing bacterial chemotaxis. This is a parameter identification problem that has been considered in the literature, for instance in \cite{LBP,Potschka}, and in particular it was shown numerically by Lebiedz and Brandt-Pollmann that ``it is possible to systematically control spatiotemporal dynamical behavior'' \cite{LBP}. The fast and efficient iterative solution of PDE-constrained optimization problems has increasingly become an active area of research, and in particular it is now widely recognised that the incorporation of effective preconditioners to accelerate iterative schemes is highly beneficial from a computational point of view. Preconditioning theory and numerics for a number of steady \cite{PSW_StateConstraints,PW2,PW1,RDW,SchoberlZulehner,Zulehner} and time-dependent \cite{DPSS,PearsonStoll,PSW,SPM} problems have been established, with \cite{PearsonStoll,SPM} describing the resulting solvers for reaction--diffusion problems from chemistry and biology. In this paper, we derive a potent preconditioner for the chemotaxis problem, based on the saddle point structure of the matrix systems resulting from Newton-type iterations applied to the nonlinear PDEs. 
When solving these optimization problems, which often involve the solution of a system of PDEs with initial conditions coupled with adjoint PDEs equipped with final-time conditions, there are many challenges arising from the time-dependent component of the problem in particular, due to the forward-backward solves required, and the associated scaling of computational complexity with the fineness of the grid in the time variable. Difficulties also arise from nonlinear problems, due to the matrices arising from the PDE system varying in structure at every time step, unlike linear problems for which some matrices can be re-used repeatedly within a solver. For time-dependent nonlinear problems that arise from chemotaxis, computer storage is therefore a significant bottleneck, unless a numerical algorithm is specifically tailored in order to mitigate this. To combat this issue, in addition to presenting our new preconditioner, we describe an approach for approximating the solution of our problem in a low-rank format, namely the Tensor Train decomposition \cite{osel-tt-2011}. Low-rank tensor techniques emerge from the separation of variables and the Fourier method for solving PDEs. We can approximate the solution in the form $z(x,y) \approx \sum_{\alpha=1}^{r} v_{\alpha}(x) w_{\alpha}(y)$, using a possibly small number of terms $r$. In this case, the discretized univariate functions in the low-rank decomposition are much cheaper to store than the original multivariate function. The discretized separation of variables requires the low-rank approximation of matrices (for two variables $x,y$), or tensors (for three or more variables). Practical low-rank tensor algorithms employ robust tools of linear algebra, such as the singular value decomposition, to deliver an optimal low-rank approximation for a desired accuracy. Extensive reviews on the topic can be found in \cite{hackbusch-2012,bokh-surv-2015}. 
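As a small illustration of the separated representation above (our own sketch, not code from the paper), the following discretizes the smooth function $z(x,y)=1/(1+x+y)$ on a tensor grid and compresses it with a truncated SVD, which yields the optimal rank-$r$ approximation in the Frobenius norm:

```python
import numpy as np

# Discretize the smooth function z(x, y) = 1 / (1 + x + y) on a tensor grid.
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
Z = 1.0 / (1.0 + x[:, None] + y[None, :])

# Truncated SVD: Z ~ sum_{a=1}^{r} s_a u_a v_a^T, the discrete analogue of
# the separated form z(x, y) ~ sum_a v_a(x) w_a(y).
U, s, Vt = np.linalg.svd(Z)
r = 5
Z_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(Z - Z_r) / np.linalg.norm(Z)
# Storage drops from n^2 values to 2*n*r + r, while the error stays tiny
# because the singular values of this smooth function decay exponentially.
print(rel_err)
```

Discontinuous data behaves very differently: replacing $Z$ by a level set indicator typically forces the rank to grow with the grid size, which is why the formulation deliberately avoids such functions.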
The efficiency of low-rank decompositions depends crucially on the value of the rank $r$, which in turn reflects the structure of a function. Discontinuous functions, in particular level set functions, may require high ranks. However, smooth functions allow very accurate low-rank approximations, and hence a sublinear complexity of the inverse problem solution \cite{uschmajew-approx-rate-2013,tee-tensor-2003}. The inverse problem involves driving the solution to a desired state, which usually has a simple (and hence low-rank) structure. Therefore, as long as we avoid discontinuous functions in our formulation, the low-rank techniques can be very efficient for the inverse problem. This is demonstrated in our computational experiments. This paper is structured as follows. In Section \ref{sec:Problem} we state the problem whose numerical solution is considered. In Section \ref{sec:Matrix} we present the structure of the matrix systems that result from the discretization of the system of PDEs. In Section \ref{sec:Preconditioner} we present our preconditioning strategy for these systems, with numerical results provided in Section \ref{sec:NumEx1}. We describe the low-rank tensor decomposition which is employed for the matrix systems in Section \ref{sec:LowRank}, with additional numerical experiments relating to this approach carried out in Section \ref{sec:NumEx2}. Finally, concluding remarks are made in Section \ref{sec:Conc}. 
\section{Problem statement}\label{sec:Problem} We examine the following problem describing the optimal control of a bacterial chemotaxis system, based on studies in the literature such as \cite{LBP} and \cite[Chapter 13]{Potschka}: \begin{align} \ \label{CostFunctional} \min_{z,c,u}~~\frac{1}{2}\int_{\Omega}\left(z(\mathbf{x},T)-\widehat{z}\right)^2+\frac{\gamma_c}{2}\int_{\Omega}\left(c(\mathbf{x},T)-\widehat{c}\right)^2+\frac{\gamma_u}{2}{}&{}\int_{\partial\Omega\times(0,T)}u^2 \end{align} subject to \begin{align*} \frac{\partial{}z}{\partial{}t}-D_{z}\nabla^{2}z-\alpha\nabla\cdot\left(\frac{\nabla{}c}{(1+c)^2}z\right)=0\quad\quad&\text{on }\Omega\times(0,T), \\ \frac{\partial{}c}{\partial{}t}-\nabla^{2}c+\rho{}c-w\frac{z^2}{1+z^2}=0\quad\quad&\text{on }\Omega\times(0,T), \end{align*} equipped with the boundary conditions and initial conditions: \begin{align} \ \nonumber \frac{\partial{}z}{\partial{}n}=0\hspace{1.8em}\quad&\text{on }\partial\Omega\times(0,T), \\ \ \nonumber \frac{\partial{}c}{\partial{}n}+\beta{}c=\beta{}u\hspace{1.1em}\quad&\text{on }\partial\Omega\times(0,T), \\ \ \nonumber z(\mathbf{x},0)=z_0(\mathbf{x})\quad&\text{on }\Omega, \\ \ \nonumber c(\mathbf{x},0)=c_0(\mathbf{x})\quad&\text{on }\Omega. \end{align} This problem is solved on a space-time domain $\Omega\times(0,T)$ with boundary $\partial\Omega\times(0,T)$, and for $\Omega\subset\mathbb{R}^2$. The variables $z$, $c$ denote \emph{state variables}, corresponding to the bacterial cell density and chemoattractant concentration respectively, with $u$ the \emph{control variable}, $\widehat{z}$, $\widehat{c}$ given \emph{desired states}, $z_0$, $c_0$ given initial conditions, and $\gamma_c$, $\gamma_u$, $D_z$, $\alpha$, $\rho$, $w$, $\beta$ given (positive) parameters. 
We highlight that, by construction of the problem, the control $u$ in some sense relates to the gradient of chemoattractant concentration on the boundary of the domain of interest. The form of the boundary condition which enforces the control makes this a \emph{boundary control problem}. In this PDE-constrained optimization model, we wish to discover what the profile of this control must be in order for the biological system to behave in a way prescribed by the desired states $\widehat{z}$, $\widehat{c}$. \begin{Rem} Although derived similarly, the main difference between the works of \cite{LBP} and \cite{Potschka} is that \cite{LBP} considers solely the misfit between $z$ and $\widehat{z}$, regularized by a term involving the final time $T$. We believe that the methods introduced in this paper are equally applicable to either cost functional. \end{Rem} We now consider the first and second derivatives of the Lagrangian\footnote{For ease of notation, we exclude initial conditions within the definition of the Lagrangian.} \begin{align*} \ \mathcal{L}(z,c,u,p,q)={}&\frac{1}{2}\int_{\Omega}\left(z(\mathbf{x},T)-\widehat{z}\right)^2+\frac{\gamma_c}{2}\int_{\Omega}\left(c(\mathbf{x},T)-\widehat{c}\right)^2+\frac{\gamma_u}{2}\int_{\partial\Omega\times(0,T)}u^2 \\ \ &\quad\quad+\int_{\Omega\times(0,T)}p_{\Omega}\left(\frac{\partial{}z}{\partial{}t}-D_{z}\nabla^{2}z-\alpha\nabla\cdot\left(\frac{\nabla{}c}{(1+c)^2}z\right)\right) \\ \ &\quad\quad+\int_{\Omega\times(0,T)}q_{\Omega}\left(\frac{\partial{}c}{\partial{}t}-\nabla^{2}c+\rho{}c-w\frac{z^2}{1+z^2}\right) \\ \ &\quad\quad+\int_{\partial\Omega\times(0,T)}p_{\partial\Omega}\left(\frac{\partial{}z}{\partial{}n}\right)+\int_{\partial\Omega\times(0,T)}q_{\partial\Omega}\left(\frac{\partial{}c}{\partial{}n}+\beta{}c-\beta{}u\right), \end{align*} where $p$ and $q$ denote the adjoint variables corresponding to $z$ 
and $c$, with $p_{\Omega}$, $q_{\Omega}$ the components of $p$, $q$ within the interior of $\Omega$, and $p_{\partial\Omega}$, $q_{\partial\Omega}$ the components on the boundary. We arrive at the following system for the Newton formulations of the first-order optimality conditions: \begin{align} \ \label{Newton1} &\frac{\partial{}s_z}{\partial{}t}-D_{z}\nabla^{2}s_z+\alpha\nabla\cdot\left(\nabla\left(\frac{1}{1+c}\right)s_z\right)+\alpha\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}s_c\right)z\right) \\ \ \nonumber &\quad\quad\quad\quad=-\left(\frac{\partial{}z}{\partial{}t}-D_{z}\nabla^{2}z-\alpha\nabla\cdot\left(\frac{\nabla{}c}{(1+c)^2}z\right)\right)\quad\text{on }\Omega\times(0,T), \\ \ \label{Newton2} &\frac{\partial{}s_c}{\partial{}t}-\nabla^{2}s_c+\rho{}s_c-2w\frac{z}{(1+z^2)^2}s_z \\ \ \nonumber &\quad\quad\quad\quad=-\left(\frac{\partial{}c}{\partial{}t}-\nabla^{2}c+\rho{}c-w\frac{z^2}{1+z^2}\right)\hspace{3.45em}\quad\text{on }\Omega\times(0,T), \\ \ \label{Newton3} &\gamma_u{}s_u-\beta{}s_q=-\left(\gamma_u{}u-\beta{}q\right)\hspace{10.7em}\quad\text{on }\partial\Omega\times(0,T), \\ \ \label{Newton4} &\chi_{\Omega_T}(s_z)-2wq\frac{1-3z^2}{(1+z^2)^3}s_z+\alpha\nabla\left(\frac{2c}{(1+c^2)^2}s_c\right)\cdot\nabla{}p \\ \ \nonumber &\quad\quad\quad\quad-\frac{\partial{}s_p}{\partial{}t}-D_{z}\nabla^{2}s_p-\alpha\nabla\left(\frac{1}{1+c}\right)\cdot\nabla{}s_p-2w\frac{z}{(1+z^2)^2}s_q \\ \ \nonumber &\quad\quad\quad\quad=\widehat{z}-\left(\chi_{\Omega_T}(z)-\frac{\partial{}p}{\partial{}t}-D_{z}\nabla^{2}p-\alpha\nabla\left(\frac{1}{1+c}\right)\cdot\nabla{}p-2w\frac{zq}{(1+z^2)^2}\right) \\ \ \nonumber &\quad\quad\quad\quad\hspace{19.55em}\text{on }\Omega\times(0,T), \\ \ \label{Newton5} 
&-\alpha{}p\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)s_z\right)+\gamma_{c}\chi_{\Omega_T}(s_c)+\alpha{}p\nabla\cdot\left(\nabla\left(\frac{2c}{(1+c^2)^2}s_c\right)z\right) \\ \ \nonumber &\quad\quad\quad\quad-\alpha\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)z\right)s_p-\frac{\partial{}s_q}{\partial{}t}-\nabla^{2}s_q+\rho{}s_q \\ \ \nonumber &\quad\quad\quad\quad=\gamma_{c}\widehat{c}-\left(\gamma_{c}\chi_{\Omega_T}(c)-\alpha{}p\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)z\right)-\frac{\partial{}q}{\partial{}t}-\nabla^{2}q+\rho{}q\right) \\ \ \nonumber &\quad\quad\quad\quad\hspace{19.55em}\text{on }\Omega\times(0,T), \end{align} where $s_z$, $s_c$, $s_u$, $s_p$, $s_q$ are the Newton updates for $z$, $c$, $u$, $p$, $q$, and $\chi_{\Omega_T}(\sq)$ denotes a function that restricts the variable to time $t=T$. The boundary conditions for the state and adjoint variables are given by \begin{eqnarray*} \ \frac{\partial{}s_z}{\partial{}n}=0,\quad\frac{\partial{}s_c}{\partial{}n}+\beta{}s_c=\beta{}s_u,\quad\frac{\partial{}s_p}{\partial{}n}=0,\quad\frac{\partial{}s_q}{\partial{}n}+\beta{}s_q=0\quad\text{on }\partial\Omega\times(0,T), \end{eqnarray*} with initial and final-time conditions \begin{eqnarray*} \ s_z(\mathbf{x},0)=0,\quad{}s_c(\mathbf{x},0)=0,\quad{}s_p(\mathbf{x},T)=0,\quad{}s_q(\mathbf{x},T)=0\quad\text{on }\Omega, \end{eqnarray*} assuming an initial guess is chosen that satisfies the initial conditions for $z$, $c$ and the final-time conditions for $p$, $q$. \begin{Rem} We highlight that there also exist chemotaxis problems which may be written in distributed control form. 
For example the work in \cite{EPS}, on the identification of chemotaxis models with volume-filling, considers (amongst others) a problem which may be interpreted in our setting in the following way: \begin{align*} \ \min_{z,f}~~\frac{1}{2}\int_{\Omega\times(0,T)}\left(z-\widehat{z}\right)^2+\frac{\gamma}{2}\int_{\Omega\times(0,T)}\left[f^2+|\nabla{}f|^2\right]& \\ \ \emph{s.t.}\quad\frac{\partial{}z}{\partial{}t}-\nabla^{2}z+f\nabla^{2}c+\nabla{}f\cdot\nabla{}c=0\hspace{1.8em}\quad\emph{on }&\Omega\times(0,T), \\ \ -\nabla^{2}c+c=z\hspace{1.8em}\quad\emph{on }&\Omega\times(0,T), \\ \ \frac{\partial{}z}{\partial{}n}-f\frac{\partial{}c}{\partial{}n}=0\hspace{1.8em}\quad\emph{on }&\partial\Omega\times(0,T), \\ \ \frac{\partial{}c}{\partial{}n}=0\hspace{1.8em}\quad\emph{on }&\partial\Omega\times(0,T), \\ \ z(\mathbf{x},0)=z_0(\mathbf{x})\quad\emph{on }&\Omega, \end{align*} where $\gamma$ is a positive constant, and $f(z)$ denotes the chemoattractant sensitivity. The challenge in this case is to discover the necessary profile of the function $f$ in order to drive the chemoattractant to a particular state. We believe that variants of the techniques introduced in this paper could also be applied to this distributed control problem. \end{Rem} \section{Matrix systems for Newton and Gauss--Newton}\label{sec:Matrix} In this section, we describe the matrix systems which are obtained by discretization of the optimization problem \eqref{CostFunctional} using the finite element method. 
Concatenating the Newton equations \eqref{Newton1}--\eqref{Newton5}, along with boundary conditions and initial/final-time conditions, gives a block matrix system of the following form: \begin{align} \ \label{Newton} &\left[\begin{array}{ccccc} \mathcal{L}_{zz} & \mathcal{L}_{zc} & 0 & \mathcal{L}_{zp} & \mathcal{L}_{zq} \\ \mathcal{L}_{cz} & \mathcal{L}_{cc} & 0 & \mathcal{L}_{cp} & \mathcal{L}_{cq} \\ 0 & 0 & \gamma_{u}\cdot\text{Id} & 0 & -\beta\chi_{\partial\Omega}(\sq)^\top \\ \mathcal{L}_{pz} & \mathcal{L}_{pc} & 0 & 0 & 0 \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} & -\beta\chi_{\partial\Omega}(\sq) & 0 & 0 \\ \end{array}\right]\left[\begin{array}{c} s_z \\ s_c \\ s_u \\ s_p \\ s_q \\ \end{array}\right] \\ \ \nonumber &\quad\quad\quad\quad=\left[\begin{array}{c} \widehat{z}-\left(\chi_{\Omega_T}(z)-\frac{\partial{}p}{\partial{}t}-D_{z}\nabla^{2}p-\alpha\nabla\left(\frac{1}{1+c}\right)\cdot\nabla{}p-2w\frac{zq}{(1+z^2)^2}\right) \\ \gamma_{c}\widehat{c}-\left(\gamma_{c}\chi_{\Omega_T}(c)-\alpha{}p\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)z\right)-\frac{\partial{}q}{\partial{}t}-\nabla^{2}q+\rho{}q\right) \\ -\left(\gamma_u{}u-\beta{}q\right) \\ -\left(\frac{\partial{}z}{\partial{}t}-D_{z}\nabla^{2}z-\alpha\nabla\cdot\left(\frac{\nabla{}c}{(1+c)^2}z\right)\right) \\ -\left(\frac{\partial{}c}{\partial{}t}-\nabla^{2}c+\rho{}c-w\frac{z^2}{1+z^2}\right) \\ \end{array}\right], \end{align} where \begin{align*} \ \left[\begin{array}{cc} \mathcal{L}_{zz} & \mathcal{L}_{zc} \\ \mathcal{L}_{cz} & \mathcal{L}_{cc} \\ \end{array}\right]={}&\left[\begin{array}{cc} \chi_{\Omega_T}(\sq)-2wq\frac{1-3z^2}{(1+z^2)^3} & \alpha\nabla\left(\frac{2c}{(1+c^2)^2}{\sq}\right)\cdot\nabla{}p \\ -\alpha{}p\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)\sq\right) & 
\gamma_{c}\chi_{\Omega_T}(\sq)+\alpha{}p\nabla\cdot\left(\nabla\left(\frac{2c}{(1+c^2)^2}\sq\right)z\right) \\ \end{array}\right], \\ \ \left[\begin{array}{cc} \mathcal{L}_{pz} & \mathcal{L}_{pc} \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} \\ \end{array}\right]={}&\left[\begin{array}{cc} \frac{\partial}{\partial{}t}-D_{z}\nabla^{2}+\alpha\nabla\cdot\left(\nabla\left(\frac{1}{1+c}\right)\sq\right) & \alpha\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\sq\right)z\right) \\ -2w\frac{z}{(1+z^2)^2} & \frac{\partial}{\partial{}t}-\nabla^{2}+\rho\cdot\text{Id} \\ \end{array}\right], \\ \ \left[\begin{array}{cc} \mathcal{L}_{zp} & \mathcal{L}_{zq} \\ \mathcal{L}_{cp} & \mathcal{L}_{cq} \\ \end{array}\right]={}&\left[\begin{array}{cc} -\frac{\partial}{\partial{}t}-D_{z}\nabla^{2}-\alpha\nabla\left(\frac{1}{1+c}\right)\cdot\nabla &-2w\frac{z}{(1+z^2)^2} \\ -\alpha\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)z\right) & -\frac{\partial}{\partial{}t}-\nabla^{2}+\rho\cdot\text{Id} \\ \end{array}\right], \end{align*} with $\text{Id}$ denoting the identity operator, and $\chi_{\partial\Omega}(\sq)$ representing a function restricted to the boundary $\partial\Omega$. As an alternative to solving the Newton system \eqref{Newton}, it is possible to instead consider a Gauss--Newton approximation, where one neglects second derivatives within the $(1,1)$-block of the saddle point matrix as defined in Section \ref{sec:Preconditioner}. 
This results in the solution of systems \begin{eqnarray} \ \label{GN} \left[\begin{array}{ccccc} \chi_{\Omega_T}(\sq) & 0 & 0 & \mathcal{L}_{zp} & \mathcal{L}_{zq} \\ 0 & \gamma_{c}\chi_{\Omega_T}(\sq) & 0 & \mathcal{L}_{cp} & \mathcal{L}_{cq} \\ 0 & 0 & \gamma_{u}\cdot\text{Id} & 0 & -\beta\chi_{\partial\Omega}(\sq)^\top \\ \mathcal{L}_{pz} & \mathcal{L}_{pc} & 0 & 0 & 0 \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} & -\beta\chi_{\partial\Omega}(\sq) & 0 & 0 \\ \end{array}\right]\left[\begin{array}{c} s_z \\ s_c \\ s_u \\ s_p \\ s_q \\ \end{array}\right]=\mathbf{b}, \end{eqnarray} where $\mathbf{b}$ is the same right-hand side vector as in \eqref{Newton}. To be more explicit about the $\chi_{\Omega_T}(\sq)$ and $\chi_{\partial\Omega}(\sq)$ terms, the associated matrices contain entries of the form $\int_{\Omega}\phi_i\cdot\phi_j|_{t=T}$ and $\int_{\partial\Omega\times(0,T)}\phi_i\cdot\phi_j|_{\partial\Omega}$ respectively, for finite element basis functions $\{\phi_i\}$ of the same form for each PDE variable. \subsection{Additional control constraints}\label{sec:Matrix_ControlConstraints} It is perfectly reasonable to add the following control constraint: \begin{eqnarray*} \ u_-(\mathbf{x},t)\leq{}u\leq{}u_+(\mathbf{x},t)\quad\text{a.e. on }\partial\Omega\times(0,T), \end{eqnarray*} for given functions $u_-$, $u_+$, into the PDE-constrained optimization model. In other words, we prescribe that the chemoattractant must behave in a ``sensible'' (physical) way on the boundary of the domain of interest. 
One way in which we can tackle this additional term is to modify the cost functional \eqref{CostFunctional} to add a Moreau--Yosida regularization term (see \cite{ItoKunisch}) for the bound constraints, thereby minimizing instead \begin{align*} \ \min_{z,c,u}~~&\frac{1}{2}\int_{\Omega}\left(z(\mathbf{x},T)-\widehat{z}\right)^2+\frac{\gamma_c}{2}\int_{\Omega}\left(c(\mathbf{x},T)-\widehat{c}\right)^2+\frac{\gamma_u}{2}\int_{\partial\Omega\times(0,T)}u^2 \\ \ &\quad\quad+\frac{1}{2\varepsilon}\int_{\partial\Omega\times(0,T)}|\max\{0,u-u_+\}|^2+\frac{1}{2\varepsilon}\int_{\partial\Omega\times(0,T)}|\min\{0,u-u_-\}|^2, \end{align*} with $\varepsilon$ a given (small) positive constant, chosen to enforce the control constraints efficiently. When forming the Newton system in this setting, we will be required to solve systems relating to the finite element discretization of the following terms: \begin{eqnarray} \ \label{GN_ControlConstraints} \left[\begin{array}{ccccc} \chi_{\Omega_T}(\sq) & 0 & 0 & \mathcal{L}_{zp} & \mathcal{L}_{zq} \\ 0 & \gamma_{c}\chi_{\Omega_T}(\sq) & 0 & \mathcal{L}_{cp} & \mathcal{L}_{cq} \\ 0 & 0 & \gamma_{u}\cdot\text{Id}+\frac{1}{\varepsilon}G_{\Lambda} & 0 & -\beta\chi_{\partial\Omega}(\sq)^\top \\ \mathcal{L}_{pz} & \mathcal{L}_{pc} & 0 & 0 & 0 \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} & -\beta\chi_{\partial\Omega}(\sq) & 0 & 0 \\ \end{array}\right]\left[\begin{array}{c} s_z \\ s_c \\ s_u \\ s_p \\ s_q \\ \end{array}\right]=\widetilde{\mathbf{b}}, \end{eqnarray} where \begin{eqnarray*} \ \widetilde{\mathbf{b}}:=\left[\begin{array}{c} \widehat{z}-\left(\chi_{\Omega_T}(z)-\frac{\partial{}p}{\partial{}t}-D_{z}\nabla^{2}p-\alpha\nabla\left(\frac{1}{1+c}\right)\cdot\nabla{}p-2w\frac{zq}{(1+z^2)^2}\right) \\ 
\gamma_{c}\widehat{c}-\left(\gamma_{c}\chi_{\Omega_T}(c)-\alpha{}p\nabla\cdot\left(\nabla\left(\frac{1}{(1+c)^2}\right)z\right)-\frac{\partial{}q}{\partial{}t}-\nabla^{2}q+\rho{}q\right) \\ \frac{1}{\varepsilon}(G_{\Lambda_+}y_{+}+G_{\Lambda_-}y_{-})-\left(\gamma_u{}u-\beta{}q\right) \\ -\left(\frac{\partial{}z}{\partial{}t}-D_{z}\nabla^{2}z-\alpha\nabla\cdot\left(\frac{\nabla{}c}{(1+c)^2}z\right)\right) \\ -\left(\frac{\partial{}c}{\partial{}t}-\nabla^{2}c+\rho{}c-w\frac{z^2}{1+z^2}\right) \\ \end{array}\right]. \end{eqnarray*} Here, $G_{\Lambda_+}$, $G_{\Lambda_-}$, $G_{\Lambda}$ denote projections onto the active sets $\Lambda_+:=\{i:u_i>(u_+)_i\}$, $\Lambda_-:=\{i:u_i<(u_-)_i\}$, $\Lambda:=\Lambda_{+}\cup\Lambda_{-}$ (for the $i$-th node on the discrete level). \section{Preconditioning for Gauss--Newton matrix systems}\label{sec:Preconditioner} In this section we focus on deriving effective preconditioners for the matrix systems \eqref{GN} and \eqref{GN_ControlConstraints} resulting from the Gauss--Newton method applied to the chemotaxis problem, both without and with additional control constraints. We base our preconditioners on the well studied field of \emph{saddle point systems}, which take the form \cite{BGL} \begin{eqnarray} \ \label{SaddlePt} \underbrace{\left[\begin{array}{cc} A & B^\top \\ B & 0 \\ \end{array}\right]}_{\mathcal{A}}\left[\begin{array}{c} \mathbf{x}_1 \\ \mathbf{x}_2 \\ \end{array}\right]=\left[\begin{array}{c} \mathbf{b}_1 \\ \mathbf{b}_2 \\ \end{array}\right], \end{eqnarray} with $A$ symmetric positive semidefinite in our case, and $B$ having at least as many columns as rows. 
Two well-studied preconditioners for the system \eqref{SaddlePt} are given by \cite{Ipsen,Kuznetsov,MGW} \begin{eqnarray*} \ \mathcal{P}_D=\left[\begin{array}{cc} A & 0 \\ 0 & S \\ \end{array}\right],\quad\quad\mathcal{P}_T=\left[\begin{array}{cc} A & 0 \\ B & -S \\ \end{array}\right], \end{eqnarray*} where the (negative) \emph{Schur complement} $S:=BA^{-1}B^\top$. It is known \cite{Ipsen,Kuznetsov,MGW} that, provided the preconditioned system is nonsingular, its eigenvalues are given by \begin{equation*} \ \lambda(\mathcal{P}_D^{-1}\mathcal{A})\in\left\{1,\frac{1}{2}(1\pm\sqrt{5})\right\},\quad\quad\lambda(\mathcal{P}_T^{-1}\mathcal{A})\in\left\{1\right\}, \end{equation*} with these results also holding for the block triangular preconditioner $\mathcal{P}_T$ even if $A$ is not symmetric. Now, as $\mathcal{P}_D^{-1}\mathcal{A}$ is diagonalizable but $\mathcal{P}_T^{-1}\mathcal{A}$ is not, preconditioning with $\mathcal{P}_D$ ($\mathcal{P}_T$) yields convergence of a suitable Krylov subspace method in 3 (2) iterations, respectively. In practice, however, $\mathcal{P}_D$ and $\mathcal{P}_T$ are not useful preconditioners, as the matrices $A$ and $S$ are computationally expensive to invert in general. We therefore instead seek preconditioners of the form \begin{eqnarray*} \ \widehat{\mathcal{P}}_D=\left[\begin{array}{cc} \widehat{A} & 0 \\ 0 & \widehat{S} \\ \end{array}\right],\quad\quad\widehat{\mathcal{P}}_T=\left[\begin{array}{cc} \widehat{A} &0 \\ B & -\widehat{S} \\ \end{array}\right], \end{eqnarray*} where $\widehat{A}$ and $\widehat{S}$ denote suitably chosen approximations of the $(1,1)$-block $A$ and Schur complement $S$. The objective here is that our Krylov method will not converge in 3 or 2 iterations, but just a few more, while at the same time ensuring that our preconditioner is much cheaper to invert. From this point, we focus our attention on preconditioners of block triangular form. 
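The eigenvalue clustering quoted above is straightforward to verify numerically. The following sketch (our own illustration, with randomly generated blocks) builds a small saddle point matrix with the exact Schur complement and checks that the preconditioned spectra are $\{1,\frac{1}{2}(1\pm\sqrt{5})\}$ for $\mathcal{P}_D$ and $\{1\}$ for $\mathcal{P}_T$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite (1,1)-block
B = rng.standard_normal((m, n))      # full-row-rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])   # saddle point matrix
S = B @ np.linalg.solve(A, B.T)                   # Schur complement S = B A^{-1} B^T

P_D = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])
P_T = np.block([[A, np.zeros((n, m))], [B, -S]])

eigs_D = np.linalg.eigvals(np.linalg.solve(P_D, K))
eigs_T = np.linalg.eigvals(np.linalg.solve(P_T, K))
targets = np.array([1.0, 0.5 * (1 + np.sqrt(5)), 0.5 * (1 - np.sqrt(5))])

# Every eigenvalue of P_D^{-1} K lies in {1, (1 +/- sqrt(5))/2}, and
# P_T^{-1} K has the single (defective) eigenvalue 1.
print(sorted(ev.real for ev in eigs_D))
print(sorted(ev.real for ev in eigs_T))
```

Note that since $\mathcal{P}_T^{-1}\mathcal{A}$ is defective, the numerically computed eigenvalues sit in a small cluster around $1$ rather than matching it to machine precision.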
\subsection{Construction of the preconditioner}\label{sec:Preconditioner_Construction} We first examine the system \eqref{GN}, and place this in saddle point form \eqref{SaddlePt} as follows: \begin{eqnarray*} \ A=\left[\begin{array}{ccc} \chi_{\Omega_T}(\sq) & 0 & 0 \\ 0 & \gamma_{c}\chi_{\Omega_T}(\sq) & 0 \\ 0 & 0 & \gamma_{u}\cdot\text{Id} \end{array}\right],\quad\quad{}B=\left[\begin{array}{ccc} \mathcal{L}_{pz} & \mathcal{L}_{pc} & 0 \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} & -\beta\chi_{\partial\Omega}(\sq) \\ \end{array}\right]. \end{eqnarray*} Furthermore, let us decompose the blocks $A$ and $B$ into sub-blocks: \begin{eqnarray*} \ A=\left[\begin{array}{cc} A_s & 0 \\ 0 & A_u \\ \end{array}\right],\quad\quad{}B=\left[\begin{array}{cc} B_s & B_u \\ \end{array}\right], \end{eqnarray*} where \begin{eqnarray*} \ A_s=\left[\begin{array}{cc} \chi_{\Omega_T}(\sq) & 0 \\ 0 & \gamma_{c}\chi_{\Omega_T}(\sq) \\ \end{array}\right],\quad\quad{}B_s=\left[\begin{array}{cc} \mathcal{L}_{pz} & \mathcal{L}_{pc} \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} \\ \end{array}\right],\quad\quad{}B_u=\left[\begin{array}{c} 0 \\ -\beta\chi_{\partial\Omega}(\sq) \\ \end{array}\right]. \end{eqnarray*} In this paper, $A_u$ corresponds to the finite element discretization of the following operators: \begin{eqnarray*} \ A_u\leftarrow\left\{\begin{array}{cl} \gamma_u\cdot\text{Id} & \text{without control constraints}, \\ \gamma_u\cdot\text{Id}+\frac{1}{\varepsilon}G_{\Lambda} & \text{with control constraints}, \\ \end{array}\right. \end{eqnarray*} that is to say the block $A_u$ is altered if we instead consider the matrix \eqref{GN_ControlConstraints} incorporating control constraints. Note that, as the saddle point system is written, the matrix $A$ is not invertible and the Schur complement $S$ therefore does not exist. 
We hence consider a suitable re-ordering of the matrices \eqref{GN} and \eqref{GN_ControlConstraints} to enable us to utilize classical saddle point theory. In particular, we observe that the matrix under consideration may be factorized as follows: \begin{eqnarray*} \ \left[\begin{array}{ccc} A_s & 0 & B_s^\top \\ 0 & A_u & B_u^\top \\ B_s & B_u & 0 \\ \end{array}\right]=\left[\begin{array}{ccc} I & -A_{s}B_{s}^{-1}B_{u}A_u^{-1} & A_{s}B_s^{-1} \\ 0 & I & 0 \\ 0 & 0 & I \\ \end{array}\right]\underbrace{\left[\begin{array}{ccc} 0 & 0 & S_{\angle} \\ 0 & A_u & B_u^\top \\ B_s & B_u & 0 \\ \end{array}\right]}_{\mathcal{P}}, \end{eqnarray*} where identity matrices $I$ are of appropriate dimensions. Note that as $\text{Id}$ corresponds to the identity operator on the continuous level, this will become a finite element mass matrix in the discrete setting. We then take $\mathcal{P}$ to be the foundation of our preconditioner. We define $S_{\angle}$ as the `\emph{pivoted Schur complement}' \cite[Section 3.3]{DPSS} \begin{eqnarray} \ \label{Schur} S_{\angle}=B_s^\top+A_{s}B_{s}^{-1}B_{u}A_{u}^{-1}B_u^\top. \end{eqnarray} We approximate this term within our preconditioner using the `\emph{matching strategy}' devised in \cite{PSW,PW2,PW1}, which aims to capture both terms of the Schur complement within the preconditioner. The approximation reads as follows: \begin{eqnarray} \ \label{Sangle} S_{\angle}\approx\widehat{S}_{\angle}:=\Big(B_s^\top+\frac{1}{\eta}A_s\Big)B_s^{-1}\Big(B_s+\eta{}B_{u}A_u^{-1}B_u^\top\Big). \end{eqnarray} Note that the matrix product $B_s^\top{}B_s^{-1}B_s$ captures the first term $B_s^\top$ of $S_{\angle}$, and $\big(\frac{1}{\eta}A_s\big)B_s^{-1}\big(\eta{}B_{u}A_u^{-1}B_u^\top\big)$ matches exactly the second term $A_{s}B_{s}^{-1}B_{u}A_{u}^{-1}B_u^\top$. 
The positive constant $\eta$ is chosen to `balance' the first and last matrix factors, $B_s^\top+\frac{1}{\eta}A_s$ and $B_s+\eta{}B_{u}A_u^{-1}B_u^\top$, within the Schur complement approximation, so that the two terms in the remainder $S_{\angle}-\widehat{S}_{\angle}$ are approximately of the same norm. Two natural choices for this constant are \begin{eqnarray*} \eta=\sqrt{\frac{\left\|A_s\right\|}{\left\|B_{u}A_u^{-1}B_u^\top\right\|}}\quad\text{or}\quad\eta=\sqrt{\frac{\max(\text{diag}(A_s))}{\max(\text{diag}(B_{u}A_u^{-1}B_u^\top))}}, \end{eqnarray*} with the second such choice much cheaper to compute. Approximately solving for the matrix $B_s+\eta{}B_{u}A_u^{-1}B_u^\top$ is made tractable by the effective approximation of a mass matrix (or a mass matrix plus a positive diagonal matrix) of the form $A_u$ by its diagonal; see \cite[Section 4.1]{PearsonGondzio} and \cite{WathenEigBounds}. Putting all the pieces together, we state our preconditioner \begin{eqnarray*} \widehat{\mathcal{P}}=\left[\begin{array}{ccc} 0 & 0 & \widehat{S}_{\angle} \\ 0 & A_u & B_u^\top \\ B_s & B_u & 0 \\ \end{array}\right], \end{eqnarray*} incorporating the Schur complement approximation above. Due to the re-ordering of the saddle point system that we have undertaken, this is a suitable choice of preconditioner that captures the characteristics of the matrix under consideration. \subsection{Application of the preconditioner}\label{sec:Preconditioner_Application} Applying the inverse of the preconditioner, $\widehat{\mathcal{P}}^{-1}$, as is necessary within an iterative method, therefore requires three main operations: \begin{enumerate} \item \underline{Applying $B_s^{-1}$:} This is equivalent to solving the forward problem, rather than the coupled optimization problem.
In practice this is approached time-step by time-step, using an algebraic or geometric multigrid method, or another suitable scheme, to solve for the matrices arising at each point in time. \item \underline{Applying $A_u^{-1}$:} The matrix $A_u$ is a block diagonal matrix, consisting of boundary mass matrices at each time-step (in the case without control constraints), or boundary mass matrices plus positive semidefinite diagonal matrices (if control constraints are present). In either case, these matrices may be well approximated using Chebyshev semi-iteration \cite{GVI,GVII,WathenRees}, or even using a simple diagonal approximation of a mass matrix \cite{WathenEigBounds}. \item \underline{Applying $\widehat{S}_{\angle}^{-1}$:} Applying the approximation \eqref{Sangle} involves a multiplication by $B_s$, and (approximate) solves for each of $B_s^\top+\frac{1}{\eta}A_s$ and $B_s+\eta{}B_{u}A_u^{-1}B_u^\top$, which may again be approached at each time-step in turn using multigrid or another appropriate method. \end{enumerate} \subsection{Uzawa approximation}\label{sec:Preconditioner_Uzawa} In practice, we make a further modification to the preconditioner $\widehat{\mathcal{P}}$ in order to ensure it is easier to work with on a computer. In more detail, the term $B_s$ in the bottom-left of $\widehat{\mathcal{P}}$, and the terms $B_s^\top+\frac{1}{\eta}A_s$ and $B_s+\eta{}B_{u}A_u^{-1}B_u^\top$ within $\widehat{S}_{\angle}$, contain $2\times2$ block systems which we would like to replace with more convenient approximations, so that we are only required to (approximately) invert one block at a time. To facilitate this, we replace the $2\times2$ block matrices by an inexact Uzawa approximation, with block triangular splitting matrices, where appropriate.
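Solving with such a block triangular splitting matrix amounts to block back-substitution, i.e. one single-block solve per row. A minimal dense sketch, with generic invertible blocks standing in for the $\mathcal{L}_{**}$ operators:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Generic invertible stand-ins for the diagonal and off-diagonal blocks.
L11 = rng.standard_normal((n, n)) + 3.0 * np.eye(n)
L12 = rng.standard_normal((n, n))
L22 = rng.standard_normal((n, n)) + 3.0 * np.eye(n)
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

# Solve [[L11, L12], [0, L22]] [x1; x2] = [b1; b2] by back-substitution:
# only one diagonal block needs to be (approximately) inverted at a time.
x2 = np.linalg.solve(L22, b2)
x1 = np.linalg.solve(L11, b1 - L12 @ x2)

T = np.block([[L11, L12], [np.zeros((n, n)), L22]])
print(np.allclose(T @ np.concatenate([x1, x2]), np.concatenate([b1, b2])))  # True
```

Within the actual preconditioner, the single-block solves would themselves be replaced by multigrid or Chebyshev semi-iteration approximations, rather than the exact solves used in this illustration.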
This leads to our final choice of preconditioner: \begin{eqnarray*} \widehat{\mathcal{P}}_{\text{Uzawa}}=\left[\begin{array}{ccc} 0 & 0 & \Big(B_s^\top+\frac{1}{\eta}A_s\Big)_{\text{Uzawa}}(B_s)_{\text{Uzawa}}^{-1}\Big(B_s+\eta{}B_{u}A_u^{-1}B_u^\top\Big)_{\text{Uzawa}} \\ 0 & A_u & B_u^\top \\ (B_s)_{\text{Uzawa}} & B_u & 0 \\ \end{array}\right], \end{eqnarray*} where $(\cdot)_{\text{Uzawa}}$ denotes the Uzawa approximation of the corresponding matrix. For ease of reproducibility for the reader, we state the splitting matrices below: \begin{align*} (B_s)_{\text{Uzawa}}\rightarrow{}&\left[\begin{array}{cc} \mathcal{L}_{pz} & \mathcal{L}_{pc} \\ 0 & \mathcal{L}_{qc} \\ \end{array}\right], \\ \Big(B_s^{\top}+\frac{1}{\eta}A_s\Big)_{\text{Uzawa}}\rightarrow{}&\left[\begin{array}{cc} \mathcal{L}_{zp}+\frac{1}{\eta}\chi_{\Omega_T}(\sq) & 0 \\ \mathcal{L}_{cp} & \mathcal{L}_{cq}+\frac{\gamma_c}{\eta}\chi_{\Omega_T}(\sq) \\ \end{array}\right], \\ \Big(B_s+\eta{}B_{u}A_u^{-1}B_u^\top\Big)_{\text{Uzawa}}\rightarrow{}&\left[\begin{array}{cc} \mathcal{L}_{pz} & \mathcal{L}_{pc} \\ 0 & \mathcal{L}_{qc}+\eta\beta^2\chi_{\partial\Omega}(\sq)A_u^{-1}\chi_{\partial\Omega}(\sq)^\top \\ \end{array}\right]. \end{align*} Now the linear systems with the diagonal blocks ($\mathcal{L}_{zp}$, $\mathcal{L}_{qc}$, and so on) can be solved directly. Note that it is also possible to annihilate another off-diagonal block instead within the Uzawa approximation. However, we have found that the approximations listed above yield fast convergence in the numerical experiments. \section{Numerical experiments with control constraints}\label{sec:NumEx1} In this section we benchmark the preconditioned Newton method.
For our test problem, the initial distribution of bacterial cells is chosen as a sum of $m_0$ independent Gaussian peaks, \begin{equation} z_0(x,y) = \sum_{i=1}^{m_0} \exp\left(-2560 \cdot \left[(x-x_i)^2 +(y-y_i)^2 \right]\right), \label{eq:z0_full} \end{equation} where the centers $\{x_i,y_i\}$ are chosen randomly on $[0,1]^2$. The desired distribution at the final time $T=1$ is linear, \begin{equation} \widehat z(x,y) = \langle z_0 \rangle \cdot (x+y), \label{eq:zhat} \end{equation} normalized by the initial mass, $$ \langle z_0 \rangle = \int_{[0,1]^2} z_0(x,y)~{\rm d}x{\rm d}y, $$ since the model conserves the normalization of $z$. Both the initial and target concentrations $c$ are zero. The experiments were run in {\scshape matlab} R2017b on one core of a 2.4GHz Intel Xeon E5-2640 CPU. In this section, we set $m_0=50$ and the control constraints $u_-=0$ and $u_+=0.2$, in accordance with \cite{Potschka}. The default regularization parameters are set to $\gamma_u=10^{-3}$ and $\gamma_c=0.5$. The stopping tolerance for the Newton iteration is set to $10^{-4}$. Moreover, we decrease the Moreau--Yosida regularization parameter $\varepsilon$ geometrically from $10^{-1}$ to $10^{-4}$ as the iteration converges. This gives more robust behavior of the Newton method. \begin{figure}\label{fig:full_ttimes} \end{figure} \begin{figure}\label{fig:full_Z} \end{figure} The computational time is shown in Fig. \ref{fig:full_ttimes} (left). We see that it grows cubically under uniform grid refinement, as expected for a three-dimensional (2D space + time) problem. The number of Newton iterations is quite stable with respect to the grid size, ranging from $11$ to $14$ depending on the particular realization of the random initial guess. The transient control signal is shown in Fig. \ref{fig:full_ttimes} (right). We observe that it is confined within the prescribed constraints.
However, this leads to a rather large misfit in the target cell density (Fig. \ref{fig:full_Z}). While the density follows the linear distribution $\widehat z$ correctly in the top right corner of the domain, in the bottom left corner we see an excessive density of bacteria. This shows that controlling only the chemoattractant may be insufficient to force the bacteria to leave a particular area. Lastly in this section, we investigate the performance of the preconditioner proposed in Section \ref{sec:Preconditioner} under variation of the problem parameters. In Table \ref{tab:full_its}, we show the average number of GMRES \cite{gmres} iterations per Newton step for different grid sizes $n$ and regularization parameters $\gamma_u$, $\gamma_c$. We vary only one parameter at a time, while the other two are kept fixed at their default values, $n=64$, $\gamma_u=10^{-3}$, and $\gamma_c=0.5$. The number of iterations grows slightly as the control regularization parameter $\gamma_u$ is decreased, which is expected for a boundary control problem. On the other hand, the preconditioner is reasonably robust with respect to the other parameters, in particular the grid size.
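For reference, the test problem data \eqref{eq:z0_full}--\eqref{eq:zhat} can be sketched as follows. The grid size, peak count, and the simple Riemann-sum quadrature below are illustrative choices for this NumPy sketch, not those of our {\scshape matlab} implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n, m0 = 129, 5   # illustrative grid and peak count (the paper uses m0 = 50)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing='ij')

# Sum of m0 Gaussian peaks with randomly chosen centers, eq. (z0_full).
centers = rng.uniform(0.0, 1.0, (m0, 2))
z0 = sum(np.exp(-2560.0 * ((X - cx) ** 2 + (Y - cy) ** 2)) for cx, cy in centers)

# Initial mass <z0> (simple Riemann-sum quadrature) and the linear target (zhat).
mass = z0.sum() * (x[1] - x[0]) ** 2
z_hat = mass * (X + Y)
```

Each interior peak contributes approximately $\pi/2560$ to the mass, so $\langle z_0\rangle$ scales linearly with $m_0$ up to boundary clipping.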
\begin{table}[htb] \centering \caption{Average number of GMRES iterations per Newton step.} \label{tab:full_its} \begin{tabular}[t]{c|c} $n$ & its \\ \hline 32 & 21.37 \\ 64 & 27.46 \\ 128 & 27.86 \\ \end{tabular} \begin{tabular}[t]{c|c} $\gamma_u$ & its \\ \hline $10^0$ & 6.00 \\ $10^{-1}$ & 9.00 \\ $10^{-2}$ & 15.28 \\ $10^{-3}$ & 27.46 \\ $10^{-4}$ & 46.84 \\ $10^{-5}$ & 69.21 \\ \end{tabular} \begin{tabular}[t]{c|c} $\gamma_c$ & its \\ \hline $0.5$ & 27.46 \\ $10^{-1}$ & 30.55 \\ $10^{-2}$ & 32.11 \\ $10^{-3}$ & 31.77 \\ $10^{-4}$ & 32.67 \\ $10^{-5}$ & 31.57 \\ \end{tabular} \end{table} \section{Low-rank tensor decompositions and algorithms} \label{sec:LowRank} The optimality system \eqref{GN} can result in a huge-scale matrix system when there are many spatial degrees of freedom and time steps. One way to reduce the associated computational burden is to seek an approximate solution in a low-parametric representation. In this paper we apply separation of variables, in particular the Tensor Train (TT) decomposition \cite{osel-tt-2011}. In this section, we introduce the TT decomposition and the algorithm for an efficient TT-structured solution of the optimality equations. Although the TT approximation can have difficulties with the indicator function of the active set of control constraints (see Remark \ref{rem:indicator} below), for the problem without box constraints it yields a very efficient solver. In this section we therefore assume an unconstrained control setting. \subsection{Tensor product discretization and indexing} We assume that the solution functions can be discretized on a structured grid, e.g.
the cell concentration $z({\bf x},t)$ with a $d$-dimensional spatial variable ${\bf x} = (x_1,\ldots,x_d)$ can be approximated by $$ z({\bf x},t) \approx \sum_{i_1,\ldots,i_d, i_{d+1}=1}^{n_1,\ldots,n_d, n_{d+1}} {\bf z}(i_1,\ldots,i_d,i_{d+1}) \phi_{i_1,\ldots,i_d}({\bf x}) \psi_{i_{d+1}}(t), $$ where $\{\phi_{i_1,\ldots,i_d}({\bf x})\}$ is a set of spatial basis functions as introduced in Section \ref{sec:Matrix}, which we now assume to be indexed by $d$ independent variables. In particular, we consider a square domain ${\bf x} \in [0,1]^d$ and the piecewise polylinear basis functions $$ \phi_{i_1,\ldots,i_d}({\bf x}) = \varphi_{i_1}(x_1) \cdots \varphi_{i_d}(x_d). $$ In turn, $\{\psi_{i_{d+1}}(t)\}$ is a set of nodal interpolation functions in time, associated with the uniform time grid $\{t_{i_{d+1}}\}$, with $t_{i_{d+1}} = \tau \cdot i_{d+1}$, $i_{d+1} = 1,\ldots,n_{d+1}$, and $\tau=T/n_{d+1}$. We can see that the discrete coefficients of $z$ can be collected into a $(d+1)$-dimensional \emph{tensor}. Introducing a uniform bound $n \ge n_k$, $k=1,\ldots,d+1$, we can immediately conclude that the tensor ${\bf z}$ has $\mathcal{O}(n^{d+1})$ entries. The computational complexity of solving \eqref{GN} is usually much higher. This explains the sometimes relatively high computing times in the previous section. \emph{Separation} of the discrete variables $i_1,\ldots,i_{d+1}$ can compress the tensor data from the exponential $\mathcal{O}(n^{d+1})$ to a linear volume $\mathcal{O}(dn)$. Yet we can aim for a higher compression ratio. Assuming that the range $n_k$ of an index $i_k$ is factorizable into a set of divisors $n_{k,1}\cdots n_{k,L_k}=n_k$, we can also factorize the index $i_k$ into the corresponding digits, $$ i_k = 1 + \sum_{\ell=1}^{L_k} (i_{k,\ell}-1) \prod_{p=1}^{\ell-1} n_{k,p}, \qquad k=1,\ldots,d+1.
$$ Now the tensor ${\bf z}$ can be enumerated by the elementary digits $i_{k,\ell}$, which we shall denote simply as $i_m$ from now on, for $m=1,\ldots,L = \sum_{k=1}^{d+1} L_k$. Instead of considering ${\bf z}$ as a $(d+1)$-dimensional tensor, we treat it as an $L$-dimensional tensor with elements ${\bf z}(i_1,\ldots,i_L)$, and therefore we now separate the \emph{virtual} indices $i_m$ \cite{tee-tensor-2003}. \subsection{Tensor Train decomposition} As a particular separated approximation, we choose the \emph{Tensor Train} (TT) decomposition \cite{osel-tt-2011}, which is also known as Matrix Product States \cite{PerezGarcia-mps-2007,schollwock-2005} in physics: \begin{equation} {\bf z}(i_1,\ldots,i_L) \approx \sum\limits_{s_1=1}^{r_1} \cdots \sum\limits_{s_{L-1}=1}^{r_{L-1}} z^{(1)}_{s_1}(i_1) z^{(2)}_{s_1,s_2}(i_2) \cdots z^{(L)}_{s_{L-1}}(i_L). \label{eq:tt} \end{equation} The factors $z^{(m)}$ on the right hand side are called \emph{TT blocks}, and the ranges $r_1,\ldots,r_{L-1}$ of the auxiliary summation indices are called \emph{TT ranks}. Notice that the TT blocks are at most $3$-dimensional tensors, of sizes $r_{m-1} \times n_m \times r_m$ (for uniformity, we can let $r_0=r_L=1$). Potentially, we can represent any finite dimensional tensor exactly through \eqref{eq:tt} by choosing large enough TT ranks. For reasons of numerical efficiency, we will of course aim for a (sub-)optimal approximation with the $r_m$ as small as possible, and most importantly much smaller than the original tensor size $n_1\cdots n_{d+1}$. The storage needed for the right hand side of \eqref{eq:tt} is of the order of $\mathcal{O}(L n_m r_m^2)$, where $n_m=n_{k,\ell}$ is also chosen to be much smaller than the original $n_k$.
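A TT decomposition can be constructed by sequential truncated SVDs of the tensor unfoldings (the TT-SVD of \cite{osel-tt-2011}). The following NumPy sketch is not the TT-Toolbox routine; sizes are illustrative. It builds the TT blocks for a tensor that is a sum of two separable terms, so the TT ranks are at most two, and verifies the reconstruction.

```python
import numpy as np

def tt_svd(z, eps=1e-10):
    # Sequential truncated SVDs of the unfoldings (TT-SVD);
    # returns blocks of shape (r_{m-1}, n_m, r_m).
    dims, blocks, r = z.shape, [], 1
    mat = z.reshape(r * dims[0], -1)
    for m in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        blocks.append(U[:, :rank].reshape(r, dims[m], rank))
        mat = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[m + 1], -1)
        r = rank
    blocks.append(mat.reshape(r, dims[-1], 1))
    return blocks

def tt_full(blocks):
    # Contract the TT blocks back into a full tensor (for verification only).
    out = blocks[0]
    for b in blocks[1:]:
        out = np.tensordot(out, b, axes=([-1], [0]))
    return out.reshape([b.shape[1] for b in blocks])

# A tensor with TT ranks at most 2: a sum of two rank-one (separable) terms.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 8))
d, e, f = rng.standard_normal((3, 8))
z = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', d, e, f)
cores = tt_svd(z)
print([blk.shape for blk in cores])    # middle ranks are at most 2
print(np.allclose(tt_full(cores), z))  # True
```

The `tt_full` contraction is only for checking; in actual computations one never forms the full tensor.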
For example, if we restrict the grid sizes to be powers of two, $n_k=2^{L_k}$, the range of each index $i_m$ in \eqref{eq:tt} becomes just $\{1,2\}$, whereas $L$, and hence the storage complexity of the TT format, becomes \emph{logarithmic} in the original tensor size, $L = \log_2(n_1\cdots n_{d+1})$. Due to the minimal non-trivial index range in this case, the TT decomposition \eqref{eq:tt} with $i_m \in \{1,2\}$ was called the \emph{Quantized} TT (QTT) decomposition \cite{khor-qtt-2011}. It was then proved that many examples of vectors \cite{khor-qtt-2011} and matrices \cite{khkaz-lap-2012,khkaz-conv-2013}, arising from the discretization of functions and differential operators, allow low-rank QTT decompositions. Abstracting from the original problem dimensions, we can consider only two data representations: a tensor with the smallest possible ranges ${\bf z}(i_1,\ldots,i_L)$, and a vector of the same data entries: \begin{equation} {\bf z}(i) = {\bf z}(i_1,\ldots,i_L), \quad \mbox{where} \quad i = 1 + \sum_{m=1}^{L} (i_{m}-1) \prod_{p=1}^{m-1} n_p. \label{eq:ten-vec} \end{equation} We need the vector notation for setting the Gauss--Newton equations \eqref{GN} on tensors consistently. Boldface letters (e.g. ${\bf z}$) from now on will denote vectors. We can use the Kronecker product ($\otimes$) to rewrite \eqref{eq:tt} in an equivalent vector form, \begin{equation*} {\bf z} = \sum\limits_{s_1,\ldots,s_{L-1}=1}^{r_1,\ldots,r_{L-1}} z^{(1)}_{s_1} \otimes z^{(2)}_{s_1,s_2} \otimes \cdots \otimes z^{(L)}_{s_{L-1}}. \end{equation*} Of course, we shall never actually compute the Kronecker products in the expansion above, but only store and manipulate individual TT blocks on the right hand side.
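The linearization \eqref{eq:ten-vec} is a little-endian positional number system in the digits $i_m$. A zero-based sketch (the paper's indices are one-based) with its inverse:

```python
import itertools

def to_linear(idx, dims):
    # Little-endian linearization: the first index varies fastest.
    i, stride = 0, 1
    for im, nm in zip(idx, dims):
        i += im * stride
        stride *= nm
    return i

def from_linear(i, dims):
    # Inverse map: peel off one digit per mode.
    idx = []
    for nm in dims:
        idx.append(i % nm)
        i //= nm
    return tuple(idx)

dims = (2, 2, 2, 2, 2, 2)   # QTT-style index ranges for a grid of 2**6 points
assert all(from_linear(to_linear(idx, dims), dims) == idx
           for idx in itertools.product(*map(range, dims)))
print(to_linear((1, 0, 1), (2, 2, 2)))  # 5
```

With all $n_m=2$ this is exactly the binary-digit enumeration underlying the QTT format.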
For example, the matrix-vector product ${\bf y} = A{\bf z}$ with ${\bf z}$ given in \eqref{eq:tt} can be computed efficiently if we can also represent the matrix by a TT decomposition, \begin{equation} A = \sum\limits_{s_1,\ldots,s_{L-1}=1}^{R_1,\ldots,R_{L-1}} A^{(1)}_{s_1} \otimes A^{(2)}_{s_1,s_2} \otimes \cdots \otimes A^{(L)}_{s_{L-1}}. \label{eq:ttm} \end{equation} For example, if the matrix is diagonal and the vector of its diagonal values can be represented by a TT decomposition \eqref{eq:tt}, then the matrix can be written as in \eqref{eq:ttm} with the same TT ranks. There are also less trivial matrices, arising for example in finite element computations, that admit TT decompositions with modest ranks $R_m$ \cite{khkaz-lap-2012,khkaz-conv-2013}. The result ${\bf y}=A{\bf z}$ can then also be written in the TT format and computed block by block. Moreover, a TT decomposition with excessive TT ranks can be efficiently approximated up to a desired accuracy by a decomposition with sub-optimal ranks, using QR and singular value decomposition (SVD) factorizations \cite{osel-tt-2011}, without ever constructing full large tensors. \subsection{Alternating Linear Scheme iteration for solving \eqref{GN}} In addition to the cell concentration $z({\bf x},t)$, we need to represent the other solution components. Since all components are defined on the same domain, we can discretize them using the same basis. The tensors of discrete values therefore have the same sizes. The structure of the problem \eqref{GN} suggests that we approximate them in a shared TT decomposition, the so-called \emph{block} TT format \cite{dkos-eigb-2014}. We denote the aggregated solution \begin{equation*} {\bf y}^\top = \begin{bmatrix} {\bf z}^\top & {\bf c}^\top & {\bf p}^\top & {\bf q}^\top & {\bf u}^\top \end{bmatrix}, \end{equation*} enumerating the components via ${\bf y}_{j}$, $j=1,\ldots,5$.
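The block-by-block matrix-vector product described above can be sketched as follows. The NumPy example uses a rank-one, two-block TT matrix and vector with illustrative sizes, checked against the dense Kronecker product; the resulting TT ranks are the products of the matrix and vector TT ranks.

```python
import numpy as np

def tt_matvec(A_blocks, z_blocks):
    # Block-by-block product; Ab: (R0, n, n, R1), zb: (r0, n, r1).
    y = []
    for Ab, zb in zip(A_blocks, z_blocks):
        t = np.einsum('aijb,cjd->acibd', Ab, zb)
        R0, r0, n, R1, r1 = t.shape
        y.append(t.reshape(R0 * r0, n, R1 * r1))
    return y

def tt_to_vec(blocks):
    # Contract to a full vector (for verification only).
    out = blocks[0]
    for b in blocks[1:]:
        out = np.tensordot(out, b, axes=([-1], [0]))
    return out.ravel()

# Rank-one sanity check: A = A1 (x) A2 acting on z = z1 (x) z2.
rng = np.random.default_rng(1)
n = 4
A1, A2 = rng.standard_normal((2, n, n))
z1, z2 = rng.standard_normal((2, n))
Ab = [A1.reshape(1, n, n, 1), A2.reshape(1, n, n, 1)]
zb = [z1.reshape(1, n, 1), z2.reshape(1, n, 1)]
dense = np.kron(A1, A2) @ np.kron(z1, z2)
print(np.allclose(tt_to_vec(tt_matvec(Ab, zb)), dense))  # True
```

After such a product, the rank-truncation step mentioned in the text (QR plus SVD) would be applied to bring the inflated ranks back down.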
Now we decompose ${\bf y}$ into a TT format with all the same TT blocks except the $m$-th block for some $m=1,\ldots,L$, which carries the enumerator $j$ of the components, \begin{equation} {\bf y}_{j} = \sum_{s_1,\ldots,s_{L-1}=1}^{r_1,\ldots,r_{L-1}} y^{(1)}_{s_1} \otimes \cdots \otimes y^{(m-1)}_{s_{m-2},s_{m-1}} \otimes \widehat y^{(m)}_{s_{m-1},s_{m}}(j) \otimes y^{(m+1)}_{s_{m},s_{m+1}} \otimes \cdots \otimes y^{(L)}_{s_{L-1}}. \label{eq:btt} \end{equation} Moreover, we can switch between the representations \eqref{eq:btt} corresponding to different $m$ (and hence having $j$ in different TT blocks) using the SVD \cite{dkos-eigb-2014}. For example, we can reshape $\widehat y^{(m)}$ into a matrix with elements $$ \widehat Y^{(m)}(s_{m-1},i_m;~j,s_m) = \widehat y^{(m)}_{s_{m-1},s_m}(i_m,j) $$ and compute the truncated SVD $\widehat Y^{(m)} \approx U \Sigma V^\top$. Now we write the left singular vectors $U$ into the $m$-th TT block in place of $\widehat y^{(m)}$, and multiply $\Sigma V^\top$ into the $(m+1)$-th TT block, \begin{align} \label{eq:svd1} y^{(m)}_{s_{m-1},s_m'}(i_m)={}&U(s_{m-1},i_m;~s_m'), \\ \label{eq:svd2} \widehat y^{(m+1)}_{s_m',s_{m+1}}(i_{m+1},j)={}&\sum_{s_m=1}^{r_m}\Sigma V^\top (s_m';~j,s_m) y^{(m+1)}_{s_m,s_{m+1}}(i_{m+1}). \end{align} Note that we have obtained the same representation as \eqref{eq:btt} with $m$ replaced by $m+1$. This process can be continued further, or reversed, and hence the $j$-index can be placed into any TT block. A crucial ingredient for the iterative computation of \eqref{eq:btt} is the \emph{linearity} of the TT format.
Having chosen an $m=1,\ldots,L$, we construct the so-called \emph{frame} matrix, where the TT block $\widehat y^{(m)}$ in \eqref{eq:btt} is replaced by the identity matrix, \begin{equation} Y_{m} = \sum_{s_1,\ldots,s_{m-2}}y^{(1)}_{s_1}\otimes \cdots \otimes y^{(m-1)}_{s_{m-2}} \otimes I_{n_m} \otimes \sum_{s_{m+1},\ldots,s_{L-1}} y^{(m+1)}_{s_{m+1}} \otimes \cdots \otimes y^{(L)}_{s_{L-1}}. \label{eq:frame} \end{equation} If we now treat $\widehat y^{(m)}(j)$ as a vector, we can observe that \begin{equation} {\bf y}_j = Y_{m} \widehat y^{(m)}(j), \label{eq:ttlin} \end{equation} i.e. the frame matrix realises a linear map from the elements of $\widehat y^{(m)}$ to the elements of the whole solution vectors. This motivates an iterative algorithm \cite{holtz-ALS-DMRG-2012}, which was called the Alternating Linear Scheme (ALS): \begin{algorithmic}[1] \For {$\mbox{iter}=0,1,\ldots$ until convergence} \For {$m=1,2,\ldots,L,L-1,\ldots,1$} \State Plug the solution in the form \eqref{eq:ttlin} into the original problem. \State Solve the resulting overdetermined problem for $\widehat y^{(m)}$. \State Prepare the format \eqref{eq:btt} and the frame matrix \eqref{eq:frame} for $m+1$ or $m-1$. \EndFor \EndFor \end{algorithmic} Starting from a low-rank initial guess of the form \eqref{eq:btt}, this algorithm seeks the solution in a low-rank TT format by sweeping through the different TT blocks. However, there might be different ways to resolve the overdetermined problem in Line 4. For the optimality equations of the inverse problem, such as in \eqref{GN}, it was found \cite{bdos-sb-2016,ds-navier-2017} to be efficient to use columns of the frame matrix as a Galerkin basis and project each submatrix of the Karush--Kuhn--Tucker (KKT) system individually.
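The linearity \eqref{eq:ttlin} can be illustrated for a two-block, rank-one TT vector, where the frame matrices are simply Kronecker products with an identity factor in slot $m$ (sizes below are illustrative):

```python
import numpy as np

# Frame-matrix linearity for a two-block, rank-one TT vector
# y(i1, i2) = y1(i1) * y2(i2), flattened with i1 slowest.
rng = np.random.default_rng(2)
n1, n2 = 3, 4
y1, y2 = rng.standard_normal(n1), rng.standard_normal(n2)
y_full = np.kron(y1, y2)

# Replace block m by an identity to obtain the frame matrix Y_m.
Y1 = np.kron(np.eye(n1), y2[:, None])   # maps the first block y1 -> y
Y2 = np.kron(y1[:, None], np.eye(n2))   # maps the second block y2 -> y
print(np.allclose(Y1 @ y1, y_full), np.allclose(Y2 @ y2, y_full))  # True True
```

In the ALS iteration these frame matrices are never formed explicitly either; the projections $Y_m^\top(\cdot)Y_m$ are assembled block by block.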
In our case we notice that the $(3,3)$-block of \eqref{GN} is simply a diagonal matrix in the case where lumped mass matrices are considered, and therefore eliminate the control component from the equations.\footnote{Our derivation is of course valid for any invertible matrix $A_u$; however, we wish to exploit the simplicity of the matrix structure within our solver. When consistent mass matrices are applied, we can well approximate these by their diagonals within a preconditioner; see \cite{WathenEigBounds}.} Specifically, we deduce that $s_u = A_u^{-1} \left({\bf b}_u + \beta\chi_{\partial\Omega}^\top s_q\right)$ and plug this into the fifth row. This gives us a system of 4 equations only. Moreover, instead of using the increments $s_z,s_c,s_p,s_q$, we can rewrite the equations for the new solution components directly: $$ \begin{bmatrix} \chi_{\Omega_T} & 0 & \mathcal{L}_{zp} & \mathcal{L}_{zq} \\ 0 & \gamma_{c}\chi_{\Omega_T} & \mathcal{L}_{cp} & \mathcal{L}_{cq} \\ \mathcal{L}_{pz} & \mathcal{L}_{pc} & 0 & 0 \\ \mathcal{L}_{qz} & \mathcal{L}_{qc} & 0 & -\beta^2\chi_{\partial\Omega}A_u^{-1}\chi_{\partial\Omega}^\top \\ \end{bmatrix} \begin{bmatrix} {\bf z} \\ {\bf c} \\ {\bf p} \\ {\bf q} \end{bmatrix} = \begin{bmatrix} {\bf \widetilde b}_z \\ {\bf \widetilde b}_c \\ {\bf \widetilde b}_p \\ {\bf \widetilde b}_q \end{bmatrix}, $$ where ${\bf \widetilde b}$ is the correspondingly adjusted right hand side. Now we plug in the solutions in the form \eqref{eq:ttlin} (with $j$ now running only from $1$ to $4$), and project each of the previous equations onto $Y_m$.
This gives us a \emph{reduced} system \begin{equation} \begin{bmatrix} \widehat \chi_{\Omega_T} & 0 & \mathcal{\widehat L}_{zp} & \mathcal{\widehat L}_{zq} \\ 0 & \gamma_{c}\widehat \chi_{\Omega_T} & \mathcal{\widehat L}_{cp} & \mathcal{\widehat L}_{cq} \\ \mathcal{\widehat L}_{pz} & \mathcal{\widehat L}_{pc} & 0 & 0 \\ \mathcal{\widehat L}_{qz} & \mathcal{\widehat L}_{qc} & 0 & -\beta^2\widehat \chi^2_{\partial\Omega} \\ \end{bmatrix} \widehat y^{(m)} = \begin{bmatrix} Y_m^\top {\bf \widetilde b}_z \\ Y_m^\top {\bf \widetilde b}_c \\ Y_m^\top {\bf \widetilde b}_p \\ Y_m^\top {\bf \widetilde b}_q \end{bmatrix}, \label{eq:GN-red} \end{equation} where $\widehat \chi_{\Omega_T} = Y_m^\top \chi_{\Omega_T} Y_m$, $\mathcal{\widehat L}_{**} = Y_m^\top \mathcal{L}_{**} Y_m$ (where ``$**$'' stands for ``$zp$'', ``$zq$'', ``$cp$'', and so on), and $\widehat \chi^2_{\partial\Omega} = Y_m^\top \chi_{\partial\Omega}A_u^{-1}\chi_{\partial\Omega}^\top Y_m$ are the projected matrices. Each submatrix is of size $r_{m-1}n_m r_m \times r_{m-1}n_m r_m$ (recall that we can choose $n_m=2$), and hence \eqref{eq:GN-red} is easy to solve. Moreover, the singular value decomposition in \eqref{eq:svd1}--\eqref{eq:svd2} maintains the orthogonality of the frame matrices $Y_m$ automatically in the course of the alternating iterations, provided that the initial guess is given with this property. This makes the projected submatrices well conditioned if the original matrices were so, which in turn makes the entire matrix in \eqref{eq:GN-red} invertible. We highlight that the preconditioner developed in Section \ref{sec:Preconditioner} can also be used for solving the system \eqref{eq:GN-red}. \subsection{Construction of matrices in the TT format} In the course of the Newton iteration, we need to reconstruct the matrices in \eqref{GN} (and consequently in \eqref{eq:GN-red}) using the new solution.
Assume that we need to construct an abstract bilinear form of a nonlinear transformation $f$ of the solution, \begin{equation} \mathcal{L}_{\mathcal{B}} = \int f(z,c,p,q) \nabla^{p} \phi_i \cdot \nabla^q \phi_j~{\rm d}{\bf x}, \label{eq:bilin-abs} \end{equation} where $p,q \in \{0,1\}$ are the differentiation orders, and $\phi_i,\phi_j$ are the basis functions. Instead of the exact functions $z,c,p,q$, we work with the tensors of their values, ${\bf z},{\bf c},{\bf p},{\bf q}$. The corresponding values of $f$ can also be collected into a tensor ${\bf f}$ of the same size, and the original function can be approximated in the same basis, i.e. \begin{equation*} f(z({\bf x}),c({\bf x}),p({\bf x}),q({\bf x})) \approx \sum_{i_1,\ldots,i_d} {\bf f}(i_1,\ldots,i_d) \phi_{i_1,\ldots,i_d}({\bf x}). \end{equation*} Now the computation of \eqref{eq:bilin-abs} involves computing the analytical triple products \begin{equation} \mathcal{H}(i,j,k) = \int \phi_k \nabla^{p} \phi_i \cdot \nabla^q \phi_j~{\rm d}{\bf x}, \qquad i,j,k=1,\ldots,(n_1\cdots n_d), \label{eq:triple} \end{equation} and summing them up with the values of ${\bf f}$, \begin{equation} \mathcal{L}_{\mathcal{B}}(i,j) = \sum_{k=1}^{n_1\cdots n_d} \mathcal{H}(i,j,k) {\bf f}(k). \label{eq:bilin-triple} \end{equation} Notice that we assume the basis functions can be enumerated by $d$ independent indices, i.e. $i$ is equivalent to $(i_1,\ldots,i_L)$ through \eqref{eq:ten-vec}, and similarly for $j$ and $k$. The triple-product elements \eqref{eq:triple} therefore admit a TT decomposition (or even a single Kronecker-product term) similar to \eqref{eq:ttm}. Now, if the tensor ${\bf f}$ can also be approximated in the TT format \eqref{eq:tt}, the bilinear form \eqref{eq:bilin-abs} can be represented in this format, with the TT ranks proportional (or equal) to those of ${\bf f}$.
Moreover, the sum in \eqref{eq:bilin-triple} factorizes into individual sums over $k_1,\ldots,k_L$, which can be implemented efficiently block by block. It remains to compute a TT approximation of ${\bf f}$. From the previous Newton iteration we are given the TT representation \eqref{eq:btt} of ${\bf z},{\bf c},{\bf p},{\bf q}$. Hence we can rapidly evaluate any element of the solution components, and subsequently the corresponding value of $f$. In order to construct a TT approximation to ${\bf f}$ using only a few evaluations of $f$, we use the TT-Cross algorithm \cite{ot-ttcross-2010}. This is similar to the Alternating Linear Scheme outlined above, except that at each step it draws $r_{m-1} r_m$ fibers of the tensor values in the $m$-th direction, in order to populate the $m$-th TT block and prepare the optimized fibers for the next step. In total it evaluates $\mathcal{O}(Lr^2)$ elements of the tensor, which is feasible under our assumption of small TT ranks. More robust and rank-adaptive generalizations of this algorithm have followed \cite{mo-rectmaxvol-2018,sav-qott-2014,so-dmrgi-2011proc}. \begin{Rem} \label{rem:indicator} Forming the diagonal of the indicator matrix $G_{\Lambda}$ in \eqref{GN_ControlConstraints} might also seem a task for the TT-Cross algorithm. However, it is likely to perform poorly in this setting, for two reasons. Firstly, if the discontinuity of a function, e.g. $\max\{0,u-u_+\}$, is not aligned with the coordinate axes, the corresponding TT approximation requires very large TT ranks. This can be seen already in the two-dimensional case: a triangular matrix with all ones in one of the triangles has full rank. Secondly, the TT-Cross algorithm is likely to overlook any part of the active set which is not covered by the initial (e.g. random) set of samples. In order to adapt the sampling fibers, the cross methods require a low discrepancy between adjacent tensor elements, which is not the case for $G_{\Lambda}$.
For this reason, we apply the TT approach only to the case of the unconstrained control. \end{Rem} \section{Numerical experiments with the low-rank approximations}\label{sec:NumEx2} In this section, we benchmark the TT algorithm and compare it to the solver with the full vector representation. The initial distribution of bacterial cells $z_0$ and the desired state $\widehat{z}$ are chosen as in \eqref{eq:z0_full} and \eqref{eq:zhat}, with initial and target concentrations for the chemoattractant $c$ set to zero. In this section the model is solved with an unconstrained control $u$, and final time $T=1$. For the TT computations we used the TT-Toolbox implementation (see https://github.com/oseledets/TT-Toolbox). \subsection{Benchmarking of full and low-rank solvers} First, we compare CPU times of the original scheme that stores full vectors with those of the approximate TT solver (see Fig. \ref{fig:ttimes_n}). We fix $m_0=3$ randomly positioned Gaussian peaks in the initial distribution $z_0$. Since the particular ranks and numbers of iterations depend on the choice of $z_0$, we average the results over $8$ realizations of $z_0$, for each value of $n$. \begin{figure}\label{fig:ttimes_n} \end{figure} The cost of the full-format solver grows slightly faster than cubically, which is expected for a three-dimensional problem. This concerns both the CPU time and the memory. In particular, we could not run the full solver for $n>128$ due to the memory limitations. On the other hand, the TT solver can proceed to much finer grids with lower time and memory footprint. \subsection{Discretization and TT approximation errors} In order to justify the use of very fine grids (up to $n=512$), let us estimate the discretization errors. In Fig. 
\ref{fig:err_n_eps} (left), we vary the grid levels and plot the relative difference between the solutions on the grids with $n$ and $2n$ points in each direction, $$ \mbox{error}_f(n) = \frac{\left|\langle f^2_{n}\rangle - \langle f_{2n}^2\rangle\right|}{\langle f_{2n}^2\rangle}, \qquad \langle f^2_n \rangle = f_n(T)^\top M f_n(T), $$ where $f_n(T)$ is the final-time snapshot of the solution component $f \in \{z,c,p,q\}$, computed on the grid with $n$ nodes in each variable, and $M$ is the mass matrix in space. The number of initial peaks $m_0=3$ and their positions are fixed in these experiments. \begin{figure}\label{fig:err_n_eps} \end{figure} We see that the error decays at first order with respect to $n$, as expected from the implicit Euler scheme, for all quantities. Since this decay is rather slow, at least $256$ points in each direction are necessary to achieve an accuracy of $1\%$ in $q$, and hence in the control $u$. The truncated singular value decomposition in the TT algorithm aims to introduce the same average amount of error into all solution components. However, the relative error in each component may differ from $\varepsilon$, depending on the norm scale and other factors of the algorithm, such as the local system solver. In Fig. \ref{fig:err_n_eps} (right) we investigate the relative error in all components $f\in\{z,c,p,q\}$, $$ \mbox{error}_f(\varepsilon) = \frac{\|f_{\varepsilon}-f_{10^{-8}}\|_F}{\|f_{10^{-8}}\|_F}, $$ where $f_{\varepsilon}$ is the solution vector computed with the TT approximation threshold $\varepsilon$. We see that, on average, the errors decay linearly with $\varepsilon$, as expected. \subsection{Number of peaks in the initial distribution} Since the initial distribution of cells \eqref{eq:z0_full} consists of several randomly located Gaussian peaks, the particular positions of the peaks may influence the performance of the methods. In Fig.
\ref{fig:ttimes_m0} we investigate CPU times and TT ranks in the TT solver versus the number of peaks $m_0$ and their positions. The plots show means plus/minus standard deviations of the times and ranks with respect to the randomization of peak locations. \begin{figure}\label{fig:ttimes_m0} \end{figure} As expected, the complexity grows with the number of peaks, and for $m_0 \sim 20$ the CPU time approaches the estimated time of the full solver (should one have a sufficient amount of memory to run the latter). For a smaller number of peaks the TT solver is more efficient. Moreover, the small relative dispersion shows that it is quite insensitive to the particular realization of the initial distribution. \begin{figure}\label{fig:lr_U} \end{figure} \begin{figure}\label{fig:lr_Z} \end{figure} The initial cell density for $m_0=10$ peaks and the transient control signal are shown in Fig. \ref{fig:lr_U} (left and right, respectively), while the final density and the misfit are shown in Fig. \ref{fig:lr_Z}. The unconstrained control takes negative values in the bottom left corner of the domain. However, this gives a more accurate fit of the cell density to the desired distribution than the constrained control.
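The behavior of the $\varepsilon$-truncation error studied above can be illustrated with a plain truncated SVD, a minimal stand-in for the TT rounding procedure (the test matrix, its singular value decay, and the thresholds below are our illustrative assumptions, not the paper's data or code):

```python
import numpy as np

# Minimal sketch of eps-truncation, a stand-in for TT rounding:
# discard trailing singular triples whose combined Frobenius mass
# is below eps * ||A||_F.  Matrix and thresholds are illustrative.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
V, _ = np.linalg.qr(rng.standard_normal((200, 200)))
s = 2.0 ** -np.arange(200.0)          # rapidly decaying spectrum
A = (U * s) @ V.T

def truncate(A, eps):
    """Truncated SVD of A with relative Frobenius accuracy eps."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]   # tail[k] = ||s[k:]||_2
    keep = int(np.sum(tail > eps * np.linalg.norm(s)))
    return (U[:, :keep] * s[:keep]) @ Vt[:keep]

for eps in [1e-2, 1e-4, 1e-6]:
    err = np.linalg.norm(A - truncate(A, eps)) / np.linalg.norm(A)
    print(eps, err)                    # err stays below eps by construction
```

Since $\|A\|_F$ equals the Euclidean norm of the singular values, the discarded tail is bounded by $\varepsilon\|A\|_F$, which mirrors the observed linear decay of the error with $\varepsilon$.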
The nonlinearity of the problem can easily be tackled via cross approximation methods, provided that the functions are smooth. The low-rank decompositions are not very suitable for discontinuous functions, such as the indicator of an active set, arising in the problem of finding a constrained control or state. However, in the unconstrained case the low-rank algorithms are much faster and need much less memory than the straightforward solution of the Gauss--Newton equations. Depending on the ``complexity'' of the transient solution (and hence the tensor ranks), we can achieve a speedup of more than an order of magnitude. The importance of the box constraints depends on the particular model. For example, if we can only control the inflow of the chemoattractant, it is reasonable to request a nonnegative control. However, if the laboratory setup allows one also to remove the chemoattractant, or to add a repellent, a negative control becomes physically realizable. This can provide better control of the cell population, whereas the low-rank numerical algorithms allow a fast simulation of the required profile of the attractant/repellent, even on a low-performance desktop. \textbf{Acknowledgements.}~~SD and JWP gratefully acknowledge support from the Engineering and Physical Sciences Research Council (UK) Fellowships EP/M019004/1 and EP/M018857/2, respectively. \end{document}
\begin{document} \title{Classical Limits and Contextuality in a Scenario of Multiple Observers} \author{Roberto D. Baldij\~ao} \affiliation{Institute of Physics ``Gleb Wataghin'', State University of Campinas 13083-859, Campinas, SP, Brazil } \affiliation{Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, Boltzmanngasse 3, 1090 Vienna, Austria} \author{Marcelo Terra Cunha} \affiliation{Instituto de Matem\'atica, Estat\'istica e Computa\c{c}\~{a}o Cient\'ifica, Universidade Estadual de Campinas, Campinas, SP, Brazil.} \date{\today} \begin{abstract} Contextuality is regarded as a non-classical feature, challenging our everyday intuition; quantum contextuality is currently seen as a resource for many applications in quantum computation, being responsible for quantum advantage over classical analogs. In our work, we adapt the $N$-cycle scenarios with odd $N$ to multiple independent observers who measure the system sequentially. We analyze the possibility of violating the inequalities as a function of the number of observers and under different measurement protocols. We then reinterpret the results as an open quantum system where the environment is divided into fragments. In this context, the results show the emergence of non-contextuality in such a setting, bringing the quantum behavior closer to our classical experience. We then compare such emergence of non-contextuality with that of {objectivity under the `Environment as a Witness paradigm'.} \end{abstract} \maketitle \section{Introduction} \label{sec: Intro} Quantum Contextuality, i.e.~the failure of explaining the probabilities obtained by Quantum Theory (QT) as a result of unknown well-defined answers for every question allowed by a specified scenario, is a remarkable non-classical trait. Indeed, in Classical Theories (CTs), non-contextuality is a natural assumption \cite{Spekkens1,SpekkensTalk} and these theories are very successful in explaining our everyday experience.
This incompatibility was first shown as a logical impossibility between any theory with such a feature and the predictions of QT \cite{KS}. {The first and strongest form of quantum contextuality is known as state-independent contextuality, when a collection of measurements satisfying some constraints generates correlations which cannot be explained under the non-contextuality assumption for any quantum state \cite{KS,Peres1990_Square,Mermin1990_Square}. A weaker but very interesting form of contextuality happens when the contradiction appears only for some quantum states \cite{Bell64,KCBS}. This is called state-dependent (quantum) contextuality and it is the form of contextuality we use in this Article.} A very practical way of witnessing contextuality is using non-contextuality inequalities \cite{KCBS,JA02}, { which are always obeyed by any non-contextual theory, but possibly violated by contextual theories, such as QT. Suitable choices of quantum states and measurements lead to violations of these inequalities, as verified in the laboratory \cite{MauricioeCanas,Ahrens2013,KirchmairBlattC,LeupoldPRL,GIL}. In the lab, one experimentalist chooses randomly among the possible contexts of the scenario, or among the terms involved in the inequality, and makes the measurements, recording the data for future processing.} One interesting open question concerns the robustness of quantum contextuality to sequential \emph{independently chosen} measurements on the same system, made by different observers. Analogous questions were discussed for scenarios regarding steering \cite{Sasmal2018} and non-locality \cite{Silva2015,Das2019}, but a similar study on contextuality-like setups is still lacking. It is necessary to understand the limits that quantum correlations obey in a setting that does not necessarily respect non-disturbance (or non-signaling in a non-locality setting) \cite{Das2019,Silva2015}, since the sequential measurements are not necessarily compatible.
This has both a foundational importance and a practical one. In practical terms this non-classical feature is a resource for quantum computers to overcome classical computation \cite{Karanjai2018,HowardCompQC}, {and} analyzing how robust it is for sequential measurements might be interesting for some algorithms that use an analogous sequential setup \cite{HowardCompQC,MBQC-Contextuality,MBQC,MBQC-Review}. Foundationally, it touches the question of how we, observers, create some classical picture of what is usually called \emph{reality} via a process within QT; this classical limit is attained here by the absence of violation of non-contextuality inequalities, which is consistent with a probabilistic view of the world based on the lack of precise information about some idealized underlying reality. In this work, we discuss the robustness of contextuality for independent observers using the odd $N$-cycle scenarios \cite{N-ciclo}. Those scenarios are interesting since they are the building blocks for contextuality \cite{Cabello2010,CSW}. Moreover, they cannot be mapped into any bipartite scenario, revealing an essential picture of contextuality. {We introduce and study three different measurement protocols {which,} despite being equivalent for testing contextuality in the usual single-observer implementation, behave quite differently in this multiplayer setting.} {Analyzing the best realization for the single-observer case,} we obtain the best quantum results for each player in each protocol. {These results show} that contextuality quickly disappears in most cases. For one specific protocol, however, violations of non-contextuality inequalities can be found by more than one observer, which can be viewed as a way of protecting quantum contextuality. In this sense, the generic result is consistent with the emergence of a classical world from the quantum realm, while the peculiar exception is a guide towards the quantum engineering and protection of contextual machines.
{We move to interpret these results in the light of an interaction of the system and a fragmented environment, called by Zurek \textit{et al.} the paradigm of `Environment as a Witness' or the `Redundancy program' \cite{ZUREKREVIEW2007,Ollivier_QD}. {In such an approach, the environment is understood as a channel broadcasting information about the system. Following this paradigm, some processes to explain how objectivity might emerge were proposed: {Quantum Darwinism (QD) \cite{ZUREKQDEnvariance,ZUREK2003,Ollivier_QD}, State Spectrum Broadcasting (SSB) \cite{RHorodecki_SSB} and, finally, Strong Quantum Darwinism (SQD) -- the latter providing sufficient and necessary conditions for the emergence of objectivity \cite{Le2019_SQD}.} These processes differ in important aspects \cite{RHorodecki_SSB,Le2019_SQD}, but they do agree on the concept of objectivity: it is attained when independent observers obtain the same information about the system. One very beautiful consequence of this concept is that, when attained, the information about that specific observable of the system is redundantly available in the environment, allowing anyone to learn about it without disturbing the system. Another very appropriate name for this kind of `objectivity' through agreement of independent agents is \emph{inter-subjectivity} \cite{Mironowicz2017}. From this perspective, our results say that, for all practical purposes, odd $N$-cycle \emph{non-contextuality} emerges from the quantum realm, even in this well-designed environment.}} This indicates that non-contextuality is a classical feature that can emerge in open quantum systems. This paper is organized as follows: in Sec. \ref{Sec: ScenariosandMore} we review some important features of the $N$-cycle scenarios and introduce three possible protocols a single player can use to test for contextuality in these scenarios. In Sec. \ref{Sec: Approach} we develop the multiplayer scenario and present the results. In Sec.
\ref{Sec: QD} we interpret these results as emergence of non-contextuality under system-environment interactions, discussing its similarities and differences with the emergence of {objectivity under the `Environment as a Witness' paradigm}. Conclusion and open questions are discussed in Sec. \ref{Sec: Conclusions}. The actual proofs of the results are detailed in the appendices. \section{Scenarios, Inequalities and Protocols} \label{Sec: ScenariosandMore} Kochen-Specker contextuality scenarios are constituted by a set of available observables (or measurements), the compatibility restrictions among them and the set of outcomes \cite{CSW,TerraBarbaraBook}. The compatibility restrictions can be depicted in a compatibility graph, where each vertex represents an observable and two vertices are connected if and only if they represent compatible observables; sets where all vertices are pairwise connected are called contexts. In the $N$-cycle scenarios, we have $N$ observables and the compatibility graph is a cycle of length $N$ \cite{N-ciclo}. In other words, denoting by $A_i$ the observables, the maximal contexts are the elements in the set $\{\{A_0,A_{1}\},...,\{A_{N-1},A_{0}\}\}$. The observables are dichotomic with outcome events $o_i\in\{-1,1\}$ and $i\in\{0,1,...,N-1\}$ labelling each measurement. It is usual to consider the correlations given by theories that respect the \emph{non-disturbance principle}, which states that marginalizing joint probabilities of outcomes of compatible measurements always reproduces the probabilities for the restricted context: $\sum_{o_j}p(o_i,o_j|i,j) = p(o_i|i)$ for every measurement $j$ compatible with $i$. The non-trivial inequalities defining the classical polytope for odd $N$, after relabelling outcomes if necessary, can be written as \begin{equation} \sum_{i=0}^{N-1} \langle A_iA_{i+1}\rangle \, \stackrel{\rm NC}{\geq}\, 2-N, \label{Ineq: Correlators} \end{equation} with sums in the indexes made modulo $N$ \cite{N-ciclo}.
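The bound $2-N$ in Ineq.~\eqref{Ineq: Correlators} can be checked directly by brute force over all deterministic non-contextual assignments (a small illustrative script of ours, not part of the original analysis):

```python
from itertools import product

# Brute-force check of the non-contextual bound in the correlator
# inequality: assign a fixed o_i in {-1,+1} to every observable A_i
# and minimize the cyclic sum of <A_i A_{i+1}> over all 2^N assignments.
def nc_minimum(N):
    return min(
        sum(o[i] * o[(i + 1) % N] for i in range(N))
        for o in product([-1, 1], repeat=N)
    )

for N in [5, 7, 9]:
    print(N, nc_minimum(N))   # equals 2 - N: the bound is tight
```

For odd $N$ the alternating assignment $(+1,-1,+1,\dots)$ anti-correlates all but one neighboring pair, which is exactly the statement that at least two neighbors must agree.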
Considering our choice of outcomes, the average value is given by { \begin{equation} \langle A_iA_{i+1}\rangle = p(o_i=o_{i+1}|i,{i+1}) - p(o_i\neq o_{i+1}|i, {i+1}). \end{equation}} The superscript ``NC'' in Ineq.~\eqref{Ineq: Correlators} recalls that the inequality was derived assuming non-contextuality. This means that the deterministic outcome assigned to a measurement does not depend on the context in which it is measured, e.g., if one measures $\{A_0,A_1\}$ or $\{A_1,A_2\}$, a non-contextual theory must assign the same $o_1$ irrespective of the context, and so on. Inequality \eqref{Ineq: Correlators} just means that if we try to assign dichotomic values to all observables non-contextually, at least two neighbors must agree. Another way of saying this is through inequality \begin{subequations} \begin{equation} \sum_{i=0}^{N-1}p(o_i = o_{i+1}|i,i+1) \stackrel{\rm NC}{\geq} 1, \label{Ineq: ProbEqual} \end{equation} which, through the normalization condition $p(o_i=o_{i+1}|i,i+1) + p(o_i\neq o_{i+1}|i,i+1) = 1$, is equivalent to \begin{equation} \sum_{i=0}^{N-1}p(o_i\neq o_{i+1}|i,i+1) \stackrel{\rm NC}{\leq} N-1. \label{Ineq: ProbDif} \end{equation} \label{Ineq: Prob} \end{subequations} Now, if we assume \emph{exclusiveness} \cite{Cabello2010}, that is, assume that $p(-1,-1|i,i+1)=0$ for all contexts, then the joint measurements become three-outcome measurements: the event $(-1,-1|i,i\pm1)$ never happens. Then, $\sum_{o}p(-1,o|i,j)$ reduces to $p(-1,+1|i,j)$ and non-disturbance reads $p(-1,+1|i,j) \equiv p(-1|i)$ for all measurements $A_j$ compatible with $A_i$. So, in a theory obeying non-disturbance $+$ exclusiveness in this dichotomic scenario, we can identify, {counterfactually}, the events $(-1,+1|i,i+1) \leftrightarrow (+1,-1|i-1,i) \leftrightarrow (-1|i)$. This allows us to write: \begin{equation} p(o_i\neq o_{i+1}|i,i+1) = p(-1|i)+p(-1|i+1).
\label{Eq: NDplusExcl} \end{equation} Let us now assume a generalized probabilistic theory framework to describe how probabilities are obtained \cite{Barret2007,Janotta2014}. In a scenario with non-disturbance, exclusiveness, and dichotomic measurements, one can associate to the event `measuring $A_i$ and obtaining output $o_i=-1$ (resp.~$o_i=1$)' the effect $a_i$ ($\neg {a_i}$) and to the joint measurement of $\{A_i,A_{i+1}\}$ the set of three exclusive effects $\{a_i \wedge \neg a_{i+1}, \neg a_i \wedge a_{i+1}, \neg a_i \wedge \neg a_{i+1}\}$. The probabilities of the respective events are given by the expected values of such effects\footnote{The identification of events that led to Eq. \eqref{Eq: NDplusExcl} is translated to $a_i\wedge \neg a_{i+1} \leftrightarrow a_i\wedge \neg a_{i-1} \leftrightarrow a_i$. This last identification between effects is also related to the stronger principle of the `Gleason Property' \cite{Cabello2010,Gleason}. See Appendix \ref{Sec: NDandGP} for a quick exposition of the relation and differences between this property, non-disturbance and their equivalent consequences in this scenario.}. In terms of effects, we can rewrite Eq. \eqref{Eq: NDplusExcl}: \begin{align} &p(o_i\neq o_{i+1}|i,i+1) \nonumber \\ &= p(-1|i)+p(-1|i+1) =\left\langle a_i \right\rangle + \left\langle a_{i+1} \right\rangle. \label{Eq: NDplusExclEffects} \end{align} The probabilities of disagreement can then be written only in terms of $\left\{\langle a_i \rangle\right\}$. The agreement probabilities, because of exclusiveness, depend only on the expected values of $\{\neg a_i \wedge \neg a_{i+1}\}$. For a lighter notation, and for historical reasons \cite{Wright}, we will denote $b_i$ instead of $\neg a_i \wedge \neg a_{i+1}$ (see also Fig. \ref{Fig: HyperGraph}). Then, Ineqs.
\eqref{Ineq: Prob} turn (with a small twist in ordering) into: \begin{subequations} \begin{align} \alpha = \sum_{i=0}^{N-1}\left\langle a_i\right\rangle \,&\stackrel{\rm NC}{\leq} \,\, \frac{N-1}{2}, \label{Ineq: ProbA}\\ \beta = \sum_{i=0}^{N-1}\left\langle b_i \right\rangle \, &\stackrel{\rm NC}{\geq}\,\, 1. \label{Ineq: ProbB} \end{align} \label{Ineq: ProbAeB} \end{subequations} These are the Inequalities that will be used from now on (Inequality \eqref{Ineq: ProbA} for $N=5$ is the KCBS inequality \cite{KCBS}). It is important to notice that violation of one inequality of \eqref{Ineq: ProbAeB} leads to violation of the other, and so they are equivalent with respect to witnessing contextuality. Another important comment is that, surprisingly, Inequality \eqref{Ineq: ProbA} shows explicitly that we can look at the results of a measurement of each $A_i$ alone, instead of a whole context $\{A_i,A_{i+1}\}$, under the validity of the exposed assumptions. Of course, experimentally, it is important to show that such conditions are met \cite{Cabello2010}. Fig. \ref{Fig: HyperGraph} exhibits the pentagon scenario under the exclusiveness assumption, which gives rise to the trichotomic measurements with effects $\left( a_i, b_i, a_{i+1} \right)$. \begin{figure} \caption{Hypergraph representing a $5$-cycle realization in a theory obeying $p(-1,-1|i,i+1)=0$. Hyperedges represent each context $\{A_i,A_{i+1} \label{Fig: HyperGraph} \end{figure} The trichotomic measurements $\left( a_i, b_i, a_{i+1} \right)$ together with Inequalities \eqref{Ineq: ProbAeB} suggest three different protocols for testing for contextuality in the $N$-cycle obeying the exclusiveness hypothesis.
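The non-contextual bound $(N-1)/2$ in Ineq.~\eqref{Ineq: ProbA} can also be recovered combinatorially: in a deterministic non-contextual assignment each $\langle a_i\rangle\in\{0,1\}$, and exclusiveness forbids two cyclically adjacent $1$'s, so the maximum of $\alpha$ is the independence number of the $N$-cycle. The following short script (ours, purely illustrative) verifies this:

```python
from itertools import product

# NC bound on alpha: maximize sum(a_i) over 0/1 assignments with no two
# cyclically adjacent a_i both equal to 1 (exclusiveness).  The maximum
# is the independence number of the N-cycle, (N - 1)/2 for odd N.
def nc_alpha_max(N):
    return max(
        sum(a)
        for a in product([0, 1], repeat=N)
        if all(not (a[i] and a[(i + 1) % N]) for i in range(N))
    )

for N in [5, 7, 9]:
    print(N, nc_alpha_max(N))   # -> (N - 1) // 2
```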
For each value of $i$, the first protocol specifies the complete measurement of this hyperedge, while two other protocols come from two natural {coarse-grainings}/dichotomizations, i.e.: \begin{enumerate} \item $\mathcal{M}_i = \{a_i,b_i,a_{i+1}\}$; suitable for both Inequalities; \item $\mathcal{M}^a_i=\{a_i,\neg a_i\}$, suggested by Ineq. \eqref{Ineq: ProbA}; \item $\mathcal{M}^b_i=\{b_i,\neg b_i\}$, suggested by Ineq. \eqref{Ineq: ProbB}. \end{enumerate} Protocol $1$ is motivated by the usual contextuality scenario, where joint or sequential measurements of the context allow one to recover separately the results for both observables in that context. However, as Inequalities \eqref{Ineq: ProbAeB} show, this can carry an excess of information: under the assumption $p(-1,-1|i,i+1)=0$, one (surprisingly) might know only the occurrences of $a_i$ or $b_i$, which are less invasive measurements (used in several experimental tests of similar inequalities \cite{KirchmairBlattC,MauricioeCanas,Ahrens2013}). In a typical setup with a single observer this makes no difference (since after measuring one context the system is prepared again for another round). This is not the case for sequential observers, as we shall explain soon. Protocol $2$ is designed to evaluate the quantity $\alpha$ and test contextuality through Ineq.~\eqref{Ineq: ProbA}, while protocol $3$ focuses on $\beta$ and Inequality \eqref{Ineq: ProbB}. Since our motivation is to consider the robustness of quantum contextuality, it is important to look closely at quantum realizations of the odd $N$ scenarios.
The maximum quantum violations for all odd $N\geq 5$ can be realized in a Hilbert space of dimension $d=3$, with measurements defined by $A_i = \mathcal{I} - 2\ket{a_i}\bra{a_i}$, where $\mathcal{I}$ is the identity and, with an appropriate basis choice, the vectors $\ket{a_i}$ are \small \begin{equation} \ket{a_i} =\mathcal{K}\left(\cos{\left(\frac{i\pi(N-1)}{N}\right)},\sin{\left(\frac{i\pi(N-1)}{N}\right)},\sqrt{\cos{\left(\frac{\pi}{N}\right)}}\right)^t \nonumber, \label{Eq: Projectors} \end{equation} \normalsize with $t$ meaning transposition, and $\mathcal{K} =1/\sqrt{(1+\cos(\pi/N))}$ for normalization \cite{N-ciclo}. The vectors $\ket{b_i}$ are orthogonal to $\{\ket{a_i},\ket{a_{i+1}}\}$. The important point here is the symmetry obeyed by the vectors $\{\ket{a_i}\}$: they form a regular polygon in a plane orthogonal to the axis $(0,0,1)^t$. The state that reaches the maximum violations is $\ket{\psi_{handle}} = (0,0,1)^t$, symmetric with respect to the $\{\ket{a_i}\}$. The form above for the vectors implies $\braket{a_i}{a_{i\pm1}} = 0$, obeying the compatibility and exclusiveness constraints required by the scenario. \section{Multiplayers Sequential Approach} \label{Sec: Approach} Usual tests of contextuality involve only one ``player''. Even if many experimentalists access the same system, they do it as a coordinated team. The multiplayer sequential approach (MPSA) that we study here goes the opposite way: the same quantum system will be tested by different players. The players agree on which protocol to use, but they independently choose the measurement to implement without any knowledge related to other players' measurements or results. {By comparison, if an experiment is drawn to test inequality \eqref{Ineq: ProbA} in a given $N$-cycle scenario, usually one considers identical preparations of a system; at each run, the experimentalist chooses $i$, applies the measurement $\mathcal{M}^a_i$, collects the result and discards the system.
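The symmetric realization quoted above is easy to verify numerically; the following short check (ours, for the KCBS case $N=5$) confirms the cyclic orthogonality and the maximal single-player value $\alpha_Q=\sqrt{5}$:

```python
import numpy as np

# Verify the d = 3 realization of the odd N-cycle for N = 5 (KCBS):
# neighbouring vectors |a_i>, |a_{i+1}> are orthogonal, and the handle
# state gives alpha_Q = sqrt(5) > (N - 1)/2 = 2.
N = 5
K = 1.0 / np.sqrt(1.0 + np.cos(np.pi / N))
theta = np.pi * (N - 1) / N
a = np.array([
    K * np.array([np.cos(i * theta),
                  np.sin(i * theta),
                  np.sqrt(np.cos(np.pi / N))])
    for i in range(N)
])
psi = np.array([0.0, 0.0, 1.0])        # |psi_handle>

for i in range(N):                     # compatibility + exclusiveness
    assert abs(a[i] @ a[(i + 1) % N]) < 1e-12

alpha_Q = sum(float(a[i] @ psi) ** 2 for i in range(N))
print(alpha_Q)                         # sqrt(5) ~ 2.2360679...
```

Each overlap satisfies $|\braket{a_i}{\psi_{handle}}|^2 = \cos(\pi/N)/(1+\cos(\pi/N))$, which for $N=5$ equals $1/\sqrt{5}$, hence $\alpha_Q = \sqrt{5}$.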
In the MPSA, given a prepared system, the first player chooses $i_1$, applies the measurement $\mathcal{M}^a_{i_1}$, collects the result and leaves the system for the next player, who independently chooses $i_2$ and so on.} {The question we focus on is whether the $k$-th player will be able to witness contextuality under these rules. We will see that the answer depends strongly on the choice of the protocol to be used and of the inequality to be tested.} {More precisely,} the game to analyze the resistance of quantum contextuality under sequential independent players is formulated as follows (see Fig. \ref{Fig: Game}). Fix an $N$-cycle scenario, an Inequality of \eqref{Ineq: ProbAeB} (which all players will evaluate), and a related protocol. Then, (i) an initial state of dimension $d=3$ is prepared; (ii) an order of access of each player to $\mathcal{S}$ is followed; (iii) each player chooses one of the possible quantum projective measurements and executes it on $\mathcal{S}$. Steps (i) to (iii) (called henceforth a run) are repeated for players to collect their individual data to estimate the expression $\alpha_Q^k$ or $\beta_Q^k$, where the superscript $k$ labels each player and $Q$ reminds that we are looking at quantum realizations. Player $k$ wins the game if capable of violating the chosen inequality. Players cannot communicate, being ignorant with respect to others' measurements and outcomes\footnote{{This is the sense in which we relax the non-disturbance condition: the measurements and inequality evaluation of each player obey non-disturbance, but what was done with the state by previous players will generally not be compatible with what will be done in the future.}}. The game is represented in Fig.~\ref{Fig: Game}; in the following, we analyze the behavior of the observables involved in the inequalities in each of the protocols. \begin{figure} \caption{Scheme for the multiplayers game.
a) First player makes a measurement $i_1$ on state $\rho$, obtaining his outcome $o_{i_1} \label{Fig: Game} \end{figure} For protocol $1$, it is useful to define the probability vector $\vec{P}_{i_k}$: each entry gives the probability of obtaining the corresponding outcome of the measurement $\mathcal{M}_{i_k}$, with $i_k\in\{0,...,N-1\}$ labelling the choice of the $k$-th observer. For instance, suppose $i_k=3$; then $\vec{P}_{i_k=3}=(p(a_3|3),p(b_3|3),p(a_4|3))^t$. With $(\vec{P}_{i_k})_{i_k}$ at hand, each observer can calculate either $\alpha_Q$ or $\beta_Q$ by choosing the relevant entries, i.e., using the relation \begin{subequations} \begin{equation} \alpha_Q^k = \sum_{i_k}\langle v_{\alpha},\vec{P}_{i_k}\rangle = \langle v_{\alpha},\sum_{i_k}\vec{P}_{i_k}\rangle, \label{Eq: alphaProtocolOne} \end{equation} \begin{equation} \beta_Q^k = \sum_{i_k}\langle v_{\beta},\vec{P}_{i_k}\rangle = \langle v_{\beta},\sum_{i_k}\vec{P}_{i_k}\rangle, \label{Eq: betaProtocolOne} \end{equation} \label{Eq: IProtocolOne} \end{subequations} where $\langle \cdot,\cdot \rangle$ is the usual Euclidean scalar product, $v_{\beta}^{t} = (0,1,0)$ and $v_{\alpha}^{t} = (1/2,0,1/2)$, see \ref{Sec: Protocol1}. Since protocol $1$ is composed of complete measurements, each of its elements completely determines the post-measurement state. Hence the dynamics under this protocol takes the form of a Markovian process on the post-measured states (details in Sec. \ref{Sec: Protocol1}). Considering that each player chooses uniformly which measurement to perform and that the same order of players is used in every run, it is possible to show that \begin{align} \sum_{i_k} \vec{P}_{i_k} = \sum_{i_{k-1}}\mathbb{M}_N\vec{P}_{i_{k-1}} = \left(\mathbb{M}_N\right)^{k-1} \sum_{i_1}\vec{P}_{i_1}, \label{Eq: Markov} \end{align} where $\mathbb{M}_N$ is a bistochastic matrix. Given the quantum realization described above, $\mathbb{M}_N$ depends only on the $N$-cycle scenario being tested (and thus is fixed).
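The mechanism behind the $N/3$ asymptotics in Eq.~\eqref{Eq: AsymptoticMarkov} is a general property of bistochastic chains, which a toy iteration illustrates (the $3\times 3$ matrix below is an arbitrary irreducible bistochastic example of ours, not the actual $\mathbb{M}_N$):

```python
import numpy as np

# Powers of an irreducible, aperiodic bistochastic matrix drive any
# probability vector to the uniform distribution.  This 3x3 matrix is
# illustrative, NOT the paper's M_N.
M = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])        # rows and columns both sum to 1
P = np.array([1.0, 0.0, 0.0])          # arbitrary initial distribution
for _ in range(100):
    P = M @ P
print(P)                               # ~ (1/3, 1/3, 1/3)

# Contracting v_alpha = (1/2, 0, 1/2) against the uniform limit,
# summed over the N measurement choices, gives the N/3 limit.
N = 9
v_alpha = np.array([0.5, 0.0, 0.5])
print(N * v_alpha @ P)                 # -> N/3 = 3.0 (up to rounding)
```

The same calculation with $v_\beta=(0,1,0)$ yields the identical limit, since every entry of the uniform vector is $1/3$.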
It could happen that the initial state allowing for the best violation for the $k$-th observer would depend on $k$. However, Eq.~\eqref{Eq: Markov}, together with the form of $\mathbb{M}_N$, shows that the initial quantum state which reaches the maximum violation for the first player maximizes (or minimizes) the value of $\alpha_Q^k$ ($\beta_Q^k$) for the $k$-th observer (see Sec. \ref{Sec: Protocol1} for details). Eq.~\eqref{Eq: Markov} also permits one to obtain all the values $(\alpha^k_Q)_k$ or $(\beta_Q^k)_k$ and also to calculate the asymptotic limit. Because $\mathbb{M}_N$ represents a regular (and so, irreducible) Markov process for every odd $N$, its stationary vector $\vec{P}^*$ is unique \cite{DurrettStochasticBook}. Since $\mathbb{M}_N$ is bistochastic, this is given by the uniform distribution $\vec{P}^*_i = 1/3$, which implies \begin{equation} \lim_{k \to \infty} \alpha_Q^k =\lim_{k \to \infty} \beta_Q^k= \frac{N}{3}. \label{Eq: AsymptoticMarkov} \end{equation} Analyzing $\alpha^k_Q$ and $\beta_Q^k$ for each $k$ via Eq. \eqref{Eq: IProtocolOne}, we see that there is no violation already for the second observer, for all $N$ (see Table \ref{Table1} and Fig. \ref{Fig: ViolationProtocols}). This is a rather extreme behavior, leading to the impossibility of attesting contextuality after measurement on $\mathcal{S}$ by just one player. This can be considered as a consequence of the high level of disturbance of these measurements, since they completely destroy coherences in the measured basis. It is natural to ask if this radical emergence of no-violation also happens for the other protocols, which preserve coherence in a $2$-dimensional subspace of the Hilbert space. For protocols $2$ and $3$, the post-measurement state cannot be completely determined by a previous measurement.
Remarkably, using the symmetries of the odd $N$-cycle quantum realizations (defined by the $C_{Nv}$ point group \cite{InuiTanabeOnoderaBook}), it is possible to obtain a relation between $\alpha_Q^k$ and $\alpha_Q^{k-1}$, as well as for $\beta_Q^k$ (see \ref{Sec: Protocols2and3}). Explicitly: \begin{subequations} \begin{align} \alpha_Q^k &= C_N\alpha_Q^{k-1} +c_N, \label{Eq: IneqProtocol2}\\ \beta_Q^k &= B_N\beta_Q^{k-1}+b_N, \label{Eq: IneqProtocol3} \end{align} \label{Eq: IneqProtocols2e3} \end{subequations} with $B_N,b_N,C_N,c_N$ fixed for each $N$. With Eqs. \eqref{Eq: IneqProtocols2e3}, it is possible to see again that the initial state allowing for maximum violations for $k=1$, namely $\ket{\psi_{\text{handle}}}$, reaches the highest violations for any $k$. With the form of Eqs. \eqref{Eq: IneqProtocols2e3} it is possible to calculate the quantities for each player and to obtain the asymptotic limit analytically, using the symmetries and the dependence of $B_N\, (C_N)$ and $b_N\,(c_N)$ on $N$ (see \ref{Sec: Protocols2and3}). This gives \begin{equation} \lim_{k \to \infty}\alpha_Q^k= \lim_{k \to \infty} \beta_Q^k=\frac{N}{3}, \label{Eq: AsymptoticNonMarkov} \end{equation} which again means no violation of any of the inequalities and shows that the dynamics imposed by the game leads to a limit considerably above (below) the non-contextual bound (equal to the asymptotic limit for protocol $1$). The results presented above are depicted in Fig. \ref{Fig: ViolationProtocols} for the case $N=9$. Other cases are presented in Table \ref{Table1}. \begin{figure} \caption{a) minimum value of $\beta_Q^k$ for the $k$-th player, for protocols $1$ (black) and $3$ (red). b) maximum value of $\alpha_Q^k$ for the $k$-th player for measurement protocols $1$ (black) and $2$ (red).
Initial state is $\ket{\psi_{handle} \label{Fig: ViolationProtocols} \end{figure} The striking difference between protocols $2$ and $3$ shows that coherence is not the only ingredient necessary to extend the survival of contextuality. This difference must be a consequence of the orthogonality in the $\{\ket{a_i}\}$ set: if a player obtains outcome $a_i$, it is automatically forbidden for the next one to obtain $a_{i+1}$ or $a_{i-1}$, while the $\ket{b}$ vectors are less restrictive, since obtaining an outcome $b_i$ implies (with high probability) less disturbance of the incoming state. We can understand this disappearance of the violations as a consequence of the degradation of the state of the system in each step. Since the inequalities in consideration are state-dependent, this modification of the state makes it less capable of violating the inequalities, until it goes under (above) the classical bound. The average limit of $N/3$ for $k\rightarrow \infty$ is due to the fact that the average state in this limit is in fact the maximally mixed one (see \ref{Sec: AssympState}). It is noteworthy that independence of players is crucial, since \emph{collective strategies can guarantee that all players win}. For instance, if all players agree on which measurements to make or post-select the data to keep only those not perturbed by preceding players, (maximal) violation is always reachable. \section{System-Environment Picture and Emergent Classicality} \label{Sec: QD} \subsection{Open Quantum System Interpretation} One valid analysis of those results is obtained by using an open quantum system perspective. Since the system interacts with the players, they can also be interpreted as an environment, causing the system to decohere, to approach a classical description and to lose contextuality. However, it is much more informative to interpret such interaction under the so-called `Redundancy program' or `Environment as a Witness' paradigm \cite{ZUREKREVIEW2007,Ollivier_QD}.
Usually in this paradigm, the environment consists of several fragments; independent observers can then learn something about the central system, $\mathcal{S}$, by measuring disjoint sets of such fragments. {Taking this paradigm into our game}, each player plays the role of a fragment of the environment monitoring $\mathcal{S}$. A related situation would be to consider a collisional approach to open quantum systems \cite{Ziman05-2}, but with strong interaction rather than weak. {Within this picture, the system first interacts with a fragment $\mathcal{E}_1$. This interaction is capable of establishing the right correlations, transferring information about the monitored observable -- $A_{i_1}$, say -- to the environmental subsystem. After that, the decohered system goes on to the next interaction which, in its turn, establishes new correlations with the next environmental subsystem, $\mathcal{E}_2$. See Fig. \ref{Fig: Interaction}.} \begin{figure} \caption{{We can interpret each measurement on the sequential setup as a strong interaction in a collisional approach. In each interaction (e.g. with $\mathcal{E} \label{Fig: Interaction} \end{figure} A slight modification in the game makes this interpretation {even} stronger. In this new version, in each run the $K$ players access the system in a random order, unknown to them. Each player would then possess an average of the values of $\alpha^k$ ($\beta^k$), given by the uniform distribution of the possible results, $\bar{\alpha}_K = (1/K)\sum \alpha^k$ (similarly for $\beta$), which is now the same quantity for every player `in the environment'. In this case, the answer to the question `does the statistics of outcomes of measurements accept a non-contextual model?' would have a unique, collective, answer.
To understand which answer this would be, we show in Table \ref{Table1}, for some values of $N$ and for each protocol, the maximum number of players $K_{\text{max}}$ -- i.e., the `size of the environment' -- that still would violate the relevant inequality. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{$K_{\text{max}}$} \\ \hline & \multicolumn{3}{c|}{Predef. Order} & \multicolumn{3}{c|}{Unif. Distr.} \\ \hline $N$ & $M(\alpha , \beta) $ & $M^a(\alpha)$ & $M^b(\beta)$ & $M(\alpha , \beta)$ & $M^a(\alpha)$ & $M^b(\beta)$ \\ \hline 5 & 1 & 1 & 2 & 1 & 2 & 4 \\ 7 & 1 & 1 & 3 & 1 & 1 & 6 \\ 9 & 1 & 1 & 4 & 1 & 1 & 8 \\ 11 & 1 & 1 & 5 & 1 & 1 & 9 \\ 13 & 1 & 1 & 5 & 1 & 1 & 11 \\ 15 & 1 & 1 & 6 & 1 & 1 & 12 \\ 17 & 1 & 1 & 7 & 1 & 1 & 14 \\ 19 & 1 & 1 & 8 & 1 & 1 & 16 \\ \hline \end{tabular} \caption{Number of players that can witness contextuality ($K_{\text{max}}$) for the different protocols in both versions regarding step (ii): (left) the order is fixed; (right) the order is drawn uniformly at random among the players. } \label{Table1} \end{table} We can see that $K_{\text{max}}$ is always higher for protocol $3$ and for the randomized-access version when compared with the other possibilities, as a consequence of the smaller disturbance caused by the measurements in this protocol. Even so, only quite a small environment can witness contextuality for these values of $N$. So, our results show that \emph{contextuality is hidden for a usual-sized environment}, if $N$ is not enormously large. Even in this case, the asymptotic limit shows that there is always a big enough environment for which no violation occurs. {One could say that our conclusion is false for enormously high values of $N$ and a fixed usual-sized environment}. Since $K_{\text{max}}$ grows with $N$ for protocol $3$, in principle, a very well designed environment could witness $N$-cycle contextuality for these values of $N$.
However, the measurements involved in such high values of $N$ for $N$-cycle inequalities would require enormous precision to be performed and distinguished. We can then conclude that witnessing violation of $N$-cycle non-contextuality inequalities by having access to fragments of an (already very carefully designed) environment is very difficult, if not impossible for all practical purposes. {\subsection{Comparison with emergence of objectivity}} {The `Redundancy' paradigm is the stage where emergence of objectivity might be obtained. So, obtaining emergence of non-contextuality\footnote{Actually, obtaining emergence of the impossibility of attesting contextuality would be more precise, but also more cumbersome.} under this same setup calls for an analysis of the relation between our results and this different emergence of classicality. Under such a paradigm, three processes were suggested to be responsible for objectivity: QD, SSB, and SQD. Historically, the process of Quantum Darwinism (QD) was the first one to be proposed \cite{ZUREKQDEnvariance,ZUREK2003}. Further development has replaced QD by the State Spectrum Broadcast (SSB) structure \cite{RHorodecki_SSB} or, more recently, by Strong Quantum Darwinism (SQD) \cite{Le2019_SQD}. Let us concisely state the main features of these processes and then see how our results compare with the classicality emerging in them.} In system-environment processes where QD successfully takes place, the interaction both decoheres the system $\mathcal{S}$ in a preferred basis -- the pointer basis -- and broadcasts the information about the projection on that basis to several independent fragments of the environment\footnote{This might not always be the case in nature; for those cases a broader notion of POVM pointer states can apply \cite{BRANDAO,Beny}.}.
This allows independent observers to agree about this information even if they have access only to disjoint fragments of the environment, without directly disturbing the state of the system. {Since this information is redundantly stored in the sub-environments, having access to bigger fragments makes (almost) no difference, as long as there is some missing part. This gives rise to the so-called \emph{classical plateau} in the mutual information plot \cite{ZUREKREVIEW2007}: (almost) irrespective of the number of environmental fragments, $\#f\mathcal{E}$, the mutual information $I(\mathcal{S}:f\mathcal{E})$ is approximately the entropy of $\mathcal{S}$.} This whole process should be seen, according to Zurek et al.'s proposal \cite{ZUREKQDEnvariance,ZUREK2003,Ollivier_QD}, as recovering objectivity inside QT: independent observers would agree on classical information obtained about a system, without further disturbing it. The classical plateau is considered the signature that objectivity has indeed emerged. {However, the classical plateau in the mutual information plot was shown insufficient to attest objectivity in the proposed sense. Indeed, references \cite{RHorodecki_SSB,Pleasance2017,Le2018_QDandObjectivity} have shown that this condition can be obtained by rather non-objective states. In these states, a considerable part of the mutual information is composed of quantum rather than classical correlations. This gave rise to another proposition: objectivity is attained when the combined state of system and environmental fragments assumes the SSB structure \cite{RHorodecki_SSB}. In states with such structure, the spectrum of a pointer observable must be encoded into the fragments, and no correlations among environmental subsystems are allowed besides those established through the pointer states of $\mathcal{S}$. This structure was indeed shown \emph{sufficient} for objectivity \cite{RHorodecki_SSB}.
Recently, it was shown that one can arrive at \emph{necessary and sufficient} conditions for objectivity by requiring weaker conditions than SSB, through Strong Quantum Darwinism \cite{Le2019_SQD}. This process allows for extra correlations between environmental fragments, as long as they are not quantum correlations, i.e., as long as there is zero discord. All of those processes vary in their success at signaling inter-subjectivity, but they are all based on the same notion of emergent objectivity: information about a pointer basis, selected by the interaction, is spread to independent environmental fragments. In all of those processes, when successful, redundancy -- seen as agreement on the information gathered by independent observers -- brings classicality.} {In the case of our game, there are two main differences from these processes: here we consider several runs for players to collect their data, and the fragments of the environment try to monitor incompatible observables. Interestingly, our results show that these independent players still see classicality emerge in this program: they would, generically, be unable to attest contextuality that was initially available. This emergence of a different classicality -- non-contextuality instead of objectivity of pointer observables -- can occur even when those processes are \emph{not} successful; in other words, an important quantum feature, quantum contextuality, can be lost even if objectivity does not prevail. This happens because these processes also require the selection of a pointer basis and, in each run of our game, this fails to occur since the observables monitored by each environment fragment are usually not jointly compatible.
For instance, if one player monitors the projections $\{\ket{a_0}\bra{a_0}, \mathcal{I}-\ket{a_0}\bra{a_0}\}$, this sets a selection of classical information about $\mathcal{S}$ that might differ from the previous or the next player's, if they do not choose projections commuting with $\ket{a_0}\bra{a_0}$. This implies that after the $i$-th interaction, correlations between $\mathcal{S}$ and the previous $(i-1)$ fragments are weakened, and there is no well-defined \emph{collective} classical information about $\mathcal{S}$ being broadcast to the fragments, in each run.} This means that there is no objectivity in this case -- the observables being monitored are not the same to begin with, so there is no reason to expect the probabilities of each `answer' to be the same. However, there is still emergence of non-contextuality: we can ask the independent observers, after several runs, whether the probabilities they infer about $\mathcal{S}$ allow them to violate the inequality or not. As discussed above, the typical answer is `no', and since this will happen for all $N$-cycle inequalities, it means that the players can agree on a non-contextual model leading to the frequencies that they recorded. So, non-contextuality appears in such a `Redundancy program' even in a situation where objectivity does not emerge. As mentioned already, if we took a different approach, allowing the players to combine strategies instead of acting independently, (maximal) violation could be obtained -- e.g., if every player made the same measurement in each run (every fragment would monitor the same observable). This means that this emergence of non-contextuality could be frustrated if some of the players were allowed to cooperate. In this system-environment interpretation, we can say that if we establish the right correlations between the environment fragments, this kind of classicality might fail to appear.
This is similar to what happens with objectivity, which might be hindered if the right correlations are {engineered} between the environmental systems \cite{Ciampini2018}, {a rather unlikely situation}. {Interestingly, this collective approach to the game could be exactly the case where objectivity of pointer observables would emerge \emph{in each run}, with correct information about a unique measurement spread out through the whole environment.} {In other words, objectivity and non-contextuality are shown to be independent in this game, with the former being related to consistency of outcomes in each run and the latter to the possibility of classically explaining the probabilities inferred after many runs.} \section{Conclusions} \label{Sec: Conclusions} In this work, we address the behaviour of quantum contextuality under strong measurements from sequential observers, in the odd $N$-cycle scenario. We focus on independent players for two reasons: they better mimic the situation where unknown effects act on the system, and cooperative players could act as one team, recovering the usual single-player behaviour. We make a separate analysis for each of the three protocols, which are equivalent in the usual single-observer setup, showing that they behave differently in this new setting. Results point to protocol $3$ as the one that allows contextuality to survive for the most players, thus being the best for protecting contextuality. Since contextuality is a striking non-classical feature, it is important to understand classical limits in which non-contextuality emerges. It is fair to say that any classical limit which allows for contextuality is not a fully classical limit. By interpreting the game results in an environment-system language, we obtained $N$-cycle non-contextuality as an emergent property in open quantum system approaches and compared it to {emergence of objectivity under} the `Environment as Witness' paradigm.
In other words, the impossibility of attesting contextuality emerges for most players when many \emph{independent} observers, with no collective strategies, play the game. The redundant conclusion among players is that it is \emph{not} possible to witness quantum contextuality. This work also opens questions and directions for future research. First, it is valuable to study variations of this game that can be even more closely related to usual classical limits; for example, considering interactions between players and $\mathcal{S}$ in place of measurements, leading to an approach that is an interplay between collisional models and QD \cite{Rau63,Ciampini2018,Ziman05,Ziman05-2,Lorenzo17}. This scenario is also a good stage to consider weak measurements, as done for non-locality \cite{Silva2015}. It is essential to study the response of other forms of contextuality, especially \emph{state-independent} contextuality, since our approach is manifestly state-dependent -- a state-recycling approach already shows that state-independent contextuality might be more robust to the dynamics described here \cite{Wajs2016-StateRecycling}. If the kind of behaviour obtained in the present work is not present in that type of contextuality, this is interesting for quantum foundations and computation; if non-contextuality still emerges in this setting, it would be interesting to understand why, since the `state-degradation' interpretation is not possible anymore. \begin{acknowledgments} RDB acknowledges funding by S\~{a}o Paulo Research Foundation - FAPESP, grants no. 2016/24162-8 and no. 2019/02221-0. Authors thank the Brazilian agencies CNPq and CAPES for financial support. This work is part of the Brazilian National Institute for Science and Technology in Quantum Information. We thank Felippe Barbosa, J\'{a}nos Bergou, Raphael Drumond, Rafael Rabelo and Ana Bel\'{e}n-Sainz for fruitful discussions. Important clarifications were made thanks to anonymous Referees.
\end{acknowledgments} \appendix \onecolumngrid \section{{Non-disturbance and `Gleason property'}} \label{Sec: NDandGP} In this appendix we give a short exposition of some fundamental principles mentioned in the main text: non-disturbance (ND) and the `Gleason property' (GP). The ND principle can be stated as follows: a theory obeys the ND principle if the marginalization of the joint distribution does not depend on the marginalized measurement. Mathematically, a simplified version, considering joint measurements of only two compatible measurements, can be written as \begin{equation} p(o_i|i) = \sum_{o_j} p(o_i,o_j|i,j) = \sum_{o_{j'}} p(o_i,o_{j'}|i,j'), \end{equation} for all $A_j,A_{j'}$ compatible with $A_i$. The more general form assumes that similar restrictions are valid for any number of compatible measurements and any marginalization in a given context \cite{TerraBarbaraBook}. This is a way of saying that, if measurements are compatible, there should be a way of performing them such that they do not disturb each other's results. This can be seen as a non-signaling-like principle where compatibility does not necessarily arise from spatial separation. If we assume ND and exclusiveness, i.e. $p(-1,-1|i,j)=0$, in a dichotomic scenario such as the $N$-cycles, one has \begin{align} & p(-1|i)=p(-1,+1|i,i+1) + \cancel{p(-1,-1|i,i+1)}=p(+1,-1|i-1,i)+\cancel{p(-1,-1|i-1,i)} \,\,\forall \,i\nonumber\\ \Rightarrow & p(-1|i)=p(-1,+1|i,i+1) =p(+1,-1|i-1,i) \,\,\forall\,i.
\label{Eq: ExclusivenessOpEq} \end{align} {What Equation \eqref{Eq: ExclusivenessOpEq} says is that the events $(-1|i)$, $(-1,+1|i,i+1)$ and $(+1,-1|i-1,i)$ are \emph{operationally equivalent}: any probability assigned to one of these events must be assigned to the others for the probabilistic model to respect ND and exclusiveness.} This allows us to identify the events $(-1|i), (-1,+1|i,i+1)\text{ and } (+1,-1|i-1,i)$, and this identification allows us to take Inequalities \eqref{Ineq: ProbDif} and \eqref{Ineq: ProbEqual} to the forms \begin{subequations} \begin{equation} \sum_i p(-1|i)\leq \frac{N-2}{2}, \label{Ineq: pND-1} \end{equation} \begin{equation} \sum_i p(+1,+1|i,i+1)\geq 1. \label{Ineq: pND+1+1} \end{equation} \end{subequations} under the assumption of non-contextuality as described in the main text. {We can visualize such an identification procedure of the operationally equivalent events as identifying vertices in a hypergraph where each vertex is an event and hyperedges circle those vertices which are outcomes of a measurement. } \begin{figure} \caption{a) Two hyperedges defining joint measurements with their initially four outcomes. b) ND says that the probabilities assigned to the upper (lower) events on either hyperedge must sum up to the same value. This can be used together with the normalization condition for each hyperedge to obtain that the probabilities assigned to the events in the green hyperedges must sum up to one. c) Exclusiveness essentially allows us to take $(-1,-1|i,j)$ out of consideration, implying the operational equivalence of these two vertices and the outcome of the measurement of $A_0$ alone. d) The operational equivalence allows us to identify these events as one vertex and we end up with this new hypergraph for these contexts.} \label{Fig: HypergraphConstruction} \end{figure} Note that, in the above discussion, \emph{no hypotheses were made on how these probabilities arise}: how states are represented, how measurements act on them, etc.
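In quantum theory these operational equivalences can be verified directly for sequential projective measurements. The sketch below does so numerically for $N=5$, assuming a standard KCBS-type three-dimensional realization with adjacent vectors orthogonal (the explicit vector construction is an illustrative assumption, not taken from the text above):

```python
import numpy as np

# Standard KCBS-type realization of the odd N-cycle (illustrative assumption):
# unit vectors a_i on a cone about the z axis, adjacent ones orthogonal.
N = 5
c = np.cos(np.pi / N)
th = np.arccos(np.sqrt(c / (1 + c)))   # cone half-angle
ph = np.pi * (N - 1) / N               # azimuthal step
a = np.array([[np.sin(th) * np.cos(i * ph),
               np.sin(th) * np.sin(i * ph),
               np.cos(th)] for i in range(N)])

rng = np.random.default_rng(7)
psi = rng.normal(size=3)
psi /= np.linalg.norm(psi)             # arbitrary pure state

for i in range(N):
    nxt, prv = (i + 1) % N, (i - 1) % N
    p_marg = (a[i] @ psi) ** 2                               # p(-1|i)
    # A_i first (outcome -1), then A_{i+1} (outcome +1):
    p_next = (a[i] @ psi) ** 2 * (1 - (a[nxt] @ a[i]) ** 2)  # p(-1,+1|i,i+1)
    # A_{i-1} first (outcome +1: project on the complement), then A_i:
    Q = np.eye(3) - np.outer(a[prv], a[prv])
    p_prev = (a[i] @ (Q @ psi)) ** 2                         # p(+1,-1|i-1,i)
    assert np.allclose([p_next, p_prev], p_marg)
```

The three probabilities coincide for every $i$ and every pure state, exactly as Eq. \eqref{Eq: ExclusivenessOpEq} demands, because $\braket{a_i}{a_{i\pm1}}=0$.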
However, one can also consider some assumptions on the mechanism that leads to such distributions. Since we analyze and compare different measurement protocols, it is also important in this work to consider such additional structure for general theories, the proper language for contextuality scenarios, and only then specialize it to QT. A broad and interesting framework is the one given by generalized probabilistic theories (GPTs) \cite{Barret2007,Janotta2014}. In this framework, one usually considers that propositions about physical systems can be represented by linear functionals called effects, acting on the set of states. Most importantly, effects are in a one-to-one relation with equivalence classes of propositions/events; that is, if two outcomes are attributed the same probability when acting on an initial state (for all initial states), these outcomes are represented by the same effect \cite{Janotta2014}. The effects can then be combined to form complete measurements $\{e_i\}$, summing up to a special functional $u$ that gives probability one to all normalized states, $\sum e_i=u$. All this structure, which considers (equivalence classes of) propositions to obtain probabilities, irrespective of the rest of the measurement, implies that the effects cannot depend on the others with which they are combined to give a measurement. This is what is called the `Gleason Property' \cite{Cabello2010}. This applies to our case directly: since we can identify $(-1|i)$, $(-1,+1|i,i+1)$ and $(+1,-1|i-1,i)$, these outcomes \emph{are represented by the same effect, namely $a_i$}, in any probabilistic model obeying GP -- in particular, in the usual approach to GPTs. This means that a joint measurement $A_iA_{i+1}$ for measurements $A_i=\{a_i,\neg a_i\}$ must be made of $a_i$ itself, $a_{i+1}$ and a third effect, which we denote by $b_i$.
In other words, a joint measurement $A_0A_1$ that would initially be made of effects $\{a_0\wedge a_1,a_0\wedge\neg a_1,\neg a_0\wedge a_1, \neg a_0\wedge \neg a_1\}$ is adapted, given the operational equivalence brought by exclusiveness and ND, by identifying $a_0\leftrightarrow a_0\wedge\neg a_1$, $a_1\leftrightarrow\neg a_0\wedge a_1$ and $a_0\wedge a_1\leftrightarrow 0$. Finally, we just rename $b_0\equiv \neg a_0\wedge \neg a_1$. This identification can be made in every context to get to inequalities \eqref{Ineq: ProbAeB} and to adapt straightforwardly the hypergraph in Figure \ref{Fig: HypergraphConstruction}, where vertices are effects and hyperedges show those that form a measurement. When considering all the contexts, one gets a hypergraph like the one in Fig. \ref{Fig: HyperGraph} for $N=5$. \section{Protocol number 1} \label{Sec: Protocol1} In protocol number $1$, the measurements are given by the set of complete projective measurements $\{\ket{a_i}\bra{a_i},\ket{b_i}\bra{b_i},\ket{a_{i+1}}\bra{a_{i+1}}\}_i$, where $i\in\{0,...,N-1\}$ denotes the $N$ possible choices and the sets of outcomes are $\{a_i,b_i,a_{i+1}\}_i$ \footnote{It is important to stress that this notation is related to the GP assumption. It is used in denoting equivalence between the \emph{events} $a_{i_k+1}|i_k = a_{i_k+1}|i_{k+1}$, and is consistent with the use of the same projector in both measurements within Quantum Theory.}. We can define the vector $\vec{P}_{i_k}$ for the $k$-th observer ($i_k$ labeling the choice of measurement for this player) as: \begin{align} \vec{P}_{i_k} =\left(\begin{array}{c} p(a_{i_k}|i_k)\\ p(b_{i_k}|i_k)\\ p(a_{i_k+1}|i_k)\end{array}\right), \end{align} where the entries define probabilities for the different outcomes of the measurement.
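As a concrete check (again assuming the standard KCBS-type realization; taking $\ket{b_i}$ as the cross product of the consecutive $\ket{a_i}$, $\ket{a_{i+1}}$ is our illustrative choice), each triple $\{\ket{a_i},\ket{b_i},\ket{a_{i+1}}\}$ is indeed an orthonormal basis, and the first player's vectors $\vec{P}_{i_1}$ for the handle state reproduce the known maximal value $\sum_i p(a_i|i)=\sqrt{5}$ for $N=5$:

```python
import numpy as np

N = 5
c = np.cos(np.pi / N)
th = np.arccos(np.sqrt(c / (1 + c)))
ph = np.pi * (N - 1) / N
a = np.array([[np.sin(th) * np.cos(i * ph),
               np.sin(th) * np.sin(i * ph),
               np.cos(th)] for i in range(N)])
b = np.array([np.cross(a[i], a[(i + 1) % N]) for i in range(N)])  # |b_i>

psi = np.array([0.0, 0.0, 1.0])        # handle state, on the symmetry axis
P = []
for i in range(N):
    T = np.stack([a[i], b[i], a[(i + 1) % N]])   # measurement number i
    assert np.allclose(T @ T.T, np.eye(3))       # orthonormal triple
    P.append((T @ psi) ** 2)                     # vector P_{i_1}

alpha1 = sum(p[0] for p in P)          # sum_i p(a_{i_1}|i_1)
assert np.isclose(alpha1, np.sqrt(5))  # > (N-2)/2 = 3/2: maximal violation
```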
Using the definitions for $\alpha^k$ and $\beta^k$, given by \eqref{Ineq: Prob}, we can write the sums involved in terms of this vector, obtaining the following: \begin{subequations} \begin{align} &\beta^k=\sum_{i_k}\left(\begin{array}{ccc}0 &1& 0 \end{array}\right) \vec{P}_{i_k} = \left(\begin{array}{ccc}0 &1& 0 \end{array}\right) \sum_{i_k}\vec{P}_{i_k} \label{Eq: BetaProtocol1}\\ &\alpha^k = \sum_{i_k}\left(\begin{array}{ccc}1 &0& 0 \end{array}\right) \vec{P}_{i_k} = \sum_{i_k}\frac{1}{2}\left(\begin{array}{ccc}1 &0& 1 \end{array}\right) \vec{P}_{i_k}= \frac{1}{2}\left(\begin{array}{ccc}1 &0& 1 \end{array}\right) \sum_{i_k}\vec{P}_{i_k}. \label{Eq: GammaProtocol1} \end{align} \label{Eq: Protocol1} \end{subequations} Some comments on \eqref{Eq: GammaProtocol1} are important: the original expression for $\alpha$, Eq. (3a) of the main work, suggests the inner product of each $\vec{P}_{i_k}$ with $(1,0,0)^T$, which picks out the value $p(a_{i_k}|i_k) = \langle \ket{a_{i_k}}\bra{a_{i_k}} \rangle$. However, the last expression is also correct under the non-disturbance $+$ exclusiveness conditions \cite{TerraBarbaraBook}, which are valid for quantum theory and can be stated in this case as: \begin{equation} p\left(a_{i_k+1}|i_k\right) = p\left(a_{i_k+1}|i_k+1\right), \,\, \forall\, i_k. \end{equation} This means, in vector form: \begin{equation} \left(\begin{array}{ccc}0& 0& 1\end{array}\right)\vec{P}_{i_k} = \left(\begin{array}{ccc}1& 0& 0\end{array}\right)\vec{P}_{i_k+1}, \end{equation} which in turn implies that any convex combination of these entries gives the same value; in particular, the right-hand side of expression \eqref{Eq: GammaProtocol1}. We chose this particular convex combination because of its simplicity and symmetry.
These forms for $\alpha^k$ and $\beta^k$ can be summarized by the expression below: \begin{equation} I^k = \sum_{i_k}\langle v_{I},\vec{P}_{i_k}\rangle, \label{Eq: IPk} \end{equation} where $I$ represents either $\alpha$ or $\beta$ and $v_I$ is respectively $(1/2)(1 \,\, 0 \,\, 1)$ or $(0 \,\, 1 \,\, 0)$. We can understand Expression \eqref{Eq: IPk} as follows: for every observer $k$, the collection $(P_{i_k})_{i_k}$ that maximizes $\alpha^k$ is the one that maximizes the extreme entries of $\sum P_{i_k}$ (with a similar understanding for minimizing $\beta^k$). Given Expressions \eqref{Eq: Protocol1} for $\beta^k$ and $\alpha^k$ in terms of the vectors $\vec{P}_{i_k}$, we can state the problem as a Markovian process, as follows. Denoting by $\rho^k$ the state that is available to the $k$-th player after the measurements of the preceding ones, and denoting any of $\{a_{i_k},b_{i_k},a_{i_k+1}\}$ by $o_{i_k}$, we can write \begin{align} p(o_{i_k}|i_k) &= \bra{o_{i_k}}\rho^k\ket{o_{i_k}}= \Tr\left[\ket{o_{i_k}}\bra{o_{i_k}}\rho^k\ket{o_{i_k}}\bra{o_{i_k}}\right] \nonumber \\ &=\frac{1}{N}\sum_{ i_{k-1},o_{i_{k-1}}}\Tr\left[\ket{o_{i_k}}\braket{o_{i_k}}{o_{i_{k-1}}}\bra{o_{i_{k-1}}}\rho^{k-1}\ket{o_{i_{k-1}}}\braket{o_{i_{k-1}}}{o_{i_k}}\bra{o_{i_k}}\right] \nonumber\\ &=\frac{1}{N}\sum_{i_{k-1},o_{i_{k-1}}} |\braket{o_{i_k}}{o_{i_{k-1}}}|^2p(o_{i_{k-1}}|i_{k-1}). \end{align} We used the fact that the players choose their measurements independently and from a uniform distribution over the $N$ possible choices when defining $\rho^{k-1}$.
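The recursion above can be cross-checked numerically: iterating the transition probabilities $|\braket{o_{i_k}}{o_{i_{k-1}}}|^2$ must reproduce the probabilities obtained by evolving the density matrix through the unrecorded measurements. A sketch, under the same assumed KCBS-type realization for $N=5$:

```python
import numpy as np

N = 5
c = np.cos(np.pi / N)
th = np.arccos(np.sqrt(c / (1 + c)))
ph = np.pi * (N - 1) / N
a = np.array([[np.sin(th) * np.cos(i * ph),
               np.sin(th) * np.sin(i * ph),
               np.cos(th)] for i in range(N)])
b = np.array([np.cross(a[i], a[(i + 1) % N]) for i in range(N)])
triple = lambda i: [a[i], b[i], a[(i + 1) % N]]   # outcomes of measurement i

psi = np.array([0.0, 0.0, 1.0])                   # handle state
rho = np.outer(psi, psi)
p1 = np.array([[(v @ psi) ** 2 for v in triple(i)] for i in range(N)])

# second player's probabilities via the Markov recursion ...
p2 = np.array([[sum((v @ w) ** 2 * p1[j, o]
                    for j in range(N) for o, w in enumerate(triple(j))) / N
                for v in triple(i)] for i in range(N)])

# ... and via the density matrix after one unrecorded measurement round
rho2 = sum(np.outer(w, w) @ rho @ np.outer(w, w)
           for j in range(N) for w in triple(j)) / N
p2_rho = np.array([[v @ rho2 @ v for v in triple(i)] for i in range(N)])

assert np.allclose(p2, p2_rho)             # same probabilities either way
assert np.allclose(p2.sum(axis=1), 1.0)    # each P_{i_2} stays normalized
```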
We can rewrite the relation above in matrix form as \begin{align} \vec{P}_{i_k} =\frac{1}{N}\sum_{i_{k-1}=0}^{N-1}\left( \begin{array}{ccc} |\braket{a_{i_{k-1}}}{a_{i_k}}|^2 & |\braket{b_{i_{k-1}}}{a_{i_k}}|^2 & |\braket{a_{i_{k-1}+1}}{a_{i_k}}|^2\\ |\braket{a_{i_{k-1}}}{b_{i_k}}|^2 & |\braket{b_{i_{k-1}}}{b_{i_k}}|^2 & |\braket{a_{i_{k-1}+1}}{b_{i_k}}|^2\\ |\braket{a_{i_{k-1}}}{a_{i_k+1}}|^2 & |\braket{b_{i_{k-1}}}{a_{i_k+1}}|^2 & |\braket{a_{i_{k-1}+1}}{a_{i_k+1}}|^2\\ \end{array}\right)\vec{P}_{i_{k-1}} =\frac{1}{N}\sum_{i_{k-1}=0}^{N-1} \left[M_{i_k,i_{k-1}}\right]\vec{P}_{i_{k-1}}, \label{MyxPx} \end{align} which defines the matrix $M_{i_k,i_{k-1}}$: each column is labeled by a fixed vector representing an outcome $o_{i_{k-1}}|i_{k-1}$ of the $(k-1)$-th observer, while the rows are labeled by vectors representing the outcomes $o_{i_k}|i_k$ of the $k$-th observer. It is important to note that, once $N$ is fixed, this matrix depends only on the difference $i_k - i_{k-1}$ (modulo $N$) and is bistochastic -- rows and columns sum to one, since the entries are squared overlaps between two orthonormal bases -- reflecting the Markovianity of this process. Indeed, if the choice $i_{k-1}$ and the values of $\vec{P}_{i_{k-1}}$ are known, then the whole vector $\vec{P}_{i_k}$ can be obtained. Now we are going to investigate the consequences of such Markovianity, leading to the results for this protocol. Using \eqref{MyxPx} in Eq. \eqref{Eq: IPk} we can obtain the expression for $I^k$ in terms of the matrices $\{M_{i_k,i_{k-1}}\}$: \begin{align} I^k = \sum_{i_k}\left\langle v_I\,,\, \vec{P}_{i_k} \right\rangle= \sum_{i_k}\left\langle v_I,\frac{1}{N}\sum_{i_{k-1}} M_{i_k,i_{k-1}}\vec{P}_{i_{k-1}}\right\rangle = \left\langle v_I\,,\,\sum_{i_{k-1}}\left(\frac{1}{N}\sum_{i_k} M_{i_k,i_{k-1}}\right)\vec{P}_{i_{k-1}}\right \rangle.
\label{Eq: OperatorM} \end{align} The term in parenthesis suggests the definition of the matrix \begin{equation} \mathbb{M}_N = \frac{1}{N}\sum_{i_k} M_{i_k,i_{k-1}}, \label{Eq: DefMN} \end{equation} which does not depend on $i_{k-1}$, since each $M_{i_k,i_{k-1}}$ depends only on the difference $i_k - i_{k-1}$ and, by summing $M_{i_k,i_{k-1}}$ over $i_k$, all differences $i_k-i_{k-1}$ are considered. We can see from Eq. \eqref{Eq: DefMN} that $\mathbb{M}_N$ is also bistochastic. The symmetry of the $N$-cycles and the bistochastic property reflect into $\mathbb{M}_N$ not only by making it independent of $i_{k-1}$ but also by implying that $(\mathbb{M})_{11}=(\mathbb{M})_{13}=(\mathbb{M})_{31}=(\mathbb{M})_{33}\equiv t_N$ and $(\mathbb{M})_{12}=(\mathbb{M})_{21}=(\mathbb{M})_{23}=(\mathbb{M})_{32} = 1-2t_N$ (see Eq. \eqref{MyxPx}). So, we can define $\mathbb{M}_N$ only in terms of $t_N$ as: \begin{align} \mathbb{M}_N =\left( \begin{array}{ccc} t_N & 1-2t_N & t_N \\ 1-2t_N & 4t_N - 1 & 1-2t_N\\ t_N & 1-2t_N & t_N \end{array}\right). \label{Eq: MNSymmetry} \end{align} With $\mathbb{M}_N$ in this form we can get bounds on $t_N$, since all elements being non-negative implies $1\leq4t_N\leq 2$ for all odd $N$. As we now know the important features of the matrix $\mathbb{M}_N$, we can turn back to its effect on the inequalities for the $k$-th player. From Eq. \eqref{Eq: OperatorM} and the symmetries of $\mathbb{M}_N$, we can write: \begin{equation} I^k = \left\langle v_I\,,\,\sum_{i_{k-1}}\mathbb{M}_N\vec{P}_{i_{k-1}}\right \rangle= \left\langle v_I\,,\,\mathbb{M}_N\sum_{i_{k-1}}\vec{P}_{i_{k-1}}\right \rangle . \label{Eq: IneqMN} \end{equation} Equation \eqref{Eq: IneqMN} highlights the Markovianity of the game under protocol number $1$. We can see that the relevant vector for the $k$-th player to calculate $I^k$ -- i.e., $\sum\vec{P}_{i_k}$ -- can be obtained from $\mathbb{M}_N$ and the relevant vector of the previous player, $\sum\vec{P}_{i_{k-1}}$, alone. Using the same reasoning as in Eq.
\eqref{Eq: OperatorM} for $\vec{P}_{i_{k-1}}$ in terms of $\vec{P}_{i_{k-2}}$ and so on, we arrive at \begin{equation} I^k= \left\langle v_I,\mathbb{M}_N\sum_{i_{k-1}}\vec{P}_{i_{k-1}}\right\rangle=\left\langle v_I,(\mathbb{M}_N)^{k-1}\sum_{i_{1}}\vec{P}_{i_{1}}\right\rangle. \label{Eq: OperatorMN} \end{equation} Since the matrix $\mathbb{M}_N$ is fixed for each $N$, Eq. \eqref{Eq: OperatorMN} already gives all that is necessary to calculate $\alpha^k$ and $\beta^k$ for any $k$ and any initial state in terms of $(P_{i_1})_{i_1}$. With Eqs. \eqref{Eq: MNSymmetry}, \eqref{Eq: IneqMN} and \eqref{Eq: OperatorMN} it is possible to prove that the initial state that gives maximal violation for the first player also allows for the best attempt of the $k$-th player to win. The proof goes as follows. Writing $\sum_{i_{k-1}}\vec{P}_{i_{k-1}} = (c\,\,d\,\,e)^t$ for the $(k-1)$-th player, $\alpha^{k-1}$ is maximized ($\beta^{k-1}$ is minimized) by the vector with maximum $(c+e)$, as pointed out above \footnote{For Quantum Theory (and for any theory respecting the non-disturbance principle) $c=e$, since $p(a_j|i_j) = p(a_j|i_{j-1}) \,\,\forall j$ implies that the sums of the extreme entries are equal. However, we do not need to assume it here.}. Now, if this vector is multiplied by $\mathbb{M}_N$, from Eq. \eqref{Eq: MNSymmetry} we have \begin{equation} \sum_{i_k}\vec{P}_{i_k}= \mathbb{M}_N\sum_{i_{k-1}}\vec{P}_{i_{k-1}} = (c+e)\left(\begin{array}{c} t_N \\1-2t_N \\ t_N\end{array}\right) + d\left(\begin{array}{c}1-2t_N\\4t_N - 1\\1-2t_N \end{array}\right). \end{equation} Now, $\alpha^k$ can be written as \begin{equation} \alpha^k= (c+e)t_N + d(1-2t_N) = (c+e)(3t_N-1)+N(1-2t_N), \end{equation} where we used that $c+d+e=N$. We know from the positivity of every element of $\mathbb{M}_N$ that $t_N<1/2$, ensuring positivity of the last term, $(1-2t_N)$. Now, we see that the maximum value of $\alpha^k$ will be given by the maximum value of $(c+e)$ only if $t_N>1/3$.
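These steps can be verified numerically. The sketch below (same assumed KCBS-type realization, $N=5$) builds $\mathbb{M}_N$, confirms that it is independent of $i_{k-1}$ and has the form \eqref{Eq: MNSymmetry} with $1/3<t_N<1/2$, and checks the update rule for $\alpha^k$ above for an arbitrary $(c\,\,d\,\,e)$ with $c+d+e=N$:

```python
import numpy as np

N = 5
cN = np.cos(np.pi / N)
th = np.arccos(np.sqrt(cN / (1 + cN)))
ph = np.pi * (N - 1) / N
a = np.array([[np.sin(th) * np.cos(i * ph),
               np.sin(th) * np.sin(i * ph),
               np.cos(th)] for i in range(N)])
b = np.array([np.cross(a[i], a[(i + 1) % N]) for i in range(N)])
triple = lambda i: [a[i], b[i], a[(i + 1) % N]]

def M_block(ik, ikm1):
    # rows: outcomes of the k-th player; columns: outcomes of the (k-1)-th
    return np.array([[(u @ w) ** 2 for w in triple(ikm1)] for u in triple(ik)])

MN = sum(M_block(ik, 0) for ik in range(N)) / N
assert all(np.allclose(MN, sum(M_block(ik, j) for ik in range(N)) / N)
           for j in range(N))                  # independent of i_{k-1}
t = MN[0, 0]
assert np.allclose(MN, [[t, 1 - 2 * t, t],
                        [1 - 2 * t, 4 * t - 1, 1 - 2 * t],
                        [t, 1 - 2 * t, t]])    # Eq. (MNSymmetry)
assert 1 / 3 < t < 1 / 2

v_alpha = 0.5 * np.array([1.0, 0.0, 1.0])
S = np.array([2.0, 1.0, 2.0])                  # any (c, d, e) with c+d+e = N
assert np.isclose(v_alpha @ (MN @ S),
                  (S[0] + S[2]) * (3 * t - 1) + N * (1 - 2 * t))
```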
For those cases, we see that the vector $\sum P_{i_{k-1}}$ that maximizes $\alpha^{k-1}$ (the one with maximum $(c+e)$) leads to the vector $\sum P_{i_k}$ that maximizes $\alpha^k$, i.e., leads to the vector $\sum_{i_k}\vec{P}_{i_k}$ with maximized extreme entries. Then, continuing the argument for the previous players, the vector $\sum P_{i_1}$ that leads to the best value of $I^k$ is the one that leads to the best value for $I^1$. Since we know that the quantum state $\ket{\Psi_{\text{handle}}}$ leads to maximal violation for the first player, we know by the previous argument that this initial state also leads to the best hope for the $k$-th player to win, for all $k$. It is left to prove now that $t_N>1/3$ is not a restriction in the scenarios we are considering. In fact, it follows from (i) $t_N = (1/ N)\sum|\braket{a_0}{a_i}|^2$ increasing with $N$ and (ii) $t_{N=3}=1/3$, implying that $t_N>1/3 \,\, \forall \,\, N\geq 5$ \footnote{For $N=3$, two out of the three terms in the sum are zero, since each state is orthogonal to the other two, and the remaining term is $1$; this implies $t_3=\mathbb{M}_{11}=1/3$.}. The calculation given by Eq. \eqref{Eq: OperatorMN} can be used for every $k$ and leads naturally to the evaluation of the limit $k \rightarrow \infty$. The asymptotic limit can be calculated exactly by noting that the matrix $\mathbb{M}_N$ is regular (which implies irreducibility). A stochastic matrix $M$ is said to be regular if there is a natural number $r$ such that all entries of the $r$-th power of $M$ are positive: \begin{equation} (M^r)_{ij} > 0, \,\,\forall\, i,j. \end{equation} For the matrices $\mathbb{M}_N$ it is possible to see that $r=1$. Indeed, an entry of $M_{i_k,i_{k-1}}$ vanishes only when the corresponding outcome vectors are orthogonal, which happens only for a few values of $i_k - i_{k-1}$ at each entry position; for every entry position, at least one value of the difference gives a strictly positive overlap.
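The regularity claim ($r=1$) and the resulting convergence of the powers of $\mathbb{M}_N$ to the uniform distribution are easy to confirm numerically under the same assumed realization:

```python
import numpy as np

N = 5
cN = np.cos(np.pi / N)
th = np.arccos(np.sqrt(cN / (1 + cN)))
ph = np.pi * (N - 1) / N
a = np.array([[np.sin(th) * np.cos(i * ph),
               np.sin(th) * np.sin(i * ph),
               np.cos(th)] for i in range(N)])
b = np.array([np.cross(a[i], a[(i + 1) % N]) for i in range(N)])
triple = lambda i: [a[i], b[i], a[(i + 1) % N]]

MN = sum(np.array([[(u @ w) ** 2 for w in triple(0)] for u in triple(ik)])
         for ik in range(N)) / N

assert (MN > 0).all()                          # regular already at r = 1
P0 = np.array([0.7, 0.1, 0.2])                 # any initial distribution
limit = np.linalg.matrix_power(MN, 50) @ P0
assert np.allclose(limit, np.ones(3) / 3)      # unique stationary vector
```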
As $\mathbb{M}_N$ is (up to the $1/N$ factor) the sum of all the matrices $\{M_{i_k,i_{k-1}}\}$, $\mathbb{M}_N$ has all entries strictly greater than zero. Now, the Perron--Frobenius theorem guarantees the existence of a unique stationary distribution eigenvector (i.e., eigenvector with eigenvalue $1$) for any regular stochastic matrix, to which the system tends under successive applications of this matrix \cite{DurrettStochasticBook}. This in turn implies that $\mathbb{M}_N$ has such a unique stationary distribution; more than that, as $\mathbb{M}_N$ is not only stochastic but bistochastic, the uniform distribution $\vec{P}^* = (1/3)(1,1,1)^T$ is an eigenvector with eigenvalue $1$. As the Perron--Frobenius theorem guarantees that the stationary vector is unique, this is the only stationary distribution. In other words: \begin{equation} \lim_{n\rightarrow\infty} \left(\mathbb{M}_N\right)^n \vec{P} =\frac{1}{3} \left(\begin{array}{c} 1\\ 1\\ 1\end{array}\right), \,\, \forall \vec{P} \text{ s. t.}\,\, \sum_i P(i)=1. \end{equation} This implies the asymptotic limit presented in the text. \section{Protocols number 2 and 3} \label{Sec: Protocols2and3} For protocols $2$ and $3$, the measurements are $\{\ket{a_{i_k}}\bra{a_{i_k}},\mathcal{I}-\ket{a_{i_k}}\bra{a_{i_k}}\}_{i_k}$ with outcomes $o_{i_k}|i_k\in \{a_{i_k},\neg a_{i_k}\}$ and $\{\ket{b_{i_k}}\bra{b_{i_k}},\mathcal{I}-\ket{b_{i_k}}\bra{b_{i_k}}\}_{i_k}$ with outcomes $o_{i_k}|i_k\in \{b_{i_k},\neg b_{i_k}\}$, respectively. Here we prove \eqref{Eq: IneqProtocols2e3}, relating $\beta^k$ to $\beta^{k-1}$ and $\alpha^k$ to $\alpha^{k-1}$: \begin{subequations} \begin{align} \beta^k &= B_N\beta^{k-1}+b_N,\\ \alpha^k &= C_N\alpha^{k-1} +c_N. \end{align} \end{subequations} We will present the calculations for $\beta^k$; the results for $\alpha^k$ follow an analogous path.
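Before the derivation, the affine recursion \eqref{Eq: IneqProtocols2e3} and the $N/3$ asymptotic limit can be illustrated by simulating protocol $3$ directly (a sketch, again assuming the standard KCBS-type realization for $N=5$, with the handle state on the symmetry axis):

```python
import numpy as np

N = 5
cN = np.cos(np.pi / N)
th = np.arccos(np.sqrt(cN / (1 + cN)))
ph = np.pi * (N - 1) / N
a = np.array([[np.sin(th) * np.cos(i * ph),
               np.sin(th) * np.sin(i * ph),
               np.cos(th)] for i in range(N)])
b = np.array([np.cross(a[i], a[(i + 1) % N]) for i in range(N)])

rho = np.outer([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])   # handle state
Bop = sum(np.outer(b[i], b[i]) for i in range(N))   # operator B_N

betas = []
for _ in range(25):
    betas.append(np.trace(Bop @ rho))
    new = np.zeros((3, 3))            # one player's unrecorded measurement:
    for i in range(N):                # average of {Pi_i, I - Pi_i} updates
        Pi = np.outer(b[i], b[i])
        Qi = np.eye(3) - Pi
        new += Pi @ rho @ Pi + Qi @ rho @ Qi
    rho = new / N

assert betas[0] < 1                                 # player 1 violates beta >= 1
ratios = [(betas[k + 1] - N / 3) / (betas[k] - N / 3) for k in range(10)]
assert np.allclose(ratios, ratios[0])               # affine recursion in beta
assert abs(betas[-1] - N / 3) < 1e-3                # maximally mixed limit
```

The deviation $\beta^k-N/3$ decays geometrically with a constant ratio (the coefficient $B_N$), and $\beta^k\rightarrow N/3$, so the violation of $\beta\geq 1$ is lost after finitely many players.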
The expression for $\beta^k$ can be written in the form \begin{equation} \beta^k = \sum_{i_k}\Tr\left\{\ket{b_{i_k}}{\bra{b_{i_k}}}\rho^k\ket{b_{i_k}}{\bra{b_{i_k}}}\right\} = \Tr\left\{\left(\sum_{i_k}\ket{b_{i_k}}{\bra{b_{i_k}}}\right)\rho^k\right\} = \Tr\left\{\mathcal{B}_N \rho^k\right\} \end{equation} with $\mathcal{B}_N = \sum_{i_k}\ket{b_{i_k}}\bra{b_{i_k}}$, and $\rho^k$ the state available to the $k$-th player. Since this state is the initial state $\rho^1$ measured by the $k-1$ previous players without registering which measurement and which result occurred, $\rho^k$ is given by: \begin{equation} \rho^k = \left(\frac{1}{N}\right)^{k-1}\sum_{\vec{i},\vec{o}}\Pi_{i_{k-1}}^{o_{i_{k-1}}}...\Pi_{i_1}^{o_{i_1}}\rho\Pi_{i_1}^{o_{i_1}}...\Pi_{i_{k-1}}^{o_{i_{k-1}}}, \end{equation} where we denote the projective elements of the POVM $\{\ket{b_{i_k}}\bra{b_{i_k}},\mathcal{I}-\ket{b_{i_k}}\bra{b_{i_k}}\}$ by $\{\Pi_{i_k}^{o_{i_k}}\}_{o_{i_k}}$ with $o_{i_k}\in\{b_{i_k},\neg b_{i_k}\}$, and where, in the sum, $\vec{o}$ stands for $o_{i_1}|i_1,...,o_{i_{k-1}}|i_{k-1}$ and $\vec{i} = i_1,...,i_{k-1}$. As one can see, the operator $\mathcal{B}_N$ defines the expression $\beta^k$ through its expected value on a given state $\rho^k$. We now discuss some symmetry properties of this operator (and of the analogous operator for $\alpha$). \subsection{Symmetries of the odd N-cycle} \label{Sec: AssympState} The symmetry of the vectors defining the quantum realization leads to some important relations.
First, a rotation by $2\pi/N$ about the axis defined by $\ket{\Psi_{\text{handle}}}$, or a reflection through a plane that contains both the handle and one of the vectors, leaves the set of operators $\{\ket{b_{i_k}}\bra{b_{i_k}}\}_{i_k}$ unchanged, since it only reorganizes its elements \footnote{The $N$-cycle scenario itself has a different symmetry group, since a rotation by $\pi$ about an axis in the plane passing through any of the vertices of the $C_N$ compatibility graph is also an element of the group. However, the special use of the third dimension for the quantum realization excludes this element of the symmetry group.}. This means that the operator $\mathcal{B}_N$ is invariant under the action of any element of this symmetry group (the $C_{Nv}$ point group in the notation of Ref. \cite{InuiTanabeOnoderaBook}). In other words, for any unitary $U_N$ representing an element of the group, the following relation holds: \begin{equation} U_N \mathcal{B}_N U_N^{-1}= \sum_i U_N\ket{b_i}\bra{b_i}U_N^{-1} = \mathcal{B}_N \Rightarrow \left[\mathcal{B}_N,U_N\right]=0\,\,\forall\,\, U_N \in C_{Nv}, \end{equation} where we use the subscript $i$ instead of $i_k$ to lighten the notation, since the relation is valid for all $k$. If the representation given by $\{U_N\}$ were irreducible, Schur's lemma would imply that the operator $\mathcal{B}_N$ is a multiple of the identity. However, this is not the case, as none of the groups $C_{Nv}$ with odd $N$ has an irreducible representation of dimension $3$ \cite{InuiTanabeOnoderaBook}. They do, however, have irreducible representations of dimension $2$. By Schur's lemma, this implies that the matrix takes the form: \begin{equation} \mathcal{B}_N = \sum_i\ket{b_i}\bra{b_i} = \lambda_0^N\mathcal{I}_{2\times2}\oplus\lambda_1^N\mathcal{I}_{1 \times 1}.
\label{Eq: DiagformOn} \end{equation} Of course the same happens with the operator $\mathcal{A}_N = \sum_i \ket{a_i}\bra{a_i}$, which acquires the form \begin{equation} \mathcal{A}_N = \sum_i\ket{a_i}\bra{a_i} = \mu_0^N\mathcal{I}_{2\times2}\oplus\mu_1^N \mathcal{I}_{1 \times 1}. \label{Eq: DiagFormPn} \end{equation} \subsection{Proof for protocols 2 and 3} The symmetry relation \eqref{Eq: DiagformOn} implies the operator identity \begin{equation} \sum_{i}\Pi^{b_i}_{i}\mathcal{B}_N\Pi^{b_i}_{i} = z_N\mathcal{B}_N; \label{Eq: PibOn} \end{equation} which says that when $\Pi^{b_i}_{i}$ acts on $\mathcal{B}_N$ we get a multiple of the same $\mathcal{B}_N$, where the constant of proportionality is $z_N = (1/N)[2(\lambda_0^N)^2 + (\lambda_1^N)^2]$ and depends only on the scenario, i.e., on the odd $N$. It is also possible to analyze the action of $\Pi^{\neg b_i}_i = \mathcal{I} - \Pi_i^{b_i}$ on the same operator, and one gets: \begin{equation} \sum_{i}\Pi^{\neg b_i}_{i}\mathcal{B}_N\Pi^{\neg b_i}_{i} = u_N\mathcal{B}_N + 2\lambda^N_0\lambda^N_1\mathcal{I}_{3\times3}, \label{Eq: PiNegbOn} \end{equation} where $u_N = N + z_N - 2(\lambda_0^N + \lambda_1^N)$; since $\lambda_0^N$ and $\lambda_1^N$ are the eigenvalues of $\mathcal{B}_N$, $u_N$ also depends only on $N$. It is now convenient to rewrite the expression for $\beta^k$, making explicit its dependence on $\rho^{k-1}$: \begin{equation} \beta^k = \sum_{i_k}p(o_{i_k}=b|i_k)=\Tr\left\{\mathcal{B}_N\rho^k\right\} = \left(\frac{1}{N}\right)\sum_{i_{k-1},o_{i_{k-1}}}\Tr\left\{\mathcal{B}_N\left(\Pi_{i_{k-1}}^{o_{i_{k-1}}}\rho^{k-1}\Pi_{i_{k-1}}^{o_{i_{k-1}}}\right)\right\}. \label{Eq: Betakrhokm1} \end{equation} Using in Eq.
\eqref{Eq: Betakrhokm1} the relations \eqref{Eq: PibOn} and \eqref{Eq: PiNegbOn}, together with the cyclic property of the trace, we get: \begin{equation} \beta^k = \frac{z_N+u_N}{N}\beta^{k-1} + 2\frac{\lambda_0^N\lambda_1^N}{N} = B_N\beta^{k-1} + b_N, \label{Eq: BetakPrevious} \end{equation} which is the relation we aimed to prove. For the asymptotic limit, we use the explicit forms of $u_N$ and $z_N$ to get \begin{equation} B_N = 1-\frac{3b_N}{N}. \end{equation} Now, we just have to calculate the limit \begin{equation} \lim_{k\rightarrow \infty}\beta^k = \lim_{k\rightarrow \infty} \left(\sum_{n=0}^k \left[B_N\right]^n\right)b_N = \frac{1}{1-B_N}b_N = \frac{N}{3}, \end{equation} where we used the fact that $|B_N|<1$ for all odd $N$. This finishes the proof for protocol $3$; for protocol $2$ the proof is completely analogous. It remains to prove that the average asymptotic state is the maximally mixed state, for all the protocols. \section{Average state in the asymptotic limit} To see that the average state is indeed the maximally mixed state, we first note that for this state $p(o=b_i|i) = (1/3)$ \emph{for all choices of $i$}. This implies that $\bar{\rho}_{\infty}$ can be written as \begin{equation} \bar{\rho}_{\infty} = \frac{1}{3}\ket{b_i}\bra{b_i} + \frac{2}{3}\Pi_{i}^{\neg b_i}R_i\Pi_{i}^{\neg b_i}, \,\,\, \forall i \in\{0,...,N-1\} \label{Eq: averageSBi} \end{equation} with $R_i$ a positive semidefinite matrix with unit trace. Now, \begin{equation} \bar{\rho}_{\infty} = \frac{N\bar{\rho}_{\infty}}{N} = \frac{1}{N}\sum_i\left(\frac{1}{3}\ket{b_i}\bra{b_i} + \frac{2}{3}\Pi_{i}^{\neg b_i}R_i\Pi_{i}^{\neg b_i}\right) =\frac{1}{N}\left(\frac{1}{3}\mathcal{B}_N + \frac{2}{3}\sum_i\Pi_{i}^{\neg b_i}R_i\Pi_{i}^{\neg b_i}\right). \end{equation} Above we used the fact that Equation~\eqref{Eq: averageSBi} is valid for all $i$. The matrix $\sum_i \Pi_i^{\neg b_i} R_i \Pi_i^{\neg b_i}$ also commutes with every operator representing the group $C_{Nv}$.
Then, by Schur's lemma, this means \begin{equation} \sum_i\Pi_{i}^{\neg b_i}R_i\Pi_{i}^{\neg b_i} = \left(r_1\mathcal{I}_{2\times2}\right)\oplus r_0 \mathcal{I}_{1 \times 1} . \end{equation} We also showed, in Eq. \eqref{Eq: DiagformOn}, that $\mathcal{B}_N$ has an analogous form. This means that the average state also has this form, and we can write \begin{equation} \bar{\rho}_{\infty} =\left(r'_1 \mathcal{I}_{2\times 2}\right)\oplus r'_0\mathcal{I}_{1 \times 1}, \label{Eq: averageSum} \end{equation} with $r'_j$ related to the eigenvalues of $\mathcal{B}_N$ and to $r_j$. However, it is not necessary to enter into these details. By \eqref{Eq: averageSBi}, \eqref{Eq: averageSum} and the orthogonality of $\Pi_i^{\neg b_i}R_i\Pi_i^{\neg b_i}$ with $\ket{b_i}\bra{b_i}$, the average state is a diagonal matrix, with one eigenvalue equal to $r'_0= 1/3$ and two other eigenvalues equal to $r'_1$. By the unit-trace condition for $\bar{\rho}_\infty$, this means that $r'_1 = 1/3$ as well. This implies $\bar{\rho}_{\infty}=\mathcal{I}/3$. \end{document}
\begin{document} \preprint{APS/123-QED} \title{Proposal for a clumsiness-free test of macroscopic realism} \author{Devashish Pandey} \affiliation{Department of Electronic Engineering, Universitat Aut\`onoma de Barcelona, 08193 Bellaterra, Spain} \author{Xavier Oriols} \affiliation{Department of Electronic Engineering, Universitat Aut\`onoma de Barcelona, 08193 Bellaterra, Spain} \author{Guillermo Albareda} \affiliation{Max Planck Institute for the Structure and Dynamics of Matter, 22761 Hamburg, Germany} \affiliation{Institute of Theoretical and Computational Chemistry, Universitat de Barcelona, 08028 Barcelona, Spain} \email{[email protected],[email protected]} \date{\today} \begin{abstract} We propose a test of macrorealism that exploits the contextuality of two-time correlation functions to escape the so-called ``clumsiness loophole'' that plagues Leggett-Garg inequalities. The non-contextuality of reduced joint probability distributions is proven to be an unequivocal criterion to guarantee that measurements are carried out in the ideally-weak measurement regime of a class of generalized von Neumann measurements. In this regime, testing the so-called ``no-signaling in time'' condition makes it possible to ascertain, in a non-contextual way, whether a property of a given system is macrorealistic or non-macrorealistic. Interestingly, the resulting protocol allows for tests of macrorealism in situations where Leggett-Garg inequalities and ideal negative measurements cannot be used at all. \end{abstract} \maketitle Ever since the birth of quantum mechanics, theoretical works have deepened our understanding of its conceptual and mathematical structure. In this respect, any list of highlights should certainly include the Bell~\cite{bell2004speakable} and Leggett-Garg~\cite{leggett1985} inequalities, devised to disprove local~\cite{einstein1935can,bell1964einstein} and macroscopic realism~\cite{leggett1988experimental,leggett2008realism}, respectively.
In contrast to the violation of local realism~\cite{aspect1999bell,brunner2014bell,reid2009colloquium}, however, an inarguable violation of macrorealism has remained elusive to date~\cite{wilde2012addressing,emary2013}. The reason is that, whilst special relativity can be used to close the ``communication loophole'' in a Bell test of local realism~\cite{clauser1978bell,kwiat1994proposal,huelga1995loophole,rosenfeld2009towards}, no such defence exists for a Leggett-Garg test of macrorealism. Macrorealism does not assert that it is impossible to affect a physical system by measurement, and therefore a violation of Leggett-Garg inequalities can only prove that the system's properties are either (i) non-macrorealistic or (ii) macrorealistic but subjected to a measurement technique that happens to disturb the system. This problem is known as the ``clumsiness loophole''~\cite{wilde2012addressing}, and it can always be exploited to refute the implications of a Leggett-Garg test of macrorealism. While a number of works have addressed this problem by making the explanation of Leggett-Garg inequality violations in terms of experimental clumsiness so contrived as to be doubtful~\cite{kwiat1994proposal,knee2012violation}, whether a loophole-free Leggett-Garg protocol can be constructed remains an open question~\cite{emary2013}. In this Letter we propose a clumsiness-free test of macrorealism that relies on the notion of contextuality introduced by Bell~\cite{bell2004speakable} and by Kochen and Specker~\cite{Kochen1975}, which is known to yield observable effects at the level of time-correlation functions~\cite{anastopoulos2006classical,dressel2012contextual,dressel2010contextual}.
Measuring an observable A at time $t$ and correlating the outcome, $y_A(t)$, with the measured value of B, $y_B(\tau)$, at a later time $\tau \geq t$, provides an unequivocal way of representing the dynamics of classical systems in terms of joint probabilities, i.e., $P(y_A,y_B) \leftrightarrow \textit{system dynamics}$. In quantum mechanics, however, the unavoidable backaction of the measurement process~\cite{braginsky1995quantum,dicke1981interaction} precludes such a clear-cut connection. Even using the best technological means, different measurement schemes, $\{\sigma_A\}$, can yield different probability distributions, i.e., $P(y_A,y_B)_{\sigma_A} \leftrightarrow \textit{system+apparatus dynamics}$. This property of quantum mechanics can result in contradictions among tests of macrorealism that are based on different experimental set-ups. To avoid contextuality, we will first identify the \textit{ideally-weak measurement regime}: the regime where, for pure states, two-time correlation functions distinctively unravel either (i) the expectation value of two-time Heisenberg operators for non-macrorealistic properties or (ii) the product of the expectation values of two independent events for macrorealistic ones. We will then show that this regime can be experimentally identified by witnessing the non-contextuality of a reduced probability distribution. Finally, we will prove that for general (mixed) states, assessing the so-called ``no-signaling in time'' criterion~\cite{kofler2013condition,li2012witnessing} under ideally-weak measurement conditions makes it possible to unambiguously distinguish between macrorealistic and non-macrorealistic properties.
We consider a generalized von Neumann measurement~\cite{von2018,jacobs2014}, where the expectation value of a property A, associated with the operator $\hat A = \sum_i a_i |a_i\rangle \langle a_i |$ (with $a_i$ and $| a_i \rangle$ being the corresponding eigen-values and -states), of a quantum system $|\psi(t)\rangle$ is determined by repeatedly reading out the pointer position of the meter over a large ensemble of identically prepared experiments: \begin{equation}\label{exp_y} \langle y_A(t) \rangle = \int_{-\infty}^{\infty} dy_A y_A P(y_A), \end{equation} where $P(y_A)$ is the probability of finding a value $y_A$ of the pointer position at time $t$. According to Born's rule, $P(y_A)$ can be expressed in terms of the system degrees of freedom as $P(y_A)=|\psi_A(t)|^2$, where \begin{equation}\label{def_kstate} |\psi_A(t) \rangle = \sum_{i} \Omega_{y_A-\lambda a_i} c_i |a_i\rangle \end{equation} is the state of the system right after measuring $y_A$ at time $t$~\cite{AppA}. In \eref{def_kstate} we have defined the coefficients $c_i = \langle a_i|\psi(t)\rangle$, and the displaced (by an amount $\lambda a_i$) wavepacket of the pointer, $\Omega_{y_A-\lambda a_i}$, with $\lambda$ being a macroscopic parameter with units of $[L][A]^{-1}$ that hereafter is assumed to be $\lambda = 1$~\cite{misc3}. In order to ensure that \eref{exp_y} always yields the correct expectation value $\langle y_A(t) \rangle = \langle \hat A \rangle$, it is enough for the pointer wavepacket to be normalized and to obey $\int y_A |\Omega_{y_A-a_i}|^{2} dy_A = a_i$ ~\cite{aharonov1991complete,kofman2012,jacobs2014}. A second, subsequent measurement of a property B, associated with the operator $\hat B = \sum_i b_i |b_i\rangle \langle b_i |$ (with $b_i$ and $| b_i \rangle$ being the corresponding eigen-values and -states) can be easily accommodated into the above scheme by simply reading out the pointer position of a second measuring apparatus at time $\tau\geq t$.
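As a sanity check of \eref{exp_y}: with orthonormal $|a_i\rangle$, the pointer distribution is $P(y_A)=\sum_i |c_i|^2\,|\Omega_{y_A-a_i}|^2$, so its first moment is $\sum_i |c_i|^2 a_i = \langle \hat A\rangle$ for any pointer obeying the moment condition above. A minimal numerical sketch with a Gaussian pointer and an illustrative spectrum and state (all numerical values below are arbitrary choices):

```python
import numpy as np

# Check that the pointer's first moment reproduces <A>:
# P(y_A) = sum_i |c_i|^2 |Omega_{y_A - a_i}|^2 for orthonormal |a_i>,
# with a normalized Gaussian pointer of width sigma.
a = np.array([-1.0, 0.5, 2.0])            # eigenvalues a_i (illustrative)
c = np.array([0.6, 0.3, 0.74])            # amplitudes c_i, then normalized
c = c / np.linalg.norm(c)
sigma = 3.0                               # pointer width

y = np.linspace(-60.0, 60.0, 200001)
dy = y[1] - y[0]
P = sum(abs(ci) ** 2
        * np.exp(-(y - ai) ** 2 / (2 * sigma ** 2))
        / np.sqrt(2 * np.pi * sigma ** 2)
        for ai, ci in zip(a, c))

mean_y = np.sum(y * P) * dy               # first moment of the pointer
mean_A = np.sum(abs(c) ** 2 * a)          # <A> from the state
print(mean_y, mean_A)                     # agree to numerical precision
```

Note that the agreement holds for any pointer width $\sigma$, which is why \eref{exp_y} itself is not contextual; only the joint two-time statistics are.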
The two-time correlation function $\langle y_A(t)y_B(\tau)\rangle$ can then be evaluated as: \begin{eqnarray}\label{two} \langle y_A(t)y_B(\tau)\rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dy_A dy_B \; y_A y_B P(y_A,y_B), \end{eqnarray} where $P(y_A,y_B)$ is the joint probability of reading out the values $y_A$ and $y_B$ at times $t$ and $\tau$, respectively. Using Born's rule, this probability can be written as $P(y_A,y_B) = |\psi_{A,B}(\tau)|^2$, where \begin{equation}\label{def_kwstate} |\psi_{A,B}(\tau)\rangle = \sum_{i,j} \Omega_{y_A-a_i}\Omega_{y_B-b_j} c_{i}c_{j,i}|b_{j}\rangle \end{equation} is the state of the system right after the two-time measurement process~\cite{AppB}. In \eref{def_kwstate} we have defined $c_{j,i} = \langle b_j | \hat U_\tau|a_i\rangle$, where $\hat U_\tau = \exp(-i\hat H\tau/\hbar)$ describes the unitary evolution of the system between the two measurements. Without loss of generality, we can now restrict the meter wavepacket to be represented by a Gaussian Kraus operator~\cite{kraus1983states,wiseman2009}, i.e., $\Omega_{y - a_j} = \mathcal{A} \exp{\left[-{(y-a_j)^2}/{4\sigma^2} \right]}$, where $\mathcal{A}$ is a normalization constant. The dependence of \eref{def_kwstate} on the measuring apparatus can then be effectively characterized by the coupling-strength parameter $\sigma = \sigma_{A/B}$, and thus \eref{two} reads~\cite{AppC}: \begin{eqnarray}\label{general} \langle y_A(t)y_B(\tau)\rangle_{\sigma_A} = \frac{1}{2} \sum_{i,j} a_i \mathcal{E}_{j,i}\mathcal{B}_{j,i}(\tau) + c.c, \end{eqnarray} where $\mathcal{B}_{j,i}(\tau) = \langle a_{j}|\hat B(\tau)| a_{i}\rangle$ are the matrix elements of the Heisenberg operator $\hat B(\tau) = \hat U_\tau^\dagger \hat B \hat U_\tau$, and we have defined $\mathcal{E}_{j,i} = c_{j}^*\exp{\big[ -{\left( a_i - a_{j} \right)^2}/{4\sigma_A^2} \big]}c_i$.
The expectation value in \eref{general} now bears a subscript $\sigma_A$ that reinforces the idea that this result depends on the measurement scheme. That is, two-time expectation values are generally contextual~\cite{dressel2012contextual}. Concerning the assumption that the meter wavepacket is represented by a Gaussian operator, it is shown in Ref.~\cite{AppC} that \eref{general} can also be derived for generic meter wavefunction shapes. The result in \eref{general} can be generalized to systems made of $N$ interacting particles. For that, we consider a general (non-separable) state $|\psi (t)\rangle = \sum_{i_1,..,i_N} c_{i_1,..,i_N} | a_{i_1},..,a_{i_N} \rangle$, where $c_{i_1,..,i_N} = \langle a_{i_1},..,a_{i_N} | \psi (t) \rangle$. We also define the many-body intensive operator $\hat{A} = \sum_{\xi=1}^N \hat I\otimes \cdots \otimes \hat A_\xi \otimes \cdots \otimes \hat I/N$, where the index $\xi$ only denotes the degree of freedom that the single-particle operator, $\hat A_\xi$, acts on. Then, the analogue of \eref{general} for a many-body system reads~\cite{AppD}: \begin{equation}\label{MB} \langle {y_A(t)y_B(\tau)} \rangle_{\sigma_A} = \frac{1}{2N} \! \sum_{\xi=1}^N \sum_{\substack{i_1,..,i_N\\j_1,..,j_N}}^\infty \!\!\!\! a_{i_\xi} \mathcal{E}_{\substack{i_1,..,i_N\\j_1,..,j_N}} \mathcal{B}_{\substack{i_1,..,i_N\\j_1,..,j_N}} \! + c.c, \end{equation} where we have defined the matrix elements: \begin{subequations} \begin{equation}\label{S} \mathcal{B}_{\substack{i_1,..,i_N\\j_1,..,j_N}} = \langle a_{j_1},..,a_{j_N}|\hat{B}(\tau) |a_{i_1},..,a_{i_N}\rangle, \end{equation} \begin{equation}\label{E} \mathcal{E}_{\substack{i_1,..,i_N\\j_1,..,j_N}} = c_{j_1,..,j_N}^* \exp\Bigg[-\frac{\big(\sum_{\nu}^N a_{i_{\nu}} - a_{j_{\nu}}\big)^2}{4\sigma_A^2 N^2} \Bigg] c_{i_1,..,i_N}. \end{equation} \end{subequations} For $N = 1$, \eref{MB} trivially reduces to \eref{general}.
For $N\neq 1$, the backaction of the measurement of A can induce entanglement among particles~\cite{cabrillo1999creation,chou2005measurement}. At this point we want to address the question of whether there exists a specific measurement regime where the result in \eref{MB} becomes non-contextual, viz., $\langle y_A y_B \rangle_{\sigma_A} \approx \langle y_A y_B \rangle$. For that, we define the effective dimension of the system, $\textrm{d}_{\text{eff}}$, as a measure of the average width of the relevant spectrum of the system with respect to $\hat{A}$, i.e., $\textrm{d}_\textrm{eff} := {\sum_{\nu=1}^N \max(\Delta A_\nu)}/{N}$, where $\max(\Delta A_\nu)$ is the maximum distance between the occupied upper and lower bounds of the spectrum of $\hat A_\nu$. Then, a simple inspection of the matrix elements in \eref{E} shows that for any coupling $\sigma_A$ fulfilling the condition $\textrm{d}_\textrm{eff} \ll \sigma_A$, one always measures~\cite{AppE}: \begin{equation}\label{noncontextual} \langle {y_A(t)y_B(\tau)} \rangle = \frac{1}{2}{\langle \psi(t)| \hat{A}(t) \hat{B}(\tau) |\psi(t)\rangle} + c.c. \end{equation} The condition $\textrm{d}_\textrm{eff} \ll \sigma_A$ defines what we call the ideally-weak measurement (IWM) regime: the regime where one always measures the same expectation value independently of $\sigma_A$~\cite{misc1}. This is the case even if the joint probability distribution of measuring $y_A(t)$ and $y_B(\tau)$ depends on $\sigma_A$. This result adds to previous findings~\cite{anastopoulos2006classical,di2008weak,dressel2010contextual,dressel2012contextual} by showing that, while quantum backaction is needed for correlation functions to be contextual, viz., $\langle y_A y_B \rangle_\sigma \Rightarrow P(y_{B},y_{A})_{\sigma_A}$, the contrary is not true for a general class of experiments, viz., $P(y_{B},y_{A})_{\sigma_A} \;\;{\nRightarrow} \;\; \langle y_A y_B \rangle_{\sigma_A}$.
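The convergence of the two-time correlation \eref{general} to its non-contextual value, $\mathrm{Re}\,\langle\psi(t)|\hat A \hat B(\tau)|\psi(t)\rangle$, as $\sigma_A$ grows beyond the spread of the spectrum of $\hat A$, can be checked numerically on a small system. The spectrum, Hamiltonian, evolution time, and state below are random illustrative choices, not tied to any specific experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3                                     # small illustrative Hilbert space

def rand_herm(d):
    """Random Hermitian matrix (illustrative Hamiltonian / observable)."""
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

# Work in the eigenbasis of A, so A is diagonal with spectrum a.
a = np.array([-1.0, 0.0, 2.0])
H, B = rand_herm(d), rand_herm(d)
tau = 1.3

# Unitary evolution U = exp(-i H tau) via the eigendecomposition of H.
e, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * e * tau)) @ V.conj().T
Btau = U.conj().T @ B @ U                 # Heisenberg operator B(tau)

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
c = psi / np.linalg.norm(psi)             # c_i = <a_i|psi>

def corr(sigma_A):
    """Two-time correlation (1/2) sum_{i,j} a_i E_{j,i} B_{j,i}(tau) + c.c."""
    E = np.outer(c.conj(), c) * np.exp(-np.subtract.outer(a, a) ** 2
                                       / (4 * sigma_A ** 2))
    val = 0.5 * np.sum(a[None, :] * E * Btau)
    return (val + val.conj()).real

# In the IWM regime (sigma_A much larger than the spread of a) the result
# becomes sigma_A-independent and equals Re <psi| A B(tau) |psi>.
target = (c.conj() @ (np.diag(a) @ Btau) @ c).real
print(corr(1e4), target)   # agree; small sigma_A generally gives a different value
```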
Yet, the limit $\textrm{d}_\textrm{eff} \ll \sigma_A$ implies a strict cancellation of the $\sigma$-dependence of the joint probability $P(y_A,y_B)_{\sigma_A}$ when integrated over all possible values of $y_A$, for all $\sigma_A$ larger than a given threshold $\sigma_{A_o}$~\cite{AppF}, i.e.: \begin{equation}\label{IWM} \textrm{IWM:} \;\;\; \frac{d}{d\sigma_A} \int dy_A P(y_A,y_B)_{\sigma_{A}} = 0, \; (\forall \sigma_A > \sigma_{A_o}). \end{equation} Thus, by assessing the validity of \eref{IWM} for a reasonable number of distinct measurement set-ups (with different system-meter coupling-strengths), experimentalists can assert whether or not they are working in the IWM regime and hence whether the measurements conducted in the laboratory are generalized von Neumann measurements of the type described here. Making sure that one is operating in the IWM regime, however, does not guarantee that the measurement of A is non-invasive. This is a crucial point that can be appreciated by rewriting the final state of the system in \eref{def_kwstate} using a first-order Taylor expansion of $ \Omega_{y_A-a_i}$ and $\Omega_{y_B-b_j}$ around $y_A$ and $y_B$ in the limit of $\sigma_{A/B}\to\infty$~\cite{AppG}: \begin{eqnarray} |\psi_{A,B}\rangle = \Omega_{y_B}\Omega_{y_A} \Big(\mathbb{I} \; \text{-} \; \frac{y_B}{\sigma_B^2}\hat{B} \Big) \Big(|\psi(\tau)\rangle \; \text{-} \; \frac{y_A}{\sigma_A^2} |\widetilde{\psi}(\tau)\rangle \Big), \label{perturbation} \end{eqnarray} where we have defined $|\psi(\tau)\rangle = \hat U_\tau |\psi(t)\rangle$ and $|\widetilde{\psi}(\tau)\rangle = \hat U_\tau \hat{A} |\psi(t)\rangle$.
Expression \eqref{perturbation} tells us that the state of the system right after two ideally-weak measurements can be written as a superposition of two states, and that only the first one, i.e., $\Omega_{y_B}\Omega_{y_A} \left(\mathbb{I} \; \text{-} \; \frac{y_B}{\sigma_B^2}\hat{B} \right) |\psi(\tau)\rangle$, contains information about the system having evolved freely from $t$ to $\tau$. Generally, the second term in \eref{perturbation} is not proportional to $\hat B\hat U_\tau |\psi(t)\rangle$ and hence it represents the non-negligible backaction of the first measurement on the subsequent evolution of the system. Only when the state of the system $|\psi(t)\rangle$ can be approximated by an eigenstate of the operator $\hat{A}$, i.e., $\hat{A} |\psi(t)\rangle \approx \langle \hat{A} \rangle |\psi(t)\rangle$, is the backaction of the first measurement avoided, and hence property A said to be macrorealistic. In this case, \eref{noncontextual} reduces to $\langle y_A(t)y_B(\tau)\rangle = \langle y_A(t) \rangle \langle y_B(\tau) \rangle$, which is the definition of macrorealism for a pure state, i.e., $P(y_{B},y_{A})_{\sigma_A} = P(y_A)_{\sigma_A}P(y_B)_{\sigma_A}$. Let us recapitulate. While an IWM of a non-macrorealistic property does induce a backaction on the system (see \eref{perturbation}), the resulting effects at the level of the reduced probabilities $\int dy_A P(y_A,y_B)_{\sigma_{A}}$ in \eref{IWM} are independent of the properties of the measuring apparatus. This is a very interesting result, valid also for general mixed states, that can be exploited to verify that a given experimental set-up can be effectively represented by a generalized von Neumann measurement model. This is precisely the type of ``good'' measuring apparatus that, as shown above, happens to be non-invasive for macrorealistic properties.
Therefore, as shown in the following, the IWM condition in combination with a given test of macrorealism can be used to close the clumsiness loophole. For general (mixed) states, macrorealism can be defined as~\cite{kofler2013condition}: \begin{equation}\label{MR_mixed} \textrm{MR:}\;\;\;P(y_A,y_B) = \sum_\lambda \rho(\lambda) P_\lambda(y_A)P_\lambda(y_B), \end{equation} where $\lambda$ specifies all properties of the system. Due to the mixedness of the initial state, the violation of macrorealism can be hidden in the statistics of the experiment. A test of macrorealism can then be based on the statistical version of the ``non-invasive measurability'' condition, also referred to as ``no-signaling in time'' (NSIT)~\cite{kofler2013condition}: \begin{equation}\label{NSIT} \textrm{NSIT:}\;\;\; P(y_B) = \int dy_A P(y_A,y_B). \end{equation} This condition, originally proposed as an alternative characterization of macrorealism, differs from the Leggett-Garg inequalities~\cite{kofler2013condition,clemente2015necessary}. However, while MR $\Rightarrow$ NSIT, the violation of NSIT can only indicate either (i) that the system is non-macrorealistic or (ii) that the system is macrorealistic but subjected to a measurement technique that happens to disturb the system~\cite{misc5}. To discard (ii) above, we propose the following: \begin{enumerate}[(S1)] \item Make sure that the measurement of A at time $t$ is carried out in the IWM regime by testing \eref{IWM} for a reasonable number of measurement set-ups $\{\sigma_A\}$. \item Equate the resulting reduced probability distributions as in \eref{NSIT}. Property A is macrorealistic if NSIT is fulfilled and non-macrorealistic otherwise. \end{enumerate} Note that NSIT $\Rightarrow$ IWM, and therefore under the fulfillment of \eref{NSIT} the condition \eref{IWM} is trivially fulfilled.
Whenever NSIT is violated, however, being in the IWM regime is the only guarantee that the experimental set-up represents a ``good'' measuring apparatus (i.e., non-invasive for macrorealistic properties). Assessing the IWM condition in \eref{IWM} requires designing a number of different measurement set-ups. The larger the number of measurement set-ups that are compared with one another, the more trustworthy the test of macrorealism will be. Put differently, the probability that \eref{IWM} is fulfilled simultaneously by a number of classically invasive measurement apparatuses (different from the generalized von Neumann measurements described here) decreases with the number of experimental set-ups. Escaping this test would simply be too conspiratorial a loophole to take seriously. Let us mention that the protocol described by (S1) and (S2) only assesses macrorealism at time $t$ and with respect to an intensive property A. In a test of genuine macrorealism the validity of \eref{NSIT} should be proven for any observable at any time. This is obviously a prohibitive experimental task, and hence it is common to associate macrorealism only with a given observable of interest~\cite{palacios2010,goggin2011violation,knee2012violation,athalye2011investigation}. In any case, genuine macrorealism is not expected in general, at least not for operators representing extensive properties such as, e.g., the angular momentum or magnetization. Yet, examples of macrorealism for general intensive properties of the type considered here, far from being atypical, can be common for large systems made of weakly-interacting particles. Consider, e.g., a system defined by separable wavefunctions $|\psi (t)\rangle=|\psi_1 (t)\rangle \otimes ... \otimes |\psi_N (t)\rangle$ where $|\psi_i(t)\rangle$ are all identical single-particle states.
To determine whether the state $|\psi(t) \rangle$ is an eigenstate of an intensive property $\hat{A}$, i.e., $\hat{A} |\psi(t) \rangle \approx \langle \hat{A} \rangle |\psi(t) \rangle$ with $\langle \hat{A} \rangle = \sum_{\xi}^N \langle \psi_\xi(t) | \hat A_\xi | \psi_\xi(t) \rangle/N$, we check the soundness of the identity $\langle \hat{A}^2 \text{-} \langle \hat{A}\rangle^2 \rangle=0$. By writing $\hat{A}^2 = N^{\text{-}2} \sum_{\xi}^N \left( \hat A_\xi \hat A_\xi+ \sum_{\nu \neq \xi}^N \hat A_\xi \hat A_\nu \right)$, it is easy to see that the expectation value $\langle \psi(t) |\hat{A}^2 |\psi(t)\rangle$ reads: $\langle \hat{A}^2(t) \rangle = N^{\text{-}2} \sum_{\xi=1}^N \Big[ \langle \hat A_\xi^2(t) \rangle + \sum_{\nu\neq \xi}^N \langle \hat A_\xi(t) \rangle \langle \hat A_\nu(t) \rangle \Big]$. Therefore, in the limit $N\to\infty$ we get $\langle \hat{A}^2 \rangle = \langle \hat{A} \rangle^2$, so we conclude that $\hat{A} |\psi(t)\rangle = \langle \hat{A} \rangle |\psi(t)\rangle$. That is, even if individually $|\psi_\xi(t)\rangle$ are not eigenstates of $\hat A_\xi$, in the limit $N\to\infty$ one could arguably speak of macrorealism of any intensive property A~\cite{misc2}. This is in contrast with the quantumness of the system itself, which, being preserved, would prevent us from talking about realism at the microscopic level~\cite{emary2013}. To illustrate the proposed test of macrorealism, we consider a simple numerical experiment. We will evaluate the autocorrelation function of the center-of-mass position operator, $\hat{{X}} = \sum_{\xi}^N \hat X_{\xi}/N$, for a number $N$ of uncoupled one-dimensional double-well oscillators (see the top panel of \fref{plot1}).
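The $1/N$ suppression of the variance invoked in this argument, $\langle \hat{A}^2\rangle - \langle \hat{A}\rangle^2 = (\langle \hat A_1^2\rangle - \langle \hat A_1\rangle^2)/N$ for identical product states, can be made concrete with a minimal single-qubit sketch; the choice $\hat A_\xi = \hat\sigma_z$ with all particles in $|+\rangle$ is purely illustrative:

```python
import numpy as np

# 1/N suppression of Var(A_hat) for a product of N identical states,
# using <A_hat^2> = (1/N)<A^2> + ((N-1)/N)<A>^2 derived in the text.
# Illustrative single-particle choice: A = sigma_z, state |+>.
A1 = np.diag([1.0, -1.0])                   # sigma_z
psi1 = np.array([1.0, 1.0]) / np.sqrt(2.0)  # |+>

mA = psi1 @ A1 @ psi1                       # <A>   = 0 for this choice
mA2 = psi1 @ (A1 @ A1) @ psi1               # <A^2> = 1 for this choice

for N in (1, 10, 100, 1000):
    var = mA2 / N + (N - 1) / N * mA ** 2 - mA ** 2
    print(N, var)                           # variance decays as 1/N
```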
Hereafter we use atomic units, $\hbar = m = 1$, and define the single-particle oscillator's Hamiltonian as $\hat H = {\hat P^2}/{2} + {\omega_0^2\hat X^2}/{2} + \cosh^{-2}{\alpha\hat X}$, where $\hat P$ is the momentum operator, and the natural frequency of the underlying harmonic oscillator is $\omega_0 = 4.3\cdot 10^{-3}$~a.u. The characteristic width of the barrier between the two wells is set to $\alpha=5\cdot 10^{-2}$~a.u. We choose $t=0$ such that the only relevant time in the discussion is $\tau$. We consider that the oscillators are all initially prepared in the ground state. Then, by taking the non-interacting limit of \eref{MB}, we find (for arbitrary initial conditions see~\cite{AppH}): \begin{eqnarray}\label{collective} \langle {y_Ay_B} \rangle_\sigma = \frac{1}{2N} \sum_{i,j}^\infty \mathcal{E}_{j,i} \mathcal{B}_{j,i} \big( a_{i} \!+\! (N \text{-} 1) \langle \hat A(t) \rangle \big) + c.c, \end{eqnarray} which in the limit of $N\to \infty$ reduces to $\langle{y_A(t)}\rangle \langle y_B(\tau) \rangle$. \begin{figure} \caption{Top panel: schematic picture of the double-well oscillator. The potential energy curve is plotted as a solid black line. The initial state of the system (green area) is the ground state of the system. Two main frequencies are involved in the dynamics of the system, viz., $\omega_0$ and $1.28\omega_0$, related, respectively, to the inter-well and intra-well dynamics. The relevant upper and lower bounds of the spectrum of $\hat X$ are denoted by $\textrm{x}$.} \label{plot1} \end{figure} The dynamics of a single oscillator for different values of $\sigma_X$ is shown in \fref{plot1}. For a projective measurement, i.e., $\sigma_X\to 0$, the dynamics presents a central resonance peak at $\omega_0$ (dashed red line). This is due to the strong perturbation induced by the projective measurement at $t=0$, which yields a subsequent dynamics characterized by a large-amplitude (over-the-barrier) oscillation.
By contrast, in the limit $\sigma_X\to \infty$ the measurement produces only a small perturbation to the initial state and yields an ensuing dynamics confined in the wells, with a characteristic frequency $\omega = 1.28\omega_0$ (dashed blue line). In between these two regimes, an infinite number of dynamics can be inferred depending on the system-meter coupling strength (solid black lines). To conclude whether the position of a single oscillator is macrorealistic, we first need to ensure that the measurement of $X$ at time $t=0$ is carried out in the IWM regime (i.e., S1), and then compare the expectation values $\langle y(0)y(\tau) \rangle$ and $\langle y(0) \rangle \langle y(\tau) \rangle$ (i.e., S2). Note that since our example only considers pure states, the condition in \eref{MR_mixed} can be replaced by the simpler one $P(y_{B},y_{A}) = P(y_A)P(y_B)$. We address (S1) and (S2) in a compact way using the quantity \begin{equation} \Delta (\sigma_X,N) = \frac{d\langle y_X(0)y_X(\tau) \rangle}{d\sigma_X}d\sigma_X - \Delta_{QC}, \end{equation} where $\Delta_{QC} = \langle y_X(0)y_X(\tau) \rangle - \langle y_X(0) \rangle \langle y_X(\tau) \rangle$. Whenever $\Delta (\sigma_X,N)$ becomes constant, \eref{IWM} is fulfilled, and whether the center-of-mass position is macrorealistic or not can be checked by simply assessing $\Delta (\sigma_X,N)$ in the asymptotic region. That is, X is macrorealistic if $\Delta (\sigma_X,N)$ vanishes in the asymptotic region and non-macrorealistic otherwise. In Fig. \ref{plot2} we plot the quantity $\Delta (\sigma_X,N)$ as a function of $\sigma_X$ and the number $N$ of oscillators. A single oscillator is non-macrorealistic, as $\Delta (\sigma_X,1)$ asymptotically converges to a non-zero value. For a large enough number of oscillators, however, the dynamics of $\hat{X}$ becomes independent of $\sigma_X$, which is a clear signature of macrorealism as defined in \eref{MR_mixed}.
In general, the $N$ oscillators become entangled right after the first measurement process, and this allows a smooth transition (exponential decay with $N$) between the non-macrorealistic and macrorealistic results. \begin{figure} \caption{$\Delta (\sigma_X,N)$ as a function of $\sigma_X$ and the number $N$ of oscillators for $\tau=33.3\pi$. Non-contextual results are differentiated from contextual results by an underlying yellow surface. Non-macrorealistic and macrorealistic results are shown in red and blue respectively.} \label{plot2} \end{figure} \textit{Conclusion.---} Quantum dynamics is ambiguous unless it is accompanied by a proper discussion of the system-meter interaction. This applies also to the great majority of tests of macrorealism, where contextuality appears in the form of a clumsiness loophole. In this Letter we have proven a sufficient condition for the non-contextuality of reduced one-time probability densities for a family of classically non-perturbative generalized von Neumann measurements. This condition, named the IWM regime, can be assessed according to \eref{IWM}, which in turn requires designing a number of different experimental set-ups. For a large enough sample of set-ups, possibly implemented at different laboratories, falsifying \eref{IWM} would require a loophole too conspiratorial to be taken seriously. Based on this result we have proposed a test of macrorealism that consists in witnessing the so-called no-signaling-in-time condition, \eref{NSIT}, under the fulfillment of the IWM regime, \eref{IWM}. The resulting protocol allows for tests in situations (e.g., unbounded and non-dichotomic properties) where Leggett-Garg inequalities and ideal negative measurements cannot be used at all. \end{document}
\begin{document} \title{Taxonomizing local versus global structure \ in neural network loss landscapes} \input Abstract.tex \input 1.Introduction.tex \input 1.5.Setup.tex \input 2.Empirical_results.tex \input 4.Related_work.tex \input 5.Conclusions.tex \textbf{Acknowledgements.} We want to thank Charles Martin, Rajiv Khanna, Zhewei Yao, and Amir Gholami for helpful discussions. Michael W. Mahoney would like to acknowledge the UC Berkeley CLTC, ARO, IARPA (contract W911NF20C0035), NSF, and ONR for providing partial support of this work. Kannan Ramchandran would like to acknowledge support from NSF CIF-2007669, CIF-1703678, and CIF-2002821. Joseph E. Gonzalez would like to acknowledge support from NSF CISE Expeditions Award CCF-1730628 and gifts from Alibaba Group, Amazon Web Services, Ant Group, CapitalOne, Ericsson, Facebook, Futurewei, Google, Intel, Microsoft, Nvidia, Scotiabank, Splunk and VMware. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be~inferred. \ifisarxiv \appendix \input Appendices.tex \fi \end{document}
\begin{document} \markboth{G. M. D'Ariano and P. Perinotti} {On the most efficient unitary transformation for programming quantum channels} \title{On the most efficient unitary transformation for programming quantum channels\footnote{\uppercase{T}his work has been co-funded by the \uppercase{EC} under the program \uppercase{ATESIT} (Contract No. IST-2000-29681), and the \uppercase{MIUR} {\em cofinanziamento} 2003.} } \author{Giacomo Mauro D'Ariano\footnote{\uppercase{W}ork partially supported by the MURI program administered by the \uppercase{U.S. A}rmy \uppercase{R}esearch \uppercase{O}ffice under Grant No. \uppercase{DAAD19-00-1-0177}} and Paolo Perinotti\footnote{\uppercase{W}ork partially supported by \uppercase{INFM} under project \uppercase{PRA-2002-CLON}. }} \address{{\em QUIT group}, INFM-CNR, Dipartimento di Fisica ``A. Volta'', via Bassi 6, 27100 Pavia, Italy\footnote{ \uppercase{P}art of the work has been carried out at the \uppercase{M}ax \uppercase{P}lanck \uppercase{I}nstitute for the \uppercase{P}hysics of \uppercase{C}omplex \uppercase{S}ystems in \uppercase{D}resden during the {\em \uppercase{I}nternational \uppercase{S}chool of \uppercase{Q}uantum \uppercase{I}nformation}, \uppercase{S}eptember 2005.}} \maketitle \abstracts{We address the problem of finding the optimal joint unitary transformation on system + ancilla which is the most efficient in programming any desired channel on the system by changing the state of the ancilla. We present a solution to the problem for $\dim(\sH)=2$ for both system and ancilla.} \keywords{Quantum information theory; channels; quantum computing; entanglement} \section{Introduction} A fundamental problem in quantum computing and, more generally, in quantum information processing\cite{Nielsen} is to experimentally achieve any theoretically designed quantum channel with a fixed device, being able to program the channel on the state of an ancilla.
This problem is of relevance for example in proving the equivalence of cryptographic protocols, e.g.\ proving the equivalence between a multi-round and a single-round quantum bit commitment\cite{dsw}. What makes the problem of channel programmability nontrivial is that exact universal programmability of channels is impossible, as a consequence of a no-go theorem for the programmability of unitary transformations by Nielsen and Chuang\cite{niels}. A similar situation occurs for universal programmability of POVM's\cite{fiurasek2,our}. It is still possible to achieve programmability probabilistically\cite{buzekproc}, or even deterministically\cite{ciravid}, though within some accuracy. Then, for the deterministic case, the problem is to determine the most efficient programmability, namely the optimal dimension of the program-ancilla for given accuracy. Recently, it has been shown\cite{our} that a dimension increasing polynomially with the precision is possible: however, even though this is a dramatic improvement compared to preliminary indications of an exponential growth\cite{fiurasek1}, it is still not optimal. In establishing the theoretical limits to state-programmability of channels and POVM's, the starting problem is to find the joint system-ancilla unitary which achieves the best accuracy for fixed dimension of the ancilla: this is exactly the problem that is addressed in the present paper. The problem turned out to be hard, even in low dimension, and here we will give a solution for the qubit case, for both system and ancilla. \section{Statement of the problem} We want to program the channel by a fixed device as follows \begin{equation}\label{partrace} \map{P}_{V,\sigma}(\rho)\doteq\Tr_2[V(\rho\otimes\sigma)V^\dag], \end{equation} with the system in the state $\rho$ interacting with an ancilla in the state $\sigma$ via the unitary operator $V$ of the programmable device (the state of the ancilla is the {\em program}).
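As a concrete illustration of the programmed map $\map{P}_{V,\sigma}$, the following numpy sketch builds a random joint unitary and program state and applies the partial trace of Eq.~(\ref{partrace}); the helper functions and the random test states are ours, purely for illustration.

```python
import numpy as np

def random_state(d, rng):
    # Illustrative helper: a random density matrix of dimension d.
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def random_unitary(d, rng):
    # Haar-distributed unitary via QR of a complex Ginibre matrix.
    Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def programmed_channel(V, rho, sigma, d1, d2):
    # P_{V,sigma}(rho) = Tr_2[ V (rho x sigma) V^dag ]
    out = V @ np.kron(rho, sigma) @ V.conj().T
    # Partial trace over the ancilla (second factor).
    return out.reshape(d1, d2, d1, d2).trace(axis1=1, axis2=3)

rng = np.random.default_rng(0)
d1 = d2 = 2                                   # qubit system and qubit ancilla
V = random_unitary(d1 * d2, rng)
rho, sigma = random_state(d1, rng), random_state(d2, rng)
out = programmed_channel(V, rho, sigma, d1, d2)
```

By construction the map is completely positive and trace preserving for every choice of $V$ and $\sigma$, which the output state inherits.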
For fixed $V$ the above map can be regarded as a linear map from the convex set of the ancilla states $\conv{A}$ to the convex set of channels for the system $\conv{C}$. We will denote by $\conv{P}_{V,\conv{A}}$ the image of the ancilla states $\conv{A}$ under such a linear map: these are the programmable channels. According to the well-known no-go theorem by Nielsen and Chuang it is impossible to program all unitary channels on the system with a single $V$ and a finite-dimensional ancilla, namely the image convex $\conv{P}_{V,\conv{A}}\subset\conv{C}$ is a proper subset of the whole convex $\conv{C}$ of channels. This opens the following problem: \begin{itemize} \item[] {\bf Problem:} {\em For given dimension of the ancilla, find the unitary operators $V$ that are the most efficient in programming channels, namely which minimize the largest distance $\varepsilon(V)$ of each channel $\map{C}\in{\conv{C}}$ from the programmable set $\conv{P}_{V,\conv{A}}$: } \begin{equation}\label{eps} \varepsilon(V)\doteq\max_{\map{C}\in{\conv{C}}}\min_{\map{P}\in\conv{P}_{V,\conv{A}}} \delta(\map{C},\map{P})\equiv\max_{\map{C}\in{\conv{C}}}\min_{\sigma\in\conv{A}} \delta(\map{C},\map{P}_{V,\sigma}). \end{equation} \end{itemize} \par As a definition of distance it would be most appropriate to use the CB-norm distance $\n{\map{C}-\map{P}}_{CB}$. However, this leads to a very hard problem. We will use instead the following distance \begin{equation}\label{del} \delta(\map{C},\map{P})\doteq \sqrt{1-F(\map{C},\map{P})}, \end{equation} where $F(\map{C},\map{P})$ denotes the Raginsky fidelity \cite{raginski}, which for a unitary map $\map{C}\equiv\map{U}=U\cdot U^\dag$ is equivalent to the channel fidelity \cite{Nielsen} \begin{equation}\label{ragfidel} F(\map{U},\map{P})=\frac{1}{d^2}\sum_i|\Tr[C_i^\dag U]|^2, \end{equation} where $\map{C}=\sum_i C_i\cdot C_i^\dag$.
Such fidelity is also related to the input-output fidelity averaged over all pure states, $\overline{F}_{io}(\map{U},\map{P})$, by the formula $\overline{F}_{io}(\map{U},\map{P})=[1+dF(\map{U},\map{P})]/(d+1)$. Therefore, our optimal unitary $V$ will maximize the fidelity \begin{equation}\label{ragfidel2} F(V)\doteq\min_{U\in\Unt{H}}F(U,V),\quad F(U,V)\doteq\max_{\sigma\in\conv{A}} F(\map{U},\map{P}_{V,\sigma}). \end{equation} \section{Reducing the problem to an operator norm} In the following we will use the GNS representation $|\Psi\rangle\!\rangle=(\Psi\otimes I)|I\rangle\!\rangle$ of operators $\Psi\in\Bnd{H}$, and denote by $\transp{X}$ the transpose with respect to the cyclic vector $|I\rangle\!\rangle$, i.e.\ $|\Psi\rangle\!\rangle=(\Psi\otimes I)|I\rangle\!\rangle=(I\otimes\transp{\Psi})|I\rangle\!\rangle$, by $X^*$ the complex conjugate operator $X^*\doteq(\transp{X})^\dag$, and write $|\upsilon^*\rangle$ for the vector such that $(|\upsilon\rangle\langle\upsilon|\otimes I)|I\rangle\!\rangle=|\upsilon\rangle|\upsilon^*\rangle$. Upon spectralizing the unitary $V$ as follows \begin{equation} V=\sum_ke^{i\theta_k}|\Psi_k\rangle\!\rangle\langle\!\langle\Psi_k|, \end{equation} we obtain the Kraus operators for the map $\map{P}_{V,\sigma}(\rho)$ \begin{equation} \map{P}_{V,\sigma}(\rho)=\sum_{nm}C_{nm}\rho C_{nm}^\dag,\qquad C_{nm}= \sum_ke^{i\theta_k}\Psi_k|\upsilon_n^*\rangle\langle\upsilon_m^*|\Psi_k^\dag\sqrt{\lambda_m}, \end{equation} where $|\upsilon_n\rangle$ denotes the eigenvector of $\sigma$ corresponding to the eigenvalue $\lambda_n$.
We then obtain \begin{equation} \begin{split} \sum_{nm}|\Tr[C_{nm}^\dag U]|^2=& \sum_{kh}e^{i(\theta_k-\theta_h)}\Tr[\Psi_k^\dag U^\dag\Psi_k\transp{\sigma}\Psi_h^\dag U\Psi_h]\\ =&\Tr[\transp{\sigma} S(U,V)^\dag S(U,V)] \end{split} \end{equation} where \begin{equation} S(U,V)=\sum_k e^{-i\theta_k}\Psi^\dag_kU\Psi_k\,.\label{esse} \end{equation} The fidelity (\ref{ragfidel2}) can then be rewritten as follows \begin{equation}\label{avfuv} F(U,V)=\frac1{d^2}\n{S(U,V)}^2. \end{equation} \section{Solution for the qubit case} The operator $S(U,V)$ in Eq. (\ref{esse}) can be written as follows \begin{equation} S(U,V)=\Tr_1[(\transp{U}\otimes I)V^*]\,. \end{equation} Changing $V$ by local unitary operators transforms $S(U,V)$ in the following fashion \begin{equation} S(U,(W_1\otimes W_2)V(W_3\otimes W_4))=W_2^*S(W_1^\dag UW_3^\dag,V)W_4^*, \end{equation} namely the local unitaries do not change the minimum fidelity, since the unitaries on the ancilla just imply a different program state, whereas the unitaries on the system just imply that the minimum fidelity is achieved for a different unitary---say $W_1^\dag UW_3^\dag$ instead of $U$. For system and ancilla both two-dimensional, one can parameterize all possible joint unitary operators as follows\cite{KrausCirac} \begin{equation} V=(W_1\otimes W_2)\exp[i(\alpha_1\sigma_1\otimes\transp{\sigma_1}+\alpha_2\sigma_2\otimes\transp{\sigma_2}+\alpha_3\sigma_3\otimes\transp{\sigma_3})](W_3\otimes W_4)\,. \label{parambipu} \end{equation} A possible quantum circuit to achieve $V$ in Eq. 
(\ref{parambipu}) can be designed using the identities \begin{equation}\label{circuit} \begin{split} &[\sigma_\alpha\otimes\sigma_\alpha,\sigma_\beta\otimes\sigma_\beta]=0,\\ &C(\sigma_x\otimes I) C=\sigma_x\otimes\sigma_x,\\ &C(I\otimes \sigma_z) C=-\sigma_z\otimes\sigma_z,\\ &\left(e^{-\frac{i\pi}{4}\sigma_z}\otimes e^{-\frac{i\pi}{4}\sigma_z}\right) C(\sigma_x\otimes I) C\left(e^{\frac{i\pi}{4}\sigma_z}\otimes e^{\frac{i\pi}{4}\sigma_z}\right)=\sigma_y\otimes\sigma_y, \end{split} \end{equation} where $C$ denotes the controlled-NOT \begin{equation} C=|0\rangle\langle 0|\otimes I+|1\rangle\langle 1|\otimes\sigma_x. \end{equation} This gives the quantum circuit in Fig. \ref{Vc}. \begin{figure} \caption{Quantum circuit scheme for the general joint unitary operator $V$ in Eq. (\ref{parambipu}).} \label{Vc} \end{figure} The problem is now reduced to studying only joint unitary operators of the form \begin{equation} V=\exp[i(\alpha_1\sigma_1\otimes\transp{\sigma_1}+\alpha_2\sigma_2\otimes\transp{\sigma_2}+\alpha_3\sigma_3\otimes\transp{\sigma_3})]\,. \end{equation} This has eigenvectors \begin{equation} |\Psi_j\rangle\!\rangle=\frac1{\sqrt2}|\sigma_j\rangle\!\rangle, \end{equation} where $\sigma_j$, $j=0,1,2,3$, denote the Pauli matrices $\sigma_0=I$, $\sigma_1=\sigma_x$, $\sigma_2=\sigma_y$, $\sigma_3=\sigma_z$. This means that we can rewrite $S(U,V)$ in Eq.~\eqref{esse} as follows \begin{equation} S(U,V)=\frac12\sum_{j=0}^3e^{-i\theta_j}\sigma_j U\sigma_j\,, \end{equation} with \begin{equation} \theta_0=\alpha_1+\alpha_2+\alpha_3\,,\quad\theta_i=2\alpha_i-\theta_0\,. \end{equation} The unitary $U$ belongs to $\SU2$, and can be written in the Bloch form \begin{equation}\label{Bloch} U=n_0I+i\vec n\cdot\vec\sigma\,, \end{equation} with $n_k\in\mathbb{R}$ and $n_0^2+|\vec n|^2=1$.
Using the identity \begin{equation} \sigma_j \sigma_l \sigma_j=\epsilon_{jl}\sigma_l,\qquad \epsilon_{j0}=\epsilon_{jj}=1,\quad\epsilon_{jl}=-1\;\text{for}\; l\neq 0,j, \end{equation} we can rewrite \begin{equation} S(U,V)=\tilde n_0 I+\tilde{\vec n}\cdot\vec\sigma, \end{equation} where \begin{equation} \begin{split} \tilde n_j&=t_jn_j,\quad 0\leq j\leq 3,\qquad t_0=\frac12\sum_{j=0}^3e^{-i\theta_j},\\ t_j&=e^{-i\theta_0}+e^{-i\theta_j}-t_0,\;1\leq j\leq 3,\qquad t_j=|t_j|e^{i\phi_j},\; 0\leq j\leq 3. \end{split} \end{equation} It is now easy to evaluate the operator $S(U,V)^\dag S(U,V)$. One has \begin{equation} \begin{split} S(U,V)^\dag S(U,V)=&v_0 I+ \vec v\cdot\vec \sigma,\\ v_0=&|\tilde n_0|^2+|\tilde{\vec n}|^2,\quad \vec v=i\left[2\Im(\tilde n_0\tilde{\vec n}^*)+\tilde{\vec n}^*\times\tilde{\vec n}\right]\,. \end{split} \end{equation} Now, the maximum eigenvalue of $S(U,V)^\dag S(U,V)$ is $v_0+|\vec v|$, and one has \begin{equation} |\vec v|^2=\sum_{i,j=0}^3\left(|\tilde n_i|^2|\tilde n_j|^2-\tilde n_i^{*2}\tilde n_j^2\right)=2\sum_{i,j=0}^3|\tilde n_i|^2|\tilde n_j|^2\sin^2(\phi_i-\phi_j), \end{equation} whence the norm of $S(U,V)$ is given by \begin{equation}\label{SUV} \n{S(U,V)}^2=\sum_{j=0}^3n_j^2|t_j|^2+\sqrt{2\sum_{i,j=0}^3n_i^2n_j^2|t_i|^2|t_j|^2\sin^2(\phi_i-\phi_j)}\,. \end{equation} Notice that the unitary $U$ which is programmed with minimum fidelity will in general not be unique, since the expression for the fidelity depends only on $\{n_j^2\}$. Notice also that, using the decomposition in Eq.~\eqref{parambipu}, the minimum fidelity depends only on the phases $\{\theta_j\}$, and the local unitaries enter only the definitions of the optimal program state and of the worst-programmed unitary. It is convenient to write Eq. (\ref{SUV}) as follows \begin{equation}\label{SUV2} \n{S(U,V)}^2=\vec u\cdot\vec t+\sqrt{ \vec u\cdot\vec T\vec u}\,.
\end{equation} where $\vec u=(n_0^2 ,n_1^2 ,n_2^2 ,n_3^2 )$, $\vec t=(|t_0|^2 ,|t_1|^2 ,|t_2|^2 ,|t_3|^2)$, and $\vec T_{ij}=|t_i|^2|t_j|^2\sin^2(\phi_i-\phi_j)$. One has the bounds \begin{equation} \vec u\cdot\vec t+\sqrt{ \vec u\cdot\vec T\vec u}\geq \vec u\cdot\vec t\geq \min_j |t_j|^2, \end{equation} and the bound is achieved at one of the four extremal points $u_l=\delta_{lj}$ of the domain of $\vec u$, which is the convex set $\{\vec u:\; u_j\geq 0,\,\sum_j u_j=1\}$ (the standard simplex in $\mathbb{R}^4$). Therefore, the fidelity minimized over all unitaries is given by \begin{equation} F(V)=\frac1{d^2}\min_j|t_j|^2. \end{equation} The optimal unitary $V$ is now obtained by maximizing $F(V)$. We then need to consider the decomposition in Eq.~\eqref{parambipu}, and maximize the minimum among the four eigenvalues of $S(U,V)^\dag S(U,V)$. Notice that $t_j=\sum_{\mu}H_{j\mu}e^{i\theta_\mu}$, where $H$ is the Hadamard matrix \begin{equation} H=\frac12 \begin{pmatrix} 1&1&1&1\\ 1&1&-1&-1\\ 1&-1&1&-1\\ 1&-1&-1&1 \end{pmatrix}, \end{equation} which is unitary, and consequently $\sum_j|t_j|^2=\sum_j |e^{i\theta_j}|^2=4$. This implies that $\min_j|t_j|\leq1$. We now provide a choice of phases $\theta_j$ such that $|t_j|=1$ for all $j$, achieving the maximum fidelity allowed. For instance, we can take $\theta_0=0,\theta_1=\pi/2,\theta_2=\pi,\theta_3=\pi/2$, corresponding to the eigenvalues $i,1,-i,1$ for $V$. Another solution is $\theta_0=0,\theta_1=-\pi/2,\theta_2=\pi,\theta_3=-\pi/2$. One can also set $\theta_i\to -\theta_i$. The eigenvalues of $S(U,V)^\dag S(U,V)$ are then $1,1,1,1$, while for the fidelity we have \begin{equation} F\doteq\max_{V\in\Unt{H^{\otimes 2}}}F(V)=\frac{1}{d^2}=\frac{1}{4}, \end{equation} and the corresponding optimal $V$ has the form \begin{equation}\label{optV} V=\exp\left[\pm i\frac{\pi}{4}\left(\sigma_x\otimes\sigma_x\pm\sigma_z\otimes\sigma_z\right)\right].
\end{equation} A possible circuit scheme for the optimal $V$ is given in Fig. \ref{circuitV}. \begin{figure} \caption{Quantum circuit scheme for the optimal unitary operator $V$ in Eq. (\ref{optV}).} \label{circuitV} \end{figure} \par We now show that such fidelity cannot be achieved by any $V$ of the controlled-unitary form \begin{equation} V=\sum_{k=1}^2V_k\otimes|\psi_k\rangle\langle\psi_k|,\qquad \langle\psi_1|\psi_2\rangle=0,\quad V_1, V_2\hbox{ unitary on }\sH\simeq\mathbb{C}^2. \end{equation} For the spectral decomposition $V_k=\sum_{j=1}^2e^{i\theta^{(j)}_k}|\phi^{(k)}_j\rangle\langle\phi^{(k)}_j|$ the eigenvectors of $V$ are $|\Psi_{jk}\rangle\!\rangle=|\phi^{(k)}_j\rangle|\psi_k\rangle$, and the corresponding operators are $\Psi_{jk}=|\phi^{(k)}_j\rangle\langle\psi_k^*|$, namely the operator $S(U,V)$ is \begin{equation} S(U,V)=\sum_{j,k}e^{-i\theta^{(j)}_k}|\psi_k^*\rangle\langle\phi^{(k)}_j|U|\phi^{(k)}_j\rangle\langle\psi_k^*|\,, \end{equation} with singular values $|\sum_{j=1}^2e^{-i\theta^{(j)}_k}\langle\phi^{(k)}_j|U|\phi^{(k)}_j\rangle|=|\Tr[V_k^\dag U]|$. Then, the optimal program state is $|\psi_h\rangle$, with $h=\arg\max_k|\Tr[V_k^\dag U]|$, and the corresponding fidelity is \begin{equation} F(U,V)=\frac{1}{4}|\Tr[V_h^\dag U]|^2\,, \end{equation} and one has \begin{equation} F(V)=\min_U F(U,V)=0, \end{equation} since for any pair of unitaries $V_k$ there always exists a unitary $U$ such that $\Tr[V^\dag_k U]=0$ for $k=1,2$.
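This claim is easy to check numerically; the sketch below (numpy; the construction is ours, but it mirrors the Bloch-vector argument given next) draws two random SU(2) unitaries and exhibits a unitary $U$ with vanishing Hilbert-Schmidt overlap with both.

```python
import numpy as np

# An SU(2) element is U = n0*I + i*(n1*sx + n2*sy + n3*sz) with a real
# unit 4-vector n, and Tr[Vk^dag U] is twice the Euclidean inner product
# of the corresponding 4-vectors.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

def bloch_to_su2(n):
    return n[0] * I2 + 1j * (n[1] * sx + n[2] * sy + n[3] * sz)

rng = np.random.default_rng(1)
m1, m2 = (v / np.linalg.norm(v) for v in rng.normal(size=(2, 4)))
V1, V2 = bloch_to_su2(m1), bloch_to_su2(m2)

# A vector orthogonal to both m1 and m2 always exists in R^4: take it
# from the null space of the 2x4 matrix [m1; m2].
_, _, Vt = np.linalg.svd(np.stack([m1, m2]))
u = Vt[-1]                        # right-singular vector of the null space
U = bloch_to_su2(u / np.linalg.norm(u))

print(abs(np.trace(V1.conj().T @ U)), abs(np.trace(V2.conj().T @ U)))
```

Both overlaps come out at numerical zero, so the program state cannot help: the minimum fidelity of a controlled-unitary device vanishes.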
Indeed, writing the unitaries in the Bloch form (\ref{Bloch}), their Hilbert-Schmidt scalar product is equal to the Euclidean scalar product in $\mathbb{R}^4$ of their corresponding vectors, whence it is always possible to find a vector orthogonal to any given pair of vectors in $\mathbb{R}^4$. The corresponding $U$ is then orthogonal to both $V_k$, and the minimum fidelity for any controlled-unitary is zero. \end{document}
\begin{document} \title {Continuous-variable QKD over 50km commercial fiber} \author{Yichen Zhang$^{1,2}$, Zhengyu Li$^1$, Ziyang Chen$^1$, Christian Weedbrook$^3$, Yijia Zhao$^2$, Xiangyu Wang$^2$, Yundi Huang$^2$, Chunchao Xu$^2$, Xiaoxiong Zhang$^2$, Zhenya Wang$^2$, Mei Li$^2$, Xueying Zhang$^2$, Ziyong Zheng$^2$, Binjie Chu$^2$, Xinyu Gao$^2$, Nan Meng$^2$, Weiwen Cai$^4$, Zheng Wang$^5$, Gan Wang$^1$, Song Yu$^2$, Hong Guo$^{1,2}$} \address{$^1$State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics Engineering and Computer Science, Center for Quantum Information Technology, Center for Computational Science and Engineering, Peking University, Beijing 100871, China} \address{$^2$State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China} \address{$^3$Xanadu, 372 Richmond St W, Toronto, M5V 2L7, Canada} \address{$^4$China Mobile Group Guangdong Co., Ltd., Guangzhou 510623, China} \address{$^5$Xi'an Uotocom Network Technology Co., Ltd., Xi'an 710075, China} \ead{[email protected], [email protected]} \begin{indented} \item[]May 2019 \end{indented} \begin{abstract} The continuous-variable version of quantum key distribution (QKD) offers the advantages (over discrete-variable systems) of higher secret key rates in metropolitan areas, as well as the use of standard telecom components that can operate at room temperature. An important step in the real-world adoption of continuous-variable QKD is the deployment of field tests over commercial fibers. Here we report two different field tests of a continuous-variable QKD system through commercial fiber networks in Xi'an and Guangzhou over distances of $30.02$~{\rm km} ($12.48$~{\rm dB}) and $49.85$~{\rm km} ($11.62$~{\rm dB}), respectively. 
We achieve secure key rates two orders-of-magnitude higher than previous field-test demonstrations by employing an efficient calibration model with one-time evaluation. This accomplishment is also realized by developing a fully automatic control system which stabilizes the system noise, and by applying a rate-adaptive reconciliation method which maintains high reconciliation efficiency with high success probability in fluctuating environments. Our results pave the way to deploying continuous-variable QKD in metropolitan settings. \end{abstract} \section*{Introduction} Quantum key distribution (QKD)~\cite{Gisin_RevModPhys_2002,Scarani_RevModPhys_2009} is one of the most practical applications in the field of quantum information. Its primary goal is to establish secure keys between two legitimate users, typically named Alice and Bob. Continuous-variable (CV) QKD~\cite{Weedbrook_RevModPhys_2012,Diamanti_Entropy_2015} has attracted much attention in the past few years, mainly because it uses standard telecom components that operate at room temperature and offers higher secret key rates (bits per channel use) over metropolitan areas~\cite{Jouguet_NatPhoton_2013,Pirandola_NatPhoton_2015,Weedbrook_PhysRevA_2014,Soh_PhysRevX_2015,Qi_PhysRevX_2015,Kumar_NewJPhys_2015}. We deployed the CV-QKD protocol based on coherent states~\cite{Grosshans_PhysRevLett_2002,Grosshans_Nature_2003,Weedbrook_PhysRevLett_2004} with Gaussian modulation, which has been proven secure against arbitrary attacks~\cite{Grosshans_PhysRevLett_2005,Navascues_PhysRevLett_2005, Pirandola_PRL_2008, Pirandola_PhysRevLett_2009}. Such an attack is optimal in the asymptotic limit~\cite{Renner_PhysRevLett_2009} and is also considered in the finite-size regime~\cite{Leverrier_PhysRevLett_2013,Leverrier_PhysRevLett_2015,Leverrier_PhysRevLett_2017}.
Furthermore, many experimental demonstrations of CV-QKD protocols have also been achieved~\cite{Lance_PhysRevLett_2005,Lodewyck_PhysRevA_2007,Qi_PhysRevA_2007,Khan_PhysRevA_2013,Pirandola_NatPhoton_2015}. Generally speaking, there are three basic criteria for a practical QKD system: automatic operation, stability in a real-world environment, and a moderate secure key rate~\cite{Diamanti_npj_2016}. Up to now, all previous long-distance CV-QKD demonstrations have been undertaken in the laboratory, without the perturbations of a field environment~\cite{Jouguet_NatPhoton_2013,Huang_SciRep_2015}. More recently, demonstrations based on continuous-variable quantum teleportation and continuous-variable Einstein-Podolsky-Rosen entangled states in a laboratory environment have been reported~\cite{Wang_PRApplied_2018, Huo_SciAdv_2018}. The longest field tests of a CV-QKD system have been achieved over a $17.52$~{\rm km} deployed fiber ($10.25$~{\rm dB} loss)~\cite{Huang_OptLett_2016} and a $17.7$~{\rm km} deployed fiber ($5.6$~{\rm dB} loss)~\cite{Jouguet_OptExpr_2012}, where the secure key rates were $0.2$~{\rm kbps} and $0.3$~{\rm kbps}, respectively. Compared with field tests of discrete-variable QKD systems~\cite{Chen_OptExpr_2010,Sasaki_OptExpr_2011,Stucki_NewJPhys_2011,Wang_OptExpr_2014,Shimizu_JLightwaveTechnol_2014}, these demonstrations have limited transmission distances and low key rates. The demonstration of field tests over longer metropolitan distances using CV-QKD has yet to be achieved. There are several challenges in developing a practical CV-QKD system from the laboratory to the real world. Deployed commercial dark fibers are inevitably subject to much stronger perturbations due to changing environmental conditions and physical stress. This in turn causes disturbances of the transmitted quantum states. Deployed commercial dark fibers also suffer from higher losses due to splices, sharp bends and inter-fiber coupling.
The software and hardware of CV-QKD modules must not only be designed to cope with all the conditions affecting the transmission fiber, but must also be robustly engineered to operate in premises designed for standard telecom equipment. Furthermore, as the systems need to run continuously without frequent attention, they should also be designed to automatically recover from any errors and shield the end users from service interruptions. In this paper, we deploy a CV-QKD protocol based on coherent states and achieve secure commercial-fiber transmission distances of $30.02$~{\rm km} ($12.48$~{\rm dB} loss) and $49.85$~{\rm km} ($11.62$~{\rm dB} loss) in Xi'an and Guangzhou, two cities in China, respectively. The corresponding secret key rates are two orders-of-magnitude higher than those of previous demonstrations~\cite{Huang_OptLett_2016,Jouguet_OptExpr_2012,Fossier_NewJPhys_2009}. We achieve this by developing a fully automatic control system that achieves stable excess noise, and by applying a rate-adaptive reconciliation protocol to achieve high reconciliation efficiency with high success probability. The high secret key rate has been accomplished by using our newly proposed one-time calibration model, which provides a more efficient calibration procedure as well as simpler experimental implementations. \begin{figure} \caption{ (Color online) Optical layout for the field tests of the continuous-variable QKD system. Alice sends an ensemble of 40 ns Gaussian-modulated coherent states to Bob with a strong local oscillator multiplexed in time using a delay line and orthogonal polarization via a polarizing beamsplitter. The states are demultiplexed on Bob's side with another polarizing beamsplitter combined with an active dynamic polarization controller.
After demultiplexing, the signal and local oscillator interfere on a shot-noise-limited balanced pulsed homodyne detector with a phase modulator on the local oscillator path to perform the random choice of the measured signal quadrature. Laser: continuous-wave laser; AM: amplitude modulator; PM: phase modulator; BS: beamsplitter; VATT: variable attenuator; PBS: polarizing beamsplitter; DPC: dynamic polarization controller; PD: photodetector.} \label{fig1} \end{figure} \section*{Results} The CV-QKD experimental setup is illustrated in Fig.~1 and consists of two legitimate users, Alice and Bob. To begin with, Alice generates 40 ns coherent light pulses with a $1550$~nm telecom laser diode and two high-extinction amplitude modulators. Here the linewidth of the laser is 0.5 kHz and the extinction ratio of the amplitude modulators is $45$~{\rm dB}. These two amplitude modulators are pulsed with a duty cycle of 20\% at a repetition frequency of 5 MHz. These pulses are split into weak signals and strong local oscillators (LO) with a $1/99$ beamsplitter, which ensures that the LO has enough power. The signal pulse is modulated with a centered Gaussian distribution using an amplitude and a phase modulator, where the modulation data are derived from our self-designed quantum random number generator~\cite{Xu_QST_2019, Zheng_QST_2019}. The modulation variance is controlled using a variable attenuator and optimized for the channel loss of $12$~{\rm dB}. The signal pulse is delayed by $45$~ns with respect to the LO pulse using a $9$~m delay line. The devices used by Alice are polarization-maintaining. Both pulses are multiplexed with orthogonal polarizations using a polarizing beamsplitter. The time- and polarization-multiplexed pulses are sent to Bob in one fiber to reduce the phase noise caused in the transmission process.
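The Gaussian modulation step can be sketched as follows. This is a toy illustration with hypothetical numbers, not the parameters of the deployed system, and the quadrature-variance convention below (each quadrature drawn with variance $V_A$ in shot-noise units) is one common choice among several.

```python
import numpy as np

# Each signal pulse carries quadratures (x, p) drawn from a centered
# Gaussian of modulation variance V_A (shot-noise units; our toy value).
rng = np.random.default_rng(7)
V_A = 4.0                 # hypothetical modulation variance
n_pulses = 100_000
x, p = rng.normal(0.0, np.sqrt(V_A), size=(2, n_pulses))

# The amplitude and phase modulators realize the coherent state |x + ip>:
amplitude = np.hypot(x, p)        # drive value for the amplitude modulator
phase = np.arctan2(p, x)          # drive value for the phase modulator

print(x.var(), p.var())           # each close to V_A
```

In the real system these drive values would additionally be scaled by the modulators' transfer functions and the variable attenuator setting.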
After the fiber link, Bob first uses the dynamic polarization controller to compensate the polarization drift, so that the polarization extinction ratio of the polarization demultiplexing is kept at a high level. In our field tests, the polarization extinction ratio after compensation is about $30$~{\rm dB}. A second delay line on Bob's side, which corresponds to the one used by Alice, allows for a temporal superposition of the signal and LO pulses. A part of the LO, around 10\%, is used in the system for clock synchronization, data synchronization, and LO monitoring. A phase modulator on the LO path allows for a random selection of the measured signal quadrature and compensation of the phase drift between the signal path and the LO path. Then the signal and LO interfere on a shot-noise-limited balanced pulsed homodyne detector~\cite{Xiaoxiong_PJ_2018}. A time-delay fiber and an attenuator are used to compensate the bias of the beamsplitter and the photodetectors. \begin{figure*} \caption{ (Color online) Bird's-eye view of the two field-environment CV-QKD systems. (a) Field test environment in Xi'an, where Alice is situated at the HuoJuLu Service Room (Xi'an Uotocom Network Technology Co., Ltd.) and Bob is situated at the HeShengJingGuang Service Room (Xi'an Uotocom Network Technology Co., Ltd.). The deployed fiber length is $30.02$~{\rm km}. (b) Field test environment in Guangzhou, where Alice is situated at the QingHeDong Service Room and Bob at the FangCun Service Room (China Mobile Group Guangdong Co., Ltd.); the deployed fiber length is $49.85$~{\rm km}.} \label{fig2} \end{figure*} For the fiber link, we undertook the field tests in commercial fiber networks in two different cities in China. The first one is in Xi'an and operated by Xi'an Uotocom Network Technology Co., Ltd. The second one is in Guangzhou and operated by China Mobile Group Guangdong Co., Ltd. The channel losses of these two fiber links are similar, i.e., approximately $12$~{\rm dB}. However, the transmission distances are very different due to the different types of physical networks. As illustrated in Fig.~2~(a), the first network is operated in the inner city of Xi'an, which has a higher channel loss per kilometer.
The total deployed fiber length is $30.02$~{\rm km} and the transmission loss is $12.48$~{\rm dB} for the fiber link (0.416 {\rm dB/km}). The second network is between two different districts in Guangzhou, as shown in Fig.~2~(b); the total deployed fiber length is $49.85$~{\rm km} and the transmission loss is $11.62$~{\rm dB} for the fiber link (0.233 {\rm dB/km}). For the first field test, Alice is placed at the site of the HuoJuLu Service Room in Xi'an (\emph{N}$34{^{\circ}}15'12''$, \emph{E}$109^{\circ}0'44''$) and Bob at the site of the HeShengJingGuang Service Room (\emph{N}$34^{\circ}14'8''$, \emph{E}$108^{\circ}53'49''$). For the second field test, Alice is placed at the site of the QingHeDong Service Room in Guangzhou (\emph{N}$22^{\circ}59'20''$, \emph{E}$113^{\circ}24'9''$) and Bob at the site of the FangCun Service Room (\emph{N}$23^{\circ}5'49''$, \emph{E}$113^{\circ}14'8''$). To overcome the channel perturbations due to changing environmental conditions, we developed several automatic feedback systems to calibrate the time, polarization, and phase of the transmitted quantum states. For the time calibration shown in Fig.~1, there are two modules. One is a data synchronization module, and the other is a clock synchronization module that establishes a synchronous clock between the two remote sites. Here we utilize the LO to perform both the data synchronization and the clock synchronization. The traditional way involves using the modulated signal for the data synchronization module, which requires much more data to eliminate the effects of noise (the signal-to-noise ratio (SNR) of the signal is always extremely low ($<1$) over long transmission distances) and whose success probability cannot reach $100\%$. For the polarization calibration, we adopt a dynamic polarization controller (DPC) composed of an electric polarization controller (EPC), a polarization beam splitter (PBS), and a photodetector.
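A toy model of such a polarization feedback loop is sketched below. This is our simplification, not the actual DPC firmware: the fiber is reduced to a single unknown rotation angle, and a dither-and-step controller minimizes the leakage power seen by the monitoring photodetector behind the PBS.

```python
import numpy as np

# Toy model: the fiber rotates the polarization by an unknown angle
# theta_drift; the EPC applies a correction theta_c; the photodetector
# behind the PBS monitors the leakage power ~ sin^2(theta_drift + theta_c).
def leakage(theta_drift, theta_c):
    return np.sin(theta_drift + theta_c) ** 2

theta_drift, theta_c, step = 0.4, 0.0, 0.01   # hypothetical values
for _ in range(2000):
    # Dither: probe both directions, move toward the smaller leakage.
    if leakage(theta_drift, theta_c + step) < leakage(theta_drift, theta_c - step):
        theta_c += step
    else:
        theta_c -= step

extinction_db = -10 * np.log10(leakage(theta_drift, theta_c) + 1e-12)
print(extinction_db)   # compensated extinction ratio, in dB
```

The residual misalignment settles at the dither step size, so a smaller step (or a two-stage coarse/fine search) trades convergence speed against the final extinction ratio.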
With this polarization calibration system operated in real time, the polarization mode is maintained and the power ratio of the LO arm to the signal arm remains above $30$~{\rm dB}. For the phase calibration, we utilize the phase modulator in the LO path (the same one used for basis sifting) to compensate the phase drift. Alice inserts reference data into the signal sequence with a certain period, which is determined by the frequency of the phase drift. In the field tests, $100$ reference data are inserted into every $1000$ quantum signals to perform the phase estimation and compensation, since the major contribution to the phase drift comes from the separate arms of Alice's and Bob's MZIs. The phase drift between the LO and the signal is obtained from the measurement results of the reference signal. Then the feedback voltage of the phase modulator is calculated from the half-wave voltage and loaded onto the phase modulator in real time. The shot-noise unit (SNU) is vital to a CV-QKD system, as the calibrated SNU is used as a normalization parameter for the quadrature measurement results. According to the security analysis of CV-QKD, to evaluate the secret key rate we first need to determine the corresponding covariance matrix, whose elements are normalized by the calibrated SNU. Therefore, the calibrated SNU directly affects the security analysis of the secret key rate. In these field tests the quantum signal path of the homodyne detection is randomly cut off to perform the SNU measurement. After the measurements, Alice and Bob perform postprocessing to distill the final key. The postprocessing of a CV-QKD system contains four parts: basis sifting, parameter estimation, information reconciliation, and privacy amplification. To support the high repetition frequency of a CV-QKD system, all parts of the postprocessing need to be executed at high speed.
The computational complexity of basis sifting and parameter estimation is low. We obtain high-speed execution on a CPU, where the average speeds of basis sifting and parameter estimation reach up to $17.36$~Mbits/s and $16.49$~Mbits/s, respectively. In the information reconciliation part, we combine multidimensional reconciliation and multi-edge-type LDPC (MET-LDPC) codes to achieve high efficiency at low SNRs~\cite{Leverrier_PhysRevA_2008,Richardson_MutiLDPC_2002}. However, the speed of the error correction is limited on a CPU because of the long block code length and the many iterations of the belief propagation decoding algorithm. We decode multiple code words simultaneously on a GPU and obtain speeds up to $30.39$~Mbits/s on an NVIDIA TITAN Xp GPU~\cite{Xiangyu_SR_2017}. Privacy amplification is implemented by a hash function (Toeplitz matrices in our scheme)~\cite{Krawczyk_Toeplitz_1994,Fung_PhysRevA_2010}. The average speed of privacy amplification reaches $1.35$~Gbps at any input length on the GPU~\cite{Xiangyu_PJ_2018}. Taking finite-size effects into account, the secret key rate, bounded by collective attacks, is given by~\cite{Leverrier_PhysRevA_2010} \begin{equation} K = f\left( {1 - \alpha } \right)\left( {1 - FER} \right)\left[ {\beta I\left( {A:B} \right) - \chi \left( {B:E} \right) - \Delta \left( n \right)} \right], \end{equation} where $\beta \in[0,1]$ is the reconciliation efficiency, $I(A:B)$ is the classical mutual information between Alice and Bob, $\chi(B:E)$ is the Holevo quantity~\cite{Nielsen_QCQI}, $\Delta \left( n \right)$ is related to the security of the privacy amplification, FER is the frame error rate related to the reconciliation efficiency of a fixed error correction matrix, $f$ is the repetition rate of the QKD system ($5$~{\rm MHz} for our system), and $\alpha$ is the system overhead, i.e., the percentage of the signals that cannot be used to distill the final secret keys.
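As an illustration of Eq.~(1), the following sketch evaluates the key-rate expression numerically. Only $f = 5$~MHz, $\beta = 0.95$, $\mathrm{FER} = 0.1$, and the SNR value $0.0296$ come from the text; the values chosen below for $\alpha$, $\chi(B:E)$, and $\Delta(n)$ are placeholders for illustration, not measured quantities.

```python
import math

def secret_key_rate(f, alpha, fer, beta, i_ab, chi_be, delta_n):
    """Finite-size secret key rate of Eq. (1), in bits per second."""
    return f * (1 - alpha) * (1 - fer) * (beta * i_ab - chi_be - delta_n)

# Mutual information for a Gaussian channel: I(A:B) = (1/2) log2(1 + SNR).
snr = 0.0296
i_ab = 0.5 * math.log2(1 + snr)

# alpha, chi(B:E) and Delta(n) are assumed values, for illustration only.
k = secret_key_rate(f=5e6, alpha=0.1, fer=0.1, beta=0.95,
                    i_ab=i_ab, chi_be=0.015, delta_n=0.002)
# k is positive only while beta * I(A:B) exceeds chi(B:E) + Delta(n).
```

The prefactor $f(1-\alpha)(1-\mathrm{FER})$ only rescales the rate; the sign of the bracket decides whether any secret key can be distilled at all.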
In our system, some of the signals are used for the phase compensation. \begin{figure} \caption{ (Color online) 24-hour continuous test results for the signal-to-noise ratio (SNR) and the reconciliation efficiency; in brown, the signal-to-noise ratio of our system; in blue, the reconciliation efficiency without the rate-adaptive method~\cite{Text}.} \label{fig3} \end{figure} To achieve a high key rate, we need a high reconciliation efficiency and a low FER, which is a trade-off in the experimental realization. For a fixed error correction matrix, achieving a high reconciliation efficiency leads to a high FER. Thus, in the experiment, we need to find the efficiency and FER that maximize the secret key rate according to their relationship. Furthermore, we also need to decrease the system overhead so that more pulses contribute to the final keys. To obtain a high reconciliation efficiency together with a low FER, we utilize the rate-adaptive reconciliation protocol~\cite{Xiangyu_Arxiv_2017}. As shown in Fig.~3, the reconciliation efficiency of decoding with the original MET-LDPC code~\cite{Jouguet_NatPhoton_2013,Xiangyu_Arxiv_2017, Milicevic_NPJ_2018} varies drastically (blue dash-dotted line), because the practical SNRs of the quantum channel float within a range. However, rate-adaptive reconciliation~\cite{Xiangyu_Arxiv_2017} keeps the efficiency high even if the SNRs are unstable. For the parity check matrix with a code rate of $0.02$, the maximum reconciliation efficiency of $97.99\%$ is achieved when the SNR is $0.0287$. Although such a high reconciliation efficiency can be achieved, the secret key rate is not necessarily high because the FER is also higher. According to Eq.~(1), the secret key rate is influenced by both the reconciliation efficiency and the FER. Technically, the trends of the efficiency and the FER are opposite. Thus, we can find the optimal trade-off between efficiency and FER to maximize the secret key rate.
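The trade-off described above can be sketched numerically: dropping the constant prefactors of Eq.~(1), each operating point is scored by $(1-\mathrm{FER})\,(\beta I(A{:}B)-\chi(B{:}E))$. Only the pair $(\beta=0.95,\ \mathrm{FER}=0.1)$ comes from the text; the FER values paired with the higher efficiencies and the Holevo term are hypothetical, for illustration only.

```python
import math

snr = 0.0296
i_ab = 0.5 * math.log2(1 + snr)   # I(A:B) for a Gaussian channel
chi_be = 0.015                    # assumed Holevo term, illustration only

# (beta, FER) pairs; only (0.95, 0.1) is from the text, the FERs paired
# with the higher efficiencies are hypothetical.
candidates = [(0.9799, 0.60), (0.9600, 0.30), (0.9500, 0.10)]

def score(beta, fer):
    """Eq. (1) without the constant prefactors f and (1 - alpha)."""
    return (1 - fer) * (beta * i_ab - chi_be)

best = max(candidates, key=lambda bf: score(*bf))
# With these assumed numbers, the moderate-efficiency, low-FER point wins.
```

This mirrors the observation in the text that the highest efficiency does not necessarily yield the highest key rate once its FER penalty is accounted for.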
Practically, for the parity check matrix with an original code rate of $0.02$, we obtain the maximum secret key rate when the practical SNR is $0.0296$; here the efficiency is $95\%$ and the FER is $0.1$. Thus, we use rate-adaptive reconciliation to maximize the secret key rate of our system by balancing the efficiency and the FER; otherwise, the secret key rate would be reduced. In practice, it is difficult to find a fixed relationship between the FER and the reconciliation efficiency, because it depends on many factors, including the degree distribution and construction method of the error correction codes as well as the distribution and randomness of the raw keys. The results in this paper are obtained with a fixed error correction code chosen for this purpose. \begin{figure} \caption{ (Color online) 24-hour continuous test results for the secret key rate and the excess noise; the lower blue marks show the excess noise; the upper red marks show the secret key rate.} \label{fig4} \end{figure} For the system overhead, we can swap the order of parameter estimation and information reconciliation to nearly double the final key rate~\cite{Lupo_PRL_2018, Xiangyu_Arxiv_2018}. According to Eq.~(1), the fraction of the data used to extract the secret keys has an important impact on the secret key rate. In previous CV-QKD systems, only $n$ variables are used to extract the keys; the other $m = N - n$ variables are used to estimate the quantum channel parameters. Taking into account the influence of the finite-size regime, the ratio of $m$ to $N$ is large for long-distance CV-QKD systems. Generally, half of the data is used for parameter estimation, which reduces the secret key rate by $50\%$. In our system, we swap the order of parameter estimation and information reconciliation and use all the data to extract the keys. If Alice and Bob successfully correct the errors between them, Alice can recover Bob's raw keys.
If the decoding fails, Bob discloses his raw keys, so that whether the decoding succeeds or not, Alice can use the whole set of raw keys for parameter estimation; such failures are in any case unlikely. At the same time, since we use all of the raw keys to distill the secret keys, the secret key rate is improved. Using our CV-QKD system integrated with the feedback systems, we accumulated raw data for $24$ hours in Xi'an and $3$ hours in Guangzhou. During these periods, the automatic feedback systems worked effectively (details in Appendix~A). Compared with laboratory experiments, the field tests face harsher environmental perturbations. For instance, the field environment changes the arrival time of the signals. The time calibration system monitors the time shift and then compensates for it effectively. The achieved timing calibration precision is below $200$~ps, which is much smaller than the $40$~{\rm ns} pulse width of the signal laser. Furthermore, with the help of the aforementioned polarization calibration system, we compensated for the polarization change and achieved a fluctuation of less than $5\%$ in the signal path. Finally, the achieved phase calibration precision is below $1^\circ$; according to Ref.~\cite{Huang_SciRep_2015}, the residual phase drift after the phase compensation causes approximately $0.0004$ of excess noise, which is small compared to the total excess noise. \begin{figure} \caption{ (Color online) Secret key rates of the experiments in the commercial fibers as well as the simulation results, shown as a function of fiber loss and length. The red and purple five-pointed stars correspond to the experimental results in Xi'an and Guangzhou with fiber transmission losses of $12.48$~{\rm dB} and $11.62$~{\rm dB}, respectively.} \end{figure} To further improve the experimental system, we propose and apply a new calibration model with one-time evaluation.
Compared to the rather complicated two-time calibration process, in the one-time evaluation model we only need to measure the homodyne detector output once, with the LO path on, and take the result as the shot-noise unit (SNU). The new calibration model also helps simplify the system implementation, as only one optical switch is needed during the calibration procedure. Moreover, the statistical fluctuation of the SNU introduced by the calibration procedure is reduced. The secret key rate versus channel loss curves are depicted in Fig.~5: the red solid curve represents the secret key rate in the asymptotic limit when applying our newly proposed calibration model, and the black dash-dotted curve represents the secret key rate in the asymptotic limit under the original calibration model. It can be observed that the performance of the one-time calibration model and the original two-time calibration model is almost the same, except for a slight difference when the channel loss exceeds $20$~dB. The 24-hour continuous test results in the Xi'an network for the secret key rates and excess noises are shown in Fig.~4, where the average excess noise of our system is $4\%$ of the SNU. It should be noted that a composable security analysis, which would provide a more complete treatment under practical conditions, can be carried out in future work. The secret key rate is $7.57$~{\rm kbps} in the asymptotic limit, while the key rate is $5.91$~{\rm kbps} in the finite-size regime (details can be found in Appendix~E). The average reconciliation efficiency is $0.9501$ with an average SNR of $0.0295$ in our continuous test in Guangzhou. In the continuous test in the Guangzhou network, the secret key rate is $7.43$~{\rm kbps} in the asymptotic limit and $5.77$~{\rm kbps} in the finite-size regime (detailed in Appendix~E). Thanks to the feedback procedures, this CV-QKD system can run continuously and automatically for long periods of time.
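The difference between the two calibration models can be illustrated with a minimal Gaussian simulation. The shot and electronic noise variances below are arbitrary illustrative values, not the experimental ones: the one-time evaluation takes the LO-on variance directly as the SNU, whereas the two-time evaluation subtracts the LO-off (electronic) variance.

```python
import random

random.seed(1)
N = 200_000
v_shot, v_ele = 1.0, 0.1          # assumed shot and electronic variances

# LO on: total variance = shot + electronic; LO off: electronic only.
lo_on  = [random.gauss(0.0, (v_shot + v_ele) ** 0.5) for _ in range(N)]
lo_off = [random.gauss(0.0, v_ele ** 0.5) for _ in range(N)]

var = lambda xs: sum(x * x for x in xs) / len(xs)
snu_ote = var(lo_on)               # one-time evaluation: single measurement
snu_tte = var(lo_on) - var(lo_off) # two-time evaluation: subtraction
# snu_ote ~ 1.1 and snu_tte ~ 1.0 here, i.e. SNU_OTE = SNU_TTE + v_ele.
```

The simulated estimates reproduce the relation $SNU^{OTE} = SNU^{TTE} + V_{ele}$ given in Appendix~B; the one-time model trades the second measurement for a trusted-noise treatment of $V_{ele}$.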
\section*{Discussion} With these field tests of a continuous-variable QKD system, we have extended the distribution distance to $50$~{\rm km} over commercial fiber~\cite{Huang_OptLett_2016,Jouguet_OptExpr_2012,Fossier_NewJPhys_2009}. In addition, with an optimized scheme in the optical layer and in the postprocessing, the secure key rates are higher than previous results by two orders of magnitude, and are now comparable to the key rates of discrete-variable QKD systems at metropolitan distances~\cite{Chen_OptExpr_2010,Sasaki_OptExpr_2011,Stucki_NewJPhys_2011,Wang_OptExpr_2014,Shimizu_JLightwaveTechnol_2014}. The PLOB bounds, which give the maximum achievable secret key rate in QKD, are also drawn in Fig.~5 for both the lossy channel and the thermal-loss channel. Although we have achieved the highest secret key rates among the reported field tests, our secret key rates are still lower than the PLOB bounds for both channels. The secret key rate achieved here is based on a system with a repetition rate of only $5$~{\rm MHz}, which is much smaller than that of discrete-variable QKD systems (around $1$~{\rm GHz}). These results move continuous-variable QKD towards a more practical setting and indicate that a secure metropolitan network is within reach of current technology. \section*{Appendix} \subsection*{Appendix A: Experimental details} Three automatic feedback systems calibrate the time, polarization, and phase of the quantum states transmitted in our system; they are the key modules for running a CV-QKD system in deployed commercial fibers. For the time calibration shown in Fig.~1, there are two modules: a data synchronization module and a clock synchronization module that achieves a synchronous clock at the two remote sites.
For the data synchronization module, we utilize a frame synchronization scheme based on modulation of the LO. The traditional way is to use the modulated signal, which requires much more data to eliminate the effects of noise (the signal-to-noise ratio (SNR) of the signal is always extremely low ($<1$) after long-distance transmission) and whose success probability cannot reach $100\%$. We modulate the data synchronization information onto the LO, whose SNR is high enough to reduce the synchronization time and raise the success probability to $100\%$. The data synchronization determines the starting point of the signal, so that the receiving system starts to work; it is the prerequisite for the normal operation of the system. The data synchronization signal is a specific training sequence modulated on the first high-extinction AM used for pulse generation. Because the clock is precisely synchronized in our system, the data synchronization is implemented at a low frequency ($0.5$~kHz), which introduces a negligible effect on the shot-noise calibration and the key distribution. For the clock synchronization module, we also utilize the LO. Bob uses a photoelectric detector (PD) to detect part of the LO pulses. The output signals of the PD are shaped by a comparator, and the shaped pulses ($5$~MHz) are fed into a clock chip to generate the high-frequency clock ($200$~MHz) required by the other modules of the system. In the previous implementation~\cite{Jouguet_NatPhoton_2013}, only the clock synchronization was realized through the LO, with no mention of data synchronization. For the polarization calibration, we adopt a polarization stabilization system composed of an electric polarization controller (EPC), a polarization beam splitter (PBS), and a photodetector. We insert the EPC and PBS after the fiber link, where the transmission port of the PBS is connected to the LO path and the reflection port of the PBS is connected to the signal path.
One percent of the power of the LO pulses is utilized to monitor the polarization state and calculate the feedback voltage of the EPC. With this polarization calibration system operated in real time, the polarization mode is maintained and the power ratio of the LO to the signal remains above $30$~dB. The all-fiber construction of the calibration system provides very low insertion loss. Compared to the polarization compensation in Ref.~[5], distilling the drift information requires more homodyne detection results than a scheme based on detecting the LO leaked into the signal path. When the repetition frequency of the system is higher than the polarization drift rate, the amount of detection results is large enough to calculate the drift. But in order to deal with situations where the polarization drift may be severe, the polarization controller based on the high-SNR LO pulses is more applicable. For the phase calibration, although the LO and the signal are transmitted in one fiber with a small delay, a slow phase shift arising from the difference between the LO path and the signal path is unavoidable. To stabilize this, we utilize the phase modulator in the LO path (the same one used for basis sifting) to compensate the phase drift. Alice inserts reference data into the signal sequence with a certain period, which is determined by the frequency of the phase drift. The phase drift between the LO and the signal is obtained from the measurement results of the reference signal. Then the feedback voltage of the phase modulator is calculated from the half-wave voltage and loaded onto the phase modulator in real time. \subsection*{Appendix B: Calibration model with one-time evaluation.} In previous experimental demonstrations, a two-time calibration process is extensively used: it requires first measuring the homodyne detector output when both the LO path and the signal path are off, and then the homodyne detector output when only the LO path is connected.
The SNU is finally calculated by subtraction. In the proposed one-time-evaluation (OTE) calibration model, we only need to measure the homodyne detector output once, with the LO path on; thus the SNUs of the two models can be written as \begin{equation} \begin{array}{l} SN{U^{TTE}} = {V_{tot}} - {V_{ele}},\\ SN{U^{OTE}} = {V_{tot}} = SN{U^{TTE}} + {V_{ele}}, \end{array} \end{equation} where $V_{tot}$ and $V_{ele}$ correspond to the variance of the output of the homodyne detector when the LO path is on and off, respectively. ${SN{U^{TTE}}}$ represents the two-time-evaluation (TTE) SNU and ${SN{U^{OTE}}}$ represents the one-time-evaluation SNU. This model procures several advantages. Firstly, we only need one optical switch, in the signal path, in our system. Secondly, only a one-time statistical evaluation is required, which makes the model more practical for real systems. Thirdly, since we only need one time period to calculate the SNU, the statistical fluctuation is minimized compared to the original calibration model with two-time evaluation. We now give a more detailed analysis of its validity. We first consider the output of a practical homodyne detector with limited detection efficiency and electronic noise: \begin{equation} {X_{out}} = A{X_{LO}}\left( {\sqrt {{\eta _d}} {{\hat x}_B} + \sqrt {1 - {\eta _d}} {{\hat x}_{v1}}} \right) + {X_{ele}}, \end{equation} where ${\hat x_B}$ represents the x-quadrature of the canonical components of the mode after transmission and ${\hat x_{{v_1}}}$ represents the vacuum state. ${X_{ele}}$ is a Gaussian variable with variance ${v_{el}}$, and ${A}$ is the circuit amplification parameter. The output of the homodyne detector needs to be further normalized by the SNU to estimate Eve's information. In the two-time calibration model, it is $SNU^{TTE} = {A^2}X_{LO}^2$.
In the one-time-evaluation model, it is ${SNU^{OTE} = {A^2}X_{LO}^2 + \left\langle {X_{ele}^2} \right\rangle = {A^2}X_{LO}^2 + {v_{el}}.}$ The data used for postprocessing is then: \begin{equation} x_{out}^{OTE} = \frac{{A{X_{LO}}}}{{\sqrt {{A^2}X_{LO}^2 + {v_{el}}} }}\left( {\sqrt {{\eta _d}} {{\hat x}_B} + \sqrt {1 - {\eta _d}} {{\hat x}_{v_1}}} \right) + \frac{{\sqrt {{v_{el}}} }}{{\sqrt {{A^2}X_{LO}^2 + {v_{el}}} }}{\hat x_{v_2}}. \end{equation} Here the Gaussian variable ${X_{ele}}$ in Eq.~(3) is replaced with a Gaussian operator $\sqrt {{v_{el}}} {\hat x_{{v_2}}}$ since the electronic noise is not controlled by Eve; ${v_{el}}$ is the variance of ${X_{ele}}$ and ${\hat x_{v_2}}$ is a vacuum mode of variance 1. Letting ${{\eta _e} = \frac{{{A^2}X_{LO}^2}}{{{A^2}X_{LO}^2 + {v_{el}}}}}$, the electronic noise can be adequately characterized by a beamsplitter. The complete version of the EB model is depicted in the upper part of Fig.~6. Alice prepares the EPR state that has two modes A and B, where mode A is kept on Alice's side and measured by heterodyne detection. Mode B is sent into the channel, in which Eve can conduct any attack according to her strategy. After the transmission, mode B passes through two beamsplitters and becomes mode B' before the ideal homodyne detection. Under the trusted-model assumption, the limited detection efficiency and the electronic noise are not controlled by Eve; thus commuting the order of the beamsplitters does not change the finally detected mode B'. The EB model after the beamsplitter switching is depicted in the bottom part of Fig.~6. Under this modeling, we no longer need to measure the electronic noise. \begin{figure*} \caption{ (Color online) Entanglement-based model of the one-time calibration model. Two beamsplitters are introduced to imitate the limited detection efficiency and the detector electronic noise.
EPR states correspond to two-mode squeezed states with two modes A and B; mode A is measured by Alice using heterodyne detection and mode B is sent to Bob. The mode arriving before the ideal homodyne detection is B'.} \end{figure*} \subsection*{Appendix C: Secret key rate with one-time-evaluation calibration model.} The asymptotic secret key rate ${K}$ with the one-time-evaluation calibration model against collective attacks for reverse reconciliation is given by~\cite{Devetak_ProcRSoc_2005} \begin{equation} K = f(1 - \alpha )(1 - FER)[\beta I(A:B) - \chi (B:E)]. \end{equation} The classical mutual information of Alice and Bob, $I(A:B)$, can be described by the Shannon entropy and calculated from the variances of Alice and Bob and their covariance: \begin{equation} I\left( {A:B} \right) = \frac{1}{2}{\log _2}(\frac{{{V} + \chi }}{{\chi + 1}}). \end{equation} Here we define ${\chi}$ as ${\chi = \frac{1}{{T{\eta _d}{\eta _e}}} - 1 + {\varepsilon _c}}$ for conciseness, where ${T}$ is the channel transmittance and ${\varepsilon _c}$ stands for the channel excess noise. Applying the one-time-evaluation model, we no longer need to measure the electronic noise, so mode ${D_2}$ in Fig.~6 is unknown to us. However, the covariance matrix of modes ${A, D_1, B'}$ can still be easily obtained: \begin{equation} {\gamma _{A{D_1}{B'}}} = \left( {\begin{array}{*{20}{c}} {{\gamma _A}}&{{\phi _{A{D_1}}}}&{{\phi _{A{B'}}}}\\ {\phi _{A{D_1}}^T}&{{\gamma _{D_1}}}&{{\phi _{{D_1}{B'}}}}\\ {\phi _{A{B'}}^T}&{\phi _{{D_1}{B'}}^T}&{{\gamma _{{B'}}}} \end{array}} \right). \end{equation} Thus we write the Holevo quantity~\cite{Nielsen_QCQI} ${\chi \left( {B:E} \right)}$, which bounds from above the information that the eavesdropper Eve can acquire, as: \begin{equation} \chi (B:E) = \chi (B:E,{D_2}) = S({\rho _{A{D_1}B'}}) - S(\rho _{A{D_1}}^{{m_{B'}}}).
\end{equation} The first term on the right-hand side can be calculated from the corresponding covariance matrix ${{\gamma _{A{D_1}B'}}}$ using its symplectic eigenvalues; the covariance matrix is derived as: \begin{equation} \scriptsize{ {\gamma _{A{B'}{D_1}}} = \left( {\begin{array}{*{20}{c}} {V{I_2}}&{\sqrt {T{\eta _e}{\eta _d}({V^2} - 1)} {\sigma _z}}&{\sqrt {T{\eta _e}(1 - {\eta _d})({V^2} - 1)} {\sigma _z}}\\ {\sqrt {T{\eta _e}{\eta _d}({V^2} - 1)} {\sigma _z}}&{[T{\eta _e}{\eta _d}(V - 1 + {\varepsilon _c}) + 1]{I_2}}&{\sqrt {{\eta _d}(1 - {\eta _d})} T{\eta _e}(V - 1 + {\varepsilon _c}){I_2}}\\ {\sqrt {T{\eta _e}(1 - {\eta _d})({V^2} - 1)} {\sigma _z}}&{\sqrt {{\eta _d}(1 - {\eta _d})} T{\eta _e}(V - 1 + {\varepsilon _c}){I_2}}&{[T{\eta _e}(1 - {\eta _d})(V - 1 + {\varepsilon _c}) + 1]{I_2}} \end{array}} \right). } \end{equation} Experimentally, Alice's data and Bob's data are sufficient to obtain the parameter values appearing in the covariance matrix above. The Von Neumann entropy ${S(\rho _{A{D_1}}^{{m_{B'}}})}$ is calculated from the matrix ${\gamma _{A{D_1}}^{{m_{B'}}}}$, i.e., the matrix after Bob performs homodyne detection on mode ${B'}$, which can be derived as: \begin{equation} \gamma _{A{D_1}}^{{m_{B'}}} = {\gamma _{A{D_1}}} - {\phi _{A{D_1}B'}}{(X{\gamma _{B'}}X)^{MP}}\phi _{A{D_1}B'}^T. \end{equation} From the covariance matrix ${{\gamma _{A{B'}{D_1}}}}$ we can find three valid symplectic eigenvalues and from ${\gamma _{A{D_1}}^{{m_{B'}}}}$ we can find two valid symplectic eigenvalues. The quantity ${\chi (B:E)}$ can now be rewritten as: \begin{equation} \chi (B:E) = \sum\nolimits_{i = 1}^3 {G(\frac{{{\lambda _i} - 1}}{2}) - } \sum\nolimits_{i = 4}^5 {G(\frac{{{\lambda _i} - 1}}{2})} , \end{equation} where ${G(x) = (x + 1){\log _2}(x + 1) - x{\log _2}x.}$ The final secret key rate is then obtained from Eq.~(5). It should be noted that this calculation leads to a secure lower bound on the actual secret key rate.
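The evaluation of $\chi(B:E)$ from the symplectic eigenvalues can be sketched as follows. The eigenvalues used here are hypothetical placeholders (a physical state has $\lambda_i \ge 1$); in practice they follow from the measured covariance matrices.

```python
import math

def G(x):
    """G(x) = (x+1) log2(x+1) - x log2(x), the thermal-state entropy function."""
    if x <= 0:
        return 0.0
    return (x + 1) * math.log2(x + 1) - x * math.log2(x)

def holevo(lams_joint, lams_cond):
    """chi(B:E): sum of G((lam-1)/2) over the joint-state eigenvalues
    minus the sum over the conditional-state eigenvalues."""
    return (sum(G((l - 1) / 2) for l in lams_joint)
            - sum(G((l - 1) / 2) for l in lams_cond))

# Hypothetical eigenvalues: three from gamma_{A D1 B'}, two from the
# conditional matrix after Bob's homodyne detection on B'.
chi = holevo([1.8, 1.2, 1.05], [1.6, 1.1])
```

Here the three $\lambda_{1,2,3}$ would come from $\gamma_{AB'D_1}$ and the two $\lambda_{4,5}$ from $\gamma_{AD_1}^{m_{B'}}$, as described above.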
\subsection*{Appendix D: Rate-adaptive reconciliation protocol.} The rate-adaptive reconciliation protocol maximizes the secret key rate of our system by balancing the efficiency $\beta$ and the FER. With the combination of puncturing and shortening techniques, the code rate is changed to \noindent \begin{equation} R=\frac{n-m-s}{n-p-s} \,, \label{coderate} \end{equation} where the original code rate is $R^{o}=(n-m)/n$, $n$ is the length of the code, and $m$ is the length of the redundancy bits. The code rate is changed to $R^{'}=(n-m)/(n-p)$ by adding punctured bits of length $p$, and to $R^{''}=(n-m-s)/(n-s)$ by adding shortened bits of length $s$. The reconciliation efficiency is then defined as \noindent \begin{equation} \beta=\frac{R}{C} \,, \label{Reconciliationefficiency} \end{equation} where $R$ is the rate of the MET-LDPC code and $C$ is the classical capacity of the quantum channel, which is $C=\frac{1}{2}\log_{2}(1+SNR)$ for Gaussian variables. The detailed steps of the rate-adaptive protocol are as follows~\cite{Xiangyu_Arxiv_2017}: {\it Step~1}: According to the practical SNR of the experimental data ($0.028$--$0.030$), we calculate the optimal code rate. Then we select a well-performing original MET-LDPC code of rate $0.02$, which is close to the optimal code rate.
{\it Step~2}: Bob creates a new sequence $\widehat{u}$ of length $n$: \begin{equation} u=\{u_{1},u_{2},\cdots\cdots,u_{n-p-s}\}\,, \end{equation} \begin{equation} p_{B}=\{{p_{B}}_{1},{p_{B}}_{2},\cdots\cdots,{p_{B}}_{p}\}\,, \end{equation} \begin{equation} s_{B}=\{{s_{B}}_{1},{s_{B}}_{2},\cdots\cdots,{s_{B}}_{s}\}\,, \end{equation} \begin{equation} \widehat{u}=\{u_{1},u_{2},\cdots,{s_{B}}_{1},\cdots,{p_{B}}_{1},\cdots,{s_{B}}_{i},\cdots,{p_{B}}_{j},\cdots,u_{k},\cdots\}\,, \end{equation} where $u$ is the string for Bob's multidimensional reconciliation with length $n-p-s$, the sequences $p_{B}$ and $s_{B}$ represent Bob's punctured and shortened bits, which are randomly generated by Bob with lengths $p$ and $s$, and $i\in{\{1,2,\cdots,s\}}$, $j\in{\{1,2,\cdots,p\}}$, $k\in{\{1,2,\cdots,n-p-s\}}$. The new string $\widehat{u}$ is created by randomly inserting $p_{B}$ and $s_{B}$ into the string $u$. Then Bob calculates the syndrome of $\widehat{u}$, $c(\widehat{u})=H\widehat{u}^{T}$. The matrix $H$ is the parity check matrix of the code, which is used in the encoding and decoding process; its rows refer to the check nodes and its columns to the bit nodes. The nodes are determined by the degree distribution, and the $H$ matrix is obtained by a construction method according to the degree distribution.
{\it Step~3}: After receiving the message sent by Bob, Alice constructs a new sequence $\widehat{v}$ of length $n$: \begin{equation} v=\{v_{1},v_{2},\cdots\cdots,v_{n-p-s}\}\,, \end{equation} \begin{equation} p_{A}=\{{p_{A}}_{1},{p_{A}}_{2},\cdots\cdots,{p_{A}}_{p}\}\,, \end{equation} \begin{equation} \widehat{v}=\{v_{1},v_{2},\cdots,{s_{B}}_{1},\cdots,{p_{A}}_{1},\cdots,{s_{B}}_{i},\cdots,{p_{A}}_{j},\cdots,v_{k},\cdots\}\,, \end{equation} where $v$ is the sequence for Alice's multidimensional reconciliation with length $n-p-s$, the sequence $p_{A}$ represents Alice's punctured bits, which are randomly generated by Alice, and $i\in{\{1,2,\cdots,s\}}$, $j\in{\{1,2,\cdots,p\}}$, $k\in{\{1,2,\cdots,n-p-s\}}$. The new strings $\widehat{v}$ and $\widehat{u}$ have the same positions and lengths of punctured and shortened bits, and they share the same values of the shortened bits $s_{B}$. Then Alice uses the belief propagation decoding algorithm, or some other decoding algorithm, to recover $\widehat{u}$. Finally, Alice and Bob share a common string. \subsection*{Appendix E: The detailed results of the field tests in the Xi'an and Guangzhou commercial fiber networks} ~\\ ~\\ ~\\ ~\\ \begin{table}[hbtp] \scriptsize{ \centering \caption{The detailed results of the field test in the Xi'an commercial fiber network for 24 hours. The detection efficiency of the homodyne detector is 0.612. SNR: practical signal-to-noise ratio. $\beta^{o}$: reconciliation efficiency of the original code. $\beta$: reconciliation efficiency of the rate-adaptive reconciliation. s: the length of shortened bits. p: the length of punctured bits. $\epsilon$: excess noise. $v_{el}$: electronic noise. $k_{Asymptotic}$: final secret key rate in asymptotic limits.
$k_{finite}$: final secret key rate in finite-size regime.} \begin{tabular}{|p{1.0cm}<{\centering}||p{1.2cm}<{\centering}|p{1.2cm}<{\centering}|p{1.2cm}<{\centering}|p{1.0cm}<{\centering}|p{1.0cm}<{\centering}|p{1.8cm}<{\centering}|p{1.8cm}<{\centering}|p{1.8cm}<{\centering}|p{1.8cm}<{\centering}|} \hline Time & SNR & $\beta^{o}$ & $\beta$ & s & p & $\epsilon$ & ${{v_{el}}}$ & $k_{Asymptotic}$ (kbps) & $k_{finite}$ (kbps) \\\hline 0:30 & 0.028205 & 0 & 0.950268 & 952 & 0 & 0.045909 & 0.257087972 & 5.768156006 & 4.1097321 \\\hline 1:00 & 0.028365 & 0 & 0.950043 & 848 & 0 & 0.044316 & 0.24805773 & 6.647619698 & 4.989195793 \\\hline 1:30 & 0.028628 & 0 & 0.950305 & 664 & 0 & 0.044234 & 0.212818655 & 6.811478575 & 5.153054669 \\\hline 2:00 & 0.029241 & 0.961982 & 0.950289 & 248 & 0 & 0.044372 & 0.218014447 & 6.966862114 & 5.308438208 \\\hline 2:30 & 0.028632 & 0 & 0.950174 & 664 & 0 & 0.044643 & 0.229307098 & 6.583656263 & 4.925232357 \\\hline 3:00 & 0.029504 & 0.953529 & 0.950165 & 72 & 0 & 0.042427 & 0.221973702 & 8.126644238 & 6.468220332 \\\hline 3:30 & 0.028497 & 0 & 0.950352 & 752 & 0 & 0.042061 & 0.136639115 & 7.927945618 & 6.269521712 \\\hline 4:00 & 0.028918 & 0.972573 & 0.950069 & 472 & 0 & 0.042218 & 0.139557273 & 7.992647763 & 6.334223858 \\\hline 4:30 & 0.028968 & 0.970918 & 0.950357 & 432 & 0 & 0.042114 & 0.136224858 & 8.093704911 & 6.435281006 \\\hline 5:00 & 0.028893 & 0.973403 & 0.950115 & 488 & 0 & 0.04195 & 0.139011213 & 8.131799003 & 6.473375098 \\\hline 5:30 & 0.029327 & 0.959201 & 0.950175 & 192 & 0 & 0.043854 & 0.041323934 & 7.272916672 & 5.614492767 \\\hline 6:00 & 0.028012 & 0 & 0.950024 & 1088 & 0 & 0.042957 & 0.040551421 & 7.229510101 & 5.571086196 \\\hline 6:30 & 0.02993 & 0.940153 & 0.949994 & 0 & 10368 & 0.04181 & 0.052974806 & 8.63214663 & 6.973722724 \\\hline 7:00 & 0.030191 & 0.932144 & 0.949996 & 0 & 10360 & 0.042883 & 0.05196541 & 8.137967712 & 6.479543807 \\\hline 7:30 & 0.028433 & 0 & 0.950132 & 800 & 0 & 0.041958 & 0.11270251 &
7.938746112 & 6.280322206 \\\hline 8:00 & 0.029086 & 0.967035 & 0.95035 & 352 & 0 & 0.040907 & 0.11184432 & 8.80254033 & 7.144116424 \\\hline 8:30 & 0.029145 & 0.965105 & 0.950346 & 312 & 0 & 0.044158 & 0.142472731 & 7.051915692 & 5.393491787 \\\hline 9:00 & 0.028574 & 0 & 0.950145 & 704 & 0 & 0.042901 & 0.148243261 & 7.491192309 & 5.832768403 \\\hline 9:30 & 0.028368 & 0 & 0.950333 & 840 & 0 & 0.043532 & 0.110357487 & 7.08897527 & 5.430551365 \\\hline 10:00 & 0.02949 & 0.953975 & 0.950235 & 80 & 0 & 0.04593 & 0.109311065 & 6.204816116 & 4.54639221 \\\hline 10:30 & 0.02918 & 0.963964 & 0.950357 & 288 & 0 & 0.043481 & 0.159636778 & 7.434540248 & 5.776116342 \\\hline 11:00 & 0.029437 & 0.955668 & 0.950048 & 120 & 0 & 0.041518 & 0.154011192 & 8.590946155 & 6.932522249 \\\hline 11:30 & 0.02928 & 0.960719 & 0.950171 & 224 & 0 & 0.043213 & 0.156899036 & 7.604695583 & 5.946271677 \\\hline 12:00 & 0.029627 & 0.949627 & 0.95 & 0 & 392 & 0.041517 & 0.160396589 & 8.668259572 & 7.009835666 \\\hline 12:30 & 0.02819 & 0 & 0.950375 & 960 & 0 & 0.042405 & 0.145677068 & 8.150250582 & 6.491826677 \\\hline 13:00 & 0.02935 & 0.95846 & 0.950193 & 176 & 0 & 0.042722 & 0.154258455 & 7.735690531 & 6.077266626 \\\hline 13:30 & 0.029502 & 0.953593 & 0.950228 & 72 & 0 & 0.042887 & 0.146898664 & 7.62197071 & 5.963546804 \\\hline 14:00 & 0.0296 & 0.950481 & 0.950108 & 8 & 0 & 0.043946 & 0.147434369 & 6.813599789 & 5.155175883 \\\hline 14:30 & 0.028777 & 0.977271 & 0.950056 & 568 & 0 & 0.042241 & 0.204325891 & 7.871299729 & 6.212875823 \\\hline 15:00 & 0.029437 & 0.955668 & 0.950048 & 120 & 0 & 0.041536 & 0.199646968 & 8.219492648 & 6.561068742 \\\hline 15:30 & 0.029704 & 0.947201 & 0.949998 & 0 & 2944 & 0.042426 & 0.168286976 & 8.19474906 & 6.536325154 \\\hline 16:00 & 0.029493 & 0.953879 & 0.95014 & 80 & 0 & 0.044886 & 0.178200272 & 6.767835413 & 5.109411508 \\\hline 16:30 & 0.029338 & 0.958846 & 0.9502 & 184 & 0 & 0.042852 & 0.193986108 & 7.827893557 & 6.169469651 \\\hline 17:00 & 0.029028 & 
0.96894 & 0.950321 & 392 & 0 & 0.044096 & 0.19820625 & 7.038879834 & 5.380455928 \\\hline 17:30 & 0.028786 & 0.97697 & 0.950147 & 560 & 0 & 0.042854 & 0.187691506 & 7.501930063 & 5.843506157 \\\hline 18:00 & 0.028316 & 0 & 0.950104 & 880 & 0 & 0.043287 & 0.188699394 & 7.208032562 & 5.549608656 \\\hline 18:30 & 0.029687 & 0.947736 & 0.949993 & 0 & 2376 & 0.042483 & 0.112187631 & 7.96854483 & 6.310120924 \\\hline 19:00 & 0.03007 & 0.935839 & 0.949998 & 0 & 14904 & 0.042464 & 0.10987308 & 7.762217726 & 6.103793821 \\\hline 19:30 & 0.029721 & 0.946667 & 0.949996 & 0 & 3504 & 0.043288 & 0.321548662 & 7.672138455 & 6.013714549 \\\hline 20:00 & 0.029262 & 0.961301 & 0.950371 & 232 & 0 & 0.046126 & 0.338690481 & 5.903249913 & 4.244826007 \\\hline 20:30 & 0.029622 & 0.949785 & 0.949998 & 0 & 224 & 0.043842 & 0.144982226 & 7.08805103 & 5.429627124 \\\hline 21:00 & 0.028868 & 0.974234 & 0.950162 & 504 & 0 & 0.042965 & 0.143486744 & 7.437802729 & 5.779378823 \\\hline 21:30 & 0.028632 & 0 & 0.95017 & 664 & 0 & 0.044643 & 0.207438908 & 8.033042141 & 6.374618236 \\\hline 22:00 & 0.029504 & 0.953529 & 0.95016 & 72 & 0 & 0.042427 & 0.204039996 & 8.22393663 & 6.565512724 \\\hline 22:30 & 0.028497 & 0 & 0.95035 & 752 & 0 & 0.042061 & 0.321594474 & 8.349520597 & 6.691096691 \\\hline 23:00 & 0.028918 & 0.972573 & 0.95007 & 472 & 0 & 0.042218 & 0.338761392 & 6.861580907 & 5.203157002 \\\hline 23:30 & 0.02983 & 0.943258 & 0.949999 & 0 & 7096 & 0.041853 & 0.256120393 & 8.56634362 & 6.907919714 \\\hline 0:00 & 0.029605 & 0.950323 & 0.950323 & 0 & 0 & 0.043515 & 0.261181193 & 7.399916031 & 5.741492125 \\\hline \end{tabular} } \end{table} \begin{table}[h] \scriptsize{ \centering \caption{The detailed results of the field test in Guangzhou commercial fiber network for continuous 48 blocks. The detection efficiency of the homodyne detector is 0.612. SNR: practical signal-to-noise ratio. s: the length of shortened bits. p: the length of punctured bits. 
$\beta$: reconciliation efficiency of the rate-adaptive reconciliation. $\beta^{o}$: reconciliation efficiency of the original code. $v_{el}$: electronic noise. $\epsilon$: excess noise. $k_{Asymptotic}$: final secret key rate in the asymptotic limit. $k_{finite}$: final secret key rate in the finite-size regime.} \begin{tabular}{|p{1.0cm}<{\centering}||p{1.2cm}<{\centering}|p{1.2cm}<{\centering}|p{1.2cm}<{\centering}|p{1.0cm}<{\centering}|p{1.0cm}<{\centering}|p{1.8cm}<{\centering}|p{1.8cm}<{\centering}|p{1.8cm}<{\centering}|p{1.8cm}<{\centering}|} \hline Time & SNR & $\beta^{o}$ & $\beta$ & s & p & $\epsilon$ & ${{v_{el}}}$ & $k_{Asymptotic}$ (kbps) & $k_{finite}$ (kbps) \\\hline 1 & 0.030241 & 0.930625 & 0.949997 & 0 & 20392 & 0.042136 & 0.392581912 & 8.579503464 & 6.921079558 \\\hline 2 & 0.030869 & 0.911972 & 0.949994 & 0 & 40024 & 0.041306 & 0.36425003 & 9.320395083 & 7.661971177 \\\hline 3 & 0.030093 & 0.935135 & 0.949992 & 0 & 15640 & 0.042493 & 0.399424436 & 8.316653184 & 6.658229278 \\\hline 4 & 0.028783 & 0.97707 & 0.950245 & 560 & 0 & 0.044921 & 0.463095576 & 6.496388751 & 4.837964845 \\\hline 5 & 0.030063 & 0.936054 & 0.949992 & 0 & 14672 & 0.042497 & 0.400822547 & 8.302073854 & 6.643649948 \\\hline 6 & 0.029988 & 0.938361 & 0.949996 & 0 & 12248 & 0.042856 & 0.404315349 & 8.070705358 & 6.412281452 \\\hline 7 & 0.030522 & 0.922184 & 0.95 & 0 & 29280 & 0.042205 & 0.379741876 & 8.658056183 & 6.999632277 \\\hline 8 & 0.030833 & 0.91302 & 0.949994 & 0 & 38920 & 0.042003 & 0.365815545 & 8.903768839 & 7.245344933 \\\hline 9 & 0.030322 & 0.928176 & 0.949995 & 0 & 22968 & 0.042875 & 0.388825962 & 8.195404522 & 6.536980616 \\\hline 10 & 0.029675 & 0.948113 & 0.949998 & 0 & 1984 & 0.043563 & 0.419116752 & 7.553054985 & 5.894631079 \\\hline 11 & 0.029938 & 0.939905 & 0.949998 & 0 & 10624 & 0.04319 & 0.406649656 & 7.864232652 & 6.205808746 \\\hline 12 & 0.029998 & 0.938053 & 0.95 & 0 & 12576 & 0.043012 & 0.403840032 & 7.98785762 & 6.329433714 \\\hline 13 & 0.028429 & 
0 & 0.950264 & 800 & 0 & 0.045778 & 0.481301599 & 5.914846678 & 4.256422772 \\\hline 14 & 0.028628 & 0 & 0.950305 & 664 & 0 & 0.045419 & 0.471006441 & 6.179197639 & 4.520773733 \\\hline 15 & 0.029767 & 0.945226 & 0.949998 & 0 & 5024 & 0.043204 & 0.41474019 & 7.788255159 & 6.129831253 \\\hline 16 & 0.030184 & 0.932357 & 0.949996 & 0 & 18568 & 0.042452 & 0.395201719 & 8.37753314 & 6.719109234 \\\hline 17 & 0.030235 & 0.930807 & 0.949997 & 0 & 20200 & 0.042501 & 0.392843225 & 8.371020725 & 6.712596819 \\\hline 18 & 0.030251 & 0.930322 & 0.949998 & 0 & 20712 & 0.042284 & 0.392114741 & 8.500176807 & 6.841752902 \\\hline 19 & 0.028431 & 0 & 0.950198 & 800 & 0 & 0.045295 & 0.481217627 & 6.165721806 & 4.507297901 \\\hline 20 & 0.030509 & 0.922571 & 0.949999 & 0 & 28872 & 0.042212 & 0.380330252 & 8.64855253 & 6.990128624 \\\hline 21 & 0.030307 & 0.928628 & 0.95 & 0 & 22496 & 0.042723 & 0.389520645 & 8.275546956 & 6.617123051 \\\hline 22 & 0.030149 & 0.933423 & 0.949999 & 0 & 17448 & 0.042298 & 0.396829981 & 8.450038968 & 6.791615062 \\\hline 23 & 0.030778 & 0.914628 & 0.949998 & 0 & 37232 & 0.041647 & 0.368274421 & 9.085252889 & 7.426828984 \\\hline 24 & 0.030286 & 0.929263 & 0.949995 & 0 & 21824 & 0.042745 & 0.390484445 & 8.254117209 & 6.595693303 \\\hline 25 & 0.028312 & 0 & 0.950236 & 880 & 0 & 0.045654 & 0.487436318 & 5.936894638 & 4.278470733 \\\hline 26 & 0.029887 & 0.941485 & 0.949997 & 0 & 8960 & 0.042907 & 0.409065042 & 8.001486009 & 6.343062103 \\\hline 27 & 0.028648 & 0 & 0.950036 & 656 & 0 & 0.045069 & 0.46999291 & 6.350759686 & 4.692335781 \\\hline 28 & 0.028112 & 0 & 0.950226 & 1016 & 0 & 0.046306 & 0.498004738 & 5.524620815 & 3.866196909 \\\hline 29 & 0.030052 & 0.936392 & 0.949996 & 0 & 14320 & 0.04257 & 0.401332871 & 8.256950665 & 6.598526759 \\\hline 30 & 0.029944 & 0.939719 & 0.949995 & 0 & 10816 & 0.043148 & 0.406369206 & 7.889801849 & 6.231377943 \\\hline 31 & 0.028482 & 0 & 0.95007 & 768 & 0 & 0.045612 & 0.478548565 & 6.00545574 & 4.347031835 \\\hline 
32 & 0.028341 & 0 & 0.950057 & 864 & 0 & 0.046096 & 0.485893688 & 5.699515506 & 4.041091601 \\\hline 33 & 0.028639 & 0 & 0.950331 & 656 & 0 & 0.045393 & 0.470441801 & 6.199128217 & 4.540704311 \\\hline 34 & 0.028151 & 0 & 0.950105 & 992 & 0 & 0.046136 & 0.495933886 & 5.617077248 & 3.958653342 \\\hline 35 & 0.028581 & 0 & 0.950301 & 696 & 0 & 0.045512 & 0.473424664 & 6.11261537 & 4.454191464 \\\hline 36 & 0.028332 & 0 & 0.950354 & 864 & 0 & 0.045976 & 0.486371377 & 5.783854353 & 4.125430447 \\\hline 37 & 0.028073 & 0 & 0.950348 & 1040 & 0 & 0.046308 & 0.500088447 & 5.520168261 & 3.861744355 \\\hline 38 & 0.029854 & 0.942511 & 0.949997 & 0 & 7880 & 0.043236 & 0.410610721 & 7.80502983 & 6.146605924 \\\hline 39 & 0.030517 & 0.922333 & 0.949996 & 0 & 29120 & 0.042385 & 0.379960638 & 8.5532151 & 6.894791194 \\\hline 40 & 0.028003 & 0 & 0.950325 & 1088 & 0 & 0.046221 & 0.503846816 & 5.539809861 & 3.881385956 \\\hline 41 & 0.028492 & 0 & 0.950129 & 760 & 0 & 0.0458 & 0.478021028 & 5.91422549 & 4.255801584 \\\hline 42 & 0.028991 & 0.970159 & 0.950375 & 416 & 0 & 0.044721 & 0.452593235 & 6.69144203 & 5.033018124 \\\hline 43 & 0.02827 & 0 & 0.950066 & 912 & 0 & 0.045812 & 0.489642361 & 5.825124611 & 4.166700705 \\\hline 44 & 0.030922 & 0.910432 & 0.949998 & 0 & 41648 & 0.041434 & 0.361903336 & 9.269858261 & 7.611434355 \\\hline 45 & 0.03083 & 0.913108 & 0.949998 & 0 & 38832 & 0.041848 & 0.365955157 & 8.991932879 & 7.333508973 \\\hline 46 & 0.029816 & 0.943695 & 0.949995 & 0 & 6632 & 0.043106 & 0.412416326 & 7.861992735 & 6.20356883 \\\hline 47 & 0.028779 & 0.977204 & 0.950375 & 560 & 0 & 0.045 & 0.463295864 & 6.463372889 & 4.804948983 \\\hline 48 & 0.030125 & 0.934156 & 0.949994 & 0 & 16672 & 0.042621 & 0.39793061 & 8.258044941 & 6.599621035 \\\hline \hline \end{tabular} } \end{table} \section*{References} \section*{Additional information} \textbf{Competing financial interests:} The authors declare that they have no competing interests. \end{document}
\begin{document} \title{Operator~Diagonalizations of~Multiplier~Sequences} \author{Robert D. Bates} \date{\today} \maketitle \begin{abstract} We consider hyperbolicity preserving operators with respect to a new linear operator representation on $\R[x]$. In essence, we demonstrate that every Hermite and Laguerre multiplier sequence can be diagonalized into a sum of hyperbolicity preserving operators, where each of the summands forms a classical multiplier sequence. Interestingly, this does not work for other orthogonal bases; for example, this property fails for the Legendre basis. We establish many new formulas concerning the $Q_k$'s of Peetre's 1959 differential representation for linear operators in the specific case of Hermite and Laguerre diagonal differential operators. Additionally, we provide a new algebraic characterization of the Hermite multiplier sequences and also extend a recent result of T. Forg\'acs and A. Piotrowski on hyperbolicity properties of the polynomial coefficients in hyperbolicity preserving Hermite diagonal differential operators. \end{abstract} \section{Introduction} Define the Jacobi-Theta function by, \begin{equation} \Phi(t):=\sum_{n=1}^\infty (2n^4\pi^2e^{9t}-3n^2\pi e^{5t})e^{-n^2\pi e^{4t}}. \end{equation} It is well known that the Riemann Hypothesis \cite[(1859)]{Rie59} is equivalent to the statement that the integral cosine transform of the Jacobi-Theta function, \begin{equation}\label{eq:jtfunction} \int \Phi(t) \cos(xt) dt, \end{equation} is uniformly approximable by polynomials with only real zeros (see for example G. Csordas, T. Norfolk, and R. Varga \cite{CNV86}) (see also \cite{CC13,CV88,CV90,Cso98}). In 1913, J. 
Jensen \cite{Jen13} showed that every entire function, \begin{equation}\label{eq:f(x)} f(x):=\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k, \end{equation} can be uniformly approximated by polynomials with only real zeros if and only if $g_n(x)$ has only real zeros for each $n\in\N_0$, where \begin{equation} g_n(x):=\sum_{k=0}^n \binom{n}{k} \gamma_k x^k. \end{equation} Hence, the associated Jensen polynomials, $\{g_n(x)\}_{n=0}^\infty$, have received a great deal of attention in modern times (see for example \cite{DC09,CC83,CC89,CW75}). In particular, M. Chasse in 2011 showed remarkably that the first $2\cdot 10^{17}$ Jensen polynomials of \eqref{eq:jtfunction} have only real zeros \cite[Theorem 177, p. 87]{Cha11}. In 1914, G. P\'olya and J. Schur \cite{PS14} gave a complete characterization of hyperbolicity preserving operators (operators that map polynomials with only real zeros to those of the same kind, see Definition \ref{def:hyper}) of the form, \begin{equation}\label{eq:psintro} T[x^n]:=\gamma_n x^n,\ \{\gamma_n\}_{n=0}^\infty\subset\R, \end{equation} by showing that $f(x)$ (from \eqref{eq:f(x)}) must be uniformly approximable by polynomials with zeros of one sign (throughout the literature, $\{\gamma_n\}_{n=0}^\infty$ is called a multiplier sequence). Their work was greatly extended in 2009 by J. Borcea and P. Br\"and\'en \cite{BB09} who demonstrated that essentially every linear operator written in J. Peetre's \cite[(1959)]{Pee59} differential operator form, \begin{equation}\label{eq:do} T:=\sum_{k=0}^\infty Q_k(x)D^k,\ D:=\frac{d}{dx}, \end{equation} is hyperbolicity preserving if and only if either \begin{equation} T[e^{xw}]:=e^{xw}\sum_{k=0}^\infty Q_k(x)w^k\ \ \text{ or }\ \ T[e^{-xw}]:=e^{-xw}\sum_{k=0}^\infty Q_k(x)(-w)^k \end{equation} is uniformly approximable by real two-variable stable polynomials. It was shown by Laguerre in 1882 \cite{Lag82} (later generalized by G. 
P\'olya \cite[(1913)]{Pol13}, \cite[(1915)]{Pol15}) that every real entire function, which can be uniformly approximated by polynomials with only real zeros, must be of the form, \begin{equation} f(x):=cx^m e^{-ax^2+bx} \prod_{k=1}^\omega \left(1+\frac{x}{x_k}\right)e^{-x/x_k}, \end{equation} where $0\le \omega\le \infty$, $a,b,c\in\R$, $a\ge 0$, $m\in\N_0$, $\{x_k\}_{k=1}^\omega\subset\R$, $x_k\not=0$, and $\sum_{k=1}^\omega \frac{1}{x_k^2}<\infty$. Likewise, real entire functions that can be uniformly approximated by polynomials with non-positive zeros must be of the form, \begin{equation}\label{eq:lpplusprod} f(x):=cx^me^{bx}\prod_{k=1}^\omega \left(1+\frac{x}{x_k}\right), \end{equation} where $0\le\omega\le\infty$, $b,c\in\R$, $b\ge 0$, $m\in\N_0$, $\{x_k\}_{k=1}^\omega$, $x_k>0$, and $\sum_{k=1}^\omega \frac{1}{x_k}<\infty$. In 1983, T. Craven and G. Csordas identified an important subclass of the functions \eqref{eq:lpplusprod}, showing that $b\ge 1$ if and only if $f^{(k)}(0)\le f^{(k+1)}(0)$ ($\gamma_k\le \gamma_{k+1}$) for every $k\in\N_0$ \cite{CC83}. This motivated re-investigating some of P.~Tur\'an's results \cite[(1954)]{Tur54} by modifying G. P\'olya and J. Schur's operator, equation \eqref{eq:psintro}, replacing $x^n$ with the $n^{\text{th}}$ Hermite polynomial (see \cite{BC01}). In 2007, A. Piotrowski gave a complete characterization of hyperbolicity preserving operators that diagonalize on the Hermite basis, \begin{equation} T[H_n(x)]:=\gamma_n H_n(x),\ \{\gamma_n\}_{n=0}^\infty\subset\R, \end{equation} where for each $n\in\N_0$, $H_n(x)$ denotes the $n^{\text{th}}$ Hermite polynomial. It was demonstrated that $T$ is hyperbolicity preserving if and only if $\{\gamma_n\}_{n=0}^\infty$ is an increasing classical multiplier sequence (from \eqref{eq:psintro}) (see Theorem \ref{thm:msherm}). 
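Piotrowski's characterization lends itself to a quick numerical sanity check. By the Hermite differential equation (recorded in Definition \ref{def:poly} below), the operator determined by $T[H_n(x)]=nH_n(x)$ has the closed form $T=xD-\tfrac{1}{2}D^2$. The following sketch (Python with NumPy; an illustration added here, not part of the surrounding results) applies $T$ to a hyperbolic polynomial and confirms that its image is again hyperbolic, as guaranteed for the increasing classical multiplier sequence $\{n\}_{n=0}^\infty$.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# T[H_n] = n*H_n.  From the Hermite differential equation
# (-1/2)H_n'' + x*H_n' = n*H_n, this operator is T = x*D - (1/2)*D^2.
def T(p):
    x = P([0.0, 1.0])
    return x * p.deriv() - 0.5 * p.deriv(2)

# A hyperbolic (real-rooted) test polynomial, chosen arbitrarily.
p = P.fromroots([2.0, -1.0, -3.0])
q = T(p)

# {n} is an increasing classical multiplier sequence, so q must be hyperbolic.
roots = np.atleast_1d(q.roots())
assert np.max(np.abs(roots.imag)) < 1e-8
print(roots)
```

The same check fails for sequences that are not increasing classical multiplier sequences, in line with the "only if" direction of the theorem.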
Recently, there has been significant interest in characterizing multiplier sequences for arbitrary bases (see \cite{FHMS13, Bat14a, BY13, BDFU12, Cha11, FP13a, FP13b, Pio07, Yos13}). In particular, the role that orthogonal polynomials play in defining hyperbolicity preserving operators has become increasingly apparent (see also the recent characterization of Laguerre multiplier sequences by P. Br\"and\'en and E. Ottergren \cite{BO12}, see Theorem \ref{thm:mslag}). In this paper, we modify J. Peetre's differential representation \cite{Pee59}, giving a new differential representation suited to the study of hyperbolicity preservation (Theorems \ref{thm:classic} and \ref{thm:classic2}). We use this to show that, in essence, every Hermite and Laguerre multiplier sequence can be written as a sum of classical multiplier sequences (Theorems \ref{thm:herm} and \ref{thm:lag}). Interestingly, the Legendre basis does not enjoy this property (Example \ref{ex:leg}). New methods of determining the differential representation of Hermite and Laguerre diagonal differential operators are found (Theorems \ref{thm:hermqk}, \ref{thm:hermcomplex}, and \ref{thm:lagqk}). Additionally, we give a new algebraic characterization of Hermite multiplier sequences (Theorem \ref{thm:minimal}) and generalize a recent result of T. Forg\'acs and A. Piotrowski \cite{FP13b} on the hyperbolicity properties of the $Q_k$'s in \eqref{eq:do} that arise from a Hermite diagonal differential operator (Theorem \ref{thm:hermrealzeros}). \begin{definition}\label{def:poly} We will denote the \textit{Hermite}, \textit{Laguerre}, and \textit{Legendre} polynomials by $\{H_n(x)\}_{n=0}^\infty$, $\{L_n(x)\}_{n=0}^\infty$, and $\{P_n(x)\}_{n=0}^\infty$, respectively \cite[pp. 157, 187, 201]{Rai60}. For each $n\in\N_0$, these polynomials are given by the following formulas, \begin{align} H_n(x)&=\sum_{k=0}^{[n/2]}\frac{(-1)^k n! 
2^{n-2k}}{k!(n-2k)!} x^{n-2k}, \\ L_n(x)&=\sum_{k=0}^n \frac{(-1)^k}{k!}\binom{n}{k}x^k,\ \text{and} \\ P_n(x)&=\sum_{k=0}^{[n/2]} \frac{(-1)^k}{2^n}\binom{n}{k}\binom{2n-2k}{n}x^{n-2k}. \end{align} It is well known that these polynomials satisfy the following differential equations \cite[pp. 173, 188, 204, 258]{Rai60}, \begin{align} &\left((-1/2)D^2+(x)D\right)H_n(x)=(n)H_n(x), \\ &\left((-x)D^2+(x-1)D\right)L_n(x)=(n)L_n(x),\ \text{and} \\ &\left((x^2-1)D^2+(2x)D\right)P_n(x)=(n^2+n)P_n(x), \end{align} where $D:=\frac{d}{dx}$. \end{definition} \begin{definition} Suppose $f(x)$ is an entire function, \begin{equation} f(x):=\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k. \end{equation} For each $n\in\N_0$, we define the $n^\text{th}$ \textit{Jensen polynomial} associated to the entire function $f(x)$ (or associated to the sequence $\{\gamma_k\}_{k=0}^\infty$) by, \begin{equation} g_n(x):=\sum_{k=0}^n \binom{n}{k}\gamma_k x^k. \end{equation} Likewise, for each $n\in\N_0$, we define the $n^\text{th}$ \textit{reversed Jensen polynomial} by, \begin{equation} g_n^*(x):=\sum_{k=0}^n \binom{n}{k} \gamma_k x^{n-k}. \end{equation} \end{definition} \begin{definition}\label{def:diagonalize} Let $T:\R[x]\to\R[x]$ be a linear operator such that $T[B_n(x)]=\gamma_nB_n(x)$ for every $n\in\N_0$, where $\{\gamma_n\}_{n=0}^\infty$ is a sequence of real numbers and $\{B_n(x)\}_{n=0}^\infty$, $\deg(B_n(x))=n$, $B_0\not\equiv 0$, is a basis of real polynomials. Then $T$ will be referred to as a \textit{diagonal differential operator} with respect to the eigenvector sequence, $\{B_n(x)\}_{n=0}^\infty$, and eigenvalue sequence, $\{\gamma_n\}_{n=0}^\infty$. If $\{B_n(x)\}_{n=0}^\infty=\{x^n\}_{n=0}^\infty$ then $T$ is said to be a \textit{classical diagonal differential operator}. 
Similarly, if $\{B_n(x)\}_{n=0}^\infty$ is the Hermite, Laguerre, or Legendre polynomials (Definition \ref{def:poly}), then $T$ is said to be a \textit{Hermite diagonal differential operator}, \textit{Laguerre diagonal differential operator}, or a \textit{Legendre diagonal differential operator}, respectively. \end{definition} \begin{definition}\label{def:hyper} Let $T:\R[x]\to\R[x]$ be a linear operator. Operator $T$ is said to be \textit{hyperbolicity preserving} if $T[p(x)]$ has only real zeros whenever $p(x)\in\R[x]$ has only real zeros. If in addition, $T$ diagonalizes on $\{B_n(x)\}_{n=0}^\infty=\{x^n\}_{n=0}^\infty$, $\{B_n(x)\}_{n=0}^\infty=\{H_n(x)\}_{n=0}^\infty$, $\{B_n(x)\}_{n=0}^\infty=\{L_n(x)\}_{n=0}^\infty$, or $\{B_n(x)\}_{n=0}^\infty=\{P_n(x)\}_{n=0}^\infty$, as in $T[B_n(x)]=\gamma_n B_n(x)$ for some sequence of real numbers, $\{\gamma_n\}_{n=0}^\infty$, then $\{\gamma_n\}_{n=0}^\infty$ is called a \textit{classical multiplier sequence}, \textit{Hermite multiplier sequence}, \textit{Laguerre multiplier sequence}, or \textit{Legendre multiplier sequence}, respectively. \end{definition} \begin{definition} Suppose $T$ is a hyperbolicity preserving operator that diagonalizes on $\{B_n(x)\}_{n=0}^\infty$ and $\{\gamma_n\}_{n=0}^\infty$, where \begin{equation} \{\gamma_n\}_{n=0}^\infty := \{0,0,\ldots,0,0,\alpha,\beta,0,0,0,\ldots\},\ \alpha,\beta\in\R. \end{equation} Then, $\{\gamma_n\}_{n=0}^\infty$ is called a \textit{trivial multiplier sequence}. In Theorems \ref{thm:msherm} and \ref{thm:mslag} we will exclude all trivial multiplier sequences. \end{definition} \begin{definition}\label{def:lp} The \textit{Laguerre-P\'olya class}, denoted as $\lp$, is the set of entire functions that are uniform limits of \textit{hyperbolic polynomials}, real valued polynomials with only real zeros. We define $\lp^s$ to be the entire functions in $\lp$ with Taylor coefficients of the same sign. 
Likewise, we define $\lp^a$ to be the entire functions in $\lp$ with alternating Taylor coefficients. The notation, $\lp^{sa}$, is defined as $\lp^{sa}:=\lp^s\cup\lp^a$. Given an interval, $I\subseteq\R$, $\lp^* I$ will denote functions in $\lp^*$ that have zeros only in $I$, where $\lp^*$ is either $\lp$, $\lp^s$, $\lp^a$, or $\lp^{sa}$. \end{definition} \begin{theorem}[{T. Craven and G. Csordas \cite[(1983)]{CC83}}]\label{thm:lpinc} Suppose $f(x)\in\lp^s$. Then $|f^{(k)}(0)|\le |f^{(k+1)}(0)|$ for all $k\in\N_0$ if and only if $e^{-x}f(x)\in\lp^s$. \end{theorem} \begin{remark}\label{rm:lpinc} In the sequel we will make use of the fact that many of the classes defined above are closed under differentiation. Consider an entire function, \begin{equation} f(x):=\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k. \end{equation} If $f(x)$ is in $\lp^s$, $\lp^a$, or $\lp$, then for every $n\in\N_0$, $f^{(n)}(x)$ is also in $\lp^s$, $\lp^a$, or $\lp$, respectively. Similarly, a slight extension of Theorem \ref{thm:lpinc} shows that if $e^{-\sigma x}f(x)\in\lp^s$ ($\sigma>0$), then for every $n\in\N_0$, $e^{-\sigma x}f^{(n)}(x)\in\lp^s$. Likewise, if $e^{\sigma x}f(x)\in\lp^a$ ($\sigma>0$), then for every $n\in\N_0$, $e^{\sigma x} f^{(n)}(x)\in\lp^a$. \end{remark} \begin{theorem}[{\hspace{1sp}\cite{Bat14a}, \cite{Pee59}, \cite[Proposition 29, p. 32]{Pio07}}]\label{thm:piotr} If $T:\R[x]\to\R[x]$ is any linear operator, then there is a unique sequence of real polynomials, $\{Q_k(x)\}_{k=0}^\infty\subset\R[x]$, such that \begin{equation} \label{eq:linear-operator} T=\sum_{k=0}^\infty Q_k(x) D^k,\ \mbox{where } D:=\frac{d}{dx}. \end{equation} Furthermore, given any sequence of polynomials, $\{B_n(x)\}_{n=0}^\infty$ \textup{(}$\deg(B_n(x))=n$ for each $n\in\N_0$, $B_0(x)\not\equiv 0$\textup{)}, then for each $n\in\N_0$, \begin{equation}\label{equ13} Q_n(x)=\frac{1}{B_n^{(n)}}\left(T[B_n(x)]-\sum_{k=0}^{n-1} Q_k(x)B_n^{(k)}(x)\right). \end{equation} \end{theorem} \begin{theorem}[{G. 
P\'olya and J. Schur \cite[(1914)]{PS14}}]\label{thm:ms} Let $\{\gamma_k\}_{k=0}^\infty$ be a sequence of real numbers. Sequence $\{\gamma_k\}_{k=0}^\infty$ is a positive or negative multiplier sequence if and only if \begin{equation} \sum_{k=0}^\infty \frac{\gamma_k}{k!} x^k \in \lp^s. \end{equation} Sequence $\{\gamma_k\}_{k=0}^\infty$ is an alternating multiplier sequence if and only if \begin{equation} \sum_{k=0}^\infty \frac{\gamma_k}{k!} x^k \in \lp^a. \end{equation} \end{theorem} \begin{theorem}[{A. Piotrowski \cite[Theorem 152, p. 140 (2007)]{Pio07}}]\label{thm:msherm} Let $\{\gamma_k\}_{k=0}^\infty$ be a sequence of real numbers and let $\{g_k^*(x)\}_{k=0}^\infty$ be the sequence of reversed Jensen polynomials associated with $\{\gamma_k\}_{k=0}^\infty$. Sequence $\{\gamma_k\}_{k=0}^\infty$ is a non-trivial positive or negative Hermite multiplier sequence if and only if \begin{equation} e^{-x} \sum_{k=0}^\infty \frac{\gamma_k}{k!} x^k = \sum_{k=0}^\infty \frac{g_k^*(-1)}{k!}x^k \in \lp^s. \end{equation} Sequence $\{\gamma_k\}_{k=0}^\infty$ is a non-trivial alternating Hermite multiplier sequence if and only if \begin{equation} e^x \sum_{k=0}^\infty \frac{\gamma_k}{k!} x^k = e^{2x} \sum_{k=0}^\infty \frac{g_k^*(-1)}{k!}x^k \in \lp^a. \end{equation} \end{theorem} \begin{theorem}[{P. Br\"and\'en and E. Ottergren \cite[(2014)]{BO12}}]\label{thm:mslag} Let $\{\gamma_k\}_{k=0}^\infty$ be a sequence of real numbers and let $\{g_k^*(x)\}_{k=0}^\infty$ be the reversed Jensen polynomials associated with $\{\gamma_k\}_{k=0}^\infty$. Sequence $\{\gamma_k\}_{k=0}^\infty$ is a non-trivial positive or negative Laguerre multiplier sequence if and only if \begin{equation} \sum_{k=0}^\infty g_k^*(-1)x^k \in \R[x]\cap\lp^s[-1,0]. \end{equation} There are no non-trivial alternating Laguerre multiplier sequences. 
\end{theorem} From Theorem \ref{thm:ms}, \ref{thm:msherm}, and \ref{thm:mslag}, it is clear that every Laguerre multiplier sequence is a Hermite multiplier sequence, and every Hermite multiplier sequence is a classical multiplier sequence (see also the classification diagram of K. Blakeman, E. Davis, T. Forg\'acs, and K. Urabe \cite{BDFU12}). In the literature, it is common to discuss only the non-negative multiplier sequences. However, many of our results establish strong differences between the sequences from $\lp^a$ and the sequences in $\lp^s$ (see for example Theorem \ref{thm:herminter}). Hence, we will take great care to discuss $\lp^s$ sequences separately from $\lp^a$ sequences. The following example demonstrates the strong differences in differential representation from positive eigenvalues versus alternating eigenvalues. \begin{example}\label{ex:hermfinin} Consider the following hyperbolicity preserving Hermite diagonal differential operators (see Theorem \ref{thm:msherm}), \begin{equation} T[H_n(x)]:=nH_n(x)\ \ \ \text{and}\ \ \ W[H_n(x)]:=(-1)^nnH_n(x). \end{equation} Using the recursive formula from Theorem \ref{thm:piotr}, we calculate $T$ and $W$, \begin{equation} T=(x)D+\left(-\frac{1}{2}\right)D^2, \end{equation} and \begin{equation} W=(-x)D+\left(2x^2-\frac{1}{2}\right)D^2+\left(-2x^3+x\right)D^3+\cdots. \end{equation} We observe that $T$ is a finite order differential operator, while $W$ is an infinite order differential operator. This observation makes sense when we note that $\{(-1)^n n\}_{n=0}^\infty$ is not interpolatable by a polynomial (see \cite{Bat14a}). \end{example} The sensitivity of the two classes, $\lp^s$\ and $\lp^a$, can also be seen in the following theorem, which holds for sequences arising from $\lp^s$, but not for sequences arising from $\lp^a$. \begin{theorem}[{T. Craven and G. Csordas \cite[(1983)]{CC83}}]\label{thm:mscomb} Let $\{\gamma_k\}_{k=0}^\infty$ be a positive or negative classical multiplier sequence. 
Then, for each $m\in\N_0$, \begin{equation}\label{eq:mscomb} \left\{\sum_{k=0}^n \binom{n}{k} \gamma_{m+k} \right\}_{n=0}^\infty\ \ \ \text{and}\ \ \ \left\{\sum_{k=0}^m \binom{m}{k} \gamma_{n+k} \right\}_{n=0}^\infty, \end{equation} are also positive or negative classical multiplier sequences, respectively. \end{theorem} \begin{proof} For the first sequence, using a Cauchy product, we calculate \begin{equation} \sum_{n=0}^\infty \left(\sum_{k=0}^n \binom{n}{k} \gamma_{m+k}\right)\frac{x^n}{n!} = e^x D^m \sum_{n=0}^\infty \frac{\gamma_n}{n!}x^n \in \lp^s. \end{equation} For the second sequence, using two Cauchy products, we calculate \begin{equation}\nonumber \sum_{n=0}^\infty \left(\sum_{k=0}^m \binom{m}{k} \gamma_{n+k}\right)\frac{x^n}{n!} = e^{-x} D^m e^x \sum_{n=0}^\infty \frac{\gamma_n}{n!}x^n\in \lp^s. \eqno\qed \end{equation} \end{proof} \begin{example} We show that Theorem \ref{thm:mscomb} does not hold for $\lp^a$. Consider the following function in $\lp^a$, \begin{equation} f(x):=\sum_{k=0}^\infty \frac{(-1)^k}{k!} \frac{x^k}{k!}, \end{equation} which is obtained by application of the multiplier sequence $\{\frac{(-1)^k}{k!}\}_{k=0}^\infty$ to the function $e^x$. The sequence, \begin{equation} \{\gamma_n\}_{n=0}^\infty=\left\{\sum_{k=0}^n \binom{n}{k} \frac{(-1)^k}{k!}\right\}_{n=0}^\infty, \end{equation} has the form, \begin{equation} \{\gamma_n\}_{n=0}^\infty=\left\{1,0,-\frac{1}{2},-\frac{2}{3},-\frac{5}{8},\ldots,\frac{887}{5760},\ldots\right\}. \end{equation} Hence, $\{\gamma_n\}_{n=0}^\infty$ is not a multiplier sequence, since there is no function in $\lp^s$ or $\lp^a$ with Taylor coefficients that match the signs of $\{\gamma_n\}_{n=0}^\infty$ (see Theorem \ref{thm:ms}). \end{example} For the reader's convenience we provide the following compilation of combinatorial identities that will be used extensively throughout the paper. These types of calculations have already been observed in the proof of Theorem \ref{thm:mscomb}. 
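The sign pattern in the example above is easy to confirm with exact rational arithmetic. The following sketch (Python; an added illustration, not part of the original argument) recomputes $\gamma_n=\sum_{k=0}^n\binom{n}{k}\frac{(-1)^k}{k!}$ and exhibits the eventual sign change at $\gamma_8=\frac{887}{5760}$.

```python
from fractions import Fraction
from math import comb, factorial

# gamma_n = sum_{k=0}^n C(n,k) * (-1)^k / k!, the binomial transform of
# the alternating multiplier sequence {(-1)^k / k!}.
gamma = [sum(Fraction(comb(n, k) * (-1) ** k, factorial(k)) for k in range(n + 1))
         for n in range(9)]

print(gamma)  # 1, 0, -1/2, -2/3, -5/8, ..., 887/5760

# The signs +, 0, -, -, -, -, -, -, + match no function in LP^s or LP^a.
assert gamma[2] == Fraction(-1, 2)
assert gamma[8] == Fraction(887, 5760)
```

Since the negative entries are followed by a positive one, the coefficient signs can match neither $\lp^s$ nor $\lp^a$, which is exactly the failure asserted in the example.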
\begin{theorem}[{\hspace{1sp}\cite[p. 49]{Rio79}, \cite[Proposition 33, p. 35]{Pio07}}]\label{thm:comb} Given a sequence of real numbers, $\{\alpha_k\}_{k=0}^\infty$, for each $n\in\N_0$, define, \begin{equation} \beta_n= \sum_{k=0}^n \binom{n}{k} \alpha_k. \end{equation} Then, for all $n\in\N_0$, \begin{equation} \alpha_n = \sum_{k=0}^n \binom{n}{k} \beta_k (-1)^{n-k}. \end{equation} In particular, we have, \begin{equation} e^x \sum_{n=0}^\infty \frac{\alpha_n}{n!} x^n = \sum_{n=0}^\infty \frac{\beta_n}{n!}x^n\ \ \ \text{and}\ \ \ e^{-x} \sum_{n=0}^\infty \frac{\beta_n}{n!}x^n = \sum_{n=0}^\infty \frac{\alpha_n}{n!}x^n. \end{equation} Similarly, if $\{g_k^*(x)\}_{k=0}^\infty$ are the reversed Jensen polynomials associated with $\{\gamma_k\}_{k=0}^\infty$, then for every $n\in\N_0$, \begin{equation}\label{eq:jenreverse} \gamma_n=\sum_{k=0}^n \binom{n}{k} g_k^*(-1)\ \ \ \text{and}\ \ \ g_n^*(-1)=\sum_{k=0}^n \binom{n}{k} \gamma_k (-1)^{n-k}. \end{equation} Likewise, if $\{\gamma_k\}_{k=0}^\infty$ diagonalizes the classical diagonal differential operator, $T$, then \begin{equation}\label{eq:diagst} T[x^n]=\left(\sum_{k=0}^\infty \frac{g_k^*(-1)}{k!} x^k D^k \right)x^n = \gamma_n x^n. \end{equation} \end{theorem} \section{Operator Diagonalizations of Diagonalizable Operators} Our main objective is to present a new representation of diagonal differential operators (Theorem \ref{thm:classic}). We will only need to assume that $\deg(Q_k(x))\le k$ for each $k\in\N_0$; a property that all diagonal differential operators have, as the recursive formula of Theorem \ref{thm:piotr} shows (see also \cite{Bat14a}). \begin{theorem}\label{thm:classic} Given a linear operator, $T:\R[x]\to\R[x]$, \begin{equation} T=\sum_{k=0}^\infty Q_k(x)D^k, \end{equation} where $\deg(Q_k(x))\le k$ for every $k\in\N_0$. 
Define the family of sequences, \begin{equation}\label{eq:aform} \{b_{n,k}\}_{k=0}^\infty:=\left\{\sum_{j=0}^k\binom{k}{j}Q_{j+n}^{(j)}(0)\right\}_{k=0}^\infty,\ \ \ \ \ \ \ \ n\in\N_0. \end{equation} For each $n\in\N_0$, define the classical diagonal differential operator, \begin{equation}\label{eq:tn} T_n[x^k]:=b_{n,k}x^k. \end{equation} Then, \begin{equation}\label{eq:diawriting} T=\sum_{n=0}^\infty T_nD^n. \end{equation} Furthermore, the representation in \eqref{eq:diawriting} is unique. \end{theorem} \begin{proof} Since we are concerned with operators defined on $\R[x]$, convergence is not an issue. By Theorem \ref{thm:comb}, for every $n\in\N_0$, we know the differential representation of $T_n$, namely, \begin{equation}\label{eq:formula} T_n=\sum_{k=0}^\infty \left(\sum_{j=0}^k \binom{k}{j}b_{n,j}(-1)^{k-j}\right)\frac{1}{k!}x^kD^k= \sum_{k=0}^\infty \frac{Q_{k+n}^{(k)}(0)}{k!}x^kD^k. \end{equation} Note that $\frac{Q_{k+n}^{(k)}(0)}{k!}x^k$ is precisely the $k^{\text{th}}$ term of the polynomial, $Q_{k+n}(x)$. Hence, each summand, in each $T_n$, is one term from some $Q_k(x)$. Furthermore, no two $T_n$'s use the same term in a particular $Q_k(x)$. Finally, because $\deg(Q_k(x))\le k$, we are assured that every term in every $Q_k(x)$ will be present in some $T_n$. The uniqueness follows from the uniqueness of the differential representation in Theorem \ref{thm:piotr}. \end{proof} \begin{example} Theorem \ref{thm:classic} is best understood with the aid of a concrete illustration. Define the differential operator, \begin{equation} T:=\underset{Q_2(x)}{\underbrace{(a_2x^2+b_1x+c_0)}}D^2+\underset{Q_1(x)}{\underbrace{(a_1x+b_0)}}D+\underset{Q_0(x)}{\underbrace{(a_0)}}, \end{equation} where $a_2,a_1,a_0,b_1,b_0,c_0\in\R$. 
Using Theorem \ref{thm:classic}, we re-write $T$ in terms of the $T_n$'s, \begin{equation} \begin{array}{rcl} T &=& \left(\frac{Q_2^{(2)}(0)}{2!}x^2D^2+\frac{Q_1^{(1)}(0)}{1!}x^1D^1+\frac{Q_0^{(0)}(0)}{0!} x^0D^0\right) D^0\ + \\&& \\&&\left(\frac{Q_2^{(1)}(0)}{1!}x^1D^1+\frac{Q_1^{(0)}(0)}{0!}x^0D^0\right) D^1 \ + \\&& \\&&\left(\frac{Q_2^{(0)}(0)}{0!}x^0D^0\right) D^2 \\&& \\&=& \underset{\mathcol{T_0}}{\underbrace{(a_2x^2D^2+a_1xD+a_0)}}+\underset{\mathcol{T_1}}{\underbrace{(b_1xD+b_0)}}D+\underset{\mathcol{T_2}}{\underbrace{(c_0)}}D^2. \end{array} \end{equation} \end{example} Theorem \ref{thm:classic} can be extended to arbitrary linear operators on $\R[x]$, in a manner reminiscent of a Laurent series in complex analysis (see \cite[p. 222]{MH99}). \begin{theorem}\label{thm:classic2} Let $T:\R[x]\to\R[x]$ be an arbitrary linear operator, \begin{equation} T:=\sum_{k=0}^\infty Q_k(x)D^k. \end{equation} Define the family of sequences, \begin{equation} \{b_{n,k}\}_{k=0}^\infty:=\left\{\sum_{j=0}^k\binom{k}{j}Q_{j+n}^{(j)}(0)\right\}_{k=0}^\infty,\ \ \ \ \ \ n\in\Z, \end{equation} where we take $Q_{j+n}^{(j)}(0)=0$ for $n+j<0$. For each $n\in\Z$, define the classical diagonal differential operator, \begin{equation} T_n[x^k]:=b_{n,k}x^k. \end{equation} Then, \begin{equation}\label{eq:diawriting2} T=\sum_{n=1}^{\infty} T_{-n}D^{-n} + \sum_{n=0}^\infty T_nD^n, \end{equation} where we define $D\cdot D^{-1}=1$. Furthermore, the representation in \eqref{eq:diawriting2} is unique. \end{theorem} \begin{proof} We first note that for each $n\in\Z$, $ T_n=\sum_{k=0}^\infty \frac{Q_{k+n}^{(k)}(0)}{k!}x^kD^k$ (see Theorem \ref{thm:comb}). As in the proof of Theorem \ref{thm:classic}, the terms of the $T_n$'s are in one-to-one correspondence with the terms of the $Q_k$'s. 
Thus, a change of index ($m=k+n$) yields, \begin{equation} T=\sum_{m=0}^\infty Q_m(x)D^m = \sum_{m=0}^\infty \sum_{k=0}^\infty \frac{Q_m^{(k)}(0)}{k!}x^k D^m = \sum_{n=-\infty}^\infty \left(\sum_{k=0}^\infty \frac{Q_{k+n}^{(k)}(0)}{k!}x^kD^k\right) D^{n} = \sum_{n=-\infty }^\infty T_n D^n. \nonumber\tag*{\qedhere} \end{equation} \end{proof} \begin{example} We provide another example demonstrating Theorem \ref{thm:classic2}. Define the differential operator, \begin{equation} T:=(a_2x^2+b_1x+c_0)D^2+(z_1x^2+a_1x+b_0)D+(y_0x^2+z_0x+a_0), \end{equation} where $y_0,z_1,z_0,a_2,a_1,a_0,b_1,b_0,c_0\in\R$. Using Theorem \ref{thm:classic2}, we rewrite $T$ in terms of $T_n$'s, \begin{align} T=& \underset{T_{-2}}{\underbrace{(y_0x^2D^2)}}\mathcol{D^{-2}}\ + \\& \underset{T_{-1}}{\underbrace{(z_1x^2D^2+z_0xD)}}\mathcol{D^{-1}}\ + \\& \underset{T_0}{\underbrace{(a_2x^2D^2+a_1xD+a_0)}}\mathcol{D^0}\ + \\& \underset{T_1}{\underbrace{(b_1xD+b_0)}}\mathcol{D^1}\ + \\& \underset{T_2}{\underbrace{(c_0)}}\mathcol{D^2}\ . \end{align} \end{example} \begin{example} It is possible for representation \eqref{eq:diawriting2} to be ``transcendental'' in both directions. Consider the differential operator, \begin{equation} T:=\sum_{k=0}^\infty (x^{2k}+1)D^k. \end{equation} Then for $n\in\N$, $T_{-n}=x^{2n}D^{2n}$ and $T_{n}=1$, while $T_0=2$, since $Q_0(x)=x^0+1=2$. Hence, \begin{align} T=& \cdots+T_{-2}D^{-2}+T_{-1}D^{-1}+T_0 D^0+T_1D^1+T_2D^2+\cdots \\ =&\cdots+(x^4D^4)D^{-2}+(x^2D^2)D^{-1}+(2)D^0+(1)D^1+(1)D^2+\cdots. \end{align} \end{example} Upon attaining the representation \eqref{eq:diawriting} in Theorem \ref{thm:classic}, we direct our attention to the property of hyperbolicity preservation. If $T$ in equation \eqref{eq:diawriting} is hyperbolicity preserving, then what properties do the $T_n$'s possess? One might hope that the $T_n$'s also enjoy the property of hyperbolicity preservation.
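Before examining that question, the bookkeeping of Theorem \ref{thm:classic} itself can be checked mechanically. The following self-contained Python sketch (an illustrative aid of our own, not part of the formal development; all helper names are ours) applies a finite-order operator both directly through its $Q_k$'s and through the diagonal pieces $T_n$ built from the $b_{n,k}$'s of \eqref{eq:aform}, for the operator $(x-2)(x+1)D^2+3(x+1/2)D+1$ that reappears in a later example.

```python
from fractions import Fraction as F
from math import comb

def deriv(p, n=1):
    """n-th derivative of a polynomial stored as {power: coefficient}."""
    for _ in range(n):
        p = {k - 1: k * c for k, c in p.items() if k >= 1}
    return p

def add(p, q):
    """Sum of two polynomial dicts, dropping zero coefficients."""
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def mul(p, q):
    """Product of two polynomial dicts."""
    out = {}
    for k1, c1 in p.items():
        for k2, c2 in q.items():
            out[k1 + k2] = out.get(k1 + k2, 0) + c1 * c2
    return {k: c for k, c in out.items() if c != 0}

# Q_0 = 1, Q_1 = 3x + 3/2, Q_2 = x^2 - x - 2,
# i.e. T = (x-2)(x+1)D^2 + 3(x+1/2)D + 1.
Q = [{0: F(1)}, {1: F(3), 0: F(3, 2)}, {2: F(1), 1: F(-1), 0: F(-2)}]

def b(n, k):
    """b_{n,k} = sum_j C(k,j) Q_{j+n}^{(j)}(0), as in eq:aform."""
    return sum(comb(k, j) * deriv(Q[j + n], j).get(0, F(0))
               for j in range(k + 1) if j + n < len(Q))

def apply_T(p):
    """T[p], computed directly as sum_k Q_k(x) p^{(k)}(x)."""
    out = {}
    for k in range(len(Q)):
        out = add(out, mul(Q[k], deriv(p, k)))
    return out

def apply_Tn_sum(p):
    """(T_0 + T_1 D + T_2 D^2)[p], each T_n acting diagonally via b_{n,k}."""
    out = {}
    for n in range(len(Q)):
        out = add(out, {k: b(n, k) * c for k, c in deriv(p, n).items()})
    return out

p = {5: F(1), 3: F(-4), 1: F(2), 0: F(-7)}   # x^5 - 4x^3 + 2x - 7
assert apply_T(p) == apply_Tn_sum(p)
```

Here, for instance, $b_{1,4}=Q_1(0)+4\,Q_2'(0)=3/2-4=-5/2$, matching the diagonal action of $T_1=-xD+3/2$ on $x^4$.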
This hope would certainly be warranted since, in fact, $T_0$ always possesses the property of hyperbolicity preservation in a diagonal differential operator (see \cite{Bat14a} and \cite[Theorem 158, p. 145]{Pio07}). In addition, classical multiplier sequences and operators of the form $f(xD)$ and $f(D)$, from the Hermite-Poulain \cite[p. 4]{Obr63} and Laguerre Theorems \cite[Satz 3.2]{Obr63}, trivially have $T_n$'s that are hyperbolicity preserving. However, in general, this hope is false, as the next several examples demonstrate. The following Tur\'an type inequality, equation \eqref{eq:tti}, will be of great use. \begin{theorem}[{R. Bates and R. Yoshida \cite[(2013)]{BY13}}]\label{thm:BYform} Let $a,b,c,r_1,r_2,r_3\in\R$. Define polynomials $Q_2(x)=a(x-r_1)(x-r_2)$, $Q_1(x)=b(x-r_3)$, and $Q_0(x)=c$. Then $T$ is hyperbolicity preserving, where \begin{equation} T:=Q_2(x)D^2+Q_1(x)D+Q_0(x), \end{equation} if and only if $a, b, c$ are of the same sign and \begin{equation}\label{eq:tti} b^2\left(\frac{(r_1-r_3)(r_3-r_2)}{(r_1-r_2)^2}\right)-ac\ge 0. \end{equation} We take $\left(\frac{(r_1-r_3)(r_3-r_2)}{(r_1-r_2)^2}\right)=\frac{1}{4}$ when $r_1=r_2=r_3$. If $r_1=r_2$ and $r_1\not=r_3$, then $T$ is not hyperbolicity preserving. \end{theorem} \begin{remark} For clarity, we point out that the condition that $a,b,c$ be of the same sign, in Theorem \ref{thm:BYform}, cannot be removed. For example, the following operator satisfies equation \eqref{eq:tti} but not the necessary sign condition of the leading coefficients, \begin{equation} T:=(x-1)(x+1)D^2-2xD+1. \end{equation} Hence, $T$ is not hyperbolicity preserving, as can be seen since $T[x^2]=-x^2-2$. \end{remark} \begin{example}\label{ex:qua} Consider the following differential operator, \begin{align} T:=&(x-2)(x+1)D^2+3(x+1/2)D+1 \\ =& (-2)D^2+(-xD+3/2)D+(x^2D^2+3xD+1) \\ =& T_2D^2+T_1D+T_0.
\end{align} By an application of Theorem \ref{thm:BYform}, operator $T$ is certainly hyperbolicity preserving, \begin{equation} 3^2\left(\frac{\lr{2-(-1/2)}\lr{(-1/2)-(-1)}}{\lr{(-1)-2}^2}\right)-1\cdot 1 = \frac{1}{4} \ge 0. \end{equation} However, $T_1=-xD+3/2$ (see \eqref{eq:tn}) is not a hyperbolicity preserver, since $T_1[x^2-1]=(-1/2)x^2-3/2$. \end{example} \begin{example}\label{ex:leg} Consider the Legendre basis of polynomials, $\{P_n(x)\}_{n=0}^\infty$, that satisfy the differential equation (Definition \ref{def:poly}), \begin{equation}\label{eq:legendrehp} ((x^2-1)D^2+(2x)D+1)P_n(x) = (n^2+n+1)P_n(x). \end{equation} The operator in \eqref{eq:legendrehp} was first verified to be hyperbolicity preserving by K. Blakeman, E. Davis, T. Forg\'acs, and K. Urabe \cite[Lemma 5]{BDFU12}. We re-verify that $(x^2-1)D^2+(2x)D+1$ is a hyperbolicity preserver using the calculation in Theorem \ref{thm:BYform}, \begin{equation} 2^2\left(\frac{(1-0)(0-(-1))}{(-1-1)^2}\right)-1\cdot 1 = 1-1=0 \ge 0. \end{equation} Hence, compositions of hyperbolicity preservers are hyperbolicity preserving, and thus $T$ is hyperbolicity preserving, where $T[P_n(x)]:=(n^2+n+1)^3P_n(x)$. We calculate the differential form of $T$ (see Theorem \ref{thm:piotr}), \begin{align} T=&((x^2-1)D^2+(2x)D+1)^3 \\=&(x^6-3x^4+\mathcol{3x^2}-1)D^6+ \\&(18x^5-36x^3+\mathcol{18x})D^5+ \\&(101x^4-130x^2+\mathcol{29})D^4+ \\&(208x^3-160x)D^3+ \\&(145x^2-57)D^2+ \\&(26x)D+ \\&1. \end{align} Consider the highlighted terms from above to calculate $T_4$ (see \eqref{eq:tn}), \begin{equation} T_4=3x^2D^2+18xD+29. \end{equation} From Theorem \ref{thm:BYform} we infer that operator $T_4$ fails to be hyperbolicity preserving, \begin{equation} 18^2\left(\frac{1}{4}\right)-3\cdot 29 = 81 - 87 = -6 < 0. \end{equation} \end{example} \begin{example}\label{ex:sher} Due to A. Piotrowski (see \cite[Lemma 157, p.
145]{Pio07}), affine transforms ($\{c_nB_n(\alpha x+\beta)\}_{n=0}^\infty$, $c_n,\alpha,\beta\in\R$, $c_n,\alpha\not=0$) share the same multiplier sequence class as the basis $\{B_n(x)\}_{n=0}^\infty$. Let us consider then an affine transform of the Hermite polynomials, $\{H_n(x\pm 3)\}_{n=0}^\infty$, and a multiplier sequence for these shifted Hermite polynomials, $\{n^2+n+1\}_{n=0}^\infty$ (see Theorem \ref{thm:msherm}). Thus $T$ is hyperbolicity preserving, where $T[H_n(x\pm 3)]=(n^2+n+1)H_n(x\pm 3)$. We calculate the differential form of $T$ (see Theorem \ref{thm:piotr}), \begin{equation}\label{eq:shercalc} T = \left(\frac{1}{4}\right)D^4+\lr{\mathcol{-x}\mp 3}D^3+\lr{x^2\pm 6x+\mathcol{\frac{15}{2}}}D^2+\lr{2x\pm 6}D+\lr{1}. \end{equation} From the highlighted items in \eqref{eq:shercalc} we formulate $T_2=-xD+15/2$ (see \eqref{eq:tn}) and note that $T_2$ is not hyperbolicity preserving since $T_2[2x^8-2x^6]=-x^8-3x^6$. It is intriguing to see that while affine transforms share multiplier sequence classes, the $T_n$'s in equation \eqref{eq:diawriting} may not share in the property of hyperbolicity preservation. Hence, as we will see in Theorems \ref{thm:hermiteissums} and \ref{thm:herm}, the Hermite polynomials are distinguished amongst all affine transforms of the Hermite polynomials. \end{example} \begin{example}\label{ex:slag} Consider the shifted Laguerre polynomials (see \cite[Lemma 157, p. 145]{Pio07}), $\{L_n(x+2)\}_{n=0}^\infty$, and a multiplier sequence for these shifted Laguerre polynomials, $\{n\}_{n=0}^\infty$ (see Theorem \ref{thm:mslag}). Thus $T$ is hyperbolicity preserving, where $T[L_n(x+2)]=nL_n(x+2)$ and \begin{equation} T = (\mathcol{-x}-2)D^2+(x+\mathcol{1})D+(0). \end{equation} Consider the operator formed by the highlighted terms, $T_1=-xD+1$. Operator $T_1$ fails to preserve hyperbolicity since $T_1[x^2-1]=-x^2-1$. (See also Question \ref{que:que5} in the open problems.)
\end{example} \begin{example}\label{ex:malo} A more technical example is the following. Using the generalized Malo-Schur-Szeg\"o Composition Theorem \cite{Bru49,CC04} it can be shown that, given $p(x)=(x+1)^3$, \begin{align} T:&=-\frac{1}{6}p'''(x)D^3+\frac{1}{2}p''(x)D^2-p'(x)D+p(x) \\ &=-D^3+(\mathcol{3x}+3)D^2+(-3x^2-6x\mathcol{-3})D+(x^3+3x^2+3x+1) \end{align} is hyperbolicity preserving \cite[p. 47]{Yos13}. Define $T_{1}:=3xD-3$ (see \eqref{eq:tn}) and note that $T_{1}[x^2-1]=3x^2+3$, thus $T_1$ is not hyperbolicity preserving. \end{example} \begin{example}\label{ex:notdia} We give another example, involving $Q_k$'s where $\deg(Q_k(x))>k$ for some of the $k$'s. Using the Hermite-Poulain Theorem \cite[p. 4]{Obr63} it can be shown that the non-diagonalizable operator, \begin{equation} T:=(\mathcol{x^2}+2x+1)D^2-(x^2+2x+\mathcol{1}), \end{equation} preserves hyperbolicity. The operator $T_0=x^2D^2-1$ (see \eqref{eq:tn}) is not a hyperbolicity preserver, since $T_0[x^2-1]=x^2+1$. This example is even more interesting considering the fact that, in general, $W_0$ is always hyperbolicity preserving, whenever $W$ is any arbitrary diagonal differential hyperbolicity preserver (see \cite{Bat14a}). \end{example} Examples \ref{ex:qua}-\ref{ex:notdia} will hopefully have convinced the reader of the delicacy of the following results; namely, for Hermite or Laguerre multiplier sequences the $T_n$'s in \eqref{eq:tn} from Theorem \ref{thm:classic} are hyperbolicity preservers. It is surprising that not only is each $T_n$ hyperbolicity preserving, but the families of sequences, $\{b_{n,k}\}_{k=0}^\infty$ (see \eqref{eq:aform}), turn out to be further Hermite or Laguerre multiplier sequences, respectively. In this sense every Hermite or Laguerre multiplier sequence generates an entire family of additional Hermite or Laguerre multiplier sequences.
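The three applications of the Tur\'an-type criterion \eqref{eq:tti} made in Examples \ref{ex:qua} and \ref{ex:leg} are easy to reproduce exactly. The short Python sketch below (our own numeric companion, not part of the formal development; the function name is ours) evaluates the left-hand side of \eqref{eq:tti} with rational arithmetic.

```python
from fractions import Fraction as F

def turan_quantity(a, r1, r2, b, r3, c):
    """b^2 * ((r1-r3)(r3-r2)/(r1-r2)^2) - a*c, with the 1/4 convention
    when r1 = r2 = r3 (Theorem thm:BYform)."""
    if r1 == r2 == r3:
        ratio = F(1, 4)
    else:
        ratio = F((r1 - r3) * (r3 - r2)) / F((r1 - r2) ** 2)
    return b * b * ratio - a * c

# Example ex:qua: T = (x-2)(x+1)D^2 + 3(x+1/2)D + 1 -> 1/4 >= 0 (HP)
assert turan_quantity(1, 2, -1, 3, F(-1, 2), 1) == F(1, 4)
# Example ex:leg: (x^2-1)D^2 + (2x)D + 1 -> 0 >= 0 (HP)
assert turan_quantity(1, 1, -1, 2, 0, 1) == 0
# Example ex:leg, T_4 = 3x^2D^2 + 18xD + 29 -> -6 < 0 (not HP)
assert turan_quantity(3, 0, 0, 18, 0, 29) == -6
```

A non-negative value certifies hyperbolicity preservation (given the sign condition on $a,b,c$); the final evaluation is the $-6<0$ that disqualifies $T_4$.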
\section{Operator Diagonalizations of Hermite Multiplier Sequences} Our main goal in this section is to demonstrate that, for hyperbolicity preserving Hermite diagonal differential operators, each $T_n$ defined in Theorem \ref{thm:classic} is hyperbolicity preserving. This will be done in two phases. First we will find a formula for $b_{n,k}$ (see \eqref{eq:aform}). Second, we will show that, for each $n\in\N_0$, $\{b_{n,k}\}_{k=0}^\infty$ is a Hermite multiplier sequence, and hence $\{b_{n,k}\}_{k=0}^\infty$ is also a classical multiplier sequence; i.e., each $T_n$ is hyperbolicity preserving. \begin{lemma}\label{lem:zeroherm} For $k,j\in\N_0$, the $k^\text{th}$ derivative of the $(k+2j+1)^\text{th}$ and $(k+2j)^\text{th}$ Hermite polynomials \textup{(}see Definition \ref{def:poly}\textup{)} evaluated at zero is, \begin{equation} H_{k+2j+1}^{(k)}(0)=0\ \ \ \text{and}\ \ \ H_{k+2j}^{(k)}(0)=\frac{(k+2j)!2^k(-1)^j}{j!}. \end{equation} \end{lemma} \begin{theorem}\label{thm:hermiteissums} Let $T$ be a Hermite diagonal differential operator, $T[H_n(x)]:=\gamma_n H_n(x)$, where $\{\gamma_n\}_{n=0}^\infty$ is a sequence of real numbers. Then there is a sequence of polynomials, $\{Q_k(x)\}_{k=0}^\infty$, and a sequence of classical diagonal differential operators, $\{T_n\}_{n=0}^\infty$, such that \begin{equation}\nonumber T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right) H_n(x) = \left(\sum_{k=0}^\infty T_k D^k \right) H_n(x) = \gamma_n H_n(x). \end{equation} Moreover, for each $n\in\N_0$, \begin{equation}\nonumber \{b_{2n+1,m}\}_{m=0}^\infty = \{0\}_{m=0}^\infty \end{equation} and \begin{equation}\nonumber \{b_{2n,m}\}_{m=0}^\infty:=\left\{\sum_{k=0}^m \binom{m}{k} \frac{(-1)^n}{n!2^n} \left(\sum_{j=0}^n \binom{n}{j} \frac{g_{k+n+j}^*(-1)}{2^{j}}\right)\right\}_{m=0}^\infty, \end{equation} where $T_n[x^m]=b_{n,m}x^m$ for every $n,m\in\N_0$.
\end{theorem} \begin{proof} The existence of the sequences $\{Q_k(x)\}_{k=0}^\infty$ and $\{T_k\}_{k=0}^\infty$ is established by Theorems \ref{thm:piotr} and \ref{thm:classic}. We now begin with the remarkable representation formula of T. Forg\'acs and A. Piotrowski that computes the $Q_k$'s in any Hermite diagonal differential operator \cite[Theorem 3.1]{FP13b}, \begin{equation}\label{eq:forgacs} Q_k(x)=\sum_{j=0}^{[k/2]} \frac{(-1)^j}{j!(k-2j)!2^{k-j}}g_{k-j}^*(-1)H_{k-2j}(x). \end{equation} This formula yields the following expressions for all $k,n\in\N_0$, \begin{equation}\label{eq:calc1} Q_{k+2n+1}^{(k)}(0)=0, \end{equation} and \begin{equation}\label{eq:calc2} Q_{k+2n}^{(k)}(0)=\frac{(-1)^n}{n!2^n}\sum_{j=0}^{n} \binom{n}{j}\frac{g_{k+n+j}^*(-1)}{2^{j}}. \end{equation} Equations \eqref{eq:calc1} and \eqref{eq:calc2} could have been calculated using the recursive formula of Theorem \ref{thm:piotr}, if one knew, \textit{a priori}, the importance of the $g_{k-j}^*(-1)$'s in formula \eqref{eq:forgacs}. However, this dependence was not made apparent until formula \eqref{eq:forgacs} was uncovered. Let us now verify \eqref{eq:calc1} and \eqref{eq:calc2}. Equation \eqref{eq:calc1} is obvious from formula \eqref{eq:forgacs} and the fact that the Hermite polynomials alternate between even and odd polynomials. We now establish \eqref{eq:calc2} using formula \eqref{eq:forgacs} and Lemma \ref{lem:zeroherm} as follows: \begin{align} Q_{k+2n}^{(k)}(0) &=\sum_{j=0}^{[(k+2n)/2]} \frac{(-1)^j}{j!(k+2n-2j)!2^{k+2n-j}}g_{k+2n-j}^*(-1)H_{k+2n-2j}^{(k)}(0) \\ &=\sum_{j=0}^{n} \frac{(-1)^j}{j!(k+2(n-j))!2^{k+n+(n-j)}}g_{k+n+(n-j)}^*(-1)H_{k+2(n-j)}^{(k)}(0) \\ &=\sum_{j=0}^{n} \frac{(-1)^{n-j}}{(n-j)!(k+2j)!2^{k+n+j}}g_{k+n+j}^*(-1)H_{k+2j}^{(k)}(0) \\ &=\sum_{j=0}^{n} \frac{(-1)^{n-j}}{(n-j)!(k+2j)!2^{k+n+j}}g_{k+n+j}^*(-1)\left(\frac{(k+2j)!2^k(-1)^j}{j!}\right) \\ &=\frac{(-1)^n}{n!2^n}\sum_{j=0}^{n} \binom{n}{j}\frac{g_{k+n+j}^*(-1)}{2^{j}}.
\end{align} We finish the proof by using formula \eqref{eq:aform}. \end{proof} With the aid of what has been shown thus far, we are now in a position to demonstrate our main result, that every Hermite multiplier sequence is the unique sum of classical multiplier sequences. That is, for Hermite multiplier sequences, each $T_n$ in equation \eqref{eq:tn} is hyperbolicity preserving. The spirit of the following argument will be the establishment of a Rodrigues type formula that relates each governing entire function, $\sum_{k=0}^\infty \frac{b_{n,k}}{k!}x^k$, of each $T_n$, with the entire function that defines the hyperbolicity properties of $T$ itself, $\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k$ (see Theorems \ref{thm:ms} and \ref{thm:msherm}). \begin{theorem}\label{thm:herm} Let $\{\gamma_k\}_{k=0}^\infty$ be a non-trivial Hermite multiplier sequence and let $\{g_k^*(x)\}_{k=0}^\infty$ be the reversed Jensen polynomials associated with $\{\gamma_k\}_{k=0}^\infty$. Then, for each $n\in\N_0$, \begin{equation}\label{eq:crazyherm} \{b_{n,m}\}_{m=0}^\infty:=\left\{\sum_{k=0}^m \binom{m}{k} \frac{(-1)^n}{n!2^n} \left(\sum_{j=0}^n \binom{n}{j} \frac{g_{k+n+j}^*(-1)}{2^{j}}\right)\right\}_{m=0}^\infty, \end{equation} is a Hermite multiplier sequence. \end{theorem} \begin{proof} By assumption, $\{\gamma_k\}_{k=0}^\infty$ is a Hermite multiplier sequence. Hence, by Theorem \ref{thm:msherm}, if \begin{equation} f(x):=\sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} x^k := \sum_{k=0}^\infty \frac{g_k^*(-1)}{k!} x^k, \end{equation} then, either $f(x)\in\lp^s$ or $e^{2x}f(x)\in\lp^a$. We wish to show that $\{b_{n,m}\}_{m=0}^\infty$ is a Hermite multiplier sequence; thus using Theorem \ref{thm:msherm} we must show that if \begin{equation}\label{h_n's} h_n(x):=\sum_{m=0}^\infty \left(\sum_{k=0}^m \binom{m}{k} b_{n,k} (-1)^{m-k} \right) \frac{x^m}{m!}, \end{equation} then either $h_n(x)\in\lp^s$ or $e^{2x}h_n(x)\in\lp^a$.
We use Theorem \ref{thm:comb} and perform the following calculation, \begin{align} h_n(x) &=\sum_{m=0}^\infty \left(\sum_{k=0}^m \binom{m}{k} b_{n,k} (-1)^{m-k} \right) \frac{x^m}{m!} \\ &=\sum_{k=0}^\infty \left(\frac{(-1)^n}{n!2^n} \left(\sum_{j=0}^n \binom{n}{j} \frac{g_{k+n+j}^*(-1)}{2^{j}}\right) \right)\frac{x^k}{k!} \\ &=\frac{(-1)^n}{n!2^n} \sum_{j=0}^n \binom{n}{j}\frac{1}{2^{j}} \sum_{k=0}^\infty \left(\frac{g_{k+n+j}^*(-1)}{k!}\right) x^k \\ &=\frac{(-1)^n}{n!2^n} \sum_{j=0}^n \binom{n}{j} \frac{1}{2^j} D^{n+j} f(x) \\ &=\frac{(-1)^n}{n!4^n} D^n \left(\sum_{j=0}^n \binom{n}{j} D^{j} 2^{n-j}\right) f(x) \\ &=\frac{(-1)^n}{n!4^n} D^n (2+D)^n f(x) \\ &=\frac{(-1)^n}{n!4^n} D^n e^{-2x} D^n e^{2x} f(x). \label{eq:hermrecur} \end{align} Hence, if $f(x)\in\lp^s$, then $h_n(x)\in\lp^s$ and if $e^{2x}f(x)\in\lp^a$, then $e^{2x}h_n(x)\in\lp^a$ (see also Remark \ref{rm:lpinc}). \end{proof} Equation \eqref{eq:hermrecur} yields a little more information than Theorem \ref{thm:herm}; in particular, we derive the recursive formula, \begin{equation} h_n(x)=\frac{-1}{4n}De^{-2x}De^{2x} h_{n-1}(x),\ \ \ \ \ (n\ge 1,\ h_0(x):=f(x)). \end{equation} Hence, only $T_n$ needs to be diagonalizable with a Hermite multiplier sequence to establish that $T_{n+1}$ is also diagonalizable with a Hermite multiplier sequence. Given a Hermite diagonal differential operator, $T[H_n(x)]=\gamma_n H_n(x)$, $\gamma_n\in\R$ (see Definition \ref{def:diagonalize}), the operator $T_0$ (see \eqref{eq:tn}) diagonalizes with the same eigenvalue sequence, namely $T_0[x^n]=\gamma_n x^n$. In fact, this is more generally known (see \cite{Bat14a}). This indicates that Theorem \ref{thm:herm} has a trivial converse: if one assumes that each $T_n$ diagonalizes with a Hermite multiplier sequence, then $T$ itself will also be hyperbolicity preserving. However, what if one only assumes that each $T_n$ is hyperbolicity preserving?
Must $T$ be hyperbolicity preserving? We answer this question in the negative, with the following examples. \begin{example}\label{ex:notherm1} Consider the following Hermite diagonal operator that is not hyperbolicity preserving (see Theorem \ref{thm:msherm}), \begin{equation} T[H_n(x)]:=\left((-1)^{n+1}(n-1)\right)H_n(x). \end{equation} Thus we calculate, \begin{equation} w(x):=\sum_{k=0}^\infty \frac{(-1)^{k+1}(k-1)}{k!}x^k = (x+1)e^{-x}. \end{equation} Hence, using equation \eqref{eq:hermrecur} (note, $f(x)=e^{-x}w(x)$ (see Theorem \ref{thm:msherm})), we can calculate the $h_n$'s, \begin{align} h_0(x)& := \sum_{k=0}^\infty \frac{Q_k^{(k)}(0)}{k!}x^k = (x+1)e^{-2x},\\ h_1(x)& := \sum_{k=0}^\infty \frac{Q_{k+2}^{(k)}(0)}{k!}x^k = \frac{1}{2}e^{-2x},\ \ \ \ \ \text{and}\\ h_n(x)& := \sum_{k=0}^\infty \frac{Q_{k+2n}^{(k)}(0)}{k!}x^k = 0,\ \ \ \ \ \text{for}\ n\ge 2. \end{align} Hence, \begin{align} T&=1-xD+\sum_{k=0}^\infty \Bigg(\frac{\overset{\underbrace{h_0^{(k+2)}(0)}}{\mathcol{k(-2)^{k+1}}}}{(k+2)!}x^{k+2}+\frac{\overset{\underbrace{h_1^{(k)}(0)}}{\mathcol{-(-2)^{k-1}}}}{k!}x^k\Bigg) D^{k+2} \\ &=T_0+T_2D^2. \end{align} Thus, \begin{align} T_0[x^n] &=\Bigg(1-xD+\sum_{k=0}^\infty\frac{k(-2)^{k+1}}{(k+2)!}x^{k+2}D^{k+2}\Bigg) x^n = \left((-1)^{n+1}(n-1)\right) x^n, \\ T_2[x^n]&=\left(\sum_{k=0}^\infty \left(\frac{-(-2)^{k-1}}{k!}x^k\right)D^{k}\right) x^n = \left(\frac{1}{2}(-1)^n\right) x^n,\ \ \ \ \ \text{and} \\ T_{2m}[x^n] &= \left(0\right)x^n,\ \ \ \ \ \text{for}\ m\ge 2. \end{align} We see that for every $n\ge 1$, $h_n(x)\in\lp^a$, hence $T_{2n}$ is hyperbolicity preserving (see Theorem \ref{thm:ms}). However, the original operator $T$ itself is not hyperbolicity preserving, as the following calculation shows, \begin{align} T[4x^2+2x-5]=&\ T[\overset{\underbrace{-3H_0(x)}}{(-3)}+\overset{\underbrace{H_1(x)}}{(2x)}+\overset{\underbrace{H_2(x)}}{(4x^2-2)}]\\ =&\ 1(-3)+0(2x)+(-1)(4x^2-2)=-4x^2-1.
\end{align} \end{example} \begin{example}\label{ex:notherm2} Consider another Hermite diagonal operator that does not preserve hyperbolicity (see Theorem \ref{thm:msherm}), $\{\gamma_k\}_{k=0}^\infty=\{(1/2)^k\}_{k=0}^\infty$; that is, \begin{equation} T[H_n(x)]=\gamma_nH_n(x):=(1/2)^n H_n(x). \end{equation} Using Theorem \ref{thm:classic} we write $T=\sum_{n=0}^\infty T_nD^n$, where $T_n[x^m]=b_{n,m}x^m$. We rewrite formula \eqref{eq:hermrecur} in terms of the $b_{2n,m}$'s and $\gamma_n$'s (see Theorem \ref{thm:comb}), \begin{equation} \sum_{k=0}^\infty \frac{b_{2n,k}}{k!}x^k = \frac{(-1)^n}{n!4^n} e^x D^n e^{-2x} D^n e^x \sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k. \end{equation} Since $\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k = e^{x/2}$, we have \begin{equation} \sum_{k=0}^\infty \frac{b_{2n,k}}{k!}x^k = \frac{(-1)^n}{n!4^n}\left(-\frac{1}{2}\right)^n\left(\frac{3}{2}\right)^n e^{x/2}. \end{equation} Thus $\sum_{k=0}^\infty \frac{b_{2n,k}}{k!}x^k\in\lp^s$ for every $n\in\N_0$. Hence, $T_{2n}$ is hyperbolicity preserving for every $n\in\N_0$ (see Theorem \ref{thm:ms}), however, as noted above, $T$ is not hyperbolicity preserving (see Theorem \ref{thm:msherm}). \end{example} \begin{example} To demonstrate the usefulness of Theorem \ref{thm:herm}, consider the following example. How would one show that \begin{equation} \{a_m\}_{m=0}^\infty:=\{m^{5/2}\}_{m=0}^\infty \end{equation} is not a multiplier sequence? Sequence $\{a_m\}_{m=0}^\infty$ satisfies the Tur\'an inequalities and is a positive, increasing sequence. Thus some well known methods do not work (see, for example, \cite[p. 341]{Lev80}, concerning the Tur\'an inequalities). One could apply the sequence to $(1+x)^5$ to calculate the fifth associated Jensen polynomial, \begin{equation} (5)x+(56.56\ldots)x^2+(155.88\ldots)x^3+(160)x^4+(55.90\ldots)x^5 \end{equation} and verify that this polynomial has non-real zeros; however, this can prove to be quite tedious.
Instead, we apply Theorem \ref{thm:herm} and calculate as summarized in Figure \ref{eq:bnthing}. \begin{figure} \caption{Table of Hermite diagonal differential operator eigenvalues.} \label{eq:bnthing} \end{figure} Hence, after a few simple \textit{numerical} calculations we arrive at the highlighted portions in Figure \ref{eq:bnthing} and note that they are negative and increasing, so $\{b_{3,n}\}_{n=0}^\infty$ is not a Hermite multiplier sequence (see Theorem \ref{thm:msherm}). Thus, the original sequence, $\{a_m\}_{m=0}^\infty$, is not a Hermite multiplier sequence. Consequently, since $\{a_m\}_{m=0}^\infty$ is an increasing sequence that is not a Hermite multiplier sequence, by Theorem \ref{thm:msherm}, we conclude that $\{a_m\}_{m=0}^\infty$ cannot be a classical multiplier sequence. \end{example} Our next task is to present several relationships between the polynomial coefficients, the $Q_k$'s, and the eigenvalues, the $\gamma_k$'s, in a Hermite diagonal differential operator, \begin{equation} T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x) D^k\right)H_n(x)=\gamma_n H_n(x). \end{equation} In general, in a diagonal differential operator, the relationship between the $Q_k$'s and the $\gamma_k$'s is not well understood, particularly in the context of hyperbolicity preservation. In special cases direct formulas have been found (see, for example, \eqref{eq:forgacs}; cf.~Theorem \ref{thm:piotr} and \cite[Proposition 216, p. 107]{Cha11}), but a general relation has not been derived that indicates the properties of the $Q_k$'s and the $\gamma_k$'s for arbitrary hyperbolicity preserving operators. Thus, whenever possible, it is beneficial to present formulas that highlight the nature of the $Q_k$'s in terms of the eigenvalues, the $\gamma_k$'s. Using calculations \eqref{eq:calc1} and \eqref{eq:calc2}, in Theorem \ref{thm:hermqk} we can provide another formula for the $Q_k$'s in a Hermite diagonal differential operator.
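Returning briefly to the example of $\{m^{5/2}\}_{m=0}^\infty$ above: the kind of numerical calculation summarized in Figure \ref{eq:bnthing} can be reproduced in a few lines. The sketch below (an illustrative computation of our own, not the original tabulation) evaluates $b_{3,m}$ via \eqref{eq:crazyherm} in floating point and checks that the first several values are negative and increasing.

```python
from math import comb, factorial

gamma = [m ** 2.5 for m in range(16)]        # the sequence {m^(5/2)}

def g_star(k):
    """g_k^*(-1): the k-th finite difference of the gamma's at 0."""
    return sum(comb(k, j) * (-1) ** (k - j) * gamma[j] for j in range(k + 1))

def b(n, m):
    """b_{n,m} from eq:crazyherm (Theorem thm:herm)."""
    pref = (-1) ** n / (factorial(n) * 2 ** n)
    return sum(comb(m, k) * pref *
               sum(comb(n, j) * g_star(k + n + j) / 2 ** j
                   for j in range(n + 1))
               for k in range(m + 1))

vals = [b(3, m) for m in range(5)]
assert all(v < 0 for v in vals)                    # all negative
assert all(u < v for u, v in zip(vals, vals[1:]))  # strictly increasing
```

A negative, increasing sequence cannot be a Hermite multiplier sequence (see Theorem \ref{thm:msherm}), which is the contradiction the example exploits.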
\begin{theorem}\label{thm:hermqk} Let $\{\gamma_n\}_{n=0}^\infty$ be a sequence of real numbers and $\{Q_k(x)\}_{k=0}^\infty$ be a sequence of real polynomials, such that \begin{equation} T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)H_n(x) = \gamma_n H_n(x),\ \ \ n\in\N_0. \end{equation} Then for each $m\in\N_0$, \begin{equation} Q_m(x)=\sum_{k=0}^{[m/2]} \frac{(-1)^k}{k!2^k} \left(\sum_{j=0}^k \binom{k}{j} \frac{g_{m-k+j}^*(-1)}{2^{j}}\right)\frac{x^{m-2k}}{(m-2k)!}, \end{equation} where $\{g_k^*(x)\}_{k=0}^\infty$ are the associated reversed Jensen polynomials of $\{\gamma_n\}_{n=0}^\infty$. \end{theorem} We also derive a complex formulation for the $Q_k$'s in a Hermite diagonal differential operator (Theorem \ref{thm:hermcomplex}). A heuristic argument of the proof of Theorem \ref{thm:hermcomplex} follows easily by considering the generating function of the Hermite polynomials (see \cite[p. 187]{Rai60}), \begin{equation}\label{eq:hermgen} e^{2xt-t^2}=\sum_{n=0}^\infty \frac{H_n(x)}{n!} t^n. \end{equation} We now calculate $T[e^{2xt-t^2}]$ in two ways, \begin{align} T[e^{2xt-t^2}]&=\left(\sum_{k=0}^\infty Q_k(x)D^k\right) e^{2xt-t^2}=e^{2xt-t^2}\sum_{k=0}^\infty Q_k(x)(2t)^k,\ \ \ \ \ \text{and} \\ T[e^{2xt-t^2}]&=T\left[\sum_{n=0}^\infty \frac{H_n(x)}{n!}t^n\right]=\sum_{n=0}^\infty \frac{\gamma_n H_n(x)}{n!}t^n. \end{align} Hence, \begin{equation}\label{eq19} \sum_{k=0}^\infty Q_k(x) (2t)^k = e^{-2xt+t^2} \left(\sum_{n=0}^\infty \frac{\gamma_n H_n(x)}{n!}t^n \right). 
\end{equation} Thus, performing a Cauchy product on the right hand side of \eqref{eq19} and comparing the coefficients of $t^n$ on the right and left of \eqref{eq19}, for each $n\in\N_0$, we have, \begin{align} Q_n(x)2^n &= \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} \left.\left(\frac{d^{n-k}}{dt^{n-k}}e^{-2xt+t^2}\right)\right|_{t=0} \left.\left(\frac{d^{k}}{dt^{k}} \sum_{j=0}^\infty \frac{\gamma_j H_j(x)}{j!}t^j\right)\right|_{t=0} \\ &= \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} \left.\frac{d^{n-k}}{dt^{n-k}} \sum_{j=0}^\infty \frac{H_j(ix)}{j!}(it)^j \right|_{t=0} \left.\frac{d^{k}}{dt^{k}} \sum_{j=0}^\infty \frac{\gamma_j H_j(x)}{j!}t^j\right|_{t=0} \\ &= \frac{1}{n!} \sum_{k=0}^n \binom{n}{k} i^{n-k}H_{n-k}(ix) \gamma_k H_k(x). \end{align} \begin{remark}\label{rmk:careful} We must be cautious with the argument above since $T[e^{2xt-t^2}]$ need not converge and hence is only calculated formally. However, even under formal assumptions there is no reason to assume that a differential representation of a linear operator will calculate the same formal series as the operator itself. That is, the calculation, \begin{equation}\label{eq:thingy} T[e^{2xt-t^2}]=e^{2xt-t^2}\sum_{k=0}^\infty Q_k(x)(2t)^k, \end{equation} has not been rigorously established. \end{remark} \begin{theorem}\label{thm:hermcomplex} Let $\{\gamma_n\}_{n=0}^\infty$ be a sequence of real numbers and $\{Q_k(x)\}_{k=0}^\infty$ be a sequence of real polynomials, such that \begin{equation} T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)H_n(x) = \gamma_n H_n(x),\ \ \ n\in\N_0. \end{equation} Then for each $n\in\N_0$, \begin{equation}\label{eq:newherm} Q_n(x)=\frac{1}{n!2^n}\sum_{k=0}^{n} \binom{n}{k} \gamma_k i^{n-k}H_{n-k}(ix)H_k(x). \end{equation} \end{theorem} \begin{proof} Define \begin{equation} \tilde{T}:=\sum_{k=0}^\infty Q_k(x)D^k, \end{equation} where we define $Q_k(x)$ from equation \eqref{eq:newherm}. In the spirit of T. Forg\'acs and A. 
Piotrowski \cite[Theorem 3.1]{FP13b}, we need only show that $\tilde{T}[H_n(x)]=\gamma_n H_n(x)$ for each $n\in\N_0$. We note that for $n,m\in\N_0$, $D^m H_n(x) = 2^m \binom{n}{m} m!\, H_{n-m}(x)$ \cite[p. 188]{Rai60}. We also note that $\binom{n}{k}\binom{k}{j} = \binom{n}{j}\binom{n-j}{k-j}$ (see \cite[p. 3]{Rio79}). Using the generating function of the Hermite polynomials, equation \eqref{eq:hermgen}, we now calculate \begin{align} \tilde{T}[H_n(x)] &=\sum_{k=0}^n\left(\frac{1}{k!2^k}\sum_{j=0}^{k} \binom{k}{j} \gamma_j i^{k-j}H_{k-j}(ix)H_j(x)\right)\mathcol{D^k H_n(x)} \\ &=\sum_{k=0}^n\left(\frac{1}{k!2^k}\sum_{j=0}^{k} \binom{k}{j} \gamma_j i^{k-j}H_{k-j}(ix)H_j(x)\right)\mathcol{2^k\binom{n}{k}k! H_{n-k}(x)} \\ &=\sum_{j=0}^n \gamma_j H_j(x) \sum_{k=j}^{n} \binom{n}{k} \binom{k}{j} i^{k-j}H_{k-j}(ix)H_{n-k}(x) \\ &=\sum_{j=0}^n \gamma_j H_j(x) \sum_{k=0}^{n-j} \mathcol{\binom{n}{k+j} \binom{k+j}{j}} i^{k}H_{k}(ix)H_{(n-j)-k}(x) \\ &=\sum_{j=0}^n \gamma_j H_j(x) \sum_{k=0}^{n-j} \mathcol{\binom{n}{j} \binom{n-j}{k}} i^{k}H_{k}(ix)H_{(n-j)-k}(x) \\ &=\sum_{j=0}^n \binom{n}{j} \gamma_j H_j(x) \cdot\sum_{k=0}^{n-j} \binom{n-j}{k} \left(\left.\frac{d^{k}}{dt^k}e^{-2xt+t^2}\right|_{t=0}\right)\cdot \left(\left.\frac{d^{(n-j)-k}}{dt^{(n-j)-k}}e^{2xt-t^2}\right|_{t=0}\right) \\ &=\sum_{j=0}^n \binom{n}{j} \gamma_j H_j(x) \left.\frac{d^{n-j}}{dt^{n-j}}e^{-2xt+t^2}e^{2xt-t^2}\right|_{t=0} \\ &=\gamma_n H_n(x). \nonumber\tag*{\qedhere} \end{align} \end{proof} We can also establish an interesting relationship between alternating Hermite diagonal differential operators and non-alternating Hermite diagonal differential operators. This will allow us to provide an alternate proof and a non-obvious extension of T. Forg\'acs and A. Piotrowski \cite[Theorem 3.7]{FP13b} (cf.~Example~\ref{ex:hermfinin}). \begin{theorem}\label{thm:herminter} Let $\{\gamma_k\}_{k=0}^\infty$ be a sequence of real numbers.
Define the Hermite diagonal differential operators, \begin{equation} T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)H_n(x) = \gamma_n H_n(x) \end{equation} and \begin{equation} \tilde{T}[H_n(x)]:=\left(\sum_{k=0}^\infty \tilde{Q}_k(x)D^k\right)H_n(x) = (-1)^n\gamma_n H_n(x). \end{equation} Then for each $n\in\N_0$, \begin{equation}\label{eq:thingy23} Q_n(x)=\frac{(-2)^n}{n!}\left(\sum_{k=0}^\infty \frac{\tilde{Q}_k(x)}{2^k}D^k\right) x^n \end{equation} and \begin{equation} \tilde{Q}_n(x)=\frac{(-2)^n}{n!}\left(\sum_{k=0}^\infty \frac{Q_k(x)}{2^k}D^k\right) x^n. \end{equation} \end{theorem} \begin{proof} In light of Remark \ref{rmk:careful} and Theorem \ref{thm:hermcomplex}, we may conclude that, \begin{equation}\label{eq:thing8} \sum_{k=0}^\infty Q_k(x)(2t)^k = e^{-2xt+t^2}\left(\sum_{n=0}^\infty \frac{\gamma_n H_n(x)}{n!} t^n\right) \end{equation} and \begin{equation} \sum_{k=0}^\infty \tilde{Q}_k(x)(2t)^k = e^{-2xt+t^2}\left(\sum_{n=0}^\infty \frac{(-1)^n\gamma_n H_n(x)}{n!} t^n\right); \end{equation} i.e., as formal power series in $t$, the coefficients are equal (see \cite{Niv69} or \cite[p. 130]{Rot02}). Hence, after substitution of $t\to -t$, we have \begin{align} e^{-4xt}\sum_{k=0}^\infty Q_k(x)(-2t)^k &= e^{-4xt}\left(e^{2xt+t^2} \left(\sum_{n=0}^\infty \frac{(-1)^n\gamma_n H_n(x)}{n!} t^n\right) \right)\\ &= e^{-2xt+t^2} \sum_{n=0}^\infty \frac{(-1)^n\gamma_n H_n(x)}{n!} t^n \\ &= \sum_{k=0}^\infty \tilde{Q}_k(x)(2t)^k. \end{align} Thus, \begin{align} \tilde{Q}_n(x) &= \frac{1}{n!2^n}\frac{d^n}{dt^n} \left. e^{-4xt}\sum_{k=0}^\infty Q_k(x)(-2t)^k \right|_{t=0} \\ &= \frac{1}{n!2^n}\sum_{k=0}^n \binom{n}{k} (-4x)^{n-k} (-2)^k k! Q_k(x) \\ &= \frac{(-2)^n}{n!}\sum_{k=0}^n \binom{n}{k} \frac{x^{n-k}}{2^k} k! Q_k(x) \\ &= \frac{(-2)^n}{n!}\left(\sum_{k=0}^n \frac{Q_k(x)}{2^k}D^k \right)x^n. \end{align} By symmetry, equation \eqref{eq:thingy23} also holds. 
\end{proof} \begin{theorem}[{\hspace{1sp}\cite[Theorem 3.7]{FP13b}}]\label{thm:hermrealzeros} Let $\{\gamma_k\}_{k=0}^\infty$ be a non-trivial Hermite multiplier sequence, \begin{equation} T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)H_n(x) = \gamma_n H_n(x). \end{equation} Then each $Q_k(x)$ has only real zeros. \end{theorem} \begin{proof} If $\{\gamma_k\}_{k=0}^\infty$ is a Hermite multiplier sequence, then $\{(-1)^k\gamma_k\}_{k=0}^\infty$ is also a Hermite multiplier sequence \cite[Proposition 119, p. 98]{Pio07}. Hence, \begin{equation} \sum_{k=0}^\infty \tilde{Q}_k(x)D^k, \end{equation} is a hyperbolicity preserver. Thus, using the Borcea-Br\"and\'en Theorem \cite[Theorem 5]{BB09} (which requires that the sequence be non-trivial), we conclude that the operator, \begin{equation} \sum_{k=0}^\infty \frac{\tilde{Q}_k(x)}{2^k}D^k, \end{equation} is also a hyperbolicity preserver. In particular, by Theorem \ref{thm:herminter}, for each $n\in\N_0$, \begin{equation} Q_n(x)=\frac{(-2)^n}{n!}\left(\sum_{k=0}^\infty \frac{\tilde{Q}_k(x)}{2^k}D^k\right) x^n, \end{equation} has only real zeros. \end{proof} Theorem \ref{thm:herminter} actually shows that for every $k\in\N_0$, $Q_k(x)$ and $Q_{k+1}(x)$ have real interlacing zeros \cite[Remark 6, p. 5]{Cha11}; i.e., for every $\alpha,\beta\in\R$, $k\in\N_0$, $\alpha Q_k(x)+\beta Q_{k+1}(x)$ has only real zeros. We also note that Theorem \ref{thm:herminter} seems to indicate that only the polynomials $\{x^n\}_{n=0}^\infty$ are needed to establish that a Hermite diagonal differential operator is a hyperbolicity preserver. This observation provides us with a new algebraic characterization of Hermite multiplier sequences (cf. \cite[Theorem 46, p. 44]{Pio07}). \begin{theorem}\label{thm:minimal} Let $\{\gamma_n\}_{n=0}^\infty$ be a non-zero, positive, classical multiplier sequence of real numbers and let $T$ be a Hermite diagonal differential operator, where $T[H_n(x)]:=\gamma_n H_n(x)$ for every $n\in\N_0$.
Then $T$ is hyperbolicity preserving if and only if, \begin{equation} T[x^n]\in\lp, \end{equation} for every $n\in\N_0$. \end{theorem} \begin{proof} In order to establish the non-trivial direction, we must show that $\{\gamma_n\}_{n=0}^\infty$ is a Hermite multiplier sequence; i.e., that $T$ is hyperbolicity preserving. We will make use of the fact that $H'_{n}(x)=2nH_{n-1}(x)$ for every $n\in\N$ \cite[p. 188]{Rai60}. By assumption, for each $n\ge 2$, the following polynomial has only real zeros (see \cite[p. 194]{Rai60} for the Hermite expansion of $x^n$), \begin{align} D^{n-2}T[x^n] &=D^{n-2}T\left[\frac{n!}{2^n}\sum_{k=0}^{[n/2]} \frac{1}{k!(n-2k)!}H_{n-2k}(x)\right] \\ &=D^{n-2}\frac{n!}{2^n} \sum_{k=0}^{[n/2]} \frac{\gamma_{n-2k}}{k!(n-2k)!} H_{n-2k}(x) \\ &=\frac{n!}{2^n}\left(\gamma_n \frac{2^{n-2}}{2!}H_2(x)+\gamma_{n-2} \frac{2^{n-2}}{1!} H_0(x)\right) \\ &=n!\left(\frac{\gamma_n}{8}(4x^2-2)+\frac{\gamma_{n-2}}{4}(1)\right) \\ &=\frac{n!\gamma_n}{4}\left(2x^2+\left(\frac{\gamma_{n-2}}{\gamma_n}-1\right)\right). \end{align} Hence, $\frac{\gamma_{n-2}}{\gamma_{n}}\le 1$ for every $n\ge 2$. Following the outline of A. Piotrowski \cite[Theorem 127, p. 107]{Pio07}: since $\{\gamma_n\}_{n=0}^\infty$ is assumed to be a multiplier sequence, the Tur\'an inequalities hold, $\gamma_{n-1}^2-\gamma_{n-2}\gamma_n\ge 0$ for every $n\ge 2$. Hence, for each $n\ge 2$, \begin{equation} 1\le \frac{\gamma_{n}}{\gamma_{n-2}}\le \left(\frac{\gamma_{n-1}}{\gamma_{n-2}}\right)^2. \end{equation} Thus, $\gamma_{n-2}\le \gamma_{n-1}$ for $n\ge 2$, and therefore $\{\gamma_n\}_{n=0}^\infty$ is a Hermite multiplier sequence (see Theorems \ref{thm:lpinc} and \ref{thm:msherm}). \end{proof} \section{Operator Diagonalizations of Laguerre Multiplier Sequences} The main objective of this section is exactly the same as that of the previous section.
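As a quick sanity check, the Hermite expansion of $x^n$ invoked in the proof of Theorem \ref{thm:minimal} above can be verified numerically. The following short script (a sketch using only the Python standard library; all function names are ours) builds $H_n$ from the recurrence $H_{n+1}(x)=2xH_n(x)-2nH_{n-1}(x)$ and reassembles each monomial from its Hermite coefficients:

```python
# Numerical check of the Hermite expansion used above:
#   x^n = (n!/2^n) * sum_{k=0}^{[n/2]} H_{n-2k}(x) / (k! (n-2k)!).
from math import factorial

def hermite(n):
    """Coefficient list (ascending powers) of the Hermite polynomial H_n."""
    h_prev, h = [1], [0, 2]                  # H_0 = 1, H_1 = 2x
    if n == 0:
        return h_prev
    for m in range(1, n):
        # H_{m+1} = 2x*H_m - 2m*H_{m-1}
        nxt = [0] + [2 * c for c in h]
        for i, c in enumerate(h_prev):
            nxt[i] -= 2 * m * c
        h_prev, h = h, nxt
    return h

def xn_from_hermite(n):
    """Reassemble the coefficient list of x^n from its Hermite expansion."""
    coeffs = [0.0] * (n + 1)
    for k in range(n // 2 + 1):
        w = factorial(n) / (2 ** n * factorial(k) * factorial(n - 2 * k))
        for i, c in enumerate(hermite(n - 2 * k)):
            coeffs[i] += w * c
    return coeffs

for n in range(8):
    got = xn_from_hermite(n)
    expected = [0.0] * n + [1.0]
    assert all(abs(a - b) < 1e-9 for a, b in zip(got, expected))
```

Each monomial $x^n$ is recovered exactly, as the expansion $x^n=\frac{n!}{2^n}\sum_{k=0}^{[n/2]}\frac{1}{k!(n-2k)!}H_{n-2k}(x)$ requires.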
We first provide a few preliminary remarks on Laguerre multiplier sequences, then find a formula for the $b_{n,k}$'s (see \eqref{eq:aform}), and finally show that the $b_{n,k}$'s that arise from a Laguerre multiplier sequence yield more Laguerre multiplier sequences. The subtlety of these results can be seen in Examples \ref{ex:qua}-\ref{ex:notdia}, particularly Example \ref{ex:slag}. \begin{lemma}\label{lem:zerolag} For $k,n\in\N_0$, the $k^\text{th}$ derivative of the $n^\text{th}$ Laguerre polynomial \textup{(}Definition \ref{def:poly}\textup{)} evaluated at zero is, \begin{equation} L_{n}^{(k)}(0)=\binom{n}{k}(-1)^k. \end{equation} \end{lemma} \begin{lemma}\label{lm:horrible} Let $n$, $m$, and $p$ be non-negative integers. We then have the following combinatorial identity, \begin{align} \sum_{k=0}^n\sum_{j=0}^m\binom{m}{j}&\binom{k-j}{p-j}\binom{p}{k-j}\binom{n+1}{k+m-j} \nonumber\\ & = \binom{n+1}{p}\binom{n+1}{m} - \binom{n+1-m}{p-m}\binom{p}{n+1-m}. \end{align} \end{lemma} \begin{proof} We first note that $\binom{n+1-m}{p-m}\binom{p}{n+1-m}$ is precisely the $k=n+1$ summand, so it can be absorbed into the summation; hence, we wish to show, \begin{equation}\label{eq:eq13} \sum_{k=0}^{n+1}\sum_{j=0}^{m}\binom{m}{j}\binom{k-j}{p-j}\binom{p}{k-j}\binom{n+1}{k+m-j}=\binom{n+1}{p}\binom{n+1}{m}. \end{equation} We perform a substitution of $l=k-j$ on the left side of \eqref{eq:eq13} and then apply two Vandermonde identities \cite[pp. 9, 15]{Rio79}, \begin{align} \sum_{k=0}^{n+1}\sum_{j=0}^m\binom{m}{j}\binom{k-j}{p-j}\binom{p}{k-j}\binom{n+1}{k+m-j} &=\sum_{l=0}^{n+1}\binom{p}{l}\binom{n+1}{m+l}\mathcol{\sum_{j=0}^m\binom{m}{j}\binom{l}{p-j}} \\ &=\sum_{l=0}^{n+1}\binom{p}{l}\binom{n+1}{m+l}\mathcol{\binom{m+l}{p}} \\ &=\sum_{j=0}^{n+1}\binom{p}{j-m}\binom{n+1}{j}\binom{j}{p} \\ &=\mathcol{\sum_{j=0}^{n+1}\binom{j}{p}\binom{p}{j-m}\binom{n+1}{j}} \\ &=\mathcol{\binom{n+1}{p}\binom{n+1}{m}}.
\nonumber\tag*{\qedhere} \end{align} \end{proof} \begin{theorem}\label{thm:lagissums} Let $T$ be a Laguerre diagonal differential operator, $T[L_n(x)]:=\gamma_n L_n(x)$, where $\{\gamma_n\}_{n=0}^\infty$ is a sequence of real numbers. Then there is a sequence of polynomials, $\{Q_k(x)\}_{k=0}^\infty$, and a sequence of classical diagonal differential operators, $\{T_n\}_{n=0}^\infty$, such that \begin{equation}\nonumber T[L_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right) L_n(x) = \left(\sum_{k=0}^\infty T_kD^k\right) L_n(x) = \gamma_n L_n(x). \end{equation} Moreover, for each $n\in\N_0$, \begin{equation}\nonumber \{b_{n,m}\}_{m=0}^\infty:=\left\{\sum_{k=0}^m \binom{m}{k} \frac{(-1)^n}{n!} \left(\sum_{j=0}^n \binom{n}{j} \frac{(k+j)!}{((k+j)-n)!}g_{k+j}^*(-1)\right)\right\}_{m=0}^\infty, \end{equation} where $T_n[x^m]=b_{n,m}x^m$ for every $n,m\in\N_0$. \end{theorem} \begin{proof} The existence of the sequences $\{Q_k(x)\}_{k=0}^\infty$ and $\{T_k\}_{k=0}^\infty$ is established by Theorems \ref{thm:piotr} and \ref{thm:classic}. Recall from Theorem \ref{thm:classic} that, \begin{equation} \{b_{n,m}\}_{m=0}^\infty=\left\{\sum_{k=0}^m \binom{m}{k} Q_{k+n}^{(k)}(0)\right\}_{m=0}^\infty. \end{equation} Hence, we wish to verify that, \begin{equation}\label{form13} Q_{k+n}^{(k)}(0)=\frac{(-1)^n}{n!} \left(\sum_{j=0}^n \binom{n}{j} \frac{(k+j)!}{((k+j)-n)!}g_{k+j}^*(-1)\right). \end{equation} To ease the verification process, we first rewrite formula \eqref{form13} as follows, \begin{equation}\label{eq:lagjens} Q_n^{(m)}(0)=\sum_{p=0}^{n} (-1)^{n-m} \binom{n-m}{p-m}\binom{p}{n-m}g_p^*(-1). \end{equation} We will now verify formula \eqref{eq:lagjens}, \textit{tour de force}, by induction. Suppose that for every $m\in\N_0$ and $k\in\{0,1,\ldots,n\}$, formula \eqref{eq:lagjens} holds for $Q_k^{(m)}(0)$.
We now calculate $Q_{n+1}^{(m)}(0)$ using the recursive formula of Theorem \ref{thm:piotr}, equation \eqref{eq:jenreverse}, and Lemma \ref{lem:zerolag} and \ref{lm:horrible}, \begin{align} Q_{n+1}^{(m)}(0) =&\frac{1}{L_{n+1}^{(n+1)}}\left(\mathcol{\gamma_{n+1}}\ \mathcol{L_{n+1}^{(m)}(0)} - \sum_{k=0}^n \frac{d^m}{dx^m}\left.\left[Q_k(x) L_{n+1}^{(k)}(x)\right]\right|_{x=0} \right) \\ =& (-1)^{n+1}\Bigg(\mathcol{\sum_{p=0}^{n+1} \binom{n+1}{p}g_p^*(-1)}\ \mathcol{\binom{n+1}{m}(-1)^m} \nonumber\\ &\ \hspace{.2in}- \sum_{k=0}^n \sum_{j=0}^m \binom{m}{j} \mathcol{Q_k^{(j)}(0)}\ \mathcol{L_{n+1}^{(k+m-j)}(0)} \Bigg) \\ =& (-1)^{n+1}\Bigg(\sum_{p=0}^{n+1} \binom{n+1}{p}g_p^*(-1)\ \binom{n+1}{m}(-1)^m \nonumber\\ &\ \hspace{.2in}-\sum_{k=0}^n \sum_{j=0}^m \binom{m}{j} \mathcol{\left(\sum_{p=0}^{n+1} \binom{k-j}{p-j}\binom{p}{k-j} (-1)^{k-j} g_p^*(-1)\right)} \nonumber\\ &\ \hspace{1.7in}\mathcol{\left( \binom{n+1}{k+m-j}(-1)^{k+m-j}\right)} \Bigg) \\ =&\sum_{p=0}^{n+1} \Bigg( (-1)^{n+1-m}\Bigg(\binom{n+1}{p}\binom{n+1}{m} \nonumber\\ &\ \hspace{.2in}-\sum_{k=0}^n\sum_{j=0}^m \binom{m}{j}\binom{k-j}{p-j}\binom{p}{k-j}\binom{n+1}{k+m-j}\Bigg) \Bigg) g_p^*(-1) \\ =& \sum_{p=0}^{n+1} (-1)^{n+1-m}\binom{n+1-m}{p-m}\binom{p}{n+1-m} g_p^*(-1). \nonumber\tag*{\qedhere} \end{align} \end{proof} Similar to the Hermite case (see Theorem \ref{thm:herm}) the following theorem establishes a Rodrigues type formula between $h_n(x)$ ($n\in\N_0$) and $f(x)$. This formula then relates the hyperbolicity preservation of $T$ with each $T_n$ ($n\in\N_0$). \begin{theorem}\label{thm:lag} Suppose $\{\gamma_k\}_{k=0}^\infty$ is a non-trivial Laguerre multiplier sequence and let $\{g_k^*(x)\}_{k=0}^\infty$ be the reversed Jensen polynomials associated with $\{\gamma_k\}_{k=0}^\infty$. 
Then, for each $n\in\N_0$, \begin{equation}\label{eq:crazylag}\nonumber \{b_{n,m}\}_{m=0}^\infty:=\left\{\sum_{k=0}^m \binom{m}{k} \frac{(-1)^n}{n!} \left(\sum_{j=0}^n \binom{n}{j} \frac{(k+j)!}{((k+j)-n)!}g_{k+j}^*(-1)\right)\right\}_{m=0}^\infty, \end{equation} is a Laguerre multiplier sequence. \end{theorem} \begin{proof} By assumption, $\{\gamma_k\}_{k=0}^\infty$ is a Laguerre multiplier sequence. Hence, by Theorem \ref{thm:mslag}, \begin{equation}\label{eq:lagfx} f(x)=\sum_{k=0}^\infty \frac{f^{(k)}(0)}{k!} x^k := \sum_{k=0}^\infty g_k^*(-1) x^k \in \R[x]\cap\lp^s[-1,0]. \end{equation} To show that $\{b_{n,m}\}_{m=0}^\infty$ is a Laguerre multiplier sequence, we must show that, \begin{equation} h_n(x):=\sum_{m=0}^\infty \left(\sum_{k=0}^m \binom{m}{k} b_{n,k} (-1)^{m-k} \right) x^m \in \R[x]\cap\lp^s[-1,0]. \end{equation} We use Theorem \ref{thm:comb} and perform the following calculations, \begin{align} h_n(x) &=\sum_{k=0}^\infty \left(\sum_{j=0}^k \binom{k}{j} b_{n,j} (-1)^{k-j} \right) x^k \\ &=\sum_{k=0}^\infty \left(\frac{(-1)^n}{n!} \left(\sum_{j=0}^n \binom{n}{j} \frac{(k+j)!}{((k+j)-n)!}g_{k+j}^*(-1)\right) \right)x^k \\ &=\frac{(-1)^n}{n!} \sum_{j=0}^n \binom{n}{j} \sum_{k=0}^\infty \left( \frac{(k+j)!}{((k+j)-n)!}g_{k+j}^*(-1)\right) x^k \\ &=\frac{(-1)^n}{n!} \sum_{j=0}^n \binom{n}{j} \sum_{k=0}^\infty \frac{f^{(k+j)}(0)}{((k+j)-n)!} x^k \\ &=\frac{(-1)^n}{n!} \sum_{j=0}^n \binom{n}{j}x^{n-j} D^n f(x) \\ &=\frac{(-1)^n}{n!} (1+x)^n D^n f(x). \label{eq:lagrecur} \end{align} Hence, if $f(x)\in\R[x]\cap\lp^s[-1,0]$, then $h_n(x)\in\R[x]\cap\lp^s[-1,0]$. \end{proof} Similar to the Hermite case, equation \eqref{eq:lagrecur} also provides a recursive formula, \begin{equation} h_n(x):=\frac{-1}{n}(x+1)^nD(x+1)^{1-n}h_{n-1}(x),\ \ \ \ \ (n\ge 1,\ h_0(x):=f(x)).
\end{equation} Thus, again, the hyperbolicity preservation of $T_n$ with a Laguerre multiplier sequence, is enough to establish that $T_{n+1}$ is hyperbolicity preserving with a Laguerre multiplier sequence. \begin{example} We show, similar to Examples \ref{ex:notherm1} and \ref{ex:notherm2}, that it is possible for $T_n$ to be hyperbolicity preserving for every $n$ and yet $T$ fail to be hyperbolicity preserving. Consider the following non-Laguerre multiplier sequence (see \eqref{eq:lagfun} and Theorem \ref{thm:mslag}), \begin{equation} \{a_n\}_{n=0}^\infty:=\{2,3,4,5,6,\ldots\}, \end{equation} where \begin{equation} T[L_n(x)]:=a_nL_n(x). \end{equation} From Theorem \ref{thm:classic}, we obtain $T=\sum_{n=0}^\infty T_n D^n$, where $T_n[x^m]=b_{n,m}x^m$ (see \eqref{eq:aform}) are classical diagonal differential operators. We calculate $f(x)$ from equation \eqref{eq:lagfx}, \begin{equation}\label{eq:lagfun} f(x):=\sum_{k=0}^\infty g_k^*(-1)x^k = x+2. \end{equation} Hence by formula \eqref{eq:lagrecur}, \begin{align*} h_0(x)&=\sum_{k=0}^\infty \left(\sum_{j=0}^k \binom{k}{j} b_{0,j} (-1)^{k-j} \right) x^k=x+2, \\ h_1(x)&=\sum_{k=0}^\infty \left(\sum_{j=0}^k \binom{k}{j} b_{1,j} (-1)^{k-j} \right) x^k=-x-1,\ \ \ \ \ \text{and} \\ h_n(x)&=\sum_{k=0}^\infty \left(\sum_{j=0}^k \binom{k}{j} b_{n,j} (-1)^{k-j} \right) x^k=0,\ \ \ \ \ \text{for}\ n\ge 2. \end{align*} We see that $h_0(x)\not\in\R[x]\cap\lp^s[-1,0]$, hence $\{b_{0,k}\}_{k=0}^\infty$ is not a Laguerre multiplier sequence (see Theorem \ref{thm:mslag}). However, if we define a classical multiplier sequence, $W[x^m]:=\frac{1}{m!}x^m$, then \begin{equation} \sum_{k=0}^\infty \frac{b_{n,k}}{k!}x^k = e^x W[h_n(x)]\in\lp^s. \end{equation} Hence, $\{b_{0,k}\}_{k=0}^\infty$ is a classical multiplier sequence (see Theorem \ref{thm:ms}). In addition, $h_n(x)\in\R[x]\cap\lp^s[-1,0]$ for $n\ge 1$. 
Thus, each $T_n$ ($n\ge 0$) is hyperbolicity preserving (see Theorem \ref{thm:mslag}), each $T_n$ ($n\ge 1$) diagonalizes with a Laguerre multiplier sequence, but $T$ itself is not a hyperbolicity preserver. \end{example} From the calculations establishing \eqref{eq:lagjens} we can also provide a formula for the $Q_k$'s in a Laguerre diagonal differential operator (cf. Theorems \ref{thm:hermqk} and \ref{thm:hermcomplex}). \begin{theorem}\label{thm:lagqk} Let $\{\gamma_n\}_{n=0}^\infty$ be a sequence of real numbers and $\{Q_k(x)\}_{k=0}^\infty$ be a sequence of polynomials, such that, \begin{equation} T[L_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)L_n(x)=\gamma_nL_n(x). \end{equation} Then for each $n\in\N_0$, \begin{equation}\label{eq:lagqk} Q_n(x)=\sum_{k=0}^{n}\left(\sum_{p=0}^{n} (-1)^{n-k} \binom{n-k}{p-k}\binom{p}{n-k}g_p^*(-1)\right)x^k, \end{equation} where $\{g_k^*(x)\}_{k=0}^\infty$ are the associated reversed Jensen polynomials of $\{\gamma_n\}_{n=0}^\infty$. \end{theorem} Similar to Theorem \ref{thm:hermcomplex}, we provide another formula for the $Q_k$'s in a Laguerre diagonal differential operator (cf. \cite[Proposition 216, p. 107]{Cha11}). \begin{theorem}\label{thm:lagqk2} Let $\{\gamma_n\}_{n=0}^\infty$ be a sequence of real numbers and $\{Q_k(x)\}_{k=0}^\infty$ be a sequence of polynomials, such that, \begin{equation} T[L_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)L_n(x)=\gamma_nL_n(x). \end{equation} Then for each $n\in\N_0$, \begin{equation}\label{eq:lagqnn} Q_n(x)=\sum_{k=0}^{n}\frac{(-x)^{k}}{k!}\sum_{j=0}^{n-k} \binom{n-k}{j} (-1)^j\gamma_{j}L_{j}(x). \end{equation} \end{theorem} \begin{proof} The proof is very similar to the proof of Theorem \ref{thm:hermcomplex}. Define \begin{equation} \tilde{T}:=\sum_{n=0}^\infty Q_n(x)D^n, \end{equation} where $Q_n(x)$ is defined by equation \eqref{eq:lagqnn}. We will establish the result by showing that $\tilde{T}[L_m(x)]=\gamma_m L_m(x)$ for every $m\in\N_0$.
Define the evaluation operator, \begin{equation} W:=\sum_{n=0}^\infty \frac{(-1)^n}{n!}x^nD^n. \end{equation} Note that $W[f(x)]=f(0)$ for every polynomial $f(x)$. Using Lemma \ref{lem:zerolag} and the formula $\binom{n}{k}\binom{k}{j} = \binom{n}{j}\binom{n-j}{k-j}$ (see \cite[p. 3]{Rio79}), we now evaluate $\tilde{T}$ at $L_m(x)$, \begin{align} \tilde{T}[L_m(x)] &= \sum_{n=0}^m\left(\sum_{k=0}^n \frac{(-x)^k}{k!} \sum_{j=0}^{n-k} \binom{n-k}{j}(-1)^j \gamma_j L_j(x)\right) L^{(n)}_m(x) \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m \sum_{n=0}^m \binom{n-k}{j}\frac{(-x)^k}{k!} L^{(n)}_m(x) \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m \sum_{n=0}^m \binom{k}{j}\frac{(-x)^{n-k}}{(n-k)!} L^{(n)}_m(x) \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m\binom{k}{j} \sum_{n=0}^m \frac{(-x)^{n}}{n!} L^{(n+k)}_m(x) \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m\binom{k}{j} W[ L^{(k)}_m(x)] \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m\binom{k}{j} L^{(k)}_m(0) \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m\binom{k}{j} \binom{m}{k}(-1)^k \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \sum_{k=0}^m\binom{m}{j} \binom{m-j}{k-j}(-1)^k \\ &= \sum_{j=0}^{m} (-1)^j \gamma_j L_j(x) \binom{m}{j}(-1)^j\sum_{k=0}^m\left(\binom{m-j}{k}(-1)^k\right) \\ &= \sum_{j=0}^{m} \binom{m}{j} \gamma_j L_j(x) \sum_{k=0}^m\binom{m-j}{k}(-1)^k \\ &= \gamma_m L_m(x). \nonumber\tag*{\qedhere} \end{align} \end{proof} \section{Open Problems} \begin{problem} A frequent query in the literature is to find properties of the $Q_k$'s such that \begin{equation} T:=\sum_{k=0}^\infty Q_kD^k \end{equation} is hyperbolicity preserving.
We ask instead a parallel question: what properties are needed for classical diagonal differential operators, the $T_n$'s, to form a hyperbolicity preserver, as in \begin{equation} T=\sum_{n=0}^\infty T_nD^n\ \ \ \text{?} \end{equation} \end{problem} \begin{problem}\label{que:que5} Do the shifted Laguerre polynomials, $\{L_n(x-\alpha)\}_{n=0}^\infty$, possess the same property found in Theorems \ref{thm:lagissums} and \ref{thm:lag}? Generalized Laguerre? Generalized Hermite? \end{problem} \begin{problem} Find all hyperbolicity preservers that can be written as a sum of classical hyperbolicity preservers ($T=\sum_{k=0}^\infty T_kD^k$), as in Theorem \ref{thm:classic} or \ref{thm:classic2}. \end{problem} \begin{problem}\label{pm:incdegtn} Does there exist a hyperbolicity preserver of the form, \begin{equation} T:=\sum_{k=-\infty}^\infty T_k D^k, \end{equation} such that $T_k\not\equiv 0$ for every $k\in\N$? Compare with the open problem on ``increasing degree'' of A. Piotrowski \cite[Problem 197, p. 172]{Pio07}. \end{problem} \begin{problem} From T. Forg\'acs and A. Piotrowski \cite{FP13b} we are given an intriguing open problem. Namely, if \begin{equation} T[H_n(x)]:=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)H_n(x)=\gamma_n H_n(x) \end{equation} is a Hermite diagonal differential operator where $\{\gamma_n\}_{n=0}^\infty$ is a classical multiplier sequence and each $Q_k(x)$ has only real zeros, then can we conclude that $T$ is a hyperbolicity preserver (cf. Theorem \ref{thm:hermrealzeros})? Using the formulations throughout this paper, we pose two ideas that might prove useful for this question. First, following the method of T. Forg\'acs and A.
Piotrowski, we analyze the leading and second-leading coefficients of the $Q_k$'s (for the definition of $h_n$, see equation \eqref{h_n's}), \begin{align} \frac{d}{dx}h_0(x)&=\sum_{k=0}^\infty \frac{Q_{k+1}^{(k+1)}(0)}{k!}x^k = e^{-x}(f'(x)-f(x)),\ \ \ \ \ \text{and} \\ (-4)\int h_1(x)dx&= (-4)\sum_{k=0}^\infty \frac{Q_{k+1}^{(k-1)}(0)}{k!}x^k =e^{-x}(f'(x)+f(x)), \end{align} where $f(x)=\sum_{k=0}^\infty \frac{\gamma_k}{k!}x^k$ and where we take $Q_1^{(-1)}(0):=f'(0)+f(0)$. Thus, in general we ask: if $\{\gamma_k\}_{k=0}^\infty$ is a non-increasing, positive, classical multiplier sequence, can we conclude that $e^{-x}(f'(x)-f(x))$ and $e^{-x}(f'(x)+f(x))$ have at least one Taylor coefficient of opposite sign? Second, according to the Borcea-Br\"and\'en Theorem \cite[Theorem 5]{BB09}, if \begin{equation} T:=\sum_{k=0}^\infty Q_k(x)D^k,\ \ \ \text{and}\ \ \ W:=\sum_{k=0}^\infty \frac{Q_k(x)}{2^k}D^k, \end{equation} then $T$ is hyperbolicity preserving if and only if $W$ is hyperbolicity preserving (see also the proof of Theorem \ref{thm:herminter}). However, if $T$ is also a Hermite diagonal differential operator, then only the hyperbolicity of $T[x^n]$ is needed to conclude that $T$ is a hyperbolicity preserver (see Theorem \ref{thm:minimal}). Can the same be said of $W$? This ``minimal set'' ($\{x^n\}_{n=0}^\infty$) that allows the conclusion of hyperbolicity preservation is a commonly sought-after attribute of differential operators. We ask, what relationship do the sets $A$ and $B$ have, where \begin{align} A&=\left\{p(x):p(x)=\left(\sum_{k=0}^\infty Q_k(x)D^k\right)f_n(x)\right\},\ \ \ \text{and}\\ B&=\left\{p(x):p(x)=\left(\sum_{k=0}^\infty Q_k(x)\alpha^kD^k\right)f_n(x)\right\}, \end{align} given $\{f_n(x)\}_{n=0}^\infty$ is some sequence of polynomials and $\alpha>0$? If $B$ only has hyperbolic polynomials, then must $A$ have only hyperbolic polynomials? What restrictions would allow this conditional to hold? \end{problem} {} \end{document}
\begin{document} \setcounter{page}{1} \title[Domination number in the annihilating-submodule graph]{Domination number in the annihilating-submodule graph of modules over commutative rings} \author[H. Ansari-Toroghy and S. Habibi]{H. Ansari-Toroghy$^1$ and S. Habibi$^2$} \authorsaddresses{$^1$ Department of pure Mathematics,\\ Faculty of mathematical Sciences,\\ University of Guilan, P. O. Box 41335-19141, Rasht, Iran.\\ e-mail: [email protected]\\ $^2$ School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran.\\ Department of pure Mathematics, Faculty of mathematical Sciences, University of Guilan, P. O. Box 41335-19141, Rasht, Iran. \\ e-mail: [email protected]} \subjclass[2010]{13C13, 13C99, 05C75} \keywords{Commutative rings, annihilating-submodule graph, domination number.\\This research was in part supported by a grant from IPM (No. 96130028)} \begin{abstract} Let $M$ be a module over a commutative ring $R$. The annihilating-submodule graph of $M$, denoted by $AG(M)$, is a simple graph in which a non-zero submodule $N$ of $M$ is a vertex if and only if there exists a non-zero proper submodule $K$ of $M$ such that $NK=(0)$, where $NK$, the product of $N$ and $K$, is defined as $(N:M)(K:M)M$, and two distinct vertices $N$ and $K$ are adjacent if and only if $NK=(0)$. This graph is a submodule version of the annihilating-ideal graph and, under some conditions, is isomorphic with an induced subgraph of the Zariski topology-graph $G(\tau_T)$ which was introduced in (The Zariski topology-graph of modules over commutative rings, Comm. Algebra, 42 (2014), 3283--3296). In this paper, we study the domination number of $AG(M)$ and some connections between the graph-theoretic properties of $AG(M)$ and algebraic properties of the module $M$. \end{abstract} \maketitle \section{Introduction} Throughout this paper $R$ is a commutative ring with a non-zero identity and $M$ is a unital $R$-module. By $N\leq M$ (resp.
$N< M$) we mean that $N$ is a submodule (resp. proper submodule) of $M$. Define $(N:_{R}M)$ or simply $(N:M)=\{r\in R|$ $rM\subseteq N\}$ for any $N\leq M$. We denote $((0):M)$ by $Ann_{R}(M)$ or simply $Ann(M)$. $M$ is said to be faithful if $Ann(M)=(0)$. Let $N, K\leq M$. Then the product of $N$ and $K$, denoted by $NK$, is defined by $(N:M)(K:M)M$ (see \cite{af07}). Define $ann(N)$ or simply $annN=\{m\in M|$ $m(N:M)=0\}$. The prime spectrum of $M$, denoted by $Spec(M)$, is the set of all prime submodules of $M$; $Max(M)$ denotes the set of all maximal submodules of $M$; and $J(M)$, the Jacobson radical of $M$, is the intersection of all elements of $Max(M)$. There are many papers on assigning graphs to rings or modules (see, for example, \cite{al99, ah14, b88, br11}). The annihilating-ideal graph $AG(R)$ was introduced and studied in \cite{br11}. $AG(R)$ is a graph whose vertices are ideals of $R$ with nonzero annihilators and in which two vertices $I$ and $J$ are adjacent if and only if $IJ=(0)$. Later, it was modified and further studied by many authors (see \cite{aa12, aa13, aa14, nmk, ts}). In \cite{ah14}, the present authors introduced and studied the graph $G(\tau_T)$ (resp. $AG(M)$), called the \textit{Zariski topology-graph} (resp. \textit{the annihilating-submodule graph}), where $T$ is a non-empty subset of $Spec(M)$. $AG(M)$ is an undirected graph with vertices $V(AG(M))$= $\{N \leq M |$ there exists $(0)\neq K<M$ with $NK=(0)$\}. In this graph, distinct vertices $N,L \in V(AG(M))$ are adjacent if and only if $NL=(0)$ (see \cite{ah16, ah160}). Let $AG(M)^{*}$ be the subgraph of $AG(M)$ with vertices $V(AG(M)^{*})=\{ N<M$ with $(N:M)\neq Ann(M)|$ there exists a submodule $K<M$ with $(K:M)\neq Ann(M)$ and $NK=(0)\}$. By \cite[Theorem 3.4]{ah14}, one concludes that $AG(M)^{*}$ is a connected subgraph.
Note that $M$ is a vertex of $AG(M)$ if and only if there exists a nonzero proper submodule $N$ of $M$ with $(N:M)=Ann(M)$ if and only if every nonzero submodule of $M$ is a vertex of $AG(M)$. Clearly, if $M$ is not a vertex of $AG(M)$, then $AG(M)=AG(M)^{*}$. In \cite[Lemma 2.8]{ah140}, we showed that under some conditions, $AG(M)$ is isomorphic with an induced subgraph of the Zariski topology-graph $G(\tau_T)$. In this paper, we study the domination number of $AG(M)$ and some connections between the graph-theoretic properties of $AG(M)$ and algebraic properties of the module $M$. A prime submodule of $M$ is a submodule $P\neq M$ such that whenever $re\in P$ for some $r\in R$ and $e \in M$, we have $r\in (P:M)$ or $e\in P$ \cite{lu84}. The notations $Z(R)$ and $Nil(R)$ denote the set of all zero-divisors and the set of all nilpotent elements of $R$, respectively. Also, $Z_{R}(M)$ or simply $Z(M)$, the set of zero divisors on $M$, is the set $\{r\in R|$ $rm=0$ for some $0\neq m\in M \}$. If $Z(M)=0$, then we say that $M$ is a domain. An ideal $I\leq R$ is said to be nil if $I$ consists of nilpotent elements. Let us introduce some graph-theoretic notions and notation that are used in what follows: A graph $G$ is an ordered triple $(V(G), E(G), \psi_G )$ consisting of a nonempty set of vertices, $V(G)$, a set $E(G)$ of edges, and an incident function $\psi_G$ that associates an unordered pair of distinct vertices with each edge. The edge $e$ joins $x$ and $y$ if $\psi_G(e)=\{x, y\}$, and we say $x$ and $y$ are adjacent. The number of edges incident at $x$ in $G$ is called the degree of the vertex $x$ in $G$ and is denoted by $d_G(x)$ or simply $d(x)$. A path in a graph $G$ is a finite sequence of vertices $\{x_0, x_1,\ldots ,x_n\}$, where $x_{i-1}$ and $x_i$ are adjacent for each $1\leq i\leq n$; we write $x_{i-1} - x_i$ to indicate that there is an edge between $x_{i-1}$ and $x_i$.
The distance between two vertices $x$ and $y$, denoted $d(x, y)$, is the length of a shortest path from $x$ to $y$. The diameter of a connected graph $G$ is the maximum distance between two distinct vertices of $G$. For any vertex $x$ of a connected graph $G$, the eccentricity of $x$, denoted $e(x)$, is the maximum of the distances from $x$ to the other vertices of $G$. The set of vertices with minimum eccentricity is called the center of the graph $G$, and this minimum eccentricity value is the radius of $G$. For some $U\subseteq V(G)$, we denote by $N(U)$ the set of all vertices of $G\setminus U$ adjacent to at least one vertex of $U$, and $N[U]=N(U)\cup U$. A graph $H$ is a subgraph of $G$ if $V(H)\subseteq V(G)$, $E(H)\subseteq E(G)$, and $\psi_H$ is the restriction of $\psi_G$ to $E(H)$. A subgraph $H$ of $G$ is a spanning subgraph of $G$ if $V(H)=V(G)$. A spanning subgraph $H$ of $G$ is called a perfect matching of $G$ if every vertex of $H$ has degree $1$. A clique of a graph is a complete subgraph and the supremum of the sizes of cliques in $G$, denoted by $cl(G)$, is called the clique number of $G$. Let $\chi(G)$ denote the chromatic number of the graph $G$, that is, the minimal number of colors needed to color the vertices of $G$ so that no two adjacent vertices have the same color. Obviously $\chi(G)\geq cl(G)$. A subset $D$ of $V(G)$ is called a dominating set if every vertex of $G$ is either in $D$ or adjacent to at least one vertex in $D$. The domination number of $G$, denoted by $\gamma(G)$, is the number of vertices in a smallest dominating set of $G$. A total dominating set of a graph $G$ is a set $S$ of vertices of $G$ such that every vertex is adjacent to a vertex in $S$. The total domination number of $G$, denoted by $\gamma_t(G)$, is the minimum cardinality of a total dominating set. A dominating set of cardinality $\gamma(G)$ ($\gamma_t(G)$) is called a $\gamma$-set ($\gamma_t$-set).
A dominating set $D$ is a connected dominating set if the subgraph $<D>$ induced by $D$ is a connected subgraph of $G$. The connected domination number of $G$, denoted by $\gamma_c(G)$, is the minimum cardinality of a connected dominating set of $G$. A dominating set $D$ is a clique dominating set if the subgraph $<D>$ induced by $D$ is complete in $G$. The clique domination number $\gamma_{cl}(G)$ of $G$ equals the minimum cardinality of a clique dominating set of $G$. A dominating set $D$ is a paired-dominating set if the subgraph $<D>$ induced by $D$ has a perfect matching. The paired-domination number $\gamma_{pr}(G)$ of $G$ equals the minimum cardinality of a paired-dominating set of $G$. A vertex $u$ is a neighbor of $v$ in $G$ if $uv$ is an edge of $G$ and $u\neq v$. The set of all neighbors of $v$ is the open neighborhood of $v$ or the neighbor set of $v$, and is denoted by $N(v)$; the set $N[v]=N(v)\cup \{v\}$ is the closed neighborhood of $v$ in $G$. Let $S$ be a dominating set of a graph $G$, and $u\in S$. The private neighborhood of $u$ relative to $S$ in $G$ is the set of vertices which are in the closed neighborhood of $u$, but not in the closed neighborhood of any vertex in $S\setminus \{u\}$. Thus the private neighborhood $P_N(u, S)$ of $u$ with respect to $S$ is given by $P_N(u, S)=N[u]\setminus (\cup_{v\in S\setminus \{u\}} N[v])$. A set $S\subseteq V(G)$ is called irredundant if every vertex $v$ of $S$ has at least one private neighbor. An irredundant set $S$ is a maximal irredundant set if for every vertex $u \in V\setminus S$, the set $S\cup \{u\}$ is not irredundant. The irredundance number $ir(G)$ is the minimum cardinality of a maximal irredundant set. There are many domination parameters in the literature; for more details, one can refer to \cite{hhs}.
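To make these parameters concrete, take $M=R=\Bbb Z_n$, so that $AG(M)$ is the annihilating-ideal graph $AG(\Bbb Z_n)$: the nonzero proper submodules are the ideals $(d)$ for divisors $d$ of $n$ with $1<d<n$, and $(d)(e)=(0)$ exactly when $n\mid de$. The following brute-force computation of the domination number (a sketch using only the Python standard library; the function names are ours) illustrates the definition of $\gamma$:

```python
from itertools import combinations

def ag_graph(n):
    """Vertices and edges of AG(Z_n).  The nonzero proper ideals of Z_n are
    (d) for divisors d of n with 1 < d < n; (d) is a vertex iff some nonzero
    proper (e) satisfies (d)(e) = (0), i.e. n | d*e, and two distinct
    vertices are adjacent under the same condition."""
    ideals = [d for d in range(2, n) if n % d == 0]
    verts = [d for d in ideals if any(d * e % n == 0 for e in ideals)]
    edges = {(d, e) for d in verts for e in verts if d < e and d * e % n == 0}
    return verts, edges

def domination_number(n):
    """Smallest k such that some k-subset D of the vertices satisfies
    N[D] = V(AG(Z_n)), found by exhaustive search."""
    verts, edges = ag_graph(n)
    closed = {v: {v} for v in verts}          # closed neighborhoods N[v]
    for d, e in edges:
        closed[d].add(e)
        closed[e].add(d)
    for k in range(1, len(verts) + 1):
        for D in combinations(verts, k):
            if set().union(*(closed[v] for v in D)) == set(verts):
                return k
    return 0
```

For example, $AG(\Bbb Z_{12})$ has vertices $(2),(3),(4),(6)$ and edges $(2)-(6)$, $(3)-(4)$, $(4)-(6)$; no single vertex dominates, while $\{(2),(3)\}$ does, so $\gamma(AG(\Bbb Z_{12}))=2$.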
A bipartite graph is a graph whose vertices can be divided into two disjoint sets $U$ and $V$ such that every edge connects a vertex in $U$ to one in $V$; that is, $U$ and $V$ are independent sets. The complete bipartite graph on $n$ and $m$ vertices, denoted by $K_{n, m}$, is the bipartite graph in which $V$ and $U$ are of size $n$ and $m$, respectively, and every vertex in $V$ is adjacent to every vertex in $U$. The graph $K_{1, m}$ is called a star graph and the vertex in the singleton partition is called the center of the graph. We denote by $P_{n}$ a path of order $n$ (see \cite{r05}). In Section 2, a dominating set of $AG(M)$ is constructed using elements of the center when $M$ is an Artinian module. We also prove that the domination number of $AG(M)$ is equal to the number of factors in the Artinian decomposition of $M$, and we find several other domination parameters of $AG(M)$. In Section 3, we study the domination number of the annihilating-submodule graphs for reduced rings with finitely many minimal primes and faithful modules. Also, some relations between the domination numbers and the total domination numbers of annihilating-submodule graphs are studied. The following results are useful for further reference in this paper. \begin{prop}\label{p1.1} Suppose that $e$ is an idempotent element of $R$. We have the following statements. \begin{itemize} \item [(a)] $R=R_{1}\times R_{2}$, where $R_{1}=eR$ and $R_{2}=(1-e)R$. \item [(b)] $M=M_{1}\times M_{2}$, where $M_{1}=eM$ and $M_{2}=(1-e)M$. \item [(c)] For every submodule $N$ of $M$, $N=N_{1}\times N_{2}$ such that $N_{1}$ is an $R_{1}$-submodule of $M_{1}$, $N_{2}$ is an $R_{2}$-submodule of $M_{2}$, and $(N:_{R}M)=(N_{1}:_{R_{1}}M_{1})\times (N_{2}:_{R_{2}}M_{2})$. \item [(d)] For submodules $N$ and $K$ of $M$, $NK=N_{1}K_{1} \times N_{2}K_{2}$ such that $N=N_{1}\times N_{2}$ and $K=K_{1}\times K_{2}$.
\item [(e)] Prime submodules of $M$ are $P\times M_{2}$ and $M_{1}\times Q$, where $P$ and $Q$ are prime submodules of $M_{1}$ and $M_{2}$, respectively. \end{itemize} \end{prop} \begin{proof} This is clear. \end{proof} We need the following results. \begin{lem}\label{l1.2} (See \cite[Proposition 7.6]{af74}.) Let $R_{1}, R_{2}, \ldots , R_{n}$ be non-zero ideals of $R$. Then the following statements are equivalent: \begin{itemize} \item [(a)] $R= R_{1} \times \ldots \times R_{n}$; \item [(b)] As an abelian group $R$ is the direct sum of $ R_{1}, \ldots , R_{n}$; \item [(c)] There exist pairwise orthogonal idempotents $e_{1},\ldots, e_{n}$ with $1=e_{1}+ \ldots +e_{n}$, and $R_{i}=Re_{i}$, $i=1, \ldots ,n$. \end{itemize} \end{lem} \begin{lem}\label{l1.3} (See \cite[Theorem 21.28]{l91}.) Let $I$ be a nil ideal in $R$ and $u\in R$ be such that $u+I$ is an idempotent in $R/I$. Then there exists an idempotent $e$ in $uR$ such that $e-u\in I$. \end{lem} \begin{lem}\label{l1.4} (See \cite[Lemma 2.4]{ah16}.) Let $N$ be a minimal submodule of $M$ and let $Ann(M)$ be a nil ideal. Then we have $N^{2}=(0)$ or $N=eM$ for some idempotent $e\in R$. \end{lem} \begin{prop}\label{p1.5} Let $R/Ann(M)$ be an Artinian ring and let $M$ be a finitely generated module. Then every nonzero proper submodule $N$ of $M$ is a vertex in $AG(M)$. \end{prop} \begin{thm}\label{t1.6} (See \cite[Theorem 2.5]{ah16}.) Let $Ann(M)$ be a nil ideal. There exists a vertex of $AG(M)$ which is adjacent to every other vertex if and only if $M=eM\oplus (1-e)M$, where $eM$ is a simple module and $(1-e)M$ is a prime module for some idempotent $e\in R$, or $Z(M)=Ann((N:M)M)$, where $N$ is a nonzero proper submodule of $M$ or $M$ is a vertex of $AG(M)$. \end{thm} \begin{thm}\label{t1.7} (See \cite[Theorem 3.3]{ah16}.) Let $M$ be a faithful module. Then the following statements are equivalent. \begin{itemize} \item [(a)] $\chi(AG(M)^{*})=2$. \item [(b)] $AG(M)^{*}$ is a bipartite graph with two nonempty parts. 
\item [(c)] $AG(M)^{*}$ is a complete bipartite graph with two nonempty parts. \item [(d)] $R$ has exactly two minimal prime ideals. \end{itemize} \end{cor} \begin{prop}\label{p1.9} (See \cite[Proposition 3.9]{hhs}.) Every minimal dominating set in a graph $G$ is a maximal irredundant set of $G$. \end{prop} \section{Domination number in the annihilating-submodule graph for Artinian modules} The main goal of this section is to obtain the values of certain domination parameters of the annihilating-submodule graph for Artinian modules. Recall that $M$ is a vertex of $AG(M)$ if and only if there exists a nonzero proper submodule $N$ of $M$ with $(N:M)=Ann(M)$ if and only if every nonzero submodule of $M$ is a vertex of $AG(M)$. In this case, the vertex $N$ is adjacent to every other vertex. Hence $\gamma(AG(M))=1=\gamma_t(AG(M))$. So we assume that \textbf{throughout this paper $M$ is not a vertex of $AG(M)$}. Clearly, if $M$ is not a vertex of $AG(M)$, \textbf{then $AG(M)=AG(M)^{*}$.} We start with the following remark, which completely characterizes all modules for which $\gamma(AG(M)) = 1$. \begin{rem}\label{r2.1} Let $Ann(M)$ be a nil ideal.
By Theorem \ref{t1.6}, there exists a vertex of $AG(M)$ which is adjacent to every other vertex if and only if $M=eM\oplus (1-e)M$, where $eM$ is a simple module and $(1-e)M$ is a prime module for some idempotent $e\in R$, or $Z(M)=Ann((N:M)M)$, where $N$ is a nonzero proper submodule of $M$, or $M$ is a vertex of $AG(M)$. Now, let $Ann(M)$ be a nil ideal and let $M$ be a domain module. Then $\gamma(AG(M)) = 1$ if and only if $M=eM\oplus (1-e)M$, where $eM$ is a simple module and $(1-e)M$ is a prime module for some idempotent $e\in R$. \end{rem} \begin{thm}\label{t2.2} Let $M$ be a f.g.\ Artinian local module. Assume that $N$ is the unique maximal submodule of $M$. Then the radius of $AG(M)$ is $0$ or $1$ and the center of $AG(M)$ is $\{K\subseteq ann(N) \mid K\neq (0)$ is a submodule of $M\}$. \end{thm} \begin{proof} If $N$ is the only non-zero proper submodule of $M$, then $AG(M)\cong K_1$, $e(N) = 0$ and the radius of $AG(M)$ is $0$. Assume that the number of non-zero proper submodules of $M$ is greater than $1$. Since $M$ is a f.g.\ Artinian module, there exists $m\in \Bbb N$, $m > 1$, such that $N^m = (0)$ and $N^{m-1}\neq (0)$. For any non-zero submodule $K$ of $M$, $KN^{m-1}\subseteq NN^{m-1} = (0)$ and so $d(N^{m-1}, K) = 1$. Hence $e(N^{m-1}) = 1$ and so the radius of $AG(M)$ is $1$. Suppose $K$ and $L$ are arbitrary non-zero submodules of $M$ and $K\subseteq ann(N)$. Then $KL\subseteq KN = (0)$ and hence $e(K) = 1$. Suppose $(0)\neq K' \nsubseteq ann(N)$. Then $K'N\neq (0)$ and so $e(K') > 1$. Hence the center of $AG(M)$ is $\{K\subseteq ann(N) \mid K\neq (0)$ is a submodule of $M\}$. \end{proof} \begin{cor}\label{c2.3} Let $M$ be a f.g.\ Artinian local module and let $N$ be the unique maximal submodule of $M$. Then the following hold. \begin{itemize} \item [(a)] $\gamma(AG(M))=1$. \item [(b)] $D$ is a $\gamma$-set of $AG(M)$ if and only if $D\subseteq ann(N)$.
\end{itemize} \end{cor} \begin{proof} $(a)$ This is immediate from Theorem \ref{t2.2}.\\ $(b)$ Let $D = \{K\}$ be a $\gamma$-set of $AG(M)$. Suppose $K\nsubseteq ann(N)$. Then $KN\neq (0)$ and so $N$ is not dominated by $K$, a contradiction. Conversely, suppose $D\subseteq ann(N)$. Let $K$ be an arbitrary vertex in $AG(M)$. Then $KL\subseteq NL = (0)$ for every $L\in D$, i.e., every vertex $K$ is adjacent to every $L\in D$. If $|D| > 1$, then $D\setminus \{L'\}$ is also a dominating set of $AG(M)$ for some $L'\in D$ and so $D$ is not minimal. Thus $|D| = 1$ and so $D$ is a $\gamma$-set by $(a)$. \end{proof} \begin{thm}\label{t2.4} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. Then the radius of $AG(M)$ is $2$ and the center of $AG(M)$ is $\{K\subseteq J(M) \mid K\neq (0)$ is a submodule of $M \}$. \end{thm} \begin{proof} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. Let $J_i$ be the unique maximal submodule of $M_i$ with nilpotency $n_i$. Note that $Max(M) = \{N_1,\ldots ,N_n \mid N_i = M_1 \oplus \ldots \oplus M_{i-1} \oplus J_i\oplus M_{i+1}\oplus \ldots \oplus M_n, 1\leq i\leq n\}$ is the set of all maximal submodules of $M$. Consider $D_i = (0) \oplus \ldots \oplus (0) \oplus J_i^{n_i-1}\oplus (0)\oplus \ldots \oplus (0)$ for $1\leq i\leq n$. Note that $J(M) = J_1\oplus \ldots \oplus J_n$ is the Jacobson radical of $M$ and any non-zero submodule of $M$ is adjacent to $D_i$ for some $i$. Let $K$ be any non-zero submodule of $M$. Then $K=\oplus _{i=1}^n K_i$, where $K_i$ is a submodule of $M_i$.\\ \textbf{Case 1}. If $K = N_i$ for some $i$, then $KD_j\neq (0)$ and $KN_j\neq (0)$ for all $j\neq i$. Note that $N(K)=\{(0) \oplus \ldots \oplus (0) \oplus L_i\oplus (0)\oplus \ldots \oplus (0) \mid J_iL_i= (0)$, $L_i$ is a nonzero submodule of $M_i \}$.
Clearly $N(K)\cap N(N_j) = \emptyset$, so $d(K,N_j)\neq 2$ for all $j\neq i$, and $K- D_i - D_j - N_j$ is a path in $AG(M)$. Therefore $e(K) = 3$ and so $e(N) = 3$ for all $N\in Max(M)$.\\ \textbf{Case 2}. If $K\neq D_i$ for all $i$ and $K_i \subseteq J_i$ for all $i$, then $KD_i = (0)$ for all $i$. Let $L$ be any non-zero submodule of $M$ with $KL\neq (0)$. Then $LD_j = (0)$ for some $j$, $K - D_j - L$ is a path in $AG(M)$ and so $e(K) = 2$.\\ \textbf{Case 3}. If $K_i = M_i$ for some $i$, then $KD_i\neq (0)$, $KN_i \neq (0)$ and $KD_j = (0)$ for some $j\neq i$. Thus $K - D_j - D_i - N_i$ is a path in $AG(M)$, $d(K,N_i) = 3$ and so $e(K) = 3$. Hence $e(K) = 2$ for all nonzero $K\subseteq J(M)$. Further, note that in all the cases the center of $AG(M)$ is $\{K\subseteq J(M) \mid K\neq (0)$ is a submodule of $M \}$. \end{proof} In view of Theorems \ref{t2.2} and \ref{t2.4}, we have the following corollary. \begin{cor}\label{c2.5} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a simple module for all $1\leq i\leq n$ and $n\geq 2$. Then the radius of $AG(M)$ is $1$ or $2$ and the center of $AG(M)$ is $\{D_1, \ldots , D_n\}$, where $D_i = (0) \oplus \ldots \oplus (0) \oplus M_i\oplus (0)\oplus \ldots \oplus (0)$ for $1\leq i\leq n$. \end{cor} \begin{thm}\label{t2.6} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. Then $\gamma(AG(M))=n$. \end{thm} \begin{proof} Let $J_i$ be the unique maximal submodule of $M_i$ with nilpotency $n_i$. Let $\Omega= \{D_1, D_2, \ldots ,D_n\}$, where $D_i = (0) \oplus \ldots \oplus (0) \oplus J_i^{n_i -1}\oplus (0)\oplus \ldots \oplus (0)$ for $1\leq i\leq n$. Note that any non-zero submodule of $M$ is adjacent to $D_i$ for some $i$. Therefore $N[\Omega] = V(AG(M))$, $\Omega$ is a dominating set of $AG(M)$ and so $\gamma(AG(M))\leq n$. Suppose $S$ is a dominating set of $AG(M)$ with $|S| < n$. Then there exists $N\in Max(M)$ such that $NK\neq (0)$ for all $K\in S$, a contradiction. Hence $\gamma(AG(M))=n$.
\end{proof} In view of Theorem \ref{t2.6}, we have the following corollary. \begin{cor}\label{c2.7} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. Then \begin{itemize} \item [(a)] $ir(AG(M))=n$. \item [(b)] $\gamma_c(AG(M))=n$. \item [(c)] $\gamma_t(AG(M))=n$. \item [(d)] $\gamma_{cl}(AG(M))=n$. \item [(e)] $\gamma_{pr}(AG(M))=n$ if $n$ is even, and $\gamma_{pr}(AG(M))=n+1$ if $n$ is odd. \end{itemize} \end{cor} \begin{proof} Consider the $\gamma$-set $\Omega$ of $AG(M)$ identified in the proof of Theorem \ref{t2.6}. By Proposition \ref{p1.9}, $\Omega$ is a maximal irredundant set with minimum cardinality and so $ir(AG(M))=n$. Clearly $\langle\Omega\rangle$ is a complete subgraph of $AG(M)$. Hence $\gamma_c(AG(M))=\gamma_t(AG(M))=\gamma_{cl}(AG(M))=n$. If $n$ is even, then $\langle\Omega\rangle$ has a perfect matching and so $\Omega$ is a paired dominating set of $AG(M)$, whence $\gamma_{pr}(AG(M)) = n$. If $n$ is odd, then $\langle\Omega \cup \{K\}\rangle$ has a perfect matching for some $K\in V(AG(M))\setminus \Omega$, and so $\Omega \cup \{K\}$ is a paired dominating set of $AG(M)$. Thus $\gamma_{pr}(AG(M))=n$ if $n$ is even and $\gamma_{pr}(AG(M))=n+1$ if $n$ is odd. \end{proof} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. Then by Theorem \ref{t2.4}, the radius of $AG(M)$ is $2$. Further, by Theorem \ref{t2.6}, the domination number of $AG(M)$ is equal to $n$, the number of distinct maximal submodules of $M$. However, this need not be true if the radius of $AG(M)$ is $1$. For example, consider the module $M = M_1 \oplus M_2$, where $M_1$ and $M_2$ are simple modules. Then $AG(M)$ is a star graph and so has radius $1$, whereas $M$ has two distinct maximal submodules. The following corollary gives a more precise relationship between the domination number of $AG(M)$ and the number of maximal submodules of $M$ when $M$ is finite.
\begin{cor}\label{c2.8} Let $M$ be a finite module and $\gamma(AG(M)) = n$. Then either $M = M_1 \oplus M_2$, where $M_1$, $M_2$ are simple modules, or $M$ has $n$ maximal submodules. \end{cor} \begin{proof} When $\gamma(AG(M)) = 1$, the proof follows from \cite[Corollary 2.12]{ah16}. When $\gamma(AG(M)) = n > 1$, $M$ cannot be of the form $M_1 \oplus M_2$ with $M_1$, $M_2$ simple modules. Hence $M=\oplus _{i=1}^m M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq m$ and $m\geq 2$. By Theorem \ref{t2.6}, $\gamma(AG(M)) = m$, and hence $m = n$ by assumption; i.e., $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. One can now see that $M$ has $n$ maximal submodules. \end{proof} \section{The relationship between $\gamma_t(AG(M))$ and $\gamma(AG(M))$} The main goal of this section is to study the relation between $\gamma_t(AG(M))$ and $\gamma(AG(M))$. \begin{thm}\label{t3.1} Let $M$ be a module. Then $\gamma_t(AG(M))= \gamma(AG(M))$ or $\gamma_t(AG(M))= \gamma(AG(M))+1$. \end{thm} \begin{proof} Let $\gamma_t(AG(M))\neq \gamma(AG(M))$ and let $D$ be a $\gamma$-set of $AG(M)$. If $\gamma(AG(M))=1$, then it is clear that $\gamma_t(AG(M))=2$. So let $\gamma(AG(M))> 1$ and put $k = \max \{n \mid$ there exist $L_1, \ldots , L_n \in D$ with $\prod_{i=1} ^n L_i \neq 0 \}$. Since $\gamma_t(AG(M))\neq \gamma(AG(M))$, we have $k \geq 2$. Let $L_1, \ldots , L_k \in D$ be such that $\prod_{i=1} ^k L_i \neq 0$. Then $S = \{\prod_{i=1} ^k L_i, ann(L_1), \ldots , ann(L_k) \}\cup (D\setminus \{L_1, \ldots , L_k \})$ is a $\gamma_t$-set. Hence $\gamma_t(AG(M))= \gamma(AG(M))+1$. \end{proof} In the following result we find the total domination number of $AG(M)$. \begin{thm}\label{t3.2} Let $S$ be the set of all maximal elements of the set $V(AG(M))$. If $|S| > 1$, then $\gamma_t(AG(M)) = |S|$.
\end{thm} \begin{proof} Let $S$ be the set of all maximal elements of the set $V(AG(M))$, let $K\in S$, and let $|S| > 1$. First we show that $K = ann(ann K)$ and that there exists $m\in M$ such that $K = ann(m)$. Let $K\in S$. Then $ann K\neq 0$ and so there exists $0\neq m\in ann K$. Hence $K\subseteq ann(ann K)\subseteq ann(m)$. Thus, by the maximality of $K$, we have $K = ann(ann K) = ann(m)$. By Zorn's Lemma it is clear that if $V(AG(M))\neq \emptyset$, then $S\neq \emptyset$. For any $K\in S$ choose $m_K \in M$ such that $K = ann(m_K)$. We assert that $D = \{Rm_K \mid K\in S\}$ is a total dominating set of $AG(M)$. Since for every $L\in V(AG(M))$ there exists $K\in S$ such that $L\subseteq K = ann(m_K)$, $L$ and $Rm_K$ are adjacent. Also, for each pair $K, K'\in S$, we have $(Rm_K)(Rm_{K'}) = 0$. Indeed, if there exists $m\in (Rm_K)(Rm_{K'})\setminus \{0\}$, then $K = K' = ann(m)$. Thus $\gamma_t(AG(M))\leq |S|$. To complete the proof, we show that each element of an arbitrary $\gamma_t$-set of $AG(M)$ is adjacent to exactly one element of $S$. Assume to the contrary that a vertex $L'$ of a $\gamma_t$-set of $AG(M)$ is adjacent to $K$ and $K'$, for $K, K' \in S$. Thus $K = K' = ann L'$, which is impossible. Therefore $\gamma_t(AG(M)) = |S|$. \end{proof} \begin{thm}\label{t3.3} Let $R$ be a reduced ring, let $M$ be a faithful module, and let $|Min(R)| < \infty$. If $\gamma(AG(M))> 1$, then $\gamma_t(AG(M))= \gamma(AG(M))= |Min(R)|$. \end{thm} \begin{proof} Since $R$ is reduced, $M$ is a faithful module, and $\gamma(AG(M))> 1$, we have $|Min(R)| > 1$. Suppose that $Min(R) = \{p_1, \ldots , p_n \}$. If $n = 2$, the result follows from Corollary \ref{c1.8}. Therefore, suppose that $n \geq 3$. Define $\widehat{p_iM} = p_1 \ldots p_{i-1}p_{i+1} \ldots p_n M$, for every $i = 1, \ldots , n$. Clearly, $\widehat{p_iM}\neq 0$, for every $i = 1, \ldots , n$. Since $R$ is reduced, we deduce that $\widehat{p_iM} p_iM=0$. Therefore, every $p_i M$ is a vertex of $AG(M)$.
If $K$ is a vertex of $AG(M)$, then by \cite[Corollary 3.5]{ati69}, $(K:M)\subseteq Z(R) = \cup_{i=1} ^n p_i$. It follows from the Prime Avoidance Theorem that $(K:M)\subseteq p_i$, for some $i$, $1\leq i \leq n$. Thus $p_iM$ is a maximal element of $V(AG(M))$, for every $i = 1, \ldots , n$. From Theorem \ref{t3.2}, $\gamma_t(AG(M))= |Min(R)|$. Now, we show that $\gamma(AG(M))= n$. Assume to the contrary that $B = \{J_1, \ldots , J_{n-1}\}$ is a dominating set for $AG(M)$. Since $n \geq 3$, the submodules $p_iM$ and $p_jM$, for $i \neq j$, are not adjacent (from $p_ip_j = 0 \subseteq p_k$ it would follow that $p_i\subseteq p_k$ or $p_j\subseteq p_k$, which is not true). Because of that, we may assume that for some $k < n - 1$, $J_i = p_iM$ for $i = 1,\ldots, k$, but none of the other submodules from $B$ are equal to some $p_sM$ (if $B = \{p_1M, \ldots , p_{n-1}M\}$, then $p_nM$ would be adjacent to some $p_iM$, for $i\neq n$). So, every submodule in $\{p_{k+1}M, \ldots , p_nM\}$ is adjacent to a submodule in $\{J_{k+1}, \ldots , J_{n-1}\}$. It follows that for some $s\neq t$, there is an $l$ such that $(p_sM)J_l= 0 = (p_tM)J_l$. Since $p_s\nsubseteq p_t$, it follows that $J_l\subseteq p_tM$, so $J_l ^2 = 0$, which is impossible since the ring $R$ is reduced. So $\gamma_t(AG(M))= \gamma(AG(M))= |Min(R)|$. \end{proof} Theorem \ref{t3.3} leads to the following corollary. \begin{cor}\label{c3.4} Let $R$ be a reduced ring, let $M$ be a faithful module, and assume that $|Min(R)| < \infty$. Then the following are equivalent. \begin{itemize} \item [(a)] $\gamma(AG(M))=2$. \item [(b)] $AG(M)$ is a bipartite graph with two nonempty parts. \item [(c)] $AG(M)$ is a complete bipartite graph with two nonempty parts. \item [(d)] $R$ has exactly two minimal prime ideals. \end{itemize} \end{cor} \begin{proof} This follows from Theorem \ref{t3.3} and Corollary \ref{c1.8}. \end{proof} In the following theorem the domination number of bipartite annihilating-submodule graphs is given.
\begin{thm}\label{t3.5} Let $M$ be a faithful module. If $AG(M)$ is a bipartite graph, then $\gamma(AG(M))\leq 2$. \end{thm} \begin{proof} Let $M$ be a faithful module. If $AG(M)$ is a bipartite graph, then by Theorem \ref{t1.7}, either $R$ is a reduced ring with exactly two minimal prime ideals, or $AG(M)$ is a star graph with more than one vertex. If $R$ is a reduced ring with exactly two minimal prime ideals, then the result follows by Corollary \ref{c3.4}. If $AG(M)$ is a star graph with more than one vertex, then we are done. \end{proof} The next theorem concerns the total domination number of the annihilating-submodule graphs of Artinian modules. \begin{thm}\label{t3.6} Let $M=\oplus _{i=1}^n M_i$, where $M_i$ is a f.g.\ Artinian local module for all $1\leq i\leq n$, $n\geq 2$, and assume that $M \neq M_1 \oplus M_2$ with $M_1$, $M_2$ simple modules. Then $\gamma_t(AG(M))= \gamma(AG(M))= |Max(M)|$. \end{thm} \begin{proof} By Proposition \ref{p1.5}, every nonzero proper submodule of $M$ is a vertex in $AG(M)$. So the set of maximal elements of $V(AG(M))$ equals $Max(M)$. Let $M=\oplus _{i=1}^n M_i$, where $(M_i, J_i)$ is a f.g.\ Artinian local module for all $1\leq i\leq n$ and $n\geq 2$. Let $Max(M) = \{N_i = M_1 \oplus \ldots \oplus M_{i-1} \oplus J_i\oplus M_{i+1}\oplus \ldots \oplus M_n \mid 1 \leq i \leq n \}$. By Theorem \ref{t3.2}, $\gamma_t(AG(M))= |Max(M)|$. In the sequel, we prove that $\gamma(AG(M)) = n$. Assume to the contrary that the set $\{K_1, \ldots , K_{n-1}\}$ is a dominating set for $AG(M)$. Since $M \neq M_1 \oplus M_2$ with $M_1$, $M_2$ simple modules, we find that $K_i N_s=K_i N_t=0$ for some $i, t, s$ with $s\neq t$, where $1 \leq i \leq n-1$ and $1 \leq t, s \leq n$. This means that $K_i = 0$, a contradiction. \end{proof} The following theorem provides an upper bound for the domination number of the annihilating-submodule graph of a Noetherian module.
\begin{thm}\label{t3.7} If $R$ is a Noetherian ring and $M$ is a f.g.\ module, then $\gamma(AG(M))\leq |Ass(M)|< \infty$. \end{thm} \begin{proof} Since $R$ is a Noetherian ring and $M$ is a f.g.\ module, we have $|Ass(M)|< \infty$ by \cite{s}. Let $Ass(M) = \{p_1, \ldots , p_n\}$, where $p_i = ann(m_i)$ for some $m_i \in M$, for every $i = 1, \ldots , n$. Set $A = \{Rm_i \mid 1 \leq i \leq n \}$. We show that $A$ is a dominating set of $AG(M)$. Clearly, every $Rm_i$ is a vertex of $AG(M)$, for $i = 1, \ldots , n$ (indeed, $(p_iM)(Rm_i)=0$). If $K$ is a vertex of $AG(M)$, then \cite[Corollary 9.36]{s} implies that $(K:M)\subseteq Z(M) = \cup_{i=1} ^n p_i$. It follows from the Prime Avoidance Theorem that $(K:M) \subseteq p_i$, for some $i$, $1 \leq i \leq n$. Thus $K(Rm_i) = 0$, as desired. \end{proof} The final result of this paper provides the domination number of the annihilating-submodule graph of a finite direct product of modules. \begin{thm}\label{c3.8} For a module $M$ which is a product of two $($nonzero$)$ modules, one of the following holds: \begin{itemize} \item [(a)] If $M \cong F \times D$, where $F$ is a simple module and $D$ is a prime module, then $\gamma(AG(M))=1$. \item [(b)] If $M \cong D_1 \times D_2$, where $D_1$ and $D_2$ are prime modules which are not simple, then $\gamma(AG(M))=2$. \item [(c)] If $M \cong M_1 \times D$, where $M_1$ is a module which is not prime and $D$ is a prime module, then $\gamma(AG(M)) = \gamma(AG(M_1)) + 1$. \item [(d)] If $M \cong M_1 \times M_2$, where $M_1$ and $M_2$ are two modules which are not prime, then $\gamma(AG(M)) = \gamma(AG(M_1)) + \gamma(AG(M_2))$. \end{itemize} \end{thm} \begin{proof} Parts $(a)$ and $(b)$ are trivial. $(c)$ With no loss of generality, one can assume that $\gamma(AG(M_1)) < \infty$. Suppose that $\gamma(AG(M_1)) =n$ and $\{K_1, \ldots , K_n \}$ is a minimal dominating set of $AG(M_1)$. It is not hard to see that $\{K_1 \times 0, \ldots , K_n \times 0, 0\times D\}$ is a smallest dominating set of $AG(M)$.
$(d)$ We may assume that $\gamma(AG(M_1)) =m$ and $\gamma(AG(M_2)) =n$, for some positive integers $m$ and $n$. Let $\{K_1, \ldots , K_m\}$ and $\{L_1, \ldots , L_n\}$ be two minimal dominating sets in $AG(M_1)$ and $AG(M_2)$, respectively. It is easy to see that $\{K_1 \times 0, \ldots , K_m \times 0, 0 \times L_1, \ldots , 0 \times L_n\}$ is a smallest dominating set in $AG(M)$. \end{proof} \end{document}
\begin{document} \fancyhead{} \title{PTHash: Revisiting FCH Minimal Perfect Hashing} \author{Giulio Ermanno Pibiri} \affiliation{ \institution{ISTI-CNR, Pisa, Italy} } \email{[email protected]} \author{Roberto Trani} \affiliation{ \institution{ISTI-CNR, Pisa, Italy} } \email{[email protected]} \begin{abstract} Given a set $S$ of $n$ distinct keys, a function $f$ that bijectively maps the keys of $S$ into the range $\{0,\ldots,n-1\}$ is called a \emph{minimal perfect hash function} for $S$. Algorithms that find such functions when $n$ is large and retain \emph{constant} evaluation time are of practical interest; for instance, search engines and databases typically use minimal perfect hash functions to quickly assign identifiers to static sets of variable-length keys such as strings. The challenge is to design an algorithm which is \emph{efficient} in three different aspects: time to find $f$ (construction time), time to evaluate $f$ on a key of $S$ (lookup time), and space of representation for $f$. Several algorithms have been proposed to trade off between these aspects. In 1992, Fox, Chen, and Heath (FCH) presented an algorithm at SIGIR providing very fast lookup evaluation. However, the approach received little attention because of its large construction time and higher space consumption compared to other subsequent techniques. Almost thirty years later we revisit their framework and present an improved algorithm that scales well to large sets and reduces space consumption altogether, without compromising the lookup time. We conduct an extensive experimental assessment and show that the algorithm finds functions that are competitive in space with state-of-the-art techniques and provide $2-4\times$ better lookup time.
\end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10002951.10003317.10003359.10003363</concept_id> <concept_desc>Information systems~Retrieval efficiency</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003752.10003809.10010031</concept_id> <concept_desc>Theory of computation~Data structures design and analysis</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10003752.10003809.10010031.10002975</concept_id> <concept_desc>Theory of computation~Data compression</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Information systems~Retrieval efficiency} \ccsdesc[500]{Theory of computation~Data structures design and analysis} \keywords{Minimal Perfect Hashing; FCH; XOR; Compressed Data Structures} \maketitle \section{Introduction}\label{sec:introduction} The \emph{minimal perfect hashing} problem is to build a data structure that assigns the numbers in $[n]=\{0,\ldots,n-1\}$ to the $n$ distinct elements of a static set $S$. The resulting data structure is called a minimal perfect hash function $f$, or MPHF in short, and should consume little space while supporting constant-time evaluation for any key in $S$. In principle, it would be trivial to obtain a MPHF for $S$ using a perfect hash table: each key $x \in S$ is associated in $\mathcal{O}(1)$ time with the pair $\langle x,p \rangle$ stored in the table, where $p$ is the ``identifier'' of $x$ in $[n]$. However, this approach pays the cost of representation for the set $S$ itself, in addition to the storage cost of the numbers in $[n]$, which occupy $\Theta(n \log n)$ bits. Crucially, however, the minimal perfect hashing problem ignores the behaviour of $f$ on keys that are \emph{not} in $S$.
This relaxation makes it possible to discard the space for $S$, thus admitting a space lower bound of $\log_2 e \approx 1.44$ {{bits/key}}~\cite{fredman1984storing,mehlhorn1982program} regardless of the size and type of the input. Practical applications of minimal perfect hashing are pervasive in computing and involve compressed full-text indexes~\cite{belazzougui2014alphabet}, computer networks~\cite{lu2006perfect}, databases~\cite{chang2005perfect}, prefix-search data structures~\cite{belazzougui2010fast}, language models~\cite{pibiri2017efficient,PibiriV19}, Bloom filters and variants~\cite{broder2004network,fan2014cuckoo,graf2020xor}, just to name a few. Several algorithms are known for this problem, each of them exposing a trade-off between construction time, lookup time, and space consumption. We review these approaches in Section~\ref{sec:related_work}. Our starting point for this work is an old technique, originally described by~\citet*{fox1992faster}, and named FCH after them. They proposed a three-step framework to build a data structure that makes it possible to evaluate the MPHF with just a single memory access to an array, besides a few hash and arithmetic calculations. More specifically, the algorithm builds a MPHF that occupies $cn$ bits for a parameter $c$. This parameter drives the efficiency of the construction: the higher the value of $c$, the lower the time to construct $f$. Unfortunately, to find a MPHF with FCH in a reasonable amount of time on large inputs, one has to use a large value of $c$ and, thus, waste space. On the other hand, more recent algorithms scale up very well and take less {{bits/key}}, but usually perform 3-4 memory accesses per lookup. For this reason, this work aims at combining the fast lookup time of FCH with a more compact representation.
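The definition above can be made concrete with a deliberately naive sketch (ours, not any of the algorithms discussed in this paper): for a handful of keys, a MPHF can be found by brute force, trying seeds of a generic hash function until the induced mapping happens to be a bijection onto $[n]$. The expected number of trials grows as $n^n/n!$, i.e., exponentially in $n$, which is precisely why the techniques surveyed next exist.

```python
import itertools

def brute_force_mphf(keys):
    """Toy MPHF by seed search: illustrative only, exponential in n."""
    n = len(keys)
    for seed in itertools.count():
        # Python's tuple hash stands in for a generic seeded hash function.
        if len({hash((seed, k)) % n for k in keys}) == n:
            return lambda x, s=seed: hash((s, x)) % n

keys = ["dog", "cat", "owl", "fox", "bee"]
f = brute_force_mphf(keys)
assert sorted(f(k) for k in keys) == [0, 1, 2, 3, 4]  # a bijection onto [n]
```

Note that the returned function still maps keys outside the input set to some value in $[n]$; as discussed above, this behaviour is simply left unspecified.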
It should be noted that, once the space of the MPHF is \emph{sufficiently small} albeit not very close to the theoretical minimum, the most important aspect becomes indeed lookup time \emph{provided that} the function can be built feasibly~\cite{limasset2017fast}. In fact, in practical applications, MPHFs are employed as building blocks to obtain static dictionaries, i.e., key-value stores, which are built \emph{once} and evaluated many times. If the values take much more space than the MPHF itself, which is the typical case in practice, whether the MPHF takes 3 or 2 bits per key is not a critical issue. Fast lookup and feasible construction time are, therefore, the most critical aspects for this problem, hence the ones we focus on. In this paper, we propose {\method{PTHash}} --- a novel algorithm combining the lookup performance of FCH with succinct space and fast construction on large sets. The crucial aspect of {\method{PTHash}} is that it dramatically reduces the entropy of the information stored in the data structure compared to FCH. This, in turn, makes it possible to tailor a lightweight compression scheme that reduces space consumption significantly while maintaining remarkable lookup performance. We evaluate {\method{PTHash}} on large sets of fixed and variable-length keys, in comparison to state-of-the-art techniques. We show that it is competitive in construction time with other techniques and, for approximately the same space consumption, it is $2-4\times$ faster at lookup time. Our C++ implementation of {\method{PTHash}} is available at \url{https://github.com/jermp/pthash}. \section{Related Work}\label{sec:related_work} To date, four different approaches have been devised to solve the minimal perfect hashing problem. We summarize them in chronological order of proposal.
We also point out that some theoretical constructions, like that by~\citet{hagerup2001efficient}, can be proved to reach the space lower bound of $n\log_2 e$ bits, but only work in the asymptotic sense, i.e., for $n$ too large to be of any practical interest. \parag{Hash and Displace} The ``hash and displace'' technique was originally introduced by~\citet*{fox1992faster} (although \citet*{pagh1999hash} named it this way inspired by a work due to~\citet{tarjan1979storing}). Since we describe their approach (FCH) in detail in Section~\ref{sec:FCH}, we just give a simple overview here. Keys are first hashed and mapped into \emph{non-uniform} buckets $\{B_i\}$; then, the buckets are sorted and processed in order of decreasing size: for each bucket $B_i$, a displacement value $d_i \in [n]$ is determined so that all keys in the bucket can be placed without collisions to positions $(h(x) + d_i) \mymod n$, for a proper hash function $h$ and $x \in B_i$. Lastly, the sequence of displacements $d_i$ is stored in compact form using $\lceil \log_2 n \rceil$ bits per value. While the theoretical analysis suggests that by decreasing the number of buckets it is possible to lower the space usage (at the cost of a larger construction time), in practice it is infeasible to go below $2.5$ {{bits/key}} for large values of $n$. In the compressed hash and displace (CHD) variant by~\citet{belazzougui2009hash}, keys are first \emph{uniformly} distributed to buckets, with expected size $\lambda > 0$; then, for each bucket $B_i$, a pair of displacements $\langle d_0,d_1 \rangle$ is determined so that all keys in the bucket can be placed without collisions to positions $(h_1(x) + d_0 h_2(x) + d_1) \mymod n$, for a given pair of hash functions $h_1, h_2$ and for $x \in B_i$.
Instead of explicitly storing a pair $\langle d_0,d_1 \rangle$ for each bucket, the index of such a pair in the sequence $$\langle 0,0 \rangle,...,\langle 0,n-1 \rangle,\langle 1,0 \rangle,...,\langle 1,n-1 \rangle,...,\langle n-1,0 \rangle,...,\langle n-1,n-1 \rangle$$ is stored. Lastly, the sequence of indexes is stored in compressed form using the entropy coding mechanism introduced by~\citet*{fredriksson2007simple}, retaining $\mathcal{O}(1)$ access time. \parag{Linear Systems} In the late 1990s, \citet{majewski1996family} introduced an algorithm to build a MPHF exploiting a connection between linear systems and hypergraphs. (Almost ten years later, \citet{chazelle2004bloomier} proposed an analogous construction in an independent manner.) The MPHF $f$ is found by generating a system of $n$ random equations in $m$ variables of the form $$w_{h_1(x)} + w_{h_2(x)} + \cdots + w_{h_r(x)} = f(x) \mymod{n}, x \in S,$$ where $h_i : S \rightarrow [m]$ is a random hash function, and $\{w_i\}$ are $m$ variables whose values are in $[n]$. Due to bounds on the acyclicity of random graphs, if the ratio between $m$ and $n$ is above a certain threshold $\gamma_{r}$, the system can be almost always triangulated and solved in linear time by peeling the corresponding hypergraph. The constant $\gamma_{r}$ depends on the degree $r$ of the graph, and attains its minimum for $r=3$, i.e., $\gamma_{3} \approx 1.23$. \citet{belazzougui2014cache} proposed a cache-oblivious implementation of the algorithm suitable for external memory constructions. \citet{genuzio2016fast,genuzio2020fast} demonstrated practical improvements to the Gaussian elimination technique used to solve the linear system, that make the overall construction time, lookup time, and space competitive with the cache-oblivious implementation~\cite{belazzougui2014cache}. \parag{Fingerprinting} \citet{muller2014retrieval} introduced a technique based on fingerprinting. The general idea is as follows.
All keys are first hashed into $[n]$ using a random hash function and collisions are recorded using a bitmap $B_0$ of size $n_0=n$. In particular, keys that do not collide have their position in the bitmap marked with a \bit{1}; all positions involved in a collision are marked with a \bit{0} instead. If $n_1 > 0$ collisions are produced, then the same process is repeated recursively for the $n_1$ colliding keys using a bitmap $B_1$ of size $n_1$. All bitmaps, called ``levels'', are then concatenated together in a bitmap $B$. The lookup algorithm keeps hashing the key level by level until a \bit{1} is hit, say in position $p$ of $B$. A constant-time ranking data structure~\cite{jacobson1989space,navarro2016compact} is used to count the number of \bit{1}s in $B[0..p]$ to ensure that the returned value is in $[n]$. On average, only $1.56$ levels are accessed in the most succinct setting~\cite{muller2014retrieval} (3 {{bits/key}}). \citet{limasset2017fast} provided an implementation of this approach, named BBHash, that is very fast in construction and lookup, and scales to very large key sets using multiple threads. A parameter $\gamma \geq 1$ is introduced to speed up construction and query time, so that bitmap $B_i$ on level $i$ is $\gamma n_i$ bits long. This clearly reduces collisions and, thus, the average number of levels accessed at query time. However, the larger $\gamma$, the higher the space consumption. \parag{Recursive Splitting} Very recently, \citet{esposito2020recsplit} proposed a new technique, named RecSplit, for building very succinct MPHFs in expected linear time and providing expected constant lookup time. The authors first observed that for very small sets it is possible to find a MPHF simply by brute-force searching for a bijection with suitable codomain. Then, the same approach is applied in a divide-and-conquer manner to solve the problem on larger inputs.
Given two parameters $b$ and $\ell$, the keys are divided into buckets of average size $b$ using a random hash function. Each bucket is recursively split until a block of size $\ell$ is formed, for which a MPHF can be found using brute force, hence forming a rooted tree of splittings. The parameters $b$ and $\ell$ provide different space/time trade-offs. While providing a very compact representation, the evaluation performs one memory access for each level of the tree, which penalizes the lookup time. \section{The FCH Technique}\label{sec:FCH} Fox, Chen, and Heath presented a three-step framework~\cite{fox1992faster} for minimal perfect hashing in 1992, which we describe here as it will form the basis for our own development in Section~\ref{sec:main}. For a given set $S$ of $n$ keys, the technique finds a MPHF $f$ of size $cn$ bits, where $c > \log_2 e$ is a parameter that trades space for construction time. \parag{Construction} The construction is carried out in three steps, namely \emph{mapping}, \emph{ordering}, and \emph{searching}. First, the mapping step partitions the set of keys by placing the keys into \emph{non-uniform} buckets. Then, the ordering step orders the buckets by \emph{non-increasing} size, which is the order used to process the keys. Lastly, the searching step attempts to assign the free positions in $[n]$ to the keys of each bucket, in a greedy way. Two hash functions are used during the process. In practice, a pseudo-random function $h$ is used, such as MurmurHash~\cite{smhasher}, with two different seeds $s_1$ and $s_2$. The seed $s_1$ can be chosen at random, whereas $s_2$ must satisfy a property as explained shortly. \noindent(i) \emph{Mapping.} The non-uniform mapping of keys into buckets is accomplished as follows. All keys are hashed using $h(\cdot,s_1)$ and a value $p_1$ is chosen such that $S$ is partitioned into two sets, $S_1 = \{x | (h(x,s_1)\mymod{n}) < p_1\}$ and $S_2 = S \setminus S_1$.
Then, $m = \lceil cn / (\log_2 n + 1) \rceil$ buckets are allocated to hold the keys, for a given $c$. The lower the value of $c$, the lower the number of buckets and the higher the average number of keys per bucket. Finally, a value $p_2$ is chosen so that each key $x \in S$ is assigned to the bucket \begin{equation}\label{eq:bucket} \func{bucket}(x) = \left\{ \begin{array}{ll} h(x,s_1)\mymod{p_2} & x \in S_1 \\ p_2 + \left(h(x,s_1) \mymod{(m-p_2)}\right) & \mbox{otherwise} \\ \end{array} \right.. \end{equation}
\noindent The arbitrary thresholds $p_1$ and $p_2$ are set to, respectively, $0.6n$ and $0.3m$ so that the mapping of keys into the buckets is \emph{uneven}: roughly 60\% of the keys are mapped to 30\% of the buckets.
\noindent(ii) \emph{Ordering.} After all keys are mapped to the $m$ buckets, the buckets are sorted by \emph{non-increasing} size to speed up the searching step.
\noindent(iii) \emph{Searching.} For each bucket $B_i$ in the order given by the previous step, a \emph{displacement} value $d_i$ and an extra bit $b_i$ are determined so that all keys in the bucket do not produce collisions with previously occupied positions, when hashed to positions $$ \func{position}(x,d_i,b_i) = (h(x,s_2+b_i) + d_i)\mymod{n}. $$ In particular, the global seed $s_2$ is chosen such that there are no collisions between the keys of the \emph{same} bucket. To add a degree of freedom when searching, an extra bit $b_i$ is used for the seed of $h$ for bucket $B_i$ so as to avoid failures when no displacement is found: after trying all displacements for $b_i=\bit{0}$, the same displacements are tried again with $b_i=\bit{1}$. To accelerate the identification of $d_i$, an auxiliary data structure formed by two integer arrays of $\Theta(n \log n)$ bits is used to locate the available positions. Given a random available position $p$, $d_i$ is computed by ``aligning'' an arbitrary key of $B_i$ to $p$.
This random alignment is rather critical for the algorithm as it guarantees that all positions have the \emph{same} probability of being occupied, thus preserving the search efficiency.
\parag{Discussion} When the searching phase terminates, the $m$ pairs $\langle d_i, b_i \rangle$, one for each bucket, have been determined and stored in an array $P$, in the order given by the function \func{bucket}. This array is what the MPHF actually stores. Since at most $n$ displacement values can be tried for a bucket, each $d_i$ needs $\lceil \log_2 n \rceil$ bits. Therefore, as $\lceil\log_2 n\rceil + 1$ bits are spent to encode the pair $\langle d_i, b_i \rangle$, the total space for storing the array $P$ is $m (\lceil\log_2 n\rceil + 1) \approx cn$ bits. With the array $P$, evaluating $f(x)$ is simple, as shown in the following pseudocode. Although the original authors did not discuss implementation details, a \emph{single} memory access is needed by the lookup algorithm if we interleave the two quantities $d_i$ and $b_i$ in a contiguous segment of $\lceil\log_2 n\rceil + 1$ bits. In particular, $b_i$ is written as the least significant bit (from the right) and $d_i$ takes the remaining $\lceil\log_2 n\rceil$ most significant bits. To recover the two quantities from the interleaved value $v$ read from $P$, simple and fast bitwise operations are employed: $b_i = v \,\code{\&}\, \bit{1}$; $d_i = v \gg \bit{1}$. This is the actual implementation of the assignment $\langle d_i, b_i \rangle = P[i]$ in step 3 of the pseudocode. Therefore, the FCH technique achieves very fast lookup time.
\SetArgSty{textnormal} \begin{algorithm} \SetKwBlock{Begin}{}{} \Begin({$f$}\text{(}$x$\text{)} :) { $i = \func{bucket}(x)$ \\ $\langle d_i, b_i \rangle = P[i]$ \\ \code{return} $\func{position}(x,d_i,b_i)$ } \end{algorithm}
The constant $c$, fixed before construction, governs a clear trade-off between space and construction time: the larger it is, the higher the probability of finding displacements quickly, because fewer keys per bucket have to satisfy the search condition simultaneously. On the other hand, if $c$ is chosen to be small, e.g., $c=2$, space consumption is reduced but the search for a good displacement value may take too long --- up to the point when \emph{all} possible $n$ displacements have been tried (for both choices of $b_i$) and construction fails: at that point, one has to re-seed the hash functions and try again. In practice, FCH spends a lot of time finding a MPHF compared to other more recent techniques for the \emph{same} space budget. To give a concrete example, the algorithm takes 1 hour and 10 minutes to build a MPHF with $c=3$ over a set of 100 million random 64-bit keys. Other techniques reviewed in Section~\ref{sec:related_work} are able to do the same in a few minutes, or even less, for the \emph{same} final space budget of 3 {{bits/key}}. We will present a more detailed comparison in Section~\ref{sec:overall_comparison}.
\section{PTH\lowercase{ash}}\label{sec:main}
Inspired by the framework discussed in Section~\ref{sec:FCH}, we now present {\method{PTHash}}, an algorithm that aims at preserving the lookup performance of FCH while improving both construction time and space consumption. To do so, we maintain the first two steps of the framework, i.e., mapping and ordering, but completely re-design the most critical one --- the searching step. The novel search algorithm allows us to achieve some important goals that are introduced in the following.
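The three construction steps and the resulting lookup of the FCH framework can be sketched as follows. This is a simplified model, not the authors' implementation: the seeds, the retry range, and the helper names (\texttt{bucket\_of}, \texttt{build\_fch}) are illustrative, and MurmurHash is replaced by a generic keyed hash.

```python
import hashlib
import math
import random

def h(x: int, seed: int) -> int:
    # generic keyed hash, standing in for MurmurHash
    data = x.to_bytes(8, "little") + seed.to_bytes(8, "little")
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "little")

def build_fch(keys, c=4.0, s1=1):
    n = len(keys)
    m = math.ceil(c * n / (math.log2(n) + 1))
    p1, p2 = int(0.6 * n), int(0.3 * m)

    def bucket_of(x):
        hv = h(x, s1)
        return hv % p2 if hv % n < p1 else p2 + hv % (m - p2)

    # (i) mapping: skewed distribution of keys into m buckets
    buckets = [[] for _ in range(m)]
    for x in keys:
        buckets[bucket_of(x)].append(x)

    # re-seed s2 and retry on failure, as the technique prescribes
    for s2 in range(1000, 1064):
        taken = [False] * n
        P = [None] * m
        ok = True
        # (ii) ordering: process buckets by non-increasing size
        for i in sorted(range(m), key=lambda j: -len(buckets[j])):
            found = False
            for b in (0, 1):
                base = [h(x, s2 + b) for x in buckets[i]]
                if len({v % n for v in base}) < len(base):
                    continue  # in-bucket collision: no displacement can fix it
                # (iii) searching: greedily try displacements d = 0..n-1
                for d in range(n):
                    pos = [(v + d) % n for v in base]
                    if not any(taken[p] for p in pos):
                        for p in pos:
                            taken[p] = True
                        P[i] = (d, b)
                        found = True
                        break
                if found:
                    break
            if not found:
                ok = False
                break
        if ok:
            return lambda x: (h(x, s2 + P[bucket_of(x)][1]) + P[bucket_of(x)][0]) % n
    raise RuntimeError("construction failed for all tried seeds")

# tiny demo: 100 distinct keys are mapped bijectively onto [0..99]
keys = random.Random(42).sample(range(1 << 40), 100)
f = build_fch(keys)
assert sorted(f(x) for x in keys) == list(range(100))
```

The sketch scans displacements linearly instead of using the paper's $\Theta(n \log n)$-bit auxiliary structure for locating free positions, which is why the real algorithm is much faster than this toy version.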
Before illustrating the details, we briefly sketch the intuition behind {\method{PTHash}}. Recall that FCH finds a MPHF whose size is $cn$ bits, for a given $c$. However, FCH does not offer opportunities for compression: for each bucket the displacement is chosen with uniform probability from $[n]$ to guarantee that all positions are occupied with equal probability during the search. This makes the output of the search --- the array $P$ --- hardly compressible so that $\lceil \log_2 n \rceil$ bits per displacement is the best we can hope for. Now, if $P$ were compressible instead, we could afford to use a different constant $c^{\prime} > c$ for searching, such that the size of the \emph{compressed} $P$ would be approximately $cn$ bits. That is, for the \emph{same space} budget, we search for a MPHF with a larger $c$, hence reducing construction time. Our refined ambition in this section is, therefore, to achieve two related goals: \noindent(i) design an algorithm that guarantees a compressible output;\\ \noindent(ii) introduce an encoding scheme that does not compromise lookup performance. \subsection{Searching} We keep track of occupied positions in $[n]=\{0,\ldots,n-1\}$ using a bitmap, $\var{taken}[0..n-1]$. This will be the main supporting structure for the search and costs just $n$ bits (we also maintain a small integer vector to detect in-bucket collisions, but its space is negligible compared to $n$ bits). We map keys into $m = \lceil cn / \log_2 n \rceil$ buckets, $B_0,\ldots,B_{m-1}$, using Formula (\ref{eq:bucket}) and process them in order of non-increasing size. We choose a seed $s$ for a pseudo-random hash function $h$ so that keys in the same bucket are hashed to distinct hash codes. 
Now, for each bucket $B_i$, we search for an integer $k_i$ such that the position assigned to $x \in B_i$ is \begin{equation} \func{position}(x,k_i) = (h(x,s) \oplus h(k_i,s)) \mymod{n} \end{equation} and \begin{equation}\label{search_condition} \var{taken}[\func{position}(x,k_i)] = \bit{0}. \end{equation} The quantity $(x \oplus y)$ represents the bitwise XOR between $x$ and $y$. If the search for $k_i$ is successful, i.e., $k_i$ places all keys of the bucket $B_i$ into unused positions, then $k_i$ is saved in the array $P$ and the positions are marked as occupied via $\var{taken}[\func{position}(x,k_i)]=\bit{1}$; otherwise, a new integer $k_i$ is tried. We call the integer $k_i$ a \emph{pilot} for bucket $B_i$ because it uniquely defines the positions of the keys in $B_i$, hence $P$ is a \emph{pilots table} (PT). Differently from FCH, the random ``displacement'' of keys now happens by virtue of the bitwise XOR instead of via its auxiliary data structure. More specifically, $\func{position}(x,k_i)$ is based on the principle that the XOR between two random integers is another random integer. In fact, since both integers involved in the XOR are produced by a random hash function, we expect them to have approximately the same proportion of \bit{1} and \bit{0} bits in their (fixed-size) binary representation. The bitwise XOR of two such integers is another integer where the proportion is preserved. Note that the way $\func{position}(x,k_i)$ is computed assumes that $n$ is \emph{not} a power of 2 so that all bits resulting from the XOR are relevant for the modulo. If $n$ is a power of 2, we just map keys to $[n+1]$ and manage the extra position as we are going to see in Section~\ref{sec:load_factor}. This new searching strategy has some direct implications. First, the XOR between $h(x,s)$ and $h(k_i,s)$ induces a random ``displacement'' of the keys in $B_i$ where only the second term changes when changing $k_i$, thus allowing us to precompute the hashes of all keys.
Avoiding multiple calculations of $h(x,s)$ is very important when working with long keys such as strings. Second, as we already noted, the space of the auxiliary data structure is spared, which improves the memory consumption during construction compared to FCH. Third, the resilience of the algorithm to failures is improved, because there is theoretically no limit to the number of different pilots $k_i$ that can be tried for a bucket. The displacement values used by FCH, instead, can only assume values in $[n]$, and using one extra bit per bucket caps the number of trials at $2n$. But there is also another, very important, implication: pilots $k_i$ do \emph{not} need to be tried at random, like the displacement values of FCH. Therefore, our strategy is to choose $k_i$ as the \emph{first} value of the sequence $K = 0,1,2,3,\ldots$ that satisfies the search condition in (\ref{search_condition}), with the underlying principle being that $\func{position}(x,k_i)$ does \emph{not} look ``more random'' if also $k_i$ is tried at random. We argue that the combination of these two effects --- (i) random positions generated with every $k_i$ and (ii) smaller values always tried first --- makes $P$ compressible. To understand why $P$ is now compressible, let us derive a formula for the expected pilot value $\mathbb{E}[k_i]$. Each $k_i$ is a random variable taking value $v \in K$ with probability depending on the current load factor of the bitmap \var{taken} (fraction of positions marked with \bit{1}). It follows that $k_i$ is \emph{geometrically distributed} with success probability $p_i$ being the probability of placing all keys in $B_i$ without collisions. Let $\alpha(i)$ be the load factor of the bitmap after buckets $B_0,\ldots,B_{i-1}$ have been processed, that is $$ \alpha(i) = \frac{1}{n}\sum_{j=0}^{i-1}|B_j|, \text{ for } i = 1,\ldots,m-1 $$ and $\alpha(0)=0$ for convenience (empty bitmap).
Then the probability $p_i$ can be modeled\footnote{We actually assume that there are no collisions among the keys in the same bucket, which is reasonable since $|B_i| \ll n$. In fact, their influence is negligible in practice.} as $ p_i = (1 - \alpha(i))^{|B_i|}. $ Since $k_i$ is geometrically distributed, the probability that $k_i=v$, corresponding to the probability of having success after $v$ failures ($v+1$ total trials), is $p_i(1-p_i)^v$, with expected value \begin{equation}\label{eq:expectation} \mathbb{E}[k_i] = \frac{1}{p_i} - 1 = \Big(\frac{1}{1 - \alpha(i)}\Big)^{|B_i|} - 1. \end{equation} \begin{figure}\label{fig:bucket_size} \label{fig:trials} \end{figure} Formula (\ref{eq:expectation}) shows that pilots $k_i$ are quite small, on average. In fact, $\alpha(i)$ will be very small for the first processed buckets, hence yielding small $k_i$s. On the other hand, the ordering of buckets by non-increasing size plays a crucial role in keeping $k_i$s small even for high load factors. In fact, since the $n$ keys are divided into $m = \lceil cn/\log_2 n \rceil$ non-uniform buckets, $|B_i|$ is a small quantity between 0 and slightly more than the average load $n/m$, i.e., $0 \leq |B_i| \leq \Theta(\frac{\log n}{c})$. Figure~\ref{fig:bucket_size} shows a pictorial example of this decreasing distribution of bucket sizes, for $n=10^6$ and $c=3.5$, with growing load factor as buckets are processed by the search. We argue that Formula (\ref{eq:expectation}) gives accurate estimates of the average value of $k_i$, hence of the average number of trials needed by the search. Figure~\ref{fig:trials} shows the average number of trials, both measured and expected using Formula (\ref{eq:expectation}), for $n=10^6$ keys and $c=3.5$. As evident, the expectation is almost perfectly equal to the measured value for all buckets.
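Formula (\ref{eq:expectation}) is also easy to sanity-check in isolation. The sketch below (with assumed load factors and bucket sizes) compares the closed form against the empirical mean number of failed trials, modeling each position as free with probability $1-\alpha$:

```python
import random

def expected_pilot(alpha, b):
    # E[k_i] = (1 / (1 - alpha))^{|B_i|} - 1, as in Formula (eq:expectation)
    return (1.0 / (1.0 - alpha)) ** b - 1.0

def simulated_pilot(alpha, b, rng, rounds=20_000):
    # count failed trials until all |B_i| positions land on free slots
    total = 0
    for _ in range(rounds):
        k = 0
        while not all(rng.random() < 1.0 - alpha for _ in range(b)):
            k += 1
        total += k
    return total / rounds

rng = random.Random(0)
for alpha, b in [(0.5, 3), (0.8, 2), (0.95, 1)]:
    e = expected_pilot(alpha, b)
    s = simulated_pilot(alpha, b, rng)
    assert abs(s - e) / e < 0.05  # simulation matches the closed form
```

For instance, a bucket of size 3 at load factor $0.5$ needs $2^3 - 1 = 7$ failed trials on average, while a singleton bucket at load factor $0.95$ needs $19$: the small exponent $|B_i|$ keeps pilots modest even under high load.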
The second important thing to note is that, while the number of trials tends to increase as buckets are processed, due to the growing load factor, it can still be very small, as happens, for example, at load factors of 65\%, 80\%, or 95\%. This is a consequence of the small exponent $|B_i|$ in the formula. Directly from Formula (\ref{eq:expectation}), we obtain the following result which relates the performance of the search to the parameter $c$. \begin{thm}\label{thm:expected_runtime} The expected time of the search, for $n$ keys and a parameter $c > \log_2 e$, is $\mathcal{O}\big(n^{ 1 + \Theta(1/c)}\big).$ \end{thm} (Proof omitted due to space constraints.) \begin{table}[t] \centering \mycaption{Empirical entropy of $P$ for $n=10^6$ keys, by varying $c$. \label{tab:entropy}} \scalebox{0.9}{\input{tables/entropy.tex}} \end{table} \begin{table}[t] \centering \mycaption{Empirical entropy of the \var{front} and \var{back} parts of $P$ for $n=10^6$ keys, by varying $c$. \label{tab:entropy_front_back}} \scalebox{0.9}{\input{tables/entropy_front_back.tex}} \end{table}
\subsection{Front-Back Compression}\label{sec:front_back}
The net effect of Formula (\ref{eq:expectation}) is that $P$ has a low entropy. We report the 0-th order empirical entropy of $P$ in Table~\ref{tab:entropy}, for different values of $c$. Specifically, we compare the entropy of $P$ when it stores the pilots determined by our approach and when it stores the displacements following the original FCH procedure. As expected, the entropy of the pilots is smaller than that of the displacements, and actually becomes \emph{much} smaller for increasing $c$. This clearly suggests that the output of the search can be compressed very well. We now take one step further. We already noticed that the average number of trials is particularly small for the first processed buckets because of the low load factor. This is graphically evident from Figure~\ref{fig:trials} by looking at the first, say, 30\% of the buckets.
In other words, the first processed buckets have small pilots. Now we argue that such buckets are those corresponding to the first $p_2=0.3m$ entries of $P$ and hold $p_1=0.6n$ keys. This is a direct consequence of the skewed distribution of keys into buckets and the order of processing the buckets. Therefore, the \var{front} part of $P$, $P[0..p_2-1]$, has a lower entropy compared to its \var{back} part, $P[p_2..m-1]$. Table~\ref{tab:entropy_front_back} shows the entropy of the arrays \var{front} and \var{back} by varying $c$ and for $n=10^6$. As evident, the entropy of the \var{front} part is much smaller than that of the \var{back} part (by more than $2\times$ on average). This has two immediate consequences. The first, now obvious, is that the array \var{front} is more compressible than \var{back}, and this saves space compared to the case where $P$ is not partitioned. Note that this partitioning strategy is guaranteed to improve compression by virtue of the skewed distribution, and it is different from partitioning $P$ arbitrarily. The second, even more important, is that we can use two different encoding schemes for \var{front} and \var{back} to help maintain good lookup performance and save space. In fact, since \var{front} holds 60\% of the keys and its size is 30\% of $|P|$, it is convenient to use a more time-efficient encoding for \var{front}, which is also more compressible, and a more space-efficient encoding for \var{back}. With $P$ rendered as the two partitions \var{front} and \var{back}, evaluating $f(x)$ is still simple.
\SetArgSty{textnormal} \begin{algorithm} \SetKwBlock{Begin}{}{} \Begin({$f$}\text{(}$x$\text{)} :) { $i = \func{bucket}(x)$ \\ \code{if} $i < p_2$ \code{then} $k_i = \var{front}[i]$ \\ \code{else} $k_i = \var{back}[i-p_2]$ \\ \code{return} $\func{position}(x,k_i)$ } \end{algorithm}
We then expect the evaluation time of $f(x)$ to be approximately $(p_1/n)t_f + (1-p_1/n)t_b$ if $x$ is chosen at random from $S$, with $t_f$ and $t_b$ being the access time of \var{front} and \var{back} respectively. In conclusion, because of the values assigned to $p_1$ and $p_2$, this partitioning strategy allows us to achieve a trade-off between space and lookup time depending on what pair of compressors is used to encode \var{front} and \var{back}. We will explore this trade-off in Section~\ref{sec:experiments}.
\subsection{Encoding}\label{sec:encoding}
Now that we have achieved the first of the two goals mentioned at the beginning of Section~\ref{sec:main}, i.e., designing an algorithm that guarantees a compressible output, we turn our attention to the second one: devise an encoding scheme that not only takes advantage of the low entropy of $P$ but also maintains good lookup performance. For simplicity of exposition, let us now focus on the case where $P$ is not partitioned into its \var{front} and \var{back} parts (the generalization is straightforward). Since $P$ has a low entropy, we argue that a \emph{dictionary}-based encoding is a good match for our purpose: we collect the \emph{distinct} values of $P$ into an array $D$, the dictionary, and represent $P$ as an array of references to $D$'s entries. Let $r$ be the size of $D$. As $r$ is smaller than or equal to the number of buckets $m$, which in turn is smaller than the number of keys $n$, we can represent each entry of $P$ using $\lceil\log_2 r\rceil$ bits instead of the $(\lceil\log_2 n\rceil+1)$ bits used by FCH.
The total space usage is given by the encoded $P$, taking $m\lceil\log_2 r\rceil$ bits, plus the space of the dictionary $D$. The latter cost is small compared to that of $P$. In particular, the larger $c$ is, the smaller this cost becomes. The encoding time is linear in the size of $P$, i.e., $\Theta(m)$, thus it takes a small fraction of the total construction. In particular, all encoding methods we consider in Section~\ref{sec:experiments} take linear time. Using the dictionary, the algorithm for $f(x)$ is as follows. \SetArgSty{textnormal} \begin{algorithm} \SetKwBlock{Begin}{}{} \Begin({$f$}\text{(}$x$\text{)} :) { $i = \func{bucket}(x)$ \\ $k_i = D[P[i]]$ \\ \code{return} $\func{position}(x,k_i)$ } \end{algorithm}
Compared to FCH, note that we are performing two memory accesses (for $P$ and $D$) instead of one. However, since $D$ is small, its access is likely to be directed to the cache memory of the target machine, e.g., L2 or even L1. Therefore, the indirection only slightly affects lookup performance, as we are going to show in Section~\ref{sec:experiments}. Generalizing the approach to the \var{front} and \var{back} parts of $P$ is straightforward: each part has its own dictionary and each access to those arrays is executed as shown in step 3 of the pseudocode. Other, more sophisticated options are possible for compressing $P$, e.g., Elias-Fano~\cite{Elias74,Fano71}, or the Simple Dense Coding (SDC) by~\citet*{fredriksson2007simple}. As we are going to see in Section~\ref{sec:experiments}, using these mechanisms is expected to provide superior compression effectiveness at the price of a slower lookup time. So far we have explored the effects of the search strategy on compression effectiveness because, as introduced at the beginning of Section~\ref{sec:main}, a compressible output enables the use of a larger $c$ value that speeds up the construction.
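As a minimal sketch of the dictionary-based encoding (with illustrative names; a real implementation would bit-pack the references into $\lceil\log_2 r\rceil$-bit codewords rather than use a plain list):

```python
import math

def dict_encode(P):
    # collect the distinct pilot values into a dictionary D and
    # rewrite P as an array of references into D
    D = sorted(set(P))
    index = {v: j for j, v in enumerate(D)}
    return D, [index[v] for v in P]

def pilot(D, refs, i):
    # two array accesses instead of FCH's single one: k_i = D[refs[i]]
    return D[refs[i]]

P = [0, 3, 0, 7, 3, 0, 0, 1]          # a low-entropy pilots table
D, refs = dict_encode(P)
assert D == [0, 1, 3, 7]
assert all(pilot(D, refs, i) == P[i] for i in range(len(P)))

# space estimate for the references: m * ceil(log2 r) bits in total
bits_per_ref = math.ceil(math.log2(len(D)))   # 2 bits here
```

In this toy example each reference needs 2 bits, against the $\lceil\log_2 n\rceil + 1$ bits per entry that FCH would spend; the gap widens as the search produces many repeated small pilots.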
Let us recall the concrete example made at the end of Section~\ref{sec:FCH}: for $n=10^8$ 64-bit keys and $c=3$, FCH finds a MPHF in 1 hour and 10 minutes (refer to Section~\ref{sec:experiments} for a description of our experimental setup). {\method{PTHash}} with $c=5.6$ and the introduced front-back encoding finds a MPHF consuming the same amount of space, i.e., 3.0 {{bits/key}}. However, it does so in 70 seconds, i.e., $60\times$ faster than FCH. We will present larger and more detailed experiments in Section~\ref{sec:experiments}.
\subsection{Limiting the Load Factor}\label{sec:load_factor}
Now, we take a deeper look at searching time. According to Formula (\ref{eq:expectation}), the searching time is skewed towards the \emph{end} of the search, because the expected number of trials rapidly grows as $\alpha \rightarrow 1$. This phenomenon is even more evident when using large values of $c$ because a large $c$ lowers the expected number of trials for most of the buckets by lowering the exponent $|B_i|$ in Formula (\ref{eq:expectation}). In particular, for larger $c$, a small fraction of time is spent on most of the keys, and a large fraction is spent on the last buckets containing only a few keys --- the heavy ``tail'' of the distribution. Figure~\ref{fig:time_vs_alpha} shows an example of such distribution for $n=10^9$ keys and $c=9$. Note the high skewness towards the end, after 85\% of the processed buckets: more than 40\% of the total search time is spent for only 1.4\% of the keys falling into the last 5\% of the non-empty buckets. To avoid the burden of the heavy tail, we search for $f$ in a larger space, say $[n^{\prime}]$ with $n^{\prime}=n/\alpha$, for a chosen maximum load factor $0 < \alpha \leq 1$. For example, if $\alpha=0.99$, then 1\% extra space is used to search for $f$.
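The whole scheme, including the hole-filling \var{free} array that this section describes next, can be sketched as follows. Again, this is a simplified model with illustrative names (\texttt{build\_pthash}) and a generic keyed hash, not the actual C++ implementation:

```python
import hashlib
import math
import random

def h(x: int, seed: int) -> int:
    # generic keyed hash, standing in for the implementation's hash function
    data = x.to_bytes(8, "little") + seed.to_bytes(8, "little")
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "little")

def build_pthash(keys, c=5.0, alpha=0.95, s=7):
    n = len(keys)
    n_prime = math.ceil(n / alpha)          # enlarged search space [n']
    m = math.ceil(c * n / math.log2(n))
    p1, p2 = int(0.6 * n), int(0.3 * m)

    def bucket_of(x):
        hv = h(x, s)
        return hv % p2 if hv % n < p1 else p2 + hv % (m - p2)

    buckets = [[] for _ in range(m)]
    for x in keys:
        buckets[bucket_of(x)].append(x)

    taken = [False] * n_prime
    P = [0] * m                             # the pilots table
    for i in sorted(range(m), key=lambda j: -len(buckets[j])):
        hashes = [h(x, s) for x in buckets[i]]   # precomputed once per bucket
        k = 0
        while True:                         # try pilots k = 0, 1, 2, ...
            hk = h(k, s)
            pos = [(v ^ hk) % n_prime for v in hashes]
            if len(set(pos)) == len(pos) and not any(taken[p] for p in pos):
                break
            k += 1
        for p in pos:
            taken[p] = True
        P[i] = k

    # fill the holes of [n] with the keys mapped to positions >= n
    free = [0] * (n_prime - n)
    holes = [p for p in range(n) if not taken[p]]
    j = 0
    for p in range(n, n_prime):
        if taken[p]:
            free[p - n] = holes[j]
            j += 1

    def f(x):
        p = (h(x, s) ^ h(P[bucket_of(x)], s)) % n_prime
        return p if p < n else free[p - n]
    return f

# demo: 200 distinct keys are mapped bijectively onto [0..199]
keys = random.Random(7).sample(range(1 << 40), 200)
f = build_pthash(keys)
assert sorted(f(x) for x in keys) == list(range(200))
```

With $\alpha < 1$, the final branch of \texttt{f} is taken with probability $\approx 1-\alpha$ for random queries, and the array \texttt{free} maps those out-of-range positions back into $[n]$ with a single access.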
Limiting the maximum achievable load factor clearly lowers searching time, and also improves compression effectiveness, since $\mathbb{E}[k_i]$ \emph{decreases} as per Formula (\ref{eq:expectation}). In fact, considering a generic $0 < \alpha \leq 1$, we obtain the following more general result, whose proof is omitted due to space constraints. \begin{figure}\label{fig:time_vs_alpha} \end{figure} \textcolor{black}{ \begin{thm}\label{thm:expected_runtime_general} The expected time of the search, for $n$ keys and parameters $c > \log_2 e$ and $0 < \alpha \leq 1$, is $\mathcal{O}\big(n^{ 1 + \Theta(\alpha/c)}\big).$ \end{thm} } Now, the issue with searching in a space of size $n^{\prime}=n/\alpha > n$ is that the output of $f$ must be guaranteed to be minimal, i.e., a value in $[n]$, not in $[n^{\prime}]$. One can view the strategy as $f$ leaving some ``holes'' in its codomain $[n]$ that must then be filled in some appropriate manner. We proceed as follows. Suppose $L$ is the list of holes \emph{up to} position $n-1$. There are $|L|$ keys that are actually mapped to positions $p_i \geq n$, which can fill such holes. Therefore we materialize an array $\var{free}[0..n^\prime - n - 1]$, where $$\var{free}[p_i - n] = L[i], \text{ for each } i=0,\ldots,|L|-1.$$ Note that the space for the array \var{free} is that of a sorted integer sequence of size $n^\prime - n$, whose maximum value is less than $n$. Thus it takes little space, especially if compressed with Elias-Fano, i.e., $(n^{\prime}-n)(\lceil\log_2\frac{n}{n^{\prime}-n}\rceil +2+o(1))$ bits. Let us discuss an explanatory example. Suppose $n=9$ and $n^{\prime}=14$. There are $n^{\prime}-n = 14-9 = 5$ holes, say in positions $[0,2,8,9,12]$. $L$ is $[0,2,8]$ (holes below position $n=9$). Therefore there will be $|L|=3$ keys mapped out of the range $[0..8]$; they must be those in positions $[10,11,13]$, since positions 9 and 12 are holes.
Then we assign $\var{free}[10-9]$ = $L[0]$, $\var{free}[11-9]$ = $L[1]$, and $\var{free}[13-9]$ = $L[2]$, yielding a final $\var{free}[0..4]$ = $[\ast, 0, 2, \ast, 8]$, where `$\ast$' indicates an unassigned value. With the array \var{free}, the algorithm for $f(x)$ is as follows. \SetArgSty{textnormal} \begin{algorithm} \SetKwBlock{Begin}{}{} \Begin({$f$}\text{(}$x$\text{)} :) { $i = \func{bucket}(x)$ \\ $k_i = P[i]$ \\ $p = (h(x,s) \oplus h(k_i,s)) \mymod{n^{\prime}}$ \\ \code{if} $p < n$ \code{then} \code{return} $p$ \\ \code{else} \code{return} $\var{free}[p-n]$\\ } \end{algorithm}
The second branch of the conditional (\code{else}) will be taken with probability $\approx(1-\alpha)$ for random queries. If $\alpha$ is chosen close to 1.0, e.g., 0.99, the branch will be highly predictable, hence barely affecting lookup performance. Our approach guarantees that each key out of the codomain $[n]$ is mapped back into $[n]$ with a \emph{single} access to an array, \var{free}. As we are going to show in Section~\ref{sec:experiments}, this is considerably faster than the folklore strategy of filling the array $\var{free}$ with all available free positions in $[n^{\prime}]$. In fact, in that case, a \emph{successor} query must be issued over $\var{free}$ for every position $p$ returned in step 4 of the pseudocode. Lastly, the algorithm is directly applicable to any compressed representation of $P$ that supports random access, e.g., the front-back scheme with dictionary-based encoding we have described in the previous sections.
\section{Evaluation}\label{sec:experiments}
In this section we present a comprehensive experimental evaluation of {\method{PTHash}}. All experiments were carried out on a server machine equipped with Intel i9-9900K cores (@3.60 GHz), 64 GB of DDR3 RAM (@2.66 GHz), and running Linux 5 (64 bits). Each core has two private levels of cache memory: 32 KiB L1 cache (one for instructions and one for data); 256 KiB for L2 cache.
A shared L3 cache spans 16,384 KiB. Both construction and lookup algorithms were run on a single core of the processor, with the data residing entirely in internal memory. The implementation of {\method{PTHash}} is written in C++ and available at \url{https://github.com/jermp/pthash}. The code was compiled with \textsf{gcc} 9.2.1 with optimization flags \texttt{-O3} and \texttt{-march=native}. Lookup time was measured by looking up every single key in the input, and taking the average time over 5 runs. For construction time, we report the average over 3 runs. We build MPHFs using random integers as input, which is common practice for benchmarking hashing-based algorithms~\cite{esposito2020recsplit,limasset2017fast,fan2014cuckoo,graf2020xor,muller2014retrieval}, given that the nature of the data is completely irrelevant for the space of the data structures. In our case we generated $n$ 64-bit integers uniformly at random in the interval $[0,2^{64})$. We will also evaluate {\method{PTHash}} on real-world string collections to further confirm our results. \begin{table}[t] \centering \mycaption{Construction time, space, and lookup time of {\method{PTHash}} on $n=10^9$ random 64-bit keys, for a range of encoding methods and by varying $c$. \label{tab:encodings}} \subfloat[\boldsymbol{$\alpha=1.00$}]{ \scalebox{0.9}{\input{tables/compression.10e9.mphf_xor.tex}} \label{tab:a1.0} } \subfloat[\boldsymbol{$\alpha=0.99$}]{ \scalebox{0.9}{\input{tables/compression.10e9.a099.mphf_xor.tex}} \label{tab:a0.99} } \end{table} \begin{figure}\label{fig:construction_vs_lookup} \end{figure}
\subsection{Tuning}\label{sec:tuning}
In this section, we are interested in tuning {\method{PTHash}}; we quantify the impact of (i) different encoding schemes to represent the MPHF data structure, (ii) front-back compression, and (iii) varying the load factor.
\parag{Compression Effectiveness} In Table~\ref{tab:a1.0} we report the performance of {\method{PTHash}} with $\alpha=1.0$ in terms of construction time, lookup time, and {{bits/key}} rate, for a range of encoding schemes. We explain the nomenclature adopted to indicate such encodings. ``C'' stands for \emph{compact} and refers to encoding each value in $P$ with $\lceil\log_2(\max(P)+1)\rceil$ bits (note that FCH uses this technique, assuming that $\max(P) = n$). ``D'' indicates the \emph{dictionary}-based method described in Section~\ref{sec:encoding}. ``EF'' stands for Elias-Fano~\cite{Elias74,Fano71}; ``SDC'' means Simple Dense Coding~\cite{fredriksson2007simple}. Methods indicated with ``X-Y'' refer to the front-back compression strategy from Section~\ref{sec:front_back}, where method ``X'' is used for \var{front} and ``Y'' is used for \var{back}. Table~\ref{tab:a1.0} is presented to highlight the spectrum of achievable trade-offs: from top to bottom, we improve construction time by increasing $c$; from left to right we improve space effectiveness but degrade lookup efficiency. We recall that construction time includes the time of mapping, sorting, searching, and encoding. The most time-consuming step is the search, especially for small $c$. For the experiment reported in Table~\ref{tab:encodings}, mapping+sorting took 105 seconds, whereas encoding time is essentially the same for all the different methods tested. It ranges from 10 to 15 seconds as $c$ varies from 6 to 11, since $c$ affects the number of buckets used. All the rest of the time is spent during the searching step. Furthermore, we make the following observations. \noindent$\bullet$ Front-back compression pays off for the reasons we explained in Section~\ref{sec:front_back}, as it always improves space effectiveness, i.e., C-C and D-D are more compact than C and D respectively, while preserving their relative lookup efficiency.
\noindent$\bullet$ D and D-D are always more compact than C and C-C, while only slightly affecting lookup time (+12 {{ns/key}} on average for $n=10^9$, but smaller for smaller values of $n$), hence confirming that dictionary-based encoding is a good match as explained in Section~\ref{sec:encoding}. \noindent$\bullet$ EF and SDC are better suited for space effectiveness but also $1.5-3\times$ slower on lookup compared to D-D. \noindent$\bullet$ The configuration D-EF stands in a trade-off position between D-D and EF, as the use of EF on the \var{back} part improves space but slows down lookup.
\parag{Varying the Load Factor} We explained in Section~\ref{sec:load_factor} that varying $\alpha$ trades off between construction time and lookup efficiency. Figure~\ref{fig:construction_vs_lookup} shows a pictorial example of this trade-off by varying $\alpha$ from 0.80 to 1.00 with step 0.01. \noindent$\bullet$ Construction time decreases significantly, especially in the range $\alpha \in [0.9,1.0]$. In particular, note the sharp decrease when passing from $\alpha=1.0$ to $\alpha=0.99$ as a consequence of avoiding the heavy ``tail'' already observed in Figure~\ref{fig:time_vs_alpha}. \noindent$\bullet$ Lookup time, instead, increases as $\alpha$ decreases, but at a much slower pace thanks to the fast re-ranking of keys we described in Section~\ref{sec:load_factor} to guarantee that the output of the function is minimal. In fact, while we are able to obtain a $2\times$ faster construction when passing from, say, $\alpha=1.0$ to $\alpha=0.94$, we only increase lookup time by 6 {{ns/key}}. At the other end of the spectrum visible in Figure~\ref{fig:construction_vs_lookup}, if we use $\alpha=0.80$ we obtain a $3\times$ faster construction but also pay $\approx$22 {{ns/key}} more on lookup. \noindent$\bullet$ The other relevant advantage is that using a lower load factor does \emph{not} consume more space but actually even less, for the reasons explained in Section~\ref{sec:load_factor}.
In Figure~\ref{fig:construction_vs_lookup} we show the total space of the MPHF and that taken by the pilots table $P$ alone. When $\alpha < 1.0$, the total space is given by the space of $P$ plus that of the \var{free} array that we compress with Elias-Fano as explained in Section~\ref{sec:load_factor}. \begin{table}[t] \centering \mycaption{The performance of FCH for $n=10^8$ random 64-bit keys, and some {{bits/key}} rates. For comparison, also the performance of {\method{PTHash}} with $\alpha=0.99$ and encoding C, D-D, and D-EF, is reported.} \scalebox{0.9}{\input{tables/perf_FCH.tex}} \label{tab:search_FCH} \end{table} In Table~\ref{tab:a0.99} we report the result of the same experiment in Table~\ref{tab:a1.0} but with load factor $\alpha = 0.99$. The tables are shown next to each other to better highlight the comparison. As already noted in Figure~\ref{fig:construction_vs_lookup}, using just 1\% extra space for the search already introduces important advantages that are observed for \emph{any} encoding method and \emph{any} value of $c$: (i) 25--35\% reduced construction time; (ii) reduced space usage (with noteworthy improvements for the encodings C and D); (iii) preserved lookup efficiency.
\parag{Speeding Up the Search} As a last experiment in this section, Table~\ref{tab:search_FCH} shows the speed-up factors achieved by some {\method{PTHash}} configurations over the construction time of FCH, for the \emph{same} final {{bits/key}} rates. Even using the simple C encoding for {\method{PTHash}} yields $7-69\times$ faster construction with equal (or better) lookup efficiency. Moving to the right-hand side of the table brings further advantages in construction time at the price of a penalty in lookup.
\begin{table}[t] \centering \mycaption{Construction time, space, and lookup time for a range of methods on 64-bit random keys.} \scalebox{0.9}{\input{tables/overall.tex}} \label{tab:overall} \end{table} \subsection{Overall Comparison}\label{sec:overall_comparison} In this section we compare {\method{PTHash}} with the state-of-the-art techniques reviewed in Section~\ref{sec:related_work}. \noindent$\bullet$ FCH~\cite{fox1992faster} --- It is the only algorithm that we re-implemented (in C++) faithfully to the original paper\footnote{The popular CMPH library contains an implementation of FCH that we could not use because it is orders of magnitude slower than our implementation. }. We tested the algorithm with $c=3,\ldots,7$ so as to almost cover the spectrum of {{bits/key}} rates achieved by the other methods. \noindent$\bullet$ CHD~\cite{belazzougui2009hash} --- We tested the method with parameter $\lambda = 4, 5, 6$. We were unable to use $\lambda=7$ for more than a few thousand keys, as already noted in prior work~\cite{esposito2020recsplit}. \noindent$\bullet$ EMPHF~\cite{belazzougui2014cache} --- It is an efficient implementation of the method based on peeling random hypergraphs. Although the library can also target external memory, we ran the algorithm in internal memory. \noindent$\bullet$ GOV~\cite{genuzio2016fast,genuzio2020fast} --- It is a method based on solving random linear systems via the Gaussian elimination technique. \noindent$\bullet$ BBHash~\cite{limasset2017fast} --- It is tested with parameter $\gamma = 1,2,5$ as suggested in the original paper. The construction can be multi-threaded, but we used a single thread so as to ensure a fair comparison. \noindent$\bullet$ RecSplit~\cite{esposito2020recsplit} --- We tested the method using the same configurations used by the authors in their paper, so as to offer different trade-offs between construction time and space effectiveness.
For all the above methods we used the source code provided by the original authors (see the References for the corresponding GitHub repositories) and set up a benchmark available at \url{https://github.com/roberto-trani/mphf_benchmark}. \noindent All implementations are in C/C++ except for GOV, whose construction is only available in Java. The results in Table~\ref{tab:overall} are strongly consistent with those reported in recent previous work~\cite{limasset2017fast,esposito2020recsplit}. Out of the many possible configurations for {\method{PTHash}}, we isolate the following four. \noindent(i) \emph{Optimizing lookup time} --- C-C encoding, $\alpha=0.99$, $c=7$. {\method{PTHash}} in this configuration achieves the same lookup time as FCH but in much smaller compressed space. It is similar in space to BBHash with $\gamma=1,2$ but $3-4.5\times$ faster at lookup. Compared to other, more space-efficient methods, {\method{PTHash}} is $0.5-1$ bit/key larger but also $5.4-11\times$ faster at lookup. \noindent(ii) \emph{Optimizing construction time} --- D-D encoding, $\alpha=0.88$, $c=11$. This configuration shows that {\method{PTHash}} is competitive in construction time with most of the other techniques, at the price of a larger space consumption. In any case, lookup performance is significantly better, by at least a factor of $2\times$. \noindent(iii) \emph{Optimizing space effectiveness} --- EF encoding, $\alpha=0.99$, $c=6$. In this configuration {\method{PTHash}} achieves a space effectiveness comparable with that of the most succinct methods, i.e., CHD and RecSplit, while still being $2-4\times$ faster at lookup than those methods. \noindent(iv) \emph{Optimizing the general trade-off} --- D-D encoding, $\alpha=0.94$, $c=7$. This configuration strikes a balance between the other three configurations, combining good space effectiveness and construction time with very fast lookup evaluation.
The takeaway message emerging from the comparison in Table~\ref{tab:overall} is that \emph{there is a configuration of {\method{PTHash}} that takes space similar to another method but provides remarkably better lookup performance, with feasible or even better construction speed.} \subsection{Performance on Variable-Length Keys}\label{sec:strings} In this section, we evaluate {\method{PTHash}} on real-world datasets of variable-length keys. We used natural-language $q$-grams as they are in widespread use in IR, NLP, and machine-learning applications; URLs are interesting as they represent a sort of ``worst-case'' input given their very long average length. More specifically, we used the 1-grams and 2-grams from the English GoogleBook (version 2) corpus\footnote{\url{http://storage.googleapis.com/books/ngrams/books/datasetsv2.html}}, which number \num{24357349} and \num{665752080}, respectively. For URLs, we used those of the $\approx$50 million Web pages in the ClueWeb09 (Category B) dataset\footnote{\url{https://lemurproject.org/clueweb09}}, and those collected in 2005 by the UbiCrawler~\cite{boldi2004ubicrawler} from the .uk domain\footnote{\url{http://data.law.di.unimi.it/webdata/uk-2005/uk-2005.urls.gz}}, for a total of \num{49937704} and \num{39459925} URLs, respectively. While the space of the MPHF data structure is independent of the nature of the data, we chose datasets of increasing average key size to highlight the difference in construction and lookup time compared to fixed-size 64-bit keys. Recall that {\method{PTHash}} hashes each input key only once during construction (to distribute keys into buckets), and once per lookup. Thus, we expect the timings to increase \emph{by a constant} amount per key, i.e., by the difference between the time to hash a long key and a 64-bit key. \begin{table}[t] \centering \mycaption{Construction time, space, and lookup time of {\method{PTHash}} on some string collections. 
For comparison, also the performance on 64-bit (fixed-length) random keys is reported. The {\method{PTHash}} configuration used is (iv) --- D-D, $\alpha=0.94$, $c=7$.} \scalebox{0.9}{\input{tables/perf_variable_length_keys.tex}} \label{tab:perf_variable_length_keys} \end{table} The performance of {\method{PTHash}} on such variable-length keys is reported in Table~\ref{tab:perf_variable_length_keys}, under configuration (iv) --- D-D, $\alpha=0.94$, $c=7$. As expected, space effectiveness does not change between real-world datasets and random keys. Construction time does not change either, because of the difference in scale between a process that takes seconds and a hash calculation taking nanoseconds: a constant amount of nanoseconds per key during \emph{only} the mapping step has no visible impact. Lookup time, instead, grows with the length of the keys, showing that the hash calculation accounts for most of the lookup time of {\method{PTHash}}. More specifically, it increases by $6-9$ {{ns/key}} on the $q$-gram datasets, for $1.3-2\times$ longer keys; and by $25-26$ {{ns/key}} on URLs, for $7-9\times$ longer keys. These absolute increments show the impact of hashing longer keys in a way that is independent of the encoding scheme and the size of the dataset. Concerning the other methods tested in Section~\ref{sec:overall_comparison}, a similar increase was observed for those that hash the key once per lookup (like {\method{PTHash}}), and a much larger one for those that hash the key several times per lookup, e.g., BBHash. \section{Conclusions}\label{sec:conclusions} We presented {\method{PTHash}}, an algorithm that builds minimal perfect hash functions in compact space while retaining excellent lookup performance. The result was achieved via a careful revisitation of the framework introduced by~\citet*{fox1992faster} (FCH) in 1992. 
We conducted a comprehensive experimental evaluation and showed that {\method{PTHash}} takes essentially the same space as previous state-of-the-art algorithms but provides $2-4\times$ better lookup time. While space effectiveness remains a very important aspect, efficient lookup time is even more important for the minimal perfect hashing problem and its applications. Our C++ implementation is publicly available to encourage the use of {\method{PTHash}} and spur further research on the problem. Future work will target parallel and external-memory construction, e.g., by splitting the input into chunks and building an independent MPHF on each chunk~\cite{botelho2013practical}, and will devise even more succinct encodings. It would also be interesting to generalize the algorithm to build other types of functions, such as perfect (non-minimal) and $k$-perfect hash functions. \begin{acks} This work was partially supported by the projects: MobiDataLab (EU H2020 RIA, grant agreement N\textsuperscript{\b{o}}101006879) and OK-INSAID (MIUR-PON 2018, grant agreement N\textsuperscript{\b{o}}ARS01\_00917). \end{acks} \balance \renewcommand{3.0pt}{3.0pt} \begin{table*}[t] \centering \mycaption{The competitors' performance on several datasets of variable-length keys.} \scalebox{0.9}{\input{tables/perf_competitors_variable_length_keys.tex}} \label{tab:perf_competitors_variable_length_keys} \end{table*} \end{document}
\begin{document} \title{{\bf Harnack Inequality for Semilinear SPDE with Multiplicative Noise}} \begin{abstract} By a new approximation method, dimension-free Harnack inequalities are established for a class of semilinear stochastic differential equations in Hilbert space with multiplicative noise. These inequalities are applied to study the strong Feller property of the semigroup and some properties of the invariant measure. \end{abstract}\noindent AMS subject Classification (2000):\ 60J60. \noindent Keywords: Harnack inequality, log-Harnack inequality, multiplicative noise, stochastic partial differential equation. \vskip 2cm \section{Introduction and main results} The main aim of this paper is to prove a Harnack inequality for semilinear stochastic equations on Hilbert spaces with multiplicative noise. This type of inequality, which was proved for the first time in \cite{Wang97}, has become a powerful tool in infinite dimensional stochastic analysis. Many papers prove this type of inequality for SPDEs with additive noise; see \cite{DPRWang2009,Liu09,LiuW08,Ouyang2009a,Ouyang2009b,Ouyang2011a,OuyangRW2012,Wang2007,Wang2011,WangWX2011,WangX2011} and references therein. In \cite{RoWang2010}, the log-Harnack inequality for semilinear SPDEs with non-additive noise was proved for the first time, but, due to the gradient estimate method used there, only deterministic and time-independent coefficients were treated. A new method to deal with the case of general coefficients for SDEs was introduced in \cite{Wang2011}. This method has been generalized to functional stochastic differential equations, see \cite{WangY2011}. In this paper, we generalize this method to the case of semilinear SPDEs. The finite-dimensional approximation method has some disadvantages here, see Remark \ref{remark2}; therefore we use the coupling argument again as in \cite{Wang2011}, with a slight modification. 
Since it is not so clear how to solve the analogue of the equation for the process $Y_{t}$ (see equation (2.3) in \cite{Wang2011}) in infinite dimensions, we turn to a new process which plays the role of the difference of the coupling processes; we obtain it as a local strong solution of an SPDE and solve the equation by truncation in the same spirit as \cite{Brzez97}. By this process and the Girsanov theorem, we get a coupling in a new probability space. On the other hand, we get the Harnack inequality by another type of approximation: we perturb the linear term by a suitable linear operator which is closely related to the diffusion term. This differs from finite-dimensional approximation and Yosida approximation; by this perturbation we get a stronger linear term, which makes it easier to prove the inequality for the perturbed equation. Let $H$ be a separable Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, and consider the following stochastic differential equation on $H$: \begin{equation}\label{equ1} \mathrm{d} x_{t} = -Ax_{t}\mathrm{d} t + F(t,x_{t})\mathrm{d} t+B(t,x_{t})\mathrm{d} W_{t}, \end{equation} where $W=W(t)$, $t\geq0$, is a cylindrical Brownian motion on $H$ with covariance operator $I$ on the filtered probability space $(\Omega, \mathcal{F}, \mathbb{P},(\mathcal{F}_{t})_{t\geq0})$, and the coefficients satisfy the following hypotheses: \begin{enumerate}[({H}1)] \item $A$ is a nonnegative self-adjoint operator with discrete spectrum: \begin{equation} 0\leq\lambda_{1} \leq \lambda_{2} \leq \cdots \leq \lambda_{n} \rightarrow \infty, \end{equation} where $\{\lambda_{n},n\in\mathbb{N}\}$ are the eigenvalues of $A$ and $\{e_{n}\}_{n=1}^{+\infty}$ are the corresponding eigenvectors; the compact $C_{0}$ semigroup generated by $-A$ is denoted by $S(t)$.\label{itemH1} \item $F:[0,\infty)\times\Omega\times H \rightarrow H$ and $B: [0,\infty)\times\Omega\times H \rightarrow L(H)$ are $\mathscr{P}_{\infty}\times \mathscr{B}(H)$ 
measurable, where $\mathscr{P}_{\infty}$ is the predictable $\sigma$-algebra on $[0,\infty)\times\Omega$ and $L(H)$ is the space of all bounded operators on $H$, and there exists an increasing function $K_{1}:[0,+\infty)\rightarrow [0,\infty)$ such that \begin{equation} ||F(t,x)-F(t,y)||+||B(t,x)-B(t,y)||_{HS}\leq K_{1}(t)||x-y||, \end{equation} for all $t\geq 0$, $x \in H$, $\mathbb{P}$-a.s., where $||\cdot||_{HS}$ denotes the Hilbert-Schmidt norm; and there exists $r>1$ such that for all $t>0$, \begin{equation} \mathbb{E}\left(\int_{0}^{t}\left|\left|F(s,0)\right|\right|\mathrm{d} s\right)^{r} < \infty, \end{equation} \begin{equation} \sup_{u\in [0,t]}\int_{0}^{u}\left(\mathbb{E}\left|\left|S(u-s)B(s,0)\right|\right|_{HS}^{2r}\right)^{\frac{1}{r}}\mathrm{d} s < \infty, \end{equation} \label{itemH2} \item There exist a decreasing function $\rho:[0,\infty)\rightarrow(0,\infty)$ and a bounded self-adjoint operator $B_{0}$ such that there exists $\{b_{n}>0\,|\,n\in\mathbb{N}\}$ with $B_{0}e_{n}=b_{n}e_{n}$ and \begin{equation} B(t,x)B(t,x)^{*} \geq \rho(t)^{2}B_{0}^{2},\ \forall x\in H, t\geq 0,\ \mathbb{P}\mbox{-a.s.}, \end{equation} \label{itemH3} \item $\textrm{Ran}(B(t,x)-B(t,y))\subset \mathscr{D}(B_{0}^{-1})$ holds for all $(t,x) \in [0,\infty)\times H$, $\mathbb{P}$-a.s., and there exists an increasing function $K_{2}:[0,\infty)\rightarrow\mathbb{R}$ such that \begin{align*} 2\langle F(t,x)-F(t,y),B_{0}^{-2}(x-y)\rangle+&||B_{0}^{-1}(B(t,x)-B(t,y))||_{HS}^{2}\\ &\leq K_{2}(t)||B_{0}^{-1}(x-y)||^{2} \end{align*} holds for all $x,y \in \mathscr{D}(B_{0}^{-2})$ and all $t\geq 0$, $\mathbb{P}$-a.s.,\label{itemH4} \item There exists an increasing function $K_{3}:[0,\infty)\rightarrow (0,\infty)$ such that $||(B(t,x)^{*}-B(t,y)^{*})B_{0}^{-2}(x-y)||\leq K_{3}(t)||x-y||_{H_{0}}$ holds for all $x,y\in H$ with $x-y\in\mathscr{D}(B_{0}^{-1})$ and all $t\geq 0$, almost 
surely.\label{itemH5} \end{enumerate} \begin{remark}\label{remark1} \begin{enumerate}[(1)] \item Under (H\ref{itemH1}), we can replace $\mathscr{D}(B_{0}^{-2})$ in (H\ref{itemH4}) by $\bigcup_{n} H_{n}$, where $H_{n}= \mathrm{span}\{e_{1},\cdots,e_{n}\}$. \item (H\ref{itemH3}) is equivalent to the statement that $\mathrm{Ran}(B(t,x)) \supset \mathrm{Ran} B_{0}$ and $||B(t,x)^{-1}z||\leq \rho(t)^{-1}||B_{0}^{-1}z||$ for all $z \in \mathscr{D}(B_{0}^{-1})$, $t\geq 0$, $\mathbb{P}$\mbox{-a.s.} \item (H\ref{itemH5}) will be used as an additional condition to obtain the Harnack inequality, and by (H\ref{itemH4}), $B_{0}^{-1}(B(t,x)-B(t,y))$ is a bounded operator, so in (H\ref{itemH5}) we only require $x-y\in \mathscr{D}(B_{0}^{-1})$. \end{enumerate} \end{remark} For the proof of Remark \ref{remark1}, see the Appendix. We now state the main result of this paper. \begin{theorem}\label{mainthm} If (H\ref{itemH1})-(H\ref{itemH4}) hold, then \begin{equation} P_{T}\log{f}(y)\leq\log{P_{T}f(x)}+\frac{K_{2}(T)||x-y||_{H_{0}}^{2}}{2(1-e^{K_{2}T})},\ \forall f\in \mathscr{B}_{b}(H),f\geq 1, x,y\in H,T>0. \end{equation} If, in addition, (H\ref{itemH5}) holds, then for $p>(1+\frac{K_{3}(T)}{\rho(T)})^{2}$, $\delta_{p,T} = K_{3}\vee \frac{\rho(T)}{2}(\sqrt{p}-1)$, the Harnack inequality \begin{equation}\label{Harineq} (P_{T}f(y))^{p}\leq (P_{T}f^{p}(x)) \exp{\left[\frac{K_{2}(T)\sqrt{p}(\sqrt{p}-1)||x-y||_{H_{0}}^{2}}{4\delta_{p,T}[(\sqrt{p}-1)\rho(T)-\delta_{p,T}](1-e^{K_{2}T})}\right]} \end{equation} holds for all $T>0$, $x,y\in H$ and $f\in\mathscr{B}_{b}^{+}(H)$, where $||x||^{2}_{H_{0}}=\sum_{n=1}^{+\infty}{b_{n}^{-2}}\langle x,e_{n}\rangle^{2}$ and $H_{0}=\mathscr{D}(B_{0}^{-1})$. 
\end{theorem} \begin{remark}\label{remark2} One may use the finite-dimensional approximation method to get the Harnack inequalities, but we mention here that there are difficulties to overcome, and it may not work better than the method used here. Let $\pi_{n}$ be the projection from $H$ onto $H_{n}$; one then gets the following equation on $H_{n}$: \begin{equation} \mathrm{d} x^{n}_{t} = -A_{n}x^{n}_{t}\mathrm{d} t + F_{n}(t,x^{n}_{t})\mathrm{d} t+B_{n}(t,x^{n}_{t})\mathrm{d} W^{n}_{t}, \end{equation} where \begin{equation} A_{n}=\pi_{n}A,\ F_{n}=\pi_{n}F|_{H_{n}},\ B_{n}=\pi_{n}B|_{H_{n}},\ W^{n}=\pi_{n}W. \end{equation} One may find that after projecting to a lower dimension an invertible operator may become degenerate; for example, consider the operator with matrix form $ \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right) $ under the orthonormal basis $\{e_{1},\ e_{2}\}$: it is easy to see that it becomes degenerate after projecting onto the subspace generated by $e_{1}$. By (H\ref{itemH3}), one may replace $B$ by its symmetrization $\sqrt{BB^{*}}$, but the constants may become worse in (H\ref{itemH2}) and (H\ref{itemH4}), see the remark after Theorem 1 in \cite{ArY81}, and it seems not easy to get a similar estimate for $\sqrt{BB^{*}}$ as in (H\ref{itemH4}). \end{remark} \section{Proof of Theorem \ref{mainthm}} Fix a time $T>0$; we focus our discussion on the interval $[0,T]$. In order to prove the main theorem we need some lemmas, and for simplicity we denote $K_{i}(T)$ by $K_{i}$, $i=1,2,3$. The first lemma proves the existence and uniqueness of the mild solution of equation (\ref{equ1}) and gives some estimates. 
\begin{lemma} Under conditions (\textrm{H}\ref{itemH1}) and (\textrm{H}\ref{itemH2}), equation (\ref{equ1}) has a pathwise unique mild solution and \begin{equation}\label{solution_estimate_1} \sup_{t\in[0,T]}\mathbb{E}{||x_{t}||^{r}}\leq C(r,T)(1+\mathbb{E}||x_{0}||^{r}). \end{equation} \end{lemma} \noindent\emph{Proof.} The existence part goes along the same lines as that of Theorem 7.4 in \cite{DPZ1992}, provided we can prove that there exists $p\geq2$ such that \begin{equation} \sup_{t\in[0,T]}\mathbb{E}\left|\left|\int_{0}^{t}{e^{-(t-s)A}F(s,x_{s})\mathrm{d} s}\right|\right|^{p}<\infty, \end{equation} and \begin{equation} \sup_{t\in[0,T]}\mathbb{E}{\left|\left|\int_{0}^{t}{e^{-(t-s)A}B(s,x_{s})\mathrm{d} W_{s}}\right|\right|^{p}}<\infty \end{equation} for all $H$-valued predictable processes $x$ defined on $[0,T]$ satisfying \begin{equation} \sup_{t\in[0,T]}\mathbb{E}{||x_{t}||^{p}}<\infty. \end{equation} In fact, for $r$ as in (H\ref{itemH2}), \begin{equation*} \begin{split} &\sup_{t\in[0,T]}\mathbb{E}{\left|\left|\int_{0}^{t}{e^{-(t-s)A}B(s,x_{s})\mathrm{d} W_{s}}\right|\right|^{r}}\\ \leq &\sup_{t\in[0,T]}{\mathbb{E}{\left|\left|\int_{0}^{t}{e^{-(t-s)A}(B(s,x_{s})-B(s,0))\mathrm{d} W_{s}}\right|\right|^{r}}}+\sup_{t\in[0,T]}{\mathbb{E}{\left|\left|\int_{0}^{t}{e^{-(t-s)A}B(s,0)\mathrm{d} W_{s}}\right|\right|^{r}}}\\ \leq & C(r,T)(1+\mathbb{E}{||x_{t}||^{r}})+\left(\frac{r}{2}(r-1)\right)^{\frac{r}{2}} \sup_{t\in[0,T]}\left(\int_{0}^{t}\left(\mathbb{E}||S(t-s)B(s,0)||_{HS}^{r}\right)^{\frac{2}{r}}\mathrm{d} s\right)^{\frac{r}{2}} <\infty. \end{split} \end{equation*} $F$ is treated similarly; we omit the details. Estimate (\ref{solution_estimate_1}) follows from Gronwall's lemma. For the uniqueness part: 
if $x_{t}^{1},x_{t}^{2}$ are mild solutions of equation (\ref{equ1}), then \begin{equation} \begin{split} \mathbb{E}\sup_{u\in[0,t]}{||x_{u}^{1}-x_{u}^{2}||^{r}}\leq& 2^{r}T\mathbb{E}{\sup_{u\in[0,t]}{\int_{0}^{u}{||S(u-s)(F(s,x_{s}^{1})-F(s,x_{s}^{2}))||^{r}\mathrm{d} s}}}\\ &+ 2^{r}\mathbb{E}\sup_{u\in[0,t]}{||\int_{0}^{u}{S(u-s)(B(s,x^{1}_{s})-B(s,x^{2}_{s}))\mathrm{d} W_{s}}||^{r}}\\ \leq& 2^{r}T\int_{0}^{t}{\mathbb{E}{||x_{u}^{1}-x_{u}^{2}||^{r}}\mathrm{d} s}+C(r,T)\mathbb{E}\int_{0}^{t}{||x_{s}^{1}-x_{s}^{2}||^{r}\mathrm{d} s}\\ \leq& C(r,T)\int_{0}^{t}{\mathbb{E}\sup_{u\in[0,s]}{||x_{u}^{1}-x_{u}^{2}||^{r}}\mathrm{d} s}. \end{split} \end{equation} By the second inequality, $\mathbb{E}\sup_{u\in[0,t]}{||x_{u}^{1}-x_{u}^{2}||^{r}}<\infty$; then by Gronwall's lemma, $x_{t}^{1}=x_{t}^{2},\ \forall t\in[0,T]$, $\mathbb{P}$-a.s. \qed Denote $A_{\epsilon} = A + \epsilon B_{0}^{-2}$, $\mathscr{D}(A_{\epsilon}) = \mathscr{D}(A) \bigcap \mathscr{D}(B_{0}^{-2}) \subset \mathscr{D}(B_{0}^{-2})$; it is a self-adjoint operator, the eigenvalues of $A_{\epsilon}$ are $\{\lambda_{n,\epsilon}:=\lambda_{n}+\epsilon b_{n}^{-2}\ | n\in \mathbb{N}\}$, and the eigenvectors remain $\{e_{n}|n\in\mathbb{N}\}$. In fact, one can define a self-adjoint operator $\tilde{A}$ by \begin{align} \mathscr{D}(\tilde{A})&=\left\{x\in H \ |\ \sum_{n=1}^{+\infty}{(\lambda_{n}+\epsilon b_{n}^{-2})^{2}\langle x,e_{n}\rangle^{2}}<+\infty \right\},\\ \tilde{A}x&=\sum_{n=1}^{+\infty}{(\lambda_{n}+\epsilon b_{n}^{-2})\langle x,e_{n}\rangle e_{n}}, \end{align} and then, by a basic inequality and the spectral decompositions of $A$ and $B_{0}^{-2}$, it is easy to see that $\tilde{A}=A_{\epsilon}$. 
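Indeed, on the eigenbasis both operators act identically: since $Ae_{n}=\lambda_{n}e_{n}$ and $B_{0}^{-2}e_{n}=b_{n}^{-2}e_{n}$,
\begin{equation*}
(A+\epsilon B_{0}^{-2})e_{n}=(\lambda_{n}+\epsilon b_{n}^{-2})e_{n}=\lambda_{n,\epsilon}e_{n}=\tilde{A}e_{n},\qquad n\in\mathbb{N},
\end{equation*}
and since both operators are self-adjoint with the same eigenbasis and eigenvalues, their spectral decompositions coincide.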
\begin{lemma} For the mild solution of the equation \begin{equation}\label{equ2} \mathrm{d} x_{t}^{\epsilon}= -(A+\epsilon B_{0}^{-2})x_{t}^{\epsilon}\mathrm{d} t + F(t,x_{t}^{\epsilon})\mathrm{d} t + B(t,x_{t}^{\epsilon})\mathrm{d} W_{t},\ x_{0}^{\epsilon}= x, \end{equation} we have \begin{equation} \lim_{\epsilon \rightarrow 0^{+}}\mathbb{E}||x_{t}-x_{t}^{\epsilon}||^{2}=0,\ \forall t\in [0,T]. \end{equation} \end{lemma} \noindent\emph{Proof.} Since \begin{align} x_{t} &= e^{-tA}x + \int_{0}^{t}{e^{-(t-s)A}F(s,x_{s})\mathrm{d} s} + \int_{0}^{t}{e^{-(t-s)A}B(s,x_{s})\mathrm{d} W_{s}},\\ x^{\epsilon}_{t} &= e^{-t(A+\epsilon B_{0}^{-2})}x + \int_{0}^{t}{e^{-(t-s)(A+\epsilon B_{0}^{-2})}F(s,x^{\epsilon}_{s})\mathrm{d} s} + \int_{0}^{t}{e^{-(t-s)(A+\epsilon B_{0}^{-2})}B(s,x^{\epsilon}_{s})\mathrm{d} W_{s}}, \end{align} we have \begin{equation} \begin{split} ||x_{t}-x_{t}^{\epsilon}||^{2}\leq& 3||(e^{-t\epsilon B_{0}^{-2}}-I)e^{-tA}x||^{2}\\ &+3||\int_{0}^{t}{(e^{-(t-s)A}F(s,x_{s})-e^{-(t-s)(A+\epsilon B_{0}^{-2})}F(s,x_{s}^{\epsilon}))\mathrm{d} s}||^{2}\\ &+3||\int_{0}^{t}{(e^{-(t-s)A}B(s,x_{s})-e^{-(t-s)(A+\epsilon B_{0}^{-2})}B(s,x_{s}^{\epsilon}))\mathrm{d} W_{s}}||^{2}\\ =:& I_{1}+I_{2}+I_{3}. \end{split} \end{equation} It is clear that $\lim_{\epsilon \rightarrow 0^{+}}I_{1}=0$. 
For $I_{2}$, we have \begin{equation} \begin{split} I_{2}&\leq 6T\int_{0}^{t}{||(e^{-(t-s)A}-e^{-(t-s)(A+\epsilon B^{-2}_{0})})F(s,x_{s})||^{2}\mathrm{d} s}\\ &+6T\int_{0}^{t}{||e^{-(t-s)(A+\epsilon B^{-2}_{0})}(F(s,x_{s})-F(s,x_{s}^{\epsilon}))||^{2}\mathrm{d} s} =: I_{2,1}+I_{2,2}. \end{split} \end{equation} Since \begin{align} &||(e^{-(t-s)A}-e^{-(t-s)(A+\epsilon B_{0}^{-2})})F(s,x_{s})||\leq C(1+||x_{s}||),\\ &\lim_{\epsilon \rightarrow 0^{+}}{||(e^{-(t-s)A}-e^{-(t-s)(A+\epsilon B_{0}^{-2})})F(s,x_{s})||}=0, \end{align} the dominated convergence theorem yields $\lim_{\epsilon \rightarrow 0^{+}}{\mathbb{E} I_{2,1}}=0$. On the other hand, \begin{equation} \begin{split} I_{2,2}&\leq 6T\int_{0}^{t}{||e^{-(t-s)(A+\epsilon B_{0}^{-2})}(F(s,x_{s})-F(s,x_{s}^{\epsilon}))||^{2}\mathrm{d} s}\\ &\leq 6T\int_{0}^{t}{||F(s,x_{s})-F(s,x_{s}^{\epsilon})||^{2}\mathrm{d} s}\leq 6TK_{1}^{2}\int_{0}^{t}{||x_{s}-x_{s}^{\epsilon}||^{2}\mathrm{d} s}. 
\end{split} \end{equation} For $I_{3}$, \begin{equation} \begin{split} \mathbb{E}{I_{3}}&\leq 6\mathbb{E}{||\int_{0}^{t}{(e^{-(t-s)A}-e^{-(t-s)(A+\epsilon B_{0}^{-2})})B(s,x_{s})\mathrm{d} W_{s}}||^{2}}\\ &+6\mathbb{E}{||\int_{0}^{t}{e^{-(t-s)(A+\epsilon B_{0}^{-2})}(B(s,x_{s})-B(s,x_{s}^{\epsilon}))\mathrm{d} W_{s}}||^{2}}=:I_{3,1}+I_{3,2}, \end{split} \end{equation} and \begin{equation} \begin{split} \mathbb{E} I_{3,1} \leq &12T \mathbb{E} ||\int_{0}^{t}{(I-e^{-(t-s)\epsilon B_{0}^{-2}})(e^{-(t-s)A}B(s,0))\mathrm{d} W_{s}}||^{2}\\ &+ 12T \mathbb{E} ||\int_{0}^{t}{(e^{-(t-s)A}-e^{-(t-s)(A+\epsilon B_{0}^{-2})})(B(s,x_{s})-B(s,0))\mathrm{d} W_{s}}||^{2}\\ \leq &12T \mathbb{E} \int_{0}^{t}{||(I-e^{-(t-s)\epsilon B_{0}^{-2}})(e^{-(t-s)A}B(s,0))||_{HS}^{2}\mathrm{d} s}\\ &+12T \mathbb{E} \int_{0}^{t}{||(I-e^{-(t-s)\epsilon B_{0}^{-2}})(e^{-(t-s)A}(B(s,x_{s})-B(s,0)))||_{HS}^{2}\mathrm{d} s}\\ =:&I_{3,1,1}+I_{3,1,2}. \end{split} \end{equation} Since \begin{align} ||(I-e^{-(t-s)\epsilon B_{0}^{-2}})e^{-(t-s)A}B(s,0)||_{HS}^{2}=\sum_{n=1}^{+\infty}||(e^{-(t-s)\epsilon B_{0}^{-2}}-I)e^{-(t-s)A}B(s,0)e_{n}||^{2}, \end{align} \begin{align} &\lim_{\epsilon \rightarrow 0}||(e^{-(t-s)\epsilon B_{0}^{-2}}-I)e^{-(t-s)A}B(s,0)e_{n}||=0,\\ &||(e^{-(t-s)\epsilon B_{0}^{-2}}-I)e^{-(t-s)A}B(s,0)e_{n}||\leq||e^{-(t-s)A}B(s,0)e_{n}||, \end{align} and, by (H\ref{itemH2}), \begin{equation} \mathbb{E} \int_{0}^{t}{\sum_{n=1}^{+\infty}||e^{-(t-s)A}B(s,0)e_{n}||^{2}\mathrm{d} s}=\mathbb{E} \int_{0}^{t}{||e^{-(t-s)A}B(s,0)||_{HS}^{2}\mathrm{d} s}< \infty. 
\end{equation} By the dominated convergence theorem, $\lim_{\epsilon \rightarrow 0}I_{3,1,1}=0$. Note that $B(s,x_{s})-B(s,0)\in L_{HS}(H)$, \begin{equation} \begin{split} &||(I-e^{-(t-s)\epsilon B_{0}^{-2}})e^{-(t-s)A}(B(s,x_{s})-B(s,0))||_{HS}^{2}\\ =&\sum_{n=1}^{+\infty}{||(I-e^{-(t-s)\epsilon B_{0}^{-2}})e^{-(t-s)A}(B(s,x_{s})-B(s,0))e_{n}||^{2}}, \end{split} \end{equation} and \begin{align} ||(I-e^{-(t-s)\epsilon B_{0}^{-2}})(e^{-(t-s)A}(B(s,x_{s})-B(s,0)))e_{n}||^{2}&\leq ||(B(s,x_{s})-B(s,0))e_{n}||^{2},\\ \mathbb{E} \int_{0}^{t}{\sum_{n=1}^{+\infty}||(B(s,x_{s})-B(s,0))e_{n}||^{2}\mathrm{d} s}&\leq K_{1}^{2}\mathbb{E}\int_{0}^{t}{||x_{s}||^{2}\mathrm{d} s}<\infty, \end{align} so by the dominated convergence theorem, $\lim_{\epsilon \rightarrow 0}{\mathbb{E} I_{3,1}}=0$. Finally, \begin{equation} \mathbb{E} I_{3,2} \leq 6T \mathbb{E}\int_{0}^{t}{||B(s,x_{s})-B(s,x_{s}^{\epsilon})||_{HS}^{2}\mathrm{d} s}\leq 6TK_{1}^{2}\mathbb{E}\int_{0}^{t}{||x_{s}-x_{s}^{\epsilon}||^{2}\mathrm{d} s}. \end{equation} Now we have \begin{equation} \mathbb{E}||x_{t}-x_{t}^{\epsilon}||^{2}\leq \psi_{\epsilon}(t) + C(T,K_{1})\mathbb{E}\int_{0}^{t}{||x_{s}-x_{s}^{\epsilon}||^{2}\mathrm{d} s} \end{equation} for some $\psi_{\epsilon}(t)$ satisfying $\lim_{\epsilon \rightarrow 0}{\psi_{\epsilon}(t)}=0$; then by Gronwall's lemma, \begin{equation} \lim_{\epsilon \rightarrow 0}{\mathbb{E}||x_{t}-x_{t}^{\epsilon}||^{2}}=0,\ \forall t\in [0,T]. 
\end{equation} \qed We shall first consider the following equation, with $\xi_{t}=\frac{2-\theta}{K_{2}}(1-e^{K_{2}(t-T)})$: \begin{equation}\label{equ3} \begin{split} \mathrm{d} z_{t}=&-A_{\epsilon}z_{t}\mathrm{d} t+(F(t,x_{t})-F(t,x_{t}-z_{t}))\mathrm{d} t + (B(t,x_{t})-B(t,x_{t}-z_{t}))\mathrm{d} W_{t}\\ &-\frac{1}{\xi_{t}}(B(t,x_{t}-z_{t})-B(t,x_{t}))B(t,x_{t})^{-1}z_{t}\mathrm{d} t - \frac{1}{\xi_{t}}z_{t}\mathrm{d} t,\ z_{0}=z. \end{split} \end{equation} Note that, by (H\ref{itemH2})--(H\ref{itemH4}), \begin{align} &F(t,x_{t}) - F(t,x_{t}-z_{t}) \in H,\ (B(t,x_{t})-B(t,x_{t}-z_{t}))\in L_{HS}(H,H_{0}),\\ &(B(t,x_{t}-z_{t})-B(t,x_{t}))B(t,x_{t})^{-1}\in L(H_{0},H_{0}), \end{align} so it is natural to solve the equation in $H_{0}$; we shall look for a suitable Gelfand triple. To this end, we restrict the operator $A_{\epsilon}$ to $H_{0}$. \begin{lemma} Define $A_{0,\epsilon}$ as follows: \begin{align} \mathscr{D}(A_{0,\epsilon}) = B_{0}(\mathscr{D}(A_{\epsilon})),\ A_{0,\epsilon}x=A_{\epsilon}x, \forall x\in B_{0}(\mathscr{D}(A_{\epsilon})). \end{align} Then $A_{0,\epsilon}$ is well defined and $(A_{0,\epsilon}, B_{0}(\mathscr{D}(A_{\epsilon})))= (B_{0}A_{\epsilon}B_{0}^{-1},B_{0}(\mathscr{D}(A_{\epsilon})))$. \end{lemma} \noindent\emph{Proof.} It is well defined. 
In fact, for all $x\in B_{0}(\mathscr{D}(A_{\epsilon}))$, \begin{equation} \sum_{n=1}^{+\infty}{\lambda_{n,\epsilon}^{2}\langle x,e_{n}\rangle^{2}} =\sum_{n=1}^{+\infty}{\lambda^{2}_{n,\epsilon}b_{n}^{2}\langle B_{0}^{-1}x,e_{n}\rangle^{2}} \leq ||B_{0}||^{2}\sum_{n=1}^{+\infty}\lambda_{n,\epsilon}^{2}\langle B_{0}^{-1}x,e_{n}\rangle^{2}<+\infty, \end{equation} so $x\in\mathscr{D}(A_{\epsilon})$, and \begin{align} \sum_{n=1}^{+\infty}b_{n}^{-2}\langle A_{\epsilon}x,e_{n}\rangle^{2}=\sum_{n=1}^{+\infty}{\lambda^{2}_{n,\epsilon}\langle B_{0}^{-1}x,e_{n}\rangle^{2}}<+\infty, \end{align} so $A_{\epsilon}x\in\mathscr{D}(B_{0}^{-1})$ for all $x\in B_{0}(\mathscr{D}(A_{\epsilon}))$, i.e. $A_{\epsilon}x\in H_{0}$. Finally, for all $x\in B_{0}(\mathscr{D}(A_{\epsilon}))$, \begin{equation} B_{0}A_{\epsilon}B_{0}^{-1}x=A_{\epsilon}B_{0}B_{0}^{-1}x=A_{\epsilon}x=A_{0,\epsilon}x. \end{equation} \qed Now we can define our Gelfand triple. Let \begin{equation} (V,||\cdot||_{V} )= (\mathscr{D}(A_{0,\epsilon}^{\frac{1}{2}}),||A_{0,\epsilon}^{\frac{1}{2}}\cdot||_{H_{0}}), \end{equation} and let $(V^{*},||\cdot||_{V^{*}})$ be the completion of $(H_{0}, ||A_{0,\epsilon}^{-\frac{1}{2}}\cdot||_{H_{0}})$; then $V^{*}\supset H_{0}\supset V$ is the triple we need. Since $\mathscr{D}(A_{\epsilon})\subset\mathscr{D}(B_{0}^{-2})$ and $\mathscr{D}(A_{0,\epsilon})\subset \mathscr{D}(B_{0}^{-3})$, we moreover have the following relationship: \begin{equation} V^{*}\supset H \supset H_{0} \supset \mathscr{D}(B_{0}^{-2}) \supset V. 
\end{equation}
\begin{lemma}\label{lemma_strongsolution}
If conditions (H\ref{itemH1})--(H\ref{itemH4}) hold, equation (\ref{equ3}) has a unique strong solution up to the explosion time $\tau$.
\end{lemma}
\noindent\emph{Proof.} Let
\begin{equation}
G_{n}(t,v)= \left\{
\begin{array}{ll}
B(t,x_{t})^{-1}v,& \ ||v||_{H_{0}}\leq n,\\
B(t,x_{t})^{-1}\frac{nv}{||v||_{H_{0}}},& \ ||v||_{H_{0}}>n,\\
\end{array}
\right.
\end{equation}
and, for simplicity, denote
$$F(t,x_{t}-v_{1})-F(t,x_{t}-v_{2}),\ G_{n}(t,v_{1})-G_{n}(t,v_{2}),\ B(t,x_{t})-B(t,x_{t}-z_{t})$$
by $F(t,v_{2},v_{1})$, $G_{n}(t,v_{1},v_{2})$, $\hat{B}(t,z_{t})$, respectively. We first consider the following equation:
\begin{equation}\label{equ4}
\begin{split}
\mathrm{d} z_{t}=&-A_{0,\epsilon}z_{t}\mathrm{d} t + F(t,z_{t},0)\mathrm{d} t- \frac{1}{\xi_{t}}z_{t}\mathrm{d} t+\frac{1}{\xi_{t}}\hat{B}(t,z_{t})G_{n}(t,z_{t})\mathrm{d} t + \hat{B}(t,z_{t})\mathrm{d} W_{t}\\
=:&A_{n,\epsilon}(t,z_{t})\mathrm{d} t + \hat{B}(t,z_{t})\mathrm{d} W_{t}.
\end{split}
\end{equation}
Hemicontinuity clearly holds, since $G_{n}(t,\cdot)$ remains a Lipschitz mapping from $H_{0}$ to $H$.
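To make the Lipschitz property explicit, the following is only a sketch, under the assumption (used again in the proof of Theorem \ref{mainthm} below) that (H\ref{itemH3}) yields $||B(t,x_{t})^{-1}z||\leq \rho(T)^{-1}||z||_{H_{0}}$ on $[0,T]$. The truncation $P_{n}v:=v$ for $||v||_{H_{0}}\leq n$ and $P_{n}v:=nv/||v||_{H_{0}}$ otherwise is the metric projection onto the closed ball $\bar{B}_{n}^{H_{0}}(0)$, hence $1$-Lipschitz on $H_{0}$, so that
\begin{equation*}
||G_{n}(t,v_{1})-G_{n}(t,v_{2})||=||B(t,x_{t})^{-1}(P_{n}v_{1}-P_{n}v_{2})||\leq \frac{1}{\rho(T)}||P_{n}v_{1}-P_{n}v_{2}||_{H_{0}}\leq \frac{1}{\rho(T)}||v_{1}-v_{2}||_{H_{0}}.
\end{equation*}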
By direct calculation (see Appendix B), we obtain, for all $v,v_{1},v_{2} \in V$:
\begin{enumerate}[({A}1)]
\item Local monotonicity
\begin{equation*}
\begin{split}
&2\,{}_{V^{*}}\langle A_{n,\epsilon}(t,v_{1})-A_{n,\epsilon}(t,v_{2}),v_{1}-v_{2}\rangle_{V} +||\hat{B}(t,v_{2})-\hat{B}(t,v_{1})||_{L_{HS}(H,H_{0})}^{2}\\
\leq&\left[K_{2}+\frac{2n\sqrt{K_{2}}-2}{\xi_{t}} +\frac{n^{2}K_{1}||B_{0}||^{2}}{\epsilon^{2}\xi_{t}^{2}\delta^{2}} +\frac{2}{\xi_{t}}(\sqrt{K_{2}}||v_{2}||_{H_{0}}^{2} +\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||\cdot||v_{2}||_{V}^{2})\right]\times\\
&\times||v_{1}-v_{2}||_{H_{0}}^{2}-2(1-\delta^{2})||v_{1}-v_{2}||_{V}^{2},\ \forall \delta\in(0,1).
\end{split}
\end{equation*}\label{itemA1}
\item Coercivity
\begin{equation*}
\begin{split}
&2\,{}_{V^{*}}\langle A_{n,\epsilon}(t,v),v\rangle_{V} + ||\hat{B}(t,v)||_{L_{HS}(H,H_{0})}^{2}\\
\leq& -2(1-\delta^{2})||v||_{V}^{2}+(\frac{n\sqrt{K_{2}}-2}{\xi_{t}}+\frac{n^{2}K_{1}}{\epsilon^{2}\xi_{t}^{2}\delta^{2}})||v||_{H_{0}}^{2},\ \forall \delta\in(0,1).
\end{split}
\end{equation*}\label{itemA2}
\item Growth
\begin{equation*}
||A_{n,\epsilon}(t,v)||_{V^{*}}^{2}\leq \left(\frac{||B_{0}||^{2}}{\epsilon\xi_{t}}K_{2} +\left(1+\frac{||B_{0}||^{4}K_{1}}{\epsilon\xi_{t}^{2}}\right)||v||_{V}^{2}\right)(1+||v||_{H_{0}}^{4}).
\end{equation*}\label{itemA3}
\end{enumerate}
Since
\begin{equation}\label{inequ1}
\begin{split}
||\hat{B}(t,v)||^{2}_{L_{HS}}=||B_{0}^{-1}\hat{B}(t,v)||^{2}_{HS} \leq K_{2}||v||_{H_{0}}^{2}+\frac{2K_{1}}{\epsilon}||B_{0}||^{3}||v||_{V}||v||_{H_{0}}
\end{split}
\end{equation}
does not satisfy condition (1.2) in \cite{LiuR10}, the estimates there need a small adaptation; by the basic inequality one can check that the proof of Lemma 2.2 in \cite{LiuR10} still goes through, see Appendix B. By the estimates above and Theorem 1.1 in \cite{LiuR10}, for any $T_{0}<T$ equation (\ref{equ4}) has a unique strong solution $(z_{t}^{n})_{t\in[0,T_{0}]}$; by pathwise uniqueness and continuity one can extend the solution to the interval $[0,T)$. Next we let $n$ tend to infinity. For $m>n$, let
\begin{equation}
\tau_{m}^{n} = \inf\{t\in[0,T)\ |\ ||z_{t}^{m}||_{H_{0}}>n\},
\end{equation}
with the convention $\inf\emptyset=T$. Then
\begin{equation}
\begin{split}
z_{t}^{m}=&z_{0} + \int_{0}^{t}{(-A_{0,\epsilon}z_{s}^{m} + F(s,z_{s}^{m},0)-\frac{1}{\xi_{s}}z_{s}^{m})\mathrm{d} s}\\
&-\int_{0}^{t}{\frac{1}{\xi_{s}}\hat{B}(s,z_{s}^{m})B(s,x_{s})^{-1}z_{s}^{m}\mathrm{d} s} +\int_{0}^{t}{\hat{B}(s,z_{s}^{m})\mathrm{d} W_{s}},\ t<\tau_{m}^{n}.
\end{split}
\end{equation}
By It\^{o}'s formula and (A\ref{itemA1}), for $t<\tau_{n}^{n}\wedge\tau_{m}^{n}$, we have
\begin{equation*}
\begin{split}
&\mathrm{d} ||z_{t}^{n}-z_{t}^{m}||_{H_{0}}^{2}-2\langle(\hat{B}(t,z_{t}^{n})-\hat{B}(t,z_{t}^{m}))\mathrm{d} W_{t}, z_{t}^{n}-z_{t}^{m}\rangle_{H_{0}}\\
=&\ \left(2\,{}_{V^{*}}\langle A_{n,\epsilon}(t,z_{t}^{n})-A_{n,\epsilon}(t,z_{t}^{m}),z_{t}^{n}-z_{t}^{m}\rangle_{V} +||\hat{B}(t,z_{t}^{n})-\hat{B}(t,z_{t}^{m})||^{2}_{L_{HS}(H,H_{0})}\right)\mathrm{d} t\\
\leq &\left(K_{2}+\frac{2}{\xi_{t}}(n\sqrt{K_{1}}+\sqrt{K_{2}}||z_{t}^{n}||_{H_{0}}^{2} +\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||\cdot||z_{t}^{n}||_{V}^{2}) +\frac{n^{2}K_{1}}{\epsilon^{2}\xi_{t}^{2}\delta^{2}}||B_{0}||^{2}\right)||z_{t}^{n}-z_{t}^{m}||_{H_{0}}^{2}\mathrm{d} t.\\
\end{split}
\end{equation*}
Define
\begin{equation}
\begin{split}
&\Psi_{s}=K_{2}+\frac{2}{\xi_{s}}(\sqrt{K_{2}}||z_{s}^{n}||_{H_{0}}^{2}+n\sqrt{K_{1}} +\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||^{2}||z_{s}^{n}||_{V}^{2}) +\frac{n^{2}K_{1}||B_{0}||^{2}}{\epsilon^{2}\xi_{s}^{2}\delta^{2}};
\end{split}
\end{equation}
then
\begin{equation}
\begin{split}
&\exp{\left[-\int_{0}^{t}{\Psi_{s}\mathrm{d} s}\right]}||z_{t}^{n}-z_{t}^{m}||_{H_{0}}^{2}\\
\leq &\int_{0}^{t}{2\exp{\left[-\int_{0}^{r}{\Psi_{s}\mathrm{d} s}\right]}\langle(\hat{B}(r,z_{r}^{n})-\hat{B}(r,z_{r}^{m}))\mathrm{d} W_{r},z_{r}^{n}-z_{r}^{m}\rangle_{H_{0}}},
\end{split}
\end{equation}
and therefore
\begin{equation}
\mathbb{E}\left\{\exp{\left[-\int_{0}^{t\wedge\tau_{n}^{n}\wedge\tau_{m}^{n}}{\Psi_{s}\mathrm{d} s}\right]}||z_{t\wedge\tau_{n}^{n}\wedge\tau_{m}^{n}}^{n}-z_{t\wedge\tau_{n}^{n}\wedge\tau_{m}^{n}}^{m}||_{H_{0}}^{2}\right\}=0.
\end{equation}
Note that
\begin{equation}
\mathbb{E}\int_{0}^{t}{||z_{s}^{n}||^{2}_{V}\mathrm{d} s}<\infty,\ \forall t<T,
\end{equation}
implies
\begin{equation}
\int_{0}^{t}{||z_{s}^{n}||_{V}^{2}\mathrm{d} s}<\infty,\ \forall t\in[0,T),\ \mathbb{P}\mbox{-a.s.},
\end{equation}
so that
\begin{equation}
z_{t\wedge\tau_{n}^{n}\wedge\tau_{m}^{n}}^{n}=z_{t\wedge\tau_{n}^{n}\wedge\tau_{m}^{n}}^{m},\ \forall t\in[0,T),\ \mathbb{P}\mbox{-a.s.}
\end{equation}
Letting $t\uparrow T$, by continuity we have
\begin{equation}
z_{\tau_{n}^{n}\wedge\tau_{m}^{n}}^{n}=z_{\tau_{n}^{n}\wedge\tau_{m}^{n}}^{m},\ \mathbb{P}\mbox{-a.s.}
\end{equation}
If $\tau_{n}^{n}<\tau_{m}^{n}$, then $z_{\tau_{n}^{n}}^{n}=z_{\tau_{n}^{n}}^{m}\in \partial B_{n}^{H_{0}}(0)$, which contradicts the definition of $\tau_{m}^{n}$.
Thus $\tau_{n}^{n}\geq\tau_{m}^{n}$; similarly, $\tau_{n}^{n}\leq\tau_{m}^{n}$, so $\tau_{n}^{n}=\tau_{m}^{n}$, $\mathbb{P}$-a.s., and $z_{\tau_{n}^{n}}^{n}=z_{\tau_{m}^{n}}^{m}$. Therefore, we can define
\begin{equation}
z_{t}=z_{t}^{n},\ t<\tau_{n}^{n};\ \tau=\sup_{n}{\tau_{n}^{n}},
\end{equation}
and $(z,\tau)$ is a strong solution of equation (\ref{equ3}). Uniqueness follows easily by the same method. \qed
\\\emph{Proof of Theorem \ref{mainthm}}. Let
\begin{align*}
\mathrm{d} \tilde{W}_{s} &= \mathrm{d} W_{s} + \frac{1}{\xi_{s}}B(s,x_{s})^{-1}z_{s}\mathrm{d} s,\ s<T\wedge\tau,\\
R_{s}&=\exp{\left[-\int_{0}^{s}\xi_{t}^{-1}\langle B(t,x_{t})^{-1}z_{t},\mathrm{d} W_{t}\rangle-\frac{1}{2}\int_{0}^{s}{\frac{||B(t,x_{t})^{-1}z_{t}||^{2}}{\xi_{t}^{2}}\mathrm{d} t}\right]},\ s<T\wedge\tau,\\
\tau_{n}&=\inf\{t\in[0,T)\ |\ ||z_{t}||_{H_{0}}>n\},\ \mathbb{Q}:= R_{T\wedge\tau}\mathbb{P},
\end{align*}
and rewrite the equation for $z$ in terms of $\tilde{W}$:
\begin{equation}\label{equ5}
\mathrm{d} z_{t} = -A_{0,\epsilon}z_{t}\mathrm{d} t+F(t,z_{t},0)\mathrm{d} t + \hat{B}(t,z_{t})\mathrm{d} \tilde{W}_{t}- \frac{1}{\xi_{t}}z_{t}\mathrm{d} t.
\end{equation}
By It\^{o}'s formula and (H\ref{itemH4}), for $s\in[0,T)$ and $t<\tau_{n}\wedge s$,
\begin{equation}
\begin{split}
\mathrm{d} ||z_{t}||_{H_{0}}^{2} = &-2||z_{t}||_{V}^{2}\mathrm{d} t + 2\,{}_{V^{*}}\langle F(t,z_{t},0),z_{t}\rangle_{V}\mathrm{d} t-\frac{2||z_{t}||_{H_{0}}^{2}}{\xi_{t}}\mathrm{d} t\\
&+||\hat{B}(t,z_{t})||_{L_{HS}(H,H_{0})}^{2}\mathrm{d} t+2\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}}\\
\leq&\ 2 \langle F(t,z_{t},0),B_{0}^{-2}z_{t}\rangle\mathrm{d} t + ||\hat{B}(t,z_{t})||_{L_{HS}(H,H_{0})}^{2}\mathrm{d} t \\
&-\frac{2||z_{t}||_{H_{0}}^{2}}{\xi_{t}}\mathrm{d} t +2\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}} \\
\leq&
-\frac{2||z_{t}||_{H_{0}}^{2}}{\xi_{t}}\mathrm{d} t +2\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}}+ K_{2}||z_{t}||_{H_{0}}^{2}\mathrm{d} t,
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\mathrm{d} \frac{||z_{t}||_{H_{0}}^{2}}{\xi_{t}} \leq & -\frac{2||z_{t}||_{H_{0}}^{2}}{\xi_{t}^{2}}\mathrm{d} t + \frac{K_{2}}{\xi_{t}}||z_{t}||_{H_{0}}^{2}\mathrm{d} t - \frac{\xi_{t}^{'}}{\xi_{t}^{2}}||z_{t}||_{H_{0}}^{2}\mathrm{d} t+\frac{2}{\xi_{t}}\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}}\\
=&\ -\frac{2-K_{2}\xi_{t}+\xi_{t}^{'}}{\xi_{t}^{2}}||z_{t}||_{H_{0}}^{2}\mathrm{d} t+ \frac{2}{\xi_{t}}\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}}\\
=&\ -\frac{\theta}{\xi_{t}^{2}}||z_{t}||_{H_{0}}^{2}\mathrm{d} t+ \frac{2}{\xi_{t}}\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}},
\end{split}
\end{equation}
where the last equality holds since $\xi_{t}^{'}=-(2-\theta)e^{K_{2}(t-T)}$ and $K_{2}\xi_{t}=(2-\theta)(1-e^{K_{2}(t-T)})$, so that $2-K_{2}\xi_{t}+\xi_{t}^{'}=\theta$. By the Girsanov theorem, $(\tilde{W})_{t<s\wedge\tau_{n}}$ is a Wiener process under the probability $\mathbb{Q}_{s,n} := R_{s\wedge\tau_{n}}\mathbb{P}$, and
\begin{equation}\label{inequ2}
\int_{0}^{s\wedge\tau_{n}}{\frac{||z_{t}||_{H_{0}}^{2}}{\xi_{t}^{2}}\mathrm{d} t}\leq \frac{||z_{0}||_{H_{0}}^{2}}{\theta\xi_{0}}+\int_{0}^{s\wedge\tau_{n}}{\frac{2}{\theta\xi_{t}}\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle_{H_{0}}},
\end{equation}
so that
\begin{equation}
\mathbb{E}_{\mathbb{Q}_{s,n}}{\int_{0}^{s\wedge\tau_{n}}{\frac{||z_{t}||_{H_{0}}^{2}}{\xi_{t}^{2}}\mathrm{d} t}}\leq \frac{||z_{0}||_{H_{0}}^{2}}{\theta\xi_{0}}.
\end{equation}
Since, by (H\ref{itemH3}),
\begin{equation}
\begin{split}
\log{R_{u}}&= -\int_{0}^{u}\xi_{t}^{-1}\langle B(t,x_{t})^{-1}z_{t},\mathrm{d} \tilde{W}_{t}\rangle+\frac{1}{2}\int_{0}^{u}{\frac{||B(t,x_{t})^{-1}z_{t}||^{2}}{\xi_{t}^{2}}\mathrm{d} t}\\
&\leq -\int_{0}^{u}\xi_{t}^{-1}\langle B(t,x_{t})^{-1}z_{t},\mathrm{d}
\tilde{W}_{t}\rangle+\frac{1}{2\rho(T)^{2}}\int_{0}^{u}{\frac{||z_{t}||_{H_{0}}^{2}}{\xi_{t}^{2}}\mathrm{d} t},\ u\leq s\wedge\tau_{n},
\end{split}
\end{equation}
it follows that
\begin{equation}
\mathbb{E}{R_{s\wedge\tau_{n}}}\log{R_{s\wedge\tau_{n}}}\leq \frac{||z_{0}||_{H_{0}}^{2}}{2\theta\xi_{0}\rho(T)^{2}},\ \forall s\in[0,T),\ n\geq1.
\end{equation}
As in \cite{Wang2011}, we can prove that $\{R_{s\wedge\tau}\ |\ s\in[0,T]\}$ is a martingale. Since
\begin{equation}
\mathbb{E}_{\mathbb{Q}}1_{[\tau_{n}\leq t]}\frac{||z_{t\wedge\tau_{n}}||_{H_{0}}^{2}}{\xi_{t\wedge\tau_{n}}} \leq\mathbb{E}_{\mathbb{Q}}\frac{||z_{t\wedge\tau_{n}}||_{H_{0}}^{2}}{{\xi_{t\wedge\tau_{n}}}} \leq\frac{||z_{0}||^{2}_{H_{0}}}{\xi_{0}}
\end{equation}
and
\begin{equation}
\mathbb{E}_{\mathbb{Q}}1_{[\tau_{n}\leq t]}\frac{||z_{t\wedge\tau_{n}}||_{H_{0}}^{2}}{\xi_{t\wedge\tau_{n}}}\geq\frac{n^{2}\mathbb{Q}(\tau_{n}\leq t)}{\xi_{0}},
\end{equation}
letting $n$ tend to infinity we obtain $\lim_{n\rightarrow+\infty}\mathbb{Q}(\tau_{n}\leq t)=0$ for all $t\in [0,T)$, and hence $\mathbb{Q}(\tau=T)=1$. Now, since $\tau = T$, $\mathbb{Q}$-a.s., equation (\ref{equ5}) can be solved up to time $T$. Let
\begin{equation}
\zeta=\inf\{t\in[0,T]\ |\ ||z_{t}||_{H_{0}}=0\},
\end{equation}
with the convention $\inf\emptyset = +\infty$; we shall prove that $\zeta \leq T$.
Otherwise, there exists a set $\Omega_{0}$ such that $\mathbb{Q}(\Omega_{0})>0$ and, for any $\omega \in \Omega_{0}$, $\zeta(\omega)>T$; then, by the continuity of the paths,
\begin{equation}
\inf_{t\in[0,T]}{||z_{t}(\omega)||_{H_{0}}}>0,
\end{equation}
so that, since $\xi_{t}\rightarrow 0$ as $t\uparrow T$,
\begin{equation}
\int_{0}^{T}{\frac{||z_{t}||^{2}_{H_{0}}}{\xi_{t}^{2}}\mathrm{d} t}=+\infty\ \mbox{ on }\Omega_{0};
\end{equation}
but
\begin{equation}
\mathbb{E}_{\mathbb{Q}}\int_{0}^{T}{\frac{||z_{t}||^{2}_{H_{0}}}{\xi_{t}^{2}}\mathrm{d} t}\leq \frac{||z_{0}||^{2}_{H_{0}}}{\theta\xi_{0}}<+\infty.
\end{equation}
Hence $\zeta\leq T$, $\mathbb{Q}$-a.s., and by the uniqueness of the solution of equation (\ref{equ5}) we have
\begin{equation}
z_{t}\equiv 0,\ t>\zeta,\ \mathbb{Q}\mbox{-a.s.}
\end{equation}
Thus $z_{T}=0$, $\mathbb{Q}$-a.s. Next, we construct the coupling. Since $(\tilde{W}_{t})_{t\in[0,T]}$ is a Wiener process under the probability space $(\Omega,\mathscr{F},R_{\tau\wedge T}\mathbb{P})$, let $y$ be the unique mild solution of the following equation,
\begin{equation}
\mathrm{d} y_{t}=-A_{\epsilon}y_{t}\mathrm{d} t + F(t,y_{t})\mathrm{d} t + B(t,y_{t})\mathrm{d} \tilde{W}_{t},\ y_{0}=y,
\end{equation}
while $x_{t}$ is the unique solution of the following equation,
\begin{equation}
\mathrm{d} x_{t}=-A_{\epsilon}x_{t}\mathrm{d} t + F(t,x_{t})\mathrm{d} t -\frac{z_{t}}{\xi_{t}}\mathrm{d} t+ B(t,x_{t})\mathrm{d} \tilde{W}_{t},\ x_{0}=x.
\end{equation}
The process $x_{t}-y_{t}$ is a mild solution of the following equation,
\begin{equation}\label{equ6}
\mathrm{d} u_{t}=-A_{\epsilon}u_{t}\mathrm{d} t+F(t,u_{t},0)\mathrm{d} t+\hat{B}(t,u_{t})\mathrm{d} \tilde{W}_{t} -\frac{z_{t}}{\xi_{t}}\mathrm{d} t,
\end{equation}
and note that $z_{t}$ is a solution of the equation
\begin{equation}
\mathrm{d} z_{t}=-A_{0,\epsilon}z_{t}\mathrm{d} t+F(t,z_{t},0)\mathrm{d} t+\hat{B}(t,z_{t})\mathrm{d} \tilde{W}_{t} -\frac{z_{t}}{\xi_{t}}\mathrm{d} t.
\end{equation}
As for equation (\ref{equ4}), one can prove that equation (\ref{equ6}) has a strong solution in $H_{0}$; since $V^{*} \supset H\supset H_{0}$ and $A_{0,\epsilon}$ is the restriction of $A_{\epsilon}$ to $H_{0}$, by the relationship between variational and mild solutions and by pathwise uniqueness, $z_{t}=x_{t}-y_{t}$ for all $t\in[0,T]$, $\mathbb{Q}$-a.s. By the method used in \cite{Wang2011}, we obtain the log-Harnack inequality for equation (\ref{equ2}):
\begin{equation}
\begin{split}
P_{T}^{\epsilon}\log{f}(y)&=\mathbb{E}_{\mathbb{Q}}\log{f(y_{T}^{\epsilon})}=\mathbb{E}{R_{T\wedge\tau}\log{f(x_{T}^{\epsilon})}}\leq\mathbb{E}{R_{T\wedge\tau}\log{R_{T\wedge\tau}}}+\log{\mathbb{E}{f(x_{T}^{\epsilon})}}\\
&\leq\log{P_{T}^{\epsilon}f(x)} +\frac{||x-y||_{H_{0}}^{2}}{2\rho(T)^{2}\theta\xi_{0}}=\log{P_{T}^{\epsilon}f(x)}+\frac{K_{2}||x-y||_{H_{0}}^{2}}{2\rho(T)^{2}\theta(2-\theta)(1-e^{-K_{2}T})}\ .
\end{split}
\end{equation}
Then, by Lemma 1.2, letting $\epsilon\rightarrow 0$ and choosing $\theta = 1$, for $f\in\mathscr{B}_{b}^{+}(H)$ with $f\geq 1$,
\begin{equation}
P_{T}\log{f}(y)\leq\log{P_{T}f(x)}+\frac{K_{2}||x-y||_{H_{0}}^{2}}{2\rho(T)^{2}(1-e^{-K_{2}T})}.
\end{equation}
If in addition (H\ref{itemH5}) holds, by inequality (\ref{inequ2}) we have
\begin{equation}
\begin{split}
&\mathbb{E}_{s,n}{\exp{\left[h\int_{0}^{s\wedge\tau_{n}}{\frac{||z_{t}||^{2}_{H_{0}}}{\xi_{t}^{2}}\mathrm{d} t}\right]}}\\
\leq& \exp{\left[\frac{h||x-y||_{H_{0}}^{2}}{\theta\xi_{0}}\right]} \mathbb{E}_{s,n}{\exp{\left[\frac{2h}{\theta}\int_{0}^{s\wedge\tau_{n}}{\frac{1}{\xi_{t}}\langle\hat{B}(t,z_{t})\mathrm{d} \tilde{W},z_{t}\rangle}\right]}}\\
\leq&\exp{\left[\frac{h||x-y||_{H_{0}}^{2}}{\theta\xi_{0}}\right]} \mathbb{E}_{s,n}{\left(\exp{\left[\frac{8h^{2}K_{3}^{2}}{\theta^{2}} \int_{0}^{s\wedge\tau_{n}}{\frac{||z_{t}||^{2}_{H_{0}}}{\xi_{t}^{2}}\mathrm{d} t}\right]}\right)^{\frac{1}{2}}}.
\end{split}
\end{equation}
Taking $h=\frac{\theta^{2}}{8K_{3}^{2}}$ yields
\begin{equation}
\mathbb{E}_{s,n}{\exp{\left[\frac{\theta^{2}}{8K_{3}^{2}}\int_{0}^{s\wedge\tau_{n}}{\frac{||z_{t}||^{2}_{H_{0}}}{\xi_{t}^{2}}\mathrm{d} t}\right]}} \leq \exp{\left[\frac{\theta K_{2}||x-y||_{H_{0}}^{2}}{4K_{3}^{2}(2-\theta)(1-e^{-K_{2}T})}\right]}.
\end{equation}
Similarly to \cite{Wang2011}, we get
\begin{equation}
\sup_{s\in[0,T]}{\mathbb{E}{R_{s\wedge\tau}^{1+r}}\leq \exp{\left[\frac{\theta K_{2}(2K_{3}+\theta\rho(T))||x-y||_{H_{0}}^{2}}{8K_{3}^{2}(2-\theta)(K_{3}+\theta\rho(T))(1-e^{-K_{2}T})}\right]}},
\end{equation}
and for $p>(1+K_{3})^{2}$, $\delta_{p,T} = K_{3}\vee \frac{\rho(T)}{2}(\sqrt{p}-1)$, and $f\in\mathscr{B}_{b}^{+}(H)$, choosing $\theta = \frac{2K_{3}\rho(T)}{\sqrt{p}-1}$,
\begin{equation}
(P_{T}^{\epsilon}f(y))^{p}\leq (P_{T}^{\epsilon}f^{p}(x))
\exp{\left[\frac{K_{2}\sqrt{p}(\sqrt{p}-1)||x-y||_{H_{0}}^{2}}{4\delta_{p,T}[(\sqrt{p}-1)\rho(T)-\delta_{p,T}](1-e^{-K_{2}T})}\right]}.
\end{equation}
By Lemma 1.2, letting $\epsilon\downarrow 0$, we have
\begin{equation}
(P_{T}f(y))^{p}\leq (P_{T}f^{p}(x)) \exp{\left[\frac{K_{2}\sqrt{p}(\sqrt{p}-1)||x-y||_{H_{0}}^{2}}{4\delta_{p,T}[(\sqrt{p}-1)\rho(T)-\delta_{p,T}](1-e^{-K_{2}T})}\right]}
\end{equation}
for all $x,y\in H$ with $x-y\in\mathscr{D}(B_{0}^{-1})$. \qed

\section{Application}
In this section, we give some simple applications of Theorem \ref{mainthm}.
\begin{corollary}
Assume that $F$ and $B$ are deterministic and independent of $t$, and that (H\ref{itemH1}) to (H\ref{itemH5}) hold. If $\lambda_{0}>0$, $\lambda_{0}>K_{1}^{2}+2K_{1}$ and $B(0)\in L_{HS}(H)$, then
\begin{enumerate}[(1)]
\item $P_{t}$ has a unique invariant measure $\mu$ with full support on $H$, and $\mu(V)=1$.\label{itemc1}
\item If $\sup_{x}||B(x)||<\infty$, then $\mu(e^{\epsilon_{0}||\cdot||_{H}^{2}})<\infty$ for some $\epsilon_{0}>0$.\label{itemc2}
\item If there exists $q>0$ such that $\inf_{n}b_{n}^{2q}\lambda_{n}^{q-1}>0$, then $\mu$ has full support on $H_{0}$.\label{itemc3}
\end{enumerate}
\end{corollary}
\noindent\emph{Proof.} Let $(V,\ ||\cdot||_{V})=(\mathscr{D}(A^{\frac{1}{2}}),\ ||A^{\frac{1}{2}}\cdot||)$. Since $\lambda_{0}>0$ and $B(0)\in L_{HS}(H)$, by (H\ref{itemH1}) equation (\ref{equ1}) has a strong solution and $P_{t}$ is a Feller semigroup.
By It\^{o}'s formula and $\lambda_{0}>K^{2}_{1}+2K_{1}$, there exists a constant $c>0$ such that
\begin{equation*}
\mathrm{d} ||x_{t}||^{2} \leq \left( c -2(1-\frac{K^{2}_{1}+2K_{1}}{\lambda_{0}})||x_{t}||^{2}_{V} + 2||F(0)||\cdot||x_{t}||\right)\mathrm{d} t + 2\langle B(x_{t})\mathrm{d} W_{t},x_{t}\rangle
\end{equation*}
and
\begin{equation*}
\begin{split}
\mathrm{d} e^{\epsilon ||x_{t}||^{2}}\leq &\ \epsilon e^{\epsilon ||x_{t}||^{2}}\left(c-2(1-\frac{K^{2}_{1}+2K_{1}}{\lambda_{0}})||x_{t}||^{2}_{V} +\frac{\epsilon^{2}}{4}||B^{*}(x_{t})x_{t}||^{2}+2||F(0)||\cdot||x_{t}||\right)\mathrm{d} t \\
&\ + 2\epsilon e^{\epsilon ||x_{t}||^{2}}\langle B(x_{t})\mathrm{d} W_{t},x_{t}\rangle
\end{split}
\end{equation*}
for sufficiently small $\epsilon$. By the H\"{o}lder inequality, and noting that $||\cdot||_{V}$ is a compact function on $H$ (i.e. its level sets are compact), by the standard argument of Theorem 1.2 in \cite{Wang2007} one can prove (\ref{itemc1}) and (\ref{itemc2}). For (\ref{itemc3}), $\inf_{n}b_{n}^{2q}\lambda_{n}^{q-1}>0$ implies that for every $m\geq 1$ there exists a constant $c(m)>0$ such that
\begin{equation}
||\cdot||^{2}_{H_{0}} \leq c(m)||\cdot||^{2}+ \frac{1}{m}||\cdot||_{V}^{2},
\end{equation}
and by It\^{o}'s formula one gets the following inequality,
\begin{equation}
\mathrm{d} ||x_{t}(x)-x||^{2} \leq -||x_{t}(x)-x||_{V}^{2}\mathrm{d} t + (c_{1}+c_{2}||x_{t}(x)||^{2})\mathrm{d} t + 2\langle B(x_{t})\mathrm{d} W_{t},x_{t}-x\rangle,
\end{equation}
where $x_{t}(x)$ denotes the process starting from $x$ and $c_{1}$, $c_{2}$ are constants depending on $x$. Using the Harnack inequality (\ref{Harineq}), (\ref{itemc3}) can be proved along the lines of \cite{WangX2011}.
\qed
\begin{corollary}
Assume that (H\ref{itemH1}) to (H\ref{itemH5}) hold and that $F$ and $B$ are deterministic and time-independent; then for any $t>0$, $P_{t}$ is $H_{0}$-strong Feller. Let $\mu$ be the $P_{t}$-subinvariant probability measure with full support on $H_{0}$ as in \cite{RoWang2010}; then the transition density $p_{t}(x,y)$ w.r.t. $\mu$ satisfies
\begin{equation}
||p_{t}(x,\cdot)||_{L^{p}(\mu)}\leq\ \left\{ \int_{H_{0}}{\exp{\left[-\frac{K_{2}\sqrt{q}(\sqrt{q}-1)||x-y||_{H_{0}}^{2}} {4\delta_{q}[(\sqrt{q}-1)\rho-\delta_{q}](1-e^{-K_{2}t})}\right]}\mu(dy)}\right\}^{-\frac{1}{q}}
\end{equation}
for all $1<p<\frac{(K_{3}+\rho)^{2}}{(K_{3}+\rho)^{2}-1}$, where $q=\frac{p}{p-1}$.
\end{corollary}
\noindent\emph{Proof.} This follows the proofs in \cite{Wang2007,RoWang2010,WangX2011}. \qed

\textbf{Acknowledgement}~The author would like to thank Professor Zdzislaw Brzezniak for providing him with the article \cite{Brzez97}, and Professor Feng-Yu Wang for his useful comments.

\paragraph{\Large{Appendix}}
\paragraph{A. Proof of Remark \ref{remark1}}\ \\
\emph{Proof of (1):} since $\bigcup_{n} H_{n}$ is a core of $B_{0}^{-2}$, for any $x\in \mathscr{D}(B_{0}^{-2})$ choose $\{x_{n}\}$ such that $x_{n}\rightarrow x$ and $B_{0}^{-2}x_{n}\rightarrow B_{0}^{-2}x$; hence $B_{0}^{-1}x_{n}\rightarrow B_{0}^{-1}x$ as $n \rightarrow +\infty$. Similarly, choose a sequence $\{y_{n}\}$ with the same property. Therefore
\begin{align*}
&||B_{0}^{-1}[(B(t,x_{n})-B(t,y_{n}))-(B(t,x_{m})-B(t,y_{m}))]||_{HS}^{2}\\
\leq &2K_{2}(||B_{0}^{-1}(x_{n}-x_{m})||^{2} +||B_{0}^{-1}(y_{n}-y_{m})||^{2}) -4\langle F(t,x_{n})-F(t,x_{m}),B^{-2}_{0}(x_{n}-x_{m})\rangle\\
&-4\langle F(t,y_{n})-F(t,y_{m}),B^{-2}_{0}(y_{n}-y_{m})\rangle,
\end{align*}
and by the continuity of $F$ we deduce that $\{B(t,x_{n})-B(t,y_{n})\}$ forms a Cauchy sequence in $L_{HS}(H,H_{0})$.
Note that $B(t,x_{n})-B(t,y_{n})$ converges to $B(t,x)-B(t,y)$ in $L_{HS}(H)$; since $B_{0}^{-1}$ is closed, we have $B(t,x)-B(t,y)\in L_{HS}(H,H_{0})$,
\begin{equation*}
\lim_{n\rightarrow +\infty}(B(t,x_{n})-B(t,y_{n}))=B(t,x)-B(t,y),
\end{equation*}
and
\begin{equation*}
2\langle F(t,x)-F(t,y),B_{0}^{-2}(x-y)\rangle+||B_{0}^{-1}(B(t,x)-B(t,y))||_{HS}^{2}\leq K_{2}||B_{0}^{-1}(x-y)||^{2}.
\end{equation*}
\\\emph{Proof of (2):} we assume $\rho(t)=1$. By definition, it is clear that $B_{0}$ is one-to-one and has dense range. Since
$$B(t,x)B(t,x)^{*}\geq B_{0}^{2} \Leftrightarrow ||B(t,x)^{*}y||\geq||B_{0}y||,\ \forall y\in H, $$
Proposition B.1 in \cite{DPZ1992} implies that $\mathrm{Ran}\,B(t,x) \supset \mathrm{Ran}\,B_{0}$ and
$$||z||\geq||B_{0}(B(t,x)^{*})^{-1}z||,\ \forall z \in \mathrm{Ran}(B(t,x)^{*}).$$
Since for any $z\in \mathrm{Ran}(B(t,x)^{*})$ and $y \in \mathrm{Ran}(B(t,x))$ we have
\begin{equation}
\langle B(t,x)^{-1}y,z\rangle=\langle B(t,x)B(t,x)^{-1}y,\ (B(t,x)^{*})^{-1}z\rangle=\langle y,(B(t,x)^{*})^{-1}z\rangle,
\end{equation}
it follows that
\begin{equation}
z\in\mathscr{D}((B(t,x)^{-1})^{*}),\ (B(t,x)^{-1})^{*}z=(B(t,x)^{*})^{-1}z.
\end{equation}
On the other hand, for any $z\in \mathscr{D}((B(t,x)^{-1})^{*})$ there exists $z^{*}$ such that
\begin{equation}
\langle B(t,x)^{-1}y,z\rangle=\langle y,z^{*}\rangle,\ \forall y \in \mathscr{D}(B(t,x)^{-1});
\end{equation}
letting $u=B(t,x)^{-1}y$, we get $\langle u,z\rangle=\langle B(t,x)u,z^{*}\rangle$, so $z=B(t,x)^{*}z^{*}$ and
$$(B(t,x)^{*})^{-1}z=z^{*}=(B(t,x)^{-1})^{*}z;$$
hence $\mathscr{D}((B(t,x)^{-1})^{*})=\mathscr{D}((B(t,x)^{*})^{-1})$. Therefore, $||z||\geq ||B_{0}(B(t,x)^{-1})^{*}z||$ for all $z\in\mathrm{Ran}\,B(t,x)^{*}$.
Since $\mathrm{Ran}(B(t,x)^{*})$ is dense in $H$, $B_{0}(B(t,x)^{-1})^{*}$ can be extended to a bounded operator on $H$, and for all $z\in H$ and $y\in H$ there is a sequence $\{z_{n}\}_{n=1}^{+\infty}$ with $\lim_{n}z_{n}=z$ such that $\lim_{n}B_{0}(B(t,x)^{-1})^{*}z_{n}=B_{0}(B(t,x)^{-1})^{*}z$; then
\begin{equation}
\begin{split}
&\langle B_{0}(B(t,x)^{-1})^{*}z,y\rangle=\lim_{n}\langle B_{0}(B(t,x)^{-1})^{*}z_{n},y\rangle\\
=&\lim_{n}\langle z_{n},B(t,x)^{-1}B_{0}y\rangle=\langle z,B(t,x)^{-1}B_{0}y\rangle,
\end{split}
\end{equation}
hence $||B(t,x)^{-1}B_{0}y||\leq ||y||$ for all $y\in H$; letting $z=B_{0}y$, we get $||B(t,x)^{-1}z||\leq ||B_{0}^{-1}z||$ for all $z\in \mathscr{D}(B_{0}^{-1})$. By Proposition B.1 in \cite{DPZ1992} and the proof above, the converse direction is straightforward. \qed

\paragraph{B. For Lemma \ref{lemma_strongsolution}}\ \\
(1) \emph{Local monotonicity}. For any $v_{1},v_{2}\in V$,
\begin{equation}
-2\,{}_{V^{*}}\langle A_{0,\epsilon}(v_{1}-v_{2}),v_{1}-v_{2}\rangle_{V}= -2||A_{0,\epsilon}^{\frac{1}{2}}(v_{1}-v_{2})||_{H_{0}}^{2}=-2||v_{1}-v_{2}||_{V}^{2},
\end{equation}
\begin{equation}
\begin{split}
&2\,{}_{V^{*}}\langle F(t,v_{1},v_{2}),v_{1}-v_{2}\rangle_{V} +||\hat{B}(t,v_{1})-\hat{B}(t,v_{2})||_{L_{HS}(H,H_{0})}^{2}\\
=&2\langle F(t,v_{1},v_{2}),B_{0}^{-2}(v_{1}-v_{2})\rangle +||B_{0}^{-1}(\hat{B}(t,v_{1})-\hat{B}(t,v_{2}))||_{HS}^{2}\\
\leq & K_{2}||v_{1}-v_{2}||_{H_{0}}^{2},
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
&\frac{1}{\xi_{t}}\,{}_{V^{*}}\langle\hat{B}(t,v_{1})G_{n}(t,v_{1})-\hat{B}(t,v_{2})G_{n}(t,v_{2}),v_{1}-v_{2}\rangle_{V}\\
=&\frac{1}{\xi_{t}}\,\langle(\hat{B}(t,v_{1})-\hat{B}(t,v_{2}))G_{n}(t,v_{1})+\hat{B}(t,v_{2})G_{n}(t,v_{1},v_{2}),B_{0}^{-2}(v_{1}-v_{2})\rangle\\
\leq&
\frac{1}{\xi_{t}}||B_{0}^{-1}(\hat{B}(t,v_{1})-\hat{B}(t,v_{2}))||\cdot||G_{n}(t,v_{1})||\cdot||v_{1}-v_{2}||_{H_{0}}\\
&+\frac{1}{\xi_{t}}||B_{0}^{-1}\hat{B}(t,v_{2})||\cdot||G_{n}(t,v_{1},v_{2})||\cdot||v_{1}-v_{2}||_{H_{0}}.
\end{split}
\end{equation}
Note that, by (H\ref{itemH1}),
\begin{equation*}
\begin{split}
&||B_{0}^{-1}(\hat{B}(t,v_{1})-\hat{B}(t,v_{2}))||_{HS}^{2}\leq K_{2}||v_{1}-v_{2}||_{H_{0}}^{2}-2\langle F(t,v_{1},v_{2}),B_{0}^{-2}(v_{1}-v_{2})\rangle\\
\leq &K_{2}||v_{1}-v_{2}||_{H_{0}}^{2}+ 2K_{1}||B_{0}A_{0,\epsilon}^{-\frac{1}{2}}A_{0,\epsilon}^{\frac{1}{2}}(v_{1}-v_{2})||_{H_{0}} \cdot||B_{0}^{-1}A_{0,\epsilon}^{-\frac{1}{2}}A_{0,\epsilon}^{\frac{1}{2}}(v_{1}-v_{2})||_{H_{0}}\\
\leq &K_{2}||v_{1}-v_{2}||_{H_{0}}^{2}+2K_{1}\left(\sup_{n}\frac{b_{n}}{\sqrt{\lambda_{n}+\epsilon b_{n}^{-2}}}\right)\left(\sup_{n}\frac{1}{b_{n}\sqrt{\lambda_{n}+\epsilon b_{n}^{-2}}}\right)||v_{1}-v_{2}||^{2}_{V}\\
\leq &K_{2}||v_{1}-v_{2}||_{H_{0}}^{2} + \frac{2}{\epsilon}K_{1}||B_{0}||^{2}||v_{1}-v_{2}||_{V}^{2};
\end{split}
\end{equation*}
hence
\begin{equation}
\begin{split}
&\frac{1}{\xi_{t}}||B_{0}^{-1}(\hat{B}(t,v_{1})-\hat{B}(t,v_{2}))||\cdot||G_{n}(t,v_{1})||\cdot||v_{1}-v_{2}||_{H_{0}}\\
\leq & \frac{n}{\xi_{t}}(\sqrt{K_{2}}||v_{1}-v_{2}||_{H_{0}}+\sqrt{\frac{2}{\epsilon}K_{1}}||B_{0}||\cdot||v_{1}-v_{2}||_{V})||v_{1}-v_{2}||_{H_{0}}\\
\leq &(\frac{n}{\xi_{t}}\sqrt{K_{2}}+\frac{n^{2}K_{1}||B_{0}||^{2}}{\epsilon \xi_{t}^{2}\delta^{2}})||v_{1}-v_{2}||_{H_{0}}^{2}+\delta^{2}||v_{1}-v_{2}||_{V}^{2},
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
&\frac{1}{\xi_{t}}||B_{0}^{-1}\hat{B}(t,v_{2})||\cdot||G_{n}(t,v_{1},v_{2})||\cdot||v_{1}-v_{2}||_{H_{0}}\\
\leq
&\frac{1}{\xi_{t}}(\sqrt{K_{2}}||v_{2}||_{H_{0}} +\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||\cdot||v_{2}||_{V})||v_{1}-v_{2}||^{2}_{H_{0}}.
\end{split}
\end{equation}
Therefore, we have
\begin{equation*}
\begin{split}
&2\,{}_{V^{*}}\langle A_{n,\epsilon}(t,v_{1})-A_{n,\epsilon}(t,v_{2}),v_{1}-v_{2}\rangle_{V} +||\hat{B}(t,v_{2})-\hat{B}(t,v_{1})||_{L_{HS}(H,H_{0})}^{2}\\
\leq&\left[K_{2}+\frac{2n\sqrt{K_{2}}-2}{\xi_{t}} +\frac{n^{2}K_{1}||B_{0}||^{2}}{\epsilon^{2}\xi_{t}^{2}\delta^{2}} +\frac{2}{\xi_{t}}(\sqrt{K_{2}}||v_{2}||_{H_{0}}^{2} +\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||\cdot||v_{2}||_{V}^{2})\right]\times\\
&\times||v_{1}-v_{2}||_{H_{0}}^{2}-2(1-\delta^{2})||v_{1}-v_{2}||_{V}^{2}.
\end{split}
\end{equation*}
\\(2) \emph{Coercivity:}
\begin{equation}
-2\,{}_{V^{*}}\langle A_{0,\epsilon}v,v\rangle_{V}=-2||v||_{V}^{2},\ ||B_{0}^{-1}\hat{B}(t,v)||_{HS}^{2} + 2\langle F(t,v,0),B_{0}^{-2}v\rangle\leq K_{2}||v||^{2},
\end{equation}
\begin{equation}
\begin{split}
\frac{2}{\xi_{t}}\,{}_{V^{*}}\langle\hat{B}(t,v)G_{n}(t,v),v\rangle_{V} \leq &\frac{2}{\xi_{t}}||B_{0}^{-1}\hat{B}(t,v)||\cdot||G_{n}(t,v)||\cdot||v||_{H_{0}}\\
\leq&\frac{2n}{\xi_{t}}(K_{2}||v||^{2}+\frac{2K_{1}}{\epsilon}||B_{0}||^{2}||v||_{V}^{2})^{\frac{1}{2}}||v||_{H_{0}}\\
\leq&\frac{2n}{\xi_{t}}(\sqrt{K_{2}}||v||_{H_{0}}+\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||\cdot||v||_{V})||v||_{H_{0}}\\
\leq&(\frac{2n\sqrt{K_{2}}}{\xi_{t}}+\frac{2n^{2}K_{1}||B_{0}||^{2}}{\epsilon \xi_{t}^{2}\delta^{2}})||v||_{H_{0}}^{2}+\delta^{2}||v||_{V}^{2};
\end{split}
\end{equation}
hence
\begin{equation}
\begin{split}
&2\,{}_{V^{*}}\langle A_{n,\epsilon}(t,v),v\rangle_{V} +
||\hat{B}(t,v)||_{L_{HS}(H,H_{0})}^{2}\\
\leq& -2(1-\delta^{2})||v||_{V}^{2}+(\frac{n\sqrt{K_{2}}-2}{\xi_{t}}+\frac{n^{2}K_{1}}{\epsilon^{2}\xi_{t}^{2}\delta^{2}})||v||_{H_{0}}^{2}.
\end{split}
\end{equation}
\\(3) \emph{Growth:}
\begin{equation}
||A_{0,\epsilon}v||^{2}_{V^{*}}=||v||_{V}^{2},\ ||\frac{1}{\xi_{t}}v||_{V^{*}}=\frac{1}{\xi_{t}}||v||_{V^{*}},\ ||F(t,v,0)||_{V^{*}} \leq \frac{K_{1}}{\sqrt{\epsilon}}||v||,
\end{equation}
since, by (H\ref{itemH1}),
\begin{equation}
\begin{split}
|\,{}_{V^{*}}\langle F(t,v,0),z\rangle_{V}|=|\langle F(t,v,0),B_{0}^{-2}z\rangle| \leq K_{1}||v||\cdot||B_{0}^{-2}z||\leq \frac{K_{1}}{\sqrt{\epsilon}}||v||\cdot||z||_{V}.
\end{split}
\end{equation}
Moreover,
\begin{equation}
\begin{split}
||\frac{1}{\xi_{t}}\hat{B}(t,v)G_{n}(t,v)||_{V^{*}} &\leq\frac{||B_{0}||}{\sqrt{\epsilon}\xi_{t}}||\hat{B}(t,v)G_{n}(t,v)||_{H_{0}}\\
&\leq\frac{||B_{0}||}{\sqrt{\epsilon}\xi_{t}}||B_{0}^{-1}\hat{B}(t,v)||\cdot||G_{n}(t,v)||\\
&\leq\frac{||B_{0}||}{\sqrt{\epsilon}\xi_{t}}(\sqrt{K_{2}}||v||_{H_{0}} +\sqrt{\frac{2K_{1}}{\epsilon}}||B_{0}||\cdot||v||_{V})||v||_{H_{0}},
\end{split}
\end{equation}
so we have
\begin{equation}
||A_{n,\epsilon}(t,v)||_{V^{*}}^{2}\leq \left(\frac{||B_{0}||^{2}}{\epsilon\xi_{t}}K_{2} +\left(1+\frac{||B_{0}||^{4}K_{1}}{\epsilon\xi_{t}^{2}}\right)||v||_{V}^{2}\right)(1+||v||_{H_{0}}^{4}).
\end{equation}
\\(4) \emph{The proof of Lemma 2.2 of \cite{LiuR10}:} we give new estimates to replace inequalities (2.3) and (2.4) there; for convenience, we use the notation of that paper.
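As a sketch of how the elementary inequality enters (the display is only an illustration of the splitting, in the notation of \cite{LiuR10} and with $\delta>0$ arbitrary):
\begin{equation*}
||X_{s}^{(n)}||_{V}\,||X_{s}^{(n)}||_{H}^{p-1}\leq \frac{1}{2\delta}||X_{s}^{(n)}||_{H}^{p}+\frac{\delta}{2}\,||X_{s}^{(n)}||_{V}^{2}\,||X_{s}^{(n)}||_{H}^{p-2},
\end{equation*}
so that, for small $\delta$, the $||X_{s}^{(n)}||_{V}^{2}||X_{s}^{(n)}||_{H}^{p-2}$ part can be absorbed into the coercive term.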
In (2.3), we only have to replace $f_{s}\cdot||X_{s}^{(n)}||_{H}^{p-2}$ by $||X_{s}^{(n)}||_{V}||X_{s}^{(n)}||_{H}\cdot||X_{s}^{(n)}||_{H}^{p-2}$ and use the elementary inequality \begin{equation} a\cdot b \leq \frac{a^{2}}{2\delta}+ \frac{\delta}{2}b^{2}, \quad \forall \delta>0, \end{equation} noting that in our case $\alpha = 2$. For (2.4), one can use the following estimate: \begin{equation*} \begin{split} &\mathbb{E}\left(\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{2p-2}||B(s,X_{s}^{(n)})||_{2}^{2}}\mathrm{d} s\right)^{\frac{1}{2}}\\ \leq&\ \mathbb{E}\left(\int_{0}^{\tau_{R}^{n}}{C||X_{s}^{(n)}||_{H}^{2p-2}(||X_{s}^{(n)}||_{V}||X_{s}^{(n)}||_{H}+||X_{s}^{(n)}||_{H}^{2})\mathrm{d} s}\right)^{\frac{1}{2}}\\ \leq&\ C(\delta_{1})\mathbb{E}\left(\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{2p-2}||X_{s}^{(n)}||_{H}^{2}\mathrm{d} s}\right)^{\frac{1}{2}} +\sqrt{\delta_{1}}\mathbb{E}\left(\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{2p-2}||X_{s}^{(n)}||_{V}^{2}\mathrm{d} s}\right)^{\frac{1}{2}}\\ \leq&\ \delta_{2}\mathbb{E}\sup_{s\in[0,\tau_{R}^{n}]}{||X_{s}^{(n)}||_{H}^{p}} +C(\delta_{1},\delta_{2})\mathbb{E}{\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{p}\mathrm{d} s}}\\ &+\sqrt{\delta_{1}}\mathbb{E}\sup_{s\in[0,\tau_{R}^{n}]}{||X_{s}^{(n)}||_{H}^{\frac{p}{2}}} \left(\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{p-2}||X_{s}^{(n)}||_{V}^{2}\mathrm{d} s}\right)^{\frac{1}{2}}\\ \leq&\ (\delta_{2}+\delta_{3})\mathbb{E}\sup_{s\in[0,\tau_{R}^{n}]}{||X_{s}^{(n)}||_{H}^{p}} +\frac{\delta_{1}}{4\delta_{3}}\mathbb{E}{\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{p-2}||X_{s}^{(n)}||_{V}^{2}\mathrm{d} s}} +C(\delta_{1},\delta_{2})\mathbb{E}{\int_{0}^{\tau_{R}^{n}}{||X_{s}^{(n)}||_{H}^{p}\mathrm{d} s}}, \end{split} \end{equation*} choose $\delta_{2}$, $\delta_{3}$ small enough and
$\delta_{1}$ such that $\frac{\delta_{1}}{4\delta_{3}}$ is small enough; using $\alpha = 2$ again, Gronwall's lemma can be applied as in \cite{LiuR10}. \begin{thebibliography}{99} \bibitem{ArY81} Araki, H. and Yamagami, S. (1981). An inequality for Hilbert-Schmidt norm. \textit{Comm. Math. Phys.} \textbf{81}, 89--96. \bibitem{Brzez97} Brzezniak, Z. (1997). On stochastic convolution in Banach spaces and applications. \textit{Stochastics Stochastics Rep.} \textbf{61}, 245--295. \bibitem{DPRWang2009} Da Prato, G., R\"{o}ckner, M. and Wang, F.-Y. (2009). Singular stochastic equations on Hilbert spaces: Harnack inequalities for their transition semigroups. \textit{J. Funct. Anal.} \textbf{257}, 992--1017. \bibitem{DPZ1992} Da Prato, G. and Zabczyk, J. (1992). \textit{Stochastic Equations in Infinite Dimensions}. \textit{Encyclopedia of Mathematics and its Applications} \textbf{45}. Cambridge University Press, Cambridge, UK. \bibitem{Liu09} Liu, W. (2009). Harnack inequality and applications for stochastic evolution equations with monotone drifts. \textit{J. Evol. Equ.} \textbf{9}, 747--770. \bibitem{LiuR10} Liu, W. and R\"{o}ckner, M. (2010). SPDE in Hilbert space with locally monotone coefficients. \textit{J. Funct. Anal.} \textbf{259}, 2902--2922. \bibitem{LiuW08} Liu, W. and Wang, F.-Y. (2008). Harnack inequality and strong Feller property for stochastic fast-diffusion equations. \textit{J. Math. Anal. Appl.} \textbf{342}, 651--662. \bibitem{IW1989} Ikeda, N. and Watanabe, S. (1989). \textit{Stochastic Differential Equations and Diffusion Processes}, 2nd ed. \textit{North-Holland Mathematical Library} \textbf{24}. North-Holland, Amsterdam. \bibitem{Ouyang2009a} Ouyang, S.-X. (2009). Harnack inequalities and applications for stochastic equations. Ph.D. thesis, Bielefeld University. Available at \url{http://bieson.ub.uni-bielefeld.de/volltexte/2009/1463/pdf/ouyang.pdf}. \bibitem{Ouyang2009b} Ouyang, S.-X. (2009).
Non-time-homogeneous generalized Mehler semigroups and applications. arXiv:1009.5314. \bibitem{Ouyang2011a} Ouyang, S.-X. (2011). Harnack inequalities and applications for multivalued stochastic evolution equations. \textit{Infin. Dimens. Anal. Quantum Probab. Relat. Top.} \textbf{14}, 261--278. \bibitem{OuyangRW2012} Ouyang, S.-X., R\"ockner, M. and Wang, F.-Y. (2012). Harnack inequalities and applications for Ornstein-Uhlenbeck semigroups with jump. \textit{Potential Anal.} \textbf{36}, 301--315. \bibitem{PRo2007} Pr\'{e}v\^{o}t, C. and R\"{o}ckner, M. (2007). \textit{A Concise Course on Stochastic Partial Differential Equations}. \textit{Lecture Notes in Mathematics} \textbf{1905}. Springer, Berlin. \bibitem{RoWang2010} R\"{o}ckner, M. and Wang, F.-Y. (2010). Log-Harnack inequality for stochastic differential equations in Hilbert spaces and its consequences. \textit{Infin. Dimens. Anal. Quantum Probab. Relat. Top.} \textbf{13}, 27--37. \bibitem{Wang97} Wang, F.-Y. (1997). Logarithmic {S}obolev inequalities on noncompact {R}iemannian manifolds. \textit{Probab. Theory Related Fields} \textbf{109}, 417--424. \bibitem{Wang2007} Wang, F.-Y. (2007). Harnack inequality and applications for stochastic generalized porous media equations. \textit{Ann. Probab.} \textbf{35}, 1333--1350. \bibitem{Wang2011} Wang, F.-Y. (2011). Harnack inequality for SDE with multiplicative noise and extension to Neumann semigroup on nonconvex manifolds. \textit{Ann. Probab.} \textbf{39}, 1449--1467. \bibitem{WangWX2011} Wang, F.-Y., Wu, J.-L. and Xu, L. (2011). Log-Harnack inequality for stochastic Burgers equations and applications. \textit{J. Math. Anal. Appl.} \textbf{384}, 151--159. \bibitem{WangX2011} Wang, F.-Y. and Xu, L. (2011). Derivative formula and applications for hyperdissipative stochastic Navier-Stokes/Burgers equations. \textit{Infin. Dimens. Anal. Quantum Probab. Relat. Top.} (to appear). \bibitem{WangY2011} Wang, F.-Y. and Yuan, C. (2011).
Harnack inequalities for functional SDEs with multiplicative noise and applications. \textit{Stoch. Proc. Appl.} \textbf{121}, 2692--2710. \bibitem{WangZTS2010} Wang, F.-Y. and Zhang, T.-S. (2010). Gradient estimates for stochastic evolution equations with non-Lipschitz coefficients. \textit{J. Math. Anal. Appl.} \textbf{365}, 1--11. \end{thebibliography} \end{document}
\begin{document} \title{Accuracy of discrete approximation for integral functionals of Markov processes} \author{Iu. V. Ganychenko} \address{Department of Probability Theory, Statistics and Actuarial Mathematics, Kyiv National Taras Shevchenko University, Kyiv 01601, Ukraine} \email{iurii\[email protected]} \author{V. P. Knopova} \address{V.M.Glushkov Institute of Cybernetics NAS of Ukraine, Acad. 40, Glushkov Ave., Kyiv 03187, Ukraine} \email{[email protected]} \author{A. M. Kulik} \address{Institute of Mathematics, Ukrainian National Academy of Sciences, 01601 Tereshchenkivska str. 3, Kyiv, Ukraine} \email{[email protected]} \subjclass[2010]{60H07, 60H35} \keywords{Markov process, integral functional, approximation rate.} \maketitle \begin{abstract} \noindent The article is devoted to the estimation of the rate of convergence for discrete approximations of integral functionals of a Markov process. Under the assumption that the given Markov process admits a transition probability density which is differentiable in $t$ and whose derivative has an integrable upper bound of a certain type, we derive accuracy rates for strong and weak approximations of the functionals by Riemann sums. Some examples are provided. \end{abstract} \section{Introduction} Let $X_t$, $t\geq 0$, be a Markov process with values in $\mathbb{R}^d$. Consider an integral functional of the form \begin{equation}\label{IT} I_T(h)=\int_0^Th(X_t)\, dt, \end{equation} where $h:\mathbb{R}^d\to \mathbb{R}$ is a given measurable function. In this paper we investigate the accuracy of the approximation of $I_T(h)$ by the Riemann sums $$ I_{T,n}(h)={T\over n}\sum_{k=0}^{n-1}h(X_{(kT)/n}),\quad n\geq 1. $$ The function $h$ is only assumed to be bounded; i.e., we do not impose any regularity assumptions on $h$.
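As a quick numerical illustration (ours, not from the paper), the following minimal Monte Carlo sketch compares $I_T(h)$ with its Riemann-sum approximation $I_{T,n}(h)$ for a standard Brownian motion and the occupation-time functional $h=1_{[0,\infty)}$; the fine-grid value serves as the reference for $I_T(h)$, and all grid sizes are ad hoc.

```python
import math
import random

random.seed(42)

def brownian_path(T, m, x0=0.0):
    """Standard Brownian motion sampled on a fine grid of m steps."""
    dt = T / m
    path = [x0]
    for _ in range(m):
        path.append(path[-1] + math.sqrt(dt) * random.gauss(0.0, 1.0))
    return path

def riemann_error(path, T, n):
    """|I_T(h) - I_{T,n}(h)| with h = 1_{[0,oo)} (an occupation-time functional).

    The left-point sum over the full fine grid of m points plays the role of
    the reference value of I_T(h); the coarse sum uses n subsampled points."""
    m = len(path) - 1
    h = lambda y: 1.0 if y >= 0.0 else 0.0
    fine = (T / m) * sum(h(y) for y in path[:m])
    step = m // n  # assumes n divides m
    coarse = (T / n) * sum(h(path[k * step]) for k in range(n))
    return abs(fine - coarse)

T, m, paths = 1.0, 4096, 200
err = {16: 0.0, 64: 0.0}
for _ in range(paths):
    p = brownian_path(T, m)
    for n in err:
        err[n] += riemann_error(p, T, n) / paths

for n in (16, 64):
    print(n, round(err[n], 4))
```

Consistently with the strong rate $(D_{T,\beta}(n))^{1/2}$ of Section 2 (here $\beta=1$, i.e. roughly $n^{-1/2}$ up to a logarithm), the mean error at $n=64$ comes out noticeably smaller than at $n=16$.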
In particular, under this assumption the class of integral functionals which we investigate contains the class of \emph{occupation time} type functionals (for which $h=1_A$ for a fixed $A\in \mathcal{B}(\mathbb{R}^d)$), which are of particular importance. Integral functionals arise naturally in a wide class of stochastic representation formulae and applied stochastic models. It is very typical that exact calculation of the respective probabilities and/or expectations is hardly possible, which naturally suggests the use of approximation methods. As an example of such a situation, we mention the so-called \emph{occupation time option} \cite{Linetsky}, whose price is given by an expression similar to the Feynman-Kac formula. The exact calculation of the price is possible only in the particular case when the underlying process is a L\'evy process which is ``spectrally negative'' (i.e. does not have positive jumps, see \cite{Guerin}), and the practically more realistic cases of general L\'evy processes, solutions to L\'evy driven SDEs, etc. can be treated only numerically. To estimate the rate of convergence of the respective Monte Carlo approximation methods, one needs to estimate the accuracy of the various approximation steps involved in the algorithm. In this paper we focus on this problem for the discrete approximation of integral functionals of type \eqref{IT}. For diffusion processes, this problem was studied in \cite{Gobet} and recently in \cite{Kohatsu-Higa}, by means of methods involving the particular structural features of the process, e.g. Malliavin calculus tools. On the other hand, in two recent papers \cite{kul-gan}, \cite{kul-gan2}, an alternative method was developed, which exploits only the basic Markov structure of the process and the additive structure of the integral functional and its discrete approximations. One of the aims of this paper is to extend this method to a wider class of Markov processes.
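To make the pricing discussion concrete, here is a toy Monte Carlo sketch (our illustration, not from the paper): a Feynman-Kac-type ``discount factor'' $\mathds{E}\,e^{-r I_T(1_A)}$ for Brownian motion, with $A=[0,\infty)$, evaluated once from a fine-grid occupation time and once from its coarse Riemann-sum approximation on the same paths. All parameters are ad hoc.

```python
import math
import random

random.seed(7)

def occupation_discount(n_coarse, m_fine=2048, T=1.0, r=1.0, paths=400):
    """Monte Carlo estimate of E[exp(-r * I_T(1_{[0,oo)}))] for Brownian motion.

    Returns the estimate based on the fine grid (m_fine left points, used as
    reference) and the one based on the coarse Riemann sum (n_coarse points),
    both computed on the same simulated paths."""
    acc_fine = acc_coarse = 0.0
    dt = T / m_fine
    step = m_fine // n_coarse  # assumes n_coarse divides m_fine
    for _ in range(paths):
        x, occ_fine, occ_coarse = 0.0, 0.0, 0.0
        for k in range(m_fine):
            if x >= 0.0:
                occ_fine += dt
                if k % step == 0:
                    occ_coarse += T / n_coarse
            x += math.sqrt(dt) * random.gauss(0.0, 1.0)
        acc_fine += math.exp(-r * occ_fine)
        acc_coarse += math.exp(-r * occ_coarse)
    return acc_fine / paths, acc_coarse / paths

fine, coarse = occupation_discount(n_coarse=32)
print(round(fine, 3), round(coarse, 3))
```

The two estimates nearly agree, which is the kind of weak-approximation accuracy quantified in Section 2.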
To explain our goal in more detail, let us formulate our principal assumption on the process $X$. \begin{itemize} \item[\textbf{X.}] The transition probability $P_t(x,dy)$ of $X$ admits a density $p_t(x,y)$ w.r.t. the Lebesgue measure on $\mathbb{R}^d$. This density is differentiable w.r.t. $t$, and its derivative possesses the bound \begin{equation}\label{der_bound} \Big|\partial_tp_t(x,y)\Big|\leq B_{T,X}t^{-\beta} q_{t,x}(y), \quad t\leq T, \end{equation} for some $B_{T,X}\geq 1$, $\beta\geq 1$, and a measurable function $q$ such that for each fixed $t,x$ the function $q_{t,x}(\cdot)$ is a distribution density. \end{itemize} In \cite{kul-gan}, \cite{kul-gan2}, a condition similar to \textbf{X} was formulated with $\beta=1$. Such a condition is verified for the particularly important classes of diffusion processes and symmetric $\alpha$-stable processes. However, in some natural cases one can expect to get \eqref{der_bound} only with $\beta>1$. As the simplest and most illustrative example one can take an $\alpha$-stable process with drift: $$X_t=ct+Z_t, $$ where $c\not=0$ and $Z$ is a (e.g. symmetric) $\alpha$-stable process. Then $$ p_t(x,y)=t^{-d/\alpha}g^{(\alpha)}\left(\frac{y-x-ct}{t^{1/\alpha}}\right), $$ where $g^{(\alpha)}$ denotes the distribution density of $Z_1$. A straightforward calculation shows that \eqref{der_bound} holds true with $\beta=\max(1, 1/\alpha)$, which is strictly greater than 1 when $\alpha<1$. Since L\'evy noises are now used extensively in various applied models, the simple calculation made above shows that it is highly desirable to extend the results of \cite{kul-gan} and \cite{kul-gan2}, which deal with the ``diffusive like'' class of processes satisfying \textbf{X} with $\beta=1$, to the more general case of \textbf{X} with arbitrary $\beta\geq 1$.
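This exponent can be checked numerically (a sketch of ours, not from the paper). For $\alpha=1/2$ the one-sided stable (L\'evy) density is explicit, $g(x)=(2\pi x^{3})^{-1/2}e^{-1/(2x)}$ for $x>0$, so for $X_t=ct+Z_t$ the mass $\int|\partial_t p_t(y)|\,dy$ can be computed directly and its small-$t$ scaling $t^{-\beta}$, $\beta=1/\alpha=2$, estimated; the grid sizes below are ad hoc.

```python
import math

def levy_pdf(x):
    """Density of the standard one-sided 1/2-stable (Levy) law, x > 0."""
    if x <= 0.0:
        return 0.0
    return math.exp(-0.5 / x) / math.sqrt(2.0 * math.pi * x ** 3)

def p(t, y, c=1.0):
    """Density of X_t = c*t + Z_t with Z an alpha = 1/2 stable process;
    stable scaling Z_t = t^2 Z_1 (in law) gives p_t(y) = t^{-2} g((y - c t)/t^2)."""
    return levy_pdf((y - c * t) / t ** 2) / t ** 2

def dt_mass(t, c=1.0, umax=2000.0, du=0.01):
    """Approximate \\int |d/dt p_t(y)| dy by a finite difference in t,
    integrating in the self-similar variable u = (y - c*t)/t^2 so that the
    narrow core of the density is resolved."""
    h = 1e-4 * t  # finite-difference step in t
    total = 0.0
    u = du
    while u < umax:
        y = c * t + t * t * u
        dpdt = (p(t + h, y, c) - p(t - h, y, c)) / (2.0 * h)
        total += abs(dpdt) * t * t * du  # dy = t^2 du on this grid
        u += du
    return total

# the bound \eqref{der_bound} predicts \int |d_t p_t| dy ~ t^{-beta}, beta = 2
t1, t2 = 0.01, 0.02
beta_est = math.log(dt_mass(t1) / dt_mass(t2)) / math.log(t2 / t1)
print(round(beta_est, 2))  # expected to be close to 2
```

The dominant small-$t$ contribution comes from the drift term $c\,\partial_y p_t$, which is exactly what produces $\beta=1/\alpha$ rather than $\beta=1$.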
Another aim of this paper is to develop tools which would allow us to get bounds of the form \eqref{der_bound} for a wider class of solutions of L\'evy driven SDEs. One result of this type is given in the recent preprint \cite{KK14}, with the process $X$ being a solution of the SDE \begin{equation}\label{SDE} dX_t=b(X_t)\, dt+\sigma(X_{t-})\, dZ_t \end{equation} where $Z$ is a symmetric $\alpha$-stable process. The method used therein is a version of the parametrix method, and it is quite sensitive to the form of the L\'evy measure of the process $Z$ on the entire $\mathbb{R}^d$. Recently, apart from the stable noises, various types of ``locally stable'' noises have been frequently used in applied models: tempered stable processes, damped stable processes, etc. Heuristically, for a ``locally stable'' process the ``small jumps behavior'' is the same as for the stable one, but the ``large jumps behavior'' of the former may be drastically different from that of the stable one, because the ``tails'' of their L\'evy measures differ. Since \eqref{der_bound} is genuinely related to the ``local behavior'' of the process, one can expect that the results of \cite{KK14} should have a natural extension to the case of ``locally stable'' $Z$. However, to make such a conjecture rigorous is a sophisticated problem; the main reason here is that the parametrix method treats the transition probability of a L\'evy process as the ``zero approximation'' for the unknown transition probability density $p_t(x,y)$, and hence any bound for $p_t(x,y)$ which one may expect to design within this method is at least as complicated as the respective bound for the process $Z$. On the other hand, there is an extensive literature on the estimates of transition probability densities for L\'evy processes (e.g.
\cite{BGR14}, \cite{KK12a}, \cite{K13}, \cite{KR15}, \cite{M12}, \cite{PT69}, \cite{RS10}, \cite{St10a}, \cite{St10b}, \cite{W07}; this list is far from being complete), which shows that these densities inherit the structure of the densities of the corresponding L\'evy measures. In particular, in order to get exact two-sided bounds for $p_t(x,y)$ one should impose quite non-trivial structural assumptions on the ``tails'' of the L\'evy measure, even in the comparatively simple ``locally stable'' case. Motivated by this observation on one hand, and by the initial approximation problem which suggests the condition \eqref{der_bound} on the other hand, we pose the following general question: \emph{Is it possible to give a ``rough'' upper bound, which would be the same for a large class of transition probability densities of ``locally stable'' processes, without assuming complicated conditions on the ``tails'' of their L\'evy measures?} The answer is positive, and it roughly says that one can get the bound \eqref{der_bound}, where on the left-hand side we have the transition probability density of the SDE driven by a ``locally stable'' process, and on the right-hand side we have a (properly shifted) transition probability density of an $\alpha$-stable process. This bound is not necessarily precise: the power-type ``tail'' of the $\alpha$-stable density might be essentially larger than the ``tail'' of, e.g., an \emph{exponentially tempered} $\alpha$-stable law. The gain is, however, that under a mild set of assumptions we obtain a uniform-in-class upper bound, which is easy to use in applications. To keep the exposition reasonably compact, we treat this problem in the comparatively simple case of a one-dimensional SDE of the form \eqref{sde1}, see below. The extension of these results to the more general multidimensional case is much more technical, and we postpone it to a separate publication. The structure of the paper is the following.
In Section \ref{s2} we formulate and prove our two main results concerning the accuracy of the \emph{strong} and \emph{weak} approximations of an integral functional by Riemann sums, provided that condition \textbf{X} is satisfied. In Section \ref{s3} we outline a version of the parametrix method, which makes it possible to obtain \eqref{der_bound} for solutions to L\'evy driven SDEs without strong structural assumptions on the ``tails'' of the L\'evy measure of the noise. In Section \ref{s4} an application to the price of an occupation time option is given. \section{Accuracy of discrete approximation for integral functionals}\label{s2} In this section we prove two results. The first one concerns the ``strong approximation rate'', i.e. the control of the $L_p$-distance between $I_{T}(h)$ and its approximation $I_{T,n}(h)$. \begin{thm}\label{t1} Suppose that \textbf{X} holds. Then for any $p>0$ $$ \left(\mathds{E}_x\Big|I_{T}(h)-I_{T,n}(h)\Big|^p\right)^{1/ p}\leq C_{T,p} \|h\| (D_{T,\beta} (n))^{1/2}, $$ where $\|h\|=\sup_x|h(x)|,$ \begin{equation}\label{DC} D_{T,\beta} (n) = \begin{cases} n^{-1} \log n,& \beta=1,\\ \max\left(1,\frac{T^{1- \beta}}{\beta-1}\right) n^{-1/ \beta},& \beta>1, \end{cases}\quad C_{T,p}= \begin{cases} (14p(p-1)B_{T,X})^{1/2} T,& p\geq 2,\\ C_{T,2}=(28 B_{T,X})^{1/2} T,& p\in (0,2). \end{cases} \end{equation} \end{thm} \begin{rem} This theorem extends \cite[Theorem 2.1]{kul-gan}, where it was assumed that $\beta=1$. \end{rem} The second result concerns the ``weak approximation'', i.e. the control of the difference between the expectations of certain terms which involve $I_{T}(h)$ together with its approximation $I_{T,n}(h)$. \begin{thm}\label{t2} Suppose that \textbf{X} holds.
Then for any $k \in \mathbb{N}$ and any bounded function $f$ we have \begin{equation}\label{t2-eq} \Big|\mathds{E}_x (I_{T}(h))^k f(X_T)- \mathds{E}_x(I_{T,n}(h))^kf(X_T)\Big| \leq 2^{\beta\vee 2} k^2 B_{T,X} T^{k+1} \|h\|^k \|f\| D_{T,\beta} (n). \end{equation} \end{thm} \begin{rem} This theorem extends \cite[Theorem~1.1]{kul-gan2}, where it was assumed that $\beta=1$. In the proof below, we concentrate on the case $\beta>1$. \end{rem} Using the Taylor expansion, one can obtain directly the following corollary of Theorem \ref{t2}. \begin{cor}\label{cor1} Suppose that \textbf{X} holds, and let $\varphi$ be an analytic function defined in some neighbourhood of 0. In addition, suppose that the constants $D_{\varphi},R_{\varphi}>0$ are such that $$ \Big|\frac{\varphi^{(m)}(0)}{m!}\Big| \leq D_{\varphi} \left( \frac{1}{R_{\varphi}} \right)^m,\quad m\geq 0. $$ Then for any bounded function $f$ and any function $h$ such that $T\|h\|< R_\varphi$, we have the following bound: $$ \Big|\mathds{E}_x \varphi( I_{T}(h)) f(X_T)- \mathds{E}_x \varphi( I_{T,n}(h))f(X_T)\Big| \leq C_{T,X,h,\varphi} \|f\| D_{T,\beta} (n), $$ where $$ C_{T,X,h,\varphi}=2^{\beta\vee 2} D_{\varphi}B_{T,X}\frac{ T^2 \|h\|}{R_{\varphi}} \left(1+\frac{ T\|h\|}{R_{\varphi}}\right)\left(1-\frac{ T\|h\|}{R_{\varphi}}\right)^{-3}. $$ \end{cor} Before proceeding to the proof of Theorem \ref{t1}, we give an auxiliary result on which this proof is based. This result is, in fact, a weaker version of Theorem \ref{t2} with $k=1$ and $f\equiv 1$, but we give it separately to make the exposition more transparent. \begin{prop}\label{p1} Suppose that \textbf{X} holds. Then $$ \Big|\mathds{E}_xI_{T}(h)-\mathds{E}_xI_{T,n}(h)\Big|\leq 5B_{T,X} T \|h\| D_{T,\beta} (n).
$$ \end{prop} \begin{proof} Let us introduce the notation used throughout the whole section: for $t\in [kT/n, (k+1)T/n), k\geq 0 $, we put $\eta_n(t)={kT\over n}, \ \zeta_n(t)={(k+1)T\over n}$; that is, $\eta_n(t)$ is the point of the partition $\{Tk/n,\, k\geq 0\}$ of the time axis closest to $t$ from the left, and $\zeta_n(t)$ is the point closest to $t$ from the right, which is strictly larger than $t$. We have \begin{align*} \mathds{E}_xI_{T}(h)-\mathds{E}_xI_{T,n}(h) &= \int_0^T \mathds{E}_x[h(X_s) - h(X_{\eta_n(s)})]\,ds\\ & = \int_0^T \int_{\mathbb{R}^d}h(y) [p_s(x,y) - p_{\eta_n(s)}(x,y)]\,dyds \\ &= M_1 + M_2, \end{align*} where $$\begin{aligned} &M_1 = \int_0^{k_{n,\beta}T/n} \int_{\mathbb{R}^d}h(y) [p_s(x,y) - p_{\eta_n(s)}(x,y)]\,dyds,\\& M_2 = \int_{k_{n,\beta}T/n}^T \int_{\mathbb{R}^d}h(y) [p_s(x,y) - p_{\eta_n(s)}(x,y)]\,dyds, \end{aligned}$$ for some $1\leq k_{n,\beta} \leq n$, which will be chosen later. We estimate each term separately. For $M_1$ we have $$ |M_1| \leq \|h\| \int_0^{k_{n,\beta}T/n} \int_{\mathbb{R}^d} [p_s(x,y) + p_{\eta_n(s)}(x,y)]\,dyds = 2\|h\| T \frac{k_{n,\beta}}{n}.
$$ Further, using (\ref{der_bound}), we get \begin{equation}\label{M2}\begin{aligned} |M_2| &\leq \|h\| \int_{k_{n,\beta}T/n}^T \int_{\mathbb{R}^d} |p_s(x,y) - p_{\eta_n(s)}(x,y)|\,dyds \\& \leq \|h\| \int_{k_{n,\beta}T/n}^T \int_{\eta_n(s)}^s \int_{\mathbb{R}^d} |\partial_u p_u(x,y)|\,dy duds \\ &\leq B_{T,X}\|h\| \int_{k_{n,\beta}T/n}^T \int_{\eta_n(s)}^s \int_{\mathbb{R}^d} u^{-\beta} q_{u,x}(y) \,dy duds = B_{T,X} \|h\| \int_{k_{n,\beta}T/n}^T \int_{\eta_n(s)}^s u^{-\beta}\,duds\\ &= B_{T,X} \|h\| \sum_{i = k_{n,\beta}}^{n-1} \int_{iT/n}^{(i+1)T/n} \int_{iT/n}^s u^{-\beta}\,duds \\&= B_{T,X} \|h\| \sum_{i = k_{n,\beta}}^{n-1} \int_{iT/n}^{(i+1)T/n} \int_u^{(i+1)T/n} u^{-\beta}\,dsdu\\& \leq \frac{T}{n} B_{T,X} \|h\| \sum_{i = k_{n,\beta}}^{n-1} \int_{iT/n}^{(i+1)T/n} u^{-\beta}\,du = \frac{T}{n} B_{T,X} \|h\| \int_{k_{n,\beta}T/n}^{T} u^{-\beta}\,du. \end{aligned}\end{equation} Now we finalize the argument. 1) If $\beta = 1$, put $k_{n,\beta} = 1, \ n\geq 1$. Then we get $$\begin{aligned} &|M_1| \leq 2\|h\| T n^{-1},\\& |M_2| \leq \frac{T}{n} B_{T,X} \|h\| \int_{T/n}^{T} u^{-1}\,du = B_{T,X} T \|h\| n^{-1} \log n. \end{aligned}$$ 2) If $\beta > 1$, put $k_{n,\beta} = [n^{1-1/\beta}]+1$. Then $$ |M_1| \leq 2\|h\| T \frac{[n^{1-1/\beta}]+1}{n} \leq 2\|h\| T \frac{n^{1-1/\beta}+1}{n} \leq 4\|h\| T n^{-1/\beta}.
$$ To estimate $M_2$ observe that \begin{equation}\label{beta1} \begin{split} \int_{k_{n,\beta}T/n}^{T} u^{-\beta}\,du\leq \frac{T^{1-\beta}}{\beta-1}\left(\frac{k_{n,\beta}}{n}\right)^{1-\beta}\leq \frac{T^{1-\beta}}{\beta-1} \left(\frac{n^{1-1/\beta}}{n}\right)^{1-\beta}\leq \frac{T^{1-\beta}}{\beta-1}n^{1-1/\beta}. \end{split} \end{equation} Therefore, \begin{align*} |M_2| &\leq \frac{T}{n} B_{T,X} \|h\| \int_{k_{n,\beta}T/n}^{T} u^{-\beta}\,du \leq \frac{B_{T,X}}{\beta-1} T^{2- \beta} \|h\| n^{-1/\beta}. \end{align*} \end{proof} \begin{proof}[Proof of Theorem \ref{t1}] Since we can obtain the required bound for $p<2$ from the bound with $p=2$ by the H\"older inequality, we consider the case $p\geq 2$ only. Define $$ J_{t,n}(h):=I_{t}(h)-I_{t,n}(h)=\int_0^t \Delta_n(s)ds, \quad \Delta_n(s):=h(X_s) - h(X_{\eta_n (s)}). $$ By definition, the function $t\mapsto J_{t,n}(h)$ is absolutely continuous. Then using the Newton--Leibniz formula twice we get $$ \Big|J_{T,n}(h)\Big|^p=p(p-1)\int_0^T\Big|J_{s,n}(h)\Big|^{p-2}\Delta_n(s)\left(\int_s^T\Delta_n(t)\, dt\right)ds. $$ Therefore, $$ \Big|J_{T,n}(h)\Big|^p\leq p(p-1)(H_{T,n,p}^1(h)+H_{T,n,p}^2(h)), $$ where $$ H_{T,n,p}^1(h)=\int_0^T\Big|J_{s,n}(h)\Big|^{p-2}|\Delta_n(s)|\left|\int_s^{\zeta_n(s)}\Delta_n(t)\, dt\right|ds, $$ $$ H_{T,n,p}^2(h)=\int_0^T\Big|J_{s,n}(h)\Big|^{p-2}\Delta_n(s)\left(\int_{\zeta_n(s)}^T\Delta_n(t)\, dt\right)ds. $$ Let us estimate separately the expectations of $ H_{T,n,p}^1(h)$ and $H_{T,n,p}^2(h)$.
By the H\"older inequality, \begin{align*} \mathds{E}_x H_{T,n,p}^1(h)&\leq \left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p}\left(\mathds{E}_x\int_0^T|\Delta_n(s)|^{p/2}\left|\int_s^{\zeta_n(s)}\Delta_n(t)\, dt\right|^{p/2}\, ds\right)^{2/p} \\ &\leq \left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p} \left((2\|h\|)^{p/2} T (2\|h\|)^{p/2} \left(\frac{T}{n}\right)^{p/2}\right)^{2/p}\\ & = 4 T^{1+2/p} n^{-1} \|h\|^2 \left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p}. \end{align*} Further, observe that for every $s$ the variables $$\Delta_n(s), \quad |J_{s,n}(h)|^{p-2}\Delta_n(s) $$ are $\mathcal{F}_{\zeta_n(s)}$-measurable; here and below $\{\mathcal{F}_t, t\geq 0\}$ denotes the natural filtration of the process $X$. Hence, $$\begin{aligned} \mathds{E}_xH_{T,n,p}^2(h)&=\mathds{E}_x\left(\int_0^T\Big|J_{s,n}(h)\Big|^{p-2}\Delta_n(s)\mathds{E}_x\left(\int_{\zeta_n(s)}^T\Delta_n(t)\, dt\Big| \mathcal{F}_{\zeta_n(s)}\right)ds\right)\\& \leq \mathds{E}_x\left(\int_0^T\Big|J_{s,n}(h)\Big|^{p-2}|\Delta_n(s)|\left|\mathds{E}_x\left(\int_{\zeta_n(s)}^T\Delta_n(t)\, dt\Big| \mathcal{F}_{\zeta_n(s)}\right)\right|ds\right). \end{aligned} $$ By Proposition \ref{p1} and the Markov property of $X$, we have $$ \left|\mathds{E}_x\left(\int_{\zeta_n(s)}^T\Delta_n(t)\, dt\Big|\mathcal{F}_{\zeta_n(s)}\right)\right| = \left|\mathds{E}_{X_{\zeta_n(s)}}\int_{0}^{T-\zeta_n(s)}\Delta_n(t)\, dt\right| \leq 5B_{T,X} T D_{T,\beta} (n) \|h\|. $$ Therefore, using the H\"older inequality, we get $$\begin{aligned} \mathds{E}_xH_{T,n,p}^2(h)&\leq 5B_{T,X} T D_{T,\beta} (n) \|h\| \left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p}\left(\mathds{E}_x\int_0^T|\Delta_n(s)|^{p/2}\, ds\right)^{2/p}\\& \leq 10B_{T,X} T^{1+2/p} D_{T,\beta} (n) \|h\|^2\left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p}.
\end{aligned} $$ Note that $n^{-1}\leq D_{T,\beta} (n)$, hence the above bounds for $\mathds{E}_x H_{T,n,p}^1(h)$ and $\mathds{E}_xH_{T,n,p}^2(h)$ finally yield the estimate \begin{equation}\label{recur_bound} \mathds{E}_x \Big|J_{T,n}(h)\Big|^{p}\leq 14p(p-1)B_{T,X} T^{1+2/p} D_{T,\beta} (n) \|h\|^2\left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p}. \end{equation} It can easily be seen that the above inequality also holds true if $J_{T,n}(h)$ on the left-hand side is replaced by $J_{t,n}(h)$. Taking the integral over $t\in [0,T]$, we get $$ \mathds{E}_x\int_0^T\Big|J_{t,n}(h)\Big|^{p}\, dt\leq 14p(p-1)B_{T,X} T^{2+2/p} D_{T,\beta} (n) \|h\|^2\left(\mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\, ds\right)^{1-2/p}. $$ Because $h$ is bounded, the left-hand side of the above inequality is finite. Hence, resolving this inequality (if $0\leq A<\infty$ and $A\leq K A^{1-2/p}$, then $A\leq K^{p/2}$), we get $$ \mathds{E}_x\int_0^T\Big|J_{s,n}(h)\Big|^{p}\,ds\leq (14p(p-1)B_{T,X})^{p/2} T^{p+1} (D_{T,\beta} (n))^{p/2} \|h\|^p, $$ which together with (\ref{recur_bound}) gives the required statement. \end{proof} \begin{proof}[Proof of Theorem \ref{t2}] Denote $$S_{k,a,b}:= \{(s_1,s_2,...,s_k) \in \mathbb{R}^k : a\leq s_1 < s_2 < ...< s_k \leq b\}, \ k \in \mathbb{N}, \ a,b \in \mathbb{R}.$$ We have \begin{equation}\begin{aligned} \label{sumJi} &\mathds{E}_x \Big[(I_{T}(h))^k-(I_{T,n}(h))^k \Big] f(X_T)\\& = k!\,\mathds{E}_x \int_{S_{k,0,T}}[h(X_{s_1})h(X_{s_2})...h(X_{s_k}) - h(X_{\eta_n(s_1)})h(X_{\eta_n(s_2)})...h(X_{\eta_n(s_k)})] f(X_T) \prod_{i=1}^k ds_{i} \\& = k!\, \int_{S_{k,0,T}}\int_{(\mathbb{R}^d)^{k+1}}\left(\prod_{i=1}^kh(y_i)\right)f(z) \left(\prod_{i=1}^k p_{s_i-s_{i-1}}(y_{i-1},y_i)\right)p_{T-s_k}(y_k,z)dz \prod_{j=1}^k dy_j \prod_{i=1}^k ds_{i}\\& -k!\, \int_{S_{k,0,T}}\int_{(\mathbb{R}^d)^{k+1}}\left(\prod_{i=1}^kh(y_i)\right)f(z) \left(\prod_{i=1}^k p_{\eta_n(s_i)-\eta_n(s_{i-1})}(y_{i-1},y_i)\right)\\& \times
p_{T-\eta_n(s_k)}(y_k,z) dz \prod_{j=1}^k dy_j \prod_{i=1}^k ds_{i}\\& = k!\sum_{r=1}^{k}\int_{S_{k,0,T}}\int_{(\mathbb{R}^d)^{k+1}} \left(\prod_{i=1}^kh(y_i)\right)f(z) J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z)dz \prod_{j=1}^k dy_j \prod_{i=1}^k ds_{i}, \end{aligned}\end{equation} where the convention $s_0 = 0, s_{k+1}=T, y_0 = x, y_{k+1}=z$ is used and the functions $J^{(r)}, r=1, \dots, k$ are defined by the relations \begin{align*} J_{s_1, \dots, s_k,T}^{(r)}&(x,y_1, \dots, y_k,z) = \left(\prod_{i=1}^{r-1} p_{\eta_n(s_i)-\eta_n(s_{i-1})}(y_{i-1},y_i)\right)\\ &\quad \times \Big(p_{s_r-s_{r-1}}(y_{r-1},y_r) - p_{\eta_n(s_r)-\eta_n(s_{r-1})}(y_{r-1},y_r)\Big) \left(\prod_{i=r}^k p_{s_{i+1}-s_{i}}(y_{i},y_{i+1})\right). \end{align*} Let us estimate the $r$-th term in the last line of (\ref{sumJi}). We have \begin{align*} \int_{S_{k,0,T}}&\int_{(\mathbb{R}^d)^{k+1}} \left(\prod_{i=1}^kh(y_i)\right)f(z) J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z) dz \prod_{j=1}^k dy_j \prod_{i=1}^k ds_{i} \\& \leq \|h\|^k \|f\| \int_{S_{k,0,T}}\int_{(\mathbb{R}^d)^{k+1}} \ \big| J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z) \big| dz \prod_{j=1}^k dy_j \prod_{i=1}^k ds_{i}. \end{align*} Since the case $\beta=1$ was already treated in \cite{kul-gan2}, for the rest of the proof we assume that $\beta>1$. Consider two cases: a) $s_r-s_{r-1}> k_{n,\beta}T/n$ and b) $s_r-s_{r-1}\leq k_{n,\beta}T/n$.
In case a), using condition \textbf{X} and the Chapman-Kolmogorov equation, we derive $$\begin{aligned} &\int_{(\mathbb{R}^d)^{k+1}} \big| J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z) \big| dz \prod_{j=1}^k dy_j \\&\leq B_{T,X} \int_{(\mathbb{R}^d)^2}p_{\eta_n(s_{r-1})-\eta_n(s_0)}(x,y_{r-1}) \left|\int_{\eta_n(s_r)-\eta_n(s_{r-1})}^{s_r-s_{r-1}} v^{-\beta} q_{v,y_{r-1}}(y_r)dv\right|dy_{r-1}dy_r. \end{aligned}$$ Since $k_{n,\beta}\geq 2$, in case a) we have $s_r-s_{r-1}\geq 2T/n$, and hence $$ \eta_n(s_r)-\eta_n(s_{r-1})\geq s_r-\frac{T}{n}-s_{r-1}\geq \frac{s_r-s_{r-1}}{2}. $$ Therefore, using the fact that $q_{t,y}(\cdot)$ is a probability density for any $t>0$ and $y\in \mathbb{R}^d$, we finally get \begin{align*} \int_{(\mathbb{R}^d)^{k+1}} \big| J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z) \big| dz \prod_{j=1}^k dy_j&\leq B_{T,X} \int_{s_r-s_{r-1}-T/n}^{s_r-s_{r-1}} v^{-\beta}dv\leq \frac{B_{T,X}T}{n} \Big(\frac{s_r-s_{r-1}}{2}\Big)^{-\beta}. \end{align*} In case b) we simply apply the Chapman-Kolmogorov equation: $$\begin{aligned} &\int_{(\mathbb{R}^d)^{k+1}} \big| J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z) \big| dz \prod_{j=1}^k dy_j\\ &\leq \int_{(\mathbb{R}^d)^2}p_{\eta_n(s_{r-1})-\eta_n(s_0)}(x,y_{r-1}) \Big(p_{s_r-s_{r-1}}(y_{r-1},y_r) + p_{\eta_n(s_r)-\eta_n(s_{r-1})}(y_{r-1},y_r)\Big) dy_{r-1}dy_r\\ &\leq 2.
\end{aligned}$$ Therefore, summarizing the estimates obtained in cases a) and b) and using \eqref{beta1}, we get \begin{align*} \int_{S_{k,0,T}}&\int_{(\mathbb{R}^d)^{k+1}} \ \big| J_{s_1, \dots, s_k,T}^{(r)}(x,y_1, \dots, y_k,z) \big| dz \prod_{j=1}^k dy_j \prod_{i=1}^k ds_{i}\\ &\leq \frac{B_{T,X}T}{n} 2^{\beta} \int_0^T \int_0^{s_r-k_{n,\beta}T/n} \frac{s_{r-1}^{r-1}}{(r-1)!}(s_r-s_{r-1})^{-\beta}\frac{(T-s_r)^{k-r}}{(k-r)!}ds_{r-1}ds_r\\ &+ 2\int_0^T\int_{s_r-k_{n,\beta}T/n}^{ s_r}\frac{s_{r-1}^{r-1}}{(r-1)!}\frac{(T-s_r)^{k-r}}{(k-r)!} ds_{r-1}ds_r\\ &\leq \frac{B_{T,X}T}{n} 2^{\beta} \int_0^T \frac{s_{r}^{r-1}}{(r-1)!}\frac{(T-s_r)^{k-r}}{(k-r)!}ds_r\Big(\int_{k_{n,\beta}T/n}^{T}u^{-\beta}du\Big)\\ &+ \frac{2Tk_{n,\beta}}{n}\int_0^T\frac{s_{r}^{r-1}}{(r-1)!}\frac{(T-s_r)^{k-r}}{(k-r)!} ds_r\\ &\leq \frac{ 2^{\beta} B_{T,X}T^{k+2-\beta}}{(\beta-1)(k-1)!} n^{-1/\beta}+ \frac{4T^{k+1}}{(k-1)!} n^{-1/\beta}\\ &\leq \frac{2^{\beta\vee 2} T^{k+1}B_{T,X} D_{T,\beta}(n)}{(k-1)!}, \end{align*} where in the fourth and the fifth lines we used that $s_{r-1}^{r-1}\leq s_r^{r-1}$. Taking into account that in \eqref{sumJi} we have $k$ terms and the common multiplier $k!$, we finally arrive at \eqref{t2-eq}. \end{proof} \section{Condition \textbf{X} for solutions to L\'evy driven SDEs}\label{s3} Consider the SDE \begin{equation}\label{sde1} dX_t = b(X_t) dt + dZ_t, \quad X_0=x, \end{equation} where $Z$ is a real-valued L\'evy process.
In \cite{KK14} it was shown that if $Z$ is a symmetric $\alpha$-stable process and $b(\cdot)$ is bounded and Lipschitz continuous, then the solution to equation \eqref{sde1} satisfies condition \textbf{X} with $\beta=\max(1, 1/\alpha)$ (in fact, in \cite{KK14} more general multidimensional SDEs of the form \eqref{SDE} are considered). In this section we outline the argument which makes it possible to extend the class of ``L\'evy noises''. Namely, we will omit the requirement that $Z$ be \emph{symmetric}, and relax the stability assumption, demanding $Z$ to be ``locally $\alpha$-stable'' in the sense we specify below. Recall that for a real-valued L\'evy process the characteristic function is of the form $$ \mathbb{E} e^{i\xi Z_t}= e^{-t\psi(\xi)}, \quad t>0, \, \xi \in \mathds{R}, $$ where the \emph{characteristic exponent} $\psi$ admits the L\'evy-Khinchin representation \begin{equation}\label{psi1} \psi(\xi) =-ia\xi+{1\over 2}\sigma^2\xi^2+\int_\mathds{R} \big(1-e^{i \xi u}+ i \xi u\mathds{1}_{\{|u|\leq 1\}} \big)\mu(du). \end{equation} In what follows, we assume that $\sigma^2=0$ and the \emph{L\'evy measure} $\mu$ is of the form \begin{equation}\label{tilm} \mu(du)=C_{+} u^{-1-\alpha}\mathds{1}_{u\in (0,1)}du +C_{-} |u|^{-1-\alpha}\mathds{1}_{u\in (-1,0)}du+ m(u) du, \end{equation} with some $C_\pm\geq 0$ and $m(u)\geq 0$ such that $m(u)=0$ for $|u|\leq 1$, and \begin{equation}\label{m1} m(u)\leq c|u|^{-1-\alpha}, \quad |u|\geq 1. \end{equation} On the interval $[-1,1]$ the L\'evy measure $\mu$ given by (\ref{tilm}) coincides with the L\'evy measure of a (non-symmetric) $\alpha$-stable process. This is the reason for us to call $Z$ a ``locally $\alpha$-stable'' process: its ``local behavior'' near the origin is similar to that of an $\alpha$-stable process.
In this context, condition (\ref{m1}) means that the ``tails'' of the L\'evy measure $\mu$ are dominated by the ``tails'' of the $\alpha$-stable L\'evy measure. Let us impose three minor conventions, which will simplify the technicalities below. First, since we are mostly interested in the case $\beta>1$, we assume that $\alpha<1$. Second, the latter assumption ensures that the integral $$ \int_{\{|u|\leq 1\}} u\mu(du) $$ is well defined, and we assume that the constant $a$ in \eqref{psi1} equals this integral; that is, $\psi$ has the form $$ \psi(\xi) =\int_\mathds{R} \big(1-e^{i \xi u}\big)\mu(du). $$ Clearly, this does not restrict the generality because one can change the constant $a$ by changing respectively the drift coefficient $b(\cdot)$ in \eqref{sde1}. Finally, in order to avoid the usage of the Rademacher theorem (see \cite[Lemma~7.4]{KK14} for the case when $b$ is just Lipschitz continuous), let us assume that $b\in C^1(\mathds{R})$. In what follows we show how the \emph{parametrix construction} developed in \cite{KK14} can be modified to provide the representation and the bounds for the transition probability density $p_t(x,y)$ of the solution to \eqref{sde1} driven by the ``locally stable'' noise $Z$. Let us introduce some notation and give some preliminaries. We denote the space and the space-time convolutions respectively by $$ (f\ast g)(x,y):=\int_{\mathbb{R}^d}f(x,z)g(z,y)\, dz, $$ $$ (f\circledast g)_t(x,y):=\int_0^t(f_{t-s}\ast g_s)(x,y)\, ds=\int_0^t\int_{\mathbb{R}^d}f_{t-s}(x,z)g_{s}(z,y)\, dzds. $$ Generically, the parametrix construction provides the representation of the required transition probability density in the form \begin{equation}\label{p10} p_t(x,y)= p_t^0(x,y)+ \int_0^t \int_\mathds{R} p^0_{t-s}(x,z) \Psi_s (z,y) dzds, \quad t>0, \quad x,y\in \mathds{R}.
\end{equation} Here $p^0_t(x,y)$ is a ``zero approximation term'' for the unknown $p_t(x,y)$, the function $\Psi_t(x,y)$ is given by the ``convolution series'' \begin{equation}\label{Psi} \Psi_t(x,y)= \sum_{k=1}^\infty \Phi_t^{\circledast k} (x,y), \quad t>0, \quad x,y\in \mathds{R}, \end{equation} and the function $\Phi_t(x,y)$ depends on the particular choice of $p^0_t(x,y)$ and equals \begin{equation}\label{phi10} \Phi_t(x,y):= \big(L_x - \partial_t\big)p_t^0(x,y), \end{equation} where \begin{align*} L f(x):&= b(x) f'(x)+ \int_\mathds{R} \big(f(x+u)-f(x)\big) \mu(du), \quad f\in C^2_b (\mathds{R}) \end{align*} is the formal generator of the process $X$. The subscript $x$ in the above expressions means that the operator is applied with respect to the variable $x$. Note that to make the above construction feasible, one should properly choose the ``zero approximation term'' $p_t^0(x,y)$, so that the convolution series \eqref{Psi} converges and the space-time convolution in (\ref{p10}) is well defined. To introduce such a $p_t^0(x,y)$ in our setting, and then to construct the bounds for the associated $\Phi_t(x,y)$ and its convolution powers, we need some more notation. Denote by $Z^{(\alpha, C_\pm)}$ the $\alpha$-stable process with the L\'evy measure $\mu^{(\alpha, C_\pm)}(du)=m^{(\alpha, C_\pm)}(u)\, du$, $$ m^{(\alpha, C_\pm)}(u):=C_{+} u^{-1-\alpha}\mathds{1}_{u>0} +C_{-} (-u)^{-1-\alpha}\mathds{1}_{u<0}, $$ and the characteristic exponent $$ \psi^{(\alpha, C_\pm)}(\xi)= \int_\mathds{R} \big(1-e^{i\xi u}\big)\mu^{(\alpha, C_\pm)}(du). $$ Note that since $$ \psi^{(\alpha, C_\pm)}(c\xi)=c^\alpha \psi^{(\alpha, C_\pm)}(\xi),\quad c>0, $$ the process $Z^{(\alpha, C_\pm)}$ possesses the scaling property $$ \mathrm{Law}\,\big(Z_{ct}^{(\alpha, C_\pm)}\big)=\mathrm{Law}\,\big(c^{1/\alpha}Z_t^{(\alpha, C_\pm)}\big), \quad c>0. $$ Denote by $g_t^{(\alpha,C_\pm)} $ the distribution density of $Z^{(\alpha, C_\pm)}_t$.
By the scaling property we have $$ g_t^{(\alpha,C_\pm)}(x)=t^{-1/\alpha}g^{(\alpha,C_\pm)}\left(xt^{-1/\alpha}\right), \quad g^{(\alpha,C_\pm)}:=g^{(\alpha,C_\pm)}_1. $$ Denote also by $Z^{(\alpha)}$ the \emph{symmetric} $\alpha$-stable process; that is, the process of the form introduced above with $C_+=C_-=1$. Let $g_t^{(\alpha)} $ be the respective distribution density and $g^{(\alpha)}:= g_1^{(\alpha)}$. Finally, denote by $\chi_t(x)$ and $\theta_t(y)$, respectively, the solutions to the ODEs \begin{equation}\label{ODE} d\chi_t = b(\chi_t)dt, \quad \chi_0=x,\quad d\theta_t =- b(\theta_t)dt, \quad \theta_0=y. \end{equation} Note that these solutions exist because $b(\cdot)$ is Lipschitz continuous. Now we are ready to formulate the main statement of this section. \begin{thm}\label{lem-der} Let \begin{equation}\label{p0} p^0_t(x,y):=g^{(\alpha,C_\pm)}_t(\theta_t(y)-x). \end{equation} Then the convolution series \eqref{Psi} is well defined, and the formula (\ref{p10}) gives the representation of the transition probability density $p_t(x,y)$ of the process $X$. This density and its time derivative have the following upper bounds: \begin{equation}\label{ptx} p_t(x,y)\leq C \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(y-\chi_t(x)),\quad t\in (0,T], \, x,y\in \mathds{R}, \end{equation} \begin{equation}\label{ptx1} \big|\partial_t p_t(x,y)\big|\leq C t^{-1/\alpha}\big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(y-\chi_t(x)), \quad t\in (0,T], \, x,y\in \mathds{R}. \end{equation} Consequently, the process $X$ satisfies condition \textbf{X} with $\beta= 1/\alpha$ and $$ q_{t,x}(y)=\big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(y-\chi_t(x)). $$ \end{thm} \begin{proof} First we evaluate $\Phi_t(x,y)$. Unless stated otherwise, we assume in all estimates obtained below that $t\in (0,T]$ for some $T>0$, and $x,y\in \mathds{R}$. Observe that $g^{(\alpha,C_\pm)}\in C_b^2(\mathds{R})$.
Indeed, this property easily follows from the Fourier inversion formula and the expression for the characteristic function. It is known that $g_t^{(\alpha,C_\pm)}(y-x)$ is the fundamental solution to $\partial_t -L^{(\alpha,C_\pm)}$, where $L^{(\alpha,C_\pm)}$ denotes the generator of the process $Z^{(\alpha,C_\pm)}$: \begin{equation}\label{la} L^{(\alpha,C_\pm)}f(x)=\int_\mathds{R} \big( f(x+u)-f(x)\big) \mu^{(\alpha,C_\pm)}(du), \quad f\in C^2_b(\mathds{R}). \end{equation} Since $$ (\partial_t-L_x^{(\alpha,C_\pm)})g_t^{(\alpha,C_\pm)}(y-x)=0, $$ we have $$\begin{aligned} \partial_tp_t^0(x,y)&=\left[\partial_tg_t^{(\alpha,C_\pm)}(w)+\frac{\partial_t\theta_t(y)}{t^{2/\alpha} } (g^{(\alpha,C_\pm)})'\left(\frac{w}{t^{1/\alpha}} \right)\right]\Big|_{w=\theta_t(y)-x} \\&=\left[\frac{1}{t^{1/\alpha} } (L^{(\alpha,C_\pm)} g^{(\alpha,C_\pm)})\left(\frac{w}{t^{1/\alpha}} \right)+\frac{\partial_t\theta_t(y)}{t^{2/\alpha} } (g^{(\alpha,C_\pm)})'\left(\frac{w}{t^{1/\alpha}} \right)\right]\Big|_{w=\theta_t(y)-x}, \end{aligned} $$ where in the last identity we used the scaling property of $g_t^{(\alpha,C_\pm)}$ and the fact that $L^{(\alpha,C_\pm)}$ is a homogeneous operator. Next, by the very definition of $L $ and $ L^{(\alpha,C_\pm)}$ we get \begin{equation*} \begin{split} L_x&p_t^0(x,y)=\left[\frac{1}{t^{1/\alpha} } (L^{(\alpha,C_\pm)} g^{(\alpha,C_\pm)})\left(\frac{w}{t^{1/\alpha}} \right)-\frac{b(x)}{t^{2/\alpha} } (g^{(\alpha,C_\pm)})'\left(\frac{w}{t^{1/\alpha}} \right)\right]\Big|_{w=\theta_t(y)-x}\\ &+ \left[ \int_{|u|\geq 1} \left(\frac{1}{t^{1/\alpha} } g^{(\alpha,C_\pm)}\left(\frac{w-u}{t^{1/\alpha}} \right)- \frac{1}{t^{1/\alpha} } g^{(\alpha,C_\pm)}\left(\frac{w}{t^{1/\alpha}} \right)\right)\big(m(u)-m^{(\alpha,C_\pm)}(u)\big)\,du\right]\Big|_{w=\theta_t(y)-x}.
\end{split} \end{equation*} Therefore, using the relation $\partial_t\theta_t(y)=-b(\theta_t(y))$, we get \begin{equation}\label{phi-n10} \begin{split} \Phi_t(x,y)&=\big(L_x - \partial_t\big)p_t^0(x,y)=\frac{b(x)-b(\theta_t(y))}{t^{2/\alpha} } (g^{(\alpha,C_\pm)})'\left(\frac{\theta_t(y)-x}{t^{1/\alpha}} \right)\\ &+\frac{1}{t^{1/\alpha} }\int_{|u|\geq 1} \left( g^{(\alpha,C_\pm)}\left(\frac{\theta_t(y)-x-u}{t^{1/\alpha}} \right)- g^{(\alpha,C_\pm)}\left(\frac{\theta_t(y)-x}{t^{1/\alpha}} \right)\right)\big(m(u)-m^{(\alpha,C_\pm)}(u)\big)\, du\\ &=: \Phi_t^1(x,y)+\Phi_t^2(x,y). \end{split} \end{equation} Next, we give bounds for the absolute values of $\Phi^1_t(x,y)$, $\Phi^2_t(x,y)$, and $\Phi_t(x,y)$. In what follows, $C$ denotes a generic constant, whose value might differ from place to place. One has \begin{equation}\label{ga9} g^{(\alpha,C_\pm)}(x)\leq C g^{(\alpha)}(x), \quad x\in \mathds{R}, \end{equation} \begin{equation}\label{ga91} \big|(g^{(\alpha,C_\pm)})'(x)\big|\leq C (1+|x|)^{-1} g^{(\alpha)}(x),\quad x\in \mathds{R}, \end{equation} \begin{equation}\label{ga92} \big|(g^{(\alpha,C_\pm)})''(x)\big|\leq C (1+|x|)^{-2} g^{(\alpha)}(x),\quad x\in \mathds{R}. \end{equation} Since the argument used in the proof of (\ref{ga9})--(\ref{ga92}) is quite standard (see e.g. \cite[Appendix A]{KK14}), we omit the details. By \eqref{ga91} and the Lipschitz continuity of $b(\cdot)$ we have $$ |\Phi_t^1(x,y)|\leq\frac{C|x-\theta_t(y)|}{t^{2/\alpha} }\left| (g^{(\alpha,C_\pm)})'\left(\frac{\theta_t(y)-x}{t^{1/\alpha}} \right)\right|\leq \frac{C}{t^{1/\alpha} } g^{(\alpha)}\left(\frac{\theta_t(y)-x}{t^{1/\alpha}} \right)=Cg_t^{(\alpha)}(\theta_t(y)-x).
$$ To get the estimate for $|\Phi_t^2(x,y)|$, we first observe that $$ \big|m(u)-m^{(\alpha,C_\pm)}(u)\big|\mathds{1}_{\{|u|\geq 1\}} \leq C g^{(\alpha)}(u), $$ which implies $$\begin{aligned} |\Phi_t^2(x,y)|&\leq \frac{C}{t^{1/\alpha} }\int_{|u|\geq 1} g^{(\alpha,C_\pm)}\left(\frac{\theta_t(y)-x-u}{t^{1/\alpha}} \right) g^{(\alpha)}(u)\, du+\frac{C}{t^{1/\alpha} }g^{(\alpha,C_\pm)}\left(\frac{\theta_t(y)-x}{t^{1/\alpha}} \right). \end{aligned} $$ Taking into account \eqref{ga9}, we deduce that $$ |\Phi_t^2(x,y)|\leq C \big( g_t^{(\alpha)}* g_1^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x)= C \big( g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). $$ Combining the estimates for $\Phi_t^1(x,y)$ and $\Phi_t^2(x,y)$, we get \begin{equation}\label{phi-n30} |\Phi_t(x,y)| \leq C \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). \end{equation} Our next step is to estimate the convolution powers of $\Phi$. It is shown in \cite[Appendix B]{KK14} that the kernel $g_t^{(\alpha)}(\theta_t(y)-x)$ possesses the following \emph{sub-convolution property}: \begin{equation}\label{sub10} \int_\mathds{R} g_{t-s}^{(\alpha)}\big(\theta_{t-s}(z)-x\big)g_s^{(\alpha)}\big(\theta_s(y)-z\big)dz \leq C g_t^{(\alpha)}(\theta_t(y)-x). \end{equation} By this property we get \begin{equation}\label{sub20} \begin{split} \int_\mathds{R} g_{t+1-s}^{(\alpha)}\big(\theta_{t-s}(z)-x\big)g_s^{(\alpha)}\big(\theta_s(y)-z\big)dz &\leq C g_{t+1}^{(\alpha)}(\theta_t(y)-x), \\ \int_{\mathds{R}} g_{t-s+1}^{(\alpha)}(\theta_{t-s}(z)-x)g_{s+1}^{(\alpha)}(\theta_s(y)-z)dz &\leq C g_{t+2}^{(\alpha)}(\theta_t(y)-x)\leq C g_{t+1}^{(\alpha)}(\theta_t(y)-x), \end{split} \end{equation} where in the last line we used that $g_2^{(\alpha)}\leq C g_1^{(\alpha)}$, and therefore $g_{t+2}^{(\alpha)}=g_{t}^{(\alpha)}*g_{2}^{(\alpha)}\leq Cg_{t+1}^{(\alpha)}$.
Having these estimates, we deduce in the same way as in \cite[Section 3]{KK14} that \begin{equation}\label{phik} |\Phi_t^{\circledast k} (x,y)|\leq\frac{ C_0(C t)^{k-1}}{k!} \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). \end{equation} Therefore, the series \eqref{Psi} converges absolutely for $(t,x,y)\in (0,\infty)\times \mathds{R}\times \mathds{R}$, and $$ |\Psi_t(x,y)|\leq C \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). $$ Applying once again the sub-convolution property \eqref{sub10}, we see that the convolution $p^0\circledast \Psi$ is well defined, and $$ |\big(p^0\circledast \Psi\big)_t(x,y)|\leq C \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). $$ Thus, the expression \eqref{p10} for $p_t(x,y)$ is well defined for any $(t,x,y)\in (0,\infty)\times \mathds{R}\times \mathds{R}$, and $$ |p_t(x,y)|\leq C \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). $$ Finally, to get \eqref{ptx} we use the following inequalities, which were proved in \cite[Appendix B]{KK14}: \begin{equation}\label{flow} c |\theta_t(y)-x|\leq |\chi_t(x)-y|\leq C |\theta_t(y)-x|. \end{equation} Since for any constant $c>0$ we have $g_t^{(\alpha)}(x)\asymp g_t^{(\alpha)}(c x)$ for any $t\in (0,T]$ and $x\in \mathds{R}$, this completes the proof of \eqref{ptx}. Our final step is to use representation \eqref{p10} in order to find the bounds for $\partial_t p_t(x,y)$. Since $p_t^0(x,y)$ and $\Phi_t(x,y)$ are given explicitly, it is straightforward to show that these functions are differentiable with respect to $t$, and to check using \eqref{ga9}--\eqref{ga92} that \begin{equation}\label{p20} \big| \partial_t p_t^0(x,y)\big|\leq C t^{-1/\alpha} g_t^{(\alpha)}(\theta_t(y)-x), \end{equation} \begin{equation}\label{tphi} |\partial_t \Phi_t(x,y)|\leq C t^{-1/\alpha}\big( g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x).
\end{equation} To show that the convolution powers $\Phi_t^{\circledast k}(x,y)$ are differentiable in $t$ and to get the upper bounds, we use the following trick. The expression for $\Phi_t^{\circledast (k+1)}(x,y)$ can be re-organized as follows: \begin{equation}\label{44} \begin{split} \Phi^{\circledast (k+1)}_t(x,y)&=\int_0^{t}\int_{\mathds{R}}\Phi_{t-s}^{\circledast k}(x,z)\Phi_s(z,y)\,dzds\\ &=\int_0^{t/2}\int_{\mathds{R}}\Phi_{t-s}^{\circledast k}(x,z)\Phi_s(z,y)\,dz ds+\int_0^{t/2}\int_{\mathds{R}}\Phi_{s}^{\circledast k}(x,z)\Phi_{t-s}(z,y)\,dz ds. \end{split} \end{equation} If $k=1$, the first line in \eqref{44} does not allow us to differentiate $\Phi^{\circledast 2}_t(x,y)$ in $t$, because the upper bound for $\partial_t\Phi_{t-s}(x,z)$ has a non-integrable singularity $(t-s)^{-1/\alpha}$ in the vicinity of the point $s=t$ (recall that $\alpha<1$). However, the identity given by the second line in \eqref{44} does not contain such singularities, and we can show using induction that for any $k\geq 1$ the function $\Phi^{\circledast k}_t(x,y)$ is continuously differentiable in $t$, satisfies \begin{align*} \partial_t\Phi^{\circledast (k+1)}_t(x,y)&=\int_0^{t/2}\int_{\mathds{R}}(\partial_t\Phi^{\circledast k})_{t-s}(x,z)\Phi_s(z,y)\,dz ds+ \int_0^{t/2}\int_{\mathds{R}}\Phi_{s}^{\circledast k}(x,z)(\partial_t\Phi)_{t-s}(z,y)\,dz ds\\ &\quad +\int_{\mathds{R}}\Phi_{t/2}^{\circledast k}(x,z)\Phi_{t/2}(z,y)\, dz, \end{align*} and possesses the bound \begin{equation}\label{phikd} |\partial_t\Phi_t^{\circledast k} (x,y)|\leq \frac{ C_0(Ct)^{k-1} t^{-1/\alpha}}{k!} \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big) (\theta_t(y)-x), \quad\quad k\geq 1. \end{equation} Since the proof is completely analogous to the proof of \cite[Lemma~7.3]{KK14}, we omit the details.
From \eqref{phikd} we derive the following bound for the derivative of $\Psi_t(x,y)$: \begin{equation}\label{phikd2} |\partial_t\Psi_t (x,y)|\leq C t^{-1/\alpha} \big(g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big) (\theta_t(y)-x). \end{equation} Re-organizing representation (\ref{p10}) in the same way as (\ref{44}), we get $$ p_t(x,y) =p_t^0(x,y)+\int_0^{t/2}\int_{\mathds{R}}p_{t-s}^0(x,z)\Psi_s(z,y)\,dzds+\int_{0}^{t/2}\int_{\mathds{R}}p_{s}^0(x,z)\Psi_{t-s}(z,y)\,dz\, ds. $$ Using the above representation of $p_t(x,y)$ together with \eqref{p20} and \eqref{phikd2}, we derive the existence of the continuous derivative $\partial_t p_t(x,y)$, which satisfies the inequality $$ |\partial_t p_t(x,y)|\leq C t^{-1/\alpha}\big( g_{t+1}^{(\alpha)}+ g_t^{(\alpha)}\big)(\theta_t(y)-x). $$ Using estimates (\ref{flow}) in the same way as in the proof of (\ref{ptx}), we can change the argument $\theta_t(y)-x$ in the right hand side of the above estimate to $y-\chi_t(x)$, which completes the proof of (\ref{ptx1}). \end{proof} \section{Application: the price of an occupation time option}\label{s4} In this section, we consider an \emph{occupation time option} (see \cite{Linetsky}), with the price of the option depending on the time spent by an asset price process in a given set. Compared to standard barrier options, which are activated or cancelled when the asset price process hits some definite level (barrier), the payoff of an occupation time option depends on the time during which the asset price process stays below or above such a barrier.
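Numerically, the occupation time of a sampled path below a barrier is approximated by a left-endpoint Riemann sum, which is exactly the type of discretization $I_{T,n}(h)$ analyzed in the previous sections. A small Python sketch (ours; the helper names are hypothetical):

```python
import math

def occupation_time_riemann(path, T, barrier):
    """Left-endpoint Riemann sum (T/n) * sum_{k=0}^{n-1} 1{path[k] <= barrier}
    for a path sampled at t_k = k*T/n, k = 0, ..., n."""
    n = len(path) - 1
    return (T / n) * sum(1 for x in path[:-1] if x <= barrier)

def down_and_out_occ_payoff(S_path, T, L, K, rho):
    """Discretized payoff exp(-rho * occupation time below L) * (S_T - K)_+."""
    occ = occupation_time_riemann(S_path, T, L)
    return math.exp(-rho * occ) * max(S_path[-1] - K, 0.0)
```

For a path that stays below the barrier on all of $[0,T]$, the Riemann sum returns the full horizon $T$, so the payoff is discounted by the factor $e^{-\rho T}$.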
For instance, for the strike price $K$, the barrier level $L$, and the knock-out rate $\rho$, the payoff of a down-and-out call occupation time option equals $$ \exp \left( - \rho \int_0^T \mathbb{I}_{\{S_t \leq L\}} dt \right) (S_T - K)_+, $$ and the price $\mathbf{C}(T)$ of the option is defined as $$ \textbf{C}(T) = \exp (-rT) E \left[ \exp \left( - \rho \int_0^T \mathbb{I}_{\{S_t \leq L\}} dt \right) (S_T - K)_+ \right], $$ where $r$ is the risk-free interest rate (see \cite{Linetsky}). Assume that the price of an asset $S=\{S_t, t\geq 0\}$ is of the form $$ S_t = S_0 \exp(X_t), $$ where $X$ is the Markov process studied in the previous sections. Then the time spent by the process $S$ in a set $J\subset \mathbb{R}$ equals the time spent by $X$ in the set $J'=\log J$. Let us approximate the price $\textbf{C}(T)$ of our option by $$ \textbf{C}_n(T) = \exp (-rT) E \left[ \exp \left( - \frac{\rho T}{n} \sum_{k=0}^{n-1} \mathbb{I}_{\{S_{kT/n} \leq L\}} \right) (S_T - K)_+ \right]. $$ Then using the results from the previous sections we can control the accuracy of this approximation. First we apply Theorem \ref{t1} and derive the strong approximation rate. \begin{prop}\label{p41} Suppose that the process $X$ satisfies condition \textbf{X}, and assume that there exists $\lambda>1$ such that $G(\lambda):= E \exp (\lambda X_T) = E (S_T)^\lambda < + \infty$. Then $$ \Big|\textbf{C}_n(T)- \textbf{C}(T)\Big| \leq \exp(-rT) \rho G(\lambda)^{1/\lambda} C_{T,\lambda/(\lambda-1)} (D_{T,\beta} (n))^{1/2}, $$ where the constants $C_{T,\lambda/(\lambda-1)}$ and $D_{T,\beta} (n)$ are given by \eqref{DC}. \end{prop} \begin{proof} The proof is a simple corollary of Theorem \ref{t1}. Denote $h(x)=\rho \mathbb{I}_{x\leq \log L}$. Then, keeping the notation of Section \ref{s2}, we get $$ \textbf{C}(T)=e^{-rT}Ee^{-I_T(h)}(S_T- K)_+, \quad \textbf{C}_n(T)=e^{-rT}Ee^{-I_{T,n}(h)}(S_T- K)_+.
$$ By the H\"older inequality with $p=\lambda$ and $q=\lambda/(\lambda-1)$, $$ \Big|\textbf{C}_n(T)- \textbf{C}(T)\Big| \leq e^{-rT} \Big(E(S_T- K)_+^\lambda\Big)^{1/\lambda}\left(E\left|e^{-I_T(h)}-e^{-I_{T,n}(h)}\right|^{\lambda/(\lambda-1)}\right)^{(\lambda-1)/\lambda}. $$ Since for positive $a$ and $b$ we have $|e^{-a}-e^{-b}|\leq |a-b|$, we obtain $$ \Big|\textbf{C}_n(T)- \textbf{C}(T)\Big| \leq e^{-rT} G(\lambda)^{1/\lambda} \left(E\left|I_T(h)-I_{T,n}(h)\right|^{\lambda/(\lambda-1)}\right)^{(\lambda-1)/\lambda}, $$ and thus the required statement follows directly from Theorem~\ref{t1} with $p=\lambda/(\lambda-1)$. \end{proof} We can also control the accuracy of the approximation using the weak rate bound from Theorem \ref{t2}. Observe that the bound given below is sharper than the one obtained in the previous proposition precisely when $\lambda>2$. \begin{prop} Under the assumptions of Proposition~\ref{p41}, we have $$ \Big|\textbf{C}_n(T)- \textbf{C}(T)\Big| \leq 2^{\beta\vee 2+1}\max\{B_{T,X} \rho T^2 (1+\rho T) \exp(\rho T), G(\lambda) \} \exp (-rT) \widetilde{D}_{T, \beta} (n), $$ where $$ \widetilde{D}_{T,\beta} (n) = \begin{cases} n^{-(1-1/\lambda)} \log n,& \beta=1,\\ \max\left(1,\frac{T^{1- \beta}}{\beta-1}\right) n^{-(1-1/\lambda)/\beta},& \beta>1. \end{cases} $$ \end{prop} \begin{proof} For some $N>0$ denote $$\begin{aligned} &\textbf{C}^N(T) = \exp (-rT) E \left[ \exp \left( - \rho \int_0^T \mathbb{I}_{\{S_t \leq L\}} dt \right) ((S_T - K)_+ \wedge N) \right],\\& \textbf{C}^N_n(T) = \exp (-rT) E \left[ \exp \left( - \frac{\rho T}{n} \sum_{k=0}^{n-1} \mathbb{I}_{\{S_{kT/n} \leq L\}} \right) ((S_T - K)_+ \wedge N) \right].
\end{aligned}$$ Then \begin{equation} \label{exmpl_bound} \Big|\textbf{C}_n(T)- \textbf{C}(T)\Big| \leq \Big|\textbf{C}^N_n(T)- \textbf{C}^N(T)\Big| + \Big|\textbf{C}(T)- \textbf{C}^N(T)\Big| + \Big|\textbf{C}_n(T)- \textbf{C}^N_n(T)\Big|. \end{equation} We estimate each term separately. Using Theorem \ref{t2} and the Taylor expansion of the exponential, we derive \begin{align*} \Big|\textbf{C}^N_n(T)- \textbf{C}^N(T)\Big| &\leq 2^{\beta\vee 2}B_{T,X} NT\exp (-rT) \sum_{k=1}^{\infty} \frac{\rho^k}{k!} k^2 T^{k} D_{T,\beta} (n)\\ &= 2^{\beta\vee 2}B_{T,X} \rho N T^2 (1+\rho T) \exp(\rho T-rT) D_{T,\beta} (n). \end{align*} For the last two terms we get \begin{align*} \Big|\textbf{C}(T)- \textbf{C}^N(T)\Big| + \Big|\textbf{C}_n(T)&- \textbf{C}^N_n(T)\Big| \leq 2 \exp (-rT) E\left[(S_T - K)_+ -(S_T - K)_+ \wedge N \right] \\ & \leq 2 \exp (-rT) E [S_T \mathbb{I}_{\{S_T>N\}}] \\ &\leq 2 \exp (-rT) E\left[ \frac{S_T^{\lambda}\mathbb{I}_{\{S_T>N\}}}{N^{\lambda-1}} \right] \leq \frac{2G(\lambda)}{N^{\lambda-1}} \exp (-rT). \end{align*} To complete the proof, put $N = n^{1/(\beta \lambda)}$. \end{proof} \begin{thebibliography}{99} \bibitem{BGR14} K.\ Bogdan, T.\ Grzywny, M.\ Ryznar. Density and tails of unimodal convolution semigroups. \emph{J. Funct. Anal.} \textbf{266(6)} (2014), 3543--3571. \bibitem{Dynkin} E.\ B.\ Dynkin. \emph{Markov processes}. Fizmatgiz, Moscow, 1963 (in Russian). \bibitem{kul-gan} Iu.\ Ganychenko, A.\ Kulik. Rates of approximation of nonsmooth integral-type functionals of Markov processes. \emph{Modern Stochastics: Theory and Applications} \textbf{2} (2014), 117--126. \bibitem{Gobet} E.\ Gobet, C.\ Labart. Sharp estimates for the convergence of the density of the Euler scheme in small time. \emph{Elect. Comm. in Probab.} \textbf{13} (2008), 352--363. \bibitem{Guerin} H.\ Guerin, J.-F.\ Renaud.
Joint distribution of a spectrally negative L\'evy process and its occupation time, with step option pricing in view. Available at \href{http://arxiv.org/pdf/1406.3130v1.pdf}{http://arxiv.org/pdf/1406.3130v1.pdf} \bibitem{KS13a} K.\ Kaleta, P.\ Sztonyk. Small time sharp bounds for kernels of convolution semigroups. To appear in \emph{J. Anal. Math.} \bibitem{KS13b} K.\ Kaleta, P.\ Sztonyk. Estimates of transition densities and their derivatives for jump L\'evy processes. To appear in \emph{J. Math. Anal. Appl.} \bibitem{KK12a} V.\ Knopova, A.\ Kulik. Intrinsic small time estimates for distribution densities of L\'evy processes. \emph{Random Op. Stoch. Eq.} \textbf{21(4)} (2013), 321--344. \bibitem{K13} V.\ Knopova. Compound kernel estimates for the transition probability density of a L\'evy process in $\mathbb{R}^n$. \emph{Th. Probab. Math. Stat.} \textbf{89} (2014), 57--70. \bibitem{KK14} V.\ Knopova, A.\ Kulik. Parametrix construction of the transition probability density of the solution to an SDE driven by $\alpha$-stable noise. Preprint 2014. Available at \href{http://arxiv.org/pdf/1412.8732v1.pdf}{http://arxiv.org/pdf/1412.8732v1.pdf} \bibitem{Kohatsu-Higa} A.\ Kohatsu-Higa, A.\ Makhlouf, H.\,L.\ Ngo. Approximations of non-smooth integral type functionals of one dimensional diffusion processes. \emph{Stoch. Proc. Appl.} \textbf{124(5)} (2014), 1881--1909. \bibitem{KR15} T.\ Kulczycki, M.\ Ryznar. Gradient estimates of harmonic functions and transition densities for L\'evy processes. To appear in \emph{Trans. Amer. Math. Soc.} \bibitem{kul-gan2} A.\ Kulik, Iu.\ Ganychenko. On accuracy of weak approximations of integral-type functionals of Markov processes. Preprint 2015 (in Ukrainian). \bibitem{Linetsky} V.\ Linetsky. Step options. \emph{Math. Finance} \textbf{9(1)} (1999), 55--96. \bibitem{M12} A.\ Mimica. Heat kernel upper estimates for symmetric jump processes with small jumps of high intensity. \emph{Poten. Anal.} \textbf{36(2)} (2012), 203--222. \bibitem{PT69} W.\,E.
Pruitt, S.\,J.\ Taylor. The potential kernel and hitting probabilities for the general stable process in $\mathbb{R}^n$. \emph{Trans. Am. Math. Soc.} \textbf{146} (1969), 299--321. \bibitem{RS10} J.\ Rosinski, J.\ Sinclair. Generalized tempered stable processes. In: \emph{Stability in Probability}, Ed. J.\,K.\ Misiewicz, Banach Center Publ. \textbf{90} (2010), 153--170. \bibitem{St10a} P.\ Sztonyk. Estimates of tempered stable densities. \emph{J. Theor. Probab.} \textbf{23(1)} (2010), 127--147. \bibitem{St10b} P.\ Sztonyk. Transition density estimates for jump L\'evy processes. \emph{Stoch. Proc. Appl.} \textbf{121} (2011), 1245--1265. \bibitem{Sznitman} A.\ Sznitman. \emph{Brownian motion, obstacles and random media}. Springer, Berlin, 1998. \bibitem{W07} T.\ Watanabe. Asymptotic estimates of multi-dimensional stable densities and their applications. \emph{Trans. Am. Math. Soc.} \textbf{359(6)} (2007), 2851--2879. \end{thebibliography} \end{document}
\begin{document} \publicationdetails{19}{2018}{2}{3}{3286} \title{Improving bounds on packing densities of 4-point permutations} \input struct.tex \input perms.tex \input constr.tex \begin{abstract} We consolidate what is currently known about packing densities of 4-point permutations and in the process improve the lower bounds for the packing densities of 1324 and 1342. We also provide rigorous upper bounds for the packing densities of 1324, 1342, and 2413. All our bounds are within $10^{-4}$ of the true packing densities. Together with the known bounds, this gives us a fairly complete picture of all 4-point packing densities. We also list a number of upper bounds for small permutations of length at least five. Our main tool for the upper bounds is the framework of flag algebras introduced by Razborov in 2007. \end{abstract} \section{Introduction} \label{sec:intro} In this paper, we study packing densities of small permutations. A \emph{permutation} of length $n$ is an ordered tuple containing each integer from $\{1,\ldots,n\}$ exactly once. We say that $S = S[1]S[2]\cdots S[m] $ is a \emph{sub-permutation} of $P=P[1]P[2]\cdots P[n]$ if there exists an $m$-subset $\{k_1,\ldots,k_m\}$ of $\{1,\ldots,n\}$ such that for all $1 \leq i,j \leq m$, $S[i] < S[j]$ whenever $P[k_i] < P[k_j]$. We denote by $\#(S,P)$ the number of occurrences of $S$ as a sub-permutation of $P$. Let $\ensuremath{\mathcal{P}}_n$ be the set of all permutations of length $n$.
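For small cases, $\#(S,P)$ can be computed by brute force over all $m$-subsets of positions; the following Python sketch (ours, not part of the paper) does exactly that:

```python
from itertools import combinations

def occurrences(S, P):
    """Count the occurrences #(S, P) of S as a sub-permutation of P."""
    m = len(S)
    count = 0
    for idx in combinations(range(len(P)), m):
        # the chosen positions realize S iff the relative orders agree
        if all((P[idx[i]] < P[idx[j]]) == (S[i] < S[j])
               for i in range(m) for j in range(m)):
            count += 1
    return count
```

For example, \texttt{occurrences((1,2), (1,3,2))} returns 2, since the pairs 13 and 12 are increasing while 32 is not.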
Writing $\#(S,n) = \max_{P \in \ensuremath{\mathcal{P}}_n}\#(S,P)$, the \emph{packing density} of a permutation $S$ of length $m$ is defined as $p(S) = \lim_{n\to\infty} \#(S,n)/\binom{n}{m}.$ \begin{table}[ht] \centering \begin{tabular}{|c | c | c | c | c|} \hline $\mathbf{S}$ & \textbf{lower bound} & \textbf{ref LB} & \textbf{upper bound} & \textbf{ref UB}\\ \hline\hline 1234 & 1 & trivial & 1 & trivial\\ \hline 1432 & $\beta$ & \cite{price1997packing} & $\beta$ & \cite{price1997packing}\\ \hline 2143 & $3/8$ & trivial & 3/8 & \cite{price1997packing}\\ \hline 1243 & $3/8$ & trivial & 3/8 & \cite{albert2002packing}\\ \hline 1324 & $0.244^*$ & \cite{price1997packing} & $ -^* $ & \cite{price1997packing}\\ \hline 1342 & $\gamma^*$ & \cite{batkeyev} & $0.1988373^*$ & \cite{balogh2015minimum}\\ \hline 2413 & $\approx 0.104724$ & \cite{presutti2010packing} & $0.1047805^*$ & \cite{balogh2015minimum}\\ \hline \end{tabular} \caption{\small{Overview of packing densities for 4-point permutations. Values $\beta$ and $\gamma$ are known exactly: $\beta = 6\sqrt[3]{\sqrt{2}-1}-6/\sqrt[3]{\sqrt{2}-1}+4 \sim 0.423570$, $\gamma = (2\sqrt{3}-3)\beta \sim 0.19657960$. We know that the packing density of 1324 is close to 0.244 but there is no non-trivial upper bound. The items with an ($^*$) asterisk will be updated by the current work.}} \label{tab:overview} \end{table} The study of permutation packing densities began with Wilf's 1992 SIAM address. Galvin (unpublished) soon rediscovered the averaging argument of~\cite{katona1964exists}, thus proving that $p(S)$ exists for all permutations $S$. The original argument was in the setting of graph theory. In 1993, Stromquist, and independently Galvin and Kleitman (both unpublished), found the packing density of 132. Up to symmetry, 132 is the only permutation of length 3 with a non-trivial packing density. For 4-point permutations and their packing densities, it is useful to consult Table~\ref{tab:overview}.
First results for 4-point permutations, including $1324$, $1432$, and $2143$, came as part of the investigation of various \emph{layered patterns} by~\cite{price1997packing}. Later,~\cite{albert2002packing} proved a tight upper bound for 1243, and upper bounds of $2/9$ for both 2413 and 1342. The current lower bound for the packing density of 2413 was given by~\cite{presutti2010packing}. The upper bounds of 0.1047805 and 0.1988373 for 2413 and 1342, respectively, are mentioned in passing in~\cite{balogh2015minimum}. They do not discuss them any further. It is worthwhile to point out that~\cite{balogh2015minimum} used flag algebras to attack the packing density problem for monotone sequences of length 4. To the best of our knowledge, the only other application of flag algebras to permutation packing, although indirect, is by~\cite{falgas2013applications}. They obtained the inducibility (as packing density is referred to in graph theory) of a 2-star directed graph $\outstar$. Their result implies the known upper bound for the packing density of 132. Later,~\cite{huang2014stars} used an argument exploiting equivalence classes of vertices to extend the result to all directed $k$-stars. This argument was known in the permutations setting since~\cite{price1997packing} used it to establish the packing densities $p(1k\ldots 2)$ for all $k$. Similarly, although the flag algebra software package Flagmatic, written by~\cite{flagmatic}, has been available since 2013, it has not previously been used to obtain an upper bound on the packing density of 1324. Therefore, we decided to use the flag algebras method to collect, enhance, and improve results in permutation packing densities. In addition to the mathematical content, we make available a flag algebras package for permutations, \href{http://jsliacan.github.io/permpack/}{Permpack}, written as a \href{http://sagemath.org}{Sage} script. For more information about the software, follow~\cite{sagemath}.
It does all our computations and can be used for further research. Permpack uses syntax similar to Flagmatic, but requires no installation. We hope this makes it more user-friendly. The rest of this paper is structured as follows. The aim of Section~\ref{sec:defs} is to introduce notation and concepts, including the part of flag algebras that we need. While~\cite{razborov2007original} presented flag algebras in the general setting of a universal model theory without constants and function symbols, we choose permutations to be the structures on which we base our exposition. Section~\ref{sec:mainresults} presents the main results of this paper. We use flag algebras to provide upper bounds for the packing densities of 4-point permutations 1324, 1342, and 2413. We learnt belatedly about the existence of the latter two bounds from~\cite{balogh2015minimum}. Regarding lower bounds, we give a new lower bound construction for the packing density of 1342 that meets our upper bound to within $10^{-5}$. In the case of 1324, we provide a lower bound that agrees with the upper bound on the first five decimal places. Section~\ref{sec:packingsmall} gives a list of selected upper bounds to illustrate the potential of the flag algebras method in the area of permutation packing. These results are not best possible, but can be obtained effortlessly by using our flag algebras package Permpack. \section{Definitions and concepts} \label{sec:defs} A \emph{pattern} of length $k$, where $k \leq n$, is a $k$-tuple of distinct integers from $[n] :=\{1,\ldots,n\}$. A pattern of length $n$ is called a \emph{permutation}. We write tuples as strings: 1324 stands for $(1,3,2,4)$. Two patterns $P$ and $S$ of length $k$ are \emph{identical} if $P[i] = S[i]$ for all $i \in [k]$. They are \emph{order-isomorphic} if for all pairs of indices $i,j$, it holds that $P[i] < P[j]$ implies $S[i] < S[j]$.
For a set $I = \{i_1,\ldots,i_m\}$ of $m$ indices from $[n]$, the \emph{sub-pattern} $P[I]$ is the $m$-tuple $P[i_1]P[i_2]\cdots P[i_m]$. By overloading the notation slightly, we also use $P[I]$ to refer to the \emph{subpermutation} of length $m$ which is order-isomorphic to the sub-pattern $P[I]$. A \emph{decreasing (increasing) permutation} of length $k$ is the $k$-tuple $k\ldots321$ ($123\ldots k$). A permutation $P$ is \emph{layered}, if it is an increasing sequence of decreasing permutations. To be exact, a layered permutation $P$ is a concatenation of smaller permutations $P = P_1P_2\ldots P_\ell$ such that for all $1 \leq i \leq \ell$, $P_i$ is a decreasing sequence of consecutive integers satisfying the following: if $x \in P_i$ and $y \in P_j$ with $i<j$, then $x<y$. For instance, 321465987 can be partitioned as $321|4|65|987$, so it is layered. On the other hand, 2413 is not layered. Given $S$ and $P$ of lengths $m$ and $n$, respectively, we let $\#(S, P)$ denote the number of times that $S$ occurs as a subpermutation of $P$. The \emph{density} of $S$ in $P$ is $$p(S,P) = \frac{\#(S,P)}{\binom{n}{m}}.$$ If $n < m$, we set $p(S,P) = 0$. Intuitively, $p(S,P)$ is the probability that a random $m$-set of positions from $[n]$ induces a pattern in $P$ that is order-isomorphic to $S$. For example, $p(12, 132) = 2/3$ as both 13 and 12 are order-isomorphic to 12 while 32 is not. Let $\ensuremath{\mathcal{F}}$ be a set of \emph{forbidden} permutations. We say that permutation $P$ is \emph{$\ensuremath{\mathcal{F}}$-free} if $\#(F,P) = 0$ for all $F \in \ensuremath{\mathcal{F}}$. Such $P$ is also said to \emph{avoid} $\ensuremath{\mathcal{F}}$ or be \emph{admissible}. We denote by $\ensuremath{\mathcal{P}}_n$ the set of all \emph{admissible} permutations of length $n$. It will always be clear from context what $\ensuremath{\mathcal{F}}$ is. 
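The quantities $\#(S,P)$ and $p(S,P)$ defined above can be computed directly by brute force. The following is a minimal sketch (the function names are ours, not part of any package in this paper):

```python
from itertools import combinations
from math import comb

def order_isomorphic(A, B):
    """Two patterns of equal length are order-isomorphic iff all pairwise
    comparisons of entries agree."""
    return all((A[i] < A[j]) == (B[i] < B[j])
               for i, j in combinations(range(len(A)), 2))

def occurrences(S, P):
    """#(S, P): the number of index sets whose sub-pattern of P is
    order-isomorphic to S."""
    return sum(order_isomorphic([P[i] for i in I], S)
               for I in combinations(range(len(P)), len(S)))

def density(S, P):
    """p(S, P) = #(S, P) / C(n, m), with p(S, P) = 0 when P is shorter than S."""
    n, m = len(P), len(S)
    return occurrences(S, P) / comb(n, m) if n >= m else 0

# Example from the text: p(12, 132) = 2/3, since 13 and 12 induce the
# pattern 12, while 32 does not.
```

This exhaustive enumeration is only practical for short permutations, which is all that the flag algebra computations below require.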
If $\ensuremath{\mathcal{F}} = \emptyset$, then the admissible set $\ensuremath{\mathcal{P}}_n$ is the set of all permutations of length $n$. Notice that if $P$ is admissible, then so are all its subpermutations. Most of the work in this paper concerns the case when $\ensuremath{\mathcal{F}} = \emptyset$. However, the setting remains the same whenever $\ensuremath{\mathcal{F}}$ is non-empty, and we provide a few examples to this effect. \begin{figure} \caption{\small Permutations in $\ensuremath{\mathcal{P}}_3$.} \label{fig:exP3} \end{figure} Let $P \in \ensuremath{\mathcal{P}}_n$ and $S \in \ensuremath{\mathcal{P}}_m$ be admissible permutations, and assume $m \leq n$. The maximum value of $p(S,P)$ over $P \in \ensuremath{\mathcal{P}}_n$ is denoted by $p(S,n)$. A permutation $P$ such that $p(S,P) = p(S,n)$ is an $S$-\emph{maximiser} of length $n$. It is well-known that for every $S$, the sequence $\left(p(S,n)\right)_{n\geq 0}$ converges to a value in $[0,1]$ because it is non-increasing and stays between 0 and 1; see~\cite{katona1964exists}. We are now ready to define the quantity that we study, packing density. \begin{definition} Let $S$ be a fixed permutation and $\ensuremath{\mathcal{P}} = \cup_{n\geq 1}\ensuremath{\mathcal{P}}_n$ the set of admissible permutations. The \emph{packing density} of $S$ is $$p(S) = \lim_{n\to\infty}p(S,n).$$ \end{definition} For example, the packing density of 12 in 123-free permutations is $1/2$. Notice that every maximiser of size $n$ has at most two layers. It is then easy to see that they should be of balanced sizes for the packing density to be maximised, i.e.~$\lfloor n/2 \rfloor$ and $\lceil n/2 \rceil$. Let $P_n$ be such a balanced 2-layered maximiser of length $n$. Clearly, $p(12, P_n) \to 1/2$ as $n\to\infty$. We now formalise the ideas about asymptotic quantities and objects that the discussion is leading to. Let $(P_n)_n = P_1,P_2,P_3,\ldots$ be a sequence of permutations of increasing lengths.
We say that $(P_n)_n$ is \emph{convergent} if for every permutation $S$, $(p(S,P_n))_{n=1}^\infty$ converges. A \emph{permuton} $\mu$ is a probability measure with uniform marginals on the Borel $\sigma$-algebra $\mathcal{B}([0,1]^2)$, i.e.~for every $a,b \in [0,1]$ with $a<b$, it holds that $\mu([a,b] \times [0,1]) = b-a = \mu([0,1] \times [a,b])$. See examples of permutons in Figure~\ref{fig:permutons}. \begin{figure} \caption{\small Examples of permutons. In (a) we have the limit of the sequence $(1\ldots n)_{n\geq 1}$ of increasing permutations; (b) shows the Lebesgue permuton; (c) shows a 1243-maximiser.} \label{fig:permutons} \end{figure} Let $\mu$ be a permuton and $S$ a permutation on $[m]$. One can sample $m$ points from $[0,1]^2$ according to $\mu$ and with probability one they will be in general position (no two aligned vertically or horizontally). We define $p(S,\mu)$ as the probability that $m$ points sampled at random from $[0,1]^2$ according to $\mu$ induce a pattern order-isomorphic to $S$. It turns out that every convergent sequence of permutations has a limit permuton and vice versa. In particular,~\cite{hoppen2013permlimits} proved that for every convergent $(P_n)_{n\geq 0}$ there exists a unique permuton $\mu$ such that for every $S$, $p(S,\mu) = \lim_{n\to\infty}p(S,P_n)$. In this sense, $\mu$ is the limit of the sequence $(P_n)_n$. In the other direction, they proved that if $\mu$ is a permuton and $P_n$ is a permutation of length $n$ sampled at random according to $\mu$ from $[0,1]^2$, then with probability one the sequence $(P_n)_n$ is convergent (with $\mu$ as its limit). The concept of permutation limits had appeared earlier as ``packing measures'', which~\cite{presutti2010packing} used for constructing the 2413 lower bound. In the current work, we use permutons mainly to describe extremal constructions that yield our lower bounds.
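Sampling a permutation from a permuton is straightforward to simulate. A minimal sketch (our own naming, assuming the Lebesgue permuton, i.e.~the uniform measure on $[0,1]^2$): sort the sampled points by their first coordinate and read off the ranks of the second coordinates. For the Lebesgue permuton, two sampled points induce the pattern 12 with probability exactly $1/2$, which the Monte Carlo estimate below approximates.

```python
import random

def sample_permutation(mu_sample, m, rng):
    """Sample m points via mu_sample(rng) -> (x, y), sort by x, and return
    the permutation given by the ranks of the y-coordinates."""
    pts = sorted(mu_sample(rng) for _ in range(m))
    ys = [y for _, y in pts]
    ranks = {y: r for r, y in enumerate(sorted(ys), start=1)}
    return tuple(ranks[y] for y in ys)

def lebesgue(rng):
    """One draw from the Lebesgue permuton: uniform on the unit square."""
    return (rng.random(), rng.random())

# Monte Carlo estimate of p(12, Lebesgue); the exact value is 1/2.
rng = random.Random(42)
trials = 20000
hits = sum(sample_permutation(lebesgue, 2, rng) == (1, 2) for _ in range(trials))
estimate = hits / trials
```

Replacing `lebesgue` with a sampler for any other permuton estimates $p(S,\mu)$ in the same way.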
\subsection{Flag Algebras} \label{sec:FA} The term \emph{flag algebras} refers to a framework first introduced by~\cite{razborov2007original}. It proved to be a very useful tool for researchers in extremal graph theory, but found use in other fields as well. For an overview of results aided by flag algebras, see Razborov's own survey~\cite{razborov2013interim}. For more extensive expositions, see the PhD theses of~\cite{sperfeld2012thesis} and~\cite{volec2014thesis}. By now, there are also many papers with explanations and examples such as~\cite{babertalbot2011jump}, \cite{pikhurko2015neighbourhoods}, \cite{falgasvaughan2012densities}, \cite{falgas2013applications}. For a long list of important results across disciplines of discrete mathematics that were aided by flag algebras, see the above-mentioned theses, especially Chapter 1 of~\cite{volec2014thesis}. The main flag algebra result in permutations is~\cite{balogh2015minimum}. In their work on quasirandom permutations,~\cite{kral2013quasirandom} mention flag algebras as another way to think about the subject. It is important to note that the method of flag algebras has evolved from other combinatorial and analytic methods in combinatorics which had been used by researchers for a long time. The Cauchy--Schwarz type arguments can be found in e.g.~work by~\cite{bondy1997cs} as early as the 1990s. The ideas pertinent to quasirandomness have been around since~\cite{chung1988quasirandom}. And while there are other analytic methods that were used successfully to attack extremal problems in combinatorics, the method of flag algebras is syntactical and lends itself to automation. The syntax-based nature of flag algebras is the main feature that distinguishes the theory of flag algebras from the theory of dense graph limits (see e.g.~\cite{lovasz2012networks}). The crux of the method is a systematic conversion of the combinatorial problem into a semidefinite programming problem.
The latter can be solved (efficiently) by current SDP solvers. The numerical values returned by the SDP solvers then need to be transformed to exact values (rational or algebraic) to provide valid upper bounds on packing densities. Before we delve into the method itself, let us consider an example from before. For the remainder of this section, assume that all objects (permutations, flags, types) are admissible unless stated otherwise. Now, assume that we are looking for $123$-free permutations $P$ that are as $12$-dense as possible. We get the following bound without much effort. \begin{equation} \begin{aligned} p(12, P) &= \underbrace{p(12, 123)p(123,P)}_{ = 0} +\ p(12, 132)p(132,P) + p(12, 213)p(213,P)\\ &+ p(12, 231)p(231,P) + p(12, 312)p(312,P) + p(12,321)p(321,P)\\ &\leq \max \left\{\frac{2}{3}, \frac{2}{3}, \frac{1}{3}, \frac{1}{3}, 0\right\} = \frac{2}{3} \label{ex:mantel} \end{aligned} \end{equation} This is strictly better than the trivial bound of 1. However, observe that there is no $P$ of length greater than 4 such that $p(132,P) + p(213,P) = 1$. This follows from the Erd\H{o}s--Szekeres theorem (adapted to permutations), which states that a permutation of length $(r-1)(s-1)+1$ contains either an increasing subpermutation of length $r$ or a decreasing subpermutation of length $s$. Hence, a permutation of length 5 contains 123 or 321 as a subpermutation. So there are always subsets of size 3 in $P$ which do not induce 132 or 213. Therefore, the bound of $2/3$ is unachievable in practice. Knowing this, it would be useful to be able to control how copies of small permutations, such as 132 and 213, interact inside larger permutations. The method of flag algebras helps us systematically take into account the ways in which small patterns overlap inside larger structures. This takes the form of extra coefficients in front of the terms $p(132,P),\ldots,p(321,P)$ in~\eqref{ex:mantel}.
If chosen well, they shift weight away from the large values like $p(12,132)$ and $p(12,213)$ and thereby reduce the maximum over all of them. In general, the process is analogous to the example above. If $S$ is a small permutation whose packing density we seek to determine, we pick a reasonably small value $N \geq |S|$. The crude bound then looks as follows. \begin{align} p(S,P) &= \sum_{P' \in \ensuremath{\mathcal{P}}_N}p(S,P')p(P',P)\notag\\ &\leq \max_{P' \in \ensuremath{\mathcal{P}}_N} p(S,P') \label{eq:crudebd} \end{align} Before we describe how exactly we leverage overlaps between small patterns, we need to define flags, types, and operations on them. \begin{definition}[Flag] \label{def:permflag} A permutation $\tau$-\emph{flag} $S^{\tau}$ is a permutation $S$ together with a distinguished subpermutation $\tau$, also called an intersection \emph{type}. \end{definition} \begin{figure} \caption{\small If $\tau = 1$ (as permutation), then there are four distinct $\tau$-flags of length two. The empty circle marks $\tau$ in each flag.} \label{fig:flags1} \end{figure} See Figure~\ref{fig:flags1} for a list of all 1-flags on two vertices. The set of all admissible $\tau$-flags of length $m$ is denoted by $\ensuremath{\mathcal{P}}_m^\tau$. If $\tau$ is the permutation of length 0 or 1, we write $\ensuremath{\mathcal{P}}_m^0$ and $\ensuremath{\mathcal{P}}_m^1$, respectively. Notice that $\ensuremath{\mathcal{P}}_m^0 = \ensuremath{\mathcal{P}}_m$. The \emph{support} $T$ of $\tau$ in $S^{\tau}$ is the set of indices of $S$ that span $\tau$ in $S^{\tau}$. We say that two permutation flags $S_1^{\tau_1}$ and $S_2^{\tau_2}$ are \emph{type}-isomorphic if $S_1 = S_2$ and if the supports of $\tau_1$ and $\tau_2$ are identical. For instance, in Figure~\ref{fig:flags1}, $S_1^1$ and $S_4^1$ are not type-isomorphic, because the support of $\tau$ in $S_1^1$ is $\{1\}$ and in $S^1_4$ it is $\{2\}$. For convenience, we set $t :=|\tau|$.
\begin{definition} Let $S^{\tau}$ be a $\tau$-flag of length $m$, $P^{\tau}$ a $\tau$-flag of length $n \geq m$. We define $\#(S^{\tau}, P^{\tau})$ to be the number of $m$-sets $M \subseteq [n]$, containing the support of $\tau$ in $P^{\tau}$, such that $P[M]$ is type-isomorphic to $S^{\tau}$. Flag density is then defined as follows: $$ p(S^{\tau},P^{\tau}) = \frac{\#(S^{\tau}, P^{\tau})}{\binom{n-t}{m-t}}.$$ \end{definition} In other words, $p(S^{\tau}, P^{\tau})$ is the probability that a uniformly at random chosen subpermutation of length $m$ from $P^{\tau}$, subject to it containing $\tau$, induces a flag type-isomorphic to $S^{\tau}$. For instance, consider the following flag densities. The empty circle denotes $\tau = 1$. \begin{center} \begin{tabular}{c c c} $p(\tauab, \tauabc) = 1,$ & $p(\abtau, \tauabc) = 0,$ & $p(\abtau, \ataubc) = 1/2$ \end{tabular} \end{center} Finally, we define the joint density of two flags, $p(S_1^{\tau}, S_2^{\tau}; P^{\tau})$, as the probability that choosing an $m_1$-set $M_1 \subseteq [n]$ such that $P[M_1]$ contains $\tau$ and choosing an $m_2$-set $M_2 \subseteq [n]$ such that $P[M_2]$ contains $\tau$ and $M_1 \cap M_2$ equals the support of $\tau$ induces $\tau$-flags $P[M_1]^{\tau}$ and $P[M_2]^{\tau}$ in $P^\tau$ which are type-isomorphic to $S_1^{\tau}$ and $S_2^{\tau}$, respectively. The following proposition turns out to be useful (Lemma 2.3 in~\cite{razborov2007original}). It says that choosing subflags with or without replacement makes no difference asymptotically. \begin{proposition} Let $S_1^{\tau}$ and $S_2^{\tau}$ be flags on $m_1$ and $m_2$ vertices. Let $n \geq m_1 + m_2 - t$ and $P^{\tau}$ be a flag on $n$ vertices. Then $$p(S_1^{\tau},P^{\tau})p(S_2^{\tau},P^{\tau}) = p(S_1^{\tau},S_2^{\tau}; P^{\tau}) + o(1),$$ where $o(1) \to 0$ as $n \to \infty$. \label{thm:littleo} \end{proposition} Let $\ell = |\ensuremath{\mathcal{P}}_m^\tau|$ and fix an order on elements of $\ensuremath{\mathcal{P}}_m^\tau$.
Let $S^\tau_i,S^\tau_j$ be $\tau$-flags from $\ensuremath{\mathcal{P}}_m^\tau$ and $P^{\tau}$ a $\tau$-flag from $\ensuremath{\mathcal{P}}_n^\tau$. Furthermore, let $\ensuremath{\mathbf{x}}$ be a vector with $i$-th entry $p(S_i^{\tau}, P^{\tau})$, and let $Q^\tau$ be a positive semi-definite matrix with dimensions $\ell \times \ell$. Then by Proposition~\ref{thm:littleo} and since $Q^\tau \succeq 0$, we have $$ 0 \leq \ensuremath{\mathbf{x}} Q^\tau \ensuremath{\mathbf{x}}^T = \sum_{i,j \leq \ell} Q^\tau_{ij}p(S_i^{\tau},S_j^{\tau}; P^{\tau}) + o(1).$$ Moreover, if we let $\sigma$ be a type of length $t$ chosen uniformly at random in $P$, averaging over $\sigma$ preserves the non-negativity: \begin{align} 0 \leq \mathbb{E}_{\sigma}\left(\ensuremath{\mathbf{x}} Q^\tau \ensuremath{\mathbf{x}}^T\right) &= \sum_{i,j \leq \ell}Q^\tau_{ij}\frac{1}{\binom{n}{t}}\sum_{\sigma \in \binom{[n]}{t}} p(S_i^{\tau}, S_j^{\tau}; P^{\sigma}) + o(1). \label{eq:fcoeffs} \end{align} Next, we write the above expression in terms of permutations on $N$ vertices. Having all information in terms of the same objects allows us to combine it together. \begin{align*} \mathbb{E}_{\sigma}\left(\ensuremath{\mathbf{x}} Q^\tau \ensuremath{\mathbf{x}}^T\right) &= \sum_{i,j \leq \ell} Q^\tau_{ij} \frac{1}{\binom{n}{t}}\sum_{\sigma \in \binom{[n]}{t}} \sum_{P' \in \ensuremath{\mathcal{P}}_N}p(S_i^{\tau}, S_j^{\tau}; (P',\sigma)) p(P',P) + o(1)\\ &= \sum_{P' \in \ensuremath{\mathcal{P}}_N}\underbrace{\left(\sum_{i,j \leq \ell} Q^\tau_{ij}\frac{1}{\binom{n}{t}}\sum_{\sigma \in \binom{[n]}{t}}p(S_i^{\tau}, S_j^{\tau}; (P',{\sigma})) \right)}_{\alpha(P', m,\tau)} p(P',P) + o(1) \end{align*} Notice that the last expression is of the form $\sum_{P' \in \ensuremath{\mathcal{P}}_N} \alpha(P',m,\tau)p(P',P)$. There is one of those for each type $\tau$ and value $m$. Every such choice will require another matrix $Q^\tau$.
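The flag densities $p(S_i^{\tau}, P^{\tau})$ that populate the vector $\ensuremath{\mathbf{x}}$ can be computed by brute force. Below is a sketch in our own notation (not part of any package in this paper): a flag is passed as a permutation together with a 0-indexed support set, and the function reproduces the three example densities given after the definition of flag density (there, the type is a single marked point).

```python
from itertools import combinations
from math import comb

def pattern(values):
    """Relative order of a tuple of distinct numbers, as ranks 1..m."""
    ranked = sorted(values)
    return tuple(ranked.index(v) + 1 for v in values)

def flag_density(S, S_support, P, P_support):
    """p(S^tau, P^tau): probability that a uniformly random |S|-subset of
    positions of P, containing the type support, induces a flag
    type-isomorphic to (S, S_support). Supports are 0-indexed sets."""
    n, m, t = len(P), len(S), len(P_support)
    rest = [i for i in range(n) if i not in P_support]
    hits = 0
    for extra in combinations(rest, m - t):
        M = sorted(set(P_support) | set(extra))
        sub = pattern([P[i] for i in M])          # induced subpermutation
        sup = {M.index(i) for i in P_support}     # where the type sits in M
        if sub == tuple(S) and sup == set(S_support):
            hits += 1
    return hits / comb(n - t, m - t)

# Type = the first point of 123: every 2-subset through it induces the
# flag (12, type at position 1), so that density is 1.
```

For instance, `flag_density((1, 2), {0}, (1, 2, 3), {0})` returns 1, while moving the type to the middle point of 123 gives density 1/2 for the flag with the type at position 2, matching the worked examples above.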
In practice, we first choose $N$, then take all possible pairs of $t$ and $m$ such that $N = 2m-t$. Thus once $N$ is fixed, the choice of $t$ determines the rest. Therefore, let $\alpha(P') = \sum_{\tau} \alpha(P',m,\tau)$ and recall that the expression that we are trying to minimise, subject to $Q^\tau \succeq 0$ for all $\tau$, comes from~\eqref{eq:crudebd}. By adding inequalities of the form of~\eqref{eq:fcoeffs} to~\eqref{eq:crudebd}, we obtain \begin{align} p(S,P) &= \sum_{P'\in \ensuremath{\mathcal{P}}_N}p(S,P')p(P',P) \notag\\ &\leq \sum_{P'\in \ensuremath{\mathcal{P}}_N}p(S,P')p(P',P) + \sum_{P'\in \ensuremath{\mathcal{P}}_N}\alpha(P')p(P',P)\quad \notag\\ &\leq \max_{P'\in \ensuremath{\mathcal{P}}_N}\{p(S,P') + \alpha(P')\}. \label{eq:sdp} \end{align} Problem~\eqref{eq:sdp} is a semidefinite program subject to the condition that $Q^\tau \succeq 0$ for every type $\tau$. There exist numerical solvers, such as CSDP or SDPA, that we can use. However, the solution is in the form of numerical PSD matrices. These need to be converted to exact matrices without floating-point entries in a way that preserves their PSD property and still yields a bound that we are satisfied with. Since none of our bounds is tight, we will take a shortcut in rounding. Let $Q'$ be a numerical matrix returned by the solver. Since it is positive semi-definite, it admits a Cholesky decomposition into the product of a lower triangular matrix and its transpose: $Q' = L'L'^T$. We compute this decomposition and then round the $L'$ matrices into $L$ matrices in such a way that they do not have negative entries on the diagonals. In certificates, we provide these $L$ matrices instead of $Q$ matrices. This way, one can readily check that $Q = LL^T \succeq 0$ by inspecting the diagonal entries of the $L$ matrices. \subsection{Example} \label{sec:example} The following example applies the flag algebras method by hand to the small problem of determining the packing density of 132.
We have a lower bound of $2\sqrt{3}-3 \approx 0.464101615\ldots$ given by the standard construction. Assume we want to obtain an upper bound for the packing density of 132. Let $P$ be a (large) 132-maximiser of length $n$ and let $3 \leq \ell \leq n$. By~\eqref{eq:crudebd} we get \begin{align*} p(132) &\leq p(132, P)\\ &= \sum_{P' \in \ensuremath{\mathcal{P}}_\ell}p(132,P')p(P',P)\\ &\leq \max_{P' \in \ensuremath{\mathcal{P}}_\ell} p(132,P'). \end{align*} We choose $\ell = 3$ and set $\lambda = 2\sqrt{3}-3$. Now consider \begin{align*} \Delta &= \lambda p(123,P) + (\lambda-1)p(132,P) + \lambda p(213,P) + \frac{5\lambda-3}{6}p(231,P)\\ &+ \frac{5\lambda-3}{6}p(312,P) + \lambda p(321,P). \end{align*} Adding the linear combination $\Delta$ of $P'$ densities to the previous crude upper bound improves it to $\lambda$. \begin{align*} p(132,P) &\leq \sum_{P' \in \ensuremath{\mathcal{P}}_\ell}p(132,P')p(P',P) + \Delta\\ &\leq \max_{P' \in \ensuremath{\mathcal{P}}_\ell}\{\lambda, \lambda,\lambda, \frac{5\lambda-3}{6}, \frac{5\lambda-3}{6},\lambda\}\\ &= \lambda \end{align*} The key property of $\Delta$ is that it is non-negative for all $P$, including all $P' \in \ensuremath{\mathcal{P}}_3$. Let $\sigma$ be a randomly chosen vertex out of the three available. The matrix $Q$ below is positive semi-definite and $\mathbf{x}_{P'}$ is a vector of flag densities for flags in Figure~\ref{fig:flags1}: $$\mathbf{x}_{P'} = \begin{pmatrix}p(\tauab,(P',\sigma))& p(\batau,(P',\sigma))& p(\tauba,(P',\sigma)) & p(\abtau,(P',\sigma))\end{pmatrix}.$$ \begin{align} Q &= \begin{pmatrix}0 & 0 & 0 & 0 \\ 0 & \lambda & \lambda & 3(\lambda-1)/2\\0 & \lambda & \lambda & 3(\lambda-1)/2\\ 0 & 3(\lambda-1)/2 & 3(\lambda-1)/2 & 3\lambda \end{pmatrix} \label{eq:Q} \end{align} Averaging over $\sigma$ gives the expression~\eqref{eq:delta} that makes the non-negativity of $\Delta$ apparent. 
\begin{align} \Delta &= \mathbb{E}_\sigma\left(\sum_{P' \in \ensuremath{\mathcal{P}}_3}\mathbf{x}_{P'}Q\mathbf{x}_{P'}^T\right) \geq 0 \label{eq:delta} \end{align} Therefore, we have proved that $p(132) \leq 2\sqrt{3}-3$. \subsection{Implementation} Flagmatic 2.0 was written by Emil R. Vaughan and is currently the only general implementation of Razborov's flag algebra framework which is freely available to use and modify. See~\cite{flagmatic} for more information. The project is hosted at \url{http://github.com/jsliacan/flagmatic}. Unfortunately, Flagmatic does not support permutations. For this reason, we wrote Permpack, a lightweight implementation of flag algebras on top of Sage 7.4 (see~\cite{sagemath}). It does not have all the functionality of Flagmatic but it is sufficient for basic tasks. For more information, code, and installation instructions, see \url{https://github.com/jsliacan/permpack}. Let us consider an example of how Permpack can be used on the above example of 132-packing. It will be clear from Permpack's output where the $Q$ matrix above comes from. In Permpack, one needs to specify the complexity in terms of $N$, the length of the admissible permutations in terms of which all computations are expressed. The \texttt{density\_pattern} argument specifies the permutation whose packing density we want to determine. Once permutations, types, flags, and flag products are computed, we can delegate the rest of the tasks to the solver of our choice (currently supported solvers are \texttt{csdp} and \texttt{sdpa\_dd}). The answer is a numerical upper bound on $p(132)$. It can be rounded automatically to a rational bound by the \texttt{exactify()} method of the \texttt{PermProblem} class. The certificate contains admissible permutations, flags, types, matrices $Q$ (as $L$ matrices in the Cholesky decomposition of $Q$) and the actual bound as a rational number (fraction). These are sufficient to verify the bound.
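The Cholesky-based rounding described above can be sketched in a few lines. This is a toy illustration on an invented $2\times 2$ positive definite matrix, not the actual certificate pipeline: plain Cholesky assumes positive definiteness, so singular certificate matrices such as the $Q$ in~\eqref{eq:Q} would require a pivoted variant.

```python
from fractions import Fraction
import math

def cholesky(A):
    """Plain Cholesky factorisation A = L L^T of a positive definite matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def round_factor(L, denom=10**6):
    """Round the factor entrywise to rationals; Q = L L^T is then exactly
    PSD (it is a Gram matrix), whatever the rounding error."""
    return [[Fraction(round(x * denom), denom) for x in row] for row in L]

def gram(L):
    """Reassemble Q = L L^T with exact rational arithmetic."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy numerical matrix standing in for a solver output Q'.
Q_num = [[4.0, 2.0], [2.0, 3.0]]
L_exact = round_factor(cholesky(Q_num))
Q_exact = gram(L_exact)   # rational, exactly PSD, close to Q_num
```

The point of storing $L$ in the certificate is exactly this: $LL^T$ is positive semi-definite by construction, so a verifier never has to trust floating-point eigenvalue computations.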
Below is the script used to obtain the numerical $Q'$ matrix for the packing density of 132 with Permpack. \lstset{language=Python, basicstyle=\ttfamily\scriptsize, keywordstyle=\color{keywords}, commentstyle=\color{comments}, stringstyle=\color{myred}, showstringspaces=false, identifierstyle=\color{green}, procnamekeys={def,class}, frame=single, caption={Packing 132 with Permpack.}} \begin{lstlisting} p = PermProblem(3, density_pattern="132") p.solve_sdp() \end{lstlisting} \lstset{language=Python, basicstyle=\ttfamily\scriptsize, keywordstyle=\color{black}, commentstyle=\color{black}, stringstyle=\color{black}, showstringspaces=false, identifierstyle=\color{black}, procnamekeys={def,class}, frame=single, caption={Output.}} \begin{lstlisting} ... Success: SDP solved Primal objective value: -4.6410162e-01 Dual objective value: -4.6410162e-01 Relative primal infeasibility: 5.90e-14 Relative dual infeasibility: 1.67e-10 Real Relative Gap: 3.68e-10 XZ Relative Gap: 6.14e-10 \end{lstlisting} It is not difficult to guess the entries of $Q$ from the numerical matrix below, which is part of the output of the SDP solver. The resulting exact matrix $Q$ is shown in~\eqref{eq:Q}. \lstset{language=Python, basicstyle=\ttfamily\scriptsize, keywordstyle=\color{black}, commentstyle=\color{black}, stringstyle=\color{black}, showstringspaces=false, identifierstyle=\color{black}, procnamekeys={def,class}, frame=single, caption={Floating point $Q'$ matrix.}} \section{Results} \label{sec:mainresults} The following theorem will be needed later. There exist further variations of it, e.g.~Proposition~2.1 and Theorem~2.2 in~\cite{albert2002packing}. However, we only need the original version. \begin{theorem}[\cite{stromquist1993unpublished}] \label{thm:layered} Let $S$ be a layered permutation. Then for every $n$, the extremal value of $p(S,n)$ is achieved by a layered permutation. Moreover, if $S$ has no layer of size 1, every maximiser of $p(S,n)$ is layered.
\end{theorem} \noindent The scripts used to obtain results in this section can be found at\\ \url{https://github.com/jsliacan/permpack/tree/master/scripts}.\\ \noindent The certificates in support of the upper bounds in this section can be found at the address below. With each result, we provide the name of the certificate file that witnesses it, e.g. \texttt{cert1324.js} witnesses the upper bound for $p(1324)$.\\ \url{https://github.com/jsliacan/permpack/tree/master/certificates}. \subsection{Packing 1324} Layered permutations have been studied in depth by~\cite{price1997packing}. He came up with an approximation algorithm that, at the $m$-th iteration, assumes that the extremal construction has $m$ layers (see Theorem~\ref{thm:layered}) and optimises over their sizes. The algorithm then proceeds to increase $m$ and halts when increasing $m$ does not improve the estimate. In that case, an optimal construction has been found (up to numerical noise from the optimisation, if any). In reality, the procedure is stopped manually when the approximation is fine enough or the problem becomes too large. Therefore, for every $m$, the value that Price's algorithm gives is a lower bound for the packing density in question. \\ It is known that the extremal construction for the packing density of 1324 is layered with an infinite number of layers. See, for instance,~\cite{albert2002packing} and~\cite{price1997packing}. The main theorem of this section is the following. \begin{theorem} \label{thm:pack1324} \begin{align*} 0.244054321 < p(1324) < 0.244054549 \end{align*} \end{theorem} \begin{proof} Consider the construction $\Gamma$ from Figure~\ref{fig:gamma_constr}, where $\Gamma$ is a permuton. Let $C$ denote the middle layer of $\Gamma$ (the largest layer), $B$ denote the layer above (and $B'$ the layer below) $C$, and $A$ denote the group of the remaining layers above $B$ (and $A'$ the group of layers below $B'$).
So $\Gamma = A' \oplus B' \oplus C \oplus B \oplus A$, where $A \oplus B$ means that the layer $A$ is entirely below and to the left of the layer $B$. Let $c = |C|$, $b = |B| = |B'|$, and $a = |A| = |A'|$. We assume that $A$ (and $A'$) is isomorphic to a maximiser for the packing of the 132-pattern (213-pattern). The aim is to optimise over $a$ and $c$. Ideally, the tails of $\Gamma$ would also be optimised over, but that is infeasible. So we assume the tails are 132 (213) maximisers. It turns out that the first two steps give a good lower bound. We now compute the density of 1324 patterns in $\Gamma$. There are four distinct (i.e.~up to symmetry) positions that a copy of 1324 can assume in $\Gamma$. Let $xyzw$ be the four points in $\Gamma$ that form a copy of $1324$ in that order. \begin{enumerate} \item $y,z \in C$, $x \in A' \cup B'$, $w \in A \cup B$, there are $N_1$ such copies \item $y,z \in B$, $x \in A' \cup B' \cup C$, $w \in A$, there are $N_2$ such copies \item $y,z,w \in A$, $x \in A' \cup B' \cup C \cup B$, there are $N_3$ such copies \item $x,y,z,w \in A$, there are $N_4$ such copies \end{enumerate} Let us now determine the quantities $N_1,\ldots, N_4$. \begin{enumerate} \item $N_1 = \frac{c^2}{2}\cdot (a+b)^2$ \item $N_2 = \frac{b^2}{2}\cdot a(a+b+c)$ \item $N_3 = (2\sqrt{3}-3)\frac{a^3}{6}\cdot (a+2b+c)$ \item $N_4 = \sum_{k=0}^\infty \frac{\sqrt{3}\cdot(2\sqrt{3}-3)}{6 \cdot (\sqrt{3}+1)^{4k+4}}\cdot a^4$. \end{enumerate} Finally, we get the density of the 1324 pattern in $\Gamma$. Let $b = (1-c-2a)/2$, so that the layer sizes sum to 1. Then \begin{align*} p(1324, \Gamma) &= \max_{\substack{0< c\leq 1/2\\ 0 < a \leq 1/4}}24\cdot(N_1 + 2N_2 + 2N_3 + 2N_4)\\ & > 0.244054321. \end{align*} This proves the lower bound in Theorem~\ref{thm:pack1324}, because $0.244054321 < p(1324,\Gamma) \leq p(1324)$. \begin{figure} \caption{\small{Permuton $\Gamma$ provides a lower bound for $p(1324)$.
The triangles at the ends represent permutons that are maximisers for the packing of 132 and 213 (L to R).}} \label{fig:gamma_constr} \end{figure} We use Flagmatic to prove the upper bound. Since 1324 is layered, Theorem~\ref{thm:layered} guarantees that there is a 1324-maximiser that is layered as well. Therefore, we can limit the search space to the layered permutations. Since Flagmatic does not work with permutations, we transformed the problem to an equivalent problem in directed graphs, which Flagmatic can handle. \begin{lemma} \label{lem:permstographs} Let $\ensuremath{\mathcal{F}} = \{\dicycle, \twochain, \orcocherry\}$ be the set of forbidden digraphs. The packing density of 1324 equals the Tur\'an $\digraphacbd$-density of $\ensuremath{\mathcal{F}}$. In other words, $$p(1324)= p(\digraphacbd, \ensuremath{\mathcal{F}}).$$ \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:permstographs}] There is a unique way to encode a layered permutation $P$ as a directed graph $D$: two points $x,y \in P$ are joined by an arc $x \to y$ in $D$ if and only if they form a $12$ pattern. Forbidding $\dicycle$, $\twochain$, and $\orcocherry$ in $D$ forces it to be a union of independent sets with arcs between them so that if $x,y$ are vertices in one independent set and $u,v$ are vertices in another independent set of $D$, then if $xu$ is an arc in $D$, so are $xv$, $yu$, and $yv$. In other words, all arcs between two independent sets are present, and all go in the same direction. Moreover, the direction is transitive (\dicycle is forbidden). Together with the first rule about the direction of arcs between independent sets, this fully characterises the digraph $D$ obtained from the permutation $P$. Clearly, the process is reversible. \end{proof} Given Lemma~\ref{lem:permstographs}, we use the flag algebra method on directed graphs to compute an upper bound for the packing density of \digraphacbd (an equivalent of 1324 in digraphs) over $\{\dicycle, \twochain, \orcocherry\}$-free digraphs. The resulting bound is the one in Theorem~\ref{thm:pack1324}.
The certificate is called \texttt{cert1324flagmatic.js}. Note that this is a Flagmatic certificate and can be verified using the \texttt{inspect\_certificate.py} script that comes with Flagmatic. The script is \texttt{pack1324flagmatic.sage}. \end{proof} A similar bound can be achieved with Permpack. In particular, we can show that $p(1324)< 0.244054540$. Certificate: \texttt{cert1324permpack.js}. Script: \texttt{pack1324permpack.sage}. Despite Permpack being able to prove a good bound, we used Flagmatic in the proof above to emphasise that this result had been available before Permpack was written. \subsection{Packing 1342} \label{sec:pack1342} The previous lower bound for the packing density of 1342 was approximately 0.1965796. The result of~\cite{batkeyev} can be found in~\cite{albert2002packing}. \begin{figure} \caption{\small On the left is Batkeyev's construction for the lower bound on $p(1342)$ as the product of the packing densities of 132 and 1432. On the right is the schematic drawing of it. The triangle stands for a 231-maximiser and the square stands for the part inside which the entire construction is iterated.} \label{fig:batkeyev} \end{figure} Let $\lambda = 2\sqrt{3}-3$ be the packing density of 231 and $\kappa$ the ratio between the top layer and the rest of the 1432-maximiser, see~\cite{price1997packing} ($\kappa$ is a root of $3x^4-4x+1$). Batkeyev suggested replacing each layer in the maximiser of 1432 by a 231-maximiser while preserving the size ratio $\kappa$. The density of 1342 in Batkeyev's construction (see Figure~\ref{fig:batkeyev}) is \begin{align*} p(1342, B) &=(8 \sqrt{3}-12)\cdot \sum _{n=0}^{\infty } (1-\kappa)^3 \kappa ^{4n+1} \\ &= p(132)p(1432)\\ &= 2 \left(2 \sqrt{3}-3\right) \left(3 \sqrt[3]{\sqrt{2}-1}-\frac{3}{\sqrt[3]{\sqrt{2}-1}}+2\right)\\ &\approx 0.1965796\ldots \end{align*} This lower bound was widely regarded as possibly optimal. Our contribution to this problem is finding a significantly better lower bound construction.
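Batkeyev's value is easy to cross-check numerically. The sketch below (our own code) finds $\kappa$ by bisection on $3x^4-4x+1$ in $(0,1)$, away from the trivial root $x=1$, sums the geometric series in closed form, and compares with the product $p(132)p(1432)$:

```python
import math

def bisect(f, lo, hi, tol=1e-13):
    """Locate a root of f in [lo, hi] by bisection (f changes sign there)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# kappa: the root of 3x^4 - 4x + 1 in (0, 0.9); x = 1 is excluded.
kappa = bisect(lambda x: 3 * x**4 - 4 * x + 1, 0.0, 0.9)

# (8*sqrt(3)-12) * sum_{n>=0} (1-kappa)^3 * kappa^(4n+1), summed as a
# geometric series.
series = (8 * math.sqrt(3) - 12) * (1 - kappa) ** 3 * kappa / (1 - kappa**4)

# Closed form 2*(2*sqrt(3)-3)*(3*c - 3/c + 2) with c = (sqrt(2)-1)^(1/3).
c = (math.sqrt(2) - 1) ** (1 / 3)
closed = 2 * (2 * math.sqrt(3) - 3) * (3 * c - 3 / c + 2)
```

Both expressions evaluate to roughly $0.1965796$, matching the displayed chain of equalities.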
However, if we restrict the space of admissible permutations to those that avoid 2431, then Batkeyev's construction is likely optimal. We prove the following theorem using admissible permutations of length $N=6$ to keep the SDP small (if $N=7$ were chosen, the bound would likely be slightly better). \begin{theorem} $$p(1342,\{2431\}) < 0.19658177.$$ \end{theorem} \begin{proof} Certificate: \texttt{cert1342\_forb2431.js}. Script: \texttt{pack1342\_forb2431.sage}. \end{proof} The following result addresses the actual packing density of 1342 without any forbidden patterns. \begin{theorem} \label{thm:newbounds1342} $$0.198836597 < p(1342) < 0.198837287.$$ \end{theorem} \begin{proof} The new lower bound is given by the construction $\ensuremath{\mathcal{P}}i$ in Figure~\ref{fig:pack1342}. The weights we used for the parts are given in \texttt{cert1342lb.txt}, located with the other certificates. \begin{figure} \caption{\small New lower bound construction $\ensuremath{\mathcal{P}}i$ for the packing density of 1342.} \label{fig:pack1342} \end{figure} \begin{equation} \begin{aligned} a_1 &= 0.2174127723536347308692444843\\ a_2 &= 0.0170598057899242722740620549\\ a_3 &= 0.0516101402487892270230230972\\ a_4 &= 0.4340722809873864994312953007\\ a_5 &= 0.1479895625950390496250611829\\ a_6 &= 0.0764457255805656971383351365\\ a_7 &= 0.0554097124446605236389787433 \label{weights} \end{aligned} \end{equation} Label the 7 parts of $\ensuremath{\mathcal{P}}i$ from left to right as $a_1,\ldots,a_7$. We assign the weights to them roughly as in~\eqref{weights}. Then a straightforward calculation of the 1342 density in $\ensuremath{\mathcal{P}}i$ implies the desired lower bound. The Sage script that does this is called \texttt{lb1342.sage}, located with the other scripts. The upper bound certificate is called \texttt{cert1342.js}. The script is \texttt{pack1342.sage}. \end{proof} The upper bound obtained without flag algebras stands at $2/9$, see~\cite{albert2002packing}.
The upper bound above was obtained via the flag algebras method and confirms the claimed bound from~\cite{balogh2015minimum}. We used $N = 7$ for our computations. While it is possible that $N=8$ would yield a slightly better bound, the computations would be much more expensive. Without a candidate for an exact lower bound, we were satisfied with the bound we obtained with $N=7$. \subsection{Packing 2413} \label{sec:pack2413} The case of packing 2413 patterns is fairly complicated, as can be seen from the lower bound construction by~\cite{presutti2010packing}. The previous upper bound obtained without flag algebras was $2/9$ by~\cite{albert2002packing}. The bound below was obtained via flag algebras and is in the same range as the bound in~\cite{balogh2015minimum}. \begin{theorem} \label{thm:high2413} \begin{align*} p(2413) &< 0.10478046354353523761779. \end{align*} \end{theorem} \begin{proof} Certificate: \texttt{cert2413.js}. Script: \texttt{pack2413.sage}. \end{proof} We used admissible permutations of length $N=7$. Again, larger $N$ could yield a slightly better upper bound, but without an exact lower bound this effort would not be justified. \section{Packing other small permutations} \label{sec:packingsmall} The flag algebras method will yield upper bounds for many problems. In some cases these bounds are particularly interesting because they are close to their corresponding lower bounds. In this section we list a selection of upper and lower bounds that are potentially sharp since their values appear to be close to each other. In the list below we choose to represent the permutations by their drawings in the grid. This is more transparent as the permutations become larger. The extremal constructions (permutons) on the left-hand side of Table~\ref{tab:otherperms} are represented by their drawings as well. The lower bounds are given on the left-hand side of the table and the upper bounds on the right-hand side.
This is a sample of the results obtained with Permpack via flag algebras. \begin{table}[ht] \centering \begin{tabular}{l | l | l} Permutation & Lower bounds & Upper bounds\\ \hline 23154 & $p\left(\bcaoba\ ,\ \Amax\right) = 5!\frac{(2/5)^2}{2!}\frac{(3/5)^3}{3!}(2\sqrt{3}-3)$ & $p\left(\bcaoba\right) \leq 0.16039\ldots$\\ 14523 & $p\left(\aobamba\ ,\ \Bmax\right) \sim 0.153649\ldots$ & $p\left(\aobamba\right) \leq 0.153649\ldots$\\ 21354 & $p\left(\baoaoba\ ,\ \Cmax\right) \sim 0.16515\ldots$ & $p\left(\baoaoba\right) \leq 0.16515\ldots$\\ 231654 & $p\left(\bcaocba\ ,\ \AAmax\right) = 6!\frac{(1/2)^6}{3!^2}(2\sqrt{3}-3)$ & $p\left(\bcaocba\right) \leq 0.145031\ldots$\\ 231564 & $p\left(\bcaobca\ ,\ \Dmax\right)= (2\sqrt{3}-3)^2\frac{6!}{48^2}$ & $p\left(\bcaobca\right) \leq 0.0673094$\\ 231645 & $p\left(\bcaocab\ ,\ \Dmaxr\right)= (2\sqrt{3}-3)^2\frac{6!}{48^2}$ & $p\left(\bcaocab\right) \leq 0.0673094$\\ 215634 & $p\left(\baoabmab\ ,\ \Emax\right) = \frac{6!}{9^32^3}$ & $p\left(\baoabmab\right) \leq 0.123456\ldots$ \end{tabular} \caption{\small Exact values are known for all densities on the left-hand side. They are described in the text as they are not easy to write down.} \label{tab:otherperms} \end{table} We now give the descriptions of the lower bound constructions. For $23154 = \bcaoba$, the construction is a sum of two parts in ratio $2:3$ top to bottom. The bottom part is a 231-maximiser while the top part is a simple decreasing segment. Certificate: \texttt{cert23154.js}. Script: \texttt{pack23154.sage}. The construction for $14523 = \aobamba$ is designed as follows. Let $\alpha$ be the maximiser of $5(1-x)^4/(1-x^5)$ such that $\alpha \in [0,1]$. The topmost sum-indecomposable part of the $\aobamba$-maximiser has length $\alpha$ and the remainder of the maximiser has length $(1-\alpha)$. The construction is iterated inside the part of length $(1-\alpha)$. The part of length $\alpha$ is a skew-sum of two balanced increasing segments. 
The exact value of the density on the left-hand side of Table~\ref{tab:otherperms} is too complicated to fit in the table. Certificate: \texttt{cert14523.js}. Script: \texttt{pack14523.sage}. The construction for $21354 = \baoaoba$ is a 4-layered permuton with layers of lengths $\beta, 1/2-\beta, 1/2-\beta, \beta$, top to bottom. Here, $\beta$ is the real root of $40x^3 - 32x^2 + 9x - 1 = 0$. Again, we only write the approximate value on the left-hand side of Table~\ref{tab:otherperms} for space reasons. Certificate: \texttt{cert21354.js}. Script: \texttt{pack21354.sage}. The construction for $231654 = \bcaocba$ is identical in structure to the construction for $\bcaoba$, except that the ratio of the two parts in the sum is $1:1$. Certificate: \texttt{cert231654.js}. Script: \texttt{pack231654.sage}. The construction for $231564 = \bcaobca$ is the sum of two 231-maximisers of equal size. In the case of $231645 = \bcaocab$, the top 231-maximiser is flipped accordingly. Certificates: \texttt{cert231564.js} and \texttt{cert231645.js}. Scripts: \texttt{pack231564.sage} and \texttt{pack231645.sage}. The construction for $215634 = \baoabmab$ has three segments of equal length arranged as portrayed in Table~\ref{tab:otherperms}. Certificate: \texttt{cert215634.js}. Script: \texttt{pack215634.sage}. \section{Conclusion} While we now know the packing densities of all 4-point permutations with an accuracy of 0.01\%, finding candidates for optimal constructions for the cases of 1324 and 1342 remains a challenge. In the case of 1324, a new idea for the part ratios will be needed to come up with a possible extremal construction. As for the 1342 pattern, the extremal construction might use a different layer formation than our $\ensuremath{\mathcal{P}}i$. Even if $\ensuremath{\mathcal{P}}i$ has the right structure, the part ratios remain to be determined precisely. The latest status of 4-point packing densities is depicted in Table~\ref{tab:updated}.
\begin{table}[ht] \centering \begin{tabular}{|c | c | c | c | c|} \hline $\mathbf{S}$ & \textbf{lower bound} & \textbf{ref LB} & \textbf{upper bound} & \textbf{ref UB}\\ \hline\hline 1234 & 1 & trivial & 1 & trivial\\ \hline 1432 & $\beta$ & \cite{price1997packing} & $\beta$ & \cite{price1997packing}\\ \hline 2143 & $3/8$ & trivial & 3/8 & \cite{price1997packing}\\ \hline 1243 & $3/8$ & trivial & 3/8 & \cite{albert2002packing}\\ \hline 1324 & $0.244054321^*$ & -- & $0.244054549^*$ & -- \\ \hline 1342 & $0.198836597^*$ & -- & $0.198837286342^*$ & -- \\ \hline 2413 & $\approx 0.104724$ & \cite{presutti2010packing} & $0.104780463544^*$ & -- \\ \hline \end{tabular} \caption{\small{Overview of packing densities for 4-point permutations given the information in this paper. The values with an asterisk have been updated.}} \label{tab:updated} \end{table} After the 4-point permutations, there are many packing densities of small permutations of length $5, 6,\ldots$ to determine. The values of the lower bounds and upper bounds in Table~\ref{tab:otherperms} should be made to match. In some cases this will be easier than in others. In particular, the packing density of 21354 has been mentioned in both~\cite{albert2002packing} and~\cite{hasto2002packing}. There are analogous questions to be asked about packing densities when certain patterns are forbidden. As an example, we mentioned $p(1342,\{2431\})$ in relation to $p(1342)$. Next, an interesting line of enquiry was made precise as Conjecture 9 in~\cite{albert2002packing}. For the packing of a pattern $S$, is there an extremal construction with an infinite number of layers? Are all extremal constructions of that form? More precisely, let an $S$-\emph{maximiser} be an $n$-permutation $P$ such that $p(S,n) = p(S,P)$. If $L_n$ is the number of layers in a layered maximiser of length $n$, what can we say about $L_n$ as $n\to\infty$? For example, we know that the number of layers in every 1324-maximiser is unbounded as $n \to \infty$.
We also know that a 2143-maximiser has only two layers, regardless of $n$. \acknowledgements We would like to thank Robert Brignall for heaps of useful discussions. \end{document}
\begin{document} \sloppy \title{Non Proportional Odds Models are Widely Dispensable -- Sparser Modeling based on Parametric and Additive Location-Shift Approaches } \author{Gerhard Tutz$^*$ \& Moritz Berger$^{**}$ \\ {\small $^*$Ludwig-Maximilians-Universit\"{a}t M\"{u}nchen}\\ {\small $^{**}$ Institut für Medizinische Biometrie, Informatik und Epidemiologie, }\\ {\small Medizinische Fakultät, Universität Bonn}} \maketitle \begin{abstract} \noindent The potential of location-shift models to find adequate models
between the proportional odds model and the non proportional odds model is investigated. It is demonstrated that these models are very useful in ordinal modeling. While proportional odds models are often too simple, non proportional odds models are typically unnecessarily complicated and seem widely dispensable. The class of location-shift models is also extended to allow for smooth effects. The additive location-shift model contains two functions for each explanatory variable, one for the location and one for the dispersion. It is much sparser than hard-to-handle additive models with category-specific covariate functions but more flexible than common vector generalized additive models. \end{abstract} \noindent{\bf Keywords:} Ordinal regression; location-shift model; cumulative model; proportional odds model; adjacent categories model; dispersion \section{Introduction} The proportional odds model, which was propagated by \citet{McCullagh:80}, is probably the most widely used ordinal regression model. The assumption that effects of covariates are not category-specific makes it a simply structured model that allows one to interpret parameters in terms of cumulative odds. However, in many applications the model shows poor goodness-of-fit and does not adequately represent the underlying probability structure. As alternatives, non proportional and partial proportional odds models were proposed. They allow for category-specific effects of explanatory variables, and typically show a much better fit than proportional odds models; see, for example, \citet{Brant:90}, \citet{PetHar:90}, \citet{BenGro:98}, \citet{Cox:95}, \citet{kim2003assessing}, \citet{williams2006generalized}, \citet{liu2009graphical}, and \citet{williams2016understanding}. A major disadvantage of non proportional odds models is that many parameters are involved, which makes the interpretation of parameters much harder than in the simple proportional odds model.
Moreover, the space of admissible explanatory variables can be almost empty, and the estimation of parameters tends to fail in cases with a larger number of response categories. In the present paper models between the proportional and the non proportional odds model are propagated. They are sufficiently complex to provide an adequate fit but contain far fewer parameters than the non proportional odds model. Non proportional and proportional odds models are the logistic versions of cumulative ordinal models with category-specific or global, that is, not category-specific, effects of variables, respectively. We consider the more general class of cumulative models, which may use any response function that is determined by a strictly increasing distribution function. In addition, we consider the alternative class of adjacent categories models with general link functions. For all of these models it is essential to find an adequate representation of the data that does not involve too many parameters. The class of models that is investigated contains a location term in the tradition of the proportional odds model (and other models with global parameters), but instead of using a multitude of category-specific parameters, the location term is complemented by a linear term that represents the variability of the response, which may be seen as dispersion or, in questionnaires, the tendency of respondents to prefer extreme or middle categories. Parametric models of this type were considered by \citet{TuBerg2016RespStyle, TuBer2017Disp}. The paper has two main objectives. It is demonstrated that location-shift versions of cumulative and adjacent categories models are often adequate when modeling ordinal responses. They can be seen as a natural extension of proportional odds models to more complex models that avoid the complexity of non proportional odds models. Indeed, non proportional odds type models turn out to be a frequently dispensable class of models.
They are unnecessarily complicated, and hardly needed in ordinal modelling. In contrast to most statistical papers, which propagate more complex modeling, in the first part of the paper we plead for a simpler class of models instead of a more complex one. In the second part of the paper location-shift models are extended to allow for smooth effects of covariates. Extensions of additive models that include general category-specific effects are rather hard to obtain. The proposed additive location-shift model offers a way to go beyond the simple global effects model without adding too many functions. The main messages of the paper can be summarized as follows. \begin{itemize} \item Non proportional odds models, or, more generally, models with category-specific parameters, are widely dispensable. In many applications a simpler version is appropriate. \item Location-shift models, which are propagated here, have the advantage that they show not only location effects but also dispersion effects or tendencies to respond, which are typically present in applications. \item If linear effects are questionable, the smooth location-shift model provides an alternative to simple global effect models. These models allow one to account for smooth dispersion effects. \end{itemize} In Section \ref{sec:ordinal} parametric ordinal models and their location-shift versions are considered. It is demonstrated that in applications non proportional odds models are often not needed. In Section \ref{sec:additive} traditional additive ordinal models are briefly considered and the additive location-shift model is introduced as an alternative. \section{Ordinal Regression Models }\label{sec:ordinal} \subsection{Proportional and Non Proportional Odds Models } The most widely used ordinal regression model is the proportional odds model, which is a member of the class of cumulative models. {Cumulative models} can be derived from an underlying latent variable.
Let $Y^*$ be an underlying latent variable for which the regression model $Y^*=-\boldsymbol{x}^T{\boldsymbol{\beta}}+\epsilon$ holds, where $\epsilon$ is a noise variable with continuous distribution function $F(.)$, $\boldsymbol{x}$ is a vector of explanatory variables, and ${\boldsymbol{\beta}}$ a vector of coefficients. If one assumes that the link between the observable categorical response $Y$ and the latent trait is specified by $Y=r \Leftrightarrow \theta_{r-1}< Y^*\leq\theta_{r}$, where $-\infty =\theta_{0}<\theta_{1}<\dots<\theta_{k}=\infty$, one obtains the \textit{cumulative model} \begin{equation}\label{eq:cum} P(Y \le r|\boldsymbol{x} )= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}), \quad r=1,\dots,k-1, \end{equation} where the category-specific intercepts $\beta_{0r}$ are identical to the thresholds on the latent scale, that is, $\beta_{0r}=\theta_{r}$. If one uses the logistic distribution $F(\eta)=\exp(\eta)/(1+\exp(\eta))$, one obtains the \textit{proportional odds model} \[ \operatorname{logit} P(Y \le r|\boldsymbol{x}) = \beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}. \] The strength of the model is that the interpretation of parameters is very simple.
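As a small illustration (ours, with made-up parameter values), the category probabilities implied by the proportional odds model can be computed by differencing the cumulative probabilities in (\ref{eq:cum}):

```python
# Minimal sketch of the proportional odds model: cumulative probabilities
# P(Y <= r | x) = F(beta_{0r} + x^T beta) with logistic F, differenced to
# obtain P(Y = r | x).
from math import exp

def logistic(eta):
    return exp(eta) / (1 + exp(eta))

def category_probs(intercepts, beta, x):
    """P(Y = r | x) for r = 1..k; intercepts must be increasing."""
    eta = sum(b * xj for b, xj in zip(beta, x))
    cum = [logistic(b0r + eta) for b0r in intercepts] + [1.0]
    return [cum[0]] + [cum[r] - cum[r - 1] for r in range(1, len(cum))]

# k = 4 categories, two covariates (all numbers are arbitrary)
probs = category_probs([-1.0, 0.0, 1.5], [0.8, -0.4], [1.0, 2.0])
```

Because the intercepts are increasing and the slope vector is shared across categories, the cumulative probabilities are ordered for every $\boldsymbol{x}$, so the differences always form a valid distribution.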
Let $\gamma_r(\boldsymbol{x})= {P(Y \le r|\boldsymbol{x})}/{P(Y>r|\boldsymbol{x})}$ denote the cumulative odds; then $\operatorname{e}^{\beta_{j}}$ can be directly interpreted as the odds ratio that compares the cumulative odds with value $x_j+1$ in the $j$-th variable to the odds with value $x_j$ in the $j$-th variable, when all other variables are kept fixed, \begin{equation}\label{eq:int} \operatorname{e}^{\beta_{j}}=\frac{\gamma_r(x_1,\dots, x_j+1,\dots,x_p)}{\gamma_r(x_1,\dots, x_j,\dots,x_p)}. \end{equation} It is important that the interpretation does not depend on the category; $\operatorname{e}^{\beta_{j}}$ is the same for all odds $\gamma_r$, $r=1,\dots,k-1$. The independence of the parameters from the category holds for the whole class of cumulative models (\ref{eq:cum}) since they share the stochastic ordering property, which means that for two sets of explanatory variables $\boldsymbol{x}$ and $\tilde{\boldsymbol{x}}$ the term \[ F^{-1}(P(Y\leq r|\boldsymbol{x}))-F^{-1}(P(Y\leq r|\tilde{\boldsymbol{x}}))=(\boldsymbol{x}-\tilde{\boldsymbol{x}})^T{\boldsymbol{\beta}} \] does not depend on the category $r$. Early versions of the cumulative logistic model were given by \citet{Snell:64}, \citet{WalDun:67}, and \citet{WilGri:72}. More general cumulative models were considered, among others, by \citet{ArmSlo:89}, \citet{GenFar:85}, \citet{AnaKlei:97}, and \citet{SteWei:98}. \citet{Rud-etal:95} and \citet{CamDon:89} investigated their use in prediction; more recently, robust estimators have been proposed by \citet{iannario2017robust}. The problem with cumulative models of the form (\ref{eq:cum}) is that they often do not fit the data well, which calls for more complicated models.
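The invariance in (\ref{eq:int}) is easy to verify numerically. The following sketch (ours, with arbitrary parameter values) checks that raising the first covariate by one multiplies every cumulative odds $\gamma_r$ by the same factor $\operatorname{e}^{\beta_1}$:

```python
# Check that the cumulative odds ratio e^{beta_j} is category-independent
# in the proportional odds model.
from math import exp

def cumulative_odds(intercepts, beta, x):
    """gamma_r(x) = P(Y <= r | x) / P(Y > r | x) for r = 1..k-1."""
    eta = sum(b * xj for b, xj in zip(beta, x))
    odds = []
    for b0r in intercepts:
        p = exp(b0r + eta) / (1 + exp(b0r + eta))
        odds.append(p / (1 - p))
    return odds

intercepts = [-1.0, 0.2, 1.3]    # arbitrary increasing thresholds
beta = [0.7, -0.3]
x = [0.5, 2.0]
x_shifted = [x[0] + 1, x[1]]     # raise the first covariate by one

ratios = [g1 / g0 for g0, g1 in
          zip(cumulative_odds(intercepts, beta, x),
              cumulative_odds(intercepts, beta, x_shifted))]
```

With the logistic link the check is exact up to floating point: $\gamma_r(\boldsymbol{x})=\operatorname{e}^{\beta_{0r}+\boldsymbol{x}^T\boldsymbol{\beta}}$, so each ratio equals $\operatorname{e}^{0.7}$.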
A class of models that has been considered in the literature is the \textit{cumulative model with category-specific effects} \begin{equation}\label{eq:cum2} P(Y \le r|\boldsymbol{x} )= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}_r), \quad r=1,\dots,k-1, \end{equation} which uses the parameter vectors ${\boldsymbol{\beta}}_r^T=(\beta_{1r},\dots,\beta_{pr})$ and allows the parameters to vary across categories. Logistic models of this type are also called \textit{non proportional odds models} to distinguish them from the simpler versions. If ${\boldsymbol{\beta}}_1=\dots={\boldsymbol{\beta}}_{k-1}$, the model simplifies to the simple cumulative model (\ref{eq:cum}). Of course, the parameters do not have to vary over categories for all variables, which yields two types of variables: variables with a \textit{global} effect, for which $\beta_{j1}=\dots=\beta_{j,k-1}=\beta_{j}$, and variables with \textit{category-specific effects}, that is, $\beta_{js}\ne \beta_{jr}$ for at least two categories $s,r$. Logistic versions of this type of model are called \textit{partial proportional odds models} and have been investigated, for example, by \citet{Brant:90}, \citet{PetHar:90}, \citet{BenGro:98}, \citet{Cox:95}, \citet{kim2003assessing} and \citet{liu2009graphical}. In sociology they are also referred to as generalized ordered logit models \citep{williams2006generalized,williams2016understanding}. The general model with category-specific effects is attractive since it usually provides a better fit to the data. However, the whole class of models has some serious disadvantages. One is that one has many parameters, which are much harder to interpret.
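A further, structural problem of the category-specific model (\ref{eq:cum2}) can be seen in a toy example (ours; the numbers are arbitrary): the cumulative probabilities $F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}_r)$ stay ordered near typical covariate values but cross for more extreme ones.

```python
# Toy illustration: a category-specific (non proportional odds) model
# with k = 3 whose cumulative probabilities cross for large x.
from math import exp

def logistic(eta):
    return exp(eta) / (1 + exp(eta))

intercepts = [-0.5, 0.5]     # beta_{01}, beta_{02}
slopes = [1.0, 0.4]          # category-specific slopes beta_1, beta_2

def cum_probs(x):
    return [logistic(b0 + b * x) for b0, b in zip(intercepts, slopes)]

ordered = cum_probs(0.5)   # P(Y<=1) < P(Y<=2): a valid distribution
crossed = cum_probs(3.0)   # P(Y<=1) > P(Y<=2): P(Y=2) would be negative
```

At $x=3$ the implied "probability" $P(Y=2|x)=P(Y\le 2|x)-P(Y\le 1|x)$ is negative, which is exactly the restriction discussed next.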
More seriously, the possible values of the explanatory variables can be strongly restricted because it is postulated that $\beta_{01} + \boldsymbol{x}^T{\boldsymbol{\beta}}_1 \le \dots \le \beta_{0,k-1} + \boldsymbol{x}^T{\boldsymbol{\beta}}_{k-1}$ for all values $\boldsymbol{x}$. Even if estimates exist, for future observations with more extreme values in the explanatory variables the estimated probabilities can be negative. For problems with the model see also \citet{walker2016generalizing}, who even concludes that it is impossible to generalize the cumulative class of ordered regression models in ways consistent with the spirit of generalized cumulative regression models. \subsection{Cumulative Location-Shift Models } An alternative extension of the proportional odds model was proposed by \citet{TuBer2017Disp}. They assume that variables may change the thresholds of the underlying latent trait. Then, the thresholds $\beta_{0r}$ in the proportional odds model are replaced by $\beta_{0r}+ (r - k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$, where $\boldsymbol{z}$ denotes a vector of covariates, possibly containing components of $\boldsymbol{x}$. The replacement yields the so-called location-shift model, which is given in closed form by \begin{equation}\label{eq:locsh} P(Y \le r|\boldsymbol{x} )= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}+ (r -k/2) \boldsymbol{z}^T\boldsymbol{\alpha}), \quad r=1,\dots,k-1. \end{equation} It contains the familiar location term $\boldsymbol{x}^T{\boldsymbol{\beta}}$, which models the location on the latent continuum and therefore the tendency to low or high response categories. In addition it contains the scaled shifting term $(r -k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$, which modifies the thresholds and has a quite different interpretation.
The term $\boldsymbol{z}^T\boldsymbol{\alpha}$ determines the shifting of thresholds, whereas the scaling factor $(r-k/2)$ is an additional weight chosen such that the differences between thresholds are widened or shrunk by the same amount. For illustration let us consider the case $k=6$, for which the modified thresholds $\beta_{0r}+(r -k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$ have the form \begin{figure}[H] \centering \begin{tikzpicture}[scale=1] \tikzstyle{ann} = [draw=none,fill=none] \matrix[nodes={draw, thick}, row sep=0.3cm,column sep=0.82cm, column 1/.style={nodes={rectangle, draw, minimum width=2em}}, column 3/.style={nodes={rectangle, draw, minimum width=2em}}, column 5/.style={nodes={rectangle, draw, minimum width=2em}}, column 7/.style={nodes={rectangle, draw, minimum width=2em}}, column 9/.style={nodes={rectangle, draw, minimum width=2em}}, column 11/.style={nodes={rectangle, draw, minimum width=2em}} ] { \node[circle] {1}; &\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); & \node[circle] {2};&\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); & \node[circle] {3};&\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); & \node[circle] {4};&\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); & \node[circle] {5};&\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); & \node[circle] {6};\\ }; \end{tikzpicture} \begin{tikzpicture}[scale=1] \tikzstyle{ann} = [draw=none,fill=none] \matrix[nodes={draw, thick}, row sep=0.3cm,column sep=2.2cm, column 1/.style={nodes={rectangle, draw, minimum width=2cm}} ] { \hspace{1cm} &\small $\beta_{01}-2\boldsymbol{z}^T\boldsymbol{\alpha}$; & &\small $\beta_{02}-\boldsymbol{z}^T\boldsymbol{\alpha}$; & &\phantom{$123$}$\beta_{03}$ \hspace{0.8cm} & &\small $\beta_{04}+\boldsymbol{z}^T\boldsymbol{\alpha}$\phantom{$ $};& &\small $\beta_{05}+ 2\boldsymbol{z}^T\boldsymbol{\alpha}$; \\
}; \end{tikzpicture} \end{figure} In general, thresholds are widened if $\boldsymbol{z}^T\boldsymbol{\alpha}$ is positive, and shrunk if it is negative. The consequence is that one observes more concentration in the middle or in the extreme categories, respectively. Since more concentration in the middle means less variability, the term $\boldsymbol{z}^T\boldsymbol{\alpha}$ can also be seen as representing dispersion. For positive values the distribution is more concentrated, meaning smaller dispersion; for negative values one has larger dispersion. The effect is also seen from considering the differences between adjacent predictors, \begin{equation}\label{eq:eta} \eta_r -\eta_{r-1}=\beta_{0r}-\beta_{0,r-1}+ \boldsymbol{z}^T\boldsymbol{\alpha}, \quad r=2,\dots,k-1, \end{equation} where $\eta_r =\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}+ (r -k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$ is the $r$-th predictor. For positive values of $\boldsymbol{z}^T\boldsymbol{\alpha}$ the differences between adjacent predictors become greater, for negative values they become smaller. Thus, $\boldsymbol{\alpha}$ represents the tendency to middle or extreme categories linked to the covariates $\boldsymbol{z}$, which is separated from the location effect $\boldsymbol{x}^T{\boldsymbol{\beta}}$. It can be derived that the interpretation of the $\beta$ parameters is the same as in the proportional odds model if $\boldsymbol{x}$ and $\boldsymbol{z}$ are distinct \citep{TuBer2017Disp}. A different view of the model is obtained by seeing it as a non proportional odds model with specific constraints on the parameters. Let us consider the general case $\boldsymbol{x}=\boldsymbol{z}$.
Then one has \[ P(Y \le r|\boldsymbol{x} )= F(\beta_{0r}+\boldsymbol{x}^T({\boldsymbol{\beta}}+ (r -k/2) \boldsymbol{\alpha})) = F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}_r), \] where ${\boldsymbol{\beta}}_r={\boldsymbol{\beta}}+ (r -k/2) \boldsymbol{\alpha}$. The model is equivalent to a category-specific model with the constraints \[ ({\boldsymbol{\beta}}_r- {\boldsymbol{\beta}})/ (r -k/2)= \boldsymbol{c}, \quad r=1,\dots,k-1, \] where ${\boldsymbol{\beta}}=\frac{1}{k-1}\sum_{r=1}^{k-1}{\boldsymbol{\beta}}_r$, and $\boldsymbol{c}$ is a vector of constants. If the category-specific model with these constraints is assumed to hold, the vector $\boldsymbol{c}$ turns out to be $\boldsymbol{\alpha}$. That means, in particular, that the location-shift model is a submodel of the model with category-specific effects. Since the proportional odds model is a submodel of the location-shift model one has the nested structure \[ \text{proportional odds model} \subset \text{location-shift model}\subset \text{non proportional odds model} \] or, more generally, \[ \text{model with global effects} \subset \text{location-shift model}\subset \text{model with category-specific effects}. \] Since the location-shift model is a (multivariate) generalized linear model, one can investigate if the models can be simplified by testing the sequence of nested models, see, for example, \citet{TutzBook2011}. One of the disadvantages of model versions with category-specific effects is that the simple interpretation of parameters gets lost. One has a multitude of parameters of which one might easily lose track. For example, if one has just four variables and ten categories (see the example in Section \ref{sec:ex}) the model contains 45 parameters; for each variable one has 9 parameters. In contrast, the proportional odds model contains only 13 parameters, and the impact of one variable is described by just one parameter.
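Both views of the location-shift model can be checked numerically. The sketch below (ours, with arbitrary $\boldsymbol{\beta}$, $\boldsymbol{\alpha}$, thresholds, and $k=6$) verifies that the implied category-specific vectors ${\boldsymbol{\beta}}_r={\boldsymbol{\beta}}+ (r -k/2) \boldsymbol{\alpha}$ average back to ${\boldsymbol{\beta}}$, that the constraint ratio is the constant $\boldsymbol{\alpha}$, and that the modified thresholds change all gaps between adjacent thresholds by the same amount $\boldsymbol{z}^T\boldsymbol{\alpha}$:

```python
# Checks for the reparametrisation beta_r = beta + (r - k/2) alpha, k = 6.
k = 6
beta = [0.8, -0.5]
alpha = [0.2, 0.1]

beta_r = [[b + (r - k / 2) * a for b, a in zip(beta, alpha)]
          for r in range(1, k)]

# the mean of the category-specific vectors recovers beta
mean = [sum(col) / (k - 1) for col in zip(*beta_r)]

# (beta_r - beta) / (r - k/2) is constant, equal to alpha, for r != k/2
consts = [[(br - b) / (r - k / 2) for br, b in zip(beta_r[r - 1], beta)]
          for r in range(1, k) if r != k / 2]

# thresholds beta_{0r} + (r - k/2) z^T alpha: every gap shifts by z^T alpha
base = [-2.0, -1.0, 0.0, 1.0, 2.0]          # unit gaps
z_alpha = 0.3                               # z^T alpha > 0: thresholds widen
shifted = [b0 + (r - k / 2) * z_alpha for r, b0 in enumerate(base, start=1)]
gaps = [s1 - s0 for s0, s1 in zip(shifted, shifted[1:])]
```

The mean recovers $\boldsymbol{\beta}$ because the weights $(r-k/2)$, $r=1,\dots,k-1$, sum to zero.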
The models propagated here are models between the most general model and the model with global effects; in them, the impact of a single variable is described by just two parameters (instead of $k-1$ parameters as in the general model and one in the model with global effects). \subsection{An Example: Safety in Naples}\label{sec:ex} The package CUB \citep{iannario2018cub} contains the data set relgoods, which provides results of a survey aimed at measuring the subjective extent of feeling safe in the streets. The data were collected in the metropolitan area of Naples, Italy. Every participant was asked to assess on a 10 point ordinal scale his/her personal score for feeling safe, with high categories referring to feeling safe. There are $n=2225$ observations and four variables, \textit{Age}, \textit{Gender} (0: male, 1: female), \textit{Residence} (1: City of Naples, 2: District of Naples, 3: Others Campania, 4: Others Italia) and the educational degree (\textit{EduDegree}; 1: compulsory school, 2: high school diploma, 3: Graduated-Bachelor degree, 4: Graduated-Master degree, 5: Post graduated). \begin{table}[!ht] \caption{Fits of cumulative models with logistic link for the safety data.}\label{tab:nuccatspec1} \centering \begin{tabularsmall}{lrrrrrrrrrr} \toprule & deviance & df & difference in & df &$p$-value\\ & & &deviances & \\ \midrule Non proportional odds model &9825.78 &19935 \\ Location-shift model &9899.67 &19998 & 73.89 &63 &0.1640\\ Proportional odds model &9948.99 &20007 &49.32 &9 &0.0000\\ \bottomrule \end{tabularsmall} \end{table} Table \ref{tab:nuccatspec1} shows the deviances of the fitted models and the differences. The full model with category-specific effects has 90 parameters, which reduces to 27 parameters in the location-shift model.
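The $p$-values in Table \ref{tab:nuccatspec1} can be reproduced from the deviance differences, which are compared with $\chi^2$ distributions (73.89 on 63 df and 49.32 on 9 df). A sketch (ours; it integrates the $\chi^2$ density with Simpson's rule instead of calling a statistics library, and the grid parameters are our choices):

```python
# Reproduce the likelihood-ratio p-values from the deviance differences.
from math import exp, lgamma, log

def chi2_sf(x, df, upper=2000.0, n=20000):
    """P(X > x) for X ~ chi^2_df, via composite Simpson integration
    of the density on [x, upper] (n must be even)."""
    def pdf(t):
        if t <= 0:
            return 0.0
        return exp((df / 2 - 1) * log(t) - t / 2
                   - (df / 2) * log(2) - lgamma(df / 2))
    h = (upper - x) / n
    s = pdf(x) + pdf(upper)
    for i in range(1, n):
        s += pdf(x + i * h) * (4 if i % 2 else 2)
    return s * h / 3

p_locshift = chi2_sf(73.89, 63)   # full model vs location-shift model
p_global = chi2_sf(49.32, 9)      # location-shift vs proportional odds
```

The first value is close to the tabulated 0.1640, and the second is essentially zero, matching the 0.0000 entry.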
The difference in deviances suggests that the full model can be simplified to the location-shift model; however, it certainly does not simplify to the model with global effects (difference in deviances 49.32 on 9 df). That means the location-shift model contains enough structure to explain the effect of covariates on the response, but the simpler structure without the term $(r-k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$ is too simple, that is, relevant effects would be missing. It is noteworthy that in this application, as in the other applications used here, the sample size is rather large ($n=2225$). Typically, if sample sizes are large one finds more significant effects. Therefore, it is remarkable that the complex model with category-specific effects can be simplified in spite of the large sample size.

\subsection{Adjacent Categories Models}

An alternative class of models for ordinal responses are \textit{adjacent categories models}. In their simple version these models assume
\begin{equation}\label{eq:adj}
P(Y > r|Y \in \{ r ,r+1\}, \boldsymbol{x})= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}} ), \quad r=1,\dots,k-1,
\end{equation}
where $F(.)$ again is a strictly increasing distribution function but no ordering of the intercepts has to be postulated. The logistic version has the form
\begin{equation}\label{eq:adj2}
\log \left(\frac{P(Y = r+1|\boldsymbol{x} )}{P(Y = r|\boldsymbol{x} )}\right)= \beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}} , \quad r=1,\dots,k-1.
\end{equation}
The interpretation of parameters is as simple as for basic cumulative models; $e^{\beta_{j}}$ is the odds ratio that compares the odds for value $x_j+1$ in the $j$-th variable to the odds for value $x_j$, but the odds are now not cumulative odds but adjacent categories odds, $\gamma_r(\boldsymbol{x})= {P(Y=r+1|\boldsymbol{x})}/{P(Y=r|\boldsymbol{x})}$.
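The adjacent categories logits determine the full response distribution, since summing the logits gives $\log P(Y=r)$ up to a normalizing constant. A small sketch (our illustration; names are not from the paper):

```python
import numpy as np

def adjacent_categories_probs(beta0, beta, x):
    """Category probabilities implied by the adjacent categories model
    log(P(Y=r+1)/P(Y=r)) = beta0[r] + x'beta, r = 1, ..., k-1."""
    logits = beta0 + x @ beta                 # one logit per adjacent pair
    log_p = np.r_[0.0, np.cumsum(logits)]     # log P(Y=r) up to a constant
    p = np.exp(log_p - log_p.max())           # stabilized exponentiation
    return p / p.sum()

# example with k = 4 categories and a single covariate
beta0 = np.array([0.5, -0.2, 0.1])
p = adjacent_categories_probs(beta0, np.array([0.4]), np.array([1.0]))
```

Note that, in contrast to the cumulative model, no ordering of the intercepts is needed: any real-valued logits yield a valid probability vector.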
In the same way as in the cumulative models the linear predictor can be replaced by a predictor with category-specific parameters, that is, predictors $\eta_r=\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}_r$, to obtain a better fit. The corresponding model contains many parameters, which are harder to interpret. A sparser model is the \textit{location-shift version of the adjacent categories model}
\begin{equation}\label{eq:adj3}
\log \left(\frac{P(Y = r+1|\boldsymbol{x} )}{P(Y = r|\boldsymbol{x} )}\right)= \beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}} + (k/2 -r) \boldsymbol{z}^T\boldsymbol{\alpha}, \quad r=1,\dots,k-1.
\end{equation}
For $k=6$ one obtains for the term $(k/2 -r) \boldsymbol{z}^T\boldsymbol{\alpha}$, which distinguishes between categories $r$ and $r+1$:
\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=1]
\tikzstyle{ann} = [draw=none,fill=none]
\matrix[nodes={draw, thick},
row sep=0.3cm,column sep=0.82cm,
column 1/.style={nodes={rectangle, draw, minimum width=2em}},
column 3/.style={nodes={rectangle, draw, minimum width=2em}},
column 5/.style={nodes={rectangle, draw, minimum width=2em}},
column 7/.style={nodes={rectangle, draw, minimum width=2em}},
column 9/.style={nodes={rectangle, draw, minimum width=2em}},
column 11/.style={nodes={rectangle, draw, minimum width=2em}}
] {
\node[circle] {1}; &\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); &
\node[circle] {2}; &\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); &
\node[circle] {3}; &\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); &
\node[circle] {4}; &\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); &
\node[circle] {5}; &\draw [line width=0.03cm] (4,0.7) -- (4,-0.7); &
\node[circle] {6};\\
};
\end{tikzpicture}
\begin{tikzpicture}[scale=1]
\tikzstyle{ann} = 
[draw=none,fill=none]
\matrix[nodes={draw, thick},
row sep=0.3cm,column sep=2.2cm,
column 1/.style={nodes={rectangle, draw, minimum width=2cm}}
] {
\hspace{1cm} &\small $2\boldsymbol{z}^T\boldsymbol{\alpha}$; & &\small $\boldsymbol{z}^T\boldsymbol{\alpha}$; & \hspace{1.3cm} &\small $0$; &\small $-\boldsymbol{z}^T\boldsymbol{\alpha}$\phantom{$2$};& &\small $- 2\boldsymbol{z}^T\boldsymbol{\alpha}$; \\
};
\end{tikzpicture}
\end{figure}
Thus, one obtains the same effects as in the cumulative location-shift model: if $\boldsymbol{z}^T\boldsymbol{\alpha}$ is large the person has a tendency to choose middle categories; if $\boldsymbol{z}^T\boldsymbol{\alpha}$ is small there is a tendency to extreme categories.
\blanco{
It is noteworthy that the scaling is chosen such that in both models, the cumulative and the adjacent categories model, large values of $\boldsymbol{z}^T\boldsymbol{\alpha}$ correspond to a tendency to middle categories. Therefore the predictor contains the term $(r-k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$ in the cumulative model but $(k/2 -r) \boldsymbol{z}^T\boldsymbol{\alpha}= -(r-k/2) \boldsymbol{z}^T\boldsymbol{\alpha}$ in the adjacent categories model.

It is important to note that the interpretation changes if one uses different representations of the model. Advanced program packages allow the use of reversed categories. Then, the cumulative location-shift model has the form
\[
P(Y \ge r|\boldsymbol{x} )= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}+ (r -k/2) \boldsymbol{z}^T\boldsymbol{\alpha}), \quad r=2,\dots,k.
\]
However, then the effect of the shifting term is also reversed: large values of $\boldsymbol{z}^T\boldsymbol{\alpha}$ indicate a tendency to extreme categories while small values indicate a tendency to middle categories.
Of course, the meaning of the location effect $\boldsymbol{x}^T{\boldsymbol{\beta}}$ has also changed: large values of $\boldsymbol{x}^T{\boldsymbol{\beta}}$ indicate a tendency to larger categories.
}
For the adjacent categories model the same hierarchy holds as for the cumulative models. The model with global effects is a submodel of the adjacent categories location-shift model, which in turn is a submodel of the general model with category-specific effects.

\subsection*{Safety in Naples}

Table \ref{tab:nuccatspec2} shows fits for logistic adjacent categories models. It is seen that there is no need to use the general model with category-specific effects since the difference in deviances between the general model and the location-shift version is not significant. However, the location-shift model cannot be reduced to the model with global effects. The fits are closely comparable to the fits obtained for the cumulative models given in Table \ref{tab:nuccatspec1}. For the adjacent categories as well as for the cumulative modeling approach the location-shift versions turn out to be the best compromise between goodness-of-fit and sparsity. The fits of the cumulative versions are slightly better than those of the adjacent categories location-shift version.

\begin{table}[!ht]
\caption{Fits of adjacent categories models with logistic link for safety data.}\label{tab:nuccatspec2}
\centering
\begin{tabularsmall}{lrrrrr}
\toprule
 & deviance & df & difference in deviances & df &$p$-value\\
\midrule
Model with category-specific effects &9828.07 &19935 \\
Location-shift model &9902.43 &19998 & 74.36 &63 &0.1549\\
Model with global effects &9959.00 &20007 &56.57 &9 &0.0000\\
\bottomrule
\end{tabularsmall}
\end{table}

In the following we briefly compare the alternative modeling approaches for the safety data. The estimates of the proportional odds model and the cumulative logistic location-shift model are given in Table \ref{tab:nuc2}.
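The $p$-values in Table \ref{tab:nuccatspec2} follow from the asymptotic $\chi^2$ distribution of the difference in deviances between nested models. A short sketch (our illustration using scipy, with the numbers taken from the table):

```python
from scipy.stats import chi2

# differences in deviances from the adjacent categories fits (safety data):
# category-specific vs. location-shift model: 74.36 on 63 df
# location-shift vs. global-effects model:    56.57 on  9 df
p_shift = chi2.sf(74.36, df=63)    # upper tail probability (survival function)
p_global = chi2.sf(56.57, df=9)
```

A large $p$-value for the first comparison supports simplifying to the location-shift model, while the tiny $p$-value for the second comparison rejects the model with global effects.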
It is seen that the dispersion effects are not negligible. All variables show rather small $p$-values in the dispersion component, which explains the strong difference in deviances between the location-shift model and the simple model with global effects, which does not account for varying dispersion. It is also seen that the simple proportional odds model yields stronger location effects than the location-shift model, which is a hint that estimates might be biased if dispersion effects are ignored. The same pattern is found for the adjacent categories models (not shown).

Instead of showing all the parameters we use the plotting tool provided by our R package (see Section \ref{sec:OrdDisp}). In Figure \ref{fig:safetyplott} the location effects of age, gender and residence are plotted against the dispersion effects (left: cumulative model, right: adjacent categories model). The abscissa represents the multiplicative dispersion effect on the odds, $e^{\hat{\alpha}}$, and the ordinate represents the multiplicative location effect, $e^{\hat{\beta}}$, for the variables. In addition to the point estimates, pointwise $95\%$ confidence intervals are included: the horizontal and vertical extensions of the stars correspond to the confidence intervals of $e^{\hat{\alpha}}$ and $e^{\hat{\beta}}$, respectively. Thus, the stars also show the significance of effects. If a star does not cross the line $y=1$, the location effect has to be considered significant; if it does not cross the line $x=1$, the dispersion effect has to be considered significant. To make the models comparable we did not use the classical representation of the cumulative model but the reverse categories representation $P(Y \ge r|\boldsymbol{x} )= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}+ (k/2-r) \boldsymbol{z}^T\boldsymbol{\alpha})$.
Then the location effects have the same interpretation as in the adjacent categories model: large values of $\boldsymbol{x}^T{\boldsymbol{\beta}}$ indicate a preference for high response categories while small values indicate a preference for low categories. The resulting star plots for both models, the cumulative and the adjacent categories model, are very similar. It is seen that people living outside of the city of Naples feel much safer ($y$-axis; reference category: city of Naples); the effects are ordered, the larger the distance to the city the safer people feel. Females and older people feel less safe (below the line $y=1$). As far as the concentration of responses is concerned, people living outside the city of Naples and older people show stronger dispersion (below $x=1$) while females show less dispersion than men. It should be noted that age is measured in decades, since otherwise the age effect would be too close to one in the star plot. Although the estimated parameter values are quite different for the cumulative and the adjacent categories models, the conclusions one draws are very similar, as are the goodness-of-fits. Thus one might use either of the two models to investigate the impact of variables. However, it is certainly warranted to account for dispersion effects.
\begin{table}[!t]
\caption{Estimates of proportional odds model and cumulative logistic location-shift model for safety data.}\label{tab:nuc2}
\begin{center}
\begin{tabularsmall}{lrrrrrrrr}
\toprule
&\multicolumn{4}{c}{\bf Proportional Odds Model} &\multicolumn{4}{c}{\bf Location-Shift Model}\\
\midrule
 & coef & se & z value & $p$-value & coef & se & z value & $p$-value\\
\midrule
\bf Location effects\\
\midrule
Age &-0.045 &0.026 &-1.713 &0.086 &-0.041 &0.026 &-1.578 &0.114 \\
Gender &-0.343 &0.075 &-4.563 &0.000 &-0.327 &0.075 &-4.343 &0.000 \\
Residence2 &0.518 &0.090 &5.705 &0.000 &0.572 &0.092 &6.199 &0.000 \\
Residence3 &0.899 &0.117 &7.644 &0.000 &0.938 &0.119 &7.859 &0.000 \\
Residence4 &1.397 &0.141 &9.885 &0.000 &1.339 &0.148 &9.039 &0.000 \\
EduDegree2 &-0.307 &0.111 &-2.748 &0.006 &-0.274 &0.112 &-2.449 &0.014 \\
EduDegree3 &-0.319 &0.150 &-2.118 &0.034 &-0.294 &0.151 &-1.947 &0.051 \\
EduDegree4 &-0.162 &0.159 &-1.021 &0.307 &-0.112 &0.160 &-0.704 &0.481 \\
EduDegree5 &-0.292 &0.221 &-1.319 &0.187 &-0.261 &0.221 &-1.177 &0.239 \\
\midrule
\bf Dispersion effects\\
\midrule
Age &&&&&-0.018 &0.007 &-2.442 &0.014 \\
Gender &&&&&0.045 &0.022 &2.039 &0.041 \\
Residence2 &&&&&-0.085 &0.028 &-2.984 &0.002 \\
Residence3 &&&&&-0.090 &0.036 &-2.458 &0.013 \\
Residence4 &&&&&-0.155 &0.042 &-3.703 &0.000 \\
EduDegree2 &&&&&0.099 &0.030 &3.297 &0.000 \\
EduDegree3 &&&&&0.142 &0.044 &3.176 &0.001 \\
EduDegree4 &&&&&0.163 &0.047 &3.437 &0.000 \\
EduDegree5 &&&&&0.107 &0.064 &1.668 &0.095 \\
\bottomrule
\end{tabularsmall}
\end{center}
\end{table}

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.48\textwidth]{./Safetycum_new}
\includegraphics[width=0.48\textwidth]{./Safetyacat_new}
\caption{Plots of $(e^{\hat{\alpha}},e^{\hat{\beta}})$ for safety data; the $y$-axis represents location, the $x$-axis represents dispersion; left: cumulative location-shift model, right: adjacent categories location-shift 
model.}\label{fig:safetyplott}
\end{center}
\end{figure}

\subsection{Further Applications}

To demonstrate that the location-shift model is frequently a good choice that shows satisfactory goodness-of-fit while being comparably sparse in parameters, we consider some further applications.

\subsubsection*{Nuclear Energy}

The German Longitudinal Election Study (GLES) is a long-term study of the German electoral process \citep{GLES}. The data consist of $2036$ observations, originate from the pre-election survey for the German federal election in 2017, and are concerned with political fears. In particular the participants were asked: ``How afraid are you due to the use of nuclear energy?'' The answers were measured on a Likert scale from 1 (not afraid at all) to 7 (very afraid). The explanatory variables used in the model are \textit{Age} (age of the participant), \textit{Gender} (1: female; 0: male), and \textit{EastWest} (1: East Germany/former GDR; 0: West Germany/former FRG). Tables \ref{tab:nuccatne1} and \ref{tab:nuccatne2} show the fits of cumulative and adjacent categories models, respectively. Comparison between the full models with category-specific effects and the location-shift versions yields $p$-values greater than 0.8. It is obvious that the location-shift versions of the models represent satisfactory approximations while models with global parameters should not be used to describe the underlying response structure.

\begin{table}[!ht]
\caption{Fits of cumulative models with logistic link for response fear of nuclear energy. 
}\label{tab:nuccatne1}
\centering
\begin{tabularsmall}{lrrrrr}
\toprule
 & deviance & df & difference in deviances & df &$p$-value\\
\midrule
Model with category-specific effects &7499.61 &12192 \\
Location-shift model &7506.36 &12204 & 6.75 &12 &0.873\\
Model with global effects &7544.60 &12206 &38.24 &2 &0.000\\
\bottomrule
\end{tabularsmall}
\end{table}

\begin{table}[!ht]
\caption{Fits of adjacent categories models with logistic link for response fear of nuclear energy.}\label{tab:nuccatne2}
\centering
\begin{tabularsmall}{lrrrrr}
\toprule
 & deviance & df & difference in deviances & df &$p$-value\\
\midrule
Model with category-specific effects &7500.77 &12192 \\
Location-shift model &7508.72 &12204 & 7.95 &12 &0.997\\
Model with global effects &7545.41 &12206 &36.69 &2 &0.000\\
\bottomrule
\end{tabularsmall}
\end{table}

The strength of the effects is seen from the stars in Figure \ref{fig:Nuclearplott}, which shows parameter estimates for the cumulative location-shift model on the left and for the adjacent categories model on the right hand side. Again we used the reverse order of categories in the cumulative model and age measured in decades. It is seen that all variables have significant location and dispersion effects with the exception of EastWest, for which the dispersion effect is not distinctly significant. Females and older people are more afraid of the consequences of the use of nuclear energy while residents of the Eastern part are less afraid. Females show stronger dispersion than men, and older people less dispersion than younger respondents.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=7cm]{./Nuclearcumstar}
\includegraphics[width=7cm]{./Nuclearacatstar}
\caption{Plots of $(e^{\hat{\alpha}},e^{\hat{\beta}})$ for response fear of nuclear energy; the $y$-axis represents location, the $x$-axis represents dispersion; left: cumulative location-shift model, right: adjacent categories location-shift model.}\label{fig:Nuclearplott}
\end{center}
\end{figure}

\subsubsection*{Climate Change}

Let us again consider the GLES data, but now with the response to the item ``How afraid are you due to the climate change?'' Tables \ref{tab:nuccats1} and \ref{tab:nuccats2} show the fits of cumulative and adjacent categories models, respectively. Also for this question the full models can be simplified to the location-shift versions of the models, although the reduction is not as clear-cut as for the question that refers to the use of nuclear energy.

\begin{table}[!ht]
\caption{Fits of cumulative models with logistic link for response fear of climate change.}\label{tab:nuccats1}
\centering
\begin{tabularsmall}{lrrrrr}
\toprule
 & deviance & df & difference in deviances & df &$p$-value\\
\midrule
Model with category-specific effects &7152.12 &12192 \\
Location-shift model &7170.56 &12204 & 18.34 &12 &0.1057\\
Model with global effects &7178.06 &12206 &7.50 &2 &0.0235\\
\bottomrule
\end{tabularsmall}
\end{table}

\begin{table}[!ht]
\caption{Fits of adjacent categories models with logistic link for response fear of climate change. 
}\label{tab:nuccats2}
\centering
\begin{tabularsmall}{lrrrrr}
\toprule
 & deviance & df & difference in deviances & df &$p$-value\\
\midrule
Model with category-specific effects &7153.42 &12192 \\
Location-shift model &7171.93 &12204 & 18.51 &12 &0.1010 \\
Model with global effects &7177.17 &12206 &5.24 &2 &0.0723\\
\bottomrule
\end{tabularsmall}
\end{table}

\subsubsection*{Demand for Medical Care}

\citet{DebTri:97} analyzed the demand for medical care for individuals, aged 66 and over, based on a dataset from the U.S. National Medical Expenditure Survey in 1987/88. The data (``NMES1988'') are available from the R package~{AER} \citep{KleZei:2008}. The response is the number of physician/non-physician office and hospital outpatient visits, categorized as 1: zero, 2: 1--3, 3: 4--6, 4: 7--10, 5: 11--20, 6: above 20. The available covariates include \textit{Age}, the self-perceived health status (\textit{Health}; 0: poor, 1: average, 2: excellent), and the number of chronic conditions (\textit{Numchron}). Since the effects vary across gender, we consider only male, married patients ($n=1388$).

The data set is interesting because it is one of the applications in which cumulative models show fitting problems. The model with category-specific effects cannot be fitted at all, and the cumulative location-shift model yields unstable estimates for which no standard errors are available. In contrast, for the adjacent categories model maximum likelihood estimates and standard errors are obtained by regular software. The big advantage of the adjacent categories model over the cumulative model, which shows here, is that parameter values are not restricted in the adjacent categories model. Table \ref{tab:demacat} shows the fits for the adjacent categories models. It is again seen that one might use the location-shift model but the simple model with global parameters is not appropriate.
\begin{table}[!ht]
\caption{Fits of adjacent categories models with logistic link for demand for medical care data.}\label{tab:demacat}
\centering
\begin{tabularsmall}{lrrrrr}
\toprule
 & deviance & df & difference in deviances & df &$p$-value\\
\midrule
Model with category-specific effects &4258.41 &6910 \\
Location-shift model &4282.05 &6925 & 23.64 &15 &0.0714\\
Model with global effects &4303.64 &6930 &19.69 &5 &0.0014\\
\bottomrule
\end{tabularsmall}
\end{table}

\section{Generalized Additive Models}\label{sec:additive}

In the following, traditional additive models for ordinal responses are considered briefly. Then the additive location-shift model is introduced.

\subsection{Generalized Additive Models for Ordinal Responses}

Parametric models such as the non proportional odds model are rather restrictive. They assume a simple linear predictor, which might be very misleading if, for example, U-shaped effects are present. A very flexible class of models that avoids these restrictions is the class of generalized additive models, which are well developed for continuous and univariate responses, see, for example, \citet{BujHasTib:89}, \citet{FriSil:89}, \citet{HasTib:86a}. In generalized additive models the linear predictor $\boldsymbol{x}^T{\boldsymbol{\beta}}$ is replaced by the additive term
\[
f_{(1)}(x_1)+\dots +f_{(p)}(x_p),
\]
where the $f_{(j)}(.)$ are unspecified functions. The unknown functions may be expanded in basis functions \citep{EilMar:96}, smoothing splines \citep{Gu:2002} or thin-plate splines \citep{Wood:2004}; all of them have been used to model binary or continuous responses. Ordinal models with additive predictors were considered by \citet{YeeWil:96} and \citet{yee2010vgam, yee2015vector} within the framework of vector generalized additive models.
For ordinal models one has to replace the whole predictor $\eta_r=\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}$ by
\[
\eta_r=\beta_{0r}+f_{(1)}(x_1)+\dots +f_{(p)}(x_p),
\]
which contains category-specific intercepts but fixed smooth variable effects. The essential trait is the assumption that the functions $f_{(j)}(.)$ do not vary across categories; they are \textit{global} effects. Thus, if one uses the cumulative approach the models can be considered as additive versions of proportional odds models with an accordingly simple interpretation of effects. If the $j$-th variable increases by one unit from $x_j$ to $x_j+1$ and all other variables remain fixed one obtains
\begin{align*}
&F^{-1}(P(Y\leq r|x_1,\dots, x_j+1,\dots,x_p))- F^{-1}(P(Y\leq r|x_1,\dots, x_j,\dots,x_p))\\
&=f_{(j)}(x_j+1)-f_{(j)}(x_j),
\end{align*}
which contains only the function $f_{(j)}(.)$. In the logistic version the inverse distribution function yields the cumulative log-odds, and one obtains
\[
\log\left(\frac{\gamma_r(x_1,\dots, x_j+1,\dots,x_p)}{\gamma_r(x_1,\dots, x_j,\dots,x_p)}\right)= f_{(j)}(x_j+1)-f_{(j)}(x_j),
\]
where the $\gamma_r(\boldsymbol{x})= {P(Y \le r|\boldsymbol{x})}/{P(Y>r|\boldsymbol{x})}$ are the cumulative odds. After transformation one has
\[
\frac{\gamma_r(x_1,\dots, x_j+1,\dots,x_p)}{\gamma_r(x_1,\dots, x_j,\dots,x_p)}= e^{f_{(j)}(x_j+1)-f_{(j)}(x_j)},
\]
which can be interpreted as the change in odds if the $j$-th variable increases by one unit from $x_j$ to $x_j+1$.
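That this odds ratio depends neither on the category $r$ nor on the other covariates is easy to verify numerically. The following sketch (our illustration; the smooth functions are chosen arbitrarily) checks it for a logistic additive model with two covariates.

```python
import numpy as np

def F(eta):                          # logistic distribution function
    return 1.0 / (1.0 + np.exp(-eta))

f1 = lambda x: np.sin(x)             # arbitrary smooth location effects
f2 = lambda x: 0.5 * x**2
beta0 = np.array([-1.0, 0.0, 1.0])   # increasing intercepts, k = 4 categories

def cum_odds(r, x1, x2):
    """Cumulative odds P(Y <= r | x) / P(Y > r | x) in the additive model."""
    p = F(beta0[r] + f1(x1) + f2(x2))
    return p / (1.0 - p)

# odds ratio for x1 -> x1 + 1; should equal exp(f1(x1 + 1) - f1(x1))
ratio = cum_odds(1, 1.3, 0.7) / cum_odds(1, 0.3, 0.7)
```

With the logistic link the cumulative odds are exactly $e^{\eta_r}$, so the intercept and the effect of $x_2$ cancel in the ratio.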
If the function is linear, that is, $f_{(j)}(x_j)=x_j\beta_j$, one obtains on the right hand side $e^{\beta_j}$, which is equivalent to (\ref{eq:int}). Then, the effect strength does not depend on the baseline value $x_j$. This is different in the general additive case, in which the change depends on the ``starting'' value $x_j$, which is increased by one unit. Nevertheless, the effect strength is not affected by the values of the other covariates. Similar properties hold if one uses the adjacent categories model with an additive predictor structure.

\subsection{Additive Location-Shift Models}

The additive ordinal models with global effects considered in the previous section share some problems with the parametric model with global effects. Although more flexible by allowing for smooth effects, they are rather restrictive in assuming that the effects of covariates do not depend on the category. Consequently, they might show bad goodness-of-fit. One can extend the model in the same way as linear models by allowing the smooth functions to be category-specific. Then, one postulates
\[
\eta_r=\beta_{0r}+f_{(1),r}(x_1)+\dots +f_{(p),r}(x_p),
\]
where the functions $f_{(j),r}(.)$ depend on $r$. However, for each covariate one has to fit $k-1$ functions, which can lead to a confusing number of functions if one has, for example, 10 response categories, which is not unusual in questionnaires. Moreover, the functions are severely restricted since $\eta_r\le \eta_{r+1}$ has to hold for all $r$, which is hard to control in estimation. If this is not accounted for, the resulting estimates might yield negative probabilities. One can try to restrict the variation of the functions by assuming that they do not vary too strongly across categories, see \citet{Tutz:2003}, but this approach calls for complicated regularization methods, and still one has $k-1$ functions to interpret for each explanatory variable.
The model proposed here is an additive version of the location-shift model, which avoids the large number of functions but typically fits much better than the simple additive model. The \textit{additive location-shift model} uses the predictor
\[
\eta_r=\beta_{0r}+f_{(1)}(x_1)+\dots +f_{(p)}(x_p)+ (r-k/2) \{ f_{(1)}^{(S)}(z_1)+\dots +f_{(m)}^{(S)}(z_m)\},
\]
where $f_{(1)}^{(S)}(.),\dots,f_{(m)}^{(S)}(.)$ are unspecified dispersion functions. The predictor contains two types of smooth functions, the ones in the location term $f_{(1)}(x_1)+\dots +f_{(p)}(x_p)$, and the ones in the dispersion term $f_{(1)}^{(S)}(z_1)+\dots +f_{(m)}^{(S)}(z_m)$. In particular when $\boldsymbol{x}$ and $\boldsymbol{z}$ are distinct the functions have a simple interpretation. If the $j$-th $x$-variable increases by one unit from $x_j$ to $x_j+1$ and all other variables remain fixed, one obtains for the cumulative model the same property as in the simple additive model,
\[
\frac{\gamma_r(x_1,\dots, x_j+1,\dots,x_p)}{\gamma_r(x_1,\dots, x_j,\dots,x_p)}= e^{f_{(j)}(x_j+1)-f_{(j)}(x_j)},
\]
which means that the functions can be interpreted in terms of changes in (cumulative) odds. For the differences between adjacent predictors one obtains
\[
\eta_r-\eta_{r-1}=\beta_{0r}-\beta_{0,r-1}+ \{f_{(1)}^{(S)}(z_1)+\dots +f_{(m)}^{(S)}(z_m)\}.
\]
That means that large values of the $f_{(j)}^{(S)}(.)$ widen the distance between adjacent predictors, while small values shrink it.
Therefore large values indicate a tendency to middle categories, that is, smaller dispersion, while small values indicate a tendency to extreme categories, that is, stronger dispersion.
\blanco{
Questionable: If the $j$-th $z$-variable increases by one unit from $z_j$ to $z_j+1$ and all other variables remain fixed one obtains for the differences of adjacent predictors
\begin{align*}
&\eta_r(\boldsymbol{x},z_1,\dots, z_j+1,\dots,z_p)-\eta_{r}(\boldsymbol{x},z_1,\dots, z_j,\dots,z_p)\\
&=(r-k/2) \{ f_{(j)}^{(S)}(z_j+1)-f_{(j)}^{(S)}(z_j)\}=(r-k/2) \, d_{(j)}^{(S)}(z_j),
\end{align*}
where $d_{(j)}^{(S)}(z_j)=f_{(j)}^{(S)}(z_j+1)-f_{(j)}^{(S)}(z_j)$ is the change in the function $f_{(j)}^{(S)}(z_j)$ if the $j$-th variable increases by one unit.
}
Like the parametric model, the additive location-shift model accounts for dispersion without being too complex. In the general case $\boldsymbol{x}=\boldsymbol{z}$ the additive location-shift model contains just two smooth functions per variable that characterize the effect of the explanatory variables on the response, one for the location and one for the response style. That means one has to fit only $2p$ smooth functions instead of the $(k-1)p$ functions that would be needed in the general model with category-specific covariate functions.

For the fitting of the unknown functions $f_{(j)}(.), f_{(j)}^{(S)}(.)$ we use an expansion in basis functions. Thus the functions are approximated by
\[
f_{(j)}(x)= \sum_{s=1}^{M} \beta_{js}\Phi_s(x), \quad f_{(j)}^{(S)}(z) = \sum_{s=1}^{M} \alpha_{js}\Phi_s(z),
\]
where $\Phi_1(.),\dots,\Phi_M(.)$ are basis functions. A widely used set of basis functions are B-splines \citep{EilMar:96}, which are also implemented in our R package \textbf{ordDisp}, described in detail in Section \ref{sec:OrdDisp}.
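The basis expansion can be sketched as follows. The helper below is our illustration (using scipy, not the R implementation): it builds the design matrix with columns $\Phi_1(x),\dots,\Phi_M(x)$ of cubic B-spline basis functions on an interval, so that $f_{(j)}(x)$ is the product of the design matrix with the coefficient vector.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(x, low, high, n_bs=6, degree=3):
    """Design matrix with columns Phi_1(x), ..., Phi_{n_bs}(x) of B-spline
    basis functions of the given degree on [low, high] (clamped knots)."""
    n_inner = n_bs - degree - 1                    # number of interior knots
    inner = np.linspace(low, high, n_inner + 2)[1:-1]
    knots = np.r_[[low] * (degree + 1), inner, [high] * (degree + 1)]
    eye = np.eye(n_bs)
    return np.column_stack([BSpline(knots, eye[s], degree)(x)
                            for s in range(n_bs)])

x = np.linspace(0.1, 9.9, 50)
Phi = bspline_design(x, 0.0, 10.0)                 # 50 x 6 design matrix
f_hat = Phi @ np.array([0.0, 0.5, 1.0, 1.0, 0.5, 0.0])  # f(x) for given coefficients
```

Conceptually, such design matrices enter the predictor of the (vector) generalized linear model, and a difference penalty on adjacent coefficients can be added in the spirit of \citet{EilMar:96}.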
\subsection{Safety Data}

It has been shown that the parametric location-shift model provides a good compromise between sparsity and goodness-of-fit for the response feeling safe in Naples. The only continuous variable was age, which had $p$-values 0.086 (cumulative model) and 0.114 (adjacent categories model). The $p$-values are greater than 0.05 but not so large that one can be sure that there is no effect of age. In the following, age is included as a smooth function approximated by four cubic B-splines in the location term and as a linear function in the dispersion term (a linear function turned out to be a good approximation there). Figure \ref{fig:safetyplots} shows the resulting curves (left: location effect $f_{(age)}(age)$, right: dispersion effect $f_{(age)}^{(S)}(age)$, upper panel: cumulative model, lower panel: adjacent categories model). It is seen that in both models a linear effect of age does not seem appropriate. In particular, young and older persons seem to feel less safe than persons in their forties. Testing whether the smooth effect of age is needed yields a $p$-value of 0.046 (cumulative model), which indicates that age should not be neglected.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=13cm]{./Safetycumplot}
\includegraphics[width=13cm]{./Safetyacatplot}
\caption{Safety data (left: location effect $f_{(age)}(age)$, right: dispersion effect $f_{(age)}^{(S)}(age)$, upper panel: cumulative model, lower panel: adjacent categories model).}\label{fig:safetyplots}
\end{center}
\end{figure}

\subsection{Nuclear Energy}

As a second example the effect of age on the fear of the use of nuclear energy is considered. Figure \ref{fig:Nuclearplots4} shows the estimated location and dispersion effects for the additive cumulative and adjacent categories models.
The estimates of the location effect indicate that the fear of the use of nuclear energy is strongest for people in their sixties and weakest for people around thirty. The estimates of the dispersion term indicate that older people tend to have less dispersion than younger respondents. Likelihood ratio tests show that neither the location nor the dispersion effects are to be neglected: if the cumulative model is fitted, the likelihood ratio statistic is 64.62 on 4 df for the location effect and 21.24 on 1 df for the dispersion effect. Similar values result for the adjacent categories model.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width=12cm]{./Nuclearcum}
\includegraphics[width=12cm]{./Nuclearacat}
\caption{Nuclear energy data (left: location effect $f_{(age)}(age)$, right: dispersion effect $f_{(age)}^{(S)}(age)$, upper panel: cumulative model, lower panel: adjacent categories model).}\label{fig:Nuclearplots4}
\end{center}
\end{figure}

\section{Program Package ordDisp}\label{sec:OrdDisp}

Parametric and additive location-shift models can be fitted using the R add-on package \textbf{ordDisp}~\citep{ordDisp}. The call of the main fitting function (in parts) is the following:
\begin{verbatim}
ordDisp(formula, data, family = c("cumulative", "acat"),
        n_bs = 6, reverse = FALSE, ...)
\end{verbatim}
The formula needs to have the form \texttt{y $\sim$ x1 + ... + xp | z1 + ... + zq}, where on the right hand side of the formula the $x$-variables of the location term and the $z$-variables of the dispersion term are separated by the \texttt{|}-operator. The function allows fitting smooth effects $f(.)$ and $f^{(S)}(.)$ by using \texttt{s(x)} and \texttt{s(z)} in the respective part of the formula. The functions are then fitted using \texttt{n\_bs} B-spline basis functions. In the case of nominal covariates \texttt{ordDisp()} generates 0-1-coded dummy variables.
If \texttt{reverse=TRUE} the function uses the reverse categories representation $P(Y \ge r|\boldsymbol{x})/P(Y < r|\boldsymbol{x})$ for the cumulative model and $P(Y = r|\boldsymbol{x})/P(Y = r+1|\boldsymbol{x})$ for the adjacent categories model. To preserve the interpretation of the dispersion effects, the scaling factor is reversed to $(k/2-r)$ in the cumulative case and $(r-k/2)$ in the adjacent categories case. Function \texttt{ordDisp()} internally calls function \texttt{vglm()} of the R package \textbf{VGAM}~\citep{yee2010vgam}. Thus the fitted object inherits all the values of a \texttt{vglm}-object and, importantly, all the methods implemented for objects of class \texttt{vglm}, such as \texttt{print}, \texttt{summary}, \texttt{predict} and \texttt{plot}, can be applied. Additionally, star plots depicting the location effects against the dispersion effects, including pointwise 95\% confidence intervals (cf. Figure \ref{fig:safetyplott}), can be generated using the function \texttt{plotordDisp(object, names, ...)}, where the variables to be plotted are passed to the function by the \texttt{names}-argument. Note that the use of \texttt{plotordDisp()} is only meaningful for variables with both a location effect and a dispersion effect. \section{Concluding Remarks}\label{sec:conclud} It has been demonstrated that parametric location-shift models typically are sufficiently complex to approximate the underlying probability structure in ordinal regression. The models were also extended to allow for a more general additive predictor structure that avoids fitting a large number of functions. Implicitly we compared two approaches to modeling data: the cumulative approach and the adjacent categories approach. Typically the models yield similar goodness-of-fit. A distinct advantage of the adjacent categories model is that no restrictions on the parameter space are postulated, which makes it more suitable when allowing for a more complex predictor structure.
There is a third class of ordinal models, namely sequential models, which have not been considered here. Parametric sequential models have the form $P(Y \ge r|Y \ge r-1, \boldsymbol{x})= F(\beta_{0r}+\boldsymbol{x}^T{\boldsymbol{\beta}}_r)$. They reflect the successive transition to higher categories in a stepwise fashion, since $Y \ge r$ given $Y \ge r-1$ can be interpreted as the transition to categories higher than category $r-1$ given that at least category $r-1$ has been reached. Sequential models are strongly linked to discrete survival and have been considered, for example, by \citet{ArmSlo:89}, \citet{Tutz:91c} and \citet{AnaKlei:97}. Location-shift models for this type of model seem less useful because of the structure of the model. In sequential models the category-specific parameters ${\boldsymbol{\beta}}_r$ have a distinct meaning: they represent the impact of covariates on the transition to higher categories given that lower categories have already been reached. Including a shift term, which represents a tendency toward middle or extreme categories, seems less useful. \end{document}
\begin{document} \title[On the periodicity of a class of arithmetic functions] {On the periodicity of a class of arithmetic functions associated with multiplicative functions} \author{Guoyou Qian} \address{Center for Combinatorics, Nankai University, Tianjin 300071, P.R. China} \email{[email protected], [email protected]} \author{Qianrong Tan} \address{School of Mathematics and Computer Science, Panzhihua University, Panzhihua 617000, P.R. China} \email{[email protected]} \author{Shaofang Hong*} \address{Yangtze Center of Mathematics, Sichuan University, Chengdu 610064, P.R. China and Mathematical College, Sichuan University, Chengdu 610064, P.R. China} \email{[email protected], [email protected], [email protected]} \thanks{*Hong is the corresponding author and was supported partially by the National Science Foundation of China Grant \# 10971145 and by the Ph.D. Programs Foundation of Ministry of Education of China Grant \#20100181110073} \keywords{periodic arithmetic function, arithmetic progression, least common multiple, $p$-adic valuation, Euler phi function, smallest period} \subjclass[2000]{Primary 11B25, 11N13, 11A05} \begin{abstract} Let $k\ge 1,a\ge 1,b\ge 0$ and $ c\ge 1$ be integers. Let $f$ be a multiplicative function with $f(n)\ne 0$ for all positive integers $n$. We define the arithmetic function $g_{k,f}$ for any positive integer $n$ by $g_{k,f}(n):=\frac{\prod_{i=0}^k f(b+a(n+ic))} {f({\rm lcm}_{0\le i\le k} \{b+a(n+ic)\})}$. We first show that $g_{k,f}$ is periodic and $c {\rm lcm}(1,...,k)$ is its period. We then provide a detailed local analysis of the periodic function $g_{k,\varphi}$ and determine its smallest period, where $\varphi$ is the Euler phi function. \end{abstract} \maketitle \section{\bf Introduction} Chebyshev \cite{[Ch]} initiated the study of the least common multiple of consecutive positive integers in the first significant attempt to prove the prime number theorem.
An equivalent form of the prime number theorem states that $\log {\rm lcm}(1, ...,n)\sim n$ as $n$ goes to infinity. Hanson \cite{[Ha]} and Nair \cite{[N]} obtained an upper bound and a lower bound for ${\rm lcm}_{1\le i\le n}\{i\}$, respectively. Bateman, Kalb and Stenger \cite{[BKS]} obtained an asymptotic estimate for the least common multiple of arithmetic progressions. Hong, Qian and Tan \cite{[HQT1]} obtained an asymptotic estimate for the least common multiple of a sequence of products of linear polynomials. On the other hand, the study of periodic arithmetic functions has long been a common topic in number theory. For the related background information, we refer the readers to \cite{[A]} and \cite{[M]}. This topic remains active. When studying the arithmetic properties of the least common multiple of finitely many consecutive positive integers, Farhi \cite{[F]} defined the arithmetic function $g_k$ for any positive integer $n$ by $g_k(n):=\frac{\prod_{i=0}^{k}(n+i)}{{\rm lcm}_{0\le i\le k}\{n+i\}}$. In the same paper, Farhi showed that $g_k$ is periodic with period $k!$ and posed the open problem of determining the smallest period of $g_k$. Let $P_k$ be the smallest period of $g_k$. Define $L_0:=1$ and, for any integer $k\ge 1$, $L_k:={\rm lcm}(1,...,k)$. Subsequently, Hong and Yang \cite{[HY]} improved the period from $k!$ to $L_k$ and conjectured that $\frac{L_{k+1}}{k+1}$ divides $P_k$ for all nonnegative integers $k$. By proving the Hong-Yang conjecture, Farhi and Kane \cite{[FK]} determined the smallest period of $g_k$, thereby solving the open problem posed by Farhi \cite{[F]}. Let $k\ge 1,a\ge 1,b\ge 0$ and $ c\ge 1$ be integers. Let ${\mathbb Q}$ and ${\mathbb N}$ denote the field of rational numbers and the set of nonnegative integers, respectively. Define ${\mathbb N}^*:={\mathbb N}\setminus\{0\}$.
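As a small numerical illustration (our addition, not part of the original discussion), Farhi's function $g_k$ and the Hong-Yang period $L_k$ can be checked directly in Python:

```python
from math import lcm, prod

def g(k, n):
    # Farhi's function g_k(n) = n(n+1)...(n+k) / lcm(n, ..., n+k);
    # the lcm divides the product, so the division is exact
    terms = [n + i for i in range(k + 1)]
    return prod(terms) // lcm(*terms)

# Hong and Yang: L_k = lcm(1, ..., k) is a period of g_k (improving Farhi's k!)
k = 4
L_k = lcm(*range(1, k + 1))  # L_4 = 12
assert all(g(k, n + L_k) == g(k, n) for n in range(1, 400))
```

For instance, $g_4(1)=120/60=2$, and the sequence $g_4(n)$ repeats with period 12.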
In order to investigate the least common multiple of any $k+1$ consecutive terms in the arithmetic progression $\{b+am\}_{m\in \mathbb{N}^*}$, Hong and Qian \cite{[HQ]} introduced the arithmetic function $g_{k,a,b}$ defined for any positive integer $n$ by $g_{k,a,b}(n):=\frac{\prod_{i=0}^k (b+a(n+i))}{{\rm lcm}_{0\le i\le k}\{b+a(n+i)\}}.$ They \cite{[HQ]} showed that $g_{k,a,b}$ is periodic and obtained a formula for the smallest period of $g_{k,a,b}$, which extends the Farhi-Kane theorem to the general arithmetic progression case. Let $f$ be a multiplicative function with $f(n)\ne 0$ for all $n\in\mathbb{N}^*$. To measure the difference between $\prod_{i=0}^kf(b+a(n+ic))$ and $f({\rm lcm}_{0\le i\le k}\{b+a(n+ic)\})$, we define the arithmetic function $g_{k,f}$ for any positive integer $n$ by $$ g_{k,f}(n):=\frac{\prod_{i=0}^k f(b+a(n+ic))} {f({\rm lcm}_{0\le i\le k} \{b+a(n+ic)\})}. \eqno (1.1) $$ One naturally asks the following interesting question.\\ \noindent{\bf Problem 1.1.} Let $f$ be a multiplicative function such that $f(n)\ne 0$ for all positive integers $n$. Is $g_{k,f}$ periodic, and if so, what is the smallest period of $g_{k, f}$?\\ As usual, for any prime number $p$, we let $v_{p}$ be the normalized $p$-adic valuation of ${\mathbb Q}$, i.e., $v_p(a)=s$ if $p^{s}\parallel a$. For any real number $x$, by $\lfloor x\rfloor$ we denote the largest integer not exceeding $x$. Evidently, $v_p(L_k)={\rm max}_{1\leq i\leq k}\{v_{p}(i)\}=\lfloor {\rm log}_{p}k\rfloor$ is the largest exponent $e$ such that $p^e\le k$. We can now state the first main result of this paper, which answers the first part of Problem 1.1.\\ \noindent{\bf Theorem 1.2.} {\it Let $k\ge 1, a\ge 1, b\ge 0$ and $c\ge 1$ be integers. If $f$ is a multiplicative function such that $f(n)\neq 0$ for all $n\in \mathbb{N}^*$, then the arithmetic function $g_{k,f}$ is periodic and $cL_k$ is its period.}\\ It seems difficult to answer the second part of Problem 1.1 completely.
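A numerical sanity check of Theorem 1.2 for $f=\varphi$ (an illustrative sketch we add here; the parameter values are arbitrary):

```python
from fractions import Fraction
from math import lcm, prod

def phi(n):
    # Euler's totient function via trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (e - 1) * (p - 1)
        p += 1
    if m > 1:
        result *= m - 1
    return result

def g_kf(k, a, b, c, n, f=phi):
    # g_{k,f}(n) as in (1.1), computed exactly as a rational number
    terms = [b + a * (n + i * c) for i in range(k + 1)]
    return Fraction(prod(f(t) for t in terms), f(lcm(*terms)))

# Theorem 1.2 predicts that c * L_k is a period of g_{k, phi}
k, a, b, c = 3, 2, 5, 2
period = c * lcm(*range(1, k + 1))  # c * L_3 = 12
assert all(g_kf(k, a, b, c, n + period) == g_kf(k, a, b, c, n)
           for n in range(1, 300))
```

For $k=1$, $a=c=1$, $b=0$ the function is identically $1$, since consecutive integers are coprime and $\varphi$ is multiplicative.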
Here we are able to answer it for the Euler phi function $\varphi$. In fact, we first prove a generalization of Hua's identity and then use it to show that the arithmetic function $g_{k, \varphi}$ is periodic. Subsequently, we develop $p$-adic techniques to determine the exact value of the smallest period of $g_{k, \varphi}$. Note that it was proved by Farhi and Kane \cite{[FK]} that there is at most one prime $p\le k$ such that $v_p(k+1)\ge v_p(L_k)\ge 1$. We can now state the second main result of this paper as follows.\\ \noindent{\bf Theorem 1.3.} {\it Let $k\ge 1, a\ge 1, b\ge 0$ and $c\ge 1$ be integers. Let $d:={\rm gcd}(a,b)$ and $a':=a/d$. Then $g_{k,\varphi}$ is periodic, and its smallest period equals $Q_{k, a', c}$, unless $v_p(k+1)\ge v_p(L_k)\ge 1$ for some odd prime $p\nmid a'$ (there is at most one such prime), in which case its smallest period is equal to $\frac{Q_{k, a', c}}{p^{v_p(L_k)}}$, where $$Q_{k, a', c}:=\frac{cL_k}{\eta _{2,k,a',c}\prod_{{\rm prime} \ q|a'}q^{v_{q}(cL_k)}},\eqno(1.2)$$ and \begin{align*} \eta_{2,k,a',c}:={\left\{ \begin{array}{rl} 2^{v_2(L_k)}, &\text{if } 2\nmid a' \text{ and } v_2(k+1)\ge v_2(L_k)\ge 2, \\ 2, &\text{if } 2\nmid a \text{ and } v_2(cL_k)=1, \text{ or } k=3, \ 2\nmid a \text{ and } 2|c, \text{ or } k=3, \ 2\nmid a' \text{ and } 2|d,\\ 1, &\text{otherwise}. \end{array} \right.} \end{align*}} \\ This answers the second part of Problem 1.1 for the Euler phi function. The paper is organized as follows. In Section 2, we show that $g_{k,f}$ is periodic and $cL_k$ is its period. In Section 3, we provide a detailed $p$-adic analysis of the periodic arithmetic function $g_{k,\varphi}$. The final section is devoted to the proof of Theorem 1.3, which determines the smallest period of $g_{k,\varphi}$. \section{\bf Proof of Theorem 1.2} In this section, we give the proof of Theorem 1.2.
We begin with the following lemma.\\ \noindent{\bf Lemma 2.1.} {\it Let $A$ be any given totally ordered set, and let $a_1, ..., a_n$ be any $n$ nonzero elements of $A$ (not necessarily different). If we can define formal multiplication and formal division for the set $A$, then we have $$\max(a_1, ..., a_n)=a_1\cdots a_n \prod_{r=2}^n\prod_ {1\le i_1<\cdots<i_r\le n}(\min(a_{i_1},\ldots,a_{i_r}))^{(-1)^{r-1}}.$$} \begin{proof} Rearrange the $n$ elements $a_1, ..., a_n$ so that $a_{j_1}\ge \cdots\ge a_{j_n}$. For convenience, we let $b_i=a_{j_i}, \ i=1,2,\ldots,n$. Then the desired result in Lemma 2.1 becomes $$b_1=b_1\cdots b_n\prod_{r=2}^n\prod_{1\le i_1<\cdots<i_r\le n} (\min(b_{i_1},\ldots,b_{i_r}))^{(-1)^{r-1}}.\eqno (2.1)$$ To prove the result, it suffices to prove that for each $b_i$ the number of times that $b_i$ occurs on the left side of (2.1) equals the number of times that $b_i$ occurs on the right side of (2.1). We distinguish the following two cases. {\sc Case 1.} If $b_1=b_2=\cdots =b_n$, then the number of times that $b_1$ occurs on the right side of (2.1) is $$n-{n\choose 2}+\cdots+(-1)^{n-1}{n\choose n}= -1+\sum_{r=1}^n(-1)^{r-1}{n\choose r}+1=-(1-1)^n+1=1,$$ which equals the number of times that $b_1$ occurs on the left side of (2.1). {\sc Case 2.} If there exists a positive integer $s< n$ such that $b_1=b_2=\cdots=b_s>b_{s+1}$, then the number of times $b_1$ occurs on the right side of (2.1) is $s-{s\choose 2}+\cdots+(-1)^{s-1}{s\choose s}=1$. For any $j>s$, we can always assume that $b_{t+1},...,b_j,..., b_{t+l}$ are just the $l$ terms of the sequence $\{b_i\}_{i=1}^n$ such that $b_s>b_{t+1}=\cdots=b_j=\cdots=b_{t+l}$ for some $t\ge s$.
Thus, the number of times that $b_j$ occurs on the right side of (2.1) is \begin{align*} &l-\Big({t+l\choose 2}-{t\choose 2}\Big)+\cdots+(-1)^{t-1}\Big({t+l\choose t}-{t\choose t}\Big)+(-1)^{t}{t+l\choose t+1}+\cdots+(-1)^{t+l-1}{t+l\choose t+l}\\ &=l+\sum_{r=2}^t(-1)^r{t\choose r}+\sum_{i=2}^{t+l}(-1)^{i-1}{t+l\choose i}\\ &=l+(1-1)^t+{t\choose 1}-{t\choose 0}-(1-1)^{t+l}+{t+l\choose 0}-{t+l\choose 1}=0. \end{align*} This completes the proof of Lemma 2.1. \end{proof} In \cite{[Hu]}, Hua gave the following beautiful identity $$ {\rm lcm}(a_1, ..., a_n)=a_1\cdots a_n \prod_{r=2}^{n}\prod_{1\leq i_1<\cdots<i_{r}\leq n}({\rm gcd}(a_{i_1}, ..., a_{i_{r}}))^{(-1)^{r-1}},$$ where $a_1, ..., a_n$ are any given $n$ positive integers. In what follows, using Lemma 2.1, we generalize Hua's identity to the multiplicative function case.\\ \noindent{\bf Lemma 2.2.} {\it Let $f$ be a multiplicative function, and let $a_1,a_2,\ldots,a_n$ be any $n$ positive integers. If $f(m)\ne 0$ for each $m\in \mathbb{N}^*$, then $$f({\rm lcm}(a_1,a_2,\ldots,a_n))=f(a_1)\cdots f(a_n)\cdot \prod_{r=2}^{n}\prod_{1\leq i_1<\cdots<i_{r}\leq n}\big(f({\gcd}(a_{i_1}, ..., a_{i_{r}}))\big)^{(-1)^{r-1}}.$$ } \begin{proof} Since $f$ is a multiplicative function, we have $$f({\rm lcm}(a_1,a_2,\ldots,a_n))=\prod_{p \ {\rm prime}} f(p^{\max(v_p(a_1),v_p(a_2),\ldots,v_p(a_n))})$$ and $$f(\gcd(a_{i_1},\ldots,a_{i_r}))=\prod_{p \ {\rm prime}} f(p^{\min (v_p(a_{i_1}),\ldots,v_p(a_{i_r}))}).$$ Thus it suffices to prove that \begin{align*} f(p^{\max(v_p(a_1),v_p(a_2),\ldots,v_p(a_n))})= \prod_{r=1}^{n}\prod_{1\leq i_1<\cdots<i_{r}\leq n}\big(f(p^{\min(v_p(a_{i_1}), \ldots, v_p(a_{i_{r}}))})\big)^{(-1)^{r-1}}\ (2.2) \end{align*} for every prime $p$. Now we define an order $\succeq$ for the set $S=\{f(p^m): \ m\in \mathbb{N}\}$ according to the size of the power $m$ of the prime $p$. That is, $f(p^i)\succeq f(p^j)$ if $i\ge j$ and $f(p^i)\succ f(p^j)$ if $i> j$.
It is easy to check that $S$ is a totally ordered set for the order $\succeq$. So the equality (2.2) follows immediately from Lemma 2.1 by letting $a_i$ be $f(p^{v_p(a_i)})$ for $1\le i\le n$. The proof of Lemma 2.2 is complete. \end{proof} If ${\rm gcd}(a_{i}, a_{j})={\rm gcd}(b_{i}, b_{j})$ for any $1\leq i<j\le n$, then for any $t\ge 3$, one has ${\rm gcd}(a_{i_1},a_{i_2},\ldots,a_{i_t})={\rm gcd}(b_{i_1},b_{i_2},\ldots,b_{i_t})$ for any $1\leq i_1<\cdots<i_t\leq n$, since for each prime $p$ the $p$-adic valuation of a $t$-wise gcd is the minimum of the valuations of the pairwise gcds. Therefore we immediately derive the following result from Lemma 2.2.\\ \noindent{\bf Lemma 2.3.} {\it Let $a_1,a_2,\ldots,a_n$ and $b_1,b_2,\ldots,b_n$ be any $2n$ positive integers. Let $f$ be a multiplicative function with $f(n)\ne 0$ for all $n\in \mathbb{N}^*$. If ${\rm gcd}(a_{i}, a_{j})={\rm gcd}(b_{i}, b_{j})$ for any $1\leq i<j\leq n$, then we have \begin{align*} \frac{\prod_{1\le i\le n}f(a_i)}{f({\rm lcm}_{1\le i\le n}\{a_i\})} =\frac{\prod_{1\le i\le n}f(b_i)}{f({\rm lcm}_{1\le i\le n}\{b_i\})}. \end{align*}}\\ We are now in a position to show Theorem 1.2.\\ \\ {\it Proof of Theorem 1.2.} Let $n$ be a given positive integer. For any $0\leq i<j\leq k$, we have \begin{align*} {\rm gcd}(b+a(n+ic+cL_k),b+a(n+jc+cL_k))&={\rm gcd}(b+a(n+ic+cL_k),(j-i)ac)\\ &={\rm gcd}(b+a(n+ic),(j-i)ac)\\ &={\rm gcd}(b+a(n+ic),b+a(n+jc)), \end{align*} where the second equality holds because $(j-i)$ divides $L_k$, so that $(j-i)ac$ divides $acL_k$. Thus by Lemma 2.3, we obtain that $g_{k,f}(n+cL_k)=g_{k,f}(n)$ for any positive integer $n$. Therefore $g_{k,f}$ is periodic and $cL_k$ is its period. $\square$\\ Obviously, by Theorem 1.2, the arithmetic function $g_{k,\varphi}$ is periodic and $cL_k$ is its period. In the next section, we will provide a detailed $p$-adic analysis of the arithmetic function $g_{k, \varphi}$, which leads us to determine the exact value of the smallest period of $g_{k, \varphi}$. \section{\bf Local analysis of $g_{k,\varphi}$} Throughout this section, we let $a'=a/d$ and $ b'=b/d$ with $d=\gcd(a, b)$. Then $\gcd(a',b')=1$.
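Before the local analysis, the identities of the preceding section can be checked numerically. The following sketch (our addition, not part of the argument) verifies Hua's identity, Lemma 2.2 with $f=\varphi$, and the gcd invariance under a shift by $cL_k$ used in the proof of Theorem 1.2:

```python
from fractions import Fraction
from itertools import combinations
from math import gcd, lcm
import random

def phi(n):
    # Euler's totient function via trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (e - 1) * (p - 1)
        p += 1
    if m > 1:
        result *= m - 1
    return result

def hua_rhs(f, nums):
    # f(a_1)...f(a_n) * prod over subsets of size r >= 2 of f(gcd)^((-1)^(r-1))
    out = Fraction(1)
    for a_i in nums:
        out *= f(a_i)
    for r in range(2, len(nums) + 1):
        for sub in combinations(nums, r):
            term = Fraction(f(gcd(*sub)))
            out *= term if r % 2 == 1 else 1 / term
    return out

random.seed(1)
for _ in range(30):
    nums = [random.randint(1, 60) for _ in range(random.randint(2, 5))]
    assert hua_rhs(lambda m: m, nums) == lcm(*nums)   # Hua's identity
    assert hua_rhs(phi, nums) == phi(lcm(*nums))      # Lemma 2.2 with f = phi

# gcd invariance in the proof of Theorem 1.2: shifting n by c*L_k preserves
# the pairwise gcds, since gcd(x, y) = gcd(x, (j-i)*a*c) and (j-i) | L_k
k, a, b, c = 5, 3, 7, 2
L_k = lcm(*range(1, k + 1))
for n in range(1, 50):
    for i in range(k + 1):
        for j in range(i + 1, k + 1):
            x, y = b + a * (n + i * c), b + a * (n + j * c)
            xs = b + a * (n + i * c + c * L_k)
            ys = b + a * (n + j * c + c * L_k)
            assert gcd(x, y) == gcd(xs, ys)
```

Together with Lemma 2.3, the last invariance is exactly what makes $g_{k,f}$ periodic with period $cL_k$.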
Let $$ S_{k,a',b',c}(n):=\{b'+a'n,b'+a'(n+c),\ldots,b'+a'(n+kc)\} $$ be the set consisting of $k+1$ consecutive jumping terms with gap $c$ in the arithmetic progression $\{b'+a'm\}_{m\in \mathbb{N}}$. For any given prime number $p$, define $g_{p,k,\varphi}$ for any $n\in \mathbb {N}^*$ by $g_{p,k,\varphi}(n):=v_{p}(g_{k,\varphi}(n))$. Let $P_{k, \varphi}$ be the smallest period of $g_{k,\varphi}$. Then $g_{p,k,\varphi}$ is a periodic function for each prime $p$ and $P_{k,\varphi}$ is a period of $g_{p,k,\varphi}$. Let $P_{p, k, \varphi}$ be the smallest period of $g_{p,k,\varphi}$. Since \begin{align*} \varphi(b+a(n+ic))&=\varphi(d(b'+a'(n+ic)))=\varphi\big( \prod_{p|d}p^{v_p(b'+a'(n+ic))+v_p(d)}\prod_{p\nmid d}p^{v_p(b'+a'(n+ic))}\big)\\ &=\bigg(\prod_{p|d}p^{v_p(d)-1}(p-1)\bigg)\bigg (\prod_{p|d}p^{v_p(b'+a'(n+ic))}\bigg)\bigg(\prod_{p\nmid d}\varphi\big(p^{v_p(b'+a'(n+ic))}\big)\bigg)\\ &=\varphi(d)\bigg(\prod_{p|d}p^{v_p(b'+a'(n+ic))}\bigg)\bigg(\prod_{p\nmid d}\varphi\big(p^{v_p(b'+a'(n+ic))}\big)\bigg), \end{align*} we have \begin{align*} g_{k,\varphi}(n)&=\frac{\prod_{i=0}^k \varphi(b+a(n+ic))} {\varphi({\rm lcm}_{0\le i\le k} \{b+a(n+ic)\})}=\frac{\prod_{i=0}^k \varphi(d(b'+a'(n+ic)))} {\varphi(d\cdot {\rm lcm}_{0\le i\le k} \{b'+a'(n+ic)\})}\\ &= \frac{\prod_{i=0}^k\Big(\varphi(d)\Big(\prod_{p| d} p^{v_p(b'+a'(n+ic))}\Big)\Big(\prod_{p\nmid d}\varphi(p^{v_p(b'+a'(n+ic))})\Big)\Big)}{\varphi(d)\Big(\prod_{p| d}p^{\max_{0\le i\le k}\{v_p(b'+a'(n+ic))\}}\Big)\Big(\prod_{p\nmid d}\varphi(p^{\max_{0\le i\le k}\{v_p(b'+a'(n+ic))\}})\Big)}\\ &=(\varphi(d))^k\frac{\prod_{i=0}^k \Big(\Big(\prod_{p| d} p^{v_p(b'+a'(n+ic))}\Big)\Big(\prod_{p\nmid d}\varphi(p^{v_p(b'+a'(n+ic))})\Big)\Big)}{ \Big(\prod_{p| d}p^{\max_{0\le i\le k}\{v_p(b'+a'(n+ic))\}}\Big)\Big(\prod_{p\nmid d} \varphi(p^{\max_{0\le i\le k}\{v_p(b'+a'(n+ic))\}})\Big)}. \end{align*} Note that for any prime $q$, we have that for any positive integer $e$, $\varphi(q^e)=q^{e-1}(q-1)$. 
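The displayed computation rests on the factorization $\varphi(dm)=\varphi(d)\big(\prod_{p|d}p^{v_p(m)}\big)\big(\prod_{p\nmid d}\varphi(p^{v_p(m)})\big)$ applied to $m=b'+a'(n+ic)$. This identity can be checked numerically (an added sketch, not part of the proof):

```python
import random

def phi(n):
    # Euler's totient function via trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (e - 1) * (p - 1)
        p += 1
    if m > 1:
        result *= m - 1
    return result

def v(p, n):
    # p-adic valuation of the positive integer n
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def prime_factors(n):
    ps, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            ps.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

random.seed(2)
for _ in range(300):
    d, m = random.randint(1, 200), random.randint(1, 200)
    rhs = phi(d)
    for p in prime_factors(d):
        rhs *= p ** v(p, m)          # primes dividing d contribute p^{v_p(m)}
    for p in prime_factors(m) - prime_factors(d):
        rhs *= phi(p ** v(p, m))     # remaining primes contribute phi(p^{v_p(m)})
    assert phi(d * m) == rhs
```

For example, with $d=12$ and $m=18$ both sides equal $\varphi(216)=72$.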
So when computing $p$-adic valuation of $g_{k, \varphi}(n)$, we not only need to compute $v_p(\varphi (p^{\alpha }))$ for $\alpha \ge 2$, but also need to consider $p$-adic valuation of $q-1$ for those primes $q$ with $p|(q-1)$. By some computations, we obtain the following two equalities. If $p\nmid d$, then \begin{align*} g_{p,k,\varphi}(n)&= \sum_{ m\in S_{k,a',b',c}(n)}\max(v_p(m)-1,0)- \max(\max_{ m\in S_{k,a',b',c}(n)}\{v_p(m)-1\},0)\\ & +\sum_{{\rm prime}\ q:\ q\nmid d, q\neq p}\max(0,\#\{m\in S_{k,a',b',c}(n): q| m\}-1)\cdot v_p(q-1)+kv_p(\varphi(d))\\ &= kv_p(\varphi(d))+\sum_{e\ge 2}{\max} (0, \#\{m\in S_{k,a',b',c}(n):p^e| m\}-1)\\ & \ \ +\sum_{{\rm prime} \ q:\ q\nmid d, p|(q-1)}\max(0,\#\{m\in S_{k,a',b',c}(n): q| m\}-1)\cdot v_p(q-1). \ \ \ \ \ \ \ \ (3.1) \end{align*} If $p| d$, then \begin{align*} g_{p,k,\varphi}(n)&= kv_p(\varphi(d))+\sum_{i=0}^kv_p(b'+a'(n+ic))-\max_{0\le i\le k}\{v_p(b'+a'(n+ic))\}\\ & \ \ +\sum_{{\rm prime}\ q:\ q\nmid d, q\neq p}\max(0,\#\{m\in S_{k,a',b',c}(n): q| m\}-1)\cdot v_p(q-1)\\ &=kv_p(\varphi(d))+\sum_{e\ge 1}{\max} (0, \#\{m\in S_{k,a',b',c}(n):p^e| m\}-1)\\ & \ \ +\sum_{{\rm prime} \ q:\ q\nmid d, p|(q-1)}\max(0,\#\{m\in S_{k,a',b',c}(n): q| m\}-1)\cdot v_p(q-1). \ \ \ \ \ \ \ (3.2) \end{align*} In order to analyze the function $g_{p,k,\varphi}$ in detail, we need the following results.\\ \noindent{\bf Lemma 3.1.} {\it Let $e$ and $m$ be positive integers. If $p\nmid a'$, then any $p^e$ consecutive terms in the arithmetic progression $\{b'+a'(m+ic)\}_{i\in \mathbb{N}}$ are pairwise incongruent modulo $p^{v_p(c)+e}$. In particular, there is at most one term divisible by $p^e$ in $S_{k,a',b',c}(n)$ for $e>v_p(cL_k)$.} \begin{proof} Suppose that there are two integers $i,j$ such that $0<j-i\le p^e-1$ and $b'+(m+ic)a'\equiv b'+(m+jc)a' \pmod {p^{v_p(c)+e}}$. Then $p^e\mid (j-i)a'$. Since ${\rm gcd}(p,a')=1$, we have $p^{e}\mid (j-i)$. This is a contradiction. 
\end{proof} \noindent{\bf Lemma 3.2.} {\it Let $F$ be a positive rational-valued arithmetic function. For any prime $p$, define $F_p$ by $F_p(n):=v_p(F(n))$ for $n\in {\mathbb N}^*$. Then $F$ is periodic if and only if $F_p$ is periodic for each prime $p$ and ${\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}$ is finite, where $T_{p,F}$ is the smallest period of $F_p$. Furthermore, if $F$ is periodic, then the smallest period $T_F$ of $F$ is equal to ${\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}$.} \begin{proof} $\Rightarrow)$ Since $F$ is periodic and $T_F$ is its smallest period, we have $F(n+T_F)=F(n)$ for any $n\in \mathbb{N}^*$, and hence $F_p(n+T_F)=v_{p}(F(n+T_F))=v_{p}(F(n))=F_p(n)$. In other words, $F_p$ is periodic and $T_F$ is a period of $F_p$ for every prime $p$. So we have ${\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}| T_{F}$ and ${\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}$ is finite. $\Leftarrow)$ For any $n\in \mathbb{N}^*$ and each prime $q$, we have $v_{q}(F(n+{\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}))=v_{q}(F(n))$. Thus $F(n+{\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\})=F(n)$ for any $n\in \mathbb{N}^*$. So $F$ is periodic and ${\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}$ is a period of it. Hence $T_{F}$ divides ${\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}$. From the above discussion, we immediately derive that $T_{F}={\rm lcm}_{{\rm prime} \ p}\{T_{p,F}\}$ if $F$ is periodic. \end{proof} For any prime $p\ge cL_k+1$, we have by Lemma 3.1 that there is at most one term divisible by $p$ in $S_{k,a',b',c}(n)$, and at most one term divisible by any prime $q$ satisfying $p| (q-1)$ (such $q$ satisfies $q\ge p+1>cL_k$). Thus for any prime $p\ge cL_k+1$, we deduce from (3.1) and (3.2) that $g_{p,k,\varphi}(n)=kv_p(\varphi(d))$ for every positive integer $n$. Namely, we have $P_{p,k,\varphi}=1$ for each prime $p$ such that $p\ge cL_k+1$.
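Lemma 3.2 can be illustrated numerically on $g_{3,\varphi}$ with $a=c=1$ and $b=0$ (a sketch we add; here $cL_3=6$ is a period by Theorem 1.2, and Theorem 1.3 predicts the smallest period $Q_{3,1,1}=6/\eta_{2,3,1,1}=3$, since $\eta_{2,3,1,1}=2$ in this case):

```python
from fractions import Fraction
from math import lcm, prod

def phi(n):
    # Euler's totient function via trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            result *= p ** (e - 1) * (p - 1)
        p += 1
    if m > 1:
        result *= m - 1
    return result

def g(n, k=3):
    # g_{k, phi}(n) with a = 1, b = 0, c = 1
    terms = [n + i for i in range(k + 1)]
    return Fraction(prod(phi(t) for t in terms), phi(lcm(*terms)))

def vp(p, q):
    # p-adic valuation of a positive rational number q
    e, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        e += 1
    while den % p == 0:
        den //= p
        e -= 1
    return e

def smallest_period(seq, known_period, N=300):
    # brute force over the divisors of a known period
    return min(T for T in range(1, known_period + 1)
               if known_period % T == 0
               and all(seq(n + T) == seq(n) for n in range(1, N)))

P = smallest_period(g, 6)                        # P_{3, phi}
P2 = smallest_period(lambda n: vp(2, g(n)), 6)   # smallest period of v_2(g)
P3 = smallest_period(lambda n: vp(3, g(n)), 6)   # smallest period of v_3(g)
assert P == lcm(P2, P3) == 3
```

The sequence $g_{3,\varphi}(1),g_{3,\varphi}(2),\ldots$ is $1,1,2,1,1,2,\ldots$; only the valuation at $p=2$ contributes, in agreement with Lemmas 3.2 and 3.3.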
Thus by Lemma 3.2, we immediately have the following.\\ \noindent{\bf Lemma 3.3.} {\it We have} $$ P_{k,\varphi}={\rm lcm}_{{\rm prime} \ p\le cL_k} \{P_{p,k,\varphi}\}. $$\\ In what follows, it suffices to compute $P_{p,k,\varphi}$ for every prime $p$ with $p\le cL_k$. First we simplify $g_{p,k,\varphi}$ for such $p$. For any prime $q$ satisfying $q\nmid cL_k$, we obtain by Lemma 3.1 that there is at most one term divisible by $q$ in $S_{k,a',b',c}(n)$. On the other hand, for any prime $q$ satisfying $q| a'$, we have $\gcd (q,b')=1$ since ${\rm gcd}(a',b')=1$. Thus for $0\leq i\leq k$, we have that $\gcd(q, b'+a'(n+ic))=1$ for all $n\in \mathbb{N}^*$. So there is no term divisible by any prime factor $q$ of $a'$ in $S_{k,a',b',c}(n)$. Thus from (3.1) and (3.2), we derive the following equality:\\ $$g_{p,k,\varphi}(n)=kv_p(\varphi(d))+\sum_{e=1}^{v_p(cL_k)}f_{e}(n)+ \sum_{{\rm prime} \ q: \ q| cL_k \atop p|(q-1), \ q\nmid a}h_q(n), \eqno (3.3) $$ where \begin{align*} f_{e}(n):={\left\{ \begin{array}{rl} 0, \quad&\text{if} \ p\nmid d\ \text{and}\ e= 1,\\ \max (0, \#\{m\in S_{k,a',b',c}(n):p^e| m\}-1), \quad&\text{otherwise} \end{array} \right.} \end{align*} and $$ h_q(n):=\max(0,\#\{m\in S_{k,a',b',c}(n): q| m\}-1)\cdot v_p(q-1).$$ For any positive integer $n$, it is easy to check that $f_{e}(n+p^{v_p(cL_k)})=f_{e}(n)$ for each $1\le e\le v_p(cL_k)$ and $h_q(n+q)=h_q(n)$ for each prime $q$ such that $q\nmid a, \ q| cL_k$ and $p| (q-1)$. Consequently, we obtain that $p^{v_p(cL_k)}\prod_{{\rm prime}\ q:\ q| cL_k \atop q\nmid a, \ p| (q-1)}q$ is a period of the function $g_{p,k,\varphi}$. To get the smallest period of $g_{p,k,\varphi}$ for each prime $p\le cL_k$, we need a more detailed $p$-adic analysis of $g_{p,k,\varphi}$. We divide it into the following four cases.\\ \noindent{\bf Lemma 3.4.} {\it Let $p$ be a prime such that $p\le cL_k$ and $p\nmid cL_k$.
Then \begin{align*} P_{p,k,\varphi}= \prod _{{\rm prime} \ q :\ q| c \atop q\nmid a, \ p| (q-1)} q. \end{align*} } \begin{proof} Since $p\le cL_k$ and $p\nmid cL_k$, we have $k<p\le cL_k$ and $v_p(cL_k)=0$. Hence we have that $g_{p,k,\varphi}(n)=kv_p(\varphi(d))+\sum_{{\rm prime}\ q: \ q| cL_k \atop p| (q-1), \ q\nmid a}h_q(n)$ by (3.3). If there is no prime $q$ satisfying $q| cL_k$, $p| (q-1)$ and $q\nmid a$, then we have $g_{p,k,\varphi}(n)=kv_p(\varphi(d))$ for any positive integer $n$, and hence $P_{p,k,\varphi}=1$ for such primes $p$. If there is a prime $q$ satisfying $q| cL_k$, $p| (q-1)$ and $q\nmid a$, then we must have $q| c$, since such $q$ must satisfy $q\ge p+1>k$. So by the argument before Lemma 3.4, we have that $A:=\prod_{ \ {\rm prime} \ q : \ q| c\atop q\nmid a, \ p| (q-1)}q$ is a period of $g_{p,k,\varphi}$. Now it remains to prove that $A$ is the smallest period of $g_{p,k,\varphi}$. For any prime factor $q$ of $A$, we can choose a positive integer $n_0$ such that $v_{q}(b'+a'n_0)\ge 1$ because $q\nmid a$. Since $q>k$ and $q| c$, we have that $v_q(b'+a'(n_0+ic))\ge 1$ and $v_q(b'+a'(n_0+ic+A/q))=v_q(a'A/q)=0$ for each $0\le i\le k$. Hence there is no term divisible by $q$ in $S_{k,a',b',c}(n_0+A/q)$. Thus $h_q(n_0)=kv_p(q-1)\ge k>0=h_q(n_0+A/q)$. On the other hand, $h_{q'}(n_0)=h_{q'}(n_0+A/q)$ for any other prime factors $q'\ne q$ of $A$. It then follows that $g_{p,k,\varphi}(n_0)\ne g_{p,k,\varphi}(n_0+A/q)$. Therefore $A$ is the smallest period of $g_{p,k,\varphi}$. This completes the proof of Lemma 3.4. \end{proof} \noindent{\bf Lemma 3.5.} {\it Let $p$ be a prime such that $p| cL_k$ and $p|a'$. Then \begin{align*} P_{p,k,\varphi}=\bigg(\prod_{{\rm prime} \ q:\ q\nmid ac, \ q| L_k \atop p| (q-1), \ k+1\not\equiv 0\pmod {q}} \ q \bigg)\bigg(\prod_{{\rm prime}\ q:\ q\nmid a,\atop q| c,\ p|(q-1)}q\bigg). \end{align*} } \begin{proof} For convenience, we let $A$ denote the number on the right side of the equality in Lemma 3.5. 
First, we prove that $A$ is a period of $g_{p,k,\varphi}$. Since there is no term divisible by $p$ in $S_{k,a',b',c}(n)$ if $p| a'$, we have that $f_e(n)=0$ for any positive integer $n$ and for each $1\le e\le v_p(cL_k)$. Hence we have $g_{p,k,\varphi}(n)=kv_p(\varphi(d))+\sum_{{\rm prime}\ q:\ q| cL_k \atop p| (q-1), \ q\nmid a}h_q(n)$. If there is no prime $q$ such that $q| cL_k$, $p| (q-1)$ and $q\nmid a$, then $g_{p,k,\varphi}(n)=kv_p(\varphi(d))$ for any positive integer $n$, and hence $P_{p,k,\varphi}=1$ for such primes $p$. If $q\nmid ac, q|L_k$ and $k+1\equiv0\pmod{q}$, then any $q$ consecutive terms in the arithmetic progression $\{b'+a'(m+ic)\}_{i\in \mathbb{N}}$ are pairwise incongruent modulo $q$ by Lemma 3.1. Therefore, we have $h_q(n)=h_q(n+1)=v_p(q-1)(\frac{k+1}{q}-1)$ for any positive integer $n$. Namely, 1 is a period of $h_q$ for such primes $q$. Since $q$ is a period of $h_q$ for any other primes $q$ such that $q| cL_k, q\nmid a$ and $ p| (q-1)$, we have that $A$ is a period of the function $g_{p,k,\varphi}$. To prove that $A$ is the smallest period of $g_{p,k,\varphi}$, it is enough to show that $A/q$ is not the period of $g_{p,k,\varphi}$ for every prime factor $q$ of $A$. We divide the prime factors of $A$ into the following two cases. {\sc Case 1.} $q$ is a prime factor of $A$ such that $q\nmid ac$, $q| L_k$, $p| (q-1)$ and $k+1\not\equiv0\pmod{q}$. To prove $A/q$ is not the period of $g_{p,k,\varphi}$, it suffices to prove that $A/q$ is not the period of the function $h_q$ since $A/q$ is a period of $h_{q'}$ for any other primes $q'\ne q$ such that $q'| cL_k, p| (q'-1)$ and $q'\nmid a$. Since $\gcd(A/q, q)=1$, there exists a positive integer $r_0$ such that $r_0A/q\equiv 1\pmod{q}$. We pick a positive integer $n_0$ so that $v_{q}(b'+a'n_0)\ge 1$ since $q\nmid a$. 
So we have that the terms divisible by $q$ in the arithmetic progression $\{b'+a'(n_0+ic)\}_{i\in \mathbb{N}}$ must be of the form $b'+a'(n_0+tcq)$ for some $t\in \mathbb{N}$, and there are at least two terms divisible by $q$ in $S_{k,a',b',c}(n_0)$ since $q|L_k$. Comparing $S_{k,a',b',c}(n_0)$ with $S_{k,a',b',c}(n_0+r_0A/q)$, we obtain that $b'+a'(n_0+j)\equiv b'+a'(n_0+r_0A/q+j-1) \pmod{q} \ {\rm for \ each} \ 1\le j\le k$ while $b'+a'(n_0+r_0A/q+k)\equiv b'+a'(n_0+k+1)\not\equiv b'+a'n_0\equiv 0\pmod{q}$. Thus the number of terms divisible by $q$ in $S_{k,a',b',c}(n_0+r_0A/q)$ equals the number of terms divisible by $q$ in $S_{k,a',b',c}(n_0)$ minus one, which means that $h_q(n_0+r_0A/q)= h_q(n_0)-v_p(q-1)$. Therefore, $A/q$ is not the period of $g_{p,k,\varphi}$ in this case. {\sc Case 2.} $q$ is a prime factor of $A$ satisfying $q\nmid a$, $q| c$, and $p| (q-1)$. As above, we select two positive integers $r_0$ and $n_0$ such that $r_0A/q\equiv1\pmod{q}$ and $v_{q}(b'+a'n_0)\ge 1$. So we obtain that $v_q(b'+a'(n_0+ic))\ge 1$ and $v_q(b'+a'(n_0+r_0A/q+ic))=v_q(a'r_0A/q)=0$ for each $0\le i\le k$. In other words, all the $k+1$ terms are divisible by $q$ in $S_{k,a',b',c}(n_0)$ while no term is divisible by $q$ in $S_{k,a',b',c}(n_0+r_0A/q)$. Thus $h_q(n_0)=kv_p(q-1)\ge k>0=h_q(n_0+r_0A/q)$. It follows immediately that $A/q$ is not the period of $g_{p,k,\varphi}$ in this case. So $A$ is the smallest period of $g_{p,k,\varphi}$. The proof of Lemma 3.5 is complete. \end{proof} \noindent{\bf Lemma 3.6.} {\it Let $p$ be a prime such that $p| cL_k$, $p\nmid a'$ and $p\nmid d$. 
Then \begin{align*} P_{p,k,\varphi}= p^{e(p,k)}\bigg(\prod_{{\rm prime} \ q:\ q\nmid ac, \ q| L_k \atop p| (q-1), \ k+1\not\equiv 0\pmod {q}} \ q \bigg)\bigg( \prod_{{\rm prime}\ q: \ q\nmid a,\atop q| c, \ p| (q-1)}q\bigg), \end{align*} where \begin{align*} e(p,k):={\left\{ \begin{array}{rl} 0, \quad&\text{if } v_p(cL_k)=1,\\ v_p(c), \quad&\text{if } v_{p}(k+1)\geq v_p(L_k) \text{ and } v_p(cL_k)\ge 2,\\ v_p(cL_k), \quad&\text{if } v_p(k+1)< v_p(L_k) \text{ and } v_p(cL_k)\ge 2. \end{array} \right.} \end{align*} } \begin{proof} From (3.3), we get that $$g_{p,k,\varphi}(n)=kv_p(\varphi(d))+\sum_{e= 2}^{v_p(cL_k)}f_e(n)+ \sum_{{\rm prime} \ q: \ q| cL_k \atop p| (q-1), \ q\nmid a}h_q(n). \eqno (3.4)$$ Let $A$ denote the number $p^{e(p,k)}\big(\prod_{{\rm prime} \ q:\ q\nmid ac, \ q| L_k \atop p| (q-1), \ k+1\not\equiv 0\pmod {q}} q\big)\big(\prod_{{\rm prime}\ q: \ q\nmid a\atop q| c, \ p| (q-1)}q\big)$. We distinguish the following three cases. {\sc Case 1.} $v_p(cL_k)= 1$. Since $v_p(cL_k)= 1$, we have $$ g_{p,k,\varphi}(n)=kv_p(\varphi(d))+\sum_{{\rm prime}\ q: \ q| cL_k \atop p| (q-1), \ q\nmid a}h_q(n) $$ for every positive integer $n$ by (3.4). The process of proving that $A$ is the smallest period of $g_{p,k,\varphi}$ is the same as in the proof of Lemma 3.5; one can easily check it. {\sc Case 2.} $v_p(k+1)\ge v_p(L_k)$ and $v_p(cL_k)\ge 2$. We consider the following two subcases. {\sc Subcase 2.1.} $p>k$, $v_p(k+1)\ge v_p(L_k)$ and $ v_p(cL_k)\ge 2$. In this case, we have $v_p(cL_k)=v_p(c)\ge 2$. So we obtain that $p^{v_p(c)}$ is a period of $f_e$ for each $2\le e\le v_p(cL_k)=v_p(c)$. By the same method as in the proof of Lemma 3.5, we can derive that $A/p^{v_p(c)}$ is a period of $\sum_{{\rm prime} \ q:\ q| cL_k \atop p| (q-1), \ q\nmid a}h_q(n)$, and hence by (3.4) $A$ is a period of $g_{p,k,\varphi}$. Now it suffices to prove that $A/P$ is not the period of $g_{p,k,\varphi}$ for any prime factor $P$ of $A$.
For the prime $p$, we have by (3.4) that $A/p$ is a period of $f_e$ for each $2\le e\le v_p(c)-1$ and is also a period of $h_q$ for each prime $q$ such that $q\nmid a, \ q| cL_k$ and $ p| (q-1)$. So it is enough to prove that $A/p$ is not the period of $f_{v_p(c)}$. Since $p\nmid a'$, we can choose a positive integer $n_0$ such that $v_p(b'+a'n_0)=v_p(c).$ It is easy to see that $p^{v_p(c)}| b'+a'(n_0+ic)$ and $p^{v_p(c)}\nmid b'+a'(n_0+A/p+ic)$ for each $0\le i\le k$. Thus comparing the two sets $S_{k,a',b',c}(n_0)=\{ b'+a'(n_{0}+ic)\}_{0\le i\le k}$ and $S_{k,a',b',c}(n_0+A/p)=\{b'+a'(n_0+A/p+ic)\}_{0\le i\le k}$, we obtain that $$f_{v_p(c)}(n_0)= \max (0, \#\{m\in S_{k,a',b',c}(n_0):p^{v_p(c)}| m\}-1)=k,$$ \begin{align*} f_{v_p(c)}(n_0+A/p)&= \max (0, \#\{m\in S_{k,a',b',c}(n_0+A/p):p^{v_p(c)}| m\}-1)=0. \end{align*} Therefore, $A/p$ is not the period of $g_{p,k,\varphi}$. For any prime factor $q$ of $A$ such that $q| cL_k$, $p| (q-1)$ and $q\nmid a$, it is easy to see that $A/q$ is a period of $\sum_{e= 2}^{v_p(cL_k)}f_e(n)$ and $ h_{q'}(n)$ for each prime factor $q'\ne q$ of $A$ satisfying $ q'|cL_k, p|(q'-1)$ and $q'\nmid a$. Similarly to the proof of Lemma 3.5, we can deduce that $A/q$ is not the period of $h_q$, and hence $A/q$ is not the period of $g_{p,k,\varphi}$. Therefore, $A$ is the smallest period of $g_{p,k,\varphi}$ in this subcase. {\sc Subcase 2.2.} $p\le k$, $v_p(k+1)\ge v_p(L_k)$ and $ v_p(cL_k)\ge 2$. To prove that $A$ is a period of $g_{p,k,\varphi}$, it suffices to prove that $p^{v_p(c)}$ is a period of $f_e$ for each $1\le e\le v_p(cL_k)$ by the argument in Subcase 2.1 of this proof. For any given positive integer $n$, comparing the two sets $S_{k,a',b',c}(n)=\{ b'+a'(n+ic)\}_{0\le i\le k}$ and $S_{k,a',b',c}(n+c)=\{ b'+a'(n+ic)\}_{1\le i\le k+1}$, we find that their distinct terms are $b'+a'n$ and $b'+a'(n+(k+1)c)$.
From $v_p(k+1)\ge v_p(L_k)$ we deduce that $$b'+a'n\equiv b'+a'(n+(k+1)c)\pmod {p^{v_p(cL_k)}}.$$ Therefore we obtain that $f_e(n)=f_e(n+c)$ for each $e\in \{2,\ldots, v_p(cL_k)\}$. Since $\gcd(c/p^{v_p(c)},p)=1$, we can always find two integers $t,t_1$ such that $tc/p^{v_p(c)}=t_1{p^{v_p(L_k)}}+1$. Note that $p^{v_p(cL_k)}$ is a period of $f_e$ for each $2\le e\le v_p(cL_k)$. Therefore, we have $f_e(n+p^{v_p(c)})=f_e(n +p^{v_p(c)}+t_1p^{v_p(cL_k)})=f_e\big(n+p^{v_p(c)}(t_1p^{v_p(L_k)}+1)\big)=f_e(n+tc)=f_e(n)$ for each positive integer $n$ and each $2\le e\le v_p(cL_k)$. Thus $A$ is a period of $g_{p,k,\varphi}$ as required. Now we only need to prove that $A/P$ is not the period of $g_{p,k,\varphi}$ for any prime factor $P$ of $A$. For any prime factor $q$ of $A$ such that $q| cL_k$, $p| (q-1)$ and $q\nmid a$, the proof is similar to that of Subcase 2.1. If $v_p(c)=0$, then $p$ is not a prime factor of $A$, and the proof of this case is complete. In the following, we need to prove that $A/p$ is not the period of $g_{p,k,\varphi}$ if $v_p(c)\ge 1$. Since $A/p$ is a period of $h_q$ for each $q$ such that $q\nmid a, \ q| cL_k$ and $ p| (q-1)$, it is enough to prove that $A/p$ is not the period of the function $\sum_{e=2}^{v_p(cL_k)}f_{e}(n)$. If $v_p(c)\ge 2$, we choose $n_0\in \mathbb{N}^*$ such that $v_p(b'+a'n_0)=v_p(c)$. Comparing $S_{k,a',b',c}(n_0)=\{ b'+a'(n_{0}+ic)\}_{0\le i\le k}$ with $ S_{k,a',b',c}(n_0+A/p)=\{b'+a'(n_0+A/p+ic)\}_{0\le i\le k}$, we obtain that each term of $S_{k,a',b',c}(n_0)$ is divisible by $p^{v_p(c)}$, while there is no term divisible by $p^{v_p(c)}$ in $S_{k,a',b',c}(n_0+A/p)$ since $v_p(b'+a'(n_0+A/p+ic))=\min\big( v_p(b'+a'(n_0+ic)), v_p(a'A/p)\big)=v_p(c)-1$ for each $0\le i\le k$. So $$\sum_{e=2}^{v_p(cL_k)}f_{e}(n_0)\ge k(v_p(c)-1)>k(v_p(c)-2) =\sum_{e=2}^{v_p(cL_k)}f_{e}(n_0+A/p).$$ If $v_p(c)=1$, then $v_p(A/p)=0$.
Choosing $n_0\in \mathbb{N}^*$ such that $v_p(b'+a'n_0)=v_p(cL_k)$, we find that there are at least two terms, $b'+a'n_0$ and $b'+a'(n_0+p^{v_p(L_k)}c)$, divisible by $p^{v_p(cL_k)}$ in $S_{k,a',b',c}(n_0)$, but no term of $S_{k,a',b',c}(n_0+A/p)$ is divisible by $p$ since $v_p(b'+a'(n_0+A/p+ic))=0$ for all $0\le i\le k$. Therefore, we have $$\sum_{e=2}^{v_p(cL_k)}f_{e}(n_0)\ge v_p(cL_k)-1>0 =\sum_{e=2}^{v_p(cL_k)}f_{e}(n_0+A/p).$$ Thus $A/p$ is not the period of $g_{p,k,\varphi}$ in this case. {\sc Case 3.} $p\le k$, $v_p(k+1)< v_p(L_k)$ and $v_p(cL_k)\ge 2$. By the discussion before Lemma 3.4, it is easy to see that $A$ is a period of $g_{p,k,\varphi}$. As above, it suffices to prove that $A/P$ is not the period of $g_{p,k,\varphi}$ for any prime factor $P$ of $A$ in the following. By a similar argument as in Subcase 2.1, we now only need to show that $A/p$ is not the period of $f_{v_p(cL_k)}$. Since $v_p(A/p)=v_p(cL_k)-1$, we can select a positive integer $r_0$ such that $r_0A/p\equiv p^{v_p(cL_k)-1}\pmod{p^{v_p(cL_k)}}$. In the following, we prove that $p^{v_p(cL_k)-1}$ is not the period of $f_{v_p(cL_k)}$, from which we can deduce that $A/p$ is not the period of $g_{p,k,\varphi}$. Since $v_{p}(k+1)<v_p(L_k)$, we can always suppose that $k+1\equiv r\pmod {p^{v_p(L_k)}} \ {\rm for \ some}\ 1\leq r\leq p^{v_p(L_k)}-1.$ We distinguish the following two subcases. {\sc Subcase 3.1.} $1\le r\le p^{v_p(L_k)}-p^{v_p(L_k)-1}$. Choose a positive integer $n_0$ such that $v_p(b'+a'n_0)\ge v_p(cL_k)$. Compare the number of terms divisible by $p^{v_p(cL_k)}$ in the two sets $S_{k,a',b',c}(n_0)=\{b'+a'(n_0+ic)\}_{0\le i\le k}$ and $ S_{k,a',b',c}(n_0+p^{v_p(L_k)-1}c)=\{b'+a'(n_0+(p^{v_p(L_k)-1}+i)c)\}_{0\le i \le k}$.
Since $\{b'+a'(n_0+p^{v_p(L_k)-1}c),\ldots, b'+a'(n_0+kc)\}$ is the intersection of $S_{k,a',b',c}(n_0)$ and $S_{k,a',b',c}(n_0+p^{v_p(L_k)-1}c)$, it suffices to compare the set $\{b'+a'n_0,\ldots,b'+a'(n_0+(p^{v_p(L_k)-1}-1)c)\}$ with the set $\{b'+a'(n_{0}+(k+1)c),\ldots,b'+a'(n_0+(k+p^{v_p(L_k)-1})c)\}$. By Lemma 3.1, we know that the terms divisible by $p^{v_p(cL_k)}$ in the arithmetic progression $\{b'+a'(n_0+ic)\}_{i\in \mathbb{N}}$ are of the form $b'+a'(n_0+tp^{v_p(L_k)}c), \ t\in \mathbb{N}$. Since $k+1\equiv r\pmod {p^{v_p(L_k)}}$ and $1\leq r\leq p^{v_p(L_k)}-p^{v_p(L_k)-1}$, we have $k+j\equiv r+j-1\not\equiv 0 \pmod {p^{v_p(L_k)}}$ for all $1\leq j\leq p^{v_p(L_k)-1}$. Hence $p^{v_p(cL_k)}\nmid (b'+a'(n_0+(k+j)c))$ for all $1\leq j\leq p^{v_p(L_k)-1}$. On the other hand, $b'+a'n_0$ is the only term in the set $\{b'+a'n_0,b'+a'(n_{0}+c),\ldots,b'+a'(n_{0}+(p^{v_p(L_k)-1}-1)c)\}$ which is divisible by $p^{v_p(cL_k)}$. Therefore we have $$ f_{v_p(cL_k)}(n_0+p^{v_p(L_k)-1}c)=f_{v_p(cL_k)}(n_0)-1.$$ {\sc Subcase 3.2.} $p^{v_p(L_k)}-p^{v_p(L_k)-1}<r\leq p^{v_p(L_k)}-1$. Pick a positive integer $n_0$ such that $v_p(b'+a'(n_0+(p^{v_p(L_k)-1}-1)c))\ge v_p(cL_k)$. Then the terms divisible by $p^{v_p(cL_k)}$ in the arithmetic progression $\{b'+a'(n_0+ic)\}_{i\in \mathbb{N}}$ should be of the form $b'+a'(n_0+(p^{v_p(L_k)-1}-1+tp^{v_p(L_k)})c)$, where $t\in \mathbb{N}$. As in the discussion of Subcase 3.1, it is sufficient to compare $\{b'+a'n_0,\ldots,b'+a'(n_0+(p^{v_p(L_k)-1}-1)c)\}$ with $\{b'+a'(n_0+(k+1)c),\ldots,b'+a'(n_0+(k+p^{v_p(L_k)-1})c)\}$. By comparison, we obtain that $p^{v_p(cL_k)}\nmid (b'+a'(n_0+(k+j)c))$ for all $1\leq j\leq p^{v_p(L_k)-1}$, while the term $b'+a'(n_0+(p^{v_p(L_k)-1}-1)c)$ is the only term divisible by $p^{v_p(cL_k)}$ in the set $\{b'+a'n_0,\ldots,b'+a'(n_0+(p^{v_p(L_k)-1}-1)c)\}$. Hence we have $f_{v_p(cL_k)}(n_0+p^{v_p(L_k)-1}c)=f_{v_p(cL_k)}(n_0)-1$.
From the argument in the above two subcases, we deduce that $p^{v_p(L_k)-1}c$ is not the period of $f_{v_p(cL_k)}$, which shows that $A/p$ is not the period of $g_{p,k,\varphi}$ in Case 3. Thus $A$ is the smallest period of $g_{p,k,\varphi}$ as desired. This completes the proof of Lemma 3.6. \end{proof} \noindent{\bf Lemma 3.7.} {\it Let $p$ be a prime such that $p| cL_k$, $p\nmid a'$ and $p| d$. Then \begin{align*} P_{p,k,\varphi}= p^{e(p,k)}\bigg(\prod_{{\rm prime} \ q:\ q\nmid ac, \ q| L_k\atop p| (q-1), \ k+1\not\equiv 0\pmod {q}} \ q\bigg)\bigg(\prod_{{\rm prime} \ q:\ q\nmid a,\atop q| c, \ p| (q-1)}q\bigg), \end{align*} where \begin{align*} e(p,k):={\left\{ \begin{array}{rl} v_p(c), \quad&\text{if} \ v_{p}(k+1)\geq v_p(L_k),\\ v_p(cL_k), \quad&\text{if} \ v_p(k+1)< v_p(L_k). \end{array} \right.} \end{align*} } \begin{proof} Similarly to the proof of Lemma 3.6, it is enough to show that $p^{e(p,k)}$ is the smallest period of $\sum_{e=1}^{v_p(cL_k)}f_e(n)$ by (3.3). We divide the proof into the following two cases. {\sc Case 1.} $v_p(k+1)\ge v_p(L_k)$. As in the proof of Subcase 2.2 in Lemma 3.6, since $b'+a'n\equiv b'+a'(n+(k+1)c)\pmod{p^{v_p(cL_k)}}$, we can obtain that $p^{v_p(c)}$ is a period of $\sum_{e=1}^{v_p(cL_k)}f_e(n)$. If $v_p(c)=0$, the proof is complete. If $v_p(c)\ge 1$, then choosing a positive integer $n_0$ such that $v_p(b'+a'n_0)=v_p(c)$, we can show that $p^{v_p(c)-1}$ is not the period of $\sum_{e=1}^{v_p(cL_k)}f_e(n)$ using a similar method as in the proof of Case 2 in Lemma 3.6. {\sc Case 2.} $v_p(k+1)< v_p(L_k)$. In the same way as in the proof of Case 3 of Lemma 3.6, one can easily check that $p^{v_p(cL_k)-1}$ is not the period of $\sum_{e=1}^{v_p(cL_k)}f_e(n)$. The proof of Lemma 3.7 is complete. \end{proof} \section{\bf Proof of Theorem 1.3 and examples} In this section, we first use the results presented in the previous section to prove Theorem 1.3.
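As a numerical sanity check (ours, not part of the original argument), the periodicity guaranteed by Theorem 1.2 and the smallest period itself can be verified by brute force in a small case. The Python sketch below takes $a=c=1$, $b=0$ and $k=8$, and assumes, as in the preceding sections, that $L_k$ denotes ${\rm lcm}(1,\ldots,k)$, so that $cL_k={\rm lcm}(1,\ldots,8)=840$:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def phi(n):
    """Euler's totient function, computed by trial division."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def lcm(a, b):
    return a * b // gcd(a, b)

def g(n, k=8, a=1, b=0, c=1):
    """g_{k,phi}(n) = prod_i phi(b+a(n+ic)) / phi(lcm_i {b+a(n+ic)})."""
    terms = [b + a * (n + i * c) for i in range(k + 1)]
    prod_phi = 1
    for t in terms:
        prod_phi *= phi(t)
    return Fraction(prod_phi, phi(reduce(lcm, terms)))

cLk = reduce(lcm, range(1, 9))                # c*L_8 = lcm(1,...,8) = 840
vals = [g(n) for n in range(1, 2 * cLk + 1)]  # g on two full periods
assert all(vals[n] == vals[n + cLk] for n in range(cLk))  # Theorem 1.2: period divides c*L_k

# The smallest period is the least divisor d of c*L_k with g(n) = g(n+d) for all n.
divisors = [d for d in range(1, cLk + 1) if cLk % d == 0]
P = next(d for d in divisors if all(vals[n] == vals[n + d] for n in range(cLk)))
print(P)
```

By Example 4.2 below (applied with $p=3$ and $\alpha=2$, so $k=3^2-1=8$), the printed value should be $cL_k/3^{v_3(L_k)}=280$.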
\\ \\ {\it Proof of Theorem 1.3.} By Theorem 1.2, we know that $g_{k, \varphi}$ is periodic and $P_{k, \varphi}|cL_k$. To determine the exact value of $P_{k, \varphi}$, it is sufficient to determine the $p$-adic valuation of $P_{k, \varphi}$ for each prime $p$. By Lemma 3.3, we have $P_{k, \varphi}={\rm lcm}_{{\rm prime}\ p\le cL_k} \{P_{p, k,\varphi}\}$. So it is enough to compute $\max_{{\rm prime}\ q\le cL_k} \{v_p(P_{q, k, \varphi})\}$ for each prime $p$. We consider the following four cases. {\sc Case 1.} $p\nmid cL_k$. Since $P_{k, \varphi}|cL_k$, it is clear that $v_p(P_{k, \varphi})=v_p(cL_k)=0$. {\sc Case 2.} $p| cL_k$ and $p|a'$. Observe from Lemmas 3.4-3.7 that $v_p(P_{q, k, \varphi})=0$ for each prime $q\le cL_k$. So we have $v_p(P_{k, \varphi})=0$. {\sc Case 3.} $p=2$. From the discussion in Case 1 and Case 2, we know that $v_2(P_{k,\varphi})=0$ if $2|cL_k$ and $2| a'$ or if $2\nmid cL_k$. It remains to consider the case $2|cL_k$ and $2\nmid a'$. By Lemmas 3.4-3.7, we know that $v_2(P_{p, k, \varphi})=0$ for all odd primes $p$. So we only need to compute $v_2(P_{2,k,\varphi})$. We now distinguish the following four subcases. {\sc Subcase 3.1.} $2\nmid a$ and $v_2(cL_k)=1$. In this case, by Lemma 3.6, one has $v_2(P_{k, \varphi})=0=v_2(cL_k)-1$. {\sc Subcase 3.2.} $2\nmid a$, $v_2(cL_k)\ge 2$ and $v_2(k+1)\ge v_2(L_k)=1$, or $2\nmid a', 2|d$ and $v_2(k+1)\ge v_2(L_k)=1$. Since $v_2(k+1)\ge v_2(L_k)$ and $v_2(L_k)=1$, we get $k=3$. Thus by Lemmas 3.6 and 3.7, we have that if $k=3$, $2\nmid a$ and $v_2(c)=v_2(cL_k)-v_2(L_k)\ge 2-1=1$, or if $k=3$, $2\nmid a'$ and $2|d$, then $v_2(P_{k, \varphi})=v_2(c)=v_2(cL_k)-1$. {\sc Subcase 3.3.} $v_2(k+1)\ge v_2(L_k)\ge 2$. Using Lemmas 3.6 and 3.7, we obtain that $v_2(P_{k, \varphi})=v_2(c)=v_2(cL_k)-v_2(L_k)$. {\sc Subcase 3.4.} $2\nmid a, v_2(L_k)=0$ and $v_2(cL_k)\ge 2$, or $2\nmid a', 2|d$ and $v_2(L_k)=0$, or $2\nmid a$, $v_2(cL_k)\ge 2$ and $v_2(k+1)<v_2(L_k)$, or $2\nmid a'$, $2|d$ and $v_2(k+1)<v_2(L_k)$. 
If $2\nmid a, v_2(L_k)=0$ and $v_2(cL_k)\ge 2$, or if $2\nmid a', 2|d$ and $v_2(L_k)=0$, we get $v_2(P_{2, k,\varphi})=v_2(c)=v_2(cL_k)$. So we have $v_2(P_{k, \varphi})=v_2(cL_k)$ in this case. Combining all the above information on $v_2(P_{k,\varphi})$, we have \begin{align*} v_2(P_{k, \varphi})={\left\{ \begin{array}{rl} 0, &\text{if} \ 2| a', \\ v_2(cL_k)-v_2(L_k), &\text{if} \ 2\nmid a' \ {\rm and} \ v_2(k+1)\ge v_2(L_k)\ge 2, \\ v_2(cL_k)-1, &\text{if} \ 2\nmid a \ \text{and}\ v_2(cL_k)=1, \text{or} \ k=3, 2\nmid a\ {\rm and} \ 2|c,\\ &\quad {\rm or} \ k=3, 2\nmid a'\ {\rm and}\ 2|d,\\ v_2(cL_k), &\text{otherwise}. \end{array} \right.}(4.1) \end{align*} {\sc Case 4.} $p\ne 2, p|cL_k$ and $p\nmid a'$. Note that $2|(p-1)$ for each odd prime $p$. Evidently, if $2\nmid cL_k$, then $k=1$. So there is no odd prime $p$ so that $p| L_k$ if $2\nmid cL_k$. Thus by Lemmas 3.4-3.7, for all odd prime factors $p$ of $cL_k$, we obtain that $ v_p(P_{2, k, \varphi})=1$ except that {either $p\nmid ac$}, $p|L_k$ and $k+1\equiv0\pmod p$ {or} $p|d$, in which case $v_p(P_{2, k, \varphi})=0$. On the other hand, for all odd primes $q$ such that $q\ne p$ and $q\le cL_k$, we have by Lemmas 3.4-3.7 that $v_p(P_{q, k, \varphi})=0$ if $p|d$ or if $p\nmid ac$, $p|L_k$ and $k+1\equiv0\pmod p$, and $v_p(P_{q, k, \varphi})\le 1$ otherwise. Hence $v_p(P_{2,k,\varphi})\ge v_p(P_{q, k, \varphi})$ for all odd primes $q$ such that $q\ne p$ and $q\le cL_k$. 
Therefore we deduce immediately that $$ v_p(P_{k, \varphi})=\max_{{\rm prime}\ q\le cL_k} \{v_p(P_{q, k, \varphi})\}=\max( v_p(P_{2, k, \varphi}), v_p(P_{p, k, \varphi})).\eqno(4.2) $$ Using Lemmas 3.6 and 3.7 to compute $v_p(P_{p,k,\varphi})$, we get that \begin{align*} v_p(P_{p, k, \varphi})={\left\{ \begin{array}{rl} 0, &\text{if} \ v_p(cL_k)=1\ \mbox{and}\ p\nmid d,\\ v_p(c), &\text{if} \ v_{p}(k+1)\ge v_p(L_k), p\nmid d\ \mbox{and}\ v_p(cL_k)\ge 2,\\ &\ \mbox{or\ if}\ v_{p}(k+1)\ge v_p(L_k)\ \mbox{and}\ p|d,\\ v_p(cL_k), &\text{if} \ v_{p}(k+1)< v_p(L_k), p\nmid d\ \mbox{and}\ v_p(cL_k)\ge 2,\\ &\ \mbox{or\ if}\ \ v_{p}(k+1)< v_p(L_k)\ \mbox{and}\ p|d. \end{array} \right.}\quad\quad\quad \quad\quad(4.3) \end{align*} For all the primes $p$ such that $p\nmid d$ and $v_p(cL_k)=1$, we have by the above discussion that $v_p(P_{2, k, \varphi})=0$ only if $v_p(c)=0$, $v_p(L_k)=1$ and $k+1\equiv0\pmod p$. Equivalently, $v_p(P_{2, k, \varphi})=v_p(c)=v_p(cL_k)-v_p(L_k)$ if $v_p(k+1)\ge v_p(L_k) \ge 1$, and $v_p(P_{2, k, \varphi})=v_p(cL_k)$ otherwise. Therefore, for all the odd primes $p$ satisfying $p|cL_k$ and $p\nmid a'$, we derive from (4.2) and (4.3) that $$ v_p(P_{k, \varphi})={\left\{ \begin{array}{rl} v_p(cL_k)-v_p(L_k), &\text{if}\ v_{p}(k+1)\geq v_p(L_k)\ge 1,\\ v_p(cL_k), &\text{otherwise}. 
\end{array} \right.}\eqno(4.4) $$ Now putting all the above cases together, we get \begin{align*} P_{k, \varphi}&=2^{v_2(P_{k, \varphi})}\bigg(\prod_{{\rm prime}\ p:\ p\ne2 \atop p|a',\ p|cL_k}p^{v_p(P_{k, \varphi})}\bigg)\bigg(\prod_{{\rm prime}\ p:\ p \ne 2 \atop p\nmid a',\ p|cL_k}p^{v_p(P_{k,\varphi})}\bigg)\\ &=\frac{cL_k}{2^{v_2(cL_k)-v_2(P_{k, \varphi})}\bigg(\prod_{{\rm prime}\ p:\ p\ne 2\atop p|a',\ p|cL_k}p^{v_p(cL_k)-v_p(P_{k, \varphi})}\bigg)\bigg(\prod_{{\rm prime}\ p:\ p\ne 2 \atop p\nmid a',\ p|cL_k}p^{v_p(cL_k)-v_p(P_{k,\varphi})}\bigg)}\\ &=\frac{cL_k}{2^{v_2(cL_k)-v_2(P_{k,\varphi})}\bigg(\prod_{{\rm prime}\ q:\ q\ne 2,\ q|a'}q^{v_q(cL_k)}\bigg)\bigg(\prod_{{\rm prime}\ q:\ q\ne 2\atop q\nmid a',\ q|cL_k}q^{v_q(cL_k)-v_q(P_{k,\varphi})}\bigg)}\\ &=\frac{cL_k}{2^{\delta_{2, k,\varphi}}\bigg(\prod_{{\rm prime}\ q|a'}q^{v_q(cL_k)}\bigg)\bigg(\prod_{{\rm prime}\ q:\ q\ne 2 \atop q\nmid a',\ q|cL_k}q^{v_q(cL_k)-v_q(P_{k,\varphi})}\bigg)}, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (4.5) \end{align*} where \begin{align*} \delta_{2, k,\varphi}:={\left\{ \begin{array}{rl} v_2(cL_k)-v_2(P_{k, \varphi }), &\text{\rm if} \ 2\nmid a',\\ 0, &\text{\rm if} \ 2|a'. \end{array} \right.} \end{align*} It then follows from (4.1) that \begin{align*} \delta_{2, k,\varphi}={\left\{ \begin{array}{rl} v_2(L_k), &\text{\rm if} \ 2\nmid a' \ {\rm and} \ v_2(k+1)\ge v_2(L_k)\ge 2, \\ 1, &\text{\rm if} \ 2\nmid a \ {\rm and}\ v_2(cL_k)=1, {\rm or} \ k=3, 2\nmid a\ {\rm and} \ 2|c, {\rm or} \ k=3, 2\nmid a'\ {\rm and}\ 2|d,\\ 0, &\text{\rm otherwise,} \end{array} \right.} \end{align*} which implies that $\eta_{2,k, a', c}=2^{\delta_{2,k,\varphi}}$. Hence by (1.2) we get $$Q_{k, a', c}=\frac{cL_k}{2^{\delta_{2, k,\varphi}}\prod_{{\rm prime}\ q|a'}q^{v_q(cL_k)}}. 
\eqno(4.6)$$ Since there is at most one odd prime $p\le k$ such that $v_p(k+1)\ge v_p(L_k)\ge 1$ (see \cite{[FK]}), we derive from (4.4) that \begin{align*} \prod_{{\rm prime}\ q:\ q \ne 2,\atop q\nmid a',\ q|cL_k}q^{v_q(cL_k)-v_q(P_{k,\varphi})}={\left\{ \begin{array}{rl} p^{v_p(L_k)}, &\text{if} \ v_p(k+1)\geq v_p(L_k)\ge 1 \ \mbox{for an odd prime}\ p\nmid a',\\ 1, &\text{otherwise.} \end{array} \right.} \end{align*} Thus it follows from (4.5) and (4.6) that $P_{k,\varphi}$ equals $Q_{k, a', c}$, unless $v_p(k+1)\ge v_p(L_k)\ge 1$ for an odd prime $p\nmid a'$ (there is at most one such prime), in which case $P_{k, \varphi}$ equals $\frac{Q_{k, a', c}}{p^{v_p(L_k)}}$. The proof of Theorem 1.3 is complete. $\square$\\ Now we give some examples to illustrate Theorem 1.3.\\ \\ {\bf Example 4.1.} Let $ a\ge 1, b\ge 0$ and $c\ge 1$ be integers, and let $a':=a/\gcd(a, b)$ be odd. Let $k=2^t-1$, where $t\in \mathbb{N}$ and $t\ge 3$. Since $v_2(k+1)=t>v_2(L_k)=t-1\ge 2$, we obtain by Theorem 1.3 that $\eta_{2,k,a',c}=2^{v_2(L_k)}$. On the other hand, there is no odd prime $p$ satisfying $v_p(k+1)\ge v_p(L_k)\ge 1$. Thus we have $$ P_{k, \varphi}=\frac{cL_k}{2^{v_2(L_k)} \prod_{{\rm prime} \ q|a'}q^{v_q(cL_k)}}. $$\\ \\ {\bf Example 4.2.} Let $ a\ge 1, b\ge 0$ and $c\ge 1$ be integers, and let $a':=a/{\rm gcd}(a, b)$. Let $p$ be any given odd prime with $p\nmid a'$, and let $k=p^{\alpha}-1$ for some integer $\alpha\ge 2$. Since $k=p^{\alpha}-1>3$ and $v_2(k+1)=v_2(p^{\alpha})=0$, we have $\eta_{2,k, a',c}=1$. The odd prime $p$ satisfies $v_p(k+1)=\alpha>\alpha-1=v_p(L_k)\ge 1$. Hence we get by Theorem 1.3 that $$ P_{k, \varphi}=\frac{cL_k}{p^{v_p(L_k)}\prod_{{\rm prime} \ q|a'}q^{v_q(cL_k)}}. $$\\ \\ {\bf Example 4.3.} Let $ a\ge 1, b\ge 0$ and $c\ge 1$ be integers, and let $a':=a/{\rm gcd}(a, b)$. If $k$ is an integer of the form $35^{\alpha}-1$ with $\alpha\ge 2$ and $\alpha\in \mathbb{N}$, then $$ P_{k, \varphi}=\frac{cL_k}{\prod_{{\rm prime} \ q|a'}q^{v_q(cL_k)}}.
\eqno(4.7) $$ Actually, since $35^{\alpha}-1>3$ and $v_2(35^{\alpha})=0<v_2(L_k)$, we obtain by Theorem 1.3 that $\eta_{2, k,a',c}=1$. On the other hand, we have that $v_5(k+1)=\alpha<v_5(L_k)$, $v_7(k+1)=\alpha<v_7(L_k)$ and $v_q(k+1)=0$ for any other odd prime $q$. Hence we get $P_{k,\varphi}$ as in (4.7). Furthermore, if $a|b$ or $a'$ is a prime greater than $cL_k$, then no prime factor of $a'$ divides $cL_k$. Therefore for any $k=35^{\alpha}-1$ with $\alpha\ge 2$ and $\alpha\in \mathbb{N}$, one has $P_{k, \varphi}=cL_k$.\\ Finally, by Theorem 1.3, we only need to compute the first $P_{k, \varphi}$ values of $g_{k,\varphi}$ in order to estimate the difference between $\prod_{0\le i\le k} \varphi(b+a(n+ic))$ and $\varphi({\rm lcm}_{0\le i\le k}\{b+a(n+ic)\})$ for large $n$. In other words, we have $$\min_{1\le m\le P_{k,\varphi}}\{g_{k,\varphi}(m)\} \le \frac{\prod_{0\le i\le k} \varphi(b+a(n+ic))} {\varphi({\rm lcm}_{0\le i\le k}\{b+a(n+ic)\})}=g_{k,\varphi}\big(\langle n\rangle_{P_{k,\varphi}}\big)\le \max_{1\le m\le P_{k,\varphi}}\{g_{k,\varphi}(m)\},$$ where $\langle n\rangle_{P_{k,\varphi}}$ denotes the integer between $1$ and $P_{k,\varphi}$ such that $n\equiv\langle n\rangle_{P_{k,\varphi}}\pmod{P_{k,\varphi}}$. On the other hand, estimating the difference between $\prod_{0\le i\le k}\varphi(b+a(n+ic))$ and ${\rm lcm}_{0\le i\le k}\{\varphi(b+a(n+ic))\}$ is also an interesting problem. For this purpose, we define the arithmetic function $G_{k, \varphi}$ for any positive integer $n$ by $$ G_{k, \varphi}(n):=\frac{\prod_{i=0}^k \varphi(b+a(n+ic))} {{\rm lcm}_{0\le i\le k} \{\varphi(b+a(n+ic))\}}. $$ Unfortunately, $G_{k, \varphi}$ may not be periodic. For instance, taking $a=1, b=0$ and $c=1$, the arithmetic function $\bar G_{k, \varphi}$ defined by $\bar G_{k, \varphi}(n):=\frac{\prod_{i=0}^k \varphi(n+i)}{{\rm lcm}_{0\le i\le k}\{\varphi(n+i)\}}$ for $n\in \mathbb{N}^*$ is not periodic.
Indeed, for any given positive integer $M$, we can always choose a prime $p>M$ since there are infinitely many primes. By Dirichlet's theorem, we know that there exists a positive integer $m$ such that the term $mp^2+1$ of the arithmetic progression $\{np^2+1\}_{n\in \mathbb{N}^*}$ is prime. Letting $n_0=mp^2$ gives us that $p|\varphi(n_0)$ and $p| \varphi(n_0+1)=\varphi(mp^2+1)=mp^2$. Thus $p|\bar G_{k, \varphi}(n_0)$ and $\bar G_{k, \varphi}(n_0)\ge p>M$. That is, $\bar G_{k, \varphi}$ is unbounded, which implies that $\bar G_{k, \varphi}$ is not periodic. Applying Theorem 1.3, we can give a nontrivial upper bound for the integer ${\rm lcm}_{0\le i\le k}\{\varphi(b+a(n+ic))\}$ as follows.\\ \\ \noindent{\bf Proposition 4.4.} {\it Let $k\ge 1, a\ge 1, b\ge 0$ and $c\ge 1$ be integers. Then for any positive integer $n$, we have \begin{align*} {\rm lcm}_{0\le i\le k}\{\varphi(b+a(n+ic))\} \le \frac{\prod_{i=0}^k\varphi(b+a(n+ic))}{g_{k,\varphi}\big(\langle n\rangle_{P_{k,\varphi}}\big)} \end{align*} with $\langle n\rangle_{P_{k,\varphi}}$ being defined as above.} \begin{proof} For each $0\le i\le k$, since $\varphi$ is multiplicative and $b+a(n+ic)| {\rm lcm}_{0\le j\le k}\{b+a(n+jc)\}$, we have $$ \varphi(b+a(n+ic))\big| \varphi({\rm lcm}_{0\le j\le k}\{b+a(n+jc)\}). $$ So we get ${\rm lcm}_{0\le i\le k}\{\varphi(b+a(n+ic))\}\big| \varphi({\rm lcm}_{0\le i\le k}\{b+a(n+ic)\})$. Therefore $$g_{k,\varphi}\big(\langle n\rangle_{P_{k, \varphi}}\big)=g_{k,\varphi}(n)\le \frac{\prod_{0\le i\le k}\varphi(b+a(n+ic))}{{\rm lcm}_{0\le i\le k}\{\varphi(b+a(n+ic))\}} $$ for any positive integer $n$. The desired result then follows immediately. \end{proof} Theorem 1.3 answers the second part of Problem 1.1 for the Euler phi function. However, the smallest period problem remains open for all other multiplicative functions $f$ with $f(n)\ne 0$ for all positive integers $n$.
For example, if one picks $f=\sigma _{\alpha}$ with $\sigma _{\alpha}(n):=\sum_{d|n\atop d\ge 1}d^{\alpha }$ for $\alpha \in \mathbb {N}$, then what is the smallest period of $g_{k, f}$? If $f=\xi _{\varepsilon}$ with $\xi _{\varepsilon}(n):=n^\varepsilon$ for $\varepsilon \in \mathbb{R}$, then what is the smallest period of $g_{k, f}$?\\ \begin{center} {\bf Acknowledgements} \end{center} The authors are grateful to the anonymous referees for careful reading of the manuscript and for helpful comments and suggestions. \end{document}
\begin{document} \setlength{\parindent}{1em} \title{Path integral approach to driven quantum harmonic oscillator using Markov chain Monte Carlo methods} \author{Sohini Marik} \author{Souvik Naskar} \thanks{The authors Sohini Marik and Souvik Naskar contributed equally.} \author{Shibaji Banerjee} \affiliation{Department of Physics, St. Xavier's College, Kolkata} \date{\today} \begin{abstract} We have simulated the ground states of quantum harmonic oscillators driven either by constant forces of different magnitudes or by time-dependent driving forces. The expectation values of position for various combinations of mass, natural angular frequency, and the coupling constant $\lambda$ were calculated for both driving modes. For constant forcing, coherent states were obtained. The results for both forcing scenarios match the theoretically expected values almost exactly. For the simulations, the Metropolis algorithm was implemented on a discrete time lattice to evaluate the imaginary time path integral of the systems. \end{abstract} \maketitle \section{Introduction} In the Schr\"{o}dinger formulation of quantum mechanics developed in 1925, the time evolution of a non-relativistic system is governed by its Hamiltonian. The path integral formulation is an alternative approach that relies on a system's Lagrangian as the fundamental quantity. It is the generalization of the classical action principle to quantum mechanics. Most of this formulation was developed by R. P. Feynman in 1948. A precursor to his work was P. A. M. Dirac's 1933 paper that proposed an analogy between the complex exponential of the Lagrangian and the transformation function relating quantum mechanical wave functions at consecutive instants of time.
Feynman showed that integrating the complex exponential of the action over all possible trajectories of a particle between two space-time points $\left( x_i, t_i \right)$ and $\left( x_f, t_f \right)$ yields the probability amplitude for the particle at $x_i$ at time $t_i$ to be found at $x_f$ at time $t_f$\cite{feynman,feynmanhibbs,blundell}. One of the few exactly solvable path integrals is that of the driven harmonic oscillator system.\cite{ingold,jana} Driven harmonic oscillators have been used extensively in the literature to approach various problems in physics. For instance, Piilo and Maniscalco simulated a non-Markovian damped oscillator with a forced harmonic oscillator in 2006\cite{opensys}, and Gimelshein \textit{et al} applied a 3D forced harmonic oscillator model of vibration-translation energy to atomic and molecular collisions in 2017\cite{vib}. The expression of the probability amplitude of a one-dimensional harmonic oscillator driven by a general time-dependent force is provided in Feynman and Hibbs\cite{feynmanhibbs}. In this paper, we take a non-perturbative computational approach to study the ground state probability distribution of the system when the driving force is beyond the perturbative limit. We have specifically considered two cases --- a constant force and a sinusoidal force. We have simulated the ground states of both systems using Markov chain Monte Carlo methods to evaluate the imaginary time path integral. A similar procedure has been implemented by Westbroek \textit{et al}\cite{westbroek} and Mittal \textit{et al}\cite{mittal} to compute the ground state of a simple harmonic oscillator and an anharmonic oscillator respectively. We have further demonstrated that the ground state of the driven harmonic oscillator can be described by coherent states, and compared our simulations with the theoretical result for the position expectation value obtained from Carruthers and Nieto's 1965 paper\cite{carruthers}.
The organization of our paper is as follows. We have briefly introduced the Feynman path integrals, coherent states, and the driven quantum harmonic oscillator in the context of our paper in sections \ref{section:pi}, \ref{section:cs}, and \ref{section:fho} respectively. In section \ref{section:mcmc}, we have implemented Markov chain Monte Carlo (MCMC) methods for two driven harmonic oscillator systems and analyzed the results. \section{Feynman Path Integrals} \label{section:pi} \begin{table}[ht] \centering \begin{tabular}{l c}\hline \bf{Parameter} & \bf{Meaning} \\ \hline $\tau$ & Imaginary time $\tau = it$\\ $\delta \tau$ & Lattice spacing in discrete imaginary time lattice\\ $N_{\tau}$ & Length of discrete imaginary time lattice\\ $\lambda$ & Coupling constant for driving force in driven harmonic oscillator\\ $\tilde{m}$ & Dimensionless mass $\tilde{m} = m\delta \tau$\\ $\tilde{\omega}$ & Dimensionless frequency $ \tilde{\omega} = \omega \delta \tau$\\ $\tilde{x}_i$ & Dimensionless position on discrete time lattice $\tilde{x}_i = \dfrac{x_i}{\delta \tau}$\\ $\tilde{F}_i$ & Dimensionless driving force on discrete time lattice $\tilde{F}_i = F_i \left(\delta\tau\right)^2$\\ $\tilde{S}$ & Dimensionless Euclidean action\\ $\alpha$ & Expectation value of position of the coherent state $\ket{\alpha}$\\ \hline \end{tabular} \caption{An overview of the notation used.} \label{tab:notn} \end{table} In the Lagrangian formulation of classical mechanics, the trajectory of a particle is given by the solutions of the Euler-Lagrange equations. This path extremizes the classical action $S_{cl}=\int Ldt,$ where $L$ is the Lagrangian of the system. In quantum mechanics, the particle is not restricted to a single trajectory. It can go from one point to another by all accessible paths. Each path contributes a phase related to the classical action. To compute the probability amplitude, we have to sum over all these phase factors.
The propagator of a particle of mass $m$ going from $x_i$ at time $t_i$ to $x_f$ at time $t_f$ in a potential $V(x(t))$ is given by \begin{equation} \langle x_f ,t_f |x_i ,t_i \rangle =\int Dx(t)e^{iS/\hbar }, \end{equation} where the action of the path $x(t)$ is \begin{equation} S=\int_{t_i }^{t_f }{ dt\left\lbrack \frac{1}{2}m{\left(\frac{dx}{dt}\right)}^2 -V\left(x(t)\right)\right\rbrack}. \end{equation} Performing the Wick rotation $t \rightarrow -i\tau$ (where $\tau$ is a real number), we get the Euclidean time integral \begin{equation} \langle x_f ,t_f |x_i ,t_i \rangle =\int Dx(\tau )e^{-S_E/\hbar }, \end{equation} where \begin{equation} S_E=\int_{\tau_i }^{\tau_f } d\tau \left\lbrack \frac{1}{2}m{\left(\frac{dx}{d\tau }\right)}^2 +V\left(x(\tau )\right)\right\rbrack \end{equation} is the Euclidean action. This form of the integral is not oscillatory; it is exponentially damped, so the contributions of the higher energy states become negligible for large values of $\tau$.\cite{blundell} In this paper, we apply Markov chain Monte Carlo methods to the Euclidean time integral and compute the ground state of a driven harmonic oscillator. \section{Coherent states} \label{section:cs} Coherent states are the states of a quantum harmonic oscillator that show classical behavior.\cite{blundell} A coherent state $\ket{\alpha}$ is defined as \begin{equation}\label{coherent} \ket{\alpha} \equiv T_{\alpha}\ket{0} = \exp\left(-\frac{i}{\hbar}\hat{p}\alpha\right) \ket{0}, \end{equation} where $\hat{p}$ is the momentum operator, and $\ket{0}$ is the ground state of the simple harmonic oscillator. The multiplication property of the translation operator $T_{\alpha}$ is the following: \begin{equation} T_{\alpha}T_{\beta} = \exp\left(-\frac{i}{\hbar}\hat{p}\alpha\right)\exp\left(-\frac{i}{\hbar}\hat{p}\beta\right) = \exp\left(-\frac{i}{\hbar}\hat{p}\left(\alpha + \beta\right) \right) = T_{\alpha + \beta}.
\end{equation} It follows from the above multiplication property that \begin{equation} T_{\alpha}^{\dagger} = \exp\left(\frac{i}{\hbar}\hat{p}\alpha\right) = \exp\left(-\frac{i}{\hbar}\hat{p}(-\alpha)\right) = T_{-\alpha}= T_{\alpha}^{-1}, \end{equation} establishing that $T_{\alpha}$ is unitary. It is called the translation operator due to its action on the position operator $\hat{x}$: \begin{equation} T_{\alpha}^{\dagger}\hat{x}T_{\alpha} = \exp\left(\frac{i}{\hbar}\hat{p}\alpha\right)\hat{x}\exp\left(-\frac{i}{\hbar}\hat{p}\alpha\right)= \hat{x}+\frac{i}{\hbar}\left[\hat{p}, \hat{x}\right]\alpha = \hat{x} + \alpha. \end{equation} Consequently, the position expectation value of the coherent state $\ket{\alpha}$ is\cite{zwiebach} \begin{equation} \bra{\alpha}\hat{x}\ket{\alpha} = \bra{0}T_{\alpha}^{\dagger}\hat{x}T_{\alpha}\ket{0} = \bra{0}\hat{x} + \alpha \ket{0} = \alpha. \end{equation} \section{Driven Harmonic Oscillator} \label{section:fho} The expression of the action of a driven harmonic oscillator as a function of the path $x(t)$ is\cite{blundell} \begin{equation} S=\int dt\left\lbrack \frac{1}{2}m\dot{x} (t)^2 -\frac{1}{2}kx(t)^2 + x(t)F(t)\right\rbrack. \end{equation} We can introduce a coupling constant $\lambda$ to scale the forcing term. Then the action becomes \begin{equation} S=\int dt\left\lbrack \frac{1}{2}m\dot{x} (t)^2 -\frac{1}{2}kx(t)^2 + \lambda x(t)F(t)\right\rbrack. \label{eq:action} \end{equation} According to Carruthers and Nieto\cite{carruthers}, the ground state $\ket{0}'$ of the driven harmonic oscillator with a constant driving force $F_0$ is \begin{equation}\label{newground} \ket{0}' = \exp\left[\frac{x_0F_0}{\hbar \omega} \left(\hat{a}^{\dagger} - \hat{a}\right)\right]\ket{0}, \end{equation} where $x_0 = \sqrt{\dfrac{\hbar}{2m\omega}}$, $\hat{a}$ is the annihilation operator, and $\hat{a}^{\dagger}$ is the creation operator.
Now, using the definition of $\hat{p}$: \begin{equation} \hat{p} = -im\omega x_0 \left(\hat{a} - \hat{a}^{\dagger} \right), \end{equation} and substituting that into equation \eqref{newground}, we get \begin{equation}\label{eq:cohground} \ket{0}' = \exp\left[-\frac{i}{\hbar}\hat{p}\left(\frac{F_0}{m\omega^2}\right)\right]\ket{0}. \end{equation} Now comparing \eqref{coherent} with \eqref{eq:cohground}, we can see that the ground state $\ket{0}'$ of the driven harmonic oscillator is a coherent state with the expectation value of position $\alpha$ given by \begin{equation} \alpha = \frac{F_0}{m\omega^2}. \label{eq:alpha} \end{equation} \begin{figure} \caption{Update of a discrete imaginary-time path between $\tau_i = 0$ and $\tau_i = 9$, illustrating the computational method. The red path represents the thermalized path.} \label{fig:update} \end{figure} \section{Implementation of Monte Carlo methods}\label{section:mcmc} The Metropolis algorithm has been implemented on a discrete-time lattice with $N_{\tau}$ time slices and the periodic boundary condition $x_{N_{\tau}+1}=x_1$. The Euclidean time is $\tau_i = i \delta \tau$, where $i \in \lbrace 1, \dots, N_{\tau} \rbrace$ is the site index and $\delta \tau$ is the lattice spacing. The trajectory over the time lattice is characterized by a real number array $(x_1 ,\dots,x_{N_{\tau}} )$. First, we want to express all relevant physical quantities as pure numbers. For this, we set $ \hbar =1=c$. It follows that $[\textrm{time}]=[\textrm{length}]=[{\textrm{mass}}^{-1} ]=[{\textrm{energy}}^{-1} ]$.
Introducing the dimensionless variables \begin{equation} \tilde{m} =m\delta \tau ,~\tilde{\omega} =\omega \delta \tau ,~{\tilde{x} }_i =\frac{x_i }{\delta \tau },~{\tilde{F} }_i =F_i {\left(\delta \tau \right)}^2 \end{equation} in terms of the lattice spacing $\delta \tau$, we get the discrete form of the dimensionless action as \begin{equation} \displaystyle{ \tilde{S} =\sum_{i=1}^{N_{\tau}} \frac{1}{2}\tilde{m} {\left({\tilde{x} }_{i+1} -{\tilde{x} }_i \right)}^2 +\frac{1}{2}\tilde{m} {\tilde{\omega} }^2 {\tilde{x} }_i^2 -{\tilde{x} }_i {\tilde{F} }_i }. \end{equation} The initial configuration ${\textrm{path}}^{(0)}$ is updated by the Metropolis algorithm to get the next configuration ${\textrm{path}}^{(1)}$, and so on. One update of the value $x_i$ of the path at lattice site $i$ constitutes one Monte Carlo step. The lattice sites are visited in random order. A new value $x_i^{(\textrm{new})} =x_i +u$ is proposed from a symmetric normal distribution centered on the previous value $x_i$. As $x_i^{(\textrm{new})}$ always depends on $x_i$, the proposals are autocorrelated. There are $N_{\tau}$ Monte Carlo steps in one Metropolis sweep, so in each sweep every lattice site gets updated once on average. To reduce autocorrelation, we discard a number of sweeps between every two path configurations used in the simulation.\cite{westbroek} In the random walk Metropolis algorithm, a state $b$ proposed from the current state $a$ is accepted with probability $\min \left\lbrace P,\,1\right\rbrace$, where \begin{equation} P=\frac{\pi (b)}{\pi (a)} \end{equation} and $\pi$ is the target probability distribution. The candidate states proposed in the initial Metropolis sweeps are not from the target distribution. The number of sweeps required to reach the target distribution is called the burn-in period.
Once the target distribution is reached, the mean and standard deviation of the paths proposed in each sweep become nearly constant.\cite{guilhoto,hastings,creutz} In our case, $\pi$ is $e^{-S_E}$, and the acceptance probability is $\min \left\lbrace e^{-\delta S_E},1\right\rbrace$, where $\delta S_E$ is the change in action due to the proposed change in the path. Thus, we always accept proposals that decrease the action, while proposals that increase the action are accepted with probability $e^{-\delta S_E}$. The function used to simulate each Metropolis sweep is provided in the appendix. We have used an array of random numbers as the initial configuration. This is called a hot start. We could also use a cold start with an array of zeros. The choice of initial configuration makes no difference after the burn-in period. Following Westbroek \textit{et al}\cite{westbroek}, a time lattice with 120 lattice points has been chosen. We have taken a total of 12,000 sweeps, discarding 12 sweeps between every two retained configurations after the burn-in period. The lattice can be made finer by increasing the number of lattice points, but that significantly increases the computation time. \begin{figure} \caption{Standard deviation of the proposed paths as a function of the number of sweeps, used to estimate the burn-in period.} \label{fig:cburn} \label{fig:cburnin} \end{figure} \subsection{Constant Driving Force} \label{section:mcmcfho} In this section, we consider a constant driving force for different values of the coupling constant $\lambda$. In the discrete form, $\tilde{F}_i = \lambda, \ \forall \ i, \ i \in \lbrack 1,120 \rbrack$. The action in equation \eqref{eq:action} becomes \begin{equation}\label{eq:consact} \displaystyle{ \tilde{S} =\sum_{i=1}^N \frac{1}{2} {\tilde{m}\left({\tilde{x} }_{i+1} -{\tilde{x} }_i \right)}^2 +\frac{1}{2} \tilde{m}\tilde{\omega}^2{\tilde{x} }_i^2 -{\tilde{x} }_i {\lambda } }.
\end{equation} We have assessed the burn-in period from a trial simulation by plotting the standard deviations of the proposed paths versus the number of sweeps, taking $\tilde{m} =1$ and $\tilde{\omega} = 1$. From figure \ref{fig:cburnin}, we see that the target distribution is reached within the first 50--100 sweeps. So, we have taken the burn-in period as 100 sweeps in all cases. \begin{figure} \caption{Normalized ground state probability distributions $\left| \psi_0 \right|^2$ of the driven harmonic oscillator with a constant driving force, for $\tilde{m}=1$, $\tilde{\omega} = 1$, and $\lambda = 0, 2, 4, 6$.} \label{fig:cgs} \label{fig:cgsstuff} \end{figure} For the ground state simulation, we have plotted the histogram of all the paths generated by the algorithm, barring the burn-in data. Figure \ref{fig:cgs} shows the normalized ground states $\left| \psi_0 \right|^2$ for $\tilde{m}=1$, $\tilde{\omega} = 1$, and $\lambda = 0, 2, 4, 6$. For $\lambda = 0$, we have retrieved the ground state of a simple harmonic oscillator obtained by Westbroek \textit{et al}. The ground states associated with the other values of $\lambda$ have the same waveform, but they are displaced towards the right. This illustrates that these ground states are coherent states. The shift can be calculated from the position expectation value of a coherent state given in equation \eqref{eq:alpha}. Substituting the expression of the forcing function in equation \eqref{eq:alpha}, we get \begin{equation} \alpha = \dfrac{\lambda}{\tilde{m} \tilde{\omega }^2}. \label{eq:calpha} \end{equation} In figure \ref{fig:cgsmw}, we have plotted the ground state probability distribution for different combinations of $\tilde{m}$ and $\tilde{\omega}$, keeping $\lambda$ fixed at $\lambda=5$. The values of $\alpha$ predicted by equation \eqref{eq:calpha} are $\alpha = 5$ for $\tilde{m} = 1$ and $\tilde{\omega} = 1$, $\alpha = 10$ for $\tilde{m} = 0.5$ and $\tilde{\omega}= 1$, and $\alpha = 20$ for $\tilde{m} = 1$ and $\tilde{\omega} = 0.5$. These are precisely the values of $\alpha$ obtained from our simulations in figure \ref{fig:cgsmw}.
\begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|} \hline $\bm{\lambda}$ & $\bm{\tilde{m}}$ & $\bm{\tilde{\omega}}$ & $\bm{\langle x \rangle}$ \\ \hline 0.0 & 1.0 & 1.0 & -0.0 \\ \hline 2.0 & 1.0 & 1.0 & 2.0\\ \hline 4.0 & 1.0 & 1.0 & 4.0\\ \hline 6.0 & 1.0 & 1.0 & 6.0\\ \hline 5.0 & 1.0 & 1.0 & 5.0\\ \hline 5.0 & 0.5 & 1.0 & 10.0\\ \hline 5.0 & 1.0 & 0.5 & 20.0\\ \hline \end{tabular} \caption{Values of the mean position $\langle x \rangle$ for various combinations of the coupling constant $\lambda$, dimensionless mass $\tilde{m}$, and dimensionless frequency $\tilde{\omega}$. In every instance, $\langle x \rangle$ takes the value of $\alpha$ from equation \eqref{eq:calpha}, the position expectation value of a coherent state. This establishes that the ground state of a driven harmonic oscillator with a constant driving force is a coherent state.} \label{tab:meanx} \end{table} \subsection{Sinusoidal Driving Force} In this section, we study a harmonic oscillator potential driven by a sinusoidal forcing function. The general form of the force is \begin{equation} \tilde{F}_i = \lambda \sin \left( \tilde{\omega}_d \tau_i - \tilde{\phi} \right). \label{eq:form} \end{equation} The choice of $\tilde{\omega}_d$ should ensure that $F_{N+1} = F_1$, satisfying the periodic boundary condition imposed on the time lattice. In our simulation, we have taken $ \tilde{\omega}_d = \dfrac{\pi \tilde{\omega}}{12} $ and $\tilde{\phi} = 0$. Substituting these values in equation \eqref{eq:form}, the action in equation \eqref{eq:action} becomes \begin{equation} \tilde{S} =\sum_{i=1}^N \frac{1}{2}\tilde{m} {\left({\tilde{x} }_{i+1} -{\tilde{x} }_i \right)}^2 +\frac{1}{2}\tilde{m} {\tilde{\omega} }^2 {\tilde{x} }_i^2 -{\tilde{x} }_i \lambda \sin \left(\dfrac{\pi}{12} \tilde{\omega} \tau_i \right). \end{equation} Again, the burn-in period is 100 sweeps.
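As a quick cross-check of the table, the predicted displacement from equation \eqref{eq:calpha} can be evaluated directly. This is a throwaway snippet of our own; the row data are copied from the table.

```python
# Cross-check of the table: alpha = lambda / (m~ w~^2) from eq. (calpha)
# against the simulated mean positions <x> (rows copied from the table).

def alpha(lam, m, w):
    return lam / (m * w ** 2)

rows = [  # (lambda, m~, w~, simulated <x>)
    (0.0, 1.0, 1.0,  0.0), (2.0, 1.0, 1.0,  2.0),
    (4.0, 1.0, 1.0,  4.0), (6.0, 1.0, 1.0,  6.0),
    (5.0, 1.0, 1.0,  5.0), (5.0, 0.5, 1.0, 10.0),
    (5.0, 1.0, 0.5, 20.0),
]
for lam, m, w, mean_x in rows:
    assert abs(alpha(lam, m, w) - mean_x) < 1e-12
```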
In this case, the histogram of all simulated path configurations gives the ground state probability distribution averaged over time. Figure \ref{fig:fgs} shows the ground states for $\lambda = 0, 2, 4, 6$. For $\lambda>0$, we get two maxima in the distribution. As $\lambda$ increases, the maxima shift away from zero symmetrically. \begin{figure} \caption{The figure shows the ground state probability distribution of a driven harmonic oscillator with a sinusoidal driving force for different values of the coupling constant $\lambda$. As $\lambda$ is turned on and increased, split peaks occur near the values $+\alpha$ and $-\alpha$. Here, $\alpha$ is the position expectation value calculated from equation \eqref{eq:calpha}.} \label{fig:fgs} \end{figure} Although the driving force has an explicit time dependence, in the discrete form $F_i$ takes a fixed value at each lattice site $\tau_i$. Therefore, the ground state probability density is a coherent state for every $\tau_i$. As in the previous case, we can calculate the displacement of the coherent states from equation \eqref{eq:alpha}. Using $\tilde{F}_i = \lambda \sin \left( \dfrac{\pi}{12} \tilde{\omega} \tau_i \right)$ in equation \eqref{eq:alpha}, we get \begin{equation} \alpha_i = \frac{\lambda}{\tilde{m} \tilde{\omega}^2 } \sin \left(\frac{\pi\tilde{ \omega}}{12} \tau_i \right). \label{eq:fal} \end{equation} So, the displacement of the coherent states as a function of $\tau_i$ is a sinusoidal curve with the same frequency and initial phase as the driving force. The amplitude of the curve is the maximum displacement of the mean position of the coherent state from $\langle x \rangle = 0$. \begin{figure} \caption{The solid (green) line shows the time-varying driving force. The dash-dotted (magenta) line shows the mean position at each point in the time lattice.
We see that the time variation of the mean position is sinusoidal and follows equation \eqref{eq:fal}.} \label{fig:falpha} \end{figure} We have already seen the variation of $\alpha_i$ with $\lambda$ for a constant driving force in section \ref{section:mcmcfho}. In figure \ref{fig:falpha}, we have simulated the dependence of $\alpha_i$ on $\tilde{m}$ and $\tilde{\omega}$ for $\lambda = 4$. From equation \eqref{eq:fal}, the theoretical amplitudes of the position expectation values are $|\alpha_{i,max}| = 2 \lambda = 8$ for $\tilde m=0.5, \ \tilde \omega=1$, $|\alpha_{i,max}| = 0.5 \lambda = 2$ for $\tilde m=0.5, \ \tilde \omega=2$, and $|\alpha_{i,max}| = \lambda = 4$ for $\tilde{m} = 1, \ \tilde{\omega} = 1$. The simulated amplitudes of $\alpha_i(\tau_i)$ in figure \ref{fig:falpha} are very close to the predicted values. \section{Conclusion} In the present study, we have simulated the ground state probability distributions of some forced harmonic oscillator potentials. First, we chose a constant driving force. For each combination of the parameters $\tilde{m}, \tilde{\omega},$ and $\lambda$, the simulated probability distribution is that of a coherent state. The calculated position expectation values of the simulated states were found to be nearly identical to the theoretical predictions. Subsequently, we considered a time-dependent sinusoidal forcing function and demonstrated that the ground state probability distribution is a coherent state that evolves with time. The position expectation value is also a sinusoidal function of time, with the same frequency and initial phase as the driving force. We have also explored the dependence of this function on various combinations of $\tilde{m}, \tilde{\omega},$ and $\lambda$. The simulated values of the amplitudes closely match the predicted values. The methodology described here can be applied to any time-dependent forcing function.
We can further attempt to simulate a forced harmonic oscillator whose mass and natural frequency change with time. Such systems are used to model several physical phenomena, like the interaction of charged particles with time-varying electromagnetic fields\cite{charged}. Thus, evaluating the imaginary-time path integral using MCMC methods is a powerful tool for visualizing the ground state probability distributions of quantum systems. \begin{acknowledgments} The authors thank Professor Tanaya Bhattacharyya for her help and gratefully acknowledge the support of St. Xavier's College, Kolkata. \end{acknowledgments} \end{document}
\begin{document} \title{Interrupt Timed Automata: Verification and Expressiveness\thanks{Parts of this paper have been published in the proceedings of FoSSaCS'09~\cite{berard09}.}} \begin{abstract} We introduce the class of Interrupt Timed Automata (ITA), a subclass of hybrid automata well suited to the description of timed multi-task systems with interruptions in a single processor environment. While the reachability problem is undecidable for hybrid automata, we show that it is decidable for ITA. More precisely, we prove that the untimed language of an ITA is regular, by building a finite automaton as a generalized class graph. We then establish that the reachability problem for ITA is in NEXPTIME and in PTIME when the number of clocks is fixed. To prove the first result, we define a subclass ITA$_-$ of ITA, and show that (1) any ITA can be reduced to a language-equivalent automaton in ITA$_-$ and (2) the reachability problem in this subclass is in NEXPTIME (without any class graph). In the next step, we investigate the verification of real-time properties over ITA. We prove that model checking \textsf{SCL}\xspace, a fragment of a timed linear time logic, is undecidable. On the other hand, we give model checking procedures for two fragments of timed branching time logic. We also compare the expressive power of classical timed automata and ITA and prove that the corresponding families of accepted languages are incomparable. The result also holds for languages accepted by controlled real-time automata (CRTA), which extend timed automata. We finally combine ITA with CRTA, in a model which encompasses both classes, and show that the reachability problem is still decidable. Additionally, we show that the languages of ITA are neither closed under complementation nor under intersection.
\keywords{Hybrid automata, timed automata, multi-task systems, interrupts, decidability of reachability, model checking, real-time properties.} \end{abstract} \section{Introduction} \subsection{Context} The model of timed automata (TA), introduced in~\cite{alur94a}, has proved very successful due to the decidability of several important verification problems, including reachability and model checking. A timed automaton consists of a finite automaton equipped with real valued variables, called clocks, which evolve synchronously with time during the sojourn in states. When a discrete transition occurs, clocks can be tested by guards, which compare their values with constants, and can be reset. The decidability results were obtained through the construction of a finite partition of the state space into regions, leading to a finite graph which is time-abstract bisimilar to the original transition system, thus preserving reachability. Consider several tasks executing on a single processor (possibly scheduled beforehand, although this step is beyond the scope of this paper). As a result, tasks are intertwined and may interrupt one another~\cite{silberschatz08}. Since the behaviour of such systems may depend on the current execution times of the tasks, a timed model should measure these execution times, which involves clock suspension in case of interruptions. Unfortunately, timed automata lack this feature of clock suspension, hence more expressive models should be considered. Hybrid automata (HA) have subsequently been proposed as an extension of timed automata~\cite{maler92}, with the aim of increasing the expressive power of the model. In this model, clocks are replaced by variables which evolve according to a differential equation. Furthermore, guards consist of more general constraints on the variables, and resets are extended into (possibly non deterministic) updates. This model is very expressive, but reachability is undecidable for HA.
The simpler model obtained by allowing clocks to be stopped and resumed, stopwatch automata (SWA), would be sufficient to model task interruptions in a processor. However, reachability is also undecidable for SWA~\cite{cassez00}. Many classes have been defined, between timed and hybrid automata, to obtain the decidability of this problem. Task automata~\cite{fersman07} and suspension automata~\cite{mcmanis94} model explicitly the scheduling of processes. Some classes restrict the use of variation of clock rates in hybrid automata to achieve decidability. Examples of such classes are systems with piece-wise constant derivatives~\cite{asarin95} and controlled real-time automata~\cite{Zielonka}. Guards may also be restricted, as in multi-rate or rectangular automata~\cite{alur95}, some integration graphs~\cite{kesten99}, or polygonal hybrid systems~\cite{asarin07}. Restricting resets may also lead to decidability, as in hybrid automata with strong resets~\cite{bouyer08} or initialized stopwatch automata~\cite{hen98}. O-minimal hybrid systems~\cite{lafferriere99,lafferriere01} provide algebraic constraints on hybrid systems to yield decidability. Extensions of timed automata that release some constraints were also considered, as in some updatable timed automata~\cite{bouyer04}. While untimed properties like reachability and \textsf{LTL}\xspace~\cite{pnueli77,sistla85} or \textsf{CTL}\xspace model checking~\cite{emerson82,queille82,Cim02} are useful for such models, real-time verification considers more precise requirements, for instance quantitative response time properties. Therefore, timed extensions of these logics have been defined. In the case of linear time logics, verification of the most natural extension \textsf{MTL}\xspace~\cite{koymans90} is undecidable on TA. However, several decidable fragments such as \textsf{MITL}\xspace~\cite{alur96} and \textsf{SCL}\xspace~\cite{raskin97} have subsequently been defined.
In the case of timed variants of branching time logics, different versions of Timed \textsf{CTL}\xspace (\textsf{TCTL}\xspace)~\cite{alur93,HNSY94} have been defined. Model checking procedures on TA for both versions of \textsf{TCTL}\xspace have been developed and implemented in several tools~\cite{uppaal04,kronos98}. \subsection{Contributions} In this paper, we define a subclass of hybrid automata, called Interrupt Timed Automata (ITA), well suited to the description of multi-task systems with interruptions in a single processor environment. \paragraph{The ITA model.} In an ITA, the finite set of control states is organized according to \emph{interrupt levels}, ranging from $1$ to $n$, with exactly one active clock for a given level. The clocks from lower levels are suspended and those from higher levels are not yet activated (and thus have value $0$). On the transitions, guards are linear constraints using only clocks from the current level or the levels below, and the relevant clocks can be updated by linear expressions, using clocks from lower levels. Finally, each state has a policy (lazy, urgent or delayed) that governs the sojourn time. This model is rather expressive since it combines variables with rate $1$ or $0$ (usually called stopwatches) and linear expressions for guards or updates. The ITA model is formally defined in Section~\ref{sec:background}. \paragraph{Reachability problem.} As said before, the reachability problem is undecidable for automata with stopwatches~\cite{hen98,cassez00,brihaye06}. However, we prove that it is decidable for ITA. More precisely, we first show that the untimed language of an ITA is effectively regular (Section~\ref{sec:regular}). The corresponding procedure significantly extends the classical region construction of~\cite{alur94a} by associating with each state a family of orderings over linear expressions.
This construction yields a decision algorithm for reachability in $2$-EXPTIME, and in PTIME when the number of clocks is fixed. This should be compared to TA with 3 clocks, for which reachability is PSPACE-complete~\cite{courcoubetis92}. We define a slight restriction of the model, namely ITA$_-$, which forbids updates of clocks other than the one of the current level. We prove that for any ITA one can build an equivalent ITA$_-$ w.r.t.\ language equivalence, whose size is at most exponential w.r.t.\ the size of the ITA and polynomial when the number of clocks is fixed. Based on the existence of a bound on the length of a minimal reachability path, we then show that reachability on ITA$_-$ can be decided in NEXPTIME without any class graph construction. This yields a NEXPTIME procedure for reachability in ITA (Section~\ref{sec:reachcomplexity}). \paragraph{Model checking over ITA.} We then focus on the verification of real-time properties for ITA (Section~\ref{sec:modelchecking}), expressed in timed extensions of \textsf{LTL}\xspace and \textsf{CTL}\xspace. First we show that model checking of the timed (linear time) logic \textsf{MITL}\xspace~\cite{alur96} is undecidable. Actually, even the fragment \textsf{SCL}\xspace~\cite{raskin97} cannot be verified on ITA, while the corresponding verification problem over TA is PSPACE-complete. We then consider two fragments of the timed (branching time) logic \textsf{TCTL}\xspace, introduced in~\cite{HNSY94} and also studied later from the expressiveness point of view~\cite{bouyer05}. The first one, $\textsf{TCTL}_c^{int}$, contains formulas involving comparisons of model clocks as atomic propositions. In this logic, it is possible to express properties like: \emph{(P1) a safe state is reached before spending 3 t.u. in handling some interruption}. Decidability is obtained by a generalized class graph construction in 2-EXPTIME (PTIME if the number of clocks is fixed).
Since the corresponding fragment cannot refer to global time, we consider a second fragment, $\textsf{TCTL}_p$, in which we can reason about minimal or maximal delays. Properties like \emph{(P2) the system is error free for at least 50 t.u.} or \emph{(P3) the system will reach a safe state within 7 t.u.} can be expressed. In this case, the decision procedure has a complexity in NEXPTIME for the existential fragment and 2-EXPTIME for the universal fragment (respectively NP and co-NP if the number of clocks is fixed). \paragraph{Expressiveness.} We also study the expressive power of the class ITA (Section~\ref{sec:exp}), in comparison with the original model of timed automata and the more general controlled real-time automata (CRTA) proposed in~\cite{Zielonka}. In CRTA, clocks and states are colored and a time rate is associated with every state. During the visit of a state, all clocks colored by the color of the state evolve with the state rate, while the others do not evolve. We prove that the corresponding families of languages ITL and TL, as well as ITL and CRTL, are incomparable. Additionally, we show that ITL is neither closed under complementation nor under intersection. \paragraph{Extensions.} We finally investigate compositions of ITA and other timed models (Section~\ref{sec:combination}). In the first composition, a synchronous product of an ITA and a TA, we prove that the reachability problem becomes undecidable. We then define a more appropriate product of ITA and CRTA. The CRTA part describes a basic task at an implicit additional level $0$. For this extended model, denoted by ITA$^+$, we show that reachability is still decidable with the same complexity, and in PSPACE when the number of clocks is fixed. \section{Interrupt Timed Automata}\label{sec:background} \subsection{Notations} The sets of natural, rational and real numbers are denoted respectively by $\ensuremath{\mathbb{N}}$, $\ensuremath{\mathbb{Q}}$ and $\ensuremath{\mathbb{R}}$.
A \emph{timed word} over an alphabet $\Sigma$ is a finite sequence $w=(a_1,\tau_1) \ldots (a_n,\tau_n)$ where $a_i$ is in $\Sigma$ and $(\tau_i)_{1 \leq i \leq n}$ is a non-decreasing sequence of real numbers. The \emph{length} of $w$ is $n$ and the \emph{duration} of $w$ is $\tau_n$. For a finite set $X$ of clocks, a linear expression over $X$ is a term of the form $\sum_{x \in X} a_x \cdot x + b$ where $b$ and $(a_x)_{x \in X}$ are in $\ensuremath{\mathbb{Q}}$. We denote by $\mathcal{C}(X)$ the set of constraints obtained by conjunctions of atomic propositions of the form $C \bowtie 0$, where $C$ is a linear expression over $X$ and $\bowtie \,\in \{>,\geq,=,\leq,<\}$. The subset $\mathcal{C}_0(X)$ of $\mathcal{C}(X)$ contains constraints of the form $x +b \bowtie 0$. An \emph{update} over $X$ is a conjunction (over $X$) of assignments of the form $x := C_x$, where $x$ is a clock and $C_x$ is a linear expression over $X$. The set of all updates over $X$ is written $\mathcal{U}(X)$, with $\mathcal{U}_0(X)$ for the subset containing only assignments of the form $x := 0$ (reset) or of the form $x := x$ (no update). For a linear expression $C$ and an update $u$, the expression $C[u]$ is obtained by ``applying'' $u$ to $C$, \textit{i.e.} substituting each $x$ by $C_x$ in $C$, if $x := C_x$ is the update for $x$ in $u$. For instance, for the set of two clocks $X = \{x_1, x_2\}$, expression $C= x_2 -2x_1 + 3$ and update $u$ defined by $x_1 := 1 \wedge x_2 := 2x_1 +1$, applying $u$ to $C$ yields the expression $C[u] = 2x_1 + 2$. A clock valuation is a mapping $v : X \mapsto \ensuremath{\mathbb{R}}$, with $\vect{0}$ the valuation where all clocks have value $0$. The set of all clock valuations is $\ensuremath{\mathbb{R}}^X$ and we write $v \models \varphi$ when valuation $v$ satisfies the clock constraint $\varphi \in \mathcal{C}(X)$. 
For a valuation $v$, a linear expression $C$ and an update $u$, the value $v(C)$ is obtained by replacing each $x$ in $C$ by $v(x)$, and the valuation $v[u]$ is defined by $v[u](x) = v(C_x)$ for $x$ in $X$ if $x := C_x$ is the update for $x$ in $u$. Observe that an update is performed simultaneously on all clocks. For instance, let $X = \{x_1, x_2, x_3\}$ be a set of three clocks. For valuation $v = (2, 1.5, 3)$ and update $u$ defined by $x_1 := 1 \wedge x_2 := x_2 \wedge x_3 := 3x_2 - x_1$, applying $u$ to $v$ yields the valuation $v[u] = (1, 1.5, 2.5)$. \subsection{Models of timed systems} The model of ITA is based on the principle of multi-task systems with interruptions, in a single processor environment. We consider a set of tasks with different priority levels, where a higher level task represents an interruption for a lower level task. At a given level, exactly one clock is active (rate $1$), while the clocks for tasks of lower levels are suspended (rate $0$), and the clocks for tasks of higher levels are not yet activated and thus contain value $0$. The mechanism is illustrated in \figurename~\ref{fig:levels}, where irrelevant clock values are greyed. An example of such behavior can be produced by the ITA depicted in \figurename~\ref{fig:italevels}, which describes a system that answers requests according to their priority. It starts by receiving a request for a \emph{main} task of priority $1$. The treatment of this task can be interrupted by tasks of priority $2$ or $3$, depending on how far the system is in the execution of the main task. Tasks of priority $2$ and $3$ may generate errors (modeled by an interruption of higher level), after which the system recovers. On this system, deciding if it is possible -- or always the case -- that the main task is executed in less than a certain amount of time would give an insight into the quality of service of the system.
\begin{figure} \caption{Interrupt levels and clocks in an ITA.} \label{fig:levels} \end{figure} \begin{figure} \caption{An ITA that produces -- among others -- the behavior represented in \figurename~\ref{fig:levels}.} \label{fig:italevels} \end{figure} Enabling of a transition depends on the clock valuation. The enabling conditions, called \emph{guards}, are linear constraints on the clock values of levels lower than or equal to the current level: the ones that are relevant before the firing of the transition. Additionally, a transition can update the values of the clocks. If the transition decreases (resp. increases) the level, then each clock which is relevant after (resp. before) the transition can either be left unchanged or take a linear expression of clocks of strictly lower level. Along with its level, each state has a timing policy which indicates whether time may (Lazy, default), may not (Urgent) or must (Delayed) elapse in a state. Note that in TA, this kind of policy can be enforced by an additional clock, while this is not possible here because there is a single clock per level. This additional feature is needed for the definition and further use of the model of ITA$_-$ (see Section~\ref{sec:reachcomplexity}). Note that the class graph construction of Section~\ref{sec:regular} is still valid without these policies. We also add a labeling of states with atomic propositions, in view of interpreting logic formulas on these automata. In the sequel, the level of a transition is the level of its source state. We also say that a transition is lazy (resp. urgent, delayed) if the policy of its source state is lazy (resp. urgent, delayed).
\begin{definition} An \emph{interrupt timed automaton} is a tuple $\mathcal{A}=\langle\Sigma, AP, Q, q_0, F, pol, X, \lambda,$ $lab, \Delta\rangle$, where: \begin{itemize} \item $\Sigma$ is a finite alphabet and $AP$ is a set of atomic propositions, \item $Q$ is a finite set of states, $q_0$ is the initial state, $F \subseteq Q$ is the set of final states, \item $pol: Q \rightarrow \{Lazy,Urgent,Delayed\}$ is the timing policy of states, \item $X=\{x_1, \ldots, x_n\}$ consists of $n$ interrupt clocks, \item the mapping $\lambda : Q \rightarrow \{1, \ldots, n\}$ associates with each state its level and we call $x_{\lambda(q)}$ the \emph{active clock} in state $q$. The mapping $lab : Q \rightarrow 2^{AP}$ labels each state with a subset of $AP$, \item $\Delta \subseteq Q \times \mathcal{C}(X) \times (\Sigma \cup \{\varepsilon\}) \times \mathcal{U}(X) \times Q$ is the set of transitions. Let $q \xrightarrow{\varphi, a, u} q'$ in $\Delta$ be a transition with $k=\lambda(q)$ and $k'=\lambda(q')$. The guard $\varphi$ is a conjunction of constraints $\sum_{j=1}^k a_jx_j +b \bowtie 0$ (involving only clocks from levels less than or equal to $k$). The update $u$ is of the form $\wedge_{i=1}^{n} x_i := C_i$ with: \begin{itemize} \item if $k > k'$, \textit{i.e.} the transition decreases the level, then for $1 \leq i \leq k'$, $C_i$ is either of the form $\sum_{j=1}^{i-1} a_jx_j+b$ or $C_i=x_i$ (unchanged clock value), and for $i > k'$, $C_i=0$; \item if $k \leq k'$ then for $1 \leq i \leq k$, $C_i$ is of the form $\sum_{j=1}^{i-1} a_jx_j +b$ or $C_i=x_i$, and for $i > k$, $C_i=0$. \end{itemize} \end{itemize} \end{definition} A configuration $(q,v,\beta)$ of the associated transition system consists of a state $q$ of the ITA, a clock valuation $v$ and a boolean value $\beta$ expressing whether time has elapsed since the last discrete transition.
This third component is needed to define the semantics according to the policies. \begin{definition} \label{def:semantics} The semantics of an ITA $\mathcal{A}$ is defined by the (timed) transition system $\mathcal{T}_{\mathcal{A}}= (S, s_0, \rightarrow)$. The set $S$ of configurations is $\left\{\!(q,v,\beta) \mid q \in Q, \ v \in \ensuremath{\mathbb{R}}^X, \ \beta \in \{\top,\bot\} \!\right\}\!$, with initial configuration $s_0=(q_0, \vect{0},\bot)$. The relation $\rightarrow$ on $S$ consists of two types of steps: \begin{description}[font=\em] \item[Time steps:] Only the active clock in a state can evolve; all other clocks are suspended. For a state $q$ with active clock $x_{\lambda(q)}$, a time step of duration $d>0$ is defined by $(q,v,\beta) \xrightarrow{d} (q, v',\top)$ with $v'(x_{\lambda(q)})=v(x_{\lambda(q)})+ d$ and $v'(x)=v(x)$ for any other clock $x$. A time step of duration $0$ leaves the system $\mathcal{T}_{\mathcal{A}}$ in the same configuration. When $pol(q)=Urgent$, only time steps of duration $0$ are allowed from $q$. \item[Discrete steps:] A discrete step $(q, v,\beta) \xrightarrow{a} (q', v',\bot)$ can occur if there exists a transition $q \xrightarrow {\varphi, a, u} q'$ in $\Delta$ such that $v \models \varphi$ and $v' = v[u]$. When $pol(q)=Delayed$ and $\beta=\bot$, discrete steps are forbidden. \end{description} \end{definition} The labeling function $lab$ is naturally extended to configurations by $lab(q,v,\beta)= lab(q)$. An ITA $\mathcal{A}_1$ is depicted in \figurename~\ref{fig:exita1}, with two interrupt levels (and two interrupt clocks). A geometric view is given in \figurename~\ref{fig:traj}, with a possible trajectory: first the value of $x_1$ increases from $0$ in state $q_0$ (horizontal line) and, after transition $a$ occurs, its value is frozen in state $q_1$ while $x_2$ increases (vertical line) until reaching the line $x_2 = -\frac{1}{2}x_1 + \frac{1}{2}$.
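The time-step semantics above can be rendered as a small sketch. This is our own toy encoding (not the paper's), storing a valuation as a list indexed by level; the delays $0.6$ and $0.2$ are illustrative values chosen to land on the guard line of the trajectory just described.

```python
# Toy rendering of ITA time steps: only the active clock x_{lambda(q)}
# evolves, all clocks of lower levels are suspended.

def time_step(q_level, v, d):
    """Time step of duration d in a state of level q_level (1-indexed)."""
    v2 = list(v)
    v2[q_level - 1] += d
    return v2

# Trajectory sketch: time elapses in q0 (level 1), then transition a moves
# to q1 (level 2), where x1 is frozen and x2 grows.
v = [0.0, 0.0]
v = time_step(1, v, 0.6)   # x1 increases in q0 (horizontal line)
v = time_step(2, v, 0.2)   # x2 increases in q1, x1 frozen (vertical line)
assert v == [0.6, 0.2]
# This valuation lies on the guard line x2 = -x1/2 + 1/2:
assert abs(v[1] - (-0.5 * v[0] + 0.5)) < 1e-12
```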
The light grey zone defined by $\left(0 < x_1 < 1, \ 0 < x_2 < -\frac{1}{2}x_1 + \frac{1}{2}\right)$ corresponds to the set of valuations reachable in state $q_1$ and from which state $q_2$ is reachable. \begin{figure} \caption{An example of ITA and a possible execution.} \label{fig:exita1} \label{fig:traj} \label{fig:exita1traj} \end{figure} We now briefly recall the classical model of Timed Automata (TA)~\cite{alur94a} as well as the model of Controlled Real-Time Automata (CRTA)~\cite{Zielonka}. Note that in both models, timing policies can be enforced by clock constraints. \begin{definition} A \emph{timed automaton} is a tuple $\mathcal{A}=\langle\Sigma, Q, q_0, F, X, \Delta\rangle$, where $\Sigma$, $Q$, $q_0$, $F$ are defined as in an ITA, $X$ is a set of clocks and the set of transitions is $\Delta \subseteq Q \times \mathcal{C}_0(X) \times (\Sigma \cup \{\varepsilon\}) \times \mathcal{U}_0(X) \times Q$, with guards in $\mathcal{C}_0(X)$ and updates in $\mathcal{U}_0(X)$. \end{definition} The semantics of a timed automaton is also defined as a timed transition system, with the set $Q \times \ensuremath{\mathbb{R}}^X$ of configurations (no additional boolean value). Discrete steps are similar to those of ITA, but in time steps all clocks evolve with the same rate $1$: $(q,v) \xrightarrow{d} (q, v')$ iff for each clock $x$ in $X$, $v'(x) = v(x) + d$. Controlled Real-Time Automata extend TA with the following features: the clocks and the states are partitioned according to colors belonging to a set $\Omega$, and with every state is associated a rational velocity. When time elapses in a state, the active clocks (i.e. those with the color of the state) evolve with rate equal to the velocity of the state, while the other clocks remain unchanged. For the sake of clarity, we now propose a slightly simplified version of CRTA.
\begin{definition} \label{def:crta} A CRTA $\mathcal{A}=(\Sigma, Q, q_0, F, X, up, low, vel, \lambda, \Delta)$ on a finite set $\Omega$ of colors is defined by: \begin{itemize} \item $\Sigma$, the alphabet of actions, \item $Q$, the set of states, with $q_0 \in Q$ the initial state and $F \subseteq Q$ the set of final states, \item $X$ the set of clocks, \item $up$ and $low$, mappings associating with each clock an upper and a lower bound respectively, \item $vel: Q \mapsto \ensuremath{\mathbb{Q}}$ the velocity mapping, \item $\lambda : X \uplus Q \mapsto \Omega$ the coloring mapping and \item $\Delta \subseteq Q \times \mathcal{C}_0(X) \times (\Sigma \cup \{\varepsilon\}) \times \mathcal{U}_0(X) \times Q$ the set of transitions, with guards in $\mathcal{C}_0(X)$ and updates in $\mathcal{U}_0(X)$. \end{itemize} Moreover, the lower and upper bound mappings satisfy $low(x) \leq 0 \leq up(x)$ for each clock $x \in X$, and $low(x) \leq b \leq up(x)$ for each constant $b$ such that $x \bowtie b$ is a constraint in $\mathcal{A}$. \end{definition} The original semantics of CRTA is rather involved in order to obtain decidability of the reachability problem. It ensures that, when entering a state $q$ in which clock $x$ is active, the following conditions on the clock bounds hold: if $vel(q) > 0$ then $x \geq low(x)$ and if $vel(q) < 0$ then $x \leq up(x)$. Instead (and equivalently) we add a syntactical restriction which ensures this behavior. For instance, if a transition with guard $\varphi$ and reset $u$ enters state $q$ with $vel(q)<0$ and if $x$ is the only clock such that $\lambda(x)=\lambda(q)$, then we replace this transition by two transitions: the first one has guard $\varphi \wedge x > up(x)$ and adds $x:=0$ to the reset condition $u$, while the other has guard $\varphi \wedge x \leq up(x)$ and reset $u$. In the general case where $k$ clocks have color $\lambda(q)$, this leads to $2^k$ transitions.
With this syntactical condition, again the only difference from ITA concerns a time step of duration $d$, defined by $(q,v) \xrightarrow{d} (q, v')$, with $v'(x)=v(x)+ vel(q)d$ if $\lambda(x)=\lambda(q)$ and $v'(x)=v(x)$ otherwise. A run of an automaton $\mathcal{A}$ in ITA, TA or CRTA is a finite or infinite path in the associated timed transition system $\mathcal{T}_{\mathcal{A}}$, where (possibly null) time steps and discrete steps alternate. An \emph{accepting run} is a finite run starting in $s_0$ and ending in a configuration associated with a state of $F$. For such a run with label $d_1 a_1 d_2 \ldots d_n a_n$, we say that the word $(a_1,d_1) (a_2,d_1+d_2)\ldots (a_n,d_1+\cdots+d_n)$ (where $\varepsilon$ actions are removed) is accepted by $\mathcal{A}$. The set $\mathcal{L}(\mathcal{A})$ contains the timed words accepted by $\mathcal{A}$ and $Untimed(\mathcal{L}(\mathcal{A}))$, the untimed language of $\mathcal{A}$, contains the projections onto $\Sigma^*$ of the timed words in $\mathcal{L}(\mathcal{A})$. Interrupt Timed Languages or ITL (resp. Timed Languages or TL and Controlled Real-Time Languages or CRTL) denote the family of timed languages accepted by an ITA (resp. a TA and a CRTA). For instance, the language $L_1$ accepted by the ITA $\mathcal{A}_1$ in \figurename~\ref{fig:exita1} is \[L_1 = \mathcal{L}(\mathcal{A}_1) = \{ (a,\tau)(b, 1 + \frac{\tau}{2}) \mid 0 \leq \tau <1 \}.\] Languages of infinite timed words accepted with B\"uchi or Muller conditions could be studied, but such an analysis would have to address technical issues such as Zeno runs and infinite sequences of $\varepsilon$-transitions. In the context of model-checking, we also consider \emph{maximal runs} which are either infinite or such that no discrete step is possible from the last configuration. The set of maximal runs starting from configuration $s$ is denoted by $Exec(s)$. Since maximal runs can be finite or infinite, we do not exclude Zeno behaviors.
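As a quick sanity check on timed-word acceptance, the set definition of $L_1$ can be transcribed directly; the helper name and the numerical tolerance for the rational time constraint are illustrative assumptions.

```python
# Membership test for the timed language of the example ITA A_1:
#   L_1 = { (a, t)(b, 1 + t/2) | 0 <= t < 1 }
# A timed word is a list of (action, absolute_date) pairs.

def in_L1(word):
    if len(word) != 2:
        return False
    (a1, t1), (a2, t2) = word
    return (a1 == "a" and a2 == "b"
            and 0 <= t1 < 1
            and abs(t2 - (1 + t1 / 2)) < 1e-9)

print(in_L1([("a", 0.5), ("b", 1.25)]))   # True:  b occurs at 1 + 0.5/2
print(in_L1([("a", 1.0), ("b", 1.5)]))    # False: tau must be strictly below 1
```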
We use the notion of (totally ordered) positions (which allow us to consider several discrete actions occurring simultaneously) along a maximal run~\cite{HNSY94}: for a run $\rho$, we denote by $<_{\rho}$ the strict order over positions. For a position $\pi$ along $\rho$, the corresponding configuration is denoted by $s_{\pi}$, the prefix of $\rho$ up to $\pi$ is written $\rho^{\leq \pi}$ and its duration, $Dur\left(\rho^{\leq\pi}\right)$, is the sum of all delays along the finite run $\rho^{\leq \pi}$. Similarly, the suffix of $\rho$ starting from $\pi$ is denoted by $\rho^{\geq \pi}$. For two positions $\pi \leq_{\rho} \pi'$, the subrun of $\rho$ between these positions is written $\rho_{[\pi,\pi']}$; its duration is $Dur\left(\rho^{\leq \pi'}\right)-Dur\left(\rho^{\leq \pi}\right)$. The length of $\rho$, denoted by $|\rho|$, is the number of discrete transitions occurring in $\rho$. \section{Regularity of untimed ITL} \label{sec:regular} We prove in this section that the untimed language of an ITA is regular. Similarly to TA (and to CRTA), the proof is based on the construction of a (finite) class graph which is time-abstract bisimilar to the transition system $\mathcal{T}_{\mathcal{A}}$. This result also holds for infinite words with standard B\"uchi conditions. As a consequence, we obtain decidability of the reachability problem, as well as decidability for plain \textsf{CTL}$^*$ model-checking. The construction of classes is much more involved than in the case of TA. More precisely, it depends on the expressions occurring in the guards and updates of the automaton (while in TA it depends only on the maximal constant occurring in the guards). We associate with each state $q$ a set of expressions $Exp(q)$ with the following meaning: the values of clocks giving the same ordering of these expressions correspond to a class. In order to define $Exp(q)$, we first build a family of sets $\{E_k\}_{1\leq k\leq n}$. Then $Exp(q)= \bigcup_{k\leq \lambda(q)} E_k$ (recall that $\lambda(q)$ is the index of the active clock in state $q$). Finally, in Theorem~\ref{prop:reach} we show how to build the class graph, which proves the regularity of the untimed language. This immediately yields the reachability procedure given in Proposition~\ref{prop:reachita}. \subsection{Construction of $\{E_k\}_{k\leq n}$} \label{subsec:contructionexpressions} We first introduce an operation, called \emph{normalization}, on expressions relative to some level. As explained in the construction below, this operation will be used to order expression values at a given level. \begin{definition}[Normalization] Let $C=\sum_{i\leq k}a_ix_i+b$ be an expression over $X_k= \{x_i \mid i \leq k\}$; the \emph{$k$-normalization} of $C$, ${\tt norm}(C,k)$, is defined by: \begin{itemize} \item if $a_k\neq 0$ then ${\tt norm}(C,k)=x_k+(1/a_k)(\sum_{i<k}a_ix_i+b)$; \item else ${\tt norm}(C,k)=C$. \end{itemize} \end{definition} Since guards are linear expressions with rational constants, we can assume that in a guard $C \bowtie 0$ occurring in a transition outgoing from a state $q$ with level $k$, the expression $C$ is either $x_k+\sum_{i<k}a_ix_i+b$ (by $k$-normalizing the expression and, if necessary, changing the comparison operator) or $\sum_{i<k}a_ix_i+b$. It is thus written as $\alpha x_k+\sum_{i<k}a_ix_i+b$, with $\alpha \in \{0,1\}$. The construction of $\{E_k\}_{k\leq n}$ proceeds top-down from level $n$ to level $1$, after initializing $E_k=\{x_k,0\}$ for all $k$. As we shall see below, when handling level $k$, we add new terms to $E_i$ for $1\leq i\leq k$. These expressions are the ones needed to compute a (pre)order on the expressions in $E_k$. \begin{itemize} \item At level $k$, first, for every expression $\alpha x_k+\sum_{i<k}a_ix_i+b$ (with $\alpha \in \{0,1\}$) occurring in a guard of an edge leaving a state of level $k$, we add $-\sum_{i<k}a_ix_i-b$ to $E_k$. \item Then we iterate the following procedure until no new term is added to any $E_i$ for $1\leq i\leq k$. \begin{enumerate} \item Let $q \xrightarrow{\varphi,a,u} q'$ with $\lambda(q)\geq k$ and $\lambda(q')\geq k$. Let $C \in E_{k}$; then we add $C[u]$ to $E_{k}$ (recall that $C[u]$ is the expression obtained by applying update $u$ to $C$). \item Let $q \xrightarrow{\varphi,a,u} q'$ with $\lambda(q) < k$ and $\lambda(q') \geq k$. Let $C$ and $C'$ be two different expressions in $E_{k}$. We compute $C''={\tt norm}(C[u]-C'[u],\lambda(q))$, choosing an arbitrary order between $C$ and $C'$ in order to avoid redundancy. Let us write $C''$ as $\alpha x_{\lambda(q)}+\sum_{i<\lambda(q)}a_ix_i+b$ with $\alpha \in \{0,1\}$. Then we add $-\sum_{i<\lambda(q)}a_ix_i-b$ to $E_{\lambda(q)}$. \end{enumerate} \end{itemize} We illustrate this construction of expressions for the automaton $\mathcal{A}_1$ of \figurename~\ref{fig:exita1}. Initially, we have $E_1 = \{0,x_1\}$ and $E_2 = \{0,x_2\}$. When treating level $2$, first, expression $-\frac12 x_1 + 1$ is added to $E_2$ as normalization of the guard $x_1 + 2x_2=2$. Then the transition labeled by $a$ updates $x_2$ (by resetting it to $0$). As a result, we have to add to $E_1$ all differences of expressions of $E_2$ updated by $x_2 := 0$. This only produces expression $-\frac12 x_1 +1 - 0$, which is normalized into $x_1 - 2$; thus expression $2$ is added to $E_1$. When treating level $1$, expression $1$ from the guard of transition $a$ is added to $E_1$. As a result, we obtain $E_1 = \{x_1, 0, 1, 2\}$ and $E_2 = \{x_2, 0, -\frac12 x_1 +1\}$. \begin{lemma} \label{prop:terminate} The construction procedure of $\{E_k\}_{k\leq n}$ terminates and the size of every $E_k$ is bounded by $(E+2)^{2^{n(n-k+1)}+1}$, where $E$ is the number of edges of the ITA. \end{lemma} \begin{proof} Given some $k$, we prove the termination of the stage relative to $k$. Observe that the second step only adds new expressions to $E_{k'}$ for $k'<k$; thus the two steps can be ordered. Let us prove the termination of the first step of the saturation procedure. We define $E_{k}^{0}$ as the set $E_k$ at the beginning of this stage and $E_{k}^{i}$ as this set after insertion of the $i^{th}$ item in it. With each added item $C[u]$, one can associate its \emph{father} $C$. Thus we can view $E_{k}$ as an increasing forest with finite degree (due to the finiteness of the set of edges) and finitely many roots. Assume that this step does not terminate. Then we have an infinite forest and, by K\"onig's lemma, it has an infinite branch $C_0,C_1,\ldots$ where $C_{i+1}=C_i[u_i]$ for some update $u_i$ such that $C_{i+1}\neq C_i$. Observe that the number of updates that change the variable $x_k$ is either $0$ or $1$, since once $x_k$ disappears it cannot appear again. We split the branch into two parts, before and after this update, or we still consider the whole branch if there is no such update. In these (sub)branches, we conclude with the same reasoning that there is at most one update that changes the variable $x_{k-1}$. Iterating this process, we conclude that the number of updates is at most $2^k-1$ and the length of the branch is at most $2^k$, a contradiction. For the sake of readability, we set $B=E+2$. The final size of $E_{k}$ is thus at most $E_{k}^{0}\times B^{2^k}$ since the width of the forest is bounded by $B$. In the second step, we add at most $B \times (|E_{k}|\times(|E_{k}|-1))/2$ expressions to $E_{i}$ for every $i<k$. This concludes the proof of termination. We now prove by a careful backward induction that, as soon as $n\geq 2$, $|E_{k}|\leq B^{2^{n(n-k+1)}+1}$.
The doubly exponential size of $E_n$ (proved above) is propagated downwards by the saturation procedure. We define $p_k = |E_{k}|$. \paragraph{Basis case $k=n$.} We have $p_n \leq p_n^{0}\times B^{2^n}$ where $p_n^{0}$ is the number of guards of the outgoing edges from states of level $n$. Thus $p_n \leq B\times B^{2^n}= B^{2^n+1}= B^{2^{n(n-n+1)}+1}$, which is the claimed bound. \paragraph{Inductive case.} Assume that the bound holds for $k < j \leq n$. Due to all executions of the second step of the procedure at strictly higher levels, $p_k^0$ expressions were added to $E_k$, with: \begin{eqnarray*} p_k^{0} &\leq& B + B \times ((p_{k+1}\times (p_{k+1}-1))/2 + \cdots + (p_{n}\times (p_{n}-1))/2)\\ p_k^{0} &\leq& B + B \times (B^{2^{n(n-k)+1}+2} + \cdots + B^{2^{n+1}+2})\\ p_k^{0} &\leq& B \times (n-k+1) \times B^{2^{n(n-k)+1}+2}\\ p_k^{0} &\leq& B \times B^n \times B^{2^{n(n-k)+1}+2} \quad\textrm{(here we use } B \geq 2\textrm{)}\\ p_k^{0} &\leq& B^{2^{n(n-k)+1}+n+3} \end{eqnarray*} Taking into account the first step of the procedure for level $k$, we have: \[p_k \leq B^{2^{n(n-k)+1}+2^k+n+3}.\] Let us consider the term $\delta = 2^{n(n-k+1)}+1-(2^{n(n-k)+1}+2^k+n+3)$. Since $k < n$, \begin{eqnarray*} \delta &\geq& (2^{n-1}-1)2^{n(n-k)+1}-(2^k+n+2)\\ \delta &\geq& (2^{n-1}-1)2^{n(n-k)+1}-(2^{n-1}+2^n)\\ \delta &\geq& (2^{n-1}-1)2^{n(n-k)+1}-2^{n+1} \geq 0 \end{eqnarray*} Thus $p_k \leq B^{2^{n(n-k)+1}+2^k+n+3} \leq B^{2^{n(n-k+1)}+1} = (E+2)^{2^{n(n-k+1)}+1}$, which is the claimed bound. \qed \end{proof} \subsection{Construction of the class automaton} \label{subsec:contructiongraph} In order to analyze the size of the class automaton defined below, we recall and adapt a classical result about partitions of $n$-dimensional Euclidean spaces. \begin{definition} Let $\{H_k\}_{1\leq k \leq m}$ be a family of hyperplanes of $\ensuremath{\mathbb{R}}^n$.
A \emph{region} defined by this family is a connected component of $\ensuremath{\mathbb{R}}^n \setminus \bigcup_{1\leq k \leq m} H_k$. An \emph{extended region} defined by this family is a connected component of $\bigcap_{k \in I} H_k \setminus \bigcup_{k \notin I} H_k$ where $I \subseteq \{1,\ldots, m\}$. \end{definition} \begin{proposition}[\cite{zas75}] The number of regions defined by the family $\{H_k\}_{1\leq k \leq m}$ is at most $\sum_{i=0}^n \binom{m}{i}$. \end{proposition} We derive from this proposition: \begin{corollary} \label{cor:zas} The number of extended regions defined by the family $\{H_k\}_{1\leq k \leq m}$ is at most $\sum_{p=0}^n\binom{m}{p}\sum_{i=0}^{n-p} \binom{m-p}{i}\leq e^2m^n$. \end{corollary} \begin{proof} Observe that an extended region is a region inside an intersection of at most $n$ hyperplanes (by removing redundant hyperplanes). Thus counting the number of such intersections and applying the previous proposition yields the following formula: \[ \sum_{p=0}^n\binom{m}{p}\sum_{i=0}^{n-p} \binom{m-p}{i}\leq \sum_{p=0}^n\frac{m^p}{p!}\sum_{i=0}^{n-p}\frac{m^{n-p}}{i!}= m^n \sum_{p=0}^n\frac{1}{p!}\sum_{i=0}^{n-p}\frac{1}{i!}\leq e^2m^n\] \qed \end{proof} \begin{theorem} \label{prop:reach} The untimed language of an ITA is regular. \end{theorem} \begin{proof} First, we assume that the policy of every state is lazy. At the end of the proof, we explain how to adapt the construction for states with urgent or delayed policies. \paragraph{Class definition.} Let $\mathcal{A}$ be an ITA with $E$ transitions and $n$ clocks; the decision algorithm is based on the construction of a (finite) class graph which is time-abstract bisimilar to the transition system $\mathcal{T}_{\mathcal{A}}$. A class is a syntactical representation of a subset of reachable configurations.
More precisely, it is defined as a pair $R=(q,\{\preceq_k\}_{1 \leq k \leq \lambda(q)})$ where $q$ is a state and $\preceq_k$ is a total preorder over $E_k$, for $1 \leq k \leq \lambda(q)$. The class $R$ describes the set of configurations: \[\sem{R}=\{(q,v,\beta) \mid \beta \in \{\top,\bot\},\; \forall k \leq \lambda(q)\ \forall g,h \in E_k,\ g[v] \leq h[v] \mbox{~iff~} g \preceq_k h\}\] The initial state of this graph is the class $R_0$ with $\sem{R_0}$ containing $(q_0,{\bf 0},\bot)$, which can be straightforwardly determined. For example, for ITA $\mathcal{A}_1$ of \figurename~\ref{fig:exita1}, the initial class is $R_0=(q_0, Z_0)$ with $Z_0: x_1=0 < 1 < 2$. The final states are all $R=\left(q,\{\preceq_k\}_{1 \leq k \leq \lambda(q)}\right)$ with $q \in F$. Observe that, for a fixed state, the set of configurations $\sem{R}$ of a non-empty class $R$ is exactly an extended region associated with the hyperplanes defined by the comparison of two expressions of some $E_k$. Since $(E+2)^{2^{n^2}+1}$ is an upper bound of the number of expressions of any level, $m=(E+2)^{2^{n^2+1}+2}$ is an upper bound of the number of hyperplanes. So, using Corollary~\ref{cor:zas}, the number of semantically different classes for a given state is bounded by: $$e^2m^n=e^2(E+2)^{2^{n^2+1}n+2n}$$ Since one can test semantical equality between classes in polynomial time w.r.t. their size~\cite{RoTeVi97}, we implicitly consider in the sequel of the proof classes modulo semantical equivalence. As usual, there are two kinds of transitions in the graph, corresponding to discrete steps and time steps. \paragraph{Discrete step.} Let $R=(q,\{\preceq_k\}_{1 \leq k \leq \lambda(q)})$ and $R'=(q',\{\preceq'_k\}_{1 \leq k \leq \lambda(q')})$ be two classes. There is a transition $R \xrightarrow{e} R'$ for a transition $e : q \xrightarrow{\varphi,a,u} q'$ if there is some $(q,v) \in\, \sem{R}$ and $(q',v') \in\, \sem{R'}$ such that $(q,v) \xrightarrow{e} (q',v')$.
In this case, for all $(q,v) \in\, \sem{R}$ there is a $(q',v') \in\, \sem{R'}$ such that $(q,v) \xrightarrow{e} (q',v')$. This can be decided as follows. \subparagraph{Firability condition.} Write $\varphi=\bigwedge_{j \in J} C_j \bowtie_j 0$. Since we assumed normalized guards, for every $j$, $C_j=\alpha x_k+\sum_{i<k}a_ix_i+b$ (with $\alpha \in \{0,1\}$ and $k = \lambda(q)$). By construction, $C'_j=-\sum_{i<k}a_ix_i-b \in E_k$. For each $j \in J$, we define a condition depending on $\bowtie_j$. For instance, if the conjunct is $C_j \leq 0$, we require that $\alpha x_k \preceq_k C'_j$; if it is $C_j > 0$, we require that $\alpha x_k \npreceq_k C_j' \wedge C_j' \preceq_k \alpha x_k$. \subparagraph{Successor definition.} $R'$ is defined as follows. Let $k \leq \lambda(q')$ and $g',h' \in E_k$. \begin{enumerate} \item Either $k \leq \lambda(q)$; by construction, $g'[u],h'[u] \in E_k$ and $g' \preceq'_k h'$ iff $g'[u] \preceq_k h'[u]$. \item Or $k > \lambda(q)$; let $D=g'[u]-h'[u]=\sum_{i\leq\lambda(q)}c_ix_i+d$ and $C={\tt norm}(D,\lambda(q))$, and write $C=\alpha x_{\lambda(q)}+\sum_{i<\lambda(q)}a_ix_i+b$ (with $\alpha \in \{0,1\}$). By construction, $C'=-\sum_{i<\lambda(q)}a_ix_i-b \in E_{\lambda(q)}$.\\ When $c_{\lambda(q)}\geq 0$ then $g' \preceq'_k h'$ iff $\alpha x_{\lambda(q)} \preceq_{\lambda(q)} C' $.\\ When $c_{\lambda(q)}< 0$ then $g' \preceq'_k h'$ iff $C' \preceq_{\lambda(q)} \alpha x_{\lambda(q)}$. \end{enumerate} By definition of $\sem{\,\cdot\,}$, \begin{itemize} \item For any $(q,v) \in \sem{R}$, if there exists $(q,v) \xrightarrow{e} (q',v')$ then the firability condition is fulfilled and $(q',v')$ belongs to $\sem{R'}$. \item If the firability condition is fulfilled then for each $(q,v) \in \sem{R}$ there exists $(q',v') \in \;\sem{R'}$ such that $(q,v) \xrightarrow{e} (q',v')$. \end{itemize} \paragraph{Time step.} Let $R=(q,\{\preceq_k\}_{1 \leq k \leq \lambda(q)})$.
There is a transition $R \xrightarrow{succ} Post(R)$ for $Post(R)=(q,\{\preceq'_k\}_{1 \leq k \leq \lambda(q)})$, the time successor of $R$, which is defined as follows. For every $i < \lambda(q)$, $\preceq'_i=\preceq_i$. Let $\sim$ be the equivalence relation $\preceq_{\lambda(q)} \cap \preceq^{-1}_{\lambda(q)}$ induced by the preorder. On equivalence classes, this (total) preorder becomes a (total) order. Let $V$ be the equivalence class containing $x_{\lambda(q)}$. \begin{enumerate} \item If $V=\left\{x_{\lambda(q)}\right\}$ and it is the greatest equivalence class, then $\preceq'_{\lambda(q)}=\preceq_{\lambda(q)}$ (thus $Post(R)=R$). \item If $V=\left\{x_{\lambda(q)}\right\}$ and it is not the greatest equivalence class, let $V'$ be the next equivalence class. Then $\preceq'_{\lambda(q)}$ is obtained by merging $V$ and $V'$, and preserving $\preceq_{\lambda(q)}$ elsewhere. \item Otherwise, $V$ is not a singleton. Then we split $V$ into $V\setminus \left\{x_{\lambda(q)}\right\}$ and $\left\{x_{\lambda(q)}\right\}$ and ``extend'' $\preceq_{\lambda(q)}$ by $V \setminus \left\{x_{\lambda(q)}\right\} \preceq'_{\lambda(q)} \left\{x_{\lambda(q)}\right\}$. \end{enumerate} By definition of $\sem{\,\cdot\,}$, for each $(q,v) \in \sem{R}$, there exists $d>0$ such that $(q,v+d) \in \sem{Post(R)}$ and, for each $d'$ with $0 \leq d' \leq d$, $(q,v+d') \in \sem{R} \cup \sem{Post(R)}$. We now explain how the policy is handled. Given a state $q$ such that $pol(q)=Urgent$, for every class $R=(q,\{\preceq_k\}_{1 \leq k \leq \lambda(q)})$ we delete the time steps outgoing from $R$. The case of a state $q$ such that $pol(q)=Delayed$ is a little more involved. First we partition classes into \emph{time open} classes, where from every configuration of the class some positive amount of time may elapse while remaining in the same class, and \emph{time closed} classes. The partition is performed w.r.t.
the equivalence class $V$ of $x_{\lambda(q)}$ for the relation $\sim$ (see above in the proof). The class $R$ is time open iff $V=\{x_{\lambda(q)}\}$. Then we successively replace every time closed class $R$ by two copies $R^-$ and $R^+$, which capture whether time has elapsed since the last discrete step. Thus, a time edge entering $R$ is redirected towards $R^+$ while a discrete edge entering $R$ is redirected towards $R^-$. A time step $R \xrightarrow{succ} R'$ is replaced by two transitions $R^- \xrightarrow{succ} R'$ and $R^+ \xrightarrow{succ} R'$, while a discrete step $R \xrightarrow{e} R'$ is replaced by the transition $R^+ \xrightarrow{e} R'$. Time open classes allow time elapsing, hence no splitting is required for these classes. Since there is at most one time edge outgoing from a class, the number of edges of the new graph is at most twice the number of edges in the original graph. \qed \end{proof} \begin{proposition} \label{prop:reachita} The reachability problem for Interrupt Timed Automata is decidable and belongs to \emph{2-EXPTIME}, and to \emph{PTIME} when the number of clocks is fixed. \end{proposition} \begin{proof} The reachability problem is solved by building the class graph and applying a standard reachability algorithm. Since the number of semantically different classes is at most doubly exponential in the size of the model and the semantical equivalence can be checked in polynomial time w.r.t. the size of a class (also doubly exponential), this leads to a 2-EXPTIME complexity. When the number of clocks is fixed, the size of the graph is at most polynomial w.r.t. the size of the problem, leading to a PTIME procedure. No complexity gain can be obtained by a non-deterministic search without building the graph, since the size of the graph is only polynomial w.r.t. the size of a class. \qed \end{proof} \noindent \textbf{Remarks.} This result should be contrasted with the similar one for TA.
The reachability problem for TA is PSPACE-complete and thus less costly to solve than for ITA. However, fixing the number of clocks does not reduce the complexity for TA (when this number is greater than or equal to $3$) while this problem now belongs to PTIME for ITA. Summarizing, the main source of complexity for ITA is the number of clocks, while in TA it is the binary encoding of the constants~\cite{courcoubetis92}. Since the construction of the graph depends on a set of expressions, there is no notion of \emph{granularity} as in Timed Automata. When the only guards are comparisons to constants and the only updates are resets of clocks (as in Timed Automata), the abstraction obtained is coarser than the region abstraction of~\cite{alur94a}: it consists only of products of intervals. \subsection{Example} We illustrate the construction of a class automaton for the automaton $\mathcal{A}_1$ of \figurename~\ref{fig:exita1}. The resulting class automaton is depicted on \figurename~\ref{fig:regaut}, where dashed lines indicate time steps. \begin{figure} \caption{The class automaton for $\mathcal{A}_1$.} \label{fig:regaut} \end{figure} Recall that we obtained $E_1= \{ x_1, 0, 1, 2 \}$ and $E_2= \left\{ x_2, 0, -\frac{1}{2}x_1 + 1\right\}$. In state $q_0$, the only relevant clock is $x_1$ and the initial class is $R_0=(q_0, Z_0)$ with $Z_0: x_1=0 < 1 < 2$. Its time successor is $R_0^1=(q_0, Z_0^1)$ with $Z_0^1: 0 < x_1 < 1 < 2$. Transition $a$ leading to $q_1$ can be taken from both classes, but not from the next time successors $R^2_0=(q_0, 0 < x_1= 1 < 2)$, $R^3_0=(q_0, 0 < 1 < x_1 < 2)$, $R^4_0=(q_0, 0 < 1 < x_1 = 2)$, or $R^5_0=(q_0, 0 < 1 < 2 < x_1)$. Transition $a$ switches from $R_0$ to $R_1=\left(q_1,Z_0,x_2 = 0 < 1\right)$, because $x_1=0$, and from $R_0^1$ to $R^1_1=\left(q_1, Z_0^1, x_2=0 < -\frac{1}{2}x_1 + 1\right)$.
Transition $b$ is fired from those time successors for which $x_2= -\frac{1}{2}x_1 + 1$. On the geometric view of \figurename~\ref{fig:traj}, the displayed trajectory corresponds to the following path in the class automaton: \begin{eqnarray*} &&R_0 \rightarrow R_0^1 \xrightarrow{a} R_1^1 \rightarrow \left(q_1, Z_0^1,0<x_2<-\frac12 x_1 + 1\right) \rightarrow \left(q_1, Z_0^1,0<x_2=-\frac12 x_1 + 1\right)\\ &&\xrightarrow{b} \left(q_2, Z_0^1,0<x_2=-\frac12 x_1 + 1\right) \end{eqnarray*} \section{A simpler model}\label{sec:reachcomplexity} \subsection{Definition of ITA$_-$} We introduce a restricted version of ITA, called ITA$_-$, which is interesting both from a theoretical and a practical point of view. When modeling interruptions in real-time systems, the clock associated with some level measures the time spent in this level or, more generally, the time spent by some tasks at this level. Thus, when going to a higher level, this clock is not updated until returning to this level. The ITA$_-$ model takes this feature into account. Moreover, it turns out that the reachability problem for ITA$_-$ can be solved more efficiently. This also provides a better complexity upper bound for the reachability problem on ITA (in the general case). \begin{definition} The subclass ITA$_-$ of ITA is defined by the following restriction on updates. For a transition $q \xrightarrow{\varphi, a, u} q'$ of an automaton $\mathcal{A}$ in ITA$_-$ (with $k=\lambda(q)$ and $k'=\lambda(q')$), the update $u$ is of the form $\bigwedge_{i=1}^{n} x_i := C_i$ with: \begin{itemize} \item if $k > k'$, then for $1 \leq i \leq k'$, $C_i = x_i$ and for $k'+1 \leq i \leq n$, $C_i = 0$, \textit{i.e.} the only updates are the resets of the now irrelevant clocks; \item if $k \leq k'$ then $C_k$ is of the form $\sum_{j=1}^{k-1} a_jx_j +b$ or $C_k=x_k$. For $k < i \leq k'$, $C_i=0$ and $C_i=x_i$ otherwise.
\end{itemize} \end{definition} Thus, complex updates appear only in transitions increasing the level, and only for the active clock of the transition level. The proof of the following result is based on Propositions~\ref{proposition:itaitamoins} and~\ref{proposition:reachitamoins}, proved in the next two sections. \begin{theorem}\label{thm:optreach} The reachability problem for ITA belongs to \emph{NEXPTIME}. \end{theorem} \begin{proof} Given an ITA $\mathcal A$ with a transition set of size $E$ and constants coded over $b$ bits, we build the ITA$_-$ $\mathcal A'$ of Proposition~\ref{proposition:itaitamoins}. Then we apply to $\mathcal A'$ the reachability procedure of Proposition~\ref{proposition:reachitamoins}. In this procedure, we consider paths of length bounded by $(E'+n)^{3n}$, where $E'$ is the number of transitions of $\mathcal A'$. Since $E' \leq 2^{4b \cdot E \cdot n^2}$ (as shown in the proof of Proposition~\ref{proposition:itaitamoins}), the length of the paths considered is bounded by \[(E'+n)^{3n} \leq \left(2^{4b \cdot E \cdot n^2}+n\right)^{3n} \leq (n+2)^{12 b \cdot E \cdot n^3}\] which establishes the claimed upper bound. \qed \end{proof} \subsection{From ITA to ITA$_-$} \label{sec:relexp} In this subsection we prove that ITA and ITA$_-$ are equivalent w.r.t.\ the associated (timed) languages. \begin{proposition} \label{proposition:itaitamoins} Given an ITA $\mathcal A$, one can build an automaton $\mathcal A'$ in ITA$_-$ accepting the same timed language and with the same clocks, such that its number of edges (resp. states) is exponential w.r.t. the number of edges (resp. states) in $\mathcal A$, and polynomial when the number of clocks is fixed.
\end{proposition} \begin{proof} Starting from ITA $\mathcal{A}=\langle\Sigma, AP, Q, q_0, F, pol, X, \lambda, lab, \Delta\rangle$, the construction of automaton $\mathcal A'$ relies on memorizing at a given level $i$, for every clock $x_{j}$ at a lower level, an expression depending on $x_1,\ldots,x_{j-1}$, corresponding to the delayed update of $x_{j}$. This expression is used later to replace the value of $x_j$ in guards and to restore its correct value by an update after decreasing to level $j$. To this end, we associate with every pair of levels $i \geq j$ a set of expressions $F_{i,j}$ inductively defined by: \begin{itemize} \item $F_{i,i}=\{x_i\}$ \item $\forall i>j,\ F_{i,j}=F_{i-1,j} \cup \{e[\{x_k \leftarrow e_k\}_{k<j}] \mid e$ is the expression of an update of $x_j$ by an edge of level $i$ and $\forall k,\ e_k \in F_{i,k}\}$ \end{itemize} We write $F_j = F_{n,j} = \bigcup_{i=j}^n F_{i,j}$. The set $F_j$ thus contains all expressions of updates of $x_j$ that appear at higher levels. Although the number of expressions is syntactically doubly exponential w.r.t.\ the number of clocks, one can show that the number of \emph{distinct} expressions is only singly exponential. First we assume that ITA $\mathcal{A}$ has only integral constants; the case of rational constants is handled at the end of the proof. It can be shown that every expression $e_k$ of $F_k$ can be written \[e_k = \sum_{i_0,\dots,i_p \in \textrm{sub}(k)} \alpha_{k,i_p} \cdot \alpha_{i_p,i_{p-1}} \cdots \alpha_{i_1,i_0} \cdot x_{i_0}\] with the convention that $x_0$ is the constant $1$, and where $\textrm{sub}(k)$ is the set of all (ordered) subsequences of $0,\dots,k-1$ and $\alpha_{j,i}$ is the coefficient of $x_i$ in some update of $x_j$. For the family $\alpha$ of all integers $\alpha_{j,i}$, assume that these constants are coded over $b_\alpha$ bits each (including the sign of the coefficient).
The expression $x_{i_0}$ can also be coded into an integer over $\log_2(n)$ bits (with a special symbol to indicate that it is the expression of a clock rather than a constant). Let $b = \max(b_\alpha,\log_2(n)+1)$ be the (maximal) number of bits used to code a coefficient. Then each term of the sum is a product of at most $k$ such coefficients, and can therefore be coded over $kb$ bits. Summing at most $2^k$ such products yields an integer that can be coded over $kb+k$ bits. Thus there can be at most $2^{k(b+1)}$ different expressions in $F_k$. Automaton $\mathcal A'$ is then defined as follows. \begin{itemize} \item The set of states is \[\begin{array}{rcl} Q' &=& \{(q^+,e_1,\ldots,e_{i-1})\mid q \in Q, \ \lambda(q)=i \mbox{ and } \forall j,\ e_j \in F_j \} \\ & & \cup \{(q^-,e_1,\ldots,e_{i}) \mid q \in Q, \ \lambda(q)=i \mbox{ and } \forall j,\ e_j \in F_j \}, \end{array}\] with $pol(q^+,e_1,\ldots,e_{i-1})=pol(q)$ and $pol(q^-,e_1,\ldots,e_{i})=Urgent$.\\ Note that the sequence is empty if $i=1$. Moreover: \[\lambda(q^+,e_1,\ldots,e_{i-1})=\lambda(q^-,e_1,\ldots,e_{i})=\lambda(q).\] \item The initial state of $\mathcal A'$ is $(q_0^+,x_1,\ldots, x_{i-1})$ if $\lambda(q_0)=i$. The final states of $\mathcal A'$ are the states with first component $q^+$ for $q \in F$. \item Let $q \xrightarrow{\varphi,a,u} q'$ be a transition in $\mathcal{A}$ such that $\lambda(q)=i$, $\lambda(q')=i'$ and $u$ is defined by $\bigwedge_{j=1}^{i} x_j := C_j$. \begin{itemize} \item[$\bullet$] If $i \leq i'$, then for every $(q^+,e_1,\ldots,e_{i-1})$ there is a transition $$(q^+,e_1,\ldots,e_{i-1}) \xrightarrow{\varphi',a,u'} (q'^+,e'_1,\ldots,e'_{i'-1})$$ in $\mathcal{A}'$ with $\varphi'=\varphi(\{x_j \leftarrow e_j\}_{j<i})$; update $u'$ is defined by $x_i:=C_i[\{x_j \leftarrow e_j\}_{j<i}]$; for all $j<i$, $e'_j=C_j[\{x_k \leftarrow e_k\}_{k<i}]$ and for all $j$ such that $i\leq j<i'$, $e'_j=x_j$.
\item[$\bullet$] If $i > i'$ then for every $(q^+,e_1,\ldots,e_{i-1})$ there is a transition $$(q^+,e_1,\ldots,e_{i-1}) \xrightarrow{\varphi',a,u'} (q'^-,e'_1,\ldots,e'_{i'})$$ in $\mathcal{A}'$ with $\varphi'=\varphi(\{x_j \leftarrow e_j\}_{j<i})$, update $u'$ contains only the trivial updates $x_j := x_j$ for all clocks and for all $j\leq i'$, $e'_j=C_j[\{x_k \leftarrow e_k\}_{k<i}]$. \item[$\bullet$] For every $(q^-,e_1,\ldots,e_{i})$ there is in $\mathcal{A}'$ a transition \[(q^-,e_1,\ldots,e_{i}) \xrightarrow{true,\varepsilon,x_{i}:=e_{i}} (q^+,e_1,\ldots,e_{i-1}).\] \end{itemize} \end{itemize} In words, given a transition, the guard is modified according to these expressions. The modification of the update consists only in applying the update at the current level and taking into account the other updates in the expressions labeling the destination state. When the transition increases the level, the expression associated with a new ``frozen'' clock ($x_j$ for $i\leq j <i'$) is the clock itself. The urgent states $(q^-,-)$ are introduced for handling the case of a transition that decreases the level. In this case, one reaches such a state that also memorizes the expression of the clock at the current level. Note that the memorized expressions can correspond to an update performed at any (higher) level. From this state a single transition must be (immediately) taken, whose effect is to perform the update corresponding to the memorized expression. It is routine to check that the languages of the two automata are identical. Each transition in $\mathcal{A}$ is replaced by several transitions in $\mathcal{A}'$, whose number is bounded by the number of expressions that can be attached to the source of the original transition. In addition, transitions decreasing the level are further ``split'' through states $(q^-,-)$.
Thus the number $E'$ of transitions in $\mathcal{A}'$ is bounded by \begin{eqnarray*} E' &\leq& 2\cdot E \cdot |F_n|^n \\ &\leq& 2 \cdot E \cdot \left(2^{n(b(E+1)+1)}\right)^n \\ &\leq& 2 \cdot E \cdot 2^{n^2(b(E+1)+1)} \\ &\leq& 2^{n^2(b(E+1)+1)+1+\log_2(E)} \\ &\leq& 2^{n^2((b+1)(E+1)+1)} \\ E' &\leq& 2^{4b \cdot E \cdot n^2} \end{eqnarray*} (provided $E\geq 2$). This yields the exponential complexity for the number of transitions. The case of the number of states is similar. In the case when there are rational constants, assume each constant is coded with a pair $(r,d)$ of numerator and denominator. Assume each $r$ and $d$ can be coded over $b$ bits. We compute the \emph{lcm} $\delta$ of all denominators: since there are at most $E$ constants ($E$, the size of $\Delta$, bounds the number of guards and updates), $\delta$ can be coded over $Eb$ bits. We consider ITA $\mathcal{A}_\delta$ which is $\mathcal{A}$ where all constants are multiplied by $\delta$. Thus a constant of $\mathcal{A}_\delta$ is an integer that can be coded over $b' = Eb +b = b(E+1)$ bits. The above bound on the number of expressions applies to $\mathcal{A}_\delta$. Note that after the construction of $\mathcal{A}_\delta'$, $\mathcal{A}'$ can be obtained by dividing each constant in $\mathcal{A}_\delta'$ by $\delta$. \qed \end{proof} \paragraph{Example.} We illustrate this construction on ITA $\mathcal{A}_2$ of \figurename~\ref{fig:exitatoitamoins}. The sets of expressions are computed as on \tablename~\ref{tab:itatoitamoins} and the resulting ITA$_-$ $\mathcal{A}'_2$ is depicted on \figurename~\ref{fig:exitamoinsfromita}.
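To make the inductive definition of the sets $F_{i,j}$ concrete, here is a small Python sketch (purely illustrative, outside the formal development). It represents a linear expression over $x_1,\dots,x_n$ by its tuple of rational coefficients and closes the sets under the substitutions of the definition. The update list used below reproduces the updates annotated in \tablename~\ref{tab:itatoitamoins}, together with the updates $x_1 := 2$ (level $2$) and $x_1 := 1$ (level $3$) read off from the first column of that table; since the figure of $\mathcal{A}_2$ is not reproduced here, those two updates are assumptions.

```python
from fractions import Fraction
from itertools import product

# An expression over clocks x_1..x_n is a tuple (c_0, c_1, ..., c_n)
# standing for c_0 + c_1*x_1 + ... + c_n*x_n.
def var(j, n):
    """The expression x_j."""
    return tuple(Fraction(1) if i == j else Fraction(0) for i in range(n + 1))

def const(c, n):
    """The constant expression c."""
    return (Fraction(c),) + (Fraction(0),) * n

def substitute(expr, subs, n):
    """Replace each clock x_k (k in subs) by the expression subs[k]."""
    out = [expr[0]] + [Fraction(0)] * n
    for k in range(1, n + 1):
        if expr[k] == 0:
            continue
        if k in subs:
            for i in range(n + 1):
                out[i] += expr[k] * subs[k][i]
        else:
            out[k] += expr[k]
    return tuple(out)

def expression_sets(n, updates):
    """F[i][j] per the inductive definition; updates is a list of
    triples (level i, updated clock j, update expression e)."""
    F = {i: {} for i in range(1, n + 1)}
    for i in range(1, n + 1):
        F[i][i] = {var(i, n)}          # F_{i,i} = {x_i}
        for j in range(1, i):
            F[i][j] = set(F[i - 1][j])
            for lvl, clk, e in updates:
                if lvl == i and clk == j:
                    # every choice of e_k in F_{i,k} for k < j
                    for combo in product(*(sorted(F[i][k]) for k in range(1, j))):
                        subs = {k + 1: combo[k] for k in range(j - 1)}
                        F[i][j].add(substitute(e, subs, n))
    return F
```

Running it on these updates reproduces $F_{3,1}=\{x_1,2,1\}$ and six distinct expressions in $F_{3,2}$, matching the table (where the expression $3$ appears twice).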
\begin{figure} \caption{ITA $\mathcal{A}_2$.} \label{fig:exitatoitamoins} \end{figure} \begin{figure} \caption{ITA$_-$ $\mathcal{A}'_2$.} \label{fig:exitamoinsfromita} \end{figure} \begin{table}[h] \centering \[\begin{array}{|l|c|c|c|} \hline i \ \backslash\ j & 1 & 2 & 3 \\ \hline 1 & \{x_1\} & & \\ \hline 2 & \{x_1,2\} & \{x_2\} & \\ \hline 3 & \{x_1,2,1\} & \{x_2, \underbrace{2x_1+1,\ 5,\ 3}_{q_2 \xrightarrow{x_2:=2x_1+1} q_3}, \underbrace{x_1+1,\ 3,\ 2}_{q_3 \xrightarrow{x_2:=x_1+1} q_4}\!\!\} & \{x_3\} \\ \hline \end{array}\] \caption{Sets of expressions $F_{i,j}$ for $\mathcal{A}_2$.} \label{tab:itatoitamoins} \end{table} The translation above of an ITA into an equivalent ITA$_-$ induces an exponential blowup. The proposition below shows that the bound is reached. \begin{proposition} There exists a family $\{\mathcal{A}_n\}_{n\in \ensuremath{\mathbb{N}}}$ of ITA with two states, $n$ clocks and constants coded over $b$ bits, where $b$ is polynomial in $n$, such that the equivalent ITA$_-$ built by the procedure above has a number of states greater than or equal to $2^n$. \end{proposition} \begin{proof} For $n \in \ensuremath{\mathbb{N}}$, let $\mathcal{A}_n$ be the ITA with $n$ clocks and two states $q_{\textrm{init}}$ (initial) and $q$ (final) both of level $n$ (and lazy policy) built as follows. There is a transition from $q_{\textrm{init}}$ to $q$ with update $\bigwedge_{k=1}^n x_k:=1$ that sets all clocks to $1$. For $1 \leq k \leq n$ there are two loops on $q$ with updates $x_k := x_{k-1}$ and $x_k := \alpha_{k} x_{k-1}$ respectively, where $\alpha_k$ is the $k$th prime number (and with the convention that $x_0$ is the constant $1$). When building the sets of expressions, no expressions are added until level $n$, since all updates occur at this level.
At level $k$, $F_{n,k}$ contains (at least) $2^k$ expressions: all possible products of the first $k$ prime numbers, namely \[F_{n,k} \supset \left\{\prod_{i \in I} \alpha_i \:\middle|\: I \subseteq \{1,\dots,k\}\right\}.\] Indeed, at level $1$, $F_{n,1} = \{x_1,1,2\}$. Now assume that $F_{n,k-1}$ contains all products $\prod_{i \in I} \alpha_i$ where $I \subseteq \{1,\dots,k-1\}$. By update $x_k := x_{k-1}$, $F_{n,k} \supset F_{n,k-1}$. By update $x_k := \alpha_k x_{k-1}$, $F_{n,k}$ contains all products $\alpha_k \prod_{i \in I} \alpha_i = \prod_{i \in I\uplus\{k\}} \alpha_i$. Therefore \[F_{n,k} \supset \left\{\prod_{i \in I} \alpha_i \:\middle|\: I \subseteq \{1,\dots,k-1\}\right\} \cup \left\{\prod_{i \in I\uplus\{k\}} \alpha_i \:\middle|\: I \subseteq \{1,\dots,k-1\}\right\} \] hence \[F_{n,k} \supset \left\{\prod_{i \in I} \alpha_i \:\middle|\: I \subseteq \{1,\dots,k\}\right\}.\] The expressions thus built are distinct, since they are products of distinct prime numbers. Remark that the set of expressions for level $k$ is in bijection with the set of sequences of updates $x_1 := \dots, x_2:= \dots, \dots, x_k:= \dots$, the choice of each update depending on the choice of the set $I$. Therefore all expressions of $F_{n,n}$ are reached (in association with state $q$) and the set of states in $\mathcal{A}_n'$ is at least of size $2^n$. In addition, it should be noted that the $n$th prime number is in $O(n\log_2(n))$, therefore it can be coded over $O(\log_2(n)^2)$ bits. So the size of the constants appearing in the updates (and the size of the representation of $\mathcal{A}_n$) is polynomial in $n$ while the representation of $\mathcal{A}'_n$ is exponential in $n$. \end{proof} \subsection{Reachability on ITA$_-$} In this section we use counting arguments to obtain an upper bound for the reachability problem on ITA$_-$. The following counting lemma does not depend on the effect of the updates but only on the timing constraints induced by the policies.
\begin{lemma}[Counting Lemma] \label{lemma:counting} Let $\mathcal A$ be an ITA$_-$ with $E$ transitions and $n$ clocks. Then in a sequence $(e_1,\ldots,e_l)$ of transitions of $\mathcal A$ where $l> (E+n)^{3n}$, there exist $i<j$ with $e_i=e_j$ such that the level of any transition $e_k$ with $i\leq k\leq j$ is greater than or equal to the level of $e_i$, say $p$, and: \begin{itemize} \item either $e_i$ updates $x_{p}$, \item or no $e_k$ with $i\leq k\leq j$ updates $x_{p}$ and $e_i$ is delayed or lazy, \item or no $e_k$ with $i\leq k\leq j$ updates $x_{p}$ and no time elapses for clock $x_p$ between $e_i$ and $e_j$. \end{itemize} \end{lemma} \begin{proof} Assume that the conclusions of the lemma are not satisfied; we claim that $l\leq (E+n)^{3n}$. First we prove that the number of transitions of level $m$ that occur between two occurrences of transitions of strictly lower level is less than or equal to $(E+1)^3$. Indeed there can be no more than $E$ occurrences of transitions that update $x_m$. Then between two such transitions (or before the first or after the last) there can be no more than $E$ lazy or delayed transitions of level $m$ that do not update $x_m$. Finally between any kind of previous transitions (or before the first or after the last), there can be no more than $E$ urgent transitions that do not update $x_m$, since they prevent time from elapsing at level $m$. Summing up, there can be no more than $E+E(E+1)+E(E(E+1)+1)\leq (E+1)^3$ transitions of level $m$ that occur between two occurrences of transitions of strictly lower level. Now we prove by induction that the number of transitions at level less than or equal to $m$ is at most $(E+m)^{3m}$. This is true for $m=1$ by the previous proof.
Assume the formula valid for $m$; then, grouping the transitions of level $m+1$ between the occurrences of transitions of lower level (or before the first or after the last), we obtain that the number of transitions at levels less than or equal to $m+1$ is at most: \begin{eqnarray*}(E+m)^{3m} + ((E+m)^{3m}+1)(E+1)^3 &\leq& (E+m)^{3m+3}+2(E+m)^{3m} \\&\leq& (E+m+1)^{3(m+1)}\qquad\qed\end{eqnarray*} \end{proof} \begin{proposition} \label{proposition:reachitamoins} The reachability problem for ITA$_-$ belongs to \emph{NEXPTIME}. More precisely, reachability can be checked over paths with length less than or equal to $(E+n)^{3n}$, where $E$ is the number of transitions and $n$ is the number of clocks. \end{proposition} \begin{proof} Let $\mathcal{A}=(\Sigma, Q, q_0,F,pol, X, \lambda, \Delta)$ be an ITA$_-$ with $n$ clocks. Let $E=|\Delta|$ be the number of transitions of $\mathcal{A}$. Assume that there is a run of minimal length $\rho$ from $(q_0,v_0)$ to some configuration $(q_f,v_f)$. Suppose now that $|\rho| > B = (E+n)^{3n}$. We will build a run $\rho'$ from $(q_0,v_0)$ to $(q_f,v_f)$ that is strictly smaller, hence contradicting the minimality hypothesis. Since $|\rho|>B$, one of the three cases of Lemma~\ref{lemma:counting} applies. Therefore there is a transition $e$ at level $k$ repeated twice, at positions $\pi$ and $\pi'$, separated by a subrun $\sigma$ containing only transitions of level higher than or equal to $k$. Moreover: \begin{itemize} \item Either $e$ updates $x_k$. In this case, all clocks have the same value after the first and the second occurrence of $e$. Hence removing $e \sigma = \rho_{[\pi,\pi'[}$ from $\rho$ yields a valid run $\rho'$ of $\mathcal{A}$ reaching $(q_f,v_f)$. Run $\rho'$ is strictly smaller than $\rho$, since $e\sigma$, which is of length at least $1$, was removed. \item Or no update occurred for $x_k$ and $e$ is delayed or lazy.
In this case, upon reaching $\pi'$, the clocks of level $i<k$ have retained the same value, while $x_k$ has increased by $Dur\left(\rho_{[\pi,\pi']}\right)$. Hence when replacing $e \sigma = \rho_{[\pi,\pi'[}$ by a time step of duration $Dur\left(\rho_{[\pi,\pi']}\right)$, the configuration in $\pi'$ is unchanged. In addition, since $e$ was delayed or lazy, this time step is allowed in $\mathcal{A}$, and this yields a shorter run of $\mathcal{A}$. \item Or no update occurred and $\pi$ and $\pi'$ are at the same instant (separated by instantaneous actions). In this case, all clocks of level smaller than or equal to $k$ again have the same value after the first and the second occurrence of $e$. Again removing $\rho_{[\pi,\pi'[}$ yields a smaller run. \end{itemize} The decision procedure is as follows. It non-deterministically guesses a path in the ITA$_-$ whose length is less than or equal to the bound. In order to check that this path yields a run, it builds a linear program whose variables are $\left\{x_i^j\right\}$, where $x_i^j$ is the value of clock $x_i$ after the $j$th step, and $\{d_j\}$ where $d_j$ is the amount of time elapsed during the $j$th step, when $j$ corresponds to a time step. The equalities and inequalities are deduced from the guards and updates of discrete transitions in the path and the delays of the time steps. The size of this linear program is exponential w.r.t. the size of the ITA$_-$. As a linear program can be solved in polynomial time~\cite{RoTeVi97}, we obtain a procedure in NEXPTIME. \qed \end{proof} One could wonder whether the class graph construction would lead to a better complexity when applied to ITA$_-$. Unfortunately, the number of expressions occurring in the class graph, while smaller than for ITA, is still doubly exponential w.r.t. the size of the model.
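The certificate-checking step of the decision procedure can be made tangible with a small Python sketch (the data structures are hypothetical, not the paper's formalism). Instead of solving the linear program symbolically, it verifies one nondeterministic guess, a path together with concrete rational delays $d_j$, by propagating the clock values $x_i^j$ forward and evaluating each guard with exact arithmetic.

```python
from fractions import Fraction

OPS = {'<': lambda a: a < 0, '<=': lambda a: a <= 0, '==': lambda a: a == 0,
       '>=': lambda a: a >= 0, '>': lambda a: a > 0}

def evaluate(coeffs, v):
    """Value of c_0 + c_1*x_1 + ... + c_n*x_n for clock values v[1..n]."""
    return coeffs[0] + sum(c * x for c, x in zip(coeffs[1:], v[1:]))

def check_run(n, start_level, steps):
    """Verify one guessed certificate.  steps is a list of
    (delay d_j, guard, update, next_level), where a guard is a list of
    constraints (coeffs, op) meaning 'coeffs op 0' and update is either
    None or a pair (clock, coeffs).  Clocks start at 0; in an ITA only
    the clock of the active level evolves with time."""
    v = [Fraction(0)] * (n + 1)          # v[i] holds x_i; v[0] is unused
    level = start_level
    for delay, guard, update, next_level in steps:
        v[level] += Fraction(delay)      # time step d_j at the active level
        if not all(OPS[op](evaluate(coeffs, v)) for coeffs, op in guard):
            return False                 # some guard constraint fails
        if update is not None:
            clock, coeffs = update
            v[clock] = evaluate(coeffs, v)
        level = next_level
    return True
```

A path is accepted exactly when every guard, instantiated at the propagated clock values, holds; the actual procedure quantifies over all delays at once via the linear program rather than checking one witness.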
\section{Timed model-checking}\label{sec:modelchecking} First observe that model-checking \textsf{CTL}$^*$ formulas on ITA can be done with classical procedures on the class graph previously built. We now consider verification of real-time formulas. In the case of linear time, the logic \textsf{LTL}\xspace has been extended into the Metric Temporal Logic (\textsf{MTL}\xspace)~\cite{koymans90}, by adding time intervals as constraints to the $\ensuremath{{\sf \,U\,}}$ modality. However, \textsf{MTL}\xspace suffers from undecidability of the model-checking problem on TA. Hence decidable fragments have been proposed, such as Metric Interval Temporal Logic (\textsf{MITL}\xspace)~\cite{alur96}, which prohibits the use of point intervals (of the form $[a,a]$). Later, \textsf{MITL}\xspace was restricted to State Clock Logic (\textsf{SCL}\xspace)~\cite{raskin97}, in order to obtain more efficient verification procedures. Model-checking \textsf{MITL}\xspace (thus \textsf{SCL}\xspace) on TA is decidable. Unfortunately, we show here that model-checking \textsf{SCL}\xspace (thus \textsf{MITL}\xspace) on ITA is undecidable. To do so, we reduce the halting problem of a two-counter machine to the model-checking of an \textsf{SCL}\xspace formula on an ITA. Concerning branching-time logics, at least two different timed extensions of \textsf{CTL}\xspace have been proposed. The first one~\cite{alur93} also adds time intervals to the $\ensuremath{{\sf \,U\,}}$ modality while the (more expressive) second one considers formula clocks~\cite{HNSY94}. Model-checking timed automata was proved decidable in both cases and their comparative expressiveness was revisited later~\cite{bouyer05}. We conjecture that model-checking of \textsf{TCTL}\xspace is undecidable when using two (or more) formula clocks.
Indeed, as shown in Section~\ref{subsec:mctctlundec}, the reachability problem in a product of an ITA and a TA with two clocks is undecidable, thus prohibiting model-checking techniques through automaton product and reachability testing as in~\cite{aceto98}. However, contrary to what is claimed in~\cite{berard10}, this is not enough to yield an undecidability proof. Two fragments for which model-checking is decidable on ITA have nonetheless been identified. The first one, \textsf{TCTL}$^{int}_c$, accepts only internal clocks (from the automaton on which the formulas will be evaluated) as formula clocks. The second one, \textsf{TCTL}$_p$, restricts the nesting of $\ensuremath{{\sf \,U\,}}$ modalities. We provide verification procedures in both cases. \subsection{Undecidability of State Clock Logic} We first consider the timed extension of linear temporal logic, and more particularly the \textsf{SCL}\xspace fragment~\cite{raskin97}. \begin{definition} Formulas of the timed logic \textsf{SCL}\xspace are defined by the following grammar: \[\psi ::= p \mid \psi \wedge \psi \mid \neg \psi \mid \psi \ensuremath{{\sf \,U\,}} \psi \mid \psi \ensuremath{{\sf \,S\,}} \psi \mid \nextoc{\bowtie a} \psi \mid \lastoc{\bowtie a} \psi\] where $p \in AP$ is an atomic proposition, $\bowtie \,\in \{>,\geq,=,\leq,<\}$, and $a$ is a rational number. \end{definition} We use the usual shorthands $\mathbf{t}$ for $\neg(p \wedge \neg p)$, $\ensuremath{{\sf F}} \psi$ for $\mathbf{t} \ensuremath{{\sf \,U\,}} \psi$, $\ensuremath{{\sf G}} \psi$ for $\neg (\ensuremath{{\sf F}} \neg\psi)$ and $\varphi \Rightarrow \psi$ for $\neg(\varphi \wedge \neg \psi)$. The semantics is defined in the usual manner for boolean operators and $\ensuremath{{\sf \,U\,}}$. The $\ensuremath{{\sf \,S\,}}$ modality is the past version of $\ensuremath{{\sf \,U\,}}$.
Modality $\nextoc{\bowtie a} \psi$ is true if the \emph{next} time $\psi$ is true will occur in a delay that respects the condition $\bowtie a$. Similarly, $\lastoc{\bowtie a} \psi$ is true if the \emph{last} time $\psi$ was true occurred in a (past) delay that respects the condition $\bowtie a$. More formally, for an execution $\rho$, we inductively define $(\rho,\pi) \models \varphi$ by: \[\begin{array}{lcl} (\rho,\pi) \models p &\quad\textrm{iff}\ & p \in lab(s_\pi) \\ (\rho,\pi) \models \varphi \wedge \psi &\quad\textrm{iff}\ & (\rho,\pi) \models \varphi \ \textrm{and}\ (\rho,\pi) \models \psi\\ (\rho,\pi) \models \neg \varphi &\quad\textrm{iff}\ & (\rho,\pi) \not\models \varphi\\ (\rho,\pi) \models \varphi \ensuremath{{\sf \,U\,}} \psi &\quad\textrm{iff}\ &\textrm{there is a position } \pi' \geq_\rho \pi \textrm{ such that } (\rho,\pi') \models \psi\\ & &\textrm{and forall } \pi'' \textrm{ s.t. } \pi \leq_\rho \pi'' <_\rho \pi', \ (\rho,\pi'') \models \varphi \vee \psi\\ (\rho,\pi) \models \varphi \ensuremath{{\sf \,S\,}} \psi &\quad\textrm{iff}\ &\textrm{there is a position } \pi' \leq_\rho \pi \textrm{ such that } (\rho,\pi') \models \psi \\ & &\textrm{and forall } \pi'' \textrm{ s.t. } \pi \geq_\rho \pi'' >_\rho \pi', \ (\rho,\pi'') \models \varphi \vee \psi\\ (\rho,\pi) \models \nextoc{\bowtie a} \varphi &\quad\textrm{iff}\ & \textrm{either } (\rho,\pi) \models \varphi \textrm{ and } 0 \bowtie a\\ & &\textrm{or, there is a position } \pi' >_\rho \pi \textrm{ such that } (\rho,\pi') \models \varphi,\\ & & \ Dur\!\left(\rho_{[\pi,\pi']}\right) \bowtie a \textrm{ and forall } \pi''\textrm{ s.t. 
} \pi \leq_\rho \pi'' <_\rho \pi', (\rho,\pi'') \not\models \varphi\\ (\rho,\pi) \models \lastoc{\bowtie a} \varphi &\quad\textrm{iff}\ & \textrm{either } (\rho,\pi) \models \varphi \textrm{ and } 0 \bowtie a\\ & &\textrm{or, there is a position } \pi' <_\rho \pi \textrm{ such that } (\rho,\pi') \models \varphi,\\ & & \ Dur\!\left(\rho_{[\pi',\pi]}\right) \bowtie a \textrm{ and forall } \pi'' \textrm{ s.t. } \pi \geq_\rho \pi'' >_\rho \pi', (\rho,\pi'') \not\models \varphi \end{array}\] Given an ITA $\mathcal{A}$ and an \textsf{SCL}\xspace formula $\varphi$, $\mathcal{A} \models \varphi$ if for all executions $\rho$ of $\mathcal{A}$, $(\rho,\pi_0) \models \varphi$, where $\pi_0=0$ is the initial position of $\rho$. \begin{theorem}\label{thm:mcsclundec} Model checking \textsf{SCL}\xspace over ITA is undecidable. Specifically, there exists a fixed formula using only modalities $\ensuremath{{\sf \,U\,}}$ and $\lastoc{= a}$ such that checking its truth over ITA with $3$ levels is undecidable. \end{theorem} \begin{proof} We build an ITA and an \textsf{SCL}\xspace formula that together simulate a deterministic two counter machine. More specifically, we define a formula $\varphi_{2cm}$ such that given a two counter machine $\mathcal{M}$, we can build an ITA $\mathcal{A}_\mathcal{M}$ with three clocks such that $\mathcal{A}_\mathcal{M} \models \varphi_{2cm}$ if and only if $\mathcal{M}$ does not halt. Recall that such a machine $\cal M$ consists of a finite sequence of labeled instructions, which handle two counters $c$ and $d$, and ends at a special instruction with label \emph{Halt}. The other instructions have one of the two forms below, where \(e \in \{c,d\}\) represents one of the two counters: \begin{itemize} \item \(e := e + 1\); goto $\ell'$ \item if \(e > 0\) then ($e := e -1$; goto $\ell'$) else goto $\ell''$ \end{itemize} Without loss of generality, we may assume that the counters have initial value zero. 
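For reference, the machine model just described can be pinned down by a short interpreter. The Python sketch below fixes an assumed concrete syntax for programs (labels $0,1,\dots$, tuples for the two instruction forms, the string `'halt'` for the \emph{Halt} instruction); it merely illustrates the standard semantics and is not part of the reduction.

```python
def run_machine(program, max_steps=10_000):
    """Interpret a two-counter machine: program maps a label to
    ('inc', counter, next), ('dec', counter, next_if_pos, next_if_zero)
    or 'halt'; both counters start at zero, as assumed above."""
    counters = {'c': 0, 'd': 0}
    label = 0
    for _ in range(max_steps):
        instr = program[label]
        if instr == 'halt':
            return ('halt', counters['c'], counters['d'])
        if instr[0] == 'inc':                  # e := e + 1; goto next
            counters[instr[1]] += 1
            label = instr[2]
        else:                                  # 'dec' with zero test
            _, e, pos, zero = instr
            if counters[e] > 0:
                counters[e] -= 1
                label = pos
            else:
                label = zero
    return ('timeout', counters['c'], counters['d'])
```

Termination of such machines is undecidable in general, which is why the interpreter needs the `max_steps` safeguard.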
The behavior of the machine is described by a (possibly infinite) sequence of configurations: \(\langle\ell_0,0,0\rangle \langle\ell_1,n_1,p_1\rangle\dots \langle\ell_i,n_i,p_i\rangle\dots\), where $n_i$ and $p_i$ are the respective counter values and $\ell_i$ is the label, after the $i^{th}$ instruction. The problem of termination for such a machine (``is the \emph{Halt} label reached?'') is known to be undecidable~\cite{minsky67}. The idea of the encoding is that, provided the execution satisfies the formula, the clocks of levels $1$ and $2$ store the values of $c$ and $d$ (in either order), via $x_i = \frac1{2^n}$ where $n$ is the value of the corresponding counter. Level $3$ will be used as the working level. Transmitting the value of clocks to lower levels, prohibited in the ITA model, will be enforced by \textsf{SCL}\xspace formulas. In the sequel, we will define: \begin{itemize} \item a module $\mathcal{A}_{\leftrightarrow}$ and a formula $\varphi_{\leftrightarrow}$ such that the values contained in clocks $x_1$ and $x_2$ at the beginning of an execution $\rho$ are swapped if and only if $(\rho,0) \models \varphi_{\leftrightarrow}$, \item a module $\mathcal{A}_{+}$ and a formula $\varphi_{+}$ such that if the value of $x_2$ is $\frac1{2^n}$ at the beginning of an execution $\rho$, then $x_2$ has value $\frac1{2^{n+1}}$ if and only if $(\rho,0) \models \varphi_{+}$, \item a module $\mathcal{A}_{-}$ and a formula $\varphi_{-}$ such that if the value of $x_2$ is $\frac1{2^n}$ with $n>0$ at the beginning of an execution $\rho$, then $x_2$ has value $\frac1{2^{n-1}}$ if and only if $(\rho,0) \models \varphi_{-}$. \end{itemize} Joining these modules according to $\mathcal{M}$ yields an ITA. Combining the formulas (independently of $\mathcal{M}$), we obtain an \textsf{SCL}\xspace formula that is violated exactly when some execution, while complying with the formulas of the modules, reaches the final state. Both constructions are explained in detail after the definitions below.
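The arithmetic behind the encoding can be checked independently of the timing machinery. The sketch below (an illustration with hypothetical helper names, not the modules themselves) states the net effect each module must have on the clock holding $\frac1{2^n}$; exact rationals make the halving and doubling lossless.

```python
from fractions import Fraction

def encode(n):
    """Counter value n is stored in a clock as 1/2**n (so 0 encodes as 1)."""
    return Fraction(1, 2 ** n)

# Net arithmetic effect of the modules on the encoding clock:
def increment(x):
    """Module A_+: a clock entering with 1/2**n leaves with 1/2**(n+1)."""
    return x / 2

def decrement(x):
    """Module A_-: doubling, only allowed when the counter is non-zero."""
    assert x < 1, "decrement requires a non-zero counter"
    return 2 * x

def is_zero(x):
    """The zero test compares the clock to 1."""
    return x == 1
```

Increment followed by decrement is the identity on every encoded value, which is what allows the reduction to track counters faithfully.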
Let us define formulas $Span_1 = \ensuremath{\mathsf{q}}' \Rightarrow \lastoc{=1} \ensuremath{\mathsf{q}}$ and $Span_2 = \ensuremath{\mathsf{p}}' \Rightarrow \lastoc{=2} \ensuremath{\mathsf{p}}$ where $\ensuremath{\mathsf{p}}$, $\ensuremath{\mathsf{p}}'$, $\ensuremath{\mathsf{q}}$, $\ensuremath{\mathsf{q}}'$ are propositional variables. Let $x_1^0$ and $x_2^0$ denote the respective values of $x_1$ and $x_2$ upon entering a given module. \begin{description}[font=\bfseries] \item[Swapping module.] The module $\mathcal{A}_{\leftrightarrow}$ that swaps the values of $x_1$ and $x_2$ is depicted in \figurename~\ref{fig:modswap}. Note that this module does not actually swap the values of $x_1$ and $x_2$ for every execution. However, by imposing that state $q_{end}$ is reached exactly $2$ time units after $q_{0}$ (or $q_0'$) was left, and that $q_4$ (resp. $q_4'$) is reached exactly $1$ t.u. after $q_1$ (resp. $q_1'$) was left, the values of $x_1$ and $x_2$ will be swapped. This requirement can be expressed in \textsf{SCL}\xspace by $\varphi_{\leftrightarrow} = \ensuremath{{\sf G}}\, \left(Span_1 \wedge Span_2\right)$. Let $w_i$ be the time elapsed in state $q_i$, for an execution $\rho$ of $\mathcal{A}_{\leftrightarrow}$ that satisfies $\varphi_{\leftrightarrow}$. Note that $q_{start}$ and $q_{end}^{\neq}$ are urgent, hence no time can elapse in these states. We shall therefore consider only what happens in the swapping submodules. We detail only the case when $x_2>x_1$; the case when $x_2<x_1$ is analogous. The ITA constraints provide: \[\begin{array}{rclcl} w_0 &=& 0 &\qquad& (q_0 \textrm{ is urgent}) \\ w_1 &=& x_2^0 - x_1^0 && (\textrm{update } x_3:=x_1 \textrm{ and guard } x_3=x_2) \\ w_2 &=& 1 - x_2^0 && (\textrm{guard } x_3=1) \\ w_4 &=& 0 && (q_4 \textrm{ is urgent}) \end{array}\] The time spent between the last instant $\ensuremath{\mathsf{q}}$ was satisfied (upon leaving $q_1$) and the only instant when $\ensuremath{\mathsf{q}}'$ is true (upon entering $q_4$) is exactly the time spent in states $q_2$ and $q_3$. Similarly, the time between the last instant $\ensuremath{\mathsf{p}}$ was satisfied (leaving $q_0$) and the instant $\ensuremath{\mathsf{p}}'$ is true (when reaching $q_{end}^{\neq}$) is the total amount of time spent in $q_1$, $q_2$, $q_3$, $q_4$, and $q_5$. Hence, if $\varphi_{\leftrightarrow}$ is satisfied then: \[\begin{array}{rclcl} w_2 + w_3 &=& 1 &\qquad& \left(\ensuremath{\mathsf{q}}' \Rightarrow \lastoc{=1} \ensuremath{\mathsf{q}}\right) \\ w_1 + w_2 + w_3 + w_4 + w_5 &=& 2 && \left(\ensuremath{\mathsf{p}}' \Rightarrow \lastoc{=2} \ensuremath{\mathsf{p}}\right) \end{array}\] Hence $w_3 = x_2^0$ and $w_5=1-w_1=x_1^0-\left(x_2^0-1\right)$. Since upon entering $q_3$, clock $x_1$ has value $0$, when leaving, $x_1$ has value $x_2^0$. Similarly, when entering $q_5$, $x_2$ has value $x_1 - 1 = x_2^0 -1$, therefore $x_2$ has value $x_1^0$ when reaching $q_{end}^{\neq}$. Note that this module swaps $x_1$ and $x_2$ regardless of whether they code a counter value. \item[Incrementation module.] The same idea applies for the incrementation module $\mathcal{A}_{+}$ of \figurename~\ref{fig:modincrem}. We force the total time spent in $r_1$ and $r_2$ to be one, which is expressed in \textsf{SCL}\xspace by $\varphi_{+}=\ensuremath{{\sf G}}\, Span_1$. The guards and updates in $\mathcal{A}_{+}$ ensure that, with the same notation as above, the time spent in $r_1$ will be $1-\frac12 x_2^0$. Hence, when reaching $r_3$, clock $x_2$ will have value $\frac12 x_2^0$. Therefore, if $x_2^0 = \frac1{2^n}$, coding a counter of value $n$, at the end of $\mathcal{A}_{+}$, $x_2$ has value $\frac1{2^{n+1}}$, thus coding a value $n+1$ for the same counter. \item[Decrementation module.]
Decrementation, for which the corresponding module is depicted on \figurename~\ref{fig:moddecrem}, is handled in a similar manner (with $\varphi_{-}=\varphi_{+}=\ensuremath{{\sf G}}\, Span_1$). The only difference is that $x_2$ has to be compared to $1$ in order to test if the value of the counter encoded by $x_2$ is $0$. \end{description} Since the constraints in $Span_1$ (and $Span_2$) are equalities, they can be satisfied only if $\ensuremath{\mathsf{q}}'$ (and $\ensuremath{\mathsf{p}}'$) are true at a single point in time. \begin{figure} \caption{Swapping module.} \label{fig:modswapchoice} \label{fig:modswapgeq} \label{fig:modswapleq} \label{fig:modswap} \end{figure} \begin{figure} \caption{Incrementation module.} \label{fig:modincrem} \end{figure} \begin{figure} \caption{Decrementation module.} \label{fig:moddecrem} \end{figure} Automaton $\mathcal{A}_\mathcal{M}$ is then defined as the concatenation of modules according to $\mathcal{M}$. For clarity, a state $(q,\ell)$ denotes state $q$ in a module corresponding to instruction $\ell$. Namely, an instruction $\ell$ incrementing $c$ and going to $\ell'$ is an incrementation module with a transition from $(r_3,\ell)$ to the first state of the module corresponding to $\ell'$ (either $(q_{start},\ell')$, $(r_0,\ell')$ or $(s_0,\ell')$). In the case of an incrementation of $d$, the corresponding module will be the concatenation of $\mathcal{A}_{\leftrightarrow}^{in}$, $\mathcal{A}_{+}$, and $\mathcal{A}_{\leftrightarrow}^{out}$. Modules $\mathcal{A}_{\leftrightarrow}^{in}$ and $\mathcal{A}_{\leftrightarrow}^{out}$ are two copies of a swapping module $\mathcal{A}_{\leftrightarrow}$. The states of $\mathcal{A}_{\leftrightarrow}^{in}$ and $\mathcal{A}_{\leftrightarrow}^{out}$ will be respectively denoted $(q,\ell,in)$ and $(q,\ell,out)$ to avoid confusion. The last swap is performed in order to restore that $x_2$ contains the value of $c$ and $x_1$ the value of $d$.
The concatenation is done by transitions from $(q_{end}^{\neq},\ell,in)$ and $(q_{end}^=,\ell,in)$ to $(r_0,\ell)$, from $(r_3,\ell)$ to $(q_{start},\ell,out)$. States $(q_{end}^{\neq},\ell,out)$ and $(q_{end}^=,\ell,out)$ are then linked to the first state of the module for $\ell'$. Decrementation is handled in a similar way. The main difference resides in the fact that $(s_4,\ell)$ is linked to the first state of $\ell''$. In the decrementation of $d$, $(s_4,\ell)$ is linked to a swapping module $\mathcal{A}_{\leftrightarrow}^{out'}$ (disjoint from $\mathcal{A}_{\leftrightarrow}^{in}$ and $\mathcal{A}_{\leftrightarrow}^{out}$), in turn linked to the first state of $\ell''$. The \emph{Halt} instruction is encoded in a single state $h$ labeled with $\{\ensuremath{\mathsf{h}}\}$. The initial state of the automaton is a new state \textit{Init} of level $3$. It has \emph{urgent} policy and satisfies no atomic proposition. State \textit{Init} is linked to the first state of the module corresponding to $\ell_0$, the initial instruction of $\mathcal{M}$, by a transition that updates both $x_1$ and $x_2$ to $1$, simulating the initialization of both counters to $0$. Let us define formula $\varphi_{2cm} = \ensuremath{{\sf F}} (\neg Span_1 \vee \neg Span_2) \vee \ensuremath{{\sf G}} \neg \ensuremath{\mathsf{h}}$. An execution $\rho$ of $\mathcal{A}_\mathcal{M}$ satisfies $\varphi_{2cm}$ if either it violates at some point a constraint $Span_i$, which means $\rho$ does not correspond to an execution of $\mathcal{M}$, or $\rho$ never reaches state $h$, which means the execution of $\mathcal{M}$ is not halting. If $\mathcal{M}$ has a halting execution, then it can be converted into an execution $\rho$ that complies with the $Span_i$ constraints and reaches the final state $h$. Hence $\rho \not\models \varphi_{2cm}$ and $\mathcal{A}_\mathcal{M} \not\models \varphi_{2cm}$.
Conversely, if $\mathcal{A}_\mathcal{M} \not\models \varphi_{2cm}$, then consider an execution $\rho$ that does not satisfy $\varphi_{2cm}$. Execution $\rho$ both reaches $h$ and complies with the $Span_i$ constraints, hence encodes a halting execution of $\mathcal{M}$. As a result, $\mathcal{M}$ has no halting execution if and only if \[\mathcal{A}_\mathcal{M} \models \ensuremath{{\sf F}} \left(\left(\ensuremath{\mathsf{q}}' \wedge \neg \lastoc{=1}\ensuremath{\mathsf{q}}\right) \vee \left(\ensuremath{\mathsf{p}}' \wedge \neg \lastoc{=2}\ensuremath{\mathsf{p}}\right)\right) \vee \ensuremath{{\sf G}} \neg\ensuremath{\mathsf{h}}.\] Remark that this formula does not have nested history or prediction modalities ($\lastoc{\bowtie a}$ and $\nextoc{\bowtie a}$). Hence $\textsf{SCL}\xspace$ with a discrete semantics (evaluating the subformulas only upon entering a state) would also be undecidable. \qed \end{proof} \subsection{Model-checking branching time properties with internal clocks} In this section we consider the extension of \textsf{CTL}\xspace with model clocks, the corresponding fragment being denoted by \textsf{TCTL}$^{int}_c$. Such a logic allows reasoning about the sojourn times in the different levels, which is quite useful when designing real-time operating systems. For example, formula $\ensuremath{{\sf A\,}} (x_2 \leq 3) \ensuremath{{\sf \,U\,}} \textit{safe}$ expresses that all executions reach a safe state while spending less than 3 time units in level 2 (assuming $x_2$ is not updated during the execution). Model-checking is achieved by adapting the class graph construction for untiming ITA (Section~\ref{sec:regular}) and adding information relevant to the formula. The problem is thus reduced to a \textsf{CTL}\xspace model checking problem on this graph.
\begin{definition} Formulas of the timed logic \textsf{TCTL}$^{int}_c$ are defined by the following grammar: \[ \psi ::= p \mid \psi \wedge \psi \mid \neg \psi \mid \sum_{i \geq 1} a_i \cdot x_i + b \bowtie 0 \mid \ensuremath{{\sf A\,}} \psi \ensuremath{{\sf \,U\,}} \psi \mid \ensuremath{{\sf E\,}} \psi \ensuremath{{\sf \,U\,}} \psi \] where $p \in AP$ is an atomic proposition, $x_i$ are model clocks, $a_i$ and $b$ are rational numbers such that $(a_i)_{i \geq 1}$ has finite support, and $\bowtie \,\in \{>,\geq,=,\leq,<\}$. \end{definition} As before we use the classical shorthands $\ensuremath{{\sf F}}$, $\ensuremath{{\sf G}}$, and boolean operators. Let $\mathcal{A}=\langle\Sigma, AP, Q, q_0, F,pol, X, \lambda, lab, \Delta\rangle$ be an interrupt timed automaton and $S= \{(q,v,\beta) \mid q \in Q, \ v \in \ensuremath{\mathbb{R}}^X, \ \beta \in \{\top,\bot\} \}$ the set of configurations. The formulas of \textsf{TCTL}$^{int}_c$ are interpreted over configurations\footnote{The boolean value in the configuration is not actually used. The logic could be enriched to take advantage of this boolean, to express for example that a run lets some time elapse in a given state.} $s=(q,v,\beta)$. The semantics of \textsf{TCTL}$^{int}_c$ is defined as follows on the transition system $\mathcal{T}_{\mathcal{A}}$ associated with $\mathcal{A}$.
For atomic propositions and a configuration $s=(q,v,\beta)$, with $lab(s)=lab(q)$: \[ \begin{array}{lcl} s \models p &\quad\textrm{iff}\ & p \in lab(s) \\ s \models \sum_{i\geq 1} a_i \cdot x_i + b \bowtie 0 &\quad\textrm{iff}\ & v \models \sum_{i\geq 1} a_i \cdot x_i + b \bowtie 0 \end{array} \] and inductively: \[ \begin{array}{lcl} s \models \varphi \wedge \psi &\quad\textrm{iff}\ & s \models \varphi\ \textrm{and} \ s \models \psi \\ s \models \neg\varphi &\quad\textrm{iff}\ & s \not\models \varphi \\ s \models \ensuremath{{\sf A\,}} \varphi \ensuremath{{\sf \,U\,}} \psi & \quad\textrm{iff}\ & \textrm{for all } \rho \in Exec(s), \ \rho \models \varphi \ensuremath{{\sf \,U\,}} \psi\\ s \models \ensuremath{{\sf E\,}} \varphi \ensuremath{{\sf \,U\,}} \psi & \quad\textrm{iff}\ & \textrm{there exists } \rho \in Exec(s) \textrm{ s. t. } \rho \models \varphi \ensuremath{{\sf \,U\,}} \psi\\ \textrm{with } \rho \models \varphi \ensuremath{{\sf \,U\,}} \psi & \quad\textrm{iff}\ &\textrm{there is a position } \pi \in \rho \textrm{ s. t. } s_\pi \models \psi \\ & &\textrm{and } \forall \pi' <_{\rho} \pi, \ s_{\pi'} \models \varphi \vee \psi. \end{array} \] The automaton $\mathcal{A}$ satisfies $\psi$ if the initial configuration $s_0$ of $\mathcal{T}_{\mathcal{A}}$ satisfies $\psi$. \begin{theorem}\label{thm:mctctlintdec} Model checking \textsf{TCTL}\xspacecint on interrupt timed automata can be done in $2$-EXPTIME, and in PTIME when the number of clocks is fixed. \end{theorem} The proof relies on a refinement of the class graph according to the comparisons in the formula to model-check. It is detailed in Appendix~\ref{app:proofmctctlindec} and we show the resulting graph on an example below. \paragraph{Example.} Consider the ITA $\mathcal{A}_1$ (\figurename~\ref{fig:exita1}) and the formula $\varphi_1= \ensuremath{{\sf E\,}} \ensuremath{{\sf F}} (q_1 \wedge (x_2 > x_1))$.
We assume that $q_1$ is a propositional property true only in state $q_1$. Initially, the sets of expressions are $E_1 = \{x_1,0\}$ and $E_2 = \{x_2,0\}$. First the expression $-\frac12 x_1 + 1$ is added to $E_2$ since $x_1 + 2x_2 =2$ appears on the guard of the transition from $q_1$ to $q_2$. Then expression $1$ is added to $E_1$ because $x_1 - 1 < 0$ appears on the guard of the transition from $q_0$ to $q_1$. Finally expression $x_1$ is added to $E_2$ since $x_2 - x_1 > 0$ appears in $\varphi_1$. The iterative part of the procedure goes as follows. Since there is a transition from $q_0$ of level $1$ to state $q_1$ of level $2$, we compute all differences between expressions of $E_2$, then normalize them: \begin{itemize}[label=\textbullet] \item $x_1 - 0$ and $x_2 - 0$ yield no new expression. \item $x_2 - (-\frac12 x_1 + 1)$ and $0 - (-\frac12 x_1 + 1)$ with update $x_2:=0$ both yield expression $2$, which is added to $E_1$. \item $x_1 - (-\frac12 x_1 + 1)$ yields expression $\frac23$, which is also added to $E_1$. \end{itemize} The sets of expressions are therefore $E_1 = \{x_1,0,1,\frac23,2\}$ and $E_2 = \{x_2,0,-\frac12 x_1 + 1, x_1\}$. Remark that knowing the order between $x_1$ and $\frac23$ allows us to know the order between $-\frac12 x_1 + 1$ and $x_1$. The class graph $\mathcal{G}$ corresponding to $\mathcal{A}_1$ and $\varphi_1$ is depicted in \figurename~\ref{fig:exita1classes}. Note that we replaced $x_1$ by its value, since it is not changed by any update at level $2$. Some time zone notations used in $\mathcal{G}$ are displayed in \tablename~\ref{tab:tzita1}. In the class graph, states where the comparison $x_2 > x_1$ is \emph{true} are greyed. Among these, the ones in which the class corresponds to state $q_1$ are doubly circled, \emph{i.e.} states in which $q_1 \wedge (x_2 > x_1)$ is \emph{true}.
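The difference-and-normalize step above can be prototyped with exact rational arithmetic. The sketch below is an illustrative reconstruction, not the paper's algorithm: level-2 expressions are encoded as hypothetical pairs $(a,b)$ standing for $a\,x_1 + b$, and the update $x_2 := 0$ is reflected by letting $0$ stand for $x_2$.

```python
from fractions import Fraction as F
from itertools import permutations

# Level-2 expressions as (a, b) meaning a*x1 + b; the clock x2 is
# represented by 0 here, reflecting the update x2 := 0 on the
# transition from level 1 to level 2 (hypothetical encoding).
E2 = [(F(0), F(0)),        # 0 (and x2 under the update x2 := 0)
      (F(-1, 2), F(1)),    # -1/2*x1 + 1, from the guard x1 + 2*x2 = 2
      (F(1), F(0))]        # x1, from the formula x2 - x1 > 0

# Level-1 constants already present: 0, and 1 from the guard x1 - 1 < 0.
E1 = {F(0), F(1)}

# Differences between level-2 expressions, normalized to the x1-value
# at which they cross 0, feed back into E1.
for (a1, b1), (a2, b2) in permutations(E2, 2):
    a, b = a1 - a2, b1 - b2
    if a != 0:
        E1.add(-b / a)

print(sorted(E1))  # [Fraction(0, 1), Fraction(2, 3), Fraction(1, 1), Fraction(2, 1)]
```

This matches the example: $E_1 = \{x_1, 0, 1, \frac23, 2\}$ up to the symbolic entry $x_1$.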
Applying the standard \textsf{CTL}\xspace model checking procedure on this graph, one can prove that one of these states is reachable, hence proving that $\varphi_1$ is \emph{true} on $\mathcal{A}_1$. \begin{table} \scriptsize \begin{mathpar} Z_0^1=(0 = x_1 < \frac23 < 1 < 2) \and Z_1^1=(0 < x_1 < \frac23 < 1 < 2) \and Z_2^1=(0 < x_1 = \frac23 < 1 < 2) \and Z_3^1=(0 < \frac23 < x_1 < 1 < 2) \and Z_4^1=(0 < \frac23 < x_1 = 1 < 2) \and Z_5^1=(0 < \frac23 < 1 < x_1 < 2) \and Z_6^1=(0 < \frac23 < 1 < x_1 = 2) \and Z_7^1=(0 < \frac23 < 1 < 2 < x_1) \\ Z_0^2=(0 = x_2 < x_1 < -\frac12 x_1 + 1)\and Z_1^2=(0 < x_2 < x_1 < -\frac12 x_1 + 1)\and Z_2^2=(0 < x_1 = x_2 < -\frac12 x_1 + 1)\and Z_3^2=(0 < x_1 < x_2 < -\frac12 x_1 + 1)\and Z_4^2=(0 < x_1 < -\frac12 x_1 + 1 = x_2)\and Z_5^2=(0 < x_1 < -\frac12 x_1 + 1 < x_2)\and Z_6^2=(0 = x_2 < -\frac12 x_1 + 1 < x_1)\and Z_7^2=(0 < x_2 < -\frac12 x_1 + 1 < x_1)\and Z_8^2=(0 < -\frac12 x_1 + 1 = x_2 < x_1)\and Z_9^2=(0 < -\frac12 x_1 + 1 < x_2 < x_1)\and Z_{10}^2=(0 < -\frac12 x_1 + 1 < x_1 = x_2)\and Z_{11}^2=(0 < -\frac12 x_1 + 1 < x_1 < x_2) \end{mathpar} \caption{Time zones used in the class graph of $\mathcal{A}_1$ when checking $\varphi_1$.} \label{tab:tzita1} \end{table} \begin{figure} \caption{The class automaton for $\mathcal{A}_1$} \label{fig:exita1classes} \end{figure} \subsection{Model-checking \textsf{TCTL}\xspace with subscript} Note that in \textsf{TCTL}\xspacecint, it is not possible to reason about time evolution independently of the level in which actions are performed. For example, properties \emph{(P2) the system is error free for at least 50 t.u.} or \emph{(P3) the system will reach a safe state within 7 t.u.} involve global time. In order to verify such properties, we introduce the fragment \textsf{TCTL}\xspacep.
This fragment is expressive enough to state constraints on the earliest (and latest) execution time of particular sequences, like those reaching a recovery state after a crash. \textsf{TCTL}\xspacep is the set of formulas where satisfaction of an \emph{until} modality over propositions can be parameterized by a restricted form of time intervals. \begin{definition} Formulas of \textsf{TCTL}\xspacep are defined by the following grammar: \[\varphi_p := p \mid \varphi_p \wedge \varphi_p \mid \neg \varphi_p \quad\mbox{and}\quad \psi := \psi \wedge \psi \mid \neg \psi\mid \varphi_p \mid \ensuremath{{\sf A\,}} \varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \varphi_p \mid \ensuremath{{\sf E\,}} \varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \varphi_p\] where $p \in AP$ is an atomic proposition, $a \in \mathbb{Q}^+$, and $\bowtie \,\in \{>,\geq,\leq,<\}$ is a comparison operator. \end{definition} The properties given in the introduction can be expressed by \textsf{TCTL}\xspacep formulas as follows. Property $P2:$ \emph{the system is error free for at least 50 t.u.} corresponds to $\ensuremath{{\sf A\,}} (\neg$\emph{error})$\ensuremath{{\sf \,U\,}}sub{\geq 50} \mathbf{t}$, while property $P3:$ \emph{the system will reach a safe state within 7 t.u.} is expressed by $\ensuremath{{\sf A\,}} \ensuremath{{\sf F}}_{\leq 7}$\emph{safe}. Formulas of \textsf{TCTL}\xspacep are again interpreted over configurations of the transition system associated with an ITA.
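Once a run and its delays are fixed, deciding the parameterized until amounts to a single scan that tracks elapsed time. A minimal sketch, with a hypothetical discrete encoding of a finite run as `(labels, delay)` pairs (the formal semantics over the transition system follows below):

```python
from fractions import Fraction
import operator

CMP = {'>': operator.gt, '>=': operator.ge, '<=': operator.le, '<': operator.lt}

def until_sub(run, p, r, cmp, a):
    """Check rho |= p U_{cmp a} r on a finite run.

    run: list of (labels, delay) pairs, where labels is the set of atomic
    propositions holding at that position and delay is the time elapsed
    before the next position (a simplified, discrete view of a run).
    """
    elapsed = Fraction(0)
    for labels, delay in run:
        # Witness position: r holds and the elapsed time satisfies the bound.
        if r in labels and CMP[cmp](elapsed, a):
            return True
        # Otherwise p must keep holding before any later witness.
        if p not in labels:
            return False
        elapsed += Fraction(delay)
    return False

run = [({'p'}, 2), ({'p'}, 3), ({'r'}, 0)]
print(until_sub(run, 'p', 'r', '>=', 5))  # True: r holds after 5 t.u.
print(until_sub(run, 'p', 'r', '>', 5))   # False: the bound is strict
```

Within a position time also elapses continuously, so a full checker would consider intermediate dates; the scan above only illustrates the discrete part of the semantics.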
For configuration $s=(q,v,\beta)$, with $lab(s)=lab(q)$, the inductive definition is as follows: \[\begin{array}{lcl} s \models p &\ \textrm{iff}\ & p \in lab(s)\\ s \models \varphi \wedge \psi &\ \textrm{iff}\ & s \models \varphi \textrm{ and } s \models \psi \\ s \models \neg\varphi &\ \textrm{iff}\ & s \not\models \varphi \\ s \models \ensuremath{{\sf A\,}}\varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \psi_p &\ \textrm{iff}\ &\textrm{any execution }\rho \in Exec(s) \textrm{ is such that } \rho \models \varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \psi_p \\ s \models \ensuremath{{\sf E\,}}\varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \psi_p &\ \textrm{iff}\ &\textrm{there exists an execution }\rho \in Exec(s) \textrm{ such that } \rho \models \varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \psi_p \end{array}\] where \[\begin{array}{lcl} \rho \models \varphi_p \ensuremath{{\sf \,U\,}}sub{\bowtie a} \psi_p &\quad\textrm{iff}\ &\textrm{there exists a position }\pi \textrm{ along } \rho \textrm{ such that } Dur(\rho^{\leq\pi}) \bowtie a,\\ && s_{\pi} \models \psi_p, \textrm{ and for any position } \pi' <_\rho \pi,\, s_{\pi'} \models \varphi_p \end{array}\] Again $\mathcal{A} \models \psi$ if $s_0 \models \psi$. We now prove that: \begin{theorem}\label{thm:mcsubscript} Model checking TCTL$_p$ on ITA is decidable. \end{theorem} The proof consists in establishing procedures dedicated to the four different subcases: \begin{itemize} \item $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\leq a} r$ and $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{< a} r$ (Proposition~\ref{lem:mcsubscriptexistinf}), \item $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ and $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$ (Proposition~\ref{lem:mcsubscriptexistsup}), \item $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ and $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$ (Proposition~\ref{lem:mcsubscriptallsup}), \item $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\leq a} r$ and $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{< a} r$ (Proposition~\ref{lem:mcsubscriptallinf}), \end{itemize} where $p$ and $r$ are boolean combinations of atomic propositions. \begin{proposition}\label{lem:mcsubscriptexistinf} Model checking formulas $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\leq a} r$ and $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{< a} r$ over ITA is decidable in NEXPTIME and in NP if the number of clocks is fixed. \end{proposition} \begin{proof} First consider the case of ITA$_-$. Both formulas are variants of reachability, with the addition of a time bound. Therefore, the proof is similar to the one of Proposition~\ref{proposition:reachitamoins}. Again using Lemma~\ref{lemma:counting} on an ITA$_-$ with $E$ transitions, we can look for a run satisfying one of these formulas and bounded by $B = (E + n)^{3n}$, because shortening longer runs can be done while preserving the property. Thus, the decision procedure again consists in guessing a path and building a linear program.
The satisfaction of the formula is then checked by separately verifying on one side that the run satisfies $p \ensuremath{{\sf \,U\,}} r$, and on the other side that the sum of all delays $d_j$ satisfies the constraint in the formula. The complexity is the same as in Proposition~\ref{proposition:reachitamoins}. In the case of ITA, the exponential blowup of the transformation into an equivalent ITA$_-$ does not affect the complexity of the model-checking procedure above, as in Theorem~\ref{thm:optreach}. \qed \end{proof} Note that this problem can be compared with bounded reachability as studied in~\cite{brihaye11}. However, the models seem incomparable: while the variables (which have fixed non-negative rates in a state) are more powerful than interrupt clocks, the guards and updates are rectangular, which in particular forbids additive and diagonal constraints. \begin{proposition}\label{lem:mcsubscriptexistsup} Model checking formulas $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ and $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$ on an ITA is decidable in NEXPTIME and in NP if the number of clocks is fixed. \end{proposition} \begin{proof} Let $\mathcal{A}$ be an ITA$_-$ with $n$ interrupt clocks and $E$ transitions, and $B = (E + n)^{3n}$. The algorithm to decide whether $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ (or $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$) holds works as follows.
It nondeterministically guesses a path of length smaller than or equal to $B$ and builds the associated linear program (as in the proof of Proposition~\ref{proposition:reachitamoins}), then checks that: \begin{itemize} \item this path yields a run, which can be done by solving the linear program; \item there is a position $\pi$ in this run at which $r$ holds and before which $p$ holds continuously; \item the sum of delays before $\pi$ exceeds $a$ (or strictly exceeds it in the case of $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$). \end{itemize} If this first procedure fails, the algorithm nondeterministically guesses a path of length smaller than or equal to $2B+1$ and checks that: \begin{itemize} \item this path yields a run, which can be checked by a linear program as before, \item $p$ holds on this path, but not necessarily in the last state reached, \item $r$ holds in the last state of this path, \item either there is a transition $e$ of level $k$ that updates $x_k$ appearing twice and separated by a sequence $\sigma$ of transitions of level higher than $k$ during which time elapses (globally); this last part can be checked with a linear program on the delays corresponding to this subrun; \item or there is a transition $e$ of level $k$ that does not update $x_k$ appearing twice and separated by a sequence $\sigma$ of transitions of level higher than $k$ not updating $x_k$, during which time elapses at levels strictly higher than $k$ but not at level $k$. \end{itemize} The algorithm returns \emph{true} if one of the previous procedures succeeds, and \emph{false} otherwise. We shall now prove that this algorithm is both sound and complete. \paragraph{Soundness.} If the first procedure succeeds, then the path guessed is trivially a witness of $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ (or $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{>a} r$, accordingly).
If the second procedure succeeds, then a witness for the formula can be built from the path guessed. Indeed, the path guessed satisfies $p \ensuremath{{\sf \,U\,}} r$, but not necessarily $p \ensuremath{{\sf \,U\,}}sub{\geq a} r$. Assume the sequence $\sigma$ lets $\delta$ time units elapse ($\delta >0$); by repeating $\lceil\frac{a}{\delta}\rceil$ times\footnote{This sequence may be repeated once more in the case of $p \ensuremath{{\sf \,U\,}}sub{> a} r$.} the sequence $\sigma e$, we obtain a run satisfying $p \ensuremath{{\sf \,U\,}}sub{\geq a} r$. Note that since either $e$ updates the clock $x_k$ or there are no updates nor time elapsing at level $k$, and $\sigma$ happens at higher levels, the clock values in each instance of $\sigma e$ will be identical, hence this repetition will always be possible. \paragraph{Completeness.} Now consider a minimal witness $\rho$ of length $h$ for $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$. Since $\rho$ is minimal, $r$ holds in the last state of $\rho$ and $p$ holds (at least) in every position before. If $h\leq B$, then the first procedure will consider $\rho$. Otherwise $h>B$, which means that one of the following cases of Lemma~\ref{lemma:counting} happens: \begin{itemize} \item The same transition $e$ of level $k$ leaving $x_k$ unchanged appears twice separated by lazy or delayed transitions between states of level greater than or equal to $k$. In that case, the corresponding subrun can be replaced by a time step of the same duration, not changing the truth value of $p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ on this new smaller run, thus violating the minimality hypothesis. \item The same transition $e$ of level $k$ updating clock $x_k$ appears twice on the subrun $e_1 \dots e_{B+1}$, at positions $i$ and $j$. In that case we have to distinguish two subcases: either some time has elapsed between the two occurrences $e_i$ and $e_j$ of $e$, or the transitions were all instantaneous.
\begin{itemize} \item If no time has elapsed, the subrun between $e_i$ and $e_j$ can be removed without altering the truth value of $p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ on this new run, which is smaller than $\rho$. Hence there is a contradiction with the minimality hypothesis. \item Or some time elapsed during this subrun. Let $\rho$ be decomposed into $\rho_0 e_i \sigma e_j \rho_j$. Then by applying Lemma~\ref{lemma:counting} to $\rho_j$ there exists a run $\rho_j'$ of length smaller than or equal to $B$ such that $\rho'=\rho_0 e_i \sigma e_j \rho_j'$ is also a run. Note that $|\rho'| \leq 2B+1$, that the last state of $\rho'$ will be the same as the last state of $\rho$ hence will satisfy $r$, and that $p$ will also hold along $\rho'$. As a result $\rho'$ will be considered by the second procedure. \end{itemize} \item The same transition $e$ of level $k$ leaving $x_k$ unchanged appears twice, with no time elapsing at level $k$ between these occurrences. In that case, we again distinguish two subcases: \begin{itemize} \item either no time elapsed (globally); then the corresponding subrun can be removed, changing nothing in the rest of the execution nor in the satisfaction of $p \ensuremath{{\sf \,U\,}}sub{\geq a} r$, thus violating the minimality hypothesis on $\rho$; \item or time elapsed at higher levels and, by minimizing the subrun after the second occurrence as above, we deduce that the run will be considered by the second procedure. \end{itemize} \end{itemize} The completeness proof is similar in the case of $\ensuremath{{\sf E\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$. When $\mathcal{A}$ is an ITA, the exponential blowup of the transformation from ITA to ITA$_-$ does not affect the above complexity. \qed \end{proof} While a witness is a finite path in the previous cases, it is potentially infinite for $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ or $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$.
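The soundness construction in the proof above repeats the cycle $\sigma e$ until enough time has elapsed. The number of copies needed can be computed exactly; a small sketch (the function name and argument encoding are illustrative):

```python
from fractions import Fraction
from math import ceil

def copies_needed(a, delta, strict=False):
    """Copies of the cycle sigma*e (each elapsing delta > 0 time units)
    needed so that the accumulated time reaches the bound a, i.e.
    ceil(a/delta), with one extra copy when the bound is strict and a
    is an exact multiple of delta."""
    a, delta = Fraction(a), Fraction(delta)
    k = ceil(a / delta)
    if strict and k * delta == a:
        k += 1
    return k

print(copies_needed(5, 2))               # 3: three copies elapse 6 >= 5 t.u.
print(copies_needed(4, 2, strict=True))  # 3: two copies give exactly 4, not > 4
```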
The generation of an infinite run relies on the (nondeterministic) exploration of the class graph built in Section~\ref{sec:regular}, thus has a much greater computational complexity. \begin{proposition}\label{lem:mcsubscriptallsup} Model checking formulas $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ and $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$ on an ITA is decidable in 2-EXPTIME and in co-NP if the number of clocks is fixed. \end{proposition} \begin{proof} We consider an ITA $\mathcal{A}$ with $n$ interrupt clocks, $E$ transitions and the bound $B = (n+2)^{12 b \cdot E \cdot n^3}$ where $b$ is the number of bits coding the constants in $\mathcal{A}$. The algorithm to verify $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ (or $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$) works as follows. It nondeterministically guesses a path of length smaller than or equal to $B$, builds its associated linear program, and checks that: \begin{itemize} \item this path yields a run $\rho$ (by solving the linear program); \item this path is maximal, meaning that no transition can be fired from the last configuration of the run; \item there is a position $\pi$ in $\rho$ occurring at a time strictly less than\footnote{Less than or equal to $a$ in the case of $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$.} $a$ such that \begin{description} \item[Case 1:] either $r$ does not hold from $\pi$ on (see \figurename~\ref{fig:finiteneverr}), \item[Case 2:] or there is a position $\pi'$ where neither $p$ nor $r$ holds, and $r$ does not hold between $\pi$ and $\pi'$ (see \figurename~\ref{fig:finitenotpandnotr}).
\end{description} \end{itemize} \begin{figure} \caption{Proof of Proposition~\ref{lem:mcsubscriptallsup}, Case 1} \label{fig:finiteneverr} \end{figure} \begin{figure} \caption{Proof of Proposition~\ref{lem:mcsubscriptallsup}, Case 2} \label{fig:finitenotpandnotr} \end{figure} If this first procedure fails, then the algorithm guesses: \begin{itemize} \item a class $K$ and a cycle $\cal C$ starting from $K$ in the class graph (without building either the graph or the cycle), such that $\cal C$ contains at least a discrete step and only traverses classes where $\neg r$ holds; \item a path in the automaton of length smaller than or equal to the bound $B$; \end{itemize} and checks that: \begin{itemize} \item the path does yield a run $\rho$, which reaches a configuration $(q,v,\beta)$ in class $K$ (through a linear program); \item there is a position $\pi$ in $\rho$ occurring at time strictly less than\footnote{Less than or equal to $a$ in the case of $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$.} $a$ after which $r$ no longer holds. \end{itemize} Remark that the procedure cannot use solely the class graph, since the abstraction is not precise enough to check the existence of position $\pi$. \paragraph{Soundness.} We prove that the algorithm is sound: when one of the procedures succeeds, there exists a counterexample for formula $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ (or $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{> a} r$). In the case of the first procedure, it is trivial that the guessed run does not satisfy $p \ensuremath{{\sf \,U\,}}sub{\geq a} r$ (or $p \ensuremath{{\sf \,U\,}}sub{> a} r$). In the case of the second one, we show that there exists an infinite counterexample. Consider configuration $(q,v,\beta)$, which is reachable by $\rho$.
Since $(q,v,\beta)$ belongs to class $K$, for any path $\sigma$ starting from $K$ in the class graph, there is a run in the automaton starting from $(q,v,\beta)$ traversing configurations which belong to the classes traversed by $\sigma$. Since there is a cycle in the class graph, there is an infinite path in the class graph (iterating on this cycle), so there exists an infinite run in the ITA. Also, since $\neg r$ holds in the infinite path of the class graph, it holds in the run of the ITA, and the run is a counterexample for the formula. \paragraph{Completeness.} Assume there exists a finite counterexample $\rho$. Let $\mathcal{A}'$ be the ITA$_-$ accepting the same timed language as $\mathcal{A}$ and let $E'$ denote the number of its transitions. Let $B' = (E'+2n)^{3n}$ be the bound of Lemma~\ref{lemma:counting}; we have $B'\leq B$. If $|\rho| \leq B$, it will be detected by procedure 1. Otherwise let $\rho'$ be the run corresponding to $\rho$ in $\mathcal{A}'$. This run accepts the same timed word as $\rho$ and its sequence of traversed states can be projected onto the sequence of corresponding states of $\rho$, by omitting states of the form $(q^-,-)$: any subsequence $(q^+_0,-) \rightarrow \cdots \rightarrow (q^+_{m-1},-) \rightarrow (q^-_m,-) \rightarrow (q^+_m,-)$ in $\rho'$ corresponds to the subsequence $q_0 \rightarrow \cdots \rightarrow q_{m-1} \rightarrow q_m$ in $\rho$. Note that $|\rho| \leq |\rho'|$ and that $\rho'$ is also a counterexample for the formula (although in $\mathcal{A}'$). Since $|\rho'| > B \geq B'$, one of the cases of Lemma~\ref{lemma:counting} occurs. By removing transitions and maybe replacing them by some time elapsing, as in the proof of Proposition~\ref{lem:mcsubscriptexistsup}, a counterexample $\sigma'$ of size $|\sigma'| \leq B' \leq B$ exists in $\mathcal{A}'$. Now consider the run $\sigma$ in $\mathcal{A}$ which corresponds to $\sigma'$.
We have $|\sigma| \leq |\sigma'| \leq B' \leq B$ and $\sigma$ is still a counterexample. Therefore $\sigma$ can be guessed by the first procedure. If there exists an infinite counterexample $\rho$, consider its counterpart $\sigma$ in the class graph. This counterpart is also infinite. More precisely, $\sigma$ contains an infinite number of discrete transitions. Since $\sigma$ traverses a finite number of classes, it contains a cycle $\cal C$ with at least one discrete transition. Choose any class $K$ of this cycle and consider the prefix $\rho_0$ of $\rho$ leading to a configuration in $K$. As in the case of a finite counterexample, there exists $\rho_0'$ of length smaller than $B$ reaching the same configuration. All of $\cal C$, $K$ and $\rho_0'$ can be guessed by the second procedure, which will therefore succeed. Procedure 1 operates in NEXPTIME (guessing a path of length $B$ and solving a linear program of size polynomial w.r.t. $B$). Procedure 2 consists in looking for a specific cycle in the class graph, which can be done in time polynomial w.r.t. the size of the graph, thus in 2-EXPTIME. The case where the number of clocks is fixed is handled as usual. \qed \end{proof} For formulas in case 4, a specific procedure can be avoided, since the algorithms of cases 2 and 3 can be reused: \begin{proposition}\label{lem:mcsubscriptallinf} Model checking formulas $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\leq a} r$ and $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{< a} r$ on an ITA is decidable in 2-EXPTIME and in co-NP if the number of clocks is fixed.
\end{proposition} \begin{proof} Notice that $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\leq a} r = (\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq0} r) \wedge \neg (\ensuremath{{\sf E\,}} \neg r \ensuremath{{\sf \,U\,}}sub{> a} \mathbf{t})$, and $\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{< a} r = (\ensuremath{{\sf A\,}} p \ensuremath{{\sf \,U\,}}sub{\geq0} r) \wedge \neg (\ensuremath{{\sf E\,}} \neg r \ensuremath{{\sf \,U\,}}sub{\geq a} \mathbf{t})$. \qed \end{proof} \section{Language properties}\label{sec:exp} In this section, we compare the expressive power of the previous models with respect to language acceptance. Recall that TL is strictly contained in CRTL. We prove that: \begin{theorem} The families TL and ITL are incomparable. The families CRTL and ITL are incomparable. \end{theorem} \subsection{ITL is not contained in TL, nor in CRTL} The next proposition shows that ITA cannot be reduced to TA or CRTA. Observe that the automata used in the proof belong to ITA$_-$. Also, the language given for the first point of the proposition is very simple since it contains only words of length 2. \begin{proposition}\label{prop:comp}~\\ 1. There exists a language in ITL whose words have bounded length which is not in TL.\\ 2. There exists a language in ITL which is not in CRTL. \end{proposition} \begin{proof} To prove the first point, consider the ITA $\mathcal{A}_3$ in Fig.~\ref{fig:ctrex1}. Suppose, by contradiction, that $L_3 = \mathcal{L}(\mathcal{A}_3)$ is accepted by some timed automaton $\mathcal{B}$ (possibly with $\varepsilon$-transitions). Note that since we consider timed languages, we cannot assume that the granularity of $\mathcal{B}$ is $1$. Let $d$ be the granularity of $\mathcal{B}$, \textit{i.e.} the gcd of all rational constants appearing in the constraints of $\mathcal{B}$ (thus each such constant can be written $k/d$ for some integer $k$).
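In the form used in the proof, where every constant of $\mathcal{B}$ is written $k/d$, such a $d$ can be computed as the lcm of the denominators of the constants in reduced form. A small illustrative helper (the function name is ours, not the paper's):

```python
from fractions import Fraction
from math import lcm

def granularity_denominator(constants):
    """Smallest d such that every constant equals k/d for an integer k:
    the lcm of the denominators of the constants in reduced form."""
    return lcm(*(Fraction(c).denominator for c in constants))

# Constraints with constants 1/2 and 3/4 give d = 4: they read 2/4 and 3/4.
d = granularity_denominator([Fraction(1, 2), Fraction(3, 4)])
print(d)  # 4
```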
Then the word $w=(a, 1-1/d)(b, 1-1/2d)$ is accepted by $\mathcal{B}$ through a finite path. Consider now the automaton $\mathcal{B}'$ in TA, consisting of this single path (where states may have been renamed). We have $w \in \mathcal{L}(\mathcal{B}') \subseteq \mathcal{L}(\mathcal{B})=L_3$ and $\mathcal{B}'$ contains no cycle. Using the result in~\cite{berard98}, we can build a timed automaton $\mathcal{B}''$ without $\varepsilon$-transitions and with the same granularity $d$ such that $\mathcal{L}(\mathcal{B}'')=\mathcal{L}(\mathcal{B}')$, so that $w \in \mathcal{L}(\mathcal{B}'')$. The accepting path for $w$ in $\mathcal{B}''$ contains two transitions: $p_0 \xrightarrow{\varphi_1, a, r_1} p_1 \xrightarrow{\varphi_2, b, r_2} p_2$. After firing the $a$-transition, all clock values are $1- 1/d$ or $0$, thus all clock values are $1-1/2d$ or $1/2d$ when the $b$-transition is fired. Let $x \bowtie c$ be an atomic proposition appearing in $\varphi_2$. Since the granularity of $\mathcal{B}''$ is $d$, the $\bowtie$ operator cannot be $=$ otherwise the constraint would be $x = 1/2d$ or $x= 1- 1/2d$. If the constraint is $x<c$, $x\leq c$, $x>c$, or $x\geq c$, the path will also accept some word $(a, 1-1/d)(b, t)$ for some $t \neq 1-1/2d$. This is also the case if the constraint $\varphi_2$ is true. We thus obtain a contradiction with $\mathcal{L}(\mathcal{B}'') \subseteq L_3$, which ends the proof. \begin{figure} \caption{An ITA $\mathcal{A}_3$} \label{fig:ctrex1} \end{figure} To prove the second point, consider the language: $$L_4= \{ (c,\tau) (c, 2\tau) \ldots (c, n\tau) \mid n \in \ensuremath{\mathbb{N}}, \tau > 0 \}$$ accepted by the ITA $\mathcal{A}_4$ in Fig.~\ref{fig:ctrex2}. This language cannot be accepted by a CRTA (see~\cite{Zielonka}).
\qed \begin{figure} \caption{An ITA $\mathcal{A}_4$} \label{fig:ctrex2} \end{figure} \end{proof} \subsection{TL is not contained in ITL} \label{subsec:incomp} We now prove that there exists a language in TL that does not belong to ITL. Let $L_5$ be the language defined by \begin{eqnarray*} L_5=\big\{ (a,\tau_1)(b,\tau_2) &\ldots& (a,\tau_{2p+1})(b,\tau_{2p+2}) \mid p \in \ensuremath{\mathbb{N}},\\ &&\forall 0\leq i\leq p,\ \tau_{2i+1}=i+1 \mbox{ and } i+1<\tau_{2i+2}<i+2, \\ &&\forall 1\leq i\leq p,\ \tau_{2i+2} - \tau_{2i+1}< \tau_{2i} - \tau_{2i-1} \big\} \end{eqnarray*} Hence, the untimed language of $L_5$ is $(ab)^*$, there is an occurrence of $a$ at each time unit, and each occurrence of $b$ comes closer to the preceding $a$ than the previous one did. This language is in TL as can be checked on the TA $\mathcal{A}_5$ of \figurename~\ref{fig:ctrex3} (first proposed in~\cite{alur94a}). \begin{figure} \caption{A timed automaton $\mathcal{A}_5$} \label{fig:ctrex3} \end{figure} \begin{proposition} The language $L_5$ does not belong to ITL. \end{proposition} \begin{proof} Assume, by contradiction, that $L_5$ belongs to ITL. Then $L_5$ is accepted by an ITA$_-$ $\mathcal{A}$ with $n$ clocks and $E$ transitions. Let $B = (E+n)^{3n}$ and consider the timed word $w = (a,\tau_1) (b,\tau_2) \cdots (a,\tau_{2B+1}) (b,\tau_{2B+2}) \in L_5$. Word $w$ is accepted by a run $\rho$ of $\mathcal{A}$, which can be assumed of minimal size. However, we know that $|\rho| > B$, so one of the three cases of Lemma~\ref{lemma:counting} occurs in the first $B$ transitions. \begin{itemize}[topsep=-\baselineskip] \item Suppose a transition $e$ of level $k$ that updates $x_k$ appears twice, separated by a subrun $\sigma$ of level greater than or equal to $k$. Remark that the valuations after the first and the second occurrence of $e$ are identical. We distinguish several subcases, depending on the word read along $\sigma e$.
\begin{itemize} \item If $\sigma e$ reads the empty word $\varepsilon$, we write $\delta$ for the time spent during $\sigma e$. If $\delta = 0$, then $\sigma e $ can be deleted without affecting either the remainder of the run or the accepted word, which contradicts the minimality of $\rho$. If $\delta \geq 1$, then some interval $[i,i+1]$ does not contain any $b$, which contradicts the definition of $L_5$. Otherwise, $0 < \delta < 1$. By deleting $\sigma e$, we obtain an execution $\rho'$ (accepted by $\mathcal{A}$) in which the suffix after $e$ is shifted by $\delta$. Therefore the following occurrence of letter $a$, which appeared in $\rho$ at date $i \in \ensuremath{\mathbb{N}}\setminus\{0\}$, appears in $\rho'$ at date $i - \delta$ which is not integral. So the word accepted by $\rho'$ is not in $L_5$, which is a contradiction. \item If $\sigma e$ reads more $a$s than $b$s or more $b$s than $a$s, by deleting $\sigma e$ we obtain a run accepting a word whose untiming is not in $(ab)^*$ and thus does not belong to $L_5$. \item If $\sigma e$ reads as many $a$s as $b$s (and both letters at least once), by duplicating $\sigma e$ we obtain a run accepting a word in which the same duration between an $a$ and the following $b$ is repeated, thus violating the definition of $L_5$. \end{itemize} \item Suppose a delayed or lazy transition $e$ of level $k$ occurs twice, separated by a subrun $\sigma$ of level greater than or equal to $k$, such that $\sigma e$ does not update $x_k$. Then we can replace $e \sigma$ by a time step of the same duration and obtain a new run $\rho'$, accepted by $\mathcal{A}$. \begin{itemize} \item If $e \sigma$ reads $\varepsilon$, then $\rho'$ contradicts the minimality of $\rho$. \item If $e \sigma$ reads the word $b$, then $\rho'$ accepts a word where $a$ and $b$ do not alternate, thus not in $L_5$. \item If $e \sigma$ reads at least an $a$, then $\rho'$ accepts a word with no $a$ at a given integral date, therefore not in $L_5$.
\end{itemize} \item Otherwise, a transition $e$ of level $k$ appears twice separated by a subrun $\sigma$ of level greater than or equal to $k$, such that $\sigma e$ neither updates $x_k$ nor lets time elapse at level $k$. The same disjunction as in the case of an update of $x_k$ can be applied, since $\sigma e$ can either be deleted or duplicated. \end{itemize} \qed \end{proof} Note that the feature preventing $L_5$ from being in ITL lies in the decreasing delays between the $a$'s and their immediately following $b$'s. A language in ITL can record $k$ different constant delays, using $k+1$ clocks. For instance on the alphabet $\Sigma=\{a_1, \ldots, a_k\}$, the language \begin{eqnarray*} M_k &=& \{(a_1,\tau_1) \ldots (a_k, \tau_k)(a_1,\tau_1+1) \ldots (a_k, \tau_k+1) \ldots (a_1,\tau_1+n) \ldots (a_k, \tau_k+n) \\&&\qquad\mid n \geq 1, \tau_1 \leq \tau_2 \leq \cdots \leq \tau_k \leq \tau_1 + 1\} \end{eqnarray*} is accepted by an ITA$_-$ with $k+1$ clocks. \figurename~\ref{fig:ex4} illustrates the case where $k=3$, with all states lazy. We conjecture that $M_k$ cannot be accepted by an ITA with $k$ clocks. \begin{figure} \caption{An interrupt timed automaton for $M_3$} \label{fig:ex4} \end{figure} \subsection{Closure under complementation and intersection} \label{subsec:complementation} \begin{proposition} ITL is not closed under complementation. \end{proposition} \begin{proof} We prove that the complement $L_5^c$ of $L_5$ belongs to ITL. A timed word belongs to $L_5^c$ iff one of the following assertions holds: \begin{enumerate} \item An $a$ occurs at a date which is not a time unit. \item An $a$ is missing at some time unit that precedes some letter of the word. \item A $b$ occurs at a time unit. \item There is no $b$ in an interval $[i,i+1]$ with an $a$ at time $i \in \ensuremath{\mathbb{N}}$. \item There are two $b$s in an interval $[i,i+1]$ with an $a$ at time $i \in \ensuremath{\mathbb{N}}$.
\item There is an occurrence of $abab$ such that the time difference between the first two occurrences is smaller than or equal to the time difference between the last two occurrences. \end{enumerate} Since ITL is trivially closed under union, it is enough to prove that each assertion from the set above can be expressed by an ITA. The first five assertions are straightforwardly modeled by an ITA with a single clock (and $\varepsilon$-transitions) and we present in \figurename~\ref{fig:cas6} an ITA with two clocks corresponding to the last one.\qed \begin{figure} \caption{An ITA for the language defined by assertion $6$} \label{fig:cas6} \end{figure} \end{proof} \begin{proposition} ITL is not closed under intersection. \end{proposition} \begin{proof} $L_5$ is the intersection of $L'_5$ and $L''_5$ defined as follows: \begin{itemize} \item The words of $L'_5$ are $(a,1)(b,\tau_1)\ldots(a,n)(b,\tau_n)$, with $i<\tau_i<i+1$ for all $i$, $1 \leq i \leq n$. \item The words of $L''_5$ are $(a,\tau'_1)(b,\tau_1)\ldots(a,\tau'_n)(b,\tau_n)$, with $\tau_{i+1}-\tau_i<1$ for all $i$, $1 \leq i \leq n-1$. \end{itemize} Both languages are accepted by one-clock ITA (which are also one-clock TA). In the case of $L'_5$, (1) the clock is reset at every occurrence of an $a$; (2) an $a$ must occur when the clock is 1; and (3) a single $b$ must occur when the clock is in $(0,1)$. In the case of $L''_5$, (1) the clock is reset at every occurrence of a $b$; (2) a $b$ must occur when the clock is less than 1, except for the first $b$; and (3) a single $a$ must occur before every occurrence of a $b$. \qed \end{proof} \section{Combining ITA with CRTA}\label{sec:combination} In the previous section, we proved that the classes of languages defined by ITA and CRTA are incomparable. Here we provide a class containing both ITL and CRTL. In order to do so, we combine the models of ITA and CRTA.
\subsection{An undecidable product}\label{subsec:mctctlundec} The first kind of combination possible is through a synchronized product between an ITA and a CRTA. However, this turns out to be too powerful a model, since combining even a TA with an ITA yields undecidability of the reachability problem. \begin{definition} If $\mathcal{I}=\langle\Sigma, Q_\mathcal{I}, q_0^\mathcal{I}, F_\mathcal{I},pol_\mathcal{I}, X, \lambda_\mathcal{I}, \Delta_\mathcal{I}\rangle$ is an ITA (propositional variables and labeling are omitted) and $\mathcal{T}=\langle\Sigma, Q_\mathcal{T}, q_0^\mathcal{T}, F_\mathcal{T}, Y,\Delta_\mathcal{T}\rangle$ is a TA, then $\mathcal{I}\times\mathcal{T} = \langle\Sigma, Q_\mathcal{I}\times Q_\mathcal{T}, (q_0^\mathcal{I},q_0^\mathcal{T}), F,pol, X, Y, \lambda, \Delta\rangle$ is an ITA$\times$TA where: \begin{itemize} \item $pol(q_\mathcal{I},q_\mathcal{T}) = pol_\mathcal{I}(q_\mathcal{I})$ and $\lambda(q_\mathcal{I},q_\mathcal{T}) = \lambda_\mathcal{I}(q_\mathcal{I})$ are lifted from the ITA; \item if $q_\mathcal{I} \xrightarrow{\varphi,a,u} q_\mathcal{I}' \in \Delta_\mathcal{I}$ and $q_\mathcal{T} \xrightarrow{\psi,a,v} q_\mathcal{T}' \in\Delta_\mathcal{T}$, then \[(q_\mathcal{I},q_\mathcal{T}) \xrightarrow{\varphi\wedge\psi,a,u\wedge v} (q_\mathcal{I}',q_\mathcal{T}') \in \Delta.\] \end{itemize} \end{definition} The semantics of an ITA$\times$TA is a transition system over configurations \[\left\{(q,v,w,\beta) \mid q\in Q, v \in \ensuremath{\mathbb{R}}^X, w \in \ensuremath{\mathbb{R}}^Y, \beta \in\{\top,\bot\}\right\}.\] Discrete steps are defined as in ITA (see Definition~\ref{def:semantics}). In time steps, clocks of $X$ evolve as in an ITA and clocks of $Y$ as in a TA.
More precisely, a time step of duration $d>0$ is defined by $(q,v,w,\beta) \xrightarrow{d} (q,v',w',\top)$ where $v'(x_{\lambda(q)})=v(x_{\lambda(q)})+ d$ and $v'(x)=v(x)$ for any other clock $x \in X$, and $w'(y)=w(y)+d$ for $y\in Y$. \begin{theorem}\label{prop:reachundec} Reachability is undecidable in the class ITA$\times$TA. \end{theorem} \begin{proof}[Sketch] The proof consists in encoding a two-counter machine into an ITA$\times$TA. Two classical clocks $\{y_c, y_d\}$ keep the values of the counters, a counter value $n$ being encoded by the clock value $1-\frac1{2^n}$. Three interrupt clocks are used to change the values of the classical clocks through appropriate resets. The ITA$\times$TA is defined through basic modules, corresponding to the four possible actions (incrementation or decrementation of $c$ or $d$). Each module is itself composed of submodules: the first one compares the value of $c$ with that of $d$; the second one performs the action, and depends on the order between $c$ and $d$. For example, the submodule incrementing $c$ when $c \geq d$ is depicted in \figurename~\ref{fig:modincremc}. In this module, the value\footnote{Or rather the complement to $1$ of the value.} of the classical clocks is copied into interrupt clocks, updated thanks to the linear updates allowed by ITA. The new values are then copied back into the classical clocks by resetting them at the appropriate moments. The valuations of the clocks during an execution of this module are given in \tablename~\ref{tab:valhorloges}. Note that the policies are used in this product but they could be replaced by classical clocks.
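As a sanity check, the clock bookkeeping of \tablename~\ref{tab:valhorloges} can be replayed with exact rational arithmetic. The sketch below is ours and is not part of the construction: the transitions $b_\ell^1,\dots,b_\ell^5$ are folded into explicit delay-and-update steps, and the function name is arbitrary.

```python
from fractions import Fraction as F

def increment_c(n, p):
    """Replay the clock updates of the module incrementing c when c >= d,
    for counter values c = n >= d = p (a counter value k is encoded by
    the clock value 1 - 1/2^k)."""
    y_c, y_d = 1 - F(1, 2**n), 1 - F(1, 2**p)
    # (l, r_1, >): let time elapse until y_c = 1; interrupt clock x_1
    # starts at 0 and records the delay 1/2^n.
    d1 = 1 - y_c
    y_c, y_d, x1 = y_c + d1, y_d + d1, d1
    # b_l^1, b_l^2: enter level 2 and copy x_1 into x_2 (linear update).
    x2 = x1
    # (l, r_3, >): elapse until y_d = 1; among the interrupt clocks only
    # x_2 runs, so it ends up holding 1/2^p.
    d2 = 1 - y_d
    y_c, y_d, x2 = y_c + d2, y_d + d2, x2 + d2
    # b_l^3: reset y_c and enter level 3 with x_3 = 0.
    y_c, x3 = F(0), F(0)
    # (l, r_4, >): elapse until x_3 = x_2 - x_1/2 = 1/2^p - 1/2^(n+1).
    d3 = x2 - x1 / 2
    y_c, y_d, x3 = y_c + d3, y_d + d3, x3 + d3
    # b_l^4: reset y_d.
    y_d = F(0)
    # (l, r_5, >): elapse for 1 - 1/2^p, i.e. until x_3 = 1 - x_1/2.
    d4 = (1 - x1 / 2) - x3
    y_c, y_d = y_c + d4, y_d + d4
    # b_l^5: back to state l'; y_c now encodes n+1, y_d still encodes p.
    return y_c, y_d

print(increment_c(3, 1))  # (Fraction(15, 16), Fraction(1, 2))
```

Starting from $c=3$, $d=1$, the run ends with $y_c = \frac{15}{16} = 1 - \frac{1}{2^4}$ and $y_d = \frac{1}{2}$ unchanged, as claimed.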
\begin{figure} \caption{Module $\mathcal{A}^{c++}_{c \geq d}(\ell,\ell')$ incrementing $c$ when $c \geq d$} \label{fig:modincremc} \end{figure} \begin{table} \[ \begin{array}{|c|} \hline \mathbf{(\ell,r_1,>)}\\ \hline y_c = 1-\frac{1}{2^n}\\ y_d = 1-\frac{1}{2^p}\\ x_1 = 0\\ \color{black!50}{x_2 = 0}\\ \color{black!50}{x_3 = 0}\\ \hline \end{array} \stackrel{\frac{1}{2^n}}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{(\ell,r_1,>)}\\ \hline y_c = 1\\ y_d = 1-\frac{1}{2^p}+\frac{1}{2^n}\\ x_1 = \frac{1}{2^n}\\ \color{black!50}{x_2 = 0}\\ \color{black!50}{x_3 = 0}\\ \hline \end{array} \stackrel{b_\ell^1}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{(\ell,r_2,>)}\\ \hline y_c = 1\\ y_d = 1-\frac{1}{2^p}+\frac{1}{2^n}\\ x_1 = \frac{1}{2^n}\\ x_2 = 0\\ \color{black!50}{x_3 = 0}\\ \hline \end{array} \stackrel{b_\ell^2}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{(\ell,r_3,>)}\\ \hline y_c = 1\\ y_d = 1-\frac{1}{2^p}+\frac{1}{2^n}\\ x_1 = \frac{1}{2^n}\\ x_2 = \frac{1}{2^n}\\ \color{black!50}{x_3 = 0}\\ \hline \end{array} \]\[ \xrightarrow{\frac{1}{2^p} - \frac{1}{2^n}} \begin{array}{|c|} \hline \mathbf{(\ell,r_3,>)}\\ \hline y_c = 1 + \frac{1}{2^p} - \frac{1}{2^n}\\ y_d = 1\\ x_1 = \frac{1}{2^n}\\ x_2 = \frac{1}{2^p}\\ \color{black!50}{x_3 = 0}\\ \hline \end{array} \stackrel{b_\ell^3}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{(\ell,r_4,>)}\\ \hline y_c = 0\\ y_d = 1\\ x_1 = \frac{1}{2^n}\\ x_2 = \frac{1}{2^p}\\ x_3 = 0\\ \hline \end{array} \xrightarrow{\frac{1}{2^p} - \frac{1}{2^{n+1}}} \begin{array}{|c|} \hline \mathbf{(\ell,r_4,>)}\\ \hline y_c = \frac{1}{2^p} - \frac{1}{2^{n+1}}\\ y_d = 1 + \frac{1}{2^p} - \frac{1}{2^{n+1}}\\ x_1 = \frac{1}{2^n}\\ x_2 = \frac{1}{2^p}\\ x_3 = \frac{1}{2^p} - \frac{1}{2^{n+1}}\\ \hline \end{array} \]\[ \stackrel{b_\ell^4}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{(\ell,r_5,>)}\\ \hline y_c = \frac{1}{2^p} - \frac{1}{2^{n+1}}\\ y_d = 0\\ x_1 = \frac{1}{2^n}\\ x_2 = \frac{1}{2^p}\\ x_3 = \frac{1}{2^p} - \frac{1}{2^{n+1}}\\ \hline \end{array} \stackrel{1 -
\frac{1}{2^p}}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{(\ell,r_5,>)}\\ \hline y_c = 1 -\frac{1}{2^{n+1}}\\ y_d = 1 - \frac{1}{2^p}\\ x_1 = \frac{1}{2^n}\\ x_2 = \frac{1}{2^p}\\ x_3 = 1 - \frac{1}{2^{n+1}}\\ \hline \end{array} \stackrel{b_\ell^5}{\longrightarrow} \begin{array}{|c|} \hline \mathbf{\ell'}\\ \hline y_c = 1 -\frac{1}{2^{n+1}}\\ y_d = 1 - \frac{1}{2^p}\\ x_1 = \frac{1}{2^n}\\ \color{black!50}{x_2 = 0}\\ \color{black!50}{x_3 = 0}\\ \hline \end{array} \] \caption[Clock values in the unique run of $\mathcal{A}^{c++}_{c \geq d}(\ell,\ell')$]{Clock values in the unique run of $\mathcal{A}^{c++}_{c \geq d}(\ell,\ell')$. Irrelevant values of interrupt clocks are greyed.} \label{tab:valhorloges} \end{table} The detailed proof can be found in Appendix~\ref{app:proofreachundec}. \end{proof} Other proofs of undecidability for hybrid systems mixing clocks and stopwatches have been developed (see for instance~\cite[Theorem 4.1]{hen98} for a construction with a single stopwatch and $5$ clocks). While this construction could have been adapted to our setting, this would have led to an ITA$\times$TA with $5$ classical clocks and $2$ interrupt clocks. \subsection{A decidable product of ITA and CRTA: ITA$^+$} \label{sec:comb} We define another synchronized product between ITA and CRTA, in the spirit of multi-level systems, for which reachability is decidable. This class, denoted by ITA$^+$, includes a set of clocks at an implicit additional level $0$, corresponding to a basic task described as in a CRTA. In the definition below, since no confusion can occur, we aggregate the coloring function of CRTA and the level function of ITA, into a single function $\lambda$. 
\begin{definition}[ITA$^+$] An \emph{extended interrupt timed automaton} is a tuple $\mathcal{A}=\langle Q, q_0,$ $F, pol, X\uplus Y, \Sigma, \Omega, \lambda, up, low, vel, \Delta\rangle$, where: \begin{itemize} \item $Q$ is a finite set of states, $q_0$ is the initial state and $F \subseteq Q$ is the set of final states. \item $pol: Q \mapsto \{L,U,D\}$ is the timing policy of states. \item $X=\{x_1, \ldots, x_n\}$ consists of $n$ interrupt clocks and $Y$ is a set of basic clocks. \item $\Sigma$ is a finite alphabet. \item $\Omega$ is a set of colors, and the mapping $\lambda : Q \uplus Y \mapsto \{1, \ldots, n\} \uplus \Omega$ associates with each state its level or its color, with $x_{\lambda(q)}$ the active clock in state $q$ when $\lambda(q)\in \ensuremath{\mathbb{N}}$, and $\lambda(y) \in \Omega$ for $y \in Y$. For every state $q \in \lambda^{-1}(\Omega)$, the policy is $pol(q)=L$. \item $up$ and $low$ are mappings from $Y$ to $\ensuremath{\mathbb{Q}}$ with the same constraints as CRTA (see Definition~\ref{def:crta}), and $vel: Q \mapsto \ensuremath{\mathbb{Q}}$ is the clock rate, with $\lambda(q) \notin \Omega \Rightarrow vel(q)=1$. \item $\Delta \subseteq Q \times [\mathcal{C}(X\cup Y) \times (\Sigma \cup \{\varepsilon\}) \times \mathcal{U}(X\cup Y)] \times Q$ is the set of transitions. Let $q \xrightarrow{\varphi, a, u} q'$ in $\Delta$ be a transition. \begin{enumerate} \item The guard $\varphi$ is of the form $\varphi_1 \wedge\varphi_2$ with the following conditions. If $\lambda(q)\in \ensuremath{\mathbb{N}}$, $\varphi_1$ is an ITA guard on $X$ and otherwise $\varphi_1=true$. Constraint $\varphi_2$ is a CRTA guard on $Y$ (also possibly equal to $true$). \item The update $u$ is of the form $u_1 \wedge u_2$ fulfilling the following conditions. Assignments from $u_1$ update the clocks in $X$ with the constraints of ITA when $\lambda(q)$ and $\lambda(q')$ belong to $\ensuremath{\mathbb{N}}$.
Otherwise it is a global reset of the clocks in $X$. Assignments from $u_2$ update clocks from $Y$, as in CRTA. \end{enumerate} \end{itemize} \end{definition} Any ITA can be viewed as an ITA$^+$ with $Y$ empty and $\lambda(Q) \subseteq \{1, \ldots, n\}$, and any CRTA can be viewed as an ITA$^+$ with $X$ empty and $\lambda(Q) \subseteq \Omega$. Class ITA$^+$ combines both models in the following sense. When the current state $q$ is such that $\lambda(q) \in \Omega$, the ITA part is inactive. Otherwise, it behaves as an ITA but with additional constraints on the clocks of the CRTA involved by the extended guards and updates. The semantics of ITA$^+$ is defined as usual but now takes into account the velocity of CRTA clocks. \begin{definition}[Semantics of ITA$^+$] The semantics of an automaton $\mathcal{A}$ in ITA$^+$ is defined by the transition system $\mathcal{T}_{\mathcal{A}}= (S, s_0, \rightarrow)$. The set $S$ of configurations is $\left\{ (q,v,\beta) \mid q \in Q, \ v \in \ensuremath{\mathbb{R}}^{X\cup Y}, \ \beta \in \{\top,\bot\} \right\}$, with initial configuration $(q_0, \vect{0}, \bot)$. An accepting configuration of $\mathcal{T}_{\mathcal{A}}$ is a configuration $(q,v,\beta)$ with $q \in F$. The relation $\rightarrow$ on $S$ consists of time steps and discrete steps, the definition of the latter being the same as before: \begin{description} \item[Time steps:] Only the active clocks in a state can evolve, all other clocks being suspended. For a state $q$ with $\lambda(q) \in \ensuremath{\mathbb{N}}$ (the active clock is $x_{\lambda(q)}$), a time step of duration $d>0$ is defined by $(q,v,\beta) \xrightarrow{d} (q, v',\top)$ with $v'(x_{\lambda(q)})=v(x_{\lambda(q)})+ d$ and $v'(x)=v(x)$ for any other clock $x$.
For a state $q$ with $\lambda(q) \in \Omega$ (the active clocks are $Y'=Y \cap \lambda^{-1}(\lambda(q))$), a time step of duration $d>0$ is defined by $(q,v,\beta) \xrightarrow{d} (q, v',\top)$ with $v'(y)=v(y)+ vel(q)d$ for $y \in Y'$ and $v'(x)=v(x)$ for any other clock $x$. In all states, time steps of duration $d=0$ leave the system $\mathcal{T}_{\mathcal{A}}$ in the same configuration. When $pol(q)=U$, only time steps of duration $0$ are allowed in $q$. \item[Discrete steps:] A discrete step $(q, v, \beta) \xrightarrow{a} (q', v', \bot)$ occurs if there exists a transition $q \xrightarrow {\varphi, a, u} q'$ in $\Delta$ such that $v \models \varphi$ and $v' = v[u]$. When $pol(q)=D$ and $\beta=\bot$, discrete steps are forbidden. \end{description} \end{definition} In order to illustrate the interest of the combined models, an example of a (simple) login procedure is described in \figurename~\ref{fig:excomb} as a TA with interruptions at a single level. \begin{figure} \caption{An automaton for login in ITA$^+$} \label{fig:excomb} \end{figure} First it immediately displays a prompt and arms a time-out of $1$ t.u. handled by clock $y$ (transition $init \xrightarrow{p} wait$). Then either the user answers correctly within this delay (transition $wait \xrightarrow{ok} log$), or he answers incorrectly or lets time elapse, both cases corresponding to transition $wait \xrightarrow{er} init$, after which the system prompts again. The whole process is controlled by a global time-out of $6$ t.u. (transition $wait \xrightarrow{to} out$) followed by a long suspension ($50$ t.u.) before reinitializing the process (transition $out \xrightarrow{rs} init$). Both delays are handled by clock $z$. At any time during the process (in fact in state $wait$), a system interrupt may occur (transition $wait \xrightarrow{i} I$). If the time spent (measured by clock $x_1$) during the interrupt is less than $3$ t.u.
or the time already spent by the user is less than $3$ t.u., the login process resumes (transition $I \xrightarrow{cont} init$). Otherwise the login process is reinitialized, again allowing the $6$ t.u. (transition $I \xrightarrow{rs} init$). In both cases, the prompt will be displayed again. Since invariants are irrelevant for the reachability problem, we did not include them in the models. Of course, in this example state $wait$ should have invariant $y \leq 1 \wedge z \leq 6$ and state $out$ should have invariant $z\leq 50$. We extend the decidability and complexity results of the previous models when combining them with CRTA. Class ITA$^+_-$ is obtained in a similar way by combining ITA$_-$ with CRTA. \begin{proposition}\label{prop:itaplusmoins} \begin{enumerate}[label=\arabic*.] \item The reachability problem for ITA$^+_-$ is decidable in \emph{NEXPTIME} and is \emph{PSPACE}-complete when the number of interrupt clocks is fixed. \item The reachability problem for ITA$^+$ is decidable in \emph{NEXPTIME} and is \emph{PSPACE}-complete when the number of interrupt clocks is fixed. \end{enumerate} \end{proposition} \begin{proof}~ \paragraph{Case of ITA$^+_-$.} Let $\mathcal{A}= \langle Q, q_0, F, pol, X\uplus Y, \Sigma, \Omega, \lambda, up, low, vel, \Delta \rangle$ be an ITA$^+_-$, with $n = |X|$ the number of ITA clocks, $p= |Y|$ the number of CRTA clocks and $E=|\Delta|$ the number of transitions. We first consider the reachability problem for two states $q_i$ and $q_f$ at the CRTA level (with $\lambda(q_i) \in \Omega$ and $\lambda(q_f) \in \Omega$). The procedure consists in performing a non-deterministic search along an elementary path whose vertices are classes of the class graph of the CRTA.
Let $(q,Z)$ be the current class. The procedure chooses non-deterministically the next class $(q',Z')$ and checks that there exist a configuration of $(q,Z)$ and an execution only through states $q''$ with $\lambda(q'') \in \ensuremath{\mathbb{N}}$ that leads to a configuration of $(q',Z')$. This is solved as previously, by non-deterministically choosing an execution path, building a linear program related to the path (of exponential size) and solving it. Let us prove that such a path can be chosen whose length is in $O(p(E+2n)^{3n})$. Assume that there is a run $\pi$ from $(q,v) \in (q,Z)$ to some configuration $(q',v') \in (q',Z')$ such that all intermediate states $q''$ satisfy $\lambda(q'')\in \ensuremath{\mathbb{N}}$. We say that a transition $e$ of $\pi$ \emph{usefully resets} a clock $y \in Y$ if it is the first transition of $\pi$ that resets $y$. Observe that there are at most $p$ useful resetting transitions and that between two such successive transitions (or before the first one or after the last one) the values of the clocks of $Y$ are unchanged when transitions are fired. We consider a subrun $\rho$ between two such successive transitions (or before the first one or after the last one) from $(q_1,v_1)$ to $(q_2,v_2)$, with $m_k$ the number of transitions of level $k$. Using Lemma~\ref{lemma:counting}, we build a subrun $\rho'$ from $(q_1,v_1)$ to $(q_2,v_2)$ of length smaller than $(E+2n)^{3n}$. Concatenating the subruns, the useful resetting transitions and the initial transition, one obtains a run $\pi'$ from $(q,v)$ to $(q',v')$ of length in $O(p(E+2n)^{3n})$. The key point ensuring correctness of the procedure is that the existence of a solution depends only on the starting class $(q,Z)$ and not on the configuration inside this class. This is due to the separation of guards and updates between the two kinds of clocks on the transitions. When state $q_i$ (resp. $q_f$) is not at the basic level, the procedure adds an initial (resp.
final) guess also checked by a linear program. When the number of clocks is fixed, the dominant factor is the path search in the class graph, and PSPACE-hardness follows from the corresponding result for TA. \paragraph{Case of ITA$^+$.} We transform the ITA part of the automaton into an ITA$_-$ \emph{via} the procedure of Proposition~\ref{proposition:itaitamoins} and apply the procedure for ITA$^+_-$. \qed \end{proof} It is also possible to build a class graph for ITA$^+$, combining a class graph for ITA and a region graph for TA. This yields the regularity of the untimed language of an ITA$^+$, hence the strict inclusion in the languages accepted by a stopwatch automaton. Let ITL$^+$ be the family of timed languages defined by ITA$^+$. The class ITL$^+$ syntactically contains ITL$\,\cup\,$CRTL. We can however prove a stronger result: \begin{proposition} The class ITL$^+$ \emph{strictly} contains ITL$\,\cup\,$CRTL. \end{proposition} \begin{proof} Recall ITA $\mathcal{A}_4$ of \figurename~\ref{fig:ctrex2}, whose language $L_4$ is not in CRTL, and let $Q_4$ be its set of states. Also recall TA $\mathcal{A}_5$ of \figurename~\ref{fig:ctrex3}, whose language $L_5$ is not in ITL, with set of states $Q_5$. Let $\mathcal{A}_4\otimes\mathcal{A}_5$ be the ITA$^+$ having $\mathcal{A}_5$ at level $0$ and $\mathcal{A}_4$ at levels $1$ and $2$. Formally, $\mathcal{A}_4\otimes\mathcal{A}_5$ has set of states $Q_4 \cup Q_5$, which are all lazy. Interrupt clocks of $\mathcal{A}_4\otimes\mathcal{A}_5$ are $\{x_1,x_2\}$ (active according to $\mathcal{A}_4$). Its basic clocks are $\{z,y\}$, of velocity $1$. Both have the same color as the states of $Q_5$. The bounding functions $up$ (resp. $low$) map both $z$ and $y$ to $1$ (resp. $0$). Transitions of $\mathcal{A}_4\otimes\mathcal{A}_5$ are the ones of $\mathcal{A}_4$ and $\mathcal{A}_5$, adding an unguarded, unlabeled transition from $\mathcal{A}_5$'s final state to $\mathcal{A}_4$'s initial one.
$\mathcal{A}_4\otimes\mathcal{A}_5$ accepts timed words which start with an alternation of $a$s and $b$s, with each $b$ drawing closer to its preceding $a$ (as in $\mathcal{A}_5$), and then contain only $c$s separated by the same amount of time (as in $\mathcal{A}_4$). Since both CRTL and ITL are closed under projection, $\mathcal{L}(\mathcal{A}_4\otimes\mathcal{A}_5)$ can be accepted neither by a CRTA nor by an ITA. \qed \end{proof} \section{Conclusion} In this paper, we introduced and studied the model of Interrupt Timed Automata. This model is useful to represent timed systems with tasks organized over priority levels. \begin{figure} \caption{Expressiveness of several timed formalisms with respect to timed languages.} \label{fig:expressiveness} \end{figure} While ITA fall into the more general class of hybrid systems, the reachability problem is proved decidable for this subclass. For ITA, reachability is in NEXPTIME, and in PTIME when the number of clocks is fixed, by building a class graph. Similar constructions yield decidability of the reachability problem for an extension of ITA where the lowest priority level can behave as a Controlled Real-Time Automaton. They also yield procedures for model checking \textsf{CTL}$^*$ formulas and timed \textsf{CTL}\xspace formulas constraining only the clocks of the system. Another fragment of timed \textsf{CTL}\xspace was identified as decidable: the one where the only time constraints concern global earliest or latest execution times. On the other hand, model checking the linear time logic \textsf{SCL}\xspace is proved undecidable on ITA, implying that this is also the case for \textsf{MITL}\xspace. From the expressiveness point of view, the class ITL is proved incomparable with both TL and CRTL, and is closed neither under complementation nor under intersection. The expressiveness results are summed up in \figurename~\ref{fig:expressiveness}, where the grey zone represents undecidability of the reachability problem. Several problems remain open on the class of ITA. First of all, ITA combine (limited) stopwatches with linear expressions in guards, and it is not known which of these features causes the undecidability results presented in this paper. For instance, the undecidability of \textsf{SCL}\xspace may not hold without the possibility of complex updates. More generally, the expressive power of the subclass of ITA restricted to rectangular guards ($x + b \bowtie 0$) and only resets ($x:= 0$) should be investigated. Also, it is conjectured that the class of ITA with $n+1$ clocks is strictly more expressive than the class of ITA with $n$ clocks. Regarding model-checking, the undecidability of full \textsf{TCTL}\xspace remains to be established. Finally, the complexity bounds presented in this paper are only upper bounds, and matching lower bounds are still missing. \paragraph{Acknowledgments} The authors would like to thank the anonymous reviewers for their insightful comments. This work was supported by projects \textsc{Dots} (ANR-06-SETI-003, French government), \textsc{ImpRo} (ANR-2010-BLAN-0317, French government) and \textsc{CoChaT} (Digiteo 2009-27HD, R\'egion \^Ile de France). \appendix \section{Proof of Theorem~\ref{thm:mctctlintdec}}\label{app:proofmctctlindec} Let $\varphi$ be a formula in \textsf{TCTL}$_c^{\textrm{int}}$ and $\mathcal{A}$ an ITA with $n$ levels and $E$ transitions. As in Section~\ref{sec:regular}, the proof relies on the construction of a finite class graph. The main difference lies in the computation of the $n$ sets of expressions $E_1, \dots, E_n$.
Like before, each set $E_k$ is initialized to $\{x_k, 0\}$ and expressions in this set are those which are relevant for comparisons with the current clock at level $k$. In this case, they include not only guards but also comparisons with the constraints from the formula. Recall that the sets are computed top down from $n$ to $1$, using the normalization operation. \begin{itemize} \item At level $k$, we may assume that expressions in guards of an edge leaving a state are of the form $\alpha x_k+\sum_{i<k}a_ix_i+b$ with $\alpha \in \{0,1\}$. We add $-\sum_{i<k}a_ix_i-b$ to $E_k$. \item To take into account the constraints of formula $\varphi$, we add the following step: For each comparison $C \bowtie 0$ in $\varphi$, and for each $k$, with $\mathtt{norm}(C,k) = \alpha x_k+\sum_{i<k}a_ix_i+b$ ($\alpha \in \{0,1\}$), we also add expression $-\sum_{i<k} a_ix_i - b$ to $E_k$. \item Then we iterate the following procedure until no new term is added to any $E_i$ for $1\leq i\leq k$. \begin{enumerate} \item Let $q \xrightarrow{\varphi,a,u} q'$ with $\lambda(q')\geq k$ and $\lambda(q)\geq k$. If $C \in E_{k}$, then we add $C[u]$ to $E_{k}$. \item Let $q \xrightarrow{\varphi,a,u} q'$ with $\lambda(q') \geq k$ and $\lambda(q) < k$. For $C,C' \in E_{k}$, we compute $C''=\mathtt{norm}(C[u]-C'[u],\lambda(q))$. If $C''= \alpha x_{\lambda(q)}+\sum_{i<\lambda(q)}a_ix_i+b$ with $\alpha \in \{0,1\}$, then we add $-\sum_{i<\lambda(q)}a_ix_i-b$ to $E_{\lambda(q)}$. \end{enumerate} \end{itemize} The proof of termination for this construction is similar to the one in Section~\ref{sec:regular}. We now consider the transition system $\mathcal{G}_{\mathcal{A}}$ whose set of configurations are the classes $R = (q,\{\preceq_k\}_{1 \leq k \leq \lambda(q)})$, where $q$ is a state and $\preceq_k$ is a total preorder over $E_k$. The class $R$ describes the set of valuations $\sem{R}=\{(q,v) \mid \forall k \leq \lambda(q)\ \forall (g,h) \in E_k,\ g[v] \leq h[v]$ iff $g \preceq_k h\}$. 
The set of transitions is defined as in Section~\ref{sec:regular}. The transition system $\mathcal{G}_{\mathcal{A}}$ is again finite and time-abstract bisimilar to $\mathcal{T}_{\mathcal{A}}$. Moreover, the truth value of each comparison $C = \sum_{i \geq 1} a_i \cdot x_i + b \bowtie 0$ appearing in $\varphi$ can be set for each class $R$. Indeed, since for every $k$, both $0$ and $\sum_{i=1}^{k-1} a_i \cdot x_i + b$ are in the set of expressions $E_k$, the truth value of $C \bowtie 0$ does not change inside a class. Therefore, introducing a fresh propositional variable $q_C$ for the constraint $C \bowtie 0$, each class $R$ can be labelled with a truth value for each $q_C$. Deciding the truth value of $\varphi$ can then be done by a classical \textsf{CTL} model-checking algorithm on $\mathcal{G}_{\mathcal{A}}$. The complexity of the procedure is obtained by bounding the number of expressions for each level $k$ by $(E+|\varphi|+2)^{2^{n(n-k+1)}+1}$, and applying the same reasoning as for Proposition~\ref{prop:reachita}. \section{Proof of Theorem~\ref{prop:reachundec}}\label{app:proofreachundec} We build an automaton in ITA$\times$TA which simulates a deterministic two-counter machine $\mathcal{M}$ (as in the proof of Theorem~\ref{thm:mcsclundec}). Let $L_\mathcal{M}$ be the set of labels of $\mathcal{M}$. The automaton $\mathcal{A}_{\mathcal{M}} = \langle\Sigma,Q,q_0,F,pol,X \cup Y,\lambda,\Delta\rangle$ is built to reach its final location $Halt$ if and only if $\mathcal{M}$ stops. It is defined as follows: \begin{itemize} \item $\Sigma$ consists of one letter per transition. \item \(Q = L_\mathcal{M} \cup (L_\mathcal{M}\times \{k_0\}) \cup (L_\mathcal{M} \times \{k_1,k_2,r_1,\dots,r_5\} \times \{>,<\})\), \(q_0 = \ell_0\) (the initial instruction of $\mathcal{M}$) and \(F = \{Halt\}\).
\item \(pol: Q \rightarrow \{Urgent,Lazy,Delayed\}\) is such that \(pol(q) = Urgent\) iff either $q \in L_\mathcal{M}$ or \(q = (\ell,k_2,\bowtie)\), and \(pol(q) = Lazy\) in most other cases: some states \((\ell,k_i,\bowtie)\) are \emph{Delayed}, as shown on \figurename~\ref{fig:incremc} and~\ref{fig:decremc}. \item \(X = \{x_1,x_2,x_3\}\) is the set of interrupt clocks and \(Y = \{y_c,y_d\}\) is the set of standard clocks with rate $1$. \item \(\lambda: Q \rightarrow \{1,2,3\}\) is the interrupt level of each state. All states in $L_\mathcal{M}$ and $L_\mathcal{M} \times \{k_0,k_1,k_2\}$ are at level $1$, as are all states corresponding to $r_1$. States corresponding to $r_2$ and $r_3$ are at level $2$, while the ones corresponding to $r_4$ and $r_5$ are at level $3$. \item $\Delta$ is defined through basic modules in the sequel. \end{itemize} The transitions of $\mathcal{A}_{\mathcal{M}}$ are built within small modules, each one corresponding to one instruction of $\mathcal{M}$. The value $n$ of $c$ (resp. $p$ of $d$) in a state of $L_\mathcal{M}$ is encoded by the value \(1 - \frac{1}{2^n}\) of clock $y_c$ (resp. \(1 - \frac{1}{2^p}\) of $y_d$). The idea behind this construction is that for any standard clock $y$, it is possible to ``copy'' the value of $k-y$ into an interrupt clock $x_i$, for some constant $k$, provided the value of $y$ never exceeds $k$. To achieve this, we start and reset the interrupt clock, then stop it when \(y = k\). Note that by the end of the copy, the value of $y$ has changed. Conversely, in order to copy the content of an interrupt clock $x_i$ into a clock $y$, we switch from level $i$ to level $i+1$ and reset $y$ at the same time. When \(x_{i+1} = x_i\), the value of $y$ is equal to the value of $x_i$. Remark that the form of the guards on $x_{i+1}$ allows us to copy the value of a linear expression on \(\{x_1,\dots,x_i\}\) into $y$.
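The first copy gadget above can be sketched in a few lines of exact arithmetic. The helper below is ours and is only illustrative: the actual module implements the guard $y = k$ with a transition, not a function call.

```python
from fractions import Fraction as F

def copy_complement(y, k=F(1)):
    """Copy k - y into a freshly reset interrupt clock x_i: both clocks
    run together until the guard y = k fires (assumes y <= k)."""
    delay = k - y          # time elapsed before the guard y = k holds
    x_i = F(0) + delay     # x_i was reset when the copy started
    y = y + delay          # y now equals k; its old value is lost
    return x_i, y

# With the encoding y = 1 - 1/2^n, the copied value is exactly 1/2^n:
x, y = copy_complement(1 - F(1, 8))
print(x)  # 1/8
```

As noted in the text, the copy destroys the old value of $y$, which is why the modules have to restore the standard clocks afterwards by well-timed resets.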
For instance, consider an instruction labeled by $\ell$ incrementing $c$ and then going to $\ell'$, with respective values $n$ of $c$ and $p$ of $d$, from a configuration where $n \geq p$. The corresponding module \(\mathcal{A}_{c\geq d}^{c++}(\ell,\ell')\) is depicted on \figurename~\ref{fig:modincremc} (see main text). In this module, interrupt clock $x_1$ is used to record the value $\frac{1}{2^n}$ while $x_2$ keeps the value $\frac{1}{2^p}$. Assuming that \(y_c = 1 - \frac{1}{2^n}\), \(y_d = 1 - \frac{1}{2^p}\) and \(x_1 = 0\) in state \((\ell,r_1,>)\), the unique run in \(\mathcal{A}_{c\geq d}^{c++}(\ell,\ell')\) will end in state $\ell'$ with \(y_c = 1 - \frac{1}{2^{n+1}}\) and \(y_d = 1 - \frac{1}{2^p}\). The intermediate clock values are shown in \tablename~\ref{tab:valhorloges} (see main text). The module of \figurename~\ref{fig:modincremc} can be adapted to the case of decrementing $c$ by just changing the linear expressions in the guards for $x_3$, provided that the final value of $c$ is still greater than that of $d$. It is however also quite easy to adapt the same module when \(n < p\): in that case we store $\frac{1}{2^p}$ in $x_1$ and $\frac{1}{2^n}$ in $x_2$, since $y_d$ will reach $1$ before $y_c$. We also need to start $y_d$ before $y_c$ when copying the adequate values into the clocks. The case of decrementing $c$ while \(n \leq p\) is handled similarly. In order to choose which module to use according to the ordering between the values of the counters, we use the modules of \figurename~\ref{fig:incremc} and~\ref{fig:decremc}. \figurename~\ref{fig:incremc} represents the case where, at label $\ell$, $c$ is incremented, whereas \figurename~\ref{fig:decremc} represents the case where $\ell$ decrements $c$. In the latter case the value of $c$ is compared not only with that of $d$, but also with $0$, in order to know which branch of the \emph{if} instruction is taken.
Note that only one of the branches can be taken until the end\footnote{State policies are used to treat the special cases, \textit{e.g.} $y_c = y_d = 0$.}. Instructions involving $d$ are handled in a symmetrical way. \begin{figure} \caption{Module taking into account the order between the values of $c$ and $d$ when incrementing $c$.} \label{fig:incremc} \end{figure} \begin{figure} \caption{Module taking into account the order between the values of $c$ and $d$ when decrementing $c$.} \label{fig:decremc} \end{figure} Automaton $\mathcal{A}_\mathcal{M}$ is obtained by joining the modules described above through the states of $L_\mathcal{M}$. Let us prove that the automaton \(\mathcal{A}_\mathcal{M}\) simulates the two-counter machine $\mathcal{M}$, so that $\mathcal{M}$ halts iff \(\mathcal{A}_\mathcal{M}\) reaches the \emph{Halt} state. Let \(\langle\ell_0,0,0\rangle \langle\ell_1,n_1,p_1\rangle \dots \langle\ell_i,n_i,p_i\rangle\dots\) be a run of $\mathcal M$. We show that this run is simulated in \(\mathcal{A}_\mathcal{M}\) by the run \(\langle \ell_0,\mathbf{0}\rangle\rho_0\langle \ell_1,v_1\rangle\rho_1\dots\) where $\rho_i$ is either empty or a subrun through states in \(\{(\ell_i,r_j,\bowtie) \,|\, j \in \{1,\dots,5\}, \bowtie \in \{>,<\}\}\) (\emph{i.e.} subruns in modules like \(\mathcal{A}^{c++}_{c \geq d}\) of \figurename~\ref{fig:modincremc}). Moreover, it will be the case that \[\forall i,\quad v_i(y_c) = 1 - \frac{1}{2^{n_i}} \quad \textrm{and} \quad v_i(y_d) = 1 - \frac{1}{2^{p_i}}.\] This holds at the beginning of the execution of \(\mathcal{A}_\mathcal{M}\). Suppose that we have simulated the run up to \(\langle\ell_i,n_i,p_i\rangle\). Then we are in state $\ell_i$, with clock $y_c$ equal to \(1 - \frac{1}{2^{n_i}}\) and $y_d$ equal to \(1 - \frac{1}{2^{p_i}}\).
The next configuration of $\mathcal{M}$, \(\langle\ell_{i+1},n_{i+1},p_{i+1}\rangle\), depends on the content of instruction $\ell_i$, and so do the outgoing transitions of state $\ell_i$ in \(\mathcal{A}_\mathcal{M}\). We consider the case where $\ell_i$ decrements $c$, going to $\ell'$ if $c$ is greater than $0$ and to $\ell''$ otherwise; the other cases are similar. We are therefore in the case of \figurename~\ref{fig:decremc}. If $n_i = 0$, the next configuration of $\mathcal{M}$ will be \(\langle\ell'',n_i,p_i\rangle\). Correspondingly, in \(\mathcal{A}_\mathcal{M}\), if $n_i = 0$ then $y_c = 0$, and there is no choice but to enter $\ell''$, leaving all clock values unchanged (because $\ell_i$ is an \emph{Urgent} state). The configuration of \(\mathcal{A}_\mathcal{M}\) thus satisfies the property. If $n_i > 0$, the next configuration of $\mathcal{M}$ will be \(\langle\ell',n_i -1,p_i\rangle\). In \(\mathcal{A}_\mathcal{M}\), the transition chosen is the one that corresponds to the ordering between $n_i$ and $p_i$. In both cases, as in the example of $\mathcal{A}^{c++}_{c \geq d}(\ell,\ell')$, the run reaches state $\ell'$ with \(y_c =1 - \frac{1}{2^{n_i-1}}\) and $y_d$ unchanged, thus preserving the property. Hence $\mathcal{M}$ halts iff \(\mathcal{A}_\mathcal{M}\) reaches the \emph{Halt} state. The automaton \(\mathcal{A}_\mathcal{M}\) is indeed the product of an ITA $\mathcal{I}$ and a TA $\mathcal{T}$, synchronized on actions. Observe that in all the modules described above, guards never mix a standard clock with an interrupt one. Since each transition has a unique label, keeping only the guards and resets on the clocks of $X$, respectively on those of $Y$, yields an ITA and a TA whose product is \(\mathcal{A}_\mathcal{M}\).\qed \end{document}
\begin{document} \def{\mathbb{N}}^{\mathbb{N}}{{\mathbb{N}}^{\mathbb{N}}} \def{\mathcal I}{{\mathcal I}} \def2^{< \omega}{2^{< \omega}} \def\mathbb{N}{\mathbb{N}} \def2^{\nat}{2^{\mathbb{N}}} \def\mbox{\sf Fin}{\mbox{\sf Fin}} \def{\mathcal F}{{\mathcal F}} \def$\mbox{p}^+${$\mbox{p}^+$} \def\mbox{$ \text{p}^-$}{\mbox{$ \text{p}^-$}} \def$\mbox{q}^+${$\mbox{q}^+$} \def\not\in{\not\in} \def\mbox{\sf nwd}{\mbox{\sf nwd}} \defF_\sigma{F_\sigma} \def\mathbb{X}{\mathbb{X}} \def\mathbb{Y}{\mathbb{Y}} \newtheorem{definition}{Definition}[section] \newtheorem{theorem}[definition]{Theorem} \newtheorem{example}[definition]{Example} \newtheorem{corollary}[definition]{Corollary} \newtheorem{lemma}[definition]{Lemma} \newtheorem{proposition}[definition]{Proposition} \newtheorem{question}[definition]{Question} \newtheorem{claim}[definition]{Claim} \title{Combinatorial properties on nodec countable spaces with analytic topology} \author{Javier Murgas and Carlos Uzc\'ategui} \address{Escuela de Matem\'aticas, Facultad de Ciencias, Universidad Industrial de Santander, Ciudad Universitaria, Carrera 27 Calle 9, Bucaramanga, Santander, A.A. 678, COLOMBIA. } \email{javier\[email protected]. } \address{Escuela de Matem\'aticas, Facultad de Ciencias, Universidad Industrial de Santander, Ciudad Universitaria, Carrera 27 Calle 9, Bucaramanga, Santander, A.A. 678, COLOMBIA. Centro Interdisciplinario de L\'ogica y \'Algebra, Facultad de Ciencias, Universidad de Los Andes, M\'erida, VENEZUELA.} \email{[email protected].} \thanks{The second author thanks Vicerrector\'ia de Investigaci\'on y Extensi\'on de la Universidad Industrial de Santander for the financial support for this work, which is part of the VIE project \#2422.} \date{} \begin{abstract}We study some variations of the product topology on families of clopen subsets of $2^{\nat}\times\mathbb{N}$ in order to construct countable nodec regular spaces (i.e. 
in which every nowhere dense set is closed) with analytic topology which in addition are not selectively separable and do not satisfy the combinatorial principle $q^+$. \end{abstract} \maketitle \noindent {\em Keywords:} nodec countable spaces; analytic sets; selective separability; $q^+$ \noindent {\em MSC: 54G05, 54H05, 03E15} \section{Introduction} A topological space $X$ is {\em selectively separable} ($SS$), if for any sequence $(D_n)_n$ of dense subsets of $X$ there is a finite set $F_n\subseteq D_n$, for $n\in\mathbb{N}$, such that $\bigcup_n F_n$ is dense in $X$. This notion was introduced by Scheepers \cite{Scheeper99} and has received a lot of attention ever since (see for instance \cite{BarmanDow2011,BarmanDow2012,Bella2009,Bella_et_al2008,Bella2013,CamargoUzca2018b,Gruenhage2011,Reposvetal2010}). Bella et al. \cite{Bella_et_al2008} showed that every separable space with countable fan tightness is $SS$. On the other hand, Barman and Dow \cite{BarmanDow2011} showed that every separable Fr\'echet space is also $SS$ (see also \cite{CamargoUzca2018b}). A topological space is {\em maximal} if it is a dense-in-itself regular space such that any strictly finer topology has an isolated point. It was shown by van Douwen \cite{Vand} that a space is maximal if, and only if, it is {\em extremely disconnected} (i.e. the closure of every open set is open), {\em nodec} (i.e. every nowhere dense set is closed) and every open set is {\em irresolvable} (i.e. if $U$ is open and $D\subseteq U$ is dense in $U$, then $U\setminus D$ is not dense in $U$). He constructed a countable maximal regular space. A countable space $X$ is $\mbox{q}^+$\ at a point $x\in X$, if given any collection of finite sets $F_n\subseteq X$ such that $x\in \overline{\bigcup_n F_n}$, there is $S\subseteq \bigcup_n F_n$ such that $x\in \overline{S}$ and $ S\cap F_n$ has at most one point for each $n$. We say that $X$ is a {\em $\mbox{q}^+$-space} if it is $\mbox{q}^+$\ at every point.
Every countable sequential space is $\mbox{q}^+$\ (see \cite[Proposition 3.3]{Todoruzca2000}). The collection of clopen subsets of $2^{\nat}$ with the product topology is not $\mbox{q}^+$\ at any point. This notion is motivated by the analogous concept of a $\mbox{q}^+$\ filter (or ideal) from Ramsey theory. A problem stated in \cite{Bella_et_al2008} was to analyze the behavior of selective separability on maximal spaces. The existence of a maximal regular SS space is independent of ZFC. In fact, in ZFC there is a maximal non SS space \cite{BarmanDow2011} and it is consistent with ZFC that no countable maximal space is SS \cite{BarmanDow2011, Reposvetal2010}. On the other hand, it is also consistent that there is a maximal, countable, SS regular space \cite{BarmanDow2011}. In this paper we are interested in these properties on countable spaces with an analytic topology (i.e. the topology of the space $X$ is an analytic set as a subset of $2^X$ \cite{todoruzca}). Maximal topologies are not analytic. In fact, in \cite{Todoruzca2014} it was shown that there are neither extremely disconnected nor irresolvable analytic topologies; nevertheless, there are nodec regular spaces with analytic topology. In view of the above mentioned results about maximal spaces, it seems natural to wonder about the behavior of selective separability on nodec spaces with an analytic topology. Nodec regular spaces are not easy to construct. We continue the study of the method introduced in \cite{Todoruzca2014} in order to construct similar nodec regular spaces with analytic topology that are neither SS nor $\mbox{q}^+$. A countable regular space has an analytic topology if, and only if, it is homeomorphic to a subspace of $C_p({\mathbb{N}}^{\mathbb{N}})$ \cite{todoruzca}. Thus our examples are constructed using some special topologies on a collection of clopen subsets of $2^{\nat}\times \mathbb{N}$. It is an open question whether there is a nodec $SS$ regular space with analytic topology.
\section{Preliminaries} An {\em ideal} on a set $X$ is a collection ${\mathcal I}$ of subsets of $X$ satisfying: (i) if $A\subseteq B$ and $B\in {\mathcal I}$, then $A\in {\mathcal I}$; (ii) if $A,B\in{\mathcal I}$, then $A\cup B\in {\mathcal I}$; (iii) $\emptyset \in {\mathcal I}$. We will always assume that an ideal contains all finite subsets of $X$. If ${\mathcal I}$ is an ideal on $X$, then ${\mathcal I}^+=\{A\subseteq X:\, A\not\in {\mathcal I}\}$. \mbox{\sf Fin}\ denotes the ideal of finite subsets of the nonnegative integers $\mathbb{N}$. An ideal ${\mathcal I}$ on $X$ is {\em tall}, if for every infinite $A\subseteq X$ there is an infinite $B\subseteq A$ with $B\in {\mathcal I}$. We denote by $A^{<\omega}$ the collection of finite sequences of elements of $A$. If $s$ is a finite sequence on $A$ and $i\in A$, $|s|$ denotes its length and $ s\widehat{\;\;}i$ the sequence obtained by concatenating $s$ with $i$. For $s\in2^{< \omega}$ and $\alpha\in 2^{\nat}$, let $s\prec \alpha$ if $\alpha(i)=s(i)$ for all $i<|s|$ and $$ [s]=\{\alpha\in 2^{\nat}: \; s\prec \alpha\}. $$ If $\alpha\in2^{\nat}$ and $n\in \mathbb{N}$, we denote by $\alpha\restriction n$ the finite sequence $(\alpha(0),\cdots,\alpha(n-1))$ if $n>0$, and $\alpha\restriction 0$ is the empty sequence. The collection of all $[s]$ with $s\in2^{< \omega}$ is a basis of clopen sets for $2^{\nat}$. As usual we identify each $n\in \mathbb{N}$ with $\{0,\cdots, n-1\}$. The ideal of nowhere dense subsets of $X$ is denoted by $\mbox{\sf nwd}(X)$. Now we recall some combinatorial properties of ideals. We put $A\subseteq^*B$ if $A\setminus B$ is finite. \begin{enumerate} \item[({$p^+$})] ${\mathcal I}$ is $\mbox{p}^+$, if for every decreasing sequence $(A_n)_n$ of sets in ${\mathcal I}^+$, there is $A\in {\mathcal I}^+$ such that $A\subseteq^* A_n$ for all $n\in\mathbb{N}$.
Following \cite{HMTU2017}, we say that ${\mathcal I}$ is $\mbox{$ \text{p}^-$}$, if for every decreasing sequence $(A_n)_n$ of sets in ${\mathcal I}^+$ such that $A_n\setminus A_{n+1}\in {\mathcal I}$, there is $B\in {\mathcal I}^+$ such that $B\subseteq^* A_n$ for all $n$. \item[($q^+$)] ${\mathcal I}$ is $\mbox{q}^+$, if for every $A\in {\mathcal I}^+$ and every partition $(F_n)_n$ of $A$ into finite sets, there is $S\in{\mathcal I}^+$ such that $S\subseteq A$ and $S\cap F_n$ has at most one element for each $n$. Such sets $S$ are called (partial) {\em selectors} for the partition. \end{enumerate} A point $x$ of a topological space $X$ is called a {\em Fr\'echet point}, if for every $A$ with $x\in \overline{A}$ there is a sequence $(x_n)_n$ in $A$ converging to $x$. We will say that $x$ is a $\mbox{q}^+$-{\em point}, if ${\mathcal I}_x$ is $\mbox{q}^+$, where ${\mathcal I}_x=\{A\subseteq X:\; x\not\in \overline{A\setminus \{x\}}\}$ is the ideal of sets that do not accumulate to $x$. We say that a space is a $\mbox{q}^+$-space, if every point is $\mbox{q}^+$. We define analogously the notions of $\mbox{p}^+$- and $\mbox{$ \text{p}^-$}$-points. Notice that if $x$ is isolated, then ${\mathcal I}_x$ is trivially $\mbox{q}^+$\ as ${\mathcal I}_x^+$ is empty. Thus a space is $\mbox{q}^+$\ if, and only if, ${\mathcal I}_x$ is $\mbox{q}^+$\ for every non isolated point $x$. The same occurs with the other combinatorial properties defined in terms of ${\mathcal I}_x$. We say that a space $Z$ is {\em wSS} if for every sequence $(D_n)_n$ of dense subsets of $Z$ there is, for each $n$, a finite set $F_n\subseteq D_n$ such that $\bigcup_n F_n$ is not nowhere dense in $Z$. In the terminology of selection principles \cite{Scheeper99}, $wSS$ corresponds to $S_{fin}(\mathcal{D}, \mathcal{B})$ where $\mathcal{D}$ is the collection of dense subsets and $\mathcal{B}$ the collection of non nowhere dense sets. Apparently this notion has not been considered before. Notice that if $Z$ is $SS$ and $W$ is not $SS$, then the direct sum of $Z$ and $W$ is $wSS$ but not $SS$.
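To make the $q^+$ terminology concrete: a selector picks at most one point from each block of a partition into finite sets, and $q^+$ asks that some selector lie in ${\mathcal I}^+$. The following sketch only illustrates what a selector is; the greedy choice below need not produce an ${\mathcal I}$-positive set (the names are ours):

```python
def selector(blocks):
    """Pick at most one element from each finite block of a partition."""
    return {min(block) for block in blocks if block}

# A partition of {0,...,5} into finite blocks (one block left empty).
partition = [{0, 1}, {2, 3, 4}, {5}, set()]
S = selector(partition)
assert S == {0, 2, 5}
assert all(len(S & block) <= 1 for block in partition)
```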
A subset $A$ of a Polish space $X$ is called {\em analytic}, if it is a continuous image of a Polish space; equivalently, if there is a continuous function $f:{\mathbb{N}}^{\mathbb{N}}\rightarrow X$ with range $A$, where ${\mathbb{N}}^{\mathbb{N}}$ is the space of irrationals. For instance, every Borel subset of a Polish space is analytic. A general reference for all descriptive set theoretic notions used in this paper is \cite{Kechris94}. We say that a topology $\tau$ over a countable set $X$ is {\em analytic}, if $\tau$ is analytic as a subset of the Cantor cube $2^X$ (identifying subsets of $X$ with characteristic functions) \cite{todoruzca, Todoruzca2000,Todoruzca2014}; in this case we will say that $X$ is an {\em analytic space}. A regular countable space is analytic if, and only if, it is homeomorphic to a subspace of $C_p({\mathbb{N}}^{\mathbb{N}})$ (see \cite{todoruzca}). If there is a base $\mathcal B$ of $X$ such that $\mathcal B$ is an $F_\sigma$ (Borel) subset of $2^X$, then we say that $X$ has an {\em $F_\sigma$ (Borel) base}. In general, if $X$ has a Borel base, then the topology of $X$ is analytic. We end this section recalling some results about countable spaces that will be used in the sequel. \begin{theorem} \label{Fsesp} \cite[Corollary 3.8]{CamargoUzca2018b} If $X$ is a countable space with an $F_{\sigma}$ base, then $X$ is $p^+$. \end{theorem} The next result is essentially Lemma 4.6 of \cite{Todoruzca2014}. \begin{lemma}\label{scompact} Let $X$ be a $\sigma$-compact space and $W$ a countable collection of clopen subsets of $X$. Then $W$, as a subspace of $2^{X}$, has an $F_{\sigma}$ base. \end{lemma} \begin{theorem}\label{pesSS}\cite[Theorem 3.5]{CamargoUzca2018b} Let $X$ be a countable space. If $X$ is $\mbox{$ \text{p}^-$}$, then $X$ is $SS$. In particular, if $X$ has an $F_{\sigma}$ base, then $X$ is $SS$.
\end{theorem} A space $X$ is {\em discretely generated} (DG) if for every $A\subseteq X$ and $x\in\overline A$, there is a discrete set $E\subseteq A$ such that $x\in \overline E$. This notion was introduced by Dow et al. in \cite{DTTW2002}. It is not easy to construct spaces which are not DG; the typical examples are maximal spaces (which are nodec). \begin{theorem} \label{sq-disc-generated} Let $X$ be a regular countable space. If every non isolated point is \mbox{$ \text{p}^-$}, then $X$ is discretely generated. \end{theorem} \proof Let $A\subset X$ with $x\in \overline A$. Fix a maximal family $(O_n)_n$ of relatively open disjoint subsets of $A$ such that $x\not\in \overline{O_n}$. Let $B=\bigcup_n O_n$. From the maximality we get that $x\in \overline{B}$. Since each $O_n$ does not accumulate at $x$ and $x$ is a \mbox{$ \text{p}^-$}-point, there is $E$ such that $x\in \overline{E}$ and $E\cap O_n$ is finite for every $n$. Clearly $E$ is a discrete subset of $A$. \qed \begin{theorem}(Dow et al. \cite[Theorem 3.9]{DTTW2002}) \label{seq-disc-generated} Every Hausdorff sequential space is discretely generated. \end{theorem} In summary, we have the following implications for countable regular spaces (see \cite{CamargoUzca2018b}). \[ \begin{array}{cccccccccclcl} &&&&&&& \\ & & & & && & &F_\sigma\mbox{-base}\\ &&& &&&&\swarrow\\ & & \mbox{Fr\'echet} && & & \mbox{$\mbox{p}^+$} & \\ &\swarrow & &\searrow && \swarrow&& \\ \mbox{Sequential} & && &\mbox{\mbox{$ \text{p}^-$}}\\ \downarrow & \searrow &&\swarrow && \searrow&\\ \mbox{$\mbox{q}^+$} && \mbox{DG}& & &&\mbox{$SS$} && \\ && \downarrow& & && \downarrow && \\ && \text{non nodec}& & &&\text{$wSS$} && \end{array} \] \subsection{A $SS$, $\mbox{q}^+$\ nodec analytic non regular topology} As we said in the introduction, nodec regular spaces are not easy to construct. However, non regular nodec spaces are fairly easy to define. We recall a well known construction given in \cite{Njastad1965}.
Let $\tau$ be a topology and define \[ \tau^\alpha=\{V\setminus N:\: V\in\tau\;\mbox{and $N\in\mbox{\sf nwd}(\tau)$}\}. \] Then $\tau^\alpha$ is a topology finer than $\tau$ (see \cite{Njastad1965}). \begin{lemma}\cite{Njastad1965} \label{tau-alpha} Let $(X,\tau)$ be a space. \begin{itemize} \item[(i)] $V\in\tau^\alpha$ iff $V\subseteq int_\tau (cl_\tau(int_\tau(V)))$. \item[(ii)] Let $A\subseteq X$ and $x\not\in A$. Then $x\in cl_{\tau^\alpha} (A)$ if, and only if, $x\in cl_\tau(int_\tau(cl_\tau(A)))$. \item[(iii)] $(X,\tau^\alpha)$ is a nodec space. \end{itemize} \end{lemma} \begin{proposition} \label{tau-alpha-q-point} Let $(X,\tau)$ be a countable space. \begin{itemize} \item[(i)] If $(X,\tau)$ is Fr\'echet, then $(X,\tau^\alpha)$ is a $\mbox{q}^+$-space. \item[(ii)] $(X,\tau)$ is $SS$ if, and only if, $(X,\tau^\alpha)$ is $SS$. \end{itemize} \end{proposition} \proof (i) Write $cl_\alpha$ for $cl_{\tau^\alpha}$. Suppose $x\in cl_\alpha(A)\setminus A$ and $(F_n)_n$ is a partition of $A$ with each $F_n$ finite. Let $V=int_\tau(cl_\tau(A))$. By Lemma \ref{tau-alpha}(ii) we have that $x\in cl_\tau(V)$. Let $(y_m)_m$ be an enumeration of $V$. Since $A$ is $\tau$-dense in $V$, for every $m$ there is a sequence $(x^m_i)_i$ in $A$ such that $x^m_i\rightarrow y_m$ when $i \to \infty$ (with respect to $\tau$). Since each $F_n$ is finite, we can assume (by passing to a subsequence if necessary) that each $(x^m_i)_i$ is a selector for the partition $(F_n)_n$. Let $S_m$ be the range of $(x^m_i)_i$. Notice that $x\not\in cl_\tau(S_m)$ and every infinite subset of $S_m$ is also a selector for $(F_n)_n$. By a straightforward diagonalization, for each $m$ there is $T_m\subseteq S_m$ such that each $T_m$ is a selector and moreover $\bigcup_m T_m$ is also a selector. Hence we can assume that $S=\bigcup_m\{x^m_i:\;i\in\mathbb{N}\}$ is a selector for the partition. But clearly $S$ is $\tau$-dense in $V$ and thus $V\subseteq int_\tau(cl_\tau(S))$. Hence $x\in cl_\alpha(S)$ (by Lemma \ref{tau-alpha}(ii)).
(ii) By Lemma \ref{tau-alpha}(ii), a set is $\tau$-dense iff it is $\tau^\alpha$-dense. \qed Let $\tau$ be the usual metric topology on the rationals ${\mathbb Q}$. It is not difficult to verify that $\tau^\alpha$ is analytic (in fact, it is Borel) and non regular (see \cite{Todoruzca2014}). Thus $({\mathbb Q}, \tau^\alpha)$ is a SS, $\mbox{q}^+$ and nodec non regular space with analytic topology. It is not known if there is a regular space with the same properties. \section{The spaces $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}(\mathcal{I})$} We recall the definitions of the spaces $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}({\mathcal I})$ for an ideal ${\mathcal I}$, which were introduced in \cite{Todoruzca2014}. For each nonempty ${\mathcal A}\subseteq 2^{\nat}$, let $\rho_{\mathcal A}$ be the topology on $2^{2^{\nat}\times \mathbb{N}}$ generated by the following sets: \begin{equation*} (\alpha,p)^{+} = \{ \theta \in 2^{2^{\nat}\times \mathbb{N}}: \theta(\alpha,p)=1\}, \hspace{1.2cm} (\alpha,p)^{-} = \{ \theta \in 2^{2^{\nat}\times \mathbb{N}}: \theta(\alpha,p)=0 \}, \end{equation*} with $\alpha\in \mathcal{A}$ and $p\in\mathbb{N}$. A basic $\rho_{\mathcal A}$-open set is as follows: $$ V=\bigcap_{i=1}^{m} (\alpha_i,p_i)^+ \cap \bigcap_{i=1}^{n} (\beta_{i},q_i)^- $$ for some $\alpha_1,\cdots,\alpha_m,\beta_1,\cdots, \beta_{n} \in {\mathcal A}$ and $p_1,\cdots,p_m,q_1,\cdots,q_n \in \mathbb{N}$. We always assume that $(\alpha_i, p_i)\neq (\beta_j, q_j)$ for all $i$ and $j$, which is equivalent to saying that any set $V$ as above is not empty. Let $\mathbb{X}$ be the collection of all finite unions of clopen sets of the form $[s] \times \{n\}$ with $n \in \mathbb{N}$ and $s \in 2^{< \omega}$. We also include $\emptyset$ as an element of $\mathbb{X}$. As usual, we regard $\mathbb{X}$ as a subset of $2^{2^{\mathbb{N}} \times \mathbb{N} }$. Let $\{\varphi_n : n \in \mathbb{N} \}$ be an enumeration of $\mathbb{X}$ and for convenience we assume that $\varphi_0$ is $\emptyset$.
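Viewing $\varphi\in\mathbb{X}$ as an element of $2^{2^{\nat}\times\mathbb{N}}$, membership in the subbasic sets is a finite check: $\varphi\in(\alpha,p)^+$ iff some cylinder piece $[s]\times\{p\}$ of $\varphi$ has $s$ an initial segment of $\alpha$. A sketch of this identification (the finite-set representation and names are ours):

```python
def member(phi, alpha, p):
    """theta(alpha, p) for the clopen set phi, given as a finite set of
    pairs (s, n) coding the union of the cylinders [s] x {n}; alpha is a
    function N -> {0, 1} coding an element of 2^N."""
    return any(n == p and all(alpha(i) == b for i, b in enumerate(s))
               for s, n in phi)

phi = {((0, 0), 1), ((1,), 0)}   # [00] x {1}  union  [1] x {0}
zeros = lambda i: 0              # the branch 000... of 2^N
ones = lambda i: 1               # the branch 111...
assert member(phi, zeros, 1)     # phi belongs to (zeros, 1)^+
assert not member(phi, zeros, 0) # phi belongs to (zeros, 0)^-
assert member(phi, ones, 0)
```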
Each $\varphi_n$, regarded as a function from $2^{\mathbb{N}}\times \mathbb{N}$ to $\{0,1\}$, is continuous. Notice that $\mathbb{X}$ is a group with the symmetric difference as operation. Let $\psi _n: 2^{\mathbb{N}} \times \mathbb{N} \to \{0,1\} $ be defined by \begin{equation*} \psi _n (\alpha,m)=\left\{ \begin{array}{cl} \varphi_n (\alpha,m), & \text{if } \alpha(n)=0, \\ 1, & \text{if } \alpha(n)=1. \\ \end{array} \right. \end{equation*} Then $\psi_n$ is a continuous function. Let \[ \mathbb{Y}=\{\psi_n:\;n\in \mathbb{N}\}. \] Given $\mathcal{I}\subseteq 2^{\nat}$, we define \begin{eqnarray*} \mathbb{X}(\mathcal{I}) & = & (\mathbb{X},\rho_{\mathcal I}),\\ \mathbb{Y}(\mathcal{I}) & = & (\mathbb{Y},\rho_{\mathcal I}). \end{eqnarray*} Also notice that $\mathbb{X}({\mathcal I})$ is a topological group. To each $F \subseteq \mathbb{N}$, we associate two sets $F'\subseteq \mathbb{X}$ and $\widehat{F}\subseteq \mathbb{Y}$: $$ F':= \{ \varphi_n:\; n \in F\}, $$ $$ \widehat{F}:= \{ \psi_n:\; n \in F\}. $$ The topological similarities between $F'$ and $\widehat{F}$ are crucial to establish some properties of $\mathbb{Y}({\mathcal I})$. As usual, we identify a subset $A\subseteq \mathbb{N}$ with its characteristic function. So from now on, an ideal ${\mathcal I}$ over $\mathbb{N}$ will also be viewed as a subset of $2^{\nat}$. The properties of $\mathbb{Y}({\mathcal I})$ naturally depend on the ideal ${\mathcal I}$. \begin{lemma} \label{complexity} If ${\mathcal I}$ is analytic, then $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}({\mathcal I})$ have analytic topologies. \end{lemma} \begin{proof} It is easy to see that the standard subspace subbases for $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}({\mathcal I})$ are also analytic when ${\mathcal I}$ is analytic. Thus the topology is analytic (see \cite[Proposition 3.2]{todoruzca}).
\end{proof} \begin{theorem} \label{fsigY} If $\mathcal{I}$ is an $F_{\sigma}$ ideal over $\mathbb{N}$, then $\mathbb{Y}(\mathcal{I})$ has an $F_{\sigma}$ base and thus it is SS and DG. \end{theorem} \begin{proof} It follows from Lemma \ref{scompact} and Theorems \ref{pesSS} and \ref{sq-disc-generated}. \end{proof} The reason for studying the space $\mathbb{Y}( {\mathcal I})$ is the following theorem. Let $$ \mathcal{I}_{nd}:=\{ F \subseteq \mathbb{N} : \{ \varphi_n : n \in F \} \text{ is nowhere dense in } \mathbb{X} \}. $$ \begin{theorem}\cite{Todoruzca2014} \label{yind} $\mathbb{Y}(\mathcal{I}_{nd})$ is a nodec regular space without isolated points and with an analytic topology. \end{theorem} So far, $\mathbb{Y}(\mathcal{I}_{nd})$ was the only space we knew of with the properties stated above. We will present a generalization of this theorem, exhibiting other ideals ${\mathcal I}$ such that $\mathbb{Y}({\mathcal I})$ has the same properties. \subsection{The space $\mathbb{X}({\mathcal I})$} We present some properties of the space $\mathbb{X}({\mathcal I})$ that will be needed later. We are interested in whether $\mathbb{X}({\mathcal I})$ is DG, SS or $\mbox{q}^+$. We start with a general result, which is proved in the same way as Theorem \ref{fsigY}. \begin{theorem} If $\mathcal{I}$ is an $F_{\sigma}$ ideal over $\mathbb{N}$, then $\mathbb{X}(\mathcal{I})$ has an $F_{\sigma}$ base, and thus it is SS and DG. \end{theorem} We will show that $\mathbb{X}({\mathcal I})$ is not $\mbox{q}^+$\ except in the extreme case when ${\mathcal I}$ is $\mbox{\sf Fin}$. The key lemma is the following result. \begin{lemma} \label{xnoq} There is a pairwise disjoint family $\{ A_n : n \in \mathbb{N} \}$ of finite subsets of $\mathbb{X}$ such that $\bigcup_{k \in E} A_k$ is dense in $\mathbb{X}$ (with the product topology) for any infinite $E \subseteq \mathbb{N}$.
Moreover, for each infinite set $E\subseteq \mathbb{N}$, each selector $S$ for the family $\{ A_n : n \in E\}$ and each $\varphi \notin S \cup \{ \emptyset \}$, there are $p\in\mathbb{N}$ and $\alpha \in 2^{\mathbb{N}}$ such that $\alpha^{-1}(1) \subseteq^* E$, $\varphi \in (\alpha,p)^+$ and $(\alpha,p)^+ \cap S$ is finite. \end{lemma} \begin{proof} We say that $\varphi\in \mathbb{X}$ has the property $(*^m)$, for $m\in\mathbb{N}$, if there are $k \in \mathbb{N}$ and finite sequences $s_i$, for $i=1,\dots,k$, of length $m+1$ such that $\varphi = \bigcup _{i=1}^{k} [s_i]\times\{m_i\}$, $m_i\leq m$ and $s_i \restriction m \neq s_j \restriction m$ whenever $m_i=m_j$ (i.e. $[s_j]\cup [s_i]$ is not a basic clopen set). Let $$ A_m = \{ \varphi \in \mathbb{X}: \varphi \text{ has the property }(*^m) \}. $$ Let $E\subseteq \mathbb{N}$ be an infinite set. We will show that $A:=\bigcup_{k \in E} A_k$ is dense in $2^{2^{\nat}\times\mathbb{N}}$. Let $V$ be a basic open set of $2^{2^{\nat}\times\mathbb{N}}$, say $$ V=\bigcap_{i=1}^{m} (\alpha_i,p_i)^+ \cap \bigcap_{i=1}^{n} (\beta_{i},q_i)^- $$ for some $\alpha_1,\cdots,\alpha_m,\beta_1,\cdots, \beta_{n} \in 2^{\nat} $, $p_1,\cdots,p_m,q_1,\cdots,q_n \in \mathbb{N}$. We need to show that $V\cap A$ is not empty. Pick $l$ large enough such that $l+1\in E$, $l+1>\max\{p_i,q_j: i\leq m, j\leq n\}$, $\alpha_i\restriction l\neq \alpha_j\restriction l$ for all $i$ and $j$ such that $\alpha_i\neq \alpha_j$, $\beta_i\restriction l\neq \beta_j\restriction l$ for all $i$ and $j$ such that $\beta_i\neq \beta_j$ and $\alpha_i\restriction l\neq \beta_j\restriction l$ for all $i$ and $j$ such that $\alpha_i\neq\beta_j$. Let $\varphi = \bigcup_{i=1}^{m} [\alpha_i \restriction (l+2)]\times\{p_i\}$. Then $\varphi$ belongs to $A_{l+1}\cap V$. To see the second claim, let $E\subseteq\mathbb{N}$ be an infinite set and let $S=\{z_n: n \in E \}$ be a selector, that is, $z_n \in A_n$ for all $n \in E$.
Fix $\varphi \notin S \cup \{\emptyset\}$, say $\varphi= \bigcup_{i=1}^{l} [t_i]\times\{p_i\}$ for some $t_i\in 2^{<\omega}$ and $p_i\in \mathbb{N}$. The required $\alpha$ is recursively defined as follows: $$ \alpha (n)=\left\{ \begin{array}{cl} t_1(n), & \text{if } n<|t_1|, \\ 1, &\mbox{if $n \geq |t_1|, n \in E$ and $[(\alpha\restriction n) \widehat{\;\;} 0]\times\{p_1\} \subseteq z_n$}, \\ 0, & \text{otherwise.} \end{array} \right. $$ From the definition of the sets $A_m$, it is easily shown that $(\alpha,p_1) \notin \bigcup\{ z_k: k \geq |t_1| \text{ and }k \in E \}$. Clearly $(\alpha,p_1)^+ \cap S \subseteq \{ z_k : k < |t_1| \text{ and } k \in E \}$ is finite and $\varphi\in (\alpha, p_1)^+$. Finally, it is also clear from the definition of $\alpha$ that $\alpha^{-1}(1) \subseteq^* E$. \end{proof} \begin{theorem} \label{eqclq} Let $\mathcal{I}$ be an ideal on $\mathbb{N}$. Then $\mathbb{X}({\mathcal I})$ is $\mbox{q}^+$\ at some (every) point if, and only if, ${\mathcal I}=\mbox{\sf Fin}$. \end{theorem} \begin{proof} If ${\mathcal I} =\mbox{\sf Fin}$, then $\mathbb{X}({\mathcal I})$ has a countable basis and thus it is $\mbox{q}^+$\ at every point. Since $\mathbb{X}({\mathcal I})$ is homogeneous (being a topological group), if it is $\mbox{q}^+$\ at some point, then it is $\mbox{q}^+$\ at every point. Suppose now that there is $E \in \mathcal{I} \setminus \mbox{\sf Fin}$. We will show that $\mathbb{X}({\mathcal I})$ is not $\mbox{q}^+$\ at some point. Let $\{ A_n : n \in \mathbb{N}\}$ be the sequence, given by Lemma \ref{xnoq}, of pairwise disjoint finite subsets of $\mathbb{X}$ such that $A:=\bigcup_{k \in E} A_k$ is dense in $\mathbb{X}$. Since the topology of $\mathbb{X}$ is finer than the topology of $\mathbb{X}({\mathcal I})$, $A$ is dense in $\mathbb{X}({\mathcal I})$. Let $\varphi \not\in A\cup\{\emptyset\}$. We will show that $\mathbb{X}({\mathcal I})$ fails the property $\mbox{q}^+$\ at $\varphi$.
Let $S$ be a selector of $\{ A_n : n \in E\}$. Let $\alpha\in 2^{\nat}$ and $p\in \mathbb{N}$ be as in the conclusion of Lemma \ref{xnoq}, that is, $\alpha^{-1}(1) \subseteq^* E$, $\varphi \in (\alpha,p)^+$ and $(\alpha,p)^+ \cap S$ is finite. Notice that $\alpha\in{\mathcal I}$ and hence $\varphi$ is not in the $\rho_{\mathcal I}$-closure of $S$. Hence $\mathbb{X}({\mathcal I})$ is not $\mbox{q}^+$\ at $\varphi$. \end{proof} Now we look at the $SS$ property. The following result provides a method to construct dense subsets of $\mathbb{X}({\mathcal I})$. \begin{lemma} \label{DA} For each infinite $A\subseteq \mathbb{N}$, let $\mathbf{D}(A)$ be the following subset of $\mathbb{X}$: \[ \left\lbrace \bigcup_{i=0}^{k} [s_i] \times \{ m_i \} \in \mathbb{X}:\; A \cap s_i^{-1}(0) \neq \emptyset\; \mbox{ for all } i\in \{0,...,k\}, k\in \mathbb{N}, s_i \in 2^{< \omega} \right\rbrace \cup \{\emptyset\}. \] Then $A\in {\mathcal I}$ if, and only if, $\mathbf{D}(A)$ is not dense in $\mathbb{X}({\mathcal I})$, if, and only if, $\mathbf{D}(A)$ is nowhere dense and closed in $\mathbb{X}({\mathcal I})$. \end{lemma} \begin{proof} We first show that $\mathbf{D}(A)$ is closed for every $A\in {\mathcal I}$, by showing that its complement is open in $\mathbb{X}({\mathcal I})$. Let $\varphi\in \mathbb{X}\setminus \mathbf{D}(A)$. Since $\varphi\neq\emptyset$, we have that $\varphi= \bigcup_{i=1}^{k} [s_i] \times \{m_i\}$ and we can assume that $A \cap s_1^{-1}(0) = \emptyset$. Let $B=A \cup s_1^{-1}(1)$. Notice that $B\in {\mathcal I}$. Let $\beta$ be the characteristic function of $B$. Clearly $\beta\in [s_1]$ and thus $\varphi\in (\beta,m_1)^+$. On the other hand, suppose that $\varphi'= \bigcup_{i=1}^{l} [t_i] \times \{p_i\}\in (\beta,m_1)^+$. Assume, without loss of generality, that $\beta\in [t_1]$ and $p_1=m_1$; then $t_1^{-1}(0)\subset \beta^{-1}(0)$ and hence $t_1^{-1}(0)\cap A=\emptyset$.
This shows that $\varphi'\not\in \mathbf{D}(A)$ and thus $(\beta,m_1)^+\cap \mathbf{D}(A)=\emptyset$. Now we show that if $A\in {\mathcal I}$, then $\mathbf{D}(A)$ is nowhere dense. Since $\mathbf{D}(A)$ is closed, it suffices to show that it has empty interior. Let $V$ be a basic $\rho_{\mathcal I}$-open set, say \begin{equation} \label{basicopen1} V=\bigcap_{i=1}^{m} (\alpha_i,p_i)^+ \cap \bigcap_{i=1}^{n} (\beta_{i},q_i)^- \end{equation} for some $\alpha_1,\cdots,\alpha_m,\beta_1,\cdots, \beta_{n} \in \mathcal{I} $ and $p_1,\cdots,p_m,q_1,\cdots,q_n \in \mathbb{N}$. Recall that $(\alpha_i,p_i)\neq (\beta_j,q_j)$ for all $i$ and $j$. Since $\beta_i\in {\mathcal I}$, then $\beta_i^{-1}(0)\neq\emptyset$ for all $i$. Let $l=\max \{ \min(\beta_i^{-1}(0) ) : 1 \leq i \leq n \}$ and let $t$ be the constant sequence $1$ of length $l+1$. Since $\mathbb{X}$ is clearly $\rho_{\mathcal I}$-dense, let $\varphi\in V\cap \mathbb{X}$. Then $\varphi\cup ([t] \times \{0\})\in V\setminus \mathbf{D}(A)$. Finally, we show that if $A\not\in {\mathcal I}$, then $\mathbf{D}(A)$ is dense. Let $V$ be a basic $\rho_{\mathcal I}$-open set as given by \eqref{basicopen1}. Pick $l$ large enough such that $\alpha_i\restriction l\neq \alpha_j\restriction l$ for $i\neq j$, $\beta_i\restriction l\neq \beta_j\restriction l$ for $i\neq j$ and $\alpha_i\restriction l\neq \beta_j\restriction l$ for all $i$ and $j$ such that $\alpha_i\neq\beta_j$. Then pick $k\geq l$ such that $k> \min (\alpha_i^{-1}(0)\cap A)$ for all $i\leq m$ (notice that $\alpha_i^{-1}(0)\cap A\neq \emptyset$ as $A\not\in {\mathcal I}$ and $\alpha_i\in {\mathcal I}$). Let $s_i=\alpha_i\restriction k$ for $i\leq m$ and $\varphi=\bigcup_{i=1}^m [s_i]\times\{p_i\}$. Then $\varphi\in V\cap \mathbf{D}(A)$. \end{proof} We remind the reader that $F'$ denotes the set $\{\varphi_n:\; n\in F\}$ for each $F\subseteq \mathbb{N}$. \begin{theorem}\label{xinoSS} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$.
If ${\mathcal I}$ is not $p^+$, then $\mathbb{X}(\mathcal{I})$ is not $wSS$. \end{theorem} \begin{proof} Suppose that $\mathcal I$ is not $p^+$ and fix a sequence $(A_n)_{n \in \mathbb{N}}$ of subsets of $\mathbb{N}$ such that $A_n \notin \mathcal{I}$, $n \in \mathbb{N}$, and $\bigcup_{n \in \mathbb{N}} F_n \in \mathcal{I}$ for all $F_n \subseteq A_n$ finite. Let $D_n= \mathbf{D}(A_n)$ as in Lemma \ref{DA}. We show that the property $wSS$ fails at the sequence $(D_n)_n$. Let $K_n \subseteq D_n$ be a finite set for each $n$; we need to show that $\bigcup_n K_n$ is nowhere dense in $\mathbb{X}({\mathcal I})$. Let us enumerate each $K_n$ as follows: $$ K_n= \left\lbrace \bigcup_{i=0}^{k_{n,l}} [s_i^{n, l}]\times\{p^{n,l}_i\}: l < |K_n| \right\rbrace. $$ Let $q_n > \max \{ |s_i^{n, l}|: l < |K_n|, i \leq k_{n,l} \}$. By hypothesis, $B=\bigcup _{n \in \mathbb{N}} (A_n \cap \{0, \cdots, q_n \}) \in \mathcal{I}$. Let $\beta$ be the characteristic function of $B$. We claim that for all $m\in \mathbb{N}$ $$ (\beta,m)^+ \cap (\bigcup_{n \in \mathbb{N}} K_n) = \emptyset. $$ Otherwise, there are $n \in \mathbb{N}$, $l < |K_n|$ and $i \leq k_{n,l}$ such that $\beta \in [s_i^{n, l}]$, that is, $s_i^{n, l} \preceq \beta$. But this contradicts the fact that $(A_n \cap \{0, \cdots, q_n\}) \cap \left( s_i^{n, l} \right)^{-1} (0) \neq \emptyset$ for all $i$ and $l$ (recall that $D_n=\mathbf{D}(A_n)$). Thus $(\bigcup_{n \in \mathbb{N}} K_n)\cap (\bigcup_m (\beta,m)^+)=\emptyset$. Since $\bigcup_m (\beta,m)^+$ is $\rho_{\mathcal I}$-open dense, $\bigcup_n K_n$ is $\rho_{\mathcal I}$-nowhere dense. \end{proof} \begin{proposition} \label{converseque} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$. Any element of $\mathbb{X}({\mathcal I})$ is a limit of a non trivial sequence. \end{proposition} \begin{proof} Since $\mathbb{X}({\mathcal I})$ is a topological group, it suffices to show that there is a sequence converging to $\emptyset$ (i.e. to $\varphi_0$).
Let $(\alpha_n)_{n \in \mathbb{N}}$ be a sequence in $2^{\mathbb{N}}$ such that $\alpha_k \restriction (k+1) \neq \alpha_l \restriction (k+1)$ for each $k<l$. Let $(x_n)_{n}$ be defined by $x_n= [\alpha_n \restriction (n+1)]\times\lbrace 0\rbrace$. Let $V$ be a basic neighborhood of $\emptyset$, say $V= \bigcap_{i=1}^{m} (\beta_i,n_i)^-$ for some $\beta_i\in {\mathcal I}$ and $n_i \in \mathbb{N}$. We have that $\alpha_n \restriction (n+1) \not\preceq \beta_i$ for almost every $n$; therefore $x_n \in V$ for almost every $n$, and hence $x_n \to \emptyset$. \end{proof} \begin{question} When is $\mathbb{X}({\mathcal I})$ discretely generated? \end{question} \subsection{The space $c({\mathcal I})$} It is natural to wonder what can be said if instead of $\mathbb{X}$ we use the more familiar space $CL(2^{\nat})$ of all clopen subsets of $2^{\nat}$. Exactly as before we can define a space $c({\mathcal I})$ as follows. \begin{definition} Let $\mathcal{I}$ be an ideal over $\mathbb{N}$ and $c(\mathcal{I})$ be $(CL(2^{\mathbb{N}}), \tau_{\mathcal{I}} )$, where $\tau_{{\mathcal I}}$ is generated by the following subbasis: $$ \alpha^+=\{x \in CL(2^{\mathbb{N}}): \alpha \in x \} \hspace{1cm} \text{and}\hspace{1cm} \alpha^-=\{x \in CL(2^{\mathbb{N}}): \alpha \notin x \}, $$ where $\alpha \in \mathcal{I}$. \end{definition} In fact, it is easy to see that $c({\mathcal I})$ is homeomorphic to $\{\bigcup_{i=0}^{k} [s_i] \times \{ 0 \} \in \mathbb{X}: k\in \mathbb{N}, s_i \in 2^{< \omega} \}$ and by a simple modification of the proofs above we have the following. \begin{theorem} Let $\mathcal{I}$ be an ideal over $\mathbb{N}$. Then $c({\mathcal I})$ is $\mbox{q}^+$\ at some (every) point if, and only if, ${\mathcal I}=\mbox{\sf Fin}$. \end{theorem} \begin{theorem} Suppose that $\mathcal{I}$ is an ideal over $\mathbb{N}$. If ${\mathcal I}$ is not $p^+$, then $c(\mathcal{I})$ is not $wSS$.
\end{theorem} \subsection{The space $\mathbb{Y}({\mathcal I})$} In this section we work with the space $\mathbb{Y}({\mathcal I})$ in order to construct nodec spaces. To that end we introduce an operation $\star$ on ideals. We remind the reader that to each $F \subseteq \mathbb{N}$ we associate the sets $F'= \{ \varphi_n:\; n \in F\}$ and $\widehat{F}= \{ \psi_n:\; n \in F\}$. \begin{definition} Let ${\mathcal I}$ be a nonempty subset of $2^{\nat}$. We define: $$ {\mathcal I} ^{\star} =\{F \subseteq \mathbb{N}:\; F' \text{ is nowhere dense in }\mathbb{X}({\mathcal I}) \}. $$ \end{definition} Notice that ${\mathcal I}^{\star}$ is a free ideal and ${\mathcal I}_{nd}=(2^{\nat})^{\star}$. We are going to present several results that are useful to compare $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}({\mathcal I})$. The following fact will be used several times in the sequel. \begin{lemma} \label{simetricdif} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$. Let $V$ be a basic $\rho_{\mathcal I}$-open set. Then \[ \{n\in\mathbb{N}: \varphi_n\in V \} \triangle \{n\in\mathbb{N}: \psi_n\in V\}\in {\mathcal I}. \] \end{lemma} \begin{proof} Let $V$ be a non empty basic open set, that is, \begin{equation} \label{basicopen} V=\bigcap_{i=1}^{m} (\alpha_i,p_i)^{+} \cap \bigcap_{j=1}^{l} (\beta_j,q_j)^{-}. \end{equation} From the very definition of $\psi_n$ and viewing it as a clopen set, we have that $$ \psi_n=\varphi_n\cup (\{\alpha\in 2^{\nat}:\;\alpha(n)=1\}\times \mathbb{N}). $$ From this we have the following: \[ \{n\in\mathbb{N}: \varphi_n\in V \} \setminus \{n\in\mathbb{N}: \psi_n\in V\}\subseteq \bigcup_{j=1}^l \beta_j^{-1}(1) \] and \[ \{n\in\mathbb{N}: \psi_n\in V \} \setminus \{n\in\mathbb{N}: \varphi_n\in V\}\subseteq \bigcup_{i=1}^m \alpha_i^{-1}(1). \] Thus when each $\alpha_i$ and each $\beta_j$ belongs to ${\mathcal I}$, the unions on the right also belong to ${\mathcal I}$. 
\end{proof} In the following we compare $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}({\mathcal I})$ in terms of their dense and nowhere dense subsets. Some results need that the ideals ${\mathcal I}$ and ${\mathcal I}^\star$ are comparable, i.e. ${\mathcal I}\subseteq {\mathcal I}^\star$ or ${\mathcal I}^\star \subseteq {\mathcal I}$; it is unclear whether this is always the case. We are mostly interested in crowded spaces. The following fact gives a sufficient condition for $\mathbb{Y}({\mathcal I})$ to be crowded. \begin{lemma}\label{crowded} Let ${\mathcal I}$ be an ideal on $\mathbb{N}$. Then \begin{enumerate} \item $\mathbb{X}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{\mathcal I})$. \item $int_{\mathbb{X}({\mathcal I})} ( F')= \emptyset$ for all $F\in {\mathcal I}$ if, and only if, $\mathbb{Y}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{\mathcal I})$. \item If ${\mathcal I}\subseteq {\mathcal I}^\star$, then $\mathbb{Y}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{{\mathcal I}})$. \item If ${\mathcal I}^\star\subseteq {\mathcal I}$, then $\mathbb{Y}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{{\mathcal I}^\star})$. \end{enumerate} \end{lemma} \begin{proof} (1) is clear. The {\em only if} part of (2) was shown in \cite[Lemma 4.2]{Todoruzca2014}, but we include a proof for the sake of completeness. Let $V$ be a nonempty basic $\rho_{\mathcal I}$-open set. We need to find $n$ such that $\psi_n\in V$. From Lemma \ref{simetricdif} we have that \[ E=\{n\in\mathbb{N}: \varphi_n\in V \text{ and } \psi_n\not\in V\}\in {\mathcal I}. \] Since $int_{\mathbb{X}({\mathcal I})} ( E')=\emptyset$, there is $n$ such that $\varphi_n\in V$ and $n\not\in E$. Therefore $\psi_n\in V$.
For the {\em if} part, suppose that $\mathbb{Y}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{\mathcal I})$ and, towards a contradiction, that there is a nonempty basic $\rho_{\mathcal I}$-open set $V$ such that $F=\{n\in \mathbb{N}:\; \varphi_n\in V\}$ belongs to ${\mathcal I}$. From this and Lemma \ref{simetricdif} the following set belongs to ${\mathcal I}$: \[ E=F\cup \{n\in\mathbb{N}: \varphi_n\not\in V \text{ and } \psi_n\in V\}. \] Let $\beta$ be the characteristic function of $E$. Since $V$ is a basic open set of the form \eqref{basicopen}, there is $m$ such that $V\cap (\beta, m)^-\neq\emptyset$. Since $\mathbb{Y}$ is $\rho_{\mathcal I}$-dense, there is $n$ such that $\psi_n\in V\cap (\beta, m)^-$. Hence $\psi_n\not\in (\beta, m)^+$ and, by the definition of $\psi_n$, we have that $\beta(n)=0$. Therefore $n\not\in E$ and $\psi_n\in V$, and hence $\varphi_n\in V$. Thus $n\in F$, a contradiction. (3) follows immediately from (2). To see (4), it suffices to show that $int_{\mathbb{X}({\mathcal I}^\star)} ( F')= \emptyset$ for all $F\in {\mathcal I}^\star$. Let $F\in {\mathcal I}^\star$. By definition, $F'$ is nowhere dense in $\mathbb{X}({\mathcal I})$. In particular $int_{\mathbb{X}({\mathcal I}^\star)}( F')= \emptyset$, as ${\mathcal I}^\star\subseteq {\mathcal I}$. \end{proof} Now we show that the operation $\star$ is monotone. \begin{lemma}\label{IssinIs} Let ${\mathcal I}$ and $\mathcal{J}$ be ideals over $\mathbb{N}$ with $\mathcal{J}\subseteq {\mathcal I}$. Then \begin{enumerate} \item For every basic $\rho_{\mathcal I}$-open set $V$ of $2^{2^{\nat}\times \mathbb{N}}$ there are sets $W$, $U$ such that $V=W\cap U$, $W$ is a $\rho_{\mathcal{J}}$-open set and $U$ is a basic $\rho_{\mathcal I}$-open set which is also $\rho_{\mathcal{J}}$-dense. \item If $A\subseteq 2^{2^{\nat}\times \mathbb{N}}$ is $\rho_\mathcal{J}$-nowhere dense, then $A$ is $\rho_\mathcal{I}$-nowhere dense. \item $\mathcal{J}^\star\subseteq {\mathcal I}^{\star}$.
Moreover, if $\mathcal{J}\subsetneq {\mathcal I}$, then $\mathcal{J}^\star\subsetneq {\mathcal I}^{\star}$. \end{enumerate} \end{lemma} \begin{proof} (1) Let $V$ be a basic open set, that is, \begin{equation} \nonumber V=\bigcap_{i=1}^{m} (\alpha_i,p_i)^{+} \cap \bigcap_{j=1}^{l} (\beta_j,q_j)^{-}. \end{equation} Notice that if every $\alpha$ and $\beta$ belongs to ${\mathcal I}\setminus\mathcal{J}$, then $V$ is $\rho_{\mathcal{J}}$-dense. Thus, given such a basic open set $V$ where every $\alpha$ and $\beta$ belongs to ${\mathcal I}$, we can separate them and form $W$ and $U$ as desired: for $W$, we use the $\alpha$'s and $\beta$'s belonging to $\mathcal J$ (put $W=2^{2^{\nat}\times \mathbb{N}}$ in case none belongs to $\mathcal{J}$) and for $U$, we use the $\alpha$'s and $\beta$'s belonging to ${\mathcal I}\setminus\mathcal J$. (2) Let $A\subseteq 2^{2^{\nat}\times \mathbb{N}}$ be a $\rho_\mathcal{J}$-nowhere dense set. Let $V$ be a basic $\rho_{\mathcal I}$-open set of $2^{2^{\nat}\times \mathbb{N}}$. Then $V=W\cap U$ where $W$ and $U$ are as given by part (1). As $A$ is $\rho_\mathcal{J}$-nowhere dense, there is a non empty $\rho_\mathcal{J}$-open set $W'\subseteq W$ such that $W'\cap A=\emptyset$. Since $W'$ is also $\rho_{\mathcal I}$-open and $U$ is $\rho_\mathcal{J}$-dense, $U\cap W'$ is a non empty $\rho_{\mathcal I}$-open set disjoint from $A$ and contained in $V$. (3) Since $\mathbb{X}$ is dense in $2^{2^{\nat}\times\mathbb{N}}$, we have $A\in \mbox{\sf nwd}(\mathbb{X}({\mathcal I}))$ if, and only if, $A$ is nowhere dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{\mathcal I})$. From this and (2) we immediately get that $\mathcal{J}^\star\subseteq {\mathcal I}^{\star}$. Finally, notice that from Lemma \ref{DA}, we have that for $A\in {\mathcal I}\setminus \mathcal{J}$, the set $\mathbf{D}(A)$ is nowhere dense in $\mathbb{X}({\mathcal I})$ and dense in $\mathbb{X}(\mathcal{J})$.
\end{proof} The next result gives a sufficient condition for $\mathbb{Y}({\mathcal I}^\star)$ to be nodec. It is a generalization of a result from \cite{Todoruzca2014}. \begin{lemma} \label{Lemanodec} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$ and $F\subseteq \mathbb{N}$. \begin{enumerate} \item If $F \in \mathcal{I}$, then $\widehat{F}$ is closed discrete in $\mathbb{Y}(\mathcal{I})$. \item Let ${\mathcal I}$ be such that ${\mathcal I}^{\star} \subseteq {\mathcal I}$. If $\widehat{F}$ is nowhere dense in $\mathbb{Y}({\mathcal I}^{\star})$, then $F \in {\mathcal I}^{\star}$. \item If ${\mathcal I}^{\star} \subseteq {\mathcal I}$, then $\mathbb{Y}({\mathcal I}^{\star})$ is nodec. \end{enumerate} \end{lemma} \begin{proof} (1) is Lemma 4.1 from \cite{Todoruzca2014}; we include the proof for the reader's convenience. Since ${\mathcal I}$ is hereditary, it suffices to show that $\widehat{F}$ is closed for every $F\in {\mathcal I}$. Let $F\in{\mathcal I}$ and let $F$ denote also its characteristic function. Notice that for each $m\in \mathbb{N}$, if $C=\{n\in\mathbb{N}:\; \psi_n\in (F,m)^+\}$, then $\widehat{C}$ is closed in $\mathbb{Y}({\mathcal I})$. We claim that \[ F=\bigcap_{m\in\mathbb{N}}\{n\in\mathbb{N}:\; \psi_n\in (F,m)^+\}. \] From this it follows that $\widehat{F}$ is closed in $\mathbb{Y}({\mathcal I})$. To show the equality above, let $n\in F$; then by the definition of $\psi_n$, we have that $\psi_n\in (F,m)^+$ for all $m\in\mathbb{N}$. Conversely, suppose $n\not\in F$ and let $\varphi_n$ be $[s_1]\times \{m_1\}\cup\cdots \cup [s_k]\times \{m_k\}$. Pick $m\not\in\{m_1,\cdots,m_k\}$; then $\varphi_n\not\in (F,m)^+$ and thus $\psi_n\not\in (F,m)^+$ by the definition of $\psi_n$. (2) is a generalization of Lemma 4.3 of \cite{Todoruzca2014}. Let $\widehat{F}$ be nowhere dense in $\mathbb{Y}({\mathcal I}^\star)$ and suppose, towards a contradiction, that $F\not\in {\mathcal I}^\star$.
Let $V$ be a basic $\rho_{{\mathcal I}}$-open set such that $F'\cap V$ is $\rho_{\mathcal I}$-dense in $V$. By Lemma \ref{IssinIs}, there are sets $W$ and $U$ such that $V=W\cap U$, $W$ is a $\rho_{{\mathcal I}^\star}$-open set, $U$ is a basic $\rho_{\mathcal I}$-open set and $U$ is also $\rho_{{\mathcal I}^\star}$-dense. Since $\widehat{F}$ is nowhere dense in $\mathbb{Y}({\mathcal I}^\star)$, there is a basic $\rho_{{\mathcal I}^\star}$-open set $W'\subseteq W$ such that $\widehat{F}\cap W'=\emptyset$, that is \[ F\cap\{n\in \mathbb{N}:\; \psi_n\in W'\}=\emptyset. \] From Lemma \ref{simetricdif} we know that \[ \{n\in\mathbb{N}: \varphi_n\in W' \} \setminus \{n\in\mathbb{N}: \psi_n\in W'\}\in {\mathcal I}^\star. \] From this and the previous fact we get \[ F\cap \{n\in \mathbb{N}:\; \varphi_n\in W'\}\in {\mathcal I}^\star. \] This says that $F'\cap W'$ is nowhere dense in $\mathbb{X}({\mathcal I})$, which is a contradiction, as by construction, $F'\cap V$ is $\rho_{\mathcal I}$-dense in $V$ and $W'\cap U\subseteq V$ is a non empty $\rho_{\mathcal I}$-open set (it is non empty as $U$ is $\rho_{{\mathcal I}^\star}$-dense). (3) follows immediately from (1) and (2). \end{proof} The natural bijection $\psi_n\mapsto \varphi_n$ is not continuous (neither is its inverse); however, it has some form of semi-continuity, as we show below. \begin{proposition} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$. Let $\Gamma:\mathbb{Y}\to \mathbb{X}$ be given by $\Gamma (\psi_n)=\varphi_n$. Let $\alpha\in {\mathcal I} $ and $p\in \mathbb{N}$. Then $\Gamma^{-1} ((\alpha, p)^+\cap \mathbb{X})$ is open in $\mathbb{Y}({\mathcal I})$. In general, if $V$ is a $\rho_{\mathcal I}$-basic open set, then there is $D\subseteq \mathbb{Y}$ closed discrete in $\mathbb{Y}({\mathcal I})$ and a $\rho_{\mathcal I}$-open set $W$ such that $\Gamma^{-1}(V\cap \mathbb{X})= (W\cap \mathbb{Y})\cup D$. \end{proposition} \begin{proof} Let $\alpha\in {\mathcal I}$ and $p\in\mathbb{N}$.
Let $O=\{\psi_n:\; \varphi_n \in (\alpha, p)^+\}$. We need to show that $O$ is open in $\mathbb{Y}({\mathcal I})$. Let $F=\alpha^{-1}(1)$. Since $((\alpha, p)^+\cap \mathbb{Y})\setminus \widehat{F}\subseteq O\subseteq (\alpha, p)^+\cap \mathbb{Y}$, there is $A\subseteq F$ such that $O =((\alpha, p)^+\cap \mathbb{Y})\setminus \widehat{A}$. As $A\in {\mathcal I}$, then by Lemma \ref{Lemanodec}, $\widehat{A}$ is closed discrete in $\mathbb{Y}({\mathcal I})$. Thus $O$ is open in $\mathbb{Y}({\mathcal I})$. On the other hand, $\{\psi_n: \varphi_n\in (\alpha,p)^-\}= ((\alpha,p)^-\cap \mathbb{Y})\cup ( \{\psi_n: \varphi_n\in (\alpha,p)^{-}\}\cap \widehat{F})$. \end{proof} The derivative operator on $\mathbb{Y}({\mathcal I})$ can be characterized as follows. \begin{proposition} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$ and $A\subseteq \mathbb{N}$. Then $\psi_l$ is a $\rho_{\mathcal I}$-accumulation point of $\widehat{A}$ if, and only if, for every non empty $\rho_{{\mathcal I}}$-open set $V$ with $\psi_l\in V$ we have \[ \{n\in A:\;\varphi_n\in V\}\not\in {\mathcal I}. \] \end{proposition} \begin{proof} Let $V$ be a $\rho_{{\mathcal I}}$-open set with $\psi_l\in V$. Suppose $F=\{n\in A:\;\varphi_n\in V\}\in {\mathcal I}$. Then by Lemma \ref{Lemanodec}, $\widehat{F}$ is closed discrete in $\mathbb{Y}({\mathcal I})$ which is a contradiction as $\psi_l$ is an accumulation point of $\widehat{F}$. Conversely, let $V$ be a basic $\rho_{\mathcal I}$-open set containing $\psi_l$. By Lemma \ref{simetricdif} the following set belongs to ${\mathcal I}$: \[ E=\{n\in\mathbb{N}: \varphi_n\in V \text{ and } \psi_n\not\in V\}. \] We also have \[ F=\{n\in A:\;\varphi_n\in V\}\subseteq \{n\in A: \varphi_n\in V \text{ and } \psi_n\in V\}\cup E. \] Since $E\in {\mathcal I}$ and by hypothesis $F\not\in {\mathcal I}$, then there are infinitely many $n\in A$ such that $\psi_n\in V$ and we are done. 
\end{proof} Now we show that the spaces $\mathbb{X}({\mathcal I})$ and $\mathbb{Y}({\mathcal I})$ are not homeomorphic in general. \begin{proposition} Let ${\mathcal I}$ be a tall ideal over $\mathbb{N}$. There are no non trivial convergent sequences in $\mathbb{Y}({\mathcal I})$. In particular, $\mathbb{Y}({\mathcal I})$ is not homeomorphic to $\mathbb{X}({\mathcal I})$. \end{proposition} \begin{proof} Let $A\subseteq \mathbb{N}$ be an infinite set. We will show that $\widehat{A}=\{\psi_n:\; n\in A\}$ is not convergent in $\mathbb{Y}({\mathcal I})$. Since ${\mathcal I}$ is tall, pick $B\subseteq A$ infinite with $B\in {\mathcal I}$. Then $\widehat{B}$ is closed discrete in $\mathbb{Y}({\mathcal I})$ (by Lemma \ref{Lemanodec}). Thus $\widehat{A}$ is not convergent. From this, the last claim follows since $\mathbb{X}({\mathcal I})$ has plenty of convergent sequences (see Proposition \ref{converseque}). \end{proof} The next result shows that the operation $\star$ preserves analyticity. \begin{lemma} \label{staranalytic} Let ${\mathcal I}$ be an analytic ideal over $\mathbb{N}$. Then ${\mathcal I}^{\star}$ is analytic. \end{lemma} \begin{proof} The argument is analogous to that of Lemma 4.8 of \cite{Todoruzca2014}. We include a sketch of it for the sake of completeness. First, we recall a result from \cite{Todoruzca2014} (see Lemma 4.7). \noindent{\em Claim:} Let $J$ be an infinite set. Then $M \subseteq 2^{J}$ is nowhere dense if, and only if, there is $C \subseteq J$ countable such that $M\restriction C=\{ x \restriction C: x \in M \}$ is nowhere dense in $2^C$. Let $Z$ be the set of all $z \in (2^{\nat} \times \mathbb{N})^{\mathbb{N}}$ such that $z(k) \neq z(j)$ for all $k \neq j$ and $\{z(k): k \in \mathbb{N} \} \subseteq {\mathcal I} \times \mathbb{N}$. Since ${\mathcal I}$ is an analytic set, $Z$ is an analytic subset of $(2^{\nat} \times \mathbb{N})^{\mathbb{N}}$.
Consider the following relation $R \subseteq \mathcal{P}(\mathbb{N}) \times (2^{\nat} \times \mathbb{N})^{\mathbb{N}}$: $$ (F,z) \in R \Leftrightarrow \; z\in Z \;\mbox{and }\; \{\varphi_n \restriction \{z(k): k \in \mathbb{N} \} : n \in F \} \text{ is nowhere dense in }2^{ \{z(k):\; k \in \mathbb{N} \} }. $$ Then $R$ is an analytic set. From the claim above, we have $$ F \in {\mathcal I}^{\star} \Leftrightarrow (\exists z \in (2^{\nat} \times \mathbb{N})^{\mathbb{N}}) R(F,z). $$ Thus, ${\mathcal I}^{\star}$ is analytic. \end{proof} Finally, we can show one of our main results. Let us define a sequence $({\mathcal I}^k)_{k \in \mathbb{N}}$ of ideals on $\mathbb{N}$ as follows: $${\mathcal I}^k=\left\{ \begin{array}{cl} 2^{\nat}, & \text{if }k=0, \\ ({\mathcal I}^{k-1})^{\star}, & \text{if }k>0. \\ \end{array} \right.$$ Notice that ${\mathcal I}^{k+1} \subsetneq {\mathcal I}^{k}$ for each $k \in \mathbb{N}$ by Lemma \ref{IssinIs}. \begin{theorem}\label{Ykesnod} For all $k>0$, $\mathbb{Y}({\mathcal I}^k)$ is analytic, nodec and crowded. \end{theorem} \begin{proof} That $\mathbb{Y}({\mathcal I}^k)$ is analytic and nodec follows from Lemmas \ref{staranalytic}, \ref{complexity}, \ref{Lemanodec} and \ref{IssinIs}. Since ${\mathcal I}^k\subseteq {\mathcal I}_{nd}$, by Lemma \ref{crowded}, $\mathbb{Y}({\mathcal I}^k)$ is crowded. \end{proof} We do not know whether $\mathbb{Y}({\mathcal I}^\star)$ is nodec for ideals such that ${\mathcal I}^\star \not\subseteq {\mathcal I}$. The reason is that it is not clear if part (2) in Lemma \ref{Lemanodec} holds in general without the assumption that ${\mathcal I}^\star\subseteq {\mathcal I}$. In this respect, we were only able to show the following. \begin{lemma} Let ${\mathcal I}$ be an ideal on $\mathbb{N}$ such that ${\mathcal I}\subseteq {\mathcal I}^\star$. Let $A\subseteq \mathbb{N}$. Then \begin{enumerate} \item Let $V$ be a non empty $\rho_{\mathcal I}$-open set.
If $A'$ is $\rho_{\mathcal I}$-dense in $V$, then $\widehat{A}$ is $\rho_{\mathcal I}$-dense in $V$. \item If $\widehat{A}$ is nowhere dense in $\mathbb{Y}({\mathcal I})$, then $A'$ is nowhere dense in $\mathbb{X}({\mathcal I})$ (i.e., $A \in {\mathcal I}^\star$). In particular, if $\widehat{A}$ is nowhere dense in $\mathbb{Y}({\mathcal I})$, then $\widehat{A}$ is closed discrete in $\mathbb{Y}({\mathcal I}^\star)$. \end{enumerate} \end{lemma} \begin{proof} (1) Let $V$ be a non empty $\rho_{\mathcal I}$-open set and suppose $A'$ is $\rho_{\mathcal I}$-dense in $V$. Let $W$ be a basic $\rho_{\mathcal I}$-open set with $W\subseteq V$. We need to find $n\in A$ such that $\psi_n\in W$. By Lemma \ref{simetricdif} the following set belongs to ${\mathcal I}$: \[ E=\{n\in\mathbb{N}: \varphi_n\in W \text{ and } \psi_n\not\in W\}. \] As ${\mathcal I}\subseteq {\mathcal I}^\star$, then $E'$ is nowhere dense in $\mathbb{X}({\mathcal I})$. Since $A'$ is dense in $V$, then $A'\cap W\not\subseteq E'$. Let $n\in A\setminus E$ such that $\varphi_n\in W$. As $n\not\in E$, then $\psi_n\in W$. (2) Follows from (1) and part (1) in Lemma \ref{Lemanodec}. \end{proof} Now we compare the dense sets in $\mathbb{Y}({\mathcal I})$ and $\mathbb{X}({\mathcal I})$. \begin{lemma} \label{denytodenxstar} Let ${\mathcal I}$ be an ideal on $\mathbb{N}$ such that $\mathbb{Y}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{\mathcal I})$ and $D\subseteq \mathbb{N}$. If $\widehat{D}$ is dense in $\mathbb{Y}({\mathcal I})$, then $D'$ is dense in $\mathbb{X}({\mathcal I})$. \end{lemma} \begin{proof} Suppose $\widehat{D}$ is dense in $\mathbb{Y}({\mathcal I})$. Let $V$ be a basic $\rho_{\mathcal I}$-open set. We need to find $n\in D$ such that $\varphi_n\in V$. By Lemma \ref{simetricdif} the following set belongs to ${\mathcal I}$: \[ E=\{n\in\mathbb{N}: \varphi_n\not\in V \text{ and } \psi_n\in V\}. \] Let $F=\{n\in D:\; \psi_n\in V\}$. 
Since $\widehat{D}$ is $\rho_{{\mathcal I}}$-dense, $F\not\in {\mathcal I}$ (by part (1) of Lemma \ref{Lemanodec} and the assumption that $\mathbb{Y}$ is dense in $(2^{2^{\nat}\times\mathbb{N}},\rho_{\mathcal I})$). Thus there is $n\in F\setminus E$. Then $\psi_n\in V$ and $\varphi_n\in V$. \end{proof} Observe that $\mbox{\sf Fin}\subseteq \mbox{\sf Fin}^\star\subseteq \mbox{\sf Fin}^{\star\star}\subseteq \cdots\subseteq {\mathcal I}^k$ for all $k$. Notice that $\mbox{\sf Fin}^\star$ is isomorphic to $\mbox{\sf nwd} (\mathbb{Q})$ as $\mathbb{X}(\mbox{\sf Fin})$ is homeomorphic to $\mathbb{Q}$. The following is a natural and intriguing question. \begin{question} Is $\mathbb{Y}(\mbox{\sf Fin}^\star)$ nodec? \end{question} It is unclear when an ideal ${\mathcal I}$ satisfies either ${\mathcal I}\subseteq {\mathcal I}^\star$ or ${\mathcal I}^\star\subseteq {\mathcal I}$. The following question is a concrete instance of this problem. \begin{question} Two ideals that naturally extend $\mbox{\sf Fin}$ are $\{\emptyset\}\times\mbox{\sf Fin}$ and $\mbox{\sf Fin}\times \{\emptyset\}$ (where $\times$ denotes the Fubini product). Let ${\mathcal I}$ be any of those two ideals. Is ${\mathcal I}\subseteq {\mathcal I}^\star$? \end{question} \subsection{SS property in $\mathbb{Y}({\mathcal I})$} We do not know whether $\mathbb{Y}({\mathcal I}_{nd})$ is $SS$. However, we show below that $\mathbb{Y}({\mathcal I}^k)$ is not $wSS$ for all $k>1$; this was the reason to introduce the ideals ${\mathcal I}^\star$. We need an auxiliary result. \begin{lemma} \label{densostar} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$ such that ${\mathcal I}^{ \star} \subseteq {\mathcal I}$. Let $V$ be a non empty $\rho_{{\mathcal I}^\star}$-open set and $D \subseteq \mathbb{N}$. If $D'$ is $\rho_{\mathcal I}$-dense in $V$, then $\widehat{D}$ is $\rho_{{\mathcal I}^\star}$-dense in $V$.
\end{lemma} \begin{proof} Let $V$ be a non empty $\rho_{{\mathcal I}^\star}$-open set and suppose that $D'$ is $\rho_{\mathcal I}$-dense in $V$. Let $W$ be a $\rho_{{\mathcal I}^\star}$-basic open set such that $W\subseteq V$. We need to show that there is $n\in D$ such that $\psi_n\in W$. By Lemma \ref{simetricdif} the following set belongs to ${\mathcal I}^\star$: \[ E=\{n\in\mathbb{N}: \varphi_n\in W \text{ and } \psi_n\not\in W\}. \] Since $W$ is also $\rho_{\mathcal I}$-open (as ${\mathcal I}^\star \subseteq {\mathcal I}$) and $E'$ is $\rho_{\mathcal I}$-nowhere dense, there is a non empty $\rho_{\mathcal I}$-open set $V_1\subseteq W$ such that $V_1\cap E'=\emptyset$. Since $D'$ is $\rho_{\mathcal I}$-dense in $V$, there is $n\in D$ such that $\varphi_n\in V_1$. Notice that $n\not\in E$. Since $\varphi_n\in W$, it follows that $\psi_n\in W$. \end{proof} \begin{theorem}\label{Yknoss} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$ such that ${\mathcal I}^\star\subseteq {\mathcal I}$. Then $\mathbb{Y}({\mathcal I}^{\star\star})$ is not $wSS$. \end{theorem} \begin{proof} Notice that $\mathbb{X}$ is $\rho_{{\mathcal I}}$-crowded (see Lemma \ref{crowded}). Also, observe that ${\mathcal I}^{\star\star}\subseteq {\mathcal I}^\star$ (see Lemma \ref{IssinIs}). Let $(U_n)_{n \in \mathbb{N}}$ be a pairwise disjoint sequence of non empty $\rho_{{\mathcal I}^{\star\star}}$-open sets. Let $A_n = \{ m \in \mathbb{N}: \varphi_m \in U_n \}$. It is clear that $A_n \notin {\mathcal I}^{\star}$ for each $n \in \mathbb{N}$. It is easy to verify that the sequence $(A_n)_n$ witnesses that ${\mathcal I}^\star$ is not $\mbox{p}^+$. Let $D_n=\mathbf{D}(A_n)$, as defined in Lemma \ref{DA}. Let $$ E_n= \{\psi_m \in \mathbb{Y}:\; \varphi_m \in D_n \}. $$ We claim that the sequence $(E_n)_{n \in \mathbb{N}}$ witnesses that the space $\mathbb{Y}(\mathcal{I}^{\star\star})$ is not $wSS$.
In fact, since $A_n \notin {\mathcal I}^{\star}$, $D_n$ is dense in $\mathbb{X}({\mathcal I}^{\star})$ (by Lemma \ref{DA}), so $E_n$ is dense in $\mathbb{Y}({\mathcal I}^{\star\star})$ (by Lemma \ref{densostar}). Let $K_n \subseteq E_n$ be a finite set and $L_n= \{ \varphi_m: \psi_m \in K_n \}$ for each $n\in \mathbb{N}$. Since $A_n\not\in{\mathcal I}^{\star}$ and ${\mathcal I}^{\star}$ is not $\mbox{p}^+$, by the proof of Theorem \ref{xinoSS}, $L=\bigcup_{n \in \mathbb{N}} L_n$ is nowhere dense in $\mathbb{X}(\mathcal{I}^{\star})$. Thus $L\in {\mathcal I}^{\star\star}$. Therefore $\widehat{L}=\bigcup_{n \in \mathbb{N}} K_n$ is closed discrete in $\mathbb{Y}(\mathcal{I}^{\star\star})$ (by Lemma \ref{Lemanodec}). \end{proof} We have seen in Theorem \ref{Ykesnod} that $\mathbb{Y}({\mathcal I}^k)$ is nodec for every $k\geq 1$. From Theorem \ref{Yknoss} we have the following. \begin{corollary} $\mathbb{Y}({\mathcal I}^{k})$ is not $wSS$ for every $k>1$. \end{corollary} Recall that ${\mathcal I}^{1}$ is ${\mathcal I}_{nd}$. We do not know whether $\mathbb{Y}({\mathcal I}_{nd})$ is $SS$. We only know the following. Suppose $\widehat{D_n}=\{\psi_m: m \in D_n \}$ is open dense in $\mathbb{Y}(\mathcal{I}_{nd})$, for every $n\in \mathbb{N}$. Then there is $F_n \subseteq D_n$ finite for each $n$ such that $\bigcup_{n \in \mathbb{N}} \widehat{F_n}$ is dense. \begin{question} Is there an ideal ${\mathcal I}$ on $\mathbb{N}$ such that ${\mathcal I}\subseteq {\mathcal I}^\star$ and $\mathbb{Y}({\mathcal I}^\star)$ is $wSS$? In particular, is $\mathbb{Y}(\mbox{\sf Fin}^\star)$ $wSS$? \end{question} \subsection{$\mbox{q}^+$\ in $\mathbb{Y}({\mathcal I})$} We shall prove that for a certain kind of ideals, $\mathbb{Y}({\mathcal I})$ is not $q^+$. We use a construction quite similar to that in the proof of Theorem \ref{eqclq}. We recall that in the proof of Lemma \ref{xnoq} we have introduced the following property: Let $m \in \mathbb{N}$.
We say that $\varphi \in \mathbb{X}$ has the property $(*^m)$ if there are $k \in \mathbb{N}$, $s_i \in 2^{m+1}$ $(i=1,...,k)$ finite sequences and $m_i \leq m$ $(i=1,...,k)$ natural numbers such that $ \varphi= \bigcup _{i=1}^{k} [s_i]\times \{m_i\}$ and if $m_i=m_j$ with $i \neq j$, then $s_i \restriction m \neq s_j \restriction m$. \begin{lemma} \label{xnoq2} Let ${\mathcal I}$ be an ideal over $\mathbb{N}$ such that ${\mathcal I}\subseteq {\mathcal I}_{nd}$. Let $$ A_m = \{ \varphi \in \mathbb{X}: \varphi \text{ has the property }(*^m) \} $$ and \[ B_m=\{\psi_n\in \mathbb{Y}:\; \varphi_n \in A_m\}. \] Let $L= \{n \in \mathbb{N}: \varphi_n \notin \bigcup _{m \in \mathbb{N}} A_m \}$ and suppose there is an infinite set $L'=\{m_k:\;k\in \mathbb{N}\}\subseteq L$ such that $L'\in \mathcal{I}$. Let \[ B=\bigcup_k B_{m_k}. \] Let $q\in \mathbb{N}$ be such that $\varphi_q=2^{\nat}\times \{0\}$. Then \begin{enumerate} \item $B$ is dense in $\mathbb{Y}({\mathcal I})$ and, in particular, $\psi_q \in cl_{\rho_{\mathcal I}} (B)$. \item Let $S\subseteq B$ be such that $S\cap B_{m_k}$ has at most one element for each $k$, then $\psi_q\not\in cl_{\rho_{\mathcal I}} (S)$. \end{enumerate} \end{lemma} \proof (1) Let $A=\bigcup_k A_{m_k}$. By Lemma \ref{xnoq}, $A$ is dense in $\mathbb{X}$. Thus by Lemma \ref{densostar}, $B$ is dense in $\mathbb{Y}({\mathcal I}_{nd})$ (recall that ${\mathcal I}_{nd}=(2^{\nat})^\star$). As ${\mathcal I}\subseteq {\mathcal I}_{nd}$, then $B$ is also dense in $\mathbb{Y}({\mathcal I})$. (2) Let $S=\{\psi_{n_k}:\; k\in \mathbb{N}\}$ be such that $\psi_{n_k}\in B_{m_k}$ for all $k\in \mathbb{N}$. We will show that $\psi_q\not\in cl_{\rho_{\mathcal I}} (S)$. Let $\alpha\in2^{\nat}$ be defined as follows: If $0 \in L'$ and $[\langle 0\rangle] \times \{0\} \subseteq \varphi_{n_0} $, then $\alpha(0)=1$. Otherwise, $\alpha(0)=0$. 
For $n>0$, $$ \alpha (n)=\left\{ \begin{array}{cl} 1, & \text{if }n \in L'\text{, } \text{$n=m_k$ for some $k$} \\ & \text{ and }[\langle \alpha(0),...,\alpha(n-1),0\rangle] \times \{0\} \subseteq \varphi_{n_k}, \\ 0, & \text{ otherwise.} \end{array} \right. $$ Observe that $\alpha \in \mathcal{I}$, as $\alpha^{-1}(1) \subseteq L' \in \mathcal{I}$. It is clear that $\psi_q\in (\alpha,0)^+$. To finish the proof, it suffices to show that $(\alpha,0) \notin \bigcup_{k \in \mathbb{N}} \psi_{n_k}$. Suppose, towards a contradiction, that there is $l \in \mathbb{N}$ such that $(\alpha,0) \in \psi_{n_l}$, that is, $(\alpha,0) \in \varphi_{n_l} \cup (\{\gamma\in 2^{\nat}:\; \gamma(n_l)=1\} \times \mathbb{N})$. There are two cases to be considered. (i) Suppose $\alpha(n_l)=1$. Then $n_l\in L'$ and thus $\varphi_{n_l}\not\in A_{m_l}$, which contradicts that $\psi_{n_l}\in B_{m_l}$. (ii) Suppose $\alpha(n_l)=0$ and thus $(\alpha, 0)\in \varphi_{n_l}$. Let $\varphi_{n_l}= \bigcup_{i=1}^{r} [s_i] \times \{p_i\}$ with $s_i \in 2^{m_l+1}$. Then $\alpha \in [s]$, where $s$ is $s_i$ for some $i$ with $p_i=0$. Hence $\alpha(n)=s(n)$ for all $n \leq m_l$. We consider two subcases. Suppose $\alpha(m_l)=1$. Then $s(m_l)=1$. Let $t=\alpha\restriction m_l$, so that $s=t\widehat{\;} 1$. Then by the definition of $\alpha$, we have that $[t\widehat{\;}0]\times\{0\}\subseteq \varphi_{n_l}$. But also $[s]\times\{0\}=[t\widehat{\;}1]\times\{0\}\subseteq \varphi_{n_l}$, which contradicts that $\varphi_{n_l}\in A_{m_l}$ (i.e. that it has property $(*^{m_l})$). Now suppose that $\alpha(m_l)=0$, so that $s=t\widehat{\;}0$ with $t=\alpha\restriction m_l$. Then, by the definition of $\alpha$, $[s]\times\{0\}=[t\widehat{\;}0]\times\{0\}\not\subseteq \varphi_{n_l}$, but this contradicts that $[s]\times\{0\}$ is $[s_i]\times\{p_i\}$ for some $i$. \endproof From the previous lemma we immediately get the following. \begin{theorem}\label{YI no q} Let ${\mathcal I}$ be a tall ideal over $\mathbb{N}$ such that $\mathcal{I} \subseteq \mathcal{I}_{nd}$. Then $\mathbb{Y}(\mathcal{I})$ is not $q^+$.
\end{theorem} \begin{question} Is there an ideal (necessarily non-tall) different from $\mbox{\sf Fin}$ such that $\mathbb{Y}({\mathcal I})$ is $\mbox{q}^+$? Two natural candidates are $\{\emptyset\}\times\mbox{\sf Fin}$ and $\mbox{\sf Fin}\times \{\emptyset\}$. \end{question} Finally, we have the following. \begin{theorem} $\mathbb{Y}({\mathcal I}^k)$ is a non-SS, non-$\mbox{q}^+$, nodec regular space with analytic topology for every $k>1$. \end{theorem} \noindent {\bf Acknowledgment:} We thank the referee for the comments that improved the presentation of the paper. \end{document}
\begin{document} \title{Superradiant cooling, trapping, and lasing of dipole-interacting clock atoms} \author{Christoph Hotter$^*$, David Plankensteiner,\\Laurin Ostermann and Helmut Ritsch} \address{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, Technikerstra\ss e 21a\\A-6020 Innsbruck, Austria\\$^*$\blue{\underline{[email protected]}}} \begin{abstract} A cold atomic gas with an inverted population on a transition coupled to a field mode of an optical resonator constitutes a generic model of a laser. For quasi-continuous operation, external pumping as well as trapping and cooling of the atoms is required to confine them and achieve enough gain inside the resonator. As inverted atoms are high-field seekers in blue detuned light fields, tuning the cavity mode to the blue side of the atomic gain transition allows for combining lasing with stimulated cavity cooling and dipole trapping of the atoms at the antinodes of the laser field. We study such a configuration using a semiclassical description of particle motion along the cavity axis. Extending earlier work, we include free space atomic and cavity decay as well as atomic dipole-dipole interactions and their corresponding forces. We show that for a proper choice of parameters even in the bad cavity limit the atoms can create a sufficiently strong field inside the resonator such that they are trapped and cooled via the superradiant lasing action with less than one photon on average inside the cavity. \end{abstract} \section{Introduction} The idea of building a superradiant laser operating on an ultra-narrow optical clock transition in a cold gas has fostered the vision of implementing the optical analog of microwave clock masers~\cite{vanier1989quantum} with a precision and accuracy improved by many orders of magnitude~\cite{haake1993superradiant,bohnet2012steady,maier2014superradiant,norcia2018cavity}.
Today, a central limitation of the best optical clock implementations~\cite{campbell2017fermi,leopardi2018absolute} is noise within the mirrors of the reference oscillators~\cite{ludlow2007compact} that act as the flywheels locked to the atomic transition frequency. When operated on the clock transition in the bad cavity regime and at low photon numbers, superradiant lasers have been predicted to be very insensitive to these fluctuations and create an accurate and precise frequency reference~\cite{meiser2010steady,bohnet2012steady, maier2014superradiant,norcia2016superradiance}. In principle, operated at high photon numbers sufficiently above the lasing threshold, lasers do not exhibit a fundamental limit of their linewidth~\cite{schawlow1958infrared,kuppens1994quantum}. In practice, however, the operational laser linewidth is determined by technical noise in the resonator and in the active medium. Technological advances have reduced this limit down to the order of Hz~\cite{kessler2012sub}, which has led to a growing interest in using long-lived clock states as the gain medium in a new generation of so-called superradiant lasers~\cite{meiser2009prospects,bohnet2012steady,bohnet2013active,vuletic2012an_almost}. However, the long lifetimes, i.e.\ the small linewidths, of those states entail a minute dipole moment of the involved transitions, thus making it necessary to work in the strong collective coupling regime. In this domain, by means of synchronization through the cavity field~\cite{weiner2017phase,xu2013simulating,henschel2010cavity}, a large collective dipole will build up, which can provide the necessary gain. Here, the atoms do not need to be confined in a small volume as is the case with Dicke superradiance~\cite{dicke1954coherence}, but they have to couple almost equally to the cavity.
In the present manuscript we investigate a model of a superradiant laser where the gain medium is self-trapped by the cavity field it creates via stimulated emission into the resonator. At a suitably chosen detuning of the cavity above the atomic transition frequency, the atoms will also slow down and experience cooling within their prescribed trap positions while simultaneously acting as the gain medium for the laser~\cite{salzburger2004atomic,salzburger2006lasing}. Recently, very efficient cooling has been predicted involving cavity-mediated collective superradiant decay and atomic dipole-dipole interactions~\cite{xu2016supercooling}. Since inverted atoms in a blue detuned cavity are high-field seekers, they are drawn to mode antinodes and almost equal coupling can be achieved. \section{Model} Let us consider $N$ identical two-level atoms confined to one-dimensional motion along the axis of a Fabry-Perot cavity. At finite temperature we can assume a classical description of atomic motion along the cavity axis. All transition dipoles are assumed to be parallel to each other and perpendicular to the cavity axis, as in a $J=0 \to J=1$ transition. The atoms couple to the cavity mode via the well-known Tavis-Cummings interaction with a strength given by the cavity mode function at the atomic position, $g(r_i)$. Given that the atomic ensemble is closely spaced, we need to take into account coherent dipole-dipole energy exchange ($\Omega_{ij}$) and collective spontaneous emission ($\Gamma_{ij}$), which are both mediated by the surrounding vacuum. Furthermore, we assume that population inversion of the two relevant atomic levels is created via an individual transverse incoherent pump with the rate $R$. In practice this has to be implemented via a multistep process involving intermediate levels. Photons can leak through the cavity mirrors at a cavity loss rate $\kappa$ (see \fref{fig:model}).
\begin{figure} \caption{\emph{Schematic of our model.}} \label{fig:model} \end{figure} The Hamiltonian of this system in the rotating wave approximation and in the reference frame of the atoms is \begin{equation} H = \hbar\Delta a^{\dagger} a + \sum_{i = 1}^{N} \hbar g(r_i) [ a \sigma_{i}^{+} + a^{\dagger} \sigma_{i}^{-}] + \sum_{i,j : i \neq j} \hbar\Omega_{ij}\sigma_{i}^{+}\sigma_{j}^{-}, \label{Hamiltonian} \end{equation} where $a^\dagger$ ($a$) is the bosonic creation (annihilation) operator which creates (annihilates) a photon with frequency $\omega\ts{c}$ in the cavity. The operators $\sigma_i^+$ and $\sigma_i^-$ are the atomic raising and lowering operators of the $i$th two-level atom with transition frequency $\omega\ts{a}$. The $i$th dipole couples to the cavity mode with the position-dependent coupling strength $g(r_i) = g \cos(k\ts{c} r_i)$. The coupling constant is denoted by $g$ and $k\ts{c} = 2 \pi / \lambda\ts{c}$ is the wave number of the cavity mode. The frequency $\Omega_{ij}$ quantifies the resonant dipole-dipole energy transfer between atoms $i$ and $j$. The detuning between the cavity resonance frequency and the atomic transition frequency is given by $\Delta = \omega\ts{c} - \omega\ts{a}$. Dissipative processes are accounted for by the Liouvillian $\mathcal{L}$ in the master equation \begin{equation} \dot{\rho} = - \frac{i}{\hbar} \left[ H,\rho \right] + \mathcal{L}\left[\rho\right].
\label{eq:master_eq} \end{equation} Within the Markov approximation our Liouvillian consists of three parts, namely \begin{equation} \mathcal{L}[\rho] = \mathcal{L}_{\mathrm{pump}}[\rho] + \mathcal{L}_{\mathrm{cav}}[\rho] + \mathcal{L}_{\mathrm{cd}}[\rho], \label{eq:Liouvillian} \end{equation} where the individual incoherent transversal pump is characterized by the pump rate $R$, \begin{equation} \mathcal{L}_{\mathrm{pump}}[\rho] = \frac{R}{2} \sum_i (2 \sigma_i^+ \rho \sigma_i^- - \sigma_i^- \sigma_i^+ \rho - \rho \sigma_i^-\sigma_i^+), \label{eq:Liouvillian_pump} \end{equation} the cavity losses occur at the cavity decay rate $\kappa$, \begin{equation} \mathcal{L}_{\mathrm{cav}}[\rho] = \kappa (2 a \rho a^{\dagger} - a^{\dagger} a \rho - \rho a^{\dagger} a), \label{eq:Liouvillian_cavity} \end{equation} and the collective atomic decay is determined by the generalized spontaneous emission rates $\Gamma_{ij}$, \begin{equation} \mathcal{L}_{\mathrm{cd}}[\rho] = \frac{1}{2} \sum_{ij} \Gamma_{ij} (2 \sigma_i^- \rho \sigma_j^+ - \sigma_i^+ \sigma_j^- \rho - \rho \sigma_i^+ \sigma_j^-). \label{eq:Liouvillian_cd} \end{equation} The resonant dipole-dipole couplings $\Omega_{ij}$ and the collective decay rates $\Gamma_{ij}$ depend on the interatomic distances~\cite{lehmberg1970radiation,ficek2002entangled} and are given by \begin{equation}\label{eq:Omega_ij} \Omega_{ij} = - \frac{3 \Gamma}{4} \Big[(1-\cos^2 \Theta) \frac{\cos(k_a r_{ij})}{k_a r_{ij}} - (1-3 \cos^2\Theta) \left(\frac{\sin(k_a r_{ij})}{(k_a r_{ij})^2} + \frac{\cos(k_a r_{ij})}{(k_a r_{ij})^3}\right)\Big] , \end{equation} and \begin{equation}\label{eq:Gamma_ij} \Gamma_{ij} = \frac{3 \Gamma}{2} \Big[(1-\cos^2\Theta) \frac{\sin(k_a r_{ij})}{k_a r_{ij}} + (1-3 \cos^2\Theta) \left(\frac{\cos(k_a r_{ij})}{(k_a r_{ij})^2} - \frac{\sin(k_a r_{ij})}{(k_a r_{ij})^3}\right)\Big]. 
\end{equation} Here, $k\ts{a} = \omega\ts{a}/c$ is the wavenumber corresponding to the atomic transition frequency and $\Theta$ denotes the angle between the atomic dipoles and the distance vector between atom $i$ and atom $j$. For the time evolution of the classical variables we have for the velocity of the $i$th particle \begin{equation} \dot{r}_i = \frac{p_i}{m} = 2 \omega\ts{r} \frac{p_i}{\hbar k\ts{a}^2}, \label{ehrenfest_dr} \end{equation} and the force acting on a particle is (see Appendix~\ref{app:semi_class_approx} for details) \begin{equation} \dot{p}_i = - \hbar\partial_{r_i} \bigg[ g(r_i) \braket{a \sigma_{i}^{+} + a^{\dagger} \sigma_{i}^{-}} + \sum_{j:j \neq i} 2 \Omega_{ij}\Re{\braket{\sigma_{i}^{+}\sigma_{j}^{-}}} \bigg]. \label{ehrenfest_dp} \end{equation} Here, we defined $\omega\ts{r} := \hbar k\ts{a}^2 / (2m)$ as the recoil frequency, with $m$ the mass of an atom. Note that the above equations are only valid for sufficiently slow particles. This is because we include the time dependence of the collective dipole-dipole interactions via the time-dependent atomic positions only. Thus, Doppler shifts and other effects depending on the velocity, or higher-order derivatives of the position, are neglected in the dipole-dipole coupling. Furthermore, we note that forces stemming from the collective decay are neglected here (see Appendix~\ref{app:semi_class_approx}). Additionally, since we assume classical motion, the recoil from spontaneous emission is neglected in the kinetic energy. This is probably the most drastic approximation made here. \section{Cooling and trapping properties} \label{sec:cooling_and_trapping_properties} We investigate the stability of the system described above in the lasing regime by showing that the atoms are cooled and trapped within the cavity field potential created by the photons scattered from the inverted atoms. 
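For reference, the distance dependence of the two collective coefficients above can be checked numerically. The following Python sketch (a helper of ours, in units with $\Gamma = k\ts{a} = 1$ and illustrative arguments) implements the two expressions for dipoles perpendicular to the interatomic axis ($\cos\Theta = 0$, as for motion along the cavity axis) and recovers the expected limits $\Gamma_{ij} \to \Gamma$ for $k\ts{a} r_{ij} \to 0$ and $\Gamma_{ij} \to 0$ at large separation.

```python
import numpy as np

def dipole_coefficients(r_ij, k_a, Gamma, cos_theta=0.0):
    """Resonant dipole-dipole shift Omega_ij and collective decay rate
    Gamma_ij for two atoms at distance r_ij (free-space expressions)."""
    xi = k_a * r_ij
    a = 1.0 - cos_theta**2
    b = 1.0 - 3.0 * cos_theta**2
    Omega_ij = -0.75 * Gamma * (a * np.cos(xi) / xi
                                - b * (np.sin(xi) / xi**2 + np.cos(xi) / xi**3))
    Gamma_ij = 1.5 * Gamma * (a * np.sin(xi) / xi
                              + b * (np.cos(xi) / xi**2 - np.sin(xi) / xi**3))
    return Omega_ij, Gamma_ij

# Perpendicular dipoles (cos Theta = 0): for k_a*r_ij -> 0 the collective
# decay rate approaches the single-atom rate Gamma, while Omega_ij shows
# the near-field 1/(k_a r)^3 divergence.
Omega_close, Gamma_close = dipole_coefficients(1e-4, k_a=1.0, Gamma=1.0)
Omega_far, Gamma_far = dipole_coefficients(100.0, k_a=1.0, Gamma=1.0)
```

The $\propto (k\ts{a} r_{ij})^{-3}$ growth of $\Omega_{ij}$ at short distances is what makes the dipole-dipole forces relevant for closely spaced ensembles.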
Due to the exponential scaling of the Hilbert space dimension with the number of atoms, numerical methods are limited. Thus, we restrict ourselves to treating a sufficiently small system that still exhibits collective effects. We study a system with three atoms inside the cavity. The initial state is as follows. The atoms are placed $\lambda\ts{c}/2$ apart at the cavity field antinodes. Furthermore, the atoms are in the ground state and there are no photons inside the cavity. The set of initial momenta is picked from a normal distribution that depends on the recoil frequency. Namely, we always choose a normal distribution for the atomic momentum such that the average kinetic energy is constant with respect to $\omega\ts{r}$. In general, the kinetic energy of the $i$th particle is $p_i^2/(2m)$. Thus, if the recoil frequency is multiplied by an arbitrary constant $c$ (i.e. the mass is divided by $c$), we need to scale the momentum by $1/\sqrt{c}$ in order to keep the kinetic energy constant. The standard deviation $\bar{p}_0$ of the initial momentum distribution of the atoms is chosen depending on the choice of $\omega\ts{r}$ according to this relation. \begin{figure} \caption{\emph{Exemplary trajectories of three particles and their time-averaged kinetic energy loss.}} \label{fig:position_ab1_ab22_ekin_average} \end{figure} To analyze the trapping and cooling properties of the system we study the time evolution of the particle positions. In \fref{fig:position_ab1_ab22_ekin_average}(a) we show a case where the particles are cooled until they are cold enough to get trapped in the potential created by the cavity field. The particles distribute themselves relatively far from each other, which means that collective effects hardly play a role. As we aim to investigate collective effects as well, we restrict our calculations to particle trajectories that remain in their initially prescribed trap for the entire time evolution.
We call them completely stable trajectories, see for example \fref{fig:position_ab1_ab22_ekin_average}(b). We refer to Appendix~\ref{app:stability} for more details on the stability. The momentum transfer from particle 2 to particle 3, depicted in \fref{fig:position_ab1_ab22_ekin_average}(a), stems from the collective dipole-dipole effects. In order to quantitatively capture the cooling process, we study the time evolution of the particles' kinetic energy \begin{align} E\ts{kin}(t) = \sum_i \frac{p_i(t)^2}{2m}, \end{align} and average over $100$ thermally distributed initial momenta. However, we consider the completely stable trajectories only. As evident from \fref{fig:position_ab1_ab22_ekin_average}(c) (blue line) this kinetic energy of the stable trajectories oscillates very rapidly on the time scale of the cooling process and thus does not yield comparable results. Therefore, we introduce the time-averaged kinetic energy $\bar{E}\ts{kin}(t)$, which is obtained by taking the midpoints between two adjacent extrema of the kinetic energy as seen in \fref{fig:position_ab1_ab22_ekin_average}(c) (orange line). The parameter we use in order to characterize cooling or heating is \begin{equation} \bar{E}\ts{kin}^\mathrm{rel}(t) = \frac{\bar{E}\ts{kin}(t)}{\bar{E}\ts{kin}(0)}, \label{eq:e_kin_time-average} \end{equation} which we call time-averaged relative kinetic energy. We scan over the experimentally most accessible parameters using the procedure described above for three different $\omega\ts{r}$. The thermally distributed initial kinetic energy corresponds to a normal distribution of the initial momenta. As discussed above, in order to ensure that the particles start with the same average kinetic energy for all values of $\omega\ts{r}$ we scale the standard deviation $\bar{p}_0$ of the momentum distribution. 
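In units where $\hbar = k\ts{a} = \Gamma = 1$, this rescaling can be verified with a short Monte Carlo sketch (sample size and seed are arbitrary choices of ours): multiplying $\omega\ts{r}$ by $10$ while dividing $\bar{p}_0$ by $\sqrt{10}$ leaves the mean kinetic energy unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
hbar, k_a = 1.0, 1.0          # units with hbar = k_a = Gamma = 1

def mean_kinetic_energy(omega_r, p0_bar, n=200_000):
    """Sample mean of p^2/(2m) with m = hbar*k_a^2 / (2*omega_r)."""
    m = hbar * k_a**2 / (2.0 * omega_r)
    p = rng.normal(0.0, p0_bar, n)
    return np.mean(p**2) / (2.0 * m)

# Scaling omega_r by c while scaling p0_bar by 1/sqrt(c) keeps <E_kin> fixed:
e1 = mean_kinetic_energy(0.1, 2.0 * hbar * k_a)
e2 = mean_kinetic_energy(1.0, 2.0 / np.sqrt(10) * hbar * k_a)
e3 = mean_kinetic_energy(10.0, 0.2 * hbar * k_a)
```

All three averages agree (analytically, $\langle E\ts{kin}\rangle = \bar{p}_0^2\,\omega\ts{r}$ in these units), which is the invariance used for the parameter choices below.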
We choose $\bar{p}_0 = 2\hbar k\ts{a}$ for $\omega\ts{r} = 0.1\Gamma$, $\bar{p}_0 = 2/\sqrt{10} \hbar k\ts{a}$ for $\omega\ts{r} = 1\Gamma$ and $\bar{p}_0 = 2/10 \hbar k\ts{a}$ for $\omega\ts{r} = 10\Gamma$. \fref{fig:e_kin_rel_scan} shows the scan over $\Delta$ and $R$ as well as over $g$ and $R$ for all three values of $\omega\ts{r}$. Every value of $\bar{E}\ts{kin}^\mathrm{rel}(t)$ above $1.0$ corresponds to heating and is artificially fixed to $1.0$, as these are points of little interest. \begin{figure} \caption{\emph{Cycle-averaged relative kinetic energy change after an evolution time of 500 atomic lifetimes as a function of various operating parameters.}} \label{fig:e_kin_rel_scan} \end{figure} We observe that only trajectories with $\Delta > 0$ realize cooling, which corresponds to the expected blue detuning of the cavity mode with respect to the atoms. This is due to the fact that atoms inside a cavity favour the emission of photons near the cavity resonance frequency~\cite{ritsch2012cavitypotentials}. An atom in a blue detuned cavity emits photons at a frequency higher than its transition frequency. Therefore, the atom has to expend energy in order to emit a photon into the cavity, which it does by losing kinetic energy. The atoms feel an effective friction force that is largest at the points where the cavity field is maximal (high-field seeking). As can be seen from the scans, there is an optimum for the detuning where the cooling is maximal. This is similar to the maximal force in the process of Doppler cooling. The force is also proportional to the excited state population of the atoms. The cooling is thus best when the pump is sufficiently strong to keep the atoms inverted at almost all times. Furthermore, we can see that atoms with a larger $\omega\ts{r}$ reach a lower relative kinetic energy during a fixed cooling time. On the one hand, this means that lighter particles are cooled down faster.
On the other hand, their initially larger velocities make them more difficult to trap, i.e. more trajectories are unstable (see Appendix~\ref{app:stability}). Heavier atoms (smaller $\omega\ts{r}$) are easier to trap for the same initial kinetic energy (see \fref{fig:stability}), even though they do not cool as much during the observed time interval. Note the difference between cooling and trapping here: heavier atoms can still be trapped if they cool poorly, since even a larger kinetic energy in this case oftentimes corresponds to a relatively small velocity insufficient for the particles to climb the potential walls of the trapping potential created by the cavity field. Since they start with a lower initial velocity, however, the cooling is much slower. The inverse line of argument holds for lighter atoms: they are more difficult to trap, but if they are trapped the cooling is more efficient. Finally, as can be seen in Figs.~\ref{fig:e_kin_rel_scan}(d)-\ref{fig:e_kin_rel_scan}(f), the coupling to the cavity mode should not be too large in order for the system to cool the atoms. This can be explained by the growing probability of the atoms absorbing photons from the cavity, which causes heating. Hence, the coupling strength should always be well below the cavity loss rate, such that it is much more probable for a photon to leave the cavity than to be reabsorbed. Higher pump strengths can also counteract the heating. If the atoms are pumped strongly they are inverted at almost all times and thus cannot absorb an incident cavity photon. Note that the red areas in \fref{fig:e_kin_rel_scan} with $\Delta > 0$ are mainly caused by extremely slow initial atoms. In these cases the atoms do remain trapped, even though they are heated (note again the difference between cooling and trapping). They are initially so slow that the noise stemming from the cavity field causes heating inside their trap. 
If the atoms here started with a larger kinetic energy (temperature), they would indeed be cooled. However, they would then also be fast enough to leave their initial traps. At this point we would like to emphasize again that we describe the system in a semiclassical treatment and we neglect the recoil arising from spontaneous emission. Since the absolute values of the particles' momenta are around $\hbar k\ts{a}$, we need to view the cooling and trapping results critically, especially for $\omega\ts{r} = 1\Gamma$ and $\omega\ts{r} = 10\Gamma$. Note, though, that the recoil heating would only have an effect here because we formulate the superradiant laser regime in terms of a toy model. Specifically, the spontaneous emission rate is taken to be in the limit where $\Gamma\ll\kappa$. However, it is still chosen much larger than it would be for realistic clock atoms in order to avoid numerical difficulties due to different timescales. While we choose $\Gamma \sim 10^{-1}\kappa$, a more realistic choice would be $\Gamma\sim 10^{-6}\kappa$. In that case, a spontaneous emission event is so rare that the recoil can be safely neglected. Still, the fact that we do not include recoil effects for our choice of parameters may be viewed as a rather drastic simplification in our model. \subsection{Collective cooling effects} Let us now investigate the relevance of the collective effects for the cooling process. To this end, we set $\Omega_{ij} = 0$ and $\Gamma_{ij} = \delta_{ij} \Gamma$, and compare this independent cooling to the collective cooling from above. In order to observe collective effects we find that we need to extend the cooling time by orders of magnitude. In \fref{fig:coll_vs_indep_cooling} we see the time evolution of $\bar{E}\ts{kin}^{\mathrm{rel}}$ for collectively interacting atoms in comparison to independent ones.
\begin{figure} \caption{\emph{Comparison of motional cooling in time with and without direct dipole interaction.}} \label{fig:coll_vs_indep_cooling} \end{figure} The main result from \fref{fig:coll_vs_indep_cooling} is that independent atoms will always reach a lower final kinetic energy for long cooling times. In the collective case the atoms push or pull each other away from the cavity field antinodes and thus their displacement amplitude is larger. Therefore, their kinetic energy is larger on average. For $\omega\ts{r} = 0.1 \Gamma$, the collective line is slightly below the independent line until approximately $\Gamma t = 5000$. The reason for this is that parts of the kinetic energy are absorbed into the dipole-dipole interaction potential. The fact that there is a minimum below the final value in the collective case also stems from the dipole-dipole interaction. The minimal temperatures in the two collective cases shown in \fref{fig:coll_vs_indep_cooling} are reached at approximately the same time in units of $\omega\ts{r}$. As we mentioned in the beginning, we restrict our considerations to $N=3$ due to the exponential growth of the Hilbert space. Let us still comment on what one might expect for a larger number of atoms in terms of cooling. In~\cite{xu2016supercooling}, it has been shown that efficient cavity cooling can be achieved without direct dipole-dipole interactions between the atoms. Rather, the interactions there stem from the cavity-mediated dipole coupling. These findings in combination with the result shown in \fref{fig:coll_vs_indep_cooling} suggest that the limit imposed on the final kinetic energy by direct dipole-dipole coupling will be more pronounced for larger atom numbers. More precisely, the cooling without direct dipole-dipole interactions yields lower final kinetic energies with growing atom number~\cite{xu2016supercooling}.
The larger energy due to the displacement caused by direct dipole-dipole interactions should thus lead to an increasing difference from non-interacting atoms. \section{Laser properties} After having established that the system is stable for a given set of parameters, we proceed by analyzing its lasing properties. To this end we study the cavity spectrum, the average photon number as well as the second-order correlation function. Furthermore, we look at the atomic inversion. We use the density matrices at $\Gamma t = 500$, which describe a quasi-stationary final state, in order to calculate the properties mentioned above. \subsection{Laser spectrum} \begin{figure} \caption{\emph{Properties of the emitted laser light as a function of different system parameters.}} \label{fig:lasing_scan} \end{figure} The laser spectrum can be calculated as the Fourier transform of the first-order correlation function $g^{(1)}(\tau) = \braket{a^{\dagger}(t+\tau)a(t)}$. According to the Wiener-Khinchin theorem~\cite{puri_mathemmathicalmethods}, we have \begin{equation} S(\omega) = 2\Re { \int_{0} ^{\infty} \mathrm{d} \tau e^{-i \omega \tau} g^{(1)}(\tau)}. \label{eq:Wiener-Khinchin} \end{equation} Appendix~\ref{app:calc_spec} provides details on how the spectrum is calculated in our semiclassical approximation. Most of the spectra are well described by a Lorentzian distribution (see \fref{fig:spectrum_fit}). Thus, we determine the full width at half maximum (FWHM) $\gamma$ and the offset to the atomic resonance frequency $\delta_0$. The dependency of the linewidth $\gamma$ and the offset $\delta_0$ on our scan parameters is depicted in Figs.~\ref{fig:lasing_scan}(a)-\ref{fig:lasing_scan}(d). We show these plots for one choice of the recoil frequency only, namely $\omega\ts{r} = 1 \Gamma$, since they are qualitatively identical for the other two choices of $\omega\ts{r}$.
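As a consistency check of this procedure, one can apply the discretized Wiener-Khinchin integral to a synthetic coherence function $g^{(1)}(\tau) = e^{(i\delta_0 - \gamma/2)\tau}$, which must reproduce a Lorentzian of FWHM $\gamma$ centered at $\delta_0$. The Python sketch below (grids, parameter values and helper names are illustrative choices of ours, not the code used for the figures) does exactly this.

```python
import numpy as np

def spectrum(g1, tau, omegas):
    """S(omega) = 2 Re int_0^inf exp(-i*omega*tau) g1(tau) dtau,
    discretized as a Riemann sum on a finite tau grid."""
    dtau = tau[1] - tau[0]
    return np.array([2.0 * np.real(np.sum(np.exp(-1j * w * tau) * g1)) * dtau
                     for w in omegas])

def fwhm(omegas, S):
    """Full width at half maximum from the grid points above half height."""
    above = omegas[S >= S.max() / 2.0]
    return above[-1] - above[0]

delta0, gamma_L = 2.0, 0.5                 # illustrative offset and linewidth
tau = np.linspace(0.0, 100.0, 20_001)      # long enough that g1 has decayed
g1 = np.exp((1j * delta0 - 0.5 * gamma_L) * tau)
omegas = np.linspace(delta0 - 3.0, delta0 + 3.0, 1_201)
S = spectrum(g1, tau, omegas)
```

The recovered peak position and width match $\delta_0$ and $\gamma$ to within the grid resolution, which is the same extraction (peak offset plus FWHM) applied to the simulated spectra.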
The central observation from Figs.~\ref{fig:lasing_scan}(a)-\ref{fig:lasing_scan}(d) is that the laser offset $\delta_0$ is much smaller than the corresponding detuning $\Delta$ for all parameters. Mathematically, this means that the slope of the offset's dependency on the detuning (cavity pulling coefficient) is smaller than one. For a conventional laser in the good cavity regime the cavity pulling coefficient is approximately one. In our case it is roughly between $0.1$ and $0.2$, depending on the pump rate $R$. In addition, the linewidth $\gamma$ does not vary much with the detuning. The significance of these two features is that the spectrum of the superradiant laser depends less on cavity fluctuations than the spectrum of a normal laser, which is the expected behaviour. Between $\Delta = 5\Gamma$ and $\Delta = 15\Gamma$ the frequency offset grows as expected, but for larger detunings it appears to decrease. The reason for this is that too large detunings lead to the formation of a distinct second peak approximately at the cavity resonance frequency, as shown in \fref{fig:spectrum_2peak}. Therefore, in these cases we determine the frequency of the peak close to the atomic resonance only, whose offset almost vanishes. \begin{figure} \caption{\emph{Appearance of a second maximum in the spectrum for large atom-cavity detuning.}} \label{fig:spectrum_2peak} \end{figure} The incoherent drive effectively broadens the atomic transition, resulting in an actual linewidth of $R+\Gamma$. We hence plot the FWHM in units of this effective atomic linewidth in \fref{fig:lasing_scan}(c) and \fref{fig:lasing_scan}(d). One can see that there are areas where the laser linewidth is even lower than the effective atomic linewidth. This is the case for high pump strengths and small detunings. The smallest laser linewidth, however, is achieved for small pump rates. We can also see that, for all stable parameters, the laser linewidth is well below the cavity linewidth, $\gamma < 2 \kappa$.
The linewidth and the offset grow with increasing pump strength, which implies that the narrowest laser spectrum featuring a low frequency shift is achieved at small pump rates just above the lasing threshold. The atom-field coupling does not affect the offset, but the linewidth grows with it. We note that the lasing properties shown in \fref{fig:lasing_scan} are almost identical for the three different $\omega\ts{r}$, which indicates that the laser properties do not change dramatically compared to a laser with fixed particle positions as described in \cite{maier2014superradiant}. To further support this statement we calculate the spectrum in the same manner as before, but for fixed atomic positions ($r_1 = 0$, $r_2 = \lambda_c/2$ and $r_3 = \lambda_c$). Comparing the resulting spectrum to the one with moving atoms, we find an almost perfect overlap. Therefore, the atomic motion appears to merely change the effective atom-field coupling, which does not significantly alter the spectrum. \subsection{Photon number, second-order correlation and population inversion} Besides the spectrum we also calculate other characteristic quantities of a laser. Specifically, we compute the average intra-cavity photon number, \begin{equation} n = \braket{a^{\dagger} a}, \label{eq:photon_number} \end{equation} and the second-order correlation function at zero time delay, \begin{equation} g^{(2)}(0) = \frac{\braket{a^{\dagger}a^{\dagger}aa}}{\braket{a^{\dagger}a}^2}. \label{eq:2nd_order_cor} \end{equation} Finally, the population inversion of the atoms is relevant as well. The overall excitation is given by \begin{equation} p\ts{e} = \sum_{i=1}^N \braket{\sigma^+_i \sigma^-_i} \label{eq:population} \end{equation} where inversion is achieved if $p\ts{e}>N/2$. Figs.~\ref{fig:lasing_scan}(e)-\ref{fig:lasing_scan}(h) depict $n$ and $g^{(2)}(0)$ as functions of the scan parameters for $\omega\ts{r} = 1 \Gamma$. 
The most significant feature is that we always have less than half a photon on average inside the cavity. The figure also shows that the most photons are created and the field is most coherent ($g^{(2)}(0) = 1$) for small detunings, large pump strengths and large atom-field coupling. This behaviour coincides with that of a conventional (good-cavity) laser. The excited state population is always above $1.5$, and we note that the overall scan of the atomic excitation is similar to that obtained for a conventional laser. Comparing the cooling and the lasing scan, we find that the optimal lasing point does not coincide with the best cooling, specifically for the pump strength dependency. We therefore conclude that there is a certain trade-off between the optimal cooling and lasing regimes. \section{Conclusions} We have seen that even less than one average intra-cavity photon can be sufficient in order to accumulate excited state atoms dynamically at positions of maximal light coupling, i.e. at field mode antinodes, in the blue-detuned regime. For sufficient pumping one can thus achieve population inversion and gain, which subsequently leads to superradiant lasing. This behaviour is stable with respect to forces and heating induced by dipole-dipole interaction. The output spectrum of such a laser exhibits a very low sensitivity to cavity length fluctuations with a linewidth determined by the atomic linewidth broadened by the pump rate. We have obtained these results by means of a semiclassical model, in which we have treated the atomic states as well as the cavity field mode quantum mechanically, whereas the atomic motion has been described classically. Overall, for sufficiently slow atoms, the atomic motion only marginally affects the operating conditions and output characteristics of such a laser. In particular, its spectral and coherence properties remain almost unchanged as long as the photon number is low.
This is a promising result for the construction of a superradiant laser, where inverted atoms are moved through the cavity by an optical lattice conveyor belt. It seems that using as many atoms as possible with a weak pump and a large-bandwidth cavity is the optimal way to operate such a device. Note that we have used a rather generic, rate-based, spatially uniform pumping scheme. This should be refined and modeled in more detail for future considerations. \section*{Funding} This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 820404 (iqClock) (C.~H. and H.~R.) and from the Austrian Science Fund (FWF) through projects DK-ALM W1259-N27 (D.~P.) and P29318-N27 (L.~O.). \section*{Acknowledgments} The numerical simulations were performed with the open source framework QuantumOptics.jl~\cite{kramer2018quantumoptics}. \appendixtitleon \appendixtitletocon \begin{appendices} \section{Semiclassical master equation for dipole-dipole interacting atoms} \label{app:semi_class_approx} In the following we develop the semiclassical description of our model. The internal atomic degrees of freedom as well as the cavity mode will be described in a quantum mechanical sense, whereas the atomic motion will be written in terms of classical variables only. We start from a full quantum model describing the coupling of moving two-level atoms to a cavity mode as well as to a continuum of free space vacuum modes.
The Hamiltonian reads \begin{equation} \label{eq:total_hamiltonian_appendix} \begin{aligned} H\ts{tot} &= H_0 + \sum_i\hbar g(\hat{r}_i^{\text{c}})\left(a^\dag \sigma_i^- + \sigma_i^+a\right) + \sum_{\textbf{k},\lambda}\hbar\omega_k b_{\textbf{k},\lambda}^\dag b_{\textbf{k},\lambda} \\ &+ \sum_i \sum_{\textbf{k}, \lambda}\hbar g_{\textbf{k},\lambda}\left( b_{\textbf{k},\lambda}^\dag \sigma_i^- e^{-i\textbf{k}\cdot \hat{\textbf{r}}_i} + \text{H.c.}\right) + \sum_i \frac{\hat{\textbf{p}}_i^2}{2m}, \end{aligned} \end{equation} where the modes of the free-space vacuum are described by the bosonic creation and annihilation operators, $b_{\textbf{k},\lambda}^\dag$ and $b_{\textbf{k},\lambda}$, respectively. Each wavevector $\textbf{k}$ features two polarizations $\lambda=1,2$. The (generally 3D) motion of the atoms is accounted for by the position and momentum operators $\hat{\textbf{r}}_i$ and $\hat{\textbf{p}}_i$. The coupling to the cavity mode is determined by the component of the position along the cavity axis, $\hat{r}_i^{\text{c}}= \textbf{k}\ts{c}\cdot \hat{\textbf{r}}_i/k\ts{c}$. The free energy part of the cavity and the atoms is given by \begin{align} H_0 &:= \hbar\omega\ts{c}a^\dag a + \hbar \omega\ts{a}\sum_i \sigma_i^+\sigma_i^-. \end{align} Note that we make the so-called independent bath assumption for the atoms and the cavity, i.e., the cavity decay does not affect the coupling of the atoms to the environment. Since the cavity damping does not affect the motion of the atoms directly either, we will neglect it for now. The density operator $\rho\ts{tot}$, which describes the internal atomic dynamics as well as the motional degrees of freedom, the cavity mode and the 3D vacuum modes, is then governed by the von Neumann equation, \begin{align} \dot{\rho}\ts{tot} &= -\frac{i}{\hbar}[H\ts{tot},\rho\ts{tot}]. \end{align} Essentially, the semiclassical approximation consists of two assumptions.
First, we assume that there are no correlations (entanglement) between the motion and the remaining degrees of freedom. Secondly, we assume that the motion is classical, such that all expectation values factorize. The first assumption amounts to setting \begin{align} \label{eq:rho_tot} \rho\ts{tot}(t) &\approx \rho\ts{acf}(t)\otimes \rho\ts{m}(t). \end{align} On the one hand, the density operator $\rho\ts{acf}$ describes the state of the atomic excitation, the cavity, as well as the free-space vacuum modes. On the other hand, the motional degrees of freedom are given by $\rho\ts{m}(t)$. We now aim at finding an equation for the reduced system density operator $\rho\ts{acf}$. To this end, we take the partial trace, \begin{align} \dot{\rho}\ts{acf} &= -\frac{i}{\hbar}\text{tr}\ts{m}\left([H\ts{tot},\rho\ts{tot}]\right) = -\frac{i}{\hbar}[H\ts{acf},\rho\ts{acf}]. \end{align} Here, we have defined the reduced Hamiltonian \begin{equation} \begin{aligned} H\ts{acf} &:= H_0 + \sum_i \hbar g(r_i^{\text{c}}(t))\left(a^\dag\sigma_i^- + \sigma_i^+ a\right) + \sum_{\textbf{k},\lambda}\hbar\omega_k b_{\textbf{k},\lambda}^\dag b_{\textbf{k},\lambda} \\ &+ \sum_i \sum_{ \textbf{k},\lambda}\hbar g_{\textbf{k},\lambda}\left(b_{\textbf{k},\lambda}^\dag\sigma_i^- e^{-i\textbf{k}\cdot \textbf{r}_i(t)} + \text{H.c.}\right), \end{aligned} \end{equation} where we wrote \begin{align} \textbf{r}_i(t) &= \braket{\hat{\textbf{r}}_i}(t) = \text{tr}\left(\hat{\textbf{r}}_i\rho\ts{m}(t)\right). \end{align} Additionally, we made use of our second assumption of treating the motion classically, such that $\braket{f(\hat{\textbf{r}})}\approx f(\textbf{r})$ for any function $f$. The assumptions from above constitute our semiclassical approximation. We can now proceed by eliminating the field modes. This leads to the dipole-dipole interactions among the atoms in the form of coherent energy exchange as well as collective decay.
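As a concrete (and heavily simplified) illustration of the reduced dynamics, the following sketch builds $H\ts{acf}$ numerically for a single atom at a fixed classical position, with the vacuum modes dropped. This is our own toy example, not code used for the simulations in this paper, and the Fock truncation and parameter values are arbitrary choices; on resonance the single-excitation eigenvalues must lie at $\omega\ts{c}\pm g(r)$.

```python
import numpy as np

# Hedged sketch (not the paper's code): the reduced Hamiltonian H_acf for one
# atom at a *classical* position r, with the free-space vacuum modes dropped.
# N, omega_c, omega_a, g0, k, r are illustrative assumptions.
N = 6                                            # Fock-space truncation
omega_c = omega_a = 1.0                          # resonant cavity/atom frequencies
g0, k, r = 0.1, 2.0*np.pi, 0.0

a = np.diag(np.sqrt(np.arange(1, N)), 1)         # cavity annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # atomic lowering operator sigma^-
Ic, Ia = np.eye(N), np.eye(2)

g = g0*np.cos(k*r)                               # position-dependent coupling g(r)
H = (omega_c*np.kron(a.T @ a, Ia)                # cavity free energy
     + omega_a*np.kron(Ic, sm.T @ sm)            # atomic free energy
     + g*(np.kron(a.T, sm) + np.kron(a, sm.T)))  # Jaynes-Cummings coupling

evals = np.sort(np.linalg.eigvalsh(H))
# On resonance the single-excitation doublet lies at omega_c +/- g(r).
```

Varying $r$ in this sketch makes the splitting vanish at the field nodes, which is the position dependence underlying the trapping discussion in the main text.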
The only additional assumption needed to arrive at this is the Markov approximation for the atomic positions. The remaining steps are unchanged and yield~\cite{lehmberg1970radiation}, \begin{align} \dot{\rho} &= \text{tr}\ts{f}\left(\dot{\rho}\ts{acf}\right) = -\frac{i}{\hbar}[H,\rho] + \mathcal{L}\ts{cd}[\rho], \end{align} with $H$ the Hamiltonian from Eq.~\originaleqref{Hamiltonian}. The motion of the atoms, however, is still determined by $H\ts{tot}$ via \begin{equation} \dot{\textbf{p}}_i = \text{tr}(\hat{\textbf{p}}_i \dot{\rho}\ts{tot}) = - \frac{i}{\hbar} \text{tr}(\hat{\textbf{p}}_i [H\ts{tot}, \rho\ts{tot}]) \end{equation} and analogously for $\dot{\textbf{r}}_i$. While the velocity is given in Eq.~\originaleqref{ehrenfest_dr}, for the average force on the $i$th particle we have (in 1D) \begin{align} \dot{p}_i &= -\hbar \partial_{r_i} \sum_{j:j \neq i} \left(2\Omega_{ij}\Re{\braket{\sigma_i^+\sigma_j^-}} + \Gamma_{ij}\Im{\braket{\sigma_i^+\sigma_j^-}}\right). \end{align} Note that the term proportional to the collective decay does not contribute significantly, since in our system $\Im{\braket{\sigma_i^+\sigma_j^-}}\approx 0~\forall~i,j$ due to almost perfect phase invariance~\cite{meiser2009prospects}. \section{Calculation of the spectrum} \label{app:calc_spec} The spectrum is given by the Fourier transform of the correlation function \begin{align} g(\tau) &= \braket{a^\dag(t+\tau) a(t)}. \end{align} A common method to calculate this is to define a new density operator at time $t$ which is then evolved up to a time $t+\tau$. This is also known as the quantum regression theorem, and the essential steps are as follows. Let $H\ts{tot}$ be a Hamiltonian which describes the entire system and bath dynamics. The evolution of the system is then reversible and given by the unitary operator $U(t)=\exp(-iH\ts{tot}t/\hbar)$.
Thus, we can write the correlation function as \begin{align} g(\tau) = \text{tr}\left(U^\dag(t)U^\dag(\tau)a^\dag U(\tau) a U(t)\rho\ts{tot}(0)\right) = \text{tr}\left(a^\dag U(\tau) a \rho\ts{tot} U^\dag(\tau)\right), \end{align} where $\rho\ts{tot}$ is the total density operator. Upon defining a new density operator $\bar{\rho}\ts{tot}(0) := a \rho\ts{tot}(t)$, we may write \begin{align} g(\tau) = \text{tr}\left(a^\dag\bar{\rho}\ts{tot}(\tau)\right). \end{align} The time evolution of $\bar{\rho}\ts{tot}$ is given by the same unitary operator as before. Therefore, eliminating the bath leads to the same master equation, but for a new operator $\bar{\rho}=a\rho$. In this way it is possible to compute $g(\tau)$ from the reduced system density operator via the master equation. One has to be careful when deriving the semiclassical master equation for $\bar{\rho}$, though. In particular, the force on the atoms is proportional to average values of system variables such as $\braket{a^\dag\sigma_j}$. These have to be computed from the actual density operator $\rho$, rather than from $\bar{\rho}$. Assuming, as before, that there is no entanglement between the atomic motion and the remaining degrees of freedom [Eq.~\originaleqref{eq:rho_tot}] is equivalent to writing $U(t)\approx U\ts{acf}(t)\otimes U\ts{m}(t)$. We can hence write \begin{align} \label{eq:rho_bar_tau} \bar{\rho}\ts{tot}(\tau) = U\ts{acf}(\tau)a\rho\ts{acf}(t) U\ts{acf}^\dag(\tau) \otimes U\ts{m}(\tau)\rho\ts{m}(t)U\ts{m}^\dag(\tau) = \bar{\rho}\ts{acf}(\tau) \otimes \rho\ts{m}(t+\tau), \end{align} where in the second step we have implicitly defined $\bar{\rho}\ts{acf}(0) := a \rho\ts{acf}(t)$ and have used the fact that $a$ does not act on the motional degrees of freedom. It is then possible to obtain a semiclassical master equation for $\bar{\rho}$ by tracing out the motion as well as the vacuum modes. 
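As a minimal numerical illustration of this regression recipe (our own sketch on the simplest possible testbed, a single damped cavity mode rather than the full atom-cavity model; all parameter values are arbitrary), consider $\dot\rho = \kappa\left(a\rho a^\dag - \tfrac{1}{2}\{a^\dag a,\rho\}\right)$ with an initial coherent state $\alpha$, for which $\braket{a^\dag(t+\tau)a(t)} = |\alpha|^2 e^{-\kappa t}e^{-\kappa\tau/2}$ exactly. The two-step procedure (evolve $\rho$, multiply by $a$, evolve again with the same Liouvillian, trace against $a^\dag$) reproduces this:

```python
import math
import numpy as np
from scipy.linalg import expm

# Hedged sketch of the regression recipe for a single damped cavity mode.
N, kappa, alpha = 20, 1.0, 0.8                  # truncation and toy parameters
a = np.diag(np.sqrt(np.arange(1, N)), 1)        # annihilation operator (real matrix)
n_op = a.T @ a
I = np.eye(N)

# Liouvillian as a superoperator; column-stacking convention vec(AXB) = (B^T kron A) vec(X)
L = kappa*(np.kron(a, a) - 0.5*np.kron(I, n_op) - 0.5*np.kron(n_op, I))

# truncated coherent state |alpha>
psi = np.array([alpha**m/math.sqrt(math.factorial(m)) for m in range(N)])
psi *= math.exp(-abs(alpha)**2/2)
rho0 = np.outer(psi, psi)

vec = lambda X: X.flatten(order='F')
unvec = lambda v: v.reshape(N, N, order='F')

t, tau = 0.7, 0.5
rho_t = unvec(expm(L*t) @ vec(rho0))            # rho(t)
rho_bar = unvec(expm(L*tau) @ vec(a @ rho_t))   # rho_bar(0) = a rho(t), evolved with the SAME master equation
g_num = np.trace(a.T @ rho_bar)                 # g(tau) = tr(a^dag rho_bar(tau))
g_ana = abs(alpha)**2*math.exp(-kappa*t - kappa*tau/2)
```

For the actual laser model the extra complication discussed below remains: the classical positions entering the Liouvillian must be propagated with $\rho$, not with $\bar\rho$.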
However, as can be seen from Eq.~\originaleqref{eq:rho_bar_tau}, the motional degrees of freedom are still determined by $\rho\ts{m}(t+\tau)$. Thus, we first need to compute the motion up to time $t+\tau$ in a time evolution governed by the density operator $\rho$. Only then can we calculate the proper time evolution of $\bar{\rho}$ by using the previously calculated particle positions and obtain the correlation function $g(\tau)$. If the detuning between the cavity and the atoms is not too large, the cavity output spectrum can be well described by a Lorentzian distribution, see \fref{fig:spectrum_fit}. \begin{figure} \caption{\emph{Lorentzian fit of the normalized spectrum.}} \label{fig:spectrum_fit} \end{figure} \section{Stability} \label{app:stability} In \fref{fig:stability} we provide a scan of the number of completely stable trajectories, i.e.\ trajectories where the atoms stay in their initial trap for the whole time evolution. \begin{figure} \caption{\emph{Percentage of completely stable trajectories.}} \label{fig:stability} \end{figure} As mentioned in Sec.~\ref{sec:cooling_and_trapping_properties}, we see that lighter particles with the same kinetic energy are more difficult to trap. This is simply because they have a higher initial velocity, and therefore it is harder for the cavity field to keep them trapped. Also, it takes a certain amount of time for the cavity to build up a sufficiently strong field which can confine the atoms. If the atomic velocity is too large, an atom may leave its initial trap during this build-up time. \end{appendices} \end{document}
\begin{document} \maketitle \begin{abstract} The space $\mathscr B_{mT}[(m_j)_j,(n_j)_j]$ is a Bourgain-Delbaen space modelled on a mixed Tsirelson space $T[(m_j)_j,(n_j)_j]$ and is a slight modification of $\mathfrak B_{\text{mT}}[(m_j)_j,(n_j)_j]$, a space defined by S. Argyros and R. Haydon. We prove that in every infinite dimensional subspace of $\mathscr B_{mT}[(m_j)_j,(n_j)_j]$ there exists a basic sequence equivalent to a sequence of weighted basis averages of increasing length from $T[(m_j)_j,(n_j)_j]$. We remark that the same is true for the original space $\mathfrak B_{\text{mT}}[(m_j)_j,(n_j)_j]$. \end{abstract} \section{Introduction} In 1980, Bourgain and Delbaen \cite{BD} discovered a new general scheme of constructing separable $\mathscr L_\infty$-spaces, i.e. spaces of the form $\overline{\bigcup_{n\in\mathbb N} F_n}$, where $F_n \subset F_{n+1}$ and $F_n$ is $C$-isomorphic to $\ell_\infty^{k_n}$, for every $n\in\mathbb N$, some uniform constant $C$, and some sequence $(k_n)_n\subset\mathbb N$. Using this scheme they constructed two classes of new isomorphic preduals of $\ell_1$, which answered many open problems. Among others, they solved in the affirmative the problem of the existence of a predual of $\ell_1$ not containing $c_0$. The novelty of their method relied on the use of isomorphic copies of the finite dimensional spaces $\ell_\infty^n$ instead of isometric ones. A space constructed by this scheme is nowadays called a Bourgain-Delbaen space. During the last 10 years, the Bourgain-Delbaen scheme has attracted the attention of many researchers, who have used it to construct new spaces or prove general theorems.
To mention only a few: in \cite{AH} the authors construct a hereditarily indecomposable Banach space $\mathfrak X_{\text{AH}}$ with the scalar-plus-compact property, in \cite{FOS} the authors prove the universality of $\ell_1$ as a dual Banach space, and in \cite{AFHO} the authors prove that every separable uniformly convex space can be embedded into a Banach space with the scalar-plus-compact property. Remarkably, the scheme turned out to be the most general way of constructing separable $\mathscr L_\infty$-spaces, as every separable $\mathscr L_\infty$-space is isomorphic to a Bourgain-Delbaen space; this was proved in \cite{AGM}. The ambient space for the construction of $\mathfrak X_{\text{AH}}$ was the space $\mathfrak B_{\text{mT}}=\mathfrak B_{\text{mT}}[(m_j)_j,(n_j)_j]$, for some fixed sequences $(m_j)_j,(n_j)_j$ of natural numbers. It is a Bourgain-Delbaen space, which is modelled on a mixed Tsirelson space $T[(m_j)_j,(n_j)_j]$. Consequently, it is unconditionally saturated and contains neither $c_0$ nor $\ell_p,\ p\in[1,\infty)$. Lately, the space $\mathfrak B_{\text{mT}}$ was used in \cite{AM} and in \cite{MPS}. In \cite{AM} the authors construct an example of a hereditarily indecomposable $\mathscr L_\infty$-space $\mathfrak X_{\text{nr}}$ that contains no $c_0$, $\ell_1$, or reflexive subspaces and has the scalar-plus-compact property. In \cite{MPS} the authors construct an example of an $\mathscr L_\infty$-space $\mathfrak X_{\text{Kus}}$ with the scalar-plus-compact property, but with internal structure opposite to that of the spaces $\mathfrak X_{\text{AH}}$ and $\mathfrak X_{\text{nr}}$: the space $\mathfrak X_{\text{Kus}}$ is unconditionally saturated, whereas the spaces $\mathfrak X_{\text{AH}}$ and $\mathfrak X_{\text{nr}}$ are hereditarily indecomposable.
Both spaces $\mathfrak X_{\text{nr}}$ and $\mathfrak X_{\text{Kus}}$ are quotients of $\mathfrak B_{\text{mT}}$, constructed using the self-determined-sets technique introduced in \cite{AM}. R. Haydon proved in \cite{H} that one of the first examples constructed by Bourgain and Delbaen within their scheme, the space $X_{a,b}$ for $0<b<1/2<a<1$ and $a+b>1$, is saturated with $\ell_p$, where $p\in(1,\infty)$ is determined by the equations $1/p+1/q=1$ and $a^q+b^q=1$. In \cite{GPZ} the authors introduce spaces $\mathfrak X_{p}$ for $p\in(1,\infty)$ modelled on the Tsirelson space $T(\mathcal A_n,\overline b)$, for some fixed $n>1$ and a sequence $\overline b=(b_1,\dots,b_n)$ of positive real numbers satisfying $b_1<1$, $b_2,\dots,b_n<1/2$, and $\sum_{i=1}^n b_i^q=1$. The space $T(\mathcal A_n,\overline b)$ is isomorphic to $\ell_p$ and the space $\mathfrak X_{p}$ is saturated with $\ell_p$. Moreover, the authors notice that for $n=2$ their definition of the space $\mathfrak X_{p}$ essentially coincides with $X_{b_1,b_2}$. Continuing this line of research we prove the following theorem (see Section 4 for a more precise formulation). \begin{theorem:a} The space $\mathscr B_{mT}[(m_j)_j,(n_j)_j]$ is saturated with sequences of weighted basis averages of increasing length from the mixed Tsirelson space $T[(m_j)_j,(n_j)_j]$. \end{theorem:a} The space $\mathscr B_{mT}[(m_j)_j,(n_j)_j]$ is a modification of $\mathfrak B_{\text{mT}}[(m_j)_j,(n_j)_j]$. The modification is technical and was made in order to slightly reduce the notational complexity. Nevertheless, we make comments in the paper showing that Theorem \ref{thm:a} is true for $\mathfrak B_{\text{mT}}[(m_j)_j,(n_j)_j]$ as well. The paper is organized as follows. In Section 2 we recall basic facts and definitions, i.e. the mixed Tsirelson spaces, the space $\mathfrak B_{\text{mT}}$, different types of analyses of nodes, and rapidly increasing sequences. Section 3 contains lemmas used in the proof of the Main Theorem.
Lemmas \ref{split} and \ref{comp} may be of independent interest. Finally, in Section 4 we give a proof of Theorem \ref{thm:a}. This work is part of the author's Ph.D. thesis, which was written under the direction of Anna Pelczar-Barwacz. The author would like to thank her for her kind introduction to the field, valuable discussions and insightful remarks. \section{Basic Facts and Definitions.} \subsection{The mixed Tsirelson spaces.} These spaces originated in \cite{AD} and generalise the original Tsirelson space \cite{T} and the Schlumprecht space \cite{S} (see Remark \ref{mtex}). \begin{definition} Let $(\mathcal M_n)_{n\in\mathbb N}$ be a sequence of compact subsets of $\mathcal P(\mathbb N)$, $(\theta_n)_{n\in\mathbb N}$ be a sequence of positive reals, and $W=W[(\mathcal M_n,\theta_n)_n]$ be the smallest subset of $c_{00}$ satisfying: \begin{enumerate} \item $\pm e^*_i \in W$ for all $i\in \mathbb N$, \item for all $n,d\in \mathbb N$ and every block sequence $f_1<\cdots<f_d$ in $W[(\mathcal M_n,\theta_n)_n]$, if $(\min\supp f_i)_{i=1}^d\in \mathcal M_n$, then the combination $\theta_n\sum_{i\leq d}f_i$ is in $W[(\mathcal M_n,\theta_n)_n]$. \end{enumerate} Define $T[(\mathcal M_n,\theta_n)_n]$ to be the completion of $(c_{00}, \|\cdot\|_{W[(\mathcal M_n,\theta_n)_n]})$, where $\|x\|_{W[(\mathcal M_n,\theta_n)_n]} = \sup \{ f(x) \mid f\in W[(\mathcal M_n,\theta_n)_n]\}$, for all $x\in c_{00}$. \end{definition} \begin{remark}\label{mtex} Define $\mathcal A_n = \{F\subset \mathbb N\mid \# F = n\}$, for all $n\in\mathbb N$.
\begin{enumerate} \item If $\mathcal M_n=\mathcal M$, $\theta_n=\theta$ for every $n\in\mathbb N$ and some $\mathcal M, \theta$, then \begin{itemize} \item if $i(\mathcal M)<\omega$ and $\frac{1}{i(\mathcal M)}\geq \theta$, then $T[(\mathcal M_n,\theta_n)_n]\cong c_0$, \item if $i(\mathcal M)<\omega$ and $\frac{1}{i(\mathcal M)}< \theta$, then $T[(\mathcal M_n,\theta_n)_n]\cong \ell_p$ for some $p\in (1,\infty)$, \end{itemize} \item If $\mathcal M_n = \mathcal S=\{\,F\subset \mathbb N\mid \#F\leq \min F\,\}$ and $\theta_n = \frac{1}{2}$, for every $n\in\mathbb N$, then $T[(\mathcal M_n,\theta_n)_n]$ is the Tsirelson space, \item if $\mathcal M_n = \mathcal A_n$ and $\theta_n = \frac{1}{\log(n+1)}$, for every $n\in\mathbb N$, then $T[(\mathcal M_n,\theta_n)_n]$ is the Schlumprecht space. \end{enumerate} \end{remark} The following theorem describes the basic properties of the mixed Tsirelson spaces needed in the sequel. \begin{theorem}[Theorem I.10 \cite{AT}]\label{basic-mt} The standard vectors $(e_n)_{n\in\mathbb N}\subset c_{00}$ form a $1$-unconditional basis for $T[(\mathcal M_n,\theta_n)_n]$, and if for some $n\in\mathbb N$ either the Cantor–Bendixson index satisfies $i(\mathcal M_n)\geq\omega$, or $i(\mathcal M_n)=r<\omega$ and $\theta_n > \frac{1}{r}$, then the space $T[(\mathcal M_n,\theta_n)_n]$ is reflexive. Moreover, if the first alternative holds, or the second alternative holds for some increasing sequence $(r_{n_k})_{k\in\mathbb N}\subset \mathbb N$, then the space $T[(\mathcal M_n,\theta_n)_n]$ does not contain any of the spaces $c_0,\ \ell_p$, for $p\in [1,\infty)$. \end{theorem} A tree analysis of a functional from $W$ is the main tool for bounding the norm of a vector in a mixed Tsirelson space. \begin{definition}\label{tree-analysis-mt} Let $f\in W[(\mathcal M_n,\theta_n)_n]$, let $\mathcal T$ be a tree with root $\emptyset$, and let $S_t$ denote the set of immediate successors of $t$ in $\mathcal T$, for all $t\in \mathcal T$.
We call a sequence $(f_t)_{t\in\mathcal T}$ a tree analysis of $f$ if \begin{itemize} \item $f_\emptyset=f$, \item if $t\in \mathcal T$ is a terminal node, then $f_t=\pm e^*_k$ for some $k\in\mathbb N$, \item if $t\in \mathcal T$ is a non-terminal node, then $f_t=\theta_n\sum_{s\in S_t} f_s$ for some $n\in\mathbb N$ and some block sequence $(f_s)_{s\in S_t} \subset W[(\mathcal M_n,\theta_n)_n]$ with $(\min\supp f_s)_{s\in S_t} \in \mathcal M_n$. \end{itemize} \end{definition} As a consequence of the minimality of $W$ we obtain \begin{proposition} Every $f\in W[(\mathcal M_n,\theta_n)_n]$ admits a tree analysis. \end{proposition} For the rest of this paper we fix sequences $(m_j)_{j \in \mathbb N}$, $(n_j)_{j \in \mathbb N}$ of natural numbers. We need certain growth conditions for them. \begin{assumption} We assume that \begin{enumerate} \item $m_1 = n_1 = 4$, \item $m_{j+1} \geq m_j^2$, \item $n_{j+1} \geq m_{j+1}^2(4n_j)^{\log_2 m_{j+1}}$. \end{enumerate} \end{assumption} \begin{notation} We will write $T=T[(m_j)_j,(n_j)_j]=T[(\mathcal A_{n_j},m_j^{-1})_j]$. \end{notation} By Theorem \ref{basic-mt} the space $T$ is reflexive and the standard vectors $(e_n)_{n\in\mathbb N}\subset c_{00}$ form an unconditional basis for it. \begin{proposition}[Lemma II.9 \cite{AT}] \label{baver-norm} Let $(z_i)_{i=1}^{n_j}$ be a subsequence of the standard basis $(e_n)_{n\in\mathbb N}$ in $T$. Then \[ \|\frac{m_j}{n_j}\sum_{i=1}^{n_j} z_i\| = 1. \] \end{proposition} \subsection{The space $\mathscr B_{mT}$} We define the space $\mathscr B_{mT}$ below. It is a slight modification of the space $\mathfrak B_{\text{mT}}$. The construction is a special case of a general scheme for constructing separable $\mathscr L_\infty$-spaces, which was defined for the first time in \cite{BD}. We follow the slightly modified version of the construction from \cite{AH}, where the space $\mathfrak B_{\text{mT}}$ itself was defined.
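Before turning to the construction, we note that the implicitly defined norm of a space $T[(\mathcal A_n,\theta_n)_n]$ can be evaluated exactly on finitely supported vectors: a functional $\theta\sum_{i\leq d}f_i$ splits the relevant part of the support into successive nonempty blocks, which strictly shrink, so a plain recursion terminates (functionals may also act outside the support of $x$, which is why fewer than $n$ blocks can meet it). The following sketch is our own illustration with toy parameters, not the sequences $(m_j)_j,(n_j)_j$ fixed above; the checks use the $\ell_p$ identification of Remark \ref{mtex} and an average as in Proposition \ref{baver-norm}:

```python
from functools import lru_cache
from itertools import combinations

# Hedged sketch (not from the paper): exact evaluation of the implicit norm of
# T[(A_n, theta_n)_n] on a finitely supported vector with nonnegative entries.
# params is a list of pairs (n, theta); a norming functional theta*sum f_i
# uses n successive functionals, of which at most min(n, d) meet supp(x).
def mixed_tsirelson_norm(x, params):
    @lru_cache(maxsize=None)
    def nrm(coords):
        d = len(coords)
        best = max(abs(c) for c in coords)            # functionals +/- e_i^*
        for n, theta in params:
            for parts in range(2, min(n, d) + 1):     # successive nonempty blocks
                for cuts in combinations(range(1, d), parts - 1):
                    bounds = (0,) + cuts + (d,)
                    val = theta*sum(nrm(coords[i:j])
                                    for i, j in zip(bounds, bounds[1:]))
                    best = max(best, val)
        return best
    return nrm(tuple(x))

# Single pair (2, theta) with theta > 1/2: the space is l_p, 2*theta**q = 1,
# 1/p + 1/q = 1, so (1,...,1) of length 2**k has norm (2*theta)**k.
print(mixed_tsirelson_norm((1.0,)*8, [(2, 0.7)]))
# A weighted basis average (m_1/n_1) * sum of n_1 = 4 basis vectors with
# m_1 = 4, i.e. theta = 1/4, has norm 1.
print(mixed_tsirelson_norm((1.0,)*4, [(4, 0.25)]))
```

The exponential number of block partitions limits this brute-force evaluation to very small supports, but it suffices to experiment with the identifications above.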
Fix an increasing sequence $(N_q)_{q\in\mathbb N}$ of natural numbers. We define inductively a sequence of disjoint finite sets $(\Delta_q)_{q\in\mathbb N}$. Let $\Delta_1=\{1\}$. Assume that the sets $\Delta_1,\dots,\Delta_q$ have been defined. Set $\Gamma_0 = \emptyset$, $\Gamma_p = \bigcup_{r=1}^p \Delta_r$, $p\leq q$, and \begin{align*} \Delta_{q+1} & = \bigcup_{j=1}^{q+1} \{ (q+1, 0, m_j, \varepsilon e^*_\eta) \mid \varepsilon=\pm 1,\ \eta \in \Gamma_q \} \cup \bigcup_{1\leq p<q} \bigcup_{j=1}^{p} \{ (q+1, \xi, m_j, \varepsilon e^*_\eta) \mid \\ & \quad\quad\quad\quad\quad\quad\xi\in \Delta_p,\ \w(\xi)=m_j^{-1}, \age(\xi)<n_j, \varepsilon=\pm 1,\ \eta \in \Gamma_q\setminus\Gamma_p \}, \end{align*} where $\w(q, \xi, m_j, \varepsilon e^*_\eta) = m_j^{-1}$, $\age(q, 0, m_j, \varepsilon e^*_\eta)=1$, and $\age(q, \xi, m_j, \varepsilon e^*_\eta) = \age(\xi)+1$. For a node $\gamma=(q+1, \xi, m_j, \varepsilon e^*_\eta)$ we define $\rank(\gamma) = q+1$. Let $\Gamma = \bigcup_{q\in\mathbb N}\Delta_q$, and for every $\gamma\in\Gamma$ we define \[ c^*_\gamma = \begin{cases} m_j^{-1}P^*_{(p,q]} \varepsilon e^*_\eta, &\text{ for } \gamma=(q+1, 0, m_j, \varepsilon e^*_\eta), \\ e^*_\xi+m_j^{-1} P^*_{(p,q]}\varepsilon e^*_\eta, &\text{ for } \gamma=(q+1, \xi, m_j, \varepsilon e^*_\eta), \end{cases} \] and $d^*_\gamma = e^*_\gamma - c^*_\gamma$, where $P^*_{(p,q]}$ is the projection onto $\langle d^*_\gamma \mid \rank(\gamma)\in(p,q]\rangle$. \begin{remark} In \cite{AH} the authors use projections of the form $P^*_{(p,\infty)}$. Notice that \[ P^*_{(p,\infty)} \restriction \ell_1(\Gamma_q) = P^*_{(p,q]} \restriction \ell_1(\Gamma_q). \] \end{remark} By Theorem 3.5 of \cite{AH} the sequence $(d^*_\gamma)_{\gamma\in\Gamma}$ is a basis for $\ell_1(\Gamma)$ and we take $(d_\gamma)_{\gamma\in\Gamma}\subset \ell_\infty(\Gamma)$ to be the biorthogonal sequence to $(d^*_\gamma)_{\gamma \in \Gamma}$.
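The recursive definition of the sets $\Delta_q$ above is easy to enumerate for toy parameters. The following sketch is our own illustration, not part of the construction: nodes are stored as plain tuples, weights as indices $j=0,1,\dots$ standing for $m_1,m_2,\dots$, and the sequence $(N_q)$ (which in \cite{AH} controls the size of a net) plays no role in this simplified variant:

```python
from itertools import product

# Hedged toy enumeration (not the actual space): count the nodes of each rank.
m_len = 4                        # weights m_1, ..., m_4 available
n = [4, 10**6, 10**9, 10**12]    # only n_1 = 4 is small enough to matter here

def build(Q):
    root = (1,)
    Delta = {1: [root]}
    Gamma = [root]               # Gamma_q as a growing list
    marks = {1: 1}               # |Gamma_q| after each rank q
    w, age = {}, {}
    for q in range(1, Q):
        new = []
        # "age-1" nodes (q+1, 0, m_j, eps e*_eta) with eta in Gamma_q
        for j, eps, eta in product(range(min(q + 1, m_len)), (1, -1), tuple(Gamma)):
            node = (q + 1, 0, j, eps, eta)
            w[node], age[node] = j, 1
            new.append(node)
        # extension nodes (q+1, xi, m_j, eps e*_eta), xi in Delta_p, eta in Gamma_q \ Gamma_p
        for p in range(1, q):
            etas = Gamma[marks[p]:]
            for j in range(min(p, m_len)):
                for xi in Delta[p]:
                    if w.get(xi) == j and age[xi] < n[j]:
                        for eps, eta in product((1, -1), etas):
                            node = (q + 1, xi, j, eps, eta)
                            w[node], age[node] = j, age[xi] + 1
                            new.append(node)
        Delta[q + 1] = new
        Gamma.extend(new)
        marks[q + 1] = len(Gamma)
    return Delta

Delta = build(4)
print([len(Delta[q]) for q in sorted(Delta)])   # rapid growth of the ranks
```

Already at rank $4$ the count is in the hundreds, which gives a feeling for the size of the sets $\Gamma_q$ on which $\ell_1(\Gamma_q)\simeq\ell_\infty^{|\Gamma_q|}$-type arguments operate.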
\begin{definition} We define \[ \mathscr B_{mT} = \overline{\langle d_\gamma \mid \gamma \in \Gamma \rangle} \subset \ell_\infty(\Gamma). \] \end{definition} \begin{remark} The difference between our definition of the space $\mathscr B_{mT}$ and the definitions of the space $\mathfrak B_{\text{mT}}$ from \cite{AH} and \cite{AM} is in the form of the nodes. Indeed, we allow nodes of the form $(q+1, \xi, m_j^{-1}, \varepsilon e^*_\eta)$, whereas they allow $(q+1, \xi, m_j^{-1}, b^*)$, where $b^*$ is from a finite net in the unit ball of $\ell_1(\Gamma_q\setminus\Gamma_p)$. We decided to present the simpler version so as not to further complicate already quite technical proofs, but our results are still true for $\mathfrak B_{\text{mT}}$, the version from \cite{AH} and \cite{AM}. To obtain a full proof for $\mathfrak B_{\text{mT}}$ we would have to consider convex combinations of functionals $\pm e^*_\eta$ in the definition of the tree-analysis, which would add an additional level of notational complexity to the following proofs. On the other hand, it is easily seen that convex combinations do not break the correctness of the proofs. \end{remark} \begin{notation} For a functional $h\in \langle d^*_\gamma \mid \gamma \in \Gamma \rangle$ we define $\rng h = [p,q]$, where $p,q$ are respectively maximal, minimal with $h \in \langle d^*_\gamma \mid \rank(\gamma) \in [p,q] \rangle$. For a block $ x \in \langle d_\gamma \mid \gamma \in \Gamma \rangle$ we define $\rng x = [p,q]$, where $p,q$ are respectively maximal, minimal with $x \in \langle d_\gamma \mid \rank(\gamma) \in [p,q] \rangle$. \end{notation} The space $\mathscr B_{mT}$ is an $\mathscr L_\infty$-space with dual isomorphic to $\ell_1$, and it is saturated by reflexive subspaces with an unconditional basis \cite{AH}. We introduce different types of analysis of evaluation functionals following \cite{AH} and \cite{GPZ}, adjusting their scheme to our situation.
\subsection{Different types of analyses of evaluation functionals} \ \\ \textbf{The evaluation analysis of $e_{\gamma}^*$}. First, we notice that every $\gamma\in\Gamma$ admits a unique analysis as follows (Prop. 4.6 \cite{AH}). Let $\w(\gamma)=m_{j}^{-1}$. Then using backward induction we determine a sequence $(I_i,\varepsilon_i e_{\eta_i}^*,\xi_i)_{i=1}^{a}$ so that $\xi_a=\gamma$, $\xi_1=(\max I_1+1,0,m_j, \varepsilon_1 e_{\eta_1}^*),$ $\xi_i=(\max I_i+1,\xi_{i-1},m_j, \varepsilon_i e^*_{\eta_i}),$ and $\max I_{i-1}+2 = \min I_i$ for every $1<i\leq a$. Repeating the reasoning of \cite{AH}, as $e^*_{\xi}=d^*_{\xi}+c^*_{\xi}$ for each $\xi\in\Gamma$, with the above notation we have \[ e^*_{\gamma}= \sum_{i=1}^{a}d^*_{\xi_{i}}+m_{j}^{-1}\sum_{i=1}^{a}\varepsilon_{i} P^*_{I_i}e^*_{\eta_{i}}. \] \begin{definition} Let $\gamma\in\Gamma$. Then the sequence $(I_i,\varepsilon_i e_{\eta_i}^*, \xi_i)_{i=1}^{a}$ satisfying all the above properties will be called the evaluation analysis of $\gamma$. We define the bd-part and mt-part of $e^*_\gamma$ as \[ \bdp(e^*_\gamma)=\sum_{i=1}^{a} d_{\xi_i}^*, \ \mt(e^*_\gamma)=m_{j}^{-1}\sum_{i=1}^{a}\varepsilon_{i} P^*_{I_i}e^*_{\eta_{i}}. \] \end{definition} \begin{remark}\label{mt-part} Fix $(\eta_s)_{s=1}^a\subset\Gamma$, $(I_s)_{s=1}^a$ a sequence of intervals of natural numbers with $\max I_{s-1}+2 = \min I_s$, and $(\varepsilon_s)_{s=1}^a\subset \{\pm 1\}$, with $a\leq n_j$, $j\leq q_1$, $\eta_s\in\Gamma_{\max I_s}\setminus \Gamma_{\min I_s-1}$, $s=1,\dots,a$. Then the formulae $\xi_1=(\max I_1+1,0,m_j, \varepsilon_1 e^{*}_{\eta_1})$ and $\xi_s=(\max I_s+1,\xi_{s-1},m_j, \varepsilon_s e^*_{\eta_s})$, for any $1<s\leq a$, give well-defined nodes. \end{remark} \textbf{The $I$(interval)-analysis of a functional $e_{\gamma}^*$}. Fix $I\subset\mathbb N$ and $\gamma\in\Gamma$. Let $\w(\gamma)=m_{j}^{-1}$, $a\leq n_{j}$ and let $(I_i,\varepsilon_i e_{\eta_i}^*, \xi_i)_{i=1}^{a}$ be the evaluation analysis of $\gamma$.
We define the $I$-analysis of $e_{\gamma}^*$ as follows: \begin{enumerate} \item[(a)] If for at least one $i$ we have $P^*_{I_i\cap I}e^*_{\eta_i}\neq 0$, then the $I$-analysis of $e_{\gamma}^*$ is of the following form \[ (I_i\cap I,\varepsilon_i e^*_{\eta_i},\xi_i)_{i\in A_I}, \] where $A_I=\{i \mid \ P^*_{I_i\cap I}e^*_{\eta_i}\neq 0\}$. In this case we say that $e_{\gamma}^*$ is $I$-decomposable. \item[(b)] If $P^*_{I_i\cap I}e^*_{\eta_i}=0$ for all $i=1,\dots,a$, then we assign no $I$-analysis to $e_{\gamma}^*$ and we say that $e_{\gamma}^*$ is $I$-indecomposable. \end{enumerate} Now we introduce the tree-analysis of $e_{\gamma}^*$ analogous to the tree-analysis of a functional in a mixed Tsirelson space (see \cite{AT} Chapter II.1). We start with some notation. We denote by $(\mathcal T,\preceq)$ a finite tree, whose elements are finite sequences of natural numbers ordered by the initial segment partial order. Given $t\in\mathcal T$ denote by $S_t$ the set of immediate successors of $t$. Let $(I_t)_{t\in\mathcal T}$ be a tree of intervals of $\mathbb N$ such that $t\preceq s$ iff $I_t\supset I_s$ and $t,s$ are incomparable iff $I_t\cap I_s=\emptyset$. \textbf{The tree-analysis of a functional $e_{\gamma}^*$}. Let $\gamma\in\Gamma$. The tree-analysis of $e_{\gamma}^*$ is a family of the form $(I_t,\varepsilon_t,\eta_t )_{t\in\mathcal T}$ defined inductively in the following way: \begin{enumerate} \item $\mathcal T$ is a finite tree with a unique root denoted by $\emptyset$. \item Set $\eta_{\emptyset}=\gamma$, $I_{\emptyset}=(0,\rank\gamma)$, $\varepsilon_\emptyset=1$ and let $(I_i,\varepsilon_i e_{\eta_i}^*, \xi_i)_{i=1}^{a}$ be the evaluation analysis of $e^*_{\eta_{\emptyset}}$. Set $S_{\emptyset}=\{(1),(2),\ldots,(a)\}$ and for every $s=(i)\in S_{\emptyset}$ define $(I_s,\varepsilon_s,\eta_s)=(I_i,\varepsilon_i,\eta_i)$. \item Assume that for $t\in\mathcal T$ the tuple $(I_t,\varepsilon_t,\eta_t)$ is defined. 
Let $(I_i,\varepsilon_i e^*_{\eta_i}, \xi_i)_i$ be the evaluation analysis of $e_{\eta_t}^*$. Consider two cases: \begin{enumerate} \item If $e_{\eta_t}^*$ is $I_t$-decomposable, let $(I_i,\varepsilon_i e^*_{\eta_i}, \xi_i)_{i\in A_{I_t}}$ be the $I_t$-analysis of $e_{\eta_t}^*$. Set $S_t=\{(t^\smallfrown i): i\in A_{I_t}\}$. For every $s=(t^\smallfrown i)\in S_t$, let $(I_s,\varepsilon_s,\eta_s)=(I_i,\varepsilon_i,\eta_i)$. \item If $e_{\eta_t}^*$ is $I_t$-indecomposable, then $t$ is a terminal node of the tree-analysis. \end{enumerate} \begin{remark} For every $\gamma\in\Gamma$ its tree analysis $(I_t,\varepsilon_t,\eta_t )_{t\in\mathcal T}$ is uniquely defined. This is in contrast to the mixed-Tsirelson case. Moreover, for every $t_0\in\mathcal T$ the tree analysis of $e^*_{\eta_{t_0}}$ restricted to $I_{t_0}$ is $(I_t,\varepsilon_t,\eta_t )_{t\in\mathcal T,\, t\succeq t_0}$. \end{remark} \begin{notation} Given $\gamma \in\Gamma$ with evaluation analysis $(I_i,\varepsilon_i e_{\eta_i}^*, \xi_i)_{i=1}^{a}$ and tree analysis $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$, and given a subset $\mathcal T'\subseteq \mathcal T$, we write \begin{enumerate} \item $m_t = m_{i_0}$, $n_t=n_{i_0}$, where $i_0\in\mathbb N$ is such that $m_{i_0}=\w(\eta_t)^{-1}$, for all $t\in \mathcal T$, \item $|\bdp|(e^*_\gamma)=\sum_{i=1}^{a} |d_{\xi_i}^*|$, \item $|\bdp|(e^*_\gamma, \mathcal T') = \sum_{t \in \mathcal T'} |\bdp|(e^*_{\eta_t})\circ P_{I_t}$. \end{enumerate} \end{notation} The following notion is very helpful in comparing the ranges of the nodes in the tree analysis of a given node with the ranges of a given block sequence. \begin{definition} Let $(x_k)_k \subset \mathscr B_{mT}$ be a block sequence, $\gamma\in\Gamma$, and let $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$ be the tree analysis of $e^*_\gamma$.
\begin{enumerate} \item We say that $e^*_{\eta_t}P_{I_t}$ covers $x_k$ if $t\in\mathcal T$ is maximal in $\mathcal T$ with $\rng(x_k)\cap\rng(e^*_\gamma) \subset \rng(e^*_{\eta_t}P_{I_t})$. We call $t$ the covering index for $x_k$. \item We say that a finite sequence $(A_l)_{l=1}^i$ of finite intervals of $\mathbb N$ is comparable with the sequence $(x_k)_k$ if for all $l,k$ \[ A_l\subseteq \rng x_k \text{ or } \supp x_k \cap \bigcup_{r=1}^iA_r \subseteq A_l \text{ or } A_l \cap \rng x_k = \emptyset. \] \item We say that $e^*_\gamma$ is comparable with $(x_k)_k$ if the sequence $(\rng e^*_{\eta_t}P_{I_t})_{t\in\mathcal T}$ is comparable with $(x_k)_k$. \end{enumerate} \end{definition} \subsection{Rapidly Increasing Sequences} Vectors of the following form are the building blocks for the target sequences of interest, i.e. RISes and two-level RISes. \begin{definition} Let $X$ be a Banach space with a basis, $x \in X$, and $C \geq 1$. We call $x$ a $C$-$\ell^i_1$ \emph{average} if there is a block sequence $(x_l)_{l=1}^i$ of vectors from $X$ with norms uniformly bounded from above by $C$ and such that $x=\frac{1}{i}\sum_{l=1}^i x_l$. \end{definition} \begin{lemma}[Lemmas 8.2 and 8.3 \cite{AH}]\label{average-bound} Let $C\geq 1$, $i\in \mathbb N$. Then \begin{enumerate} \item there exists a $C$-$\ell^i_1$ average in every infinite dimensional subspace of $\mathscr B_{mT}$, \item for every $\gamma \in \Gamma$ and every $C$-$\ell^i_1$ average $x\in\mathscr B_{mT}$ we have $|d^*_\gamma(x)| \leq \frac{3C}{i}$. \end{enumerate} \end{lemma} \begin{lemma}[Lemma II.23 \cite{AT}]\label{norm-proj} Let $C\geq 1$, $j\leq i$, let $(E_l)_{l=1}^j$ be a sequence of pairwise disjoint intervals and let $x$ be a $C$-$\ell_1^i$ average in some Banach space. Then \[ \sum_{l=1}^j \|P_{E_l}x\| \leq (1+\frac{2j}{i})\sup_{1\leq l\leq j}\|P_{E_l}x\| .
\] \end{lemma} \begin{remark}\label{aveproj} It is easy to see that the above Lemmas \ref{average-bound} and \ref{norm-proj} are true for projections of averages on intervals of positive integers. \end{remark} Now we define the main objects of interest. The crucial property they enjoy is that they behave like the basis of $\mathscr B_{mT}$ but can be found in every subspace. \begin{definition}\label{RIS} Let $I\subseteq\mathbb N$ be an interval and $(x_i)_{i\in I}$ be a block sequence in $\mathscr B_{mT}$. We call it a \textit{rapidly increasing sequence} (RIS) if there exist a constant $C\geq 1$ and an increasing sequence $(j_i)_{i\in I}$ (growth index) such that for all $i$ we have \begin{enumerate} \item $\|x_i\| \leq C$, \item $j_{i+1} > \max \rng x_i$, \item $|x_i(\gamma)| \leq Cm_\gamma^{-1}$ for all $\gamma \in \Gamma$ with $m_\gamma < m_{j_i}$. \end{enumerate} \end{definition} In what comes later we need additional structure in RIS, namely \begin{definition} Let $I\subseteq\mathbb N$ be an interval. We call a sequence $(y_i)_{i\in I}\subset \mathscr B_{mT}$ \textit{a two-level $C$-RIS} if $(y_i)_{i\in I}$ is a normalised $C$-RIS such that for each $i\in I$ there are a sequence $n_{j_i}<n_{j_{i,1}}<\cdots<n_{j_{i,n_{j_i}}}$ of positive integers and a normalised $C$-RIS $(y_{i,j})_{j=1}^{n_{j_i}}$ such that $y_i = \frac{c_im_{j_i}}{n_{j_i}} \sum_{j=1}^{n_{j_i}}y_{i,j}$, where $y_{i,j}$ is a normalised $C-\ell_1^{n_{j_{i,j}}}$ average with $\max \rng y_{i,j} + j_{i,j} <\min\rng y_{i,j+1}$, for $j=1,\dots,n_{j_i}$, and $c_i>0$ is some normalising constant. \end{definition} \begin{proposition}[Prop. 5.6 \cite{AH}]\label{basic-ineq} Let $C\geq 1$ and let $(x_i)_{i=1}^{n_j}$ be a $C$-RIS in $\mathscr B_{mT}$. Then \[ \| \frac{m_j}{n_j} \sum_{i=1}^{n_j} x_i \| \leq 10C. \] Moreover, for all $\gamma \in \Gamma$ with $n_\gamma > n_j$ \[ |\frac{m_j}{n_j}\sum_{i=1}^{n_j} x_i(\gamma)| \leq \frac{10C}{m_j}. 
\] \end{proposition} \begin{corollary}\label{cis-bounds} For the constants $c_i$ in the definition of a two-level RIS we have $\frac{1}{10C} \leq c_i \leq 1$. \end{corollary} \begin{lemma}[Corollary 8.5 \cite{AH}] \label{er} Let $C>2$. In every infinite dimensional subspace of $\mathscr B_{mT}$ there exists a $C$-RIS $(x_i)_i$ of arbitrary length, arbitrary growth index $(j_i)_i$, and such that for every $i$ the vector $x_i$ is an $\ell^{n_{j_i}}_1$-average. \end{lemma} Lemma \ref{er} and Proposition \ref{basic-ineq} imply the following \begin{corollary} Let $C>2$. In every infinite dimensional subspace of $\mathscr B_{mT}$ there exists a two-level $C$-RIS of arbitrary length. \end{corollary} \section{Main Lemmas.} \begin{definition} Let $\mathcal T$ be a tree, $u\preceq v \in \mathcal T$. We say that a sequence $(u_l)_{l=0}^r$ is a branch of length $r$ from $u$ to $v$, if $u_0 = u$, $u_{l+1}\in S_{u_l}$ for $l=0,\dots,r-1$, and $u_r=v$. \end{definition} \begin{lemma}\label{lenpath} Let $i\in \mathbb N$, $\mathcal T$ be a tree with root $\emptyset$, $v\succ\emptyset$, $b=(u_l)_{l=0}^r$ be a branch from $\emptyset$ to $v$. \begin{enumerate} \item Let $\gamma \in \Gamma$, $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$ be its tree analysis, $y\in \mathscr B_{mT}$ be a normalised vector and assume that \begin{enumerate} \item $v$ is the covering index for $y$, \item for all $\xi\in\Gamma$ we have $|d^*_\xi(y)|<4^{-i-3}$, \item $|e^*_\gamma(y)| > 4^{-i-1}$. \end{enumerate} Then $r \leq i+1$. \item Let $f\in W$, $(f_t)_{t\in\mathcal T}$ be its tree analysis, $z=e_l\in T$ for some $l\in\mathbb N$, and assume that \begin{enumerate} \item $f_v=z^*_l$, \item $f(z) > 4^{-i-1}$. \end{enumerate} Then $r \leq i$. \end{enumerate} \end{lemma} \begin{proof} $(1)$.
Using the evaluation analysis and the assumption $u_r = v$ we have \begin{align*} e^*_\gamma(y) & = \sum_{s\in S_{u_0}} d^*_{\beta_s}(y) + \frac{1}{m_{u_0}}\sum_{s\in S_{u_0}}\varepsilon_s e^*_{\eta_s}P_{I_s}(y) \\ & = \sum_{s\in S_{u_0}} d^*_{\beta_s}(y) + \frac{1}{m_{u_0}}\varepsilon_{u_1} e^*_{\eta_{u_1}}P_{I_{u_1}}(y) \\ & = \sum_{s\in S_{u_0}} d^*_{\beta_s}(y) + \frac{\varepsilon_{u_1}}{m_{u_0}} \left( \sum_{s\in S_{u_1}} d^*_{\beta_s}(y) + \frac{1}{m_{u_1}}\sum_{s\in S_{u_1}}\varepsilon_s e^*_{\eta_s}P_{I_s}(y) \right) \\ & = \sum_{l=0}^{r-1} \left( \prod_{a=0}^{l-1}\frac{\varepsilon_{u_{a+1}}}{m_{u_a}} \sum_{s\in S_{u_l}} d^*_{\beta_s}(y) \right) + \left( \prod_{a=0}^{r-1} \frac{\varepsilon_{u_{a+1}}}{m_{u_a}} \right) e^*_{\eta_{u_r}}P_{I_{u_r}}(y). \end{align*} Moreover, the assumption $u_r = v$ implies also that for at most one $l\in\{\,0,\dots,r-1\,\}$ there are two $s \in S_{u_l}$ such that $d^*_{\beta_s}(y) \neq 0$, and for the remaining $l\in\{\,0,\dots,r-1\,\}$ there is at most one such $s$. Hence by $(c)$ we have \[ 4^{-i-1} < |e^*_\gamma(y)| \leq 2\cdot 4^{-i-3} \sum_{l=0}^{r-1} \left( \prod_{a=0}^{l-1}\frac{1}{m_{u_a}} \right) + 3\left( \prod_{a=0}^{r-1} \frac{1}{m_{u_a}} \right) \leq 4^{-i-2} + 3\cdot 4^{-r}. \] It follows that \[ 3\cdot 4^{-r} \geq 4^{-i-1} - 4^{-i-2} > 4^{-i-2}, \] which finishes the proof of $(1)$. $(2)$. The second part can be proved in a manner similar to $(1)$. Indeed, it is enough to repeat the arguments omitting the Bourgain-Delbaen parts of functionals. \end{proof} In translating $\mathscr B_{mT}$ to $T$ we need to deal with bd-parts. The following lemma handles one of the frequently occurring situations. \begin{lemma}\label{est-bd} Let $(y_i)$ be a two-level $C$-RIS, $(a_i)$ a sequence of scalars of modulus bounded by $1$, $\gamma\in\Gamma$ with the tree analysis $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$ satisfying $|e^*_\gamma(y_i)| > 4^{-i-1}$.
For every $i$ let $t_i$ be the covering index for $y_i$ and let $\mathcal T'$ be the minimal subtree of $\mathcal T$ containing the root and all $t_i$'s. If $n_{t_i}\leq n_{j_i}$ for every $i$, then \[ |\bdp(e^*_\gamma, \mathcal T')(\sum_i a_iy_i)| \leq 1/n_{j_1}. \] \end{lemma} \begin{proof} Fix $i$ and let $(u_l)_{l=0}^r\subset \mathcal T$ be a branch from $\emptyset$ to $t_i$. The definition of a two-level RIS and Lemma \ref{average-bound} give for every $i$ and every $\xi\in\Gamma$ that \[ |d^*_\xi(y_i)| \leq \frac{m_{j_i}}{n_{j_i}}\frac{3C}{n_{j_{i,1}}}. \] By Lemma \ref{lenpath} we know that $r\leq i+1$, so we have \begin{align*} |\bdp(e^*_\gamma,\mathcal T')(y_i)| & \leq \sum_{l=0}^{r-1} \sum_{s\in S_{u_l}} |d^*_{\beta_s}(y_i)| + \sum_{s\in S_{t_i}}|d^*_{\beta_s}(y_i)| \\ & \leq \sum_{l=0}^i 2\frac{m_{j_i}}{n_{j_i}}\frac{3C}{n_{j_{i,1}}} + n_{j_i}\frac{m_{j_i}}{n_{j_i}}\frac{3C}{n_{j_{i,1}}} \leq \frac{3C}{n_{j_{i,1}}} \left(2(i+1)\frac{m_{j_i}}{n_{j_i}} + m_{j_i}\right) \\ & \leq \frac{1}{2n_{j_i}}. \end{align*} Summing over all $i$ and using $|a_i|\leq 1$ we obtain the statement. \end{proof} \begin{lemma}[Lemma 5.2 \cite{AH}] \label{gonRIS} Let $I\subseteq\mathbb N$ be an interval, $(x_i)_{i\in I}$ be a $C$-RIS, $(j_i)_{i\in I}$ be its growth index. For every $\gamma \in \Gamma $ and $s \in \mathbb N$ we have \[ |e^*_\gamma P_{(s,\infty)}(x_i)| \leq \begin{cases} 5Cm_\gamma^{-1} & \text{, if } m_\gamma < m_{j_i}, \\ 3Cm_\gamma^{-1} & \text{, if } m_\gamma \geq m_{j_{i+1}} \end{cases} \] \end{lemma} The following proposition shows that a given node can be approximated by a node of controlled rank. \begin{proposition}\label{split} Let $\gamma \in \Gamma$, $K,N\in \mathbb N$, $K\leq\min\rng e^*_\gamma \leq N$. For every $i\in \mathbb N$ there exists $\gamma'\in \Gamma_{N+i}$ such that $K\leq\min\rng e^*_{\gamma'}$ and \[ \|(e^*_\gamma-e^*_{\gamma'})\restriction \langle d_\xi \mid \xi \in \Gamma_N \rangle\| \leq 4^{-i}.
\] \end{proposition} \begin{proof} If $\gamma\in \Gamma_{N+i}$, then $\gamma'=\gamma$ satisfies the conditions of the proposition. Assume that $\gamma\in \Gamma\setminus\Gamma_{N+i}$. Consider the tree analysis $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$ of $e^*_\gamma$ and for every $t\in\mathcal T$ let $\bdp(e^*_{\eta_t})P_{I_t}=\sum_{s\in S_t} d^*_{\beta_s}$. Define $u_0 = \emptyset$ and $u_1 = \max \{s\in S_{u_0} \mid \min\rng (e^*_{\eta_s}P_{I_s}) \leq N\}$. If $\rank (d^*_{\beta_{u_1}}) \leq N + i$, then we define $\eta_{u_0}' = \beta_{u_1}$ and we are done. Assume that $\rank (d^*_{\beta_{u_1}}) > N + i$ and define $u_2 = \max \{s\in S_{u_1} \mid \min\rng (e^*_{\eta_s}P_{I_s}) \leq N\}$. If $\rank (d^*_{\beta_{u_2}}) + 1 \leq N + i$, then we define $\eta_{u_1}' = \beta_{u_2}$ and $\eta_{u_0}' = (\rank(\eta_{u_1}')+1,\beta_{u_1-},m_{u_0},\varepsilon_{u_1} e^*_{\eta_{u_1}'})$. We continue inductively for $k\leq i+1$. If at some point $k$ we have $\max\rng (e^*_{\eta_{u_k}}P_{I_{u_k}}) + k - 1 \leq N + i$, then we redefine all $\eta_{u_k},\dots,\eta_{u_0}$ as above and we are done. If $k=i+1$ and still $\rank (\beta_{u_k}) + k - 1 > N + i$, then we drop $\eta_{u_k}$ from $\eta_{u_{k-1}}$ redefining $\eta_{u_{k-1}}'=(N+1,\beta_{u_k-{}-},m_{u_{k-1}},\varepsilon_{u_k-} e^*_{\eta_{u_k-}})$ (we change only the rank of $\beta_{u_k-}$) and for all $l=k-2,\dots,0$ we redefine $\eta_{u_l}' = (\rank(\eta_{u_{l+1}}')+1,\beta_{u_{l+1}-},m_{u_l},\varepsilon_{u_{l+1}} e^*_{\eta_{u_{l+1}}'})$. We show that for $\gamma'=\eta_\emptyset'$ we have the claimed bound for the norm of the difference. If we finished the inductive construction without dropping $\eta_{u_{i+1}}$, then $e^*_{\gamma}\restriction \langle d_\xi \mid \xi \in \Gamma_N \rangle = e^*_{\gamma'}\restriction \langle d_\xi \mid \xi \in \Gamma_N \rangle$.
If we dropped $\eta_{u_{i+1}}$, then \[ \|(e^*_{\gamma}-e^*_{\gamma'})\restriction \langle d_\xi \mid \xi \in \Gamma_N \rangle\| \leq \big(\prod_{l=0}^{i} m_{u_l}^{-1}\big)\|e^*_{\eta_{u_{i+1}}}P_{I_{u_{i+1}}}\| \leq 3\cdot 4^{-i-1}. \] \end{proof} \begin{lemma}\label{comp} Let $I\subseteq\mathbb N$ be an interval and let $(x_i)_{i\in I}$ be a $C$-RIS with the growth index $(j_i)_{i\in I}$, $\max\rng x_i + j_i < \min\rng x_{i+1}$, and $|d^*_\xi(x_i)| \leq 3Cn_{j_i}^{-1}$ for all $\xi\in\Gamma$ and $i\in I$. For every node $\gamma\in\Gamma$ there exist $I' \subset I$, a sequence of intervals $(E_i)_{i\in I'}$, and a node $\gamma'\in \Gamma$ with the tree analysis comparable with $(x'_i)_{i\in I'}$, where $x'_i=P_{E_i}x_i$, and \[ |e^*_\gamma(\sum_{i\in I} x_i)|\leq 6|e^*_{\gamma'}(\sum_{i\in I'} x'_i)| + 49C4^{-j_1}. \] \end{lemma} \begin{proof} Fix $\gamma\in\Gamma$ and take $\varepsilon=\pm1$ with $|e^*_\gamma(\sum_{i\in I} x_i)| = \varepsilon e^*_\gamma(\sum_{i\in I} x_i)$. Define $I'=\{i\in I \mid \varepsilon e^*_\gamma(x_i) > 4^{-j_i-1}\}$ and observe that \begin{equation}\label{comp1} \varepsilon e^*_\gamma(\sum_{i\in I\setminus I'} x_i) \leq 4^{-j_1}. \end{equation} Let $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$ be the tree analysis of $e^*_\gamma$ and for all $i\in I$ set $t_i^0$ to be the covering index for $x_i$. For every $i\in I'$ we define $S_i^0 = \{s\in S_{t_i^0} \mid \rng(e^*_{\eta_s}P_{I_s}) \cap \rng x_i \neq \emptyset\}$, $q_i^0 = \min S_i^0,\ r_i^0 = \max S_i^0$. Denote the predecessor and the successor of $i$ in $I'$ by $i_-,\,i_+$ respectively. Define \( \overline\varepsilon_t=\varepsilon \prod_{\emptyset\preceq u \preceq t} \varepsilon_u, \) for every $t\in\mathcal T$. We will construct a $10C$-RIS $(x'_i)_{i\in I'}\subset \mathscr B_{mT}$ with $x'_i=P_{E_i}x_i$ for some interval $E_i$, such that \begin{equation}\label{xx'} \varepsilon e^*_{\gamma}(x_i) \leq 2 \varepsilon e^*_{\gamma}(x_i'). \end{equation} Fix $i\in I'$.
We consider two cases: \noindent \textbf{Case A.} $S_i^0=\{ q_i^0,\, r_i^0 \}$, $\rng (e^*_{\eta_{q_i^0}}P_{I_{q_i^0}})\cap \rng(x_{i_-})\neq \emptyset$, and $\rng (e^*_{\eta_{r_i^0}}P_{I_{r_i^0}})\cap \rng(x_{i_+})\neq \emptyset$. If $\overline\varepsilon_{q_i^0}e^*_{\eta_{q_i^0}}P_{I_{q_i^0}}(x_i) \geq \overline\varepsilon_{r_i^0}e^*_{\eta_{r_i^0}}P_{I_{r_i^0}}(x_i)$ then we define $E_i=\big[0,\min \rng(e^*_{\eta_{r_i^0}}P_{I_{r_i^0}})\big)$ and $x'_i=P_{E_i}x_i$. We have \[ \overline\varepsilon_{t^0_i} e^*_{\eta_{t^0_i}}P_{I_{t^0_i}}(x_i) \leq 2 \overline\varepsilon_{t^0_i} e^*_{\eta_{t^0_i}}P_{I_{t^0_i}}(x_i'), \] which implies inequality (\ref{xx'}). If $\overline\varepsilon_{q_i^0}e^*_{\eta_{q_i^0}}P_{I_{q_i^0}}(x_i) < \overline\varepsilon_{r_i^0}e^*_{\eta_{r_i^0}}P_{I_{r_i^0}}(x_i)$ then we define $E_i=\big(\max \rng(e^*_{\eta_{q_i^0}}P_{I_{q_i^0}}),\infty\big)$ and $x'_i=P_{E_i}x_i$, so (\ref{xx'}) holds as above. \noindent \textbf{Case B.} Case A does not hold. We define $E_i=\rng x_i$, so $x'_i=x_i$ and inequality (\ref{xx'}) holds trivially. For every $i\in I'$ we define $t_i\succeq t^0_i$ to be the covering index for $x'_i$. Define $S_i = \{s\in S_{t_i} \mid \rng(e^*_{\eta_s}P_{I_s}) \cap \rng x_i' \neq \emptyset\}$, $q_i = \min S_i,\ r_i = \max S_i$. Observe that \[ \tag{$T1$} \text{for every $i\in I'$ there exists $s\in S_i$ with $\rng(e^*_{\eta_{s}}P_{I_s})\subset \rng x'_i$.} \] Indeed, since $t_i$ is the covering index for $x_i'$, we have $q_i\neq r_i$. In Case B its defining assumption yields exactly the existence of the required $s\in S_i^0=S_i$. In Case A the definition of $E_i$ yields that $\rng (e^*_{\eta_{q_i}}P_{I_{q_i}})\cap \rng(x_{i_-})= \emptyset$ or $\rng (e^*_{\eta_{r_i}}P_{I_{r_i}})\cap \rng(x_{i_+}) = \emptyset$, hence $(T1)$ holds.
Using Lemma \ref{lenpath} for $x'_i,\ j_i+1,\ t_i$ (the lower bound follows from (2)) for every $i\in I'$ we get that \[ \tag{$T2$} \text{the length of the branch linking the root and the node $t_i$ is not greater than $j_i+2$.} \] Let $\mathcal T^0\subseteq\mathcal T$ be the smallest subtree containing the root and all $t_i$ for $i\in I'$. For every $t\in \mathcal T^0$ define $i_t=\max \{i\in I\mid m_{j_i} \leq m_t\}$ or $i_t=1$, if the set is empty. Fix $i\in I$ and notice that if there exists $t\preceq t_i$ such that $i<i_t$ then we have $m_{j_i}<m_{j_{i_t}}\leq m_t$, hence by Lemma \ref{gonRIS} \[ |e^*_{\eta_t}P_{I_t}(x'_i)| \leq 6Cm_t^{-1} \leq m_{j_i}^{-1}. \] We show that $i\not\in I'$. Consider the following formula \[ \varepsilon e^*_{\gamma}(x'_i) = \overline\varepsilon_t e^*_{\eta_t}P_{I_t}(x'_i) + \sum_{u\prec t} \big( \prod_{\emptyset\preceq v\prec u}m_v^{-1}\big) \bdp(e^*_{\eta_u})P_{I_u}(x'_i). \] Since $t_i$ is the covering index, for every $u\prec t$ there are at most two functionals in $\bdp(e^*_{\eta_u})$ with non-zero action on $x'_i$. By $(T2)$ we get \[ |e^*_{\gamma}(x'_i)| \leq m_{j_i}^{-1} + (j_i+2) 2 \cdot 3Cn_{j_i}^{-1} < 4^{-j_i}, \] hence $i\not\in I'$. Therefore, \[\tag{$T3$} \text{for every $i\in I'$ and every $t\preceq t_i$ we have $i\geq i_t$.} \] First, we will define $\gamma'\in\Gamma$ such that $e^*_{\gamma'}$ is comparable with $(x'_i)_{i\in I'}$ and satisfies certain estimates. Second, we will show the claimed inequality. \noindent \textbf{STEP 1}. Inductive construction. We define inductively on $I'$ a sequence $(\gamma^i)_{i\in \{0\}\cup I'}\subset \Gamma$ with $\gamma^0=\gamma$ and a sequence $(\mathcal T^i)_{i\in I'}$ with $\mathcal T^i\subseteq\mathcal T^0$. Fix $i\in I'$ and denote $(\min I')_-=0$.
On the $i$-th step we change $\eta_s^{i_-}$ for some $s\in S_i$ to obtain $\gamma^i\in\Gamma$ (by Remark \ref{mt-part}) with tree analysis $(I_t^i, \varepsilon_t, \eta_t^i)_{t\in\mathcal T^i}$ satisfying the following conditions \begin{enumerate}[(i)] \item $e^*_{\gamma^i}$ is comparable with $(x'_j)_{j\leq i,\, j\in I'}$, \item for all $j>i,\,j\in I'$ we have $\mt(e^*_{\eta_{t_i}^{i_-}})P_{I_{t_i}^{i_-}}(x_j') = \mt(e^*_{\eta_{t_i}^i})P_{I_{t_i}^i}(x_j')$, \item if $i>i_{t_i}$ then \( \overline\varepsilon_{t_i} \mt(e^*_{\eta_{t_i}^{i_-}})P_{I_{t_i}^{i_-}}(x'_i) \leq 3 \overline\varepsilon_{t_i} \mt(e^*_{\eta_{t_i}^i})P_{I_{t_i}^i} (x'_i) + 3C4^{-j_{i}}, \) \noindent if $i=i_{t_i}$ then \( \overline\varepsilon_{t_i} e^*_{\eta_{t_i}^{i_-}}P_{I_{t_i}^{i_-}}(x'_i) \leq 3 \overline\varepsilon_{t_i} e^*_{\eta_{t_i}^i}P_{I_{t_i}^i} (x'_i) + 3C4^{-j_{i}}, \) \item if $i>i_{t_i}$ then \( \overline\varepsilon_{t_i} e^*_{\eta_{t_i}^{i_-}}P_{I_{t_i}^{i_-}}(\sum_{j<i,\,j\in I'} x'_j) \leq \overline\varepsilon_{t_i} e^*_{\eta_{t_i}^i}P_{I_{t_i}^i}(\sum_{j<i,\,j\in I'} x'_j)+ 3C4^{-j_{i_-}}, \) \noindent if $i=i_{t_i}$ then \( e^*_{\eta_{t_i}^i}P_{I_{t_i}^i}(x'_j) = 0, \text{ for all } j<i,\,j\in I', \) \item $\mathcal T^i\subseteq\mathcal T^{i_-}$, \item for all $t\in \mathcal T^i$ incomparable with $t_i$ we have $\eta^i_t=\eta^{i_-}_t,\ I^i_t = I^{i_-}_t$. \end{enumerate} We do the induction on $I'$. Note that the induction stabilises after a finite number of steps since $e^*_\gamma$ has a finite range. The inductive base is an easier version of the inductive step (there are no changes to be made to the left of $\rng x'_{\min I'}$), so we show only the latter. Fix $i\in I'$. We assume the inductive hypothesis for all $j<i,\,j\in I'$. In order to get comparability we change $\eta_{t_i}^{i_-}$. We treat possibilities $i>i_{t_i}$ and $i=i_{t_i}$ separately (recall that by $(T3)$ there are no $i<i_{t_i}$). \noindent \textbf{Case I.} $i>i_{t_i}$. 
We define $\mathcal T^i=\mathcal T^{i_-}$. By $(T1)$ there exists $s\in S_i$ with $\rng(e^*_{\eta_{s}^{i_-}}P_{I_{s}^{i_-}})\subset \rng x'_i$. On the right end of $\rng x'_i$ we have the following cases: \noindent \textbf{a)} $\rng (e^*_{\eta_{r_i}^{i_-}}P_{I_{r_i}^{i_-}})\cap \rng(x'_{i_+})\neq \emptyset$ and $\overline\varepsilon_{r_i-}e^*_{\eta_{r_i-}^{i_-}}P_{I_{r_i-}^{i_-}}(x'_i)\geq \overline\varepsilon_{r_i}e^*_{\eta_{r_i}^{i_-}}P_{I_{r_i}^{i_-}}(x'_i)$. Define $\eta_{r_i-}^i = \eta_{r_i-}^{i_-}$, $I_{r_i-}^i= I_{r_i-}^{i_-}$, $\eta_{r_i}^i= \eta_{r_i}^{i_-}$, and $I_{r_i}^i = I_{r_i}^{i_-}\cap [\min\rng x'_{i_+},\infty)$. \noindent \textbf{b)} $\rng (e^*_{\eta_{r_i}^{i_-}}P_{I_{r_i}^{i_-}})\cap \rng(x'_{i_+})\neq \emptyset$ and $\overline\varepsilon_{r_i-}e^*_{\eta_{r_i-}^{i_-}}P_{I_{r_i-}^{i_-}}(x'_i) < \overline\varepsilon_{r_i}e^*_{\eta_{r_i}^{i_-}}P_{I_{r_i}^{i_-}}(x'_i)$. Proposition \ref{split} for $(\eta_{r_i}^{i_-}, \max\rng x'_i, j_i)$ gives a suitable node $\eta_{r_i-}^i\in\Gamma$ and we define $I_{r_i-}^i=[\min \rng (e^*_{\eta_{r_i}^{i_-}}P_{I_{r_i}^{i_-}}),\, \rank(\eta_{r_i-}^i)]$, $\eta_{r_i}^i = \eta_{r_i}^{i_-}$ and $I_{r_i}^i=I_{r_i}^{i_-}\cap [\min\rng x'_{i_+},\infty)$. \noindent \textbf{c)} $\rng (e^*_{\eta_{r_i}^{i_-}}P_{I_{r_i}^{i_-}})\cap \rng(x'_{i_+}) = \emptyset$. Define $\eta_{r_i}^i = \eta_{r_i}^{i_-}$ and $I_{r_i}^i=I_{r_i}^{i_-}$. On the left end of $\rng x'_i$ we have the following cases: \noindent \textbf{d)} $\rng (e^*_{\eta_{q_i}^{i_-}}P_{I_{q_i}^{i_-}})\cap \rng(x'_{i_-})\neq \emptyset$ and $\overline\varepsilon_{q_i+}e^*_{\eta_{q_i+}^{i_-}}P_{I_{q_i+}^{i_-}}(x'_i)\geq \overline\varepsilon_{q_i}e^*_{\eta_{q_i}^{i_-}}P_{I_{q_i}^{i_-}}(x'_i)$. 
Define $\eta_{q_i+}^i = \eta_{q_i+}^{i_-}$, $I_{q_i+}^i= I_{q_i+}^{i_-}$, and $I_{q_i}^i=I_{q_i}^{i_-}\cap [0,\rank(\eta_{q_i}^i)]$, where $\eta_{q_i}^i$ is given by Proposition \ref{split} for \((\eta_{q_i}^{i_-}, \max\rng x'_{i_-}, j_{i_-})\). \noindent \textbf{e)} $\rng (e^*_{\eta_{q_i}^{i_-}}P_{I_{q_i}^{i_-}})\cap \rng(x'_{i_-})\neq \emptyset$ and $\overline\varepsilon_{q_i+}e^*_{\eta_{q_i+}^{i_-}}P_{I_{q_i+}^{i_-}}(x'_i) < \overline\varepsilon_{q_i}e^*_{\eta_{q_i}^{i_-}}P_{I_{q_i}^{i_-}}(x'_i)$. Proposition \ref{split} for $(\eta_{q_i}^{i_-}, \max\rng x'_{i_-}, j_{i_-})$ gives a suitable node $\eta_{q_i}^i\in\Gamma$ and we define $I_{q_i}^i=[\min \rng (e^*_{\eta_{q_i}^{i_-}}P_{I_{q_i}^{i_-}}),\, \rank(\eta_{q_i}^i)]$, $\eta_{q_i+}^i = \eta_{q_i}^{i_-}$ and $I_{q_i+}^i=I_{q_i}^{i_-}\cap [\min\rng x'_i,\infty)$. \noindent \textbf{f)} $\rng (e^*_{\eta_{q_i}^{i_-}}P_{I_{q_i}^{i_-}})\cap \rng(x'_{i_-}) = \emptyset$. Define $\eta_{q_i}^i = \eta_{q_i}^{i_-}$ and $I_{q_i}^i=I_{q_i}^{i_-}$. After introducing the changes listed above, the conditions (i)-(v) are satisfied. \noindent \textbf{Case II.} $i=i_{t_i}$. We define $I_{t_i}^i=I_{t_i}^{i_-}\cap [\min\rng x'_i,\infty)$, so we do not need to change $\eta_{q_i}^{i_-}$. Then we change $\eta_{r_i}^{i_-}$ as in Case I a), b) or c). We define \[ \mathcal T^i=\mathcal T^{i_-}\setminus\{\,s\in \mathcal T^{i_-}\mid s \text{ is a successor of $t_i$ in } \mathcal T^{i_-},\ \max\rng(e^*_{\eta_s}P_{I_s})< \min\rng x_i'\,\}. \] The conditions (i)-(v) are satisfied. For any $s\in S_i$ not considered above we set $\eta^i_s=\eta^{i_-}_s$ and $I^i_s = I^{i_-}_s$. By Remark \ref{mt-part} there is a node $\gamma^i$ with the tree analysis $(I_t^i, \varepsilon_t, \eta_t^i)_{t\in\mathcal T^i}$, where $\eta^i_t=\eta^{i_-}_t$ and $I^i_t = I^{i_-}_t$ for every $t\in\mathcal T^i$ incomparable with $t_i$, i.e. we obtain (vi). This finishes the inductive construction.
Notice that the change $\gamma^{i_-}\to\gamma^i$ induces only the following changes in the action on $\sum_j x'_j$: $e^*_{\eta^{i_-}_t}\to e^*_{\eta^i_t}$ on $\rng x'_i$ and $\bdp(e^*_{\eta^{i_-}_t})\to\bdp(e^*_{\eta^i_t})$ for $t\preceq t_i$ on $[\min\rng x'_i,\infty)$. We define $\gamma'$ to be the node on which the inductive construction stabilises. Similarly, we define $\mathcal T'$. This finishes STEP 1. We proceed to show the estimate part of the lemma. \noindent \textbf{STEP 2}. Estimation of the errors. Fix $i\in I'$. We show that after introducing changes on the $i$-th step in the induction we have the following: \begin{equation}\label{compi} \varepsilon e^*_{\gamma^{i_-}}(x'_i) \leq 3 \varepsilon e^*_{\gamma^i}(x'_i) + 4C4^{-j_i}. \end{equation} \begin{equation}\label{compjl} \varepsilon e^*_{\gamma^{i_-}}(\sum_{j<i,\,j\in I'}x'_j) \leq \varepsilon e^*_{\gamma^i}(\sum_{j<i,\,j\in I'}x'_j) + 3C4^{-j_i}. \end{equation} \begin{equation}\label{compjg} \varepsilon e^*_{\gamma^{i_-}}(x'_j) \leq \varepsilon e^*_{\gamma^i}(x'_j) + j_jCm_{j_j}^{-1}\text{ for every } j>i,\, j\in I'. \end{equation} \noindent \textbf{Ad (\ref{compi})}. We use the equalities \[ \varepsilon e^*_{\gamma^{i_-}}(x'_i) = \overline\varepsilon_{t_i} e^*_{\eta_{t_i}^{i_-}}P_{I_{t_i}^{i_-}}(x'_i) + \sum_{t\prec t_i} \big( \prod_{\emptyset\preceq u\prec t}m_u^{-1}\big) \bdp(e^*_{\eta_t^{i_-}})P_{I_t^{i_-}}(x'_i), \] \[ \varepsilon e^*_{\gamma^i}(x'_i) = \overline\varepsilon_{t_i} e^*_{\eta_{t_i}^i}P_{I_{t_i}^i}(x'_i) + \sum_{t\prec t_i} \big( \prod_{\emptyset\preceq u\prec t}m_u^{-1}\big)\bdp(e^*_{\eta_t^i})P_{I_t^i}(x'_i). \] We start with estimating bd-parts on $x'_i$, i.e. $\sum_{t\prec t_i} \big( \prod_{\emptyset\preceq u\prec t}m_u^{-1}\big)\bdp(e^*_{\eta_t^{i_-}})P_{I_t^{i_-}}(x'_i)$ and $\sum_{t\prec t_i} \big( \prod_{\emptyset\preceq u\prec t}m_u^{-1}\big) \bdp(e^*_{\eta_t^i})P_{I_t^i}(x'_i)$.
Since $t_i$ is a covering index for $x'_i$ there are at most two elements in the bd-part of $e^*_{\eta_t^{i_-}}$ with non-zero action on $x'_i$ for every $t\prec t_i$. Using $(T2)$ we get \begin{equation}\label{combd} \sum_{t\prec t_i}\big( \prod_{\emptyset\preceq u\prec t}m_u^{-1}\big) \bdp(e^*_{\eta_t^{i_-}})P_{I_t^{i_-}}(x'_i) \leq (j_i+2) 2\cdot3Cn_{j_i}^{-1} < Cm^{-1}_{j_i}. \end{equation} Similarly \begin{equation}\label{combd'} \sum_{t\prec t_i}\big( \prod_{\emptyset\preceq u\prec t}m_u^{-1}\big) \bdp(e^*_{\eta_t^i})P_{I_t^i}(x'_i) \leq (j_i+2) 2\cdot3Cn_{j_i}^{-1} < Cm^{-1}_{j_i}. \end{equation} If $i=i_{t_i}$, then using (iii), (\ref{combd}), and (\ref{combd'}) we get (\ref{compi}). If $i>i_{t_i}$, then we observe that for the bd-part we have \begin{equation}\label{compb} \overline\varepsilon_{t_i}\bdp(e^*_{\eta_{t_i}^{i_-}})P_{I_{t_i}^{i_-}}(x'_i) \leq n_{t_i}3Cn_{j_i}^{-1} \leq Cm^{-1}_{j_i}, \ \overline\varepsilon_{t_i}\bdp(e^*_{\eta_{t_i}^i})P_{I_{t_i}^i}(x'_i) \leq n_{t_i}3Cn_{j_i}^{-1} \leq Cm^{-1}_{j_i}. \end{equation} By (iii), (\ref{combd}), (\ref{combd'}), and (\ref{compb}) we get (\ref{compi}). \noindent \textbf{Ad (\ref{compjl})}. If $i>i_{t_i}$ then (\ref{compjl}) follows directly from (iv) and (vi). If $i=i_{t_i}$, then using Lemma \ref{gonRIS} for every $j<i$ we obtain \[ \overline\varepsilon_{t_i}e^*_{\eta_{t_i}^{i_-}}P_{I_{t_i}^{i_-}}(\sum_{j < i,\,j\in I'} x'_j) \leq (i-1)6Cm_{t_i}^{-1} \leq (i-1)6Cm_{j_i}^{-1} \leq C4^{-j_i}. \] This, together with (iv) and (vi), gives (\ref{compjl}). \noindent \textbf{Ad (\ref{compjg})}. Fix $j>i,\, j\in I'$. We prove inductively that for every $t\preceq t_i$ we have \[ \overline\varepsilon_te^*_{\eta_t^{i_-}}P_{I_t^{i_-}}(x'_j) \leq \overline\varepsilon_te^*_{\eta_t^i}P_{I_t^i}(x'_j) + \#\{u \in \mathcal T^i \mid t\preceq u \preceq t_i \}\cdot Cm_{j_j}^{-1}. \] Start from $t=t_i$.
The condition (ii) gives \[ \overline\varepsilon_{t_i}e^*_{\eta_{t_i}^{i_-}}P_{I_{t_i}^{i_-}}(x'_j) = \overline\varepsilon_{t_i}e^*_{\eta_{t_i}^i}P_{I_{t_i}^i}(x'_j) + \overline\varepsilon_{t_i}\sum_{s\in S_{t_i}} d^*_{\beta_s^{i_-}}(x'_j) - \overline\varepsilon_{t_i}\sum_{s\in S_{t_i}} d^*_{\beta_s^i}(x'_j). \] By ($T3$) we have $i_t\leq i < j$, and thus $n_t < n_{j_j}$. It follows that \[ \overline\varepsilon_t\sum_{s\in S_t} d^*_{\beta_s^{i_-}}(x'_j) - \overline\varepsilon_t\sum_{s\in S_t} d^*_{\beta_s^i}(x'_j) \leq 2n_t3Cn_{j_j}^{-1} \leq Cm_{j_j}^{-1}, \] which finishes the inductive base. Let $t\neq t_i$. For every $s\in S_t$ with $s \not\preceq t_i$ we have by (vi) $\overline\varepsilon_se^*_{\eta_s^{i_-}}P_{I_s^{i_-}}(x'_j) = \overline\varepsilon_se^*_{\eta_s^i}P_{I_s^i}(x'_j)$. Moreover, for $s_0\in S_t$ with $s_0 \preceq t_i$ we have the inductive hypothesis, so \[ \overline\varepsilon_t\mt(e^*_{\eta_t^{i_-}})P_{I_t^{i_-}}(x'_j) \leq \overline\varepsilon_t\mt(e^*_{\eta_t^i})P_{I_t^i}(x'_j) + \#\{u \in \mathcal T^i \mid s_0\preceq u \preceq t_i \}\cdot Cm_{j_j}^{-1}.
\] As in the inductive base we have for bd-parts \[ |\sum_{s\in S_t} d^*_{\beta_s^{i_-}}(x'_j)|, \ |\sum_{s\in S_t} d^*_{\beta_s^i}(x'_j)| \leq n_t3Cn_{j_j}^{-1} \leq Cm_{j_j}^{-1}/2, \] hence \begin{align*} \overline\varepsilon_te^*_{\eta_t^{i_-}}P_{I_t^{i_-}}(x'_j) & = \overline\varepsilon_t\sum_{s\in S_t} d^*_{\beta_s^{i_-}}(x'_j) + \overline\varepsilon_t\mt(e^*_{\eta_t^{i_-}})P_{I_t^{i_-}}(x'_j) \\ & \leq n_t3Cn_{j_j}^{-1} + \overline\varepsilon_t\mt(e^*_{\eta_t^i})P_{I_t^i}(x'_j) + \#\{u \in \mathcal T^i \mid s_0\preceq u \preceq t_i \}\cdot Cm_{j_j}^{-1} \\ & = n_t3Cn_{j_j}^{-1} + \overline\varepsilon_te^*_{\eta_t^i}P_{I_t^i}(x'_j) - \overline\varepsilon_t\sum_{s\in S_t} d^*_{\beta_s^i}(x'_j) + \#\{u \in \mathcal T^i \mid s_0\preceq u \preceq t_i \}\cdot Cm_{j_j}^{-1} \\ & \leq \overline\varepsilon_te^*_{\eta_t^i}P_{I_t^i}(x'_j) + \#\{u \in \mathcal T^i \mid t\preceq u \preceq t_i \}\cdot Cm_{j_j}^{-1}. \end{align*} The inductive step is finished. Using $(T2)$ we obtain (\ref{compjg}). Indeed, \[ \varepsilon e^*_{\gamma^{i_-}}(x'_j) \leq \varepsilon e^*_{\gamma^i}(x'_j) + (j_i+2)Cm_{j_j}^{-1} \leq \varepsilon e^*_{\gamma^i}(x'_j) + j_jCm_{j_j}^{-1}. \] Finally, having (\ref{compi}), (\ref{compjl}), and (\ref{compjg}) we obtain by an easy induction that for every $i\in I'$ we have \[ \varepsilon e^*_\gamma(\sum_{j\leq i ,\, j\in I'} x'_j) \leq 3 \varepsilon e^*_{\gamma^i}(\sum_{j\leq i ,\, j\in I'} x'_j) + 23C\sum_{j\leq i ,\, j\in I'}4^{-j_j}, \] \[ \varepsilon e^*_\gamma( x'_j) \leq \varepsilon e^*_{\gamma^i}(x'_j) + ij_jCm_{j_j}^{-1}, \text{ for all } j>i,\, j\in I'. \] This, together with (\ref{comp1}) and (\ref{xx'}), finishes the proof of the lemma. \end{proof} \begin{lemma}\label{norm-sum} Let $(y_i)_{i=1}^{n_j}$ be a $C$-RIS in $\mathscr B_{mT}$ such that every $y_i$ is a projection on some interval of a $C-\ell^{n_{j_i}}_1$-average and let $(z_i)_{i=1}^{n_j}$ be a subsequence of the standard basis of $T$. Let $m_{j_1} \geq 2n_j^3$, $j\geq 5$, and assume that $\max\rng y_i+j_i<\min\rng y_{i+1}$ for all $i$.
Then, for every interval $J$, there exists a functional $f$ in the norming set $W$ such that $\supp f \subseteq \supp(\sum_{i\in J}z_i)$ and \[ \|\sum_{i\in J}y_i\| \leq 96f(\sum_{i\in J}z_i). \] \end{lemma} \begin{proof} Set $y=\sum_{i\in J}y_i$ and $z=\sum_{i\in J}z_i$ and choose $\gamma\in\Gamma$ with $|e^*_\gamma(y)|\geq \frac{6}{7}\|y\|$ and with tree analysis $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$. By Lemma \ref{comp} we can assume that the tree analysis of $e^*_\gamma$ is comparable with $(y'_i)_{i\in J'}$, where $y'_i$ is a projection of $y_i$ on some interval, and $|e^*_\gamma(\sum_{i\in J'} y'_i)|\geq \frac{1}{7}\|y\| - 9C4^{-j_1}$ for some $J'\subseteq J$. We will construct $f\in W$ such that $\supp f\subseteq \bigcup_{i\in J'}\supp z_i$ and $|e^*_\gamma(\sum_{i\in J'} y'_i)| \leq 12f(\sum_{i\in J'} z_i) = 12f(z)$, and thus we may assume that $J'=J$. Moreover, by Lemma \ref{gonRIS} the sequence $(y'_i)_{i\in J}$ is a $10C$-RIS so we may assume that $y'_i=y_i$. For every $i$ let $t_i\in \mathcal T$ be the covering index for $y_i$ and for every $t\in \mathcal T$ let $E_t=\{i\in J\mid t=t_i\}$. \begin{claim}\label{x} Fix $t=t_{i_0}$ for some $i_0$. \begin{enumerate} \item[(a)] Assume that $m_t<m_{j_1}$. Then for $g_i = z_i^*$, $i\in E_t$, we have \[ |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t}y_i)| \leq \frac{1}{n_j^2} + \frac{6}{m_t}\sum_{i\in E_t}g_i(z). \] \item[(b)] Assume that $m_t \geq m_{j_1}$. Then there exists $g_t\in W$ with $\supp g_t \subseteq \supp (\sum_{i\in E_t} z_i)$, and such that \[ |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t}y_i)| \leq \frac{1}{n_j^2} + 12g_t(z). \] \end{enumerate} \end{claim} In (a), defining $g_t = \frac{1}{m_t}\sum_{i\in E_t}g_i$ would yield the inequality of (b); however, in the sequel we need the more precise form, hence in (a) we keep the precise form of $g_t$. \begin{proof}[Proof of the claim] (a). We set $S_i=\{\,s\in S_t \mid \rng(e^*_{\eta_s}P_{I_s}) \subset \rng(y_i)\,\}$ and $g_i = z_i^*$ for $i\in E_t$.
Using Lemmas \ref{average-bound}(2) and \ref{norm-proj} for every $y_i$ (see Remark \ref{aveproj}) we estimate \begin{align*} |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t} y_i)| & = |\sum_{s\in S_t}d^*_{\beta_s}(\sum_{i\in E_t} y_i) + \frac{1}{m_t} \sum_{i\in E_t} \sum_{s\in S_i}e^*_{\eta_s}P_{I_s}(y_i)| \\ & \leq n_t \frac{3C}{n_{j_1}} + \frac{1}{m_t} 3 \sum_{i\in E_t} \left(1+\frac{2|S_i|}{n_{j_i}}\right) \\ & \leq \frac{1}{n_j^2} + \frac{1}{m_t} 6\sum_{i\in E_t} g_i(z). \end{align*} (b). Define $s_t=\min S_t$, and $i_t = \max\{i\in E_t\mid m_{j_i} \leq m_t\}$. For every $i<i_t$ we have by Lemma \ref{gonRIS} \[ |e^*_{\eta_t}P_{I_t}(y_i)| \leq \frac{6C}{m_t} \leq \frac{6C}{m_{j_{i_t}}} < \frac{1}{n_j^3}. \] This implies \begin{equation}\label{b1} |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t,\, i<i_t} y_i)| \leq \frac{1}{2n_j^2}. \end{equation} Now let $i\in E_t,\ i \geq i_t$. If \begin{equation}\label{it>} |e^*_{\eta_t}P_{I_t}(y_{i_t})| > |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t,\, i > i_t} y_i)|, \end{equation} then for $g_t = z_{i_t}^*$ we obtain \begin{equation}\label{b2} |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t,\, i \geq i_t} y_i)| \leq 2\cdot 3C g_t(z). \end{equation} On the other hand, if (\ref{it>}) is not true, then we proceed as in part (a). We set $S_i=\{\,s\in S_t \mid \rng(e^*_{\eta_s}P_{I_s}) \subset \rng(y_i)\,\}$ and $g_i = z_i^*$. Then we set $g_t=\frac{1}{m_t} \sum_{i\in E_t,\, i > i_t} g_i$. The definition is correct as $g_i$'s are pairwise different and $\# E_t \leq \# S_t \leq n_t$.
Using Lemmas \ref{average-bound}(2) and \ref{norm-proj} for every $y_i$ (see Remark \ref{aveproj}) we estimate \begin{align} \label{b3} |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t,\, i \geq i_t} y_i)| & \leq 2|\sum_{s\in S_t}d^*_{\beta_s}(\sum_{i\in E_t,\, i > i_t} y_i) + \frac{1}{m_t} \sum_{i\in E_t,\, i > i_t} \sum_{s\in S_i}e^*_{\eta_s}P_{I_s}(y_i)| \\ \nonumber & \leq 2n_t \frac{3C}{n_{j_{i_t+1}}} + 2\frac{1}{m_t} 3 \sum_{i\in E_t,\, i > i_t} \left(1+\frac{2|S_i|}{n_{j_i}}\right) \\ \nonumber & \leq \frac{1}{2n_j^2} + \frac{1}{m_t} 12\sum_{i\in E_t,\, i > i_t} g_i(z) \\ \nonumber & \leq \frac{1}{2n_j^2} + 12g_t(z). \end{align} Combining (\ref{b1}), (\ref{b2}), and (\ref{b3}) we obtain (b). \end{proof} Let $\mathcal T'$ be the smallest subtree of $\mathcal T$ containing the root and all $t_i$'s. \begin{claim}\label{y} For every $t\in \mathcal T'$ there exists $f_t\in W$ such that $\rng f_t \subseteq \bigcup \{\rng z_i \mid \rng y_i \cap \rng(e^*_{\eta_t}P_{I_t}) \neq \emptyset \}$ and \[ |e^*_{\eta_t}P_{I_t}(y)| \leq \frac{3}{n_j^2}\#\{s \in \mathcal T'\mid s\succeq t\} + 12f_t(z). \] \end{claim} \begin{proof}[Proof of the claim.] We prove the claim by induction on the tree $\mathcal T'$ starting from terminal nodes. Let $t$ be a terminal node. Then $t=t_{i_0}$ for some $i_0$ and we use Claim \ref{x}. If $m_t < m_{j_1}$, then Claim \ref{x}(a) gives $(g_i)_{i\in E_t}$, we set $f_t=\frac{1}{m_t}\sum_{i\in E_t}g_i$ and we are done. If $m_t \geq m_{j_1}$, then we just take $f_t=g_t$ given by Claim \ref{x}(b). Let $t$ be a non-terminal node and define $R_t = J\setminus E_t$. Observe that for every $i\in R_t$ we have $t\prec t_i$, so there are at most two $s\in S_t$ such that $d^*_{\beta_s}(y_i) \neq 0$. This implies \begin{equation}\label{bullet} |\sum_{s\in S_t}d^*_{\beta_s}(\sum_{i\in R_t} y_i)| \leq 2n_j\frac{3C}{n_{j_1}} \leq \frac{1}{n_j^2}.
\end{equation} The inductive assumption gives for every $s\in S_t\cap\mathcal T'$ that \[ |e^*_{\eta_s}P_{I_s}(y)| \leq \frac{3}{n_j^2}\#\{u\in \mathcal T' \mid u \succeq s\} + 12f_s(z), \] where $f_s\in W$ is such that $\supp f_s \subseteq \bigcup \{\supp z_i \mid \rng y_i \cap \rng(e^*_{\eta_s}P_{I_s}) \neq \emptyset \}$. If $t\neq t_i$ for all $i$, then we define $f_t=\frac{1}{m_t}\sum_{s\in S_t\cap \mathcal T'} f_s$. The functional $f_t$ is in $W$ since by comparability all $(f_s)_{s\in S_t\cap \mathcal T'}$ have pairwise disjoint ranges. We have $E_t = \emptyset$, hence by (\ref{bullet}) and the inductive assumption \begin{align*} |e^*_{\eta_t}P_{I_t}(y)| & \leq |\sum_{s\in S_t}d^*_{\beta_s}(\sum_{i\in R_t} y_i)| + |\frac{1}{m_t}\sum_{s\in S_t\cap \mathcal T'}e^*_{\eta_s}P_{I_s}(y)| \\ & \leq \frac{3}{n_j^2}(1+\sum_{s\in S_t\cap \mathcal T'}\# \{u\in \mathcal T' \mid u \succeq s\}) + 12f_t(z), \end{align*} and we are done. If $t=t_{i_0}$ for some $i_0$ then we use Claim \ref{x}. We have the following cases. If $m_t < m_{j_1}$ then from Claim \ref{x}(a) we have \[ |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t}y_i)| \leq \frac{1}{n_j^2} + \frac{6}{m_t}\sum_{i\in E_t}g_i(z). \] Define \[ f_t=\frac{1}{m_t}\left( \sum_{s\in S_t\cap\mathcal T'} f_s +\sum_{i\in E_t}g_i\right). \] Observe that $f_t\in W$ since the ranges in $\{\rng f_s\mid s\in S_t\cap\mathcal T'\} \cup \{\rng g_i \mid i\in E_t\}$ are pairwise disjoint and $\# E_t \leq \#(S_t\setminus\mathcal T')$.
Therefore, using (\ref{bullet}) and the inductive assumption \begin{align*} |e^*_{\eta_t}P_{I_t}(\sum_{i\in R_t} y_i) | & \leq |\sum_{s\in S_t}d^*_{\beta_s}(\sum_{i\in R_t} y_i)| + \frac{1}{m_t} |\sum_{s\in S_t\cap\mathcal T'}e^*_{\eta_s}P_{I_s} (\sum_{i\in R_t}y_i)| \\ & \leq \frac{1}{n_j^2} + \frac{1}{m_t} \sum_{s\in S_t\cap \mathcal T'} \left(\frac{3}{n_j^2}\#\{u\in \mathcal T'\mid u\succeq s\} + 12f_s(z)\right), \end{align*} and thus \[ |e^*_{\eta_t}P_{I_t}(y)| \leq |e^*_{\eta_t}P_{I_t}(\sum_{i\in R_t} y_i)| + |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t} y_i)| \leq \frac{3}{n_j^2}(1 + \sum_{s\in S_t\cap \mathcal T'}\#\{t_i\mid t_i\succeq s\}) + 12f_t(z), \] and we are done. We are left with the case $m_t \geq m_{j_1}$. Claim \ref{x}(b) gives $g_t\in W$ such that $\supp g_t \subseteq \supp (\sum_{i\in E_t} z_i)$, and \[ |e^*_{\eta_t}P_{I_t}(\sum_{i\in E_t}y_i)| \leq \frac{1}{n_j^2} + 12g_t(z), \] whereas (\ref{bullet}) gives, as $s\in S_t,\ i\in R_t,\ s\preceq t_i$, that \begin{align*} |e^*_{\eta_t}P_{I_t}(\sum_{i\in R_t} y_i)| & \leq |\sum_{s\in S_t}d^*_{\beta_s}(\sum_{i\in R_t} y_i)| + \frac{1}{m_t} |\sum_{s\in S_t}e^*_{\eta_s}P_{I_s} (\sum_{i\in R_t}y_i)| \\ & \leq \frac{1}{n_j^2} + \frac{1}{m_t}\sum_{i\in R_t}\|y_i\| \leq \frac{1}{n_j^2} + \frac{n_j}{m_t} \leq \frac{2}{n_j^2}. \end{align*} Combining the two estimates above and setting $f_t=g_t$ finishes the inductive proof. \end{proof} Let $f=f_\emptyset \in W$. Claim \ref{y} yields \[ \frac{1}{7}\|y\| - 9C\cdot 4^{-j_1}\leq |e^*_\gamma(y)| \leq \frac{3}{n_j^2}\#\{t\in \mathcal T'\mid t\succeq \emptyset\}+12f(z) \leq \frac{6}{n_j} + 12f(z), \] which, since $\|y\|\geq 1$, implies \[ \|y\| \leq 8\cdot 12f(z) = 96f(z). \] The proof of the lemma is finished.
\end{proof} \section{Proof of Main Theorem.} \begin{theorem}\label{thm:a} For every infinite dimensional block subspace $Y$ of $\mathscr B_{mT}$ there exists a block sequence $(y_i)_{i\in \mathbb N}$ in $Y$ and sequences of natural numbers $(j_i)_{i\in\mathbb N}$, $((k_{i,j})_{j=1}^{n_{j_i}})_{i\in\mathbb N}$ such that the sequences $(y_i)_{i\in \mathbb N}$ and $(\frac{m_{j_i}}{n_{j_i}}\sum_{j=1}^{n_{j_i}} e_{k_{i,j}} )_{i\in\mathbb N}\subset T$ are equivalent. \end{theorem} \begin{proof} Fix some subspace $Y$ of $\mathscr B_{mT}$ and a two-level $3$-RIS $(y_i)_{i\in \mathbb N}$ from $Y$. By the definition of a two-level RIS, for every $i\in \mathbb N$ there exists a normalised $3$-RIS $y_{i,1},\dots,y_{i,n_{j_i}} \in Y$ with growth index $j_i<j_{i,1}<\dots<j_{i,n_{j_i}}$ such that $y_i = \frac{c_im_{j_i}}{n_{j_i}}\sum_{j=1}^{n_{j_i}} y_{i,j}$, where $c_i>0$ is some normalising constant with $1/30 \leq c_i \leq 1$. Moreover, we assume that $j_1\geq 5$. Concerning basis averages, we define for every $i\in \mathbb N$ a vector $z_i = \frac{m_{j_i}}{n_{j_i}}\sum_{j=1}^{n_{j_i}} z_{i,j}$, where the $z_{i,j}$'s, taken in the lexicographical ordering of the pairs $(i,j)$, form a subsequence of the standard basis. Recall that $\|z_i\|=1$ by Proposition \ref{baver-norm}. We will show that for every finite sequence of scalars $(a_i)_{i\in I}$ we have \[ 2^{-7}\|\sum_{i\in I} a_iz_i\| \leq \|\sum_{i\in I} a_iy_i\| \leq 2^{10}\|\sum_{i\in I} a_iz_i\|. \] \begin{lemma} For every finite sequence of scalars $(a_i)_{i\in I}$ we have \[ \|\sum_{i\in I} a_iz_i\| \leq 2^7\|\sum_{i\in I} a_iy_i\|. \] \end{lemma} \begin{proof} Fix a finite sequence of scalars $(a_i)_{i\in I}$ and set $y=\sum_{i\in I} a_iy_i$, $z=\sum_{i\in I} a_iz_i$. We assume $\|z\|=1$ and $a_i>0$ for all $i\in I$ (by unconditionality of the standard basis of $T$). Choose $f\in W$ with $f(z)=\|z\|$ and $\supp f\subseteq \supp z$. Let $(f_t)_{t\in\mathcal T}$ be a tree analysis of $f$.
Consider $P=\{(i,j) \mid i\in\mathbb N,\,j=1,\dots,n_{j_i}\}$ with lexicographical ordering as a sequence. For every $p\in P$ let $t_p\in\mathcal T$ be such that $f_{t_p} = \pm z^*_p$ (recall $(z_p)_{p\in P}$ is a subsequence of the basis). Without loss of generality we can assume that $f_{t_p} = z^*_p$ for all $p\in P$. Define $P'=\{p\in P\mid f(z_p) > 4^{-j_p-1}\}$. By Lemma \ref{lenpath}, for all $p\in P'$ the branch linking the root and $t_p$ is of length less than or equal to $j_p$. Observe that $f(\sum_{p\in P\setminus P'}z_p) \leq 4^{-j_1}$, hence, relaxing the condition $f(z)=1$ to $f(z) \geq 1/2$, we may assume that $P=P'$. Let $\mathcal T'\subseteq\mathcal T$ be the minimal tree containing the root and all $t_p$'s for $p\in P$. We will inductively construct a node $\gamma$ with the tree analysis $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T'}$ satisfying for every $t\in\mathcal T'$ the following \[ f_t(z) \leq 2^6 \varepsilon_te^*_{\eta_t}P_{I_t}(y), \] and for every non-terminal $t\in \mathcal T'$, writing $\bdp(e^*_{\eta_t})=\sum_{s\in S_t\cap\mathcal T'}d^*_{\beta_s}$, and for every $s\in S_t\cap\mathcal T'$ we have \[ \max\rng y_{p_s} < \rank(\beta_s) \leq \max\rng y_{p_s} + \len b(s, t_{p_s}), \] where $p_s=\max\{p\in P\mid s\preceq t_p\}$ and $\len b(s,t_{p_s})$ is the length of the branch linking $s$ and $t_{p_s}$. The induction on $\mathcal T'$ starts from the terminal nodes. Let $t$ be a terminal node in $\mathcal T'$. Then $t=t_p$ for some $p\in P$. Choose $\eta_{t_p}\in \Gamma$ such that $|e^*_{\eta_{t_p}}(y_p)| \geq 1/2$ and $\rank(\eta_{t_p})\in \rng y_p$. Then choose $\varepsilon_{t_p}$ such that $|e^*_{\eta_{t_p}}(y_p)| = \varepsilon_{t_p}e^*_{\eta_{t_p}}(y_p)$ and $I_{t_p}=\rng e^*_{\eta_{t_p}} \cap [\min\rng y_p, \infty)$. This choice guarantees \[ f_{t_p}(z_p) = 1 \leq 2 \varepsilon_{t_p}e^*_{\eta_{t_p}}P_{I_{t_p}}(y_p). \] Using the lower bound $c_i \geq 1/30$ we get \[ f_{t_p}(z) \leq 2^6 \varepsilon_{t_p}e^*_{\eta_{t_p}}P_{I_{t_p}}(y).
\] Let $t$ be a non-terminal node in $\mathcal T'$. Then $f_t=m_t^{-1}\sum_{s\in S_t\cap\mathcal T'}f_s$ and we choose $\eta_t\in\Gamma$ such that \[ \mt(e^*_{\eta_t})=m_t^{-1}\sum_{s\in S_t\cap\mathcal T'}\varepsilon_se^*_{\eta_s}P_{I_s},\ \bdp(e^*_{\eta_t})=\sum_{s\in S_t\cap\mathcal T'}d^*_{\beta_s}, \] with \[ \rank(\beta_s) = \begin{cases} \max\rng y_{p_s} + 1 & \text{if $s$ is a terminal node,} \\ \rank(\eta_s) + 1 & \text{if $s$ is a non-terminal node.} \end{cases} \] Set $\varepsilon_t=1$ and $I_t=\rng(e^*_{\eta_t})$. The choice of the $\beta_s$'s, the bound on the length of the branch linking the $z_p$'s and the root, and the assumption on the size of the gaps between consecutive $y_p$'s give $\bdp(e^*_{\eta_t})(y)=0$. Indeed, for every $s\in S_t\cap\mathcal T'$ we have $\len b(s, t_{p_s}) \leq j_{p_s}$ by Lemma \ref{lenpath} and $\max\rng y_{p_s} + j_{p_s} < \min\rng y_{p_s+}$ by the definition of two-level RIS. Finally the inductive assumption gives \[ f_t(z) = m_t^{-1}\sum_{s\in S_t\cap\mathcal T'}f_s(z) \leq \sum_{s\in S_t\cap\mathcal T'}2^6d^*_{\beta_s}(y) + m_t^{-1}\sum_{s\in S_t\cap\mathcal T'}2^6\varepsilon_se^*_{\eta_s}P_{I_s}(y) = 2^6 \varepsilon_te^*_{\eta_t}P_{I_t}(y). \] This finishes the inductive construction. We set $\gamma=\eta_\emptyset$ and notice that \[ \|z\|/2 = 1/2 \leq f(z) \leq 2^6 e^*_\gamma(y)\leq 2^6\|y\|, \] hence $\|z\| \leq 2^7\|y\|$. The proof of the lemma is finished. \end{proof} \begin{lemma} For every finite sequence of scalars $(a_i)_{i\in I}$ we have \[ \|\sum_{i\in I} a_iy_i\| \leq 2^{10}\|\sum_{i\in I} a_iz_i\|. \] \end{lemma} \begin{proof} Fix a finite sequence of scalars $(a_i)_{i\in I}$ and set $y=\sum_{i\in I} a_iy_i$, $z=\sum_{i\in I} a_iz_i$. We assume $\|y\|=1$, hence for every $i\in I$ we have $|a_i| \leq \| P_{\rng y_i}(y)\| \leq 4$. Let $\gamma$ be such that $|e^*_\gamma(y)|\geq 6/7$ and take its tree analysis $(I_t, \varepsilon_t, \eta_t)_{t\in\mathcal T}$. Consider $P=\{(i,j) \mid i\in\mathbb N,\,j=1,\dots,n_{j_i}\}$ with lexicographical ordering as a sequence.
Notice that the sequence $(y_p)_{p\in P}$ is also a RIS. Thus by applying Lemma \ref{comp} for $\gamma$ and $(a_iy_{i,j})_{(i,j)\in P}$ we can assume that the tree analysis of $\gamma$ is comparable with $(y'_{i,j})$, where $y'_p$ is a projection of $y_p$ onto some interval, and $|e^*_\gamma(\sum_{(i,j)\in P'}a_iy'_{i,j})|\geq 1/7 - 27\cdot 4^{-j_1}$, for some $P'\subseteq P$. Since by $1$-unconditionality of the standard basis of $T$ we have $\|\sum_{(i,j)\in P'} a_iz_{i,j}\| \leq \|z\|$, we can assume $P=P'$. Moreover, abusing notation, we shall write $y_p, y_i$ also for the restrictions $y'_p$ and their sums. For every $i$ we define $t_i\in\mathcal T$ to be the covering index for $y_i$. Now we will show that there exists $f\in W$ such that $\supp f \subseteq \supp z$ and \[ |e^*_\gamma(y)| \leq 4n_{j_1}^{-1} + 96f(z). \] Let $\mathcal T'$ be the smallest subtree of $\mathcal T$ containing the root and all $t_i$'s and define for all $t\in\mathcal T'$ a tree $\mathcal T'_t = \{s \in \mathcal T'\mid s \succeq t\}$ and \[ A_t = \{ i \in I \mid \rng(e^*_{\eta_t}P_{I_t}) \cap \rng y_i \neq \emptyset\}, \ y_t = \sum_{i\in A_t} a_iy_i,\ z_t=\sum_{i\in A_t} a_iz_i. \] \begin{claim} For all $t \in \mathcal T'$ there is $f_t\in W$ such that $\supp f_t\subseteq \supp z_t$ and \[ |e^*_{\eta_t}P_{I_t}(y_t)| \leq |\bdp|(e^*_{\eta_t}, \mathcal T'_t)(y_t) + 96f_t(z_t). \] \end{claim} \begin{proof}[Proof of the claim] We proceed by induction on $\mathcal T'$, starting from the terminal nodes of $\mathcal T'$. The argument for the base of the induction is a simpler version of the argument for the inductive step (see Case 2 below), hence we show only the latter. Let $t\in \mathcal T'$ be a non-terminal node. Consider the evaluation analysis of $e^*_{\eta_t}$ \[ e^*_{\eta_t}=\sum_{s\in S_t} d^*_{\beta_s} + \frac{1}{m_t}\sum_{s\in S_t} \varepsilon_s e^*_{\eta_s} P_{I_s}. \] \noindent Case 1. $t \neq t_i$ for all $i$. Let $f_t = \frac{1}{m_t}\sum_{s\in S_t \cap \mathcal T'} f_s$.
Then using the inductive assumption we obtain \begin{align*} |e^*_{\eta_t}P_{I_t}(y_t)| & \leq \sum_{s\in S_t} |d^*_{\beta_s}P_{I_t}(y_t)| + \frac{1}{m_t}\sum_{s\in S_t \cap \mathcal T'} |e^*_{\eta_s} P_{I_s} (y_s)| \\ & \leq \sum_{s\in S_t} |d^*_{\beta_s}P_{I_t}(y_t)| + \frac{1}{m_t}\sum_{s\in S_t \cap \mathcal T'}|\bdp|(e^*_{\eta_s}, \mathcal T'_s)(y_t) + \frac{1}{m_t}\sum_{s\in S_t \cap \mathcal T'} 96f_s(z_s) \\ & \leq |\bdp|(e^*_{\eta_t}, \mathcal T'_t)(y_t) + 96f_t(z_t). \end{align*} \noindent Case 2. $t=t_{i_0}$ for some $i_0$. We define $E_t=\{i\in I\mid t =t_i\}$ and estimate \begin{align*} |e^*_{\eta_t}P_{I_t}(y_t)| & \leq \sum_{s\in S_t} |d^*_{\beta_s}P_{I_t}(y_t)| + \frac{1}{m_t}\sum_{s\in S_t \cap \mathcal T'} |e^*_{\eta_s} P_{I_s} (y_s)| + \frac{1}{m_t}\sum_{s\in S_t\setminus \mathcal T'} |e^*_{\eta_s} P_{I_s} (\sum_{i\in E_t} a_iy_i)|\\ & \leq |\bdp|(e^*_{\eta_t}, \mathcal T'_t)(y_t) + \frac{1}{m_t}\sum_{s\in S_t \cap \mathcal T'}96f_s(z_s) + \frac{1}{m_t}\sum_{s\in S_t\setminus \mathcal T'} |e^*_{\eta_s} P_{I_s} (\sum_{i\in E_t} a_iy_i)|\\ & \leq \dots \end{align*} Fix $i\in E_t$. Define $S_i=\{s\in S_t\setminus \mathcal T'\mid \rng e^*_{\eta_s}P_{I_s} \cap \rng y_i \neq \emptyset\}$ and $S_{i,j}=\{s\in S_t\setminus \mathcal T'\mid \rng e^*_{\eta_s}P_{I_s} \cap \rng y_{i,j} \neq \emptyset\}$. Let $A_i=\{s\in S_i\mid \exists j\colon \rng y_{i,j} \subset \rng (e^*_{\eta_s}P_{I_s}) \}$ and $B_i=S_i\setminus A_i$. For every $s\in A_i$ let $J_s =\{j\mid \rng y_{i,j} \subset \rng (e^*_{\eta_{s}}P_{I_s})\}$. Let $B_j = \{s\in B_i\mid \rng (e^*_{\eta_s}P_{I_{s}}) \subseteq \rng y_{i,j}\}$ for $j$ in $J_i=\{j\mid \exists s\in S_i\colon \rng (e^*_{\eta_s}P_{I_s}) \subseteq \rng y_{i,j}\}$. Using the above notation we have the following splitting \[ \sum_{s\in S_i} |e^*_{\eta_s} P_{I_s} (\sum_{j=1}^{n_{k_i}} y_{i,j})| = \sum_{s\in A_i} |e^*_{\eta_s} P_{I_s} (\sum_{j\in J_s} y_{i,j})| + \sum_{j\in J_i} \sum_{s\in B_j} |e^*_{\eta_s} P_{I_s} (y_{i,j})|.
\] Concerning the first part, by Lemma \ref{norm-sum} for every $s\in A_i$ there exists a functional $f_s\in W$ with $\supp f_s \subset \supp \sum_{j\in J_s} z_{i,j}$ and \[ |e^*_{\eta_s} P_{I_s} (\sum_{j\in J_s} y_{i,j})| \leq 96 f_s(\sum_{j\in J_s} z_{i,j}). \] Concerning the second part, by Lemma \ref{norm-proj} (see Remark \ref{aveproj}) for every $j\in J_i$ the following inequality holds: \[ \sum_{s\in B_j} |e^*_{\eta_s} P_{I_s} (y_{i,j})| \leq 9\left(1 + \frac{2\#B_j}{n_{k_{i,j}}}\right) \leq 18 = 18f_j(z_{i,j}), \] for the functional $f_j\in W$ dual to $z_{i,j}$, since $\#B_j \leq |S_t| \leq n_{k_{i}} < n_{k_{i,j}}$. Finally, we have \begin{align*} \sum_{s\in S_i} |e^*_{\eta_s} P_{I_s} (y_{i})| &= \sum_{s\in A_i} |e^*_{\eta_s} P_{I_s} (\frac{c_im_{k_i}}{n_{k_i}}\sum_{j\in J_s} y_{i,j})| + \sum_{j\in J_i} \sum_{s\in B_j} |e^*_{\eta_s} P_{I_s} (\frac{c_im_{k_i}}{n_{k_i}}y_{i,j})| \\ & \leq 96\sum_{s\in A_i}f_s(\frac{m_{k_i}}{n_{k_i}}\sum_{j\in J_s} z_{i,j}) + 18\sum_{j\in J_i} f_{j}(\frac{m_{k_i}}{n_{k_i}} z_{i,j}). \end{align*} We define \[ f_t=\frac{1}{m_t}\left(\sum_{s\in S_t\cap\mathcal T'}f_s + \sum_{i\in E_t}(\sum_{s\in A_i} f_s + \sum_{j\in J_i} f_j)\right). \] The functional $f_t$ is in $W$ since all $f_s$'s and $f_j$'s have pairwise disjoint ranges and we have $\sum_{i\in E_t}(\# A_i + \# J_i) \leq \#(S_t\setminus \mathcal T')$. Going back to the main estimate we obtain \[ \dots \leq |\bdp|(e^*_{\eta_t}, \mathcal T_t')(y_t) + 96f_t(z_t), \] which finishes the induction. \end{proof} Going back to the main proof we have that $y_\emptyset=y,\ z_\emptyset = z,\ \eta_\emptyset=\gamma$, and $|e^*_{\gamma}(y)| \leq |\bdp|(e^*_\gamma, \mathcal T')(y) + 96\|z\|$. For every $i\in I$ we have $|a_i| \leq \|P_{\rng y_i}(y)\| \leq 4$, and thus by Lemma \ref{est-bd} we have $|\bdp|(e^*_\gamma, \mathcal T')(y)\leq 4n_{j_1}^{-1}$.
The assumption on $\gamma$ yields \[ \frac{1}{7} - 27 \cdot 4^{-j_1} \leq \frac{4}{n_{j_1}} + 96 \|z\|, \] hence, as $j_1\geq 5$, we get \[ \|y\| \leq 8\cdot 96\|z\| \leq 2^{10} \|z\|. \] \end{proof} \end{proof} \end{document}
\begin{document} \begin{frontmatter} \title{Design of Trusted Market Platforms using Permissioned Blockchains and Game Theory} \author{Shivika Narang} \submitdate{MAY 2018} \dept{Computer Science and Automation} \degreein{Computer Science and Engineering} \mscengg \iisclogotrue \tablespagetrue \maketitle \blankpage \begin{dedication} \begin{center} DEDICATED TO \\[2em] {\large \em My Parents and My Grandmother}\\ for\\ {\large\em Their Unending Love and Support } \end{center} \end{dedication} \blankpage \acknowledgements My first and foremost gratitude is towards my advisor, Prof Y Narahari. He took me as a student despite my lack of understanding of game theory and proved to be the most understanding of advisors. In the last two years, he has been encouraging, kind, and supportive along each and every turn. His patience and kindness are virtues I hope to inculcate within myself. It was also thanks to him that I got an opportunity to spend a summer at IBM research where the work presented in this thesis began. I am hugely indebted to Pankaj and Vinayaka who introduced me to the field of blockchains and bore with my shortcomings with grace. None of the work presented in this thesis would have been possible without their guidance. Many thanks to Pankaj for putting up with my incessant questions all through the summer, despite his busy schedule. A big thanks to Megha, for her technical contributions as well as her emotional support in the last year. This work would have been incomplete without her presence, patience, and warmth. Despite her many other obligations, she enthusiastically helped me at every step. I'm very grateful to have met someone with whom my thinking and tastes match. I hope we can continue our many adventures as long as possible. My biggest supporters are my parents. Without them, their love, and their support, none of the things I consider my achievements would have been possible. 
Thank you so much for putting up with my rants and tirades all these years, for always giving me the freedom to dream, for helping me get centered on the many occasions I lost my cool. Thank you for teaching me to be self-reliant and independent. You both were my first teachers, and all my accomplishments are a result of your teachings. A lot of people helped make this thesis, and I would like to take this opportunity to thank all of them. Shweta and Ganesh, you both have been helping me out since the first version of this work was written up. Thank you so much for taking out the time from your busy schedules to guide me and my clumsy writing. A big thank you to Mayank and Vishakha for working even after their exams were over and helping proofread this thesis. Thanks to Arpita for her early inputs on this work and all the encouragement. A big thank you to everybody who had to sit through my mock presentations. I know they were nothing short of torture, but thank you for being so gracious. Emotional support goes a long way in bringing a work like this to fruition. So a big thank you to Megha, Pooja, and Urvashi for being there for me, listening to my many rants, cheering me up, and going along with my crazy plans. Pooja and Urvashi, our tastes may not match in some things but thank you for going along with me anyways. Thank you guys also for being awesome company these last two years. Last, but not least, I'd like to thank my Nani. I wish you were here with me today. You are as much a parent to me as Mama and Papa. You raised me for thirteen years, and gave me the courage to believe in myself. I miss you every single day, especially on such occasions, which I want to share with you. Thank you for your love and warmth. I hope wherever you are, you're happy. \begin{abstract} The {\em blockchain\/} concept forms the backbone of a new wave of technology that promises to be deployed extensively in a wide variety of industrial and societal applications.
Governments, financial institutions, banks, industrial supply chains, service companies, and even educational institutions and hospitals are investing in a substantial manner in the hope of improving business efficiency and operational robustness through deployment of blockchain technology. This thesis work is concerned with designing trustworthy business-to-business (B2B) market platforms drawing upon blockchain technology and game theory. The proposed platform is built upon three key ideas. First, we use permissioned blockchains with smart contracts as a technically sound approach for building the B2B platform. The blockchain deploys smart contracts that govern the interactions of enterprise buyers and sellers. Second, the smart contracts are designed using a rigorous analysis of a repeated game model of the strategic interactions between buyers and sellers. We show that such smart contracts induce honest behavior from buyers and sellers. Third, we embed cryptographic regulation protocols into the permissioned blockchain to ensure that business sensitive information is not revealed to the competitors. We believe our work is an important step in the direction of building a powerful B2B platform that maximizes social welfare and enables trusted collaboration between strategic enterprise agents. \end{abstract} \blankpage \makecontents \notations \begin{table}[h!] 
\large \hspace{0.1\linewidth} \begin{tabular}{cc} \hline Acronym & Expansion\\ \hline B2B & Business-to-Business \\ B2C & Business-to-Customer \\ CPA & Chosen Plaintext Attack\\ EDI & Electronic Data Interchange\\ MPC & Multi-Party Computation\\ MSNE & Mixed Strategy Nash Equilibrium \\ PSNE & Pure Strategy Nash Equilibrium \\ SE & Software Entity \\ SHA & Secure Hash Algorithm \\ SGPE & Subgame Perfect Equilibrium \\ \hline \end{tabular} \caption{List of Acronyms} \end{table} \blankpage \end{frontmatter} \chapter{Introduction} \subfile{Ch1} \chapter{Foundations of Blockchain Technology} \subfile{Ch2} \chapter{Problem Formulation and Approach} \subfile{Ch3} \chapter{Game Theory Based Smart Contracts} \subfile{Ch4} \chapter{Cryptographic Regulation Protocols} \subfile{Ch5} \chapter{Summary and Future Work} \section{Summary } This thesis presented a novel foray into the study of the intersection of game theory and blockchains. While blockchains are far too recent for anything about them to be truly `traditional', the traditional view of the relevance of game theory to blockchains was concerned with mining and cryptoeconomic incentives. To the best of our knowledge, the work presented in this thesis is the first to use game theory to design smart contracts. This thesis first presented an overview of blockchains and blockchain technology in Chapter 2. It started with the building blocks of blockchains, covering the data structure, organization, and storage of the contents of the blockchain, along with a brief overview of mining and some of its commonly used techniques. Key features of blockchain technology were then discussed: how the blockchain functions as an asset ledger, along with the basics of smart contracts. To conclude the overview of blockchains, a common classification of blockchains into permissionless (public) and permissioned (private) blockchains was discussed, along with the various issues associated with both.
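As a toy illustration (added here for orientation, and not part of the thesis text itself), the hash-chained organization of blocks summarized above can be sketched in a few lines of Python; the field names and the transaction format are hypothetical:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON serialization (SHA-256, as in most chains).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, prev_hash, transactions):
    return {"index": index, "prev_hash": prev_hash, "transactions": transactions}

def append_block(chain, transactions):
    # Each new block records the hash of its predecessor.
    prev = chain[-1]
    chain.append(make_block(prev["index"] + 1, block_hash(prev), transactions))

def is_valid(chain):
    # The chain is valid iff every stored prev_hash matches a recomputed hash.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [make_block(0, "0" * 64, [])]          # genesis block
append_block(chain, [{"from": "buyer1", "to": "seller2", "amount": 10}])
append_block(chain, [{"from": "buyer3", "to": "seller1", "amount": 4}])
assert is_valid(chain)

chain[1]["transactions"][0]["amount"] = 999    # tamper with history
assert not is_valid(chain)
```

Tampering with any recorded transaction changes that block's hash, so the next block's stored `prev_hash` no longer verifies; this tamper-evidence is what lets the blockchain serve as a trustworthy asset ledger.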
\subsubsection*{Contributions} In this thesis, we designed a blockchain-based market platform, which allows for B2B collaboration, discussed in Chapter 3. This platform is intended for the enterprise procurement process. Enterprises would like to buy their required goods from the seller who is most likely to give them high quality. In order to do so, they need both their own personal experience and the reviews of other enterprise buyers. To serve this purpose, the market platform allows buyers to buy from the seller of their choice and consequently give feedback on the quality they received, so as to rate the corresponding seller. The collection and aggregation of this feedback also has constraints. As businesses cannot afford to have their buying history leaked, for fear of revealing business strategy to rivals, the feedback must be collected such that the identities of the buyers are not revealed. Further, it is necessary to ensure that sellers cannot decipher the exact feedback given by a specific buyer in a particular round. This level of anonymity can induce a buyer to be strategically dishonest in her feedback regarding a certain seller, so as to induce that seller to give high quality in an effort to increase his ratings, or to reduce his selling price. Thus, an additional requirement, along with complete anonymity of the feedback, is that buyers are incentivized to be honest. \subsubsection*{Regulation Protocol} These requirements were met through cryptographic protocols deployed on the blockchain-based platform by way of smart contracts. These protocols are detailed in Chapter 5. There are essentially two sub-protocols, the monitoring protocol and the public perception protocol, that together form the ``regulation protocol'' which makes the platform usable for B2B collaboration. In the public perception protocol, buyers on the platform send their feedback vectors in an encrypted format to all the other buyers.
The encryption scheme is homomorphic, allowing them to aggregate the data into a vector without decrypting the contents. This vector is then decrypted and buyers locally compute the rating vector for the current round. In the monitoring protocol, each buyer's feedback is sent to a monitor, through the SE, encrypted with the monitor's public key. A monitor is a software entity itself, and does not know the identity of the corresponding buyer, identifying her only with an alternateId. The monitor uses this alternateId to alert the SE in case the buyer is found to be dishonest. These protocols ensure that a reliable ratings vector is computed in each round. This vector is used by the buyers to estimate which seller is most likely to give high quality. We conduct game theoretic analysis to characterize how best to incentivize sellers to choose to produce high-quality goods. The purpose of this analysis is to help a mechanism designer ensure that the rules and parameters of the market platform are conducive to high-quality offerings in equilibrium. \subsubsection*{Game Theoretic Analysis} We first studied a pricing rule where all sellers charge the same price. We found that in such a pricing model, the only pure strategy subgame perfect equilibrium possible, if any, will be one in which sellers give either only low-quality or only high-quality goods. We further deduced that {\em profit margin} and {\em competition} are necessary to induce high-quality offerings. Within homogeneous pricing, two different punishment models were studied: the ``local punishment'' model, where punishment is given locally for each infraction made by the seller, and the ``threshold punishment'' model, where sellers are punished only when their ratings drop below a certain threshold. Out of the two, it was seen that the threshold punishment model is comparatively better at disincentivizing low-quality offerings. We then studied a case where the prices charged by sellers can be one of two prices.
Within this case, we explored both types of pricing rules: those where the price charged by the sellers changes and those where it does not. For both, we found that, given the parameters of the system, the binary pricing case would reduce to a case of homogeneous pricing with certain parameters. We found that under any punishment rule in discrete pricing, a seller giving only high-quality goods would never lose buyers, and that when all other sellers give only low quality, a seller giving high quality, even sporadically, would be preferred by all the buyers. These two conditions do not hold in the case of continuous pricing, where the price charged by a seller changes even with small changes in the seller's public perception. When continuous pricing is adopted, the behaviour of buyers is quite different from the behaviour observed in discrete pricing. A seller with a poorer rating may be preferred by certain buyers over a seller with a higher rating, as the former's price would be lower. As a result of this behaviour, a straightforward analysis of equilibrium behaviour is not possible for continuous pricing. \section{Future Work } While the work presented in this thesis aims to be as comprehensive and cover as many practical aspects as possible, there still remains scope for further work. \subsection*{Heterogeneous Buyers} All buyers have so far been assumed to be homogeneous. Each buyer buys the same quantity of the good in each round, and has the same value for the good. While the case of each buyer having non-identical but consistent demand for the good can be reduced to the model studied, with each buyer corresponding to as many pseudo-buyers as the units of the good she demands, other cases cannot be captured. The case of buyers having inconsistent demand across rounds is interesting in itself. Further, when buyers do not have the same valuation for the goods, defining a threshold for the threshold punishment model itself becomes an interesting question.
Also, when buyers do not have identical valuations, it is not necessary that $v_H-p_H$ be greater than $v_L-p_L$. The analysis gets further complicated when this relation holds for a fraction of the buyers and does not for the others. The assumption that all buyers look identical to sellers need not itself be true. Certain buyers may be more influential than others, and consequently, sellers may try to sustain their costs by giving high quality to important buyers and low quality to less important ones. Thus, buyers being heterogeneous is an intriguing direction of future work. \subsection*{Economies of Scale and Seller Capacities} The model presented in this thesis assumes that each seller can produce as many goods as requested, and that the cost of producing each good is fixed. This is not consistent with commonly studied economic models. Microeconomic analysis commonly studies the case of {\em Economies of Scale}, where the cost of producing each additional unit of a good initially decreases and then becomes constant or increases. This case is demonstrated in Figure 6.1. This variability is not captured in the model discussed, but is essential to completely capture real-world settings. \begin{figure} \caption{Economies of Scale \cite{EoS}} \label{fig:eos} \end{figure} \noindent Just as costs are not fixed, neither is the additional profit from each unit of good sold. The famous ``Law of Diminishing Returns'' states that there is always a point beyond which it is no longer profitable to sell an additional unit of the good. Thus, for each seller selling a particular good, there is a {\em capacity} beyond which it is not profitable for the seller to sell. Incorporating such capacities makes the selection of a seller by a buyer less straightforward, giving scope for changes to the model presented. Also, the purpose of the analysis presented was to maximize the high-quality offerings, without any regard to the revenue generated.
A study of a setting which aims to achieve an optimal tradeoff between the two will be relevant. \subsection*{Uncertainty in the Quality Produced} Throughout the analysis we assume that a seller has complete control over the quality delivered to the buyer. The model presented does not take into consideration possibilities of error that are beyond the control of the seller. Such an error may stem from manufacturing defects or from a mistake on the part of the logistics provider. However, if the model were to recognize the possibility of non-strategic reasons behind a buyer receiving low quality, the punishment models would need to be modified. While the threshold punishment model does not need any further restructuring to address such a case, the local punishment model may be unnecessarily harsh for this setting. Punishing every individual infraction would not be appropriate. One possible punishment method would be to punish the seller according to the probability of receiving high quality when the seller intends to produce high quality. Under such a punishment rule, the analysis would change drastically, even for cases that presently seem to be relatively straightforward. \subsection*{Differing Initial Perceptions} The analysis of binary pricing rules assumes that a buyer has the same initial perception of all sellers, irrespective of the price they are selling at; this need not be true. Logically, a seller selling a good at a higher price should guarantee better quality. Thus, differing values of $\xi$ for different price brackets, with buyers further having different values of $\xi$ amongst each other, would be worth studying. The assumption of the quality of a product being one of two states is itself somewhat harsh. The quality of a product may be one of many discrete states, or may even belong to a continuous interval. For example, the duration before the product wears out is a good indication of its quality.
In such a setting, valuations will also change from a set of values to a function, giving scope for further work in this direction. \subsection*{Learning Mechanisms} The work discussed in this thesis is purely analytical. We assume that the parameters of the system are correctly known by all the agents, and that agents even know their own private values correctly. The latter need not be true in the case of sellers. A seller need not accurately know the quality that he is capable of providing, and further need not know the price that he should be charging. In such a case, it would be appropriate to have a learning mechanism which, through the feedback provided by the buyers, aims to learn the quality of the sellers and accordingly elicits a price corresponding to each seller. Such a mechanism can be implemented by means of a smart contract on the platform, in conjunction with the regulation protocols, so as to maintain the anonymity requirements. As a step towards achieving such a mechanism, a mechanism design formulation of the problem studied in this thesis would be required. \subsection*{Decentralizing the Software Entity} Currently, the software entity constitutes a single point of centralization in the monitoring protocol. This does not fit very well within the decentralized paradigm intended by blockchains. A significant improvement would be to have a distributed manner of matching buyers with monitors without leaking the identity of the buyers to the monitors. Eliminating the need for zero-knowledge proofs in this system would be another possible line of future work. \subsection*{Implementation} The implementation of a B2B market platform, as studied in this thesis, is of independent interest. As discussed, the implementation is very much feasible on currently available blockchain technology, such as Hyperledger Fabric. It is, however, non-trivial.
While private channels between a buyer and a seller are well established on such platforms, providing for interoperable smart contracts is not easy to accomplish. Further, implementing MPC protocols in a blockchain-based framework is a novel contribution in itself. The implementation using permissioned blockchains to deploy smart contracts that compute reliable ratings and incentivize sellers to provide high quality is necessary to bring the work presented in this thesis to completion. This, we believe, is an important direction for translating this work into a real-world system. \end{document}
\begin{document} \title{Maximal ideals of generalized summing linear operators} \author{Geraldo Botelho\thanks{Supported by CNPq Grant 304262/2018-8 and Fapemig Grant PPM-00450-17.}\,, Jamilson R. Campos and Lucas Nascimento\thanks{Supported by a CAPES scholarship. \newline 2020 Mathematics Subject Classification: 47L20, 46J20, 46B45, 47B37, 46B10.\newline Keywords: Tensor norms, operator ideals, summing operators, sequence spaces.}} \date{} \maketitle \begin{abstract} We determine when a Banach ideal of linear operators defined, or characterized, by the transformation of vector-valued sequences is maximal. Known results are recovered as particular cases and new information is obtained. To accomplish this task we study a tensor quasi-norm determined by the underlying sequence classes. The duality theory for these tensor quasi-norms is also developed. \end{abstract} \section{Introduction} The theory of operator ideals is central in modern mathematical analysis (see \cite{handbook}, \cite[6.3]{history}) and, in this context, maximal ideals play a key role. For recent developments on maximal operator ideals, see, e.g., \cite{samuel, kim, J. A. Lopez Molina.tres, turcovillafane}. A number of important operator ideals are defined, or characterized, by the transformation of vector-valued sequences; and some of these ideals are known to be maximal. A unifying approach to this kind of operator ideals was proposed in \cite{G. Botelho} using the concept of {\it sequence classes}. For sequence classes $X$ and $Y$, a linear operator $T \colon E \longrightarrow F$ is $(X;Y)$-summing if $(T(x_j))_{j=1}^\infty \in Y(F)$ whenever $(x_j)_{j=1}^\infty \in X(E)$. The Banach operator ideal of such operators is denoted by ${\cal L}_{X;Y}$. This approach has proved to be quite fruitful, see \cite{achour2018, complut, Jamilson.dual, espaco.mid, G. Botelho and D. Freitas, raquel, J.R.Campos.J.Santos, J. Ribeiro and F. Santos, J. Ribeiro and F. Santos.dois}.
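To fix ideas, recall that the weak $\ell_p$-norm of a sequence $(x_j)_{j=1}^\infty$ in a Banach space $E$ is $\|(x_j)_{j=1}^\infty\|_{w,p} = \sup\limits_{\varphi \in B_{E^*}} \big(\sum_{j=1}^\infty |\varphi(x_j)|^p\big)^{1/p}$. Taking $X = \ell_p^w(\cdot)$ and $Y = \ell_p(\cdot)$, an operator $T$ is $(X;Y)$-summing exactly when $$\sum_{j=1}^{\infty}\|T(x_j)\|^p < \infty \ \mbox{ whenever } \ \sup_{\varphi \in B_{E^*}}\sum_{j=1}^{\infty}|\varphi(x_j)|^p < \infty,$$ that is, ${\cal L}_{\ell_p^w(\cdot);\ell_p(\cdot)}$ is the classical ideal $\Pi_p$ of absolutely $p$-summing operators (see \cite{J.Diestel}).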
The purpose of this paper is to study the maximality of these Banach operator ideals. Generalizing some well known cases, we find conditions on the sequence classes $X$ and $Y$ so that the Banach operator ideal ${\cal L}_{X;Y}$ is maximal. Following the long tradition of the interplay between operator ideals and tensor norms, which comes from Grothendieck's seminal works and persists to this day (see recent developments in \cite{achour2018, D.Achour, sheldon, maite, maite2020, kim2020, J. A. Lopez Molina, J. A. Lopez Molina.tres, lopezmolina2019, miguel}), we prove our main results defining, developing and applying a tensor quasi-norm $\alpha_{X;Y}$ determined by the sequence classes $X$ and $Y$. The tensor quasi-norms $\alpha_{X;Y}$ can be regarded as generalizations of the classical Chevet-Saphar tensor norms (see \cite{A.Defant, R. Ryan}). In Section 2 we define a tensor quasi-norm associated to the underlying sequence classes and apply it to give conditions so that the corresponding ideal of summing operators is maximal. Known results are recovered as particular instances and new concrete information is obtained. The duality theory of the tensor quasi-norm $\alpha_{X;Y}$ associated to the sequence classes $X$ and $Y$ is developed in Section 3. For Banach spaces $E$ and $F$ we describe the continuous linear functionals on $E \otimes_{\alpha_{X;Y}} F$ as linear operators from $E$ to $F^*$ and as continuous bilinear forms on $E \times F$. As a byproduct we show when the tensor quasi-norm $\alpha_{X;Y}$ satisfies a condition that is equivalent to maximality in the case of operator ideals associated to finitely generated tensor norms. For operator ideals we refer to \cite{A.Defant, A.Pietsch}, for the interplay between tensor norms and operator ideals to \cite{A.Defant, R. Ryan}, for the theory of absolutely summing operators to \cite{J.Diestel} and for quasi-norms and quasi-normed spaces to \cite{N. J. Kalton 2}.
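To give the flavor of the construction, we anticipate a simple instance of the quasi-norm defined in Section 2: for $1 < p < \infty$ and $\frac{1}{p} + \frac{1}{p^*} = 1$, the choice $X = \ell_p^w(\cdot)$ and $Y = \ell_{p^*}(\cdot)$ yields, for $u \in E \otimes F$, $$\alpha_{\ell_p^w(\cdot),\ell_{p^*}(\cdot)}(u)= \inf\left\lbrace \left\|(x_{j})_{j=1}^{n} \right\|_{w,p} \cdot \left\|(y_{j})_{j=1}^{n} \right\|_{p^*} : u=\sum_{j=1}^{n}x_{j}\otimes y_{j} \right\rbrace,$$ an expression of Chevet-Saphar type (cf. \cite[Chapter 6]{R. Ryan}).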
Banach spaces over $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$ shall be denoted by $E$ and $F$. The closed unit ball of $E$ is denoted by $B_E$ and its topological dual by $E^*$. The symbol $E \stackrel{1}{\hookrightarrow} F$ means that $E$ is a linear subspace of $F$ and $\|x\|_F \leq \|x\|_E$ for every $x \in E$; and $E\stackrel{1}{=}F$ means that $E$ is isometrically isomorphic to $F$. By $L(E;F)$ we denote the space of all linear operators from $E$ to $F$ and by $\mathcal{L}(E;F)$ the Banach space of all continuous linear operators $T:E\longrightarrow F$ endowed with the usual sup norm. The same notation will be used if $E$ and $F$ are quasi-normed spaces. For $x \in E$ and $j \in \mathbb{N}$, the symbol $x\cdot e_j$ denotes the sequence $(0,\ldots, 0,x,0, 0,\ldots ) \in E^\mathbb{N}$, where $x$ is placed at the $j$-th coordinate. The symbol $(x_{j})_{j=1}^{n}$, where $x_1, \ldots, x_n \in E$, stands for the sequence $(x_{1},x_{2},\ldots,x_{n},0,0,\ldots) \in E^\mathbb{N}$. According to \cite{G. Botelho}, a {\it sequence class} is a rule $X$ that assigns to each Banach space $E$ a Banach space $X(E)$ of $E$-valued sequences, that is, $X(E)$ is a vector subspace of $E^{\mathbb{N}}$ with the coordinatewise operations, such that:\\ (i) $c_{00}(E) \subseteq X(E) \stackrel{1}{\hookrightarrow} \ell_\infty(E)$ for every Banach space $E$,\\ (ii) $\|x \cdot e_j\|_{X(E)}= \|x\|_E$ for every Banach space $E$, every $x \in E$ and every $j \in \mathbb{N}$. To avoid ambiguity, we shall sometimes denote the sequence class $X$ by $X(\cdot)$. Given sequence classes $X$ and $Y$, we say that an operator $T \in {\cal L}(E;F)$ is {\it $(X;Y)$-summing} if $(T(x_j))_{j=1}^\infty \in Y(F)$ whenever $(x_j)_{j=1}^\infty \in X(E)$.
In this case, the induced linear operator $$\widehat{T} \colon X(E) \longrightarrow Y(F)~,~\widehat{T}\left( (x_j)_{j=1}^\infty \right) = \left(T(x_j)\right)_{j=1}^\infty ,$$ is continuous and $$\|T \|_{X;Y} : = \|\widehat{T}\| $$ is a norm that makes the space ${\cal L}_{X;Y}(E;F)$ of $(X;Y)$-summing operators a Banach space. Whenever we refer to ${\cal L}_{X;Y}(E;F)$ we assume that it is endowed with the norm $\|\cdot \|_{X;Y}$. A sequence class $X$ is {\it linearly stable} if, regardless of the Banach spaces $E$ and $F$, every operator $T \in {\cal L}(E;F)$ is $(X;X)$-summing and $\|T\|_{X;X} = \|T\|$, that is, $\mathcal{L}_{X;X}(E;F)\stackrel{1}{=} \mathcal{L}(E;F)$. If the sequence classes $X$ and $Y$ are linearly stable and $X(\mathbb{K}) \stackrel{1}{\hookrightarrow} Y(\mathbb{K})$, then $\mathcal{L}_{X;Y}$ is a Banach operator ideal \cite[Theorem 3.6]{G. Botelho}. \begin{example}\label{exsec}\rm Let $p \geq 1$. The following are well known linearly stable sequence classes, endowed with their usual norms: \\ $\bullet$ The class $E \mapsto c_0(E)$ of norm null sequences.\\ $\bullet$ The class $E \mapsto \ell_\infty(E)$ of bounded sequences.\\ $\bullet$ The class $E \mapsto \ell_p(E)$ of absolutely $p$-summable sequences.\\ $\bullet$ The class $E \mapsto \ell_p^w(E)$ of weakly $p$-summable sequences.\\ $\bullet$ The class $E \mapsto \ell_p\langle E \rangle$ of Cohen strongly $p$-summable sequences.\\ $\bullet$ The class $E \mapsto {\rm Rad}(E)$ of almost unconditionally summable sequences.
Consider $${\rm RAD}(E) : = \left\{(x_{j})_{j=1}^{\infty} \in E^{\mathbb{N}} : \|(x_{j})_{j=1}^{\infty}\|_{{\rm RAD}(E)} := \sup\limits_{k} \|(x_{j})_{j=1}^{k}\|_{{\rm Rad}(E)} < \infty\right\}$$ (see \cite{J.Diestel, tarieladze}), $$\ell_p^{\rm mid}(E) := \left\{(x_{j})_{j=1}^{\infty} \in E^{\mathbb{N}} : \|(x_{j})_{j=1}^{\infty}\|_{{\rm mid},p} := \sup\limits_{(\varphi_n)_{n=1}^\infty \in B_{\ell_p^w(E^*)}} \left(\sum_{j,n = 1}^\infty |\varphi_n(x_j)|^p \right)^{1/p} < \infty\right\},$$ and the closed subspace $\ell_p^u(E)$ of $\ell_p^w(E)$ formed by unconditionally $p$-summable sequences, that is $$\ell_p^u(E) = \left\{(x_j)_{j=1}^\infty \in \ell_p^w(E) : \lim\limits_k \|(x_j)_{j=k}^\infty\|_{w,p} = 0\right\} $$ (see \cite{A.Defant}). Then the correspondences $E \mapsto {\rm RAD}(E)$, $E \mapsto \ell_p^{\rm mid}(E)$ and $E \mapsto \ell_p^u(E)$ are also linearly stable sequence classes. \end{example} The dual of a sequence class $X$ was introduced in \cite{Jamilson.dual} in the following fashion: \begin{equation*} X^{\rm dual}(E) = \left\{(x_j)_{j=1}^\infty \in E^{\mathbb{N}} : \sum_{j=1}^\infty \varphi_j(x_j) \text{ converges for every } (\varphi_j)_{j=1}^\infty \in X(E^*)\right\}. \end{equation*} A sequence class $X$ is {\it spherically complete} if $(\lambda_jx_j)_{j=1}^\infty \in X(E)$ and $\|(\lambda_jx_j)_{j=1}^\infty \|_{X(E)} = \|(x_j)_{j=1}^\infty\|_{X(E)}$ whenever $(x_j)_{j=1}^\infty \in X(E)$ and $(\lambda_j)_{j=1}^\infty \in \mathbb{K}^\mathbb{N}$ is such that $|\lambda_j| = 1$ for every $j$. For example, the sequence classes $c_0(\cdot), \ell_\infty(\cdot), {\rm RAD}(\cdot), \ell_p(\cdot), \ell_p^w(\cdot), \ell_p\langle \cdot \rangle, \ell_p^u(\cdot), \ell_p^{\rm mid}(\cdot), 1 \leq p < \infty$, are spherically complete. Let $X$ be a linearly stable and spherically complete sequence class.
In \cite{Jamilson.dual} it is proved that the expression \begin{equation*} \left\|(x_{j})_{j=1}^{\infty} \right\|_{X^{\rm dual}}:= \sup_{(\varphi_{j})_{j=1}^{\infty}\in B_{X(E^*)} }\sum_{j=1}^{\infty}\left|\varphi_{j}(x_{j}) \right| = \sup_{(\varphi_{j})_{j=1}^{\infty}\in B_{X(E^*)} }\left|\sum_{j=1}^{\infty}\varphi_{j}(x_{j}) \right| \end{equation*} makes $X^{\rm dual}(E)$ a Banach space and $X^{\rm dual}(\cdot)$ a linearly stable spherically complete sequence class. Conditions on $X$ so that $X^{\rm dual}(E^*)$ is canonically isometrically isomorphic to $X(E)^*$ are also given in \cite{Jamilson.dual}. For example, for $1 \leq p \leq \infty$ and $\frac{1}{p} + \frac{1}{p^*} = 1$, $(\ell_p^w)^{\rm dual} = \ell_{p^*}\langle \cdot \rangle$ and $(\ell_p)^{\rm dual} = \ell_{p^*}( \cdot )$ (the case $p = \infty$ of this last equality is somewhat surprising). \section{Maximal ideals} The purpose of this section is to find conditions on $X$ and $Y$ so that ${\cal L}_{X;Y}$ is a maximal Banach operator ideal. Oddly enough, we begin by giving plenty of counterexamples. \begin{example}\label{exnew}\rm For $1 \leq p < \infty$, by ${\cal C}_p$ we denote the ideal of $p$-converging operators (see, e.g., \cite{chen}), that is, the operators that send weakly $p$-summable sequences to norm null sequences. In our notation, ${\cal C}_p = {\cal L}_{\ell_p^w(\cdot); c_0(\cdot)}$. The case $p = 1$ recovers the ideal of unconditionally summing operators from \cite[1.7.1]{A.Pietsch}. None of these ideals is maximal, see, e.g., \cite[Theorem 2.7]{chen}. In the same fashion, the classical ideal ${\cal CC} : = {\cal L}_{c_0^w(\cdot); c_0(\cdot)}$ of completely continuous operators is not maximal. \end{example} Since $c_0$ is not a dual space, there is no sequence class $Y$ such that $Y^{\rm dual} = c_0(\cdot)$. Our guess is that this is the fact behind the non-maximality of the ideals in Example \ref{exnew}.
So, the search for maximality should be restricted to operator ideals of the form ${\cal L}_{X;Y^{\rm dual}}$. As announced, we shall do that by considering tensor quasi-norms. Of course we need ${\cal L}_{X;Y^{\rm dual}}$ to be a Banach operator ideal. So, according to the linear case of \cite[Theorem 3.6]{G. Botelho}, whenever we refer to ${\cal L}_{X;Y^{\rm dual}}$ in this section we assume that the sequence classes $X$ and $Y$ are linearly stable, $Y$ is spherically complete and $X(\mathbb{K}) \stackrel{1}{\hookrightarrow} Y^{\rm dual}(\mathbb{K})$. The linear cases of the characterizations proved in \cite[Proposition 2.4]{G. Botelho} shall be used without explicit reference. A careful look at the definition of reasonable tensor norm and at the proof of \cite[Proposition 6.1]{R. Ryan} makes the following definition quite natural. \begin{definition}\rm Let $\varepsilon$ be the injective tensor norm. For Banach spaces $E$ and $F$, a quasi-norm $\alpha$ on $E\otimes F$ is said to be {\it reasonable} if $\varepsilon \leq \alpha$ and $ \alpha(x \otimes y) \le \|x\|\cdot\|y\|$ for all $x \in E$ and $y \in F$. \end{definition} Let $X$ and $Y$ be sequence classes. For Banach spaces $E$ and $F$, consider the map $\alpha_{X,Y} \colon E\otimes F\longrightarrow \mathbb{R}$ given by $$\alpha_{X,Y}^{}(u)= \inf\left\lbrace \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)} \cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)} : u=\sum_{j=1}^{n}x_{j}\otimes y_{j} \right\rbrace. $$ Only one condition on the sequence classes $X$ and $Y$ is needed for $\alpha_{X,Y}$ to be a reasonable quasi-norm. The notion we define now is considerably weaker than the related ones that can be found in the literature (see, e.g., \cite{Botelhojlucas, argentinos}).
A sequence class $X$ is \emph{monotone} if for every Banach space $E$ and all $m,n \in \mathbb{N}$ and $x_1, \ldots, x_n \in E$, the following holds: $$\|(\underbrace{0,0, \ldots, 0}_{m \rm{\,times}}, x_1, \ldots, x_n, 0,0, \ldots)\|_{X(E)} = \|(x_1, \ldots, x_n, 0,0,\ldots)\|_{X(E)}. $$ All sequence classes in Example \ref{exsec} are monotone. \begin{proposition}\label{razoavel} If $X$ and $Y$ are monotone sequence classes and $\varepsilon\leq \alpha_{X,Y}$, then $\alpha_{X,Y}^{}$ is a reasonable quasi-norm on $E\otimes F$ for all $E$ and $F$. \end{proposition} \begin{proof} Let us show that, for all Banach spaces $E$ and $F$ and all $u_{1},u_{2}\in E\otimes F$ $$\alpha_{X,Y}^{}(u_{1}+u_{2})\leq 2\left( \alpha_{X,Y}^ {}(u_{1}) + \alpha_{X,Y}^{}(u_{2})\right).$$ Given $\eta>0$, choose representations $u_{i}=\sum\limits_{j=1}^{n}x_{ij}\otimes y_{ij}$ such that $$\left\|(x_{ij})_{j=1}^{n} \right\|_{X(E)}\leq \alpha_{X,Y}^{}(u_{i} ) + \eta \ \text{ and } \ \left\|(y_{ij})_{j=1}^{n} \right\|_{Y(F)}\leq 1, \, i=1,2.$$ So, $\sum\limits_{j=1}^{n}x_{1j}\otimes y_{1j}+ \sum\limits_{j=1}^{n}x_{2j} \otimes y_{2j}$ is a representation of $u_{1}+ u_{2}$. 
Using that $X$ and $Y$ are monotone, we get \begin{align*} \alpha_{X,Y}^{}(u_{1}+u_{2}) &\leq\|(x_{11}, \ldots, x_{1n}, x_{21}, \ldots, x_{2n},0,0,\ldots) \|_{X(E)}\cdot \| (y_{11}, \ldots, y_{1n}, y_{21}, \ldots, y_{2n},0,0,\ldots) \|_{Y(F)}\\ & \leq \left(\|(x_{11}, \ldots, x_{1n}, 0,0,\ldots) \|_{X(E)}+ \|(0, \ldots, 0, x_{21}, \ldots, x_{2n},0,0,\ldots) \|_{X(E)}\right)\cdot \\ & ~~~\cdot \left(\|(y_{11}, \ldots, y_{1n}, 0,0,\ldots) \|_{Y(F)}+ \|(0, \ldots, 0, y_{21}, \ldots, y_{2n},0,0,\ldots) \|_{Y(F)}\right)\\ & = \left(\|(x_{11}, \ldots, x_{1n}, 0,0,\ldots) \|_{X(E)}+ \|(x_{21}, \ldots, x_{2n},0,0,\ldots) \|_{X(E)}\right)\cdot \\ & ~~~\cdot \left(\|(y_{11}, \ldots, y_{1n}, 0,0,\ldots) \|_{Y(F)}+ \|(y_{21}, \ldots, y_{2n},0,0,\ldots) \|_{Y(F)}\right)\\ &\leq 2\left(\alpha_{X,Y}^{}(u_{1}) + \alpha_{X,Y}^{}(u_{2}) + 2\eta \right). \end{align*} The desired inequality follows by making $\eta \longrightarrow 0^+$. The other facts either follow from the definition of sequence class or are straightforward. \end{proof} Given a (not necessarily continuous) linear operator $T \colon E \longrightarrow F$, it is clear that $$A_{T} \colon E \times F^*\longrightarrow \mathbb{K}~,~A_{T}(x,\psi)=\psi (T(x)),$$ is a (not necessarily continuous) bilinear form. Calling $\varphi_T$ the linearization of $A_T$, we have a linear functional $\varphi_{T}\colon E\otimes F^*\longrightarrow \mathbb{K}$ satisfying $$\varphi_{T}\left(\sum_{j=1}^{n}x_{j}\otimes \psi_{j} \right)= \sum_{j=1}^{n}\psi_{j}(T(x_{j}))$$ for every $\sum\limits_{j=1}^{n}x_{j}\otimes \psi_{j} \in E \otimes F^*$. Of course, the map $T \mapsto \varphi_T$ is linear. To proceed we need to recall one more definition from \cite{G. Botelho}.
A sequence class $X$ is said to be:\\ $\bullet$ \emph{Finitely determined} if for every sequence $(x_j)_{j=1}^\infty \in E^{\mathbb{N}}$, $(x_j)_{j=1}^\infty \in X(E)$ if and only if $\displaystyle\sup_k \left\|(x_j)_{j=1}^k \right\|_{X(E)} < +\infty$ and, in this case, $\left\|(x_j)_{j=1}^\infty \right\|_{X(E)} = \sup_k \left\|(x_j)_{j=1}^k \right\|_{X(E)}. $\\ $\bullet$ {\it Finitely dominated} if there is a finitely determined sequence class $Y$ such that, for every Banach space $E$, $X(E)$ is a closed subspace of $Y(E)$ and one of the following conditions holds:\\ (i) For every sequence $(x_j)_{j=1}^\infty \in Y(E)$, $(x_j)_{j=1}^\infty \in X(E)$ if and only if $\displaystyle\lim_k \|(x_j)_{j=k}^\infty\|_{Y(E)} = 0$.\\ (ii) For every sequence $(x_j)_{j=1}^\infty \in Y(E)$, $(x_j)_{j=1}^\infty \in X(E)$ if and only if $\displaystyle\lim_{k,l} \|(x_j)_{j=k}^l\|_{Y(E)} = 0$. For example, for $1 \leq p < \infty$, the classes $\ell_p(\cdot), \ell_p^w(\cdot), \ell_p\langle \cdot \rangle, \ell_p^{\rm mid}(\cdot), \ell_\infty(\cdot)$ and ${\rm RAD}(\cdot)$ are finitely determined; $c_0(\cdot)$ is finitely dominated by $\ell_\infty(\cdot)$, $\ell_p^u(\cdot)$ is finitely dominated by $\ell_p^w(\cdot)$ and ${\rm Rad}(\cdot)$ is finitely dominated by ${\rm RAD}(\cdot)$. Moreover, the dual $Y^{\rm dual}$ of a sequence class $Y$ is always finitely determined, even if $Y$ is not. 
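A simple example shows why $c_0(\cdot)$ fails to be finitely determined, although it is finitely dominated: for $0 \neq x \in E$, the constant sequence $(x,x,x,\ldots)$ satisfies $$\sup_k \left\|(x, \ldots, x, 0,0,\ldots)\right\|_{c_0(E)} = \|x\|_E < +\infty,$$ yet $(x,x,x,\ldots) \notin c_0(E)$. It is condition (i), with dominating class $\ell_\infty(\cdot)$, that captures $c_0(\cdot)$, since $\lim\limits_k \|(x_j)_{j=k}^\infty\|_{\ell_\infty(E)} = 0$ means precisely that $x_j \longrightarrow 0$ in $E$.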
\begin{lemma}\label{1lema.ideal.maximal.qn} Let $T\in \mathcal{L}(E;F)$ and suppose that $\alpha_{X,Y}^{}$ is a reasonable quasi-norm.\\ {\rm (a)} If $T\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$, then $\varphi_{T}\colon E\otimes_{\alpha_{X,Y}^{}} F^*\longrightarrow \mathbb{K}$ is continuous and $\left\|\varphi_{T} \right\| \leq \left\|T \right\|_{X;Y^{\rm dual}}$.\\ {\rm (b)} If, in addition, $X$ and $Y$ are finitely determined or finitely dominated, then the functional $\varphi_{T} \colon E\otimes_{\alpha_{X,Y}^{}} F^*\longrightarrow \mathbb{K}$ is continuous if and only if $T\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$; and, in this case, $\left\|T \right\|_{X;Y^{\rm dual}}= \left\|\varphi_{T} \right\|$. \end{lemma} \begin{proof} (a) For every $u=\sum\limits_{j=1}^{n}x_{j}\otimes \psi_{j} \in E\otimes F^*$, we have \begin{equation*} \left|\varphi_{T}\left(u \right) \right|= \left|\sum_{j=1}^{n}\psi_{j}(T(x_{j})) \right| \leq \sum_{j=1}^{n}\left|\psi_{j}(T(x_{j})) \right| \leq \left\|T \right\|_{X;Y^{\rm dual}} \cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)} \cdot \left\|(\psi_{j})_{j=1}^{n} \right\|_{Y(F^*)}, \end{equation*} where the last inequality follows from a simple manipulation of the norm of $Y^{\rm dual}$. Taking the infimum over all representations of $u$ it follows that the functional $\varphi_{T}\colon E\otimes_{\alpha_{X,Y}^{}} F^*\longrightarrow \mathbb{K}$ is continuous with $\left\|\varphi_{T} \right\| \leq \left\|T \right\|_{X;Y^{\rm dual}}. $\\ (b) We prove the case that $X$ and $Y$ are finitely determined. 
Given $(x_j)_{j=1}^\infty \in X(E)$, for every $n \in \mathbb{N}$, \begin{align*} \left\| (T(x_{j}))_{j=1}^{n} \right\|_{Y^{\rm dual}(F)} &= \sup_{(\psi_{j})_{j=1}^{\infty}\in B_{Y(F^*)}}\left|\sum_{j=1}^{n}\psi_{j}(T(x_{j})) \right| = \sup_{(\psi_{j})_{j=1}^{\infty}\in B_{Y(F^*)}}\left|\varphi_{T}\left(\sum_{j=1}^{n}x_{j}\otimes \psi_{j} \right) \right| \\ &\leq \sup_{(\psi_{j})_{j=1}^{\infty}\in B_{Y(F^*)}}\left\|\varphi_{T} \right\|\cdot\alpha_{X,Y}^{}\left(\sum_{j=1}^{n}x_{j}\otimes \psi_{j} \right)\\ &\leq \left\|\varphi_{T} \right\| \cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \sup_{(\psi_{j})_{j=1}^{\infty}\in B_{Y(F^*)}}\left\|(\psi_{j})_{j=1}^{n} \right\|_{Y(F^*)}\\ &\leq \left\|\varphi_{T} \right\| \cdot \sup_k\left\|(x_{j})_{j=1}^{k} \right\|_{X(E)}\cdot \sup_{(\psi_{j})_{j=1}^{\infty}\in B_{Y(F^*)}}\sup_k\left\|(\psi_{j})_{j=1}^{k} \right\|_{Y(F^*)}\\ &= \left\|\varphi_{T} \right\| \cdot \left\|(x_{j})_{j=1}^{\infty} \right\|_{X(E)}\cdot \sup_{(\psi_{j})_{j=1}^{\infty}\in B_{Y(F^*)}}\left\|(\psi_{j})_{j=1}^{\infty} \right\|_{Y(F^*)}\\ &= \left\|\varphi_{T} \right\|\cdot \left\|(x_{j})_{j=1}^{\infty} \right\|_{X(E)}. \end{align*} Since $Y^{\rm dual}$ is also finitely determined, taking the supremum over $n$ we get $(T(x_{j}))_{j=1}^{\infty} \in Y^{\rm dual}(F)$ and $$\left\| (T(x_{j}))_{j=1}^{\infty} \right\|_{Y^{\rm dual}(F)} \leq \left\|\varphi_{T} \right\|\cdot \left\|(x_{j})_{j=1}^{\infty} \right\|_{X(E)}, $$ from which it follows that $T\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$ and $ \left\|T \right\|_{X;Y^{\rm dual}}\leq \left\|\varphi_{T} \right\|$. \end{proof} In the same fashion as norms on tensor products (see \cite[Section 6.1]{R.
Ryan}), we say that a reasonable quasi-norm $\alpha$ is {\it a tensor quasi-norm} if:\\ $\bullet$ $\alpha$ is uniform, that is, for all Banach spaces $E_1,E_2, F_1, F_2$ and all operators $T_i \in {\cal L}(E_i;F_i)$, $i = 1,2$, $$\|T_1 \otimes T_2 \colon E_1 \otimes_\alpha E_2 \longrightarrow F_1 \otimes_\alpha F_2\| \leq \|T_1\|\cdot \|T_2\|.$$ $\bullet$ $\alpha$ is finitely generated, that is, for all Banach spaces $E,F$ and any $u \in E \otimes F$, $$\alpha(u; E \otimes F) = \inf\left\{\alpha(u; M \otimes N) : u \in M \otimes N, M \in {\cal F}(E),N \in {\cal F}(F) \right\}, $$ where ${\cal F}(E)$ is the collection of all finite dimensional subspaces of $E$. Let us see that, in the environment of sequence classes, tensor quasi-norms are not rare. A sequence class $X$ is said to be {\it finitely injective} if $\|(x_j)_{j=1}^k\|_{X(E)} \leq \|(i(x_j))_{j=1}^k\|_{X(F)}$ whenever $i \colon E \longrightarrow F$ is a metric injection, $k \in \mathbb{N}$ and $x_1, \ldots, x_k \in E$. If $X$ is also linearly stable, then we actually have $\|(x_j)_{j=1}^k\|_{X(E)} = \|(i(x_j))_{j=1}^k\|_{X(F)}$. All sequence classes listed in Example \ref{exsec}, except $\ell_p\langle \cdot \rangle$, are finitely injective. \begin{proposition} \label{porpr} Let $X$ and $Y$ be sequence classes such that $\alpha_{X,Y}^{}$ is a reasonable quasi-norm. If $X$ and $Y$ are linearly stable and finitely injective, then $\alpha_{X,Y}^{}$ is a tensor quasi-norm.
\end{proposition} \begin{proof} Given $T_{i}\in \mathcal{L}(E_{i};F_{i})$, $i=1,2$, $u= \sum\limits_{j=1}^{n}x_{j}\otimes y_{j}\in E_1\otimes_{\alpha_{X,Y}^{}}E_2$, the linear stability of $X$ and $Y$ gives \begin{align*} \alpha_{X,Y}^{}\left(T_{1}\otimes T_{2}(u) \right)& =\alpha_{X,Y} \left(\sum_{j=1}^n T_1(x_j) \otimes T_2(y_j) \right) \\ &\leq \left\|(T_{1}(x_{j}))_{j=1}^{n} \right\|_{X(F_{1})}\cdot \left\|(T_{2}(y_{j}))_{j=1}^{n} \right\|_{Y(F_{2})}\\ &\leq \left\|T_{1} \right\|\cdot \left\|T_{2} \right\| \cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E_{1})}\cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(E_{2})}. \end{align*} Since this holds for every representation of $u$, it follows that $\alpha_{X,Y}^{}\left(T_{1}\otimes T_{2}(u) \right)\leq \left\|T_{1} \right\|\cdot \left\|T_{2} \right\|\cdot \alpha_{X,Y}^{}(u)$, so $\left\|T_{1}\otimes T_{2} \right\| \leq \left\| T_{1}\right\| \cdot \left\| T_{2}\right\|.$ This proves that $\alpha_{X,Y}$ is uniform. Let $u\in E\otimes F$ be given. Given $\eta>0$, we can take a representation $u = \sum\limits_{j=1}^{n}x_{j}\otimes y_{j}$ so that $$\left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot\left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)} \leq \alpha_{X,Y}^{}(u; E\otimes F) + \eta. $$ Taking $M= {\rm span}\{x_{1},\ldots,x_{n}\}$ and $N= {\rm span}\{y_{1},\ldots,y_{n}\}$, we have $u\in M\otimes N$ and $\alpha_{X,Y}^{}(u;E\otimes F)\leq \alpha_{X,Y}^{}(u;M\otimes N)$ because $\alpha_{X,Y}^{}$ is uniform. Moreover, the finite injectivity of $X$ and $Y$ yields \begin{align*} \alpha_{X,Y}^{}(u;M\otimes N)&\leq \left\|(x_{j})_{j=1}^{n} \right\|_{X(M)} \cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(N)} \\ &\leq\left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)} \leq \alpha_{X,Y}^{}(u; E\otimes F) + \eta.
\end{align*} It follows that $\alpha_{X,Y}^{}(u;E\otimes F)= \inf\left\lbrace \alpha_{X,Y}^{}(u;M\otimes N) : u\in M\otimes N, M \in {\cal F}(E), N \in {\cal F}(F) \right\rbrace$, proving that $\alpha_{X,Y}$ is finitely generated. \end{proof} Recall that a Banach operator ideal $[{\cal I}, \|\cdot\|_{\cal I}]$ is {\it maximal} (see \cite[p.\,197]{R. Ryan}) if it is the only Banach operator ideal $[{\cal J}, \|\cdot\|_{\cal J}]$ satisfying:\\ (i) ${\cal I}(E;F) \subseteq {\cal J}(E;F)$ for all Banach spaces $E$ and $F$ and $\|u\|_{\cal J} \leq \|u\|_{\cal I}$ for every $u \in {\cal I}(E;F)$, and\\ (ii) $\|u\|_{\cal J} = \|u\|_{\cal I}$ for every finite rank operator. We denote by $\mathcal{CF}(E)$ the collection of all finite codimensional subspaces of $E$. For $M \in \mathcal{F}(E)$ we denote by $I_M \colon M \longrightarrow E$ the inclusion operator and for $L \in \mathcal{CF}(F)$ by $Q_L \colon F \longrightarrow F/L$ the quotient operator. \begin{theorem}\label{maxim} Suppose that $\alpha_{X,Y}$ is a tensor quasi-norm and that $X$ and $Y$ are finitely determined or finitely dominated. For an operator $T\in \mathcal{L}(E;F)$, $T\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$ if and only if $$s: = \sup\left\{\left\| Q_{L}\circ T \circ I_{M}\right\|_{X;Y^{\rm dual}} : (M,L)\in \mathcal{F}(E)\times \mathcal{CF}(F)\right\} < \infty,$$ and, in this case, $\left\|T \right\|_{X;Y^{\rm dual}} = s$. In particular, the Banach operator ideal $\mathcal{L}_{X;Y^{\rm dual}}$ is maximal. \end{theorem} \begin{proof} Suppose that $T\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$. Being a finite rank operator, each $Q_{L}\circ T \circ I_{M}$ belongs to $\mathcal{L}_{X;Y^{\rm dual}}(M;F/L)$.
The ideal inequality of the norm of $\mathcal{L}_{X;Y^{\rm dual}}$ gives \[\left\|Q_{L}\circ T \circ I_{M} \right\|_{X;Y^{\rm dual}}\leq \left\| Q_{L}\right\|\cdot \left\| T\right\|_{X;Y^{\rm dual}}\cdot \left\| I_{M}\right\| =\left\| T\right\|_{X;Y^{\rm dual}},\] which proves that $s\leq \left\|T \right\|_{X;Y^{\rm dual}} < \infty.$ Conversely, suppose that $s < \infty$. Let $u\in E\otimes F^*$ and $\eta>0$ be given. As $\alpha_{X,Y}^{}$ is finitely generated (Proposition \ref{porpr}), there are $M\in \mathcal{F}(E)$, $N\in \mathcal{F}(F^*)$ and a representation $u=\sum\limits_{j=1}^{n}x_{j}\otimes y_{j}^*\in M\otimes N$ such that \[\alpha_{X,Y}^{}\left(u; M\otimes N \right)\leq (1+\eta)\alpha_{X,Y}^{}\left(u ; E\otimes F^* \right).\] Let $L\in \mathcal{CF}(F)$ be such that $\left(F/L \right)^* \stackrel{1}{=}N$ by means of the operator $Q_{L}^*\colon \left(F/L \right)^* \longrightarrow N$. Choose functionals $\psi_{j}\in \left( F/L\right)^* $ such that $Q_{L}^*(\psi_{j})= y_{j}^*, j= 1,\ldots,n$. In the chain $$M \otimes_{\alpha_{X;Y}} N \stackrel{Id_M \otimes (Q_L^*)^{-1}}{\xrightarrow{\hspace*{2cm}}} M \otimes_{\alpha_{X;Y}}(F/L)^* \stackrel{\varphi_{Q_L \circ T \circ I_M}}{\xrightarrow{\hspace*{2cm}}} \mathbb{K}, $$ the operator $Id_M \otimes (Q_L^*)^{-1}$ is continuous because $\alpha_{X;Y}$ is uniform (Proposition \ref{porpr}), and the functional $\varphi_{Q_L \circ T \circ I_M}$ is continuous with $\left\|\varphi_{Q_{L}\circ T \circ I_{M}} \right\|\le \left\| Q_{L}\circ T \circ I_{M}\right\|_{X;Y^{\rm dual}} $ by Lemma \ref{1lema.ideal.maximal.qn} because $Q_{L}\circ T \circ I_{M}$ belongs to $\mathcal{L}_{X;Y^{\rm dual}}(M;F/L)$. 
It follows that \begin{align*} |\varphi_T(u)| & = \left|\sum_{j=1}^n y_j^* ( T(x_j)) \right| = \left|\sum_{j=1}^n Q_L^*(\psi_j)(T(x_j))\right| \\ &= \left|\sum_{j=1}^n \psi_j(Q_L(T(x_j)))\right| = \left|\sum_{j=1}^n \varphi_{Q_L \circ T \circ I_M}\left(x_j \otimes \psi_j \right)\right| \\ & = \left|\sum_{j=1}^n [ \varphi_{Q_L \circ T \circ I_M} \circ (Id_M \otimes (Q_L^*)^{-1})]\left( x_j \otimes y_j^* \right)\right|\\ &= \left| [ \varphi_{Q_L \circ T \circ I_M} \circ (Id_M \otimes (Q_L^*)^{-1})]\left( \sum_{j=1}^n x_j \otimes y_j^* \right)\right| \\ &= \left| [ \varphi_{Q_L \circ T \circ I_M} \circ (Id_M \otimes (Q_L^*)^{-1})](u)\right|\\ &\leq \|\varphi_{Q_L \circ T \circ I_M}\|\cdot \|Id_M \otimes (Q_L^*)^{-1} \| \cdot \alpha_{X,Y}(u ; M \otimes N)\\ & \leq \left\| Q_{L}\circ T \circ I_{M}\right\|_{X;Y^{\rm dual}} \cdot \|Id_M \|\cdot \|(Q_L^*)^{-1} \| \cdot (1+\eta)\alpha_{X,Y}(u ; E \otimes F^*)\\ & \leq s \cdot (1+\eta)\alpha_{X,Y}(u ; E \otimes F^*). \end{align*} Making $\eta \longrightarrow 0^+$ we get $\left|\varphi_{T}(u) \right|\leq s\cdot \alpha_{X,Y}^{}\left(u \right)$, which implies the continuity of the functional $\varphi_{T}\colon E\otimes_{\alpha_{X,Y}^{}}F^*\longrightarrow \mathbb{K}$ and $\|\varphi_T\| \leq s$. Applying Lemma \ref{1lema.ideal.maximal.qn} once again, it follows that $T\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$ and $\left\|T \right\|_{X;Y^{\rm dual}}\le \left\|\varphi_{T} \right\|\leq s.$ This completes the proof of the first assertion. The second assertion follows from the first one combined with \cite[8.11]{R. Ryan} (see also \cite[Theorem 8.7.5]{A.Pietsch}). \end{proof} The next corollary is just a combination of the theorem above with \cite[Corollary 17.8(4)]{A.Defant}. \begin{corollary} Let $u \in {\cal L}(E;F)$ be given.
Under the assumptions of Theorem \ref{maxim} we have $u \in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$ if and only if $u^{**} \in \mathcal{L}_{X;Y^{\rm dual}}(E^{**};F^{**})$ and $\|u\|_{X;Y^{\rm dual}} = \|u^{**}\|_{X;Y^{\rm dual}}$. \end{corollary} \begin{examples}\rm (a) Theorem \ref{maxim} recovers the following well known facts.\\ $\bullet$ The Banach ideal of absolutely $(q,p)$-summing operators: $$\Pi_{q,p} := {\cal L}_{\ell_p^w(\cdot); \ell_q(\cdot)} = {\cal L}_{\ell_p^w(\cdot); [\ell_{q^*}(\cdot)]^{\rm dual}},$$ $1 \leq p \leq q < \infty$, is maximal \cite[Proposition 10.2]{J.Diestel}. In particular, the ideal $\Pi_p$ of absolutely $p$-summing operators is maximal.\\ $\bullet$ The Banach ideal of Cohen strongly $(q,p)$-summing operators: $${\cal D}_{q,p} := {\cal L}_{\ell_p(\cdot); \ell_q\langle\cdot \rangle} = {\cal L}_{\ell_p(\cdot); [\ell^w_{q^*}(\cdot )]^{\rm dual}},$$ $1 \leq p \leq q < \infty$, is maximal. Although we found no reference to quote, we believe this is a well known fact. \\ $\bullet$ The Banach ideal of cotype $q$ operators: $$\mathfrak{C}_q := {\cal L}_{{\rm RAD}(\cdot); \ell_q(\cdot)} = {\cal L}_{{\rm RAD}(\cdot); [\ell_{q^*}(\cdot)]^{\rm dual}},$$ $2 \leq q < \infty$, is maximal \cite[17.4]{A.Defant}.\\ {\rm (b)} Just to illustrate the new information that can be obtained from Theorem \ref{maxim} we mention that the Banach ideals $${\cal L}_{\ell_p^{\rm mid}(\cdot); \ell_q\langle\cdot \rangle} = {\cal L}_{\ell_p^{\rm mid}(\cdot); [\ell^w_{q^*}(\cdot )]^{\rm dual}} {\rm ~~and~~} {\cal L}_{\ell_p^{\rm mid}(\cdot); \ell_q (\cdot )} = {\cal L}_{\ell_p^{\rm mid}(\cdot); [\ell_{q^*}(\cdot )]^{\rm dual}},$$ which were studied in \cite{espaco.mid, J.R.Campos.J.Santos}, are maximal. \end{examples} \section{The dual of $E \otimes_{\alpha_{X,Y}}F$} As is usual in the case of tensor norms (see \cite{A.Defant, R.
Ryan}), for the tensor quasi-norm $\alpha_{X,Y}$ we describe the linear functionals on $E \otimes_{\alpha_{X,Y}}F $ as linear operators from $E$ to $F^*$ and as bilinear forms on $E \times F$. As a consequence we show when these tensor quasi-norms satisfy a condition that is equivalent to maximality of the corresponding operator ideal in the case of tensor norms. For the first part of this section, which comprises Theorems \ref{dualB} and \ref{Teo.prin.1}, we want $\alpha_{X,Y}$ to be a reasonable quasi-norm, so we will suppose that $X$ and $Y$ are monotone sequence classes and $\varepsilon \leq \alpha_{X,Y}$. To describe the linear functionals on $E \otimes_{\alpha_{X,Y}}F $ as bilinear forms we need one more concept introduced in \cite{G. Botelho}. A bilinear form $A \colon E \times F \longrightarrow \mathbb{K}$ is said to be $(X,Y;\ell_1)$-summing if $(A(x_j,y_j))_{j=1}^\infty \in \ell_1$ whenever $(x_j)_{j=1}^\infty \in X(E)$ and $(y_j)_{j=1}^\infty \in Y(F)$. The space ${\cal L}_{X,Y;\ell_1}(E,F;\mathbb{K})$ of all such bilinear forms is a Banach space under the norm $$\|A\|_{X,Y;\ell_1} = \sup\left\{\| (A(x_j,y_j))_{j=1}^\infty \|_1 : (x_j)_{j=1}^\infty \in B_{X(E)}, (y_j)_{j=1}^\infty \in B_{Y(F)}\right\}.$$ We also need a property that is neither weaker nor stronger than being spherically complete: a sequence class $X$ is said to be {\it finitely boundedly complete}, {\it FBC} for short, if for any Banach space $E$, every $n \in \mathbb{N}$ and all $(x_{j})_{j=1}^{n}\in X(E)$ and $(\lambda_{j})_{j=1}^{n}\in \ell_{\infty}$, it holds that $(\lambda_{j}x_{j})_{j=1}^{n}\in X(E)$ and $\left\|(\lambda_{j}x_{j})_{j=1}^{n} \right\|_{X(E)}\leq \left\|(\lambda_{j})_{j=1}^{n} \right\|_\infty\cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}. $ It is easy to check that the sequence classes $c_0(\cdot), \ell_\infty(\cdot), \ell_p(\cdot), \ell_p^w(\cdot), \ell_p^u(\cdot), \ell_p\langle \cdot \rangle, \ell_p^{\rm mid}(\cdot)$, $1 \leq p < \infty$, are FBC.
Kahane's contraction principle \cite[12.2]{J.Diestel} guarantees that ${\rm Rad}(\cdot)$ and ${\rm RAD}(\cdot)$ are FBC in the real case $\mathbb{K} = \mathbb{R}$. \begin{theorem}\label{dualB} Suppose that $X$ and $Y$ are finitely determined or finitely dominated and that one of them is FBC. Then, \[(E\otimes_{\alpha_{X,Y}^{}}F)^*\stackrel{1}{=} \mathcal{L}_{X,Y;\ell_{1}}(E,F;\mathbb{K}).\] \end{theorem} \begin{proof} Let us see that the map $\Psi \colon \mathcal{L}_{X,Y;\ell_{1}}(E,F;\mathbb{K})\longrightarrow (E\otimes_{\alpha_{X,Y}^{}}F)^*$ given by $$\Psi(A)\left(\sum_{j=1}^{n}x_{j}\otimes y_{j} \right)= \sum_{j=1}^{n}A(x_{j},y_{j}) $$ is a well defined bounded linear operator. First note that $\Psi(A)$ is the linearization of the bilinear form $A$, so it is a well defined linear functional on $E \otimes F$. To check its continuity with respect to $\alpha_{X,Y}$, note that, for $u=\sum\limits_{j=1}^{n}x_{j}\otimes y_{j}\in E\otimes F$, since $A\in \mathcal{L}_{X,Y;\ell_{1}}(E,F;\mathbb{K})$ we have \begin{align*} \left|\Psi(A)(u) \right| = \left|\sum_{j=1}^{n}A(x_{j},y_{j}) \right|\leq \sum_{j=1}^{n}\left|A(x_{j},y_{j}) \right| \leq \left\|A \right\|_{X,Y;\ell_{1}}\cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)}. \end{align*} Taking the infimum over all representations of $u$ it follows that $\left|\Psi(A)(u) \right|\leq \left\|A \right\|_{X,Y;\ell_{1}}\alpha_{X,Y}^{}(u)$, proving that $\Psi(A)$ is a bounded linear functional and $\left\|\Psi(A) \right\|\leq \left\|A \right\|_{X,Y;\ell_{1}}$. Now it is enough to prove that $\Psi$ is a surjective isometry. Given $\varphi\in (E\otimes_{\alpha_{X,Y}^{}}F)^*$, it is clear that $$A_{\varphi} \colon E\times F\longrightarrow \mathbb{K} : A_{\varphi}(x,y)= \varphi(x\otimes y)$$ is a bilinear form.
Taking $(x_{j})_{j=1}^{n}$ in $E$ and $(y_{j})_{j=1}^{n}$ in $F$, assuming $X$ is FBC (the other case is analogous), we get \begin{align*} \left\|(A_{\varphi}(x_{j},y_{j}))_{j=1}^{n} \right\|_{1} &= \sup_{(\lambda_{j})_{j=1}^{\infty}\in B_{\ell_{\infty}}}\left|\sum_{j=1}^{n}\lambda_{j} A_{\varphi}(x_{j},y_{j}) \right| = \sup_{(\lambda_{j})_{j=1}^{\infty}\in B_{\ell_{\infty}}}\left|\varphi\left( \sum_{j=1}^{n}(\lambda_{j}x_{j})\otimes y_{j}\right) \right|\\ &\leq \sup_{(\lambda_{j})_{j=1}^{\infty}\in B_{\ell_{\infty}}}\left\|\varphi \right\|\cdot \left\|(\lambda_{j}x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)}\\ &\leq \left\|\varphi \right\|\cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)}. \end{align*} Using that $X$ and $Y$ are finitely determined, we conclude that $A_{\varphi}\in \mathcal{L}_{X,Y;\ell_{1}}(E,F;\mathbb{K})$ and $\left\|A_{\varphi} \right\|_{X,Y;\ell_{1}}\leq \left\| \varphi \right\|$. A straightforward computation shows that $\Psi(A_{\varphi})= \varphi$ and completes the proof. \end{proof} Now we represent linear functionals on $E\otimes_{\alpha_{X,Y}^{}}F$ as linear operators from $E$ to $F^*$. \begin{theorem}\label{Teo.prin.1} Suppose that $X$ and $Y$ are finitely determined and $Y$ is spherically complete. Then, \[(E\otimes_{\alpha_{X,Y}^{}} F )^*\stackrel{1}{=} \mathcal{L}_{X;Y^{\rm dual}}(E;F^*).\] \end{theorem} \begin{proof} Given $T \in \mathcal{L}_{X;Y^{\rm dual}}(E;F^*)$, call $\Psi(T)$ the linearization of the bilinear form $$(x,y) \in E \times F \mapsto T(x)(y) \in \mathbb{K}.
$$ So, $\Psi(T)$ is a linear functional on $E \otimes F$ such that \[\Psi\left( T\right) \left( \sum_{j=1}^{n}x_{j}\otimes y_{j}\right) = \sum_{j=1}^{n}T\left( x_{j}\right) \left( y_{j}\right).\] To check its continuity with respect to $\alpha_{X,Y}$, note that for $u=\sum\limits_{j=1}^{n}x_{j}\otimes y_{j}\in E\otimes F$, denoting by $J_F \colon F \longrightarrow F^{**}$ the canonical embedding, \begin{align*} \left|\Psi(T)(u) \right|&\leq \sum_{j=1}^{n}\left|T(x_{j})(y_{j}) \right|= \sum_{j=1}^{n} \left| J_{F}(y_{j})(T(x_{j})) \right|\\ & \leq \left\|T \right\|_{X;Y^{\rm dual}} \cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \left\|(J_F(y_{j}))_{j=1}^{n} \right\|_{Y(F^{**})}\\ &\leq \left\|T \right\|_{X;Y^{\rm dual}} \cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}\cdot \left\|(y_{j})_{j=1}^{n} \right\|_{Y(F)}, \end{align*} where the last inequality follows from the linear stability of $Y$. Since this estimate holds for every representation of $u$, it follows that $\left|\Psi(T)(u) \right|\leq \left\|T \right\|_{X;Y^{\rm dual}}\cdot \alpha_{X,Y}^{}(u) $. Hence, $\Psi(T) \in (E\otimes_{\alpha_{X,Y}^{}} F )^*$, proving that $\Psi \colon \mathcal{L}_{X;Y^{\rm dual}}(E;F^*)\longrightarrow (E\otimes_{\alpha_{X,Y}^{}} F )^*$ is an (obviously linear) bounded operator with $\left\|\Psi(T) \right\|\leq \left\|T \right\|_{X;Y^{\rm dual}}. $ Recall that $\mathcal{L}_{X;Y^{\rm dual}}(E;F^*)$ is a Banach space because $Y$ is spherically complete. Now it is enough to show that $\Psi$ is a surjective isometry. For $\varphi\in (E\otimes_{\alpha_{X,Y}^{}}F)^*$, the map $$T_{\varphi}\colon E\longrightarrow F^*~,~T_{\varphi}(x)(y)= \varphi(x\otimes y)$$ is clearly a bounded linear operator.
Given $(x_{j})_{j=1}^{\infty} \in X(E)$, for every $n \in \mathbb{N}$, \begin{align*} \left\|\left( T_{\varphi}(x_{j})\right)_{j=1}^{n} \right\|_{Y^{\rm dual}(F^*)} = \sup_{(y_{j})_{j=1}^{\infty}\in B_{Y(F)}}\left|\sum_{j=1}^{n}\varphi(x_{j}\otimes y_{j})\right| \leq \left\|\varphi \right\|\cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)}. \end{align*} Since $X$ and $Y^{\rm dual}$ are finitely determined, taking the supremum over $n$ it follows that $T_{\varphi}\in \mathcal{L}_{X;Y^{\rm dual}}(E;F^*)$ and $\left\|T_{\varphi} \right\|_{X;Y^{\rm dual}} \leq \left\|\varphi \right\|$. The easily checked equality $\Psi(T_{\varphi})=\varphi$ completes the proof. \end{proof} It is well known (see, e.g., \cite[Ex. 17.2]{A.Defant}) that a normed operator ideal $\cal I$ is maximal if and only if there exists a finitely generated tensor norm $\alpha$ such that \begin{equation}\label{99999} {\cal I}(E;F)\stackrel{1}{=} (E \otimes_{\alpha}F^*)^*\cap \mathcal{L}(E,F) \end{equation} for all Banach spaces $E$ and $F$. We finish the paper establishing conditions under which the tensor quasi-norm $\alpha_{X,Y}^{}$ satisfies \eqref{99999}. Therefore, according to Propositions \ref{razoavel} and \ref{porpr}, we will assume that $X$ and $Y$ are linearly stable, monotone and finitely injective sequence classes and $\varepsilon \leq \alpha_{X,Y}$. \begin{theorem} Suppose that $X$ and $Y$ are finitely determined or finitely dominated and that $Y$ is spherically complete. Then, regardless of the Banach spaces $E$ and $F$, $$\mathcal{L}_{X;Y^{\rm dual}}(E;F)\stackrel{1}{=}(E\otimes_{\alpha_{X,Y}^{}}F^*)^*\cap \mathcal{L}(E;F).$$ \end{theorem} \begin{proof} The assumptions guarantee that $\mathcal{L}_{X;Y^{\rm dual}}(E;F)$ is a Banach space.
The same arguments used before show that $\Phi \colon \mathcal{L}_{X;Y^{\rm dual}}(E;F)\longrightarrow (E\otimes_{\alpha_{X,Y}}F^*)^*\cap \mathcal{L}(E;F)$ given by \[\Phi(T)\left(\sum_{j=1}^{n}x_{j}\otimes \psi_{j} \right) = \sum_{j=1}^{n} \psi_{j}(T(x_{j}))\] is a well defined linear operator. For $u=\sum\limits_{j=1}^{n}x_{j}\otimes \psi_{j}\in E\otimes F^*$, \begin{align*} \left|\Phi(T)(u) \right| \leq \sum_{j=1}^{n}\left|\psi_{j}(T(x_{j})) \right| \leq \left\|T \right\|_{X;Y^{\rm dual}}\cdot \left\|(x_{j})_{j=1}^{n} \right\|_{X(E)} \cdot \left\|(\psi_{j})_{j=1}^{n} \right\|_{Y(F^*)}. \end{align*} Again, since this holds for any representation of $u$ we have $ \left|\Phi(T)(u) \right|\leq \left\|T \right\|_{X;Y^{\rm dual}} \cdot \alpha_{X,Y}^{}(u) $, which proves, in particular, that $\Phi(T)$ is continuous and $\left\|\Phi(T) \right\|\leq \left\| T\right\|_{X;Y^{\rm dual}}.$ Once again, it is enough to show that $\Phi$ is a surjective isometry. To do so, given $\varphi\in (E\otimes_{\alpha_{X,Y}^{}}F^*)^*\cap \mathcal{L}(E;F)$ consider the continuous linear operator $T_{\varphi} \colon E\longrightarrow J_{F}(F) \subseteq F^{**}$ given by $T_{\varphi}(x)(\psi)= \varphi(x\otimes \psi)$. Let $M\in \mathcal{F}(E)$ and $L\in \mathcal{CF}(F)$ be given. Considering the isometric isomorphism $Q_{L}^* \colon (F/L)^{*}\longrightarrow L^{\perp}$, Theorem \ref{Teo.prin.1} gives \begin{equation}\label{eq.iso.iso} \left(M\otimes_{\alpha_{X,Y}^{}}\left(F/L\right)^* \right)^* \stackrel{1}{=} \mathcal{L}_{X;Y^{\rm dual}}\left(M; \left(F/L \right)^{**} \right) \stackrel{1}{=} \mathcal{L}_{X;Y^{\rm dual}}\left(M;F/L \right).
\end{equation} Considering the composition $$M \otimes_{\alpha_{X;Y}} (F/L)^* \stackrel{Id_M \otimes Q_L^*}{\xrightarrow{\hspace*{2cm}}} M \otimes_{\alpha_{X;Y}}L^\perp \stackrel{\varphi|_{M \otimes_{\alpha_{X;Y}}L^\perp }}{\xrightarrow{\hspace*{2cm}}} \mathbb{K}, $$ we have $\varphi|_{M \otimes_{\alpha_{X;Y}}L^\perp }\circ (Id_M \otimes Q_L^*) \in \left(M\otimes_{\alpha_{X,Y}^{}}\left(F/L\right)^* \right)^*$. By (\ref{eq.iso.iso}) there is a unique $T \in \mathcal{L}_{X;Y^{\rm dual}}\left(M;F/L \right)$ such that $\Psi(T) = \varphi|_{M \otimes_{\alpha_{X;Y}}L^\perp }\circ (Id_M \otimes Q_L^*)$, where $\Psi$ is the isomorphism constructed in the proof of Theorem \ref{Teo.prin.1}. For every tensor $\sum\limits_{j=1}^{n}x_{j}\otimes \psi_{j}\in M\otimes L^{\perp}$, we have $\sum\limits_{j=1}^{n}x_{j}\otimes (Q^*_{L})^{-1}(\psi_{j})\in M\otimes \left( F/L\right)^* $, so \begin{align*}\Psi(T)& \left(\sum_{j=1}^{n}x_{j}\otimes (Q_{L}^*)^{-1}(\psi_{j}) \right) = \varphi|_{M \otimes_{\alpha_{X;Y}}L^\perp }\circ (Id_{M}\otimes Q_{L}^*)\left(\sum_{j=1}^{n}x_{j}\otimes (Q_{L}^*)^{-1}(\psi_{j}) \right)\\ &= \varphi\left(\sum_{j=1}^{n}x_{j}\otimes \psi_{j} \right)= \sum_{j=1}^{n} T_{\varphi}(x_{j})(\psi_{j}) = \sum_{j=1}^{n}\psi_{j}(T_{\varphi}\circ Id_{M})(x_{j})\\ &= \sum_{j=1}^{n}(T_{\varphi}\circ Id_{M})^*(\psi_{j})(x_{j})= \sum_{j=1}^{n}(Id_{M}^*\circ T_{\varphi}^*)(\psi_{j})(x_{j})\\ &= \sum_{j=1}^{n}(Id_{M}^*\circ T_{\varphi}^*\circ Q_{L}^*)\left((Q_{L}^*)^{-1}(\psi_{j})\right)(x_{j})= \sum_{j=1}^{n}(Q_{L}^*)^{-1}(\psi_{j})(Q_{L}\circ T_{\varphi}\circ Id_{M})(x_{j})\\ & = \Psi(Q_L \circ T_\varphi \circ Id_M)\left(\sum_{j=1}^{n}x_{j}\otimes (Q_{L}^*)^{-1}(\psi_{j}) \right).
\end{align*} The injectivity of $\Psi$ gives $T= Q_{L}\circ T_{\varphi}\circ Id_{M}$, and the fact that it is an isometry yields \begin{align*} \left\|Q_{L}\circ T_{\varphi}\circ Id_{M} \right\|_{X;Y^{\rm dual}}& = \|\Psi(T)\|= \left\|\varphi|_{M \otimes_{\alpha_{X;Y}}L^\perp }\circ (Id_M \otimes Q_L^*)\right\|\\ &\leq \left\| \varphi\right\|\cdot \left\| Id_{M}\otimes Q_{L}^*\right\| \leq \left\| \varphi\right\|. \end{align*} It follows from Theorem \ref{maxim} that $T_{\varphi}\in \mathcal{L}_{X;Y^{\rm dual}}(E;F)$ and $$\left\|T_{\varphi} \right\|_{X;Y^{\rm dual}}\leq \sup_{M,L}\left\|Q_{L}\circ T_{\varphi}\circ Id_{M} \right\|_{X;Y^{\rm dual}} \leq \left\|\varphi \right\|.$$ Finally, it is not difficult to see that $\Phi(T_{\varphi})=\varphi$. \end{proof} Plenty of concrete cases for which the theorem above applies can be provided just by bearing in mind all that was said about the sequence classes listed in Example \ref{exsec}. \noindent Faculdade de Matem\'atica\\ Universidade Federal de Uberl\^andia\\ 38.400-902 -- Uberl\^andia -- Brazil\\ e-mail: [email protected]\\ \noindent Departamento de Ci\^{e}ncias Exatas\\ Universidade Federal da Para\'iba\\ 58.297-000 -- Rio Tinto -- Brazil\\ \hspace*{1.7cm} and \noindent Departamento de Matem\'atica\\ Universidade Federal da Para\'iba\\ 58.051-900 -- Jo\~ao Pessoa -- Brazil \noindent e-mails: [email protected] \, and/or \, [email protected]\\ \noindent Departamento de Matem\'atica\\ Universidade Federal da Para\'iba\\ 58.051-900 -- Jo\~ao Pessoa -- Brazil\\ e-mail: [email protected] \end{document}
\begin{document} \title{A definability theorem for first order logic } \author{Carsten Butz ({\AA}rhus) and Ieke Moerdijk (Utrecht)\thanks{ Both authors acknowledge support from the Netherlands Science Organisation~(NWO).} } \date{\relax} \maketitle In this paper, we will present a definability theorem for first order logic. This theorem is very easy to state, and its proof only uses elementary tools. To explain the theorem, let us first observe that if $\mathit{M}$ is a model of a theory $\mathit{T}$ in a language~$\mathcal{L}$, then, clearly, any definable subset $S\subset \mathit{M}$ (i.e.,~a subset $S=\{a\mid \mathit{M}\models\varphi(a)\}$ defined by some formula~$\varphi$) is invariant under all automorphisms of~$\mathit{M}$. The same is of course true for subsets of $\mathit{M}^n$ defined by formulas with $n$ free variables. Our theorem states that, if one allows Boolean valued models, the converse holds. More precisely, for any theory $\mathit{T}$ we will construct a Boolean valued model~$\mathit{M}$, in which precisely the $\mathit{T}$--provable formulas hold, and in which every (Boolean valued) subset which is invariant under all automorphisms of $\mathit{M}$ is definable by a formula of~$\mathcal{L}$. Our presentation is entirely self-contained, and only requires familiarity with the most elementary properties of model theory. In particular, we have added a first section in which we review the basic definitions concerning Boolean valued models. The Boolean algebra used in the construction of the model will be presented concretely as the algebra of closed and open subsets of a topological space $X$ naturally associated with the theory~$\mathit{T}$. The construction of this space is closely related to the one in~\cite{Butz-Moerdijk:96c}. In fact, one of the results in that paper could be interpreted as a definability theorem for infinitary logic, using topological rather than Boolean valued models.
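To see why Boolean valued models are genuinely needed, note that the converse fails for ordinary two--valued models; the following standard observation is not part of the text above and is included only for illustration.

```latex
% Standard illustration (added): invariance does not imply definability
% for ordinary two-valued models.
The structure $(\mathbb{N},<)$ is rigid: an order automorphism must fix
the least element, then its successor, and so on, so the identity is the
only automorphism. Consequently \emph{every} subset $S\subset\mathbb{N}$
is invariant under all automorphisms; but only countably many subsets
are definable, since $\mathcal{L}$ has only countably many formulas,
while $\mathbb{N}$ has $2^{\aleph_0}$ subsets.
```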
\section{Preliminary definitions} \label{preliminary} In this section we review the basic definitions concerning Boolean valued models (see e.g.~\cite{Koppelberg:85}). Most readers will be familiar with these notions, and they are advised to skip this section. They should note, however, that our Boolean algebras are not necessarily complete, and that we treat constants and function symbols as functional relations. Let us fix a signature~$S$, consisting of constants, function and relation symbols. For simplicity we assume it is a single sorted signature, although this restriction is by no means essential. Let $\mathcal{L}$ denote the associated first order language $\mathcal{L}_{\omega\omega}(S)$. A {\em Boolean valued interpretation\/} of $\mathcal{L}$ is a triple $\mathfrak{M}=(B,\card{\mathfrak{M}},\valuate{-})$, where $B$ is a Boolean algebra, $\card{\mathfrak{M}}$ is the underlying set of the interpretation, and $\valuate{-}$ is an operation which assigns to each formula $\varphi(x_1,\ldots,x_n)$ of $\mathcal{L}$ with free variables among $x_1,\ldots,x_n$ a function $\card{\mathfrak{M}}^n\to B$, whose value at $(m_1,\ldots,m_n)$ is denoted $$\valuate{\varphi(m_1,\ldots,m_n)}.$$ These functions are required to satisfy the usual identities (where we write $m$ for $m_1,\ldots,m_n$): \begin{enumerate} \item[(i)] $\valuate{\varphi\wedge\psi(m)}= \valuate{\varphi(m)}\wedge\valuate{\psi(m)}$ and similar for the other Boolean connectives. \item[(ii)] $\valuate{\exists y\varphi(y,m)}= \bigvee\{\valuate{\varphi(k,m)}\mid k\in\card{\mathfrak{M}}\}$,\\[1ex] $\valuate{\forall y\varphi(y,m)}= \bigwedge\{\valuate{\varphi(k,m)}\mid k\in\card{\mathfrak{M}}\}$, \end{enumerate} where it is part of the definition of an interpretation that these sups and infs are required to exist in~$B$. Finally, we require \begin{enumerate} \item[(iii)] if $\vdash\varphi(x_1,\ldots,x_n)$ then $\valuate{\varphi(m)}=1_{B}$ for any $m\in\card{\mathfrak{M}}^n$.
\end{enumerate} In (iii), $\vdash$ denotes derivability in (one of the usual axiomatisations of) classical first order logic. \begin{remark} \begin{rm} \begin{enumerateremark} \item \leftmargin0pt Note that, in particular, $\card{\mathfrak{M}}$ is equipped with a $B$--valued equality $\equal{x_1}{x_2}\colon\card{\mathfrak{M}}^2\to B$, satisfying the identities for reflexivity, transitivity and symmetry, $$\begin{array}{l} \equal{m}{m}=1_B,\\[1ex] \equal{m_1}{m_2}=\equal{m_2}{m_1},\\[1ex] \equal{m_1}{m_2}\wedge\equal{m_2}{m_3}\leq\equal{m_1}{m_3}. \end{array} $$ \item For each constant $c$ the formulas $c=x$ and $x=c$ define the same function $C=\equal{c}{x}\colon\card{\mathfrak{M}}\to B$, which should be viewed as the interpretation of~$c$. It satisfies the conditions $C(m)\wedge\equal{m}{m'}\leq C(m')$ and $\bigvee\{C(m)\mid m\in\card{\mathfrak{M}}\}=1_B$. Similarly, each $n$--ary function symbol is interpreted, via the formula $f(x_1,\ldots,x_n)=y$, by a function $F\colon\card{\mathfrak{M}}^n\times\card{\mathfrak{M}}\to B$. This function satisfies the conditions $F(m,k)\wedge\equal{m}{m'}\wedge\equal{k}{k'}\leq F(m',k')$ and $\bigvee\{F(m,k)\mid k\in \card{\mathfrak{M}}\}=1_B$. (Here $m=m_1,\ldots,m_n$ as before, and $\equal{m}{m'}$ stands for $\bigwedge_{i=1}^n\equal{m_i}{m_i'}$.) \item For each $n$--ary relation symbol $r$ the formula $r(x_1,\ldots,x_n)$ defines a map $R\colon\card{\mathfrak{M}}^n\to B$, which is extensional in the sense that $R(m)\wedge\equal{m}{m'}\leq R(m')$. \item The entire interpretation is determined by these data in (i)--(iii). First, using derivability of usual equivalences such as $\vdash f(g(x))=y\leftrightarrow\exists z(f(z)=y\wedge g(x)=z)$, one obtains by induction for each term $t(x_1,\ldots, x_n)$ a function $T\colon\card{\mathfrak{M}}^{n+1}\to B$ interpreting the formula $t(x_1,\ldots,x_n)=y$. Next, one builds up the interpretation of formulas in the usual way, using the assumption that all necessary sups and infs exist in~$B$.
\end{enumerateremark} \end{rm} \end{remark} As usual, we write $\mathfrak{M}\models\varphi$ if $\valuate{\varphi(m)}=1$ for all $m\in\card{\mathfrak{M}}^n$, and we say $\mathfrak{M}$ is a model of a theory $\mathit{T}$ if $\mathfrak{M}\models\varphi$ whenever~$\mathit{T}\vdash\varphi$. In this case, we write $\mathfrak{M}\models\mathit{T}$, as usual. \section{Automorphisms of models and statement of the theorem} Consider a fixed Boolean valued model $\mathfrak{M}=(B,\card{\mathfrak{M}},\valuate{-})$. An {\em automorphism\/} $\pi$ of $\mathfrak{M}$ consists of two mappings $\pi_0$ and~$\pi_1$. The map $\pi_0\colon B\to B$ is an automorphism of the Boolean algebra~$B$, while $\pi_1\colon\card{\mathfrak{M}}\to\card{\mathfrak{M}}$ is an automorphism of the underlying set~$\card{\mathfrak{M}}$, with the property that \begin{equation}\label{eq:2.1} \pi_0\valuate{\varphi(m_1,\ldots,m_n)}= \valuate{\varphi(\pi_1(m_1),\ldots,\pi_1(m_n))}, \end{equation} for any formula $\varphi(x_1,\ldots,x_n)$ and any $m_1,\ldots,m_n\in\card{\mathfrak{M}}$. (Of course it is enough to check a condition like~(\ref{eq:2.1}) for constants, functions and relations of~$\mathcal{L}$, and deduce~(\ref{eq:2.1}) for arbitrary $\varphi$ by induction.) An ($n$--ary) {\em predicate\/} on $\mathfrak{M}$ is a map $p\colon\card{\mathfrak{M}}^n\to B$ which satisfies the extensionality condition \begin{equation}\label{eq:2.2} p(m)\wedge\equal{m}{m'}\leq p(m') \end{equation} for any $m,m'\in\card{\mathfrak{M}}^n$ (where $\equal{m}{m'}$ stands for $\bigwedge_{i=1}^n\equal{m_i}{m'_i}$, as before).
Such a predicate $p$ is {\em definable\/} if there is a formula $\varphi(x_1,\ldots,x_n)$ such that \begin{equation}\label{eq:2.3} p(m)=\valuate{\varphi(m)},\qquad\mbox{for all $m\in\card{\mathfrak{M}}^n$.} \end{equation} It is {\em invariant\/} under an automorphism $\pi$ if \begin{equation}\label{eq:2.4} \pi_0p(m)=p(\pi_1(m)),\qquad\mbox{for all $m\in\card{\mathfrak{M}}^n$,} \end{equation} (where $\pi_1(m)$ is $(\pi_1(m_1),\ldots,\pi_1(m_n))$). Obviously, every definable predicate is invariant. Our theorem states the converse. \begin{theorem}\label{main-theorem} Let $\mathit{T}$ be any first order theory. There exists a Boolean valued model $\mathfrak{M}$ such that \begin{enumerate} \item $\mathfrak{M}$ is a conservative model of~$\mathit{T}$, in the sense that $\mathfrak{M}\models\varphi$ iff $\mathit{T}\vdash\varphi$, for any sentence~$\varphi$. \item Any predicate which is invariant under all automorphisms of $\mathfrak{M}$ is definable. \end{enumerate} \end{theorem} Before proving the theorem in \S\ref{proof}, we will first give an explicit description of the Boolean algebra and the interpretation involved in the next section. \section{Construction of the model} \label{the-model} Our Boolean algebra will be defined as the algebra of all clopen (i.e.,~closed and open) sets in a topological space~$X$. To describe~$X$, let $\kappa\geq\omega$ be the cardinality of our language~$\mathcal{L}$. We fix a {\em set\/} $\mathcal{S}_{\theory}$ of (ordinary, two--valued) models $\mathit{M}$ of $\mathit{T}$ such that every model of cardinality $\leq\kappa$ is isomorphic to a model in~$\mathcal{S}_{\theory}$. Then, in particular, a formula is provable from $\mathit{T}$ iff it holds in all models in the set~$\mathcal{S}_{\theory}$.
\begin{definition} An {\em enumeration\/} of a model $\mathit{M}$ is a function $\alpha\colon\kappa\to\card{\mathit{M}}$ such that $\alpha^{-1}(a)$ is infinite for all $a\in\card{\mathit{M}}$ (here $\card{\mathit{M}}$ is the underlying set of~$\mathit{M}$). \end{definition} The space $X$ has as its points the equivalence classes of pairs $(\mathit{M},\alpha)$, where $\mathit{M}\in\mathcal{S}_{\theory}$ and $\alpha$ is an enumeration of~$\mathit{M}$. Two such pairs $(\mathit{M},\alpha)$ and $(\mathrm{N},\beta)$ are {\em equivalent\/} if there exists an isomorphism of models $\theta\colon\mathit{M}\stackrel{\simeq}{\to}\mathrm{N}$ such that $\beta=\theta\circ\alpha$. We will often simply write $(\mathit{M},\alpha)$ when we mean the equivalence class of~$(\mathit{M},\alpha)$. The topology of $X$ is generated by all the basic open sets of the form \begin{equation}\label{3.1} U_{\varphi,\xi}=\{(\mathit{M},\alpha)\mid\mathit{M}\models\varphi(\alpha(\xi))\}. \end{equation} Here $\varphi=\varphi(x_1,\ldots,x_n)$ is any formula with free variables among $x_1,\ldots,x_n$, while $\xi=(\xi_1,\ldots,\xi_n)$ is a sequence of elements of~$\kappa$ (i.e.,~ordinals $\xi_i<\kappa$); we use $\alpha(\xi)$ as an abbreviation of $\alpha(\xi_1),\ldots,\alpha(\xi_n)$. Observe that each such basic open set $U_{\varphi,\xi}$ is also closed, with complement $U_{\neg\varphi,\xi}$. So $X$ is a zero--dimensional space. We now define the Boolean algebra $B$ as \begin{equation}\label{eq:3.2} B=\mathrm{Clopens}(X), \end{equation} the algebra of all open and closed sets in~$X$. Notice that arbitrary suprema need not exist in~$B$, although $B$ has many infinite suprema. In particular, if $U\subset X$ is clopen and $\{U_i\}_{i\in I}$ is a cover of $U$ by basic open sets, then the union $\bigcup_{i\in I}U_i$ defines a supremum $U=\bigvee_{i\in I}U_i$ in~$B$; we only need suprema of this kind.
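The sets (\ref{3.1}) are in fact closed under finite intersections up to renaming of variables, so they form a basis and not merely a subbasis for the topology; the following identity (a routine observation, spelled out here for illustration) makes this explicit.

```latex
% Routine observation (added): finite intersections of basic sets are basic.
\[
U_{\varphi,\xi}\cap U_{\psi,\zeta}
  = U_{\varphi\wedge\tilde\psi,\,(\xi,\zeta)},
\]
where $\tilde\psi(x_{n+1},\ldots,x_{n+m})$ is $\psi$ with its free
variables renamed so as to be disjoint from those of
$\varphi(x_1,\ldots,x_n)$. Indeed, $(\mathit{M},\alpha)$ lies in the
left-hand side iff $\mathit{M}\models\varphi(\alpha(\xi))$ and
$\mathit{M}\models\psi(\alpha(\zeta))$, which is exactly membership in
the right-hand side.
```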
\vskip 2ex The Boolean algebra $B$ just constructed is part of a natural Boolean valued model $\mathfrak{M}=(B,\card{\mathfrak{M}},\valuate{-})$, with \begin{equation}\label{eq:4.1} \card{\mathfrak{M}}=\kappa \end{equation} and evaluation of formulas defined by \begin{equation}\label{eq:4.2} \valuate{\varphi(\xi_1,\ldots,\xi_n)}=U_{\varphi,\xi}, \end{equation} for any formula $\varphi(x_1,\ldots,x_n)$ and any sequence $\xi=\xi_1,\ldots,\xi_n$ of ordinals~$\xi_i<\kappa$. \begin{lemma}\label{4.1} This evaluation defines a $B$--valued interpretation of the language~$\mathcal{L}$. \end{lemma} \begin{proof} One needs to check the requirements (i)--(iii) from Section~\ref{preliminary}. Now~(iii) is clear, while (i) and (ii) are completely straightforward. For illustration, we give the case of the existential quantifier. Suppose $\varphi(y,x)$ is a formula with just two free variables $x$ and $y$. Then for any $\xi<\kappa$, \begin{eqnarray*} \valuate{\exists y\varphi(y,\xi)} & = & \{(\mathit{M},\alpha)\mid \mathit{M}\models\exists y\varphi(y,\alpha(\xi))\}\\ & = & \{(\mathit{M},\alpha)\mid \exists\eta<\kappa\colon\ \mathit{M}\models\varphi(\alpha(\eta),\alpha(\xi))\} \\ & & \mbox{(since each $\alpha$ is surjective)}\\ & = & \bigcup_{\eta<\kappa}\{(\mathit{M},\alpha)\mid \mathit{M}\models\varphi(\alpha(\eta),\alpha(\xi))\}\\ & = & \bigcup_{\eta<\kappa}\valuate{\varphi(\eta,\xi)}, \end{eqnarray*} and this union is a supremum in~$B$, by the remark above. \end{proof} \section{Proof of the theorem} \label{proof} We will now show that the interpretation $\mathfrak{M}$ has the two properties stated in Theorem~\ref{main-theorem}. The first one is easy: \begin{proposition} $\mathfrak{M}$ is a conservative model of~$\mathit{T}$. \end{proposition} \begin{proof} We need to show that $\mathfrak{M}\models\sigma$ iff $\mathit{T}\vdash\sigma$, for any sentence $\sigma\in\mathcal{L}$.
By Lemma~\ref{4.1}, $\valuate{\sigma}=\{(\mathit{M},\alpha)\mid\mathit{M}\models\sigma\}$. Thus $\valuate{\sigma}=X$ iff $\mathit{M}\models\sigma$ for all $\mathit{M}\in\mathcal{S}_{\theory}$, and this holds iff $\mathit{T}\vdash\sigma$, by definition of~$\mathcal{S}_{\theory}$. \end{proof} For the proof of the definability result~\ref{main-theorem}(ii), we shall only need a particular collection of automorphisms of the model~$\mathfrak{M}$. Let $S_\kappa$ denote the symmetric group of permutations of~$\kappa$. Then $S_\kappa$ acts on the model $\mathfrak{M}$ as follows. Any $\pi_1\in S_\kappa$ induces a homeomorphism $\pi_0\colon X\to X$, defined by $$\pi_0(\mathit{M},\alpha)=(\mathit{M},\alpha\circ\pi_1^{-1}).$$ This map has the property that $\pi_0(U_{\varphi,\xi})=U_{\varphi,\pi_1(\xi)}$, or $$\pi_0\valuate{\varphi(\xi)}=\valuate{\varphi(\pi_1(\xi))}, $$ for any formula $\varphi(x_1,\ldots,x_n)$ and any $\xi=\xi_1,\ldots,\xi_n<\kappa$. Thus, the pair $\pi=(\pi_1,\pi_0)$ is an automorphism of~$\mathfrak{M}$. This defines an action of $S_\kappa$ on~$\mathfrak{M}$, i.e., a representation $$\rho\colon S_\kappa\to\mathrm{Aut}(\mathfrak{M}),\qquad\rho(\pi_1)=\pi. $$ For the second part of Theorem~\ref{main-theorem}, it will now be enough to show: \begin{proposition}\label{5.2} Any $S_\kappa$--invariant predicate is definable. \end{proposition} To simplify notation, we will only prove this for a unary predicate. So let us fix such an invariant predicate~$p$. It is a function $p\colon\card{\mathfrak{M}}=\kappa\longrightarrow B$ satisfying the extensionality condition $$p(\xi)\wedge\equal{\xi}{\xi'}\leq p(\xi'),$$ as well as the invariance condition $$ p(\pi_1\xi)=\pi_0(p(\xi)),$$ for any $\pi_1\in S_\kappa$. We will first show that $p$ is ``locally'' definable (Lemma~\ref{5.5}). \begin{lemma}\label{5.3} Let $(\mathit{M},\alpha)\in U\in B$ and $\eta_0\in\kappa$.
Then there is a formula $\delta(x_1,\ldots,x_n,y)$ and elements $\xi_1,\ldots,\xi_n\in\kappa$ such that \begin{enumerate} \item $(\mathit{M},\alpha)\in U_{\delta,(\xi,\eta_0)}\leq U$. \item For any point $(\mathrm{N},\beta)$ in $X$, any $b_1,\ldots,b_n,c\in\card{\mathrm{N}}$ such that $\mathrm{N}\models\delta(b_1,\ldots,b_n,c)$, and any $\eta\in\kappa$ with $\beta(\eta)=c$, there exists a $\pi_1\in S_\kappa$ such that $\pi_1(\eta)=\eta_0$ and $\pi_0(\mathrm{N},\beta)\in U_{\delta,(\xi,\eta_0)}$. \end{enumerate} \end{lemma} \begin{proof} Choose a basic open set $U_{\delta',\xi}$, given by a formula $\delta'(x_1,\ldots,x_n)$ and $\xi_1,\ldots,\xi_n<\kappa$, such that $$(\mathit{M},\alpha)\in U_{\delta',\xi}\subset U.$$ Let $\mathrm{Eq}_\alpha(x_1,\ldots,x_n,y)$ be the formula $$\bigwedge_{\alpha(\xi_i)=\alpha(\xi_j)}x_i=x_j\wedge \bigwedge_{\alpha(\xi_i)=\alpha(\eta_0)}x_i=y, $$ and define $\delta$ to be $\delta'\wedge\mathrm{Eq}_\alpha$. Then obviously $$(\mathit{M},\alpha)\in U_{\delta,(\xi,\eta_0)}\subset U_{\delta',\xi}\subset U. $$ Now choose any $(\mathrm{N},\beta)$, $b_1,\ldots,b_n,c$ and $\eta$ satisfying the hypothesis of part~(ii) of the lemma. Then in particular $\mathrm{N}\models\mathrm{Eq}_\alpha(b_1,\ldots,b_n,c)$ and $c=\beta(\eta)$. Since $\beta\colon\kappa\to\card{\mathrm{N}}$ has infinite fibres, we can find $\zeta_1,\ldots,\zeta_n<\kappa$ such that $\beta(\zeta_i)=b_i$, while the sequence $\zeta_1,\ldots,\zeta_n,\eta$ satisfies {\em exactly\/} the same equalities and inequalities as the sequence $\xi_1,\ldots,\xi_n,\eta_0$. [Indeed, if $\zeta_1,\ldots,\zeta_i$ have been found, and $\xi_{i+1}=\xi_k$ for some $k\leq i$ or $\xi_{i+1}=\eta_0$, then also $\alpha(\xi_{i+1})=\alpha(\xi_k)$ or $\alpha(\xi_{i+1})=\alpha(\eta_0)$, hence $b_{i+1}=b_k$ or $b_{i+1}=c$ since $\mathrm{N}\models \mathrm{Eq}_\alpha(b_1,\ldots,b_n,c)$. Thus, we can choose $\zeta_{i+1}=\zeta_k$ respectively $\zeta_{i+1}=\eta$.
If, on the other hand, $\xi_{i+1}\notin\{\eta_0,\xi_1,\ldots,\xi_i\}$, we can use the fact that $\beta^{-1}(b_{i+1})$ is infinite, to find $\zeta_{i+1}\in\beta^{-1}(b_{i+1})\setminus\{\eta,\zeta_1,\ldots,\zeta_i\}$.] Thus, there is a permutation $\pi_1\in S_\kappa$ with $$\pi_1(\eta)=\eta_0, \pi_1(\zeta_1)=\xi_1,\ldots, \pi_1(\zeta_n)=\xi_n.$$ But then $\mathrm{N}\models\delta(b_1,\ldots,b_n,c)$ means that $\mathrm{N}\models\delta(\beta(\pi_1^{-1}(\xi_1)),\ldots, \beta(\pi_1^{-1}(\xi_n)),$ $\beta(\pi_1^{-1}(\eta_0)))$, or that $\pi_0(\mathrm{N},\beta)\in U_{\delta,(\xi,\eta_0)}$. \end{proof} \begin{lemma}\label{5.4} Let $\eta_0<\kappa$. There is a cover $p(\eta_0)=\bigvee_{i \in I(\eta_0)}U_i$ in~$B$, and formulas $\psi_i^{\eta_0}(y)$, such that for any $i\in I(\eta_0)$, \begin{enumerate} \item $U_i\leq\valuate{\psi_i^{\eta_0}(\eta_0)}$. \item For any $\eta<\kappa$, $\valuate{\psi_i^{\eta_0}(\eta)}\leq p(\eta)$. \item$\bigvee\limits_{i\in I(\eta_0)}\valuate{\psi_i^{\eta_0}(\eta_0)}=p(\eta_0)$. \end{enumerate} \end{lemma} \begin{proof} Observe that (iii) follows from (i) and~(ii). To prove these, write $U=p(\eta_0)$, and apply Lemma~\ref{5.3} to each of the points $(\mathit{M},\alpha)\in U$. This will give a cover $U=\bigcup_{i\in I}U_i$ by basic open sets, and for each index $i$ a formula $\delta_i(x_1,\ldots,x_n,y)$ and elements $\xi_1,\ldots,\xi_n<\kappa$ such that $$U_i=U_{\delta_i,(\xi,\eta_0)},$$ and moreover such that property~(ii) of Lemma~\ref{5.3} holds for each of these formulas~$\delta_i$. Now define $$\psi_i^{\eta_0}(y)=\exists x_1\ldots\exists x_n\delta_i(x_1,\ldots,x_n,y).$$ It is now clear that statement~(i) in the lemma holds. For~(ii), suppose $(\mathrm{N},\beta)\in\valuate{\psi_i^{\eta_0}(\eta)}$. This means that $\mathrm{N}\models \exists x_1\ldots\exists x_n\delta_i(x_1,\ldots,x_n,\beta(\eta))$.
By~\ref{5.3}(ii), we can find a $\pi_1\in S_\kappa$ such that $\pi_1(\eta)=\eta_0$ and $\pi_0(\mathrm{N},\beta)\in U_{\delta_i,(\xi,\eta_0)}=U_i$. Since $U_i\subset U=p(\eta_0)$, also $\pi_0(\mathrm{N},\beta)\in p(\eta_0)$, and hence, by invariance of~$p$, $(\mathrm{N},\beta)\in p(\pi_1^{-1}(\eta_0))=p(\eta)$, as required. \end{proof} \begin{lemma}\label{5.5} There is a family $\{\psi_i(y)\mid i\in I\}$ of formulas such that, for all $\eta<\kappa$, $$p(\eta)=\bigvee_{i\in I}\valuate{\psi_i(\eta)}.$$ \end{lemma} \begin{proof} This follows immediately from the previous lemma, for the collection of formulas $\{\psi_i^{\eta_0}\mid\eta_0<\kappa,\ i\in I(\eta_0)\}$. \end{proof} \begin{proof*}{Proof of Proposition~\ref{5.2}.} Consider the function $p'\colon\card{\mathfrak{M}}\to B$ defined by $p'(\eta)=\neg p(\eta)$. Clearly, since $p$ is a predicate, so is~$p'$, i.e., $p'(\eta)\wedge\equal{\eta}{\eta'}\leq p'(\eta')$ for all $\eta,\eta'<\kappa$. Moreover, $p'$~is invariant since $p$ is. So we can apply Lemma~\ref{5.5} to~$p'$, to find a collection of formulas $$ \{\varphi_j(y)\mid j\in J\} $$ such that for all $\eta<\kappa$, \begin{equation}\label{eq:5.1} p'(\eta)=\bigvee_{j\in J}\valuate{\varphi_j(\eta)}. \end{equation} The definability of $p$ now follows by a standard compactness argument. Let $c$ be a ``new'' constant, and consider the theory $\mathit{T}'=\mathit{T}\cup\{\neg\psi_i(c)\mid i\in I\} \cup\{\neg\varphi_j(c)\mid j\in J\}$. If $\mathit{T}'$ were consistent, it would have a model~$\mathit{M}$, which we can assume to be (an expansion of a model) in the set~$\mathcal{S}_{\theory}$. Let $\alpha$ be an enumeration of~$\mathit{M}$, and choose $\eta<\kappa$ with $\alpha(\eta)=c^{(\mathit{M})}$, the interpretation of $c$ in~$\mathit{M}$. Then $(\mathit{M},\alpha)\in X=p(\eta)\vee p'(\eta)$, hence $(\mathit{M},\alpha)\in\valuate{\psi_i(\eta)}$ for some $i\in I$ or $(\mathit{M},\alpha)\in\valuate{\varphi_j(\eta)}$ for some $j\in J$.
This means that $\mathit{M}\models\psi_i(\alpha(\eta))\vee\varphi_j(\alpha(\eta))$, contradicting the fact that $\mathit{M}$ models~$\mathit{T}'$. This proves that $\mathit{T}'$ is inconsistent. Now apply compactness, to find $i_1,\ldots,i_n\in I$ and $j_1,\ldots,j_m\in J$ such that \begin{equation}\label{eq:5.2} \mathit{T}\vdash\forall y(\psi_{i_1}(y)\vee\cdots\vee\psi_{i_n}(y)\vee \varphi_{j_1}(y)\vee\cdots\vee\varphi_{j_m}(y)). \end{equation} Write $\psi=\psi_{i_1}\vee\cdots\vee\psi_{i_n}$ and $\varphi=\varphi_{j_1}\vee\cdots\vee\varphi_{j_m}$. We claim that $\psi$ defines~$p$. Indeed, let $(\mathit{M},\alpha)$ be any point in~$X$, and let $\eta<\kappa$. By~(\ref{eq:5.2}), $\mathit{M}\models\psi(\alpha(\eta))\vee\varphi(\alpha(\eta))$, or in other words, either $(\mathit{M},\alpha)\in\valuate{\psi(\eta)}$ or $(\mathit{M},\alpha)\in\valuate{\varphi(\eta)}$. If $(\mathit{M},\alpha)\in\valuate{\psi(\eta)}$, then $(\mathit{M},\alpha)\in p(\eta)$ by Lemma~\ref{5.2}. And if $(\mathit{M},\alpha)\in \valuate{\varphi(\eta)}$, then $(\mathit{M},\alpha)\in p'(\eta)$ by~(\ref{eq:5.1}), hence $(\mathit{M},\alpha)\notin p(\eta)$. Thus $(\mathit{M},\alpha)\in \valuate{\psi(\eta)}$ iff $(\mathit{M},\alpha)\in p(\eta)$. This shows that $\valuate{\psi(\eta)}=p(\eta)$ for any $\eta<\kappa$, and completes the proof. \end{proof*} \vskip 3ex \begin{minipage}{\textwidth} \footnotesize Ieke Moerdijk, Mathematisch Instituut, Universiteit Utrecht, Postbus 80.010, NL--3508 TA Utrecht, The Netherlands, [email protected].\\[1ex] Carsten Butz, BRICS, Computer Science Department, Aarhus University, Ny Munkegade, Building 540, DK-8000 {\AA}rhus~C, Denmark, [email protected].\\[1ex] {\bf BRICS}\\ Basic Research in Computer Science, Centre of the Danish National Research Foundation. \end{minipage} \end{document}
\begin{document} \title[Brauer $p$-dimension of HDV-fields of residual characteristic $p$]{On the Brauer $p$-dimension of Henselian discrete valued fields of residual characteristic $p > 0$} \keywords{Henselian field, Brauer $p$-dimension, totally ramified extension, mixed characteristic, normal element\\ 2020 MSC Classification: 16K50, 12J10 (primary), 16K20, 12E15, 11S15 (secondary).} \author{Ivan D. Chipchakov} \address{Institute of Mathematics and Informatics\\Bulgarian Academy of Sciences\\1113 Sofia, Bulgaria; E-mail address: [email protected]} \begin{abstract} Let $(K, v)$ be a Henselian discrete valued field with residue field $\widehat K$ of characteristic $p > 0$, and Brd$_{p}(K)$ be the Brauer $p$-dimension of $K$. This paper shows that Brd$_{p}(K) \ge n$ if $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, for some $n \in \mathbb{N}$. It proves that Brd$_{p}(K) = \infty $ if and only if $[\widehat K\colon \widehat K ^{p}] = \infty $. \end{abstract} \maketitle \par \medskip \section{\bf Introduction} \par \medskip Let $E$ be a field, Br$(E)$ its Brauer group, $s(E)$ the class of associative finite-dimensional central simple algebras over $E$, and $d(E)$ the subclass of division algebras $D \in s(E)$. For each $A \in s(E)$, let $[A]$ be the equivalence class of $A$ in Br$(E)$, and let deg$(A)$, ind$(A)$, exp$(A)$ be the degree, the Schur index and the exponent of $A$, respectively. It is well-known (cf. \cite{P}, Sect. 14.4) that exp$(A)$ divides ind$(A)$ and shares with it the same set of prime divisors; also, ind$(A) \mid {\rm deg}(A)$, and deg$(A) = {\rm ind}(A)$ if and only if $A \in d(E)$. 
Note that if $B _{1}, B _{2} \in s(E)$ and g.c.d.$\{{\rm ind}(B _{1}), {\rm ind}(B _{2})\} = 1$, then ind$(B _{1} \otimes _{E} B _{2}) = {\rm ind}(B _{1}){\rm ind}(B _{2})$; equivalently, if $B _{j} ^{\prime } \in d(E)$, $j = 1, 2$, and g.c.d.$\{{\rm deg}(B _{1} ^{\prime }), {\rm deg}(B _{2} ^{\prime })\}$ $= 1$, then $B _{1} ^{\prime } \otimes _{E} B _{2} ^{\prime } \in d(E)$ (see \cite{P}, Sect. 13.4). Since Br$(E)$ is an abelian torsion group and ind$(A)$, exp$(A)$ are invariants both of $A$ and $[A]$, these results show that the study of the restrictions on the pairs ind$(A)$, exp$(A)$, $A \in s(E)$, reduces to the special case of $p$-primary pairs, for an arbitrary prime $p$. The Brauer $p$-dimensions Brd$_{p}(E)$, $p \in \mathbb P$, where $\mathbb P$ is the set of prime numbers, are defined as in \cite{ABGV}, and contain essential information on these restrictions. We say that Brd$_{p}(E) = n < \infty $, for a given $p \in \mathbb P$, if $n$ is the least integer $\ge 0$, for which ind$(P) \mid {\rm exp}(P) ^{n}$ whenever $P \in s(E)$ and $[P]$ lies in the $p$-component Br$(E) _{p}$ of Br$(E)$; if no such $n$ exists, we put Brd$_{p}(E) = \infty $. For instance, Brd$_{p}(E) \le 1$, for all $p \in \mathbb P$, if and only if $E$ is a stable field, i.e. deg$(D) = {\rm exp}(D)$, for each $D \in d(E)$; Brd$_{p'}(E) = 0$, for some $p ^{\prime } \in \mathbb P$, if and only if the $p'$-component Br$(E) _{p'}$ of Br$(E)$ is trivial. \par The absolute Brauer $p$-dimension abrd$_{p}(E)$ of $E$ is defined to be the supremum of Brd$_{p}(R)\colon R \in {\rm Fe}(E)$, where Fe$(E)$ is the set of finite extensions of $E$ in a separable closure $E _{\rm sep}$. This trivially implies abrd$_{p}(E) \ge {\rm Brd}_{p}(E)$, for each $p$. We have abrd$_{p}(E) \le 1$, $p \in \mathbb P$, if $E$ is an absolutely stable field, i.e. its finite extensions are stable fields. 
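For a concrete instance of these definitions (a standard illustration added here for the reader's convenience, not taken from the cited sources), one may consider $E = \mathbb{R}$:

```latex
% Standard illustration of the definition of Brd_p(E), for E = R:
% Br(R) is cyclic of order 2, generated by the class of the Hamilton
% quaternion algebra H, with deg(H) = ind(H) = exp(H) = 2.
Let $E = \mathbb{R}$. Then ${\rm Br}(E) \cong \mathbb{Z}/2\mathbb{Z}$, the
nontrivial class being represented by the Hamilton quaternion algebra
$\mathbb{H}$, for which ${\rm ind}(\mathbb{H}) = {\rm exp}(\mathbb{H}) = 2$.
Hence ${\rm ind}(P) \mid {\rm exp}(P)$ whenever $P \in s(E)$ and
$[P] \in {\rm Br}(E) _{2}$, so ${\rm Brd}_{2}(\mathbb{R}) = 1$; for each
odd $p$, the component ${\rm Br}(\mathbb{R}) _{p}$ is trivial, whence
${\rm Brd}_{p}(\mathbb{R}) = 0$. In particular, $\mathbb{R}$ is a stable
field.
```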
Class field theory gives examples of such fields: it shows that Brd$_{p}(\Phi ) = {\rm abrd}_{p}(\Phi ) = 1$, $p \in \mathbb P$, if $\Phi $ is a global or local field (see, e.g., \cite{Re}, (31.4) and (32.19)). The same equalities hold, if $\Phi = \Phi _{0}((X))((Y))$ is an iterated formal Laurent power series field in $2$ variables over a quasifinite field $\Phi _{0}$ (see \cite{Ch1}, Corollary~4.5 (ii)). \par The knowledge of the sequence Brd$_{p}(E), {\rm abrd}_{p}(E)\colon p \in \mathbb P$, is helpful for better understanding the behaviour of index-exponent relations over finitely-generated transcendental extensions of $E$ \cite{Ch4}. This is demonstrated by the description in \cite{Ch5} of the set of sequences Brd$_{p}(K _{q})$, abrd$_{p}(K _{q})$, $p \in \mathbb P$, $p \neq q$, where $K _{q}$ runs across the class of fields with Henselian valuations $v _{q}$ whose residue fields $\widehat K _{q}$ are perfect of characteristic $q \ge 0$, such that their absolute Galois groups $\mathcal{G}_{\widehat K _{q}} = \mathcal{G}(\widehat K _{q,{\rm sep}}/\widehat K _{q})$ are projective profinite groups, in the sense of \cite{S1}. The description relies on formulae for Brd$_{p}(K _{q})$, $p \neq q$, which depend only on whether $\widehat K _{q}$ contains a primitive $p$-th root of unity. Thus Brd$_{p}(K _{q})$ is determined, for each $p \neq q$, by two invariants: one of the value group $v _{q}(K _{q})$, and one of the Galois group $\mathcal{G}(\widehat K _{q}(p)/\widehat K _{q})$ of the maximal $p$-extension $\widehat K _{q}(p)$ of $\widehat K _{q}$ in $\widehat K _{q,{\rm sep}}$. \par A formula for Brd$_{q}(K _{q})$ in terms of invariants of $\widehat K _{q}$ and $v(K _{q})$ has also been found when char$(K _{q}) = q > 0$, $\widehat K _{q}$ is perfect and $(K _{q}, v _{q})$ is a maximally complete field (see \cite{Ch6}, Proposition~3.5). 
By definition, the imposed restriction on $(K _{q}, v _{q})$ means that it does not admit immediate proper extensions, i.e. valued extensions $(K _{q} ^{\prime }, v'_{q}) \neq (K _{q}, v _{q})$ with $\widehat K _{q} ^{\prime } = \widehat K _{q}$ and $v'_{q}(K _{q} ^{\prime }) = v _{q}(K _{q})$. The considered fields are singled out by the fact (established by Krull, see \cite{Wa}, Theorem~31.24 and page 483) that every valued field $(L _{0}, \lambda _{0})$ has an immediate extension $(L _{1}, \lambda _{1})$ that is a maximally complete field. Note here that no formula for Brd$_{q}(K _{q})$ as above exists if $(K _{q}, v _{q})$ is only Henselian. More precisely, one can show using suitably chosen valued subfields of maximally complete fields that if $(K, v)$ runs across the class of Henselian fields of characteristic $q$, then Brd$_{q}(K)$ does not depend only on $\widehat K$ and $v(K)$. Specifically, it has been proved (see \cite{Ch6}, Example~3.7) that for any integer $t \ge 2$, the iterated formal Laurent power series field $Y _{t} = \mathbb{F} _{q}((T _{1})) \dots ((T _{t}))$ in $t$ variables over the field $\mathbb{F} _{q}$ with $q$ elements possesses subfields $K _{\infty }$ and $K _{n}$, $n \in \mathbb{N}$, such that: \par \medskip\noindent (1.1) (a) Brd$_{q}(K _{\infty }) = \infty $; $n + t - 1 \le {\rm Brd}_{q}(K _{n}) \le n + t$, for each $n \in \mathbb{N}$; \par (b) The valuations $v _{m}$ of $K _{m}$, $m \le \infty $, induced by the standard $\mathbb{Z} ^{t}$-valued valuation of $Y _{t}$ are Henselian with $\widehat K _{m} = \mathbb{F} _{q}$ and $v _{m}(K _{m}) = \mathbb{Z} ^{t}$; here $\mathbb{Z} ^{t}$ is viewed as an abelian group endowed with the inverse-lexicographic ordering. \par \medskip Statement (1.1) attracts interest in the study of Brauer $p$-dimensions of Henselian fields of residual characteristic $p > 0$ from suitably chosen special classes. 
This paper considers Brd$_{p}(K)$, for a Henselian discrete valued field (abbr., an HDV-field) $(K, v)$ with char$(\widehat K) = p$. Our research is related to the problem of describing index-exponent relations over finitely-generated field extensions. It proves the right-to-left implication in the equivalence \par\noindent Brd$_{p}(K) = \infty $ $\Leftrightarrow $ the degree $[\widehat K\colon \widehat K ^{p}]$ is infinite (in case char$(K) = 0$, the converse implication is a consequence of \cite{PS}, Corollary~2.5, see also Fact \ref{fact3.5}), $\widehat K ^{p}$ being the subfield $\{u ^{p}\colon u \in \widehat K\}$. When $[\widehat K\colon \widehat K ^{p}] < \infty $, we prove the lower bound in the following conjecture (stated by Bhaskhar and Haase \cite{BH}\footnote{The Brauer $p$-dimension, in the sense of \cite{PS} and \cite{BH}, means the same as the absolute Brauer $p$-dimension in the present paper.} for complete discrete valued fields): \par \smallskip \begin{conj} \label{conj1.1} If $(K, v)$ is an {\rm HDV}-field with {\rm char}$(\widehat K) = p > 0$ and $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, for some $n \in \mathbb{N}$, then $n \le {\rm abrd}_{p}(K) \le n + 1$. \end{conj} \par \medskip\noindent Conjecture \ref{conj1.1} has been stated at the end of \cite{BH}, under the extra hypothesis that char$(K) = 0$ and char$(\widehat K) = p$. This restriction is not emphasized in the present paper, as we prove Conjecture \ref{conj1.1} in case char$(K) = p$ (see Proposition \ref{prop7.1}). Note also that the class of HDV-fields $(K, v)$ with char$(\widehat K) = p > 0$ and $[\widehat K\colon \widehat K ^{p}] = p ^{n}$ is closed under taking finite extensions (cf. \cite{E3}, Corollary~14.2.2, and \cite{BH}, Lemma~2.12). It is therefore clear that the upper bound in Conjecture \ref{conj1.1} will follow, if the inequality Brd$_{p}(K) \le n + 1$ holds, for an arbitrary HDV-field $(K, v)$ with $\widehat K$ as above. 
The inequality Brd$_{p}(K) \ge n$ implies trivially the lower bound $n \le {\rm abrd}_{p}(K)$ in Conjecture \ref{conj1.1}. It attracts interest in finding formulae for Brd$_{p}(K)$, for example, when $(K, v)$ belongs to some basic classes of HDV-fields of residual characteristic $p$ (see Conjecture \ref{conj7.3} and Problem \ref{prob7.4}). \par\vskip0.8truecm\noindent {\bf Basic notation and abbreviations used in the paper} \begin{itemize} \item $\mathbb{P}$ - the set of prime numbers; $\mathbb{N}$ - the set of positive integers; $\mathbb{Z}$ - the set (additive group, ring) of integers; \item $\mathbb{Q}$ and $\mathbb{R}$ - the additive groups (the fields) of rational numbers and of real numbers, respectively; \item Abbreviations: HDV - Henselian discrete valued; TR - totally ramified; \item For any field $E$, we use the following notation: \item $E ^{\ast }$ is the multiplicative group of $E$; $E ^{\ast n}$ is the subgroup of $n$-th powers $E ^{\ast n} = \{\alpha ^{n}\colon \alpha \in E ^{\ast }\}$, for each $n \in \mathbb{N}$; \item $s(E)$ - the class of associative finite-dimensional central simple $E$-algebras, $d(E)$ - the subclass of division algebras $D \in s(E)$, Br$(E)$ - the Brauer group of $E$; \item $E _{\rm sep}$ is a separable closure of $E$, Fe$(E)$ is the set of finite extensions of $E$ in $E _{\rm sep}$, $\mathcal{G}_{E} := \mathcal{G}(E _{\rm sep}/E)$ is the absolute Galois group of $E$; $N(E _{1}/E)$ denotes the norm group of the extension $E _{1}/E$, for any $E _{1} \in {\rm Fe}(E)$; \item For each $p \in \mathbb{P}$, $_{p}{\rm Br}(E) = \{b _{p} \in {\rm Br}(E)\colon \ pb _{p} = 0\}$ is the maximal subgroup of Br$(E)$ of period dividing $p$, Br$(E) _{p}$ - the $p$-component of Br$(E)$, Brd$_{p}(E)$ - the Brauer $p$-dimension of $E$, abrd$_{p}(E)$ - the absolute Brauer $p$-dimension of $E$; also, $E(p)$ is the maximal $p$-extension of $E$ (in $E _{\rm sep}$), and cd$_{p}(\mathcal{G}_{E})$ - the 
cohomological $p$-dimension of $\mathcal{G}_{E}$, in the sense of \cite{S1}; \item For any field extension $E ^{\prime }/E$, $I(E ^{\prime }/E)$ denotes the set of intermediate fields of $E ^{\prime }/E$, and Br$(E ^{\prime }/E)$ is the relative Brauer group of $E ^{\prime }/E$; \item Algebraic structures attached to a field $K$ with a nontrivial Krull valuation $v$: $O _{v}(K) = \{a \in K\colon \ v(a) \ge 0\}$ - the valuation ring of $(K, v)$; $M _{v}(K) = \{\mu \in K\colon \ v(\mu ) > 0\}$ - the maximal ideal of $O _{v}(K)$; $O _{v}(K) ^{\ast } = \{u \in K\colon \ v(u) = 0\}$ - the multiplicative group of $O _{v}(K)$; $v(K)$ - the value group of $(K, v)$; $\overline {v(K)}$ - a divisible hull of $v(K)$; for each $\gamma \in \overline {v(K)}$, $\gamma \ge 0$, $\nabla _{\gamma }(K)$ denotes the set $\{\lambda \in K\colon \ v(\lambda - 1) > \gamma \}$; \item $\widehat K = O _{v}(K)/M _{v}(K)$ is the residue field of $(K, v)$, and for any $\lambda \in O _{v}(K)$, $\hat \lambda \in \widehat K$ is the residue class $\lambda + M _{v}(K)$; $(K, v)$ is said to be of mixed characteristic $(0, p)$ if char$(K) = 0$ and char$(\widehat K) = p > 0$; \label{k999} \item When $(K, v)$ is a real-valued field, $K _{v}$ stands for the completion of $K$ with respect to the topology induced by $v$, and $\bar v$ is the valuation of $K _{v}$ continuously extending $v$; \label{approx} \item Given an HDV-field $(K, v)$ of mixed characteristic $(0, p)$, and a primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, we write $\beta \approx \beta '$, for some $\beta , \beta ' \in K(\varepsilon ) ^{\ast }$, if $v(\beta - \beta ') > p\kappa $, where $\kappa = v(p)/(p - 1)$; given an element $\pi \in K$ with $0 < v(\pi ) < p\kappa $, we write $\beta \sim \beta '$ if $v(\beta - \beta ') > p\kappa - v(\pi ) = v((1 - \varepsilon ) ^{p}\pi 
^{-1})$. \end{itemize} \section{\bf Statement of the main result} \par \medskip Let $(K, v)$ be an HDV-field with char$(\widehat K) = p > 0$. As shown in \cite{PS}, if char$(K) = 0$ and $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, for some $n \in \mathbb{N}$, then $[n/2] \le {\rm abrd}_{p}(K) \le 2n$; abrd$_{p}(K) = \infty $ if and only if $[\widehat K\colon \widehat K ^{p}] = \infty $ (this is contained in \cite{PS}, Corollary~2.5 and Lemma~2.6). When $[\widehat K\colon \widehat K ^{p}] = p ^{n}$ and $n$ is odd, it has been proved in \cite{BH} that abrd$_{p}(K) \ge 1 + [n/2]$. The proofs of these results show their validity for Brd$_{p}(K)$ if $K$ contains a primitive $p$-th root of unity (see Remark \ref{rema6.2}). \par \medskip The purpose of the present paper, in the first place, is to prove the inequality Brd$_{p}(K) \ge n$ in general, and thereby, to obtain the inequality abrd$_{p}(K) \ge n$ in Conjecture \ref{conj1.1}. Also, its major objective is to give an optimal infinitude criterion for Brd$_{p}(K)$. Our main result can be stated as follows: \par \medskip \begin{theo} \label{theo2.1} Let $(K, v)$ be an {\rm HDV}-field with {\rm char}$(\widehat K) = p > 0$. Then: \par {\rm (a)} {\rm Brd}$_{p}(K)$ is infinite if and only if $\widehat K/\widehat K ^{p}$ is an infinite extension; \par {\rm (b)} There exists $D \in d(K)$ with {\rm exp}$(D) = p$ and {\rm deg}$(D) = p ^{n}$, provided that $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, for some $n \in \mathbb{N}$; in particular, {\rm Brd}$_{p}(K) \ge n$. \end{theo} \par \medskip Theorem \ref{theo2.1} (b) and the right-to-left implication in Theorem \ref{theo2.1} (a) are proved in Section 6. 
For our proof, we construct in Section 3 an algebra $D \in d(K)$ with exp$(D) = p$ and deg$(D) = p ^{\mu }$, assuming that $K$ has a TR and Galois extension $M _{\mu }$ of degree $[M _{\mu }\colon K] = p ^{\mu } \le [\widehat K\colon \widehat K ^{p}]$ with an abelian Galois group $\mathcal{G}(M _{\mu }/K)$ of period $p$. By a TR-extension of $K$, we mean here a finite extension $M/K$ with $\widehat M = \widehat K$. This agrees, by Lemma \ref{lemm3.2} (b), with the definition of a TR-extension over any valued field, given before the statement of Lemma \ref{lemm3.3} (for the case of a discrete valued field, see the paragraph before the statement of Lemma \ref{lemm5.2}). The existence of $M _{\mu }$ is a consequence of the following result, which is of independent interest when $(K, v)$ is of mixed characteristic $(0, p)$: \par \medskip \begin{lemm} \label{lemm2.2} Let $(K, v)$ be an {\rm HDV}-field with {\rm char}$(\widehat K) = p > 0$ and $\widehat K$ infinite. Then $K$ has {\rm TR} extensions $M _{\mu }$, $\mu \in \mathbb{N}$, such that $[M _{\mu }\colon K] = p ^{\mu }$, $M _{\mu }/K$ is a Galois extension and the group $\mathcal{G}(M _{\mu }/K)$ is abelian of period $p$, for each $\mu $. \end{lemm} \par \smallskip Theorem \ref{theo2.1} and Lemma \ref{lemm2.2} have already been proved in case char$(K) = p$ (cf. \cite{Ch4}, Lemma~4.2). Moreover, it follows that, in the setting of the lemma, if char$(K) = p$, then each finite $p$-group $G$ is isomorphic to $\mathcal{G}(M _{G}/K)$, for some TR and Galois extension $M _{G}$ of $K$ (see \cite{Ch6}, Lemma~2.3). When char$(K) = 0$ and $(K, v)$ is an HDV-field of type II, in the sense of Kurihara, this is not true, for any cyclic $p$-group $G$ of sufficiently large order (see \cite{MKu}, 12.2, Theorem~(b)). \par \smallskip Lemma \ref{lemm2.2} is proved in Sections 5 and 6. Section 3 contains valuation-theoretic preliminaries used in the sequel. 
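In the equal characteristic case, the conclusion of Lemma \ref{lemm2.2} can be illustrated by a standard Artin-Schreier construction. The following sketch is added for illustration only (it is not part of the proofs below) and assumes, for simplicity, that the residue field is algebraically closed:

```latex
% Illustrative sketch (not from the paper): Lemma 2.2 in equal
% characteristic p, assuming the residue field algebraically closed.
Let $F$ be an algebraically closed field with ${\rm char}(F) = p$, and let
$K = F((T))$ carry the $T$-adic valuation $v$, so that $(K, v)$ is an
HDV-field with $\widehat K = F$ infinite. Choose $a _{1}, \dots , a _{\mu }
\in F$ linearly independent over $\mathbb{F} _{p}$, and let $\theta _{j}$
be a root of the Artin-Schreier polynomial $X ^{p} - X - a _{j}T ^{-1}$,
$j = 1, \dots , \mu $. A nontrivial combination $\sum _{j} c _{j}a _{j}
T ^{-1}$, $c _{j} \in \mathbb{F} _{p}$, has value $-1$, whereas
$v(x ^{p} - x)$ lies in $p\mathbb{Z}$ or is $\ge 0$, for every $x \in K$;
hence the elements $a _{j}T ^{-1}$ remain $\mathbb{F} _{p}$-independent
modulo $\{x ^{p} - x\colon x \in K\}$, so $M _{\mu } = K(\theta _{1},
\dots , \theta _{\mu })$ is a Galois extension of $K$ of degree
$p ^{\mu }$ with $\mathcal{G}(M _{\mu }/K)$ abelian of period $p$. Since
$F$ is algebraically closed, $\widehat M _{\mu } = F$, and as $(K, v)$ is
maximally complete, Lemma \ref{lemm3.2} (c) yields $e(M _{\mu }/K) =
p ^{\mu }$, i.e. $M _{\mu }/K$ is TR.
```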
We also show there how Theorem \ref{theo2.1} can be deduced from Lemma \ref{lemm2.2} (see Lemma \ref{lemm3.6}). For reasons noted above, here we focus our attention on the mixed characteristic case $(0, p)$. For the proof of Lemma \ref{lemm2.2}, we take into consideration whether or not $v(p) \in pv(K)$ (see Lemmas \ref{lemm4.8}, \ref{lemm6.1} and Lemma \ref{lemm5.2} (b), respectively). Section 4 is devoted to the technical preparation for the proof of Lemma \ref{lemm2.2}. As noted above, in Section 7, we prove Conjecture \ref{conj1.1} for an HDV-field of characteristic $p$. Open questions concerning Brd$_{p}(K)$ are also posed in two frequently considered special cases. \par \medskip The basic notation, terminology and conventions kept in this paper are standard and essentially the same as in \cite{L}, \cite{TW} and \cite{Ch4}. Missing definitions concerning central simple algebras can be found in \cite{P}. Throughout, Brauer and value groups are written additively, Galois groups are viewed as profinite under the Krull topology, and by a profinite group homomorphism, we mean a continuous one. For any discrete valued field $(K, v)$, we suppose that $v(K)$ is chosen to be a subgroup of the additive group $\mathbb{Q}$ of rational numbers. By an $n$-dimensional local field, for some $n \in \mathbb{N}$, we mean a complete $n$-discretely valued field $K _{n}$, in the sense of \cite{F1} (see also \cite{Zh}), with a quasifinite $n$-th residue field $K _{0}$. \medskip \section{\bf Preliminaries} \medskip Let $K$ be a field with a (nontrivial) Krull valuation $v$. We say that $v$ is Henselian, if it extends uniquely, up-to equivalence, to a valuation $v _{L}$ on each algebraic extension $L$ of $K$. This holds, if $K = K _{v}$ and $(K, v)$ is a real-valued field, i.e. $v(K)$ is isomorphic to an ordered subgroup of the additive group $\mathbb{R}$ of real numbers (cf. \cite{L}, Ch. XII). 
Maximally complete fields are also Henselian, since Henselizations of valued fields are their immediate extensions (see \cite{E3}, Theorem~15.3.5). The valuation $v$ is Henselian if and only if any of the following two equivalent conditions holds (cf. \cite{E3}, Sect. 18.1, and \cite{Wa}, Theorem~32.19): \par \medskip\noindent (3.1) (a) Given a polynomial $f(X) \in O _{v}(K) [X]$ and an element $a \in O _{v}(K)$, such that $2v(f ^{\prime }(a)) < v(f(a))$, where $f ^{\prime }$ is the formal derivative of $f$, there is a zero $c \in O _{v}(K)$ of $f$ satisfying the equality $v(c - a) = v(f(a)/f ^{\prime }(a))$; \par (b) For each normal extension $\Omega /K$, $v ^{\prime }(\tau (\mu )) = v ^{\prime }(\mu )$ whenever $\mu \in \Omega $, $v ^{\prime }$ is a valuation of $\Omega $ extending $v$, and $\tau $ is a $K$-automorphism of $\Omega $. \par \medskip When $(K, v)$ is real-valued, it is Henselian if and only if $K$ is (relatively) separably closed in $K _{v}$ (cf. \cite{E3}, Theorems~15.3.5, 17.1.5). The following lemma allows one to extend to the Henselian case results on complete real-valued fields (e.g., the Grunwald-Wang theorem, see \cite{LR} and Remark \ref{rema5.3}). \par \medskip \begin{lemm} \label{lemm3.1} Let $(K, v)$ be a real-valued field, $\bar v$ the continuous prolongation of $v$ on $K _{v}$, and $(\mathcal{K}, v')$ an intermediate valued field of $(K _{v}, \bar v)/(K, v)$. Suppose that $(\mathcal{K}, v')$ is Henselian, identify $\mathcal{K} _{\rm sep}$ with its $\mathcal{K}$-isomorphic copy in $K _{v,{\rm sep}}$, and let $f$ be the mapping {\rm Fe}$(\mathcal{K}) \to {\rm Fe}(K _{v})$, by the rule $\Lambda ^{\prime } \to \Lambda ^{\prime }K _{v}$. 
Then: \par {\rm (a)} $\mathcal{K} _{\rm sep} \cap K _{v} = \mathcal{K}$, and each $\Lambda \in {\rm Fe}(K _{v})$ contains a primitive element $\lambda \in \mathcal{K} _{\rm sep}$ over $K _{v}$, such that $[K _{v}(\lambda )\colon K _{v}] = [\mathcal{K}(\lambda )\colon \mathcal{K}]$; \par {\rm (b)} $\mathcal{K} _{\rm sep}K _{v} = K _{v,{\rm sep}}$ and $\mathcal{G}_{\mathcal{K}} \cong \mathcal{G}_{K _{v}}$; \par {\rm (c)} The correspondence $f$ is bijective and degree-preserving; moreover, $f$ and the inverse mapping $f ^{-1}: {\rm Fe}(K _{v}) \to {\rm Fe}(\mathcal{K})$, preserve the Galois property and the isomorphism class of the corresponding Galois groups; \par {\rm (d)} For each $\nu \in \mathbb{N}$ not divisible by {\rm char}$(K)$, $K _{v} ^{\ast \nu } \cap \mathcal{K} ^{\ast } = \mathcal{K} ^{\ast \nu }$. \end{lemm} \par \smallskip \begin{proof} The conditions on $(K, v)$ and $(\mathcal{K}, v')$ ensure that $\mathcal{K} _{\rm sep} \cap K _{v} = \mathcal{K}$. The latter part of Lemma \ref{lemm3.1} (a) can be deduced from Krasner's lemma (see \cite{L2}, Ch. II, Propositions~3, 4). Lemma \ref{lemm3.1} (c) follows from Lemma \ref{lemm3.1} (a) and Galois theory (cf. \cite{L}, Ch. VI, Theorem~1.12), and Lemma \ref{lemm3.1} (b) - from Lemma \ref{lemm3.1} (a), (c) and the definition of the Krull topology on $\mathcal{G}_{\mathcal{K}}$ and $\mathcal{G}_{K _{v}}$. Lemma \ref{lemm3.1} (d) is implied by the density of $\mathcal{K}$ in $K _{v}$, and by the fact that the set $\nabla _{\gamma }(K _{v}) = \{\alpha \in K _{v}\colon \bar v(\alpha - 1) > \gamma \}$ is an open subgroup of $K _{v} ^{\ast \nu }$, provided $\gamma \in \mathbb{R}$ is sufficiently large (one may put $\gamma = 0$ if char$(\widehat K) \nmid \nu $). 
\end{proof} \par \medskip\noindent When $v$ is Henselian, so is $v _{L}$, for any algebraic field extension $L/K$; in this case, $\widehat L/\widehat K$ is algebraic as well. We write $v$ instead of $v _{L}$ and view $v(L)$ as an ordered subgroup of a fixed divisible hull $\overline {v(K)}$. This is allowed, since $v(K)$ is an ordered subgroup of $v(L)$, such that $v(L)/v(K)$ is a torsion group; hence, $v(L)$ embeds in $\overline {v(K)}$ as an ordered subgroup. These facts follow from Ostrowski's theorem (see \cite{E3}, Theorem~17.2.1), namely, the assertion that if $[L\colon K]$ is finite, then $[\widehat L\colon \widehat K]e(L/K)$ divides $[L\colon K]$ and $[L\colon K][\widehat L\colon \widehat K] ^{-1}e(L/K) ^{-1}$ has no divisor $p \in \mathbb P$, $p \neq {\rm char}(\widehat K)$; here $e(L/K)$ denotes the ramification index of $L/K$ (the index $\vert v(L)\colon v(K)\vert $ of $v(K)$ in $v(L)$). We state below several known criteria ensuring that $[L\colon K] = [\widehat L\colon \widehat K]e(L/K)$: \par \medskip \begin{lemm} \label{lemm3.2} Let $(K, v)$ be a Henselian field and $L/K$ a finite extension. Then $[L\colon K] = [\widehat L\colon \widehat K]e(L/K)$ in the following cases: \par {\rm (a)} If char$(\widehat K) \nmid [L\colon K]$ (apply Ostrowski's theorem); \par {\rm (b)} If $(K, v)$ is HDV and $L/K$ is separable (see \cite{E3}, Sect. 17.4); \par {\rm (c)} When $(K, v)$ is maximally complete (cf. \cite{Wa}, Theorem~31.21). \par\noindent Under the hypotheses of (c), if {\rm char}$(K) = p > 0$, then $K ^{p}$ is maximally complete (relative to the valuation induced by $v$) with a residue field $\widehat K ^{p}$ and a value group $pv(K)$; this ensures that $[K\colon K^{p}]$ is finite if and only if so are $[\widehat K\colon \widehat K ^{p}]$ and the quotient group $v(K)/pv(K)$. \end{lemm} \par \medskip\noindent Assume that $(K, v)$ is a nontrivially valued field. 
A finite extension $R$ of $K$ is said to be inertial with respect to $v$, if $R$ has a unique (up-to equivalence) valuation $v _{R}$ extending $v$, the residue field $\widehat R$ of $(R, v _{R})$ is separable over $\widehat K$, and $[R\colon K] = [\widehat R\colon \widehat K]$; $R/K$ is called a TR-extension with respect to $v$, if $v$ has a unique prolongation $v _{R}$ on $R$, and the index $\vert v _{R}(R)\colon v(K)\vert $ equals $[R\colon K]$. When $v$ is Henselian, $R/K$ is TR if and only if $e(R/K) = [R\colon K]$. Inertial extensions of Henselian fields have useful properties, some of which are presented by the following lemma (for a proof, see \cite{TW}, Theorem~A.23): \par \medskip \begin{lemm} \label{lemm3.3} Let $(K, v)$ be a Henselian field. Then: \par {\rm (a)} An inertial extension $R ^{\prime }/K$ is Galois if and only if $\widehat R ^{\prime }/\widehat K$ is Galois. When this holds, $\mathcal{G}(R ^{\prime }/K)$ and $\mathcal{G}(\widehat R ^{\prime }/\widehat K)$ are canonically isomorphic. \par {\rm (b)} The compositum $K _{\rm ur}$ of inertial extensions of $K$ in $K _{\rm sep}$ is a Galois extension of $K$ with $\mathcal{G}(K _{\rm ur}/K) \cong \mathcal{G}_{\widehat K}$. \par {\rm (c)} Finite extensions of $K$ in $K _{\rm ur}$ are inertial, and the natural mapping of $I(K _{\rm ur}/K)$ into $I(\widehat K _{\rm sep}/\widehat K)$ is bijective. \end{lemm} \par \medskip It is known (cf. \cite{Sch}, Ch. 2, Sect. 7, and \cite{TW}, Sect. 1.2.2) that if $(K, v)$ is Henselian, then $v$ extends on each $D \in d(K)$ to a unique valuation $v _{D}$, up-to equivalence. Put $v(D) = v _{D}(D)$ and denote by $\widehat D$ the residue division ring of $(D, v _{D})$. Note that $\widehat D$ is a division $\widehat K$-algebra with $[\widehat D\colon \widehat K] < \infty $, and $v(D)$ is an ordered abelian group including $v(K)$ as an ordered subgroup of finite index $e(D/K)$. 
In addition, the following holds, by \cite{TY}, Proposition~2.2: \par \medskip \begin{lemm} \label{lemm3.4} If $(K, v)$ is an {\rm HDV}-field, then $[D\colon K] = [\widehat D\colon \widehat K]e(D/K)$, for every $D \in d(K)$. \end{lemm} Next we state results on any HDV-field $(K, v)$ that are used in Section 7 for proving Conjecture \ref{conj1.1} in the case of char$(K) = p$. They reduce the proof of the upper bound in this conjecture to considering only the case where $(K, v)$ is a complete discrete valued field (which allows one to apply results of \cite{PS} and \cite{BH}): \par \medskip \begin{fact} \label{fact3.5} {\rm (a)} The scalar extension map {\rm Br}$(K) \to {\rm Br}(K _{v})$ is an injective homomorphism which preserves Schur indices and exponents (cf. \cite{Cohn}, Theorem~1, and \cite{Sch}, Ch. 2, Theorem~9); hence, {\rm Brd}$_{p'}(K) \le {\rm Brd}_{p'}(K _{v})$, for every $p' \in \mathbb P$; \par {\rm (b)} The valued field $(K _{v}, \bar v)$ (see page \pageref{k999}) is maximally complete (cf. \cite{Sch}, Ch. 2, Theorem~8, or \cite{TW}, Example~3.11); in addition, $(K _{v}, \bar v)/(K, v)$ is an immediate extension (cf. \cite{E3}, Theorem~9.3.2, or \cite{L}, Ch. XII, Sect. 5). \end{fact} \par \medskip Let now $(K, v)$ be an HDV-field with char$(\widehat K) = p$. Suppose that there exists a Galois extension $M/K$ with $\mathcal{G}(M/K)$ abelian of period $p$ and order $p ^{\mu }$, for some $\mu \in \mathbb{N}$. Then, by Galois theory, $M$ equals the compositum $L _{1} \dots L _{\mu }$ of degree $p$ (Galois) extensions $L _{j}$ of $K$ in $M$, $j = 1, \dots , \mu $. This enables one to construct various algebras of degree $p ^{\mu }$ presentable as tensor products of cyclic $K$-algebras of degree $p$ (concerning cyclic algebras in general, see, e.g., \cite{P}, Sect. 15). 
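For the reader's convenience, we recall the construction of cyclic algebras referred to here (standard material, cf. \cite{P}, Sect. 15, added as an illustration; the quaternion special case assumes ${\rm char}(K) \neq 2$):

```latex
% Recalled standard construction (cf. [P], Sect. 15): the cyclic algebra
% (L/K, sigma, a) of prime degree p, and its quaternion case for p = 2.
Let $L/K$ be a cyclic extension of degree $p$ with $\mathcal{G}(L/K) =
\langle \sigma \rangle $, and let $a \in K ^{\ast }$. The cyclic
$K$-algebra $(L/K, \sigma , a)$ is generated over $K$ by $L$ and an
element $x$ subject to the relations $x ^{p} = a$ and $x\lambda =
\sigma (\lambda )x$, for all $\lambda \in L$; it lies in $s(K)$, has
degree $p$, and lies in $d(K)$ if and only if $a \notin N(L/K)$
(\cite{P}, Proposition~15.1). For $p = 2$ and ${\rm char}(K) \neq 2$,
writing $L = K(\sqrt{b})$, $i = \sqrt{b}$ and $j = x$, one recovers a
quaternion algebra with $K$-basis $1, i, j, ij$ and relations
$i ^{2} = b$, $j ^{2} = a$, $ji = -ij$.
```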
When $M/K$ is a TR-extension and $p ^{\mu } \le [\widehat K\colon \widehat K ^{p}]$, our next lemma provides a criterion for an algebra of this type to lie in $d(K)$, which is used for proving Theorem \ref{theo2.1}. Before stating it, note that a finite system $\Theta $ of $m$ elements of a field $E$ with char$(E) = p$ is called $p$-independent over $E ^{p}$, if $[E ^{p}(\Theta )\colon E ^{p}] = p ^{m}$. \par \medskip \begin{lemm} \label{lemm3.6} Let $(K, v)$ be an {\rm HDV}-field with {\rm char}$(\widehat K) = p > 0$, and let $M/K$ be a {\rm TR} and Galois extension with $\mathcal{G}(M/K)$ abelian of period $p$ and finite order $p ^{\mu } \le [\widehat K\colon \widehat K ^{p}]$. Fix a presentation $M = L _{1} \dots L _{\mu }$ as a compositum of degree $p$ extensions of $K$ in $M$, take a generator $\sigma _{j}$ of $\mathcal{G}(L _{j}/K)$, for each index $j$, and choose elements $a _{j} \in O _{v}(K)$, $j = 1, \dots , \mu $, so that the system $\hat a _{j} \in \widehat K$, $j = 1, \dots , \mu $, be $p$-independent over $\widehat K ^{p}$. Then the tensor product $D _{\mu } = \otimes _{j=1} ^{\mu } \Delta _{j}$ of the cyclic $K$-algebras $\Delta _{j} = (L _{j}/K, \sigma _{j}, a _{j})$, $j = 1, \dots , \mu $, lies in $d(K)$, where $\otimes = \otimes _{K}$. Moreover, $v(D _{\mu }) = v(M)$ and $\widehat D _{\mu }$ is a root field over $\widehat K$ of the binomials $X ^{p} - \hat a _{j}$, $j = 1, \dots , \mu $, so $[\widehat D _{\mu }\colon \widehat K] = p ^{\mu }$. \end{lemm} \par \medskip The proof of Lemma \ref{lemm3.6} is done by induction on $\mu $, by the method of proving \cite{Ch4}, Lemma~4.2 (b) (which covers the case of $p = {\rm char}(K)$). For convenience of the reader, we outline its main steps. 
In fact, it suffices to prove that $D_{\mu} \in d(K)$; then the rest of the lemma can be deduced from Lemma \ref{lemm3.4}, the equality $[D_{\mu}\colon K] = p^{2\mu}$, and the existence of $K$-subalgebras $\Theta_{\mu}$ and $W_{\mu}$ of $D_{\mu}$, such that $\Theta_{\mu} \cong M$ and $W_{\mu}$ is a root field over $K$ of the binomials $X^{p} - a_{j}$, $j = 1, \dots, \mu$. If $\mu = 1$, then $\hat a_{1} \notin \widehat K^{p} = \widehat L_{1}^{p}$, which implies $a_{1} \notin N(L_{1}/K)$; hence, by \cite{P}, Proposition~15.1~b, $D_{1} \in d(K)$. When $\mu \ge 2$, it suffices to show that $D_{\mu} \in d(K)$, under the extra hypothesis that the centralizer $C = C_{D_{\mu}}(L_{\mu})$ lies in $d(L_{\mu})$. As $C = D_{\mu-1} \otimes_{K} L_{\mu}$, where $D_{\mu-1} = \otimes_{j=1}^{\mu-1} \Delta_{j}$, it is easy to see that $v_{C}(C) = v(M)$ and $\widehat C$ equals the (commutative) field $\widehat K(\sqrt[p]{\hat a_{1}}, \dots, \sqrt[p]{\hat a_{\mu-1}})$; in particular, $\widehat C$ does not possess nontrivial $\widehat K$-automorphisms. Observing that $D_{\mu} \in s(K)$, consider the $K$-automorphism $\varphi$ of $C$ which induces the identity on $D_{\mu-1}$ and the automorphism $\sigma_{\mu}$ of $L_{\mu}$. It follows from the Skolem-Noether theorem (see \cite{P}, Sect. 12.6) that $\varphi$ is induced by the inner automorphism of $D_{\mu}$ defined by conjugation by an element $x_{\mu} \in \Delta_{\mu}$ that induces $\sigma_{\mu}$ on $L_{\mu}$, satisfies $x_{\mu}^{p} = a_{\mu}$, and generates $D_{\mu}$ over $C$. Thus, $D_{\mu}$ is a cyclic generalized crossed product over $C$ as described in \cite{A2}, Ch. XI, Theorems~10, 11. In view of (3.1) (b), it is easily verified that the composition $v_{C} \circ \varphi$ is a valuation of $C$ extending the prolongation of $v$ on $L_{\mu}$.
As $v$ is Henselian, this means that $v_{C} \circ \varphi = v_{C}$, which implies $v_{C}(d) = 0$ and $\hat d = \hat d^{\prime p} \in \widehat C^{p}$, provided that $d = \prod_{i=0}^{p-1} \varphi^{i}(d^{\prime})$, for some $d^{\prime} \in C$ with $v_{C}(d^{\prime}) = 0$. Since $\hat a_{\mu} \notin \widehat C^{p}$ (and $v_{C}(d) \neq 0$ if $v_{C}(d^{\prime}) \neq 0$), one thereby concludes that $\prod_{i=0}^{p-1} \varphi^{i}(\tilde d) \neq a_{\mu}$, for any $\tilde d \in C$. Hence, by the equality $x_{\mu}^{p} = a_{\mu}$ and the hypothesis that $C \in d(L_{\mu})$, the assertion that $D_{\mu} \in d(K)$ can be obtained from \cite{A2}, Ch. XI, Theorem~12, so Lemma \ref{lemm3.6} is proved. \par \medskip Theorem \ref{theo2.1} is implied by Lemmas \ref{lemm3.6} and \ref{lemm2.2}, so our main goal in the rest of the paper is to prove Lemma \ref{lemm2.2}. As noted in Section 2, one may consider only the case of char$(K) = 0$. Our next lemma is used in Section 5 for proving Lemma \ref{lemm2.2}, under the extra hypothesis that $v(p) \notin pv(K)$. \par \medskip \begin{lemm} \label{lemm3.7} Let $(K, v)/(\Phi, \omega)$ be a valued field extension, such that the index $\vert v(K)\colon \omega(\Phi)\vert$ of $\omega(\Phi)$ in $v(K)$ is finite, and let $\Psi$ be an extension of $\Phi$ in $K_{\rm sep}$ of degree $p^{\mu}$, for some $p \in \mathbb{P}$, $\mu \in \mathbb{N}$. Suppose that $\Psi$ is {\rm TR} over $\Phi$ relative to $\omega$, and $p \nmid \vert v(K)\colon \omega(\Phi)\vert$. Then $\Psi K/K$ is {\rm TR} relative to $v$ and $[\Psi K\colon K] = p^{\mu}$. \end{lemm} \par \smallskip \begin{proof} In view of \cite{E3}, Theorem~15.3.5, and our assumptions, one may suppose, for the proof, that the value groups of all valuations of $\Psi K$ extending $\omega$ are ordered subgroups of $\overline{v(K)}$. Let $v'$ be any valuation of $\Psi K$ extending $v$. By the Fundamental Inequality (cf. \cite{E3}, Theorem~17.1.5), \par \medskip\noindent (3.2) $\vert v'(\Psi K)\colon v(K)\vert \le [\Psi K\colon K] \le [\Psi\colon \Phi] = p^{\mu}$. \par \medskip\noindent As $\Psi/\Phi$ is TR relative to $\omega$, $\Psi$ has a unique valuation $\omega'$ extending $\omega$. This shows that $\omega'$ equals the valuation of $\Psi$ induced by $v'$. Note further that $$p^{\mu} = \vert \omega'(\Psi)\colon \omega(\Phi)\vert,$$ $$\vert \omega'(\Psi)\colon \omega(\Phi)\vert \mid \vert v'(\Psi K)\colon \omega(\Phi)\vert$$ $${\rm and} \ \vert v'(\Psi K)\colon \omega(\Phi)\vert = \vert v'(\Psi K)\colon v(K)\vert . \vert v(K)\colon \omega(\Phi)\vert .$$ \par \medskip\noindent Since $p \nmid \vert v(K)\colon \omega(\Phi)\vert$ by hypothesis, it follows that $p^{\mu} \mid \vert v'(\Psi K)\colon v(K)\vert$, which implies that the inequalities in (3.2) must be equalities. Hence, $[\Psi K\colon K] = p^{\mu}$, and by the Fundamental Inequality, it turns out that $v'$ is the unique valuation of $\Psi K$ extending $v$ and, moreover, $\Psi K/K$ is TR relative to $v$, as required. \end{proof} \par \medskip The next lemma presents well-known properties of binomial extensions of prime degree, and of cyclotomic extensions. They are often used without an explicit reference (for a proof of the lemma, see \cite{L}, Ch. VI, Sects. 3, 9). \par \medskip \begin{lemm} \label{lemm3.8} Let $E$ be a field and $p \in \mathbb{P}$. Then: \par {\rm (a)} For any $\theta \in E^{\ast}$, the polynomial $X^{p} - \theta$ is irreducible over $E$ if and only if it has no root in $E$. \par {\rm (b)} If $L/E$ is a finite extension, such that $p \nmid [L\colon E]$, then $L^{\ast p} \cap E^{\ast} = E^{\ast p}$. \par {\rm (c)} If $p \neq {\rm char}(E)$ and $\varepsilon$ is a primitive $p$-th root of unity in $E_{\rm sep}$, then $E(\varepsilon)/E$ is a Galois extension with $\mathcal{G}(E(\varepsilon)/E)$ cyclic and $[E(\varepsilon)\colon E] \mid p - 1$; in particular, $E(\varepsilon)^{\ast p} \cap E^{\ast} = E^{\ast p}$. \end{lemm} \par \medskip At the end of this section we recall some known properties of cyclotomic extensions of valued fields that are used in the sequel. \par \medskip \begin{lemm} \label{lemm3.9} Let $(K, v)$ be a valued field of mixed characteristic $(0, p)$ containing a primitive $p$-th root of unity $\varepsilon$. Then: \par {\rm (a)} $v(1 - \varepsilon) = v(p)/(p - 1)$; \par {\rm (b)} $v(-i + \sum_{j=0}^{i-1} \varepsilon^{j}) \ge v(1 - \varepsilon)$, for each $i \in \mathbb{N}$ not divisible by $p$; \par {\rm (c)} $v((1 - \varepsilon)^{p-1} + p) \ge v((1 - \varepsilon)^{p}) = pv(p)/(p - 1)$. \end{lemm} \par \smallskip \begin{proof} The assumption on char$(\widehat K)$ ensures that $v(p) > 0$, $\varepsilon \in O_{v}(K)$ and the residue class $\hat \varepsilon$ equals the unit of $\widehat K$. Therefore, $v(1 - \varepsilon) > 0$, and by the proof of Proposition~4.1.2 (i) of \cite{Co-Th}, $v(p) = (p - 1)v(1 - \varepsilon)$, as claimed by Lemma \ref{lemm3.9} (a). Also, the inequality $v(1 - \varepsilon) > 0$ implies $\hat e_{i} = i \neq 0$, for each $i \in \mathbb{N}$ not divisible by $p$, where $e_{i} = \sum_{j=0}^{i-1} \varepsilon^{j}$. Lemma \ref{lemm3.9} (b) follows from the fact that $\mathbb{Z}[\varepsilon] \subset O_{v}(K)$ and $\varepsilon - 1$ divides (in the ring $\mathbb{Z}[\varepsilon]$) the elements $e_{i} - i = \sum_{j=0}^{i-1} (\varepsilon^{j} - 1)$, $i = 1, \dots, p - 1$. Clearly, Lemma \ref{lemm3.9} (b) shows that $v\big((p - 1)! - \prod_{i=1}^{p-1} e_{i}\big) \ge v(1 - \varepsilon)$, which, together with the equalities $$\Phi_{p}(1) = \prod_{i=1}^{p-1} (1 - \varepsilon^{i}) = p = (1 - \varepsilon)^{p-1}\prod_{i=1}^{p-1} e_{i},$$ where $\Phi_{p}(X) = \sum_{j=0}^{p-1} X^{j}$ is the $p$-th cyclotomic polynomial, implies that $$v((p - 1)!(1 - \varepsilon)^{p-1} - p) \ge v((1 - \varepsilon)^{p}) = pv(p)/(p - 1).$$ \par\noindent As $(p - 1)! \equiv -1 ({\rm mod} \ p)$ (Wilson's theorem), this proves Lemma \ref{lemm3.9} (c). \end{proof} \section{\bf Normal elements and radical degree $p$ extensions of HDV-fields} \par \medskip Our goal in this section is to prepare technically the proof of Lemma \ref{lemm2.2}. In order to achieve this, we need information on the algebraic properties of $p$-th roots of elements of $\nabla_{0}(F)$, for a valued field $(F, v)$ of mixed characteristic $(0, p)$. A part of this information is contained in the following two lemmas. \par \medskip \begin{lemm} \label{lemm4.1} Let $(F, v)$ be a valued field of mixed characteristic $(0, p)$, and let $\alpha \in F$, $\beta \in F^{\ast}$ be elements, such that $(1 + \beta)^{p} = 1 + \alpha$ and $v(\alpha) > 0$. Put $\eta = \alpha - \beta^{p} - p\beta$ and $\kappa = v(p)/(p - 1)$. Then $v(\eta) \ge v(p) + 2v(\beta)$. Moreover, \par {\rm (a)} $v(\alpha) < p\kappa$ if and only if $v(\beta) < \kappa$; when this holds, $v(\beta) = v(\alpha)/p$ and $v(\beta^{p} - \alpha) > v(\alpha) = v(\beta^{p})$. \par {\rm (b)} If $v(\alpha) = p\kappa$, then $v(\beta) = \kappa$. \end{lemm} \par \smallskip \begin{proof} By Newton's binomial formula, one has $$(1 + \beta)^{p} = 1 + \alpha = 1 + \beta^{p} + \sum_{i=1}^{p-1} {p \brack i} \beta^{i}.$$ Since $v(\alpha) > 0$ and char$(\widehat F) = p$, this ensures that $v(\beta) > 0$. The binomial formula also shows that $\eta = 0$ if $p = 2$, and $\eta = \sum_{i=2}^{p-1} {p \brack i} \beta^{i}$ if $p > 2$. Note further that $v({p \brack i}) = v(p)$, for all $i < p$, which implies that, in case $p > 2$, the sequence of values $v({p \brack i}\beta^{i})$, $i = 1, \dots, p - 1$, strictly increases. These facts prove that $v(\eta) \ge v(p) + 2v(\beta)$. The obtained inequality has the following consequences, which in turn imply statements (a) and (b) of Lemma \ref{lemm4.1}: \par \medskip\noindent ${\rm (i)} \ {\rm If} \ v(\beta) < \kappa, \ {\rm then} \ v(\alpha) = v(\beta^{p}) = pv(\beta) < p\kappa$ \par\noindent ${\rm and} \ v(\alpha - \beta^{p}) = v(p\beta + \eta) = v(p\beta) > v(\beta^{p});$ \par \medskip\noindent ${\rm (ii)} \ {\rm If} \ v(\beta) > \kappa, \ {\rm then} \ v(\beta^{p}) > v(p\beta),$ $v(\alpha - p\beta) = v(\beta^{p} + \eta) > v(p\beta),$ \par\noindent ${\rm and} \ v(\alpha) = v(p\beta) = v(p) + v(\beta) > p\kappa$; \par \medskip\noindent ${\rm (iii)} \ {\rm If} \ v(\beta) = \kappa, \ {\rm then} \ v(\beta^{p}) = v(p\beta) = p\kappa < v(\eta), \ {\rm whence}, \ v(\alpha) \ge p\kappa.$ \end{proof} \par \medskip \begin{lemm} \label{lemm4.2} Let $(F, v)$ be a valued field of mixed characteristic $(0, p)$, and suppose that there exists $\gamma \in F^{\ast}$ with $v(\gamma) = v(p)/(p - 1) := \kappa$. Assume that $\alpha \in F$ and $\beta \in F^{\ast}$ satisfy $v(\alpha) \ge p\kappa$ and $(1 + \beta)^{p} = 1 + \alpha$, put $\delta = \beta/\gamma$, and denote by $g$ the polynomial $g(X) = \gamma^{-p}[(1 + \gamma X)^{p} - (1 + \alpha)] \in F[X]$. Then: \par {\rm (a)} $g$ is monic of degree $p$, $g(\delta) = 0$ and $g \in O_{v}(F)[X]$; \par {\rm (b)} The reduction $\hat g \in \widehat F[X]$ of $g$ modulo $M_{v}(F)$ equals $X^{p} + \hat cX - \hat d$, where $c = p/\gamma^{p-1}$, $\hat c \neq 0$ and $d = \gamma^{-p}\alpha$; also, $\hat d \neq 0$ if and only if $v(\alpha) = p\kappa$. \end{lemm} \par \medskip \begin{proof} (a): Evidently, $1 + \beta$ is a root of the binomial $X^{p} - (1 + \alpha)$, so $h(\beta) = 0$, where $h(X) = (X + 1)^{p} - 1 - \alpha = X^{p} + (\sum_{i=1}^{p-1} {p \brack i} X^{p-i}) - \alpha$. Observing also that $g(X) = \gamma^{-p}h(\gamma X) = X^{p} + \sum_{i=1}^{p-1} ({p \brack i}/\gamma^{i}).X^{p-i} - (\alpha/\gamma^{p}),$ one obtains that $g(X)$ is monic of degree $p$ and $g(\delta) = 0$, as required by Lemma \ref{lemm4.2} (a). Since $v(\alpha) \ge p\kappa$, $v(\gamma) = \kappa$, and $p \mid {p \brack i}$, $i = 1, \dots, p - 1$, it is easily verified that $v(\alpha/\gamma^{p}) \ge 0$ and $v({p \brack i}/\gamma^{i}) = (p - i - 1)\kappa \ge 0$, for $i = 1, \dots, p - 1$, proving that $g(X) \in O_{v}(F)[X]$. \par\smallskip (b): The preceding calculations show that $v({p \brack p-1}/\gamma^{p-1}) = 0$, and in case $p > 2$, they yield $v({p \brack i}/\gamma^{i}) > 0$, $i = 1, \dots, p - 2$. Also, by the assumptions on $v(\gamma)$ and $v(\alpha)$, there exist $c \in O_{v}(F)^{\ast}$ and $d \in O_{v}(F)$, such that $p = \gamma^{p-1}c$ and $\alpha = \gamma^{p}d$. These observations show that $\hat g(X) = X^{p} + \hat cX - \hat d \in \widehat F[X]$ and $\hat c \neq 0$. They also prove that $\hat d \neq 0$ if and only if $v(\alpha) = p\kappa$, as claimed. \end{proof} \par \medskip Our approach to the proof of Lemma \ref{lemm2.2} in the case where $v(p) \in pv(K)$ relies on the following lemma (which is an extended version of \cite{TY}, Lemma~2.1). \par \medskip \begin{lemm} \label{lemm4.3} Let $(K, v)$ be a Henselian field of mixed characteristic $(0, p)$, let $\varepsilon$ be a primitive $p$-th root of unity in $K_{\rm sep}$, and let $\kappa = v(p)/(p - 1)$. Then: \par {\rm (a)} The polynomial $g_{\lambda}(X) = (1 - \varepsilon)^{-p}[((1 - \varepsilon)X + 1)^{p} - \lambda]$ lies in $O_{v}(K(\varepsilon))[X]$ and has a root in $K(\varepsilon)$, for each $\lambda \in \nabla_{p\kappa}(K(\varepsilon))$; in particular, $\lambda \in K(\varepsilon)^{\ast p}$. \par {\rm (b)} $\nabla_{\kappa'}(K) \subset K^{\ast p}$, in case $\kappa' \in v(K)$ and $\kappa' \ge p\kappa$. \par {\rm (c)} For any pair $\lambda_{1} \in \nabla_{0}(K)$, $\lambda_{2} \in K$, such that $v(\lambda_{1} - \lambda_{2}) > p\kappa$, the elements $\lambda_{2}$ and $\lambda_{2}\lambda_{1}^{-1}$ lie in $\nabla_{0}(K)$ and $K^{\ast p}$, respectively. \end{lemm} \par \smallskip \begin{proof} (a): We have $v(1 - \varepsilon) = \kappa$ and $v(\lambda - 1) > p\kappa$, whence Lemma \ref{lemm4.2} applies to $g_{\lambda}(X)$ and yields $g_{\lambda}(X) \in O_{v}(K(\varepsilon))[X]$. Denote by $\widehat K_{\varepsilon}$ the residue field of $(K(\varepsilon), v)$. Lemma \ref{lemm4.2}, combined with Lemma \ref{lemm3.9} (c), shows that the reduction $\hat g_{\lambda}(X) \in \widehat K_{\varepsilon}[X]$ of $g_{\lambda}(X)$ modulo $M_{v}(K(\varepsilon))$ equals the binomial $X^{p} - X$ ($\hat g_{\lambda}(0) = 0$, since $v((\lambda - 1)/(1 - \varepsilon)^{p}) > 0$). This implies that $\hat g_{\lambda}(X)$ has a simple zero in $\widehat K_{\varepsilon}$, so it follows from (3.1) (a) that $g_{\lambda}(X)$ has a zero in $O_{v}(K(\varepsilon))$; hence, $\lambda \in K(\varepsilon)^{\ast p}$. \par\smallskip (b): Lemmas \ref{lemm3.8} (c) and \ref{lemm4.3} (a) imply $\nabla_{\kappa'}(K) \subset K(\varepsilon)^{\ast p} \cap K^{\ast} = K^{\ast p}$. \par\smallskip (c): Clearly, $\nabla_{0}(K)$ contains $\lambda_{2}$ and $\lambda_{1}^{-1}$, and $\lambda_{2}\lambda_{1}^{-1} = 1 + (\lambda_{2} - \lambda_{1})\lambda_{1}^{-1}$ lies in $\nabla_{p\kappa}(K(\varepsilon))$, whence $\lambda_{2}\lambda_{1}^{-1} \in (K(\varepsilon)^{\ast p} \cap K^{\ast}) = K^{\ast p}$, as claimed. \end{proof} \par \medskip\noindent {\bf Definition~1.} An element $\lambda \in \nabla_{0}(K)$, where $(K, v)$ is an HDV-field of mixed characteristic $(0, p)$, is called normal over $K$ (or $K$-normal), if $\lambda \notin K^{\ast p}$ and $v(\lambda - 1) \ge v(\lambda^{\prime} - 1)$, for each element $\lambda^{\prime}$ of the coset $\lambda K^{\ast p}$.
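Part (b) of Lemma 4.3 can be illustrated numerically in the basic case $K = \mathbb{Q}_p$ with $v(p) = 1$ (this numerical illustration is ours, not part of the text): there $p\kappa = p/(p - 1) < 2$, so every $\lambda \equiv 1 \pmod{p^{2}}$ should be a $p$-th power, and a root can be computed by the digit-by-digit Hensel lifting implicit in (3.1) (a). A minimal sketch in Python, assuming $p$ odd:

```python
def pth_root_of_one_unit(lam, p, prec):
    """Find x with x**p == lam (mod p**prec), given lam == 1 (mod p**2), p odd.

    Digit-by-digit Hensel lifting: if x**p == lam (mod p**k), then for
    x' = x + c*p**(k-1) one has x'**p == x**p + c*p**k (mod p**(k+1)),
    because x == 1 (mod p) and all higher binomial terms have valuation
    at least k+1 once k >= 2; choosing the digit c accordingly lifts
    the congruence one step.
    """
    x, k = 1, 2
    while k < prec:
        diff = (lam - pow(x, p, p**(k + 1))) % p**(k + 1)
        c = (diff // p**k) % p          # next p-adic digit of the root
        x += c * p**(k - 1)
        k += 1
    return x

p, prec = 5, 12
lam = 1 + p**2 * 3                       # v(lam - 1) = 2 >= p*kappa = 5/4
x = pth_root_of_one_unit(lam, p, prec)
assert pow(x, p, p**prec) == lam % p**prec
```

The starting index $k = 2$ mirrors the hypothesis $\kappa' \ge p\kappa$: for $v(\lambda - 1) = 1$ the first lifting step would fail, matching the fact that $1 + p$ is not a $p$-th power in $\mathbb{Q}_p$.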
\par \medskip When $\lambda \notin K^{\ast p}$, $\lambda K^{\ast p}$ contains $K$-normal elements, as Lemma \ref{lemm4.3} (b) and the cyclicity of $v(K)$ show that the system $v(\lambda^{\prime} - 1)$, $\lambda^{\prime} \in \lambda K^{\ast p}$, contains a maximal element $v(\xi - 1)$ (and $\xi$ is $K$-normal). Our next lemma characterizes $K$-normal elements. Its conclusions follow from Lemma \ref{lemm3.1} and \cite{Hyo}, Lemma~(2-16), if $K$ contains a primitive $p$-th root of unity. In stating the lemma, we use the implication $pv(p)/(p - 1) \in v(K) \Rightarrow v(p)/(p - 1) \in v(K)$. \par \medskip \begin{lemm} \label{lemm4.4} Let $(K, v)$ be an {\rm HDV}-field of mixed characteristic $(0, p)$, and let $\varepsilon$ be a primitive $p$-th root of unity in $K_{\rm sep}$. Suppose that $\lambda \in \nabla_{0}(K)$, put $\pi = \lambda - 1$, $\kappa = v(p)/(p - 1)$, and let $K^{\prime}$ be an extension of $K$ in $K_{\rm sep}$ obtained by adjunction of a $p$-th root $\lambda^{\prime}$ of $\lambda$. Then $\lambda$ is $K$-normal if and only if one of the following three conditions is fulfilled: \par {\rm (a)} $v(\pi) < p\kappa$ and $v(\pi) \notin pv(K)$; when this holds, $K^{\prime}/K$ is {\rm TR}; \par {\rm (b)} $v(\pi) < p\kappa$ and $\pi = \pi_{1}^{p}a$, for some $\pi_{1} \in K$, $a \in O_{v}(K)^{\ast}$ with $\hat a \notin \widehat K^{\ast p}$; in this case, $\hat a \in \widehat K^{\prime p}$ and $\widehat K^{\prime}/\widehat K$ is purely inseparable of degree $p$; \par {\rm (c)} $v(\pi) = p\kappa$, and for any $\pi_{1} \in K$ with $v(\pi_{1}) = \kappa$, the polynomial $X^{p} + \hat bX - \hat d \in \widehat K[X]$ is irreducible over $\widehat K$, $\hat b$ and $\hat d$ being the residue classes of the elements $b = p/\pi_{1}^{p-1}$ and $d = \pi/\pi_{1}^{p}$, respectively; when this holds, $K^{\prime}/K$ is inertial and $K(\sqrt[p-1]{-b} \ ) = K(\varepsilon)$. \end{lemm} \par \medskip \begin{proof} Put $\pi^{\prime} = \lambda^{\prime} - 1$. The conditions of the lemma show that $v(\pi) > 0$ and $\lambda^{\prime} \in \nabla_{0}(K^{\prime})$, i.e. $v(\pi^{\prime}) > 0$. In view of Lemma \ref{lemm4.3} (b), one may assume, for the proof, that $v(\pi) \le p\kappa$. Hence, by Lemma \ref{lemm4.1} (a) and (b) (applied to $(1 + \pi^{\prime})^{p} = 1 + \pi$), $v(\pi^{\prime}) \le \kappa$, where equality holds only in case $v(\pi) = p\kappa$. Our proof proceeds in three steps. \par\smallskip Step 1. Let $v(\pi) < p\kappa$ and let $\pi$ violate both conditions (a) and (b). Then $\lambda = 1 + \pi_{0}^{p}a_{0}^{p} + \pi_{0}^{\prime}$, for some $a_{0} \in O_{v}(K)^{\ast}$ and $\pi_{0}, \pi_{0}^{\prime} \in K$, such that $v(\pi_{0}^{p}) = v(\pi) < v(\pi_{0}^{\prime})$. Therefore, applying Lemma \ref{lemm4.1} to $(1 - \pi_{0}a_{0})^{p}$, one obtains that $v(\lambda(1 - \pi_{0}a_{0})^{p} - 1) > v(\pi) = v(\lambda - 1)$; hence, $\lambda$ is not $K$-normal. \par\smallskip Step 2. Assume now that $\pi$ satisfies condition (a) or (b) of Lemma \ref{lemm4.4}. Then, for each $\tilde \lambda \in \nabla_{0}(K)$ with $v(\tilde \lambda - 1) > v(\pi)$, the element $\lambda \tilde \lambda - 1$ has value $v(\lambda \tilde \lambda - 1) = v(\pi)$ and satisfies the same condition as $\pi$. Moreover, under condition (b), $(\lambda \tilde \lambda - 1)/\pi_{1}^{p}$ lies in $O_{v}(K)^{\ast}$ and its residue class equals $\hat a$. Observing also that $\tilde \lambda^{-1} \in \nabla_{0}(K)$ and $v(\tilde \lambda^{-1} - 1) = v(\tilde \lambda - 1)$, one concludes that the $K$-normality of $\lambda$ will be proved, if we show that $\lambda \notin K^{\ast p}$. The equality $(1 + \pi^{\prime})^{p} = 1 + \pi = \lambda$ and Lemma \ref{lemm4.1} (a) imply $v(\pi - \pi^{\prime p}) > v(\pi) = v(\pi^{\prime p}) = pv(\pi^{\prime})$, proving that $v(\pi) \in pv(K^{\prime})$. When $v(\pi) \notin pv(K)$, this means that $K^{\prime}/K$ is TR, $[K^{\prime}\colon K] = p$ and $\lambda \notin K^{\ast p}$. Similarly, it follows from Lemma \ref{lemm4.1} (a) that if $\pi = \pi_{1}^{p}a$, where $\pi_{1} \in K$ and $a \in O_{v}(K)^{\ast}$, then $\pi^{\prime} = \pi_{1}a_{1}$, for some $a_{1} \in O_{v}(K^{\prime})^{\ast}$ with $v(a - a_{1}^{p}) > 0$; hence, $\hat a_{1}^{p} = \hat a$, proving that $\hat a \in \widehat K^{\prime p}$. This shows that if $\hat a \notin \widehat K^{p}$, then $[K^{\prime}\colon K] = [\widehat K^{\prime}\colon \widehat K] = p$, $\widehat K^{\prime}/\widehat K$ is purely inseparable and $\lambda \notin K^{\ast p}$. Thus our assumptions on $\pi$ guarantee that, in both cases, $\tilde \lambda^{-1}\lambda \notin K^{\ast p}$, for any $\tilde \lambda \in \nabla_{0}(K)$ with $v(\tilde \lambda - 1) > v(\pi)$, which implies that $\lambda$ is $K$-normal. \par Step 3. Suppose that $v(\pi) = p\kappa$, take $\pi_{1} \in K$ so that $v(\pi_{1}) = \kappa$, define $b$ and $d$ as in Lemma \ref{lemm4.4} (c), and put $g(X) = \pi_{1}^{-p}[(1 + \pi_{1}X)^{p} - \lambda]$. It is easily verified that $v(b) = v(d) = 0$, $g(\pi^{\prime}/\pi_{1}) = 0$, and $g(X) \in K[X]$ is monic; also, it follows from Lemma \ref{lemm3.8} (a) that $g(X)$ is irreducible over $K$ if and only if $\lambda \notin K^{\ast p}$. At the same time, Lemma \ref{lemm4.3} (b) implies $\lambda \notin K^{\ast p}$ if and only if $\lambda$ is $K$-normal. Note further that, by Lemma \ref{lemm4.2}, $g(X) \in O_{v}(K)[X]$ and its reduction $\hat g(X)$ modulo $M_{v}(K)$ equals the trinomial $X^{p} + \hat bX - \hat d \in \widehat K[X]$. In addition, the equality $v(b) = 0$ shows that $\hat g(X)$ is separable. Using (3.1) (a), one also proves that $\hat g(X)$ is irreducible over $\widehat K$ if and only if $\lambda \notin K^{\ast p}$. It is now easy to see that $\lambda$ is $K$-normal if and only if $K^{\prime}/K$ is inertial with $[K^{\prime}\colon K] = p$. \par For the rest of the proof of Lemma \ref{lemm4.4} (c), we assume that $\lambda \notin K^{\ast p}$, fix a root $\xi \in K_{\rm sep}$ of the binomial $b(X) = X^{p-1} + b$, and put $B = K(\xi)$. We first show that $[B\colon K] \mid p - 1$. As char$(\widehat K) = p$, $\widehat K$ contains a primitive $(p - 1)$-th root of unity $\hat \rho$, and since $v$ is Henselian, (3.1) (a), applied to the binomial $X^{p-1} - 1$, shows that $\hat \rho$ can be lifted to such a root $\rho \in K$. Hence, the fact that $[B\colon K] \mid p - 1$ follows from Galois theory (cf. \cite{L}, Ch. VI, Theorem~6.2). \par Finally, we prove that $B = K(\varepsilon)$. It is easily verified that $\pi^{\prime}/(\pi_{1}\xi)$ is a root of the monic polynomial $h(X) = \xi^{-p}g(\xi X)$. Observing that $v(\xi) = 0$, one obtains from the already noted properties of $g(X)$ that $h(X) \in O_{v}(B)[X]$ and the reduction $\hat h(X) \in \widehat B[X]$ of $h(X)$ modulo $M_{v}(B)$ is an Artin-Schreier trinomial. Moreover, it becomes clear that $\hat h(X) = \hat \xi^{-p}\hat g(\hat \xi X)$, which implies, in conjunction with Lemma \ref{lemm3.8} (b) (and the divisibility of $p - 1$ by $[B\colon K]$), that $\hat g(X)$ and $\hat h(X)$ are irreducible over $\widehat B$. Hence, by Lemma \ref{lemm3.3} and the Artin-Schreier theorem (cf. \cite{L}, Ch. VI, Sect. 6), applied to $\hat h(X)$, $K^{\prime}B/B$ is an inertial Galois extension of degree $p$. In view of the definition of $K^{\prime}$, this proves that $\varepsilon \in B$. Let now $\widehat K_{\varepsilon}$ be the residue field of $(K(\varepsilon), v)$, and set $g_{0}(X) = (1 - \varepsilon)^{-p}[(1 + (1 - \varepsilon)X)^{p} - \lambda]$. Then $g_{0}(X)$ is monic, and it follows from Lemmas \ref{lemm4.2} (a) and \ref{lemm3.9} (a) that $g_{0}(\pi^{\prime}/(1 - \varepsilon)) = 0$ and $g_{0}(X) \in O_{v}(K(\varepsilon))[X]$. Moreover, Lemmas \ref{lemm3.8} (a) and \ref{lemm3.9} (c) imply that the reduction $\hat g_{0}(X) \in \widehat K_{\varepsilon}[X]$ is an Artin-Schreier trinomial irreducible over $\widehat K_{\varepsilon}$. Lemma \ref{lemm4.2} (b), applied to $g(X)$ and $g_{0}(X)$, further indicates that if $c = (1 - \varepsilon)/\pi_{1}$, then $v(c) = 0$ and $\hat c^{p-1} = -\hat b \in \widehat K_{\varepsilon}$. Hence, by (3.1) (a), $b(X)$ has a root in $K(\varepsilon)$. As $K$ contains a primitive $(p - 1)$-th root of unity, this means that all roots of $b(X)$ in $K_{\rm sep}$ in fact lie in $K(\varepsilon)$. It is now obvious that $B = K(\varepsilon)$, so Lemma \ref{lemm4.4} is proved. \end{proof} \par \medskip It follows from Lemmas \ref{lemm3.2} (b) and \ref{lemm4.4} that if $\alpha \in K$ is normal over $K$, then it is normal over any finite extension of $K$ of degree prime to $p$. \par \medskip\noindent {\bf Definition~2.} In the setting of Lemma \ref{lemm4.4}, an element $\lambda \in \nabla_{0}(K)$ is called (u)-normal over $K$, where $(u) \in \{(a), (b), (c)\}$, if it satisfies condition (u). \par \medskip Next we present Albert's characterization (\cite{A1}, Ch. IX, Theorem~6) of Galois extensions of prime degree different from the characteristic of the ground field. The characterization is based on Lemma \ref{lemm3.8} (c). \par \medskip \begin{lemm} \label{lemm4.5} Assume that $K$ is an arbitrary field, $\varepsilon$ is a primitive $p$-th root of unity in $K_{\rm sep}$, for some $p \in \mathbb P \setminus \{{\rm char}(K)\}$, and $\varphi$ is a generator of $\mathcal{G}(K(\varepsilon)/K)$. Fix an integer $s > 0$ satisfying $\varphi(\varepsilon) = \varepsilon^{s}$, and let $\lambda$ be an element of $K(\varepsilon)^{\ast}$. Then the following conditions are equivalent: \par {\rm (a)} $\lambda \notin K(\varepsilon)^{\ast p}$ and $\varphi(\lambda)\lambda^{-s} \in K(\varepsilon)^{\ast p}$; \par {\rm (b)} If $L_{\lambda}^{\prime} = K(\varepsilon)(\sqrt[p]{\lambda})$, then $L_{\lambda}^{\prime}$ contains as a subfield a Galois extension $L_{\lambda}$ of $K$ of degree $p$ (equivalently, the extension $L_{\lambda}^{\prime}/K$ is Galois with $\mathcal{G}(L_{\lambda}^{\prime}/K)$ cyclic and $[L_{\lambda}^{\prime}\colon K] = p[K(\varepsilon)\colon K]$). \end{lemm} \par \medskip Denote by $K(p, 1)$ the compositum of the extensions of $K$ in $K(p)$ of degree $p$, put $K_{\mathcal{G}} = \{\alpha \in K(\varepsilon)^{\ast}\colon \ \varphi(\alpha)\alpha^{-s} \in K(\varepsilon)^{\ast p}\}$, and fix $\ell \in \mathbb{N}$ so that $s\ell \equiv 1 ({\rm mod} \ p)$. Obviously, $K_{\mathcal{G}}$ is a subgroup of $K(\varepsilon)^{\ast}$ including $K(\varepsilon)^{\ast p}$. Note also that $K(p, 1)/K$ is a Galois extension with $\mathcal{G}(K(p, 1)/K)$ abelian of period $p$; this can be deduced from Galois theory and the normality of maximal subgroups of nontrivial finite $p$-groups (see \cite{L}, Ch. I, Sect. 6; Ch. VI, Theorem~1.14).
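The element $\ell$ satisfies the purely arithmetic congruences used repeatedly below: with $m$ the multiplicative order of $s$ modulo $p$, one has $\ell^{m} \equiv 1$, $s \equiv \ell^{m-1}$, and $s\ell^{j} \equiv \ell^{j-1} \ ({\rm mod} \ p)$ for $j = 1, \dots, m - 1$. Since these are statements about integers, they can be checked directly for sample values; a minimal Python sketch (the choice $p = 13$, $s = 5$ is our own illustration, with $s$ playing the role of the exponent in $\varphi(\varepsilon) = \varepsilon^{s}$):

```python
# Numerical check (illustration only) of the congruences relating s
# and ell = s^{-1} mod p: with m the multiplicative order of s mod p,
# ell**m == 1, s == ell**(m-1), and s*ell**j == ell**(j-1) (mod p).
p, s = 13, 5                    # sample values; s has order 4 mod 13

m = 1
while pow(s, m, p) != 1:        # multiplicative order of s modulo p
    m += 1

ell = pow(s, -1, p)             # s*ell == 1 (mod p); Python 3.8+
assert pow(ell, m, p) == 1
assert s % p == pow(ell, m - 1, p)
for j in range(1, m):
    assert (s * pow(ell, j, p)) % p == pow(ell, j - 1, p)
```

Here $m = 4$ and $\ell = 8$, and all the asserted congruences hold.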
With this notation, Lemma \ref{lemm4.5} can be supplemented as follows: \par \muedskip \betaegin{lemm} \lambdaambdabel{lemm4.6} {\rm (a)} There is a bijection $\varrho $ of the set $\Sigma _{p}$ of finite extensions of $K$ in $K(p,1)$ upon the set $\muathcal{G}_{p}$ of finite subgroups of $K _{\muathcal{G}}/K(\varepsilon ) ^{\alphast p}$, such that \par\nuoindent $\varrho (\Lambda ) \cong \muathcal{G}(\Lambda /K) \cong \muathcal{G}(\Lambda (\varepsilon )/K(\varepsilon ))$, for each $\Lambda \in \Sigma _{p}$; \par {\rm (b)} For each $\lambdaambdambda \in K(\varepsilon ) ^{\alphast }$, the product $\Omega (\lambdaambdambda ) = \prod _{j=0} ^{m-1} \varphi ^{j}(\lambdaambdambda ) ^{\varepsilonll (j)}$ lies in $K _{\muathcal{G}}$, where $m = [K(\varepsilon )\colon K]$ and $\varepsilonll (j) = \varepsilonll ^{j}$, $j = 0, \deltaots , m - 1$. \varepsilonnd{lemm} \par \muedskip \betaegin{proof} It follows from Lemma \ref{lemm3.8} (c) and Galois theory (cf. \cite{L}, Ch. VI, Theorem~1.12) that the mapping $\sigmaigma $ of $\Sigma _{p}$ into the set $\Sigma _{p} ^{\prime }$ of finite extensions of $K(\varepsilon )$ in $K(p, 1)(\varepsilon )$, by the rule $\Lambda \to \Lambda (\varepsilon )$, is bijective with $\muathcal{G}(\Lambda /K) \cong \muathcal{G}(\Lambda (\varepsilon )/K(\varepsilon ))$, for each $\Lambda \in \Sigma _{p}$. Moreover, by Kummer theory and Lemma \ref{lemm4.5}, there is a bijection $\varrho ': \Sigma _{p} ^{\prime } \to \muathcal{G}_{p}$, such that $\varrho '(\Lambda ^{\prime }) \cong \muathcal{G}(\Lambda ^{\prime }/K(\varepsilon ))$, for each $\Lambda ^{\prime } \in \Sigma _{p} ^{\prime }$. Therefore, the composition $\varrho = \varrho ' \circ \sigmaigma $ has the properties required by Lemma \ref{lemm4.6} (a). \par We prove Lemma \ref{lemm4.6} (b). If $\varepsilon \in K$, then the assertion is obvious, so we assume that $\varepsilon \nuotin K$, i.e. $m \gammae 2$. 
It is easily verified that $$\varphi (\Omega (\lambdaambdambda )) = \prod _{j=0} ^{m-1} \varphi ^{j+1}(\lambdaambdambda ) ^{\varepsilonll (j)} = \prod _{j=1} ^{m} \varphi ^{j}(\lambdaambdambda ) ^{\varepsilonll (j-1)} = \lambdaambdambda ^{\varepsilonll (m-1)}\prod _{j=1} ^{m-1} \varphi ^{j}(\lambdaambdambda ) ^{\varepsilonll (j-1)},$$ \par\vskip0.15truecm\nuoindent ${\rm and} \ \Omega (\lambdaambdambda ) ^{s} = \Omega (\lambdaambdambda ^{s}) = \prod _{j=0} ^{m-1} \varphi ^{j}(\lambdaambdambda ) ^{s.\varepsilonll (j)}$, for each $\lambdaambdambda \in K(\varepsilon )^{\alphast }$. Since \par\vskip0.14truecm\nuoindent $s ^{m} \varepsilonquiv s\varepsilonll \varepsilonquiv 1 ({\rm mod} \ p)$, it follows that $\varepsilonll ^{m} \varepsilonquiv 1 ({\rm mod} \ p)$, $s \varepsilonquiv \varepsilonll ^{m-1} ({\rm mod} \ p)$, \par\vskip0.14truecm\nuoindent $${\rm and} \ s.\varepsilonll (j) \varepsilonquiv \varepsilonll (j - 1) ({\rm mod} \ p), j = 1, \deltaots, m - 1,$$ so our calculations prove that $\varphi (\Omega (\lambdaambdambda )).\Omega (\lambdaambdambda ) ^{-s} \in K(\varepsilon ) ^{\alphast p}$, as claimed. \varepsilonnd{proof} \par \muedskip \betaegin{rema} \lambdaambdabel{rema4.7} Let $(K, v)$ be an HDV-field of mixed characteristic $(0, p)$, and let $\varepsilon $ be a primitive $p$-th root of unity in $K _{\rm sep}$. Then: \par {\rm (a)} The existence of a {\rm (c)}-normal element over $K$ ensures that $\varepsilon \in K _{\rm ur}$. \par {\rm (b)} It can be deduced from Lemma \ref{lemm4.5} that if $K(\varepsilon )/K$ is TR and $\varepsilon \nuotin K$ (this holds, for example, if $v(p)$ generates $v(K)$), then each Galois extension $L$ of $K$ of degree $p$ is $K$-isomorphic to $L _{\lambdaambdambda (L)}$, for some $\lambdaambdambda (L) \in K _{\muathcal{G}} \cap n+abla _{0}(K(\varepsilon ))$. 
\par {\rm (c)} When $\langle v(p)\rangle = v(K)$, we have $\langle v(1 - \varepsilon )\rangle = v(K(\varepsilon ))$, which enables one to obtain from Lemma \ref{lemm4.5}, the preceding observation and Lemma \ref{lemm4.4} (applied over $K(\varepsilon )$) that a Galois extension of $K$ of degree $p$ is either inertial or TR (this is a special case of Miki's theorem, see \cite{MKu}, 12.2). Moreover, it turns out that degree $p$ extensions of $K _{\rm ur}$ in $K _{\rm ur}(p)$ are TR (whereas finite extensions of $K _{\rm ur}$ in $K _{\rm ur}(p)$ need not be TR unless $\widehat K$ is perfect, see Lemmas \ref{lemm5.4} and \ref{lemm3.2} {\rm (b)}). \end{rema} \par \medskip We conclude this section with the following lemma. As demonstrated in Section 6, it makes it possible to turn Lemmas \ref{lemm4.3}, \ref{lemm4.4} and \ref{lemm4.6} (a) into the tools we need for the proof of Lemma \ref{lemm2.2} in the case where $v(p) \in pv(K)$. \par \medskip \begin{lemm} \label{lemm4.8} Let $(K, v)$ be an {\rm HDV}-field of mixed characteristic $(0, p)$. Fix a primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, a generator $\varphi $ of $\mathcal{G}(K(\varepsilon )/K)$, and some $s \in \mathbb{N}$ so that $\varphi (\varepsilon ) = \varepsilon ^{s}$. Take any $\alpha \in K(\varepsilon )$ with $v(\alpha ) > v(p)$, and put $\lambda = 1 + \alpha $. Then $\varphi (\lambda )\lambda ^{-s} \in K(\varepsilon ) ^{\ast p}$ in case $v(\varphi (\alpha ) - s\alpha ) > pv(p)/(p - 1)$. \par\vskip0.12truecm\noindent This holds, if $\alpha = p(1 - \varepsilon )\xi ^{-1}$, where $\xi \in K ^{\ast }$ with $v(\xi ) < v(p)/(p - 1)$. \end{lemm} \par \medskip \begin{proof} Put $\kappa = v(p)/(p - 1)$, and {\it use the relation $\approx $ introduced on page \pageref{approx}}.
Since for $j \ge 2$, $v(\alpha ^{j}) > 2v(p) \ge p\kappa $, Newton's binomial formula shows that \par\vskip0.11truecm\noindent $\lambda ^{s} \approx 1 + s\alpha $; hence, $\lambda ^{-s} \approx 1 - s\alpha $. Note also that \par\vskip0.12truecm\noindent $v(\varphi (\alpha )) = v(\alpha )$ because $v$ is Henselian (apply (3.1) (b)). Thus, $$\varphi (\lambda )\lambda ^{-s} \approx (1 + \varphi (\alpha ))(1 - s\alpha ) \approx (1 + \varphi (\alpha ) - s\alpha ) \approx 1.$$ Hence, $\varphi (\lambda )\lambda ^{-s} \in K(\varepsilon ) ^{\ast p}$, by Lemma \ref{lemm4.3} (a). \par\vskip0.15truecm Let now $\alpha = p(1 - \varepsilon )\xi ^{-1}$, where $\xi \in K ^{\ast }$ with $0 < v(\xi ) < \kappa $. \par\vskip0.171truecm\noindent Then Lemma \ref{lemm3.9} (b) implies the following, for each $t \in \mathbb{N}$ not divisible by $p$: $$v(1 - \varepsilon ^{t} - t(1 - \varepsilon )) = v((1 - \varepsilon ) \sum _{j=0} ^{t-1} (\varepsilon ^{j} - 1)) \ge 2\kappa .$$ Therefore, $v(\alpha ) = v(p) + \kappa - v(\xi ) > v(p)$ and \par\vskip0.171truecm\noindent $v(\varphi (\alpha ) - s\alpha ) = v(p[(1 - \varepsilon ^{s}) - s(1 - \varepsilon )]\xi ^{-1}) \ge v(p) + 2\kappa - v(\xi ) > p\kappa .$ \end{proof} \medskip \section{\bf Proof of Lemma \ref{lemm2.2} in case char$(K) = 0$ and $v(p) \notin pv(K)$} \par \medskip In this section, we consider degree $p$ cyclic extensions related to Lemma \ref{lemm4.4} (a) and (b), which allows us to prove Lemma \ref{lemm2.2} and Theorem \ref{theo2.1} in the case where char$(K) = 0$ and $v(p) \notin pv(K)$. Our starting point is the following lemma.
\par \medskip \begin{lemm} \label{lemm5.1} Let $(K, v)$ be an {\rm HDV}-field of mixed characteristic $(0, p)$, and let $\varepsilon \in K _{\rm sep}$ be a primitive $p$-th root of unity, $\varphi $ a generator of $\mathcal{G}(K(\varepsilon )/K)$, $s$ and $\ell $ positive integers, such that $\varphi (\varepsilon ) = \varepsilon ^{s}$ and $s\ell \equiv 1 ({\rm mod} \ p)$. Assume that $[K(\varepsilon )\colon K] = m$, and $\lambda = 1 + (1 - \varepsilon ) ^{p}\pi ^{-1}$, for some $\pi \in K$ with $0 < v(\pi ) < p\kappa $, where $\kappa = v(p)/(p - 1)$. Denote by $\bar \lambda $ the element $\Omega (\lambda )$ defined in Lemma \ref{lemm4.6} {\rm (b)}, and let $L _{\bar \lambda }$ be the extension of $K$ in $K _{\rm sep}$ associated with $\bar \lambda $ in accordance with Lemma \ref{lemm4.5} {\rm (b)}. Then: \par {\rm (a)} If $v(\pi ) \notin pv(K)$, then $\lambda $ and $\bar \lambda $ are {\rm (a)}-normal over $K(\varepsilon )$; in addition, $[L _{\bar \lambda }\colon K] = p$, and $L _{\bar \lambda }/K$ is both Galois and {\rm TR}; \par {\rm (b)} If $\pi = \pi _{1} ^{p}a$, where $\pi _{1} \in K$, $a \in O _{v}(K) ^{\ast }$ and $\hat a \notin \widehat K ^{p}$, then $\lambda $ and $\bar \lambda $ are {\rm (b)}-normal over $K(\varepsilon )$; also, $L _{\bar \lambda }/K$ is Galois, $[L _{\bar \lambda }\colon K] = p$ and $\widehat L _{\bar \lambda } = \widehat K(\sqrt[p]{\hat a})$. \end{lemm} \par \medskip \begin{proof} Our assumptions and Lemma \ref{lemm3.8} (c) imply $v(\pi ) \in pv(K)$ if and only if $v(\pi ) \in pv(K(\varepsilon ))$, and $v(\lambda - 1) \in pv(K(\varepsilon ))$ if and only if $v(\pi ) \in pv(K)$.
They prove that $\widehat K _{\varepsilon } ^{p} \cap \widehat K = \widehat K ^{p}$, $\widehat K _{\varepsilon }$ being the residue field of $(K(\varepsilon ), v)$. \par\vskip0.11truecm\noindent Putting $e _{n} = \sum _{\nu =0} ^{n-1} \varepsilon ^{\nu }$, for each $n \in \mathbb{N}$, one obtains from Lemma \ref{lemm3.9} (a), (b) \par\vskip0.11truecm\noindent that $v(n - \varepsilon ^{u}.e _{n}) \ge v(1 - \varepsilon )$, for any pair $u, n \in \mathbb{N}$ with $p \nmid n$. Since $p \mid n ^{p} - n$ \par\vskip0.11truecm\noindent (by Fermat's little theorem), $v(1 - \varepsilon ) = \kappa $, and $n ^{p} - e _{n} ^{p} = \prod _{u=0} ^{p-1} (n - \varepsilon ^{u}.e _{n})$, \par\vskip0.11truecm\noindent this shows that $v(e _{n} ^{p} - n) \ge v(p),$ which implies the following: \par \vskip0.25truecm\noindent (5.1) $v((1 - \varepsilon ^{n}) ^{p} - n(1 - \varepsilon ) ^{p}) \ge v((1 - \varepsilon ) ^{p}) + v(p) > p\kappa .$ \par \vskip0.22truecm\noindent Our proof of Lemma \ref{lemm5.1} also relies on the following facts: \par \vskip0.22truecm\noindent (5.2) (a) $v(\bar \lambda - (1 + m(1 - \varepsilon ) ^{p}\pi ^{-1})) > v((1 - \varepsilon ) ^{p}\pi ^{-1})$; \par \medskip (b) $v(\bar \lambda - 1) = v(m(1 - \varepsilon ) ^{p}\pi ^{-1}) = p\kappa - v(\pi )$. \par \vskip0.22truecm\noindent The equalities in (5.2) (b) follow from (5.2) (a) (and the equality $v(m) = 0$ implied by Lemma \ref{lemm3.8} (c)). To prove (5.2) (a) {\it we use the relation $\sim $ defined on page \pageref{approx} ($\sim $ depends on $\pi $)}.
As $s\ell \equiv 1 ({\rm mod} \ p)$, the relations below, where $s(j) = s ^{j}$ and $\ell (j) = \ell ^{j}$, include the content of (5.2) (a) (and forms of (5.1)): $$\bar \lambda = \prod _{j=0} ^{m-1} [1 + (1 - \varepsilon ^{s(j)}) ^{p}\pi ^{-1}] ^{\ell (j)} \sim 1 + \sum _{j=0} ^{m-1} \ell (j)(1 - \varepsilon ^{s(j)}) ^{p}\pi ^{-1}$$ $$\sim 1 + \sum _{j=0} ^{m-1} \ell (j)s(j)(1 - \varepsilon ) ^{p}\pi ^{-1} \sim 1 + m(1 - \varepsilon ) ^{p}\pi ^{-1}.$$ \noindent Statements (5.2) and observations at the beginning of our proof imply the former parts of Lemma \ref{lemm5.1} (a) and (b), so we assume further that either $v(\pi ) \notin pv(K)$ or $\pi = \pi _{1} ^{p}a$, for some $\pi _{1} \in K$ and $a \in O _{v}(K) ^{\ast }$ with $\hat a \notin \widehat K ^{p}$. In the former case, $\lambda $ and $\bar \lambda $ are (a)-normal (over $K(\varepsilon )$), and in the latter one, they are (b)-normal. Let $L _{\bar \lambda } ^{\prime } = K(\varepsilon , \bar \lambda ')$, where $\bar \lambda ' \in K _{\rm sep}$ and $\bar \lambda '^{p} = \bar \lambda $. The normality of $\bar \lambda $ over $K(\varepsilon )$ ensures that $[L _{\bar \lambda } ^{\prime }\colon K(\varepsilon )] = p$. Using Lemma \ref{lemm4.4}, one obtains that: if $\bar \lambda $ is (a)-normal, then $L _{\bar \lambda } ^{\prime }/K(\varepsilon )$ is TR; when $\bar \lambda $ is (b)-normal, $\widehat L _{\bar \lambda } ^{\prime }/\widehat K _{\varepsilon }$ is inseparable of degree $p$ with $\hat a \in \widehat L _{\bar \lambda } ^{\prime p}$.
Also, it follows from Lemmas \ref{lemm4.5}, \ref{lemm4.6} (b) and the $K(\varepsilon )$-normality of $\bar \lambda $ that $L _{\bar \lambda } ^{\prime } = L _{\bar \lambda }(\varepsilon )$, and the extension $L _{\bar \lambda }$ of $K$ in $L _{\bar \lambda } ^{\prime }$ pointed out in the statement of Lemma \ref{lemm5.1} is Galois with $[L _{\bar \lambda }\colon K] = p$. As $[L _{\bar \lambda } ^{\prime }\colon L _{\bar \lambda }] = m$ and $m \mid p - 1$, these observations prove the following: $L _{\bar \lambda }/K$ is TR if and only if so is $L _{\bar \lambda } ^{\prime }/K(\varepsilon )$; $\widehat L _{\bar \lambda }/\widehat K$ is inseparable of degree $p$ if and only if so is $\widehat L _{\bar \lambda } ^{\prime }/\widehat K _{\varepsilon }$. Note finally that $[\widehat L _{\bar \lambda } ^{\prime }\colon \widehat L _{\bar \lambda }] \mid [L _{\bar \lambda } ^{\prime }\colon L _{\bar \lambda }]$. This implies together with Lemma \ref{lemm3.8} (b) that if $\bar \lambda $ is (b)-normal, then $\hat a \in \widehat L _{\bar \lambda } ^{p}$, which completes our proof. \end{proof} \par \medskip Lemma \ref{lemm3.6} and our next lemma prove Theorem \ref{theo2.1} in case char$(K) = 0$ and $v(p) \notin pv(K)$. In this situation, our proof of the lemma relies on the fact (see \cite{FV}, Ch. 2, (3.6), and \cite{E3}, Theorem~15.3.5) that a finite extension $E ^{\prime }$ of a discrete valued field $(E, w)$ is TR relative to $w$ if and only if $E ^{\prime }/E$ has a primitive element $\theta $ whose minimal polynomial $f$ over $E$ is Eisenstein at $w$, i.e. $f$ is monic, all of its coefficients but the leading one lie in $M _{w}(E)$, and the free coefficient of $f$ generates $M _{w}(E)$ as an ideal of $O _{w}(E)$.
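\par \medskip\noindent To illustrate the latter criterion by a standard example (included only for orientation, and not used in the sequel): for the field $\mathbb{Q}_{p}$ of $p$-adic numbers with its $p$-adic valuation $v$, the polynomial $$f(X) = X ^{p} - p$$ is Eisenstein at $v$, so the extension $\mathbb{Q}_{p}(\theta )/\mathbb{Q}_{p}$, where $\theta ^{p} = p$, is TR of degree $p$; this agrees with the equality $v(\theta ) = v(p)/p$, which shows that $e(\mathbb{Q}_{p}(\theta )/\mathbb{Q}_{p}) = p = [\mathbb{Q}_{p}(\theta )\colon \mathbb{Q}_{p}]$.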
\par \medskip \begin{lemm} \label{lemm5.2} Let $(K, v)$ be an {\rm HDV}-field of mixed characteristic $(0, p)$. Suppose that one of the following two conditions is satisfied: \par {\rm (a)} $\widehat K$ is an infinite perfect field; \par {\rm (b)} $\widehat K$ is imperfect and $v(p) \notin pv(K)$. \par\noindent Then there exist {\rm TR} and Galois extensions $M _{\mu }/K$, $\mu \in \mathbb{N}$, such that $[M _{\mu }\colon K]$ $= p ^{\mu }$ and $\mathcal{G}(M _{\mu }/K)$ is abelian of period $p$, for each $\mu $. \end{lemm} \par \smallskip \begin{proof} We assume, in agreement with conditions (a) and (b), that $\widehat K$ is infinite. Since the prime subfield, say $\mathbb{F}$, of $\widehat K$ is finite, this ensures that $\widehat K/\mathbb{F}$ is an infinite extension, whence, there is a sequence $\tilde b = b _{\mu } \in O _{v}(K) ^{\ast }$, $\mu \in \mathbb{N}$, such that the system $\bar b = \hat b _{\mu } \in \widehat K$, $\mu \in \mathbb{N}$, is linearly independent over $\mathbb{F}$. Denote by $V$ the $\mathbb{F}$-linear span of the set $\{\hat b _{\mu }\colon \mu \in \mathbb{N}\}$ and fix a primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, a generator $\varphi $ of $\mathcal{G}(K(\varepsilon )/K)$, and integers $s$, $\ell $ as in Lemma \ref{lemm5.1}. Define $K _{\mathcal{G}}$ and $\Omega \colon K(\varepsilon ) ^{\ast } \to K _{\mathcal{G}}$ as in Lemma \ref{lemm4.6}, and put $m = [K(\varepsilon )\colon K]$ and $\lambda _{\mu } = \Omega (1 + (1 - \varepsilon ) ^{p}\pi ^{-1}b _{\mu })$, $\mu \in \mathbb{N}$, where $\pi \in K$ is fixed so that $v(\pi ) \notin pv(K)$ and $0 < v(\pi ) \le v(p)$. Take a $p$-th root $\eta _{\mu } \in K _{\rm sep}$ of $\lambda _{\mu }$, for each $\mu \in \mathbb{N}$, and consider the fields $L _{\mu } ^{\prime } = K(\varepsilon , \eta _{\mu })$, $\mu \in \mathbb{N}$.
Lemmas \ref{lemm4.4} and \ref{lemm4.5} show that $[L _{\mu } ^{\prime }\colon K(\varepsilon )] = p$ and there is a unique Galois extension $L _{\mu }$ of $K$ in $L _{\mu } ^{\prime }$ of degree $[L _{\mu }\colon K] = p$. Let $L _{\infty } ^{\prime }$ be the compositum of the fields $L _{\mu } ^{\prime }$, $\mu \in \mathbb{N}$, and $\Lambda $ be the subgroup of $K(\varepsilon ) ^{\ast }$ generated by the set $K(\varepsilon ) ^{\ast p} \cup \{\lambda _{\mu }\colon \ \mu \in \mathbb{N}\}$. Obviously, $\Lambda $ is a subgroup of $K _{\mathcal{G}}$ including $K(\varepsilon ) ^{\ast p}$. It follows from the assumption on the sequence $\tilde b$ that, for each $h \in \Lambda \setminus K(\varepsilon ) ^{\ast p}$, the coset $hK(\varepsilon ) ^{\ast p}$ contains an element of the form $\lambda (h) = 1 + m(1 - \varepsilon ) ^{p}\pi ^{-1}\beta _{h} + \pi (h)$, where $\pi (h) \in K(\varepsilon )$, $v(\pi (h)) > v(m(1 - \varepsilon ) ^{p}\pi ^{-1})$, $\beta _{h} \in O _{v}(K) ^{\ast }$ and $\hat \beta _{h} \in V$. Therefore, by the assumptions on $\pi $, $\lambda (h)$ is (a)-normal over $K(\varepsilon )$ (so Lemma \ref{lemm5.1} (a) applies to it). This implies $\hat \beta _{h}$ is uniquely determined by $h$ and $\pi $, and does not depend on the choice of $\lambda (h)$ (see Step 2 of the proof of Lemma \ref{lemm4.4}). More precisely, if $h = \lambda _{\mu _{1}} ^{k_{1}} \dots \lambda _{\mu _{y}} ^{k _{y}}$, for some $y \in \mathbb{N}$, and $k _{1}, \dots , k _{y} \in \mathbb{N}$, with $p \nmid k _{j'}$, for at least one index $j'$, then $h \notin K(\varepsilon ) ^{\ast p}$, so one may put $\lambda (h) = h$ and $\beta _{h} = \sum _{j=1} ^{y} k _{j}b _{\mu _{j}}$, whence, $\hat \beta _{h} = \sum _{j=1} ^{y} k _{j}\hat b _{\mu _{j}}$.
These observations prove that \par \vskip0.22truecm\noindent (5.3) $\{\lambda _{\mu }K(\varepsilon ) ^{\ast p}\colon \mu \in \mathbb{N}\}$ is a minimal generating set of $\Lambda /K(\varepsilon ) ^{\ast p}$, and there is a unique isomorphism $\rho $ of $\Lambda /K(\varepsilon ) ^{\ast p}$ upon the additive group of $V$, which maps the coset $\lambda _{\mu }K(\varepsilon ) ^{\ast p}$ into $\hat b _{\mu }$, for each $\mu \in \mathbb{N}$. \par \vskip0.22truecm\noindent Statement (5.3), the argument proving it, and Lemmas \ref{lemm4.6} and \ref{lemm5.1}~(a) imply that the fields $L _{\infty } ^{\prime }$ and $L _{\mu }$, $\mu \in \mathbb{N}$, satisfy the following: \par \medskip\noindent (5.4) (a) $[L _{1} \dots L _{\mu }\colon K] = [L _{1} ^{\prime } \dots L _{\mu } ^{\prime }\colon K(\varepsilon )] = p ^{\mu }$, for each $\mu $; \par (b) The compositum $L _{\infty }$ of all $L _{\mu }$, $\mu \in \mathbb{N}$, is an infinite Galois extension of $K$ with $L _{\infty }(\varepsilon ) = L _{\infty } ^{\prime }$ and $\mathcal{G}(L _{\infty }/K)$ abelian of period $p$; \par (c) Every extension of $K$ in $L _{\infty }$ of degree $p$ is Galois and TR over $K$. \par \medskip Suppose now that $\widehat K$ is perfect. Then every $R \in {\rm Fe}(K)$ contains as a subfield an inertial extension $R _{0}$ of $K$ with $\widehat R _{0} = \widehat R$ (cf. \cite{TW}, Proposition~A.17). In view of Lemmas \ref{lemm3.2} and \ref{lemm3.3} (c), this allows us to deduce from (5.4) (b), (c) and Galois theory that finite extensions of $K$ in $L _{\infty }$ are TR. Thus the fields $M _{\mu } = L _{1} \dots L _{\mu }$, $\mu \in \mathbb{N}$, have the properties claimed by Lemma \ref{lemm5.2}. \par It remains for us to prove Lemma \ref{lemm5.2} (b). The idea of our proof has been borrowed from \cite{N}, 2.2.1.
Identifying $\mathbb{Q}$ with the prime subfield of $K$, put $E _{0} = \mathbb{Q}(t _{0})$, where $t _{0} \in O _{v}(K) ^{\ast }$ is chosen so that $\hat t _{0} \notin \widehat K ^{p}$ (whence, $\hat t _{0}$ is transcendental over $\mathbb{F}$). Denote by $\omega $ and $v _{0}$ the valuations induced by $v$ upon $\mathbb{Q}$ and $E _{0}$, respectively, and fix a system $t _{\mu } \in K _{\rm sep}$, $\mu \in \mathbb{N}$, such that $t _{\mu } ^{p} = t _{\mu -1}$, for each $\mu > 0$. It is easy to see that $\mathbb{F}$ equals the residue field of $(\mathbb{Q}, \omega )$, and the fields $E _{\mu } = \mathbb{Q}(t _{\mu })$, $\mu \in \mathbb{N}$, are purely transcendental extensions of $\mathbb{Q}$. Let $v _{\mu }$ be the restricted Gauss valuation of $E _{\mu }$ extending $\omega $, in the sense of \cite{E3}, for each $\mu \in \mathbb{N}$. Clearly, for any pair of indices $\nu , \mu $ with $0 < \nu \le \mu $, $E _{\nu - 1}$ is a subfield of $E _{\mu }$ and $v _{\mu }$ is the unique prolongation of $v _{\nu - 1}$ on $E _{\mu }$. Hence, the union $E _{\infty } = \cup _{\mu =0} ^{\infty } E _{\mu }$ is a field with a unique valuation $v _{\infty }$ extending $v _{\mu }$, for every $\mu < \infty $. Denote by $\widehat E _{\mu }$ the residue field of $(E _{\mu }, v _{\mu })$, for each $\mu \in \mathbb{N} \cup \{0, \infty \}$. The Gaussian property of $v _{\mu }$, $\mu < \infty $, guarantees that $v _{\mu }(E _{\mu }) = \omega (\mathbb{Q})$, $v _{\mu}(t _{\mu }) = 0$, $\hat t _{\mu }$ is a transcendental element over $\mathbb{F}$ and $\widehat E _{\mu } = \mathbb{F}(\hat t _{\mu })$ (see \cite{E3}, Examples~4.3.2 and 4.3.3). Observing also that $\hat t _{\mu } ^{p} = \hat t _{\mu -1}$, $\mu \in \mathbb{N}$, $\widehat E _{\infty } = \cup _{\mu =1} ^{\infty } \widehat E _{\mu }$ and $\mathbb{F} ^{p} = \mathbb{F}$, one concludes that $\widehat E _{\infty }$ is infinite and perfect.
It is therefore clear from \par\noindent Lemma \ref{lemm5.2} (a) and Grunwald-Wang's theorem (see Remark \ref{rema5.3}) that if $(E _{\infty } ^{\prime }, v _{\infty } ^{\prime })$ is a Henselization of $(E _{\infty }, v _{\infty })$ with $E _{\infty } ^{\prime } \subset K _{\rm sep}$, then there exist TR and Galois extensions $T _{\mu } ^{\prime }/E _{\infty } ^{\prime }$ and $T _{\mu }/E _{\infty }$, $\mu \in \mathbb{N}$, such that $[T _{\mu }\colon E _{\infty }] = [T _{\mu } ^{\prime }\colon E _{\infty } ^{\prime }] = p ^{\mu }$, $T _{\mu } ^{\prime } = T _{\mu }E _{\infty } ^{\prime }$, $\mathcal{G}(T _{\mu }/E _{\infty })$ is abelian of period $p$, and $\mathcal{G}(T _{\mu }/E _{\infty }) \cong \mathcal{G}(T _{\mu } ^{\prime }/E _{\infty } ^{\prime })$, for every $\mu $. Now fix an arbitrary index $\mu $, choose $\theta \in T _{\mu }$ so that the minimal polynomial $f(X)$ of $\theta $ over $E _{\infty }$ be Eisenstein at $v _{\mu }$, and take a sufficiently large index $k \ge \mu $ such that $f(X) \in E _{k}[X]$ and $f(X)$ splits over $E _{k}(\theta )$. Then the extension $E _{k}(\theta )/E _{k}$ is both TR and Galois with $\mathcal{G}(E _{k}(\theta )/E _{k}) \cong \mathcal{G}(T _{\mu }/E _{\infty })$, and $f(X)$ is Eisenstein at $v _{k}$. Let $\psi $ be the isomorphism $E _{k} \to E _{0}$ mapping $t _{k}$ into $t _{0}$. Then $\psi $ extends uniquely to a degree-preserving isomorphism $\psi ^{\prime }: E _{k}[X] \to E _{0}[X]$ of polynomial rings, such that $\psi ^{\prime }(X) = X$; also, $\psi ^{\prime }$ maps $O _{v _{k}}(E _{k})[X]$ into $O _{v _{0}}(E _{0})[X]$. Note that, for each $g(X) \in E _{k}[X]$, $\psi ^{\prime }$ induces canonically a ring isomorphism $\psi ^{\prime } _{g}: R _{k} \to R _{0}$ extending $\psi $, where $R _{k} = E _{k}[X]/(g(X))$ and $R _{0} = E _{0}[X]/(\psi ^{\prime }(g(X)))$.
Clearly, $\psi ^{\prime }_{g}$ maps bijectively the set of roots of $g(X)$ in $R _{k}$ on the set of roots of $\psi ^{\prime }(g(X))$ in $R _{0}$. One also sees that $g(X)$ is irreducible over $E _{k}$ if and only if so is $\psi ^{\prime }(g(X))$ over $E _{0}$. Therefore, $R _{k}/E _{k}$ is a field extension if and only if so is $R _{0}/E _{0}$; when this occurs, $[R _{k}\colon E _{k}] = [R _{0}\colon E _{0}] = {\rm deg}(g)$. Moreover, it follows that $R _{k}/E _{k}$ is Galois if and only if so is $R _{0}/E _{0}$ (and this holds if and only if $g(X)$ is irreducible over $E _{k}$ and $R _{k}$ is a root field of $g(X)$ over $E _{k}$). Suppose now that $R _{k}/E _{k}$ is Galois. Then, for each $\sigma \in \mathcal{G}(R _{k}/E _{k})$, there is a unique $\sigma ^{\prime } \in \mathcal{G}(R _{0}/E _{0})$, such that $\sigma ^{\prime }(\psi ^{\prime } _{g}(r _{k})) = \psi ^{\prime } _{g}(\sigma (r _{k}))$, for every $r _{k} \in R _{k}$; in addition, the mapping of $\mathcal{G}(R _{k}/E _{k})$ into $\mathcal{G}(R _{0}/E _{0})$, by the rule $\sigma \to \sigma ^{\prime }$, is an isomorphism. Note finally that $v _{k}(e _{k}) = v _{0}(\psi (e _{k}))$, for every $e _{k} \in E _{k}$, which implies $g(X)$ is Eisenstein at $v _{k}$ if and only if so is $\psi ^{\prime }(g(X))$ at $v _{0}$; hence, $R _{k}/E _{k}$ is TR relative to $v _{k}$ if and only if so is $R _{0}/E _{0}$ relative to $v _{0}$. When $g(X) = f(X)$, these observations show that $\psi $ extends to an isomorphism of $E _{k}(\theta )$ on the root field $R \in {\rm Fe}(E _{0})$ of $\psi ^{\prime }(f(X))$ over $E _{0}$, and that $R/E _{0}$ is TR (relative to $v _{0}$) and Galois with $\mathcal{G}(R/E _{0}) \cong \mathcal{G}(E _{k}(\theta )/E _{k})$.
As $v(p) \notin pv(K)$, one obtains from Lemma \ref{lemm3.7} and the described properties of $R/E _{0}$ (regarding $E _{0,{\rm sep}}$ as an $E _{0}$-subalgebra of $K _{\rm sep}$) that $RK/K$ is TR and Galois, $[RK\colon K] = p ^{\mu }$ and $\mathcal{G}(RK/K) \cong \mathcal{G}(R/E _{0})$ is abelian of period $p$. Because of the arbitrary choice of the index $\mu $, this proves Lemma \ref{lemm5.2} (b). \end{proof} \par \smallskip \begin{rema} \label{rema5.3} Lemma \ref{lemm3.1} shows that given a field $L$ with nonequivalent real-valued valuations $w _{1}, \dots , w _{n}$, for some $n \in \mathbb{N}$, Grunwald-Wang's theorem holds, if applied to a Henselization of $(L, w _{i})$ (instead of $(L _{w _{i}}, \bar w _{i})$), for $i = 1, \dots , n$. \end{rema} \par \smallskip \begin{lemm} \label{lemm5.4} Let $(K, v)$ be an {\rm HDV}-field of mixed characteristic $(0, p)$ with $v(p) \in pv(K)$ and $\widehat K \neq \widehat K ^{p}$. Let $\widetilde \Lambda /\widehat K$ be an inseparable extension of degree $p$. Then there exists $\Lambda \in I(K(p)/K)$ with $[\Lambda \colon K] = p$ and $\widehat \Lambda \cong \widetilde \Lambda $ over $\widehat K$. \end{lemm} \par \smallskip \begin{proof} The condition that $v(p) \in pv(K)$ means that there is $\pi _{1} \in K$ with $v(\pi _{1}) = v(p)/p$, so our conclusion follows at once from Lemma \ref{lemm5.1} (b). \end{proof} \par \medskip \begin{prop} \label{prop5.5} Let $(K, v)$ be an HDV-field of mixed characteristic $(0, p)$ and with $\widehat K \neq \widehat K ^{p}$. Suppose that $v(p) \in pv(K)$ or $K$ contains a primitive $p$-th root of unity $\varepsilon $.
Then each proper extension $\widetilde L$ of $\widehat K$ satisfying the inclusion $\widetilde L ^{p} \subseteq \widehat K$ is $\widehat K$-isomorphic to $\widehat L$, for some Galois extension $L$ of $K$, such that $v(L) = v(K)$ and $\mathcal{G}(L/K)$ is an abelian group of period $p$. \end{prop} \par \medskip \begin{proof} If $v(p) \in pv(K)$, then our assertion follows from Lemmas \ref{lemm3.2}, \ref{lemm5.4} and Galois theory; when $\varepsilon \in K$, it can be deduced from Kummer theory. \end{proof} \par \medskip \begin{rema} \label{rema5.6} Let $(K, v)$, $p$ and $\varepsilon \in K _{\rm sep}$ satisfy the conditions of Lemma \ref{lemm4.4}, and let $\widehat K \neq \widehat K ^{p}$. Take $c \in O _{v}(K)$ with $\hat c \notin \widehat K ^{p}$, and suppose that $K$ has a degree $p$ extension $C$ in $K(p)$, such that $\hat c \in \widehat C ^{p}$. By Lemma \ref{lemm5.4}, an extension of this kind exists if $v(p) \in pv(K)$ or $\varepsilon \in K$ (this need not hold in general, see Remark \ref{rema4.7} {\rm (c)}). It is easily verified that $v(C) = v(K)$, $v(z) \in pv(K)$, for each $z \in N(C/K)$, and $\hat z \in \widehat C ^{p}$ in case $v(z) = 0$. Therefore, if $[\widehat K\colon \widehat K ^{p}] \ge p ^{2}$, $\hat c, \hat b \in \widehat K$ are $p$-independent over $\widehat K ^{p}$, and $b \in O _{v}(K)$ is a pre-image of $\hat b$, then $b \notin N(C/K)$ and (by \cite{P}, Proposition~15.1~b) the cyclic $K$-algebra $V = (C/K, \tau , b)$ of degree $p$ lies in $d(K)$, $\tau $ being a generator of $\mathcal{G}(C/K)$. Since $v _{C}(\tau (\alpha ) - \alpha ) > v _{C}(\alpha )$, for any $\alpha \in C ^{\ast }$, this implies $\widehat V$ contains commuting $p$-th roots $\hat \eta _{c} = \sqrt[p]{\hat c}$ and $\hat \eta _{b} = \sqrt[p]{\hat b}$.
Hence, by Lemma \ref{lemm3.4}, $v(V) = v(K)$ and $\widehat V$ equals the field $\widehat K(\hat \eta _{c}, \hat \eta _{b})$. Also, it follows from Kummer theory that $V$ is a symbol $K$-algebra, in the sense, e.g., of \cite{PS}, if and only if $\varepsilon \in K$. \end{rema} \par \medskip\noindent A detailed and systematic study of algebras $W \in d(K)$, such that $v(W) = v(K)$ and $\widehat W/\widehat K$ is a purely inseparable extension, would surely be of interest. This, however, goes beyond the scope of the present paper. \par \medskip \section{\bf Proof of Theorem \ref{theo2.1}} \par \medskip We begin this section with a lemma which completes our preparation for the proof of Lemma \ref{lemm2.2} in case char$(K) = 0$, $\widehat K \neq \widehat K ^{p}$ and $v(p) \in pv(K)$. Stating the lemma, we note that the imposed restriction on $v(p)$ ensures the existence of an element $\pi \in K$ satisfying the conditions $v(\pi ) \notin pv(K)$ and $v(p) < pv(p)/(p - 1) - v(p)/p \le v(\pi ) < pv(p)/(p - 1)$. \par \medskip \begin{lemm} \label{lemm6.1} Let $(K, v)$ be an {\rm HDV}-field of mixed characteristic $(0, p)$ with $\widehat K$ infinite and $v(p) \in pv(K)$, and let $\mathbb{F}$ be the prime subfield of $\widehat K$. Fix an integer $\mu > 0$ and elements $\pi \in K$, $\alpha _{1}, \dots , \alpha _{\mu } \in O _{v}(K) ^{\ast }$, such that $v(\pi ) \notin pv(K)$, $v(p) < v(\pi ) < pv(p)/(p - 1)$, and the system $\hat \alpha _{1}, \dots , \hat \alpha _{\mu }$ is linearly independent over $\mathbb{F}$. Put $\lambda _{j} = 1 + \pi \alpha _{j} ^{p^{\mu }}$, $j = 1, \dots , \mu $, and for any $j$, let $L _{j} = K(\lambda _{j} ^{\prime })$, where $\lambda _{j} ^{\prime } \in K _{\rm sep}$ and $\lambda _{j} ^{\prime p} = \lambda _{j}$.
Then the field $M = L _{1} \dots L _{\mu }$ is a {\rm TR}-extension of $K$ of degree $p ^{\mu }$. Moreover, if $K$ contains a primitive $p$-th root of unity, then $M/K$ is Galois with $\mathcal{G}(M/K)$ abelian of period $p$. \end{lemm} \par \medskip \begin{proof} We first show that one may consider only the special case where $K$ contains a primitive $p$-th root of unity. Let $\varepsilon $ be such a root in $K _{\rm sep}$. Then $$[M(\varepsilon )\colon K] = [M(\varepsilon )\colon M][M\colon K] = [M(\varepsilon )\colon K(\varepsilon )][K(\varepsilon )\colon K].$$ Since, by Galois theory, $[M(\varepsilon )\colon M] \mid [K(\varepsilon )\colon K]$, we have $[M(\varepsilon )\colon K(\varepsilon )] \mid [M\colon K].$ In addition, $[K(\varepsilon )\colon K] \mid p - 1$, by Lemma \ref{lemm3.8}~(c), which implies $p \nmid e(K(\varepsilon )/K)$ (Lemma \ref{lemm3.2}~(b)), proving that $v(\pi ) \notin pv(K(\varepsilon ))$. Moreover, Lemmas \ref{lemm3.2} (b) and \ref{lemm3.8} imply $\pi $ and $\alpha _{1}, \dots , \alpha _{\mu }$ satisfy the conditions of Lemma \ref{lemm6.1} with respect to $(K(\varepsilon ), v)$. Also, it follows from the definition of $M$ that $[M\colon K] \le p ^{\mu }$. As $p \nmid e(M(\varepsilon )/M)$, these observations prove that if $M(\varepsilon )/K(\varepsilon )$ is a TR-extension of degree $p ^{\mu }$, then so is $M/K$. This leads to the desired reduction. \par Henceforth, we assume that $\varepsilon \in K$. Then the concluding assertion of Lemma \ref{lemm6.1} is implied by Kummer theory and the definition of $M$, so it remains to be seen that $M/K$ is TR (of degree $p ^{\mu }$). Put $\kappa = v(p)/(p - 1)$ and $\gamma = p\kappa - v(\pi )$. It follows from the conditions on $\pi $ that $0 < \gamma < \kappa $ and $\gamma \notin pv(K)$.
The rest of our proof relies on the fact that, by Lemma \ref{lemm4.4} (a), $L _{1}/K$ is TR and $[L _{1}\colon K] = p$, which means that $M/K$ is TR, provided so is $M/L_{1}$. Using a standard inductive argument, one may assume for the rest of the proof that $\mu \ge 2$ and, when $\mu $ is replaced by $\mu - 1$, the assertion of Lemma \ref{lemm6.1} holds, for any HDV-field $(K ^{\prime }, v ^{\prime })$ of mixed characteristic $(0, p)$ with $\widehat K ^{\prime }$ infinite and $v ^{\prime }(p) \in pv ^{\prime }(K ^{\prime })$. Then the assertion that $M/L _{1}$ is TR of degree $p ^{\mu -1}$ can be deduced from the existence of elements $\pi _{1}$ and $\lambda _{1,j} \in L _{1} ^{\ast }$, $\alpha _{1,j} \in O _{v}(L _{1}) ^{\ast }$, $j = 2, \dots , \mu $, such that: \par\medskip\noindent (6.1) $\hat \alpha _{1,2}, \dots , \hat \alpha _{1,\mu }$ are linearly independent over $\mathbb{F}$; $v(\pi _{1}) = p\kappa - (\gamma /p)$ (whence, $v(\pi _{1}) \notin pv(L _{1})$); $\lambda _{1,j} = 1 + \pi _{1}\alpha _{1,j} ^{p ^{\mu -1}}$ and \par\vskip0.04truecm\noindent $\lambda _{1,j}L _{1} ^{\ast p} = \lambda _{j}L _{1} ^{\ast p}$, $j = 2, \dots , \mu $. \par\medskip\noindent Since the elements $\hat \alpha _{j}\hat \alpha _{1} ^{-1}$, $j = 1, \dots , \mu $, are linearly independent over $\mathbb{F}$, it suffices to prove the existence of elements satisfying the conditions of (6.1) only in the special case where $\alpha _{1} = 1$ (considering $\pi \alpha _{1} ^{p ^{\mu }}$ and $\alpha _{2}\alpha _{1} ^{-1}, \dots , \alpha _{\mu }\alpha _{1} ^{-1}$ \par\vskip0.04truecm\noindent instead of $\pi $ and $\alpha _{2}, \dots , \alpha _{\mu }$, respectively).
Putting $\eta _{1} = \lambda _{1} ^{\prime } - 1$, we show that, in this case, $\pi _{1}$ and $\alpha _{1,j}, \lambda _{1,j}$, $j = 2, \dots , \mu $, can be chosen as follows: \par \medskip\noindent (6.2) $\pi _{1} = -p\eta _{1}$, $\alpha _{1,j} = \alpha _{j} - \alpha _{j} ^{p}$, and $\lambda _{1,j} = 1 - p\eta _{j}$, where $\eta _{j} = \eta _{1}\alpha _{1,j} ^{p ^{\mu -1}}$. \par \medskip\noindent In the rest of the proof, {\it we use the relation $\approx $ introduced on page \pageref{approx}}. As \par\noindent $(1 + \eta _{1}) ^{p} = 1 + \pi $ (and $p \ge 2$), Lemma \ref{lemm4.1} (a) shows that $$v(\eta _{1}) = v(\pi )/p > v(p)/p, \ {\rm so} \ v(p\eta _{1} ^{2}) > (p + 2)v(p)/p \ge p\kappa ;$$ hence, by the former conclusion of Lemma \ref{lemm4.1}, $\pi \approx \eta _{1} ^{p} + p\eta _{1}$. At the same time, the equality $\lambda _{j} = 1 + \pi \alpha _{j} ^{p ^{\mu }}$ implies $\lambda _{j} ^{-1} \approx 1 - \pi \alpha _{j} ^{p ^{\mu }}$. Likewise, from $\lambda _{1,j} = 1 - p\eta _{j}$, one obtains that $\lambda _{1,j} ^{-1} \approx 1 + p\eta _{j}$. Let $\Omega _{j} = 1 + \eta _{1}\alpha _{j} ^{p ^{\mu -1}}$.
Then $$\Omega _{j} ^{p} \approx 1 + \eta _{1} ^{p}\alpha _{j} ^{p ^{\mu }} + p\eta _{1}\alpha _{j} ^{p ^{\mu -1}} = [1 + (\eta _{1} ^{p} + p\eta _{1})\alpha _{j} ^{p ^{\mu }}] + [p\eta _{1}(\alpha _{j} ^{p ^{\mu -1}} - \alpha _{j} ^{p ^{\mu }})]$$ $$\approx \lambda _{j} + [p\eta _{1}(\alpha _{j} - \alpha _{j} ^{p}) ^{p ^{\mu -1}}] = \lambda _{j} + p\eta _{j} = \lambda _{j}(1 + p\eta _{j}\lambda _{j} ^{-1}) \approx \lambda _{j}(1 + p\eta _{j}) \approx \lambda _{j}\lambda _{1,j} ^{-1}.$$ Hence, by Lemma \ref{lemm4.3} (c), $\lambda _{j}\lambda _{1,j} ^{-1} \in L _{1} ^{\ast p}$. \par\vskip0.14truecm We are now in a position to prove Lemma \ref{lemm6.1}. As already shown, $$v(p) < v(\pi _{1}) = v(p\eta _{1}) = v(p) + v(\eta _{1}) = p\kappa - (\gamma /p)$$ and $pv _{1}(L) = v(K)$, which implies $v(\pi _{1}) \notin pv(L _{1})$. Observing that $\alpha _{1} = 1$, \par\vskip0.11truecm\noindent the field $\mathbb{F}$ equals the set $\{\hat y \in \widehat K\colon \hat y ^{p} = \hat y\}$, and $\alpha _{1,j} = \alpha _{j} - \alpha _{j} ^{p}$, $j = 2, \dots , \mu $, \par\vskip0.08truecm\noindent are elements of $O _{v}(L _{1}) ^{\ast }$, such that $\hat \alpha _{1}, \dots , \hat \alpha _{\mu }$ are linearly independent (over $\mathbb{F}$), one concludes that $\hat \alpha _{1,2}, \dots , \hat \alpha _{1,\mu }$ are linearly independent as well.
Thus the field $M$ and the elements $\pi _{1} = -p\eta _{1}$, and $\alpha _{1,j}, \lambda _{1,j}$, $j = 2, \dots , \mu $, defined in (6.2) satisfy the conditions of Lemma \ref{lemm6.1} (over $L _{1}$), and by the inductive hypothesis, $M/L _{1}$ is a TR-extension of degree $p ^{\mu -1}$, so Lemma \ref{lemm6.1} is proved. \end{proof} \par \medskip We can now take the final step towards the proof of Lemma \ref{lemm2.2} (and Theorem \ref{theo2.1}) in general. In view of Lemma \ref{lemm3.6} and \cite{Ch4}, Lemma~4.2, one may consider only the case of mixed characteristic $(0, p)$. We also assume that $v(p) \in pv(K)$ and $\widehat K \neq \widehat K ^{p}$, which is allowed by Lemma \ref{lemm5.2}. As $v(K)$ is cyclic, the condition on $v(p)$ ensures that there is $\xi \in K$ with $0 < v(\xi ) \le v(p)/p$ and $v(\xi ) \notin pv(K)$. Since $\widehat K$ is infinite, there are $\alpha _{\nu } \in O _{v}(K) ^{\ast }$, $\nu \in \mathbb{N}$, such that the system $\hat \alpha _{\nu } \in \widehat K$, $\nu \in \mathbb{N}$, is linearly independent over the prime subfield of $\widehat K$. Take a primitive $p$-th root of unity $\varepsilon \in K _{\rm sep}$, a generator $\varphi $ of $\mathcal{G}(K(\varepsilon )/K)$, and $s \in \mathbb{N}$ so that $\varphi (\varepsilon ) = \varepsilon ^{s}$. Fix any $\mu \in \mathbb{N}$, put $\lambda _{j} = 1 + p(1 - \varepsilon )\xi ^{-1}\alpha _{j} ^{p ^{\mu }}$, for $j = 1, \dots , \mu $, and denote by $M _{\mu } ^{\prime }$ the extension of $K(\varepsilon )$ generated by the set $\{\lambda _{j} ^{\prime }\colon j = 1, \dots , \mu \}$, where $\lambda _{j} ^{\prime } \in K _{\rm sep}$ and $\lambda _{j} ^{\prime p} = \lambda _{j}$, for any index $j$.
It follows from Lemma \ref{lemm6.1} that $M _{\mu } ^{\prime }/K(\varepsilon )$ is TR and Galois of degree $p ^{\mu }$ with $\mathcal{G}(M _{\mu } ^{\prime }/K(\varepsilon ))$ abelian of period $p$. Furthermore, Lemma \ref{lemm4.8} and the conditions on $\xi $ and $\alpha _{1}, \dots , \alpha _{\mu }$ show that \par\vskip0.04truecm\noindent $\varphi (\lambda _{j})\lambda _{j} ^{-s} \in K(\varepsilon ) ^{\ast p}$, $j = 1, \dots , \mu $. Therefore, Lemmas \ref{lemm4.5} and \ref{lemm4.6}~(a) yield \par\vskip0.04truecm\noindent $M _{\mu } ^{\prime } = M _{\mu }(\varepsilon )$, for some Galois extension $M _{\mu }$ of $K$ in $K(p)$, such that \par\vskip0.04truecm\noindent $\mathcal{G}(M _{\mu }/K) \cong \mathcal{G}(M _{\mu } ^{\prime }/K(\varepsilon ))$; hence, $[M _{\mu }\colon K] = p ^{\mu }$ and $[M _{\mu } ^{\prime }\colon M _{\mu }] = [K(\varepsilon )\colon K]$. As $p \nmid [K(\varepsilon )\colon K]$ and $M _{\mu } ^{\prime }/K(\varepsilon )$ is TR, it is now easy to see that $M _{\mu }/K$ is also TR. Because of the arbitrary choice of $\mu $, this proves Lemma \ref{lemm2.2}, Theorem \ref{theo2.1} (b) and the right-to-left implication in Theorem \ref{theo2.1} (a). Finally, by Fact \ref{fact3.5}, the converse implication follows from \cite{PS}, Corollary~2.5, so Theorem \ref{theo2.1} is proved. \par \medskip \begin{rema} \label{rema6.2} It should be pointed out that in case $(K, v)$ is an HDV-field containing a primitive $p$-th root of unity $\varepsilon $, the right-to-left implication in Theorem \ref{theo2.1} (a) becomes obvious as a result of the proof of the lower bound for abrd$_{p}(K)$ in \cite{PS}, Lemma~2.6. The conditions of the cited lemma do not require that $\varepsilon \in K$. However, the assumption that $\varepsilon \in K$ is necessary to define over $K$ tensor products of symbol algebras like those used in the proof of \cite{PS}, Lemma~2.6.
This makes it easy to show that if $\varepsilon \in K$, then the lower bound in the cited lemma is also such a bound for Brd$_{p}(K)$, which proves the right-to-left implication in Theorem \ref{theo2.1} (a). \end{rema} \par \medskip To end the present section, we note that Theorem~2 of \cite{PS} and the conclusion of Theorem \ref{theo2.1} {\rm (b)} in case char$(K) = 0$ leave open the question of whether abrd$_{p}(E) > 2{\rm Brd}_{p}(E) + 1$, for any field $E$ with a primitive $p$-th root of unity and Brd$_{p}(E) < \infty $. Moreover, it seems to be unknown whether abrd$_{p}(E) = \infty $. \par \medskip \section{\bf Open problems and further results} \par \medskip We begin this section with a proof of Conjecture \ref{conj1.1} in case char$(K) = p$. \par \medskip \begin{prop} \label{prop7.1} If $(K, v)$ is an {\rm HDV}-field with char$(K) = p > 0$, then: \par {\rm (a)} Brd$_{p}(K) = \infty $ if $[\widehat K\colon \widehat K ^{p}] = \infty $; when $(K, v)$ is complete, the equality $[\widehat K\colon \widehat K ^{p}] = \infty $ holds if and only if $[K\colon K ^{p}] = \infty $; \par {\rm (b)} $n \le {\rm Brd}_{p}(K) \le n + 1$, provided that $n < \infty $ and $[\widehat K\colon \widehat K ^{p}] = p ^{n}$; \par {\rm (c)} If $(K, v)$ is complete, $[\widehat K\colon \widehat K ^{p}] = p ^{n}$ and $K ^{\prime }/K$ is a finite field extension, then $[K ^{\prime }\colon K ^{\prime p}] = p ^{n+1}$. \end{prop} \par \medskip\noindent \begin{proof} The former part of Proposition \ref{prop7.1} (a) and the lower bound on Brd$_{p}(K)$ in Proposition \ref{prop7.1} (b) are implied by \cite{Ch4}, Lemma~4.2 (b). Proposition \ref{prop7.1} (c) and the latter part of Proposition \ref{prop7.1} (a) follow from Fact 3.5 (b), Lemma \ref{lemm3.2}, and the equality $[L\colon L ^{p}] = [K\colon K ^{p}]$, for every finite extension $L/K$ (cf. \cite{BH}, Lemma~2.12). It remains to prove the upper bound in Proposition \ref{prop7.1} (b).
Let $\overline K$ be an algebraic closure of $K$. In view of \cite{Ch2}, Lemma~4.1, it suffices to show that, for any finite extension $K ^{\prime }$ of $K$ in $\overline K$, we have deg$(D ^{\prime }) \mid p ^{n+1}$ whenever $D ^{\prime } \in d(K ^{\prime })$ and exp$(D ^{\prime }) = p$. In addition, Fact \ref{fact3.5} (a) allows us to consider only the case of $K = K _{v}$. Let $K _{1} ^{\prime } = \{\lambda \in \overline K\colon \lambda ^{p} \in K ^{\prime }\}$. Then $K _{1} ^{\prime } \in I(\overline K/K ^{\prime })$, $K _{1} ^{\prime p} = K ^{\prime }$, and by Proposition \ref{prop7.1} (c), $[K _{1} ^{\prime }\colon K ^{\prime }] = p ^{n+1}$. Since, by Albert's theorem, $_{p}{\rm Br}(K ^{\prime })$ is a subgroup of Br$(K _{1} ^{\prime }/K ^{\prime })$ (cf. \cite{A2}, Ch. VII, Theorem~28), this yields deg$(D ^{\prime }) \mid p ^{n+1}$ (see \cite{P}, Sect. 13.4), so Proposition \ref{prop7.1} is proved. \end{proof} \par \medskip Our next result proves Conjecture \ref{conj1.1} in the special case where $\widehat K$ is an $n$-dimensional local field of characteristic $p$ with a finite $n$-th residue field. \par \medskip \begin{prop} \label{prop7.2} Assume that $(K, v)$ is an {\rm HDV}-field, such that $\widehat K$ is an $n$-dimensional local field with {\rm char}$(\widehat K) = p$. Then {\rm Brd}$_{p}(K) \ge n$. Moreover, if the $n$-th residue field $\widehat K _{0}$ of $\widehat K$ is finite, then {\rm abrd}$_{p}(K) \le n + 1$. \end{prop} \par \smallskip \begin{proof} As $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, Theorem \ref{theo2.1} (b) yields Brd$_{p}(K) \ge n$, so it suffices to prove that if $\widehat K _{0}$ is finite, then abrd$_{p}(K) \le n + 1$. In view of Proposition \ref{prop7.1} (b) and Fact 3.5 (a), one may consider only the case of char$(K) = 0$ and $K = K _{v}$.
Then $K$ is an $(n + 1)$-dimensional local field with last residue field $\widehat K _{0}$, whence, by \cite{Ch6}, Proposition~4.4, abrd$_{p}(K) \le n + 1$, as required. \end{proof} \par \medskip It would be of interest to know whether an HDV-field $(K, v)$ with $\widehat K _{\rm sep} = \widehat K$ and $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, for some $n \in \mathbb{N}$, satisfies Brd$_{p}(K) = n$ (see page \pageref{stconj}). This is the same as finding out whether Brd$_{p}(K) = n$, provided that $p \nmid [\widehat K ^{\prime }\colon \widehat K]$ when $\widehat K ^{\prime }$ runs across Fe$(\widehat K)$ (cf. \cite{P}, Sects. 13.4 and 14.4). The condition on $\widehat K$ means that cd$_{p}(\mathcal{G}_{\widehat K}) = 0$. If $\widehat K _{\rm sep} \neq \widehat K$, then it is possible that Brd$_{p}(K) \ge n + 1$; such is the case where $\widehat K/\mathbb{F} _{p}$ is a finitely-generated extension of transcendence degree $n$ (see the proof of \cite{Ch4}, Proposition~6.3, or \cite{BH}, Theorem~5.2). The same inequality for Brd$_{p}(K)$ is obtained by the method of proving \cite{Ch4}, Proposition~6.3, when char$(\widehat K) = p$ and $\widehat K$ is a finitely-generated extension of transcendence degree $n > 0$ over a perfect field $\widehat K _{0}$ with cd$_{p}(\mathcal{G}_{\widehat K _{0}}) \neq 0$ (see \cite{S1}, Ch. I, 3.3). Since cd$_{p}(\mathcal{G}_{\widehat K _{0}}) \le 1$ (cf. \cite{S1}, Ch. II, 2.2), Theorem \ref{theo2.1} (b) and the preceding observations attract interest in the following special case of Conjecture \ref{conj1.1}: \par \smallskip \begin{conj} \label{conj7.3} If $(K, v)$ is an HDV-field with char$(\widehat K) = p > 0$ and $\widehat K$ is a finitely-generated extension of transcendence degree $n > 0$ over its maximal perfect subfield $\widehat K _{0}$, then {\rm Brd}$_{p}(K) = n + {\rm cd}_{p}(\mathcal{G}_{\widehat K _{0}})$.
\end{conj} \par \medskip Theorem \ref{theo2.1} (b) and the upper bounds in \cite{PS}, Theorem~2, \cite{BH}, Corollary~4.7 and Theorem~4.16, and Proposition \ref{prop7.1} (b) of the present paper prove Conjecture \ref{conj1.1}, for $n = 1, 2, 3$. Note also that Conjecture \ref{conj7.3} holds, for $n = 1, 2$. In view of the remarks preceding the statement of Conjecture \ref{conj7.3}, this can be obtained by using Theorem \ref{theo2.1} (b), \cite{BH}, Theorem~4.16, and Case IV of the proof of \cite{BH}, Theorem~5.3. As to Conjecture \ref{conj7.3}, it need not be true if $(K, v)$ is merely HDV with char$(\widehat K) = p$ and $[\widehat K\colon \widehat K ^{p}] < \infty $. One may take as a counter-example the iterated formal power series field $K = \widehat K _{0}((X _{1})) \dots ((X _{n}))((Y))$ in a system of variables $X _{1}, \dots , X _{n}, Y$ over a quasifinite field $\widehat K _{0}$ with char$(\widehat K _{0}) = p$. Then Brd$_{p}(K) = n$, by \cite{Ch6}, Proposition~3.5 (implied by \cite{Ch2}, Lemma~4.3~(b), \cite{Ch4}, Lemma~4.2 and \cite{AJ}, Theorem~3.3), whereas the formula in Conjecture \ref{conj7.3} requires Brd$_{p}(K) = n + 1$ (the standard discrete valuation on $K$ is Henselian with $\widehat K = \widehat K _{0}((X _{1})) \dots ((X _{n}))$, whence, $[\widehat K\colon \widehat K ^{p}] = p ^{n}$ and cd$_{p}(\widehat K _{0}) = 1$). This example, as well as Proposition \ref{prop7.2}, draws one's attention to the following problem: \par \medskip \begin{prob} \label{prob7.4} Let $(K, v)$ be an {\rm HDV}-field with char$(\widehat K) = p > 0$. Suppose that $\widehat K$ is an $n$-dimensional local field, for some $n \in \mathbb{N}$, with an $n$-th residue field $\widehat K _{0}$. Find whether {\rm Brd}$_{p}(K) = n$.
\end{prob} \par \medskip The conditions of Problem \ref{prob7.4} show that $K _{v}$ is an $(n + 1)$-dimensional local field with last residue field $\widehat K _{0}$ (and $\widehat K$ is isomorphic to an iterated formal power series field in $n$ variables over the quasifinite field $\widehat K _{0}$, see \cite{F1}, 2.5.2). Therefore, in case char$(K) = p$, Fact \ref{fact3.5} (a) and \cite{Ch6}, Proposition~3.5, give an affirmative answer to Problem \ref{prob7.4}. When $n = 1$, such an answer is contained in the following result of \cite{Ch7}, obtained as a final step towards a full characterization of stable HDV-fields by properties of their residue fields: \par \medskip \begin{prop} \label{prop7.5} Let $(K, v)$ be an {\rm HDV}-field with {\rm char}$(\widehat K) = p > 0$. Then {\rm Brd}$_{p}(K) \le 1$ if and only if the following condition is fulfilled: \par $[\widehat K\colon \widehat K ^{p}] \le p$, and in case {\rm Brd}$_{p}(\widehat K) \neq 0$, every degree $p$ extension of $\widehat K$ in $\widehat K(p)$ is embeddable as a $\widehat K$-subalgebra in each $D _{p} \in d(\widehat K)$ of degree $p$. \par\noindent The equality {\rm Brd}$_{p}(K) = 0$ holds if and only if $\widehat K$ is perfect and $\widehat K(p) = \widehat K$. \end{prop} \par \medskip \begin{rema} \label{rema7.6} The inequalities $n \le {\rm Brd}_{p}(K) \le n + 1$ hold, for any HDV-field $(K, v)$, such that $\widehat K$ is an $n$-dimensional local field with a finite $n$-th residue field and with char$(\widehat K _{1}) = p$, $\widehat K _{1}$ being the $(n - 1)$-th residue field of $\widehat K$. Proposition \ref{prop7.2} reduces the proofs to the case of char$(\widehat K) = 0$ (and $n \ge 3$, in view of Proposition \ref{prop7.5}). Then the stated inequalities are contained in \cite{Ch6}, Proposition~4.4.
\end{rema} \par \medskip Note finally that the interest in the question of whether Brd$_{p}(K) = n$, if $(K, v)$ is an HDV-field, char$(\widehat K) = p > 0$, $\widehat K _{\rm sep} = \widehat K$ and $[\widehat K\colon \widehat K ^{p}] = p ^{n}$, for some $n \in \mathbb{N}$, is motivated not only by Theorem \ref{theo2.1} (b) and \cite{BH}, Theorem~4.16, but also by the following well-known conjecture (see, e.g., \cite{ABGV}, Sect. 4): \par \medskip \begin{conj} \label{conj7.7} Assume that $F$ is a field of type $C _{\nu }$, i.e. each homogeneous polynomial $f(X _{1}, \dots , X _{m}) \in F[X _{1}, \dots , X _{m}]$ of degree $d$ with $0 < d ^{\nu } < m$ has a nontrivial zero over $F$. Then abrd$_{p}(F) < \nu $. \end{conj} \label{stconj} \par \medskip\noindent To show how Conjecture \ref{conj7.7} is related to the noted question, fix an HDV-field $(E, \omega )$ so that char$(\widehat E) = p > 0$, $\widehat E$ is algebraically closed, and when char$(E) = p$, $E = E _{\omega }$. Consider a finitely-generated extension $F/E$ of transcendence degree $n$. By Lang's theorem \cite{L1}, $E$ is of type $C _{1}$, whence, by the Lang-Nagata-Tsen theorem \cite{Na}, $F$ is of type $C _{n+1}$. The assumptions on $F$ and $E$ also imply the existence of a discrete valuation $\omega ^{\prime }$ of $F$ extending $\omega $, such that $\widehat F/\widehat E$ is a finitely-generated extension of transcendence degree $n$ (when $F/E$ is purely transcendental, one may take as $\omega ^{\prime }$ the restricted Gauss prolongation of $\omega $ on $F$). Thus it follows that $[\widehat F ^{\prime }\colon \widehat F ^{\prime p}] = p ^{n}$, for every finite extension $F ^{\prime }/F$. This enables one to deduce (e.g., from \cite{Ch4}, Lemmas~3.1 and 4.3) that if $(L, w)$ is a Henselization of $(F, \omega ^{\prime })$, then abrd$_{p}(L) \le {\rm Brd}_{p}(F)$.
Hence, Conjecture \ref{conj7.7} and the $C _{n+1}$ type of $F$ require that abrd$_{p}(L) \le n$. On the other hand, $(L, w)/(F, \omega ^{\prime })$ is immediate, so $[\widehat L\colon \widehat L ^{p}] = p ^{n}$, and by Theorem \ref{theo2.1} (b), Brd$_{p}(L) \ge n$. Thus the assertion that Brd$_{p}(L) = n$ can be viewed as a special case of Conjecture \ref{conj7.7}. \vskip0.38truecm \emph{Acknowledgment.} I would like to thank the referee for the careful reading of an earlier version of this paper, and for a number of suggestions that improved the organization (and other aspects) of its presentation. This paper presents research partially supported by Grant KP-06 N 32/1 of 07.12.2019, ``Groups and Rings -- Theory and Applications'', of the Bulgarian National Science Fund. \vskip0.1truecm \medskip \begin{thebibliography}{aa} \bibitem{A1} A.A. Albert, \emph{Modern Higher Algebra}, Univ. of Chicago Press, XIV, Chicago, Ill., 1937. \bibitem{A2} A.A. Albert, \emph{Structure of Algebras}, Amer. Math. Soc. Colloq. Publ., XXIV, 1939. \bibitem{AJ} R. Aravire, B. Jacob, \emph{$p$-algebras over maximally complete fields. With an Appendix by J.-P. Tignol}, in: B. Jacob, et al. (Eds.), $K$-theory and algebraic geometry: connections with quadratic forms and division algebras (Santa Barbara, CA (USA), July 6-24, 1992), in: Proc. Symp. Pure Math. 58, Part 2, 27-49, Amer. Math. Soc., Providence, RI, 1995. \bibitem{ABGV} A. Auel, E. Brussel, S. Garibaldi, U. Vishne, \emph{Open problems on central simple algebras}, Transform. Groups 16 (2011), 219-264. \bibitem{BH} N. Bhaskhar, B. Haase, \emph{Brauer $p$-dimension of complete discretely valued fields}, Trans. Amer. Math. Soc. 373 (2020), 3709-3732. \bibitem{Ch1} I.D. Chipchakov, \emph{Henselian valued stable fields}, J. Algebra 206 (1) (1998), 344-369. \bibitem{Ch2} I.D.
Chipchakov, \emph{On the behaviour of Brauer $p$-dimensions under finitely-generated field extensions}, J. Algebra 428 (2015), 190-204. \bibitem{Ch4} I.D. Chipchakov, \emph{On Brauer $p$-dimensions and index-exponent relations over finitely-generated field extensions}, Manuscr. Math. 148 (3-4) (2015), 485-500. \bibitem{Ch5} I.D. Chipchakov, \emph{On Brauer $p$-dimensions and absolute Brauer $p$-dimensions of Henselian fields}, J. Pure Appl. Algebra 223 (1) (2019), 10-29. \bibitem{Ch6} I.D. Chipchakov, \emph{On index-exponent relations over Henselian fields with local residue fields}, Serdica Math. J. 44 (3-4) (2018), 303-328. \bibitem{Ch7} I.D. Chipchakov, \emph{Henselian discrete valued stable fields}, Preprint, arXiv:1802.10193v1 [math.RA] 27 Feb. 2018. \bibitem{Cohn} P.M. Cohn, \emph{On extending valuations in division algebras}, Stud. Sci. Math. Hungar. 16 (1981), 65-70. \bibitem{Co-Th} J.-L. Colliot-Th\'{e}l\`{e}ne, \emph{Cohomologie galoisienne des corps valu\'{e}s discrets hens\'{e}liens, d'apr\`{e}s K. Kato et S. Bloch}, Bass, H. (ed.) et al., Algebraic K-theory and its applications. Proc. workshop and symposium, ICTP, Trieste, Italy, September 1-19, 1997. Singapore: World Scientific. 120-163 (1999). \bibitem{E3} I. Efrat, \emph{Valuations, Orderings, and Milnor $K$-Theory}, Math. Surveys and Monographs, vol. 124, Amer. Math. Soc., XIII, Providence, RI, 2006. \bibitem{F1} I.B. Fesenko, \emph{Theory of local fields. Local class field theory. Multidimensional local class field theory}, Algebra Anal. 4 (3) (1992), 1-41 (Russian; transl. in St. Petersburg Math. J. 4 (3) (1993), 403-438). \bibitem{FV} I.B. Fesenko, S.V. Vostokov, \emph{Local Fields and Their Extensions. With a foreword by I.R. Shafarevich}, 2nd ed., Transl. Math. Monographs, vol. 121, Amer. Math. Soc., Providence, RI, 2002. \bibitem{Hyo} O.
Hyodo, \emph{Wild ramification in the imperfect residue field case}, Galois representations and arithmetic algebraic geometry, Proc. Symp., Kyoto/Jap. 1985 and Tokyo/Jap. 1986, 287-314, Adv. Stud. Pure Math., vol. 12, North-Holland, Amsterdam, 1987. \bibitem{MKu} M. Kurihara, \emph{Two types of complete discrete valuation fields}, in: I. Fesenko, et al. (Eds.), Invitation to higher local fields, M\"{u}nster, Germany, Aug.-Sept., 1999, in: Geometry and Topology Publications, Geom. Topol. Monogr., vol. 3, Part I, Sect. 1, 109-112 (electronic), Coventry, 2000. \bibitem{L1} S. Lang, \emph{On quasi-algebraic closure}, Ann. Math. (2) 55 (1952), 373-390. \bibitem{L2} S. Lang, \emph{Algebraic Numbers}, Addison-Wesley Series in Mathematics, Addison-Wesley Publishing Company, Reading, Mass., IX, 1964. \bibitem{L} S. Lang, \emph{Algebra}, Revised 3rd ed., Graduate Texts in Math., vol. 211, Springer, New York, 2002. \bibitem{LR} F. Lorenz, P. Roquette, \emph{The theorem of Grunwald-Wang in the setting of valuation theory}, in: F.V. Kuhlmann, et al. (Eds.), Valuation Theory and Its Applications, vol. II, Saskatoon, SK, 1999, in: Fields Inst. Commun., vol. 33, Amer. Math. Soc., Providence, RI, 2003, pp. 175-212. \bibitem{N} M. Nikolov, \emph{Necessary conditions for stability of Henselian discrete valued fields}, Master Thesis, FMI, Sofia Univ., 2002 (Bulgarian). \bibitem{Na} M. Nagata, \emph{Note on a paper of Lang concerning quasi algebraic closure}, Mem. Coll. Sci., Univ. Kyoto, Ser. A, 30 (1957), 237-241. \bibitem{PS} R. Parimala, V. Suresh, \emph{Period-index and $u$-invariant questions for function fields over complete discretely valued fields}, Invent. Math. 197 (1) (2014), 215-235. \bibitem{P} R. Pierce, \emph{Associative Algebras}, Graduate Texts in Math., vol. 88, Springer-Verlag, New York-Heidelberg-Berlin, 1982. \bibitem{Re} I.
Reiner, \emph{Maximal Orders}, Lond. Math. Soc. Monographs, vol. 5, Academic Press, London-New York-San Francisco, 1975. \bibitem{Sch} O.F.G. Schilling, \emph{The Theory of Valuations}, Mathematical Surveys, vol. 4, Amer. Math. Soc., New York, N.Y., 1950. \bibitem{S1} J.-P. Serre, \emph{Galois Cohomology}, Transl. from the French by Patrick Ion, Springer-Verlag, X, Berlin-Heidelberg-New York, 1997. \bibitem{TW} J.-P. Tignol, A.R. Wadsworth, \emph{Value Functions on Simple Algebras, and Associated Graded Rings}, Springer Monographs in Math., Springer, Cham-Heidelberg-New York-Dordrecht-London, 2015. \bibitem{TY} I.L. Tomchin, V.I. Yanchevskij, \emph{On defects of valued division algebras}, Algebra Anal. 3 (1991), 147-164 (Russian; transl. in St. Petersburg Math. J. 3 (1992), 631-647). \bibitem{Wa} S. Warner, \emph{Topological Fields}, North-Holland Math. Studies (Notas de Matem\'{a}tica), vol. 157, North-Holland Publishing Co., Amsterdam, 1989. \bibitem{Zh} I. Zhukov, \emph{Higher dimensional local fields}, in: I. Fesenko, et al. (Eds.), Invitation to higher local fields, M\"{u}nster, Germany, Aug.-Sept., 1999, in: Geometry and Topology Publications, Geom. Topol. Monogr., vol. 3, Part I, Sect. 1, 5-18 (electronic), Coventry, 2000. \end{thebibliography} \end{document}
\begin{document} \title{Anisotropic Fast-Marching on cartesian grids\\ using Lattice Basis Reduction \thanks{ This work was partly supported by ANR grant NS-LBR ANR-13-JS01-0003-01. } } \author{Jean-Marie Mirebeau\footnote{CNRS, University Paris Dauphine, UMR 7534, Laboratory CEREMADE, Paris, France.}} \maketitle \date{} \begin{abstract} We introduce a modification of the Fast Marching algorithm, which solves the anisotropic eikonal equation associated to an arbitrary continuous Riemannian metric $\mathcal M$, on a two or three dimensional domain. The algorithm has a complexity $\mathcal O(N \ln N + N \ln \kappa(\mathcal M))$, where $N$ is the discrete domain cardinality. The logarithmic dependency in the maximum anisotropy ratio $\kappa(\mathcal M)$ of the Riemannian metric makes it possible to handle extreme anisotropies at a limited numerical cost. We prove the consistency of the algorithm, and illustrate its efficiency by numerical experiments. The algorithm relies on the computation at each grid point $z$ of a special system of coordinates: a \emph{reduced} basis of the lattice $\mathbb{Z}^m$, with respect to the symmetric positive definite matrix $\mathcal M(z)$ encoding the desired anisotropy at this point. \end{abstract} \section*{Introduction} The anisotropic eikonal equation, or static Hamilton-Jacobi equation, is a Partial Differential Equation (PDE) which describes an elementary front propagation model: the speed of the front depends only on the front position and orientation. This PDE is encountered in numerous applications, such as motion planning control problems \cite{SV03}, modeling of bio-medical phenomena \cite{SKD07}, and image analysis \cite{PPKC10}.
It was also recently used in the context of medical image analysis \cite{BC10} for extracting vessels in two dimensional projections or three dimensional scans of the human body, and for performing virtual endoscopies. This application requires solving a highly anisotropic generalized eikonal equation at high resolution on a cartesian grid, at a computational cost compatible with user interaction. It is one of our key motivations. This paper is devoted to the construction and the study of a new algorithm, Fast Marching using Lattice Basis Reduction (FM-LBR), designed to solve the anisotropic eikonal equation associated to a given Riemannian metric $\mathcal M$, and able to handle large or even extreme anisotropies. The domain must be of dimension two or three, and discretized on a cartesian grid. The FM-LBR, as its name indicates, is a variant of the classical Fast Marching algorithm \cite{SV03,T95}, an efficient method for solving the eikonal equation when the metric is isotropic (proportional at each point to the identity matrix). Lattice Basis Reduction \cite{NS09} is a concept from discrete mathematics, used in the FM-LBR to produce sparse causal stencils for the discretization of the eikonal equation; it allows one to benefit in an optimal way from the interplay between the Riemannian geometric structure of the PDE, and the arithmetic structure of the discretization grid. A similar technique is used in \cite{FM13} to construct sparse non-negative stencils for anisotropic diffusion. In order to illustrate the specificity of our approach, we need to introduce some notation. Denote by $S_m^+$ the collection of $m\times m$ symmetric positive definite matrices, and associate to each $M \in S_m^+$ the norm $\|u\|_M := \sqrt{\langle u, M u\rangle}$ on $\mathbb{R}^m$.
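To make this notation concrete, here is a small numerical illustration (not taken from the paper; it assumes NumPy) of the anisotropic norm $\|u\|_M$, showing how a diagonal metric $M$ weights displacements differently along each coordinate axis.

```python
import numpy as np

def norm_M(u, M):
    """Anisotropic norm ||u||_M = sqrt(<u, M u>), for M symmetric positive definite."""
    u = np.asarray(u, dtype=float)
    return float(np.sqrt(u @ M @ u))

# A diagonal metric: horizontal displacements cost twice as much as vertical ones.
M = np.diag([4.0, 1.0])
print(norm_M([1, 0], M))  # 2.0
print(norm_M([0, 1], M))  # 1.0
```

With $M = \mathrm{diag}(4,1)$, a unit horizontal step costs $2$ while a unit vertical step costs $1$; this directional bias is exactly what the metric $\mathcal M$ encodes pointwise.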
Consider a bounded open domain $\Omega \subset \mathbb{R}^m$, equipped with a Riemannian metric $\mathcal M \in C^0(\overline \Omega, S_m^+)$. We address the anisotropic eikonal equation: find the unique viscosity \cite{L82} solution $\distC : \overline \Omega \to \mathbb{R}$ of \begin{equation} \label{eikonal} \left\{ \begin{array}{rl} \|\nabla \distC(z)\|_{\mathcal M(z)^{-1}} = 1 & \text{ for almost every } z\in \Omega,\\ \distC = 0 & \text{ on } \ \partial \Omega. \end{array} \right. \end{equation} See the end of \S \ref{sec:MainResults} for different boundary conditions, and algorithmic restrictions on the dimension $m$. Introduce the Riemannian length of a Lipschitz path $\gamma : [0,1] \to \overline \Omega$: \begin{equation} \label{def:Length} \length(\gamma) := \int_0^1 \|\gamma'(t) \|_{\mathcal M(\gamma(t))} dt, \end{equation} and denote by $\distC(x,y)$ the length of the shortest path joining $x,y \in \overline \Omega$, also referred to as the Riemannian distance between these points. The PDE \eqref{eikonal} admits an optimal control interpretation: $\distC(x) = \min \{\distC(x,y); \, y \in \partial \Omega\}$ is the minimal distance from $x \in \Omega$ to the boundary. Consider a discrete set $Z \subset \mathbb{R}^m$, which in our case will be a cartesian grid. Discretizations of \eqref{eikonal} take the form of a fixed point problem: find $\dist : Z \to \mathbb{R}$ such that \begin{equation} \label{discreteSys} \left\{ \begin{array}{ll} \dist(z) = \Lambda(\dist,z) & \text{ for all } z\in Z \cap \Omega,\\ \dist(z) = 0 & \text{ for all } z \in Z \setminus \Omega. \end{array} \right. \end{equation} This formulation involves the Hopf-Lax update operator $\Lambda(\dist,z)$ \cite{SV03,BR06,KushnerDupuis92,GonzalesRofman85}, which mimics at the discrete level Bellman's optimality principle associated to the optimal control interpretation of \eqref{eikonal}. See Appendix \ref{subsec:Accuracy} for details on this principle, the approximations underlying its discretization, and their accuracy. The definition of $\Lambda(\dist, x)$ involves a mesh (or \emph{stencil}) $V(x)$ of a small neighborhood of $x \in Z \cap \Omega$, with vertices on $Z$, and reads: \begin{equation} \label{def:HopfLax} \Lambda(\dist,x) := \min_{y \in \partial V(x)} \|x-y\|_{\mathcal M(x)} + \interp_{V(x)} \dist (y), \end{equation} where $\interp_{V}$ denotes piecewise linear interpolation on a mesh $V$. (In this paper, a mesh in $\mathbb{R}^m$ is a finite collection ${\cal T}$ of $m$-dimensional non-flat simplices, which is conforming in the sense that the intersection $T \cap T'$ of any $T,T' \in {\cal T}$ is the convex hull of their common vertices.) Numerical solvers of the eikonal equation differ by (i) the construction of the stencils $V(z)$, $z\in \Omega$, and (ii) the approach used to solve the system \eqref{discreteSys}, which is inspired by the algorithms of Bellman-Ford or of Dijkstra, used for computing distances on graphs rather than on continuous domains. The algorithm presented in this paper, the FM-LBR, belongs to the category of Dijkstra inspired algorithms with static stencils, and among these is the first one to guarantee a uniform upper bound on the stencil cardinality, independently of the Riemannian metric $\mathcal M$.
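The Hopf-Lax update just described can be illustrated by a crude numerical sketch. This is hypothetical code, not the paper's implementation: it assumes NumPy, a single constant metric $M$, a two-dimensional stencil whose boundary facets are segments, and it replaces the exact minimization over the stencil boundary by dense sampling of each edge, with linear interpolation of the discrete values at the edge endpoints.

```python
import numpy as np

def norm_M(u, M):
    """Anisotropic norm ||u||_M = sqrt(<u, M u>)."""
    u = np.asarray(u, dtype=float)
    return float(np.sqrt(u @ M @ u))

def hopf_lax_edge(x, y0, y1, d0, d1, M, samples=101):
    """Minimize ||x - y(t)||_M + (1-t)*d0 + t*d1 over y(t) = (1-t)*y0 + t*y1,
    t in [0, 1], by dense sampling (a crude stand-in for an exact minimization)."""
    x, y0, y1 = (np.asarray(v, dtype=float) for v in (x, y0, y1))
    return min(norm_M(x - ((1 - t) * y0 + t * y1), M) + (1 - t) * d0 + t * d1
               for t in np.linspace(0.0, 1.0, samples))

def hopf_lax_update(x, edges, M):
    """Hopf-Lax update: minimum over all boundary edges (y0, y1, d0, d1) of the stencil."""
    return min(hopf_lax_edge(x, y0, y1, d0, d1, M) for (y0, y1, d0, d1) in edges)

# Square stencil of half-width 1 around the origin, zero discrete values on its
# boundary, Euclidean metric: the update equals the distance 1 to the boundary.
square = [([1, -1], [1, 1], 0.0, 0.0), ([1, 1], [-1, 1], 0.0, 0.0),
          ([-1, 1], [-1, -1], 0.0, 0.0), ([-1, -1], [1, -1], 0.0, 0.0)]
print(hopf_lax_update([0, 0], square, np.eye(2)))  # 1.0
```

In an actual solver the minimum over each boundary facet is computed in closed form; the sampled version only serves to make the structure of the update explicit.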
The anisotropy ratio $\kappa(M)$ of a matrix $M\in S_m^+$, and the maximum anisotropy $\kappa(\mathcal M)$ of the Riemannian metric $\mathcal M$, are defined as follows: \begin{equation} \label{defKappa} \kappa(M) := \max_{\|u\|=\|v\|=1} \frac{\|u\|_M}{\|v\|_M} = \sqrt{\|M\| \|M^{-1}\|}, \qquad \kappa(\mathcal M) := \max_{z\in \overline \Omega} \kappa(\mathcal M(z)). \end{equation} We denote by $N := \#(Z \cap \Omega)$ the cardinality of the discrete domain. \begin{itemize} \item {\bf Bellman-Ford inspired algorithms.} The discrete fixed point problem \eqref{discreteSys} is solved via Gauss-Seidel iteration: the replacement rule $\dist(z_k) \leftarrow \Lambda(\dist, z_k)$ is applied for $k=0,1,2,...$ to a mutable map $\dist : Z \cap \Omega \to \mathbb{R}$, until a convergence criterion is met. In the fast sweeping methods, see \cite{TsaiChengOsherZhao03} and references therein, the sequence of points $(z_k)_{k \geq 0}$ enumerates repeatedly the lines and the columns of $Z \cap \Omega$. Alternatively this sequence is obtained via a priority queue in the Adaptive Gauss-Seidel Iteration (AGSI) of Bornemann and Rasch \cite{BR06}. The stencil $V(z)$ of a point $z\in Z \cap \Omega$ is usually the offset by $z$ of a fixed stencil $V$ given at the origin, such as those illustrated on Figure \ref{fig:Classical}. Fast sweeping methods have $\mathcal O(\lambda(\mathcal M) N)$ complexity when the metric $\mathcal M$ is \emph{isotropic} (proportional to the identity at each point), but this result does not extend to anisotropic Riemannian metrics, see \cite{Zhao05} for the proof and the expression of $\lambda(\mathcal M)$.
The AGSI has complexity $\mathcal O(\mu(\mathcal M) N^{1+\frac 1 m})$, for arbitrary anisotropic Riemannian metrics, where $\mu(\mathcal M)$ is a non-explicit constant which depends on global geometrical features of the metric \cite{BR06}. The AGSI is a popular, simple, and quite efficient method, which is included for comparison in our numerical tests.
\item {\bf Dijkstra inspired algorithms.} The system \eqref{discreteSys} is solved in a single pass, non-iteratively, using an ordering of $Z \cap \Omega$ determined at run-time. This is possible provided the Hopf-Lax update operator satisfies the so-called ``causality property'', see Proposition \ref{prop:Causality}, which can be ensured if the stencil $V(z)$ of each $z\in Z \cap \Omega$ satisfies some geometrical properties depending on $\mathcal M(z)$, see Definition \ref{def:Mesh}. The different Dijkstra inspired methods are characterized by the construction of the stencils $V(z)$, in contrast with Bellman-Ford inspired methods, which are characterized by the choice of the sequence $(z_k)_{k \geq 0}$. Solving the system \eqref{discreteSys} with a Dijkstra inspired algorithm has complexity $\mathcal O(\mu(\mathcal M) N \ln N)$, where $\mu(\mathcal M)$ is an upper bound on the cardinality of the stencils (the number of simplices they are built of). In the Ordered Upwind Method (OUM) of Sethian and Vladimirsky \cite{SV03,VladTime06}, the stencils are constructed at run-time; their cardinality is bounded by $\mathcal O(\kappa(\mathcal M)^m)$ and drops to $\mathcal O(\kappa(\mathcal M)^{m-1})$ as $N \to \infty$.
In contrast, the stencils are constructed during a preprocessing step and then static in the Monotone Acceptance Ordered Upwind Method (MAOUM) of Alton and Mitchell \cite{AltonMitchell12}; their cardinality is bounded by $\mathcal O(\kappa(\mathcal M)^m)$. The FM-LBR introduced in the present work uses a similar approach, except that the stencil cardinality is $\mathcal O(1)$, fully independent of the Riemannian metric $\mathcal M$. The complexity estimates are thus $\mathcal O(\kappa(\mathcal M)^m N \ln N)$ for the OUM and the MAOUM (asymptotically $\mathcal O(\kappa(\mathcal M)^{m-1} N \ln N)$ for the OUM), and $\mathcal O(N \ln N + N \ln \kappa(\mathcal M))$ for our approach, the FM-LBR, where the second term in the complexity accounts for the stencil construction.
\end{itemize}
The above mentioned algorithms are consistent for the anisotropic eikonal equation associated to an arbitrary continuous Riemannian metric $\mathcal M \in C^0(\overline \Omega, S_m^+)$, in the sense that the discrete output $\dist_h$ of the algorithm executed on the grid $Z_h := h {\rm {{\rm Z}\kern-.28em{\rm Z}}}^2$, of scale $h>0$, converges to the viscosity solution $\distC$ of the continuous problem \eqref{eikonal} as $h \to 0$. Some more specialized variants of the fast marching algorithm are only consistent for a restricted set of metrics, but can nonetheless be executed with an arbitrary anisotropic metric $\mathcal M$; in that case the discrete system \eqref{discreteSys} may not be solved, and the numerical results are variable, see \S \ref{sec:num}. For instance the original fast marching algorithm \cite{T95} is consistent if $\mathcal M(z)$ is proportional to the identity matrix for each $z\in \Omega$, and more generally if $\mathcal M(z)$ is a diagonal matrix.
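The anisotropy ratio \eqref{defKappa} admits a simple eigenvalue characterization: $\kappa(M) = \sqrt{\lambda_{\max}/\lambda_{\min}}$, and it is invariant under rotation of the metric. A minimal numerical check, in Python (the function name is ours, for illustration only):

```python
import math

def anisotropy_ratio_2x2(a, b, c):
    """kappa(M) = sqrt(||M|| * ||M^{-1}||) for the 2x2 symmetric positive
    definite matrix M = [[a, b], [b, c]], computed via its eigenvalues:
    kappa(M) = sqrt(lambda_max / lambda_min)."""
    tr, det = a + c, a * c - b * b
    gap = math.sqrt(tr * tr - 4.0 * det)
    lmin, lmax = (tr - gap) / 2.0, (tr + gap) / 2.0
    return math.sqrt(lmax / lmin)

# diag(4, 1) has anisotropy ratio sqrt(4/1) = 2; rotating the metric by 45
# degrees yields [[2.5, 1.5], [1.5, 2.5]], with the same eigenvalues 4 and 1.
print(anisotropy_ratio_2x2(4.0, 0.0, 1.0))  # → 2.0
print(anisotropy_ratio_2x2(2.5, 1.5, 2.5))  # → 2.0
```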
In addition to these cases of isotropy and axis-aligned anisotropy, some variants are also consistent if the metric anisotropy $\kappa(\mathcal M)$ is smaller than a given bound $\kappa_0$, see \cite{JBTDPIB08} and Figure \ref{fig:Classical}. Our numerical experiments \S \ref{sec:num} include for comparison one of these methods: Fast Marching using the 8 point stencil (FM-8, center left stencil on Figure \ref{fig:Classical}), which is popular in applications \cite{BC10} thanks to its short computation times, despite the lack of a convergence guarantee for arbitrary metrics. Depending on the implementation \cite{RS09,SV03,T95}, involving either a sorted list or a bucket sort, these methods have complexity $\mathcal O(N\ln N)$ or $\mathcal O(\Upsilon(\mathcal M) N)$, where
$$
\Upsilon(\mathcal M) := \sqrt{\max_{z\in \Omega} \|\mathcal M(z)\| \max_{z'\in \Omega} \|\mathcal M(z')^{-1}\|}.
$$
In the applications for which our method is intended, one typically has $\ln(N) \hbox{\kern -.2em\raisebox{-1ex}{$~\stackrel{\textstyle<}{\sim}~$}}\kern -.2em \kappa(\mathcal M) \leq \Upsilon(\mathcal M) \ll N$, in such a way that the complexity $\mathcal O(N \ln N + N \ln \kappa(\mathcal M))$ of the proposed method is comparable to $\mathcal O(N\ln N)$ and smaller than $\mathcal O(\Upsilon(\mathcal M) N)$. In summary, the FM-LBR combines the universal consistency (i.e.\ for any Riemannian metric) of the AGSI, OUM and MAOUM, with a quasi-linear complexity, just as the original Fast Marching algorithm.
\begin{remark*}
Each solver of the eikonal equation comes with a specific construction of the stencils $V(z)$, $z \in \Omega$.
The highly efficient, but specialized, approach used in the FM-LBR limits its potential for generalization, see the end of \S \ref{sec:MainResults}. Static adaptive stencils also have a memory impact, see Remark \ref{rem:memory}. Since the Hopf-Lax update operator \eqref{def:HopfLax} depends on the stencils, the discrete solution $\dist$ of \eqref{discreteSys} is scheme dependent, and so is its accuracy. See Appendix \ref{subsec:Accuracy} for a heuristic accuracy analysis, and \cite{M12b} for the case of constant metrics. Numerical experiments \S \ref{sec:num}, on application inspired test cases, show that the FM-LBR accuracy is competitive.
\end{remark*}
\begin{figure}
\begin{center}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/FMNeigh4.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/FMNeigh8.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/FMNeigh8_3D.png}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/FMNeigh26_3D.png}
\end{center}
\caption{
\label{fig:Classical}
Some classical stencils used in the discretization of two dimensional (left) or three dimensional (right) eikonal equations. The meshes are $M$-acute, a property implying discrete causality, see Definition \ref{def:Mesh}, for matrices $M$ which are diagonal or of anisotropy ratio $\kappa(M)$ bounded by respectively $1, \ 1+\sqrt 2, \ 1, \ (\sqrt 3 +1)/2$ (from left to right).
}
\end{figure}
\section{Sparse causal stencils for the anisotropic eikonal equation}
\label{sec:MainResults}
Our main contribution is the construction of discretization stencils for anisotropic eikonal equations, which have a uniformly bounded cardinality and preserve a structural property of the PDE: causality, inherited from its interpretation as a deterministic control problem.
Discrete causality, which allows solving the fixed point system \eqref{discreteSys} in a single pass, is the following property.
\begin{proposition}[Causality property, J.A.\ Sethian, A.\ Vladimirsky, Appendix of \cite{SV03}]
\label{prop:Causality}
Let $x \in {\rm \hbox{I\kern-.2em\hbox{R}}}^m$, and let $V$ be a finite mesh of a neighborhood of $x$. Let $M \in S_m^+$ and let us assume that $\langle y-x, M (z-x) \rangle \geq 0$ for any vertices $y$, $z$ of a common face of $\partial V$ (acuteness condition). Consider a discrete map $\dist$ defined on the vertices of $V$, and the optimization problem
\begin{equation*}
\Lambda := \min_{y \in \partial V} \|x-y\|_M + \interp_V \dist (y),
\end{equation*}
where $\interp$ denotes piecewise linear interpolation. Then the minimum defining $\Lambda$ is attained on a $k$-face $[y_1, \cdots, y_k]$ of $\partial V$, with $k \leq m$, such that $\Lambda > \dist(y_i)$ for all $1 \leq i \leq k$.
\end{proposition}
A mesh $V$ satisfying the geometric acuteness condition of Proposition \ref{prop:Causality} is called a causal stencil, at the point $x$, and with respect to $M$. Consider the directed graph $G$, associated to the fixed point problem \eqref{discreteSys}, where for all $x,y \in Z$ we place an arrow $x \to y$ iff the minimum defining $\dist(x) = \Lambda(\dist,x)$ is attained on a face of the stencil $\partial V(x)$ containing $y$. The causality property states that the presence of an arrow $x \to y$ implies $\dist(x) > \dist(y)$; in particular the graph $G$ has no cycles, hence the system of equations \eqref{discreteSys} does not feature any dependency loop.
The Fast Marching algorithm \cite{T95} traces back these dependencies, determining at run-time an ordering $(x_i)_{i=1}^N$ of the discrete domain $Z \cap \Omega$ such that the distances $(\dist(x_i))_{i=1}^N$ are increasing. We reproduce this method for completeness, see Algorithm \ref{algo:FastMarching}, but refer to its original introduction \cite{T95}, or its use with alternative static adaptive stencils in the MAOUM \cite{AltonMitchell08}, for the proof that it solves the discrete system \eqref{discreteSys}.
\begin{algorithm}
\label{algo:FastMarching}
\caption{The Fast Marching algorithm, with static stencils adapted to a metric.}
\begin{tabular}{l}
\textbf{Input}: The values $\mathcal M(z)$ of a Riemannian metric, for all $z \in Z \cap \Omega$.\\
\textbf{Construct} causal stencils $V(z)$, with respect to $\mathcal M(z)$, at all points $z\in Z \cap \Omega$.\\
\textbf{Construct} the reversed stencils, defined by $V[y] := \{ z \in Z\cap \Omega ; \, y \text{ is a vertex of } V(z)\}$.\\
\textbf{Initialize} a (mutable) table $\dist : Z \to {\rm \hbox{I\kern-.2em\hbox{R}}}$, to $+ \infty$ on $Z \cap \Omega$, and to $0$ elsewhere.\\
\textbf{Initialize} a (mutable) boolean table $b : Z \to \{trial, accepted\}$ with $b(y) \leftarrow trial$ iff $V[y]\neq \emptyset$.\\
\textbf{While} there remains a $trial$ point (i.e.\ $y\in Z$ such that $b(y) = trial$) \textbf{do}\\
\phantom{Whi} Denote by $y$ a $trial$ point which minimizes $\dist$, and set $b(y) \leftarrow accepted$.\\
\phantom{Whi} For all $x \in V[y]$, set $\dist(x) \leftarrow \min \{ \dist(x), \, \Lambda(\dist,x; \, b,y)\}$.\\
\textbf{Output}: The map $\dist : Z \to {\rm \hbox{I\kern-.2em\hbox{R}}}$.
\end{tabular}
\end{algorithm}
We denote by $\Lambda(\dist,x; \, b, y)$ the modification of the Hopf-Lax update operator \eqref{def:HopfLax} in which the minimum is only taken over the faces (of any dimension) of $\partial V(x)$ whose vertices (i) include $y$, and (ii) are all $accepted$. Regarding the FM-LBR complexity $\mathcal O(N \ln N + N \ln \kappa(\mathcal M))$, we refer for details to the classical analysis in \cite{T95,SV03,AltonMitchell08} and simply point out that (i) each FM-LBR causal stencil costs $\mathcal O(\ln \kappa(\mathcal M))$ to construct, (ii) maintaining a list of $\Omega \cap Z$, sorted by increasing values of the mutable map $\dist$, costs $\mathcal O(\ln N)$ for each modification of a single value of $\dist$, with a proper heap sort implementation, and (iii) the optimization problem defining the Hopf-Lax update \eqref{def:HopfLax}, or its variant $\Lambda(\dist,x; \, b, y)$, has an explicit solution: the minimum associated to each face of $\partial V(x)$ is the root of a simple univariate quadratic polynomial, see Appendix of \cite{SV03}. Memory usage is discussed in detail in Remark \ref{rem:memory}. As announced, we limit our attention to PDE discretizations on cartesian grids, of the form
\begin{equation}
\label{def:Grid}
Z = h R (\xi + {\rm {{\rm Z}\kern-.28em{\rm Z}}}^m),
\end{equation}
where $h>0$ is a scaling parameter, $R$ a rotation, and $\xi \in {\rm \hbox{I\kern-.2em\hbox{R}}}^m$ an offset (in \eqref{def:Grid} and \eqref{eq:VFromT} we abusively apply geometric transformations not only to points, but also to sets of points and meshes).
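The single-pass structure of Algorithm \ref{algo:FastMarching} can be sketched as follows. This is our own illustrative Python sketch, not the paper's implementation: restricting the Hopf-Lax minimum to the stencil vertices alone (the $k=1$ faces) reduces each update to $\dist(y) + \mathrm{cost}(x,y)$, i.e.\ exactly Dijkstra's algorithm with a binary heap; the full scheme also minimizes over the higher-dimensional faces of $\partial V(x)$. The boundary condition is represented here by a set of source points at distance zero.

```python
import heapq

def fast_marching_vertex(sources, neighbors, cost):
    """Single-pass resolution of dist(x) = min over stencil vertices y of
    dist(y) + cost(x, y).  With vertex-only (k = 1 face) updates this is
    Dijkstra's algorithm: each point is 'accepted' once, in increasing
    order of distance, mirroring the causality property."""
    dist = {s: 0.0 for s in sources}
    heap = [(0.0, s) for s in sources]
    heapq.heapify(heap)
    accepted = set()
    while heap:
        d, y = heapq.heappop(heap)
        if y in accepted:
            continue
        accepted.add(y)          # b(y) <- accepted
        for x in neighbors(y):   # points x such that y is a vertex of V(x)
            nd = d + cost(x, y)
            if nd < dist.get(x, float("inf")):
                dist[x] = nd
                heapq.heappush(heap, (nd, x))
    return dist

# 4-neighbor grid on {0,...,4}^2 with unit edge cost: the computed distances
# are the l^1 distances to the source (0, 0).
def nbs(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

inside = lambda p: [q for q in nbs(p) if 0 <= q[0] <= 4 and 0 <= q[1] <= 4]
d = fast_marching_vertex({(0, 0)}, inside, lambda x, y: 1.0)
print(d[(3, 2)])  # → 5.0
```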
The use of an unbounded grid \eqref{def:Grid} is a mathematical artifact aimed at simplifying the exposition; only points of $Z$ close to $\Omega$ (precisely: such that $V[y] \neq \emptyset$) play an active role in Algorithm \ref{algo:FastMarching}. We may construct a causal stencil at $x \in Z$ with respect to $M\in S_m^+$, by suitably scaling, translating and rotating an $R^\trans M R$-acute mesh ${\cal T}$, defined below:
\begin{equation}
\label{eq:VFromT}
V = x + h \,R \,{\cal T}.
\end{equation}
\begin{definition}
\label{def:Mesh}
An $M$-acute mesh, where $M \in S_m^+$, is an $m$-dimensional mesh ${\cal T}$ covering a neighborhood of the origin, and such that the vertices $v_0, \cdots , v_m$ of any simplex $T \in {\cal T}$ satisfy (i) $v_0=0$, (ii) $(v_1, \cdots, v_m)$ form a basis of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$, and (iii) $\langle v_i, M v_j\rangle\geq 0$ for all $1 \leq i < j \leq m$.
\end{definition}
The condition $|\det(v_1, \cdots, v_m)|=1$ ensures that $0$ (resp.\ $x$) is the only grid point in the interior of the domain covered by ${\cal T}$ (resp.\ $V$), hence that information does not ``fly over'' some grid points when solving \eqref{discreteSys}. Applying the next proposition to the classical stencils of Figure \ref{fig:Classical}, we obtain that they are causal for Riemannian metrics of limited anisotropy.
\begin{proposition}
\label{propGammaT}
Let ${\cal T}$ be an $m$-dimensional mesh satisfying the requirements of Definition \ref{def:Mesh}, except (iii) (which does not make sense without a given matrix $M \in S_m^+$).
Let
\begin{equation}
\label{defKappaT}
\kappa({\cal T}) := \sqrt{\frac{1+\gamma({\cal T})}{1-\gamma({\cal T})}}, \quad \text{where} \quad \gamma({\cal T}) := \min_{T,(u,v)} \frac{\langle u,v\rangle}{\|u\| \|v\|},
\end{equation}
and where the minimum in $\gamma({\cal T})$ is taken among all non-zero vertices $u,v$ of a common simplex $T\in {\cal T}$. The mesh ${\cal T}$ is $M$-acute for any $M\in S_m^+$ such that $\kappa(M) \leq \kappa({\cal T})$.
\end{proposition}
\begin{proof}
Let $u,v$ be two non-zero vertices of a common simplex $T\in {\cal T}$, and let $M\in S_m^+$. Let $u' := u/\|u\|$ and let $v' := v/\|v\|$. By construction one has
$$
\|u'+v'\|^2 = 2(1+\langle u', v'\rangle) \geq 2(1+\gamma({\cal T})), \qquad \|u'-v'\|^2 = 2(1-\langle u',v'\rangle) \leq 2(1-\gamma({\cal T})).
$$
Let us assume for contradiction that $\langle u, M v\rangle < 0$, which implies that $\|u'+v'\|_M < \|u'-v'\|_M$. Observing that
$$
\kappa(M)^2 = \|M\| \|M^{-1}\| \geq \frac {\|u'-v'\|^2_M}{\|u'-v'\|^2} \frac{\|u'+v'\|^2}{\|u'+v'\|^2_M} >\frac{1+\gamma({\cal T})}{1-\gamma({\cal T})},
$$
we obtain that $\kappa(M) > \kappa({\cal T})$, which concludes the proof of this proposition.
\end{proof}
Our construction of $M$-acute meshes relies on special coordinate systems in the grid ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$, adapted to the anisotropic geometry encoded by $M\in S_m^+$.
\begin{definition}[Bases and superbases]
\label{def:BasisSuperbase}
A basis of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$ is an $m$-plet $(e_1, \cdots, e_m) \in ({\rm {{\rm Z}\kern-.28em{\rm Z}}}^m)^m$ such that $|\det(e_1, \cdots, e_m)| = 1$.
A superbase of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$ is an $(m+1)$-plet $(e_0, \cdots, e_m)$ such that $e_0+\cdots+e_m = 0$, and $(e_1, \cdots, e_m)$ is a basis of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$.
\end{definition}
\begin{definition}
\label{def:ObtuseSuperbase}
A superbase $(e_0, \cdots, e_m)$ of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$ is said to be $M$-obtuse iff $\langle e_i, M e_j\rangle \leq 0$ for all $0 \leq i < j \leq m$.
\end{definition}
For each $M \in S_m^+$, $m \in \{2,3\}$, there exists at least one $M$-obtuse superbase \cite{CS92}. The construction of 2D and 3D $M$-obtuse superbases relies on lattice basis reduction algorithms \cite{L1773,NS09} (hence the name of our numerical scheme) and has cost $\mathcal O(\ln \kappa(M))$, see \S \ref{sec:reduced}. We arrive at the main contribution of this paper: the FM-LBR stencils, which are causal and of bounded cardinality.
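The anisotropy bound $\kappa({\cal T})$ of the proposition above is easily evaluated for the classical two dimensional stencils of Figure \ref{fig:Classical}: for the 8 point stencil the minimal cosine between vertices of a common triangle is $\cos(\pi/4)=1/\sqrt 2$, yielding $\kappa({\cal T}) = 1+\sqrt 2$, while the 4 point stencil gives $\gamma=0$ and $\kappa({\cal T})=1$. A small Python check (the function name is ours, for illustration):

```python
import math

def kappa_of_stencil(simplices):
    """Anisotropy bound kappa(T) = sqrt((1+gamma)/(1-gamma)), where gamma
    is the minimal cosine <u,v>/(|u||v|) over non-zero vertices u, v of a
    common simplex, as in the proposition above."""
    gamma = 1.0
    for simplex in simplices:
        verts = [v for v in simplex if v != (0, 0)]
        for i in range(len(verts)):
            for j in range(i + 1, len(verts)):
                u, v = verts[i], verts[j]
                cos = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
                gamma = min(gamma, cos)
    return math.sqrt((1 + gamma) / (1 - gamma))

# 8 point stencil: triangles with vertices 0, an axis vector, and an
# adjacent diagonal (those with dot product 1).
axes = [(1, 0), (0, 1), (-1, 0), (0, -1)]
diags = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
eight = [((0, 0), a, d) for a in axes for d in diags
         if a[0] * d[0] + a[1] * d[1] == 1]
print(kappa_of_stencil(eight))  # ≈ 2.4142 = 1 + sqrt(2)
```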
\begin{figure}
\begin{center}
\iftoggle{siam}{
\includegraphics[width=2.5cm]{Illustrations/FastMarchingIllus/RevisedStencils/Stencil2D_AllVertices.pdf}
\includegraphics[width=3.5cm]{Illustrations/FastMarchingIllus/RevisedStencils/Stencil3D_AllVertices.pdf}
\hspace{0.2cm}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/RevisedStencils/Stencil3D_kappa=3.png}
\hspace{-0.3cm}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/RevisedStencils/Ellipse3D_kappa=3.png}
}{
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/RevisedStencils/Stencil2D_AllVertices.pdf}
\includegraphics[width=4.5cm]{Illustrations/FastMarchingIllus/RevisedStencils/Stencil3D_AllVertices.pdf}
\hspace{0.2cm}
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/RevisedStencils/Stencil3D_kappa=3.png}
\hspace{-0.3cm}
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/RevisedStencils/Ellipse3D_kappa=3.png}
}
\end{center}
\caption{\label{figNeigh}
Connectivity of the 2D (left) and 3D (center left, interior edges omitted, missing boundary edges are symmetric w.r.t.\ the origin) FM-LBR meshes ${\cal T}(M)$, see Proposition \ref{prop:Stencils} (note that $- e_i= \sum_{j \neq i} e_j$). Mesh ${\cal T}(M)$ (center right) associated to $M \in S_3^+$ of eigenvalues $3^2,1,1$, and eigenvector $(3,1,2)$ for the first eigenvalue. Right: unit ball for the norm $\|\cdot\|_M$.
}
\end{figure}
\begin{proposition}[The FM-LBR acute meshes]
\label{prop:Stencils}
Let $M \in S_m^+$, and let $(e_0,\cdots,e_m)$ be an $M$-obtuse superbase of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$ (if one exists). An $M$-acute mesh ${\cal T}(M)$ is obtained by collecting the simplices of vertices $(\sum_{i=0}^{k-1} b_i)_{k=0}^{m}$ associated to all $(m+1)!$ permutations $(b_i)_{i=0}^m$ of $(e_i)_{i=0}^m$. It has $2^{m+1} -1$ vertices.
\end{proposition}
\begin{proof}
Proof of the properties of the simplices. Let $(b_i)_{i=0}^m$ be a permutation of $(e_i)_{i=0}^m$, and let $v_k := \sum_{i=0}^{k-1} b_i$, for all $0 \leq k \leq m$. Clearly (i) $v_0=0$, and (ii) $|\det(v_1, \cdots, v_m)| = |\det(b_1, \cdots, b_m)| = |\det(e_1, \cdots, e_m)|= 1$. Acuteness condition (iii): for any $1 \leq k < l \leq m$ one has $v_l = -\sum_{j=l}^m b_j$ by Definition \ref{def:BasisSuperbase}, hence by Definition \ref{def:ObtuseSuperbase}
\begin{equation*}
\langle v_k, M v_l\rangle = - \sum_{0 \leq i < k} \ \sum_{l \leq j \leq m} \langle b_i,M b_j\rangle \geq 0.
\end{equation*}
Proof that ${\cal T}(M)$ is a conforming mesh covering a neighborhood of the origin. Consider the $(m+1)$-dimensional Kuhn simplices $T_\sigma := \{ (\lambda_0, \cdots, \lambda_m); \, 0 \leq \lambda_{\sigma(0)} \leq \lambda_{\sigma(1)} \leq \cdots \leq \lambda_{\sigma(m)} \leq 1\}$, associated to all permutations $\sigma$ of $\{0,\cdots,m\}$, which form a conforming mesh ${\cal T}_0$ of $[0,1]^{m+1}$. The linear map $A : {\rm \hbox{I\kern-.2em\hbox{R}}}^{m+1} \to {\rm \hbox{I\kern-.2em\hbox{R}}}^m$, defined by $A(\lambda_0, \cdots, \lambda_m) := -\sum_{i=0}^m \lambda_i e_i$, has a kernel generated by $v_1 := (1, \cdots,1)$. It sends $[0,1]^{m+1}$ onto a neighborhood of $0 \in {\rm \hbox{I\kern-.2em\hbox{R}}}^m$, and transforms ${\cal T}_0$ into an $m$-dimensional conforming mesh ${\cal T}_M$ of this neighborhood, collapsing onto $0$ the common edge $[0,v_1]$ of the simplices $T_\sigma$. Other vertices of $[0,1]^{m+1}$ have pairwise distinct images by $A$, since their difference is not proportional to $v_1$; hence ${\cal T}_M$ has $2^{m+1}-1$ vertices (one less than ${\cal T}_0$).
Noticing the identity $-\sum_{i=0}^m \lambda_i e_i = \sum_{k=1}^m (\lambda_{\sigma(k)} - \lambda_{\sigma(k-1)}) \sum_{i=0}^{k-1} e_{\sigma(i)}$, we find that the image of $T_\sigma$ by $A$ is the simplex of ${\cal T}(M)$ associated to the permuted superbase $(e_{\sigma(i)})_{i=0}^m$, hence ${\cal T}_M={\cal T}(M)$, which concludes the proof.
\end{proof}
Obtuse superbases are similarly used in \cite{FM13} to produce sparse non-negative stencils for Anisotropic Diffusion (AD-LBR scheme), with different results: 3D stencils have $12$ non-zero vertices in \cite{FM13}, and $14$ here. This illustrates the versatility of this concept, which can be used to design stencils satisfying various geometric properties: an acuteness condition for the FM-LBR (implying the scheme's causality), and the non-negative decomposition of a tensor for the AD-LBR (guaranteeing the scheme's monotonicity).
\begin{definition}
\label{def:Admissibility}
A family of meshes $({\cal T}(x))_{x \in \Omega}$, defined for all points of an open domain $\Omega \subset {\rm \hbox{I\kern-.2em\hbox{R}}}^m$, is admissible iff there exist two constants $0 < r$ and $R < \infty$ such that, for each $x\in \Omega$:
\begin{itemize}
\item The mesh ${\cal T}(x)$ covers a neighborhood of $0$, and its vertices belong to ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$.
\item (Boundedness) The vertices $e$ of ${\cal T}(x)$ satisfy $\|e\| \leq R$.
\item (Stability) There exists a basis $\mathcal B(x)$ of ${\rm {{\rm Z}\kern-.28em{\rm Z}}}^m$ whose elements, and their opposites, are vertices of ${\cal T}(y)$ for all $y \in \Omega$ such that $\|x-y\| < r$.
\end{itemize}
\end{definition}
\begin{figure}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Neighs/RotatedEllipses.pdf}
{\raise \iftoggle{siam}{0.5cm}{0cm} \hbox{
\includegraphics[width=\iftoggle{siam}{10cm}{13cm}]{Illustrations/FastMarchingIllus/VladBench/Neighs/RotatedNeighs.pdf}
}}
\caption{
\label{fig:BasisRotate}
The unit sphere $\{u\in {\rm \hbox{I\kern-.2em\hbox{R}}}^2; \, \|u\|_M=1\}$, an $M$-reduced basis $(u,v)$, and the boundary of the FM-LBR mesh ${\cal T}(M)$, for some $M\in S_2^+$ of anisotropy ratio $\kappa(M)=6$, and eigenvector $(\cos\theta,\, \sin\theta)$, $\theta\in [\pi/4,\pi/2]$, associated to the small eigenvalue.
}
\end{figure}
A fortunate but non-trivial fact is that the FM-LBR family of meshes $({\cal T}(\mathcal M(x)))_{x\in \Omega}$ is admissible, see Proposition \ref{prop:Admissibility} and Figures \ref{fig:BasisRotate} and \ref{fig:BasisScale}. This property implies the consistency of the FM-LBR, using Proposition \ref{prop:Convergence}, which is a minor extension of the convergence results in \cite{T95,BR06}.
\begin{proposition}
\label{prop:Admissibility}
Let $\Omega \subset {\rm \hbox{I\kern-.2em\hbox{R}}}^m$ be open and bounded, and let $\mathcal M\in C^0(\overline \Omega, S_m^+)$. If $m \leq 3$ then the FM-LBR family of meshes $({\cal T}(\mathcal M(x)))_{x \in \Omega}$ is admissible (with Boundedness constant $R=C_m \kappa(\mathcal M)$, $C_2=2$, $C_3=4$). More generally, if $m \leq 4$ then any family of meshes $({\cal T}(x))_{x \in \Omega}$ such that ${\cal T}(x)$ is $\mathcal M(x)$-acute for all $x\in \Omega$ satisfies the (Stability) property.
\end{proposition}
\begin{proposition}
\label{prop:Convergence}
Let $\Omega \subset {\rm \hbox{I\kern-.2em\hbox{R}}}^m$ be an open bounded set, equipped with a Riemannian metric $\mathcal M \in C^0(\overline \Omega, S_m^+)$, and an admissible family $({\cal T}(z))_{z \in \Omega}$ of meshes. For all $h>0$ let $Z_h := h {\rm {{\rm Z}\kern-.28em{\rm Z}}}^2$, and for all $z \in Z_h \cap \Omega$ consider the stencil $V_h(z) := z+ h {\cal T}(z)$. Then the solutions $\dist_h : Z_h \to {\rm \hbox{I\kern-.2em\hbox{R}}} \cup \{+\infty\}$ of the discrete system \eqref{discreteSys} converge uniformly as $h \to 0$ to the viscosity solution $\distC$ of the eikonal equation \eqref{eikonal}:
\begin{equation}
\label{eq:UnifConv}
\lim_{h \to 0} \, \max_{z \in Z_h \cap \Omega} |\dist_h(z) - \distC(z)| = 0.
\end{equation}
\end{proposition}
To sum up the FM-LBR strengths: this algorithm is universally consistent, has a competitive accuracy w.r.t.\ alternative methods, and its computational cost is almost unaffected by the Riemannian metric anisotropy. For fairness we discuss below the potential downsides of our original stencil construction, which limits the potential for generalization (efficiency often comes at the cost of specialization), and impacts memory usage.
\begin{itemize}
\item (Finsler metrics) The FM-LBR only applies to the anisotropic eikonal equation associated to a Riemannian metric, while other methods such as the AGSI, OUM, MAOUM \cite{BR06,SV03,AltonMitchell12} can handle more general Finsler metrics. Indeed the structure $\|\cdot\|_{\mathcal M(z)}$ of the local norm at $z$ associated to a Riemannian metric is required in the FM-LBR stencil construction, Proposition \ref{prop:Stencils}, which involves an $\mathcal M(z)$-obtuse superbase.
Finsler metrics are in contrast defined by arbitrary asymmetric norms $|\cdot|_z$, depending continuously on $z$. Constructing causal static stencils for Finsler metrics is an active subject of research. A characterization obtained in \cite{VladMSSP} was used in \cite{M12c} to develop the FM-ASR (Fast Marching using Anisotropic Stencil Refinement), which is close in spirit and in efficiency to the FM-LBR, but has a different application range, since it handles Finsler metrics on two dimensional domains. It was also observed in \cite{SethBook96,SV03,AltonMitchell08} that the canonical stencil (Figure \ref{fig:Classical}, left) is causal for Finsler metrics featuring only axis-aligned anisotropy, in such a way that the original fast marching algorithm can be applied.
\item (Domain discretization) The FM-LBR requires a cartesian grid discretization. In contrast an important research effort \cite{BS98,BR06,KS98,LYC03,SV03} has been devoted to the more difficult setting of meshed domains, with an unstructured set of vertices. These methods are natural candidates for the computation of distances on a manifold ${\cal S} \subset {\rm \hbox{I\kern-.2em\hbox{R}}}^m$, while the FM-LBR would require a (collection of) local chart(s) $\varphi : \Omega \to {\cal S}$ equipped with the metric $\mathcal M(z) := \nabla \varphi(z)^\trans \nabla \varphi(z)$. The FM-LBR heavily relies on the cartesian grid arithmetic structure, through the concept of obtuse superbases.
\item (Boundary conditions) The null boundary conditions chosen in the eikonal equation \eqref{eikonal} can be replaced with Dirichlet data $\distC_0 : \partial \Omega \to {\rm \hbox{I\kern-.2em\hbox{R}}}$, of $1$-Lipschitz regularity \cite{L82} with respect to the Riemannian distance $\distC(\cdot, \cdot)$, see \eqref{def:Length}.
In that event, Algorithm \ref{algo:FastMarching} requires extending the boundary data $\distC_0$ to a ghost layer, containing all grid points $y \in Z \setminus \Omega$ such that the reverse stencil $V[y]$ is non-empty. The FM-LBR uses large stencils, of euclidean radius $\mathcal O(h \kappa(\mathcal M))$ on a grid \eqref{def:Grid} of scale $h$, which complicates this extension in contrast with e.g.\ the AGSI \cite{BR06}, of stencil radius $\mathcal O(h)$. Outflow boundary conditions, on a portion $\Gamma \subsetneq \partial \Omega$ of the domain's boundary, are natural in applications, see \S \ref{sec:num}. They are implemented by excluding, in the definition \eqref{def:HopfLax} of $\Lambda(\dist, x)$, the faces of $\partial V(x)$ containing an exterior vertex $y \in Z \setminus \Omega$ close to $\Gamma$. If $x$ lies in a corner of $\Omega$, and if the stencil $V(x)$ is strongly anisotropic, then it may happen that all the vertices of $\partial V(x)$ lie \emph{outside} $\Omega$, so that the solution of \eqref{discreteSys} satisfies $\dist(x) = +\infty$. (In our experiments \S \ref{sec:num}, this happened in the square domain's corners when test case 1 was rotated by an angle $\theta \in [0.56,0.61]$ radians. These four infinite values were rejected when estimating numerical errors.) Despite these minor inconveniences, the FM-LBR behaves remarkably well in our numerical experiments, \S \ref{sec:num} and Appendix \ref{subsec:Accuracy}, with these more general boundary conditions.
\item (Dimension) The FM-LBR causal stencil construction, see Proposition \ref{prop:Stencils}, is limited to domains of dimension $2$ and $3$, because some matrices $M \in S_4^+$ do not admit any $M$-obtuse superbase \cite{NS09}.
An alternative construction of $M$-acute meshes of uniformly bounded cardinality is proposed in \cite{M12b}, for all $M \in S_4^+$; unfortunately this cardinality is not small: $768$ simplices. The FM-LBR however extends in a straightforward manner to Riemannian metrics $\mathcal M$ having a block diagonal structure, with blocks of size $1$, $2$ or $3$, using the following construction. For $i \in \{1,2\}$ let $m_i$ be a positive integer, let $M_i \in S_{m_i}^+$, and let ${\cal T}_i$ be an $M_i$-acute mesh. Let $m := m_1+m_2$ and let $M\in S_m^+$ be the matrix of diagonal blocks $M_1,M_2$. An $M$-acute mesh ${\cal T}$ is obtained by collecting the $m$-dimensional simplices of vertices $(0,0)$, $(u_1,0), \cdots, (u_{m_1},0)$, $(0,v_1), \cdots, (0,v_{m_2})$, where the simplices of vertices $0, u_1, \cdots, u_{m_1}$ and $0,v_1, \cdots, v_{m_2}$ belong to ${\cal T}_1$ and ${\cal T}_2$ respectively. Block diagonal metrics are not uncommon in the context of medical imaging \cite{BC10}, as they inherit the cartesian product structure of the fast marching domain: $\Omega = \Omega_0 \times \Omega_1$, where $\Omega_0$ is a physical domain of dimension $\leq 3$, and $\Omega_1$ is an abstract parameter domain of dimension $\leq 2$.
\end{itemize}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
{\raise \iftoggle{siam}{0.2cm}{0.5cm} \hbox{
\includegraphics[width=2cm]{Illustrations/FastMarchingIllus/VladBench/Neighs/ScaledEllipses.pdf}
}} &
\includegraphics[width=\iftoggle{siam}{7cm}{8cm}]{Illustrations/FastMarchingIllus/VladBench/Neighs/ScaledNeighs.pdf}
\end{tabular}
\end{center}
\iftoggle{siam}{}{\vspace{-0.8cm}}
\caption{
\label{fig:BasisScale}
The unit sphere $\{u\in \mathbb{R}^2; \, \|u\|_M=1\}$, an $M$-reduced basis $(u,v)$, and the boundary of the FM-LBR mesh ${\cal T}(M)$, for some $M\in S_2^+$ of anisotropy ratio $\kappa(M)$ ranging from $1$ to $15$, with eigenvector $(\cos(3\pi/8),\sin(3 \pi/8))$ associated with the smallest eigenvalue.
}
\end{figure}
\paragraph{Outline.}
Further insight on the FM-LBR is given in \S \ref{sec:reduced}, including the proof of Propositions \ref{prop:Admissibility} and \ref{prop:Convergence}. Numerical experiments are presented in \S \ref{sec:num}. In addition, Appendix \ref{sec:Paths} describes a robust minimal path extraction method for the FM-LBR and other Dijkstra inspired solvers of the eikonal equation. A heuristic analysis of the FM-LBR accuracy, and a last numerical experiment, appear in Appendix \ref{subsec:Accuracy}.
\begin{remark}[Memory requirements]
\label{rem:memory}
The memory requirements of numerical methods for the eikonal equation, such as the AGSI, the OUM and the FM-LBR, are dominated by (I) storing the discrete solution $\dist$ and the Riemannian metric $\mathcal M$, sampled on the discrete domain $\Omega \cap Z$, and (II) storing the graph structure underlying the numerical scheme.
Point (I) requires two tables of $N$ and $N m (m+1)/2$ reals, which may be represented in 32 bit (single precision) or 64 bit (double precision) format. The storage cost for the metric can be avoided if it has an analytical expression. Point (II) can be avoided for the AGSI and the OUM when these methods are executed on a mesh with a trivial periodic structure, which is the case in our experiments. For the FM-LBR, Point (II) amounts to storing the non-empty reverse stencils $V[y]$, at all points of $Y := \{y \in Z; \, V[y]\neq \emptyset\}$, since the direct ones can be recomputed individually on demand for a minor cost. The set $Y$ is the union of $\Omega \cap Z$ and of a thin boundary layer (in our experiments, $Y=\Omega \cap Z$ due to the use of outflow boundary conditions). The chosen data structure uses two tables: one of vectors (the differences $x-y$, for $x \in V[y]$, $y \in Y$, enumerated consecutively), and one of $\#(Y) \approx N$ integers (the start, for each $y \in Y$, of the description of $V[y]$ in the previous table). We represent integers in 32 bit format, and vector components in 8 bit format, since these are small integers by construction. Summing up, we find that the memory requirements of the FM-LBR are larger than those of the AGSI or the OUM (on a grid) by a factor ranging, in two dimensions, from $2$ (metric and solution stored in double precision) to $9$ (analytical metric, solution stored in single precision), through $3$ (metric and solution stored in single precision). In three dimensions, the factor ranges from $2.6$ to $24$, through $4.3$.
\end{remark}
\section{Analysis of the FM-LBR}
\label{sec:reduced}
We introduce in \S \ref{sec:basis} the concepts of Lattice Basis Reduction.
They are used in \S \ref{sec:MeshProperties} to estimate the construction cost of the FM-LBR meshes ${\cal T}(M)$, and to prove their admissibility in the sense of Definition \ref{def:Admissibility}, as announced in Proposition \ref{prop:Admissibility}. We finally prove in \S \ref{sec:Convergence} the announced convergence result, Proposition \ref{prop:Convergence}.
\subsection{Introduction to Lattice Basis Reduction}
\label{sec:basis}
We briefly introduce the framework of (low dimensional) Lattice Basis Reduction, used in the next subsection to construct and establish the properties of $M$-acute meshes. See \cite{NS09} and references therein for more details on this rich theory, from which we use only one result: Theorem \ref{th:Red} stated below. We denote by $b_1 \mathbb{Z} +\cdots + b_k \mathbb{Z}$ the sub-lattice of $\mathbb{Z}^m$ generated by $b_1, \cdots, b_k \in \mathbb{Z}^m$:
$$
b_1 \mathbb{Z} +\cdots + b_k \mathbb{Z} := \{b_1 z_1 + \cdots + b_k z_k ; \ z_1, \cdots, z_k \in \mathbb{Z}\}.
$$
If $k=0$ then the above sum equals $\{0\}$ by convention.
\begin{definition}
\label{def:RedBasis}
Let $1 \leq m \leq 4$ and let $M \in S_m^+$. A basis $(b_1, \cdots , b_m)$ of $\mathbb{Z}^m$ is said to be $M$-reduced iff for all $1 \leq k \leq m$:
\begin{equation}
\label{def:uk}
b_k \in \argmin\{ \|z\|_M; \, z\in \mathbb{Z}^m \setminus (b_1 \mathbb{Z}+ \cdots +b_{k-1} \mathbb{Z})\}.
\end{equation}
\end{definition}
If $M\in S_m^+$ is a diagonal matrix of coefficients $(\lambda_1, \cdots, \lambda_m)$, and if $0<\lambda_{\sigma(1)} \leq \cdots \leq \lambda_{\sigma(m)}$ for some permutation $\sigma$, then the permutation $(e_{\sigma(i)})_{i=1}^m$ of the canonical basis of $\mathbb{Z}^m$ is an $M$-reduced basis of $\mathbb{Z}^m$. See Figures \ref{figNeigh}, \ref{fig:BasisRotate}, \ref{fig:BasisScale} for some examples of $M$-reduced bases associated with non-diagonal matrices $M$. In dimension $m\geq 5$ there exist matrices $M\in S_m^+$ such that no basis of $\mathbb{Z}^m$ satisfies the relations \eqref{def:uk}, see \cite{NS09} (these relations state that $\|b_i\|_M$ equals the $i$-th Minkowski minimum $\lambda_i(M)$). Minkowski's reduction \cite{NS09} is the natural generalization of Definition \ref{def:RedBasis} in dimension $m \geq 5$.
\begin{theorem}[Nguyen, Stehl\'e, 2009]
\label{th:Red}
There exists an algorithm which, given a matrix $M\in S_m^+$ as input, $1 \leq m \leq 4$, produces an $M$-reduced basis of $\mathbb{Z}^m$, and has numerical cost $\mathcal O(1+\ln \kappa(M))$.
\end{theorem}
\begin{proof}
The proof is contained in \cite{NS09}; we only point out here the precise reference within that paper and the slight differences in notation. The algorithm described in \cite{NS09} takes as input a basis $(b_1, \cdots, b_m)$ (here: the canonical basis of $\mathbb{R}^m$) of a lattice $L$ (here: $\mathbb{Z}^m$), and its Gram matrix with respect to some scalar product (here: the Gram matrix is $M$).
The algorithm outputs a greedy reduced basis of the lattice $L$, a notion which coincides with Minkowski's reduction if $m\leq 4$ (Lemma 4.3.2 in \cite{NS09}), which itself coincides with Definition \ref{def:RedBasis} if $m\leq 4$. The main loop of the iterative algorithm is executed at most the following number of times (Theorem 6.0.5 in \cite{NS09}):
$$
\mathcal O\left( 1 + \ln \max_{1 \leq i \leq m} \|b_i\|_M - \ln \min_{u\in L \setminus \{0\}} \|u\|_M \right),
$$
hence $\mathcal O(1+ \ln \|M\|^\frac 1 2-\ln \|M^{-1}\|^{-\frac 1 2}) = \mathcal O(1+\ln \kappa(M))$ times in our setting. The complexity of each of these iterations is dominated by a closest vector search, described in Theorem 5.0.4 in \cite{NS09}, which consists of the inversion of a $k\times k$ Gram matrix, where $1 \leq k \leq m-1$, and an exhaustive search among $\mathcal O(1)$ candidate vectors. In terms of elementary operations ($+,-,\times, /$) among reals, each iteration of this algorithm thus has cost $\mathcal O(1)$, and the overall cost is the number of iterations, $\mathcal O(1+\ln \kappa(M))$. Note that an important part of the discussion in \cite{NS09} is devoted to the special case where the vectors $(b_1, \cdots, b_m)$ have \emph{large integer} coefficients, the Gram matrix is computed with respect to the standard Euclidean scalar product, and the complexity of an elementary operation ($+,-,\times, /$) among integers is not $\mathcal O(1)$ but depends on the size of these integers. This more subtle notion of complexity, named bit complexity, is not relevant in our setting.
\end{proof}
The two dimensional version of the algorithms mentioned in Theorem \ref{th:Red} dates back to Lagrange \cite{L1773}, and mimics the search for the greatest common divisor of two integers.
This algorithm uses only a pair $(u,v)$ of (mutable) variables in $\mathbb{Z}^2$, initialized as the canonical basis of $\mathbb{R}^2$. The pair $(u,v)$ becomes an $M$-reduced basis at the end of the following loop, which takes at most $\mathcal O(\ln \kappa(M))$ iterations. $\mathrm{Round}$ denotes rounding to a closest integer.
\begin{center}
\begin{tabular}{l}
\textbf{Do} $(u,v) \, \leftarrow \, (v, \ u - \mathrm{Round}(\langle u, M v\rangle/\|v\|_M^2 ) \,v)$,\\
\textbf{While} $\|u\|_M > \|v\|_M$.
\end{tabular}
\end{center}
We end this subsection with a basic estimate of the norms and scalar products of the elements of $M$-reduced bases.
\begin{proposition}
\label{prop:normRed}
Let $1 \leq m \leq 4$, let $M\in S_m^+$ and let $(b_1, \cdots, b_m)$ be an $M$-reduced basis of $\mathbb{Z}^m$. Then for any $1 \leq i \leq m$
\begin{equation}
\label{biNorm}
\|b_i\| \leq \kappa(M), \quad \text{ and } \quad \|b_i\|_M \leq \kappa(M)\|b_1\|_M.
\end{equation}
For any integer combination $z \in b_1 \mathbb{Z} + \cdots + b_{i-1} \mathbb{Z} + b_{i+1} \mathbb{Z} + \cdots + b_m \mathbb{Z}$ of the basis elements distinct from $b_i$, one has
\begin{equation}
\label{scalNormIneq}
2 |\langle b_i, M z\rangle| \leq \|z\|_M^2.
\end{equation}
\end{proposition}
\begin{proof}
Proof of \eqref{biNorm}. Let $(e_j)_{j=1}^m$ denote the canonical basis of $\mathbb{R}^m$.
By a dimensionality argument there exists $1 \leq j \leq m$ such that $e_j \notin b_1\mathbb{Z}+\cdots + b_{i-1} \mathbb{Z}$. By construction \eqref{def:uk} we obtain $\|b_i\|_M \leq \|e_j\|_M \leq \|M\|^\frac 1 2$ since $\|e_j\|=1$; hence $\|b_i\| \leq \|M^{-1}\|^\frac 1 2 \|b_i\|_M \leq \kappa(M)$, as announced in \eqref{biNorm}. Observing that $\|b_1\|_M \geq \|M^{-1}\|^{-\frac 1 2}$ since $\|b_1\| \geq 1$, and recalling that $\|b_i\|_M \leq \|M\|^\frac 1 2$, we obtain the second announced estimate.

Proof of \eqref{scalNormIneq}. Remark that $b_i+z\notin b_1 \mathbb{Z}+ \cdots+b_{i-1} \mathbb{Z}$, since otherwise the basis element $b_i$ would be an integer linear combination of $b_1, \cdots, b_{i-1}, b_{i+1}, \cdots, b_m$. Definition \ref{def:RedBasis} thus implies:
$$
\|b_i\|^2_M \leq \|b_i+z\|^2_M = \|b_i\|_M^2+2 \langle b_i, M z\rangle+\|z\|_M^2,
$$
hence $-2 \langle b_i, M z\rangle \leq \|z\|_M^2$. Likewise $2 \langle b_i, M z\rangle \leq \|z\|_M^2$, which concludes the proof.
\end{proof}
\subsection{Properties of $M$-acute meshes}
\label{sec:MeshProperties}
\paragraph{The FM-LBR stencil construction has numerical cost $\mathcal O(1+ \ln \kappa(M))$.\\}
The construction of an FM-LBR mesh ${\cal T}(M)$, $M \in S_m^+$, has unit cost (for fixed $m$) given an $M$-obtuse superbase, see Proposition \ref{prop:Stencils}. We give below a unit cost construction of an $M$-obtuse superbase given an $M$-reduced basis, in dimension $m \in \{2,3\}$, which by Theorem \ref{th:Red} can itself be obtained at cost $\mathcal O(1+\ln \kappa(M))$.

Dimension $2$. Let $(b_1,b_2)$ be an $M$-reduced basis.
Up to replacing $b_2$ with its opposite, we may assume that $\langle b_1, M b_2 \rangle \leq 0$. Using \eqref{scalNormIneq} we obtain $2|\langle b_1, M b_2\rangle| \leq \|b_1\|_M^2$, hence $\langle M b_1,-b_1-b_2\rangle \leq 0$ and likewise $\langle M b_2,-b_1-b_2\rangle \leq 0$, so that $(b_1,b_2, -b_1-b_2)$ is an $M$-obtuse superbase.

Dimension $3$. We reproduce without proof the construction of \cite{FM13}. Let $(b_1,b_2,b_3)$ be the elements of an $M$-reduced basis, permuted and signed so that $|\langle b_1, M b_2\rangle| \leq \min \{ - \langle b_1, M b_3\rangle, \allowbreak -\langle b_2, M b_3\rangle\}$. An $M$-obtuse superbase is given by $(b_1,b_2,b_3,-b_1-b_2-b_3)$ if $\langle b_1, M b_2\rangle \leq 0$, and by $(-b_1,b_2,b_1+b_3,-b_2-b_3)$ otherwise.
\paragraph{Radius of the FM-LBR stencils.\\}
Assume that an FM-LBR acute mesh ${\cal T}(M)$ is built from an $M$-obtuse superbase obtained as in the previous paragraph\footnote{This implementation is used in our numerical experiments. See Corollary \ref{corol:VertexNorm} for a proof with no assumption.} from an $M$-reduced basis $(b_i)_{i=1}^m$. Then one easily checks that its vertices have the form $e = \sum_{i=1}^m \varepsilon_i b_i$, where $\varepsilon_i \in \{-1,0,1\}$, so that the vertex norms obey the announced bound $\|e\| \leq \sum_{i=1}^m \|b_i\| \leq m \kappa(M)$. An $M$-reduced basis of $\mathbb{Z}^m$ contains by definition \emph{small} vectors with respect to the norm $\|\cdot\|_M$: the smallest linearly independent ones with integer coordinates. As a result, the FM-LBR meshes ${\cal T}(M)$ have a small radius with respect to the norm $\|\cdot \|_M$. In contrast, these meshes can be large from the Euclidean perspective.
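For concreteness, Lagrange's reduction loop of \S \ref{sec:basis} and the dimension-$2$ obtuse-superbase construction above can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation; the function names and the nested-tuple representation of $M$ are our own choices.

```python
def lagrange_reduce(M):
    """Lagrange's 2D lattice basis reduction: returns an M-reduced basis
    (u, v) of Z^2 for the norm |w|_M = sqrt(<w, M w>).  M is a symmetric
    positive definite 2x2 matrix, given as nested tuples."""
    def mdot(p, q):  # scalar product <p, M q>
        return (p[0] * (M[0][0] * q[0] + M[0][1] * q[1])
                + p[1] * (M[1][0] * q[0] + M[1][1] * q[1]))
    u, v = (1, 0), (0, 1)
    while True:
        r = round(mdot(u, v) / mdot(v, v))  # Round to a closest integer
        u, v = v, (u[0] - r * v[0], u[1] - r * v[1])
        if mdot(u, u) <= mdot(v, v):  # loop while |u|_M > |v|_M
            return u, v

def obtuse_superbase(M):
    """Dimension-2 construction of the text: flip the sign of b2 if needed
    so that <b1, M b2> <= 0; then (b1, b2, -b1-b2) is M-obtuse."""
    def mdot(p, q):
        return (p[0] * (M[0][0] * q[0] + M[0][1] * q[1])
                + p[1] * (M[1][0] * q[0] + M[1][1] * q[1]))
    b1, b2 = lagrange_reduce(M)
    if mdot(b1, b2) > 0:
        b2 = (-b2[0], -b2[1])
    return b1, b2, (-b1[0] - b2[0], -b1[1] - b2[1])
```

The returned superbase sums to zero and its pairwise scalar products $\langle b_i, M b_j\rangle$ are non-positive, which is the obtuseness property used by Proposition \ref{prop:Stencils}.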
This Euclidean largeness has consequences on the FM-LBR accuracy, see Appendix \ref{subsec:Accuracy}.
\paragraph{Stability of $M$-acute meshes.\\}
Consider the following distance on the set $S_m^+$ of symmetric positive definite matrices: for all $M,N\in S_m^+$
\begin{equation}
\label{def:dtimes}
\distance(M,N) := \sup_{u\neq 0} \left|\ln \|u\|_M - \ln \|u\|_N\right|.
\end{equation}
This distance allows one to compare the norms of vectors multiplicatively, in contrast with the classical operator norm, which is tailored for additive comparisons. Indeed, denoting $\alpha := \distance(M,N)$ and $\beta := \|M^\frac 1 2 - N^\frac 1 2\|$, one has for all $u\in \mathbb{R}^m$ such that $\|u\| = 1$
$$
e^{-\alpha} \leq \|u\|_M/\|u\|_N \leq e^{\alpha}, \stext{ and } -\beta \leq \|u\|_M - \|u\|_N \leq \beta.
$$
The next lemma establishes a lower bound on the $\|\cdot\|_M$ norm of points with integer coordinates outside of an $N$-reduced mesh, when the matrices $M,N\in S_m^+$ are close enough.
\begin{lemma}
\label{lem:NormOutT}
Let $M,N\in S_m^+$, with $m\leq 4$. Let $u_1, \cdots, u_m$ be an arbitrary $M$-reduced basis of $\mathbb{Z}^m$, and let ${\cal T}$ be an $N$-reduced mesh. Consider a point $z\in \mathbb{Z}^m$ which is not a vertex of ${\cal T}$. Then there exists $1 \leq l \leq m$ such that
$$
z \in u_1 \mathbb{Z}+ \cdots + u_l \mathbb{Z} \ \text{ and } \ \|z\|_M^2 \,e^{4 \distance(M,N)} \geq \|u_l\|_M^2 + \|u_1\|_M^2.
$$
\end{lemma}
\begin{proof}
Since ${\cal T}$ covers a neighborhood of the origin, there exists a simplex $T\in {\cal T}$ and a real $\lambda >0$ such that $\lambda z \in T$.
Denoting by $v_1, \cdots, v_m$ the non-zero vertices of $T$, there therefore exist non-negative reals $\alpha_1, \cdots, \alpha_m\in \mathbb{R}_+$ such that $z = \alpha_1 v_1 + \cdots + \alpha_m v_m$. Since $(v_1, \cdots, v_m)$ form a basis of $\mathbb{Z}^m$, these coefficients are integers. Up to reordering the vertices $v_1, \cdots, v_m$, we may assume that $\alpha_1, \cdots, \alpha_k$ are positive, and that $\alpha_{k+1},\cdots, \alpha_m$ are zero, for some $1 \leq k \leq m$. Let $l$ be the smallest integer such that $z\in u_1 \mathbb{Z}+ \cdots +u_l \mathbb{Z}$. There exists $1 \leq i \leq k$ such that $v_i \notin u_1 \mathbb{Z}+ \cdots +u_{l-1} \mathbb{Z}$, hence $\|v_i\|_M\geq \|u_l\|_M$; the other vertices satisfy $\|v_j\|_M \geq \|u_1\|_M$, $1\leq j \leq k$, since they are non-zero and have integer coordinates. Therefore, denoting $\delta := \distance(M,N)$,
\begin{align*}
e^{4 \delta} \|z\|_M^2 &\geq e^{2 \delta} \|z\|_N^2 = e^{2 \delta}\left(\sum_{1 \leq i \leq k} \alpha_i^2 \|v_i\|_N^2 + 2\sum_{1 \leq i<j\leq k} \alpha_i\alpha_j \langle v_i, N v_j\rangle\right)\\
&\geq e^{2 \delta}\sum_{1 \leq i \leq k} \alpha_i^2 \|v_i\|_N^2 \geq \sum_{1 \leq i \leq k} \alpha_i^2 \|v_i\|_M^2 \geq \|u_l\|_M^2 + \left( \sum_{1 \leq i \leq k} \alpha_i^2 -1\right)\|u_1\|_M^2.
\end{align*}
Observing that $\alpha_1^2+ \cdots+\alpha_k^2 \geq 2$, since $z$ is not a vertex of $T$, we conclude the proof.
\end{proof}
We prove in the next corollary that the vertices of an $N$-reduced mesh contain the elements of an $M$-reduced basis, and their opposites, when the matrices $M,N$ are sufficiently close.
This result, together with the compactness of $\overline \Omega$, immediately implies the second point of Proposition \ref{prop:Admissibility}.
\begin{corollary}
\label{corol:BasisVertex}
Let $M,N\in S_m^+$, with $m \leq 4$, be such that
\begin{equation}
\label{dtimeskappa}
\distance(M,N) < \ln(1+\kappa(M)^{-2})/4.
\end{equation}
Let $(b_1, \cdots, b_m)$ be an $M$-reduced basis of $\mathbb{Z}^m$, and let ${\cal T}$ be an $N$-reduced mesh. Then $b_1, \cdots, b_m$ and $-b_1, \cdots, -b_m$ are vertices of ${\cal T}$.
\end{corollary}
\begin{proof}
Let $1 \leq l \leq m$. We have $b_l \notin b_1 \mathbb{Z} + \cdots + b_{l-1} \mathbb{Z}$, since $(b_i)_{i=1}^m$ is a basis, and Proposition \ref{prop:normRed} implies
$$
\|b_l\|^2_M e^{4 \distance(M,N)} < \|b_l\|_M^2+ \kappa(M)^{-2} \|b_l\|_M^2 \leq \|b_l\|_M^2+\|b_1\|_M^2.
$$
Hence $b_l$ is a vertex of ${\cal T}$, and likewise $-b_l$, by Lemma \ref{lem:NormOutT}.
\end{proof}
We finally estimate the radius of the FM-LBR meshes ${\cal T}(M)$, see Proposition \ref{prop:Stencils}, in terms of the condition number of the matrix $M \in S_m^+$. This concludes the proof of Proposition \ref{prop:Admissibility}.
\begin{corollary}
\label{corol:VertexNorm}
Let $M \in S_m^+$, with $m \in \{2,3\}$. Then any vertex $e$ of ${\cal T}(M)$ satisfies $\|e\| \leq C_m \kappa(M)$, with $C_2:=2$ and $C_3:=4$.
\end{corollary}
\begin{proof}
Given $m$ linearly independent vertices $(b_i)_{i=1}^m$ of ${\cal T}(M)$, one can express any other vertex in the form $e=\sum_{i=1}^m \alpha_i b_i$.
A simple check by exhaustive enumeration shows that $\sum_{i=1}^m |\alpha_i| \leq C_m$ (these coefficients are independent of $M$), hence $\|e\| \leq C_m \max \{\|b_i\|; \, 1 \leq i \leq m\}$. Applying Corollary \ref{corol:BasisVertex} with $M = N$, we find that the vertices of ${\cal T}(M)$ contain an $M$-reduced basis $(b_i)_{i=1}^m$, which by Proposition \ref{prop:normRed} satisfies $\|b_i\| \leq \kappa(M)$ for all $1 \leq i \leq m$. This concludes the proof.
\end{proof}
\subsection{Convergence of the FM-LBR}
\label{sec:Convergence}
We prove in this section the uniform convergence of the solutions of the discrete system \eqref{discreteSys} towards the solution of the anisotropic eikonal PDE \eqref{eikonal}, under the assumptions and with the notations of Proposition \ref{prop:Convergence}. Following the steps of \cite{BR06}, we begin with a discrete Lipschitz regularity estimate for the maps $\dist_h : Z_h \to \mathbb{R}$.
\begin{lemma}
\label{lem:Lipschitz}
There exist constants $h_0 > 0$ and $C_0 < \infty$ such that for all $0 < h \leq h_0$ and all $x, y \in Z_h$ one has
\begin{equation}
\label{eq:LipschitzDiscrete}
|\dist_h(x) - \dist_h(y) | \leq C_0 \|x-y\|.
\end{equation}
\end{lemma}
\begin{proof}
We prove below that $|\dist_h(x) - \dist_h(y) | \leq C_1 h$ when $\|x-y\| = h$, in other words when $x$ and $y$ are neighbors on the grid $Z_h := h \mathbb{Z}^m$. This immediately implies $|\dist_h(x) - \dist_h(y)| \leq C_1 \|x-y\|_1$, where $\|(\lambda_1, \cdots, \lambda_m) \|_1 := \sum_{i=1}^m |\lambda_i|$, hence also \eqref{eq:LipschitzDiscrete} with $C_0 := C_1 \sqrt m$.
If $x\notin \Omega$ and $y \notin \Omega$, then $\dist_h(x) = \dist_h(y)=0$, and the result is proved. Up to exchanging $x$ and $y$, we may therefore assume that $x \in \Omega$. Let $B := \mathcal B(x)$ be the basis corresponding to the property (Stability) of the admissible family of meshes $({\cal T}(x))_{x \in \Omega}$, see Definition \ref{def:Admissibility}. Each element $e$ of $B$ is a vertex of ${\cal T}(x)$, hence satisfies $\|e\| \leq R$, see Definition \ref{def:Admissibility} (Boundedness). We abusively regard $B$ as an $m\times m$ matrix whose columns are the basis elements, and observe that $\|B\| \leq R \sqrt m$ and $|\det(B)| = 1$. Therefore $\|B^{-1}\| \leq \|B\|^{m-1}/|\det B| \leq C_2 := (R \sqrt m)^{m-1}$. Since we assumed $\|x-y\|=h$, we have $y=x+ \varepsilon h e_j$, for some $\varepsilon \in \{-1,1\}$, $1 \leq j \leq m$, where $(e_j)_{j=1}^m$ denotes the canonical basis of $\mathbb{R}^m$. Hence denoting by $(b_i)_{i=1}^m$ the elements of the basis $B$, and by $(\alpha_{ij})_{i,j=1}^m$ the coefficients of $B^{-1}$, we obtain $y = x + \varepsilon h\sum_{i=1}^m \alpha_{ij} b_i$. Let $s := \sum_{i=1}^m |\alpha_{ij}|$, and let $(x_k)_{k=0}^s$ be a finite sequence of points of $Z_h$ such that $x_0:=x$, $x_s:=y$, and $x_{k+1}-x_k \in \{\pm h b_i; \, 1 \leq i \leq m\}$ for all $0 \leq k < s$. By the Cauchy-Schwarz inequality one has $s/ \sqrt m \leq (\sum_{i=1}^m \alpha^2_{ij})^\frac 1 2 \leq \|B^{-1}\| \leq C_2$, hence $\|x_k - x\| \leq h R C_2 \sqrt m$ for any $0 \leq k \leq s$. We choose the upper grid scale bound $h_0$ so that this constant is smaller than the radius $r$ involved in property (Stability) of Definition \ref{def:Admissibility}.
If $x_k \in \Omega$, then by property (Stability) a grid point $x_{l}$, with $|k-l| \leq 1$ and $0\leq l \leq s$, is a vertex of the stencil $V_h(x_k)$. Hence inserting $x_l$ in \eqref{def:HopfLax} we obtain $\dist_h(x_k) = \Lambda_h(\dist_h, x_k) \leq \|x_k - x_l\|_{\mathcal M(x_k)} + \dist_h(x_l) \leq h R C_3+ \dist_h(x_l)$, with $C_3 := \max \{\|\mathcal M(z)\|^\frac 1 2; \, z \in \overline \Omega\}$. If $x_k \notin \Omega$ then $\dist_h(x_k) = 0$, hence obviously $\dist_h(x_k) \leq \dist_h(x_l)$. Exchanging the roles of $k$ and $l$ we obtain $|\dist_h(x_k) - \dist_h(x_l)| \leq h R C_3$, hence by the triangle inequality $|\dist_h(x) - \dist_h(y) | \leq h R C_3 s \leq h R C_3 C_2 \sqrt m$, which concludes the proof.
\end{proof}
The rest of the proof is only sketched, since it amounts to a minor adaptation of Theorem 11 in \cite{BR06}. The only difference lies in the presence, in \cite{BR06}, of a canonical interpolation operator on $\Omega$ (piecewise linear interpolation on a prescribed mesh). Denote by $\distC_h$ the bilinear interpolant\footnote{Other interpolation schemes could be used, such as piecewise linear interpolation on a trivial periodic mesh, provided one can control the Lipschitz regularity constant and the support of the interpolated function.} of $\dist_h$ on the grid $Z_h$, and observe that $|\distC_h(x) - \distC_h(y)| \leq K C_0 \|x-y\|$, for all $x, y \in \mathbb{R}^m$ and all $0 < h \leq h_0$, where $K$ is an absolute constant depending only on the interpolation scheme, and $C_0$, $h_0$ are the constants of Lemma \ref{lem:Lipschitz}.
Note also that $\supp(\distC_h) \subset \{z+e; \, z \in \overline \Omega, \, \|e\| \leq h \sqrt m\}$, hence by Lipschitz regularity $\distC_h$ is bounded uniformly, independently of $h$, and therefore by the Arzel\`a-Ascoli theorem the family $(\distC_h)_{0 < h \leq h_0}$ is pre-compact. Considering an arbitrary converging sub-sequence $\distC_{h(n)}$, with $h(n) \to 0$ as $n \to \infty$, one observes that the limit is supported on $\overline \Omega$ and, applying the arguments of Theorem 11 in \cite{BR06}, that it is a viscosity solution of the eikonal PDE \eqref{eikonal}. Uniqueness of such a solution \cite{L82} implies the pointwise convergence $\distC_h(x) \to \distC(x)$, as $h \to 0$, for all $x \in \overline \Omega$. Finally the announced uniform convergence \eqref{eq:UnifConv} follows from the uniform $KC_0$-Lipschitz regularity of $\distC_h$, for all $0 < h \leq h_0$.
\section{Numerical experiments}
\label{sec:num}
We compare numerically the FM-LBR with two popular solvers (AGSI, FM-8) of the eikonal equation, which enjoy a reputation of simplicity and efficiency in applications, and with the recent and closely related MAOUM. The Adaptive Gauss-Seidel Iteration\footnote{As suggested in \cite{BR06}, the stopping criterion tolerance for the AGSI iterations is set to $10^{-8}$.} (AGSI) \cite{BR06} produces numerical approximations which are guaranteed to converge towards the solution of the continuous anisotropic eikonal equation as one refines the computation grid\footnote{The grid is triangulated with a trivial periodic mesh, for the AGSI and the MAOUM. As a result the AGSI uses a 6 point stencil.}, for an arbitrary continuous Riemannian metric $\mathcal M$.
Fast Marching using the 8 point stencil (FM-8, stencil illustrated on Figure \ref{fig:Classical}, center left) does not offer this convergence guarantee, but has a quasi-linear complexity $\mathcal O(N \ln N)$, in contrast with the super-linear complexity $\mathcal O(\mu(\mathcal M) N^{1+\frac 1 m})$ of the AGSI. Fast Marching using Lattice Basis Reduction\footnote{The FM-LBR stencil $V(z)$, at each grid point $z \in \Omega \cap Z$, is built \eqref{eq:VFromT} from the $\mathcal M(z)$-reduced mesh ${\cal T}(\mathcal M(z))$ of Proposition \ref{prop:Stencils}, except if the matrix $\mathcal M(z)$ is detected to be exactly diagonal. In that case we use the standard $4$ vertex neighborhood in 2D (resp.\ $6$ vertex in 3D), which is an $\mathcal M(z)$-reduced mesh, see Figure \ref{fig:Classical} (left and center right). This modification has little impact on accuracy or CPU time, but avoids pointlessly breaking symmetry.} (FM-LBR) aims to offer the best of both worlds: a convergence guarantee, and fast computation times\footnote{Note that the FM-LBR memory requirement is higher than that of the AGSI and FM-8, see Remark \ref{rem:memory}.}. We also implemented the Monotone Acceptance Ordered Upwind Method (MAOUM) \cite{AltonMitchell12}, a Dijkstra inspired method using static stencils, like the FM-LBR. The difference between these two methods is that the MAOUM stencil\footnote{The stencils for the MAOUM are here built using the {\rm ComputeUpdateSet} stencil construction routine described and used in all the numerical experiments of \cite{AltonMitchell12}. The paper \cite{AltonMitchell12} also outlines sufficient conditions (called $\delta$-NGA or DRB) for anisotropic stencils to be causal, but gives no explicit anisotropic stencil construction.
} $V(z)$, at a grid point $z\in \Omega \cap Z$, is isotropic, only depends on the anisotropy ratio $\kappa(\mathcal M(z))$, and has a boundary of cardinality $\mathcal O(\kappa(\mathcal M(z))^{m-1})$; in contrast the FM-LBR stencil is anisotropic, aligned with the ellipse defined by $\mathcal M(z)$, and of cardinality $\mathcal O(1)$. The MAOUM stencils were precomputed and stored in a look-up table, resulting in a complexity $\mathcal O(\kappa(\mathcal M) N \ln N)$ for this algorithm in 2D.
\begin{figure}
\begin{center}
\iftoggle{siam}{
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2_dist.png}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2Rot_dist.png}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Vlad_dist.png}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_contours.png}
}{
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2_dist.png}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2Rot_dist.png}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Vlad_dist.png}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_contours.png}
}
\end{center}
\caption{
\label{fig:Vlad1}
Level lines of the solutions of the two dimensional test cases.
}
\end{figure}
We consider application-inspired test cases, which violate some of the simplifying assumptions used in our convergence analysis, Proposition \ref{prop:Convergence}: they involve outflow boundary conditions, non-trivial Dirichlet boundary conditions in Appendix \ref{subsec:Accuracy}, and discontinuous Riemannian metrics in cases 3 and 4.
Their exact technical description is given in Remark \ref{rem:ExactDescription}.\\
The first test is a distance computation on a parametrized surface, considered in \cite{VladMSSP}. As shown on Figure \ref{fig:tables}, the FM-LBR is the fastest in terms of CPU time\footnote{All timings were obtained on a 2.4GHz Core 2 Duo, using a single core. Timings of the FM-LBR include the stencil construction, which typically accounts for $25\%$.}, but is less accurate than the AGSI or the FM-8. Rotating this test case by the angle $\theta = \pi/6$, and conducting the same experiment, tells a different story: the numerical errors of the AGSI and the FM-8 increase by a factor larger than $5$, while the FM-LBR, unaffected, is now the most accurate method, see Figure \ref{fig:tables}. The FM-LBR cuts the $L^\infty$ and $L^1$ numerical errors by 40\% in comparison with the AGSI and the MAOUM, and CPU time by 85\%, while the FM-8 produces even larger errors. Figure \ref{fig:tables} shows that the FM-LBR offers the best accuracy for more grid orientations $\theta$ than its alternatives. The maximal and averaged errors with respect to $\theta$ are also in favor of the FM-LBR. The strong dependence of the AGSI and FM-8 accuracy on the test case orientation is puzzling, and contrasts with the more consistent behavior of the MAOUM and the FM-LBR. The author's heuristic and personal interpretation of this phenomenon, which is open to debate and was put into question by a reviewer, is that this test case is, for $\theta=0$, dominated by (close to) axis-aligned anisotropy.
The AGSI and the FM-8, which are based on small and fixed stencils, benefit from this special configuration; the FM-8 also works well for $\theta=\pi/4$, because its stencil includes the four diagonals.\\
\begin{figure}
\begin{center}
\iftoggle{siam}{
\begin{minipage}{9cm}
\begin{tabular}{c|cccc}
& FM-LBR & FM-8 & AGSI & MAOUM\\
\hline
& \multicolumn{4}{c}{First test}\\
CPU time & \tr{0.19} & \tr{0.19} & 1.01 & 1.28 \\
$L^\infty$ error & 3.99 & \tr{1.47} & 1.62 & 8.80\\
$L^1$ error & 1.13 & 0.53 & \tr{0.51} & 2.33\\
\hline
& \multicolumn{4}{c}{First test, rotated by $\pi/6$}\\
CPU time & \tr{0.20} & 0.21 & 1.44 & 1.31 \\
$L^\infty$ error & \tr{5.52} & 12.5 & 9.45 & 8.56\\
$L^1$ error & \tr{1.46} & 3.42 & 2.51 & 2.52 \\
\hline
& \multicolumn{4}{c}{Second test}\\
CPU time & \tr{0.076} & 0.079 & 0.77 & 0.36 \\
$L^\infty$ error & \tr{2.90} & 3.03 & 3.67 & 7.66\\
$L^1$ error & \tr{1.03} & 1.30 & 1.40 & 2.3\\
\end{tabular}
\end{minipage}
\hspace{-0.5cm}
\begin{minipage}{3cm}
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2LInf_MAOUM.pdf}
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2L1_MAOUM.pdf}
\end{minipage}
}{
\begin{minipage}{9cm}
\begin{tabular}{c|cccc}
& FM-LBR & FM-8 & AGSI & MAOUM\\
\hline
& \multicolumn{4}{c}{First test}\\
CPU time & \tr{0.19} & \tr{0.19} & 1.01 & 1.28 \\
$L^\infty$ error & 3.99 & \tr{1.47} & 1.62 & 8.80\\
$L^1$ error & 1.13 & 0.53 & \tr{0.51} & 2.33\\
\hline
& \multicolumn{4}{c}{First test, rotated by $\pi/6$}\\
CPU time & \tr{0.20} & 0.21 & 1.44 & 1.31 \\
$L^\infty$ error & \tr{5.52} & 12.5 & 9.45 & 8.56\\
$L^1$ error & \tr{1.46} & 3.42 & 2.51 & 2.52 \\
\hline
& \multicolumn{4}{c}{Second test}\\
CPU time & \tr{0.076} & 0.079 & 0.77 & 0.36 \\
$L^\infty$ error & \tr{2.90} & 3.03 & 3.67 & 7.66\\
$L^1$ error & \tr{1.03} & 1.30 & 1.40 & 2.3\\
\end{tabular}
\end{minipage}
\begin{minipage}{4cm}
\includegraphics[width=5cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2LInf_MAOUM.pdf}
\includegraphics[width=5cm]{Illustrations/FastMarchingIllus/VladBench/Vlad2L1_MAOUM.pdf}
\end{minipage}
}
\end{center}
\caption{\label{fig:tables} Tables of CPU time in seconds, $L^\infty$ error and averaged $L^1$ error (left). Accuracy in the first test rotated by an angle $\theta\in [0, \pi/4]$ (this interval suffices, since the dependence on $\theta$ is $\pi/2$-periodic and even). On average over $\theta$, CPU times are $0.21$s, $0.20$s, $1.37$s, $1.31$s, $L^\infty$ errors $5.16$, $7.64$, $6.86$, $8.57$, and averaged $L^1$ errors $1.34$, $2.58$, $1.95$, $2.40$, for the FM-LBR, FM-8, AGSI and MAOUM respectively. \emph{All errors are multiplied by $100$, for better readability.}}
\end{figure}
\begin{figure}
\begin{center}
\iftoggle{siam}{
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_unfolded_1000.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/Detail_201.pdf}
\includegraphics[width=2.9cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/Detail_501.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/Detail_1001.pdf}
}{
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_unfolded_1000.pdf}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/Detail_201.pdf}
\includegraphics[width=3.7cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/Detail_501.pdf}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/Detail_1001.pdf}
}
\end{center}
\caption{\label{fig:neigh} Reference solution for the third test case (left). The Riemannian metric $\mathcal M$ is anisotropic only on a thin band, a few grid points wide, along a spiraling curve. Detail at resolutions $n\times n$, where $n$ equals $200$ (center left), $500$ (center right) and $1000$ (right).}
\end{figure}
The second benchmark, discussed in \cite{VladThesis01,SV03}, is inspired by seismic imaging. There is no bias here towards axis-aligned anisotropy. As shown in the table of Figure \ref{fig:tables}, the FM-LBR takes less CPU time and offers better accuracy than its alternatives. Note that one can also construct configurations in which anisotropy is not axis-aligned \emph{and} the AGSI is more accurate than the FM-LBR. In some cases, the accuracy advantage of the AGSI even grows unboundedly as the anisotropy ratio of the metric tends to infinity. A heuristic analysis and prediction of this phenomenon is presented in Appendix \ref{subsec:Accuracy}, where it is illustrated with a fifth test case.\\
The third \cite{BC10} and fourth test cases are relevant benchmarks if one's objective is to use fast marching methods for the segmentation of tubular structures, in medical images or volume data respectively. The FM-LBR reveals its full potential, and stands out as the only practical option, in this more difficult setting, which involves a discontinuous and highly anisotropic metric. The Riemannian metric tensor $\mathcal M(z)$ is the identity matrix, except at points $z$ in a neighborhood of a curve $\Gamma$, where $\mathcal M(z)$ has a small eigenvalue $\delta_0^2$ associated to an eigenvector tangent to $\Gamma$, and the eigenvalue $1$ on the orthogonal space.
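For concreteness, the construction of such a tubular Riemannian metric can be sketched as follows. This is a minimal Python sketch under assumed names and default parameters, with the closest-point search done by brute-force sampling of the curve; the exact test-case parameters are those of Remark \ref{rem:ExactDescription}.

```python
import numpy as np

def tubular_metric(z, gamma, gamma_prime, r0=0.01, delta0=0.01, n_samples=2000):
    """Metric tensor M(z): the identity away from the curve gamma, and, within
    distance r0 of it, eigenvalue delta0**2 along the tangent and 1 across it."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = np.array([gamma(t) for t in ts])
    d2 = np.sum((pts - np.asarray(z, dtype=float)) ** 2, axis=1)
    i = int(np.argmin(d2))                    # brute-force closest curve sample
    if d2[i] > r0 ** 2:
        return np.eye(2)
    tau = np.asarray(gamma_prime(ts[i]), dtype=float)
    tau /= np.linalg.norm(tau)                # unit tangent at the closest point
    P = np.outer(tau, tau)                    # projector on the tangent direction
    return delta0 ** 2 * P + (np.eye(2) - P)  # small eigenvalue along tau
```

For instance, the spiraling band of the third test case corresponds to the choice $\gamma(t) = t(\cos \omega_0 t, \sin \omega_0 t)$.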
The shortest path joining the point $(1,-1)$ (resp.\ $(0,0,3)$) to the origin is extracted via ``gradient descent on the Riemannian manifold $(\Omega, \mathcal M)$'', see Appendix \ref{sec:Paths} for details:
\begin{equation}
\label{descent}
\gamma'(t) = -\mathcal M(\gamma(t))^{-1} \nabla \distC(\gamma(t)).
\end{equation}
By construction of the Riemannian metric $\mathcal M$, traveling close and tangentially to the curve $\Gamma$ is cheap. This is reflected by the level lines of $\distC$, and by the shape of the minimal path, see Figures \ref{fig:Vlad1}, \ref{fig:neigh} and \ref{fig3d}. Heuristically, this path joins the neighborhood of the curve $\Gamma$ in a straight line, almost orthogonally, and then follows it. The alignment of the minimal path with the direction of anisotropy, observed in this test case, is not an uncommon phenomenon. The FM-LBR presumably benefits a lot from this behavior in terms of accuracy, since its stencils typically provide a good ``angular resolution'' in the direction of anisotropy, see Figures \ref{fig:BasisRotate}, \ref{fig:BasisScale}, \ref{fig:neigh}. Since, in addition, the stencil radii remain rather small for most anisotropy orientations (see \cite{M12b} for details), most updates for points in the ``fast band'' come from the fast band when the FM-LBR is run on these examples. When the fast band is missed, however, accuracy degrades and zig-zag artifacts appear in the extracted path, see Figure \ref{fig:path}.
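As a concrete illustration of \eqref{descent}, a forward Euler discretization of this descent can be sketched as follows. This is a minimal Python sketch with hypothetical names; the experiments of this paper use the robust extraction method of Appendix \ref{sec:Paths} rather than such a generic ODE solver.

```python
import numpy as np

def descend(x0, dist, grad_dist, metric, step=1e-2, n_max=10_000):
    """Explicit Euler integration of the descent ODE
    gamma'(t) = -M(gamma(t))^{-1} grad dist(gamma(t)),
    stopped once the distance value drops below the step size."""
    path = [np.asarray(x0, dtype=float)]
    for _ in range(n_max):
        x = path[-1]
        if dist(x) < step:  # close enough to the source point
            break
        v = -np.linalg.solve(metric(x), grad_dist(x))
        path.append(x + step * v / np.linalg.norm(v))  # unit-speed Euler step
    return np.array(path)

# Euclidean sanity check: distance map dist(x) = |x|, minimal path is a ray.
path = descend([1.0, 0.0],
               dist=np.linalg.norm,
               grad_dist=lambda x: x / np.linalg.norm(x),
               metric=lambda x: np.eye(2))
```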
\begin{figure}
\begin{center}
\iftoggle{siam}{
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/CPUtimeFinsler.pdf}
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/LInf.pdf}
\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/L1.pdf}
}{
\includegraphics[width=5cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/CPUtimeFinsler.pdf}
\includegraphics[width=5cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/LInf.pdf}
\includegraphics[width=5cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/L1.pdf}
}
\end{center}
\caption{\label{fig2d} CPU time (left, in seconds), $L^\infty$ error (center), and averaged $L^1$ error (right) of the FM-LBR, FM-8 and AGSI, at several resolutions ranging from $120$ to $1200$ (log-log scale).}
\end{figure}
Third test case \cite{BC10}, in 2D. The performance of the different methods is illustrated on Figure \ref{fig2d}, except for the MAOUM, which showed poor accuracy, presumably due to the huge stencils it generated. The CPU time/resolution curve of the AGSI has a steeper slope than those of the one-pass solvers, presumably reflecting its super-linear complexity $\mathcal O(\mu(\mathcal M) N^{\frac 3 2})$. The $L^\infty$ and $L^1$ error curves suggest that the FM-8 is not consistent in this test case, contrary to the FM-LBR and the AGSI. At the resolution $1000\times 1000$, typical in image analysis, the FM-LBR cuts the $L^\infty$ error by $80\%$ and the $L^1$ error by $75\%$ with respect to the AGSI, while reducing CPU time from 11 minutes to 2.5 seconds. As illustrated on Figure \ref{comp2d}, the better accuracy of the FM-LBR in this test case effectively translates into a better extraction of minimal paths.
The reason for the unrivaled performance of the FM-LBR in this specific test case is partly elucidated in Appendix \ref{subsec:Accuracy}.\\
\begin{figure}
\begin{center}
\begin{tabular}{rccc}
Method $\setminus$ Grid & $200\times 200$ & $500\times 500$ & $1000\times 1000$ \\
{\raise 12mm \hbox{FM-LBR}}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_acute_200.pdf}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_acute_501.pdf}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_acute_1000.pdf}\\
{\raise 12mm \hbox{FM-8}}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_constant8_200.pdf}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_constant8_501.pdf}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_constant8_1000.pdf}\\
{\raise 12mm \hbox{AGSI}}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_BR6_200.pdf}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_BR6_501.pdf}
&\includegraphics[width=3cm,height=3cm]{Illustrations/FastMarchingIllus/VladBench/Snail2_New/snail2_BR6_1000.pdf}
\end{tabular}
\caption{Visual comparison of the accuracy of the three algorithms, at three resolutions, in the 2D test case. Qualitatively, the approximate geodesic has the right behavior for a resolution as low as $170 \times 170$ with the FM-LBR, and $1000 \times 1000$ with the AGSI. This is presumably never the case for the FM-8, which is not consistent here.}
\label{comp2d}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{Illustrations/FastMarchingIllus/Runs/3D/Contour3.png}
{\raise 3mm \hbox{\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/Runs/3D/Geodesic.pdf}}}
{\raise 10mm \hbox{\includegraphics[width=4cm]{Illustrations/FastMarchingIllus/Runs/3D/Sampling.png}}}
\end{center}
\caption{\label{fig3d} Results of the FM-LBR in the fourth, 3D, test case. Iso-surface $\{\dist(z)=2\}$ (left), and shortest path joining the points $(0,0,0)$ and $(3,0,0)$ (center). Detail of the discrete points (represented by small cubes), in the neighborhood of the curve $\Gamma(t)=(\cos \omega_0 t, \sin \omega_0 t, t)$, for which the Riemannian metric is not Euclidean (right).}
\end{figure}
Fourth test case, in 3D. CPU time was 105s for the FM-LBR, while the AGSI took 480s and failed to recover the minimal path presented on Figure \ref{fig3d} (center): a straight line joining the two endpoints was obtained instead. The FM-LBR is thus capable of addressing a large scale (more than $10$ million grid points), strongly anisotropic ($\kappa(\mathcal M)=50$), three-dimensional shortest path problem, with good accuracy and within a reasonable CPU time on a standard laptop computer.
\begin{remark}
The four application-inspired test cases use outflow boundary conditions, except for the constraint $\distC(0)=0$; hence the solution is the Riemannian distance to the origin: $\distC(z) = \distC(z,0)$. Numerical errors are measured with respect to a $4000 \times 4000$ reference solution, bi-linearly interpolated, obtained in cases 1 and 2 with the AGSI, and in case 3 with the FM-ASR \cite{M12c}.
\label{rem:ExactDescription}
\begin{enumerate}
\item (Geometry processing \cite{VladThesis01}, $\kappa(\mathcal M)\simeq 5.1$) Compute the Riemannian distance from the origin $(0,0,0)$ on the parametric surface with height map $z(x,y) := (3/4) \allowbreak \sin (3 \pi x) \sin (3 \pi y)$. Riemannian metric: $\mathcal M(x,y) = {\rm Id} + \nabla z(x,y) \nabla z(x,y)^\trans$. Coordinates $(x,y)$ restricted to the square $[-0.5,0.5]^2$, discretized on a $292\times 292$ grid, or in a second step to this square rotated by the indicated angle $\theta$.
\item (Seismic imaging \cite{VladThesis01,SV03}, $\kappa(\mathcal M)=4$) The metric $\mathcal M(x,y)$ has the two eigenvalues $0.8^{-2}$, $0.2^{-2}$, the former associated to the eigenvector $(1, (\pi/2) \cos (4 \pi x))$. Domain $[-0.5,0.5]^2$, discretized on a $193 \times 193$ grid.
\item (Tubular segmentation \cite{BC10}, $\kappa(\mathcal M)=100$) Define the curve $\Gamma(t) = t (\cos \omega_0 t, \sin \omega_0 t)$, $t \in [0,1]$. Set $\mathcal M(z) = {\rm Id}$, except if there exist $0\leq t \leq 1$ and $0 \leq r \leq r_0$ such that $z = \Gamma(t)+r (\cos \omega_0 t, \, \sin \omega_0 t)$. In that case $\mathcal M(z)$ has the eigenvalues $\delta_0^2$ and $1$, the former with eigenvector $\Gamma'(t)$. Parameters: $\omega_0 := 6 \pi$, $r_0 := \delta_0 := 0.01$. Domain: $[-1,1]^2$, grid sizes $n \times n$ with $120 \leq n \leq 1200$.
\item (Tubular segmentation, $\kappa(\mathcal M)=50$, 3D) Define the curve $\Gamma(t) = (\cos \omega_0 t, \sin \omega_0 t, t)$, with $\omega_0 := (5/2) \pi$.
Set $\mathcal M(z)={\rm Id}$, except if there exist $t, \lambda, \mu \in \mathbb R$ such that $z=\Gamma(t)+(\lambda \cos\omega_0 t, \lambda \sin \omega_0 t, \mu)$ and $\lambda^2+ \mu^2 \leq (r_0/2)^2$. In that case $\mathcal M(z)$ has the eigenvalues $\delta_0^2$ and $1$, the former with eigenvector $\Gamma'(t)$ and the latter with multiplicity $2$. Parameters: $\delta_0 = r_0 = 0.02$. Domain: $[-1.1,1.1]^2 \times [0,3]$, grid size $200\times 200 \times 272$.
\end{enumerate}
\end{remark}
\section*{Conclusion}
The FM-LBR, introduced in this paper, combines the Fast Marching algorithm with a concept from discrete geometry named Lattice Basis Reduction. It has the following strong points. (I, Convergence) The FM-LBR is consistent for the anisotropic eikonal equation associated to any continuous Riemannian metric, of arbitrary anisotropy. (II, Complexity) It has a numerical cost comparable to classical isotropic Fast Marching, independently of the problem anisotropy. (III, Accuracy) The accuracy of the FM-LBR is competitive in general, and striking in test cases related to tubular segmentation in medical images, where the Riemannian metric has a pronounced anisotropy close to a curve, in the tangential direction. These qualities come at the price of the specialization of the FM-LBR: (i) the Riemannian metric may not be replaced with a more general Finsler metric (see \cite{M12c} for an adaptation to this setting in 2D), (ii) the domain needs to be discretized on a cartesian grid, and (iii) the dimension must be $2$ or $3$. Hopefully these requirements are met in many applications, and future work will be devoted to the application of the proposed algorithm in the context of medical image processing.
\paragraph{Acknowledgement.} The author thanks Pr A.
Vladimirsky for constructive discussions on the choice of test cases and the FM-LBR accuracy, and Pr P. Q. Nguyen for pointing out the notion of obtuse superbase of a lattice.
\appendix
\section{Robust extraction of minimal paths}
\label{sec:Paths}
Obtaining the shortest path joining two given points is essential in motion planning control problems \cite{AltonMitchell12}, as well as in the envisioned application to tubular structure centerline extraction \cite{BC10}. This involves solving the Ordinary Differential Equation (ODE) \eqref{descent}, a task less trivial than it seems. The author is conscious that a comparison of minimal paths, as on Figure \ref{comp2d}, reflects the properties of the ODE solver (and the time spent adjusting its sometimes numerous parameters), as much as those of the eikonal solver. This comparison is done nevertheless, due to the importance of minimal paths in applications. Eikonal solvers based on the discrete fixed point problem \eqref{discreteSys}, such as the FM-LBR, FM-8 and AGSI, provide at each grid point $x \in \Omega \cap Z$ an estimate $\dist(x)$ of the distance $\distC(x)$, and in addition an estimate $v(x)$ of the direction and orientation of the distorted negative gradient $-\mathcal M(x)^{-1} \nabla \distC (x)$, of the form:
\begin{equation}
\label{defvz}
v(x) := \sum_{1 \leq j \leq k} \alpha_j (z_j-x),
\end{equation}
where the integer $1 \leq k \leq m$, the \emph{positive} coefficients $(\alpha_j)_{j=1}^k$ and the vertices $(z_j)_{j=1}^k$ of $\partial V(x)$ are the barycentric coordinates of the point $y \in \partial V(x)$ achieving the minimum in the Hopf-Lax update operator \eqref{def:HopfLax}, in the face of $\partial V(x)$ containing $y$ and of minimal dimension.
\begin{figure}
\begin{center}
\iftoggle{siam}{
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/Geodesics/GeoNotations.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/Geodesics/pathPointArrows.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/Geodesics/GeoVladEasy.pdf}
\includegraphics[width=3cm]{Illustrations/FastMarchingIllus/Geodesics/GeoZigZag.pdf}
}{
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/Geodesics/GeoNotations.pdf}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/Geodesics/pathPointArrows.pdf}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/Geodesics/GeoVladEasy.pdf}
\includegraphics[width=3.8cm]{Illustrations/FastMarchingIllus/Geodesics/GeoZigZag.pdf}
}
\end{center}
\caption{\label{fig:path} Left: Notations for the minimal path computation; the contour of the stencil $V(x_i)$ is shown dotted. Center left: Grid points $x_0,\cdots,x_r \in \Omega \cap Z$, corrections $u_0, \cdots, u_r \in \mathbb R^m$ shown as arrows, and piecewise linear path $\gamma$. Center right: Grid points $(x_i)_{i=1}^r$ and piecewise linear path $\gamma$ in the second test case at resolution $100\times 100$, using the FM-LBR. Right: In hard test cases, combining strong anisotropy, metric discontinuity, and low grid resolution, the extracted path exhibits zig-zag artifacts (detail of the third test case at resolution $n=200$).}
\end{figure}
From this point, a typical approach to solve \eqref{descent} is to extend the values of $\dist$ or $v$ to the continuous domain $\Omega$ via an interpolation procedure, and then to use a black box ODE solver or a Runge-Kutta method.
Note that the accuracy usually expected from these high order methods is mostly out of reach here, since the discretization \eqref{discreteSys} of the eikonal equation is only first order, and since the vector field $\mathcal M^{-1}\nabla \distC$ is discontinuous both at ``caustics'' and at discontinuities of $\mathcal M$. A more significant issue is that computations frequently get stuck, despite the use of state of the art and/or commercial interpolation methods and ODE solvers, see e.g.\ \cite{AltonMitchell12}, Figure 5.10.\\
We propose a method for the computation of minimal paths, which trades high order accuracy for robustness, and never gets stuck if the eikonal solver is Dijkstra inspired. It takes advantage of the specific form \eqref{discreteSys} of the discretization of the eikonal equation, and does not rely on black box routines. It is also parameter free: there is no interpolation order or gradient step to adjust.
\begin{algorithm}
\caption{Minimal path computation, starting from a given grid point $x_0\in \Omega \cap Z$.}
\label{algo:Path}
{\bf Initialisation:} $x \leftarrow x_0$, $u \leftarrow 0$ (mutable variables).\\
{\bf While} $x$ is not an initial source point, {\bf do}\\
\begin{tabular}{cl}
&Denote by $z_1, \cdots, z_k$ the grid points appearing in the expression \eqref{defvz} of $v(x)$.\\
&Find $\lambda \in \mathbb R_+$, $1 \leq j \leq k$, which minimize $\|x+ u + \lambda v(x) - z_j \|$. \\
&Perform the updates $u \leftarrow x+u + \lambda v(x) - z_j$ and $x \leftarrow z_j$.\\
\end{tabular}
\end{algorithm}
The successive iterations of Algorithm \ref{algo:Path} generate grid points $x_0, \cdots, x_r \in Z$, and small correcting offsets $u_0, \cdots, u_r \in \mathbb R^m$.
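The minimization in the second step of the while loop has a closed form: for each vertex $z_j$, the optimal $\lambda \geq 0$ is obtained by projecting $z_j - (x+u)$ onto the ray spanned by $v(x)$. One loop iteration can be sketched in Python as follows (names are ours, for illustration only):

```python
import numpy as np

def path_step(x, u, v, vertices):
    """One iteration of the minimal-path loop: find lambda >= 0 and a stencil
    vertex z_j minimizing ||x + u + lambda*v - z_j||, then update the offset u
    and move to z_j."""
    p = np.asarray(x, dtype=float) + np.asarray(u, dtype=float)
    best = None
    for z in vertices:
        # minimize ||p + lam*v - z|| over lam >= 0: projection onto the ray
        lam = max(0.0, np.dot(np.asarray(z) - p, v) / np.dot(v, v))
        r = p + lam * v - z                     # new offset candidate
        d = np.linalg.norm(r)
        if best is None or d < best[0]:
            best = (d, r, np.asarray(z, dtype=float))
    _, u_new, x_new = best
    return x_new, u_new
```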
If the causality property holds, see Proposition \ref{prop:Causality}, then the values $(\dist(x_i))_{i=0}^r$ are strictly decreasing, so that the algorithm is guaranteed to terminate. The piecewise linear path $\gamma : [0,r] \to \Omega$, parametrized so that $\gamma(i) := x_i+u_i$, satisfies the differential equation
\begin{equation}
\label{paramGamma}
\gamma'(t) = \lambda_{\lfloor t \rfloor} v(x_{\lfloor t \rfloor}),
\end{equation}
for non-integer $t \in [0,r]$, where the constants $(\lambda_i)_{i=0}^{r-1}$ are the minimizers in the second step of the while loop, and the vector $v(z)$, $z\in \Omega \cap Z$, is defined in \eqref{defvz}. The particularity of our path extraction method is that the direction field $v$ is not evaluated on the curve $\gamma$, but at the points $(x_i)_{i=0}^r$, which remain close to it as shown by the next proposition (the exponent $3$ involved seems to be either over-estimated, or a rare worst case scenario, in view of the experiments of Figure \ref{fig:path}).
\begin{proposition}
Let $\Omega \subset \mathbb R^2$ be an open bounded set, equipped with a Riemannian metric $\mathcal M \in C^0(\overline \Omega, S_2^+)$, and an admissible family $({\cal T}(z))_{z \in \Omega}$ of meshes with ``Boundedness'' constant $R$ (see Definition \ref{def:Admissibility}; $R \lesssim \kappa(\mathcal M)$ for the FM-LBR). Consider a cartesian grid $Z$ of scale $h$, equipped with the corresponding stencils, see \eqref{def:Grid} and \eqref{eq:VFromT}, and solve the discrete system \eqref{discreteSys}. Let $\gamma \in C^0([0,r], \Omega)$ be a path extracted with Algorithm \ref{algo:Path} and parametrized as in \eqref{paramGamma}.
Then for all $t\in [0,r]$
\begin{equation}
\label{upperGammaX}
\| \gamma(t) - x_{\lfloor t \rfloor} \| \leq C h R^3,
\end{equation}
where $C$ is an absolute constant (i.e.\ independent of $\Omega$, $({\cal T}(z))_{z \in \Omega}$, $\mathcal M$, or the path origin).
\end{proposition}
The rest of this appendix is devoted to the proof. Consider a fixed $0 \leq i < r$, and observe that for all $t\in [i,i+1[$ one has, since $\gamma$ is linear on this interval and since $\gamma(i) = x_i+u_i$:
\begin{equation}
\label{eq:gammaDist}
\| \gamma(t) - x_{\lfloor t \rfloor} \| \leq \max \{ \| \gamma(i+1) - x_i\| ,\ \|\gamma(i) - x_i\| \} \leq \max \{ \| u_{i+1}\| + \|x_{i+1} - x_i\|,\ \|u_i\|\}.
\end{equation}
In order to avoid notational clutter, we denote $x:=x_i$, $x':=x_{i+1}$, $u:=u_i$, $u':=u_{i+1}$ and $v := v(x_i)$. Let $k \in \{1,2\}$ and let $z_j$, $1 \leq j \leq k$, be the vertices of $\partial V(x)$ appearing in the expression \eqref{defvz} of the discrete negative gradient $v$. We assume without loss of generality that the grid is $Z := \mathbb Z^2$, so that $v_j := z_j-x$ is a vertex of $\partial {\cal T}(x)$, for all $1 \leq j \leq k$. Hence $\|v_j\| \leq R$ by (Boundedness), and if $k =2$ then $\det(v_1, v_2)$ is a non-zero integer. In particular $\|x'-x\| = \|v_j\| \leq R$, for some $1 \leq j \leq k$. By construction (second step of the while loop in the path computation), we have for any $\lambda \in \mathbb R_+$ and any $1 \leq j \leq k$
\begin{equation}
\label{inequp}
\| u' \| \leq \| u + \lambda v - v_j\|.
\end{equation}
We prove in the following an upper bound of the form $\|u'\| \leq \max \{\|u\|, C R^3\}$, which by an immediate induction argument implies $\|u_i\| \leq C R^3$ for any $1 \leq i \leq r$. Using \eqref{eq:gammaDist}, our previous estimate on $\|x_{i+1}-x_i\|$, and rescaling by a factor $h$, we then obtain the announced estimate \eqref{upperGammaX}. Case $k=1$: choosing $\lambda=1$, $j=1$, and observing that $v = v_1$, we obtain $\|u'\| \leq \|u + 1 \times v - v_1\|= \|u\|$. The second case begins with a lemma.
\begin{lemma}
Let $v_1,v_2 \in \mathbb R^2$, and let $v := \alpha_1 v_1+\alpha_2 v_2$, with $\alpha_1, \alpha_2>0$. Let $\mu := \sqrt 2$, and let $w_1 := v_1 \mu - v_2/\mu$, $w_2 := v_2 \mu - v_1 /\mu$. Then one can choose $\lambda \in \mathbb R_+$ and $1 \leq j \leq 2$, such that $v_j - \lambda v$ is positively proportional to any of the following vectors: $w_1$, $w_2$, $-w_1$ or $-w_2$, with a proportionality constant $0 < \nu \leq \mu$.
\end{lemma}
\begin{proof}
In the case of $w_1$, choose $j:=1$, $\lambda := 1/(\alpha_1+ \mu^2 \alpha_2)$, so that $\nu = 1/(\mu + \mu^{-1}\alpha_1/ \alpha_2 ) \leq 1/\mu$. In the case of $-w_1$, choose $j:=2$, $\lambda := 1/(\alpha_2+\mu^{-2} \alpha_1)$, so that $\nu =1/(\mu^{-1}+\mu \alpha_2/\alpha_1) \leq \mu$. The cases of $w_2$ and $-w_2$ are similar.
\end{proof}
Case $k=2$. We use the notations of the lemma.
Denoting by $A$ the matrix with rows $w_1,w_2$, there exist $1 \leq k \leq 2$ and $\varepsilon \in \{-1,1\}$ such that
\begin{equation}
\label{uwLower}
\sqrt 2 \langle u, \varepsilon w_k\rangle \geq \sqrt{ \langle u, w_1\rangle^2 + \langle u, w_2\rangle^2 } = \|A u\| \geq \|A^{-1}\|^{-1} \|u\|.
\end{equation}
The norm $\|A^{-1}\|^{-1}$ is estimated as follows. First, $|\det A| = (\mu^2-\mu^{-2}) |\det (v_1,v_2)| \geq \mu^2-\mu^{-2}$, since $\det(v_1,v_2)$ is a non-zero integer. On the other hand $\|w_j\| \leq (\mu+\mu^{-1}) \max\{\|v_1\|,\|v_2\|\} \leq C_0 R$ with $C_0=2 (\mu+\mu^{-1})$, hence $\|A\| \leq \sqrt{\|w_1\|^2 +\|w_2\|^2}\leq \sqrt 2 C_0 R$. Finally $\|A^{-1}\|^{-1} = |\det A| / \|A\| \geq C_1 /R$ where $C_1 := (\mu-\mu^{-1})/( \sqrt 2 C_0)$. We next choose $\lambda$ and $j$, using the previous lemma, so that $v_j - \lambda v = \nu \varepsilon w_k$, with $\varepsilon$, $k$ as in \eqref{uwLower}, and where $0 < \nu \leq \mu := \sqrt 2$. Hence, using \eqref{inequp}:
\begin{align*}
\| u' \|^2 &\leq \| u - (v_j - \lambda v)\|^2 = \|u - \nu \varepsilon w_k\|^2 = \|u\|^2 - 2 \nu \langle u, \varepsilon w_k\rangle+ \nu^2 \|w_k\|^2 \\
& \leq \|u\|^2 - 2 \nu \|A^{-1}\|^{-1} \|u\| + \nu^2 \|w_k\|^2 \leq \|u\|^2 - 2 C_1 \|u\|/R+ C_0^2 R^2.
\end{align*}
If $\|u\| \geq C_2 R^3$, with $C_2 := C_0^2/(2C_1)$, then $\|u'\| \leq \|u\|$. If $\|u\|$ is below this bound, then choosing $\lambda = 0$ in \eqref{inequp} yields $\|u'\| \leq \|u\|+\|v_1\| \leq C_2 R^3 + 2 R$.
Thus $\|u'\| \leq \max \{\|u\|, C R^3\}$ as announced, which concludes the proof.
\section{Stencil size, discretization errors and metric regularity}
\label{subsec:Accuracy}
Experience in the discretization of Partial Differential Equations (PDEs) suggests that robust and accurate numerical schemes are usually based on small, localized and isotropic stencils. The FM-LBR, which achieves causality in the eikonal equation through the use of long range, sparse and highly anisotropic stencils, seems to violate this principle. We give in this section a \emph{heuristic} analysis of its accuracy, which explains its excellent performance in the third and fourth tests (once a source of puzzlement for the author and the reviewers), but also exposes some weaknesses. Let us emphasize that the computational domain $\Omega$ is equipped with two geometries.
\begin{itemize}
\item The \emph{extrinsic} Euclidean geometry, inherited from the embedding $\Omega \subset \mathbb R^m$.
\item The \emph{intrinsic} Riemannian geometry, given by the Riemannian metric $\mathcal M$ on $\Omega$, which is part of the structure of the eikonal PDE.
\end{itemize}
The AGSI stencils are small and localized with respect to the extrinsic Euclidean distance. The FM-LBR stencil at a point $z \in \Omega$ is built from an $\mathcal M(z)$-reduced basis, see \S \ref{sec:MeshProperties}, which consists of the smallest linearly independent vectors of $\mathbb Z^m$ in the local norm $\|\cdot\|_{\mathcal M(z)}$. Hence this stencil is by construction small and localized \emph{in the twisted perspective} of the intrinsic Riemannian distance. We refer to \cite{M12b} for a quantitative estimate of the size of the FM-LBR stencil, from this perspective, in average over all orientations of the discretization grid.
To better reflect the shapes of these stencils, which (as much as possible) adapt their orientation and aspect ratio to the metric ${\mathcal M}$, but keep a constant volume (they are built of $(m+1)!$ simplices of volume $h^m/m!$), we introduce a normalized metric $\widehat {\mathcal M}$: for all $z\in \Omega$
\begin{equation} \label{normalizedM}
\widehat {\mathcal M}(z) := \det({\mathcal M}(z))^{-\frac 1 m} {\mathcal M}(z),
\end{equation}
so that $\det(\widehat {\mathcal M}(z)) = 1$ identically. For all $x,y\in \overline \Omega$ we denote by $\distC(x,y)$ (resp.\ $\widehat \distC(x,y)$) the Riemannian distance \eqref{def:Length} on $\Omega$ associated to ${\mathcal M}$ (resp.\ $\widehat {\mathcal M}$). Bellman's optimality principle, applied to the solution $\distC$ of the eikonal equation \eqref{eikonal}, reads: for all $x \in V \subset \Omega$
\begin{equation*}
\distC(x) = \min_{y \in \partial V} \distC(x,y) + \distC (y).
\end{equation*}
The Hopf-Lax update operator \eqref{def:HopfLax} reflects this identity at the discrete level, up to two approximations:
\begin{align}
\label{approxM} \distC(x,y) & \approx \|x-y\|_{{\mathcal M}(x)},\\
\label{approxD} \distC (y) &\approx \interp_{V(x)} \dist (y),
\end{align}
where $V(x)$ denotes the stencil at $x$ (which is given in the form of a triangulation of a neighborhood of $x$), $\interp_V$ the linear interpolation operator on $V$, and $y \in \partial V$. If one ignores the issue of the scheme causality, then the choice of stencil should be dictated by the local regularity properties of the quantities that are approximated on it.
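The normalization above is a pure rescaling: it forces a unit determinant while leaving the eigenvector directions and eigenvalue ratios (the anisotropy) unchanged. A minimal numerical sketch (the matrix $M$ below is an arbitrary example of our own):

```python
import numpy as np

def normalized_metric(M):
    """det(M)^(-1/m) * M: unit-determinant version of the metric, keeping its
    eigenvector directions and eigenvalue ratios (i.e., its anisotropy)."""
    m = M.shape[0]
    return np.linalg.det(M) ** (-1.0 / m) * M

M = np.array([[4.0, 1.0], [1.0, 9.0]])    # an arbitrary symmetric positive matrix
Mh = normalized_metric(M)
print(np.linalg.det(Mh))                  # = 1 by construction

# the anisotropy (eigenvalue ratio) is unchanged by the scaling
lam  = np.sort(np.linalg.eigvalsh(M))
lamh = np.sort(np.linalg.eigvalsh(Mh))
print(lam[1] / lam[0], lamh[1] / lamh[0])
```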
Discretization \eqref{approxM} amounts to approximating the metric ${\mathcal M}$ with the constant ${\mathcal M}(x)$ on the stencil $V(x)$. Such piecewise constant approximation errors are controlled by Lipschitz regularity constants. The natural distance on $S_m^+$ is defined in \eqref{def:dtimes}, but the distance on $\Omega$ should be chosen appropriately so as to reflect the geometry of the computational stencils. Indeed, the AGSI (resp.\ FM-LBR) stencil at a point $x\in \Omega$ is heuristically not much different from a ball centered at $x$ and of radius the grid scale $h$, defined with respect to the Euclidean distance (resp.\ the Riemannian distance $\widehat D$). The AGSI stencil should therefore be preferred if the metric ${\mathcal M}$ has a small Lipschitz regularity constant $K_0$ with respect to the extrinsic Euclidean distance:
\begin{equation} \label{regME}
\distance({\mathcal M}(x), {\mathcal M}(y)) \leq K_0 \|x-y\|.
\end{equation}
On the other hand, the FM-LBR stencil is more suitable for metrics which have a small Lipschitz regularity constant $K_1$ with respect to the intrinsic (up to the normalization \eqref{normalizedM}) distance $\widehat D$:
\begin{equation} \label{regMD}
\distance({\mathcal M}(x), {\mathcal M}(y)) \leq K_1 \widehat D(x,y).
\end{equation}
In the fifth numerical example below, the more accurate of these two methods can indeed be guessed from the ratio $K_0/K_1$. Regularity conditions of the form \eqref{regMD} arise naturally in the study of anisotropic mesh generation, see Part III of \cite{Mirebeau10}.
The Riemannian metric involved in the third and fourth numerical tests of this section, inspired by applications to tubular structure segmentation \cite{BC10}, varies slowly in the direction of the eigenvector associated to the small eigenvalue of ${\mathcal M}(z)$ (the direction of the tube), but quickly (in fact discontinuously) in the orthogonal direction. Thus, although discontinuous, it is heuristically not far from satisfying \eqref{regMD}, which explains the exceptional performance of the FM-LBR on these specific examples. The interpolation error \eqref{approxD} is harder to estimate, yet in favor of the FM-LBR stencil let us mention the intrinsic $1$-Lipschitz regularity of the solution: for all $x,y \in \Omega$
\begin{equation*}
|D(x) - D(y) | \leq D(x,y).
\end{equation*}
In the special case of a constant metric, where \eqref{approxM} is exact and all the error comes from \eqref{approxD}, the FM-LBR stencil offers the best accuracy, see \cite{M12b}. \\
The following illustrative example was proposed by A.\ Vladimirsky: let $M \in S_m^+$, let $u \in {\rm \hbox{I\kern-.2em\hbox{R}}}^m$, and let ${\mathcal M} : {\rm \hbox{I\kern-.2em\hbox{R}}}^m \to S_m^+$ be the Riemannian metric defined by
\begin{equation} \label{def:MExp}
{\mathcal M}(z) := M \exp( \langle u, z\rangle).
\end{equation}
Assume for normalization that $\det (M)=1$, so that $\widehat {\mathcal M} = M$ identically, and $\widehat D(x,y) = \|x-y\|_{M}$ for all $x,y \in {\rm \hbox{I\kern-.2em\hbox{R}}}^m$.
Then, with $D := M^{-1}$:
\begin{align*}
\distance({\mathcal M}(x), {\mathcal M}(y)) = |\langle u, x-y\rangle| &\leq \|u\| \|x-y\|,\\
\distance({\mathcal M}(x), {\mathcal M}(y)) = |\langle u, x-y\rangle| &\leq \|u\|_{D} \|x-y\|_{M} = \|u\|_{D} \widehat D(x,y).
\end{align*}
The Lipschitz regularity constants are therefore $K_0 = \|u\|$ and $K_1 = \|u\|_{D}$. The discretization \eqref{approxM} is hence likely to be more accurate with the AGSI stencil if $\|u\| \leq \|u\|_D$, and with the FM-LBR stencil otherwise. Defining for all $z\in {\rm \hbox{I\kern-.2em\hbox{R}}}^m$
\begin{equation} \label{def:DExp}
\dist(z) := 2\exp(\langle u,z\rangle/2)/\|u\|_D,
\end{equation}
we observe that this map is unimodular: $\|\nabla \dist (z) \|_{{\mathcal M}(z)^{-1}} = 1$. The value $\dist(z)$ can also be regarded as the geodesic distance from $z$ to a point at infinity in the direction of $-D u$. The characteristic curves of the solution are parallel straight lines, of direction $D u$. Although the present discussion is about accuracy rather than CPU time, let us mention that, thanks to this special property, and as pointed out by A. Vladimirsky, the AGSI converges in a single pass in the numerical tests below (provided its priority queue is suitably initialized: the upwind boundary points $z$ must be sorted by increasing values of their scalar product $\langle z, D u \rangle$ with the characteristic direction $D u$). As a result, and in contrast with \S \ref{sec:num}, the AGSI CPU time is here only \emph{half} of the FM-LBR CPU time.
Our fifth and last numerical test involves a metric depending on three parameters $\kappa\geq 1$, $\theta, \varphi\in {\rm \hbox{I\kern-.2em\hbox{R}}}$: denoting $e_\theta := (\cos \theta, \sin \theta)$, and $(x,y)^\perp := (y,-x)$,
\begin{equation} \label{parametrizedMetric}
{\mathcal M}(z; \, \kappa, \theta, \varphi) := M(\kappa, \theta) \exp(\langle z, e_\varphi\rangle), \ \text{ with } M(\kappa, \theta) := \kappa e_\theta e_\theta^\trans + \kappa^{-1} e_\theta^\perp (e_\theta^\perp)^\trans .
\end{equation}
For each matrix $M = M(\kappa, \theta)$, given by its anisotropy ratio $\kappa$ and anisotropy direction $\theta$, we consider different unit vectors $u = e_\varphi$, given by their angle $\varphi$ with respect to the $x$-axis. For $\varphi = \theta$ the FM-LBR is favored, since $K_0/K_1$ takes its maximal value $\sqrt {\kappa}$. For $\varphi = -\pi/4$, the AGSI is favored, since $K_0/K_1$ is close to its minimal value $1/\sqrt \kappa$, and the interpolation error \eqref{approxD} vanishes. The domain is the square $[-2,2]^2$, discretized on a $100 \times 100$ grid. The FM-LBR stencil is included in a square of $(2 w+1) \times (2 w+1)$ pixels, where $w$ respectively equals $2$, $3$, and $5$ for the three different pairs $(\kappa,\theta)$ in Table \ref{table:ExpMet}. The boundary condition \eqref{def:DExp} is applied on the upwind part of the square boundary, on a layer of width $w$. The numerical tests presented in Table \ref{table:ExpMet} are typical of the author's experience, and tend to agree with the above heuristic error analysis. Note however that the FM-LBR performance is unexpectedly bad\footnote{The FM-LBR in that case produces large errors close to the \emph{downwind} boundary, presumably due to its large stencils.
Removing a $5$ pixel band along the boundary yields in this case the $L^\infty$ errors 32.3 (FM-LBR) and 45.8 (AGSI), in favor of the FM-LBR, as predicted by the Lipschitz constants ratio.} in case $^*$, and unexpectedly good in cases $\dagger$. The accuracy advantage of the AGSI is larger than a factor $6$ for the last set of parameters in Table \ref{table:ExpMet}, and it may of course grow unboundedly as the anisotropy ratio $\kappa$ tends to infinity, for suitable angles $\theta, \varphi$. Anisotropy does therefore, sometimes, play against the FM-LBR accuracy.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\kappa$} & \multirow{2}{*}{$\theta$} & \multirow{2}{*}{$\varphi$} & $K_0/K_1$ & \multicolumn{2}{c|}{$L^1$ error} & \multicolumn{2}{c|}{$L^\infty$ error} \\
& & & $=\|u\|/\|u\|_D$ & FM-LBR & AGSI & FM-LBR & AGSI \\
\hline \hline
\multirow{3}{*}{3} & \multirow{3}{*}{$\pi/3$} & $\pi/3$ & 1.73 & \tr{2.78} & 6.96 & \tr{40.2} & 60.5 \\
& & $\pi/6$ & 1 & 2.80 & 3.07 & 17.1 & 18.5 \\
& & $- \pi/4$ & 0.59 & 3.95 & \tr{1.45} & 37.4 & \tr{13.7} \\
\hline \hline
\multirow{3}{*}{10} & \multirow{3}{*}{$3 \pi/8$} & $3 \pi/8$ & 3.16 & \tr{3.74} & 9.45 & \tr{38.4} & 79.5 \\
&& 1.48 & 1 & 1.34 & 2.03 & 7.44 & 11.4 \\
&& $-\pi/4$ & 0.34 & 2.92 & \tr{0.83} & 27.3 & \tr{7.82} \\
\hline \hline
\multirow{3}{*}{30} & \multirow{3}{*}{$\pi/8$} & $\pi/8$ & 5.48 & \tr{3.89} & 6.62 & 94.3$^*$ & \tr{61.0} \\
&& 0.57 & 1 & \tr{0.91}$^\dagger$ & 2.55 & \tr{4.62}$^\dagger$ & 12.9 \\
&& $-\pi/4$ & 0.20 & 3.24 & \tr{0.49} & 29.4 & \tr{4.51} \\
\hline
\end{tabular}
\end{center}
\caption{Metric $z \mapsto {\mathcal M}(z; \,\kappa, \theta, \varphi)$, see \eqref{parametrizedMetric}, on the square $[-2,2]^2$
discretized on a $100\times 100$ grid. The most accurate algorithm, among the AGSI and the FM-LBR, can in most cases be predicted from the Lipschitz constants ratio $K_0/K_1$, at least when it is far from $1$. The star points out an exception. CPU time: approximately $10$ ms for the AGSI, and $20$ ms for the FM-LBR. Numerical errors are multiplied by 100 for better readability.}
\label{table:ExpMet}
\end{table}

\begin{thebibliography}{99}
\bibitem{AltonMitchell08} K. Alton, I. M. Mitchell, {\it Fast Marching Methods for Stationary Hamilton-Jacobi Equations with Axis-Aligned Anisotropy}, SIAM Journal on Numerical Analysis, 47(1), 363-385, 2008.
\bibitem{AltonMitchell12} K. Alton, I. M. Mitchell, {\it An Ordered Upwind Method with Precomputed Stencil and Monotone Node Acceptance for Solving Static Hamilton-Jacobi Equations}, Journal of Scientific Computing, 51(2), 313-348, 2012.
\bibitem{BS98} T. J. Barth, J. A. Sethian, {\it Numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains}, Journal of Computational Physics, 145(1), 1-40, 1998.
\bibitem{BC10} F. Benmansour, L. D. Cohen, {\it Tubular Structure Segmentation Based on Minimal Path Method and Anisotropic Enhancement}, International Journal of Computer Vision, 92(2), 192-210, 2010.
\bibitem{BR06} F. Bornemann, C. Rasch, {\it Finite-element Discretization of Static Hamilton-Jacobi Equations based on a Local Variational Principle}, Computing and Visualization in Science, 9(2), 57-69, 2006.
\bibitem{CS92} J. H. Conway, N. J. A. Sloane, {\it Low-dimensional lattices. VI. Voronoi reduction of three-dimensional lattices}, Proceedings of the Royal Society of London, Series A: Mathematical and Physical Sciences, 436(1896), 55-68, 1992.
\bibitem{FM13} J.
Fehrenbach, J.-M. Mirebeau, {\it Sparse Non-Negative Stencils for Anisotropic Diffusion}, Journal of Mathematical Imaging and Vision, available online, to appear in print, 2013.
\bibitem{GonzalesRofman85} R. Gonzales, E. Rofman, {\it On Deterministic Control Problems: an Approximate Procedure for the Optimal Cost, I, the Stationary Problem}, SIAM Journal on Control and Optimization, 23(2), 242-266, 1985.
\bibitem{JBTDPIB08} S. Jbabdi, P. Bellec, R. Toro, J. Daunizeau, M. P{\'e}l{\'e}grini-Issac, H. Benali, {\it Accurate Anisotropic Fast Marching for Diffusion-Based Geodesic Tractography}, International Journal of Biomedical Imaging, 2008.
\bibitem{KS98} R. Kimmel, J. A. Sethian, {\it Computing geodesic paths on manifolds}, Proceedings of the National Academy of Sciences USA, 95(15), 8431-8435, 1998.
\bibitem{KushnerDupuis92} H. J. Kushner, P. G. Dupuis, {\it Numerical Methods for Stochastic Control Problems in Continuous Time}, Academic Press, New York, 1992.
\bibitem{L1773} J. L. Lagrange, {\it Recherches d'arithm\'etique}, Nouveaux M\'emoires de l'Acad\'emie de Berlin, 1773.
\bibitem{LYC03} X.-G. Li, W. Yan, C. K. Chan, {\it Numerical schemes for Hamilton-Jacobi equations on unstructured meshes}, Numerische Mathematik, 94(2), 315-331, 2003.
\bibitem{L82} P.-L. Lions, {\it Generalized solutions of Hamilton-Jacobi equations}, Pitman, Boston, 1982.
\bibitem{M12b} J.-M. Mirebeau, {\it On the Accuracy of Anisotropic Fast Marching}, preprint available on Arxiv, 2012.
\bibitem{M12c} J.-M. Mirebeau, {\it Efficient Fast Marching with Finsler Metrics}, Numerische Mathematik, available online, to appear in print, 2013.
\bibitem{Mirebeau10} J.-M.
Mirebeau, {\it Adaptive and anisotropic finite element approximation: Theory and algorithms}, PhD thesis.
\bibitem{NS09} P. Q. Nguyen, D. Stehl{\'e}, {\it Low-dimensional lattice basis reduction revisited}, ACM Transactions on Algorithms, Article 46, 2009.
\bibitem{PPKC10} G. Peyr{\'e}, M. P{\'e}chaud, R. Keriven, L. D. Cohen, {\it Geodesic Methods in Computer Vision and Graphics}, Foundations and Trends in Computer Graphics and Vision, 5(3-4), 197-397, 2010.
\bibitem{RS09} C. Rasch, T. Satzger, {\it Remarks on the ${\mathcal O}(N)$ Implementation of the Fast Marching Method}, IMA Journal of Numerical Analysis, 29, 806-813, 2009.
\bibitem{SKD07} M. Sermesant, E. Konukoglu, H. Delingette, {\it An anisotropic multi-front fast marching method for real-time simulation of cardiac electrophysiology}, Proc. of Functional Imaging and Modeling of the Heart, 2007.
\bibitem{SethBook96} J. A. Sethian, {\it Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision and Materials Sciences}, Cambridge University Press, 1996.
\bibitem{SV03} J. A. Sethian, A. Vladimirsky, {\it Ordered Upwind Methods for Static Hamilton-Jacobi Equations: Theory and Algorithms}, SIAM Journal on Numerical Analysis, 41(1), 325-363, 2003.
\bibitem{SV00} J. A. Sethian, A. Vladimirsky, {\it Fast methods for the Eikonal and related Hamilton-Jacobi equations on unstructured meshes}, Proceedings of the National Academy of Sciences, 97(11), 5699-5703, 2000.
\bibitem{TsaiChengOsherZhao03} Y.-H. R. Tsai, L.-T. Cheng, S. Osher, H.-K. Zhao, {\it Fast sweeping algorithms for a class of Hamilton-Jacobi equations}, SIAM Journal on Numerical Analysis, 41(2), 659-672, 2003.
\bibitem{T95} J. N.
Tsitsiklis, {\it Efficient algorithms for globally optimal trajectories}, IEEE Transactions on Automatic Control, 40(9), 1528-1538, 1995.
\bibitem{VladMSSP} A. Vladimirsky, {\it Label-setting methods for Multimode Stochastic Shortest Path problems on graphs}, Mathematics of Operations Research, 33(4), 821-838, 2008.
\bibitem{VladThesis01} A. Vladimirsky, {\it Fast methods for static Hamilton-Jacobi Partial Differential Equations}, PhD Thesis, 2001.
\bibitem{VladTime06} A. Vladimirsky, {\it Static PDEs for Time-Dependent Control Problems}, Interfaces and Free Boundaries, 8(3), 281-300, 2006.
\bibitem{Zhao05} H. Zhao, {\it A Fast Sweeping Method for Eikonal Equations}, Mathematics of Computation, 74(250), 603-627, 2005.
\end{thebibliography}
\end{document}
\begin{document} \title{ A fast algorithm for the linear canonical transform \vskip.5cm} \author{Rafael G. Campos and Jared Figueroa\\ Facultad de Ciencias F\'{\i}sico-Matem\'aticas,\\ Universidad Michoacana, \\ 58060, Morelia, Mich., M\'exico.\\ \hbox{\small [email protected], [email protected]}\\ } \date{} \maketitle { \noindent MSC: 65T50; 44A15; 65D32\\ \noindent Keywords: Linear Canonical Transform, Fractional Fourier Transform, Quadrature, Hermite polynomials, Fractional Discrete Fourier Transform, {\tt fft} }\\ \vspace*{1truecm} \begin{center} Abstract \end{center} In recent years there has been renewed interest in finding fast algorithms to compute accurately the linear canonical transform (LCT) of a given function. This interest is driven by the large number of applications of the LCT in optics and signal processing. The well-known Fourier, fractional Fourier, bilateral Laplace and Fresnel integral transforms are special cases of the LCT. In this paper we obtain an ${\mathcal O}(N\log N)$ algorithm to compute the LCT by using a chirp-FFT-chirp transformation yielded by a convergent quadrature formula for the fractional Fourier transform. This formula gives a unitary discrete LCT in closed form. In the case of the fractional Fourier transform, the algorithm computes this transform for arbitrary complex values inside the unit circle, and not only on its boundary. In the case of the ordinary Fourier transform, the algorithm improves the output of the FFT. \vskip1.5cm \section{Introduction}\label{intro} The Linear Canonical Transform (LCT) of a given function $f(x)$ is a three-parameter integral transform that was obtained in connection with canonical transformations in Quantum Mechanics \cite{Mos71, Wol79}.
It is defined by \begin{eqnarray}\label{lctuno} {\mathcal L}^{\{a,b,c,d\}}[f(x),y]&=&\frac{1}{\sqrt{2\pi i b}}\int_{-\infty}^\infty e^{\frac{i}{2b}(ax^2-2xy+dy^2)}f(x)dx,\nonumber \end{eqnarray} for $b\ne0$, and by $\sqrt{d}\, e^{\frac{i}{2}cdy^2} f(dy)$ if $b=0$. The four parameters $a$, $b$, $c$ and $d$ appearing in (\ref{lctuno}) are the elements of a $2\times 2$ matrix with unit determinant, i.e., $ad-bc=1$; therefore, only three parameters are free. Since this transform is a useful tool for signal processing and optical analysis, its study and direct computation on digital computers have become an important issue \cite{Hea09}-\cite{Pei05}; in particular, fast algorithms to compute the linear canonical transform have been devised \cite{Koc08, Hen05}. These algorithms use the following related ideas: (a) use of the periodicity and shifting properties of the discrete LCT to break down the original matrix into smaller matrices, as the FFT does with the DFT, (b) decomposition of the LCT into a chirp-FFT-scaling transformation, and (c) decomposition of the LCT into a fractional Fourier transform followed by a scaling-chirp multiplication. All of these are algorithms of ${\mathcal O}(N\log N)$ complexity.\\ In this paper we present an algorithm that takes ${\mathcal O}(N\log N)$ time, based on the decomposition of the LCT into a scaling-chirp-DFT-chirp-scaling transformation, obtained by using a quadrature formula of the continuous Fourier transform \cite{Cam09, Cam92}. Here, DFT stands for the standard discrete Fourier transform. To distinguish this discretization from other implementations, we call it the extended Fourier Transform (XFT). The quadrature from which the XFT is obtained uses some asymptotic properties of the Hermite polynomials and yields a fast algorithm to compute the Fourier transform, the fractional Fourier transform and, therefore, the LCT.
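As a sanity check of the definition (\ref{lctuno}), the integral can be evaluated by a direct Riemann sum. The sketch below (an illustrative $O(N^2)$ evaluation, not the fast algorithm of this paper; the Gaussian test function and grid sizes are our own choices) uses the parameters $(a,b,c,d)=(0,1,-1,0)$, for which the LCT reduces to $1/\sqrt{2\pi i}$ times a Fourier integral with a known closed form:

```python
import numpy as np

def lct(f, y, a, b, c, d, x_max=20.0, n=4001):
    """Direct Riemann-sum evaluation of the LCT for b != 0 at a single point y.
    Illustration only -- O(N^2) if evaluated on a whole grid of y's."""
    assert abs(a * d - b * c - 1.0) < 1e-12     # unit-determinant constraint
    x = np.linspace(-x_max, x_max, n)
    dx = x[1] - x[0]
    kernel = np.exp(1j * (a * x**2 - 2 * x * y + d * y**2) / (2 * b))
    return np.sum(kernel * f(x)) * dx / np.sqrt(2j * np.pi * b)

# For (a,b,c,d) = (0,1,-1,0) and f(x) = e^{-x^2/2}, the LCT equals
# e^{-y^2/2}/sqrt(i), so its modulus is e^{-y^2/2}.
f = lambda x: np.exp(-x**2 / 2)
y = 1.3
val = lct(f, y, 0.0, 1.0, -1.0, 0.0)
print(abs(val), np.exp(-y**2 / 2))   # the two moduli agree
```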
The quadrature formula is ${\mathcal O}(1/N)$-convergent to the continuous Fourier transform for a certain class of functions \cite{Cam95}. \section{A discrete fractional Fourier transform}\label{secdos} In previous work \cite{Cam92}, \cite{Cam95}, \cite{Cam08}, we derived a quadrature formula for the continuous Fourier transform which yields an accurate discrete Fourier transform. For the sake of completeness, we give in this section a brief review of the main steps leading to this formula.\\ Let us consider the family of Hermite polynomials $H_n(x)$, $n=0,1,\ldots$, which satisfies the recurrence equation \begin{equation}\label{receqg} H_{n+1}(x)+2nH_{n-1}(x)=2xH_n(x), \end{equation} with $H_{-1}(x)\equiv 0$. Note that the recurrence equation (\ref{receqg}) can be written as the eigenvalue problem \begin{equation}\label{eighinf} \begin{pmatrix} 0&1/2&0&\cdots \\1&0& 1/2&\cdots \\0& 2& 0&\cdots \\\vdots&\vdots&\vdots&\ddots\\ \end{pmatrix} \begin{pmatrix} H_0(x)\\ H_1(x)\\ H_2(x)\\ \vdots\end{pmatrix}=x\begin{pmatrix} H_0(x)\\ H_1(x)\\ H_2(x)\\ \vdots\end{pmatrix}. \end{equation} Let us now consider the eigenproblem associated with the principal submatrix of dimension $N$ of (\ref{eighinf}) \[ {\mathcal H}=\begin{pmatrix}0&1/2&0&\cdots & 0& 0\\1& 0& 1/2&\cdots & 0& 0\\0& 2& 0&\cdots & 0& 0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0& 0& 0&\cdots & 0&1/2\\0& 0& 0&\cdots & N-1&0\end{pmatrix}. \] It is convenient to symmetrize ${\mathcal H}$ by using the similarity transformation $S{\mathcal H} S^{-1}$, where $S$ is the diagonal matrix \[ S=\text{diag}\left\{1,\frac{1}{\sqrt{2}},\ldots,\frac{1}{\sqrt{(N-1)!\,2^{N-1}}}\right\}.
\] Thus, the symmetric matrix $H=S{\mathcal H} S^{-1}$ takes the form \[ \begin{pmatrix}0&\sqrt{\frac{1}{2}}&0&\cdots & 0& 0\\\sqrt{\frac{1}{2}}& 0& \sqrt{\frac{2}{2}}&\cdots & 0& 0\\ 0& \sqrt{\frac{2}{2}}& 0&\cdots & 0& 0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0& 0& 0&\cdots & 0&\sqrt{\frac{N-1}{2}}\\0& 0& 0&\cdots & \sqrt{\frac{N-1}{2}}&0\end{pmatrix}. \] The recurrence equation (\ref{receqg}) and the Christoffel-Darboux formula \cite{Sze75} can be used to solve the eigenproblem \[ Hu_k=x_k u_k,\quad k=1,2,\ldots, N, \] which is a finite-dimensional version of (\ref{eighinf}). The eigenvalues $x_k$ are the zeros of $H_N(x)$, and the $k$th eigenvector $u_k$ is given by \[ c_k\left(s_1 H_0(x_k),s_2 H_1(x_k),\cdots,s_N H_{N-1}(x_k)\right)^T, \] where $s_1,\ldots,s_N$ are the diagonal elements of $S$ and $c_k$ is a normalization constant that can be determined from the condition $u_k^T\,u_k=1$, i.e., from \[ c_k^2\,\sum_{n=0}^{N-1} \frac{H_n(x_k)H_n(x_k)}{2^n n!}=1. \] Therefore, \[ c_k=\sqrt{\frac{2^{N-1}\,(N-1)!}{N}}\,\frac{(-1)^{N+k}}{ H_{N-1}(x_k)}. \] Thus, the components of the orthonormal vectors $u_k$, $k=1,2,\ldots, N$, are \begin{equation}\label{ortvec} (u_k)_n=(-1)^{N+k}\sqrt{\frac{2^{N-n}\,(N-1)!}{N\,(n-1)!}}\,\frac{H_{n-1}(x_k)}{H_{N-1}(x_k)}, \end{equation} $n=1,\ldots,N$. Let $U$ be the orthogonal matrix whose $k$th column is $u_k$, and let us define the matrix \[ {\mathcal F}_z=\sqrt{2\pi}U^{-1}D(z)U, \] where $D(z)$ is the diagonal matrix $D(z)=\text{diag}\{1,z,z^2,\ldots,z^{N-1}\}$ and $z$ is a complex number. Therefore, the components of ${\mathcal F}_z$ are given by \begin{eqnarray}\label{tmat} ({\mathcal F}_z)_{jk}&=&\sqrt{2\pi}\,\frac{(-1)^{j+k}\,2^{N-1}\,(N-1)!}{N\,H_{N-1}(x_j)H_{N-1}(x_k)}\sum_{n=0}^{N-1}\frac{z^n}{2^n\,n!}H_n(x_j)H_n(x_k). \end{eqnarray} Next, we want to prove that if $N$ is large enough, (\ref{tmat}) approaches the kernel of the fractional Fourier transform evaluated at $x=x_j$, $y=x_k$.
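The eigenvalue claim above, that the spectrum of the symmetric matrix $H$ consists of the $N$ zeros of $H_N(x)$, can be checked directly (a small numerical sketch; $N=12$ is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial.hermite import hermroots

# Symmetric tridiagonal matrix H with off-diagonal entries sqrt(n/2),
# n = 1, ..., N-1, as constructed above.
N = 12
off = np.sqrt(np.arange(1, N) / 2.0)
H = np.diag(off, 1) + np.diag(off, -1)
eigvals = np.sort(np.linalg.eigvalsh(H))

# zeros of the (physicists') Hermite polynomial H_N
coeffs = np.zeros(N + 1)
coeffs[N] = 1.0
zeros = np.sort(hermroots(coeffs))
print(np.max(np.abs(eigvals - zeros)))   # agreement to rounding error
```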
To this end, we use the asymptotic expression for $H_N(x)$ \cite{Sze75} \begin{equation}\label{asymhnt} H_N(x)\simeq\frac{\Gamma(N+1)e^{x^2/2}}{\Gamma(N/2+1)}\cos(\sqrt{2N+1}\,\,x-\frac{N\pi}{2}). \end{equation} Thus, the asymptotic form of the zeros of $H_N(x)$ is \begin{equation}\label{asymcer} x_k= \left(\frac{2 k-N-1}{\sqrt{2N}}\right)\frac{\pi}{2}, \end{equation} $k=1,2,\ldots,N$. The use of (\ref{asymhnt}) and (\ref{asymcer}) yields \[ H_{N-1}(x_k)\simeq (-1)^{N+k}\,\frac{\Gamma(N)}{\Gamma(\frac{N+1}{2})}\,e^{x_k^2/2},\quad N\to\infty, \] and the substitution of this asymptotic expression in (\ref{tmat}) yields \begin{eqnarray*} ({\mathcal F}_z)_{jk}&\simeq&\sqrt{2\pi}\,\frac{2^{N-1}\,[\Gamma(\frac{N+1}{2})]^2}{\Gamma(N+1)}\,\,e^{-(x_j^2+x_k^2)/2}\sum_{n=0}^{\infty}\frac{z^n}{2^n\,n!}H_n(x_j)H_n(x_k). \end{eqnarray*} Finally, Stirling's formula and Mehler's formula \cite{Erd53} produce \begin{equation}\label{fasy} ({\mathcal F}_z)_{jk}\simeq \sqrt{\frac{2}{1-z^2}}\,e^{-\frac{(1+z^2)(x_j^2+x_k^2)-4x_jx_kz}{2(1-z^2)}}\Delta x_k, \end{equation} where $\Delta x_k$ is the difference between two consecutive asymptotic Hermite zeros, i.e., \begin{equation}\label{deltk} \Delta x_k=x_{k+1}-x_k=\frac{\pi}{\sqrt{2N}}. \end{equation} Let us now consider the vector of samples of a given function $f(x)$ \[ f=(f(x_1),f(x_2),\ldots,f(x_N))^T. \] The multiplication of the matrix ${\mathcal F}_z$ by the vector $f$ gives the vector $g$ with entries \begin{eqnarray}\label{primcuad} g_j&=&\sum_{k=1}^N({\mathcal F}_z)_{jk}f(x_k)\simeq \sqrt{\frac{2}{1-z^2}}\sum_{k=1}^N e^{-\frac{(1+z^2)(x_j^2+x_k^2)-4x_jx_kz}{2(1-z^2)}}f(x_k)\Delta x_k,\nonumber \end{eqnarray} where $j=1,2,\ldots,N$. This equation is a Riemann sum for the integral \begin{eqnarray}\label{eqifft} {\mathscr F}_z[f(x),y]&=&\sqrt{\frac{2}{1-z^2}}\int_{-\infty}^\infty e^{-\frac{(1+z^2)(y^2+x^2)-4xyz}{2(1-z^2)}}f(x)dx,\nonumber \end{eqnarray} where $\vert z\vert<1$.
Therefore, setting $y_j=x_j$, \begin{equation}\label{cuadtrfor} {\mathscr F}_z[f(x),y_j]\simeq\sum_{k=1}^N({\mathcal F}_z)_{jk}f(x_k),\quad N\to\infty. \end{equation} Note that ${\mathscr F}_z[g(x),y]$ is, up to a constant, the continuous fractional Fourier transform \cite{Nam80} of $g(x)$; therefore, ${\mathcal F}_z$ is a discrete fractional Fourier transform. \section{A fast linear canonical transform}\label{sectres} Firstly, note that if $b\ne 0$, the LCT can be written as a chirp-FT-chirp transform \begin{eqnarray}\label{lctdos} {\mathcal L}^{\{a,b,c,d\}}[f(x),y]&=&\frac{e^{\frac{idy^2}{2b}}}{\sqrt{2\pi i b}}\int_{-\infty}^\infty e^{-\frac{ixy}{b}}e^{\frac{iax^2}{2b}}f(x)dx.\nonumber \end{eqnarray} Thus, for $b\ne 0$, the LCT of the function $f(x)$ can be represented by the $1/b$-scaled Fourier transform of the function $g(x)=e^{\frac{iax^2}{2b}}f(x)$, multiplied by $\frac{e^{\frac{idy^2}{2b}}}{\sqrt{2\pi i b}}$.\\ On the other hand, note that for the case $z=\pm i$, (\ref{fasy}) yields a discrete Fourier transform $({\mathcal F}_{\pm i})_{jk}\simeq e^{\pm ix_jx_k}\Delta x_k$, which can be related to the standard DFT as follows. The use of (\ref{asymcer}) and (\ref{deltk}) yields \begin{equation}\label{fftuno} ({\mathcal F}_{\pm i})_{jk}=e^{\pm ix_jx_k}\Delta x_k=\frac{\pi}{\sqrt{2N}} e^{\pm i\frac{\pi^2}{2N} \left(j-\frac{N-1}{2}\right) \left(k-\frac{N-1}{2}\right)}. \end{equation} Since $\sum_{k=1}^N({\mathcal F}_i)_{jk}f(x_k)$ is a quadrature, and therefore an approximation, of \[ g(y_j)=\int_{-\infty}^\infty e^{i y_j x} f(x)dx, \] the scaled Fourier transform \begin{equation}\label{furesc} \int_{-\infty}^\infty e^{i \kappa y_j x} f(x)dx=g(\kappa y_j) \end{equation} has the quadrature $\sum_{k=1}^N\tilde{F}_{jk}f(x_k)$, where \begin{equation}\label{fftdos} \tilde{F}_{jk}=\frac{\pi}{\sqrt{2N}}e^{i \kappa\frac{\pi^2}{2N} \left(j-\frac{N-1}{2}\right) \left(k-\frac{N-1}{2}\right)}.
\end{equation} If we choose $\kappa=4/\pi$, (\ref{fftdos}) takes the form \begin{eqnarray}\label{dislctn} F_{jk}&=&\frac{\pi e^{i\frac{\pi}{2}\frac{(N-1)^2}{N}}}{\sqrt{2N}}\left[e^{-i\pi\frac{N-1}{N} j}\right]\left[e^{i\frac{2\pi}{N}jk}\right] \left[e^{-i\pi\frac{N-1}{N} k}\right], \nonumber \end{eqnarray} for $ j,k=0,1,2,\ldots,N-1$, and $\sum_{k=1}^NF_{jk}f(x_k)$ is an approximation of $g(4 y_j/\pi)$. If we now choose $\kappa=4b/\pi$, but keep the same matrix (\ref{dislctn}), then $\sum_{k=1}^NF_{jk}f(x_k)$ is an approximation of \[ \int_{-\infty}^\infty e^{i\frac{y_j}{b} x} f(x)dx. \] If we now replace $f(x)$ by $e^{\frac{iax^2}{2b}}f(x)$ and take into account (\ref{lctdos}), we find that \[ \sum_{k=1}^NF_{jk}e^{iax_k^2/2b}f(x_k) \] is an approximation of the product of functions $ \left(\frac{e^{\frac{idy^2}{2b}}}{\sqrt{2\pi i b}}\right)^{-1}{\mathcal L}^{\{a,b,c,d\}}[f(x),y]$ evaluated at $y_j=4bx_j/\pi$. Therefore, a discrete (scaled) linear canonical transform $L$ can be given in closed form. If we denote by $G(y)$ the LCT of $f(x)$, then \[ G(y_j)=G(4bx_j/\pi)=\sum_{k=1}^N(S_1FS_2)_{jk}f(x_k), \] where $S_1$ and $S_2$ are diagonal matrices whose diagonal elements are $e^{\frac{idy_j^2}{2b}}/\sqrt{2\pi i b}$ and $e^{iax_j^2/2b}$, respectively. As can be seen, the matrix $L=S_1FS_2$, which gives the discrete LCT, i.e., the XFT, consists of a chirp-DFT-chirp transformation, where DFT stands for the standard discrete Fourier transform. Therefore, we can use any FFT to give a fast computation of the linear canonical transform $G=Lf$.\\ Now, the fast algorithm for the linear canonical transform can be given straightforwardly.
\fbox{ \begin{minipage}{15cm} \begin{center} \vskip 0.4cm Algorithm \end{center} \hrule width 15cm \vskip.3truecm To compute an approximation $G=(G_1,G_2,\ldots,G_N)^T$ of the linear canonical transform $G(y)={\mathcal L}^{\{a,b,c,d\}}[f(x),y]$ evaluated at $y_j=4bx_j/\pi$, where $x_j= \left(\frac{2 j-N-1}{\sqrt{2N}}\right)\frac{\pi}{2}$: \begin{enumerate} \item For given $N$, set up the vector $u$ of components \begin{eqnarray*} u_k&=&e^{-i\pi \frac{(k-1)(N-1)}{N}}e^{iax_k^2/2b}f\left(\pi \frac{2 k-N-1}{2\sqrt{2N}}\right), \end{eqnarray*} $k=1,2,\ldots,N$. \item Set $y_j=4bx_j/\pi$ and compute the diagonal matrix $S$ according to \[ S_{jk}=\frac{\pi e^{i\frac{\pi}{2}\frac{(N-1)^2}{N}}}{\sqrt{2N}}\frac{e^{\frac{idy_j^2}{2b}}e^{-i\pi\frac{N-1}{N}(j-1)}}{\sqrt{2\pi i b}}\delta_{jk}, \] $j,k=1,2,\ldots,N$. \item Let $D_F$ be the discrete Fourier transform matrix, i.e., $(D_F)_{jk}=e^{i\frac{2\pi}{N}jk}$, $j,k=0,1,2,\ldots,N-1$. Obtain the approximation $G_j$ to $G(\frac{4b}{\pi} x_j)$ by computing the matrix-vector product \begin{equation}\label{algfxft1} G=SD_Fu, \end{equation} with a standard FFT algorithm. \end{enumerate} \end{minipage}} \section{Example} For this example we take an integral formula found in \cite{Gra94} that gives \begin{eqnarray}\label{exa1} G(y)&=&\frac{\sqrt{\pi}}{\sqrt{2\pi ib}(\alpha^2+\frac{a^2}{4 b^2})^{1/4}} e^{\frac{\alpha(\beta^2-\alpha\gamma)}{\alpha^2+\frac{a^2}{4 b^2}}} e^{\frac{i}{2} \arctan(\frac{a}{2\alpha b})} e^{-\frac{\alpha y^2+2 \beta a y+a^2\gamma}{4 b^2\alpha^2+a^2}} \\ &\times& e^{i\frac{a c y^2}{2(4 b^2 \alpha^2+a^2)}} e^{i\frac{2 b (\alpha^2 d y^2+2 \beta\alpha y+\beta^2 a)}{4 b^2 \alpha^2+a^2}},\nonumber \end{eqnarray} if $f(x)=e^{-(\alpha x^2+2 \beta x+\gamma)}$, $\alpha>0$. Figure 1 shows the exact LCT with $\alpha=1$, $\beta=2$, $\gamma=3$, $a=1$, $b=2$, $c=1/2$, and $d=2$, compared with the approximation given by the XFT.
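The boxed algorithm above can be sketched as follows (a hedged implementation of our own; the function name is hypothetical, and the parameters are those of the example with $\alpha=1$, $\beta=2$, $\gamma=3$). The check confirms that evaluating $G=SD_Fu$ with an FFT reproduces the dense $O(N^2)$ matrix-vector product:

```python
import numpy as np

def xft_lct(f, a, b, c, d, N=256, use_fft=True):
    """Sketch of the boxed algorithm: input chirp, DFT, output chirp/scaling.
    With use_fft=False the DFT is applied as a dense matrix (O(N^2)),
    which the O(N log N) FFT path must reproduce."""
    assert abs(a * d - b * c - 1.0) < 1e-12
    k = np.arange(N)
    x = (2 * (k + 1) - N - 1) / np.sqrt(2 * N) * np.pi / 2  # asymptotic Hermite zeros
    y = 4 * b * x / np.pi
    # step 1: chirp-multiplied samples (0-based version of the boxed u_k)
    u = np.exp(-1j * np.pi * k * (N - 1) / N) * np.exp(1j * a * x**2 / (2 * b)) * f(x)
    if use_fft:
        Du = N * np.fft.ifft(u)            # (D_F u)_j = sum_k e^{2 pi i jk/N} u_k
    else:
        DF = np.exp(2j * np.pi * np.outer(k, k) / N)
        Du = DF @ u
    # step 2: diagonal output factor S (0-based version of the boxed S_jj)
    S = (np.pi * np.exp(1j * np.pi * (N - 1) ** 2 / (2 * N)) / np.sqrt(2 * N)
         * np.exp(1j * d * y ** 2 / (2 * b))
         * np.exp(-1j * np.pi * (N - 1) * k / N) / np.sqrt(2j * np.pi * b))
    return y, S * Du

f = lambda x: np.exp(-(x**2 + 4 * x + 3))   # Gaussian of the example
y, G_fft = xft_lct(f, 1.0, 2.0, 0.5, 2.0, N=64)
_, G_dense = xft_lct(f, 1.0, 2.0, 0.5, 2.0, N=64, use_fft=False)
print(np.max(np.abs(G_fft - G_dense)))      # FFT and dense paths agree
```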
\begin{figure} \caption{Real part ({\bf A}) of the exact LCT compared with the approximation given by the XFT.} \label{Figu} \end{figure} Figure 2 shows the exact Fresnel transform of $f(x)=e^{-(\alpha x^2+2\beta x+\gamma)}$ with $\alpha=2$, $\beta=1$, $\gamma=3$, $a=1$, $b=100$, $c=0$, and $d=1$, compared with the approximation given by the XFT. \begin{figure} \caption{Real part ({\bf A}) of the exact Fresnel transform compared with the approximation given by the XFT.} \label{Figd} \end{figure} \section{Conclusion}\label{seccuatro} We have obtained a discrete linear canonical transform and a fast algorithm to compute this transform by projecting the space of functions onto a vector space spanned by a finite number of Hermite functions. The XFT is a discrete LCT given by a unitary matrix in closed form in which the DFT can be found at the core, surrounded by diagonal transformations, which makes it easy to implement as a fast algorithm. Since this discrete LCT is related to a quadrature formula of the fractional Fourier transform, it yields accurate results. \end{document}
\begin{document} \title{A new class of complex nonsymmetric algebraic Riccati equations} \author{ {\sc Liqiang Dong\thanks{Email: [email protected]}, Jicheng Li\thanks{Corresponding author. Email: [email protected]} and Xuenian Liu\thanks{Email: [email protected]}} \\[2pt] School of Mathematics and Statistics, Xi'an Jiaotong University,\\ Xi'an, 710049, People's Republic of China } \shortauthorlist{Liqiang Dong \emph{et al.}} \maketitle \begin{abstract} {In this paper, we first propose a new parameterized definition of the comparison matrix of a given complex matrix, which generalizes the definition proposed in \cite{Axe1}. Based on this, we propose a new class of complex nonsymmetric algebraic Riccati equations (NAREs) which extends the class of nonsymmetric algebraic Riccati equations introduced in \cite{Axe1}. We also generalize the definition of the extremal solution of an NARE and show that the extremal solution of an NARE exists and is unique. Some classical algorithms can be applied to search for the extremal solution of an NARE, including Newton's method, some fixed-point iterative methods and doubling algorithms. Moreover, we show that Newton's method is quadratically convergent and the fixed-point iterative method is linearly convergent. We also give some concrete strategies for choosing suitable parameters such that the doubling algorithms can be used to deliver the extremal solutions, and show that the two doubling algorithms with suitable parameters are quadratically convergent.
Numerical experiments show that our strategies for parameters are effective.} {parameterized comparison matrix; nonsymmetric algebraic Riccati equations (NAREs); the extremal solution; doubling algorithms; parameter selection strategy.} \end{abstract} \section{Introduction} \label{sec;introduction} A complex nonsymmetric algebraic Riccati equation (NARE) has the following form \begin{eqnarray}\label{e1} XCX-XD-AX+B=0, \end{eqnarray} where $A\in \mathbb{C}^{m\times m}, B\in \mathbb{C}^{m\times n}, C\in \mathbb{C}^{n\times m}$ and $D\in \mathbb{C}^{n\times n}$ are known matrices and $X\in \mathbb{C}^{m\times n}$ is an unknown matrix. Let \begin{eqnarray}\label{e2} H= \left( \begin{matrix} D&-C\\ B&-A \end{matrix} \right),~~~Q=JH=\left( \begin{matrix} D&-C\\ -B&A \end{matrix} \right), \end{eqnarray} where $J= \left( \begin{matrix} I_n&0\\ 0&-I_m \end{matrix} \right) $. The complementary (dual) algebraic Riccati equation of the NARE (\ref{e1}) is \begin{eqnarray}\label{e3} YBY-YA-DY+C=0, \end{eqnarray} which will be abbreviated as cARE, where $Y\in \mathbb{C}^{n\times m}$ is the unknown matrix. We now introduce some notation. $\textbf{1}$ denotes a column vector whose elements all equal one and whose size is implied by the context. $[A]_{ij}$ denotes the $(i,j)$ element of the matrix $A$. The inequality $A\geq B$ ($A>B$) means that $[A]_{ij}\geq [B]_{ij}$ ($[A]_{ij}> [B]_{ij}$) for all $i,j$. For a square matrix $A$, we use $\rho(A)$ to denote its spectral radius and use ${\rm diag}(A)$ and ${\rm offdiag}(A)$ to denote its diagonal part and off-diagonal part, respectively. A real square matrix $A$ is called a nonsingular M-matrix if all of its off-diagonal elements are nonpositive and $Au>0$ for some positive vector $u$. Let ${\rm j}$ denote the imaginary unit. $|z|$, ${\rm Re}(z)$ and ${\rm Im}(z)$ denote the modulus, the real part and the imaginary part of the complex number $z$, respectively.
For a complex matrix $A\in \mathbb{C}^{m\times n}$, $|A|$ is defined by $[|A|]_{ij}=|[A]_{ij}|$ for all $i,j$. Let $$z_\omega=-(1-\omega)+\omega {\rm j},~~~z_{\omega}^\bot=\omega+{\rm j}(1-\omega),$$ where $\omega$ is a real number in the interval $[0,1]$. Then $$L(z_\omega,\kappa)=lz_\omega+\frac{\kappa}{1-\omega}{\rm j},$$ where $\kappa$ is a given real number and $l$ ranges over all real numbers, represents a straight line passing through $\frac{\kappa}{1-\omega}{\rm j}$; and $R(z_{\omega}^\bot)= rz_{\omega}^\bot$, where $r$ ranges over all positive real numbers, represents a ray orthogonal to $L(z_\omega,\kappa)$. In particular, $L(z_\omega,0)=lz_\omega$, where $l$ ranges over all real numbers, represents a straight line passing through the origin. Obviously, $L(z_\omega,0)$ is parallel to $L(z_\omega,\kappa)$ for any nonzero real number $\kappa$. In the literature, many researchers have studied the NARE (\ref{e1}). Some definitions of comparison matrices of a given matrix are introduced as auxiliary tools. Definition \ref{thm39} below was given in \cite{Axe1}. Definition \ref{thm31} below is the usual definition of the comparison matrix. Definition \ref{thm32} below is a generalization of Definition \ref{thm39}. \begin{definition}{\rm (\cite{Axe1}).}\label{thm39} The first comparison matrix of a complex square matrix $A$, denoted by $\widehat{A}$, is defined by \begin{eqnarray}\label{e4} [\widehat{A}]_{ij}= \begin{cases} {\rm Re}([A]_{ij}),&i=j,\\ -|[A]_{ij}|,&i\neq j. \end{cases} \end{eqnarray} \end{definition} \begin{definition}\label{thm31} The second comparison matrix of a complex square matrix $A$, denoted by $\overline{A}$, is defined by \begin{eqnarray}\label{e5} [\overline{A}]_{ij}= \begin{cases} |[A]_{ij}|,&i=j,\\ -|[A]_{ij}|,&i\neq j.
\end{cases} \end{eqnarray} \end{definition} \begin{definition}\label{thm32} The $\omega$-comparison matrix of a complex square matrix $A$, denoted by $A_{\omega}$, is defined by \begin{eqnarray}\label{e6} [A_{\omega}]_{ij}= \left\{ \begin{array}{lc} \omega{\rm Re}([A]_{ij})+(1-\omega){\rm Im}([A]_{ij}),&i=j,\\ -|[A]_{ij}|,&i\neq j, \end{array} \right. \end{eqnarray} where $\omega$ is a given real number in the interval $[0,1]$. \end{definition} Next, we recall some terminology that has been introduced in the literature. We say that the NARE (\ref{e1}) is an M-matrix algebraic Riccati equation (MARE) if the matrix $Q$ defined by (\ref{e2}) is a nonsingular M-matrix or an irreducible singular M-matrix \cite[see][]{Axe7}. We say that the NARE (\ref{e1}) is in class $\mathbb{M}$ if the matrix $Q$ is simply a nonsingular M-matrix \cite[see][]{Axe3}. We say that the NARE (\ref{e1}) is in class $\mathbb{H}^+$ if the second comparison matrix $\overline{Q}$ of the matrix $Q$ is a nonsingular M-matrix and the diagonal elements of the matrix $Q$ are positive. We say that the NARE (\ref{e1}) is in class $\mathbb{H}^-$ if the second comparison matrix $\overline{Q}$ of the matrix $Q$ is a nonsingular M-matrix and the diagonal elements of the matrix $Q$ are negative. In fact, an NARE in class $\mathbb{H}^+$ and an NARE in class $\mathbb{H}^-$ can be transformed into each other \cite[see][]{Axe2}. We say that the NARE (\ref{e1}) is in class $\mathbb{H}^*$ if the first comparison matrix $\widehat{Q}$ of the matrix $Q$ is a nonsingular M-matrix. We say that the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ if the $\omega$-comparison matrix $Q_{\omega}$ of the matrix $Q$ is a nonsingular M-matrix. We say that the NARE (\ref{e1}) is in class $\mathbb{H}$ if the second comparison matrix $\overline{Q}$ of the matrix $Q$ is a nonsingular M-matrix, i.e., $Q$ is an H-matrix. We observe that class $\mathbb{H}^\omega$ is a subclass of class $\mathbb{H}$.
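As a small numerical illustration of Definition \ref{thm32}, the $\omega$-comparison matrix and the M-matrix test used throughout this paper (nonpositive off-diagonal entries and $Au>0$ for some positive vector $u$) can be coded in a few lines. The sketch below is ours; in the M-matrix check we simply try $u=\textbf{1}$, which is a convenient sufficient choice, not the only admissible one.

```python
import numpy as np

def omega_comparison(A, omega):
    # Definition 3: diagonal omega*Re(A_ii) + (1-omega)*Im(A_ii),
    # off-diagonal -|A_ij| (np.abs of a complex entry is its modulus)
    M = -np.abs(A)
    np.fill_diagonal(M, omega*np.real(np.diag(A)) + (1-omega)*np.imag(np.diag(A)))
    return M

def looks_like_nonsingular_M_matrix(M, u=None):
    # Sufficient check following the definition in the text: off-diagonal
    # entries nonpositive and M @ u > 0 for a positive vector u
    # (we try u = 1 by default; failure for this u is not conclusive).
    u = np.ones(M.shape[0]) if u is None else u
    off = M - np.diag(np.diag(M))
    return bool(np.all(off <= 0) and np.all(M @ u > 0))
```

For instance, for $A=\left(\begin{smallmatrix}2+{\rm j}&-0.5\\0.3&1+2{\rm j}\end{smallmatrix}\right)$ and $\omega=1/2$, the $\omega$-comparison matrix is $\left(\begin{smallmatrix}1.5&-0.5\\-0.3&1.5\end{smallmatrix}\right)$, which is strictly diagonally dominant with positive diagonal and hence a nonsingular M-matrix.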
At present, NAREs have attracted much attention; MAREs in particular have been studied by many researchers, see, e.g., \cite{Axe3}. It is well known that an MARE has a unique minimal nonnegative solution and that this solution can be obtained by many classical methods, mainly including the fixed-point iterative methods, Newton's method and doubling algorithms (SDA by \cite{Axe14}, SDA-ss by \cite{Axe12}, ADDA by \cite{Axe9}). The NAREs in class $\mathbb{H}^+$ ($\mathbb{H}^-$) have been studied in \cite{Axe2}. The NAREs in class $\mathbb{H}^*$ have been studied in \cite{Axe1} and \cite{Axe5}. The extremal solution of an NARE in class $\mathbb{H}^*$ exists and is unique and can also be obtained by the existing classical algorithms. We can see that the existing methods for the NAREs in class $\mathbb{M}$ can be applied to solve NAREs in classes $\mathbb{H}^+$ ($\mathbb{H}^-$) and $\mathbb{H}^*$. We can observe that Definition \ref{thm32} reduces to Definition \ref{thm39} and $\mathbb{H}^\omega$ reduces to $\mathbb{H}^*$ when $\omega=1$. The NAREs in class $\mathbb{H}^\omega$ have not been studied for general $\omega \in [0,1]$, only for $\omega=1$. Therefore, in this paper we consider how to solve the NAREs in class $\mathbb{H}^\omega$ for general $\omega \in[0,1]$. The remainder of this paper is organized as follows. Section 2 gives some preliminaries which will be helpful for subsequent arguments. A generalized definition of the extremal solution of the NARE (\ref{e1}) in class $\mathbb{H}^\omega$ is given, and the existence and uniqueness of the extremal solution of an NARE (\ref{e1}) in class $\mathbb{H}^\omega$ are proved, in Section 3. In Sections 4 and 5, Newton's method and fixed-point iterative methods are applied to deliver the extremal solution of the NARE (\ref{e1}) in class $\mathbb{H}^\omega$, respectively.
In Section 6, two existing doubling algorithms, including ADDA and SDA, are applied to solve the NARE (\ref{e1}) in class $\mathbb{H}^\omega$ by choosing suitable parameters. In Section 7, numerical experiments show that our algorithms are effective. Some concluding remarks are given in Section 8. \section{Preliminary knowledge} \begin{lemma}\label{thm1} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular M-matrix. 1. If $B\geq A$ is a $Z$-matrix, then $B$ is a nonsingular M-matrix with $B^{-1}\leq A^{-1}$. 2. Let $A$ be partitioned as \begin{eqnarray*} A=\left( \begin{matrix} A_{11}&A_{12}\\ A_{21}&A_{22} \end{matrix} \right), \end{eqnarray*} where $A_{11}$ and $A_{22}$ are square matrices. Then $A_{11}$, $A_{22}$ and their Schur complements $$A_{22}-A_{21}A_{11}^{-1}A_{12},~~~A_{11}-A_{12}A_{22}^{-1}A_{21},$$ are nonsingular M-matrices. In addition, if $A\textbf{1}>0$, then $A_{11}\textbf{1}>0$, $A_{22}\textbf{1}>0$, and $$(A_{22}-A_{21}A_{11}^{-1}A_{12})\textbf{1}>0,~~~(A_{11}-A_{12}A_{22}^{-1}A_{21})\textbf{1}>0.$$ \end{lemma} \begin{lemma}{\rm (\cite[see][]{Axe1})}\label{thm2} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular M-matrix. If $B\in \mathbb{C}^{n\times n}$ satisfies \begin{eqnarray*} |{\rm diag}(B)|\geq{\rm diag}(A),~~~|{\rm offdiag}(B)|\leq |{\rm offdiag}(A)|, \end{eqnarray*} then $B$ is nonsingular with $|B^{-1}|\leq A^{-1}$. \end{lemma} \begin{lemma}\label{thm3} Let $B\in \mathbb{C}^{n\times n}$ and suppose that $B_{\omega}$ is a nonsingular M-matrix. Then $B$ is nonsingular with \begin{eqnarray*} |B^{-1}|\leq (B_{\omega})^{-1}.
\end{eqnarray*} \end{lemma} \begin{proof} Because \begin{eqnarray*} \begin{aligned} |{\rm diag}(B)|&=\sqrt{({\rm Re}({\rm diag}(B)))^2+({\rm Im}({\rm diag}(B)))^2}\\ &\geq\sqrt{\omega^2+(1-\omega)^2}\sqrt{({\rm Re}({\rm diag}(B)))^2+({\rm Im}({\rm diag}(B)))^2}\\ &\geq \omega {\rm Re}({\rm diag}(B))+(1-\omega){\rm Im}({\rm diag}(B))\\ &={\rm diag}(B_{\omega}) \end{aligned} \end{eqnarray*} and $$|{\rm offdiag}(B)|=|{\rm offdiag}(B_{\omega})|,$$ by Lemma \ref{thm2}, we have that $B$ is a nonsingular matrix with \begin{eqnarray*} |B^{-1}|\leq (B_{\omega})^{-1}. \end{eqnarray*} \end{proof} \begin{lemma}\label{thm4} Let $B\in \mathbb{C}^{n\times n}$ and suppose that $B_{\omega}$ is a nonsingular M-matrix. Let $B$ be partitioned as \begin{eqnarray*} B=\left( \begin{matrix} B_{11}&B_{12}\\ B_{21}&B_{22} \end{matrix} \right), \end{eqnarray*} where $B_{11}$ and $B_{22}$ are square matrices. Let $S_{11}=B_{11}-B_{12}B_{22}^{-1}B_{21}$ and $S_{22}=B_{22}-B_{21}B_{11}^{-1}B_{12}$ be the Schur complements of $B_{22}$ and $B_{11}$, respectively. Then, $(B_{11})_{\omega}$, $(B_{22})_{\omega}$, $(S_{11})_{\omega}$ and $(S_{22})_{\omega}$ are nonsingular M-matrices. Moreover, if $B_{\omega} \textbf{1}>0$, then $$(B_{11})_{\omega}\textbf{1}>0,~~(B_{22})_{\omega}\textbf{1}>0,~~(S_{11})_{\omega}\textbf{1}>0,~~(S_{22})_{\omega}\textbf{1}>0.$$ \end{lemma} \begin{proof} We have $$B_{\omega}= \left( \begin{matrix} (B_{11})_{\omega}&-|B_{12}|\\ -|B_{21}|&(B_{22})_{\omega} \end{matrix} \right).$$ By Lemma \ref{thm1}, we know that $(B_{11})_{\omega}$, $(B_{22})_{\omega}$ and their Schur complements, $$(B_{22})_{\omega}-|B_{21}|((B_{11})_{\omega})^{-1}|B_{12}|,~~~(B_{11})_{\omega}-|B_{12}|((B_{22})_{\omega})^{-1}|B_{21}|$$ are also nonsingular M-matrices. By Lemma \ref{thm3}, $|B_{11}^{-1}|\leq ((B_{11})_{\omega})^{-1}, |B_{22}^{-1}|\leq ((B_{22})_{\omega})^{-1}$.
Then, \begin{eqnarray*} \begin{aligned} &~~~~{\rm diag}((S_{11})_{\omega})\\ &=\omega {\rm Re}({\rm diag}(S_{11}))+(1-\omega){\rm Im}({\rm diag}(S_{11}))\\ &=\omega {\rm Re}({\rm diag}(B_{11}))+(1-\omega){\rm Im}({\rm diag}(B_{11}))-\omega {\rm Re}({\rm diag}(B_{12}B_{22}^{-1}B_{21}))-(1-\omega){\rm Im}({\rm diag}(B_{12}B_{22}^{-1}B_{21}))\\ &\geq \omega {\rm Re}({\rm diag}(B_{11}))+(1-\omega){\rm Im}({\rm diag}(B_{11}))-|{\rm diag}(B_{12}B_{22}^{-1}B_{21})|\\ &\geq {\rm diag}((B_{11})_{\omega})-{\rm diag}(|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|)\\ &={\rm diag}((B_{11})_{\omega}-|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|), \end{aligned} \end{eqnarray*} \begin{eqnarray*} \begin{aligned} {\rm offdiag}((S_{11})_{\omega})&=-|{\rm offdiag}(B_{11}-B_{12}B_{22}^{-1}B_{21})|\\ &\geq -|{\rm offdiag}(B_{11})|-|{\rm offdiag}(B_{12}B_{22}^{-1}B_{21})|\\ &\geq {\rm offdiag}((B_{11})_{\omega})-{\rm offdiag}(|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|)\\ &={\rm offdiag}((B_{11})_{\omega}-|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|), \end{aligned} \end{eqnarray*} which gives $$(S_{11})_{\omega}\geq (B_{11})_{\omega}-|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|,$$ so that $(S_{11})_{\omega}$ is a nonsingular M-matrix by Lemma \ref{thm1}. Similarly, $(S_{22})_{\omega}$ is also a nonsingular M-matrix. If $B_{\omega}\textbf{1}>0$, then $(B_{11})_{\omega}\textbf{1}>0$ and $((B_{11})_{\omega}-|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|)\textbf{1}>0$. Moreover, $$(S_{11})_{\omega}\textbf{1}\geq ((B_{11})_{\omega}-|B_{12}||((B_{22})_{\omega})^{-1}||B_{21}|)\textbf{1}>0.$$ Similarly, $(B_{22})_{\omega}\textbf{1}>0$ and $(S_{22})_{\omega}\textbf{1}>0$. \end{proof} \section{The existence and uniqueness of the extremal solution of the NARE (\ref{e1}) in class {\rm $\mathbb{H}^\omega$}} \begin{theorem}\label{thm5} Suppose the NARE {\rm(\ref{e1})} is in class {\rm $\mathbb{H}^\omega$} and $Q_{\omega}\textbf{1}>0$.
Let $\widetilde{Q}$ be a nonsingular M-matrix satisfying $\widetilde{Q}\leq Q_{\omega}$ and $\widetilde{Q}\textbf{1}>0$, and partition $\widetilde{Q}$ as \begin{eqnarray}\label{e8} \widetilde{Q}= \left( \begin{matrix} \widetilde{D}&-\widetilde{C}\\ -\widetilde{B}&\widetilde{A} \end{matrix} \right), \end{eqnarray} where $\widetilde{A}$, $\widetilde{B}$, $\widetilde{C}$ and $\widetilde{D}$ have the same sizes as $A$, $B$, $C$ and $D$, respectively. Let $\widetilde{\Phi}$ and $\widetilde{\Psi}$ be the minimal nonnegative solutions of the NARE \begin{eqnarray}\label{e7} X\widetilde{C}X-X\widetilde{D}-\widetilde{A}X+\widetilde{B}=0 \end{eqnarray} and its cARE \begin{eqnarray}\label{e9} Y\widetilde{B}Y-Y\widetilde{A}-\widetilde{D}Y+\widetilde{C}=0, \end{eqnarray} respectively. Then 1. The NARE {\rm(\ref{e1})} has a unique solution, denoted by $\Phi$, such that $|\Phi|\leq\widetilde{\Phi}$. Moreover, $D-C\Phi$ is a nonsingular matrix whose diagonal entries are in the upper right of $L(z_\omega,0)$, and $\Phi$ is the unique extremal solution of the NARE {\rm(\ref{e1})}, i.e., the solution such that the eigenvalues of $D-C\Phi$ are in the upper right of $L(z_\omega,0)$. 2. The cARE {\rm(\ref{e3})} has a unique solution, denoted by $\Psi$, such that $|\Psi|\leq\widetilde{\Psi}$. Moreover, $A-B\Psi$ is a nonsingular matrix whose diagonal entries are in the upper right of $L(z_\omega,0)$, and $\Psi$ is the unique extremal solution of the cARE {\rm(\ref{e3})}, i.e., the solution such that the eigenvalues of $A-B\Psi$ are in the upper right of $L(z_\omega,0)$. \end{theorem} \begin{proof} We only prove Theorem \ref{thm5} for the NARE (\ref{e1}). Similar arguments also work for the dual equation (\ref{e3}) and are thus omitted. Our proof is similar to that given in \cite{Axe1}. Here we only emphasize the differences. The proof will be completed in four steps: \rmnum{1}.
Define the linear operators $\Upsilon$ and $\widetilde{\Upsilon}$ and prove the invertibility of $\Upsilon$ and $\widetilde{\Upsilon}$. \rmnum{2}. The mapping $f:\mathbb{C}^{m\times n}\rightarrow \mathbb{C}^{m\times n}$ given by $f(Z)=\Upsilon^{-1}(ZCZ+B)$ has a fixed point in the set $\mathbb{S}=\{Z:|Z|\leq \widetilde{\Phi}\}$. \rmnum{3}. $D-C\Phi$ is a nonsingular matrix whose diagonal entries are in the upper right of $L(z_\omega,0)$. Moreover, $\Phi$ is the extremal solution of the NARE (\ref{e1}), i.e., the solution such that the eigenvalues of $D-C\Phi$ are in the upper right of $L(z_\omega,0)$. \rmnum{4}. The uniqueness of the extremal solution. (\rmnum{1}). Let the linear operators $\Upsilon,\widetilde{\Upsilon}: \mathbb{C}^{m\times n}\rightarrow \mathbb{C}^{m\times n}$ be defined by \begin{eqnarray}\label{e10} \Upsilon(X)=XD+AX,~~~~\widetilde{\Upsilon}(X)=X\widetilde{D}+\widetilde{A}X, \end{eqnarray} respectively. The operator $\widetilde{\Upsilon}$ is invertible as $\widetilde{A}$ and $\widetilde{D}$ are nonsingular M-matrices. As $Q_{\omega}= \left( \begin{matrix} D_{\omega}&-|C|\\ -|B|&A_{\omega} \end{matrix} \right)$ is a nonsingular M-matrix, $A_{\omega}$ and $D_{\omega}$ are also nonsingular M-matrices. We have \begin{eqnarray*} \widetilde{D}\leq D_{\omega},~\widetilde{A}\leq A_{\omega},~|C|\leq\widetilde{C},~|B|\leq\widetilde{B}, \end{eqnarray*} since $\widetilde{Q}\leq Q_{\omega}$. By (\ref{e10}), we have ${\rm vec}(\Upsilon(X))={\rm vec}(XD+AX)=(D^T\otimes I+I\otimes A){\rm vec}(X)$.
Because, for $i=1,2,\cdots,n$ and $j=1,2,\cdots,m$, we have \begin{eqnarray*} \begin{aligned} |[D]_{ii}+[A]_{jj}|&=\sqrt{({\rm Re}([D]_{ii})+{\rm Re}([A]_{jj}))^2+({\rm Im}([D]_{ii})+{\rm Im}([A]_{jj}))^2}\\ &\geq \sqrt{\omega^2+(1-\omega)^2}\sqrt{({\rm Re}([D]_{ii})+{\rm Re}([A]_{jj}))^2+({\rm Im}([D]_{ii})+{\rm Im}([A]_{jj}))^2}\\ &\geq \omega({\rm Re}([D]_{ii})+{\rm Re}([A]_{jj}))+(1-\omega)({\rm Im}([D]_{ii})+{\rm Im}([A]_{jj}))\\ &=(\omega{\rm Re}([D]_{ii})+(1-\omega){\rm Im}([D]_{ii}))+(\omega{\rm Re}([A]_{jj})+(1-\omega){\rm Im}([A]_{jj})). \end{aligned} \end{eqnarray*} This leads to \begin{eqnarray*} \begin{aligned} |{\rm diag}(D^T\otimes I+I\otimes A)|&\geq{\rm diag}((D_{\omega})^T\otimes I +I\otimes A_{\omega})\geq {\rm diag}(\widetilde{D}^T\otimes I +I\otimes \widetilde{A}),\\ |{\rm offdiag}(D^T\otimes I +I\otimes A)|&\leq|{\rm offdiag}((D_{\omega})^T\otimes I +I\otimes A_{\omega})|\leq |{\rm offdiag}(\widetilde{D}^T\otimes I +I\otimes \widetilde{A})|. \end{aligned} \end{eqnarray*} By Lemma \ref{thm2}, we have that $D^T\otimes I +I\otimes A$ is nonsingular with $|(D^T\otimes I +I\otimes A)^{-1}|\leq(\widetilde{D}^T\otimes I +I\otimes \widetilde{A})^{-1}$. So, the operator $\Upsilon$ is also invertible. (\rmnum{2}). The proof is similar to that of Theorem 3.1 in \cite{Axe1} and is thus omitted. (\rmnum{3}). Let $R=D-C\Phi$ and $\widetilde{R}=\widetilde{D}-\widetilde{C}\widetilde{\Phi}$. As $\widetilde{Q}\textbf{1}>0$, $\widetilde{R}$ is a nonsingular M-matrix with $\widetilde{R}\textbf{1}>0$.
For $i=1,2,\cdots,n$, we have $[\widetilde{R}]_{ii}=[\widetilde{D}]_{ii}-[\widetilde{C}\widetilde{\Phi}]_{ii}>\sum_{j=1,j\neq i}^{n}([\widetilde{C}\widetilde{\Phi}]_{ij}-[\widetilde{D}]_{ij})\geq0$ and $\sum_{j=1,j\neq i}^{n}|[R]_{ij}|<[\widetilde{D}]_{ii}-[\widetilde{C}\widetilde{\Phi}]_{ii}.$ On the other hand, we have \begin{eqnarray}\label{e11} \begin{aligned} &~~~~\omega {\rm Re}([R]_{ii})+(1-\omega){\rm Im}([R]_{ii})\\ &=\omega {\rm Re}([D]_{ii})+(1-\omega){\rm Im}([D]_{ii})-\omega {\rm Re}([C\Phi]_{ii})-(1-\omega){\rm Im}([C\Phi]_{ii})\\ &\geq [\widetilde{D}]_{ii}-|[C\Phi]_{ii}|\\ &\geq[\widetilde{D}]_{ii}-[\widetilde{C}\widetilde{\Phi}]_{ii}\\ &>\sum_{j=1,j\neq i}^{n}|[R]_{ij}|,~~i=1,2,\cdots,n. \end{aligned} \end{eqnarray} By (\ref{e11}), we can say that $[R]_{ii}$ are in the upper right of $L(z_\omega,\sum_{j=1,j\neq i}^{n}|[R]_{ij}|),i=1,2,\cdots,n$. The distance between $L(z_\omega,0)$ and $L(z_\omega,\sum_{j=1,j\neq i}^{n}|[R]_{ij}|)$ can be expressed as $\frac{\sum_{j=1,j\neq i}^{n}|[R]_{ij}|}{\sqrt{\omega^2+(1-\omega)^2}}\geq \sum_{j=1,j\neq i}^{n}|[R]_{ij}|$ since $\omega^2+(1-\omega)^2\leq1$ for any $\omega\in[0,1]$. That is to say, the distance between $[R]_{ii}$ and $L(z_\omega,0)$ is greater than or equal to $\sum_{j=1,j\neq i}^{n}|[R]_{ij}|$ for each $i$. So it follows from Gershgorin's theorem that the eigenvalues of $R$ are in the upper right of $L(z_\omega,0)$. (\rmnum{4}). Let $S=A-B\Psi$. Similar to the proof of (\rmnum{3}), it is easy to show that the eigenvalues of $S$ are in the upper right of $L(z_\omega,0)$.
Because \begin{eqnarray*} H \left( \begin{matrix} I\\ \Phi \end{matrix} \right) = \left( \begin{matrix} I\\ \Phi \end{matrix} \right)R,~~~ H \left( \begin{matrix} \Psi\\ I \end{matrix} \right) =\left( \begin{matrix} \Psi\\ I \end{matrix} \right)(-S), \end{eqnarray*} $\Phi$ is determined by an invariant subspace of $H$ corresponding to $n$ eigenvalues in the upper right of $L(z_\omega,0)$, and $\Psi$ is determined by an invariant subspace of $H$ corresponding to $m$ eigenvalues in the lower left of $L(z_\omega,0)$. Since $Q_{\omega}\textbf{1}>0$, it follows from Gershgorin's theorem that $$\left( \begin{matrix} D_{\omega}&-|C|\\ |B|&-A_{\omega} \end{matrix} \right)$$ has exactly $n$ eigenvalues in $\mathbb{C}^+$ and $m$ eigenvalues in $\mathbb{C}^-$. Similarly to the proof of (\rmnum{3}), $H$ has exactly $n$ eigenvalues in the upper right of $L(z_\omega,0)$ and $m$ eigenvalues in the lower left of $L(z_\omega,0)$. Consequently, $\Phi$ is the unique solution such that the eigenvalues of $D-C\Phi$ are in the upper right of $L(z_\omega,0)$ and $\Psi$ is the unique solution such that the eigenvalues of $A-B\Psi$ are in the upper right of $L(z_\omega,0)$. \end{proof} \begin{remark}\label{thm9} 1. When $\omega=0$, $z_\omega=-1$ and the straight line $L(z_\omega,0)$ consists of the points $-l$, where $l$ ranges over all real numbers; it is thus the real axis. The upper right of $L(z_\omega,0)$ is then the upper half of the complex plane. 2. When $\omega=1$, $z_\omega={\rm j}$ and the straight line $L(z_\omega,0)$ consists of the points ${\rm j}l$, where $l$ ranges over all real numbers; it is thus the imaginary axis. The upper right of $L(z_\omega,0)$ is then $\mathbb{C}^+$. 3.
When $\omega$ ranges from $0$ to $1$, the straight line $L(z_\omega,0)$ rotates clockwise about the origin from the real axis to the imaginary axis, and the upper right of $L(z_\omega,0)$ rotates clockwise about the origin from the upper half of the complex plane to the half plane $\mathbb{C}^+$. So we can say that our results generalize the results of Theorem 3.1 in \cite{Axe1}. \end{remark} \section{Applying Newton's method to solve the NARE (\ref{e1}) in class $\mathbb{H}^{\omega}$} Applying Newton's method to solve the NARE (\ref{e1}) in class $\mathbb{H}^\omega$, we get the iterative scheme: \begin{eqnarray}\label{e12} \begin{aligned} \Phi_0=0,~~(A-\Phi_kC)\Phi_{k+1}+\Phi_{k+1}(D-C\Phi_{k})=B-\Phi_{k}C\Phi_{k},~k\geq0. \end{aligned} \end{eqnarray} In this section we will show that $\{\Phi_k\}$ converges quadratically to the extremal solution $\Phi$ of the NARE (\ref{e1}) in class $\mathbb{H}^\omega$. To achieve this, we consider Newton's iteration for the NARE (\ref{e7}) in class $\mathbb{M}$: \begin{eqnarray}\label{e13} \begin{aligned} \widetilde{\Phi}_0=0,~~(\widetilde{A}-\widetilde{\Phi}_k\widetilde{C})\widetilde{\Phi}_{k+1}+\widetilde{\Phi}_{k+1}(\widetilde{D}-\widetilde{C}\widetilde{\Phi}_{k})&=\widetilde{B}-\widetilde{\Phi}_{k}\widetilde{C}\widetilde{\Phi}_{k},~k\geq0. \end{aligned} \end{eqnarray} It has been shown in \cite{Axe14} that the sequence $\{\widetilde{\Phi}_k\}$ generated by (\ref{e13}) converges monotonically and quadratically to the minimal nonnegative solution $\widetilde{\Phi}$ of the NARE (\ref{e7}). We first compare the increments at each iterative step of the two iterations (\ref{e12}) and (\ref{e13}) in Lemma \ref{thm6} below. Its proof is slightly different from that of Lemma 4.1 in \cite{Axe1}; we give a detailed proof for the reader's convenience.
\begin{lemma}\label{thm6} Let the sequences $\{\Phi_k\}$ and $\{\widetilde{\Phi}_k\}$ be generated by the iterations {\rm(\ref{e12})} for the NARE {\rm(\ref{e1})} in class $\mathbb{H}^\omega$ and {\rm(\ref{e13})} for the NARE {\rm(\ref{e7})} in class $\mathbb{M}$, respectively. Let $Q$ and $\widetilde{Q}$ be as in {\rm(\ref{e2})} and {\rm(\ref{e8})}, respectively. Suppose that $Q_\omega$ and $\widetilde{Q}$ are nonsingular M-matrices satisfying $$Q_\omega\textbf{1}>0,~~\widetilde{Q}\textbf{1}>0,~~\widetilde{Q}\leq Q_\omega.$$ Then we have $$|\Phi_{k+1}-\Phi_k|\leq \widetilde{\Phi}_{k+1}-\widetilde{\Phi}_k,~k=0,1,2,\cdots.$$ \end{lemma} \begin{proof} Notice that $\widetilde{\Phi}_k\leq \widetilde{\Phi}_{k+1}$ and $I\otimes (\widetilde{A}-\widetilde{\Phi}_k\widetilde{C})+(\widetilde{D}-\widetilde{C}\widetilde{\Phi}_k)^T\otimes I$ is a nonsingular M-matrix for each nonnegative integer $k$. The proof proceeds by mathematical induction on $k$. For $k=0$, we have $$(I\otimes A+D^T\otimes I) {\rm vec}(\Phi_1)={\rm vec}(B),~~~(I\otimes \widetilde{A}+\widetilde{D}^T\otimes I) {\rm vec}(\widetilde{\Phi}_1)={\rm vec}(\widetilde{B}).$$ By the proof of Theorem \ref{thm5}, we can obtain $$|(I\otimes A+D^T\otimes I)^{-1}|\leq (I\otimes \widetilde{A}+\widetilde{D}^T\otimes I)^{-1},$$ which results in $|\Phi_1|\leq \widetilde{\Phi}_1$. For $k\geq 1$, we have \begin{eqnarray*} \begin{aligned} &(A-\Phi_kC)(\Phi_{k+1}-\Phi_k)+(\Phi_{k+1}-\Phi_k)(D-C\Phi_k)=(\Phi_{k}-\Phi_{k-1})C(\Phi_{k}-\Phi_{k-1}),\\ &(\widetilde{A}-\widetilde{\Phi}_k\widetilde{C})(\widetilde{\Phi}_{k+1}-\widetilde{\Phi}_k) +(\widetilde{\Phi}_{k+1}-\widetilde{\Phi}_k)(\widetilde{D}-\widetilde{C}\widetilde{\Phi}_k) =(\widetilde{\Phi}_{k}-\widetilde{\Phi}_{k-1})\widetilde{C}(\widetilde{\Phi}_{k}-\widetilde{\Phi}_{k-1}). \end{aligned} \end{eqnarray*} Suppose that $|\Phi_{k+1}-\Phi_{k}|\leq\widetilde{\Phi}_{k+1}-\widetilde{\Phi}_{k}$ for $k\leq l-1$.
Then, $$|\Phi_k|\leq \sum_{j=1}^{k}|\Phi_{j}-\Phi_{j-1}|\leq \sum_{j=1}^{k}(\widetilde{\Phi}_{j}-\widetilde{\Phi}_{j-1})=\widetilde{\Phi}_k,~1\leq k\leq l.$$ Because $|{\rm diag}(I\otimes A+D^T\otimes I)|\geq|{\rm diag}(I\otimes A_{\omega}+(D_\omega)^T\otimes I)|\geq {\rm diag}(I\otimes \widetilde{A}+\widetilde{D}^T\otimes I)$, we have \begin{eqnarray*} \begin{aligned} &~~~~|{\rm diag}(I\otimes(A-\Phi_lC)+(D-C\Phi_l)^T\otimes I)|\\ &\geq|{\rm diag}(I\otimes A+D^T\otimes I)|-|{\rm diag}(I\otimes\Phi_lC+(C\Phi_l)^T\otimes I)|\\ &\geq{\rm diag}(I\otimes \widetilde{A}+\widetilde{D}^T\otimes I)-{\rm diag}(I\otimes\widetilde{\Phi}_l\widetilde{C}+(\widetilde{C}\widetilde{\Phi}_l)^T\otimes I)\\ &={\rm diag}(I\otimes (\widetilde{A}-\widetilde{\Phi}_l\widetilde{C})+(\widetilde{D}-\widetilde{C}\widetilde{\Phi}_l)^T\otimes I) \end{aligned} \end{eqnarray*} and \begin{eqnarray*} \begin{aligned} &~~~~|{\rm offdiag}(I\otimes (A-\Phi_lC)+(D-C\Phi_l)^T\otimes I)|\\ &\leq|{\rm offdiag}(I\otimes A)|+|{\rm offdiag}(I\otimes(\Phi_lC))|+|{\rm offdiag}(D^T\otimes I)|+|{\rm offdiag}((C\Phi_l)^T\otimes I)|\\ &\leq-{\rm offdiag}(I\otimes \widetilde{A})+|{\rm offdiag}(I\otimes(\widetilde{\Phi}_l\widetilde{C}))|-{\rm offdiag}(\widetilde{D}^T\otimes I)+|{\rm offdiag}((\widetilde{C}\widetilde{\Phi}_l)^T\otimes I)|\\ &=|{\rm offdiag}(I\otimes(\widetilde{A}-\widetilde{\Phi}_l\widetilde{C}))|+|{\rm offdiag}((\widetilde{D}-\widetilde{C}\widetilde{\Phi}_l)^T\otimes I)|\\ &=|{\rm offdiag}(I\otimes (\widetilde{A}-\widetilde{\Phi}_l\widetilde{C})+(\widetilde{D}-\widetilde{C}\widetilde{\Phi}_l)^T\otimes I)|.
\end{aligned} \end{eqnarray*} Thus we have $$|(I\otimes(A-\Phi_lC)+(D-C\Phi_l)^T\otimes I)^{-1}|\leq (I\otimes (\widetilde{A}-\widetilde{\Phi}_l\widetilde{C})+(\widetilde{D}-\widetilde{C}\widetilde{\Phi}_l)^T\otimes I)^{-1}$$ and $${\rm vec}(\Phi_{l+1}-\Phi_l)=(I\otimes(A-\Phi_lC)+(D-C\Phi_l)^T\otimes I)^{-1}{\rm vec}((\Phi_{l}-\Phi_{l-1})C(\Phi_{l}-\Phi_{l-1})).$$ By the induction hypothesis, $$|(\Phi_l-\Phi_{l-1})C(\Phi_l-\Phi_{l-1})|\leq (\widetilde{\Phi}_l-\widetilde{\Phi}_{l-1})\widetilde{C}(\widetilde{\Phi}_l-\widetilde{\Phi}_{l-1}).$$ Therefore, we have $$|\Phi_{l+1}-\Phi_{l}|\leq \widetilde{\Phi}_{l+1}-\widetilde{\Phi}_l.$$ \end{proof} After establishing Lemma \ref{thm6}, we can easily prove that the sequence $\{\Phi_k\}$ converges quadratically to the extremal solution $\Phi$ of the NARE (\ref{e1}) in class $\mathbb{H}^\omega$, as in Theorem \ref{thm7} below. The proof of Theorem \ref{thm7} is similar to that of Theorem 4.1 in \cite{Axe1} and is thus omitted; note, however, that Theorem \ref{thm7} covers a wider class of equations than Theorem 4.1 in \cite{Axe1}. \begin{theorem}\label{thm7} The sequence $\{\Phi_k\}$ generated by the iteration {\rm(\ref{e12})} for the NARE {\rm(\ref{e1})} in class $\mathbb{H}^\omega$ converges quadratically to the extremal solution $\Phi$. \end{theorem} \section{Applying the fixed-point iterative methods to solve the NARE (\ref{e1}) in class $\mathbb{H}^\omega$} Based on the two splittings $$A=A_1-A_2,~D=D_1-D_2,$$ we derive the fixed-point iterative scheme \begin{eqnarray}\label{e14} \begin{aligned} \Phi_0=0,~~A_1\Phi_{k+1}+\Phi_{k+1}D_1=B+A_2\Phi_{k}+\Phi_{k}D_2+\Phi_{k}C\Phi_{k},~k\geq0, \end{aligned} \end{eqnarray} for the NARE (\ref{e1}) in class $\mathbb{H}^\omega$. Theorem \ref{thm8} below gives a sufficient condition under which the sequence $\{\Phi_k\}$ generated by the fixed-point iteration (\ref{e14}) converges linearly to the extremal solution $\Phi$ of the NARE (\ref{e1}) in class $\mathbb{H}^\omega$.
\begin{theorem}\label{thm8} Suppose that $Q_{\omega}\textbf{1}>0$, and let $$\widetilde{A}=(A_1)_{\omega}-|A_2|,~~\widetilde{D}=(D_1)_{\omega}-|D_2|,~~\widetilde{B}=|B|,~~\widetilde{C}=|C|.$$ If $$I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I$$ is a nonsingular M-matrix, and $\widetilde{Q}$ is a nonsingular M-matrix satisfying $$\widetilde{Q}\leq Q_{\omega},~~~\widetilde{Q}\textbf{1}>0,$$ then the sequence $\{\Phi_k\}$ generated by the iterative scheme {\rm(\ref{e14})} converges linearly to the extremal solution $\Phi$ of the NARE {\rm(\ref{e1})} in class $\mathbb{H}^{\omega}$. \end{theorem} \begin{proof} Consider the fixed-point iteration \begin{eqnarray}\label{e15} \begin{aligned} \widetilde{\Phi}_0=0,~~(A_1)_{\omega}\widetilde{\Phi}_{k+1}+\widetilde{\Phi}_{k+1}(D_1)_{\omega}=\widetilde{B}+|A_2|\widetilde{\Phi}_k+\widetilde{\Phi}_k|D_2|+\widetilde{\Phi}_k\widetilde{C}\widetilde{\Phi}_k,~k\geq0, \end{aligned} \end{eqnarray} applied to the NARE (\ref{e7}) in class $\mathbb{M}$. It is well known that $\widetilde{\Phi}_k\leq \widetilde{\Phi}_{k+1}$ for each nonnegative integer $k$ and ${\rm lim}_{k\rightarrow\infty}\widetilde{\Phi}_k=\widetilde{\Phi}$, where $\widetilde{\Phi}$ is the minimal nonnegative solution of the NARE (\ref{e7}) in class $\mathbb{M}$ \citep[see][]{Axe13}. For $k=0$, we have $$A_1\Phi_1+\Phi_1D_1=B,~~~~(A_1)_{\omega}\widetilde{\Phi}_1+\widetilde{\Phi}_1 (D_1)_{\omega}=\widetilde{B}.$$ We can prove that $$|(I\otimes A_1+D_1^T\otimes I)^{-1}|\leq (I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I)^{-1}$$ by verifying \begin{eqnarray*} \begin{aligned} |{\rm diag}(I\otimes A_1+D_1^T\otimes I)|&\geq {\rm diag}(I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I),\\ |{\rm offdiag}(I\otimes A_1+D_1^T\otimes I)|&\leq |{\rm offdiag}(I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I)|, \end{aligned} \end{eqnarray*} which results in $|\Phi_1|\leq\widetilde{\Phi}_1$.
For $k\geq1$, it follows from the two iterations (\ref{e14}) and (\ref{e15}) that \begin{eqnarray*} \begin{aligned} &~~~~A_1(\Phi_{k+1}-\Phi_k)+(\Phi_{k+1}-\Phi_k)D_1\\ &=A_2(\Phi_{k}-\Phi_{k-1})+(\Phi_{k}-\Phi_{k-1})D_2+\Phi_kC(\Phi_{k}-\Phi_{k-1})+(\Phi_{k}-\Phi_{k-1})C\Phi_{k-1},\\ &~~~~(A_1)_{\omega}(\widetilde{\Phi}_{k+1}-\widetilde{\Phi}_k)+(\widetilde{\Phi}_{k+1}-\widetilde{\Phi}_k)(D_1)_{\omega}\\ &=|A_2|(\widetilde{\Phi}_{k}-\widetilde{\Phi}_{k-1}) +(\widetilde{\Phi}_{k}-\widetilde{\Phi}_{k-1})|D_2|+\widetilde{\Phi}_k\widetilde{C}(\widetilde{\Phi}_{k}-\widetilde{\Phi}_{k-1})+(\widetilde{\Phi}_{k}-\widetilde{\Phi}_{k-1})\widetilde{C}\widetilde{\Phi}_{k-1}. \end{aligned} \end{eqnarray*} Arguing as in the proof of Lemma \ref{thm6}, we can show inductively that $$|\Phi_{k+1}-\Phi_k|\leq\widetilde{ \Phi}_{k+1}-\widetilde{\Phi}_k,~~k\geq 0,$$ and $$\lim_{k\rightarrow \infty }\Phi_k=\Phi.$$ Next, we consider the convergence speed of the sequence $\{\Phi_k\}$. As in the proof of Theorem 3.2 in \cite{Axe13}, we can prove that, for the fixed-point iteration (\ref{e14}) with $\Phi_0=0$, $$\limsup_{k\rightarrow \infty}\sqrt[k]{\parallel \Phi_k-\Phi\parallel}=\rho((I\otimes A_1+D_1^T\otimes I)^{-1}(I\otimes(A_2+\Phi C)+(D_2+C\Phi)^T\otimes I)).$$ Because $$M_{\widetilde{\Phi}}=(I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I)-(I\otimes(|A_2|+\widetilde{\Phi} \widetilde{C})+(|D_2|+\widetilde{C}\widetilde{\Phi})^T\otimes I)$$ is a regular splitting of the nonsingular M-matrix $M_{\widetilde{\Phi}}=I\otimes(\widetilde{A}-\widetilde{\Phi} \widetilde{C})+(\widetilde{D}-\widetilde{C}\widetilde{\Phi})^T\otimes I$, $$\rho((I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I)^{-1}(I\otimes(|A_2|+\widetilde{\Phi} \widetilde{C})+(|D_2|+\widetilde{C}\widetilde{\Phi})^T\otimes I))<1.$$ Because \begin{eqnarray*} \begin{aligned} &~~~~\rho((I\otimes A_1+D_1^T\otimes I)^{-1}(I\otimes(A_2+\Phi C)+(D_2+C\Phi)^T\otimes I))\\ &\leq\rho((I\otimes
(A_1)_{\omega}+((D_1)_{\omega})^T\otimes I)^{-1}|I\otimes(|A_2|+|\Phi| |C|)+(|D_2|+|C||\Phi|)^T\otimes I|)\\ &\leq\rho((I\otimes (A_1)_{\omega}+((D_1)_{\omega})^T\otimes I)^{-1}((I\otimes(|A_2|+\widetilde{\Phi} \widetilde{C})+(|D_2|+\widetilde{C}\widetilde{\Phi})^T\otimes I)))\\ &<1, \end{aligned} \end{eqnarray*} the sequence $\{\Phi_k\}$ converges linearly to the extremal solution $\Phi$. \end{proof} Let $A=H_A-L_A-U_A$ and $D=H_D-L_D-U_D$, where $H_A$ and $H_D$ are the diagonal parts, $-L_A$ and $-L_D$ are the strictly lower triangular parts, and $-U_A$ and $-U_D$ are the strictly upper triangular parts, of $A$ and $D$, respectively. Then the following fourteen splittings for $A=A_1-A_2$ and $D=D_1-D_2$ satisfy the conditions on the splittings in Theorem \ref{thm8}: (\uppercase\expandafter{\romannumeral1}) Trivial splitting: \begin{eqnarray*} \begin{aligned} &({\rm \romannumeral1})&A_1=A,~A_2=0,~D_1=D,~D_2=0, \end{aligned} \end{eqnarray*} and we call the resulting fixed-point iteration the trivial fixed-point (TFP) iteration. (\uppercase\expandafter{\romannumeral2}) Jacobi-type splitting: \begin{eqnarray*} \begin{aligned} &({\rm \romannumeral2})&A_1=H_A,~A_2=L_A+U_A,~D_1=H_D,~D_2=L_D+U_D, \end{aligned} \end{eqnarray*} and we call the resulting fixed-point iteration the Jacobi-type fixed-point (JFP) iteration. (\uppercase\expandafter{\romannumeral3}) Gauss-Seidel-type splittings: \begin{eqnarray*} \begin{aligned} &({\rm \romannumeral3})&A_1=H_A-U_A,~A_2=L_A,~D_1=H_D-L_D,~D_2=U_D;\\ &({\rm \romannumeral4})&A_1=H_A-U_A,~A_2=L_A,~D_1=H_D-U_D,~D_2=L_D;\\ &({\rm \romannumeral5})&A_1=H_A-L_A,~A_2=U_A,~D_1=H_D-L_D,~D_2=U_D;\\ &({\rm \romannumeral6})&A_1=H_A-L_A,~A_2=U_A,~D_1=H_D-U_D,~D_2=L_D;\\ \end{aligned} \end{eqnarray*} and we call the resulting fixed-point iterations the Gauss-Seidel-type fixed-point (GSFP) iterations.
(\uppercase\expandafter{\romannumeral4}) SOR-type splittings ($\omega$ is a positive parameter): \begin{eqnarray*} \begin{aligned} &({\rm \romannumeral7})&A_1&=(1/\omega)H_A-U_A,&A_2&=(1/\omega-1)H_A+L_A,\\ &&D_1&=(1/\omega)H_D-L_D,&D_2&=(1/\omega-1)H_D+U_D; \\ &({\rm \romannumeral8})&A_1&=(1/\omega)H_A-U_A,&A_2&=(1/\omega-1)H_A+L_A,\\ &&D_1&=(1/\omega)H_D-U_D,&D_2&=(1/\omega-1)H_D+L_D;\\ &({\rm \romannumeral9})&A_1&=(1/\omega)H_A-L_A,&A_2&=(1/\omega-1)H_A+U_A,\\ &&D_1&=(1/\omega)H_D-L_D,&D_2&=(1/\omega-1)H_D+U_D;\\ &({\rm \romannumeral10})&A_1&=(1/\omega)H_A-L_A,&A_2&=(1/\omega-1)H_A+U_A,\\ &&D_1&=(1/\omega)H_D-U_D,&D_2&=(1/\omega-1)H_D+L_D; \end{aligned} \end{eqnarray*} and we call the resulting fixed-point iterations the SOR-type fixed-point (SORFP) iterations. (\uppercase\expandafter{\romannumeral5}) AOR-type splittings ($\omega$ and $\gamma$ are two positive parameters): \begin{eqnarray*} \begin{aligned} &({\rm \romannumeral11})&A_1&=(1/\omega)(H_A-\gamma U_A),&A_2&=(1/\omega)((1-\omega)H_A+(\omega-\gamma)U_A+\omega L_A),\\ &&D_1&=(1/\omega)(H_D-\gamma L_D),&D_2&=(1/\omega)((1-\omega)H_D+(\omega-\gamma)L_D+\omega U_D);\\ &({\rm \romannumeral12})&A_1&=(1/\omega)(H_A-\gamma U_A),&A_2&=(1/\omega)((1-\omega)H_A+(\omega-\gamma)U_A+\omega L_A),\\ &&D_1&=(1/\omega)(H_D-\gamma U_D),&D_2&=(1/\omega)((1-\omega)H_D+(\omega-\gamma)U_D+\omega L_D);\\ &({\rm \romannumeral13})&A_1&=(1/\omega)(H_A-\gamma L_A),&A_2&=(1/\omega)((1-\omega)H_A+(\omega-\gamma)L_A+\omega U_A),\\ &&D_1&=(1/\omega)(H_D-\gamma U_D),&D_2&=(1/\omega)((1-\omega)H_D+(\omega-\gamma)U_D+\omega L_D);\\ &({\rm \romannumeral14})&A_1&=(1/\omega)(H_A-\gamma L_A),&A_2&=(1/\omega)((1-\omega)H_A+(\omega-\gamma)L_A+\omega U_A),\\ &&D_1&=(1/\omega)(H_D-\gamma L_D),&D_2&=(1/\omega)((1-\omega)H_D+(\omega-\gamma)L_D+\omega U_D); \end{aligned} \end{eqnarray*} and we call the resulting fixed-point iterations the AOR-type fixed-point (AORFP) iterations. Evidently, JFP, GSFP and SORFP can be
considered as special cases of AORFP when the iteration parameters $(\omega, \gamma)$ are specified to be $(1,0), (1,1)$ and $(\omega,\omega)$, respectively. \section{Applying doubling algorithms to the NARE (\ref{e1}) in class $\mathbb{H}^\omega$} ADDA and SDA are two existing doubling algorithms that have been successfully applied to the NAREs in class $\mathbb{M}$, $\mathbb{H}^+$, $\mathbb{H}^-$ and $\mathbb{H}^*$. This section discusses how to choose suitable parameters so that they also deliver the extremal solution $\Phi$ of the NARE (\ref{e1}) in class $\mathbb{H}^\omega$, and analyzes the convergence of the two doubling algorithms. \subsection{{\rm General framework of doubling algorithms for NAREs}} Let $\Phi$ and $\Psi$ be the solutions of the NAREs (\ref{e1}) and (\ref{e3}), respectively. Then \begin{eqnarray}\label{e16} H\left( \begin{matrix} I\\ \Phi \end{matrix} \right) = \left( \begin{matrix} I\\ \Phi \end{matrix} \right)R,~~ H\left( \begin{matrix} \Psi\\ I \end{matrix} \right)=\left( \begin{matrix} \Psi\\ I \end{matrix} \right)(-S), \end{eqnarray} where \begin{eqnarray}\label{e17} R=D-C\Phi,~~S=A-B\Psi. \end{eqnarray} First choose some suitable parameters and transform (\ref{e16}) into \begin{eqnarray}\label{e18} \left( \begin{matrix} E_0&0\\ -H_0&I \end{matrix} \right)\left( \begin{matrix} I\\ \Phi \end{matrix} \right) = \left( \begin{matrix} I&-G_0\\ 0&F_0 \end{matrix} \right)\left( \begin{matrix} I\\ \Phi \end{matrix} \right)\mathcal{R},~~ \left( \begin{matrix} E_0&0\\ -H_0&I \end{matrix} \right)\left( \begin{matrix} \Psi\\ I \end{matrix} \right)\mathcal{S} = \left( \begin{matrix} I&-G_0\\ 0&F_0 \end{matrix} \right)\left( \begin{matrix} \Psi\\ I \end{matrix} \right).
\end{eqnarray} Then construct a sequence of matrix quadruples $\{E_k,F_k,G_k,H_k\},k=0,1,2,\cdots,$ such that \begin{eqnarray}\label{e19} \left( \begin{matrix} E_k&0\\ -H_k&I \end{matrix} \right)\left( \begin{matrix} I\\ \Phi \end{matrix} \right)=\left( \begin{matrix} I&-G_k\\ 0&F_k \end{matrix} \right) \left( \begin{matrix} I\\ \Phi \end{matrix} \right) \mathcal{R}^{2^k},~ \left( \begin{matrix} E_k&0\\ -H_k&I \end{matrix} \right) \left( \begin{matrix} \Psi\\ I \end{matrix} \right) \mathcal{S}^{2^k} = \left( \begin{matrix} I&-G_k\\ 0&F_k \end{matrix} \right)\left( \begin{matrix} \Psi\\ I \end{matrix} \right). \end{eqnarray} It can be verified that \begin{eqnarray}\label{e20} \Phi-H_k=(I-H_k\Psi)\mathcal{S}^{2^k}\Phi \mathcal{R}^{2^k},~ \Psi-G_k=(I-G_k\Phi)\mathcal{R}^{2^k}\Psi \mathcal{S}^{2^k}. \end{eqnarray} If $H_k$ and $G_k$ are uniformly bounded with respect to $k$, then it was shown by \cite{Axe14} that for any consistent matrix norm $\parallel\cdot\parallel$, \begin{eqnarray*} \begin{aligned} &\limsup_{k\rightarrow \infty}\parallel\Phi-H_k\parallel^{1/2^k}\leq \rho(\mathcal{R})\rho(\mathcal{S}),\\ &\limsup_{k\rightarrow \infty} \parallel\Psi-G_k\parallel^{1/2^k}\leq \rho(\mathcal{R})\rho(\mathcal{S}), \end{aligned} \end{eqnarray*} which implies that the sequences $\{H_k\}$ and $\{G_k\}$ converge quadratically to $\Phi$ and $\Psi$, respectively, if \begin{eqnarray}\label{e21} \rho(\mathcal{R})\rho(\mathcal{S})<1. \end{eqnarray} \subsection{Doubling algorithms for the NARE (\ref{e7}) in class $\mathbb{M}$} In this subsection, for subsequent convenience, we give an iterative scheme for the NARE (\ref{e7}) in class $\mathbb{M}$ which is slightly different from that proposed by \cite{Axe1}. To distinguish it from the doubling algorithms for the NARE (\ref{e1}) in class $\mathbb{H}^\omega$, we use different notation for all matrices involved in ADDA, such as $\widetilde{\mathcal{R}}$ instead of $\mathcal{R}$.
Let $\widetilde{R}=\widetilde{D}-\widetilde{C}\widetilde{\Phi}$ and $\widetilde{S}=\widetilde{A}-\widetilde{B}\widetilde{\Psi}$. We first choose suitable complex parameters $\widetilde{\alpha}$, $\widetilde{\beta}$ such that $$\widetilde{R}+(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I,~~\widetilde{S}+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I,$$ $$\widetilde{A}_{\widetilde{\beta}}=\widetilde{A}+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I,~~\widetilde{D}_{\widetilde{\alpha}}=\widetilde{D}+(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I,$$ and $$\widetilde{W}_{\widetilde{\alpha} \widetilde{\beta}}=\widetilde{A}_{\widetilde{\beta}}-\widetilde{B}\widetilde{D}_{\widetilde{\alpha}}^{-1}\widetilde{C},~~~\widetilde{V}_{\widetilde{\alpha} \widetilde{\beta}}=\widetilde{D}_{\widetilde{\alpha}}-\widetilde{C}\widetilde{A}_{\widetilde{\beta}}^{-1}\widetilde{B}$$ are nonsingular. Then the constructions of the matrices $\widetilde{\mathcal{R}}$ and $\widetilde{\mathcal{S}}$ and the initial matrices $\widetilde{E}_0$, $\widetilde{F}_0$, $\widetilde{G}_0$, $\widetilde{H}_0$ in ADDA are as follows.
\begin{eqnarray}\label{e22} \begin{aligned} \widetilde{\mathcal{R}}&=(\widetilde{R}-(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I)(\widetilde{R}+(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I)^{-1},\\ \widetilde{ \mathcal{S}}&=(\widetilde{S}-(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I)(\widetilde{S}+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I)^{-1},\\ \widetilde{E}_0&=I-((\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha}))+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})))\widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}^{-1},\\ \widetilde{F}_0&=I-((\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha}))+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})))\widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}^{-1},\\ \widetilde{G}_0&=((\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha}))+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})))\widetilde{D}_{\widetilde{\alpha}}^{-1}\widetilde{C}\widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}^{-1},\\ \widetilde{H}_0&=((\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha}))+(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})))\widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}\widetilde{B}\widetilde{D}_{\widetilde{\alpha}}^{-1}. 
\end{aligned} \end{eqnarray} After $\widetilde{E}_0$, $\widetilde{F}_0$, $\widetilde{G}_0$ and $\widetilde{H}_0$ are set, the sequence $\{\widetilde{E}_k,\widetilde{F}_k,\widetilde{G}_k,\widetilde{H}_k\},k=0,1,2,\cdots,$ is produced with the iterative scheme: \begin{eqnarray}\label{e23} \begin{aligned} \widetilde{E}_{k+1}&=\widetilde{E}_k(I-\widetilde{G}_k\widetilde{H}_k)^{-1}\widetilde{E}_k,\\ \widetilde{F}_{k+1}&=\widetilde{F}_k(I-\widetilde{H}_k\widetilde{G}_k)^{-1}\widetilde{F}_k,\\ \widetilde{G}_{k+1}&=\widetilde{G}_k+\widetilde{E}_k(I-\widetilde{G}_k\widetilde{H}_k)^{-1}\widetilde{G}_k\widetilde{F}_k,\\ \widetilde{H}_{k+1}&=\widetilde{H}_k+\widetilde{F}_k(I-\widetilde{H}_k\widetilde{G}_k)^{-1}\widetilde{H}_k\widetilde{E}_k, \end{aligned} \end{eqnarray} as long as the matrices $I-\widetilde{G}_k\widetilde{H}_k$ and $I-\widetilde{H}_k\widetilde{G}_k$ are invertible for all $k$. \begin{theorem}\label{thm10} {\rm (see \cite{Axe9}).} Let $\widetilde{Q}$ in {\rm(\ref{e8})} be a nonsingular M-matrix. Let $\widetilde{\Phi}$ and $\widetilde{\Psi}$ be the minimal nonnegative solutions of the NARE {\rm(\ref{e7})} and its dual NARE {\rm(\ref{e9})}, respectively.
Let the sequences $\{\widetilde{E}_k\}$, $\{\widetilde{F}_k\}$, $\{\widetilde{G}_k\}$ and $\{\widetilde{H}_k\}$ be generated by ADDA applied to the NARE {\rm(\ref{e7})} with the parameters $\widetilde{\alpha}$, $\widetilde{\beta}$ satisfying $$\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha}) \geq \max_{i=1,2,\cdots,m}[\widetilde{A}]_{ii}=\max_{i=n+1,n+2,\cdots,m+n}[\widetilde{Q}]_{ii},$$ $$\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta}) \geq \max_{i=1,2,\cdots,n}[\widetilde{D}]_{ii}=\max_{i=1,2,\cdots,n}[\widetilde{Q}]_{ii},$$ then {\rm(1)} $\rho(\widetilde{\mathcal{R}})\rho(\widetilde{\mathcal{S}})<1$; {\rm(2)} $\widetilde{E}_0$, $\widetilde{F}_0\leq 0$; $\widetilde{E}_k$, $\widetilde{F}_k\geq 0$, for $k\geq 1$; {\rm(3)} $I-\widetilde{G}_k\widetilde{H}_k$ and $I-\widetilde{H}_k\widetilde{G}_k$ are nonsingular M-matrices; {\rm(4)} $0\leq \widetilde{H}_k\leq \widetilde{H}_{k+1}\leq \widetilde{\Phi}$, $0\leq \widetilde{G}_k\leq \widetilde{G}_{k+1}\leq \widetilde{\Psi}$, and $$\lim_{k\rightarrow \infty}\widetilde{H}_k=\widetilde{\Phi},~~~\lim_{k\rightarrow\infty}\widetilde{G}_{k}=\widetilde{\Psi}.$$ \end{theorem} Notice that ADDA is reduced to SDA when $\widetilde{\alpha}=\widetilde{\beta}$. \subsection{Doubling algorithms for the NARE (\ref{e1}) in class $\mathbb{H}^\omega$} We choose suitable complex parameters $\alpha$, $\beta$ such that $$R+\alpha I,~~S+\beta I,~~A_{\beta}=A+\beta I,~~D_{\alpha}=D+\alpha I,$$ and $$W_{\alpha \beta}=A_{\beta}-BD_{\alpha}^{-1}C,~~~V_{\alpha \beta}=D_{\alpha}-CA_{\beta}^{-1}B$$ are nonsingular. Then the constructions of $\mathcal{R}$, $\mathcal{S}$, $E_0$, $F_0$, $G_0$ and $H_0$ in ADDA are as follows. 
\begin{eqnarray}\label{e24} \begin{aligned} \mathcal{R}&=(R-\beta I)(R+\alpha I)^{-1},& \mathcal{S}&=(S-\alpha I)(S+\beta I)^{-1},\\ E_0&=I-(\alpha+\beta)V_{\alpha\beta}^{-1},&F_0&=I-(\alpha+\beta)W_{\alpha\beta}^{-1},\\ G_0&=(\alpha+\beta)D_{\alpha}^{-1}CW_{\alpha\beta}^{-1},&H_0&=(\alpha+\beta)W_{\alpha\beta}^{-1}BD_{\alpha}^{-1}. \end{aligned} \end{eqnarray} After $E_0,F_0,G_0$ and $H_0$ are set, the sequence $\{E_k,F_k,G_k,H_k\},k=0,1,2,\cdots,$ is produced with the iterative scheme: \begin{eqnarray}\label{e23a} \begin{aligned} E_{k+1}&=E_k(I-G_kH_k)^{-1}E_k,\\ F_{k+1}&=F_k(I-H_kG_k)^{-1}F_k,\\ G_{k+1}&=G_k+E_k(I-G_kH_k)^{-1}G_kF_k,\\ H_{k+1}&=H_k+F_k(I-H_kG_k)^{-1}H_kE_k, \end{aligned} \end{eqnarray} as long as the matrices $I-G_kH_k$ and $I-H_kG_k$ are invertible for all $k$. \begin{lemma}\label{thm11} Let $A\in \mathbb{R}^{n\times n}$ be a nonsingular M-matrix and be written as $A=D_1-N_1$, where $D_{1}$ is diagonal with positive diagonal entries and $N_1\geq 0$. Let $B\in \mathbb{C}^{n\times n}$ be written as $B=D_2-N_2$, where $D_2$ is diagonal. If $$D_1\leq |D_2|,~~~|N_2|\leq N_1,$$ then $B$ is nonsingular with $|B^{-1}|\leq A^{-1}C$ and $|B^{-1}|\leq CA^{-1}$, where $C=|D_1D_2^{-1}|$. \end{lemma} \begin{theorem}\label{thm12} Let $\Phi$ and $\Psi$ be the extremal solutions of the NARE {\rm(\ref{e1})} and its dual NARE {\rm(\ref{e3})} in class $\mathbb{H}^\omega$, respectively. Suppose that $Q_{\omega}\textbf{1} > 0$.
Let $\{E_k\}$, $\{F_k\}$, $\{G_k\}$, $\{H_k\}$ be generated by ADDA applied to the NARE {\rm(\ref{e1})} with the parameters $\alpha$, $\beta$ ($\omega {\rm Re}(\alpha)+(1-\omega){\rm Im}(\alpha)>0$, $\omega {\rm Re}(\beta)+(1-\omega){\rm Im}(\beta)>0$) satisfying \begin{eqnarray} &\frac{\omega {\rm Re}(\alpha)+(1-\omega) {\rm Im}(\alpha)+q_i}{|\alpha+[Q]_{ii}|}|\beta-[Q]_{ii}|<\omega {\rm Re}(\beta)+(1-\omega) {\rm Im}(\beta)-q_i,~i=1,2,\cdots,n;~~\label{e25}\\ &\frac{\omega {\rm Re}(\beta)+(1-\omega) {\rm Im}(\beta)+q_i}{|\beta+[Q]_{ii}|}|\alpha-[Q]_{ii}|<\omega {\rm Re}(\alpha)+(1-\omega) {\rm Im}(\alpha)-q_i,~i=n+1,n+2,\cdots,n+m.~~~~\label{e26} \end{eqnarray} Then $\{H_k\}$ and $\{G_k\}$ quadratically converge to $\Phi$ and $\Psi$, respectively. \end{theorem} \begin{proof} Since $Q_{\omega}\textbf{1}>0$, we can take $\epsilon>0$ sufficiently small such that $$\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})\geq q_i+\epsilon,~~~~i=1,2,\cdots,m+n.$$ Furthermore, we assume that $\epsilon$ is small enough that the inequalities in (\ref{e25}) and (\ref{e26}) still hold strictly with $q_i$ replaced by $q_i+\epsilon$. Let $[\widetilde{Q}]_{ij}= \begin{cases} q_i+\epsilon,&i=j,\\ -|[Q]_{ij}|,&i\neq j, \end{cases}$ and let $\{\widetilde{E}_{k}\}$, $\{\widetilde{F}_{k}\}$, $\{\widetilde{G}_{k}\}$ and $\{\widetilde{H}_{k}\}$ be generated by ADDA applied to the NARE (\ref{e7}) with parameters $\widetilde{\alpha}=\alpha$, $\widetilde{\beta}=\beta$. As $(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha}))\geq \max_{i=n+1,n+2,\cdots,n+m}[\widetilde{Q}]_{ii}$ and $(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta}))\geq \max_{i=1,2,\cdots, n}[\widetilde{Q}]_{ii}$, Theorem \ref{thm10} applies. We need to show that ADDA is well-defined when applied to the NARE (\ref{e1}) in class $\mathbb{H}^\omega$ and that the sequences $\{H_k\}$ and $\{G_k\}$ are bounded.
Following the arguments in \cite{Axe1}, we observe that it is enough to show \begin{eqnarray}\label{e27} \begin{aligned} |E_0|&\leq |\widetilde{E}_0|,&|F_0|&\leq |\widetilde{F}_0|,\\ |G_0|&\leq \widetilde{G}_0,&|H_0|&\leq \widetilde{H}_0, \end{aligned} \end{eqnarray} where $E_0$, $F_0$, $G_0$ and $H_0$ are as in (\ref{e24}). Let $$N={\rm diag}(\widetilde{D}_{\widetilde{\alpha}})|{\rm diag}(\alpha I+D)|^{-1},~~~M={\rm diag}(\widetilde{A}_{\widetilde{\beta}})|{\rm diag}(\beta I+A)|^{-1}.$$ Because $\widetilde{A}\leq A_{\omega}$ and $\widetilde{D}\leq D_{\omega}$, \begin{eqnarray*} \begin{aligned} {\rm diag}(\widetilde{D}_{\widetilde{\alpha}})&={\rm diag}(\widetilde{D})+\omega {\rm Re}(\widetilde{\alpha})I+(1-\omega){\rm Im}(\widetilde{\alpha}) I\\ &\leq\omega {\rm diag}({\rm Re}(D))+(1-\omega){\rm diag}({\rm Im}(D))+\omega {\rm Re}(\alpha)I+(1-\omega){\rm Im}(\alpha) I\\ &=\omega({\rm diag}({\rm Re}(D))+{\rm Re}(\alpha) I)+(1-\omega)({\rm diag}({\rm Im}(D))+{\rm Im}(\alpha) I)\\ &\leq \sqrt{({\rm diag}({\rm Re}(D))+{\rm Re}(\alpha) I)^2+({\rm diag}({\rm Im}(D))+{\rm Im}(\alpha) I)^2}\\ &=|{\rm diag}(D+\alpha I)|=|{\rm diag}(D_{\alpha})|.
\end{aligned} \end{eqnarray*} Then $0\leq N\leq I_n$, $0\leq M\leq I_m$, and the inequalities (\ref{e25}) and (\ref{e26}) in Theorem \ref{thm12} can be written as \begin{eqnarray} N|{\rm diag}(\beta I-D)|\leq {\rm diag}((\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I-\widetilde{D}),\label{e28}\\ M|{\rm diag}(\alpha I-A)|\leq {\rm diag}((\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I-\widetilde{A}).\label{e29} \end{eqnarray} Because $\widetilde{A}_{\widetilde{\beta}},\widetilde{D}_{\widetilde{\alpha}},\widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}$ and $\widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}$ are nonsingular M-matrices, it follows from Lemma \ref{thm11} that $$|(A_{\beta})^{-1}|\leq (\widetilde{A}_{\widetilde{\beta}})^{-1}M\leq (\widetilde{A}_{\widetilde{\beta}})^{-1},~~~~|(D_{\alpha})^{-1}|\leq (\widetilde{D}_{\widetilde{\alpha}})^{-1}N\leq (\widetilde{D}_{\widetilde{\alpha}})^{-1}.$$ Thus \begin{eqnarray*} \begin{aligned} |W_{\alpha\beta}^{-1}|&=|(A_{\beta}-BD_{\alpha}^{-1}C)^{-1}|\\ &\leq |(I-A_{\beta}^{-1}BD_{\alpha}^{-1}C)^{-1}||A_{\beta}^{-1}|\\ &\leq (I-\widetilde{A}_{\widetilde{\beta}}^{-1}\widetilde{B}\widetilde{D}_{\widetilde{\alpha}}^{-1}\widetilde{C})^{-1}\widetilde{A}_{\widetilde{\beta}}^{-1}M\\ &=\widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}M\leq \widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}. 
\end{aligned} \end{eqnarray*} Similarly, $$|V_{\alpha\beta}^{-1}|\leq \widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}N\leq\widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}^{-1} .$$ From the inequality (\ref{e28}), we have \begin{eqnarray*} \begin{aligned} |E_0|&=|V_{\alpha\beta}^{-1}(D-\beta I-CA_{\beta}^{-1}B)|\\ &\leq \widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}N(|\beta I-D|+|CA_{\beta}^{-1}B|)\\ &\leq \widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}(N|\beta I-D|+\widetilde{C}\widetilde{A}_{\widetilde{\beta}}^{-1}\widetilde{B})\\ &\leq \widetilde{V}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}((\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I-\widetilde{D}+\widetilde{C}\widetilde{A}_{\widetilde{\beta}}^{-1}\widetilde{B})\\ &=-\widetilde{E}_0=|\widetilde{E}_0| \end{aligned} \end{eqnarray*} Similarly, $|F_0|\leq -\widetilde{F}_0=|\widetilde{F}_0|$ by (\ref{e29}). It follows from (\ref{e24}) that \begin{eqnarray*} \begin{aligned} |G_0|&= |(\alpha+\beta)D_{\alpha}^{-1}CW_{\alpha\beta}^{-1}|\\ &=|D_{\alpha}^{-1}C(I-F_0)|\\ &\leq|D_{\alpha}^{-1}||C||I-F_0|\\ &\leq\widetilde{D}_{\widetilde{\alpha}}^{-1}\widetilde{C}(I+|F_0|)\\ &\leq \widetilde{D}_{\widetilde{\alpha}}^{-1}\widetilde{C}(I-\widetilde{F}_0)\\ &=(\omega {\rm Re}(\widetilde{\alpha}+\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\alpha}+\widetilde{\beta}))\widetilde{D}_{\widetilde{\alpha}}^{-1}\widetilde{C}\widetilde{W}_{\widetilde{\alpha}\widetilde{\beta}}^{-1}\\ &=\widetilde{G}_0 \end{aligned} \end{eqnarray*} Similarly, $|H_0|\leq \widetilde{H}_0$. To complete the proof, we also need to show that $\rho(\mathcal{R})\rho(\mathcal{S})<1$. 
We have $$|(R+\alpha I)^{-1}|\leq (\widetilde{R}+(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I)^{-1}N$$ and $$|R-\beta I|\leq -N^{-1}(\widetilde{R}-(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I).$$ Then \begin{eqnarray*} \begin{aligned} |\mathcal{R}|&\leq |R-\beta I||(R+\alpha I)^{-1}|\leq \\ &-N^{-1}(\widetilde{R}-(\omega {\rm Re}(\widetilde{\beta})+(1-\omega){\rm Im}(\widetilde{\beta})) I)(\widetilde{R}+(\omega {\rm Re}(\widetilde{\alpha})+(1-\omega){\rm Im}(\widetilde{\alpha})) I)^{-1}N\\ &=-N^{-1}\widetilde{\mathcal{R}}N. \end{aligned} \end{eqnarray*} Similarly, $|\mathcal{S}|\leq-M^{-1}\widetilde{\mathcal{S}}M$. Therefore $\rho(\mathcal{R})\rho(\mathcal{S})\leq\rho(\widetilde{\mathcal{R}})\rho(\widetilde{\mathcal{S}})<1$. \end{proof} Notice that ADDA reduces to SDA when $\alpha=\beta$. We obtain Theorem \ref{thm13} below, which states the convergence of SDA applied to the NARE (\ref{e1}) in class $\mathbb{H}^\omega$. \begin{theorem}\label{thm13} Let $\Phi$ and $\Psi$ be the extremal solutions of the NARE {\rm(\ref{e1})} and its dual NARE {\rm(\ref{e3})} in class $\mathbb{H}^\omega$, respectively. Suppose that $Q_{\omega}\textbf{1} > 0$. Let $\{E_k\}$, $\{F_k\}$, $\{G_k\}$ and $\{H_k\}$ be generated by SDA applied to the NARE {\rm(\ref{e1})} with the parameter $\alpha$ satisfying \begin{eqnarray*} \frac{\omega {\rm Re}(\alpha)+(1-\omega) {\rm Im}(\alpha)+q_i}{|\alpha+[Q]_{ii}|}|\alpha-[Q]_{ii}|<\omega {\rm Re}(\alpha)+(1-\omega) {\rm Im}(\alpha)-q_i,~~~~i=1,2,\cdots,m+n. \end{eqnarray*} Then the sequences $\{H_k\}$ and $\{G_k\}$ quadratically converge to $\Phi$ and $\Psi$, respectively. \end{theorem} \subsection{Strategies of choosing parameters $\alpha$ and $\beta$} In principle, (\ref{e25}) and (\ref{e26}) tell us how to choose the parameters $\alpha$ and $\beta$, but the admissible ranges of these parameters are not intuitive.
We now make these ranges explicit in terms of the real and imaginary parts of $\alpha$ and $\beta$. Next, we will search for the parameters $\alpha$ and $\beta$ in the ray $R(z_{\omega}^\bot)$. Let $\alpha=tz_{\omega}^\bot$ and $\beta=\gamma z_{\omega}^\bot$ with $t>0$, $\gamma>0$. Then we have ${\rm Re}(\alpha)=t\omega$, ${\rm Im}(\alpha)=t(1-\omega)$ and ${\rm Re}(\beta)=\gamma\omega$, ${\rm Im}(\beta)=\gamma(1-\omega)$. We can rewrite (\ref{e25}) and (\ref{e26}) as \begin{eqnarray*} \begin{aligned} \frac{t\omega^2+t(1-\omega)^2+q_i}{|\alpha+[Q]_{ii}|}|\beta-[Q]_{ii}|&<\gamma\omega^2+\gamma(1-\omega)^2-q_i,&i&=1,2,\cdots,n.\\ \frac{\gamma\omega^2+\gamma(1-\omega)^2+q_i}{|\beta+[Q]_{ii}|}|\alpha-[Q]_{ii}|&<t\omega^2+t(1-\omega)^2-q_i, &i&=n+1,n+2,\cdots,n+m. \end{aligned} \end{eqnarray*} Let $\varpi=\omega^2+(1-\omega)^2$ with $\omega\in[0,1]$; then $\varpi>0$ and \begin{eqnarray*} \begin{aligned} &(t^2\varpi^2+q_i^2+2t\varpi q_i)((\gamma\omega-{\rm Re}([Q]_{ii}))^2+(\gamma(1-\omega)-{\rm Im}([Q]_{ii}))^2)&&\\ <&(\gamma^2\varpi^2+q_i^2-2\gamma \varpi q_i)((t\omega+{\rm Re}([Q]_{ii}))^2+(t(1-\omega)+{\rm Im}([Q]_{ii}))^2),~~~\gamma \varpi>q_i,~~~i=1,2,\cdots,n.\\ &(\gamma ^2\varpi^2+q_i^2+2\gamma \varpi q_i)((t\omega-{\rm Re}([Q]_{ii}))^2+(t(1-\omega)-{\rm Im}([Q]_{ii}))^2)&&\\ <&(t^2\varpi^2+q_i^2-2t\varpi q_i)((\gamma \omega+{\rm Re}([Q]_{ii}))^2+(\gamma (1-\omega)+{\rm Im}([Q]_{ii}))^2),~~~t\varpi>q_i,~~~i=n+1,n+2,\cdots,n+m.
\end{aligned} \end{eqnarray*} These formulas are equivalent to \begin{eqnarray*} \begin{aligned} (\varpi|[Q]_{ii}|^2-q_i^2)(t\varpi-\gamma \varpi)<&2(t\gamma \varpi^2-q_i(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})))(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i)\\ &-2q_i(\omega {\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2,~~~~\gamma \varpi>q_i,~~~~i=1,2,\cdots,n.\\ (\varpi|[Q]_{ii}|^2-q_i^2)(\gamma \varpi-t\varpi)<&2(t\gamma \varpi^2-q_i(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})))(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i)\\ &-2q_i(\omega {\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2,~~~~t\varpi>q_i,~~~~i=n+1,n+2,\cdots,n+m. \end{aligned} \end{eqnarray*} Let $$p_{\omega,i}=\frac{\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})+q_i}{2}+\frac{(\omega{\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2}{2(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i)},~~~~i=1,2,\cdots,n+m,$$ $$s_{\omega,i}=\frac{\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i}{2}+\frac{(\omega{\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2}{2(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i)},~~~~i=1,2,\cdots,n+m.$$ Then we have \begin{eqnarray*} \begin{aligned} (t\varpi+p_{\omega,i})(\gamma \varpi-p_{\omega,i})&>-s_{\omega,i}^2,&\gamma \varpi&>q_i,&i&=1,2,\cdots,n,\\ (\gamma \varpi+p_{\omega,i})(t\varpi-p_{\omega,i})&>-s_{\omega,i}^2,&t\varpi&>q_i,&i&=n+1,n+2,\cdots,n+m, \end{aligned} \end{eqnarray*} equivalently, \begin{align} &(t+\frac{p_{\omega,i}}{\varpi})(\gamma-\frac{p_{\omega,i}}{\varpi})>-\frac{s_{\omega,i}^2}{\varpi^2},~~~~\gamma>\frac{q_i}{\varpi}, ~~~~i=1,2,\cdots,n,\label{e30}\\ &(\gamma+\frac{p_{\omega,i}}{\varpi})(t-\frac{p_{\omega,i}}{\varpi})>-\frac{s_{\omega,i}^2}{\varpi^2},~~~~t>\frac{q_i}{\varpi},~~~~i=n+1,n+2,\cdots,n+m.\label{e31} \end{align} \subsubsection{Immediate choice of the resulting parameters $t$ and $\gamma$}\ Let
$$\psi_1=\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}}{\varpi}=\frac{\max_{i=n+1,n+2,\cdots,m+n}p_{\omega,i}}{\varpi},~~~~\psi_2=\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}}{\varpi}=\frac{\max_{i=1,2,\cdots,n}p_{\omega,i}}{\varpi}.$$ It is easy to show that $\frac{p_{\omega,i}}{\varpi}>\frac{q_i}{\varpi}$ since $\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})>q_i$, $i=1,2,\cdots,m+n$. Hence Theorem \ref{thm14} and Theorem \ref{thm15} below follow immediately. Theorem \ref{thm14} states the convergence results for ADDA, and Theorem \ref{thm15} states the convergence results for SDA. \begin{theorem}\label{thm14} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}>0$. Let $\Phi$ and $\Psi$ be as in Theorem \ref{thm5}. Apply ADDA to the NARE (\ref{e1}) with $\alpha=tz_{\omega}^\bot$ and $\beta=\gamma z_{\omega}^\bot.$ If $$t\geq\psi_1,~~~~\gamma\geq\psi_2,$$ then the sequences $\{E_k\}$, $\{F_k\}$, $\{H_k\}$, $\{G_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ quadratically converge to $\Phi$ and $\Psi$, respectively. \end{theorem} \begin{theorem}\label{thm15} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}>0$. Let $\Phi$ and $\Psi$ be as in Theorem \ref{thm5}. Apply SDA to the NARE (\ref{e1}) with $\alpha=tz_{\omega}^\bot$. If $$t\geq\max\{\psi_1,\psi_2\},$$ then the sequences $\{E_k\}$, $\{F_k\}$, $\{H_k\}$, $\{G_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ quadratically converge to $\Phi$ and $\Psi$, respectively. \end{theorem} \subsubsection{Choice of the parameters $t$ and $\gamma$ using a preprocessing procedure}\ For a given NARE, $\frac{\max_{i=1,2,\cdots,m+n}p_{\omega,i}}{\varpi}$ may be very large, which results in slow convergence. However, it is often possible to transform the given NARE into a new NARE for which $\frac{\max_{i=1,2,\cdots,m+n}p_{\omega,i}}{\varpi}$ is much smaller and the solution sets of the two NAREs are related in a simple way.
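Before describing this transformation, we illustrate the doubling iteration itself. The following is a minimal numerical sketch (not part of the analysis) of SDA with a single real shift, i.e., the case $\alpha=\beta$ real as for class $\mathbb{M}$; the example matrices are hypothetical and taken in class $\mathbb{M}$, where convergence is guaranteed.

```python
import numpy as np

def sda_nare(A, B, C, D, alpha, kmax=30, tol=1e-13):
    # SDA (ADDA with equal parameters): initialization with a single real
    # shift alpha, then the doubling recursion for (E_k, F_k, G_k, H_k);
    # H_k approximates Phi and G_k approximates the dual solution Psi.
    m, n = A.shape[0], D.shape[0]
    A_a = A + alpha * np.eye(m)
    D_a = D + alpha * np.eye(n)
    W = A_a - B @ np.linalg.solve(D_a, C)           # W_{alpha,alpha}
    V = D_a - C @ np.linalg.solve(A_a, B)           # V_{alpha,alpha}
    E = np.eye(n) - 2.0 * alpha * np.linalg.inv(V)
    F = np.eye(m) - 2.0 * alpha * np.linalg.inv(W)
    G = 2.0 * alpha * np.linalg.solve(D_a, C) @ np.linalg.inv(W)
    H = 2.0 * alpha * np.linalg.inv(W) @ B @ np.linalg.inv(D_a)
    for _ in range(kmax):
        IGH = np.linalg.inv(np.eye(n) - G @ H)
        IHG = np.linalg.inv(np.eye(m) - H @ G)
        dH = F @ IHG @ H @ E                        # increment H_{k+1} - H_k
        E, F, G = E @ IGH @ E, F @ IHG @ F, G + E @ IGH @ G @ F
        H = H + dH
        if np.linalg.norm(dH) <= tol:
            break
    return H, G

# Hypothetical class-M example; alpha at least the largest diagonal entry.
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
D = np.array([[3.0, -1.0], [-1.0, 3.0]])
B = 0.5 * np.ones((2, 2))
C = 0.5 * np.ones((2, 2))
alpha = max(np.diag(A).max(), np.diag(D).max())
Phi, Psi = sda_nare(A, B, C, D, alpha)
res_phi = np.linalg.norm(Phi @ C @ Phi - Phi @ D - A @ Phi + B)
res_psi = np.linalg.norm(Psi @ B @ Psi - Psi @ A - D @ Psi + C)
```

A handful of doubling steps already drives both residuals to machine precision, reflecting the quadratic convergence stated above.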
We first rewrite $\frac{p_{\omega,i}}{\varpi}$, $i=1,2,\cdots,m+n,$ in a more compact form: $$\frac{p_{\omega,i}}{\varpi}=\frac{\varpi|[Q]_{ii}|^2-q_i^2}{2\varpi(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i)},~~i=1,2,\cdots,m+n,$$ where $\varpi=\omega^2+(1-\omega)^2$. For the given NARE (\ref{e1}) with $Q_\omega \textbf{1}>0$ in class $\mathbb{H}^\omega$, we consider the new NARE \begin{eqnarray}\label{e32} X(\chi C)X-X(\chi D)-(\chi A)X+\chi B=0, \end{eqnarray} where $\chi$ is on the unit circle. Obviously, the new NARE (\ref{e32}) has the same solution set as the original one. The matrix corresponding to the NARE (\ref{e32}) is \begin{eqnarray*} Q_{\chi}= \left( \begin{matrix} \chi D&-\chi C\\ -\chi B&\chi A \end{matrix} \right). \end{eqnarray*} Since the $\omega$-comparison matrix $(Q_{\chi})_{\omega}$ of $Q_{\chi}$ is \begin{eqnarray*} (Q_{\chi})_{\omega}= \left( \begin{matrix} (\chi D)_\omega&-|\chi C|\\ -|\chi B|&(\chi A)_\omega \end{matrix} \right), \end{eqnarray*} we have \begin{eqnarray}\label{e33} \begin{aligned} \frac{p_{\omega,i}^\chi}{\varpi}&=\frac{\varpi|\chi [Q]_{ii}|^2-(|\chi| q_i)^2}{2\varpi(\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-|\chi|q_i)}\\ &=\frac{\varpi|[Q]_{ii}|^2-q_i^2}{2\varpi(\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i)},~~~i=1,2,\cdots,m+n, \end{aligned} \end{eqnarray} where $\chi$ is such that $\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})>q_i,i=1,2,\cdots,m+n$ (so the NARE (\ref{e32}) is still in class $\mathbb{H}^\omega$). Note that \begin{eqnarray*} \begin{aligned} \varpi|[Q]_{ii}|^2-q_i^2&=(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})+q_i)(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i)\\ &~~~+(\omega{\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2>0\\ \end{aligned} \end{eqnarray*} since $q_i\geq0$ and $\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i>0$ for $i=1,2,\cdots,m+n$.
Thus, for each fixed $i$, in order to minimize the positive quantity $\frac{p_{\omega,i}^\chi}{\varpi}$, we need to search for a complex number $\chi$ such that $\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i$ attains the maximal value. Let $\chi=e^{-{\rm j}\vartheta}$ and $[Q]_{ii}=|[Q]_{ii}|e^{{\rm j}\phi_i},i=1,2,\cdots,m+n$. Then \begin{eqnarray*} \begin{aligned} &~~~~\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i\\ &=|[Q]_{ii}|(\omega {\rm Re}(e^{{\rm j}(\phi_i-\vartheta)})+(1-\omega){\rm Im}(e^{{\rm j}(\phi_i-\vartheta)}))-q_i\\ &=|[Q]_{ii}|(\omega {\rm cos}(\phi_i-\vartheta)+(1-\omega){\rm sin}(\phi_i-\vartheta))-q_i\\ &=|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}(\frac{\omega }{\sqrt{\omega^2+(1-\omega)^2}}{\rm cos}(\phi_i-\vartheta)+\frac{1-\omega }{\sqrt{\omega^2+(1-\omega)^2}}{\rm sin}(\phi_i-\vartheta))-q_i,\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~i=1,2,\cdots,m+n. \end{aligned} \end{eqnarray*} Let ${\rm cos}(\varphi)=\frac{\omega }{\sqrt{\omega^2+(1-\omega)^2}}$, and further ${\rm sin}(\varphi)=\frac{1-\omega }{\sqrt{\omega^2+(1-\omega)^2}}$, then for $i=1,2,\cdots,m+n,$ \begin{eqnarray*} \begin{aligned} \omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i&=|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}{\rm cos}(\phi_i-\varphi-\vartheta)-q_i, \end{aligned} \end{eqnarray*} which attains the maximum $|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}-q_i>0$ at $\vartheta=\phi_i-\varphi$ since $\varpi|[Q]_{ii}|^2-q_i^2>0$. If all complex numbers $[Q]_{ii}$ are on the same line passing through the origin, i.e., $\phi_i$ is constant for all $i$, then a common value $\chi=e^{-{\rm j}(\phi_i-\varphi)}$ will make all $\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i$ attain the maximum $|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}-q_i>0$ for $i=1,2,\cdots,m+n$. In other words, the new NARE with this $\chi$ is in class $\mathbb{H}^{\omega}$. 
In general, we cannot find a fixed $\chi=e^{-{\rm j}\vartheta}$ to minimize $\frac{p_{\omega,i}^\chi}{\varpi}$ for all $i$. So, we let $$f_{i}(\vartheta)=\frac{\varpi|[Q]_{ii}|^2-q_i^2}{\varpi(|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}{\rm cos}(\phi_i-\varphi-\vartheta)-q_i)}, \vartheta\in \mathbb{R}, $$ and try to find $\vartheta$ such that $$f(\vartheta)=\max_{i=1,2,\cdots,m+n}f_i(\vartheta)$$ is minimized, subject to the condition that $\omega {\rm Re}(e^{-{\rm j}\vartheta} [Q]_{ii})+(1-\omega){\rm Im}(e^{-{\rm j}\vartheta} [Q]_{ii})-q_i>0$ for all $i$. \begin{theorem}\label{thm16} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_{\omega}\textbf{1}>0$. Then the function $f(\vartheta)$ has a unique minimizer $\vartheta_*\in (-\pi,\pi)$ and $\vartheta_*$ can be computed by a bisection procedure. \end{theorem} \begin{proof} The existence of a minimizer is obvious. Next we describe an unusual bisection procedure that locates the unique minimizer. The procedure is based on the following simple observation. Let $\phi_i$ be the angles of the complex numbers $[Q]_{ii}$, $i=1,2,\cdots,m+n$. Note that $\varphi-\frac{\pi}{2}<\phi_i<\varphi+\frac{\pi}{2}$ ($-\frac{\pi}{2}<\phi_i-\varphi<\frac{\pi}{2}$) and $f_i(\vartheta)$ is minimized at $\vartheta=\phi_i-\varphi$. Moreover, $f_i(\vartheta)$ is strictly decreasing to the left of $\phi_i-\varphi$ and strictly increasing to the right of $\phi_i-\varphi$. Let $d=\max_{i=1,2,\cdots,m+n}f_i(0)>0$. For each $i$ let $\Delta_i$ be the set of all values $\vartheta\in (-\pi,\pi)$ such that $0<f_i(\vartheta)\leq d$. It is clear that $\Delta_i$ is a closed interval containing 0. Let $\Delta=\bigcap_{i=1,2,\cdots,m+n}\Delta_i$. Then $\Delta$ is also a closed interval containing 0. Now $\min f(\vartheta)=\min_{\vartheta\in \Delta} f(\vartheta)$ is attained at some $\vartheta_*\in \Delta$ since $f$ is continuous on $\Delta$. The interval $\Delta$ above can be given explicitly as follows.
Let $\underline{\delta_i}\leq \overline{\delta_i}$ be the two (usually different) solutions of $f_i(\vartheta)=d$. Namely, $\underline{\delta_i}=\phi_i-\varphi-\Psi_i$ and $\overline{\delta_i}=\phi_i-\varphi+\Psi_i$ with $$\Psi_i={\rm arccos}(\frac{1}{|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}}(q_i+\frac{\varpi|[Q]_{ii}|^2-q_i^2}{d\varpi})),~~~~ \Psi_i\in [0,\frac{\pi}{2}). $$ Now, $\Delta_i=[\underline{\delta_i},\overline{\delta_i}]$ and $\Delta=[\max \underline{\delta_i},\min \overline{\delta_i}]$. The interval $\Delta$ may be large even when all $\phi_i$ are equal to $\phi_*$, in which case we know that $\phi_*-\varphi$ is the unique minimizer of $f(\vartheta)$. To avoid using an unnecessarily large search interval in situations like this, we let $\phi_{\min}=\min \phi_i$ and $\phi_{\max}=\max \phi_i$ and we claim that any minimizer of $f(\vartheta)$ must be in $[\phi_{\min}-\varphi,\phi_{\max}-\varphi]$. In fact, $$f_{i}(\vartheta)=\frac{\varpi|[Q]_{ii}|^2-q_i^2}{\varpi(\omega {\rm Re}(e^{-{\rm j}\vartheta} [Q]_{ii})+(1-\omega){\rm Im}(e^{-{\rm j}\vartheta} [Q]_{ii})-q_i)}=\frac{\varpi|[Q]_{ii}|^2-q_i^2}{\varpi(|[Q]_{ii}|\sqrt{\omega^2+(1-\omega)^2}{\rm cos}(\phi_i-\varphi-\vartheta)-q_i)}.$$ For any $\vartheta\in (-\pi, \phi_{\min}-\varphi)$ such that $f_i(\vartheta)>0$ for each $i$, we have $0\leq\phi_i-\phi_{\min}<\phi_i-\varphi-\vartheta<\pi+\phi_i-\varphi$ and $f_i(\vartheta)>f_{i}(\phi_{\min}-\varphi)>0$ for each $i$. So any minimizer $\vartheta_*$ must satisfy $\vartheta_*\geq \phi_{\min}-\varphi$. Similarly we can show that $\vartheta_*\leq \phi_{\max}-\varphi$. The initial search interval for a minimizer of $f(\vartheta)$ is then $[\vartheta_{\min},\vartheta_{\max}]$, where $$\vartheta_{\min}=\max\{\max\underline{\delta_i},\phi_{\min}-\varphi\},~~~~\vartheta_{\max}=\min\{\min\overline{\delta_i},\phi_{\max}-\varphi\}. $$ The first step of the bisection procedure is to take $\vartheta_1=\frac{1}{2}(\vartheta_{\min}+\vartheta_{\max})$.
Let $$a_1=\max_{\phi_i-\varphi>\vartheta_1}f_i(\vartheta_1),~~ b_1=\max_{\phi_i-\varphi<\vartheta_1}f_{i}(\vartheta_1),~~ c_1=\max_{\phi_i-\varphi=\vartheta_1}f_i(\vartheta_1),$$ where the maximum over an empty set is defined to be $0$. If $c_1\geq \max\{a_1,b_1\}$, then $\vartheta_1$ is the unique minimizer. Now suppose $c_1<\max\{a_1,b_1\}$. If $a_1=b_1$, then $\vartheta_1$ is still the unique minimizer. If $a_1\neq b_1$, any minimizer must be in $[\vartheta_1,\vartheta_{\max}]$ if $a_1>b_1$ and must be in $[\vartheta_{\min},\vartheta_1]$ if $a_1<b_1$. So for the second step of the bisection procedure we take \begin{eqnarray*} \begin{aligned} \vartheta_2= \begin{cases} \frac{1}{2}(\vartheta_1+\vartheta_{\max}),& if~~~~ a_1>b_1,\\ \frac{1}{2}(\vartheta_1+\vartheta_{\min}),& if~~~~ a_1<b_1. \end{cases} \end{aligned} \end{eqnarray*} Let $$a_2=\max_{\phi_i-\varphi>\vartheta_2}f_i(\vartheta_2),~~ b_2=\max_{\phi_i-\varphi<\vartheta_2}f_{i}(\vartheta_2),~~ c_2=\max_{\phi_i-\varphi=\vartheta_2}f_i(\vartheta_2).$$ As before we can determine whether $\vartheta_2$ is the unique minimizer. If not, for $a_1>b_1$ any minimizer must be in $[\vartheta_2,\vartheta_{\max}]$ if $a_2>b_2$ and must be in $[\vartheta_1,\vartheta_{2}]$ if $a_2<b_2$; for $a_1<b_1$ any minimizer must be in $[\vartheta_2,\vartheta_{1}]$ if $a_2>b_2$ and must be in $[\vartheta_{\min},\vartheta_{2}]$ if $a_2<b_2$. We can then continue the bisection procedure. Unless the unique minimizer is found in a finite number of steps, we get a sequence $\{\vartheta_k\}$ with ${\rm lim}_{k\rightarrow \infty}\vartheta_k=\vartheta_*$ and $|\vartheta_k-\vartheta_*|\leq\frac{\vartheta_{\max}-\vartheta_{\min}}{2^k}<\frac{\pi}{2^k}$. By construction, $\vartheta_*$ is the only candidate for the minimizer. So it is the unique minimizer since the existence is already known.
\end{proof} In step $k$ of the above bisection procedure, $\phi_i-\varphi$, $i=1,2,\cdots,m+n$, are divided into three parts: one with $\phi_i-\varphi>\vartheta_k$, one with $\phi_i-\varphi<\vartheta_k$, and the other with $\phi_i-\varphi=\vartheta_k$. We have assumed that this division is done in exact arithmetic. In practice this division is done by a computer and may be different from the division in exact arithmetic when some $\phi_i-\varphi$ are extremely close to $\vartheta_k$. But this will have very little effect on the accuracy of the computed $\vartheta_*$. When $m=n$, our bisection procedure requires $O(n)$ operations per step, while the doubling algorithm requires $O(n^3)$ operations per step. We have already seen in the proof of Theorem \ref{thm16} that the $k$-th approximation $\vartheta_k$ to the minimizer $\vartheta_*$ satisfies $|\vartheta_k-\vartheta_*|<\frac{\pi}{2^k}$. So when $n$ is large, the computational work for using the bisection procedure to approximate $\vartheta_*$ to machine precision is negligible compared to the work for the doubling algorithm. In practice, there is no need to compute $\vartheta_*$ so accurately. If the $k$-th approximation is obtained by $\vartheta_k=\frac{1}{2}(\vartheta_a +\vartheta_b)$, we will stop the bisection if $|\vartheta_b-\vartheta_a|<{\rm tol}$. We will take ${\rm tol}=10^{-6}$ in our numerical experiments. The computational work is negligible. A simple preprocessing procedure for the NARE (\ref{e1}) is then as follows: Use the bisection method described in the proof of Theorem \ref{thm16} to determine a good approximation $\widetilde{\vartheta}$ to $\vartheta_*$, let $\chi=e^{-{\rm j}\widetilde{\vartheta}}$ and transform the NARE (\ref{e1}) to the NARE (\ref{e32}). Let $$\widetilde{\psi}_1=\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}^\chi}{\varpi},~~~~\widetilde{\psi}_2=\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}^\chi}{\varpi},$$ where $\frac{p_{\omega,i}^\chi}{\varpi}$, $i=1,2,\cdots,m+n$, are as in (\ref{e33}).
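The bisection procedure of Theorem \ref{thm16} can be sketched as follows (an illustrative Python sketch, not the full procedure: we assume $q_i$ is the $i$-th off-diagonal absolute row sum of $Q$, and for simplicity we use the initial interval $[\phi_{\min}-\varphi,\phi_{\max}-\varphi]$ without intersecting with the intervals $\Delta_i$; the function name is ours):

```python
import numpy as np

def theta_star(Q, omega, tol=1e-6):
    """Sketch of the bisection for the preprocessing angle theta_*.

    Assumption: q_i is the i-th off-diagonal absolute row sum of Q, and the
    iterates stay in the region where all f_i are positive.
    """
    varpi = omega**2 + (1 - omega)**2
    d = np.diag(Q)
    q = np.sum(np.abs(Q), axis=1) - np.abs(d)
    phi = np.angle(d)                      # angles of the diagonal entries
    varphi = np.arctan2(1 - omega, omega)  # cos(varphi) = omega / sqrt(varpi)

    def f(theta):  # the vector (f_1(theta), ..., f_{m+n}(theta)) from the text
        return (varpi * np.abs(d)**2 - q**2) / (
            varpi * (np.abs(d) * np.sqrt(varpi) * np.cos(phi - varphi - theta) - q))

    lo, hi = (phi - varphi).min(), (phi - varphi).max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        fi = f(theta)
        right = phi - varphi > theta
        left = phi - varphi < theta
        a = fi[right].max() if right.any() else 0.0
        b = fi[left].max() if left.any() else 0.0
        c = fi[~right & ~left].max() if (~right & ~left).any() else 0.0
        if c >= max(a, b) or a == b:
            return theta               # the unique minimizer is found exactly
        lo, hi = (theta, hi) if a > b else (lo, theta)
    return 0.5 * (lo + hi)
```

When all diagonal angles $\phi_i$ coincide, the initial interval degenerates to the single point $\phi_i-\varphi$ and the sketch returns it immediately, consistent with the closed-form case discussed above.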
Then we can apply ADDA to (\ref{e32}) with $t>\widetilde{\psi}_1$ and $\gamma>\widetilde{\psi}_2$. We often have the situation that the two largest values of $f_i(\widetilde{\vartheta})$, say $f_{i_1}(\widetilde{\vartheta})$ and $f_{i_2}(\widetilde{\vartheta})$ with $i_1<i_2$, are very close. If this happens for $i_1\in\{1,2,\cdots,n\}$ and $i_2\in\{n+1,n+2,\cdots,m+n\}$, then $\widetilde{\psi}_1\approx\widetilde{\psi}_2$ and we may take $t=\gamma$ for ADDA. In this case, ADDA is reduced to SDA. While the solution sets for the NAREs (\ref{e1}) and (\ref{e32}) are the same, there are many solutions in the set. We still need to make sure that the required solutions $\Phi$ and $\Psi$ are obtained when ADDA is applied to the transformed equation (\ref{e32}). \begin{theorem}\label{thm17} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}> 0$. Let $\chi$ be any unimodular number such that $\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i>0$ for all $i\in\{1,2,\cdots,m+n\}$ ($\chi=e^{-{\rm j}\widetilde{\vartheta}}$ in particular). Apply ADDA to the NARE (\ref{e32}) with $\alpha=tz_{\omega}^\bot$ and $\beta=\gamma z_{\omega}^\bot$. If $$t\geq \widetilde{\psi}_1,~~~~\gamma\geq\widetilde{\psi}_2,$$ then the sequences $\{E_k\}$, $\{F_k\}$, $\{G_k\}$ and $\{H_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ converge quadratically to $\Phi$ and $\Psi$, respectively. \end{theorem} ADDA as stated in Theorem \ref{thm17} will be denoted by pADDA. \begin{theorem}\label{thm28} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}> 0$. Let $\chi$ be any unimodular number such that $\omega {\rm Re}(\chi [Q]_{ii})+(1-\omega){\rm Im}(\chi [Q]_{ii})-q_i>0$ for all $i\in\{1,2,\cdots,m+n\}$ ($\chi=e^{-{\rm j}\widetilde{\vartheta}}$ in particular). Apply SDA to the NARE (\ref{e32}) with $\alpha=tz_{\omega}^\bot$.
If $$t\geq\max\{\widetilde{\psi}_1,\widetilde{\psi}_2\},$$ then the sequences $\{E_k\}$, $\{F_k\}$, $\{G_k\}$ and $\{H_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ converge quadratically to $\Phi$ and $\Psi$, respectively. \end{theorem} SDA as stated in Theorem \ref{thm28} will be denoted by pSDA. \subsubsection{Further choice of the parameters $t$ and $\gamma$}\ When $(\omega{\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2$ is large compared to $\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i$ for some $i$, the numbers $\psi_1$ and/or $\psi_2$ will be large and the convergence of ADDA may be slow. To improve the performance of ADDA, we will find smaller parameters from the convergence region given by (\ref{e25}) and (\ref{e26}). The idea is to use the straight line $\gamma=ct$ to cut the convergence region, where the slope $c>0$ is to be chosen properly. \begin{theorem}\label{thm18} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}>0$. Let $\Phi$ and $\Psi$ be as in Theorem \ref{thm5}. Apply ADDA to the NARE (\ref{e1}) with $\alpha=tz_{\omega}^\bot$ and $\beta=\gamma z_{\omega}^\bot.$ Assume that $\gamma=ct$. If \begin{eqnarray*} \begin{aligned} t>\max\{\eta_{\omega,1}(c),\eta_{\omega,2}(c)\}, \end{aligned} \end{eqnarray*} where $$\eta_{\omega,1}(c)=\max_{i=1,2,\cdots,n}r_{\omega,i}(c),~~~~\eta_{\omega,2}(c)=\max_{i=n+1,n+2,\cdots,m+n}r_{\omega,i}(c)$$ with \begin{eqnarray*} \begin{aligned} r_{\omega,i}(c)&=\frac{-(c-1)p_{\omega,i}+\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}{2c\varpi},&i&=1,2,\cdots,n,\\ r_{\omega,i}(c)&=\frac{(c-1)p_{\omega,i}+\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}{2c\varpi},&i&=n+1,n+2,\cdots,m+n,\\ \varpi&=\omega^2+(1-\omega)^2,&& \end{aligned} \end{eqnarray*} then the sequences $\{E_k\}$, $\{F_k\}$, $\{H_k\}$, $\{G_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ converge quadratically to $\Phi$ and $\Psi$, respectively.
\end{theorem} \begin{proof} Since $\gamma=ct$, the inequalities (\ref{e30}) and (\ref{e31}) are equivalent to \begin{eqnarray*} \begin{aligned} ct^2+(c-1)\frac{p_{\omega,i}}{\varpi}t-\frac{p_{\omega,i}^2}{\varpi^2}+\frac{s_{\omega,i}^2}{\varpi^2}&>0,&t&>\frac{q_i}{c\varpi},&i&=1,2,\cdots,n,\\ ct^2-(c-1)\frac{p_{\omega,i}}{\varpi}t-\frac{p_{\omega,i}^2}{\varpi^2}+\frac{s_{\omega,i}^2}{\varpi^2}&>0,&t&>\frac{q_i}{\varpi},&i&=n+1,n+2,\cdots,m+n. \end{aligned} \end{eqnarray*} Let $r_{\omega,i}(c)$ be the larger root of $ct^2+(c-1)\frac{p_{\omega,i}}{\varpi}t-\frac{p_{\omega,i}^2}{\varpi^2}+\frac{s_{\omega,i}^2}{\varpi^2}=0,i=1,2,\cdots,n$, i.e., \begin{eqnarray*} \begin{aligned} r_{\omega,i}(c)&=\frac{-(c-1)\frac{p_{\omega,i}}{\varpi}+\sqrt{(c-1)^2\frac{p_{\omega,i}^2}{\varpi^2}+4c(\frac{p_{\omega,i}^2}{\varpi^2}-\frac{s_{\omega,i}^2}{\varpi^2})}}{2c}\\ &=\frac{-(c-1)p_{\omega,i}+\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}{2c\varpi},~~~~i=1,2,\cdots,n.\\ \end{aligned} \end{eqnarray*} If $t>r_{\omega,i}(c)$, then $ct^2+(c-1)\frac{p_{\omega,i}}{\varpi}t-\frac{p_{\omega,i}^2}{\varpi^2}+\frac{s_{\omega,i}^2}{\varpi^2}>0$. Besides, we can prove $r_{\omega,i}(c)\geq\frac{q_i}{c\varpi},i=1,2,\cdots,n$. In fact, we have \begin{eqnarray*} (c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)-((c-1)p_{\omega,i}+2q_i)^2=4(c+1)q_is_{\omega,i}\geq0, \end{eqnarray*} so \begin{eqnarray*} r_{\omega,i}(c)-\frac{q_i}{c\varpi}=\frac{\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}-((c-1)p_{\omega,i}+2q_i)}{2c\varpi}\geq0. 
\end{eqnarray*} Let $r_{\omega,i}(c)$ be the larger root of $ct^2-(c-1)\frac{p_{\omega,i}}{\varpi}t-\frac{p_{\omega,i}^2}{\varpi^2}+\frac{s_{\omega,i}^2}{\varpi^2}=0,i=n+1,n+2,\cdots,m+n$, i.e., \begin{eqnarray*} \begin{aligned} r_{\omega,i}(c)&=\frac{(c-1)\frac{p_{\omega,i}}{\varpi}+\sqrt{(c-1)^2\frac{p_{\omega,i}^2}{\varpi^2}+4c(\frac{p_{\omega,i}^2}{\varpi^2}-\frac{s_{\omega,i}^2}{\varpi^2})}}{2c}\\ &=\frac{(c-1)p_{\omega,i}+\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}{2c\varpi},~~~~i=n+1,n+2,\cdots,m+n. \end{aligned} \end{eqnarray*} Similarly, if $t>r_{\omega,i}(c)$, then $ct^2-(c-1)\frac{p_{\omega,i}}{\varpi}t-\frac{p_{\omega,i}^2}{\varpi^2}+\frac{s_{\omega,i}^2}{\varpi^2}>0$. We can also prove $r_{\omega,i}(c)\geq\frac{q_i}{\varpi},i=n+1,n+2,\cdots,m+n$. In fact, we have \begin{eqnarray*} (c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)-(2cq_i-(c-1)p_{\omega,i})^2=4c(c+1)q_is_{\omega,i}\geq0, \end{eqnarray*} so \begin{eqnarray*} r_{\omega,i}(c)-\frac{q_i}{\varpi}=\frac{\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}-(2cq_i-(c-1)p_{\omega,i})}{2c\varpi}\geq0. \end{eqnarray*} Thus if $t>\max\{\eta_{\omega,1}(c),\eta_{\omega,2}(c)\}$, we have \begin{eqnarray*} \begin{aligned} (t+\frac{p_{\omega,i}}{\varpi})(\gamma-\frac{p_{\omega,i}}{\varpi})&>-\frac{s_{\omega,i}^2}{\varpi^2},&\gamma&>\frac{q_i}{\varpi},& i&=1,2,\cdots,n,\\ (\gamma+\frac{p_{\omega,i}}{\varpi})(t-\frac{p_{\omega,i}}{\varpi})&>-\frac{s_{\omega,i}^2}{\varpi^2},&t&>\frac{q_i}{\varpi},&i&=n+1,n+2,\cdots,m+n, \end{aligned} \end{eqnarray*} so (\ref{e25}) and (\ref{e26}) are satisfied, and hence the sequences $\{E_k\}$, $\{F_k\}$, $\{H_k\}$, $\{G_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ converge quadratically to $\Phi$ and $\Psi$, respectively. \end{proof} Next, we consider how to apply SDA to the NARE (\ref{e1}).
By taking $c=1$, we have \begin{eqnarray*} r_{\omega,i}(1)=\frac{\sqrt{q_i(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})+\frac{(\omega {\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2}{\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i})}}{\varpi},~~~~i=1,2,\cdots,m+n. \end{eqnarray*} \begin{theorem}\label{thm19} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}>0$. Let $\Phi$ and $\Psi$ be as in Theorem \ref{thm5}. Apply SDA to the NARE (\ref{e1}) with $\alpha=tz_{\omega}^\bot$. If $$t>\max_{i=1,2,\cdots,m+n}\tau_{\omega,i}$$ where \begin{eqnarray*} \begin{aligned} \tau_{\omega,i}&=\frac{\sqrt{q_i(\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})+\frac{(\omega {\rm Im}([Q]_{ii})-(1-\omega){\rm Re}([Q]_{ii}))^2}{\omega {\rm Re}([Q]_{ii})+(1-\omega){\rm Im}([Q]_{ii})-q_i})}}{\varpi},~~~~i=1,2,\cdots,m+n,\\ \varpi&=\omega^2+(1-\omega)^2, \end{aligned} \end{eqnarray*} then the sequences $\{E_k\},\{F_k\},\{H_k\},\{G_k\}$ are well-defined and $\{H_k\}$ and $\{G_k\}$ converge quadratically to $\Phi$ and $\Psi$, respectively. \end{theorem} Notice that $\max_{i=1,2,\cdots,m+n}\tau_{\omega,i}<\max_{i=1,2,\cdots,m+n}{\frac{p_{\omega,i}}{\varpi}}$. So we now allow smaller values of the parameter $t$ for SDA. By a more careful choice of $c$ in Theorem \ref{thm18}, we can allow smaller values of the parameters $t$ and $\gamma$ for ADDA. \begin{theorem}\label{thm20} Suppose the NARE (\ref{e1}) is in class $\mathbb{H}^\omega$ and $Q_\omega \textbf{1}>0$. Let $\Phi$ and $\Psi$ be as in Theorem \ref{thm5}. Then there is a unique $c^*>0$ such that $\max_{i=1,2,\cdots,n}r_{\omega,i}(c^*)=\max_{i=n+1,n+2,\cdots,m+n}r_{\omega,i}(c^*)$. Let $t^*=\max_{i=n+1,n+2,\cdots,m+n}r_{\omega,i}(c^*)$ and $\gamma^*=c^*t^*$. Then for any $t$ and $\gamma$ satisfying (\ref{e30}) and (\ref{e31}), we have $t>t^*$ and $\gamma>\gamma^*$. In particular, $\psi_1>t^*$, $\psi_2>\gamma^*$.
\end{theorem} \begin{proof} For $i=1,2,\cdots,n$, the derivative of $r_{\omega,i}(c)$ with respect to $c$ is $$r_{\omega,i}(c)^{'}=\frac{-(c+1)p_{\omega,i}^2+2cs_{\omega,i}^2-p_{\omega,i}\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}{2c^2\varpi\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}<0$$ since $$(2cs_{\omega,i}^2-(c+1)p_{\omega,i}^2)^2-p_{\omega,i}^2((c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2))=4c^2s_{\omega,i}^2(s_{\omega,i}^2-p_{\omega,i}^2)<0.$$ So $r_{\omega,i}(c)$ ($i=1,2,\cdots,n$) is strictly decreasing on $(0,\infty)$, from $\infty$ to $0$ for each $\omega\in[0,1]$. It follows that $\eta_{\omega,1}(c)$ is strictly decreasing on $(0,\infty)$, from $\infty$ to $0$ for each $\omega\in[0,1]$. Similarly, for $i=n+1,n+2,\cdots,m+n$, the derivative of $r_{\omega,i}(c)$ with respect to $c$ is $$r_{\omega,i}(c)^{'}=\frac{-(c+1)p_{\omega,i}^2+2cs_{\omega,i}^2+p_{\omega,i}\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}{2c^2\varpi\sqrt{(c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2)}}>0$$ since $$p_{\omega,i}^2((c-1)^2p_{\omega,i}^2+4c(p_{\omega,i}^2-s_{\omega,i}^2))-((c+1)p_{\omega,i}^2-2cs_{\omega,i}^2)^2=4c^2s_{\omega,i}^2(p_{\omega,i}^2-s_{\omega,i}^2)>0.$$ So $r_{\omega,i}(c)$ ($i=n+1,n+2,\cdots,m+n$) is strictly increasing on $(0,\infty)$, from $\frac{p_{\omega,i}^2-s_{\omega,i}^2}{\varpi p_{\omega,i}}$ to $\frac{p_{\omega,i}}{\varpi}$ for each $\omega\in[0,1]$. It follows that $\eta_{\omega,2}(c)$ is strictly increasing on $(0,\infty)$, from $\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}^2-s_{\omega,i}^2}{\varpi p_{\omega,i}}$ to $\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}}{\varpi}$ for each $\omega\in[0,1]$. We only need to show that $t>t^*$ and $\gamma>\gamma^*$ for any $t$ and $\gamma$ satisfying (\ref{e30}) and (\ref{e31}).
In fact, $\gamma=ct$ with $c=\gamma/t$ and the conditions (\ref{e30}) and (\ref{e31}) require that $$t>\max\{\max_{i=1,2,\cdots,n}r_{\omega,i}(c),\max_{i=n+1,n+2,\cdots,m+n}r_{\omega,i}(c)\}\geq\max\{\max_{i=1,2,\cdots,n}r_{\omega,i}(c^*),\max_{i=n+1,n+2,\cdots,m+n}r_{\omega,i}(c^*)\}=t^*.$$ For any $c>0$ we replace $t$ by $\gamma/c$ in (\ref{e30}) and (\ref{e31}), and find as before that (\ref{e30}) is equivalent to $\gamma>c\eta_{\omega,1}(c)$ and (\ref{e31}) is equivalent to $\gamma>c\eta_{\omega,2}(c)$. So we need $\gamma>\max\{c\eta_{\omega,1}(c),c\eta_{\omega,2}(c)\}$. As before we can show that $c\eta_{\omega,1}(c)$ is strictly decreasing on $(0,\infty)$, from $\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}}{\varpi}$ to $\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}^2-s_{\omega,i}^2}{\varpi p_{\omega,i}}$ for each $\omega\in[0,1]$, and that $c\eta_{\omega,2}(c)$ is strictly increasing on $(0,\infty)$, from $0$ to $\infty$ for each $\omega\in[0,1]$. It follows that $\gamma>\max\{c\eta_{\omega,1}(c),c\eta_{\omega,2}(c)\}\geq\max\{c^*\eta_{\omega,1}(c^*),c^*\eta_{\omega,2}(c^*)\}=\gamma^*$. \end{proof} From the above proof, we can also see that $$t^*>\underline{t}=\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}^2-s_{\omega,i}^2}{\varpi p_{\omega,i}},$$ $$\gamma^*>\underline{\gamma}=\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}^2-s_{\omega,i}^2}{\varpi p_{\omega,i}}.$$ It follows that $$\underline{\gamma}/\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}}{\varpi}<c^*<\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}}{\varpi}/\underline{t},$$ so $c^*$ can be found by the usual bisection method applied to the function $\eta_{\omega,1}(c)-\eta_{\omega,2}(c)$ on the interval $$[\underline{\gamma}/\max_{i=n+1,n+2,\cdots,m+n}\frac{p_{\omega,i}}{\varpi},\max_{i=1,2,\cdots,n}\frac{p_{\omega,i}}{\varpi}/\underline{t}].$$ While Theorem \ref{thm20} allows us to use smaller parameters $t$ and $\gamma$ for ADDA, the smaller parameters will not always provide better convergence. 
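The computation of $c^*$, $t^*$ and $\gamma^*$ by bisection on $\eta_{\omega,1}(c)-\eta_{\omega,2}(c)$ can be sketched as follows (an illustrative Python sketch; we assume $q_i$ is the $i$-th off-diagonal absolute row sum of $Q$, and we recover $p_{\omega,i}^2-s_{\omega,i}^2$ from the identity $p_{\omega,i}^2-s_{\omega,i}^2=q_i(A_i+B_i^2/(A_i-q_i))$ implied by the $c=1$ formula for $r_{\omega,i}(1)$; the function name is ours):

```python
import numpy as np

def adda_parameters(Q, n, omega, tol=1e-10):
    """Sketch: find c* with eta_1(c*) = eta_2(c*) by bisection, then
    t* = eta_2(c*) and gamma* = c* t* (cf. Theorem 20).

    Assumption: q_i is the i-th off-diagonal absolute row sum of Q.
    """
    varpi = omega**2 + (1 - omega)**2
    d = np.diag(Q)
    q = np.sum(np.abs(Q), axis=1) - np.abs(d)
    A = omega * d.real + (1 - omega) * d.imag
    B = omega * d.imag - (1 - omega) * d.real
    p = (A**2 + B**2 - q**2) / (2 * (A - q))   # since A^2 + B^2 = varpi |Q_ii|^2
    p2_minus_s2 = q * (A + B**2 / (A - q))     # assumed identity (see lead-in)
    disc = lambda c: np.sqrt((c - 1)**2 * p**2 + 4 * c * p2_minus_s2)

    def eta1(c):   # max over i = 1..n; strictly decreasing in c
        return ((-(c - 1) * p[:n] + disc(c)[:n]) / (2 * c * varpi)).max()

    def eta2(c):   # max over i = n+1..m+n; strictly increasing in c
        return (((c - 1) * p[n:] + disc(c)[n:]) / (2 * c * varpi)).max()

    t_lo = (p2_minus_s2[n:] / (varpi * p[n:])).max()   # underline t
    g_lo = (p2_minus_s2[:n] / (varpi * p[:n])).max()   # underline gamma
    lo = g_lo / (p[n:] / varpi).max()
    hi = (p[:n] / varpi).max() / t_lo
    while hi - lo > tol:                # eta1 - eta2 is decreasing in c
        c = 0.5 * (lo + hi)
        lo, hi = (c, hi) if eta1(c) > eta2(c) else (lo, c)
    c_star = 0.5 * (lo + hi)
    t_star = eta2(c_star)
    return c_star, t_star, c_star * t_star
```

Consistent with Theorem \ref{thm20}, the returned $t^*$ and $\gamma^*$ lie strictly below $\psi_1$ and $\psi_2$, respectively.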
Generally speaking, the smaller $\rho(\mathcal{R})\rho(\mathcal{S})$ is, the faster ADDA converges. In this regard, we are to choose parameters $t$ and $\gamma$ to make $\rho(\mathcal{R})\rho(\mathcal{S})$ as small as possible. Once again we fix $c>0$ and let $\gamma=ct$. We will try to find good values for $t$. Let $\lambda$ and $\mu$ be any eigenvalues of $R$ and $S$, respectively. Note that $\lambda$ is an eigenvalue of $$H=\left( \begin{matrix} D&-C\\ B&-A \end{matrix} \right)$$ in the upper right of $L(z_\omega)$ and $-\mu$ is an eigenvalue of $H$ in the lower left of $L(z_\omega)$. It follows from Gershgorin's theorem and the triangle inequality that $$|\lambda|\leq \max_{i=1,2,\cdots,n}(|[Q]_{ii}|+q_i),~~~~~|\mu|\leq \max_{i=n+1,n+2,\cdots,m+n}(|[Q]_{ii}|+q_i).$$ The eigenvalues of $\mathcal{R}$ and $\mathcal{S}$ are $\frac{\lambda-c\alpha}{\lambda+\alpha}=\frac{\lambda-ctz_{\omega}^\bot}{\lambda+tz_{\omega}^\bot}$ and $\frac{\mu-\alpha}{\mu+c\alpha}=\frac{\mu-tz_{\omega}^\bot}{\mu+ctz_{\omega}^\bot}$. \begin{proposition}\label{thm21} Let $c>0$ and $x=a+{\rm j}b$ with $x$ being in the upper right of $L(z_\omega)$. Then the function $$f_\omega(t)=|\frac{x-ctz_{\omega}^\bot}{x+tz_{\omega}^\bot}|$$ is increasing when \begin{eqnarray}\label{e34} t\geq\frac{-(c-1)\varpi|x|^2+\sqrt{(c-1)^2\varpi^2|x|^4+4c\varpi\theta^2 |x|^2}}{2c\varpi\theta}, \end{eqnarray} where $\varpi=\omega^2+(1-\omega)^2$ and $\theta=\omega a+(1-\omega)b$. \end{proposition} \begin{proof} We have \begin{eqnarray*} f_\omega(t)=\sqrt{\frac{(a-ct\omega)^2+(b-ct(1-\omega))^2}{(a+t\omega)^2+(b+t(1-\omega))^2}}. \end{eqnarray*} A simple computation shows that $f_\omega(t)^{\prime}\geq0$ if and only if \begin{eqnarray}\label{e35} \begin{aligned} &c(a\omega+b(1-\omega)-tc\varpi)(|x|^2+t^2\varpi+2at\omega+2bt(1-\omega))\\ &+(|x|^2+c^2t^2\varpi-2act\omega-2bct(1-\omega))(a\omega+b(1-\omega)+t\varpi)\leq0.
\end{aligned} \end{eqnarray} Let $\theta=\omega a+(1-\omega)b$; then (\ref{e35}) is equivalent to \begin{eqnarray*} c(\theta-tc\varpi)(|x|^2+t^2\varpi+2t\theta)+(|x|^2+c^2t^2\varpi-2ct\theta)(\theta+t\varpi)\leq0, \end{eqnarray*} equivalently, $$t^2c^2\varpi\theta+t^2c\varpi\theta+|x|^2t\varpi(c^2-1)-|x|^2\theta(c+1)\geq0,$$ i.e., $$ c\varpi\theta t^2+(c-1)\varpi|x|^2t-|x|^2\theta\geq0.$$ The function $c\varpi\theta t^2+(c-1)\varpi|x|^2t-|x|^2\theta$ is quadratic in $t$ with positive leading coefficient, since $\theta=a\omega+b(1-\omega)>0$, and its discriminant satisfies $\Omega=(c-1)^2\varpi^2|x|^4+4c\varpi\theta^2 |x|^2\geq 0$. Thus we have that if $$t\geq\frac{-(c-1)\varpi|x|^2+\sqrt{(c-1)^2\varpi^2|x|^4+4c\varpi\theta^2 |x|^2}}{2c\varpi\theta},$$ then $f_\omega(t)$ is increasing. \end{proof} If we use SDA, then $c=1$ and (\ref{e34}) simplifies to $$t\geq\frac{|x|\sqrt{\varpi}}{\varpi}=\frac{|x|}{\sqrt{\varpi}}.$$ It follows that $\rho(\mathcal{R})\rho(\mathcal{S})$ is increasing in $t$ if \begin{eqnarray}\label{e36} t\geq \frac{\max_{i=1,2,\cdots,m+n}(|[Q]_{ii}|+q_i)}{\sqrt{\varpi}}. \end{eqnarray} Note however that this is only a sufficient condition and $\rho(\mathcal{R})\rho(\mathcal{S})$ may be smaller for some smaller $t$ values. In this regard, when $\frac{\max_{i=1,2,\cdots,m+n}(|[Q]_{ii}|+q_i)}{\sqrt{\varpi}}\geq\max\{\psi_1,\psi_2\}=\psi^*$, we will take $t=\psi^*=\max\{\psi_1,\psi_2\}$, i.e., we stick to the original strategy for choosing $t$ for SDA. Now suppose that $\frac{\max_{i=1,2,\cdots,m+n}(|[Q]_{ii}|+q_i)}{\sqrt{\varpi}}<\psi^*=\max\{\psi_1,\psi_2\}$. Since strict inequality is required in Theorem \ref{thm19}, we require $t\geq \tau_{\omega}^*=\zeta \max_{i=1,2,\cdots,m+n}\tau _{\omega,i}$, where $\zeta$ is slightly bigger than 1 (we take $\zeta=1.01$ in our numerical experiments).
Our strategy for choosing $t$ in SDA is then to take $$t=\max\{\tau_{\omega}^*,\frac{1}{2}\frac{\max_{i=1,2,\cdots,m+n}(|[Q]_{ii}|+q_i)}{\sqrt{\varpi}}\},$$ where the factor $\frac{1}{2}$ is introduced to account for the fact that (\ref{e36}) is only a sufficient condition. For the NARE (\ref{e1}), SDA with this new parameter strategy will be denoted by SDAn. The situation for ADDA is more complicated since the inequality in (\ref{e34}) is complicated when $c\neq1$. When $c\geq1$ it is easy to see that (\ref{e34}) holds if $t\geq \frac{|x|}{\sqrt{c\varpi}}$. But no such simplification is available when $c<1$. When we apply Proposition \ref{thm21} to $\frac{\lambda-c\alpha}{\lambda+\alpha}$ and $\frac{\mu-\alpha}{\mu+c\alpha}=\frac{\mu-\frac{1}{c}c\alpha}{\mu+c\alpha}$, we will run into difficulties when $c\neq1$ since either $c$ or $\frac{1}{c}$ will be smaller than 1. Since we have no useful monotonicity results to apply for ADDA, our parameter strategy is solely based on Theorem \ref{thm20}. We compute $c^*$ by the bisection method and take $t=\zeta t^*=\zeta\eta_{\omega,1}(c^*)$ and $\gamma=c^*t^*$, where $\zeta$ is slightly bigger than $1$ (we take $\zeta=1.01$ in our numerical experiments). For the NARE (\ref{e1}), ADDA with this new parameter strategy will be denoted by ADDAn. Since there is more uncertainty about ADDAn, it may be appropriate to use SDAn when $0.1<\frac{\psi_1}{\psi_2}<10$ and use ADDAn otherwise. For the NARE (\ref{e1}), this method will be denoted by DAn. Since the bounds $0.1$ and $10$ are somewhat arbitrary, one cannot expect DAn to be always better than SDAn and ADDAn. Note that our new parameter strategies are also applied to the NARE (\ref{e32}), which is obtained from the NARE (\ref{e1}) by the preprocessing procedure. For the NARE (\ref{e32}), the new parameter strategies will be denoted by pSDAn, pADDAn and pDAn, respectively. \begin{remark}\label{thm30} 1. When $\omega=0$, $L(z_\omega,0)$ will be the real axis.
The parameters $\alpha$ and $\beta$ can be found in the ray $R(z_\omega^\bot)$, i.e., the upper half imaginary axis. 2. When $\omega=1$, $L(z_\omega,0)$ will be the imaginary axis. The parameters $\alpha$ and $\beta$ can be found in the ray $R(z_\omega^\bot)$, i.e., the right half real axis. 3. When $\omega$ ranges from $0$ to $1$, the straight line $L(z_\omega,0)$ rotates clockwise about the origin from the real axis to the imaginary axis. The parameters $\alpha$ and $\beta$ can be found in the ray $R(z_\omega^\bot)$, which is a ray located in the first quadrant. In summary, our results generalize those obtained in \cite{Axe5}. \end{remark} \section{Numerical experiments} In this section, we present some numerical examples to illustrate the effectiveness of our methods, including Newton's method, the fixed-point iterative methods and the two doubling algorithms: ADDA and SDA. Besides, the effectiveness of our preprocessing technique and new parameter selection strategies for ADDA and SDA is also demonstrated. All experiments are performed under Windows 7 and MATLAB(R2014a) running on a Lenovo desktop with an Intel(R) Core(TM) i5-4590 CPU at 3.30 GHz and 4 GB of memory. An algorithm for computing the extremal solution of the NARE (\ref{e1}) is terminated when the approximate solution $\Phi_k$ satisfies ${\rm NRes}<10^{-12}$, where \begin{eqnarray*} {\rm NRes}=\frac{\|\Phi_k C\Phi_k-\Phi_k D-A\Phi_k+B\|_1}{\|\Phi_k\|_1(\|\Phi_k\|_1\|C\|_1+\|D\|_1+\|A\|_1)+\|B\|_1} \end{eqnarray*} is the normalized residual. We use IT and CPU to denote the number of iterations and the elapsed CPU time, respectively. To demonstrate the convergence of the fixed-point iterative methods, we take TFP as a representative. The convergence of JFP, GSFP, SORFP and AORFP can be demonstrated easily. We apply ADDA and SDA to the NARE (\ref{e1}) directly; for ADDA we take $t=\psi_1$ and $\gamma=\psi_2$, and for SDA we take $t=\max\{\psi_1,\psi_2\}$.
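The stopping criterion ${\rm NRes}<10^{-12}$ is a direct formula evaluation; a short Python sketch (the function name is ours):

```python
import numpy as np

def normalized_residual(Phi, A, B, C, D):
    """Normalized residual NRes for the NARE X C X - X D - A X + B = 0,
    using matrix 1-norms as in the text."""
    one = lambda M: np.linalg.norm(M, 1)   # maximum absolute column sum
    res = one(Phi @ C @ Phi - Phi @ D - A @ Phi + B)
    return res / (one(Phi) * (one(Phi) * one(C) + one(D) + one(A)) + one(B))
```

An iteration is stopped as soon as `normalized_residual(Phi_k, A, B, C, D)` falls below $10^{-12}$.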
ADDA and SDA with our preprocessing procedure will be denoted by pADDA and pSDA, respectively. We apply pADDA and pSDA to the NARE (\ref{e32}) with $\chi=e^{-{\rm j} \vartheta_*}$. For pADDA we take $t=\widetilde{\psi}_1$ and $\gamma=\widetilde{\psi}_2$, and for pSDA we take $t=\max\{\widetilde{\psi}_1,\widetilde{\psi}_2\}$. Example 7.1 below demonstrates the performance of our proposed methods for the NARE (\ref{e1}) in the situation where the matrix $Q$ may have the same diagonal elements. In this situation, ADDA and pADDA are reduced to SDA and pSDA, respectively. Besides, a common value $\chi=e^{-{\rm j} \vartheta}=e^{-{\rm j}(\phi_i-\varphi)}$ can be found such that $f_i(\vartheta)$ attains its minimum for all $i$, and thus the bisection procedure in the preprocessing procedure is unnecessary. \textbf{Example 7.1.} Let $A,B,C,D\in \mathbb{C}^{n\times n}$ be given by $$A=P+({\rm j}\eta)I_n,~~~~ D=P+({\rm j}\eta)I_n,~~~~B=C=u I_n,$$ where $u\in (0,2)$, $\eta\in \mathbb{R}$ and $$P=\left( \begin{matrix} \xi&-1\\ &\xi&-1\\ &&\ddots&\ddots\\ &&&\xi&-1\\ -1&&&&\xi \end{matrix} \right).$$ \begin{table}[!htbp] \caption{The numerical results for Example 7.1 with $n=512$.} \centering \footnotesize{ \setlength{\tabcolsep}{1.3mm} \begin{tabular}{cc|c|cccccccccc} \hline &&&\multicolumn{10}{c}{$u=0.01$}\\ \hline $\omega$&$(\xi,\eta)$&&Newton&TFP&SDA&ADDAn&SDAn&DAn&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{0}&\multirow{2}{*}{(-5,1.05)}&IT&2&3&16&12&12&12&4&5&4&4\\ &&CPU&29.6298&37.9619&10.9125&9.4652&9.4989&9.9400&6.4912&8.9096&7.0260&6.4569\\ \hline \multirow{2}{*}{0.1}&\multirow{2}{*}{(-3,1.5)}&IT&2&3&13&10&10&10&4&4&4&4\\ &&CPU&31.7026&38.9788&10.0546&9.5092&9.5827&9.5305&9.3744&9.5600&9.2472&9.3298\\ \hline \multirow{2}{*}{0.5}&\multirow{2}{*}{(-1,4)}&IT&2&3&7&6&6&6&4&4&4&4\\ &&CPU&29.7952&38.0195&10.8399&10.5415&10.6214&10.6065&8.1645&8.6008&8.1516&8.1328\\ \hline \multirow{2}{*}{0.9}&\multirow{2}{*}{(0,11)}&IT&2&2&15&11&11&11&4&5&4&4\\
&&CPU&29.6355&26.0940&10.9842&9.6224&10.0095&9.6687&4.1576&5.3052&4.1425&4.1373\\ \hline \multirow{2}{*}{1}&\multirow{2}{*}{(1.05,-5)}&IT&2&3&16&12&12&12&4&5&4&4\\ &&CPU&29.7714&38.9348&10.9525&9.4719&9.4764&9.5643&6.7502&8.8732&6.4463&6.4653\\ \hline \end{tabular} \begin{tabular}{cc|c|cccccccccc} \hline &&&\multicolumn{10}{c}{$u=0.1$}\\ \hline $\omega$&$(\xi,\eta)$&&Newton&TFP&SDA&ADDAn&SDAn&DAn&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{0}&\multirow{2}{*}{(-10,1.2)}&IT&2&3&15&11&11&11&4&5&4&4\\ &&CPU&29.9453&37.1055&10.6396&9.9290&9.9784&9.9687&4.2959&5.4316&4.2641&4.3756\\ \hline \multirow{2}{*}{0.1}&\multirow{2}{*}{(-5,2)}&IT&3&4&10&8&8&8&4&4&4&4\\ &&CPU&43.3523&50.4984&9.3162&8.8982&8.9718&8.9615&6.6374&7.0228&6.6194&6.6003\\ \hline \multirow{2}{*}{0.5}&\multirow{2}{*}{(-1,5)}&IT&3&4&6&6&6&6&4&4&4&4\\ &&CPU&45.2348&51.8292&10.3645&10.9073&10.9314&10.9917&6.8078&7.1063&6.7415&6.7997\\ \hline \multirow{2}{*}{0.9}&\multirow{2}{*}{(0,12)}&IT&2&3&14&10&10&10&4&5&4&4\\ &&CPU&29.9612&44.5258&10.9930&9.7872&9.5895&9.7139&4.1036&5.2070&4.0702&4.1008\\ \hline \multirow{2}{*}{1}&\multirow{2}{*}{(1.2,-10)}&IT&2&3&15&11&11&11&4&5&4&4\\ &&CPU&30.3147&37.6761&10.7674&9.9678&10.0018&10.0234&4.2497&5.4287&4.2365&4.2334\\ \hline \end{tabular} \begin{tabular}{cc|c|cccccccccc} \hline &&&\multicolumn{10}{c}{$u=1$}\\ \hline $\omega$&$(\xi,\eta)$&&Newton&TFP&SDA&ADDAn&SDAn&DAn&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{0}&\multirow{2}{*}{(-50,2.01)}&IT&2&4&20&13&13&13&4&6&4&4\\ &&CPU&29.7309&50.4191&14.9592&12.5041&12.4715&12.4394&2.6849&3.8142&2.6675&2.6483\\ \hline \multirow{2}{*}{0.1}&\multirow{2}{*}{(-10,5)}&IT&3&5&7&6&6&6&4&4&4&4\\ &&CPU&44.7391&64.0577&9.8632&9.5124&9.5572&9.5649&4.3355&4.4114&4.2914&4.3118\\ \hline \multirow{2}{*}{0.5}&\multirow{2}{*}{(-2,7)}&IT&3&7&7&6&6&6&4&4&4&4\\ &&CPU&44.0267&89.8634&11.0217&10.9717&10.9457&10.9726&5.6055&5.4055&5.3099&5.3105\\ \hline \multirow{2}{*}{0.9}&\multirow{2}{*}{(0,21)}&IT&3&4&14&9&9&9&4&5&4&4\\ 
&&CPU&44.9850&50.5941&13.5404&10.8258&10.8532&10.8801&3.4014&4.2805&3.3786&3.3771\\ \hline \multirow{2}{*}{1}&\multirow{2}{*}{(2.01,-50)}&IT&2&4&20&13&13&13&4&6&4&4\\ &&CPU&30.0403&50.6845&14.9062&12.4521&12.4635&12.5044&2.6612&3.8160&2.6324&2.6395\\ \hline \end{tabular} } \end{table}

For Example 7.1, we test the performance of our methods for different $\omega$, $\xi$, $\eta$ and $u$, with size $n=512$. The numerical results are shown in Table 7.1. We can see that Newton's method converges in the fewest iterations among all proposed methods, but it consumes by far the most time. We also observe that TFP converges in fewer iterations than SDA, but consumes more time than SDA. Among all doubling algorithms, pSDA converges faster than SDA; ADDAn, SDAn and DAn converge faster than SDA; and pADDAn, pSDAn and pDAn converge faster than SDA, ADDAn, SDAn and DAn. DAn chooses the faster method between ADDAn and SDAn, and pDAn chooses the faster method between pADDAn and pSDAn, for this example. Note that pADDAn may converge more slowly than pADDA; this may be explained by the fact that smaller parameters do not always yield faster doubling algorithms. We observe that pSDA, pADDA, pSDAn and pDAn converge fastest among all doubling algorithms for this example.

Example 7.2 below demonstrates the performance of our proposed methods for the NARE (\ref{e1}) in the situation where the matrix $Q$ may have different diagonal elements. In this situation, we cannot find a common value $\chi=e^{-{\rm j} \vartheta}$ at which $f_i(\vartheta)$ attains its minimum for all $i$, so some time must be spent on the bisection step of the preprocessing procedure.
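The bisection procedure just mentioned can be sketched generically as a one-dimensional search: for a smooth unimodal objective (here the paper's $f_i(\vartheta)$, defined earlier), bisect on the sign of a finite-difference slope at the midpoint. The quadratic objective below is only a placeholder, not the actual $f_i$.

```python
def bisect_argmin(f, a, b, tol=1e-10, h=1e-8):
    """Bisection for the minimizer of a smooth unimodal f on [a, b]:
    keep the half-interval on which a central finite-difference slope
    points downhill."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m + h) > f(m - h):   # f increasing at m: minimizer lies to the left
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Placeholder objective with known minimizer 0.3.
theta = bisect_argmin(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
assert abs(theta - 0.3) < 1e-6
```

Each iteration halves the interval, so the cost is logarithmic in the target accuracy; this is consistent with the observation below that the bisection work is negligible next to the doubling iterations themselves.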
\textbf{Example 7.2.} For $n=512$, let
$$A=\eta\left( \begin{matrix} I_{n/2}&\\ &-I_{n/2} \end{matrix} \right)+3{\rm j}I_{n},~~D=2\eta\left( \begin{matrix} I_{n/2}&\\ &-I_{n/2} \end{matrix} \right)+3{\rm j}I_{n},~~B=I_{n},~~C=I_{n}.$$
\begin{table}[!htbp] \caption{The numerical results for Example 7.2 with $n=512$.} \centering \footnotesize{ \setlength{\tabcolsep}{1mm} \begin{tabular}{c|c|cccccccccccc} \hline &&&\multicolumn{10}{c}{$\omega=0$}\\ \hline $\eta$&&Newton&TFP&ADDA&SDA&ADDAn&SDAn&DAn&pADDA&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{-20}&IT&3&4&8&10&7&7&7&8&10&7&7&7\\ &CPU&58.4497&73.0989&2.3950&2.8773&2.0640&2.0422&2.0788&2.5258&3.0417&2.2458&2.1973&2.2419\\ \hline \multirow{2}{*}{-10}&IT&3&5&7&8&6&6&6&7&8&6&6&6\\ &CPU&56.6389&84.5592&2.1478&2.3578&1.8240&1.8499& 1.8526&2.1427&2.4126&1.8741&1.9059&1.8623\\ \hline \multirow{2}{*}{-5}&IT&3&6&5&7&5&6&6&5&7&5&6&6\\ &CPU&57.6388&98.7460&1.6236&2.1024&1.5899&1.8365&1.8188&1.7059&2.2481&1.7125&2.0061&1.9677\\ \hline \multirow{2}{*}{0}&IT&4&10&4&4&4&4&4&4&4&4&4&4\\ &CPU&69.9680&145.0492&0.8615&0.8934&0.8588&0.8539&0.8622&0.8428&0.8957&0.8889&0.8864&0.9051\\ \hline \multirow{2}{*}{5}&IT&3&6&5&7&5&6&6&5&7&5&6&6\\ &CPU&57.7678&96.9707&1.6153&2.0731&1.5593&1.8183&1.8211&1.7015&2.2595&1.6744&1.9892&1.9588\\ \hline \multirow{2}{*}{10}&IT&3&5&7&8&6&6&6&7&8&6&6&6\\ &CPU&57.8238&84.4355&2.1602&2.3556&1.8236&1.8634&1.8367&2.1626&2.4128&1.8675&1.9107&1.8655\\ \hline \multirow{2}{*}{20}&IT&3&4&8&10&7&7&7&8&10&7&7&7\\ &CPU&57.4388&71.1231&2.4200&2.8483&2.0967&2.0673&2.1081&2.5161&3.0977&2.2476&2.2522&2.2352\\ \hline \end{tabular} \begin{tabular}{c|c|cccccccccccc} \hline &&&\multicolumn{10}{c}{$\omega=0.1$}\\ \hline $\eta$&&Newton&TFP&ADDA&SDA&ADDAn&SDAn&DAn&pADDA&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{-8}&IT&3&5&8&13&7&8&7&6&8&6&6&6\\ &CPU&57.1384&85.7310&2.3625&3.5691&2.1001&2.3310&2.1114&1.9961&2.5491&1.9967&1.9912&1.9514\\ \hline \multirow{2}{*}{-4}&IT&3&6&6&7&5&6&6&5&6&5&5&5\\
&CPU&58.2886&94.0871&1.8640&2.0755&1.5566&1.8467&1.8080&1.6114&1.8708&1.6243&1.6179&1.5984\\ \hline \multirow{2}{*}{-1}&IT&4&9&4&4&4&4&4&4&4&4&4&4\\ &CPU&70.4656&133.0604&1.3229&1.3265&1.3301&1.3124&1.2978&1.3424&1.3510&1.3502&1.3595&1.3298\\ \hline \multirow{2}{*}{0}&IT&4&10&4&4&4&4&4&4&4&4&4&4\\ &CPU&70.5027&147.2159&1.3492&1.3400&1.3215&1.3341&1.3284&1.4362&1.4414&1.4462&1.4307&1.4174\\ \hline \multirow{2}{*}{1}&IT&4&9&4&4&4&4&4&4&4&4&4&4\\ &CPU&69.6891&133.1174&1.3567&1.3174&1.3221&1.3221&1.2696&1.3460&1.8313&1.3482&1.3476&1.3328\\ \hline \multirow{2}{*}{4}&IT&3&6&6&7&5&6&6&5&6&5&5&5\\ &CPU&57.0302&95.8556&1.8563&2.0993&1.5799&1.8120&1.8292&1.6144&1.8739&1.6128&1.6306&1.5934\\ \hline \multirow{2}{*}{8}&IT&3&5&8&13&7&8&7&6&8&6&6&6\\ &CPU&57.7937&84.3933&2.3935&3.5873&2.0995&2.3519&2.1214&1.9627&2.5571&1.9775&1.9913&1.9290\\ \hline \end{tabular} \begin{tabular}{c|c|cccccccccccc} \hline &&&\multicolumn{10}{c}{$\omega=0.5$}\\ \hline $\eta$&&Newton&TFP&ADDA&SDA&ADDAn&SDAn&DAn&pADDA&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{-0.45}&IT&4&9&6&8&5&6&6&4&3&4&3&3\\ &CPU&70.7762&133.8419&1.8694&2.3648&1.5755&1.8512&1.8348&1.4298&1.2643&1.4479&1.1447&1.1438\\ \hline \multirow{2}{*}{-0.3}&IT&4&9&6&6&5&5&5&3&3&4&3&3\\ &CPU&70.0761&134.2049&1.8255&1.8351&1.5679&1.5860&1.6046&1.1846&1.1592&1.5908&1.1601&1.1384\\ \hline \multirow{2}{*}{-0.15}&IT&4&9&5&5&5&5&5&3&3&4&3&3\\ &CPU&69.2073&133.8227&1.5746&1.5703&1.6081&1.5855&1.6146&1.1578&1.1762&1.4354&1.1664&1.1516\\ \hline \multirow{2}{*}{0}&IT&4&10&5&5&4&4&4&3&3&4&3&3\\ &CPU&69.7512&145.1118&1.6052&1.5625&1.3468&1.3138&1.3153&1.1592&1.3629&1.4478&1.1765&1.1472\\ \hline \multirow{2}{*}{0.15}&IT&4&9&5&5&5&5&5&3&3&4&3&3\\ &CPU&70.5676&135.1130&1.6306&1.5829&1.5304&1.6044&1.6004&1.1546& 1.1481&1.4354&1.1637&1.1447\\ \hline \multirow{2}{*}{0.3}&IT&4&9&6&6&5&5&5&3&3&4&3&3\\ &CPU&70.9305&131.3888&1.8863&1.7906&1.5782&1.5405&1.5636&1.1681&1.1545&1.4178&1.1809&1.1561\\ \hline \multirow{2}{*}{0.45}&IT&4&9&6&8&5&6&6&4&3&4&3&3\\ 
&CPU&71.2917&130.1491&1.8720&2.3765&1.5757&1.8181&1.8227&1.4440&1.0912&1.4321&1.1646&1.1388\\ \hline \end{tabular} \begin{tabular}{c|c|cccccccccccc} \hline &&&\multicolumn{10}{c}{$\omega=0.9$}\\ \hline $\eta$&&Newton&TFP&ADDA&SDA&ADDAn&SDAn&DAn&pADDA&pSDA&pADDAn&pSDAn&pDAn\\ \hline \multirow{2}{*}{-0.35}&IT&4&9&9&9&15&9&9&4&4&4&4&4\\ &CPU&74.9940&134.6261&2.6350&2.6163&4.1425&2.5889&2.5731&1.4688&1.4320&1.4589&1.4341&1.4459\\ \hline \multirow{2}{*}{-0.2}&IT&4&9&10&10&14&10&10&4&4&4&4&4\\ &CPU&71.3762&132.5856&2.9131&2.8304&3.7992&2.8453&2.8254&1.4619&1.4413&1.4515&1.4455&1.4145\\ \hline \multirow{2}{*}{-0.05}&IT&4&10&8&8&11&8&8&4&4&4&4&4\\ &CPU&72.0856&144.8090&2.3739&2.3684&3.0708&2.3756&2.3361&1.4179&1.4491&1.4141&1.4436&1.4298\\ \hline \multirow{2}{*}{0}&IT&4&10&8&8&10&8&8&4&4&4&4&4\\ &CPU&71.9337&144.5986&2.3964&2.3432&2.8074&2.3352&2.3504&1.4293&1.4401&1.4586&1.4487&1.4476\\ \hline \multirow{2}{*}{0.05}&IT&4&10&8&8&11&8&8&4&4&4&4&4\\ &CPU&72.1193&142.7633&2.3727&2.3463&3.0807&2.3879&2.3307&1.4140&1.4460&1.4291&1.4217&1.4300\\ \hline \multirow{2}{*}{0.2}&IT&4&9&10&10&14&10&10&4&4&4&4&4\\ &CPU&72.8750&133.7942&2.9122&2.8388&3.8541&2.8803&2.8312&1.4597&1.4336&1.4463&1.4438&1.4127\\ \hline \multirow{2}{*}{0.35}&IT&4&9&9&9&15&9&9&4&4&4&4&4\\ &CPU&71.9151&133.6850&2.6592&2.5762&4.1201&2.5861&2.5914&1.3834&1.4862&1.4082&1.4407&1.4544\\ \hline \end{tabular} } \end{table}

For Example 7.2, we test the performance of our methods for different $\omega$ and $\eta$. The numerical results are shown in Table 7.2. We can see that Newton's method converges in the fewest iterations among all proposed methods, but it consumes by far the most time. Among all doubling algorithms, pADDA converges faster than ADDA and pSDA converges faster than SDA; ADDAn, SDAn and DAn converge faster than ADDA and SDA most of the time; and pADDAn, pSDAn and pDAn converge faster than ADDA, SDA, ADDAn, SDAn and DAn most of the time.
DAn chooses the faster method between ADDAn and SDAn, and pDAn chooses the faster method between pADDAn and pSDAn, most of the time. Note that pADDAn may converge more slowly than pADDA. We observe that pADDA, pSDA, pADDAn, pSDAn and pDAn converge fastest among all doubling algorithms for this example overall. The exceptions may be explained by the fact that smaller parameters do not always yield faster doubling algorithms. From the results we also observe that the computational work of the bisection step in the preprocessing procedure is negligible compared with that of the doubling algorithms.

\section{Conclusions}
In this paper, based on a new parameterized definition of the comparison matrix of a given complex matrix, we propose a new class of complex nonsymmetric algebraic Riccati equations (NAREs), which extends the class of nonsymmetric algebraic Riccati equations proposed in \cite{Axe1}. We also generalize the definition of the extremal solution of the NARE and show that the extremal solution exists and is unique. We show that Newton's method for solving the NARE is quadratically convergent and that the fixed-point iterative methods are linearly convergent. We also give concrete parameter selection strategies such that the doubling algorithms, including ADDA and SDA, can deliver the extremal solution, and show that the two doubling algorithms with suitable parameters are quadratically convergent. Furthermore, some invariants of the doubling algorithms are analyzed. However, we have not succeeded in applying the structure-preserving doubling algorithm with shrink-and-shift (SDA-ss) to the NAREs proposed here; future work will be devoted to choosing suitable parameters so that SDA-ss can be applied to solve these NAREs.

\end{document}